Similar Documents
20 similar documents found.
1.
In the study of multiple failure time data with recurrent clinical endpoints, the classical independent censoring assumption in survival analysis can be violated when the evolution of the recurrent events is correlated with a censoring mechanism such as death. Moreover, in some situations, a cure fraction appears in the data because a tangible proportion of the study population benefits from treatment and becomes recurrence-free and insusceptible to death related to the disease. A bivariate joint frailty mixture cure model is proposed to allow for dependent censoring and a cure fraction in recurrent event data. The latency part of the model consists of two intensity functions for the hazard rates of recurrent events and death, wherein a bivariate frailty is introduced by means of the generalized linear mixed model methodology to adjust for dependent censoring. The model allows covariates and frailties in both the incidence and the latency parts, and it further accounts for the possibility of cure after each recurrence. It includes the joint frailty model and other related models as special cases. An expectation-maximization (EM)-type algorithm is developed to provide residual maximum likelihood estimation of the model parameters. Through simulation studies, the performance of the model is investigated under different magnitudes of dependent censoring and cure rate. The model is applied to data sets from two colorectal cancer studies to illustrate its practical value.
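For concreteness, one plausible form of such a latency part, assuming multiplicative log-normal frailties and proportional hazards (the notation is illustrative, not taken from the paper):

$$r_i(t \mid u_{1i}) = u_{1i}\, r_0(t)\, e^{\mathbf{x}_i^\top \boldsymbol\beta_r}, \qquad \lambda_i(t \mid u_{2i}) = u_{2i}\, \lambda_0(t)\, e^{\mathbf{x}_i^\top \boldsymbol\beta_d}, \qquad (\log u_{1i}, \log u_{2i})^\top \sim N_2(\mathbf{0}, \boldsymbol\Sigma),$$

where $r_i$ and $\lambda_i$ are the recurrence and death intensities, the off-diagonal element of $\boldsymbol\Sigma$ captures the dependent censoring, and the incidence part would model the uncured probability through a logistic link in covariates and frailty.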

2.
A predictive continuous time model is developed for continuous panel data to assess the effect of time‐varying covariates on the general direction of the movement of a continuous response that fluctuates over time. This is accomplished by reparameterizing the infinitesimal mean of an Ornstein–Uhlenbeck process in terms of its equilibrium mean and a drift parameter, which assesses the rate at which the process reverts to its equilibrium mean. The equilibrium mean is modeled as a linear predictor of covariates. This model can be viewed as a continuous time first‐order autoregressive regression model with time‐varying lag effects of covariates and the response, which is more appropriate for unequally spaced panel data than its discrete time analog. Both maximum likelihood and quasi‐likelihood approaches are considered for estimating the model parameters, and their performances are compared through simulation studies. The simpler quasi‐likelihood approach is suggested because it yields an estimator of high efficiency relative to the maximum likelihood estimator and a variance estimator that is robust to the diffusion assumption of the model. To illustrate the proposed model, an application to diastolic blood pressure data from a follow‐up study on cardiovascular diseases is presented. Missing observations are handled naturally with this model.
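The reparameterization has a standard closed form; assuming the equilibrium mean is (locally) constant over an observation gap of length $s$, the Ornstein–Uhlenbeck model and its implied transition distribution are

$$dY(t) = \rho\{\mu(t) - Y(t)\}\,dt + \sigma\,dW(t), \qquad \mu(t) = \mathbf{x}(t)^\top\boldsymbol\beta,$$
$$Y(t+s) \mid Y(t) \sim N\!\left(\mu + e^{-\rho s}\{Y(t) - \mu\},\; \frac{\sigma^2}{2\rho}\left(1 - e^{-2\rho s}\right)\right),$$

so the lag effect $e^{-\rho s}$ adapts automatically to unequal spacing, which is exactly what the discrete time AR(1) analog cannot do.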

3.
A heteroskedastic random coefficients model was described for analyzing weight performances between the 100th and the 650th days of age of Maine-Anjou beef cattle. This model contained fixed effects, random linear regression, and heterogeneous variance components. The objective of this study was to analyze the difference in growth curves between animals born as twin and as single bull calves. The method was based on log-linear models for the residual and individual variances, expressed as functions of explanatory variables. An expectation-maximization (EM) algorithm was proposed for calculating restricted maximum likelihood (REML) estimates of the residual and individual components of variances and covariances. Likelihood ratio tests were used to assess hypotheses about the parameters of this model. Growth of Maine-Anjou cattle was described by a third-order regression on age for the mean growth curve, two correlated random effects for the individual variability, and independent errors. Three sources of heterogeneity of residual variances were detected. The difference in weight performance between bulls born as single and twin calves was estimated to be about 15 kg for the growth period considered.
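A sketch of the model class described, with a cubic mean growth curve, correlated random intercept and slope, and a log-linear residual variance (illustrative notation):

$$y_{ij} = \sum_{k=0}^{3} \beta_k\, a_{ij}^{\,k} + u_{0i} + u_{1i}\, a_{ij} + e_{ij}, \qquad (u_{0i}, u_{1i})^\top \sim N_2(\mathbf{0}, \boldsymbol\Sigma_u), \qquad \log \sigma^2_{e,ij} = \mathbf{w}_{ij}^\top \boldsymbol\delta,$$

where $a_{ij}$ is the age at record $j$ of animal $i$ and $\mathbf{w}_{ij}$ carries the sources of residual heterogeneity (for example, twin versus single birth).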

4.
We propose a semiparametric mean residual life mixture cure model for right-censored survival data with a cured fraction. The model employs the proportional mean residual life model to describe the effects of covariates on the mean residual time of uncured subjects and the logistic regression model to describe the effects of covariates on the cure rate. We develop estimating equations to estimate the proposed cure model for right-censored data with and without length-biased sampling; the latter is often found in prevalent cohort studies. In particular, we propose two estimating equations to estimate the effects of covariates on the cure rate and a method to combine them to improve the estimation efficiency. The consistency and asymptotic normality of the proposed estimates are established. The finite sample performance of the estimates is confirmed with simulations. The proposed estimation methods are applied to a clinical trial study on melanoma and a prevalent cohort study on early-onset type 2 diabetes mellitus.
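The two model components have simple closed forms, standard in this literature:

$$m(t \mid \mathbf{x}) = m_0(t)\, e^{\mathbf{x}^\top\boldsymbol\beta}, \qquad \pi(\mathbf{z}) = \frac{e^{\mathbf{z}^\top\boldsymbol\gamma}}{1 + e^{\mathbf{z}^\top\boldsymbol\gamma}},$$

where $m(t \mid \mathbf{x}) = E(T - t \mid T > t, \mathbf{x})$ is the mean residual life of an uncured subject, $m_0$ is an unspecified baseline function, and $\pi(\mathbf{z})$ is the logistic model for the cure probability.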

5.
Targeted maximum likelihood estimation is a versatile tool for estimating parameters in semiparametric and nonparametric models. We work through an example applying targeted maximum likelihood methodology to estimate the parameter of a marginal structural model. In the case we consider, we show how this can be done easily by clever use of standard statistical software. We point out differences between targeted maximum likelihood estimation and other approaches (including estimating function based methods). The application we consider is to estimate the effect of adherence to antiretroviral medications on virologic failure in HIV-positive individuals.
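To give a flavor of the "standard software" point, here is a minimal point-treatment TMLE sketch in Python using statsmodels; it targets an additive treatment effect on simulated data, not the longitudinal adherence MSM analyzed in the paper, and all variable names and the data-generating process are illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical simulated data: x confounds both treatment a and outcome y.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))         # confounded treatment
y = rng.binomial(1, 1 / (1 + np.exp(1 - a - 0.8 * x)))  # binary outcome

logit = lambda p: np.log(p / (1 - p))
expit = lambda z: 1 / (1 + np.exp(-z))

# Step 1: initial outcome regression Q(A, X) with an ordinary GLM.
Xmat = np.column_stack([np.ones(n), a, x])
q_fit = sm.GLM(y, Xmat, family=sm.families.Binomial()).fit()
q_aw = q_fit.predict(Xmat)
q1 = q_fit.predict(np.column_stack([np.ones(n), np.ones(n), x]))
q0 = q_fit.predict(np.column_stack([np.ones(n), np.zeros(n), x]))

# Step 2: treatment mechanism g(X) = P(A = 1 | X).
Gmat = np.column_stack([np.ones(n), x])
g = sm.GLM(a, Gmat, family=sm.families.Binomial()).fit().predict(Gmat)

# Step 3: one-parameter logistic fluctuation on the "clever covariate",
# with the initial fit entering as an offset: this is the targeting step.
h = a / g - (1 - a) / (1 - g)
eps = sm.GLM(y, h[:, None], family=sm.families.Binomial(),
             offset=logit(q_aw)).fit().params[0]

# Step 4: targeted update of the counterfactual predictions, then plug in.
ate = np.mean(expit(logit(q1) + eps / g) - expit(logit(q0) - eps / (1 - g)))
print("targeted estimate of the additive treatment effect:", ate)
```

The targeting step is just a logistic regression with a single covariate and an offset, which is why generic GLM software suffices.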

6.

Interval-censored failure times arise when the status with respect to an event of interest is only determined at intermittent examination times. In settings where there exists a sub-population of individuals who are not susceptible to the event of interest, latent variable models accommodating a mixture of susceptible and nonsusceptible individuals are useful. We consider such models for the analysis of bivariate interval-censored failure time data with a model for bivariate binary susceptibility indicators and a copula model for correlated failure times given joint susceptibility. We develop likelihood, composite likelihood, and estimating function methods for model fitting and inference, and assess asymptotic-relative efficiency and finite sample performance. Extensions dealing with higher-dimensional responses and current status data are also described.

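Given joint susceptibility, the failure-time dependence takes the usual copula form; the Clayton family is shown here as one concrete choice (the abstract does not fix the family):

$$\Pr(T_1 > s,\, T_2 > t \mid \text{both susceptible}) = C_\theta\{\bar F_1(s), \bar F_2(t)\}, \qquad C_\theta(u, v) = \left(u^{-\theta} + v^{-\theta} - 1\right)^{-1/\theta}, \quad \theta > 0,$$

with the bivariate binary susceptibility indicators modeled separately and the observed-data likelihood built from the probabilities of the rectangular regions defined by the intermittent examination times.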

7.
Tests for monotone mean residual life, using randomly censored data
At any age the mean residual life function gives the expected remaining life at that age. Reliabilists and biometricians have found it useful to categorize failure distributions by the monotonicity properties of the mean residual life function. Hollander and Proschan (1975, Biometrika 62, 585-593) have derived tests of the null hypothesis that the underlying failure distribution is exponential, versus the alternative that it has a monotone mean residual life function. These tests are based on a complete sample. Often, however, data are incomplete because of withdrawals from the study and because of survivors at the time the data are analyzed. In this paper we generalize the Hollander-Proschan tests to accommodate randomly censored data. The efficiency loss due to the presence of censoring is also investigated.
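For reference, the mean residual life function and the hypotheses being tested are

$$m(t) = E(T - t \mid T > t) = \frac{\int_t^\infty \bar F(u)\, du}{\bar F(t)},$$

with $H_0$: $\bar F$ exponential (so $m$ is constant) against alternatives under which $m$ is monotone decreasing (DMRL) or monotone increasing (IMRL).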

8.
Data from a litter-matched tumorigenesis experiment are analysed using a generalised linear mixed model (GLMM) approach to the analysis of clustered survival data in which failure time observations within the same litter are dependent. Maximum likelihood (ML) and residual maximum likelihood (REML) estimates of the risk variable parameters and variance component parameters, together with predictions of the random effects, are given. The estimate of the treatment effect parameter (the carcinogen effect) agrees well with previous analyses in the literature, even though the dependence structure within a litter is modelled in different ways. The variance component estimation provides the estimated dispersion of the random effects. The prediction of random effects is useful, for instance, in identifying high-risk litters and individuals. The present analysis illustrates the wider application of the approach to detecting increased risk of disease occurrence in particular families of a study population.

9.
In this paper, we propose a generalization of the mixture (binary) cure rate model, motivated by the existence of a zero-modified (inflated or deflated) distribution on the initial number of causes under a competing cause scenario. This non-linear transformation cure rate model has the same form as models studied in the past; however, following our approach, we are able to give a realistic interpretation to a specific class of proper transformation functions for cure rate modeling. The estimation of the parameters is carried out using the maximum likelihood method along with a profile approach. A simulation study examines the accuracy of the proposed estimation method and the model discrimination based on the likelihood ratio test. For illustrative purposes, analyses of two real-life data sets, one on recidivism and another on cutaneous melanoma, are also carried out.
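Under the competing cause scenario with $M$ latent causes, the population survival function takes the standard form

$$S_{\mathrm{pop}}(t) = \Pr(M = 0) + \sum_{m=1}^{\infty} \Pr(M = m)\,\{\bar F(t)\}^{m} = E\{\bar F(t)^{M}\},$$

so the cure rate is $p_0 = \Pr(M = 0)$; placing a zero-modified distribution on $M$ lets $p_0$ be inflated or deflated relative to the baseline count model, which is the generalization being proposed.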

10.
Generalized linear mixed models (GLMMs) have become a frequently used tool for the analysis of non-Gaussian longitudinal data. Estimation is based on maximum likelihood theory, which assumes that the underlying probability model is correctly specified. Recent research shows that the results obtained from these models are not always robust against departures from the assumptions on which they are based. In the present work we use simulations with a logistic random-intercept model to study the impact of misspecifying the random-effects distribution on the type I and II errors of the tests for the mean structure in GLMMs. We found that the misspecification can either increase or decrease the power of the tests, depending on the shape of the underlying random-effects distribution, and that it can considerably inflate the type I error rate. Additionally, we found a theoretical result which states that whenever a subset of fixed-effects parameters not included in the random-effects structure equals zero, the corresponding maximum likelihood estimator consistently estimates zero. This implies that under certain conditions a significant effect can be considered a reliable result, even if the random-effects distribution is misspecified.

11.
Recent developments in marginal likelihood estimation for model selection in the field of Bayesian phylogenetics and molecular evolution have emphasized the poor performance of the harmonic mean estimator (HME). Although these studies have shown the merits of new approaches applied to standard normally distributed examples and small real-world data sets, little is currently known about the performance and computational issues of these methods when fitting complex evolutionary and population genetic models to empirical real-world data sets. Further, these approaches have not yet seen widespread application in the field due to the lack of implementations of these computationally demanding techniques in commonly used phylogenetic packages. We here investigate the performance of some of these new marginal likelihood estimators, specifically path sampling (PS) and stepping-stone (SS) sampling, for comparing models of demographic change and relaxed molecular clocks, using synthetic data and real-world examples for which unexpected inferences were made using the HME. Given the drastically increased computational demands of PS and SS sampling, we also investigate a posterior simulation-based analogue of Akaike's information criterion (AIC) through Markov chain Monte Carlo (MCMC), termed the AICM, a model comparison approach that shares with the HME the appealing feature of having a low computational overhead relative to the original MCMC analysis. We confirm that the HME systematically overestimates the marginal likelihood and fails to yield reliable model classification, and we show that the AICM performs better and may be a useful initial evaluation of model choice, although it too is, to a lesser degree, unreliable. We show that PS and SS sampling substantially outperform these estimators and adjust the conclusions made concerning previous analyses for the three real-world data sets that we reanalyzed. The methods used in this article are now available in BEAST, a powerful user-friendly software package for performing Bayesian evolutionary analyses.
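To make the stepping-stone idea concrete, here is a minimal sketch of the estimator, assuming MCMC draws from each power posterior are already in hand; it is illustrative, not the BEAST implementation.

```python
import numpy as np

def stepping_stone_logml(loglik_by_beta, betas):
    """Stepping-stone estimate of the log marginal likelihood.

    loglik_by_beta[k] holds the log-likelihoods of MCMC draws from the
    power posterior proportional to L(theta)^betas[k] * prior(theta);
    betas must increase from 0.0 (the prior) to 1.0 (the posterior)."""
    logml = 0.0
    for k in range(len(betas) - 1):
        ll = np.asarray(loglik_by_beta[k])
        delta = betas[k + 1] - betas[k]
        # log of the sample mean of L^delta, stabilized via log-sum-exp
        m = np.max(delta * ll)
        logml += m + np.log(np.mean(np.exp(delta * ll - m)))
    return logml
```

Path sampling instead numerically integrates the expected log-likelihood over the temperature; both need draws at a ladder of temperatures, which is the source of the extra computational cost relative to the HME and the AICM.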

12.
Most existing statistical methods for mapping quantitative trait loci (QTL) are not suitable for analyzing survival traits with a skewed distribution and censoring mechanism. As a result, researchers have incorporated parametric and semi-parametric survival models into the interval mapping framework for QTL controlling survival traits. In survival analysis, the accelerated failure time (AFT) model is considered a de facto standard and fundamental model for data analysis. Based on the AFT model, we propose a parametric approach for mapping survival traits, using the EM algorithm to obtain the maximum likelihood estimates of the parameters. Also, with the Bayesian information criterion (BIC) as the model selection criterion, an optimal mapping model is constructed by choosing, among specific error distributions, the one that balances the maximized likelihood against the number of parameters. Two real data sets were analyzed by the proposed method for illustration. The results show that, among the five commonly used survival distributions, the Weibull distribution is the optimal survival function for the mapping of heading time in rice, while the log-logistic distribution is the optimal one for hyperoxic acute lung injury.
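The AFT model underlying the approach is log-linear in the trait, with the candidate survival distributions corresponding to standard error laws:

$$\log T_i = \mu + \mathbf{x}_i^\top \boldsymbol\beta + \sigma\, \epsilon_i,$$

where a standard extreme-value $\epsilon_i$ gives a Weibull $T_i$, a standard logistic $\epsilon_i$ gives a log-logistic $T_i$, and a standard normal $\epsilon_i$ gives a log-normal $T_i$; BIC then compares these competing error laws at their maximum likelihood fits.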

13.
Zeh J, Poole D, Miller G, Koski W, Baraff L, Rugh D. Biometrics 2002;58(4):832-840.
Annual survival probability of bowhead whales, Balaena mysticetus, was estimated using both Bayesian and maximum likelihood implementations of Cormack and Jolly-Seber (JS) models for capture-recapture estimation in open populations, together with reduced-parameter generalizations of these models. Aerial photographs of naturally marked bowheads collected between 1981 and 1998 provided the data. The marked whales first photographed in a particular year provided the initial 'capture' and 'release' of those marked whales, and photographs in subsequent years provided the 'recaptures'. The Cormack model, often called the Cormack-Jolly-Seber (CJS) model, and the program MARK were used to identify the model with a single survival probability and time-varying capture probabilities as the most appropriate for these data. When survival was constrained to be one or less, the maximum likelihood estimate computed by MARK was one, invalidating confidence interval computations based on the asymptotic standard error or profile likelihood. A Bayesian Markov chain Monte Carlo (MCMC) implementation of the model was therefore used to produce a posterior distribution for annual survival. The corresponding reduced-parameter JS model was also fit via MCMC because it is the more appropriate of the two models for these photoidentification data. Because the CJS model ignores much of the information on capture probabilities provided by the data, its results are less precise and more sensitive to the prior distributions used than results from the JS model. With priors for annual survival and capture probabilities uniform from 0 to 1, the posterior mean for the bowhead survival rate from the JS model is 0.984, and 95% of the posterior probability lies between 0.948 and 1. This high estimated survival rate is consistent with other bowhead life history data.
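For intuition, a CJS capture-history probability conditions on first release and multiplies survival ($\phi$) and capture ($p$) terms; for example, a whale photographed in years 1 and 3 but not year 2 contributes

$$\Pr(101 \mid \text{release in year } 1) = \phi_1 (1 - p_2)\, \phi_2\, p_3,$$

and the selected model here constrains $\phi_1 = \phi_2 = \cdots = \phi$ while leaving the $p_t$ time-varying. The JS model additionally models the first captures, which is where its extra information on the capture probabilities comes from.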

14.
Maximum likelihood estimation of the model parameters for a spatial population based on data collected from a survey sample is usually straightforward when sampling and non-response are both non-informative, since the model can then usually be fitted using the available sample data, and no allowance is necessary for the fact that only a part of the population has been observed. Although for many regression models this naive strategy yields consistent estimates, this is not the case for some models, such as spatial auto-regressive models. In this paper, we show that for a broad class of such models, a maximum marginal likelihood approach that uses both sample and population data leads to more efficient estimates since it uses spatial information from sampled as well as non-sampled units. Extensive simulation experiments based on two well-known data sets are used to assess the impact of the spatial sampling design, the auto-correlation parameter and the sample size on the performance of this approach. When compared to some widely used methods that use only sample data, the results from these experiments show that the maximum marginal likelihood approach is much more precise.
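A sketch of why population data help, using the simultaneous spatial autoregressive model as the concrete case (illustrative notation):

$$\mathbf{y} = \rho W \mathbf{y} + X\boldsymbol\beta + \boldsymbol\varepsilon, \quad \boldsymbol\varepsilon \sim N(\mathbf{0}, \sigma^2 I) \;\Longrightarrow\; \mathbf{y} \sim N\!\left\{(I - \rho W)^{-1} X \boldsymbol\beta,\; \sigma^2 (I - \rho W)^{-1} (I - \rho W)^{-\top}\right\},$$

and the marginal likelihood of the sampled units is the corresponding multivariate normal marginal $\mathbf{y}_s \sim N(\boldsymbol\mu_s, \boldsymbol\Sigma_{ss})$. Because the weight matrix $W$ covers the whole population, the sampled block $\boldsymbol\Sigma_{ss}$ still carries spatial information from non-sampled units, which sample-only fitting discards.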

15.
Yau KK. Biometrics 2001;57(1):96-102.
A method for modeling survival data with multilevel clustering is described. The Cox partial likelihood is incorporated into the generalized linear mixed model (GLMM) methodology. Parameter estimation is achieved by maximizing a log likelihood analogous to the likelihood associated with the best linear unbiased prediction (BLUP) at the initial step of estimation and is extended to obtain residual maximum likelihood (REML) estimators of the variance component. Estimating equations for a three-level hierarchical survival model are developed in detail, and such a model is applied to analyze a set of chronic granulomatous disease (CGD) data on recurrent infections as an illustration with both hospital and patient effects being considered as random. Only the latter gives a significant contribution. A simulation study is carried out to evaluate the performance of the REML estimators. Further extension of the estimation procedure to models with an arbitrary number of levels is also discussed.
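A plausible form of the three-level hierarchical survival model fitted to the CGD data, with hospital and patient effects random (the notation is illustrative):

$$\lambda_{ijk}(t) = \lambda_0(t)\, \exp(\mathbf{x}_{ijk}^\top \boldsymbol\beta + u_i + v_{ij}), \qquad u_i \sim N(0, \sigma_u^2), \quad v_{ij} \sim N(0, \sigma_v^2),$$

for infection $k$ of patient $j$ in hospital $i$; the finding reported in the abstract is that the patient component matters while the hospital component does not.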

16.
In this paper, we consider the inherent association between the mean and the covariance in joint mean–covariance modeling and propose a joint mean–covariance random effect model based on the modified Cholesky decomposition for longitudinal data. We apply a Metropolis–Hastings (M-H) algorithm to simulate the posterior distributions of the model parameters. In addition, a computationally efficient Monte Carlo expectation maximization (MCEM) algorithm is developed for carrying out maximum likelihood estimation. Simulation studies show that the model taking into account the inherent association between mean and covariance yields smaller standard deviations for the parameter estimators, which makes the statistical inferences more reliable. In the real data analysis, the estimation of the parameters in the mean and covariance structure is highly efficient.
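The modified Cholesky decomposition gives the covariance model an unconstrained autoregressive interpretation (Pourahmadi's formulation):

$$y_{ij} - \mu_{ij} = \sum_{k=1}^{j-1} \phi_{jk}\,(y_{ik} - \mu_{ik}) + \epsilon_{ij}, \qquad \operatorname{var}(\epsilon_{ij}) = d_{ij}^2,$$

equivalently $T_i \Sigma_i T_i^\top = D_i$ with $T_i$ unit lower-triangular holding the $-\phi_{jk}$. The generalized autoregressive parameters $\phi_{jk}$ and the log innovation variances $\log d_{ij}^2$ can then be given linear models with shared random effects, which is one natural way the mean–covariance association enters.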

17.
Motivated by the spatial modeling of aberrant crypt foci (ACF) in colon carcinogenesis, we consider binary data with probabilities modeled as the sum of a nonparametric mean plus a latent Gaussian spatial process that accounts for short-range dependencies. The mean is modeled in a general way using regression splines. The mean function can be viewed as a fixed effect and is estimated with a penalty for regularization. With the latent process viewed as another random effect, the model becomes a generalized linear mixed model. In our motivating data set and other applications, the sample size is too large to easily accommodate maximum likelihood or restricted maximum likelihood (REML) estimation, so pairwise likelihood, a special case of composite likelihood, is used instead. We develop an asymptotic theory for models that are sufficiently general to be used in a wide variety of applications, including, but not limited to, the problem that motivated this work. The splines have penalty parameters that must converge to zero asymptotically; we derive theory for this along with a data-driven method for selecting the penalty parameter, a method shown in simulations to improve greatly upon standard devices such as likelihood crossvalidation. Finally, we apply the methods to the data from our ACF experiment and discover an unexpected location for peak formation of ACF.
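The pairwise likelihood replaces the intractable joint likelihood with a sum over bivariate margins, commonly truncated to nearby pairs:

$$p\ell(\boldsymbol\theta) = \sum_{\{(i,j)\,:\, \|s_i - s_j\| \le d\}} \log \Pr(Y_i = y_i,\, Y_j = y_j;\ \boldsymbol\theta),$$

where each bivariate probability requires only a two-dimensional integral over the latent Gaussian process, so the computation scales with the number of retained pairs rather than with a full $n$-dimensional integral.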

18.
An estimator of the hazard rate function from discrete failure time data is obtained by semiparametric smoothing of the (nonsmooth) maximum likelihood estimator, which is achieved by repeated multiplication of a Markov chain transition-type matrix. This matrix is constructed so as to have a given standard discrete parametric hazard rate model, termed the vehicle model, as its stationary hazard rate. As in the discrete density estimation case, the proposed estimator gives improved performance when the vehicle model is a good one and otherwise provides a nonparametric method comparable to the only purely nonparametric smoother discussed in the literature. The proposed semiparametric smoothing approach is then extended to hazard models with covariates and is illustrated by applications to simulated and real data sets.
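The nonsmooth maximum likelihood estimator being smoothed here is the usual life-table hazard estimate,

$$\hat h(t_j) = \frac{d_j}{n_j},$$

where $d_j$ is the number of failures at discrete time $t_j$ and $n_j$ the number at risk just before $t_j$. Since repeated multiplication moves the estimate toward the matrix's stationary hazard, which by construction is the vehicle model, the number of multiplications plays the role of a smoothing parameter.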

19.
Zero‐truncated data arise in various disciplines where counts are observed but the zero-count category cannot be observed during sampling. Maximum likelihood estimation can be used to model these data; however, due to its nonstandard form it cannot be easily implemented using well‐known software packages, and additional programming is often required. Motivated by the Rao–Blackwell theorem, we develop a weighted partial likelihood approach to estimate model parameters for zero‐truncated binomial and Poisson data. The resulting estimating function is equivalent to a weighted score function for standard count data models, and allows readily available software to be applied. We evaluate the efficiency of this new approach and show that it performs almost as well as maximum likelihood estimation. The weighted partial likelihood approach is then extended to regression modelling and variable selection. We examine the performance of the proposed methods through simulation and present two case studies using real data.
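The nonstandard form referred to is visible in the zero-truncated Poisson log-likelihood, whose extra normalizing term is what keeps standard count-model software from applying directly:

$$\ell(\boldsymbol\beta) = \sum_{i} \left\{ y_i \log \mu_i - \mu_i - \log\!\left(1 - e^{-\mu_i}\right) - \log y_i! \right\}, \qquad \mu_i = e^{\mathbf{x}_i^\top \boldsymbol\beta}, \quad y_i \ge 1;$$

the weighted partial likelihood approach replaces this with an estimating function equivalent to a weighted score for the standard (untruncated) model, so that off-the-shelf software can be used.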

20.
Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters
Accurate and fast estimation of the genetic parameters that underlie quantitative traits, using mixed linear models with additive and dominance effects, is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for the inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method's performance is illustrated using simulated data sets with replicates and field data in barley.
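A generic two-phase adaptive Metropolis sketch in Python, showing the covariance-learning idea; it uses a plain random-walk learning phase rather than the authors' hybrid Gibbs sampler, and the target logpost is left abstract.

```python
import numpy as np

def adaptive_metropolis(logpost, theta0, n_learn=5000, n_run=20000, seed=1):
    """Phase 1: explore with a fixed proposal and record draws.
    Phase 2: propose from N(theta, (2.38^2/d) * Cov(learning draws)),
    the classic adaptive-Metropolis scaling (Haario et al., 2001)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    lp = logpost(theta)
    cov = 0.1 * np.eye(d)                          # crude learning-phase proposal
    draws = []
    for i in range(n_learn + n_run):
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
        if i == n_learn - 1:                       # freeze the learned covariance
            learned = np.cov(np.asarray(draws).T).reshape(d, d)
            cov = (2.38**2 / d) * learned + 1e-8 * np.eye(d)
    return np.asarray(draws[n_learn:])
```

In the paper, the second-phase target is the likelihood with the random effects integrated out, which is what makes the learned proposal pay off; that part is model-specific and omitted here.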
