Similar Documents
20 similar documents found (search time: 15 ms)
1.
An estimation method for the semiparametric mixed effects model (cited 6 times: 0 self-citations, 6 citations by others)
Tao H, Palta M, Yandell BS, Newton MA. Biometrics 1999, 55(1):102-110
A semiparametric mixed effects regression model is proposed for the analysis of clustered or longitudinal data with continuous, ordinal, or binary outcome. The common assumption of Gaussian random effects is relaxed by using a predictive recursion method (Newton and Zhang, 1999) to provide a nonparametric smooth density estimate. A new strategy is introduced to accelerate the algorithm. Parameter estimates are obtained by maximizing the marginal profile likelihood by Powell's conjugate direction search method. Monte Carlo results are presented to show that the method can improve the mean squared error of the fixed effects estimators when the random effects distribution is not Gaussian. The usefulness of visualizing the random effects density itself is illustrated in the analysis of data from the Wisconsin Sleep Survey. The proposed estimation procedure is computationally feasible for quite large data sets.
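The predictive recursion referenced above (Newton and Zhang, 1999) admits a compact one-pass implementation. The sketch below is only illustrative and assumes a Gaussian kernel with unit variance, a fixed grid, and a power-law weight sequence; none of these choices are taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def predictive_recursion(y, grid, f0=None, gamma=0.67):
    """One-pass predictive recursion estimate of a mixing density on a grid.

    Assumes a Gaussian kernel y_i | theta ~ N(theta, 1); the grid, kernel
    variance, and weight sequence w_i = (i + 1)^(-gamma) are illustrative choices.
    """
    f = np.full_like(grid, 1.0 / (grid[-1] - grid[0])) if f0 is None else f0.copy()
    for i, yi in enumerate(y, start=1):
        w = (i + 1.0) ** (-gamma)                  # decreasing weight sequence
        kern = norm.pdf(yi, loc=grid, scale=1.0)   # kernel evaluated on the grid
        m = np.trapz(kern * f, grid)               # predictive density of y_i
        f = (1.0 - w) * f + w * kern * f / m       # recursion update
    return f / np.trapz(f, grid)                   # renormalise for numerical drift

# toy example: a bimodal (non-Gaussian) random-effects density
rng = np.random.default_rng(0)
theta = np.where(rng.random(500) < 0.5, -2.0, 2.0)
y = theta + rng.normal(size=500)
grid = np.linspace(-6, 6, 401)
f_hat = predictive_recursion(y, grid)
```

Because the recursion is a single pass over the data it scales easily to large data sets, although the estimate depends on the order in which observations are processed.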

2.
The classical model for the analysis of progression of markers in HIV-infected patients is the mixed effects linear model. However, longitudinal studies of viral load are complicated by left censoring of the measures due to a lower quantification limit. We propose a full likelihood approach to estimate parameters from the linear mixed effects model for left-censored Gaussian data. For each subject, the contribution to the likelihood is the product of the density for the vector of the completely observed outcome and of the conditional distribution function of the vector of the censored outcome, given the observed outcomes. Values of the distribution function were computed by numerical integration. The maximization is performed by a combination of the Simplex algorithm and the Marquardt algorithm. Subject-specific deviations and random effects are estimated by modified empirical Bayes, replacing censored measures by their conditional expectations given the data. A simulation study showed that the proposed estimators are less biased than those obtained by imputing the quantification limit to censored data. Moreover, for models with complex covariance structures, they are less biased than the Monte Carlo expectation maximization (MCEM) estimators developed by Hughes (1999, Biometrics 55, 625-629). The method was then applied to the data of the ALBI-ANRS 070 clinical trial, for which HIV-1 RNA levels were measured with an ultrasensitive assay (quantification limit 50 copies/ml). Using the proposed method, estimates obtained with data artificially censored at 500 copies/ml were close to those obtained with the real data set.
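The per-subject likelihood contribution described above (density of the observed components times the conditional distribution function of the censored components) can be sketched as follows under a Gaussian linear mixed model. The function name, the placeholder inputs, and the use of scipy's multivariate normal CDF for the numerical integration are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def subject_loglik(y, X, Z, beta, D, sigma2, censored, loq):
    """Log-likelihood contribution of one subject in a left-censored linear mixed model.

    y        : response vector (censored entries may hold any placeholder value)
    X, Z     : fixed- and random-effects design matrices
    beta, D, sigma2 : fixed effects, random-effects covariance, residual variance
    censored : boolean mask, True where y is below the quantification limit `loq`
    """
    mu = X @ beta
    V = Z @ D @ Z.T + sigma2 * np.eye(len(y))          # marginal covariance of y
    o, c = ~censored, censored
    ll = 0.0
    if o.any():
        ll += multivariate_normal(mu[o], V[np.ix_(o, o)]).logpdf(y[o])
    if c.any():
        if o.any():
            # conditional law of the censored block given the observed block
            A = V[np.ix_(c, o)] @ np.linalg.inv(V[np.ix_(o, o)])
            cmu = mu[c] + A @ (y[o] - mu[o])
            cV = V[np.ix_(c, c)] - A @ V[np.ix_(o, c)]
        else:
            cmu, cV = mu[c], V[np.ix_(c, c)]
        ll += np.log(multivariate_normal(cmu, cV).cdf(np.full(c.sum(), loq)))
    return ll
```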

3.
Interim analyses in clinical trials are planned for ethical as well as economic reasons. General results have been published in the literature that allow the use of standard group sequential methodology if one uses an efficient test statistic, e.g., when Wald-type statistics are used in random-effects models for ordinal longitudinal data. These models often assume that the random effects are normally distributed. However, this is not always the case. We will show that, when the random-effects distribution is misspecified in ordinal regression models, the joint distribution of the test statistics over the different interim analyses is still a multivariate normal distribution, but a sandwich-type correction to the covariance matrix is needed in order to obtain the correct covariance matrix. The independent increment structure is also investigated. A bias in estimation will occur due to the misspecification. However, we will also show that the treatment effect estimate will be unbiased under the null hypothesis, thus maintaining the type I error. Extensive simulations based on a toenail dermatophyte onychomycosis trial are used to illustrate our results.
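The sandwich-type correction mentioned above has the usual A^{-1} B A^{-1} form for M-estimators; a minimal sketch, assuming per-subject score contributions and the observed information are available, is shown below. Names and inputs are illustrative.

```python
import numpy as np

def sandwich_covariance(scores, hessian):
    """Sandwich-type covariance A^{-1} B A^{-1} for an M-estimator.

    scores  : (n_subjects, p) array of per-subject score contributions,
              evaluated at the parameter estimate
    hessian : (p, p) observed information (minus the Hessian of the log-likelihood)
    Returns the robust covariance used to correct Wald statistics when the
    random-effects distribution is misspecified.
    """
    A_inv = np.linalg.inv(hessian)           # "bread"
    B = scores.T @ scores                    # "meat": empirical score covariance
    return A_inv @ B @ A_inv
```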

4.
Jara A, Hanson TE. Biometrika 2011, 98(3):553-566
We propose a class of dependent processes in which density shape is regressed on one or more predictors through conditional tail-free probabilities by using transformed Gaussian processes. A particular linear version of the process is developed in detail. The resulting process is flexible and easy to fit using standard algorithms for generalized linear models. The method is applied to growth curve analysis, evolving univariate random effects distributions in generalized linear mixed models, and median survival modelling with censored data and covariate-dependent errors.

5.
Song X, Davidian M, Tsiatis AA. Biometrics 2002, 58(4):742-753
Joint models for a time-to-event (e.g., survival) and a longitudinal response have generated considerable recent interest. The longitudinal data are assumed to follow a mixed effects model, and a proportional hazards model depending on the longitudinal random effects and other covariates is assumed for the survival endpoint. Interest may focus on inference on the longitudinal data process, which is informatively censored, or on the hazard relationship. Several methods for fitting such models have been proposed, most requiring a parametric distributional assumption (normality) on the random effects. A natural concern is sensitivity to violation of this assumption; moreover, a restrictive distributional assumption may obscure key features in the data. We investigate these issues through our proposal of a likelihood-based approach that requires only the assumption that the random effects have a smooth density. Implementation via the EM algorithm is described, and performance and the benefits for uncovering noteworthy features are illustrated by application to data from an HIV clinical trial and by simulation.

6.
Li E, Wang N, Wang NY. Biometrics 2007, 63(4):1068-1078
Joint models are formulated to investigate the association between a primary endpoint and features of multiple longitudinal processes. In particular, the subject-specific random effects in a multivariate linear random-effects model for multiple longitudinal processes are predictors in a generalized linear model for primary endpoints. Li, Zhang, and Davidian (2004, Biometrics 60, 1–7) proposed an estimation procedure that makes no distributional assumption on the random effects but assumes independent within-subject measurement errors in the longitudinal covariate process. Based on an asymptotic bias analysis, we found that their estimators can be biased when random effects do not fully explain the within-subject correlations among longitudinal covariate measurements. Specifically, the existing procedure is fairly sensitive to the independent measurement error assumption. To overcome this limitation, we propose new estimation procedures that require neither a distributional or covariance structural assumption on covariate random effects nor an independence assumption on within-subject measurement errors. These new procedures are more flexible, readily cover scenarios that have multivariate longitudinal covariate processes, and can be implemented using available software. Through simulations and an analysis of data from a hypertension study, we evaluate and illustrate the numerical performances of the new estimators.

7.
Statistical models that include random effects are commonly used to analyze longitudinal and correlated data, often with the assumption that the random effects follow a Gaussian distribution. Via theoretical and numerical calculations and simulation, we investigate the impact of misspecification of this distribution on both how well the predicted values recover the true underlying distribution and the accuracy of prediction of the realized values of the random effects. We show that, although the predicted values can vary with the assumed distribution, the prediction accuracy, as measured by mean square error, is little affected for mild-to-moderate violations of the assumptions. Thus, standard approaches, readily available in statistical software, will often suffice. The results are illustrated using data from the Heart and Estrogen/Progestin Replacement Study using models to predict future blood pressure values.
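The predicted values discussed here are the empirical Bayes (BLUP) predictions of the random effects, which under a Gaussian working model have the closed form sketched below; the variable names are illustrative.

```python
import numpy as np

def blup_random_effects(y, X, Z, beta, D, sigma2):
    """Best linear unbiased prediction of the random effects for one subject
    under a Gaussian linear mixed model: b_hat = D Z' V^{-1} (y - X beta).
    These are the 'predicted values' whose robustness to a misspecified
    random-effects distribution is examined in the paper."""
    V = Z @ D @ Z.T + sigma2 * np.eye(len(y))     # marginal covariance of y
    return D @ Z.T @ np.linalg.solve(V, y - X @ beta)
```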

8.
We propose a class of longitudinal data models with random effects that generalizes currently used models in two important ways. First, the random-effects model is a flexible mixture of multivariate normals, accommodating population heterogeneity, outliers, and nonlinearity in the regression on subject-specific covariates. Second, the model includes a hierarchical extension to allow for meta-analysis over related studies. The random-effects distributions are decomposed into one part that is common across all related studies (common measure), and one part that is specific to each study and that captures the variability intrinsic between patients within the same study. Both the common measure and the study-specific measures are parameterized as mixture-of-normals models. We carry out inference using reversible jump posterior simulation to allow a random number of terms in the mixtures. The sampler takes advantage of the small number of entertained models. The motivating application is the analysis of two studies carried out by the Cancer and Leukemia Group B (CALGB). In both studies, we record for each patient white blood cell counts (WBC) over time to characterize the toxic effects of treatment. The WBCs are modeled through a nonlinear hierarchical model that gathers the information from both studies.

9.
We propose a state space model for analyzing equally or unequally spaced longitudinal count data with serial correlation. With a log link function, the mean of the Poisson response variable is a nonlinear function of the fixed and random effects. The random effects are assumed to be generated from a Gaussian first order autoregression (AR(1)). In this case, the mean of the observations has a log normal distribution. We use a combination of linear and nonlinear methods to take advantage of the Gaussian process embedded in a nonlinear function. The state space model uses a modified Kalman filter recursion to estimate the mean and variance of the AR(1) random error given the previous observations. The marginal likelihood is approximated by numerically integrating out the AR(1) random error. Simulation studies with different sets of parameters show that the state space model performs well. The model is applied to Epileptic Seizure data and Primary Care Visits Data. Missing and unequally spaced observations are handled naturally with this model.
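As a simplified illustration of integrating a Gaussian random term out of a Poisson likelihood (the paper integrates an AR(1) error with a modified Kalman filter, which is more involved), a single shared random intercept can be removed by Gauss-Hermite quadrature. All names are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def marginal_loglik_poisson(y, x, beta, tau, n_nodes=30):
    """Marginal log-likelihood of Poisson counts with a shared Gaussian random
    intercept on the log scale, integrated out by Gauss-Hermite quadrature.

    Simplified stand-in for the AR(1) integration in the abstract:
    y_t | b ~ Poisson(exp(x_t' beta + b)), b ~ N(0, tau^2).
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * tau * nodes                     # change of variables
    eta = x @ beta                                     # (T,) linear predictor
    # conditional likelihood of the whole series at each quadrature node
    cond = np.array([np.exp(poisson.logpmf(y, np.exp(eta + bk)).sum()) for bk in b])
    return np.log((weights * cond).sum() / np.sqrt(np.pi))
```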

10.
Quantitative trait loci (QTL) are usually searched for using classical interval mapping methods which assume that the trait of interest follows a normal distribution. However, these methods cannot take into account features of most survival data such as a non-normal distribution and the presence of censored data. We propose two new QTL detection approaches which allow the consideration of censored data. One interval mapping method uses a Weibull model (W), which is popular in parametric modelling of survival traits, and the other uses a Cox model (C), which avoids making any assumption on the trait distribution. Data were simulated following the structure of a published experiment. Using simulated data, we compare W, C and a classical interval mapping method using a Gaussian model on uncensored data (G) or on all data (G'=censored data analysed as though records were uncensored). An adequate mathematical transformation was used for all parametric methods (G, G' and W). When data were not censored, the four methods gave similar results. However, when some data were censored, the power of QTL detection and accuracy of QTL location and of estimation of QTL effects for G decreased considerably with censoring, particularly when censoring was at a fixed date. This decrease with censoring was observed also with G', but it was less severe. Censoring had a negligible effect on results obtained with the W and C methods.
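The Weibull (W) alternative for censored traits rests on a standard right-censored Weibull log-likelihood; a minimal sketch, omitting the QTL genotype-probability mixture that interval mapping adds on top, is:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_censored_negloglik(log_params, t, event):
    """Negative log-likelihood of right-censored Weibull data.

    log_params : log(shape), log(scale), parameterised on the log scale so the
                 optimiser works on an unconstrained space
    t, event   : times and event indicators (1 = observed, 0 = censored)
    """
    k, lam = np.exp(log_params)
    z = (t / lam) ** k
    log_f = np.log(k) - k * np.log(lam) + (k - 1.0) * np.log(t) - z   # log density
    log_S = -z                                                        # log survival
    return -(event * log_f + (1 - event) * log_S).sum()

# toy fit on simulated data with administrative censoring at t = 2
rng = np.random.default_rng(1)
t_true = rng.weibull(1.5, size=300) * 1.3
event = (t_true <= 2.0).astype(float)
t_obs = np.minimum(t_true, 2.0)
fit = minimize(weibull_censored_negloglik, x0=[0.0, 0.0], args=(t_obs, event))
shape_hat, scale_hat = np.exp(fit.x)
```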

11.
Elashoff RM, Li G, Li N. Biometrics 2008, 64(3):762-771
In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541–554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for the scleroderma lung disease.

12.
In longitudinal studies, measurements of the same individuals are taken repeatedly through time. Often, the primary goal is to characterize the change in response over time and the factors that influence change. Factors can affect not only the location but also more generally the shape of the distribution of the response over time. To make inference about the shape of a population distribution, the widely popular mixed-effects regression, for example, would be inadequate if the distribution is not approximately Gaussian. We propose a novel linear model for quantile regression (QR) that includes random effects in order to account for the dependence between serial observations on the same subject. The notion of QR is synonymous with robust analysis of the conditional distribution of the response variable. We present a likelihood-based approach to the estimation of the regression quantiles that uses the asymmetric Laplace density. In a simulation study, the proposed method had an advantage in terms of mean squared error of the QR estimator, when compared with the approach that considers penalized fixed effects. Following our strategy, a nearly optimal degree of shrinkage of the individual effects is automatically selected by the data and their likelihood. Also, our model appears to be a robust alternative to the mean regression with random effects when the location parameter of the conditional distribution of the response is of interest. We apply our model to a real data set which consists of self-reported amount of labor pain measurements taken on women repeatedly over time, whose distribution is characterized by skewness, and the significance of the parameters is evaluated by the likelihood ratio statistic.
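Maximizing the asymmetric Laplace likelihood is equivalent to minimizing the check (pinball) loss at the chosen quantile level; the sketch below shows the fixed-effects version only, with the random-effect shrinkage omitted, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(beta, X, y, tau):
    """Asymmetric (check / pinball) loss whose minimiser is the tau-th regression
    quantile; maximising the asymmetric Laplace likelihood is equivalent to
    minimising this loss. Random effects are omitted in this sketch."""
    r = y - X @ beta
    return np.where(r >= 0, tau * r, (tau - 1.0) * r).sum()

# toy example: median regression (tau = 0.5) on skewed errors
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.exponential(size=200)
fit = minimize(check_loss, x0=np.zeros(2), args=(X, y, 0.5), method="Nelder-Mead")
beta_hat = fit.x
```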

13.
Hogan JW, Lin X, Herman B. Biometrics 2004, 60(4):854-864
The analysis of longitudinal repeated measures data is frequently complicated by missing data due to informative dropout. We describe a mixture model for the joint distribution of longitudinal repeated measures and dropout time, where the dropout distribution may be continuous and the dependence between response and dropout is semiparametric. Specifically, we assume that responses follow a varying coefficient random effects model conditional on dropout time, where the regression coefficients depend on dropout time through unspecified nonparametric functions that are estimated using step functions when dropout time is discrete (e.g., for panel data) and using smoothing splines when dropout time is continuous. Inference under the proposed semiparametric model is hence more robust than under the parametric conditional linear model. The unconditional distribution of the repeated measures is a mixture over the dropout distribution. We show that estimation in the semiparametric varying coefficient mixture model can proceed by fitting a parametric mixed effects model and can be carried out on standard software platforms such as SAS. The model is used to analyze data from a recent AIDS clinical trial and its performance is evaluated using simulations.

14.
Regression models in survival analysis are most commonly applied to right-censored survival data. In some situations, the time to the event is not exactly observed, although it is known that the event occurred between two observed times. In practice, the moment of observation is frequently taken as the event occurrence time, and the interval-censored mechanism is ignored. We present a cure rate defective model for interval-censored event-time data. The defective distribution is characterized by a density function whose integral is less than one when the parameter domain differs from the usual domain. We use the Gompertz and inverse Gaussian defective distributions to model data containing cured elements and estimate parameters by maximum likelihood. We evaluate the performance of the proposed models using Monte Carlo simulation studies. The practical relevance of the models is illustrated by application to datasets on ovarian cancer recurrence and on oral lesions in children after liver transplantation, both derived from studies performed at the A.C. Camargo Cancer Center in São Paulo, Brazil.
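For the Gompertz case, one common parameterization (which may differ from the paper's) makes the defective behaviour explicit: with hazard a*exp(b*t) and b < 0, the survival function plateaus at exp(a/b), which is the implied cure fraction.

```python
import numpy as np

def gompertz_surv(t, a, b):
    """Gompertz survival function S(t) = exp(-(a/b) * (exp(b*t) - 1)).
    With a > 0 and b < 0 the distribution is defective: S(t) -> exp(a/b) > 0,
    so exp(a/b) is the implied cure fraction."""
    return np.exp(-(a / b) * np.expm1(b * t))

def censored_loglik(params, t, event):
    """Log-likelihood for right-censored data under the (possibly defective)
    Gompertz: events contribute log hazard + log survival, censored times
    contribute log survival only."""
    a, b = params
    log_h = np.log(a) + b * t                     # hazard a * exp(b t)
    log_S = -(a / b) * np.expm1(b * t)
    return (event * log_h + log_S).sum()

cure_fraction = np.exp(-0.5)   # e.g. a = 0.3, b = -0.6 gives exp(a/b) = exp(-0.5)
```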

15.
A method is proposed that aims at identifying clusters of individuals that show similar patterns when observed repeatedly. We consider linear mixed models, which are widely used for the modeling of longitudinal data. In contrast to the classical assumption of a normal distribution for the random effects, a finite mixture of normal distributions is assumed. Typically, the number of mixture components is unknown and has to be chosen, ideally by data-driven tools. For this purpose, an EM algorithm-based approach is considered that uses a penalized normal mixture as the random effects distribution. The penalty term shrinks the pairwise distances of cluster centers based on the group lasso and the fused lasso methods. The effect is that individuals with similar time trends are merged into the same cluster. The strength of regularization is determined by one penalization parameter. For finding the optimal penalization parameter, a new model choice criterion is proposed.
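The penalty that shrinks pairwise distances of cluster centers can be sketched as a group-lasso-type term on the pairwise differences; the exact weights and the fused-lasso variant used in the paper may differ from this simple form.

```python
import numpy as np
from itertools import combinations

def center_fusion_penalty(centers, lam):
    """Group-lasso-type penalty on pairwise differences of mixture-component
    centers: lam * sum_{j<k} ||mu_j - mu_k||_2. Adding this term to the mixture
    log-likelihood shrinks nearby centers together, which is the mechanism that
    merges individuals with similar time trends into the same cluster."""
    return lam * sum(np.linalg.norm(centers[j] - centers[k])
                     for j, k in combinations(range(len(centers)), 2))
```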

16.
Dobson A, Henderson R. Biometrics 2003, 59(4):741-751
We present a variety of informal graphical procedures for diagnostic assessment of joint models for longitudinal and dropout time data. A random effects approach for Gaussian responses and proportional hazards dropout time is assumed. We consider preliminary assessment of dropout classification categories based on residuals following a standard longitudinal data analysis with no allowance for informative dropout. Residual properties conditional upon dropout information are discussed and case influence is considered. The proposed diagnostics do not require computationally intensive procedures beyond those used to fit the model. A longitudinal trial into the treatment of schizophrenia is used to illustrate the suggestions.
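A minimal version of the preliminary check described above (residuals from a standard linear mixed model with no allowance for informative dropout, grouped by dropout category) might look like the following sketch; the data frame and its column names are hypothetical, and the use of statsmodels and matplotlib is an assumption rather than the authors' implementation.

```python
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

# df is assumed to have columns: "subject" (id), "time", response "y",
# and a "dropout_group" label (e.g. completer / early dropout); names are hypothetical.
def residuals_by_dropout(df):
    """Fit a standard linear mixed model ignoring dropout and plot its residuals
    by dropout category, in the spirit of the informal graphical checks above."""
    model = smf.mixedlm("y ~ time", data=df, groups=df["subject"])
    fit = model.fit()
    df = df.assign(resid=fit.resid)
    df.boxplot(column="resid", by="dropout_group")
    plt.ylabel("residual from standard LMM")
    plt.show()
    return fit
```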

17.
Often in biomedical studies, the routine use of linear mixed-effects models (based on Gaussian assumptions) can be questionable when the longitudinal responses are skewed in nature. Skew-normal/elliptical models are widely used in those situations. Often, those skewed responses might also be subject to some upper and lower quantification limits (QLs; viz., longitudinal viral-load measures in HIV studies), beyond which they are not measurable. In this paper, we develop a Bayesian analysis of censored linear mixed models replacing the Gaussian assumptions with skew-normal/independent (SNI) distributions. The SNI is an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash, and skew-contaminated normal distributions as special cases. The proposed model provides flexibility in capturing the effects of skewness and heavy tails for responses that are either left- or right-censored. For our analysis, we adopt a Bayesian framework and develop a Markov chain Monte Carlo algorithm to carry out the posterior analyses. The marginal likelihood is tractable, and is utilized to compute not only some Bayesian model selection measures but also case-deletion influence diagnostics based on the Kullback–Leibler divergence. The newly developed procedures are illustrated with a simulation study as well as an HIV case study involving analysis of longitudinal viral loads.
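The skew-normal building block of the SNI class has density 2/omega * phi(z) * Phi(alpha*z); a small sketch of this density and of simulating left-censored skewed responses is below. The generation identity used (delta*|U0| + sqrt(1-delta^2)*U1) is a standard construction, and all names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def skew_normal_pdf(x, loc=0.0, scale=1.0, alpha=0.0):
    """Skew-normal density 2/scale * phi(z) * Phi(alpha * z), z = (x - loc)/scale.
    The heavier-tailed SNI members (skew-t, skew-slash, ...) arise by further
    mixing the scale over a positive random variable."""
    z = (x - loc) / scale
    return 2.0 / scale * norm.pdf(z) * norm.cdf(alpha * z)

# simulate skewed responses and left-censor them at a lower quantification limit
rng = np.random.default_rng(3)
u0, u1 = rng.normal(size=(2, 1000))
alpha = 4.0
delta = alpha / np.sqrt(1 + alpha**2)
y = delta * np.abs(u0) + np.sqrt(1 - delta**2) * u1   # standard skew-normal draws
loq = -0.5
y_obs = np.maximum(y, loq)      # recorded value (QL imputed at censored points)
is_censored = y < loq
```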

18.
Fieuws S, Verbeke G. Biometrics 2006, 62(2):424-431
A mixed model is a flexible tool for joint modeling purposes, especially when the gathered data are unbalanced. However, computational problems due to the dimension of the joint covariance matrix of the random effects arise as soon as the number of outcomes and/or the number of random effects used per outcome increases. We propose a pairwise approach in which all possible bivariate models are fitted, and where inference follows from pseudo-likelihood arguments. The approach is applicable for linear, generalized linear, and nonlinear mixed models, or for combinations of these. The methodology will be illustrated for linear mixed models in the analysis of 22-dimensional, highly unbalanced, longitudinal profiles of hearing thresholds.
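The pairwise strategy itself is simple to express: fit a bivariate mixed model to every pair of outcomes and combine the replicated estimates, here by plain averaging as the simplest pseudo-likelihood combination rule. The fit_bivariate routine below is a hypothetical placeholder supplied by the user, not an existing library call.

```python
from itertools import combinations
import numpy as np

def pairwise_estimates(outcomes, fit_bivariate):
    """Pairwise fitting strategy: fit a bivariate mixed model to every pair of
    outcomes and average the estimates of any parameter that appears in several
    pairs.

    outcomes      : list of outcome labels (or data objects), one per response
    fit_bivariate : hypothetical user-supplied routine; fit_bivariate(a, b) must
                    return a dict {parameter_name: estimate} for the pair (a, b)
    """
    collected = {}
    for a, b in combinations(outcomes, 2):
        for name, value in fit_bivariate(a, b).items():
            collected.setdefault(name, []).append(value)
    return {name: float(np.mean(values)) for name, values in collected.items()}
```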

19.
Statistical analysis of longitudinal data often involves modeling treatment effects on clinically relevant longitudinal biomarkers since an initial event (the time origin). In some studies including preventive HIV vaccine efficacy trials, some participants have biomarkers measured starting at the time origin, whereas others have biomarkers measured starting later with the time origin unknown. The semiparametric additive time-varying coefficient model is investigated where the effects of some covariates vary nonparametrically with time while the effects of others remain constant. Weighted profile least squares estimators coupled with kernel smoothing are developed. The method uses the expectation maximization approach to deal with the censored time origin. The Kaplan–Meier estimator and other failure time regression models such as the Cox model can be utilized to estimate the distribution and the conditional distribution of the left censored event time related to the censored time origin. Asymptotic properties of the parametric and nonparametric estimators and consistent asymptotic variance estimators are derived. A two-stage estimation procedure for choosing the weights is proposed to improve estimation efficiency. Numerical simulations are conducted to examine finite sample properties of the proposed estimators. The simulation results show that the theory and methods work well. The efficiency gain of the two-stage estimation procedure depends on the distribution of the longitudinal error processes. The method is applied to analyze data from the Merck 023/HVTN 502 Step HIV vaccine study.

20.
Lachos VH, Bandyopadhyay D, Dey DK. Biometrics 2011, 67(4):1594-1604
HIV RNA viral load measures are often subject to some upper and lower detection limits depending on the quantification assays. Hence, the responses are either left or right censored. Linear (and nonlinear) mixed-effects models (with modifications to accommodate censoring) are routinely used to analyze this type of data and are based on normality assumptions for the random terms. However, those analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear (and nonlinear) models, replacing the Gaussian assumptions for the random terms with normal/independent (NI) distributions. The NI is an attractive class of symmetric heavy-tailed densities that includes the normal, Student's-t, slash, and the contaminated normal distributions as special cases. The marginal likelihood is tractable (using approximations for nonlinear models) and can be used to develop Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. The newly developed procedures are illustrated with two HIV AIDS studies on viral loads that were initially analyzed using normal (censored) mixed-effects models, as well as simulations.
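Members of the NI class arise as scale mixtures of normals, Y = Z / sqrt(U) with Z standard normal and U a positive mixing variable; a small sampler covering the four special cases named above is sketched here, with parameter names chosen for illustration.

```python
import numpy as np

def rnorm_independent(n, family="t", nu=4.0, rho=0.1, kappa=9.0, rng=None):
    """Draw from a normal/independent (NI) scale mixture: Y = Z / sqrt(U),
    Z ~ N(0, 1), U a positive mixing variable.

    family = "normal"       -> U = 1                            (normal)
             "t"            -> U ~ Gamma(nu/2, rate nu/2)       (Student-t, nu df)
             "slash"        -> U ~ Beta(nu, 1)                  (slash)
             "contaminated" -> U = 1/kappa w.p. rho, else U = 1 (contaminated normal)
    Parameter names follow common conventions, not necessarily the paper's.
    """
    rng = rng or np.random.default_rng()
    z = rng.normal(size=n)
    if family == "normal":
        u = np.ones(n)
    elif family == "t":
        u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
    elif family == "slash":
        u = rng.beta(nu, 1.0, size=n)
    elif family == "contaminated":
        u = np.where(rng.random(n) < rho, 1.0 / kappa, 1.0)
    else:
        raise ValueError(family)
    return z / np.sqrt(u)
```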
