Similar Articles (20 results)
1.
We study a linear mixed effects model for longitudinal data, where the response variable and covariates with fixed effects are subject to measurement error. We propose a method-of-moments estimation approach that does not require any assumption on the functional forms of the distributions of the random effects and other random errors in the model. For a classical measurement error model we apply the instrumental variable approach to ensure identifiability of the parameters. Our methodology, without instrumental variables, can be applied to Berkson measurement errors. Using simulation studies, we investigate the finite sample performance of the estimators and show the impact of measurement error in the covariates and the response on the estimation procedure. The results show that our method performs quite satisfactorily, especially for the fixed effects with measurement error (even under misspecification of the measurement error model). This method is applied to a real data example of a large birth and child cohort study.
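The instrumental-variable idea in the classical measurement error setting can be sketched with simulated data. This is a toy linear model, not the authors' mixed-effects estimator; all variable names and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(0.0, 1.0, n)          # unobserved true covariate
z = x + rng.normal(0.0, 0.5, n)      # instrument: correlated with x, independent of other errors
w = x + rng.normal(0.0, 1.0, n)      # error-prone measurement of x
y = 2.0 * x + rng.normal(0.0, 1.0, n)

# naive slope of y on w is attenuated toward zero
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
# instrumental-variable estimate recovers the true slope
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]
```

With true slope 2 and equal signal and error variances, the naive slope shrinks toward half its true value, while the IV ratio is consistent because the instrument is uncorrelated with both error terms.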

2.
Greene WF, Cai J. Biometrics 2004, 60(4):987-996
We consider measurement error in covariates in the marginal hazards model for multivariate failure time data. We explore the bias implications of normal additive measurement error without assuming a distribution for the underlying true covariate. To correct measurement-error-induced bias in the regression coefficient of the marginal model, we propose to apply the SIMEX procedure and demonstrate its large and small sample properties for both known and estimated measurement error variance. We illustrate this method using the Lipid Research Clinics Coronary Primary Prevention Trial data with total cholesterol as the covariate measured with error and time until angina and time until nonfatal myocardial infarction as the correlated outcomes of interest.
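The core SIMEX recipe can be illustrated on a simple linear model with known error variance (a sketch only, not the marginal hazards implementation of the paper): add extra noise at increasing multiples of the error variance, track how the naive estimate degrades, and extrapolate back to the no-error case. Note that the standard quadratic extrapolant reduces, but need not fully remove, the bias:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u = 5000, 1.5, 0.8
x = rng.normal(0, 1, n)
w = x + rng.normal(0, sigma_u, n)     # observed with known error variance
y = beta * x + rng.normal(0, 1, n)

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
est = []
for lam in lambdas:
    # simulation step: inflate the measurement error by a factor (1 + lam)
    sims = [slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y) for _ in range(B)]
    est.append(np.mean(sims))

# extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1
coef = np.polyfit(lambdas, est, 2)
beta_simex = np.polyval(coef, -1.0)
```

Here `est[0]` is the naive estimate; `beta_simex` moves it substantially back toward the true slope of 1.5.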

3.
We propose a conditional scores procedure for obtaining bias-corrected estimates of log odds ratios from matched case-control data in which one or more covariates are subject to measurement error. The approach involves conditioning on sufficient statistics for the unobservable true covariates that are treated as fixed unknown parameters. For the case of Gaussian nondifferential measurement error, we derive a set of unbiased score equations that can then be solved to estimate the log odds ratio parameters of interest. The procedure successfully removes the bias in naive estimates, and standard error estimates are obtained by resampling methods. We present an example of the procedure applied to data from a matched case-control study of prostate cancer and serum hormone levels, and we compare its performance to that of regression calibration procedures.

4.
X Liu, K Y Liang. Biometrics 1992, 48(2):645-654
Ignoring measurement error may cause bias in the estimation of regression parameters. When the true covariates are unobservable, multiple imprecise measurements can be used in the analysis to correct for the associated bias. We suggest a simple estimating procedure that gives consistent estimates of regression parameters by using the repeated measurements with error. The relative Pitman efficiency of our estimator based on models with and without measurement error has been found to be a simple function of the number of replicates and the ratio of intra- to inter-individual variance of the true covariate. The procedure thus provides a guide for deciding the number of repeated measurements in the design stage. An example from a survey study is presented.
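The use of replicates can be sketched as follows (an illustrative attenuation correction under classical additive error, not the authors' exact estimating procedure): the within-subject spread of the replicates estimates the error variance, which gives a reliability factor to undo the attenuation of the naive slope.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, beta, sig_u = 4000, 3, 2.0, 1.0
x = rng.normal(0, 1, n)
w = x[:, None] + rng.normal(0, sig_u, (n, k))   # k replicate measurements per subject
y = beta * x + rng.normal(0, 1, n)

wbar = w.mean(axis=1)
s2_u = w.var(axis=1, ddof=1).mean()              # within-subject (error) variance
s2_wbar = wbar.var(ddof=1)
reliability = (s2_wbar - s2_u / k) / s2_wbar     # estimated Var(X) / Var(Wbar)

beta_naive = np.cov(wbar, y)[0, 1] / s2_wbar     # attenuated slope
beta_corrected = beta_naive / reliability        # de-attenuated slope
```

Averaging k replicates shrinks the error variance to sig_u^2/k, so the reliability (and hence the efficiency of the corrected estimator) improves directly with the number of replicates, which is the design-stage trade-off the abstract refers to.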

5.
Stratified Cox regression models with a large number of strata and small stratum sizes are useful in many settings, including matched case-control family studies. In the presence of measurement error in covariates and a large number of strata, we show that extensions of existing methods fail either to reduce or to correct the bias under nonsymmetric distributions of the true covariate or the error term. We propose a nonparametric correction method for the estimation of regression coefficients, and show that the estimators are asymptotically consistent for the true parameters. Small sample properties are evaluated in a simulation study. The method is illustrated with an analysis of Framingham data.

6.
Longitudinal data often contain missing observations and error-prone covariates. Extensive attention has been directed to analysis methods that adjust for the bias induced by missing observations. There is relatively little work investigating the effects of covariate measurement error on estimation of the response parameters, especially on simultaneously accounting for the biases induced by both missing values and mismeasured covariates. It is not clear what the impact of ignoring measurement error is when analyzing longitudinal data with both missing observations and error-prone covariates. In this article, we study the effects of covariate measurement error on estimation of the response parameters for longitudinal studies. We develop an inference method that adjusts for the biases induced by measurement error as well as by missingness. The proposed method does not require the full specification of the distribution of the response vector but only requires modeling its mean and variance structures. Furthermore, the proposed method employs the so-called functional modeling strategy to handle the covariate process, with the distribution of covariates left unspecified. These features, plus the simplicity of implementation, make the proposed method very attractive. We establish the asymptotic properties of the resulting estimators. With the proposed method, we conduct sensitivity analyses on a cohort data set arising from the Framingham Heart Study. Simulation studies are carried out to evaluate the impact of ignoring covariate measurement error and to assess the performance of the proposed method.

7.
In epidemiologic studies, subjects are often misclassified as to their level of exposure. Ignoring this misclassification error in the analysis introduces bias in the estimates of certain parameters and invalidates many hypothesis tests. For situations in which there is misclassification of exposure in a follow-up study with categorical data, we have developed a model that permits consideration of any number of exposure categories and any number of multiple-category covariates. When used with logistic and Poisson regression procedures, this model helps assess the potential for bias when misclassification is ignored. When reliable ancillary information is available, the model can be used to correct for misclassification bias in the estimates produced by these regression procedures.
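The basic matrix correction for misclassified categorical exposure can be sketched as follows (a generic illustration assuming the misclassification matrix is known from ancillary data; it is not the model of this paper): the expected observed category counts are a known linear transform of the true counts, so inverting that transform recovers the true distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
# M[i, j] = P(classified as category j | true category i)
M = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
true_probs = np.array([0.5, 0.3, 0.2])
n = 100_000
true_cat = rng.choice(3, size=n, p=true_probs)
# draw observed categories by inverting the rowwise CDF of M
u = rng.random(n)
obs_cat = (u[:, None] > M.cumsum(axis=1)[true_cat]).sum(axis=1)

obs_counts = np.bincount(obs_cat, minlength=3)
# expected observed counts = M.T @ true counts, so solve the linear system to correct
corrected_probs = np.linalg.solve(M.T, obs_counts) / n
```

Because the rows of M sum to one, the corrected counts automatically sum back to n; the correction is exact in expectation and accurate up to multinomial sampling noise.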

8.
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts, including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates carries a risk of model misspecification, and hence of invalid inferences. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality of the resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.

9.
In many observational studies, individuals are measured repeatedly over time, although not necessarily at a set of prespecified occasions. Instead, individuals may be measured at irregular intervals, with those having a history of poorer health outcomes being measured with somewhat greater frequency and regularity; i.e., those individuals with poorer health outcomes may have more frequent follow-up measurements and the intervals between their repeated measurements may be shorter. In this article, we consider estimation of regression parameters in models for longitudinal data where the follow-up times are not fixed by design but can depend on previous outcomes. In particular, we focus on general linear models for longitudinal data where the repeated measures are assumed to have a multivariate Gaussian distribution. We consider assumptions regarding the follow-up time process that result in the likelihood function separating into two components: one for the follow-up time process, the other for the outcome process. The practical implication of this separation is that the former process can be ignored when making likelihood-based inferences about the latter; i.e., maximum likelihood (ML) estimation of the regression parameters relating the mean of the longitudinal outcomes to covariates does not require that a model for the distribution of follow-up times be specified. As a result, standard statistical software, e.g., SAS PROC MIXED (Littell et al., 1996, SAS System for Mixed Models), can be used to analyze the data. However, we also demonstrate that misspecification of the model for the covariance among the repeated measures will, in general, result in regression parameter estimates that are biased. Furthermore, results of a simulation study indicate that the potential bias due to misspecification of the covariance can be quite considerable in this setting. Finally, we illustrate these results using data from a longitudinal observational study (Lipshultz et al., 1995, New England Journal of Medicine 332, 1738-1743) that explored the cardiotoxic effects of doxorubicin chemotherapy for the treatment of acute lymphoblastic leukemia in children.

10.
Chen H, Wang Y. Biometrics 2011, 67(3):861-870
In this article, we propose penalized spline (P-spline)-based methods for functional mixed effects models with varying coefficients. We decompose longitudinal outcomes as a sum of several terms: a population mean function, covariates with time-varying coefficients, functional subject-specific random effects, and residual measurement error processes. Using P-splines, we propose nonparametric estimation of the population mean function, varying coefficient, random subject-specific curves, the associated covariance function that represents between-subject variation, and the variance function of the residual measurement errors that represents within-subject variation. The proposed methods offer flexible estimation of both the population- and subject-level curves. In addition, decomposing the variability of the outcomes into between- and within-subject sources is useful in identifying the dominant variance component and therefore in optimally modeling the covariance function. We use a likelihood-based method to select multiple smoothing parameters. Furthermore, we study the asymptotics of the baseline P-spline estimator with longitudinal data. We conduct simulation studies to investigate the performance of the proposed methods. The benefit of the between- and within-subject covariance decomposition is illustrated through an analysis of Berkeley growth data, where we identified clearly distinct patterns of the between- and within-subject covariance functions of children's heights. We also apply the proposed methods to estimate the effect of antihypertensive treatment from the Framingham Heart Study data.

11.
Pan W, Lin X, Zeng D. Biometrics 2006, 62(2):402-412
We propose a new class of models, transition measurement error models, to study the effects of covariates and the past responses on the current response in longitudinal studies when one of the covariates is measured with error. We show that the response variable conditional on the error-prone covariate follows a complex transition mixed effects model. The naive model obtained by ignoring the measurement error correctly specifies the transition part of the model, but misspecifies the covariate effect structure and ignores the random effects. We next study the asymptotic bias in the naive estimator obtained by ignoring the measurement error, for both continuous and discrete outcomes. We show that the naive estimator of the regression coefficient of the error-prone covariate is attenuated, while the naive estimators of the regression coefficients of the past responses are generally inflated. We then develop a structural modeling approach for parameter estimation using the maximum likelihood estimation method. In view of the multidimensional integration required by full maximum likelihood estimation, an EM algorithm is developed to calculate maximum likelihood estimators, in which Monte Carlo simulations are used to evaluate the conditional expectations in the E-step. We evaluate the performance of the proposed method through a simulation study and apply it to a longitudinal social support study for elderly women with heart disease. An additional simulation study shows that the Bayesian information criterion (BIC) performs well in choosing the correct transition orders of the models.
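The attenuation/inflation pattern described above can be reproduced in a toy simulation (illustrative only; the paper's Monte Carlo EM estimator is not implemented here). With a subject-level covariate measured once with error, the naive pooled OLS transition fit shrinks the covariate coefficient and inflates the lag coefficient, because the lagged response absorbs part of the covariate signal that the noisy measurement misses:

```python
import numpy as np

rng = np.random.default_rng(7)
n, T, alpha, beta = 2000, 10, 0.3, 1.0
x = rng.normal(0, 1, n)                       # subject-level true covariate
w = x + rng.normal(0, 1.0, n)                 # single error-prone measurement
y = np.zeros((n, T))
for t in range(1, T):
    y[:, t] = alpha * y[:, t - 1] + beta * x + rng.normal(0, 1, n)

# pool (y_t, y_{t-1}, w) across t >= 1 and fit the naive transition model by OLS
ylag = y[:, :-1].ravel()
ycur = y[:, 1:].ravel()
X = np.column_stack([np.ones_like(ylag), ylag, np.repeat(w, T - 1)])
coef, *_ = np.linalg.lstsq(X, ycur, rcond=None)
alpha_naive, beta_naive = coef[1], coef[2]
```

Here `alpha_naive` exceeds the true lag coefficient 0.3 while `beta_naive` falls well short of the true covariate effect 1.0, matching the bias directions derived in the abstract.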

12.
We introduce a new method, moment reconstruction, of correcting for measurement error in covariates in regression models. The central idea is similar to regression calibration in that the values of the covariates that are measured with error are replaced by "adjusted" values. In regression calibration the adjusted value is the expectation of the true value conditional on the measured value. In moment reconstruction the adjusted value is the variance-preserving empirical Bayes estimate of the true value conditional on the outcome variable. The adjusted values thereby have the same first two moments and the same covariance with the outcome variable as the unobserved "true" covariate values. We show that moment reconstruction is equivalent to regression calibration in the case of linear regression, but leads to different results for logistic regression. For case-control studies with logistic regression and covariates that are normally distributed within cases and controls, we show that the resulting estimates of the regression coefficients are consistent. In simulations we demonstrate that for logistic regression, moment reconstruction carries less bias than regression calibration, and for case-control studies is superior in mean-square error to the standard regression calibration approach. Finally, we give an example of the use of moment reconstruction in linear discriminant analysis and a nonstandard problem where we wish to adjust a classification tree for measurement error in the explanatory variables.
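For reference, the regression calibration step that moment reconstruction modifies can be sketched in the linear-regression case, where the abstract notes the two methods coincide (a minimal illustration assuming normal errors and a known error variance, not the paper's moment reconstruction code):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, sig_u2 = 10000, 1.0, 0.5
x = rng.normal(0, 1, n)
w = x + rng.normal(0, np.sqrt(sig_u2), n)
y = beta * x + rng.normal(0, 1, n)

# regression calibration: replace W by E[X | W] (normal linear shrinkage)
var_w = w.var(ddof=1)
var_x_hat = var_w - sig_u2                  # assumes the error variance is known
x_rc = w.mean() + (var_x_hat / var_w) * (w - w.mean())

beta_naive = np.cov(w, y)[0, 1] / var_w
beta_rc = np.cov(x_rc, y)[0, 1] / x_rc.var(ddof=1)
```

Regressing on the calibrated values `x_rc` undoes the attenuation of the naive slope; moment reconstruction instead builds adjusted values that also preserve the second moments and the covariance with the outcome, which is what changes the answer in logistic regression.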

13.
Missing data, measurement error, and misclassification are three important problems in many research fields, such as epidemiological studies. It is well known that missing data and measurement error in covariates may lead to biased estimation. Misclassification may be considered a special type of measurement error for categorical data. Nevertheless, we treat misclassification as a problem distinct from measurement error because the statistical models for the two are different. Indeed, methods for these three problems have generally been proposed separately in the literature, given that the statistical modeling for each is very different. The problem is more challenging in a longitudinal study with nonignorable missing data. In this article, we consider estimation in generalized linear models under these three incomplete data models. We propose a general approach based on expected estimating equations (EEEs) that solves these three incomplete data problems in a unified fashion. The EEE approach can be easily implemented and its asymptotic covariance can be obtained by sandwich estimation. Intensive simulation studies are performed under various incomplete data settings. The proposed method is applied to a longitudinal study of oral bone density in relation to body bone density.

14.
This article considers the problem of segmented regression in the presence of covariate measurement error in main study/validation study designs. First, we derive a closed and interpretable form for the full likelihood. After that, we use the likelihood results to compute the bias of the estimated changepoint in the case when the measurement error is ignored. We find the direction of the bias in the estimated changepoint to be determined by the design distribution of the observed covariates, and the bias can be in either direction. We apply the methodology to data from a nutritional study that investigates the relation between dietary folate and blood serum homocysteine levels and find that the analysis that ignores covariate measurement error would have indicated a much higher minimum daily dietary folate intake requirement than is obtained in the analysis that takes covariate measurement error into account.

15.
Population abundances are rarely, if ever, known. Instead, they are estimated with some amount of uncertainty. The resulting measurement error has its consequences on subsequent analyses that model population dynamics and estimate probabilities about abundances at future points in time. This article addresses some outstanding questions on the consequences of measurement error in one such dynamic model, the random walk with drift model, and proposes some new ways to correct for measurement error. We present a broad and realistic class of measurement error models that allows both heteroskedasticity and possible correlation in the measurement errors, and we provide analytical results about the biases of estimators that ignore the measurement error. Our new estimators include both method of moments estimators and "pseudo"-estimators that proceed from both observed estimates of population abundance and estimates of parameters in the measurement error model. We derive the asymptotic properties of our methods and existing methods, and we compare their finite-sample performance with a simulation experiment. We also examine the practical implications of the methods by using them to analyze two existing population dynamics data sets.
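A simple method-of-moments correction for the random walk with drift observed with error can be sketched as follows (an illustration under i.i.d. normal measurement error on a long simulated series, not the paper's broader heteroskedastic/correlated error class). Differencing the observations gives a series whose variance is sigma_proc^2 + 2*sigma_obs^2 and whose lag-1 autocovariance is -sigma_obs^2, which identifies both variances:

```python
import numpy as np

rng = np.random.default_rng(5)
T, mu, sig_proc, sig_obs = 5000, 0.1, 0.4, 0.3
proc = np.cumsum(mu + rng.normal(0, sig_proc, T))   # log-abundance random walk with drift
obs = proc + rng.normal(0, sig_obs, T)              # observed with measurement error

d = np.diff(obs)
mu_hat = d.mean()
var_d = d.var(ddof=1)
acov1 = np.mean((d[:-1] - mu_hat) * (d[1:] - mu_hat))

sig_obs2_hat = max(-acov1, 0.0)           # lag-1 autocovariance of diffs = -sigma_obs^2
sig_proc2_hat = var_d - 2 * sig_obs2_hat  # remaining diff variance is process noise
```

Without the correction, the raw variance of the differences would attribute all of `var_d` to process noise, overstating the process variance by twice the measurement error variance.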

16.
Ratio estimation with measurement error in the auxiliary variate
Gregoire TG, Salas C. Biometrics 2009, 65(2):590-598
With auxiliary information that is well correlated with the primary variable of interest, ratio estimation of the finite population total may be much more efficient than alternative estimators that do not make use of the auxiliary variate. The well-known properties of ratio estimators are perturbed when the auxiliary variate is measured with error. In this contribution we examine the effect of measurement error in the auxiliary variate on the design-based statistical properties of three common ratio estimators. We examine the case of systematic measurement error as well as measurement error that varies according to a fixed distribution. Aside from presenting expressions for the bias and variance of these estimators when they are contaminated with measurement error, we provide numerical results based on a specific population. Under systematic measurement error, the biasing effect is asymmetric around zero, and precision may be improved or degraded depending on the magnitude of the error. Under variable measurement error, the bias of the conventional ratio-of-means estimator increased slightly with increasing error dispersion, but far less than the increased bias of the conventional mean-of-ratios estimator. In similar fashion, the variance of the mean-of-ratios estimator incurs a greater loss of precision with increasing error dispersion compared with the other estimators we examine. Overall, the ratio-of-means estimator appears to be remarkably resistant to the effects of measurement error in the auxiliary variate.
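The contrast between the two conventional estimators under variable measurement error can be sketched with a simulated population (illustrative values only, not the specific population of the paper): the ratio-of-means estimator averages the noisy auxiliary values before dividing, so the errors largely cancel, while the mean-of-ratios estimator divides observation by observation and is biased because the error enters the denominator nonlinearly.

```python
import numpy as np

rng = np.random.default_rng(6)
N, R = 5000, 2.0
x = rng.uniform(5, 15, N)               # true auxiliary variate
y = R * x + rng.normal(0, 1, N)         # variable of interest, y ~ R * x
x_err = x + rng.normal(0, 1.0, N)       # auxiliary measured with variable error

ratio_of_means = y.mean() / x_err.mean()     # errors average out of the denominator
mean_of_ratios = np.mean(y / x_err)          # nonlinear in the error, biased upward here
```

With this setup `ratio_of_means` stays close to the true ratio R = 2, while `mean_of_ratios` overshoots it, consistent with the relative robustness the abstract reports.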

17.
In many observational studies, individuals are measured repeatedly over time, although not necessarily at a set of pre-specified occasions. Instead, individuals may be measured at irregular intervals, with those having a history of poorer health outcomes being measured with somewhat greater frequency and regularity. In this paper, we consider likelihood-based estimation of the regression parameters in marginal models for longitudinal binary data when the follow-up times are not fixed by design, but can depend on previous outcomes. In particular, we consider assumptions regarding the follow-up time process that result in the likelihood function separating into two components: one for the follow-up time process, the other for the outcome measurement process. The practical implication of this separation is that the follow-up time process can be ignored when making likelihood-based inferences about the marginal regression model parameters. That is, maximum likelihood (ML) estimation of the regression parameters relating the probability of success at a given time to covariates does not require that a model for the distribution of follow-up times be specified. However, to obtain consistent parameter estimates, the multinomial distribution for the vector of repeated binary outcomes must be correctly specified. In general, ML estimation requires specification of all higher-order moments, and the likelihood for a marginal model can be intractable except in cases where the number of repeated measurements is relatively small. To circumvent these difficulties, we propose a pseudolikelihood for estimation of the marginal model parameters. The pseudolikelihood uses a linear approximation for the conditional distribution of the response at any occasion, given the history of previous responses. The appeal of this approximation is that the conditional distributions are functions of the first two moments of the binary responses only. When the follow-up times depend only on the previous outcome, the pseudolikelihood requires correct specification of the conditional distribution of the current outcome given the outcome at the previous occasion only. Results from a simulation study and a study of asymptotic bias are presented. Finally, we illustrate the main results using data from a longitudinal observational study that explored the cardiotoxic effects of doxorubicin chemotherapy for the treatment of acute lymphoblastic leukemia in children.

18.
We present a parametric family of regression models for interval-censored event-time (survival) data that accommodates both fixed (e.g., baseline) and time-dependent covariates. The model employs a three-parameter family of survival distributions that includes the Weibull, negative binomial, and log-logistic distributions as special cases, and can be applied to data with left, right, interval, or non-censored event times. Standard methods, such as Newton-Raphson, can be employed to estimate the model, and the resulting estimates have an asymptotically normal distribution about the true values with a covariance matrix that is consistently estimated by the information function. The deviance function is described to assess model fit, and a robust sandwich estimate of the covariance may also be employed to provide asymptotically robust inferences when the model assumptions do not apply. Spline functions may also be employed to allow for non-linear covariates. The model is applied to data from a long-term study of type 1 diabetes to describe the effects of longitudinal measures of glycemia (HbA1c) over time (the time-dependent covariate) on the risk of progression of diabetic retinopathy (eye disease), an interval-censored event-time outcome.

19.
Previously, we showed that in randomised experiments, correction for measurement error in a baseline variable induces bias in the estimated treatment effect, and conversely that ignoring measurement error avoids bias. In observational studies, non-zero baseline covariate differences between treatment groups may be anticipated. Using a graphical approach, we argue intuitively that if baseline differences are large, failing to correct for measurement error leads to a biased estimate of the treatment effect. In contrast, correction eliminates bias if the true and observed baseline differences are equal. If this equality is not satisfied, the corrected estimator is also biased, but typically less so than the uncorrected estimator. Contrasting these findings, we conclude that there must be a threshold for the true baseline difference, above which correction is worthwhile. We derive expressions for the bias of the corrected and uncorrected estimators, as functions of the correlation of the baseline variable with the study outcome, its reliability, the true baseline difference, and the sample sizes. Comparison of these expressions defines a theoretical decision threshold about whether to correct for measurement error. The results show that correction is usually preferred in large studies, and also in small studies with moderate baseline differences. If the group sample sizes are very disparate, correction is less advantageous. If the equivalent balanced sample size is less than about 25 per group, one should correct for measurement error if the true baseline difference is expected to exceed 0.2-0.3 standard deviation units. These results are illustrated with data from a cohort study of atherosclerosis.

20.
Ye, Lin, and Taylor (2008, Biometrics 64, 1238-1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectation of an individual's random effects from the mixed model is included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause a bias due to the problem of informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied for both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.
