Similar Documents (20 results)
1.
Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest may lie in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for this bias, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as the 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing serious challenges for regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake while handling excess zeroes, skewness, and heteroscedasticity simultaneously, without transforming the 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR- and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method.
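The basic mechanics behind this family of corrections can be illustrated with a minimal simulation of classical measurement error and plain regression calibration (not the paper's zero-augmented model); the slope, variances, and the assumption that the reliability ratio is known are all illustrative:

```python
import random
import statistics

random.seed(42)
n = 20000
beta = 2.0                      # true exposure-outcome slope
var_x, var_u = 1.0, 1.0         # true-exposure and error variances (assumed known here)

x = [random.gauss(0, var_x ** 0.5) for _ in range(n)]   # true long-term intake
w = [xi + random.gauss(0, var_u ** 0.5) for xi in x]    # error-prone measure (FFQ-like)
y = [beta * xi + random.gauss(0, 1) for xi in x]        # health outcome

def ols_slope(u, v):
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / sum((a - mu) ** 2 for a in u)

naive = ols_slope(w, y)            # attenuated toward zero by the measurement error
lam = var_x / (var_x + var_u)      # reliability ratio; 0.5 with these variances
x_hat = [lam * wi for wi in w]     # regression calibration: E[X | W] (zero means)
corrected = ols_slope(x_hat, y)    # approximately recovers beta
```

Regressing the outcome on the calibrated values undoes the attenuation: the naive slope lands near beta times the reliability ratio, while the corrected slope lands near beta.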

2.
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges for the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance–mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of the association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of error adjustment is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
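The two-part idea — model the probability of consumption and the consumed amount separately — rests on the decomposition E[intake] = P(consume) × E[amount | consume]. A minimal sketch with simulated zero-inflated, right-skewed 24HR-style data (all parameter values are illustrative, not from EPIC):

```python
import random
import statistics

random.seed(1)
n = 10000
# part 1: whether the food was consumed on the recall day (source of excess zeroes)
consumed = [random.random() < 0.35 for _ in range(n)]
# part 2: right-skewed positive amount, present only when consumed
intake = [random.lognormvariate(1.0, 0.6) if c else 0.0 for c in consumed]

p_hat = statistics.fmean(consumed)           # estimated P(consume)
positives = [a for a in intake if a > 0]
mean_pos = statistics.fmean(positives)       # estimated E[amount | consumed]

# the two fitted parts multiply back to the overall mean intake
two_part_mean = p_hat * mean_pos
overall_mean = statistics.fmean(intake)
```

In the paper's setting each part is a regression on covariates rather than a plain mean, but the same decomposition is what lets the excess zeroes and the skewed positive amounts be modeled on their own terms.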

3.
In nutritional epidemiology, dietary intake assessed with a food frequency questionnaire is prone to measurement error. Ignoring measurement error in covariates causes estimates to be biased and leads to a loss of power. In this paper, we consider an additive error model suited to the characteristics of the European Prospective Investigation into Cancer and Nutrition (EPIC)-InterAct Study data, and derive an approximate maximum likelihood estimator (AMLE) for covariates with measurement error under logistic regression. This method can be regarded as an adjusted version of regression calibration and provides an approximately consistent estimator. Asymptotic normality of this estimator is established under regularity conditions, and simulation studies are conducted to examine the finite-sample performance of the proposed method empirically. We apply the AMLE to handle measurement error in several nutrients of interest in the EPIC-InterAct Study under a sensitivity analysis framework.

4.
Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575–1587) describe a general statistical approach (the National Cancer Institute method) for modeling such food intakes reported on two or more 24-hour recalls (24HRs) and demonstrate its use to estimate the distribution of a food's usual intake in the general population. In this article, we propose an extension of this method to predict individual usual intake of such foods and to evaluate the relationships of usual intakes with health outcomes. Following the regression calibration approach to measurement error correction, individual usual intake is generally predicted as the conditional mean intake given 24HR-reported intake and other covariates in the health model. One feature of the proposed method is that additional covariates potentially related to usual intake may be used to increase the precision of estimates of usual intake and of diet-health outcome associations. Applying the method to data from the Eating at America's Table Study, we quantify the increased precision obtained from including reported frequency of intake on a food frequency questionnaire (FFQ) as a covariate in the calibration model. We then demonstrate the method by evaluating the linear relationship between log blood mercury levels and fish intake in women, using data from the National Health and Nutrition Examination Survey, and show increased precision when including the FFQ information. Finally, we present simulation results evaluating the performance of the proposed method in this context.

5.
Exposure measurement error can result in a biased estimate of the association between an exposure and an outcome. When the exposure–outcome relationship is linear on the appropriate scale (e.g., linear, logistic) and the measurement error is classical, that is, the result of random noise, the result is attenuation of the effect. When the relationship is non-linear, measurement error distorts the true shape of the association. Regression calibration is a commonly used method for correcting for measurement error, in which each individual's unknown true exposure in the outcome regression model is replaced by its expectation conditional on the error-prone measure and any fully measured covariates. Regression calibration is simple to execute when the exposure is untransformed in the linear predictor of the outcome regression model, but less straightforward when non-linear transformations of the exposure are used. We describe a method for applying regression calibration in models in which a non-linear association is modelled by transforming the exposure using a fractional polynomial model. It is shown that taking a Bayesian estimation approach is advantageous. By use of Markov chain Monte Carlo algorithms, one can sample from the distribution of the true exposure for each individual. Transformations of the sampled values can then be performed directly and used to find the expectation of the transformed exposure required for regression calibration. A simulation study shows that the proposed approach performs well. We apply the method to investigate the relationship between usual alcohol intake and subsequent all-cause mortality, using an error model that adjusts for the episodic nature of alcohol consumption.
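Why sampling true-exposure values matters for non-linear terms: plugging the calibrated value E[X|W] into the transformation is not the same as taking E[g(X)|W]. A small check for g(x) = x², under joint normality where the conditional distribution is available in closed form (in the paper it is not, hence MCMC); all numbers are illustrative:

```python
import random
import statistics

random.seed(7)
var_x, var_u = 1.0, 1.0
lam = var_x / (var_x + var_u)        # reliability ratio
cond_var = lam * var_u               # Var(X | W) under joint normality, zero means
w = 1.3                              # one observed error-prone measurement

plug_in = (lam * w) ** 2             # naive: transform the calibrated value
exact = (lam * w) ** 2 + cond_var    # E[X^2 | W] in closed form

# Monte Carlo version: draw X | W, transform the draws, then average —
# this is the step the paper's MCMC sampler makes possible in general models
draws = (random.gauss(lam * w, cond_var ** 0.5) for _ in range(200_000))
mc = statistics.fmean(d * d for d in draws)
```

The plug-in value understates E[X²|W] by exactly the conditional variance, which is the bias that averaging over sampled exposures removes.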

6.
Epidemiological studies of diet and disease rely on the accurate determination of dietary intake and subsequent estimates of nutrient exposure. Although methodically developed and tested, the instruments most often used to collect self-reported intake data are subject to error. It had been assumed that this error was only random in nature; however, an increasing body of literature suggests that systematic error in the reporting of true dietary intake exists as well. Here, we review studies in which dietary intake by self-report was determined while energy expenditure was simultaneously measured using the doubly labeled water (DLW) method. In seeking to establish the relative accuracy of each instrument in capturing true habitual energy intake, we conclude that none of the self-reported intake instruments demonstrates greater accuracy than the others when compared against DLW. Instead, it is evident that the physical and psychological characteristics of study participants play a significant role in the underreporting bias observed in these studies. Further research is needed to identify underreporters and to determine how to account for this bias in studies of diet and health.

7.
This article considers the problem of segmented regression in the presence of covariate measurement error in main study/validation study designs. First, we derive a closed and interpretable form for the full likelihood. We then use the likelihood results to compute the bias of the estimated changepoint when the measurement error is ignored. We find that the direction of the bias in the estimated changepoint is determined by the design distribution of the observed covariates, and the bias can be in either direction. We apply the methodology to data from a nutritional study that investigates the relation between dietary folate and blood serum homocysteine levels, and find that an analysis that ignores covariate measurement error would have indicated a much higher minimum daily dietary folate intake requirement than is obtained in the analysis that takes covariate measurement error into account.

8.
Ko H, Davidian M. Biometrics. 2000;56(2):368–375.
The nonlinear mixed effects model is used to represent data in pharmacokinetics, viral dynamics, and other areas where an objective is to elucidate associations among individual-specific model parameters and covariates; however, covariates may be measured with error. For additive measurement error, we show that substituting mismeasured covariates for true covariates may lead to biased estimators of fixed effects and of random-effects covariance parameters, while regression calibration may eliminate the bias in fixed effects but fail to correct the bias in covariance parameters. We develop methods that take account of measurement error, correct this bias, and may be implemented with standard software, and we demonstrate their utility via simulation and application to data from a study of HIV dynamics.

9.
Recurrent events can be stopped by a terminal event, a situation that commonly occurs in biomedical and clinical studies. In this setting, dependent censoring is encountered because of potential dependence between the two event processes, leading to invalid inference if recurrent events are analyzed alone. The joint frailty model is one of the widely used approaches for jointly modeling the two processes by sharing the same frailty term. One important assumption is that the recurrent and terminal event processes are conditionally independent given the subject-level frailty; this can be violated when the dependency also varies with time-varying covariates across recurrences. Furthermore, the marginal correlation between the two event processes under traditional frailty modeling has no closed-form solution for estimation and a vague interpretation. To fill these gaps, we propose a novel joint frailty–copula approach for modeling recurrent events and a terminal event under relaxed assumptions. A Metropolis–Hastings-within-Gibbs sampler is used for parameter estimation. Extensive simulation studies are conducted to evaluate the efficiency, robustness, and predictive performance of our proposal. The simulation results show that, compared with the joint frailty model, the bias and mean squared error of the proposed approach are smaller when the conditional independence assumption is violated. Finally, we apply our method to a real example extracted from the MarketScan database to study the association between recurrent strokes and mortality.

10.
Objective: To determine the effects of food viscosity on the ability of rats to compensate for calories in a dietary supplement. Research Methods and Procedures: In a series of four experiments, rats consumed dietary supplements equated for caloric and nutritive content but differing in viscosity. Experiments 1 to 3 examined the ability of the rats to compensate for the calories consumed in low- compared with high-viscosity premeals by reducing intake of a subsequent test meal. Caloric compensation was assessed with a wide range of premeal viscosity levels and with two different non-nutritive thickening agents. Experiment 4 assessed the effects of consuming daily a low-viscosity compared with an equicaloric high-viscosity dietary supplement on longer term body weight gain. Results: Consuming a lower viscosity premeal was followed by significantly more caloric intake (i.e., less caloric compensation) compared with consuming premeals with higher viscosity levels. This effect was not specific to one thickening agent. Furthermore, rats given a low-viscosity supplement daily gained significantly more weight over a 10-week period compared with rats given a high-viscosity supplement. Discussion: The results of these experiments suggest that food viscosity may be an important determinant of short-term caloric intake and longer term body weight gain.

11.
Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause bias due to informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied to both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach of Ye et al. (2008). As with the methodology of Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.

12.
Species distribution modelling (SDM) has become an essential method in ecology and conservation. In the absence of survey data, the majority of SDMs are calibrated with opportunistic presence-only data, incurring substantial sampling bias. We address the challenge of correcting for sampling bias in data-sparse situations. We modelled the relative intensity of bat records across their entire range using three modelling algorithms under the point-process modelling framework (GLMs with subset selection, GLMs fitted with an elastic-net penalty, and Maxent). To correct for sampling bias, we applied model-based bias correction by incorporating spatial information on site accessibility or sampling effort. We evaluated the effect of bias correction on the models' predictive performance (AUC and TSS), calculated on spatial-block cross-validation and a holdout data set. When evaluated with independent, but also sampling-biased, test data, correction for sampling bias led to improved predictions. The predictive performance of the three modelling algorithms was very similar: elastic-net models had intermediate performance, with a slight advantage for GLMs on cross-validation and for Maxent on holdout evaluation. Model-based bias correction is very useful in data-sparse situations where detailed data are not available to apply other bias correction methods. However, the success of bias correction depends on how well the selected bias variables describe the sources of bias. In this study, accessibility covariates described bias in our data better than the effort covariate, and their use led to larger changes in predictive performance. Objectively evaluating bias correction requires bias-free presence–absence test data; without them, the real improvement in describing a species' environmental niche cannot be assessed.

13.
Objective: This study describes patterns of bias in self-reported dietary recall data of girls by examining differences among girls classified as under-reporters, plausible reporters, and over-reporters on weight, dietary patterns, and psychosocial characteristics. Research Methods and Procedures: Participants included 176 girls at age 11 and their parents. Girls' weight and height were measured. Three 24-hour dietary recalls and responses to psychosocial measures were collected. Plausibility cut-offs for reported energy intake as a percentage of predicted energy requirements were used to divide the sample into under-reporters, plausible reporters, and over-reporters. Differences among these three groups on dietary and psychosocial variables were assessed to examine possible sources of bias in reporting. Results: Using a ±1 standard deviation cut-off for energy intake plausibility, 50% of the sample was categorized as plausible reporters, 34% as under-reporters, and 16% as over-reporters. Weight status of under-reporters was significantly higher than that of plausible reporters and over-reporters. With respect to reported dietary intake, under-reporters were no different from plausible reporters on intakes of foods with higher nutrient densities and lower energy densities and were significantly lower than plausible reporters on intakes of foods with lower nutrient densities and higher energy densities. Over-reporters reported significantly higher intakes of all food groups and the majority of subgroups, relative to plausible reporters. Under-reporters had significantly higher levels of weight concern and dietary restraint than both plausible reporters and over-reporters. Discussion: Techniques to categorize plausible and implausible reporters can and should be used to provide an improved understanding of the nature of error in children's dietary intake data and to account for this error in analysis and interpretation.

14.
We consider a regression model in which the error term is assumed to follow a type of asymmetric Laplace distribution. We explore its use in the estimation of conditional quantiles of a continuous outcome variable given a set of covariates in the presence of random censoring, where censoring may depend on covariates. Estimation of the regression coefficients is carried out by maximizing a non-differentiable likelihood function. In the scenarios examined in a simulation study, the Laplace estimator showed correct coverage and shorter computation times than the alternative methods, some of which occasionally failed to converge. We illustrate the use of Laplace regression with an application to survival time in patients with small cell lung cancer.
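The link between the asymmetric Laplace likelihood and quantile estimation is that maximizing the former is equivalent to minimizing the quantile ("check" or pinball) loss; at τ = 0.5 the minimizer is the sample median. A minimal, censoring-free sketch of that equivalence (all values simulated):

```python
import random

random.seed(0)
data = [random.expovariate(1.0) for _ in range(501)]   # skewed outcome, odd sample size

def check_loss(q, tau):
    """Pinball loss: the term the asymmetric Laplace log-likelihood penalizes."""
    return sum(tau * (x - q) if x >= q else (tau - 1) * (x - q) for x in data)

tau = 0.5
# the minimizer is always attained at a data point, so search over the sample
best = min(data, key=lambda q: check_loss(q, tau))
sample_median = sorted(data)[len(data) // 2]
```

With an odd sample size the minimizer at τ = 0.5 is exactly the middle order statistic; other values of τ recover the corresponding sample quantiles in the same way.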

15.
We propose a conditional scores procedure for obtaining bias-corrected estimates of log odds ratios from matched case-control data in which one or more covariates are subject to measurement error. The approach involves conditioning on sufficient statistics for the unobservable true covariates, which are treated as fixed unknown parameters. For the case of Gaussian nondifferential measurement error, we derive a set of unbiased score equations that can be solved to estimate the log odds ratio parameters of interest. The procedure successfully removes the bias in naive estimates, and standard error estimates are obtained by resampling methods. We present an example of the procedure applied to data from a matched case-control study of prostate cancer and serum hormone levels, and we compare its performance to that of regression calibration procedures.

16.
Ye W, Lin X, Taylor JM. Biometrics. 2008;64(4):1238–1246.
In this article we investigate regression calibration methods to jointly model longitudinal and survival data using a semiparametric longitudinal model and a proportional hazards model. In the longitudinal model, a biomarker is assumed to follow a semiparametric mixed model in which covariate effects are modeled parametrically and subject-specific time profiles are modeled nonparametrically using a population smoothing spline and subject-specific random stochastic processes. The Cox model is assumed for the survival data, including both the current measure and the rate of change of the underlying longitudinal trajectories as covariates, as motivated by a prostate cancer study application. We develop a two-stage semiparametric regression calibration (RC) method. Two variations of the RC method are considered: risk set regression calibration and a computationally simpler ordinary regression calibration. Simulation results show that the two-stage RC approach performs well in practice and effectively corrects the bias of the naive method. We apply the proposed methods to the analysis of a dataset for evaluating the effects of the longitudinal biomarker PSA on the recurrence of prostate cancer.

17.
Objective: The fat content of a diet has been shown to affect total energy intake, but controlled feeding trials have only compared very-high-fat (40% of total calories) diets with very-low-fat (20% of total calories) diets. This study was designed to measure voluntary food and energy intake accurately over the typical range of dietary fat intake. Methods and Procedures: Twenty-two non-obese subjects were studied for 4 days on each of three diets, which included core foods designed to provide 26%, 34%, and 40% of total calories as fat, respectively, and ad lib buffet foods of similar fat content. All diets were matched for determinants of energy density except dietary fat. Subjects consumed two meals/day in an inpatient unit and were provided the third meal and snack foods while on each diet. All food provided and not eaten was measured by research staff. Results: Voluntary energy intake increased significantly as dietary fat content increased (P = 0.008). On the 26% dietary fat treatment, subjects consumed 23.8% of calories as fat (core and ad lib foods combined) and 2,748 ± 741 kcal/day (mean ± s.d.); at 34% dietary fat, subjects consumed 32.7% fat and 2,983 ± 886 kcal/day; and at 40% dietary fat, subjects consumed 38.1% fat and 3,018 ± 963 kcal/day. Discussion: These results show that energy intake increases as dietary fat content increases across the usual range of dietary fat consumed in the United States. Even small reductions in dietary fat could help lower total energy intake and reduce weight gain in the population.

18.
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts, including linear and nonlinear regression. For survival data with measurement error in covariates, there has been extensive discussion in the literature, with a focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate for the data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation (SIMEX) method. The proposed method enjoys a number of appealing features: its implementation is straightforward and can be accomplished with minor modifications of existing software, and, more importantly, it does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates risks model misspecification and hence invalid inferences. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality of the resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
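The simulation-extrapolation idea can be sketched in a few lines for simple linear regression (not the survival setting of the paper): add extra measurement error at increasing multiples λ of the known error variance, track the naive estimate, and extrapolate back to λ = −1. A quadratic extrapolant, as used here, typically reduces but need not fully remove the bias; all parameter values are illustrative:

```python
import random
import statistics

random.seed(11)
n, beta, var_u = 5000, 2.0, 1.0
x = [random.gauss(0, 1) for _ in range(n)]             # true covariate
w = [xi + random.gauss(0, var_u ** 0.5) for xi in x]   # error-prone version
y = [beta * xi + random.gauss(0, 1) for xi in x]

def ols_slope(u, v):
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / sum((a - mu) ** 2 for a in u)

# simulation step: naive slope under inflated error variance (1 + lam) * var_u
lambdas = [0.0, 0.5, 1.0, 1.5, 2.0]
slopes = []
for lam in lambdas:
    reps = []
    for _ in range(10):                                # pseudo-data sets per lambda
        w_lam = [wi + random.gauss(0, (lam * var_u) ** 0.5) for wi in w]
        reps.append(ols_slope(w_lam, y))
    slopes.append(statistics.fmean(reps))

# extrapolation step: least-squares quadratic slope(lam) ~ a + b*lam + c*lam^2
S = [sum(l ** k for l in lambdas) for k in range(5)]
Sy = [sum(s * l ** k for s, l in zip(slopes, lambdas)) for k in range(3)]
A = [[S[0], S[1], S[2], Sy[0]],
     [S[1], S[2], S[3], Sy[1]],
     [S[2], S[3], S[4], Sy[2]]]
for i in range(3):                                     # Gauss-Jordan on the 3x3 system
    piv = A[i][i]
    A[i] = [v / piv for v in A[i]]
    for j in range(3):
        if j != i:
            A[j] = [vj - A[j][i] * vi for vj, vi in zip(A[j], A[i])]
a, b, c = A[0][3], A[1][3], A[2][3]

naive = slopes[0]
simex = a - b + c                                      # evaluate the fit at lambda = -1
```

The naive slope sits near beta times the reliability ratio; the SIMEX estimate moves back toward beta, with residual bias from the quadratic approximation to the true (rational) attenuation curve.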

19.
The relationship between nutrient consumption and chronic disease risk is the focus of a large number of epidemiological studies in which food frequency questionnaires (FFQs) and food records are commonly used to assess dietary intake. However, these self-assessment tools are known to involve substantial random error for most nutrients, and probably important systematic error as well. Study subject selection in dietary intervention studies is sometimes conducted in two stages. At the first stage, FFQ-measured dietary intakes are observed; at the second stage, another instrument, such as a 4-day food record, is administered only to participants who have fulfilled a prespecified criterion based on the baseline FFQ-measured dietary intake (e.g., only those reporting a percentage of energy intake from fat above a prespecified quantity). Performing the analysis without adjusting for this truncated sample design and for the measurement error in the nutrient consumption assessments will usually produce biased estimates of the population parameters. In this work, we provide a general statistical analysis technique for such data with classical additive measurement error that corrects for both sources of bias. The proposed technique is based on multiple imputation for longitudinal data. Results of a simulation study, along with a sensitivity analysis, are presented, showing the performance of the proposed method under a simple linear regression model.

20.
Chang T, Kott PS. Biometrika. 2008;95(3):555–571.
When we estimate the population total for a survey variable or variables, calibration forces the weighted estimates of certain covariates to match known or alternatively estimated population totals called benchmarks. Calibration can be used to correct for sample-survey nonresponse, or for coverage error resulting from frame undercoverage or unit duplication. The quasi-randomization theory supporting its use in nonresponse adjustment treats response as an additional phase of random sampling. The functional form of a quasi-random response model is assumed to be known, its parameter values estimated implicitly through the creation of calibration weights. Unfortunately, calibration depends upon known benchmark totals, while the covariates in a plausible model for survey response may not be the benchmark covariates. Moreover, it may be prudent to keep the number of covariates in a response model small. We use calibration to adjust for nonresponse when the benchmark and model covariates may differ, provided the number of the former is at least as great as that of the latter. We discuss the estimation of a total for a vector of survey variables that do not include the benchmark covariates, but that may include some of the model covariates. We show how to measure both the additional asymptotic variance due to the nonresponse in a calibration-weighted estimator and the full asymptotic variance of the estimator itself. All variances are determined with respect to the randomization mechanism used to select the sample, the response model generating the subset of sample respondents, or both. Data from the U.S. National Agricultural Statistics Service's 2002 Census of Agriculture and simulations are used to illustrate alternative adjustments for nonresponse. The paper concludes with some remarks about adjustment for coverage error.
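The core calibration-weighting step — adjust design weights so a weighted covariate total hits a known benchmark — has a one-line solution in the linear (GREG-type) case with a single benchmark covariate. A toy sketch with made-up numbers, unrelated to the Census of Agriculture data:

```python
import random

random.seed(3)
n = 200
design_w = [5.0] * n                                   # equal design weights, for simplicity
x = [random.uniform(0.0, 10.0) for _ in range(n)]      # benchmark covariate on respondents
# pretend the known population total exceeds the weighted sample total by 7%
benchmark = 1.07 * sum(dw * xi for dw, xi in zip(design_w, x))

# linear calibration: w_i* = w_i * (1 + lam * x_i), with lam chosen so that
# sum_i w_i* x_i equals the benchmark exactly
swx = sum(dw * xi for dw, xi in zip(design_w, x))
swx2 = sum(dw * xi * xi for dw, xi in zip(design_w, x))
lam = (benchmark - swx) / swx2
cal_w = [dw * (1 + lam * xi) for dw, xi in zip(design_w, x)]

calibrated_total = sum(cw * xi for cw, xi in zip(cal_w, x))   # matches the benchmark
```

With several benchmark covariates the same idea solves a small linear system instead of a scalar equation, and the calibrated weights then double as an implicit fit of the quasi-random response model the abstract describes.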

