Related Articles
1.
Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest may lie in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for this bias, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as a 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing serious challenges for regression calibration modeling. We propose a zero-augmented calibration model that adjusts for measurement error in reported intake while handling excess zeroes, skewness, and heteroscedasticity simultaneously, without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR- and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method.
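The regression-calibration idea described in this abstract can be sketched in a toy simulation. Everything below is illustrative: the sample size, error variances, and the true slope of 2.0 are arbitrary choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
true_intake = rng.gamma(2.0, 2.0, n)                 # long-term intake (hypothetical units)
ffq = true_intake + rng.normal(0.0, 2.0, n)          # FFQ, error-prone
recall_24h = true_intake + rng.normal(0.0, 1.0, n)   # 24HR, unbiased on average
mercury = 2.0 * true_intake + rng.normal(0.0, 1.0, n)  # outcome; true slope = 2.0

# Naive analysis: regress the outcome directly on the FFQ (slope is attenuated)
naive_slope = np.polyfit(ffq, mercury, 1)[0]

# Regression calibration: model E[intake | FFQ] using the 24HR as the response,
# then use the calibrated intake in the outcome model
calib = np.polyfit(ffq, recall_24h, 1)
intake_hat = np.polyval(calib, ffq)
corrected_slope = np.polyfit(intake_hat, mercury, 1)[0]
```

With these variances the naive slope attenuates toward 2 × var(X)/var(W) = 4/3, while the calibrated slope recovers roughly 2.0, mirroring the pattern of estimates reported above.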

2.
The relationship between nutrient consumption and chronic disease risk is the focus of a large number of epidemiological studies where food frequency questionnaires (FFQ) and food records are commonly used to assess dietary intake. However, these self-assessment tools are known to involve substantial random error for most nutrients, and probably important systematic error as well. Study subject selection in dietary intervention studies is sometimes conducted in two stages. At the first stage, FFQ-measured dietary intakes are observed and at the second stage another instrument, such as a 4-day food record, is administered only to participants who have fulfilled a prespecified criterion that is based on the baseline FFQ-measured dietary intake (e.g., only those reporting percent energy intake from fat above a prespecified quantity). Performing analysis without adjusting for this truncated sample design and for the measurement error in the nutrient consumption assessments will usually provide biased estimates for the population parameters. In this work we provide a general statistical analysis technique for such data with the classical additive measurement error that corrects for the two sources of bias. The proposed technique is based on multiple imputation for longitudinal data. Results of a simulation study along with a sensitivity analysis are presented, showing the performance of the proposed method under a simple linear regression model.

3.
Food records, including 24-hour recalls and diet diaries, are considered to provide generally superior measures of long-term dietary intake relative to questionnaire-based methods. Despite the expense of processing food records, they are increasingly used as the main dietary measurement in nutritional epidemiology, in particular in sub-studies nested within prospective cohorts. Food records are, however, subject to excess reports of zero intake. Measurement error is a serious problem in nutritional epidemiology because of the lack of gold standard measurements and results in biased estimated diet-disease associations. In this paper, a 3-part measurement error model, which we call the never and episodic consumers (NEC) model, is outlined for food records. It allows for both real zeros, due to never consumers, and excess zeros, due to episodic consumers (EC). Repeated measurements are required for some study participants to fit the model. Simulation studies are used to compare the results from using the proposed model to correct for measurement error with the results from 3 alternative approaches: a crude approach using the mean of repeated food record measurements as the exposure, a linear regression calibration (RC) approach, and an EC model which does not allow real zeros. The crude approach results in badly attenuated odds ratio estimates, except in the unlikely situation in which a large number of repeat measurements is available for all participants. Where repeat measurements are available for all participants, the 3 correction methods perform equally well. However, when only a subset of the study population has repeat measurements, the NEC model appears to provide the best method for correcting for measurement error, with the 2 alternative correction methods, in particular the linear RC approach, resulting in greater bias and loss of coverage. 
The NEC model is extended to include adjustment for measurements from food frequency questionnaires, enabling better estimation of the proportion of never consumers when the number of repeat measurements is small. The methods are applied to 7-day diary measurements of alcohol intake in the EPIC-Norfolk study.

4.
DON is so far the only mycotoxin of the trichothecene group regulated in food by the German authorities. The quantitative determination of DON in food and feed is therefore of great importance for compliance with legal limits. An important source of measurement error is the bias introduced by differing absolute concentrations of calibration standards. To estimate the contribution of calibration standards to total error, six different DON standards provided by participants of the project “Analysis and occurrence of important Fusarium toxins (DON and ZEA) and uptake of these toxins by the German consumer”, sponsored by the German Ministry of Consumer Protection, Nutrition and Agriculture (BMVEL, Projekt BLE 00HS055), were analyzed and compared with three batches of commercial standard solutions. The spread of the results was between 29.2% and 26.3% for the laboratory standards, and between 11.3% and 3% for the commercial standards, depending on the calculation mode (UV peak height, UV peak area, and fluorescence peak height). A substantial reduction of measurement error therefore seems possible by using commercial standards instead of laboratory-prepared standards.

5.
Sampling experiments were performed to investigate mean square error and bias in estimates of mean shape produced by different geometric morphometric methods. The experiments use the isotropic error model, which assumes equal and independent variation at each landmark. The case of three landmarks in the plane (i.e., triangles) was emphasized because it could be investigated systematically and the results displayed on the printed page. The amount of error in the estimates was displayed as RMSE surfaces over the space of all possible configurations of three landmarks. Patterns of bias were shown as vector fields over this same space. Experiments were also performed using particular combinations of four or more landmarks in both two and three dimensions. It was found that the generalized Procrustes analysis method produced estimates with the least error and no pattern of bias. Averages of Bookstein shape coordinates performed well if the longest edge was used as the baseline. The method of moments (Stoyan, 1990, Model. Biomet. J. 32, 843) used in EDMA (Lele, 1993, Math. Geol. 25, 573) exhibits larger errors. When variation is not small, it also shows a pattern of bias for isosceles triangles with one side much shorter than the other two and for triangles whose vertices are approximately collinear, causing them to resemble their own reflections. Similar problems were found for the log-distance method of Rao and Suryawanshi (1996, Proc. Nat. Acad. Sci. 95, 4121). These results and their implications for the application of different geometric morphometric methods are discussed.

6.
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. However, particularly for reversible kinetics, they may lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan’s graphical method either over- or underestimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C]benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70 min scan duration are at least as good as those obtained using existing methods for data acquired over a 90 min scan duration.
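For intuition, the Logan transform can be reproduced in a minimal one-tissue simulation, where the plot is exactly linear and the late-time slope recovers the distribution volume V_T = K1/k2. All rate constants and the plasma input function below are hypothetical, and this sketch does not reproduce the two-tissue bias analysis of the paper.

```python
import numpy as np

K1, k2 = 0.3, 0.1                     # hypothetical rate constants (1/min)
t = np.linspace(0.0, 90.0, 901)       # 90-min scan, 0.1-min grid
dt = t[1] - t[0]
Cp = t * np.exp(-t / 10.0)            # assumed plasma input function

# Euler integration of the one-tissue model dCt/dt = K1*Cp - k2*Ct
Ct = np.zeros_like(t)
for i in range(1, t.size):
    Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])

# Logan transform: y = int(Ct)/Ct versus x = int(Cp)/Ct; slope estimates V_T
int_Ct = np.cumsum(Ct) * dt
int_Cp = np.cumsum(Cp) * dt
late = t > 40.0                       # fit only the late, linear portion
y = int_Ct[late] / Ct[late]
x = int_Cp[late] / Ct[late]
logan_slope = np.polyfit(x, y, 1)[0]  # close to K1/k2 = 3.0
```

In the one-tissue case the plot is linear at all times, so the slope is essentially exact; the paper's point is that for two-tissue reversible kinetics an apparently linear Logan plot can still hide a model-dependent bias.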

7.
In a meta-analysis of randomized trials of the effects of dietary sodium interventions on blood pressure, we found substantial heterogeneity among the studies. We were interested in evaluating whether measurement error, known to be a problem for dietary sodium measures, publication bias, or confounding factors could be responsible for the heterogeneity. A measurement error correction was developed that corrects both the slope and the intercept and takes into account the sample size of each study and the number of measurements taken on an individual. The measurement error correction had a minimal effect on the estimates, although it performed well in simulated data. A smoothed scatter plot was used to assess publication bias. Metaregressions provide a convenient way to jointly assess the effects of several factors, but care must be taken to fit an appropriate model.

8.

Background

One method of identifying cis regulatory differences is to analyze allele-specific expression (ASE) and identify cases of allelic imbalance (AI). RNA-seq is the most common way to measure ASE, and a binomial test is often applied to determine the statistical significance of AI. This implicitly assumes that there is no bias in the estimation of AI. However, bias has been found to result from multiple factors, including genome ambiguity, reference quality, the mapping algorithm, and biases in the sequencing process. Two alternative approaches have been developed to handle bias: adjusting for bias using a statistical model and filtering regions of the genome suspected of harboring bias. Existing statistical models that account for bias rely on information from DNA controls, which can be cost-prohibitive for large intraspecific studies. In contrast, data filtering is inexpensive and straightforward, but necessarily involves sacrificing a portion of the data.
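The binomial test mentioned above is easy to sketch, along with the effect of testing against a bias-shifted null. The read counts and the mapping-bias estimate of 0.55 below are hypothetical illustrations, not values from the study.

```python
from math import comb

def binom_two_sided(x, n, p=0.5):
    """Exact two-sided binomial test: sum P(X=k) over all k at least as extreme
    (i.e., with point probability <= P(X=x))."""
    probs = [comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(n + 1)]
    px = probs[x]
    return sum(q for q in probs if q <= px + 1e-12)

# 60 reference-allele reads out of 100 at a heterozygous site
p_balanced = binom_two_sided(60, 100)          # test against p = 0.5
# If simulations suggest the mapper favors the reference allele (say p0 = 0.55,
# a hypothetical bias estimate), testing against p0 is far less significant
p_bias_adj = binom_two_sided(60, 100, p=0.55)
```

The same count that looks marginally significant under the no-bias null is unremarkable once a modest mapping bias is built into the null, which is the inflated type I error the abstract describes.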

Results

Here we propose a flexible Bayesian model for analysis of AI, which accounts for bias and can be implemented without DNA controls. In lieu of DNA controls, this Poisson-Gamma (PG) model uses an estimate of bias from simulations. The proposed model always has a lower type I error rate compared to the binomial test. Consistent with prior studies, bias dramatically affects the type I error rate. All of the tested models are sensitive to misspecification of bias. The closer the estimate of bias is to the true underlying bias, the lower the type I error rate. Correct estimates of bias result in a level alpha test.

Conclusions

To improve the assessment of AI, some forms of systematic error (e.g., map bias) can be identified using simulation. The resulting estimates of bias can be used to correct for bias in the PG model, without data filtering. Other sources of bias (e.g., unidentified variant calls) can be easily captured by DNA controls, but are missed by common filtering approaches. Consequently, as variant identification improves, the need for DNA controls will be reduced. Filtering does not significantly improve performance and is not recommended, as information is sacrificed without a measurable gain. The PG model developed here performs well when bias is known, or slightly misspecified. The model is flexible and can accommodate differences in experimental design and bias estimation.

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2164-15-920) contains supplementary material, which is available to authorized users.

9.
Greene WF, Cai J. Biometrics, 2004, 60(4): 987-996
We consider measurement error in covariates in the marginal hazards model for multivariate failure time data. We explore the bias implications of normal additive measurement error without assuming a distribution for the underlying true covariate. To correct measurement-error-induced bias in the regression coefficient of the marginal model, we propose to apply the SIMEX procedure and demonstrate its large and small sample properties for both known and estimated measurement error variance. We illustrate this method using the Lipid Research Clinics Coronary Primary Prevention Trial data with total cholesterol as the covariate measured with error and time until angina and time until nonfatal myocardial infarction as the correlated outcomes of interest.
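SIMEX itself is simple to sketch: repeatedly add extra measurement error scaled by a factor λ, watch how the naive estimate degrades, and extrapolate the trend back to λ = −1 (no error). The simulation below uses a plain linear model with arbitrary parameter values and a quadratic extrapolant; it is not the hazards-model implementation of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma_u = 5000, 1.0
x = rng.normal(0.0, 1.0, n)               # true covariate
w = x + rng.normal(0.0, sigma_u, n)       # error-prone measurement, variance known
y = x + rng.normal(0.0, 0.5, n)           # outcome; true slope = 1

lambdas = [0.0, 0.5, 1.0, 1.5, 2.0]
est = []
for lam in lambdas:
    # average the naive slope over 50 remeasurements with added noise of
    # variance lam * sigma_u**2
    slopes = [np.polyfit(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
              for _ in range(50)]
    est.append(np.mean(slopes))

naive = est[0]                            # attenuated toward 1/(1 + sigma_u**2) = 0.5
# Fit a quadratic in lambda and extrapolate to lambda = -1
simex = np.polyval(np.polyfit(lambdas, est, 2), -1.0)
```

The quadratic extrapolant recovers much, though not all, of the attenuation (the exact extrapolant here is rational, not quadratic), which is the usual approximate-but-effective character of SIMEX.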

10.
Although much criticized, diversity indices are still widely used in animal and plant ecology to evaluate, survey, and conserve ecosystems. It is possible to quantify biodiversity by using estimators whose statistical characteristics and performance are, as yet, poorly defined. In the present study, four of the most frequently used diversity indices were compared: the Shannon index, the Simpson index, the Camargo evenness index, and the Pielou regularity index. Comparisons were performed by simulating the Zipf–Mandelbrot parametric model and estimating three statistics of these indices, i.e., the relative bias, the coefficient of variation, and the relative root-mean-squared error. Analysis of variance was used to determine which of the factors contributed most to the observed variation in the four diversity estimators: abundance distribution model or sample size. The results revealed that the Camargo evenness index tends to show a high bias and a large relative root-mean-squared error, whereas the Simpson index is least biased and the Shannon index shows a smaller relative root-mean-squared error, regardless of the abundance distribution model used and even when sample size is small. Shannon and Pielou estimators are sensitive to changes in species abundance pattern and present a nonnegligible bias for small sample sizes (<1000 individuals). Received: May 8, 1998 / Accepted: May 6, 1999
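The four indices compared in the study can be computed directly from relative abundances. The Camargo formula below follows the common definition based on mean pairwise abundance differences, which may differ in detail from the exact estimator the authors used; the two example communities are hypothetical.

```python
import numpy as np

def diversity(counts):
    """Return Shannon H', Simpson diversity (1 - D), Pielou J', and
    Camargo evenness for a vector of species counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                        # drop absent species
    S = p.size
    shannon = -np.sum(p * np.log(p))
    simpson = 1.0 - np.sum(p ** 2)
    pielou = shannon / np.log(S)        # H' scaled by its maximum ln(S)
    camargo = 1.0 - sum(abs(p[i] - p[j])
                        for i in range(S) for j in range(i + 1, S)) / S
    return shannon, simpson, pielou, camargo

h1, d1, j1, e1 = diversity([25, 25, 25, 25])   # perfectly even community
h2, d2, j2, e2 = diversity([85, 5, 5, 5])      # strongly skewed community
```

For the even community every index reaches its maximum (Pielou and Camargo equal 1), and all four drop for the skewed community, which is the behavior the simulation study stresses.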

11.
Gustafson P, Le Nhu D. Biometrics, 2002, 58(4): 878-887
It is well known that imprecision in the measurement of predictor variables typically leads to bias in estimated regression coefficients. We compare the bias induced by measurement error in a continuous predictor with that induced by misclassification of a binary predictor in the contexts of linear and logistic regression. To make the comparison fair, we consider misclassification probabilities for a binary predictor that correspond to dichotomizing an imprecise continuous predictor in lieu of its precise counterpart. On this basis, nondifferential binary misclassification is seen to yield more bias than nondifferential continuous measurement error. However, it is known that differential misclassification results if a binary predictor is actually formed by dichotomizing a continuous predictor subject to nondifferential measurement error. When the postulated model linking the response and precise continuous predictor is correct, this differential misclassification is found to yield less bias than continuous measurement error, in contrast with nondifferential misclassification, i.e., dichotomization reduces the bias due to mismeasurement. This finding, however, is sensitive to the form of the underlying relationship between the response and the continuous predictor. In particular, we give a scenario where dichotomization involves a trade-off between model fit and misclassification bias. We also examine how the bias depends on the choice of threshold in the dichotomization process and on the correlation between the imprecise predictor and a second precise predictor.
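The central contrast can be illustrated by simulation: under a linear model, regressing on the error-prone continuous predictor attenuates the slope by var(X)/var(W), while a median split of the same error-prone predictor retains a larger fraction of the true group difference. All parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.normal(0.0, 1.0, n)          # precise continuous predictor
w = x + rng.normal(0.0, 1.0, n)      # nondifferential measurement error
y = x + rng.normal(0.0, 1.0, n)      # linear response, true slope = 1

# Continuous case: slope attenuated by var(x)/var(w) = 0.5
slope_w = np.polyfit(w, y, 1)[0]

# Binary case: median splits of the precise and the imprecise predictor;
# compare the group differences in mean response
d_precise = y[x > 0].mean() - y[x <= 0].mean()
d_imprecise = y[w > 0].mean() - y[w <= 0].mean()
ratio = d_imprecise / d_precise      # retained fraction after dichotomization
```

Here the dichotomized error-prone predictor retains about 1/sqrt(2) ≈ 0.71 of the effect versus 0.5 for the continuous regression, matching the abstract's point that this particular differential misclassification biases less than continuous measurement error when the linear model is correct.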

12.
The accuracy of predicting rumen methane production with a volatile fatty acid (VFA) stoichiometric model [CH4 = 0.5Ace - 0.25Pro + 0.5But - 0.25Val, Model 1, where CH4, Ace, Pro, But, and Val denote the yields of methane, acetate, propionate, butyrate, and valerate, respectively] was evaluated. Ten common feed ingredients with significantly different chemical compositions (4 concentrates and 6 roughages) were fermented in vitro to simulate ruminal fermentation, and VFA composition and CH4 production were measured after 72 h of fermentation. Model precision analysis was used to compare predicted and observed CH4 production. The results showed that Model 1 generally overestimated CH4 production; its bias, slope, and random error components were 62.6%, 11.7%, and 25.7%, respectively, with fixed (systematic) error exceeding 70%. Considering that only 80% of the hydrogen generated by VFA metabolism is used for CH4 synthesis, the stoichiometric model was rewritten as Model 2 [CH4 = 0.8(0.5Ace - 0.25Pro + 0.5But - 0.25Val)]. Compared with Model 1 (mean square prediction error = 0.60), the predictive accuracy of Model 2 was greatly improved (mean square prediction error = 0.18), with bias, slope, and random error components of 2.1%, 5.7%, and 92.3%, respectively, and fixed error below 10%. Model 1 assumes that all hydrogen produced during VFA formation is used by methanogens to synthesize CH4 and ignores other hydrogen sinks, which is an important reason why its predictions exceed the observed values.
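The two stoichiometric models above (Model 1 and its 0.8-scaled Model 2) are simple linear combinations of the molar VFA yields. The example VFA profile below is hypothetical, chosen only to show the 20% downward adjustment.

```python
def ch4_model1(ace, pro, but, val):
    """Model 1: CH4 = 0.5*Ace - 0.25*Pro + 0.5*But - 0.25*Val
    (all quantities in the same molar units)."""
    return 0.5 * ace - 0.25 * pro + 0.5 * but - 0.25 * val

def ch4_model2(ace, pro, but, val):
    """Model 2: only 80% of metabolic hydrogen goes to methanogenesis."""
    return 0.8 * ch4_model1(ace, pro, but, val)

# Hypothetical VFA profile (mmol): acetate 60, propionate 20, butyrate 10, valerate 2
m1 = ch4_model1(60.0, 20.0, 10.0, 2.0)   # 29.5
m2 = ch4_model2(60.0, 20.0, 10.0, 2.0)   # 23.6
```

The 0.8 factor leaves the relative weighting of the VFAs unchanged and simply scales the prediction, which is why it removes the systematic overestimation without affecting the model's shape.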

13.
Ko H, Davidian M. Biometrics, 2000, 56(2): 368-375
The nonlinear mixed effects model is used to represent data in pharmacokinetics, viral dynamics, and other areas where an objective is to elucidate associations among individual-specific model parameters and covariates; however, covariates may be measured with error. For additive measurement error, we show substitution of mismeasured covariates for true covariates may lead to biased estimators for fixed effects and random effects covariance parameters, while regression calibration may eliminate bias in fixed effects but fail to correct that in covariance parameters. We develop methods to take account of measurement error that correct this bias and may be implemented with standard software, and we demonstrate their utility via simulation and application to data from a study of HIV dynamics.

14.
The warning signals of toxic insects are often 'multimodal', combining bright coloration with sounds or odours (or both). Pyrazine (a common insect warning odour) can elicit an intrinsic avoidance in domestic chicks Gallus gallus domesticus, both against novel coloured food, and also against food colours that are specifically associated with aposematism, namely yellow and red. In three experiments, we investigated the role of novelty in this innate bias against yellow coloured food in the presence of pyrazine. Naive chicks were familiarized either to pyrazine odour or to coloured food before being tested for a bias against yellow (warningly coloured) food as opposed to green (nonwarningly coloured) food. In experiment 1, pyrazine novelty was shown to be vital for eliciting a bias against yellow food. However, experiment 2 suggested that colour novelty was not important: chicks familiarized with coloured crumbs still avoided yellow crumbs when pyrazine was presented. In a third experiment that gave chicks an even greater degree of pre-exposure to coloured crumbs, the bias against yellow food eventually waned, although pyrazine continued to elicit an aversion to yellow even after birds had had experience of up to 24 palatable yellow crumbs. Pyrazine novelty has been an important pressure in the evolution of multimodal warning signals, and can continue to promote the avoidance of warningly coloured food, even when it is relatively familiar. The implications for warning signals are discussed. Copyright 1999 The Association for the Study of Animal Behaviour.

15.
16.
Aim Public land survey records are commonly used to reconstruct historical forest structure over large landscapes. Reconstruction studies have been criticized for using absolute measures of forest attributes, such as density and basal area, because of potential selection bias by surveyors and unknown measurement error. Current methods to identify bias are based upon statistical techniques whose assumptions may be violated for survey data. Our goals were to identify and directly estimate common sources of bias and error, and to test the accuracy of statistical methods to identify them. Location Forests in the western USA: Mogollon Plateau, Arizona; Blue Mountains, Oregon; Front Range, Colorado. Methods We quantified both selection bias and measurement error for survey data in three ponderosa pine landscapes by directly comparing measurements of bearing trees in survey notes with remeasurements of bearing trees at survey corners (384 corners and 812 trees evaluated). Results Selection bias was low in all areas and there was little variability among surveyors. Surveyors selected the closest tree to the corner 95% to 98% of the time, and hence bias may have limited impacts on reconstruction studies. Bourdo’s methods were able to successfully detect presence or absence of bias most of the time, but do not measure the rate of bias. Recording and omission errors were common but highly variable among surveyors. Measurements for bearing trees made by surveyors were generally accurate. Most bearings were less than 5° in error and most distances were within 5% of our remeasurements. Many, but not all, surveyors in the western USA probably estimated diameter of bearing trees at stump height (0.3 m). These estimates deviated from reconstructed diameters by a mean absolute error of 7.0 to 10.6 cm. Main conclusions Direct comparison of survey data at relocated corners is the only method that can determine if bias and error are meaningful. 
Data from relocated trees show that biased selection of trees is not likely to be an important source of error. Many surveyor errors would have no impact on reconstruction studies, but omission errors have the potential to have a large impact on results. We suggest how to reduce potential errors through data screening.

17.
Adaptation kinetics in bacterial chemotaxis.
Cells of Escherichia coli, tethered to glass by a single flagellum, were subjected to constant flow of a medium containing the attractant alpha-methyl-DL-aspartate. The concentration of this chemical was varied with a programmable mixing apparatus over a range spanning the dissociation constant of the chemoreceptor at rates comparable to those experienced by cells swimming in spatial gradients. When an exponentially increasing ramp was turned on (a ramp that increases the chemoreceptor occupancy linearly), the rotational bias of the cells (the fraction of time spent spinning counterclockwise) changed rapidly to a higher stable level, which persisted for the duration of the ramp. The change in bias increased with ramp rate, i.e., with the time rate of change of chemoreceptor occupancy. This behavior can be accounted for by a model for adaptation involving proportional control, in which the flagellar motors respond to an error signal proportional to the difference between the current occupancy and the occupancy averaged over the recent past. Distributions of clockwise and counterclockwise rotation intervals were found to be exponential. This result cannot be explained by a response regulator model in which transitions between rotational states are generated by threshold crossings of a regulator subject to statistical fluctuation; this mechanism generates distributions with far too many long events. However, the data can be fit by a model in which transitions between rotational states are governed by first-order rate constants. The error signal acts as a bias regulator, controlling the values of these constants.
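The proportional-control model described above can be sketched as a running-average filter: the error signal is the difference between current receptor occupancy and its recent average, so the bias rises to a plateau for as long as occupancy keeps increasing. The time constant, gain, and ramp rate below are illustrative, not fitted values.

```python
import numpy as np

dt, tau, gain = 0.1, 10.0, 3.0        # step (s), averaging time constant, gain
t = np.arange(0.0, 100.0, dt)
occ = 0.01 * t                        # occupancy rising linearly during the ramp

# Running average of past occupancy (first-order low-pass filter)
avg = np.zeros_like(occ)
for i in range(1, t.size):
    avg[i] = avg[i - 1] + (dt / tau) * (occ[i - 1] - avg[i - 1])

error = occ - avg                     # error signal: current minus recent average
bias = np.clip(0.5 + gain * error, 0.0, 1.0)   # CCW bias, baseline 0.5
```

During a steady ramp the error settles near (ramp rate) × tau = 0.1, so the bias holds at an elevated plateau (about 0.8 with these numbers) and would relax back to baseline once the ramp stops, which is the step-to-stable-level behavior reported in the abstract.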

18.
The aim of this study is to test whether projection bias exists in consumers' purchasing decisions for food products. To achieve this aim, we used a non-hypothetical experiment (an experimental auction) in which hungry and non-hungry participants were incentivized to reveal their willingness to pay (WTP). The results confirm the existence of projection bias when consumers make decisions about food products. In particular, projection bias existed because currently hungry participants were willing to pay a higher price premium for cheeses than satiated ones, in both hungry and satiated future states. Moreover, participants overvalued the food product more when it was to be delivered in the future hungry condition than in the satiated one. Our study provides clear, quantitative, and meaningful evidence of projection bias because our findings are based on economic valuation of food preferences. Indeed, a strength of this study is that the findings are expressed in terms of willingness to pay, an interpretable amount of money.

19.
The sensory bias model for the evolution of mating preferences states that mating preferences evolve as correlated responses to selection on nonmating behaviors sharing a common sensory system. The critical assumption is that pleiotropy creates genetic correlations that affect the response to selection. I simulated selection on populations of neural networks to test this. First, I selected for various combinations of foraging and mating preferences. Sensory bias predicts that populations with preferences for like-colored objects (red food and red mates) should evolve more readily than preferences for differently colored objects (red food and blue mates). Here, I found no evidence for sensory bias. The responses to selection on foraging and mating preferences were independent of one another. Second, I selected on foraging preferences alone and asked whether there were correlated responses for increased mating preferences for like-colored mates. Here, I found modest evidence for sensory bias. Selection for a particular foraging preference resulted in increased mating preference for similarly colored mates. However, the correlated responses were small and inconsistent. Selection on foraging preferences alone may affect initial levels of mating preferences, but these correlations did not constrain the joint evolution of foraging and mating preferences in these simulations.

20.
Physiologically based pharmacokinetic (PBPK) modeling has been used extensively to study the factors that affect drug absorption, distribution, metabolism, and excretion in humans. Compound A (CPD A) is a BCS Class II drug widely used clinically as a lipid-lowering agent; when administered orally after food, it displays a positive food effect in humans. In this study, a PBPK model was built to mechanistically investigate the food effect of the CPD A tablet. Using GastroPlus™ software, the PBPK model, which incorporates the ACAT absorption model, accurately predicted the food effect, with predicted values within a 2-fold error of the observed results. The model mechanistically explained the changes in pharmacokinetic parameters underlying the compound's positive food effect in humans. These results show that PBPK modeling can serve as a potential tool to predict the food effect of certain oral drugs.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号