Found 20 similar documents.
1.
Luis E. Castañeda Gemma Calabria Luz A. Betancourt Enrico L. Rezende Mauro Santos 《Journal of thermal biology》2012
Biological measurements inherently involve some measurement error (ME), which is a major concern because measurement accuracy (how closely a measurement reproduces the true value of the attribute being measured) and statistical power steadily decrease with increasing ME. However, ME has been largely overlooked in the thermal biology literature, which can be explained by the fact that thermotolerance estimates often involve the collapse or death of the tested individuals and measurements cannot be repeated in vivo with the same specimen under identical conditions. Here we assess inter- and intra-researcher (test-retest) reliability of heat tolerance measured as knockdown time from digital recordings of Drosophila subobscura flies individually assayed in vials with a dynamic method. We provide a summary of various estimators used to compute measurement reliability (the degree to which the measurement is affected by ME) together with their statistical properties. Our results indicate that the estimation of heat knockdown time has poor reliability: intra-researcher ME=29% and inter-researcher ME=34%. This difference is attributed to lack of ‘accurateness’ (the difference in the marginal distributions of the measurements taken by the two researchers) because measurement imprecision was essentially the same in both estimates (27%). In view of our results we conclude that thermal biologists should report the reliability of thermotolerance estimates and, when necessary, adopt some straightforward guidelines suggested here to improve measurement reliability.
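The reliability estimators this abstract refers to are variants of the intraclass correlation coefficient (ICC). As a rough illustration, with hypothetical knockdown times rather than the paper's data, a one-way ICC can be computed directly from test-retest pairs; the measurement-error share is then 1 − ICC:

```python
# Sketch of test-retest reliability as an intraclass correlation (ICC),
# one-way random-effects form. The knockdown times below are hypothetical,
# and the paper's estimators may differ in detail.

def icc_oneway(pairs):
    """ICC(1,1): between-fly variance over total variance, from one-way ANOVA."""
    n, k = len(pairs), 2                      # n flies, k = 2 repeated trials
    grand = sum(a + b for a, b in pairs) / (n * k)
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    msw = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
              for a, b in pairs) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# hypothetical heat-knockdown times (min), each fly measured twice
pairs = [(12.1, 12.4), (15.0, 14.2), (9.8, 10.5),
         (13.3, 13.0), (11.2, 11.9), (16.4, 15.7)]
icc = icc_oneway(pairs)
me_percent = 100.0 * (1.0 - icc)   # measurement-error share
```

Under this convention, the paper's reported intra-researcher ME = 29% would correspond roughly to an ICC of 0.71.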
2.
Michael Root 《Biology & philosophy》2009,24(3):375-385
In the United States, the racial and ethnic statistics published by the National Center for Health Statistics (NCHS) assume that each member of the U.S. population has a race and ethnicity and that if a member is black or white with respect to his risk of one disease, he is the same race with respect to his risk of another. Such an assumption is mistaken. Race and ethnicity are taken by the NCHS to be an intrinsic property of members of a population, when they should be taken to depend on interest. The actual or underlying race or ethnicity of members of a population depends on the risk whose variation within the population we wish to describe or explain.
3.
Tosteson TD Buonaccorsi JP Demidenko E Wells WA 《Biometrical journal. Biometrische Zeitschrift》2005,47(4):409-416
Measurement error in a continuous test variable may bias estimates of the summary properties of receiver operating characteristics (ROC) curves. Typically, unbiased measurement error will reduce the diagnostic potential of a continuous test variable. This paper explores the effects of possibly heterogeneous measurement error on estimated ROC curves for binormal test variables. Corrected estimators for specific points on the curve are derived under the assumption of known or estimated measurement variances for individual test results. These estimators and associated confidence intervals do not depend on normal assumptions for the distribution of the measurement error and are shown to be approximately unbiased for moderate size samples in a simulation study. An application from a study of emerging imaging modalities in breast cancer is used to demonstrate the new techniques.
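The attenuation effect described here is easy to reproduce. The sketch below is a simplified, hypothetical version of the idea: homogeneous, known error variance (unlike the paper's possibly heterogeneous setting) and a plain method-of-moments correction rather than the paper's estimators. The naive binormal AUC computed from error-prone measurements is attenuated; subtracting the known error variance from the group variances corrects it:

```python
import math
import random

random.seed(5)

def auc_binormal(m0, v0, m1, v1):
    """Binormal AUC = Phi((m1 - m0) / sqrt(v0 + v1))."""
    z = (m1 - m0) / math.sqrt(v0 + v1)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n, var_e = 20000, 0.5                     # var_e: known measurement-error variance
sd_e = math.sqrt(var_e)
healthy  = [random.gauss(0.0, 1.0) + random.gauss(0.0, sd_e) for _ in range(n)]
diseased = [random.gauss(1.2, 1.0) + random.gauss(0.0, sd_e) for _ in range(n)]

m0, v0 = mean_var(healthy)
m1, v1 = mean_var(diseased)
auc_naive = auc_binormal(m0, v0, m1, v1)                  # attenuated by error
auc_corr  = auc_binormal(m0, v0 - var_e, m1, v1 - var_e)  # error variance removed
auc_true  = auc_binormal(0.0, 1.0, 1.2, 1.0)              # error-free benchmark
```

With these settings the naive AUC sits visibly below the error-free value, and the corrected estimate recovers it.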
4.
Effects of measurement error on model parameter estimates and a comparison of parameter-estimation methods
Based on the model V = aD^b, a Matlab simulation experiment was first used to study how measurement error affects parameter estimation. The results show that when the error in V is held fixed and the error in D grows, ordinary least-squares estimation yields values of a that steadily increase and values of b that steadily decrease, so the estimates drift ever farther from the true parameter values as the measurement error in D increases. Methods that remove the influence of measurement error were then studied: regression calibration, simulation extrapolation (SIMEX), and a measurement-error-model approach were each applied to data in which both V and D are measured with error. All three yield unbiased parameter estimates, overcoming the systematic bias of ordinary least squares, and the results further show that the measurement-error-model approach outperforms regression calibration and SIMEX.
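The attenuation this abstract describes can be reproduced in miniature. The sketch below uses plain Python rather than Matlab, and multiplicative error in D so that diameters stay positive (my assumption, not the paper's setup): fitting V = aD^b by ordinary least squares on the log scale shows b shrinking and a inflating once D is error-prone:

```python
import math
import random

random.seed(0)
a_true, b_true, n = 0.5, 2.0, 5000

def fit_power(Ds, Vs):
    """OLS fit of ln V = ln a + b ln D; returns (a_hat, b_hat)."""
    xs = [math.log(d) for d in Ds]
    ys = [math.log(v) for v in Vs]
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - b * mx), b

D = [random.uniform(10.0, 40.0) for _ in range(n)]             # true diameters
V = [a_true * d ** b_true * math.exp(random.gauss(0.0, 0.05)) for d in D]

# multiplicative measurement error in D (keeps diameters positive)
D_err = [d * math.exp(random.gauss(0.0, 0.15)) for d in D]

a_clean, b_clean = fit_power(D, V)       # near the true (0.5, 2.0)
a_noisy, b_noisy = fit_power(D_err, V)   # a inflated, b attenuated
```

The direction of the bias matches the abstract: a-hat rises and b-hat falls as error is added to D.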
5.
Charles J. Utermohle Stephen L. Zegura 《American journal of physical anthropology》1982,57(3):303-310
This study investigates intra- and interobserver measurement error in craniometry. Data consists of 72 craniometric measurements taken on a series of 28 Sadlermuit Eskimo crania. Utermohle measured the series twice; Zegura measured it once. Statistical procedures used to demonstrate measurement imprecision include the mean difference, the method error statistic, two-way anova without replication, the t-test for paired comparisons, Fisher's distribution-free sign test, and the t-test for independent samples. The results indicate less intraobserver repeatability than expected as well as an alarming lack of interobserver reproducibility for many of these craniometric measurements. We hope these results will serve as a caution against the widespread belief that craniometric measurements are always produced with a high degree of precision by experienced craniometrists. In addition, these results suggest that investigators employing craniometric measurements to study population affinities, functional morphology, forensics, fossil primates, and human microevolution might profit from conducting a measurement error analysis as an important baseline for the interpretation of the biological significance of their results. 相似文献
6.
Evaluation of a two‐part regression calibration to adjust for dietary exposure measurement error in the Cox proportional hazards model: A simulation study
George O. Agogo Hilko van der Voet Pieter van't Veer Fred A. van Eeuwijk Hendriek C. Boshuizen 《Biometrical journal. Biometrische Zeitschrift》2016,58(4):766-782
Dietary questionnaires are prone to measurement error, which biases the perceived association between dietary intake and risk of disease. Short‐term measurements are required to adjust for the bias in the association. For foods that are not consumed daily, the short‐term measurements are often characterized by excess zeroes. Via a simulation study, the performance of a two‐part calibration model that was developed for a single‐replicate study design was assessed by mimicking leafy vegetable intake reports from the multicenter European Prospective Investigation into Cancer and Nutrition (EPIC) study. In part I of the fitted two‐part calibration model, a logistic distribution was assumed; in part II, a gamma distribution was assumed. The model was assessed with respect to the magnitude of the correlation between the consumption probability and the consumed amount (hereafter, cross‐part correlation), the number and form of covariates in the calibration model, the percentage of zero response values, and the magnitude of the measurement error in the dietary intake. From the simulation study results, transforming the dietary variable in the regression calibration to an appropriate scale was found to be the most important factor for the model performance. Reducing the number of covariates in the model could be beneficial, but was not critical in large‐sample studies. The performance was remarkably robust when fitting a one‐part rather than a two‐part model. The model performance was minimally affected by the cross‐part correlation.
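The two-part model evaluated here extends ordinary regression calibration to zero-inflated intakes. A minimal sketch of the underlying single-part idea, on simulated normal data with the reliability ratio assumed known (far simpler than the EPIC setting): replace the error-prone report by its calibrated expectation E[X | W] before fitting the outcome model:

```python
import random

random.seed(3)
n = 5000
var_x, var_u = 1.0, 0.5          # true-intake and error variances (assumed known)

x = [random.gauss(0.0, var_x ** 0.5) for _ in range(n)]   # true intake
w = [xi + random.gauss(0.0, var_u ** 0.5) for xi in x]    # error-prone report
y = [1.5 * xi + random.gauss(0.0, 0.5) for xi in x]       # outcome, true slope 1.5

def slope(xs, ys):
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

naive = slope(w, y)                      # attenuated toward zero
lam = var_x / (var_x + var_u)            # reliability ratio
x_cal = [lam * wi for wi in w]           # E[X | W] under joint normality
calibrated = slope(x_cal, y)             # approximately unbiased for 1.5
```

The two-part version replaces the single normal calibration equation with a logistic model for consumption probability and a gamma model for the consumed amount, as the abstract describes.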
7.
We consider measurement error in covariates in the marginal hazards model for multivariate failure time data. We explore the bias implications of normal additive measurement error without assuming a distribution for the underlying true covariate. To correct measurement-error-induced bias in the regression coefficient of the marginal model, we propose to apply the SIMEX procedure and demonstrate its large and small sample properties for both known and estimated measurement error variance. We illustrate this method using the Lipid Research Clinics Coronary Primary Prevention Trial data with total cholesterol as the covariate measured with error and time until angina and time until nonfatal myocardial infarction as the correlated outcomes of interest.
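The SIMEX procedure itself is simple enough to sketch. The toy below uses plain linear regression rather than the paper's marginal hazards model: extra noise of variance λσ_u² is added at λ = 0, 1, 2, the naive slope is refit at each level, and a quadratic extrapolation through the three points is evaluated at λ = −1:

```python
import math
import random

random.seed(1)
n, beta, sigma_u = 4000, 1.0, 0.5        # sigma_u: known measurement-error s.d.

x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * xi + random.gauss(0.0, 0.3) for xi in x]
w = [xi + random.gauss(0.0, sigma_u) for xi in x]   # error-prone covariate

def slope(xs, ys):
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# naive slopes at added-noise levels lambda = 0, 1, 2 (B remeasurements each)
B, est = 50, []
for lam in (0.0, 1.0, 2.0):
    fits = []
    for _ in range(B):
        wl = [wi + random.gauss(0.0, math.sqrt(lam) * sigma_u) for wi in w]
        fits.append(slope(wl, y))
    est.append(sum(fits) / B)

# quadratic through (0, est[0]), (1, est[1]), (2, est[2]), evaluated at -1
beta_simex = 3 * est[0] - 3 * est[1] + est[2]
```

The naive slope at λ = 0 is attenuated toward zero; the extrapolated value lands much closer to the true coefficient.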
8.
9.
An estimator for the proportional hazards model with multiple longitudinal covariates measured with error
In many longitudinal studies, it is of interest to characterize the relationship between a time-to-event (e.g. survival) and several time-dependent and time-independent covariates. Time-dependent covariates are generally observed intermittently and with error. For a single time-dependent covariate, a popular approach is to assume a joint longitudinal data-survival model, where the time-dependent covariate follows a linear mixed effects model and the hazard of failure depends on random effects and time-independent covariates via a proportional hazards relationship. Regression calibration and likelihood or Bayesian methods have been advocated for implementation; however, generalization to more than one time-dependent covariate may become prohibitive. For a single time-dependent covariate, Tsiatis and Davidian (2001) have proposed an approach that is easily implemented and does not require an assumption on the distribution of the random effects. This technique may be generalized to multiple, possibly correlated, time-dependent covariates, as we demonstrate. We illustrate the approach via simulation and by application to data from an HIV clinical trial.
10.
11.
George O. Agogo 《Biometrical journal. Biometrische Zeitschrift》2017,59(1):94-109
Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long‐term dietary intake and disease occurrence. Long‐term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ‐reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short‐term instrument such as the 24‐hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR‐reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge in regression calibration modeling. We proposed a zero‐augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long‐term intake with 24HR and FFQ‐reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method.
12.
Endpoint error in smoothing and differentiating raw kinematic data: An evaluation of four popular methods
‘Endpoint error’ describes the erratic behavior at the beginning and end of the computed acceleration data which is commonly observed after smoothing and differentiating raw displacement data. To evaluate endpoint error produced by four popular smoothing and differentiating techniques, Lanshammar's (1982, J. Biomechanics 15, 99–105) modification of the Pezzack et al. (1977, J. Biomechanics, 10, 377–382) raw angular displacement data set was truncated at three different locations corresponding to the major peaks in the criterion acceleration curve. Also, for each data subset, three padding conditions were applied. Each data subset was smoothed and differentiated using the Butterworth digital filter, cubic spline, quintic spline, and Fourier series to obtain acceleration values. RMS residual errors were calculated between the computed and criterion accelerations in the endpoint regions. Although no method completely eliminated endpoint error, the results demonstrated clear superiority of the quintic spline over the other three methods in producing accurate acceleration values close to the endpoints of the modified Pezzack et al. (1977) data set. In fact, the quintic spline performed best with non-padded data (cumulative error=48.0 rad s−2). Conversely, when applied to non-padded data, the Butterworth digital filter produced wildly deviating values beginning more than 10 points from the terminal data point (cumulative error=226.6 rad s−2). Each of the four methods performed better when applied to data subsets padded by linear extrapolation (average cumulative error=68.8 rad s−2) than when applied to analogous subsets padded by reflection (average cumulative error=86.1 rad s−2).
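The padding effect at the boundary can be seen with plain second central differences on a hypothetical noiseless signal (not the Pezzack et al. data, and no spline or Butterworth smoothing): differentiation at an endpoint needs a sample beyond the boundary, and reflection padding corrupts the endpoint acceleration far more than linear extrapolation does:

```python
import math

# Hypothetical illustration: acceleration from displacement x(t) = sin(t)
# via second central differences; the true acceleration is -sin(t).
h = 0.01
t = [0.5 + i * h for i in range(200)]
x = [math.sin(ti) for ti in t]            # noiseless displacement samples
true_acc0 = -math.sin(t[0])               # analytic acceleration at the endpoint

def second_diff(xm, x0, xp):
    """Second central difference (x[i-1] - 2 x[i] + x[i+1]) / h^2."""
    return (xm - 2.0 * x0 + xp) / h ** 2

# the phantom sample x[-1] supplied by each padding rule
pad_linear  = 2.0 * x[0] - x[1]   # linear extrapolation through first two points
pad_reflect = x[1]                # reflection about the first sample

err_linear  = abs(second_diff(pad_linear,  x[0], x[1]) - true_acc0)
err_reflect = abs(second_diff(pad_reflect, x[0], x[1]) - true_acc0)
```

Reflection makes the padded triple symmetric in the wrong way, so the difference quotient blows up by a factor of order 1/h; linear extrapolation merely flattens the estimated acceleration, a much smaller error, consistent with the abstract's ranking of the two padding conditions.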
13.
Leroy G Danchin-Burge C Palhiere I Baumung R Fritz S Mériaux JC Gautier M 《Animal genetics》2012,43(3):309-314
On the basis of correlations between pairwise individual genealogical kinship coefficients and allele sharing distances computed from genotyping data, we propose an approximate Bayesian computation (ABC) approach to assess pedigree file reliability through gene-dropping simulations. We explore the features of the method using simulated data sets and show precision increases with the number of markers. An application is further made with five dog breeds, four sheep breeds and one cattle breed raised in France and displaying various characteristics and population sizes, using microsatellite or SNP markers. Depending on the breeds, pedigree error estimations range between 1% and 9% in dog breeds, between 1% and 10% in sheep breeds, and are 4% in the cattle breed.
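Rejection ABC itself takes only a few lines. The toy below uses hypothetical counts and a deliberately simple summary (a mismatch count), standing in for the authors' gene-dropping simulations that compare kinship coefficients with allele-sharing distances: candidate pedigree-error rates are accepted when their simulated summary falls near the observed one:

```python
import random

random.seed(4)

# hypothetical data: 18 parentage mismatches among 200 checked pairs
n_checked, obs_mismatch = 200, 18

accepted = []
for _ in range(20000):
    p = random.uniform(0.0, 0.2)                        # prior on the error rate
    sim = sum(random.random() < p for _ in range(n_checked))  # gene-drop stand-in
    if abs(sim - obs_mismatch) <= 2:                    # tolerance on the summary
        accepted.append(p)

posterior_mean = sum(accepted) / len(accepted)   # approximately 18/200 = 0.09
```

The accepted draws approximate the posterior of the error rate; tightening the tolerance or enriching the summary statistic (as the ABC literature recommends) sharpens the approximation.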
14.
15.
Fluctuating asymmetry (FA), i.e. small, non-directional deviations from perfect symmetry in morphological characters, increases under genetic and/or environmental stress. Ecological and evolutionary studies addressing FA became popular in past decades; however, their outcomes remain controversial. The discrepancies might be at least partly explained by inconsistent and non-standardised methodology. Our aim was to improve the methodology of these studies by identifying factors that affect the reproducibility of FA measurements in plant leaves. Six observers used a highly standardised measurement protocol to measure FA using the width, area and weight of the same set of leaves of 10 plant species that differed in leaf size, shape of the leaf margin and other leaf traits. On average, 24% of the total variation in the data was due to measurement error. Reproducibility of measurements varied with the shape of leaf margin, leaf size, the measured character and the experience of the observer. The lowest reproducibility of the width of leaf halves was found for simple leaves with serrate margins and the highest for simple leaves with entire margins and for compound pinnate leaves. The reproducibility was significantly lower for the weight of leaf halves than for either their width or area, especially for plants with small leaves. The reproducibility was also lower for measurements made by experienced observers than by naïve observers. The size of press-dried leaves decreased slightly but significantly relative to fresh leaves, but the FA of press-dried leaves adequately reflected the FA of fresh leaves. In contrast, preservation in 60% ethanol did not affect leaf size, but it decreased the width-based values of FA to 89.3% of the values measured from fresh leaves. We suggest that although reproducibility of leaf FA measurements depends upon many factors, the shape of the leaf margin is the most important source of variation. 
We recommend, whenever possible, choosing large-leaved plants with entire leaf margins as model objects for studies involving measurements of FA using the width of leaf halves. These measurements should be conducted with high accuracy from images of fresh or press-dried leaves.
16.
Random error in shelterbelt porosity determined by digital image processing
Building on an analysis of the sources of error in determining shelterbelt porosity by digital image processing, the random error component was studied. The results show that the random-error variance of whole-belt porosity is smaller than the larger of the crown and trunk variances; that the random errors of porosity for every belt type and belt section follow a normal distribution; and that the distribution of the random error of whole-belt porosity is independent of tree species and within-belt arrangement. For Beijing poplar, Shuangyang fast-growing poplar, and other hybrid poplar belts in rectangular or staggered plantings, the random-error variances of crown and trunk porosity do not differ significantly, whereas in native poplar belts the trunk variance is significantly larger than the crown variance. Random-error limits for estimating the true porosity from measured values are given for each section of each belt type, and the use of these limits to set sample sizes and delimit the measurement range is discussed. It is concluded that limiting random error by photographing more sample segments of the same belt, and correcting the projection and image-shrinkage errors in measured porosity by model-based adjustment, is a feasible route to a complete digital-image-processing method for determining shelterbelt porosity.
17.
We investigate the effects of measurement error on the estimation of nonparametric variance functions. We show that either ignoring measurement error or direct application of the simulation extrapolation (SIMEX) method leads to inconsistent estimators. Nevertheless, the direct SIMEX method can reduce bias relative to a naive estimator. We further propose a permutation SIMEX method that leads to consistent estimators in theory. The performance of both SIMEX methods depends on approximations to the exact extrapolants. Simulations show that both SIMEX methods perform better than ignoring measurement error. The methodology is illustrated using microarray data from colon cancer patients.
18.
Carroll RJ 《Biometrics》2003,59(2):211-220
In classical problems, e.g., comparing two populations, fitting a regression surface, etc., variability is a nuisance parameter. The term "nuisance parameter" is meant here in both the technical and the practical sense. However, there are many instances where understanding the structure of variability is just as central as understanding the mean structure. The purpose of this article is to review a few of these problems. I focus in particular on two issues: (a) the determination of the validity of an assay; and (b) the issue of the power for detecting health effects from nutrient intakes when the latter are measured by food frequency questionnaires. I will also briefly mention the problems of variance structure in generalized linear mixed models, robust parameter design in quality technology, and the signal in microarrays. In these and other problems, treating variance structure as a nuisance instead of a central part of the modeling effort not only leads to inefficient estimation of means, but also to misleading conclusions.
19.
In macroevolutionary studies, different approaches are commonly used to measure phylogenetic signal (the tendency of related taxa to resemble one another), including the K statistic and the Mantel test. The latter was recently criticized for lacking statistical power. Using new simulations, we show that the power of the Mantel test depends on the metrics used to define trait distances and phylogenetic distances between species. Increasing power is obtained by lowering variance and increasing negative skewness in interspecific distances, as obtained using Euclidean trait distances and the complement of Abouheif proximity as a phylogenetic distance. We show realistic situations involving "measurement error" due to intraspecific variability where the Mantel test is more powerful to detect a phylogenetic signal than a permutation test based on the K statistic. We highlight limitations of the K statistic (a univariate measure) and show that its application should take into account measurement errors using repeated measures per species to avoid estimation bias. Finally, we argue that phylogenetic distograms representing Euclidean trait distance as a function of the square root of patristic distance provide an insightful representation of the phylogenetic signal that can be used to assess both the impact of measurement error and the departure from a Brownian evolution model.
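As a rough sketch of the permutation machinery under discussion (toy matrices, not the simulation design of the paper): the Mantel statistic is the correlation between the off-diagonal entries of a trait-distance matrix and a phylogenetic-distance matrix, and its p-value comes from jointly permuting the rows and columns of one matrix:

```python
import random

random.seed(2)

def pearson(xs, ys):
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (sx * sy)

def mantel(d1, d2, n_perm=999):
    """Correlation between the upper triangles of two distance matrices,
    with a one-sided permutation p-value (species labels of d2 shuffled)."""
    n = len(d1)
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    xs = [d1[i][j] for i, j in idx]
    r_obs = pearson(xs, [d2[i][j] for i, j in idx])
    labels, hits = list(range(n)), 0
    for _ in range(n_perm):
        random.shuffle(labels)   # joint row/column permutation of d2
        if pearson(xs, [d2[labels[i]][labels[j]] for i, j in idx]) >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# toy data: 6 species whose trait tracks position along a simple phylogeny
trait = [0.0, 1.1, 2.0, 2.9, 4.2, 5.0]
d_trait = [[abs(a - b) for b in trait] for a in trait]
d_phylo = [[abs(i - j) for j in range(6)] for i in range(6)]

r, p = mantel(d_trait, d_phylo)
```

In this setup trait distance tracks phylogenetic distance almost perfectly, so the test detects a strong signal; the paper's point is that the choice of both distance metrics governs how much power this machinery actually has.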
20.
《Journal of liposome research》2013,23(3-4):259-267
Modern techniques in nuclear magnetic resonance (NMR) allow investigators to probe molecular interactions with greater sensitivity and speed than ever before. Exploiting the nuclear Overhauser effect (NOE), the intermolecular interactions between dimethylsulfoxide (DMSO) and lipid vesicles were investigated. The DMSO methyl proton signal varies with experimental mixing time suggesting the system behaves in a manner similar to that of a ligand weakly binding to a macromolecule.