20 similar records retrieved.
4.
We introduce sequential testing procedures for the planning and analysis of reliability studies to assess an exposure's measurement error. The designs allow repeated evaluation of the reliability of the measurements, with testing stopped early when the evidence shows that the measurement error is within the tolerance level. Methods are developed and critical values tabulated for a number of two-stage designs. The methods are illustrated with an example evaluating the reliability of biomarkers associated with oxidative stress.
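A rough Monte Carlo sketch of how such a two-stage stopping rule operates; the error model, sample sizes, and critical values (c1, c2) below are hypothetical placeholders, not the designs or tabulated values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def error_sd(n, k=3, true_sd=0.3):
    """Estimated measurement-error SD from k replicate biomarker
    measurements on each of n subjects (classical additive error)."""
    subjects = rng.normal(0.0, 1.0, size=(n, 1))
    replicates = subjects + rng.normal(0.0, true_sd, size=(n, k))
    return np.sqrt(np.mean(np.var(replicates, axis=1, ddof=1)))

def two_stage_test(n1=20, n2=20, c1=0.40, c2=0.45):
    """Generic two-stage rule: stop at stage 1 if the estimated error SD
    is already below c1; otherwise run a second stage and compare the
    pooled estimate to c2.  Here c1 and c2 are placeholders for the
    tabulated critical values that control the design's error rates."""
    s1 = error_sd(n1)
    if s1 <= c1:
        return "stopped at stage 1: error within tolerance", s1
    s2 = error_sd(n2)
    pooled = np.sqrt((n1 * s1**2 + n2 * s2**2) / (n1 + n2))
    return ("within tolerance" if pooled <= c2 else "exceeds tolerance"), pooled

print(two_stage_test())
```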
6.
Covariate measurement error in regression is typically assumed to act in an additive or multiplicative manner on the true covariate value. However, such an assumption does not hold for the measurement error of sleep-disordered breathing (SDB) in the Wisconsin Sleep Cohort Study (WSCS). The true covariate is the severity of SDB, and the observed surrogate is the number of breathing pauses per unit time of sleep, which has a nonnegative semicontinuous distribution with a point mass at zero. We propose a latent variable measurement error model for the error structure in this situation and implement it in a linear mixed model. The estimation procedure is similar to regression calibration but involves a distributional assumption for the latent variable. Modeling and model-fitting strategies are explored and illustrated through an example from the WSCS.
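A small simulation of the kind of error structure described, not the paper's fitted model: the surrogate has a point mass at zero below a latent threshold and multiplicative error above it, with all distributional choices hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Latent true SDB severity (unobserved); the scale here is arbitrary.
latent = rng.normal(0.0, 1.0, n)

# Observed surrogate: exact zero when the latent severity is below a
# threshold (the point mass), log-normal with multiplicative error above it.
positive = latent > 0.0
surrogate = np.where(positive,
                     np.exp(latent + rng.normal(0.0, 0.5, n)),
                     0.0)

print("share of exact zeros:", np.mean(surrogate == 0.0))
print("corr(latent, log surrogate | positive):",
      np.corrcoef(latent[positive], np.log(surrogate[positive]))[0, 1])
```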
7.
Wang CY. Biometrics 2000;56(1):106-112.
Consider the problem of estimating the correlation between two nutrient measurements, such as the percent energy from fat obtained from a food frequency questionnaire (FFQ) and that from repeated food records or 24-hour recalls. Under a classical additive model for repeated food records, it is known that there is an attenuation effect on the correlation estimation if the sample average of repeated food records for each subject is used to estimate the underlying long-term average. This paper considers the case in which the selection probability of a subject for participation in the calibration study, in which repeated food records are measured, depends on the corresponding FFQ value, and the repeated longitudinal measurement errors have an autoregressive structure. This paper investigates a normality-based estimator and compares it with a simple method of moments. Both methods are consistent if the first two moments of nutrient measurements exist. Furthermore, joint estimating equations are applied to estimate the correlation coefficient and related nuisance parameters simultaneously. This approach provides a simple sandwich formula for the covariance estimation of the estimator. Finite sample performance is examined via a simulation study, and the proposed weighted normality-based estimator performs well under various distributional assumptions. The methods are applied to real data from a dietary assessment study.
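A minimal sketch of the attenuation effect and a method-of-moments correction under the simple classical additive error model (it ignores the paper's autoregressive error structure and FFQ-dependent selection); all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 400, 4                      # subjects, repeated food records each

truth = rng.normal(0.0, 1.0, n)                      # long-term true intake T
ffq = 0.8 * truth + rng.normal(0.0, 0.8, n)          # FFQ measurement Q
records = truth[:, None] + rng.normal(0.0, 1.0, (n, k))  # classical errors

fbar = records.mean(axis=1)
naive = np.corrcoef(ffq, fbar)[0, 1]   # attenuated by the averaged error

# Method of moments: var(T) = var(Fbar) - (within-person variance) / k
within = np.mean(np.var(records, axis=1, ddof=1))
var_t = np.var(fbar, ddof=1) - within / k
corrected = np.cov(ffq, fbar)[0, 1] / np.sqrt(np.var(ffq, ddof=1) * var_t)

print(f"naive corr(Q, Fbar) = {naive:.3f}, corrected = {corrected:.3f}")
print(f"target corr(Q, T)   = {np.corrcoef(ffq, truth)[0, 1]:.3f}")
```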
10.
Outcome mismeasurement can lead to biased estimation in several contexts. Magder and Hughes (1997, American Journal of Epidemiology 146, 195-203) showed that failure to adjust for imperfect outcome measures in logistic regression analysis can conservatively bias estimation of covariate effects, even when the mismeasurement rate is the same across levels of the covariate. Other authors have addressed the need to account for mismeasurement in survival analysis in selected cases (Snapinn, 1998, Biometrics 54, 209-218; Gelfand and Wang, 2000, Statistics in Medicine 19, 1865-1879; Balasubramanian and Lagakos, 2001, Biometrics 57, 1048-1058, 2003, Biometrika 90, 171-182). We provide a general, more widely applicable, adjusted proportional hazards (APH) method for estimation of cumulative survival and hazard ratios in discrete time when the outcome is measured with error. We show that mismeasured failure status in a standard proportional hazards (PH) model can conservatively bias estimation of hazard ratios and that inference, in most practical situations, is more severely affected by poor specificity than by poor sensitivity. However, in simulations over a wide range of conditions, the APH method with correctly specified mismeasurement rates performs very well.
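A hedged simulation sketch of the conservative bias itself (the naive analysis only, not the APH correction); the hazards, sensitivity, and specificity below are invented values:

```python
import numpy as np

rng = np.random.default_rng(3)

def observed_hazards(n, hazard, sens=0.95, spec=0.98, periods=8):
    """Per-period observed hazards when discrete-time failure status is
    read through an error-prone test: true failures are detected with
    probability sens, non-failures are falsely flagged with probability
    1 - spec, and subjects leave the risk set at the first observed
    (possibly false) failure."""
    fail_time = rng.geometric(hazard, n)       # true failure times: 1, 2, ...
    events, at_risk = np.zeros(periods), np.zeros(periods)
    for i in range(n):
        failed = False
        for t in range(1, periods + 1):
            at_risk[t - 1] += 1
            failed = failed or fail_time[i] <= t
            if rng.random() < (sens if failed else 1 - spec):
                events[t - 1] += 1             # observed failure, exit risk set
                break
    return events / at_risk

# The true hazard ratio is 0.5; imperfect specificity pulls the naive
# estimate toward 1 (a conservative bias), as the abstract describes.
naive = observed_hazards(20000, 0.025) / observed_hazards(20000, 0.05)
exact = observed_hazards(20000, 0.025, 1, 1) / observed_hazards(20000, 0.05, 1, 1)
print("naive per-period HR:", naive.round(2))
print("error-free per-period HR:", exact.round(2))
```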
14.
In many clinical studies with a survival outcome, administrative censoring occurs when follow-up ends at a prespecified date and many subjects are still alive. An additional complication in some trials is noncompliance with the assigned treatment. For this setting, we study the estimation of the causal effect of treatment on survival probability up to a given time point among those subjects who would comply with the assignment to both treatment and control. We first discuss the standard instrumental variable (IV) method for survival outcomes and parametric maximum likelihood methods, and then develop an efficient plug-in nonparametric empirical maximum likelihood estimation (PNEMLE) approach. The PNEMLE method makes no assumptions on outcome distributions and exploits the mixture structure in the data to gain efficiency over the standard IV method. Theoretical results for the PNEMLE are derived, and the method is illustrated by an analysis of data from a breast cancer screening trial. From our limited mortality analysis, with administrative censoring at 10 years of follow-up, we find that a significant benefit of screening is present after 4 years (at the 5% level) and that this benefit persists at 10 years of follow-up.
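At a fixed time point, the standard IV method referred to above reduces to a Wald-type ratio: the intention-to-treat difference in survival divided by the difference in compliance between arms. A minimal sketch with made-up inputs (this is not the PNEMLE estimator):

```python
def iv_survival_effect(surv_z1, surv_z0, uptake_z1, uptake_z0=0.0):
    """Standard IV (Wald) estimate of the complier causal effect on
    survival probability at a fixed time point: the intention-to-treat
    difference scaled by the compliance difference between arms."""
    return (surv_z1 - surv_z0) / (uptake_z1 - uptake_z0)

# Hypothetical numbers: 10-year survival of 0.92 vs 0.90 by assigned arm,
# with 70% uptake of screening in the invited arm and none in control.
print(iv_survival_effect(0.92, 0.90, 0.70))   # about 0.029 among compliers
```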
16.
R. A. J. Smit, S. Trompet, A. J. M. de Craen, J. W. Jukema. Netherlands Heart Journal 2014;22(4):186-189.
Cardiovascular disease (CVD) remains the leading cause of death in developed countries, despite the decline in CVD mortality over the last two decades. Efforts have been made to identify causal risk factors for CVD through observational, predictive research. In recent years, however, some of these findings have been shown to be mistaken, with confounding and reverse causation among the possible explanations for the discrepant results. Genetic epidemiology has tried to address these problems through the use of Mendelian randomisation. In this paper, we discuss the promise and limitations of using genetic variation to establish the causality of cardiovascular risk factors.
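For a single variant with summary statistics, the basic Mendelian randomisation estimate is the Wald ratio; the following sketch uses hypothetical numbers and a first-order delta-method standard error that ignores covariance terms:

```python
import numpy as np

def mr_wald_ratio(beta_gx, beta_gy, se_gx, se_gy):
    """Wald-ratio Mendelian randomisation estimate: the gene-outcome
    association scaled by the gene-exposure association, with a
    first-order (delta-method) standard error."""
    est = beta_gy / beta_gx
    se = abs(est) * np.sqrt((se_gy / beta_gy) ** 2 + (se_gx / beta_gx) ** 2)
    return est, se

# Hypothetical summary statistics for one variant:
print(mr_wald_ratio(beta_gx=0.10, beta_gy=0.02, se_gx=0.01, se_gy=0.005))
```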
17.
Radiostereometric analysis (RSA) is a highly accurate technique used to provide three-dimensional (3D) measurements of orthopaedic implant migration for clinical research applications, yet its implementation in routine clinical examinations has been limited. Previous studies have introduced a modified RSA procedure that separates the calibration examinations from the patient examinations, allowing routine clinical radiographs to be analyzed using RSA. However, calibrating the wide range of clinical views requires a new calibration object. In this study, a universal, isotropic calibration object was designed to calibrate any pair of radiographic views used in the clinic for RSA. A numerical simulation technique was used to design the calibration object, followed by a phantom validation test of a prototype to verify the performance of the novel object and to compare its measurement reliability with that of the conventional calibration cage. The 3D bias for the modified calibration method using the new calibration object was 0.032 ± 0.006 mm, the 3D repeatability standard deviation was 0.015 mm, and the 3D repeatability limit was 0.042 mm. Although statistical differences were present between the universal calibration object and the conventional cage, the differences were considered not clinically meaningful. The 3D bias and repeatability values obtained using the universal calibration object were well under the threshold acceptable for RSA; the object was therefore successfully validated. The universal calibration object will help further the adoption of RSA into more routine practice, providing the opportunity to generate quantitative databases on joint replacement performance.
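The reported metrics can be reproduced from repeated measurements as sketched below, assuming the conventional repeatability limit of 2.77 times the repeatability SD (consistent with the reported 0.015 mm SD and 0.042 mm limit); the measurement data here are simulated stand-ins for real phantom data:

```python
import numpy as np

# Hypothetical repeated 3D migration measurements (mm) of one phantom
# marker whose true offset from the reference is zero.
rng = np.random.default_rng(4)
measured = rng.normal(loc=[0.03, 0.0, 0.01], scale=0.009, size=(10, 3))

# 3D bias: mean 3D distance between each measurement and the truth (origin).
bias = np.linalg.norm(measured, axis=1)
print(f"3D bias = {bias.mean():.3f} +/- {bias.std(ddof=1):.3f} mm")

# Repeatability: per-axis SDs of the repeats combined into a 3D SD; the
# repeatability limit uses the conventional 2.77 * SD factor (ISO 5725).
repeat_sd = np.linalg.norm(measured.std(axis=0, ddof=1))
print(f"repeatability SD = {repeat_sd:.3f} mm, limit = {2.77 * repeat_sd:.3f} mm")
```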
18.
We compared several validation study designs for estimating the odds ratio of disease with misclassified exposure. We assumed that the outcome and the misclassified binary covariate are available for all subjects and that the error-free binary covariate is measured in a subsample, the validation sample. We considered designs in which the total size of the validation sample is fixed and the probability of selection into the validation sample may depend on the outcome and misclassified covariate values. Design comparisons were conducted for rare and common disease scenarios, where the optimal design is the one that minimizes the variance of the maximum likelihood estimator of the true log odds ratio relating the outcome to the exposure of interest. Misclassification rates were assumed to be independent of the outcome. We used a sensitivity analysis to assess the effect of misspecifying the misclassification rates. Under the scenarios considered, our results suggested that a balanced design, which allocates equal numbers of validation subjects to each of the four outcome/mismeasured covariate categories, is preferable for its simplicity and good performance. A user-friendly Fortran program, available from the second author, calculates the optimal sampling fractions for all designs considered and the efficiencies of these designs relative to the optimal hybrid design for any scenario of interest.
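A minimal sketch of the balanced allocation the abstract recommends; it computes only the sampling fractions that equalize validation counts across the four cells and does not reproduce the paper's variance or optimal-design calculations (the cell counts are hypothetical):

```python
import numpy as np

def balanced_validation_fractions(cell_counts, n_valid):
    """Sampling fractions that put an equal number of validation subjects
    into each of the four outcome x mismeasured-covariate cells, i.e. the
    'balanced design'.  cell_counts are the main-study sizes of the cells
    (Y, X*) = (0,0), (0,1), (1,0), (1,1)."""
    target = n_valid / 4
    counts = np.asarray(cell_counts, dtype=float)
    return np.minimum(target / counts, 1.0)   # cap at sampling everyone

# Hypothetical main study of 5000 subjects split across the four cells:
print(balanced_validation_fractions([3200, 800, 700, 300], n_valid=400))
```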
20.
The effect of measurement error on model parameter estimates and a comparison of parameter estimation methods
Based on the model V = aD^b, a simulation experiment in Matlab was first used to study the effect of measurement error on model parameter estimation. The results show that when the error in V is held fixed and the error in D grows, ordinary least squares yields estimates of a that increase and estimates of b that decrease, so the estimates move further from the true values as the measurement error in D increases. Estimation methods that remove the influence of measurement error were then studied: regression calibration, simulation extrapolation (SIMEX), and a measurement error model were each used to estimate the parameters from data in which both V and D are measured with error. The results show that all three methods yield unbiased parameter estimates and eliminate the systematic bias of ordinary least squares; they further show that the measurement error model method outperforms regression calibration and SIMEX.
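A compact illustration of the SIMEX idea applied to V = aD^b with additive error in D: extra noise is added at increasing multiples of the error variance, the naive log-log least squares slope is tracked, and the trend is extrapolated back to the error-free case. All parameter values and the quadratic extrapolant are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, n = 1e-4, 2.4, 2000               # hypothetical volume-equation values
D = rng.uniform(20, 50, n)              # true diameters
V = a * D**b * np.exp(rng.normal(0, 0.05, n))   # small multiplicative error in V
sigma_u = 2.0                           # additive measurement-error SD on D
W = D + rng.normal(0, sigma_u, n)       # observed, error-prone diameters

def naive_b(d_obs):
    """Naive log-log least squares slope, i.e. the estimate of b."""
    x = np.log(np.abs(d_obs))           # abs() guards rare negative draws
    return np.polyfit(x, np.log(V), 1)[0]

# SIMEX: refit after adding noise of variance lam * sigma_u**2, then
# extrapolate the trend in the estimates back to lam = -1 (no error).
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
b_hats = [np.mean([naive_b(W + rng.normal(0, np.sqrt(lam) * sigma_u, n))
                   for _ in range(20)]) for lam in lams]
b_simex = np.polyval(np.polyfit(lams, b_hats, 2), -1.0)

print(f"true b = {b}, naive OLS b = {naive_b(W):.3f}, SIMEX b = {b_simex:.3f}")
```

Regression calibration would instead replace W with an estimate of E[D | W] before fitting; the sketch above shows only the SIMEX route.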