Similar Articles
20 similar articles found (search time: 234 ms)
1.
We have developed a new general approach for handling misclassification in discrete covariates or responses in regression models. The simulation and extrapolation (SIMEX) method, which was originally designed for handling additive covariate measurement error, is applied to the case of misclassification. The statistical model for characterizing misclassification is given by the transition matrix Pi from the true to the observed variable. We exploit the relationship between the size of misclassification and bias in estimating the parameters of interest. Assuming that Pi is known or can be estimated from validation data, we simulate data with higher misclassification and extrapolate back to the case of no misclassification. We show that our method is quite general and applicable to models with misclassified response and/or misclassified discrete regressors. In the case of a binary response with misclassification, we compare our method to the approach of Neuhaus, and to the matrix method of Morrissey and Spiegelman in the case of a misclassified binary regressor. We apply our method to a study on caries with a misclassified longitudinal response.
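The simulate-then-extrapolate logic can be sketched for the simplest case: a linear model with one symmetrically misclassified binary covariate and a known error rate. Everything below (parameter values, the λ grid, the quadratic extrapolant) is illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def flip(x, q, rng):
    """Flip each binary entry with probability q."""
    return np.where(rng.random(x.shape) < q, 1 - x, x)

# Simulated data: y = b0 + b1*x + eps with x observed under 10% symmetric error
n, b0, b1, p0 = 20_000, 1.0, 2.0, 0.10
x_true = rng.integers(0, 2, n)
y = b0 + b1 * x_true + rng.normal(0.0, 0.5, n)
x_obs = flip(x_true, p0, rng)

naive = np.polyfit(x_obs, y, 1)[0]     # attenuated toward (1 - 2*p0) * b1

# Simulation step: inflate misclassification to the level implied by Pi^(1+lam)
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
means = []
for lam in lams:
    # total error rate corresponding to Pi^(1+lam) for a symmetric 2x2 matrix
    p_tot = 0.5 * (1.0 - (1.0 - 2.0 * p0) ** (1.0 + lam))
    q = (p_tot - p0) / (1.0 - 2.0 * p0)   # extra flip prob. on top of p0
    reps = [np.polyfit(flip(x_obs, q, rng), y, 1)[0] for _ in range(25)]
    means.append(np.mean(reps))

# Extrapolation step: fit a quadratic in lam, evaluate at lam = -1 (no error)
simex = np.polyval(np.polyfit(lams, means, 2), -1.0)
```

Here the naive slope settles near (1 − 2·0.1)·2 = 1.6, while the extrapolated estimate moves back toward the true value of 2; the quadratic extrapolant is only an approximation to the exact exponential bias curve, which is why SIMEX is usually described as approximately consistent.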

2.
We consider a Bayesian analysis for modeling a binary response that is subject to misclassification. Additionally, an explanatory variable is assumed to be unobservable, but measurements are available on its surrogate. A binary regression model is developed to incorporate the measurement error in the covariate as well as the misclassification in the response. Unlike existing methods, no model parameters need be assumed known. Markov chain Monte Carlo methods are utilized to perform the necessary computations. The methods developed are illustrated using atomic bomb survival data. A simulation experiment explores advantages of the approach.
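The MCMC machinery of the abstract is beyond a short sketch, but the core of a misclassified-response model is the likelihood contribution P(y_obs = 1 | x) = (1 − g1)·p(x) + g0·(1 − p(x)). The maximum-likelihood stand-in below assumes the error rates g0, g1 are known (unlike the paper, which avoids that assumption); all parameter values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Logistic regression with a misclassified binary response, where
# g0 = P(record 1 | true 0) and g1 = P(record 0 | true 1).
n, b0, b1, g0, g1 = 5000, -0.5, 1.0, 0.1, 0.1
x = rng.normal(0.0, 1.0, n)
p_true = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
y_true = rng.random(n) < p_true
flipped = rng.random(n) < np.where(y_true, g1, g0)
y_obs = np.where(flipped, ~y_true, y_true).astype(float)

def nll(theta, e0=0.0, e1=0.0):
    """Negative log-likelihood; e0 = e1 = 0 recovers the naive logistic model."""
    p = 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * x)))
    q = np.clip((1.0 - e1) * p + e0 * (1.0 - p), 1e-10, 1.0 - 1e-10)
    return -np.sum(y_obs * np.log(q) + (1.0 - y_obs) * np.log(1.0 - q))

naive = minimize(nll, [0.0, 0.0]).x                   # ignores misclassification
adjusted = minimize(nll, [0.0, 0.0], args=(g0, g1)).x # corrects for it
```

A Bayesian version would place priors on (b0, b1, g0, g1) and sample this same likelihood with MCMC; the point of the sketch is only that the adjusted slope lands much closer to the true b1 than the naive fit.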

3.
Gustafson P, Le Nhu D. Biometrics 2002, 58(4): 878-887
It is well known that imprecision in the measurement of predictor variables typically leads to bias in estimated regression coefficients. We compare the bias induced by measurement error in a continuous predictor with that induced by misclassification of a binary predictor in the contexts of linear and logistic regression. To make the comparison fair, we consider misclassification probabilities for a binary predictor that correspond to dichotomizing an imprecise continuous predictor in lieu of its precise counterpart. On this basis, nondifferential binary misclassification is seen to yield more bias than nondifferential continuous measurement error. However, it is known that differential misclassification results if a binary predictor is actually formed by dichotomizing a continuous predictor subject to nondifferential measurement error. When the postulated model linking the response and precise continuous predictor is correct, this differential misclassification is found to yield less bias than continuous measurement error, in contrast with nondifferential misclassification, i.e., dichotomization reduces the bias due to mismeasurement. This finding, however, is sensitive to the form of the underlying relationship between the response and the continuous predictor. In particular, we give a scenario where dichotomization involves a trade-off between model fit and misclassification bias. We also examine how the bias depends on the choice of threshold in the dichotomization process and on the correlation between the imprecise predictor and a second precise predictor.
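A tiny simulation reproduces the comparison at the heart of this abstract for the linear case. All numbers are hypothetical (standard-normal predictor, equal-variance error, median split); the quantity compared is the fraction of the precise-predictor slope retained by the noisy version:

```python
import numpy as np

rng = np.random.default_rng(5)

# y depends linearly on x; w = x + u is a nondifferential noisy surrogate.
n = 100_000
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, 0.5, n)

def slope(a, b):
    """Least-squares slope of b regressed on a."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

# Continuous case: classical attenuation factor var(x)/(var(x)+var(u)) = 0.5
rel_cont = slope(w, y) / slope(x, y)

# Binary case: dichotomize at 0, so 1{w > 0} is a (differentially)
# misclassified version of the precise binary predictor 1{x > 0}.
rel_bin = slope((w > 0).astype(float), y) / slope((x > 0).astype(float), y)
```

In this setting `rel_cont` is near 0.5 while `rel_bin` is near 0.71, illustrating the abstract's finding that dichotomizing a noisy continuous predictor (which induces differential misclassification) can attenuate the slope less than using the noisy continuous predictor itself.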

4.
Misclassification in binary outcomes can severely bias effect estimates of regression models when the models are naively applied to error-prone data. Here, we discuss response misclassification in studies on the special class of bilateral diseases. Such diseases can affect neither, one, or both entities of a paired organ, for example, the eyes or ears. If measurements are available on both organ entities, disease occurrence in a person is often defined as disease occurrence in at least one entity. In this setting, there are two reasons for response misclassification: (a) ignorance of missing disease assessment in one of the two entities and (b) error-prone disease assessment in the single entities. We investigate the consequences of ignoring both types of response misclassification and present an approach to adjust the bias from misclassification by optimizing an adequate likelihood function. The inherent modelling assumptions and problems in case of entity-specific misclassification are discussed. This work was motivated by studies on age-related macular degeneration (AMD), a disease that can occur separately in each eye of a person. We illustrate and discuss the proposed analysis approach based on real-world data of a study on AMD and simulated data.

5.
CAPTURE-RECAPTURE ANALYSIS FOR ESTIMATING MANATEE REPRODUCTIVE RATES
Modeling the life history of the endangered Florida manatee (Trichechus manatus latirostris) is an important step toward understanding its population dynamics and predicting its response to management actions. We developed a multi-state mark-resighting model for data collected under Pollock's robust design. This model estimates breeding probability conditional on a female's breeding state in the previous year; assumes sighting probability depends on breeding state; and corrects for misclassification of a cow with a first-year calf by estimating conditional sighting probability for the calf. The model is also appropriate for estimating survival and unconditional breeding probabilities when the study area is closed to temporary emigration across years. We applied this model to photo-identification data for the Northwest and Atlantic Coast populations of manatees, for years 1982–2000. With rare exceptions, manatees do not reproduce in two consecutive years. For those without a first-year calf in the previous year, the best-fitting model included constant probabilities of producing a calf for the Northwest (0.43, SE = 0.057) and Atlantic (0.38, SE = 0.045) populations. The approach we present to adjust for misclassification of breeding state could be applicable to a large number of marine mammal populations.

6.
Marginal methods have been widely used for the analysis of longitudinal ordinal and categorical data. These models do not require full parametric assumptions on the joint distribution of repeated response measurements but only specify the marginal or even association structures. However, inference results obtained from these methods often incur serious bias when variables are subject to error. In this paper, we tackle the problem that misclassification exists in both response and categorical covariate variables. We develop a marginal method for misclassification adjustment, which utilizes second-order estimating functions and a functional modeling approach, and can yield consistent estimates and valid inference for mean and association parameters. We propose a two-stage estimation approach for cases in which validation data are available. Our simulation studies show good performance of the proposed method under a variety of settings. Although the proposed method is phrased to data with a longitudinal design, it also applies to correlated data arising from clustered and family studies, in which association parameters may be of scientific interest. The proposed method is applied to analyze a dataset from the Framingham Heart Study as an illustration.

7.
In epidemiologic studies, subjects are often misclassified as to their level of exposure. Ignoring this misclassification error in the analysis introduces bias in the estimates of certain parameters and invalidates many hypothesis tests. For situations in which there is misclassification of exposure in a follow-up study with categorical data, we have developed a model that permits consideration of any number of exposure categories and any number of multiple-category covariates. When used with logistic and Poisson regression procedures, this model helps assess the potential for bias when misclassification is ignored. When reliable ancillary information is available, the model can be used to correct for misclassification bias in the estimates produced by these regression procedures.

8.
We study the effect of misclassification of a binary covariate on the parameters of a logistic regression model. In particular we consider 2 × 2 × 2 tables. We assume that a binary covariate is subject to misclassification that may depend on the observed outcome. This type of misclassification is known as (outcome dependent) differential misclassification. We examine the resulting asymptotic bias on the parameters of the model and derive formulas for the biases and their approximations as a function of the odds and misclassification probabilities. Conditions for unbiased estimation are also discussed. The implications are illustrated numerically using a case control study. For completeness we briefly examine the effect of covariate dependent misclassification of exposures and of outcomes.

9.
In epidemiologic studies, measurement error in the exposure variable can have a detrimental effect on the power of hypothesis testing for detecting the impact of exposure in the development of a disease. To adjust for misclassification in the hypothesis testing procedure involving a misclassified binary exposure variable, we consider a retrospective case–control scenario under the assumption of nondifferential misclassification. We develop a test under a Bayesian approach, based on a posterior distribution generated by an MCMC algorithm with a normal prior under realistic assumptions. We compared this test with an equivalent likelihood ratio test developed under the frequentist approach, using various simulated settings and in the presence or absence of validation data. In our simulations, we considered varying degrees of sensitivity, specificity, sample sizes, exposure prevalence, and proportion of unvalidated and validated data. In these scenarios, our simulation study shows that the adjusted model (with validation data) is always better than the unadjusted model (without validation data). However, an exception is possible in a fixed-budget scenario where collecting the validation data is much more costly. We also showed that both Bayesian and frequentist hypothesis testing procedures reach the same conclusions for the scenarios under consideration. The Bayesian approach is, however, computationally more stable in rare exposure contexts. A real case–control study was used to show the application of the hypothesis testing procedures under consideration.

10.
Lyles RH. Biometrics 2002, 58(4): 1034-1036; discussion 1036-1037
Morrissey and Spiegelman (1999, Biometrics 55, 338-344) provided a comparative study of adjustment methods for exposure misclassification in case-control studies equipped with an internal validation sample. In addition to the maximum likelihood (ML) approach, they considered two intuitive procedures based on proposals in the literature. Despite appealing ease of computation associated with the latter two methods, efficiency calculations suggested that ML was often to be recommended for the analyst with access to a numerical routine to facilitate it. Here, a reparameterization of the likelihood reveals that one of the intuitive approaches, the inverse matrix method, is in fact ML under differential misclassification. This correction is intended to alert readers to the existence of a simple closed-form ML estimator for the odds ratio in this setting so that they may avoid assuming that a commercially inaccessible optimization routine must be sought to implement ML.
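The matrix-method idea behind this correction is simple: if Pi maps true to observed exposure classifications, observed counts can be back-corrected by applying Pi⁻¹ to each outcome group before computing the odds ratio. A minimal sketch with hypothetical sensitivity/specificity and expected (rather than sampled) counts:

```python
import numpy as np

# Misclassification matrix: Pi[i, j] = P(observe level i | true level j),
# rows/cols ordered (unexposed, exposed); se/sp are assumed known here.
se, sp = 0.9, 0.8
Pi = np.array([[sp, 1.0 - se],
               [1.0 - sp, se]])

# Hypothetical true exposure counts [unexposed, exposed] in cases and controls
true_cases = np.array([60.0, 40.0])
true_ctrls = np.array([80.0, 20.0])
obs_cases = Pi @ true_cases            # expected observed counts
obs_ctrls = Pi @ true_ctrls

def odds_ratio(cases, ctrls):
    return (cases[1] * ctrls[0]) / (cases[0] * ctrls[1])

naive_or = odds_ratio(obs_cases, obs_ctrls)            # biased toward the null
corrected_or = odds_ratio(np.linalg.solve(Pi, obs_cases),
                          np.linalg.solve(Pi, obs_ctrls))
```

With expected counts the correction recovers the true odds ratio (8/3 here, versus a naive value near 1.8). With real sampled counts the inversion is only approximately unbiased and can even produce negative corrected cells when the error rates are large relative to the sample, which is part of why the likelihood framing discussed in the abstract matters.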

11.
MOTIVATION: Ranking gene feature sets is a key issue for both phenotype classification, for instance, tumor classification in a DNA microarray experiment, and prediction in the context of genetic regulatory networks. Two broad methods are available to estimate the error (misclassification rate) of a classifier. Resubstitution fits a single classifier to the data, and applies this classifier in turn to each data observation. Cross-validation (in leave-one-out form) removes each observation in turn, constructs the classifier, and then computes whether this leave-one-out classifier correctly classifies the deleted observation. Resubstitution typically underestimates classifier error, severely so in many cases. Cross-validation has the advantage of producing an effectively unbiased error estimate, but the estimate is highly variable. In many applications it is not the misclassification rate per se that is of interest, but rather the construction of gene sets that have the potential to classify or predict. Hence, one needs to rank feature sets based on their performance. RESULTS: A model-based approach is used to compare the ranking performances of resubstitution and cross-validation for classification based on real-valued feature sets and for prediction in the context of probabilistic Boolean networks (PBNs). For classification, a Gaussian model is considered, along with classification via linear discriminant analysis and the 3-nearest-neighbor classification rule. Prediction is examined in the steady-state distribution of a PBN. Three metrics are proposed to compare feature-set ranking based on error estimation with ranking based on the true error, which is known owing to the model-based approach. In all cases, resubstitution is competitive with cross-validation relative to ranking accuracy. This is in addition to the enormous savings in computation time afforded by resubstitution.

12.
Suicide is a leading cause of death worldwide. Although research has made strides in better defining suicidal behaviors, there has been less focus on accurate measurement. Currently, the widespread use of self-report, single-item questions to assess suicide ideation, plans and attempts may contribute to measurement problems and misclassification. We examined the validity of single-item measurement and the potential for statistical errors. Over 1,500 participants completed an online survey containing single-item questions regarding a history of suicidal behaviors, followed by questions with more precise language, multiple response options and narrative responses to examine the validity of single-item questions. We also conducted simulations to test whether common statistical tests are robust against the degree of misclassification produced by the use of single-items. We found that 11.3% of participants that endorsed a single-item suicide attempt measure engaged in behavior that would not meet the standard definition of a suicide attempt. Similarly, 8.8% of those who endorsed a single-item measure of suicide ideation endorsed thoughts that would not meet standard definitions of suicide ideation. Statistical simulations revealed that this level of misclassification substantially decreases statistical power and increases the likelihood of false conclusions from statistical tests. Providing a wider range of response options for each item reduced the misclassification rate by approximately half. Overall, the use of single-item, self-report questions to assess the presence of suicidal behaviors leads to misclassification, increasing the likelihood of statistical decision errors. Improving the measurement of suicidal behaviors is critical to increase understanding and prevention of suicide.
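The power-loss mechanism described in this abstract can be demonstrated with a generic Monte Carlo simulation (not the authors' simulation design; group sizes, effect size, and the 15% flip rate are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

def power(flip_rate, n_per_group=100, delta=0.5, reps=400):
    """Monte Carlo power of a two-sample z-test at alpha = 0.05 when a
    fraction of the binary group labels is recorded incorrectly."""
    hits = 0
    for _ in range(reps):
        g = np.repeat([0, 1], n_per_group)            # true group membership
        x = rng.normal(delta * g.astype(float), 1.0)  # outcome with true effect
        g_obs = np.where(rng.random(g.size) < flip_rate, 1 - g, g)
        a, b = x[g_obs == 0], x[g_obs == 1]
        se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
        hits += abs(b.mean() - a.mean()) / se > 1.96
    return hits / reps

p_clean = power(0.0)     # correctly classified group labels
p_noisy = power(0.15)    # 15% of labels misclassified
```

Random label flips shrink the observable mean difference by roughly a factor of (1 − 2·flip_rate), so even modest misclassification rates of the kind measured in the study translate into a substantial drop in power for a fixed sample size.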

13.
We used the EM algorithm in the context of a joint Poisson regression analysis of cancer and non-cancer mortality in the Radiation Effects Research Foundation (RERF) Life Span Study (LSS) to assess whether the observed increased risk of non-cancer death due to radiation exposure (Shimizu et al., RERF Technical Report 02-91, 1991) can be attributed solely to misclassification of cancer as non-cancer on death certificates. We show that greater levels of dose-independent misclassification than are indicated by a series of autopsies conducted on a subset of LSS members would be required to explain the non-cancer dose response, but that a relatively small amount of dose-dependence in the misclassification of cancer would explain the result. The adjustment for misclassification also results in higher risk estimates for cancer mortality. We review applications of similar statistical methods in other contexts and discuss extensions of the methods to more than two causes of death.

14.

Background

Misclassification has been shown to have a high prevalence in binary responses in both livestock and human populations. Leaving these errors uncorrected before analyses will have a negative impact on the overall goal of genome-wide association studies (GWAS), including reducing predictive power. A liability threshold model that accommodates misclassification was developed to assess the effects of mis-diagnostic errors on GWAS. Four simulated scenarios of case–control datasets were generated. Each dataset consisted of 2000 individuals and was analyzed with varying odds ratios of the influential SNPs and misclassification rates of 5% and 10%.

Results

Analyses of binary responses subject to misclassification resulted in underestimation of influential SNPs and failed to estimate the true magnitude and direction of the effects. Once the misclassification algorithm was applied there was a 12% to 29% increase in accuracy, and a substantial reduction in bias. The proposed method was able to capture the majority of the most significant SNPs that were not identified in the analysis of the misclassified data. In fact, in one of the simulation scenarios, 33% of the influential SNPs were not identified using the misclassified data, compared with the analysis using the data without misclassification. However, using the proposed method, only 13% were not identified. Furthermore, the proposed method was able to identify with high probability a large portion of the truly misclassified observations.

Conclusions

The proposed model provides a statistical tool to correct or at least attenuate the negative effects of misclassified binary responses in GWAS. Across different levels of misclassification probability as well as odds ratios of significant SNPs, the model proved to be robust. In fact, SNP effects and the misclassification probability were accurately estimated, and the truly misclassified observations were identified with high probabilities compared to non-misclassified responses. This study was limited to situations where the misclassification probability was assumed to be the same in cases and controls, which is not always the case for real human disease data. Thus, it is of interest to evaluate the performance of the proposed model in that situation, which is the current focus of our research.

15.
In this paper we introduce a misclassification model for the meiosis I non-disjunction fraction in numerical chromosomal anomalies known as trisomies. We obtain posteriors, and their moments, for the probability that a non-disjunction occurs in the first division of meiosis and for the misclassification errors. We also extend previous works by providing the exact posterior, and its moments, for the probability that a non-disjunction occurs in the first division of meiosis assuming the model proposed in the literature which does not consider that data are subject to misclassification. We perform Monte Carlo studies in order to compare Bayes estimates obtained by using both models. An application to Down Syndrome data is also presented.

16.
Most parametric methods for detecting foreign genes in bacterial genomes use a scoring function that measures the atypicality of a gene with respect to the bulk of the genome. Genes whose features are sufficiently atypical (lying beyond a threshold value) are deemed foreign. Yet these methods fail when the range of features of donor genomes overlaps with that of the recipient genome, leading to misclassification of foreign and native genes; existing parametric methods choose threshold parameters to balance these error rates. To circumvent this problem, we have developed a two-pronged approach to minimize the misclassification of genes. First, beyond classifying genes as merely atypical, a gene clustering method based on Jensen-Shannon entropic divergence identifies classes of foreign genes that are also similar to each other. Second, genome position is used to reassign genes among classes whose composition features overlap. This process minimizes the misclassification of either native or foreign genes that are weakly atypical. The performance of this approach was assessed using artificial chimeric genomes and then applied to the well-characterized Escherichia coli K12 genome. Not only were foreign genes identified with a high degree of accuracy, but genes originating from the same donor organism were effectively grouped.

17.
Background: Recent research suggests that the Bayesian paradigm may be useful for modeling biases in epidemiological studies, such as those due to misclassification and missing data. We used Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to the potential effect of these two important sources of bias. Methods: We used data from a study of the joint associations of radiotherapy and smoking with primary lung cancer among breast cancer survivors. We used Bayesian methods to provide an operational way to combine both validation data and expert opinion to account for misclassification of the two risk factors and missing data. For comparative purposes we considered a "full model" that allowed for both misclassification and missing data, along with alternative models that considered only misclassification or missing data, and the naïve model that ignored both sources of bias. Results: We identified noticeable differences between the four models with respect to the posterior distributions of the odds ratios that described the joint associations of radiotherapy and smoking with primary lung cancer. Despite those differences we found that the general conclusions regarding the pattern of associations were the same regardless of the model used. Overall our results indicate a nonsignificantly decreased lung cancer risk due to radiotherapy among nonsmokers, and a mildly increased risk among smokers. Conclusions: We described easy-to-implement Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to misclassification and missing data.

18.

Background

Imperfect diagnostic testing reduces the power to detect significant predictors in classical cross-sectional studies. Assuming that the misclassification in diagnosis is random this can be dealt with by increasing the sample size of a study. However, the effects of imperfect tests in longitudinal data analyses are not as straightforward to anticipate, especially if the outcome of the test influences behaviour. The aim of this paper is to investigate the impact of imperfect test sensitivity on the determination of predictor variables in a longitudinal study.

Methodology/Principal Findings

To deal with imperfect test sensitivity affecting the response variable, we transformed the observed response variable into a set of possible temporal patterns of true disease status, whose prior probability was a function of the test sensitivity. We fitted a Bayesian discrete time survival model using an MCMC algorithm that treats the true response patterns as unknown parameters in the model. We applied our approach to epidemiological data of bovine tuberculosis outbreaks in England and investigated the effect of reduced test sensitivity in the determination of risk factors for the disease. We found that reduced test sensitivity led to changes to the collection of risk factors associated with the probability of an outbreak that were chosen in the ‘best’ model and to an increase in the uncertainty surrounding the parameter estimates for a model with a fixed set of risk factors that were associated with the response variable.

Conclusions/Significance

We propose a novel algorithm to fit discrete survival models for longitudinal data where values of the response variable are uncertain. When analysing longitudinal data, uncertainty surrounding the response variable will affect the significance of the predictors and should therefore be accounted for either at the design stage by increasing the sample size or at the post analysis stage by conducting appropriate sensitivity analyses.

19.
Zucker DM, Spiegelman D. Biometrics 2004, 60(2): 324-334
We consider the Cox proportional hazards model with discrete-valued covariates subject to misclassification. We present a simple estimator of the regression parameter vector for this model. The estimator is based on a weighted least squares analysis of weighted-averaged transformed Kaplan-Meier curves for the different possible configurations of the observed covariate vector. Optimal weighting of the transformed Kaplan-Meier curves is described. The method is designed for the case in which the misclassification rates are known or are estimated from an external validation study. A hybrid estimator for situations with an internal validation study is also described. When there is no misclassification, the regression coefficient vector is small in magnitude, and the censoring distribution does not depend on the covariates, our estimator has the same asymptotic covariance matrix as the Cox partial likelihood estimator. We present results of a finite-sample simulation study under Weibull survival in the setting of a single binary covariate with known misclassification rates. In this simulation study, our estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We illustrate the method on data from a study of the relationship between trans-unsaturated dietary fat consumption and cardiovascular disease incidence.

20.
Summary. Missing data, measurement error, and misclassification are three important problems in many research fields, such as epidemiological studies. It is well known that missing data and measurement error in covariates may lead to biased estimation. Misclassification may be considered as a special type of measurement error, for categorical data. Nevertheless, we treat misclassification as a different problem from measurement error because statistical models for them are different. Indeed, in the literature, methods for these three problems were generally proposed separately, given that statistical modeling for them is very different. The problem is more challenging in a longitudinal study with nonignorable missing data. In this article, we consider estimation in generalized linear models under these three incomplete data models. We propose a general approach based on expected estimating equations (EEEs) to solve these three incomplete data problems in a unified fashion. This EEE approach can be easily implemented and its asymptotic covariance can be obtained by sandwich estimation. Intensive simulation studies are performed under various incomplete data settings. The proposed method is applied to a longitudinal study of oral bone density in relation to body bone density.
