Similar Articles
 20 similar articles found (search time: 78 ms)
1.
Multiple imputation (MI) is used to handle missing at random (MAR) data. Despite warnings from statisticians, continuous variables are often recoded into binary variables. With MI it is important that the imputation and analysis models are compatible: variables should be imputed in the same form in which they appear in the analysis model. With an encoded binary variable, more accurate imputations may be obtained by imputing the underlying continuous variable. We conducted a simulation study to explore how best to impute a binary variable that was created from an underlying continuous variable. We generated a completely observed continuous outcome associated with an incomplete binary covariate that is a categorized version of an underlying continuous covariate, and an auxiliary variable associated with the underlying continuous covariate. We simulated data with several sample sizes, and set 25% and 50% of the covariate data to MAR dependent on the outcome and the auxiliary variable. We compared the performance of five imputation methods: (a) imputation of the binary variable using logistic regression; (b) imputation of the continuous variable using linear regression, then categorizing into the binary variable; (c, d) imputation of both the continuous and binary variables using fully conditional specification (FCS) and multivariate normal imputation; (e) substantive-model-compatible (SMC) FCS. Bias and standard errors were large when only the continuous variable was imputed. The other methods performed adequately. Imputation of both the binary and continuous variables using FCS often encountered mathematical difficulties. We recommend the SMC-FCS method, as it performed best in our simulation studies.
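Method (b), imputing the underlying continuous covariate by linear regression and then categorizing, can be sketched as a single stochastic imputation (a minimal numpy illustration; the data-generating mechanism, cutpoint, and all variable names are invented and do not reproduce the paper's simulation design):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Underlying continuous covariate, auxiliary variable, and outcome
z = rng.normal(size=n)                      # continuous covariate
a = z + rng.normal(scale=0.5, size=n)       # auxiliary variable
y = 1.0 * (z > 0) + rng.normal(size=n)      # outcome depends on binarized z
x = (z > 0).astype(float)                   # analysis covariate: binarized z

# MAR: missingness in the covariate depends on observed y and a
p_miss = 1 / (1 + np.exp(-(y + a)))
miss = rng.uniform(size=n) < 0.25 * p_miss / p_miss.mean()
obs = ~miss

# Impute z from (y, a) by linear regression, adding residual noise
X_design = np.column_stack([np.ones(obs.sum()), y[obs], a[obs]])
beta, *_ = np.linalg.lstsq(X_design, z[obs], rcond=None)
resid = z[obs] - X_design @ beta
sigma = resid.std(ddof=3)

X_mis = np.column_stack([np.ones(miss.sum()), y[miss], a[miss]])
z_imp = X_mis @ beta + rng.normal(scale=sigma, size=miss.sum())

# Categorize the imputed continuous values back into the binary covariate
x_completed = x.copy()
x_completed[miss] = (z_imp > 0).astype(float)
```

A proper MI analysis would repeat the draw several times and pool with Rubin's rules; only the single-imputation step is shown here.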

2.
In a random coefficient repeated measures model, the regression coefficients relating the observations to some underlying variable, such as time, are themselves taken to be randomly distributed over experimental units. In this paper, a general approach to repeated measures analysis is extended to this wider model. Three specific error structures for the random regression coefficients are studied: the random coefficients variance matrix is taken to be (i) diagonal, (ii) proportional to the identity matrix, or (iii) completely general. An example is analyzed to illustrate the procedure.

3.
Missing outcomes or irregularly timed multivariate longitudinal data frequently occur in clinical trials or biomedical studies. The multivariate t linear mixed model (MtLMM) has been shown to be a robust approach to modeling multi-outcome continuous repeated measures in the presence of outliers or heavy-tailed noise. This paper presents a framework for fitting the MtLMM with an arbitrary missing data pattern embodied within multiple outcome variables recorded at irregular occasions. To address the serial correlation among the within-subject errors, a damped exponential correlation structure is considered in the model. Under the missing at random mechanism, an efficient alternating expectation-conditional maximization (AECM) algorithm is used to carry out estimation of parameters and imputation of missing values. Techniques for the estimation of random effects and the prediction of future responses are also investigated. Applications to an HIV/AIDS study and a pregnancy study involving multivariate longitudinal data with missing outcomes, as well as a simulation study, highlight the superior estimation, imputation, and prediction performance of MtLMMs.

4.
Reiter, Jerome P. Biometrika (2008), 95(4), 933-946
When some of the records used to estimate the imputation models in multiple imputation are not used or available for analysis, the usual multiple imputation variance estimator has positive bias. We present an alternative approach that enables unbiased estimation of variances and, hence, calibrated inferences in such contexts. First, using all records, the imputer samples m values of the parameters of the imputation model. Second, for each parameter draw, the imputer simulates the missing values for all records n times. From these mn completed datasets, the imputer can analyse or disseminate the appropriate subset of records. We develop methods for interval estimation and significance testing for this approach. Methods are presented in the context of multiple imputation for measurement error.
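The two-stage sampling scheme described above can be sketched for a toy univariate normal model (a minimal numpy illustration; the MCAR mechanism, sample sizes, and names are invented, and the paper's combining rules for interval estimation are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_imp = 5, 3                        # m parameter draws, n imputations each

y = rng.normal(loc=10.0, scale=2.0, size=500)
miss = rng.uniform(size=500) < 0.3     # MCAR for illustration only
y_obs = y[~miss]
n_obs, n_mis = y_obs.size, int(miss.sum())

completed = []                         # will hold m * n imputations in total
for _ in range(m):
    # Step 1: draw (mu, sigma^2) from their posterior under a flat prior
    sigma2 = (n_obs - 1) * y_obs.var(ddof=1) / rng.chisquare(n_obs - 1)
    mu = rng.normal(y_obs.mean(), np.sqrt(sigma2 / n_obs))
    for _ in range(n_imp):
        # Step 2: simulate the missing values given this parameter draw
        y_comp = y.copy()
        y_comp[miss] = rng.normal(mu, np.sqrt(sigma2), size=n_mis)
        completed.append(y_comp)
```

Each of the mn completed datasets (or the relevant subset of its records) would then be analysed and the results combined with the paper's two-stage rules.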

5.
Multiple lower limits of quantification (MLOQs) result when various laboratories are involved in the analysis of concentration data and some observations are too low to be quantified. For normally distributed data under MLOQs, only Helsel's multiple regression method exists for estimating the mean and variance. We propose a simple imputation method and two new maximum likelihood estimation methods: the multiple truncated sample method and the multiple censored sample method. A simulation study is conducted to compare the performance of the newly introduced methods to Helsel's via the root mean squared error (RMSE) and bias of the parameter estimates. Two and four lower limits of quantification (LLOQs), various amounts of unquantifiable observations, and two sample sizes are studied. Furthermore, robustness is investigated under model misspecification. The methods perform with decreasing accuracy for increasing rates of unquantified observations. Increasing sample sizes lead to smaller bias. There is almost no change in performance between two and four LLOQs. The magnitude of the variance impairs the performance of all methods. For a smaller variance, the multiple censored sample method yields superior estimates regarding RMSE and bias, whereas Helsel's method is superior regarding bias for a larger variance. Under model misspecification, Helsel's method was inferior to the other methods. For estimating the mean, the multiple censored sample method performed better, whereas the multiple truncated sample method performed best for estimating the variance. In summary, for a large sample size and normally distributed data we recommend Helsel's method; otherwise, the multiple censored sample method should be used to obtain estimates of the mean and variance of data including MLOQs.
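The multiple-censored-sample idea, in which each unquantifiable observation contributes the normal probability of falling below its own lab's LLOQ to the likelihood, can be sketched with scipy (a minimal illustration; the two LLOQ values, sample size, and true parameters are invented):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
true_mu, true_sigma = 5.0, 1.0
x = rng.normal(true_mu, true_sigma, size=400)

# Two labs with different lower limits of quantification
lloq = np.where(np.arange(400) % 2 == 0, 4.0, 4.5)
censored = x < lloq                        # unquantifiable observations
x_obs = np.where(censored, np.nan, x)      # what the analyst actually sees

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)              # keep sigma positive
    # Quantified values contribute a density term,
    # censored values a left-tail probability term
    ll_obs = stats.norm.logpdf(x_obs[~censored], mu, sigma).sum()
    ll_cen = stats.norm.logcdf((lloq[censored] - mu) / sigma).sum()
    return -(ll_obs + ll_cen)

res = optimize.minimize(neg_loglik, x0=[np.nanmean(x_obs), 0.0],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

The truncated-sample variant would instead condition the observed-data density on exceeding the LLOQ; only the censored likelihood is shown here.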

6.
Consider an experiment where a nonlinear continuous functional relationship exists between y and X. Assume that this relationship has been measured at n replicated points of X from each of t treatments or populations. Assume further that the X are fixed unknown vectors and that the location parameter v is either a fixed unknown vector or a vector of random variables. In the first case, various linear hypotheses about v are to be tested, such as tests for main effects and interaction; in the second case, the mean and variance of the random variable v are to be estimated. A two-step procedure based on asymptotic theory is presented to test hypotheses or develop estimates for the fixed effects or random effects functional errors-in-variables model. An example of a one-way random effects model is given.

7.
Roy, J., Lin, X. Biometrics (2000), 56(4), 1047-1054
Multiple outcomes are often used to properly characterize an effect of interest. This paper proposes a latent variable model for the situation where repeated measures over time are obtained on each outcome. These outcomes are assumed to measure an underlying quantity of main interest from different perspectives. We relate the observed outcomes using regression models to a latent variable, which is then modeled as a function of covariates by a separate regression model. Random effects are used to model the correlation due to repeated measures of the observed outcomes and the latent variable. An EM algorithm is developed to obtain maximum likelihood estimates of model parameters. Unit-specific predictions of the latent variables are also calculated. This method is illustrated using data from a national panel study on changes in methadone treatment practices.

8.
Zhiguo Li, Peter Gilbert, Bin Nan. Biometrics (2008), 64(4), 1247-1255
Grouped failure time data arise often in HIV studies. In a recent preventive HIV vaccine efficacy trial, immune responses generated by the vaccine were measured from a case-cohort sample of vaccine recipients, who were subsequently evaluated for the study endpoint of HIV infection at prespecified follow-up visits. Gilbert et al. (2005, Journal of Infectious Diseases 191, 666-677) and Forthal et al. (2007, Journal of Immunology 178, 6596-6603) analyzed the association between the immune responses and HIV incidence with a Cox proportional hazards model, treating the HIV infection diagnosis time as a right-censored random variable. The data, however, are of the form of grouped failure time data with case-cohort covariate sampling, and we propose an inverse selection probability-weighted likelihood method for fitting the Cox model to these data. The method allows covariates to be time dependent, and uses multiple imputation to accommodate covariate data that are missing at random. We establish asymptotic properties of the proposed estimators, and present simulation results showing their good finite sample performance. We apply the method to the HIV vaccine trial data, showing that higher antibody levels are associated with a lower hazard of HIV infection.

9.
Introduction: Monitoring early diagnosis is a priority of cancer policy in England. Information on stage has not always been available for a large proportion of patients, however, which may bias temporal comparisons. We previously estimated, using multiple imputation, that early-stage diagnosis of colorectal cancer rose from 32% to 44% during 2008–2013. Here we examine the underlying assumptions of multiple imputation for missing stage using the same dataset.
Methods: Individually linked cancer registration, Hospital Episode Statistics (HES), and audit data were examined. Six imputation models including different interaction terms, post-diagnosis treatment, and survival information were assessed, and comparisons drawn with the a priori optimal model. Models were further tested by setting stage values to missing for some patients under one plausible mechanism, then comparing actual and imputed stage distributions for these patients. Finally, a pattern-mixture sensitivity analysis was conducted.
Results: Data from 196,511 colorectal patients were analysed, with 39.2% missing stage. Inclusion of survival time increased the accuracy of imputation: the odds ratio for change in early-stage diagnosis during 2008–2013 was 1.7 (95% CI: 1.6–1.7) with survival to 1 year included, compared with 1.9 (95% CI: 1.9–2.0) with no survival information. Imputation estimates of stage were accurate in one plausible simulation. Pattern-mixture analyses indicated that our previous conclusions would change materially only if stage were misclassified for 20% of the patients categorised as late stage.
Interpretation: Multiple imputation models can substantially reduce bias from missing stage, but data on patients' one-year survival should be included for highest accuracy.

10.
Multiple imputation (MI) is increasingly popular for handling multivariate missing data. Two general approaches are available in standard computer packages: MI based on the posterior distribution of incomplete variables under a multivariate (joint) model, and fully conditional specification (FCS), which imputes missing values using univariate conditional distributions for each incomplete variable given all the others, cycling iteratively through the univariate imputation models. In the context of longitudinal or clustered data, it is not clear whether these approaches result in consistent estimates of regression coefficients and variance components when the analysis model of interest is a linear mixed effects model (LMM) that includes both random intercepts and slopes, and either the covariates, or both the covariates and the outcome, contain missing information. In the current paper, we compared the performance of seven different MI methods for handling missing values in longitudinal and clustered data in the context of fitting LMMs with both random intercepts and slopes. We study the theoretical compatibility between specific imputation models fitted under each of these approaches and the LMM, and also conduct simulation studies in both the longitudinal and clustered data settings. Simulations were motivated by analyses of the association between body mass index (BMI) and quality of life (QoL) in the Longitudinal Study of Australian Children (LSAC). Our findings showed that the relative performance of MI methods varies according to whether the incomplete covariate has fixed or random effects and whether there is missingness in the outcome variable. We showed via simulation that compatible imputation and analysis models result in consistent estimation of both regression parameters and variance components. We illustrate our findings with the analysis of LSAC data.

11.
Multiple imputation has become a widely accepted technique to deal with the problem of incomplete data. Typically, imputation of missing values and the statistical analysis are performed separately. Therefore, the imputation model has to be consistent with the analysis model. If the data are analyzed with a mixture model, the parameter estimates are usually obtained iteratively. Thus, if the data are missing not at random, parameter estimation and treatment of missingness should be combined. We solve both problems by simultaneously imputing values using the data augmentation method and estimating parameters using the EM algorithm. This iterative procedure ensures that the missing values are properly imputed given the current parameter estimates. Properties of the parameter estimates were investigated in a simulation study. The results are illustrated using data from the National Health and Nutrition Examination Survey.
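The EM side of such a procedure can be illustrated for a two-component normal mixture with fully observed data (a minimal numpy/scipy sketch; the data-augmentation step for imputing missing values, which the paper interleaves with EM, is omitted, and all values are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])

# EM for a two-component normal mixture
pi, mu, sigma = 0.5, np.array([-1.0, 5.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each point belongs to component 2
    d1 = (1 - pi) * stats.norm.pdf(x, mu[0], sigma[0])
    d2 = pi * stats.norm.pdf(x, mu[1], sigma[1])
    r = d2 / (d1 + d2)
    # M-step: update the mixture weight, means, and standard deviations
    pi = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sigma = np.sqrt(np.array([
        np.average((x - mu[0]) ** 2, weights=1 - r),
        np.average((x - mu[1]) ** 2, weights=r),
    ]))
```

In the combined scheme, a data-augmentation draw of the missing values would be inserted before each E-step so that imputation always uses the current parameter estimates.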

12.
We develop an approach, based on multiple imputation, to using auxiliary variables to recover information from censored observations in survival analysis. We apply the approach to data from an AIDS clinical trial comparing ZDV and placebo, in which CD4 count is the time-dependent auxiliary variable. To facilitate imputation, a joint model is developed for the data, which includes a hierarchical change-point model for CD4 counts and a time-dependent proportional hazards model for the time to AIDS. Markov chain Monte Carlo methods are used to multiply impute event times for censored cases. The augmented data are then analyzed and the results combined using standard multiple-imputation techniques. A comparison of our multiple-imputation approach to simply analyzing the observed data indicates that multiple imputation leads to a small change in the estimated effect of ZDV and smaller estimated standard errors. A sensitivity analysis suggests that the qualitative findings are reproducible under a variety of imputation models. A simulation study indicates that improved efficiency over standard analyses and partial corrections for dependent censoring can result. An issue that arises with our approach, however, is whether the analysis of primary interest and the imputation model are compatible.

13.
Lyles, R. H., MacFarlane, G. Biometrics (2000), 56(2), 634-639
When repeated measures of an exposure variable are obtained on individuals, it can be of epidemiologic interest to relate the slope of this variable over time to a subsequent response. Subject-specific estimates of this slope are measured with error, as are corresponding estimates of the level of exposure, i.e., the intercept of a linear regression over time. Because the intercept is often correlated with the slope and may also be associated with the outcome, each error-prone covariate (intercept and slope) is a potential confounder, thereby tending to accentuate potential biases due to measurement error. Under a familiar mixed linear model for the exposure measurements, we present closed-form estimators for the true parameters of interest in the case of a continuous outcome with complete and equally timed follow-up for all subjects. Generalizations to handle incomplete follow-up, other types of outcome variables, and additional fixed covariates are illustrated via maximum likelihood. We provide examples using data from the Multicenter AIDS Cohort Study. In these examples, substantial adjustments are made to uncorrected parameter estimates corresponding to the health-related effects of exposure variable slopes over time. We illustrate the potential impact of such adjustments on the interpretation of an epidemiologic analysis.

14.
In the context of analyzing multiple functional limitation responses collected longitudinally from the Longitudinal Study of Aging (LSOA), we investigate the heterogeneity of these outcomes with respect to their associations with previous functional status and other risk factors in the presence of informative drop-out and confounding by baseline outcomes. We accommodate the longitudinal nature of the multiple outcomes with a unique extension of the nested random effects logistic model with an autoregressive structure to include drop-out and baseline outcome components with shared random effects. Estimation of fixed effects and variance components is by maximum likelihood with numerical integration. This shared parameter selection model assumes that drop-out is conditionally independent of the multiple functional limitation outcomes given the underlying random effect representing an individual's trajectory of functional status across time. Whereas it is not possible to fully assess the adequacy of this assumption, we assess the robustness of this approach by varying the assumptions underlying the proposed model, such as the random effects structure, the drop-out component, and omission of baseline functional outcomes as dependent variables in the model. Heterogeneity among the associations between each functional limitation outcome and a set of risk factors for functional limitation, such as previous functional limitation and physical activity, exists for the LSOA data of interest. Less heterogeneity is observed among the estimates of time-level random effects variance components that are allowed to vary across functional outcomes and time. We also note that, under an autoregressive structure, bias results from omitting the baseline outcome component linked to the follow-up outcome component by subject-level random effects.

15.
Harrell's c-index or concordance C has been widely used as a measure of separation of two survival distributions. In the absence of censored data, the c-index estimates the Mann–Whitney parameter Pr(X>Y), which has been repeatedly utilized in various statistical contexts. In the presence of randomly censored data, the c-index no longer estimates Pr(X>Y); rather, it estimates a parameter that involves the underlying censoring distributions. This is in contrast to Efron's maximum likelihood estimator of the Mann–Whitney parameter, which is recommended in the setting of random censorship.
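In the absence of censoring, the c-index reduces to the Mann–Whitney estimate of Pr(X>Y), which can be computed directly over all pairs (a minimal numpy sketch; the exponential survival times and group sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=300)   # survival times, group X
y = rng.exponential(scale=1.0, size=300)   # survival times, group Y

# Mann-Whitney estimate of Pr(X > Y): the proportion of (x, y) pairs
# in which x outlives y, with ties counted as 1/2
comp = x[:, None] - y[None, :]
p_hat = np.mean(comp > 0) + 0.5 * np.mean(comp == 0)
```

For exponential times with these scales, the true value is 2/3, so p_hat should land close to it; with censoring this pairwise estimator would instead target the censoring-dependent parameter described above.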

16.
Liu, M., Taylor, J. M., Belin, T. R. Biometrics (2000), 56(4), 1157-1163
This paper outlines a multiple imputation method for handling missing data in designed longitudinal studies. A random coefficients model is developed to accommodate incomplete multivariate continuous longitudinal data. Multivariate repeated measures are jointly modeled; specifically, an i.i.d. normal model is assumed for time-independent variables and a hierarchical random coefficients model is assumed for time-dependent variables in a regression model conditional on the time-independent variables and time, with heterogeneous error variances across variables and time points. Gibbs sampling is used to draw model parameters and for imputations of missing observations. An application to data from a study of startle reactions illustrates the model. A simulation study compares the multiple imputation procedure to the weighting approach of Robins, Rotnitzky, and Zhao (1995, Journal of the American Statistical Association 90, 106-121) that can be used to address similar data structures.

17.
Multiple imputation (MI) has emerged in the last two decades as a frequently used approach in dealing with incomplete data. Gaussian and log-linear imputation models are fairly straightforward to implement for continuous and discrete data, respectively. However, in missing data settings that include a mix of continuous and discrete variables, the lack of flexible models for the joint distribution of different types of variables can make the specification of the imputation model a daunting task. The widespread availability of software packages that are capable of carrying out MI under the assumption of joint multivariate normality allows applied researchers to address this complication pragmatically by treating the discrete variables as continuous for imputation purposes and subsequently rounding the imputed values to the nearest observed category. In this article, we compare several rounding rules for binary variables based on simulated longitudinal data sets that have been used to illustrate other missing-data techniques. Using a combination of conditional and marginal data generation mechanisms and imputation models, we study the statistical properties of multiple-imputation-based estimates for various population quantities under different rounding rules from bias and coverage standpoints. We conclude that a good rule should be driven by borrowing information from other variables in the system rather than relying on the marginal characteristics and should be relatively insensitive to imputation model specifications that may potentially be incompatible with the observed data. We also urge researchers to consider the applied context and specific nature of the problem, to avoid uncritical and possibly inappropriate use of rounding in imputation models.
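Naive rounding at 0.5 and one simple marginal-matching alternative, choosing the threshold so the rounded proportion of ones matches the observed proportion, can be sketched as follows (a numpy illustration of the general issue; this marginal rule is one textbook option and not necessarily the rule the article recommends, and the data mechanism is invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
b = (rng.uniform(size=n) < 0.2).astype(float)   # rare binary variable
miss = rng.uniform(size=n) < 0.4                # MCAR for illustration

# Treat the binary as continuous: impute from a normal fitted to the
# observed values, as a joint-normality MI package effectively would
mu, sd = b[~miss].mean(), b[~miss].std(ddof=1)
imp = rng.normal(mu, sd, size=miss.sum())

# Rule 1: naive rounding at 0.5
naive = (imp >= 0.5).astype(float)

# Rule 2: choose the threshold so the imputed proportion of ones
# matches the observed proportion (a marginal-matching rule)
thresh = np.quantile(imp, 1 - mu)
matched = (imp >= thresh).astype(float)
```

As the article argues, a better rule would also borrow information from other variables rather than rely on marginal characteristics alone; both rules above use only the marginal distribution.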

18.
Both neurocognitive deficits and schizophrenia are highly heritable. Genetic overlap between neurocognitive deficits and schizophrenia has been observed both in the general population and in clinical samples. This study aimed to examine whether the polygenic architecture of susceptibility to schizophrenia modifies neurocognitive performance in schizophrenia patients. Schizophrenia polygenic risk scores (PRSs) were first derived from the Psychiatric Genomics Consortium (PGC) study of schizophrenia, and the scores were then calculated in our independent sample of 1130 schizophrenia trios, who had PsychChip data and were part of the Schizophrenia Families from Taiwan project. Pseudocontrols generated from the nontransmitted parental alleles in these trios were compared with alleles in schizophrenia patients to assess the replicability of PGC-derived susceptibility variants. The schizophrenia PRS at a P-value threshold (PT) of 0.1 explained 0.2% of the variance in disease status in this Han-Taiwanese sample, and the score itself had a P-value of 0.05 in the association test with the disorder. Each patient underwent neurocognitive evaluation of sustained attention using the continuous performance test and of executive function using the Wisconsin Card Sorting Test. We applied a structural equation model to construct a neurocognitive latent variable estimated from multiple measured indices in these two tests, and then tested the association between the PRS and the neurocognitive latent variable. A higher schizophrenia PRS generated at the PT of 0.1 was significantly associated with poorer neurocognitive performance, with 0.5% of variance explained. Our findings indicate that schizophrenia susceptibility variants modify neurocognitive performance in schizophrenia patients.

19.
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.

20.
Miglioretti, D. L. Biometrics (2003), 59(3), 710-720
Health status is a complex outcome, often characterized by multiple measures. When assessing changes in health status over time, multiple measures are typically collected longitudinally. Analytic challenges posed by these multivariate longitudinal data are further complicated when the outcomes are combinations of continuous, categorical, and count data. To address these challenges, we propose a fully Bayesian latent transition regression approach for jointly analyzing a mixture of longitudinal outcomes from any distribution. Health status is assumed to be a categorical latent variable, and the multiple outcomes are treated as surrogate measures of the latent health state, observed with error. Using this approach, both baseline latent health state prevalences and the probabilities of transitioning between the health states over time are modeled as functions of covariates. The observed outcomes are related to the latent health states through regression models that include subject-specific effects to account for residual correlation among repeated measures over time, and covariate effects to account for differential measurement of the latent health states. We illustrate our approach with data from a longitudinal study of back pain.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号