Similar Articles
1.
We develop regression models for limited and censored data based on a mixture of the log-power-normal and Bernoulli-type distributions. A likelihood-based approach is implemented for parameter estimation, and a small-scale simulation study is conducted to evaluate parameter recovery, with emphasis on estimation bias. The main conclusion is that the approach is quite satisfactory for moderate and large sample sizes. A real data example, the safety and immunogenicity study of measles vaccine in Haiti, is presented to illustrate how different models can be used to fit this type of data. The asymmetric models considered appear to give the best fit for the data set under study, revealing significance of the explanatory variable sex, which is not found significant under the log-normal model.
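A minimal sketch of the two-part likelihood idea, assuming a standard log-normal in place of the log-power-normal (which carries an extra shape parameter); the data, detection limit, and starting values below are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                      # a covariate (e.g., sex coded numerically)
at_limit = rng.random(n) < 0.2              # observations stuck at the detection limit
y = np.where(at_limit, 0.1,
             np.exp(0.5 + 0.8 * x + rng.normal(scale=0.6, size=n)))

def negloglik(theta):
    eta, b0, b1, logsig = theta
    p = 1 / (1 + np.exp(-eta))              # P(at detection limit), kept in (0, 1)
    sig = np.exp(logsig)
    ll_limit = np.log(p) * at_limit.sum()   # Bernoulli part
    z = (np.log(y[~at_limit]) - b0 - b1 * x[~at_limit]) / sig
    # log-normal regression part for the uncensored observations
    ll_cont = np.sum(np.log1p(-p) + norm.logpdf(z) - np.log(sig) - np.log(y[~at_limit]))
    return -(ll_limit + ll_cont)

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
print(fit.x)                                # logit(p), b0, b1, log(sigma)
```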

2.
Auxiliary covariate data are often collected in biomedical studies when the primary exposure variable is assessed on only a subset of the study subjects. We investigate a semiparametric estimated-likelihood approach for generalized linear mixed models (GLMMs) in the presence of a continuous auxiliary variable, using a kernel smoother to handle the continuous auxiliary data. The method can address missing or mismeasured covariate problems in a variety of applications when an auxiliary variable is available and cluster sizes are not too small. Simulation results show that the proposed method outperforms both the approach that ignores the random effects in the GLMM and the approach that uses only the validation data set. We illustrate the method with data from a recent environmental epidemiology study of maternal serum 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene levels in relation to preterm birth.
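A minimal sketch of the kernel-smoothing building block, assuming a Nadaraya-Watson estimator of the exposure given the auxiliary variable, computed from a validation subsample; the Gaussian kernel, bandwidth h, and simulated data are all assumptions for illustration.

```python
import numpy as np

def nw_smooth(w_grid, w_valid, x_valid, h=0.5):
    # Gaussian kernel weight of each validation point at each grid value
    k = np.exp(-0.5 * ((w_grid[:, None] - w_valid[None, :]) / h) ** 2)
    return (k @ x_valid) / k.sum(axis=1)    # Nadaraya-Watson estimate of E[X | W]

rng = np.random.default_rng(1)
w_valid = rng.normal(size=100)              # auxiliary variable, validation set
x_valid = np.sin(w_valid) + rng.normal(scale=0.3, size=100)  # observed exposure
print(nw_smooth(np.linspace(-2, 2, 5), w_valid, x_valid))
```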

3.
In a linear mixed-effects model, it is common practice to assume that the random effects follow a parametric distribution, such as a normal distribution with mean zero. In the context of variable selection, however, substantial violation of the normality assumption can affect subset selection and lead to poor interpretation or even incorrect results. In nonparametric random-effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects paired with them. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the resulting bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects, allowing it to change across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify the subsets of fixed and random effects to include in the model. Simulations are provided to evaluate our approach and compare it with existing ones. We then apply the new approach to a real data example, a cross-country and interlaboratory rodent uterotrophic bioassay.

4.
Microarrays provide a valuable tool for the quantification of gene expression. Usually, however, the limited number of replicates leads to unsatisfactory variance estimates in a gene-wise mixed-model analysis. Because thousands of genes are available, it is desirable to combine information across genes. When more than two tissue types or treatments are to be compared, it may be advisable to treat the array effect as random; information between arrays can then be recovered, which can increase estimation accuracy. We propose a method of variance-component estimation across genes for a linear mixed model with two random effects; the method extends to models with more than two random effects. We assume that the variance components follow a log-normal distribution and that the sums of squares from the gene-wise analysis, given the true variance components, follow a scaled χ²-distribution, and we adopt an empirical Bayes approach: each variance component is estimated by the expectation of its posterior distribution. The new method is evaluated in a simulation study. Differentially expressed genes are more likely to be detected by tests based on these variance estimates than by tests based on gene-wise variance estimates, an effect most visible in studies with small numbers of arrays. An analysis of a real data set on maize endosperm shows that the method works well.
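A minimal sketch of the empirical Bayes step under the two stated assumptions (a log-normal prior on the variance component and a scaled χ² sum of squares): the posterior expectation for one gene is computed by numerical integration. The hyperparameters m and tau are assumed to have been estimated from all genes, and the numbers are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import chi2

def eb_variance(S, d, m, tau, grid_size=400):
    # grid over log(sigma2) covering the bulk of the log-normal prior
    log_s2 = np.linspace(m - 6 * tau, m + 6 * tau, grid_size)
    s2 = np.exp(log_s2)
    lik = chi2.pdf(S / s2, df=d) / s2                  # density of S = sigma2 * chi2(d)
    prior = np.exp(-0.5 * ((log_s2 - m) / tau) ** 2)   # prior on log(sigma2), unnormalized
    post = lik * prior
    # posterior expectation E[sigma2 | S]
    return trapezoid(s2 * post, log_s2) / trapezoid(post, log_s2)

print(eb_variance(S=4.2, d=3, m=0.0, tau=0.8))
```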

5.
Several analyses of the geographic variation of mortality rates have been proposed in the literature. Poisson models that incorporate random effects to capture extra variability are widely used; the typical modelling approach uses normal random effects to accommodate local spatial autocorrelation. When spatial autocorrelation is absent but overdispersion persists, a discrete mixture model is an alternative, but no technique had yet been developed for identifying regions with significantly high or low risk under that model. Given how important this information is for epidemiologists formulating hypotheses about potential risk factors, this paper derives several procedures for obtaining confidence intervals for relative risks: the standard information-based method and four others based on bootstrap techniques, namely the asymptotic bootstrap, the percentile bootstrap, the BC bootstrap, and a modified information-based method. All are compared empirically through an application to mortality from cardiovascular disease among women in Navarra, Spain, during 1988–1994. In the small-area example considered here, the information-based method is adequate for estimating standard errors of the component means in the discrete mixture model, but not for providing standard errors of the estimated relative risks and hence for constructing confidence intervals for the relative risk associated with each region; the bootstrap-based methods are therefore recommended for this purpose. More specifically, the BC method seems to provide the best coverage probabilities in the case studied, according to a small-scale simulation study carried out under a scenario matching the analysis of the real data.
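A minimal percentile-bootstrap sketch for one region's relative risk RR = O/E under a plain Poisson model; the paper refits the full discrete mixture model on each resample, which is omitted here, and the counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
O, E = 27, 19.4                                 # observed and expected deaths (illustrative)
rr_hat = O / E
boot = rng.poisson(rr_hat * E, size=5000) / E   # parametric bootstrap of O, then RR
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"RR = {rr_hat:.2f}, 95% percentile CI = ({lo:.2f}, {hi:.2f})")
```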

6.
In this paper, our aim is to analyze geographical and temporal variability of disease incidence when spatio-temporal count data have excess zeros. To that end, we introduce random effects into zero-inflated Poisson models to investigate geographical and temporal patterns of disease incidence. We propose spatio-temporal models that employ conditionally autoregressive smoothing across the spatial dimension and B-spline smoothing over the temporal dimension. The analysis of these complex models is computationally difficult from the frequentist perspective, whereas the advent of Markov chain Monte Carlo algorithms has made their Bayesian analysis computationally convenient. The recently developed data-cloning method provides a frequentist approach to mixed models that is also computationally convenient. We propose to use data cloning, which yields maximum likelihood estimates, to conduct a frequentist analysis of zero-inflated spatio-temporal models of disease incidence. One advantage of the data-cloning approach is that predictions and the corresponding standard errors (or prediction intervals) for smoothed disease incidence over space and time are easily obtained. We illustrate our approach using a real dataset of monthly children's asthma visits to hospital in the province of Manitoba, Canada, from April 2006 to March 2010, and evaluate its performance through a simulation study.
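A minimal zero-inflated Poisson likelihood without the spatial (CAR) or temporal (B-spline) terms, which are beyond a short sketch. In data cloning this same likelihood is simply replicated K times inside a Bayesian sampler, so the posterior mean approaches the MLE and K times the posterior variance approximates its sampling variance. Data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(3)
y = np.where(rng.random(500) < 0.3, 0, rng.poisson(2.5, size=500))

def zip_negloglik(theta, y):
    pi = 1 / (1 + np.exp(-theta[0]))     # zero-inflation probability
    lam = np.exp(theta[1])               # Poisson mean
    p0 = pi + (1 - pi) * np.exp(-lam)    # P(Y = 0): structural or sampling zero
    ll = np.where(y == 0, np.log(p0),
                  np.log(1 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print(fit.x)                             # logit(pi), log(lambda)
```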

7.
8.
If the number of treatments in a network meta-analysis is large, it may be possible and useful to model the main effect of treatment as random, that is, as random realizations from a normal distribution of possible treatment effects. This constitutes a third sort of random effect that may be considered in such analyses: the first and most common treats the treatment-by-trial interaction as random, and the second, rather rarer, treats the main effects of trial as random and thus permits the recovery of inter-trial information. Taking the example of a network meta-analysis of 44 similar treatments in 10 trials, we illustrate how a hierarchical approach to modeling a random main effect of treatment can be used to produce estimates of individual treatment effects that are shrunk toward the overall mean. As a related problem, we also consider using a random-effects model for the within-trial variances from trial to trial. We provide a number of possible graphical representations of the results and discuss the advantages and disadvantages of such an approach.
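A minimal sketch of the shrinkage such a model produces: each raw treatment estimate is pulled toward the overall mean by the factor tau2 / (tau2 + v_i). The effects, their variances v_i, and the between-treatment variance tau2 below are invented and would in practice be estimated from the network.

```python
import numpy as np

est = np.array([0.42, -0.10, 0.65, 0.05, 0.31])  # raw treatment-effect estimates
v = np.array([0.04, 0.09, 0.02, 0.06, 0.05])     # their sampling variances
tau2 = 0.03                                      # between-treatment variance (assumed)
mu = np.average(est, weights=1 / (v + tau2))     # precision-weighted overall mean
shrunk = mu + tau2 / (tau2 + v) * (est - mu)     # BLUP-type shrinkage toward mu
print(shrunk)
```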

9.
Two-part regression models are frequently used to analyze longitudinal count data with excess zeros, where the same set of subjects is repeatedly observed over time. In this context, several sources of individual-level heterogeneity may affect the observed process. Further, longitudinal studies often suffer from missing values: individuals drop out before the study is complete and thus leave incomplete data records. In this paper, we propose a finite mixture of hurdle models to address the heterogeneity, which is handled by introducing random effects with a discrete distribution, and we specify a pattern-mixture approach to deal with non-ignorable missing values. This approach accommodates overdispersed counts while allowing for association between the two parts of the model and for non-ignorable dropout. The effectiveness of the proposal is tested through a simulation study. Finally, an application to real data on skin cancer is provided.
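A minimal hurdle log-likelihood, the component being mixed here: a logit part for whether a count is positive and a zero-truncated Poisson for the positive counts. Covariates, the finite mixture of random effects, and the pattern-mixture dropout model are omitted, and the simulated data are only a crude stand-in.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(4)
pos = rng.random(400) < 0.6
y = np.where(pos, rng.poisson(3.0, size=400) + 1, 0)   # crude positive counts

def hurdle_negloglik(theta, y):
    p = 1 / (1 + np.exp(-theta[0]))      # P(Y > 0), the "hurdle" part
    lam = np.exp(theta[1])
    ll_zero = np.log(1 - p) * (y == 0).sum()
    yp = y[y > 0]
    # zero-truncated Poisson: pmf renormalized by P(Y > 0) = 1 - exp(-lam)
    ll_pos = np.sum(np.log(p) + poisson.logpmf(yp, lam)
                    - np.log1p(-np.exp(-lam)))
    return -(ll_zero + ll_pos)

fit = minimize(hurdle_negloglik, x0=[0.0, 0.5], args=(y,), method="Nelder-Mead")
print(fit.x)                             # logit(p), log(lambda)
```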

10.
In recent years, nonlinear mixed-effects (NLME) models have been proposed for modeling complex longitudinal data, with covariates usually introduced to partially explain inter-subject variation. However, one often assumes that both the model random error and the random effects are normally distributed, which may not give reliable results if the data exhibit skewness. Moreover, some covariates, such as CD4 cell count, may be measured with substantial error. In this article, we address these issues simultaneously by jointly modeling the response and covariate processes using a Bayesian approach to NLME models with covariate measurement errors and a skew-normal distribution. A real data example illustrates the methodology by comparing candidate models with different distributional specifications. Models with the skew-normality assumption can provide more reasonable results when the data exhibit skewness, and the results may be important for HIV/AIDS studies in providing quantitative guidance for better understanding virologic responses to antiretroviral treatment.
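A minimal illustration of the distributional piece only: fitting a skew-normal to skewed residuals and comparing its log-likelihood with a normal fit. scipy's (shape, location, scale) parameterization is used, and the data are simulated rather than taken from the paper.

```python
import numpy as np
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(5)
resid = skewnorm.rvs(a=4, scale=1.0, size=1000, random_state=rng)  # skewed "residuals"

a, loc, scale = skewnorm.fit(resid)      # a = 0 would recover the normal
print(f"estimated shape a = {a:.2f}")
print("log-lik, skew-normal:", skewnorm.logpdf(resid, a, loc, scale).sum())
print("log-lik, normal:     ", norm.logpdf(resid, *norm.fit(resid)).sum())
```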

11.
A method is proposed that aims at identifying clusters of individuals that show similar patterns when observed repeatedly. We consider linear mixed models, which are widely used for modeling longitudinal data. In contrast to the classical assumption of a normal distribution for the random effects, a finite mixture of normal distributions is assumed. Typically, the number of mixture components is unknown and has to be chosen, ideally by data-driven tools. For this purpose, an EM-algorithm-based approach is considered that uses a penalized normal mixture as the random-effects distribution. The penalty term shrinks the pairwise distances between cluster centers, based on the group lasso and fused lasso methods, so that individuals with similar time trends are merged into the same cluster. The strength of regularization is determined by a single penalization parameter, and a new model-choice criterion is proposed for finding its optimal value.

12.
In the development of structural equation models (SEMs), observed variables are usually assumed to be normally distributed. However, this assumption is likely to be violated in much applied research. Because non-normality of the observed variables in an SEM can arise from non-normal latent variables, non-normal residuals, or both, semiparametric modeling with an unknown distribution for the latent variables or for the residuals is needed. In this article, we find that an SEM becomes non-identifiable when both the latent-variable distribution and the residual distribution are unknown; hence it is impossible to estimate both reliably without a parametric assumption on one or the other. We also find that the residuals in the measurement equation are more sensitive to the normality assumption than the latent variables, and that non-normal residuals have the more serious negative impact on the estimation of parameters and distributions. Therefore, when there is no prior knowledge about parametric distributions for either the latent variables or the residuals, we recommend making a parametric assumption on the latent variables and modeling the residuals nonparametrically. We propose a semiparametric Bayesian approach using a truncated Dirichlet process with a stick-breaking prior to tackle non-normal residuals in the measurement equation. Simulation studies and a real data analysis demonstrate our findings and the empirical performance of the proposed methodology. A free WinBUGS code to perform the analysis is available in the Supporting Information.
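A minimal draw from the truncated stick-breaking construction underlying such a prior; the truncation level K, concentration alpha, and normal base measure are assumptions for illustration.

```python
import numpy as np

def stick_breaking(alpha, K, rng):
    beta = rng.beta(1.0, alpha, size=K)  # stick-breaking proportions
    beta[-1] = 1.0                       # close the stick at the truncation level
    w = beta * np.concatenate(([1.0], np.cumprod(1.0 - beta[:-1])))
    return w                             # mixture weights, summing to 1

rng = np.random.default_rng(6)
w = stick_breaking(alpha=2.0, K=20, rng=rng)
atoms = rng.normal(size=20)              # atoms drawn from the base measure
print(w.sum(), (w * atoms).sum())
```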

13.
On estimation and prediction for spatial generalized linear mixed models
Zhang H. Biometrics 2002, 58(1): 129-136
We use spatial generalized linear mixed models (GLMMs) to model non-Gaussian spatial variables observed at sampling locations in a continuous region. In many applications, prediction of the random effects in a spatial GLMM is of great practical interest. We show that minimum mean-squared error (MMSE) prediction can be done linearly in spatial GLMMs, analogous to linear kriging. We develop a Monte Carlo version of the EM gradient algorithm for maximum likelihood estimation of the model parameters; a by-product of this approach is that it also produces MMSE estimates of the realized random effects at the sampled sites. The method is illustrated through a simulation study and applied to a real data set on plant root disease to obtain a map of disease severity that can support the practice of precision agriculture.
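A minimal sketch of the linear (kriging-type) MMSE predictor of the random effect at an unsampled site, b_hat(s0) = c' C^{-1} b; the exponential covariance, its parameters, and the simulated effects standing in for the MMSE estimates at sampled sites are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
sites = rng.uniform(0, 10, size=(30, 2))         # sampled locations
s0 = np.array([5.0, 5.0])                        # prediction location

def exp_cov(d, sigma2=1.0, phi=2.0):
    return sigma2 * np.exp(-d / phi)             # exponential covariance (assumed)

D = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=2)
C = exp_cov(D) + 1e-8 * np.eye(len(sites))       # covariance among sampled sites
c = exp_cov(np.linalg.norm(sites - s0, axis=1))  # covariance between s0 and the sites
b = rng.multivariate_normal(np.zeros(len(sites)), C)  # stand-in for estimated effects
b0_hat = c @ np.linalg.solve(C, b)               # linear MMSE (simple kriging) predictor
print(b0_hat)
```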

14.
Alternatives in ecosystem-based management often differ with respect to trade-offs between ecosystem values. Ecosystem or food-web models and demographic models are typically employed to evaluate alternatives, but the approaches are rarely integrated to uncover conflicts between values. We applied multistate models to a capture-recapture dataset on common guillemots Uria aalge breeding in the Baltic Sea to identify factors influencing survival. The estimated relationships were used together with Ecopath-with-Ecosim food-web model simulations to project guillemot survival under six future scenarios incorporating climate change. The scenarios were based on management alternatives for eutrophication and cod fisheries, issues considered top priorities for regional management but without known direct effects on the guillemot population. Our demographic models identified prey quantity (abundance and biomass of sprat Sprattus sprattus) as the main factor influencing guillemot survival. Most scenarios yielded projections of increased survival in both the near (2016-2040) and distant (2060-2085) future. However, under the scenario of reduced nutrient input and precautionary cod fishing, guillemot survival was projected to be lower in both periods because of smaller sprat stocks. Matrix population models suggested a substantial decline of the guillemot population in the near future, 24% per 10 years, and a smaller reduction, 1.1% per 10 years, in the distant future. To date, many stakeholders and Baltic Sea governments have supported reduced nutrient input and precautionary cod fishing, and implementation is underway. Negative effects on nonfocal species had not previously been uncovered, but our results show that this scenario is likely to harm the guillemot population. Linking model results allowed us to identify trade-offs associated with management alternatives; this information is critical for thorough evaluation by decision-makers but not easily obtained from food-web or demographic models in isolation. Appropriate datasets are often available, making a linked approach feasible for better-informed decisions in ecosystem-based management.

15.
Nummi T, Pan J, Siren T, Liu K. Biometrics 2011, 67(3): 871-875
In most research on smoothing splines the focus has been on estimation, while inference, especially hypothesis testing, has received less attention. By defining the design matrices for fixed and random effects and the covariance structure of the random errors appropriately, the cubic smoothing spline admits a mixed-model formulation, which places this nonparametric smoother firmly in a parametric setting; nonlinear curves can thus be included alongside random effects and random coefficients. The smoothing parameter is the ratio of the random-coefficient and error variances, and tests for linear regression reduce to tests for a zero random-coefficient variance. We propose an exact F-test for this situation and investigate its performance on a real pine-stem data set and in simulation experiments. Under certain conditions the suggested methods can also be applied when the data are dependent.

16.
Statistical models support medical research by facilitating individualized outcome prognostication conditional on independent variables or by estimating effects of risk factors adjusted for covariates. The theory of statistical models is well established when the set of independent variables to consider is fixed and small, so that effect estimates can be assumed unbiased and the usual methods of confidence interval estimation are valid. In routine work, however, it is not known a priori which covariates should be included in a model, and we are often confronted with 10-30 candidate variables, a number frequently too large for a single statistical model. We provide an overview of available variable selection methods based on significance or information criteria, penalized likelihood, the change-in-estimate criterion, background knowledge, or combinations thereof. These methods were usually developed in the context of the linear regression model and then transferred to generalized linear models or models for censored survival data. Variable selection, in particular when used in explanatory modeling where effect estimates are of central interest, can compromise the stability of a final model, the unbiasedness of regression coefficients, and the validity of p-values or confidence intervals. We therefore give pragmatic recommendations for the practicing statistician on applying variable selection methods in general (low-dimensional) modeling problems and on performing stability investigations and inference, and we propose quantities based on resampling the entire variable selection process that should be routinely reported by software packages offering automated variable selection algorithms.
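A minimal sketch of such a resampling-based stability investigation: backward elimination (here on p-values with threshold 0.157, roughly equivalent to AIC for one degree of freedom) is repeated on bootstrap resamples, and each candidate variable's inclusion frequency is reported. The data, threshold, and selection rule are illustrative choices, not the paper's prescription.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 1.0 + 0.8 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n)  # only x0, x1 matter

def backward_select(X, y, alpha=0.157):
    keep = list(range(X.shape[1]))
    while keep:
        pvals = sm.OLS(y, sm.add_constant(X[:, keep])).fit().pvalues[1:]
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:        # all remaining variables pass the threshold
            break
        keep.pop(worst)                  # drop the least significant variable
    return keep

counts = np.zeros(p)
for _ in range(200):                     # resample the entire selection process
    idx = rng.integers(0, n, size=n)
    counts[backward_select(X[idx], y[idx])] += 1
print("bootstrap inclusion frequencies:", counts / 200)
```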

17.
Standard likelihood-maximization algorithms may not be applicable to flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating the estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm in simulations for two flexible extensions of Cox's proportional hazards model to which standard maximum partial likelihood estimation does not apply: simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate the good statistical properties of the ACE algorithm and its ability to yield new, potentially useful insights about the data structure.
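A minimal sketch of the alternating idea on a toy nonlinear least-squares problem rather than the paper's Cox extensions: two parameter blocks are optimized in turn, each conditional on the current value of the other, until the objective stabilizes.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
x = rng.uniform(0, 3, size=200)
y = 2.0 * np.exp(-1.3 * x) + rng.normal(scale=0.05, size=200)  # model a*exp(-b*x)

def sse(a, b):
    return np.sum((y - a * np.exp(-b * x)) ** 2)

a, b, prev = 1.0, 1.0, np.inf
while abs(prev - sse(a, b)) > 1e-10:     # alternate until the objective stabilizes
    prev = sse(a, b)
    a = minimize(lambda t: sse(t[0], b), [a]).x[0]   # step 1: update a given b
    b = minimize(lambda t: sse(a, t[0]), [b]).x[0]   # step 2: update b given a
print(a, b)
```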

18.
Yi Zhao, Xi Luo. Biometrics 2019, 75(3): 788-798
This paper presents Granger mediation analysis, a new framework for causal mediation analysis with multiple time series. The framework is motivated by a functional magnetic resonance imaging (fMRI) experiment in which we estimate mediation effects between a randomized stimulus time series and brain-activity time series from two brain regions; the usual independent-observation assumption is unrealistic for this type of time-series data. To address this challenge, our framework integrates two types of models: causal mediation analysis across the mediation variables and vector autoregressive (VAR) models across the temporal observations, with "Granger" referring to the VAR correlations modeled in this paper. We further extend the framework to multilevel data in order to model individual variability and correlated errors between the mediator and outcome variables. Using Rubin's potential-outcomes framework, we show that the causal mediation effects are identifiable under our time-series model, and we develop computationally efficient algorithms to maximize our likelihood-based estimation criteria. Simulation studies show that our method reduces estimation bias and improves statistical power compared with existing approaches. On a real fMRI data set, our approach quantifies causal effects through a brain pathway while capturing the dynamic dependence between the two brain regions.
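A minimal sketch of the VAR building block with statsmodels: a bivariate VAR is fitted to simulated mediator and outcome series with the lag order chosen by AIC. The coupling coefficients are invented for illustration, and the mediation layer of the paper is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(10)
T = 300
m = np.zeros(T)                          # "mediator" region's series
r = np.zeros(T)                          # "outcome" region's series
for t in range(1, T):                    # coupled AR(1) dynamics
    m[t] = 0.5 * m[t - 1] + rng.normal(scale=0.5)
    r[t] = 0.3 * r[t - 1] + 0.4 * m[t - 1] + rng.normal(scale=0.5)

res = VAR(np.column_stack([m, r])).fit(maxlags=5, ic="aic")
print(res.k_ar)                          # lag order selected by AIC
print(res.coefs[0])                      # lag-1 coefficient matrix
```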

19.
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, they are seldom used, even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of mixture models for describing variation in individual life-history tactics within a population, and hence to promote their use by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model can tease apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated (Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods) performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level; however, fixed effects at the cluster level and random effects were better estimated by mixture models. Our empirical analyses demonstrated that mixture models facilitate the identification of the diversity of growth and reproductive tactics occurring within a population. This modelling framework therefore allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster. In the presence or expectation of clusters, mixture models offer a suitable extension of mixed models, particularly when evolutionary ecologists aim to identify how ecological and evolutionary processes change within a population. Mixture regression models thus provide a valuable addition to the statistical toolbox of evolutionary ecologists; as these models are complex and have their own limitations, we provide recommendations to guide future users.
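A minimal sketch of the clustering step on simulated (intercept, slope) pairs representing two latent life-history tactics, with the number of clusters selected by BIC via scikit-learn's GaussianMixture; the full mixture regression models discussed in the paper are simplified away.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
# two clusters of individual (intercept, slope) pairs, e.g. two tactics
X = np.vstack([rng.normal([0.0, 0.2], 0.15, size=(120, 2)),
               rng.normal([1.0, -0.1], 0.15, size=(80, 2))])

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 5)}
print("BIC-selected number of clusters:", min(bics, key=bics.get))
print(bics)
```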

20.
Species distribution models are widely used in basic and applied ecology to identify the factors shaping species distributions and richness, quantify species' relationships with abiotic conditions, predict species' responses to land use and climate change, and delineate potential conservation areas. Traditional species distribution models rarely incorporate biotic interactions. Joint species distribution models (JSDMs), a feasible new approach proposed in recent years, can account for environmental factors and biotic interactions simultaneously, making them a powerful tool for analyzing community structure and the processes of interspecific interaction. Building on species distribution models (SDMs), JSDMs typically use generalized linear regression to model multivariate species responses to environmental variables, capture interspecific associations through random effects, and incorporate latent variable models (LVMs); model parameters are estimated by maximum likelihood based on the Laplace approximation or Markov chain Monte Carlo simulation, or by Bayesian methods. This article reviews the origin and theoretical foundations of JSDMs, highlights the characteristics of different types of JSDMs and their applications in modern ecology, and discusses their prospects, outstanding problems, and directions for development. As research into the relationships between environmental factors and multispecies interactions deepens, JSDMs will be a focus of future work on species distribution modeling.

