Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Recent work on Bayesian inference of disease mapping models discusses the advantages of the fully Bayesian (FB) approach over its empirical Bayes (EB) counterpart, suggesting that FB posterior standard deviations of small-area relative risks are more reflective of the uncertainty associated with the relative risk estimation than counterparts based on EB inference, since the latter fail to account for the variability in the estimation of the hyperparameters. In this article, an EB bootstrap methodology for relative risk inference with accurate parametric EB confidence intervals is developed, illustrated, and contrasted with hyperprior Bayes. We elucidate the close connection between the EB bootstrap methodology and hyperprior Bayes, present a comparison between FB inference via hybrid Markov chain Monte Carlo and EB inference via penalized quasi-likelihood, and illustrate the ability of parametric bootstrap procedures to adjust for the undercoverage in the "naive" EB interval estimates. We discuss the important roles that FB and EB methods play in risk inference, map interpretation, and real-life applications. The work is motivated by a recent analysis of small-area infant mortality rates in the province of British Columbia in Canada.
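The connection between EB estimation and the parametric bootstrap can be illustrated with a toy Poisson-Gamma disease-mapping model. This is a minimal sketch, not the authors' British Columbia analysis: the simulated data, the Gamma(alpha, beta) relative-risk prior, and the specific bootstrap scheme (refitting the hyperparameters on data re-simulated from the fitted model) are all illustrative assumptions.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

# Toy small-area data: expected counts e_i and observed counts y_i for 30 areas.
e = rng.uniform(5.0, 50.0, size=30)
theta_true = rng.gamma(shape=8.0, scale=1.0 / 8.0, size=30)   # true relative risks
y = rng.poisson(e * theta_true)

def neg_marginal_loglik(log_pars, y, e):
    """Negative Poisson-Gamma marginal (negative binomial) log-likelihood."""
    a, b = np.exp(log_pars)                  # keep hyperparameters positive
    return -np.sum(stats.nbinom.logpmf(y, a, b / (b + e)))

def eb_fit(y, e):
    res = optimize.minimize(neg_marginal_loglik, x0=np.log([5.0, 5.0]),
                            args=(y, e), method="Nelder-Mead")
    return np.exp(res.x)                     # (alpha_hat, beta_hat)

a_hat, b_hat = eb_fit(y, e)

# "Naive" EB interval: plug-in posterior Gamma(a_hat + y_i, b_hat + e_i).
naive = stats.gamma.interval(0.95, a_hat + y, scale=1.0 / (b_hat + e))

# Parametric bootstrap: re-simulate data from the fitted model and refit the
# hyperparameters, so that their estimation uncertainty widens the intervals.
B = 200
draws = np.empty((B, y.size))
for i in range(B):
    theta_b = rng.gamma(a_hat, 1.0 / b_hat, size=y.size)
    y_b = rng.poisson(e * theta_b)
    a_b, b_b = eb_fit(y_b, e)
    draws[i] = rng.gamma(a_b + y, 1.0 / (b_b + e))   # EB posterior with refitted hyperparameters

boot = np.percentile(draws, [2.5, 97.5], axis=0)
print("area 0  naive EB:", (naive[0][0], naive[1][0]), " bootstrap:", (boot[0][0], boot[1][0]))
```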

2.
This article reviews some of the current guidance concerning the separation of variability and uncertainty in presenting the results of human health and ecological risk assessments. Such guidance and some of the published examples of its implementation using two-stage Monte Carlo simulation methods have not emphasized the fact that there is considerable judgment involved in determining which input parameters can be modeled as purely variable or purely uncertain, and which require explicit treatment in both dimensions. Failure to discuss these choices leads to confusion and misunderstanding of the proposed methods. We conclude with an example illustrating some of the reasoning and statistical calculations that might be used to inform such choices.

3.
The results of quantitative risk assessments are key factors in a risk manager's decision about whether actions to reduce risk are necessary. The extent of the uncertainty in the assessment will play a large part in the degree of confidence a risk manager has in the reported significance and probability of a given risk. The two main sources of uncertainty in such risk assessments are variability and incertitude. In this paper we use two methods, a second-order two-dimensional Monte Carlo analysis and probability bounds analysis, to investigate the impact of both types of uncertainty on the results of a food-web exposure model. We demonstrate how the full extent of uncertainty in a risk estimate can be portrayed in a way that is useful to risk managers. We show that probability bounds analysis is a useful tool for identifying the parameters that contribute the most to uncertainty in a risk estimate and how it can be used to complement established practices in risk assessment. We conclude by promoting the use of probability bounds analysis in conjunction with Monte Carlo analyses as a method for checking how plausible Monte Carlo results are in the full context of uncertainty.
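As a rough illustration of the second-order (two-dimensional) Monte Carlo idea, the sketch below separates an outer loop over uncertain parameters from an inner loop over population variability in a toy exposure model. The model form, distributions, and parameter values are hypothetical, not those of the food-web model in the article, and the probability bounds analysis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

n_outer = 500      # uncertainty dimension: draws of uncertain parameters
n_inner = 2000     # variability dimension: individuals in the population

# Outer loop: a hypothetical uncertain parameter (the mean log-concentration
# is known only up to a standard error).
p95_exposure = np.empty(n_outer)
for i in range(n_outer):
    mu_logC = rng.normal(loc=1.0, scale=0.2)       # uncertain mean log-concentration
    sd_logC = 0.5                                  # treated as known, for simplicity

    # Inner loop: variability across individuals.
    conc = rng.lognormal(mu_logC, sd_logC, n_inner)        # mg/kg in food
    intake = rng.lognormal(np.log(0.3), 0.4, n_inner)      # kg/day
    bw = rng.normal(70.0, 12.0, n_inner).clip(min=40.0)    # kg body weight
    exposure = conc * intake / bw                          # mg/kg-bw/day
    p95_exposure[i] = np.percentile(exposure, 95)          # variability percentile

# Uncertainty about the variability statistic is summarized across the outer
# draws, e.g. a 90% uncertainty interval on the population 95th percentile.
print("95th-percentile exposure (5th, 50th, 95th uncertainty percentiles):",
      np.percentile(p95_exposure, [5, 50, 95]))
```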

4.
5.
Bayesian inference is a powerful statistical paradigm that has gained popularity in many fields of science, but adoption has been somewhat slower in biophysics. Here, I provide an accessible tutorial on the use of Bayesian methods by focusing on example applications that will be familiar to biophysicists. I first discuss the goals of Bayesian inference and show simple examples of posterior inference using conjugate priors. I then describe Markov chain Monte Carlo sampling and, in particular, discuss Gibbs sampling and Metropolis random walk algorithms with reference to detailed examples. These Bayesian methods (with the aid of Markov chain Monte Carlo sampling) provide a generalizable way of rigorously addressing parameter inference and identifiability for arbitrarily complicated models.
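A minimal sketch of two ingredients the tutorial describes: a conjugate Beta-binomial posterior available in closed form, and a Metropolis random-walk sampler targeting the same posterior. The data (7 successes out of 20) and the proposal scale are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# --- Conjugate example: Beta prior + binomial likelihood -------------------
# Say 7 of 20 single-molecule events were "successes"; with a Beta(a, b)
# prior on the success probability p, the posterior is Beta(a + 7, b + 13).
a0, b0, k, n = 1.0, 1.0, 7, 20
posterior = stats.beta(a0 + k, b0 + n - k)
print("posterior mean", posterior.mean(), "95% interval", posterior.interval(0.95))

# --- Metropolis random walk targeting the same posterior -------------------
# Unnecessary here (the posterior is known in closed form), but it shows the
# mechanics used when no conjugate form exists.
def log_post(p):
    if not 0.0 < p < 1.0:
        return -np.inf
    return stats.binom.logpmf(k, n, p) + stats.beta.logpdf(p, a0, b0)

samples, p_cur, lp_cur = [], 0.5, log_post(0.5)
for _ in range(20000):
    p_prop = p_cur + rng.normal(0, 0.1)           # symmetric random-walk proposal
    lp_prop = log_post(p_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:  # Metropolis acceptance rule
        p_cur, lp_cur = p_prop, lp_prop
    samples.append(p_cur)

print("MCMC mean", np.mean(samples[2000:]))       # discard burn-in
```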

6.
Rarefaction methods have been introduced into population genetics (from ecology) for predicting and comparing the allelic richness of future samples (or sometimes populations) on the basis of currently available samples, possibly of different sizes. Here, we focus our attention on one such problem: predicting which population is most likely to yield the future sample having the highest allelic richness. (This problem can arise when we want to construct a core collection from a larger germplasm collection.) We use extensive simulations to compare the performance of the Monte Carlo rarefaction (repeated random subsampling) method with a simple Bayesian approach we have developed, which is based on the Ewens sampling distribution. We found that neither this Bayesian method nor the (Monte Carlo) rarefaction method performed uniformly better than the other. We also examine briefly some of the other motivations offered for these methods and try to make sense of them from a Bayesian point of view.
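A minimal sketch of the Monte Carlo rarefaction (repeated random subsampling) step, assuming each population sample is simply a list of allele labels. The allele counts are hypothetical, and the Ewens-sampling-based Bayesian alternative is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def rarefied_allelic_richness(alleles, g, n_rep=1000):
    """Monte Carlo rarefaction: average number of distinct alleles observed in
    random subsamples of g gene copies drawn without replacement."""
    alleles = np.asarray(alleles)
    richness = [np.unique(rng.choice(alleles, size=g, replace=False)).size
                for _ in range(n_rep)]
    return float(np.mean(richness))

# Two hypothetical samples of allele labels (different sizes, different counts).
pop_A = np.repeat(np.arange(12), [30, 20, 15, 10, 8, 5, 4, 3, 2, 1, 1, 1])   # 100 copies
pop_B = np.repeat(np.arange(8), [25, 20, 10, 5, 3, 2, 2, 1])                 # 68 copies

g = min(pop_A.size, pop_B.size)    # rarefy both samples to the smaller size
print("expected richness at g =", g, ":",
      rarefied_allelic_richness(pop_A, g), "vs", rarefied_allelic_richness(pop_B, g))
```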

7.
Zhu B, Song PX, Taylor JM. Biometrics 2011, 67(4): 1295-1304.
This article presents a new modeling strategy in functional data analysis. We consider the problem of estimating an unknown smooth function given functional data with noise. The unknown function is treated as the realization of a stochastic process, which is incorporated into a diffusion model. The method of smoothing spline estimation is connected to a special case of this approach. The resulting models offer great flexibility to capture the dynamic features of functional data, and allow straightforward and meaningful interpretation. The likelihood of the models is derived with Euler approximation and data augmentation. A unified Bayesian inference method is carried out via a Markov chain Monte Carlo algorithm including a simulation smoother. The proposed models and methods are illustrated on some prostate-specific antigen data, where we also show how the models can be used for forecasting.
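The Euler approximation used to build the likelihood can be illustrated on a generic diffusion. The sketch below uses an Ornstein-Uhlenbeck process rather than the article's functional-data model, so the drift, parameters, and simulated observations are illustrative assumptions; it only shows how discretely observed values yield Gaussian transition densities under the Euler scheme.

```python
import numpy as np

def euler_loglik(x, dt, theta, mu, sigma):
    """Euler-approximation log-likelihood for an Ornstein-Uhlenbeck diffusion
    dX_t = theta*(mu - X_t) dt + sigma dW_t observed at spacing dt:
    X_{t+dt} | X_t ~ Normal(X_t + theta*(mu - X_t)*dt, sigma^2 * dt)."""
    mean = x[:-1] + theta * (mu - x[:-1]) * dt
    var = sigma ** 2 * dt
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

# Simulate a path with the same Euler scheme, then evaluate its likelihood.
rng = np.random.default_rng(3)
dt, n, theta, mu, sigma = 0.1, 200, 0.8, 2.0, 0.5
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * np.sqrt(dt) * rng.normal()

print("Euler log-likelihood at the true parameters:",
      euler_loglik(x, dt, theta, mu, sigma))
```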

8.
傅煜, 雷渊才, 曾伟生. 生态学报 (Acta Ecologica Sinica) 2015, 35(23): 7738-7747.
Using repeated-measurement and biomass data for Chinese fir (Cunninghamia lanceolata) from the systematically sampled permanent plots of Jiangxi Province, the total above-ground biomass of Chinese fir in the province was estimated by using Monte Carlo simulation to repeatedly reproduce the process of scaling single-tree biomass models up to regional above-ground biomass. Designs with different modeling sample sizes n and different coefficients of determination R² were used to examine, respectively, how variability in the single-tree model parameters and variability in the model residuals affect the uncertainty of the regional biomass estimate. The results show that the 2009 above-ground biomass of Chinese fir in Jiangxi Province was estimated at (19.84 ± 1.27) t/hm², with the uncertainty amounting to about 6.41% of the estimate. The computation time needed for the biomass estimate and its uncertainty to stabilize decreased as the modeling sample size and R² increased, and residual variability had a smaller effect on the uncertainty than parameter variability.
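A minimal sketch of the Monte Carlo error-propagation step described in the abstract, assuming a hypothetical log-linear single-tree model ln(B) = b0 + b1·ln(D). The parameter estimates, their covariance matrix, the residual standard deviation, the diameters, and the plot area are all made-up values, not the Jiangxi inventory data.

```python
import numpy as np

rng = np.random.default_rng(2015)

# Hypothetical fitted single-tree model: ln(B) = b0 + b1*ln(D) + eps,
# eps ~ Normal(0, sigma^2).  Estimates, covariance and residual SD would
# come from the model-fitting step.
beta_hat = np.array([-2.0, 2.4])
cov_beta = np.array([[0.010, -0.0020],
                     [-0.0020, 0.0006]])
sigma_resid = 0.15

dbh = rng.uniform(8.0, 40.0, size=5000)    # tree diameters (cm) on the plots
area_ha = 60.0                             # total area represented (ha)

B = 2000                                   # Monte Carlo repetitions
per_ha = np.empty(B)
for i in range(B):
    b0, b1 = rng.multivariate_normal(beta_hat, cov_beta)   # parameter variability
    eps = rng.normal(0.0, sigma_resid, size=dbh.size)      # residual variability
    biomass_kg = np.exp(b0 + b1 * np.log(dbh) + eps)
    per_ha[i] = biomass_kg.sum() / 1000.0 / area_ha        # t/ha

est = per_ha.mean()
half_width = 1.96 * per_ha.std(ddof=1)
print(f"above-ground biomass: {est:.2f} +/- {half_width:.2f} t/ha "
      f"({100 * half_width / est:.1f}% of the estimate)")
```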

9.
Li Z, Sillanpää MJ. Genetics 2012, 190(1): 231-249.
Bayesian hierarchical shrinkage methods have been widely used for quantitative trait locus mapping. From the computational perspective, the application of the Markov chain Monte Carlo (MCMC) method is not optimal for high-dimensional problems such as the ones arising in epistatic analysis. Maximum a posteriori (MAP) estimation can be a faster alternative, but it usually produces only point estimates without providing any measures of uncertainty (i.e., interval estimates). The variational Bayes method, stemming from the mean field theory in theoretical physics, is regarded as a compromise between MAP and MCMC estimation, which can be efficiently computed and produces the uncertainty measures of the estimates. Furthermore, variational Bayes methods can be regarded as the extension of traditional expectation-maximization (EM) algorithms and can be applied to a broader class of Bayesian models. Thus, the use of variational Bayes algorithms based on three hierarchical shrinkage models including Bayesian adaptive shrinkage, Bayesian LASSO, and extended Bayesian LASSO is proposed here. These methods performed generally well and were found to be highly competitive with their MCMC counterparts in our example analyses. The use of posterior credible intervals and permutation tests is considered for decision making between quantitative trait loci (QTL) and non-QTL. The performance of the presented models is also compared with the R/qtlbim and R/BhGLM packages, using a previously studied simulated public epistatic data set.

10.
We develop a new Bayesian approach to interval estimation for both the risk difference and the risk ratio for a 2 x 2 table with a structural zero using Markov chain Monte Carlo (MCMC) methods. We also derive a normal approximation for the risk difference and a gamma approximation for the risk ratio. We then compare the coverage and interval width of our new intervals to the score-based intervals over various parameter and sample-size configurations. Finally, we consider a Bayesian method for sample-size determination.

11.
12.
Model-based estimation of the human health risks resulting from exposure to environmental contaminants can be an important tool for structuring public health policy. Due to uncertainties in the modeling process, the outcomes of these assessments are usually probabilistic representations of a range of possible risks. In some cases, health surveillance data are available for the assessment population over all or a subset of the risk projection period and this additional information can be used to augment the model-based estimates. We use a Bayesian approach to update model-based estimates of health risks based on available health outcome data. Updated uncertainty distributions for risk estimates are derived using Monte Carlo sampling, which allows flexibility to model realistic situations including measurement error in the observable outcomes. We illustrate the approach by using imperfect public health surveillance data on lung cancer deaths to update model-based lung cancer mortality risk estimates in a population exposed to ionizing radiation from a uranium processing facility.
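A minimal sketch of the update step, assuming the model-based risk output is summarized by Monte Carlo draws of a rate ratio, the surveillance counts are Poisson, and imperfect surveillance enters only through a fixed ascertainment probability. It uses sampling-importance-resampling rather than the authors' specific algorithm, and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Prior: Monte Carlo draws of the rate ratio from the exposure/risk model
# (a hypothetical lognormal spread around a 20% excess).
n_draws = 50000
rr_prior = rng.lognormal(mean=np.log(1.2), sigma=0.25, size=n_draws)

# Surveillance data: observed deaths vs. the expected count in an unexposed
# population, with imperfect ascertainment (hypothetical numbers).
expected, observed, ascertainment = 85.0, 118, 0.95

# Likelihood of the observed count under each prior draw; normalized
# importance weights give the Bayesian update.
weights = stats.poisson.pmf(observed, expected * rr_prior * ascertainment)
weights /= weights.sum()

# Sampling-importance-resampling yields draws from the updated distribution.
rr_post = rng.choice(rr_prior, size=10000, replace=True, p=weights)

print("prior   RR 95% interval:", np.percentile(rr_prior, [2.5, 97.5]))
print("updated RR 95% interval:", np.percentile(rr_post, [2.5, 97.5]))
```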

13.
Numerous Bayesian methods of phenotype prediction and genomic breeding value estimation based on multilocus association models have been proposed. Computationally the methods have been based either on Markov chain Monte Carlo or on faster maximum a posteriori estimation. The demand for more accurate and more efficient estimation has led to the rapid emergence of workable methods, unfortunately at the expense of well-defined principles for Bayesian model building. In this article we go back to the basics and build a Bayesian multilocus association model for quantitative and binary traits with carefully defined hierarchical parameterization of Student's t and Laplace priors. In this treatment we consider alternative model structures, using indicator variables and polygenic terms. We make the most of the conjugate analysis, enabled by the hierarchical formulation of the prior densities, by deriving the fully conditional posterior densities of the parameters and using the acquired known distributions in building fast generalized expectation-maximization estimation algorithms.

14.
Reversible-jump Markov chain Monte Carlo (RJ-MCMC) is a technique for simultaneously evaluating multiple related (but not necessarily nested) statistical models that has recently been applied to the problem of phylogenetic model selection. Here we use a simulation approach to assess the performance of this method and compare it to Akaike weights, a measure of model uncertainty that is based on the Akaike information criterion. Under conditions where the assumptions of the candidate models matched the generating conditions, both Bayesian and AIC-based methods performed well. The 95% credible interval contained the generating model close to 95% of the time. However, the size of the credible interval differed, with the Bayesian credible set containing approximately 25% to 50% fewer models than an AIC-based credible interval. The posterior probability was a better indicator of the correct model than the Akaike weight when all assumptions were met, but both measures performed similarly when some model assumptions were violated. Models in the Bayesian posterior distribution were also more similar to the generating model in their number of parameters and were less biased in their complexity. In contrast, Akaike-weighted models were more distant from the generating model and biased towards slightly greater complexity. The AIC-based credible interval appeared to be more robust to the violation of the rate homogeneity assumption. Both AIC and Bayesian approaches suggest that substantial uncertainty can accompany the choice of model for phylogenetic analyses, suggesting that alternative candidate models should be examined in analysis of phylogenetic data. [Keywords: AIC; Akaike weights; Bayesian phylogenetics; model averaging; model selection; model uncertainty; posterior probability; reversible jump.]
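Akaike weights themselves are straightforward to compute. The sketch below shows the calculation and an AIC-based 95% model set for three candidate substitution models; the log-likelihoods and parameter counts are invented for illustration (and the counts refer only to substitution-model parameters), so this is not a reproduction of the simulation study.

```python
import numpy as np

def akaike_weights(log_liks, n_params):
    """Akaike weights for candidate models given their maximized
    log-likelihoods and numbers of free parameters."""
    aic = -2.0 * np.asarray(log_liks, float) + 2.0 * np.asarray(n_params, float)
    delta = aic - aic.min()                 # AIC differences
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical maximized log-likelihoods for JC69, HKY85 and GTR+G.
w = akaike_weights(log_liks=[-5210.4, -5168.9, -5160.2], n_params=[0, 4, 9])
print(dict(zip(["JC69", "HKY85", "GTR+G"], np.round(w, 3))))

# A 95% confidence set of models: add models in decreasing order of weight
# until the cumulative weight reaches 0.95.
order = np.argsort(w)[::-1]
cum = np.cumsum(w[order])
print("models in the 95% set (indices):", order[: np.searchsorted(cum, 0.95) + 1])
```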

15.
This article presents a Bayesian approach to forecast mortality rates. This approach formalizes the Lee-Carter method as a statistical model accounting for all sources of variability. Markov chain Monte Carlo methods are used to fit the model and to sample from the posterior predictive distribution. This paper also shows how multiple imputations can be readily incorporated into the model to handle missing data and presents some possible extensions to the model. The methodology is applied to U.S. male mortality data. Mortality rate forecasts are formed for the period 1990-1999 based on data from 1959-1989. These forecasts are compared to the actual observed values. Results from the forecasts show the Bayesian prediction intervals to be appropriately wider than those obtained from the Lee-Carter method, correctly incorporating all known sources of variability. An extension to the model is also presented and the resulting forecast variability appears better suited to the observed data.
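For reference, the classical (non-Bayesian) Lee-Carter decomposition that the Bayesian model formalizes can be sketched in a few lines: estimate a_x, b_x, k_t from the log mortality matrix by SVD and forecast k_t with a random walk with drift. The data below are simulated, and the Bayesian MCMC treatment with its wider prediction intervals is not reproduced.

```python
import numpy as np

def lee_carter_fit(log_m):
    """Classical Lee-Carter decomposition log m(x,t) = a_x + b_x * k_t + e_xt,
    where log_m is an (ages x years) matrix of log death rates."""
    a = log_m.mean(axis=1)                              # age pattern a_x
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                         # normalize so sum(b_x) = 1
    k = s[0] * Vt[0] * U[:, 0].sum()                    # rescale k_t accordingly
    shift = k.mean()                                    # impose mean(k_t) = 0 ...
    k = k - shift
    a = a + b * shift                                   # ... without changing the fit
    return a, b, k

def forecast_k(k, horizon):
    """Random walk with drift for k_t, the usual Lee-Carter forecasting step."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return k[-1] + drift * np.arange(1, horizon + 1)

# Toy data: simulated log death rates for 10 age groups over 31 years.
rng = np.random.default_rng(5)
ages, years = 10, 31
true_a = np.linspace(-8.0, -2.0, ages)
true_b = np.linspace(0.05, 0.15, ages)
true_b = true_b / true_b.sum()
true_k = -1.5 * np.arange(years) + rng.normal(0, 0.5, years)
log_m = true_a[:, None] + np.outer(true_b, true_k) + rng.normal(0, 0.02, (ages, years))

a, b, k = lee_carter_fit(log_m)
log_m_future = a[:, None] + np.outer(b, forecast_k(k, horizon=10))
print("forecast log death rate, first age group, 10 years ahead:", log_m_future[0, -1])
```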

16.
Molecular dating methods have been widely applied in recent years, providing an indispensable and detailed evolutionary time frame for macroevolutionary research, especially studies of biodiversity and the history of how its patterns formed. Because Bayesian methods and Markov chain Monte Carlo can accommodate data and parameter settings of many dimensions and types, Bayesian node-dating methods, represented by software such as BEAST and PAML-MCMCTree, have gradually become the most widely used class of molecular dating methods. One advantage of the Bayesian framework is that complex models can be used to account for various sources of uncertainty, but the choice of models and parameter settings in these methods can also introduce error and thus affect the reliability of estimated divergence times. This article introduces the principles and main types of Bayesian molecular dating methods and, taking Bayesian node dating as an example, focuses on how factors such as the clock model, the selection and placement of fossil calibrations, sampling frequency, and the prior distributions on calibration ages affect node dating. It offers practical advice on the use of Bayesian timetree software, principles for discussing node ages, and ways to compare timetrees obtained under different models; it also analyzes common situations that risk over- or underestimating node ages and gives recommendations. We argue that reasonably integrating the results of multiple Bayesian methods and models and selecting the best among them can improve the reliability of dating results; that researchers should discuss how their timetree results depend on parameter settings so as to provide a reference for others; and that updates to the fossil record and improvements to molecular dating methods should be followed continuously and in parallel.

17.
The importance of fitting distributions to data for risk analysis continues to grow as regulatory agencies, like the Environmental Protection Agency (EPA), continue to shift from deterministic to probabilistic risk assessment techniques. The use of Monte Carlo simulation as a tool for propagating variability and uncertainty in risk requires specification of the risk model's inputs in the form of distributions or tables of data. Several software tools exist to support risk assessors in their efforts to develop distributions. However, users must keep in mind that these tools do not replace clear thought about judgments that must be made in characterizing the information from data. This overview introduces risk assessors to the statistical concepts and physical reasons that support important judgments about appropriate types of parametric distributions and goodness-of-fit. In the context of using data to improve risk assessment and ultimately risk management, this paper discusses issues related to the nature of the data (representativeness, quantity, and quality; correlation with space and time; and distinguishing between variability and uncertainty for a set of data), and matching data and distributions appropriately. All data analysis (whether “Frequentist” or “Bayesian” or oblivious to the distinction) requires the use of subjective judgment. The paper offers an iterative process for developing distributions using data to characterize variability and uncertainty for inputs to risk models that provides incentives for collecting better information when the value of information exceeds its cost. Risk analysts need to focus attention on characterizing the information appropriately for purposes of the risk assessment (and risk management questions at hand), not on characterization for its own sake.
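A minimal sketch of fitting candidate parametric distributions to a dataset and screening goodness-of-fit. The data are simulated, the candidate set is arbitrary, and the Kolmogorov-Smirnov p-value is optimistic when the parameters are estimated from the same data, so it is used here only as a rough screening statistic, not as a substitute for the judgments the paper emphasizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Hypothetical positive-valued exposure measurements (e.g. concentrations, mg/L).
data = rng.lognormal(mean=1.0, sigma=0.6, size=80)

candidates = {"lognormal": stats.lognorm,
              "gamma": stats.gamma,
              "weibull": stats.weibull_min}

for name, dist in candidates.items():
    params = dist.fit(data, floc=0)          # fix the location at 0 for positive data
    # Caution: a KS p-value computed with parameters estimated from the same
    # data is optimistic; treat it as a screening statistic only.
    ks_stat, ks_p = stats.kstest(data, dist.cdf, args=params)
    aic = 2 * (len(params) - 1) - 2 * np.sum(dist.logpdf(data, *params))  # loc was fixed
    print(f"{name:10s} KS statistic = {ks_stat:.3f}, p = {ks_p:.3f}, AIC = {aic:.1f}")
```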

18.
Comparison of the performance and accuracy of different inference methods, such as maximum likelihood (ML) and Bayesian inference, is difficult because the inference methods are implemented in different programs, often written by different authors. Both methods were implemented in the program MIGRATE, which estimates population genetic parameters, such as population sizes and migration rates, using coalescence theory. Both inference methods use the same Markov chain Monte Carlo algorithm and differ from each other in only two aspects: parameter proposal distribution and maximization of the likelihood function. Using simulated datasets, the Bayesian method generally fares better than the ML approach in accuracy and coverage, although for some values the two approaches are equal in performance. MOTIVATION: The Markov chain Monte Carlo-based ML framework can fail on sparse data and can deliver non-conservative support intervals. A Bayesian framework with appropriate prior distribution is able to remedy some of these problems. RESULTS: The program MIGRATE was extended to allow not only for maximum likelihood (ML) estimation of population genetics parameters but also for using a Bayesian framework. Comparisons between the Bayesian approach and the ML approach are facilitated because both modes estimate the same parameters under the same population model and assumptions.

19.
In this article, we propose a Bayesian approach to dose–response assessment and the assessment of synergy between two combined agents. We consider the case of an in vitro ovarian cancer research study aimed at investigating the antiproliferative activities of four agents, alone and paired, in two human ovarian cancer cell lines. In this study, independent dose–response experiments were repeated three times. Each experiment included replicates at the investigated dose levels, including a control (no drug). We have developed a Bayesian hierarchical nonlinear regression model that accounts for variability between experiments, variability within experiments (i.e., replicates), and variability in the observed responses of the controls. We use Markov chain Monte Carlo to fit the model to the data and carry out posterior inference on quantities of interest (e.g., the median inhibitory concentration, IC50). In addition, we have developed a method, based on Loewe additivity, that allows one to assess the presence of synergy with honest accounting of uncertainty. Extensive simulation studies show that our proposed approach is more reliable in declaring synergy than current standard analyses such as the median-effect principle/combination index method (Chou and Talalay, 1984, Advances in Enzyme Regulation 22, 27-55), which ignore important sources of variability and uncertainty.
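The Loewe-additivity idea behind the synergy assessment can be sketched without the Bayesian hierarchy: fit a Hill (four-parameter logistic) curve to each single agent, invert it at the observed combination effect, and form the combination index. The dose-response data, starting values, bounds, and combination result below are all hypothetical, and the article's full Bayesian accounting of uncertainty is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, e0, emax, ic50, h):
    """Four-parameter logistic (Hill) dose-response curve."""
    return e0 + (emax - e0) / (1.0 + (dose / ic50) ** h)

def dose_for_effect(params, y):
    """Invert the Hill curve: the single-agent dose producing response y."""
    e0, emax, ic50, h = params
    return ic50 * ((emax - e0) / (y - e0) - 1.0) ** (1.0 / h)

# Hypothetical monotherapy viability data (fraction of untreated control).
dose   = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp_A = np.array([0.98, 0.95, 0.85, 0.62, 0.35, 0.15, 0.07])
resp_B = np.array([0.99, 0.97, 0.90, 0.75, 0.50, 0.25, 0.10])

bounds = ([0.5, -0.2, 1e-3, 0.2], [1.5, 0.5, 50.0, 5.0])
pA, _ = curve_fit(hill, dose, resp_A, p0=[1.0, 0.0, 0.5, 1.0], bounds=bounds)
pB, _ = curve_fit(hill, dose, resp_B, p0=[1.0, 0.0, 1.0, 1.0], bounds=bounds)

# Loewe additivity: for a combination (d_A, d_B) producing effect y, the
# combination index is CI = d_A / D_A(y) + d_B / D_B(y); CI < 1 suggests
# synergy, CI near 1 additivity, CI > 1 antagonism.
d_A, d_B, y_combo = 0.3, 0.5, 0.40           # hypothetical combination result
CI = d_A / dose_for_effect(pA, y_combo) + d_B / dose_for_effect(pB, y_combo)
print("IC50_A = %.2f, IC50_B = %.2f, combination index = %.2f" % (pA[2], pB[2], CI))
```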

20.
Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
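A minimal sketch of how a Bayesian learner can separate two of the three uncertainty levels for a single Bernoulli arm, assuming a Beta-Bernoulli model: the posterior mean gives an estimate of the irreducible risk p(1 - p), and the posterior variance quantifies estimation uncertainty (ambiguity). Unexpected uncertainty (sudden jumps in the payoff probabilities) is not handled here and would require, for example, a change-point or forgetting mechanism.

```python
import numpy as np

class BetaBernoulliArm:
    """Bayesian tracking of one bandit arm's payoff probability with a
    Beta(a, b) posterior, plus simple summaries of two kinds of uncertainty."""

    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b              # Beta(1, 1) = uniform prior

    def update(self, reward):              # reward is 0 or 1
        self.a += reward
        self.b += 1 - reward

    @property
    def mean(self):                        # posterior mean of the payoff probability
        return self.a / (self.a + self.b)

    @property
    def risk(self):                        # irreducible outcome uncertainty at the mean
        return self.mean * (1.0 - self.mean)

    @property
    def estimation_uncertainty(self):      # posterior variance of the probability itself
        n = self.a + self.b
        return self.a * self.b / (n ** 2 * (n + 1.0))

rng = np.random.default_rng(9)
arm = BetaBernoulliArm()
for _ in range(50):
    arm.update(int(rng.uniform() < 0.3))   # true payoff probability 0.3
print("mean:", arm.mean, "risk:", arm.risk,
      "estimation uncertainty:", arm.estimation_uncertainty)
```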
