Similar Articles
1.
Testing Bayesian point null hypotheses on variance component models has proven to be a difficult task for which no clear and generally accepted method exists. In this work we present what we believe is a successful approach to this task. It is based on a simple reparameterization of the model in terms of the total variance and the proportion of the additive genetic variance relative to it, as well as on the explicit inclusion in the prior of a discrete probability component at the origin. The reparameterization bypasses an arbitrariness related to the impropriety of uninformative priors on unbounded variables, while the discrete component overcomes the zero probability that the usual continuous-variable models assign to sets of null measure. The method was tested in computer simulations with appealing results.
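As a rough illustration of the idea, the sketch below reparameterizes a one-way random-effects model in terms of the total variance and the proportion h of variance due to the random effect, places prior probability 0.5 on the point null h = 0, and estimates the posterior probability of the null by Monte Carlo averaging of the likelihood. Everything here (the balanced design, the inverse-gamma prior on the total variance, the uniform slab) is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a one-way random-effects data set: y_ij = u_i + e_ij, mean zero.
n_groups, n_per = 20, 5
u = rng.normal(0.0, np.sqrt(0.5), n_groups)
y = u[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))

def log_lik(y, s2_tot, h):
    """Log-likelihood under the (total variance, proportion) parameterization:
    sigma2_u = h * s2_tot, sigma2_e = (1 - h) * s2_tot."""
    s2u, s2e = h * s2_tot, (1.0 - h) * s2_tot
    n = y.shape[1]
    lam = s2e + n * s2u                      # eigenvalue along the group mean
    ybar = y.mean(axis=1)
    ssw = ((y - ybar[:, None]) ** 2).sum()   # within-group sum of squares
    return -0.5 * (y.size * np.log(2 * np.pi)
                   + n_groups * ((n - 1) * np.log(s2e) + np.log(lam))
                   + ssw / s2e + n * (ybar ** 2).sum() / lam)

def log_mean_exp(v):
    m = v.max()
    return m + np.log(np.mean(np.exp(v - m)))

# Prior: P(h = 0) = 0.5 (the discrete component at the origin), otherwise
# h ~ Uniform(0, 1); total variance s2_tot ~ Inverse-Gamma(2, 2).
pi0, n_draws = 0.5, 20_000
s2 = 1.0 / rng.gamma(2.0, 1.0 / 2.0, n_draws)
h = rng.uniform(0.0, 1.0, n_draws)

log_m0 = log_mean_exp(np.array([log_lik(y, s, 0.0) for s in s2]))
log_m1 = log_mean_exp(np.array([log_lik(y, s, hh) for s, hh in zip(s2, h)]))
post_h0 = pi0 / (pi0 + (1 - pi0) * np.exp(log_m1 - log_m0))
print(f"posterior probability of the point null h = 0: {post_h0:.3f}")
```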

2.
Consider a sample of animal abundances collected on a single sampling occasion. Our focus is on estimating the number of species in a closed population. In order to conduct noninformative Bayesian inference when modeling these data, we derive Jeffreys and reference priors from the full likelihood. We assume that the species' abundances are randomly distributed according to a distribution indexed by a finite-dimensional parameter. We consider two specific cases, which assume that the mean abundances are constant or exponentially distributed. The Jeffreys and reference priors are functions of the Fisher information for the model parameters; the information is calculated in part using the linear difference score for integer-parameter models (Lindsay & Roeder 1987). The Jeffreys and reference priors perform similarly in a data example we consider, and the posteriors based on both priors are proper.
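The Jeffreys recipe itself, a prior proportional to the square root of the Fisher information, is easy to sketch on a toy model. The snippet below checks a Monte Carlo estimate of the information against the exact value 1/lambda for a plain Poisson(lambda) abundance model; the paper's full species-richness likelihood and the Lindsay & Roeder difference score are considerably more involved:

```python
import numpy as np

def fisher_info_poisson(lam, n_mc=200_000, seed=1):
    """Monte Carlo estimate of the Fisher information E[score^2] for
    X ~ Poisson(lam); the score is d/dlam log p(x | lam) = x/lam - 1."""
    x = np.random.default_rng(seed).poisson(lam, n_mc)
    return np.mean((x / lam - 1.0) ** 2)

# Jeffreys prior: pi(lam) proportional to sqrt(I(lam)) = lam**(-1/2) here.
for lam in (0.5, 1.0, 4.0):
    print(f"lam={lam}: I_hat={fisher_info_poisson(lam):.4f}, "
          f"exact 1/lam={1 / lam:.4f}")
```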

3.
Expected-posterior prior distributions for model selection

4.
5.
Bayesian analyses for a multiple capture-recapture model
Smith, Philip J. Biometrika (1991), 78(2), 399–407.

6.
Bayesian analysis of a Poisson process with a change-point
Raftery, A. E.; Akman, V. E. Biometrika (1986), 73(1), 85–89.

7.
8.
9.
10.
Bayesian inference allows the transparent communication and systematic updating of model uncertainty as new data become available. When applied to material flow analysis (MFA), however, Bayesian inference is undermined by the difficulty of defining proper priors for the MFA parameters and of quantifying the noise in the collected data. We start to address these issues by first deriving and implementing an expert elicitation procedure suitable for generating MFA parameter priors. Second, we propose to learn the data noise concurrently with the parametric uncertainty. These methods are demonstrated using a case study on the 2012 US steel flow. Eight experts are interviewed to elicit distributions on steel flow uncertainty from raw materials to intermediate goods. The experts' distributions are combined and weighted according to the expertise demonstrated in their responses to seeding questions. These aggregated distributions form our model parameters' informative priors. Sensible, weakly informative priors are adopted for learning the data noise. Bayesian inference is then performed to update the parametric and data noise uncertainty given MFA data collected from the United States Geological Survey and the World Steel Association. The results show a reduction in MFA parametric uncertainty when incorporating the collected data. Only a modest reduction in data noise uncertainty was observed using 2012 data; however, greater reductions were achieved when using data from multiple years in the inference. These methods generate transparent MFA and data noise uncertainties learned from data rather than pre-assumed data noise levels, providing a more robust basis for decision-making that affects the system.
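A minimal sketch of one ingredient, the performance-weighted pooling of expert distributions into an informative prior, is given below. The expert means, standard deviations, and seeding-question scores are invented for illustration; the paper's elicitation and aggregation protocol is more elaborate:

```python
import numpy as np

# Each expert gives a (mean, sd) pair for a normal prior on one steel flow
# (Mt/yr); values and the three-expert setup are made up for illustration.
expert_priors = [(55.0, 8.0), (60.0, 5.0), (48.0, 10.0)]
# Calibration scores from seeding questions (higher = better) become weights.
scores = np.array([1.2, 2.5, 0.8])
w = scores / scores.sum()

def pooled_pdf(x):
    """Performance-weighted linear opinion pool of the experts' densities."""
    return sum(wi * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
               for (m, s), wi in zip(expert_priors, w))

for x in (40.0, 55.0, 70.0):
    print(f"pooled prior density at {x:.0f} Mt/yr: {pooled_pdf(x):.4f}")
```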

11.
12.
In Bayesian phylogenetics, confidence in evolutionary relationships is expressed as posterior probability: the probability that a tree or clade is true given the data, evolutionary model, and prior assumptions about model parameters. Model parameters, such as branch lengths, are never known in advance; Bayesian methods incorporate this uncertainty by integrating over a range of plausible values given an assumed prior probability distribution for each parameter. Little is known about the effects of integrating over branch length uncertainty on posterior probabilities when different priors are assumed. Here, we show that integrating over uncertainty using a wide range of typical prior assumptions strongly affects posterior probabilities, causing them to deviate from those that would be inferred if branch lengths were known in advance; only when there is no uncertainty to integrate over does the average posterior probability of a group of trees accurately predict the proportion of correct trees in the group. The pattern of branch lengths on the true tree determines whether integrating over uncertainty pushes posterior probabilities upward or downward. The magnitude of the effect depends on the specific prior distributions used and the length of the sequences analyzed. Under realistic conditions, however, even extraordinarily long sequences are not enough to prevent frequent inference of incorrect clades with strong support. We found that across a range of conditions, diffuse priors (either flat or exponential distributions with moderate to large means) provide more reliable inferences than small-mean exponential priors. An empirical Bayes approach that fixes branch lengths at their maximum likelihood estimates yields posterior probabilities that more closely match those that would be inferred if the true branch lengths were known in advance and reduces the rate of strongly supported false inferences compared with fully Bayesian integration.
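The core sensitivity can be reproduced on a toy example. The sketch below integrates a one-branch Jukes-Cantor likelihood over exponential branch-length priors with different means and compares this with the empirical Bayes alternative of fixing the branch length at its maximum likelihood estimate; the site counts and prior means are made up, and a real phylogenetic analysis involves many branches and trees:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# One branch, Jukes-Cantor model: P(site matches) = 1/4 + (3/4)exp(-4t/3).
n, k = 100, 80   # sites, matching sites (illustrative counts)

def lik(t):
    p = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    return p ** k * (1 - p) ** (n - k)

# Marginal likelihood under exponential branch-length priors of varying mean.
for mean in (0.01, 0.1, 1.0, 10.0):
    rate = 1.0 / mean
    m, _ = quad(lambda t: lik(t) * rate * np.exp(-rate * t), 0, np.inf)
    print(f"prior mean {mean:5.2f}: marginal likelihood = {m:.3e}")

# Empirical Bayes alternative: fix t at its MLE instead of integrating.
t_mle = minimize_scalar(lambda t: -lik(t), bounds=(1e-6, 10.0),
                        method="bounded").x
print(f"t_MLE = {t_mle:.3f}, likelihood at MLE = {lik(t_mle):.3e}")
```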

13.
14.
Recent advances in sequencing and genotyping technologies are contributing to a data revolution in genome-wide association studies that is characterized by the challenging "large p, small n" problem in statistics. That is, given these advances, many such studies now consider evaluating an extremely large number of genetic markers (p) genotyped on a small number of subjects (n). Given the dimension of the data, a joint analysis of the markers is often fraught with many challenges, while a marginal analysis is not sufficient. To overcome these obstacles, herein we propose a Bayesian two-phase methodology that can be used to jointly relate genetic markers to binary traits while controlling for confounding. The first phase of our approach makes use of a marginal scan to identify a reduced set of candidate markers that are then evaluated jointly via a hierarchical model in the second phase. Final marker selection is accomplished by identifying a sparse estimator via a novel and computationally efficient maximum a posteriori estimation technique. We evaluate the performance of the proposed approach through extensive numerical studies, and consider a genome-wide application involving colorectal cancer.
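A stripped-down version of the two-phase idea can be sketched as follows: a cheap marginal scan shortlists markers, and a joint L1-penalized logistic fit (the MAP estimate under a Laplace prior) performs the final sparse selection. The simulated genotypes, screening threshold, and penalty strength are illustrative, and the paper's hierarchical model and MAP algorithm differ in detail:

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 2000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # genotype counts 0/1/2
beta_true = np.zeros(p)
beta_true[:5] = 1.0                                    # 5 causal markers
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta_true - 2.0))))

# Phase 1: marginal scan (a cheap per-marker two-sample t-test screen).
pvals = np.array([stats.ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue
                  for j in range(p)])
keep = np.where(pvals < 0.01)[0]

# Phase 2: joint sparse MAP fit on the shortlist (L1 = Laplace prior).
fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
fit.fit(X[:, keep], y)
selected = keep[np.abs(fit.coef_[0]) > 1e-8]
print("shortlisted:", len(keep), "finally selected:", selected[:10])
```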

15.
Incorporating historical information into the design and analysis of a new clinical trial has been the subject of much recent discussion. For example, in the context of clinical trials of antibiotics for drug-resistant infections, where patients with specific infections can be difficult to recruit, there is often only limited and heterogeneous information available from the historical trials. To make the best use of the combined information at hand, we consider an approach based on the multiple power prior that allows the prior weight of each historical study to be chosen adaptively by empirical Bayes. This choice of weight has the advantage that it varies commensurately with differences between the historical and current data, and it can choose weights near 1 if the data from the corresponding historical study are similar enough to the data from the current study. Fully Bayesian approaches are also considered. The methods are applied to data from antibiotics trials. An analysis of the operating characteristics in a binomial setting shows that the proposed empirical Bayes adaptive method works well compared to several alternative approaches, including the meta-analytic prior.
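In the conjugate binomial setting, the empirical Bayes weight for a single historical study has a simple form: choose the power-prior exponent a0 in [0, 1] that maximizes the marginal likelihood of the current data. The sketch below does exactly this with made-up counts and a Beta(1, 1) initial prior:

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize_scalar

x_h, n_h = 30, 100   # historical successes / trials (illustrative)
x_c, n_c = 45, 100   # current successes / trials (illustrative)

def log_marginal(a0):
    """Log marginal likelihood of the current data under the power prior:
    Beta(1 + a0*x_h, 1 + a0*(n_h - x_h)); beta-binomial, constants dropped."""
    a = 1.0 + a0 * x_h
    b = 1.0 + a0 * (n_h - x_h)
    return betaln(a + x_c, b + n_c - x_c) - betaln(a, b)

res = minimize_scalar(lambda a0: -log_marginal(a0),
                      bounds=(0.0, 1.0), method="bounded")
print(f"empirical Bayes power-prior weight a0 = {res.x:.3f}")
```

With the historical rate (30%) far from the current one (45%), the maximized weight lands near 0, illustrating the "commensurate" behavior described in the abstract.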

16.
17.
The taxonomy and evolutionary species boundaries in a global collection of Cercospora isolates from Beta vulgaris were investigated based on sequences of six loci. Species boundaries were assessed using concatenated multi-locus phylogenies, the Generalized Mixed Yule Coalescent (GMYC) and Poisson Tree Processes (PTP) methods, and the Bayes factor delimitation (BFD) framework. Cercospora beticola was confirmed as the primary cause of Cercospora leaf spot (CLS) on B. vulgaris. Cercospora apii, C. cf. flagellaris, Cercospora sp. G, and C. zebrina were also identified in association with CLS on B. vulgaris. Cercospora apii and C. cf. flagellaris were pathogenic to table beet, but Cercospora sp. G and C. zebrina did not cause disease. Genealogical concordance phylogenetic species recognition and the GMYC and PTP methods failed to differentiate C. apii and C. beticola as separate species. On the other hand, multi-species coalescent analysis based on BFD supported separation of C. apii and C. beticola into distinct species and provided evidence of evolutionarily independent lineages within C. beticola. Extensive intra- and intergenic recombination, incomplete lineage sorting, and the dominance of clonal reproduction complicate evolutionary species recognition in the genus Cercospora. The results warrant morphological and phylogenetic studies to disentangle cryptic speciation within C. beticola.

18.
Shrinkage Estimators for Covariance Matrices
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable, with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure. We illustrate our approach on a sleep EEG study that requires estimation of a 24 × 24 covariance matrix and for which inferences on mean parameters critically depend on the covariance estimator chosen. We recommend making inference using a particular shrinkage estimator that provides a reasonable compromise between structured and unstructured estimators.
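The second shrinkage approach (pulling an unstructured estimator toward a structured target) is easy to sketch. Below, a singular sample covariance (n < p) is shrunk toward its diagonal, with the weight chosen by held-out Gaussian log-likelihood; the target, the train/test split, and the selection criterion are simplifications of the data-driven choices in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 24, 30
A = rng.normal(size=(p, p))
true_cov = A @ A.T / p + np.eye(p)
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

def gauss_loglik(Z, S):
    """Zero-mean Gaussian log-likelihood of the rows of Z under covariance S
    (the data above are simulated with mean zero)."""
    sign, logdet = np.linalg.slogdet(S)
    quad = np.einsum("ij,jk,ik->i", Z, np.linalg.inv(S), Z).sum()
    return -0.5 * (Z.shape[0] * (p * np.log(2 * np.pi) + logdet) + quad)

train, test = X[:20], X[20:]
S = np.cov(train, rowvar=False)   # unstructured estimate (singular: n < p)
target = np.diag(np.diag(S))      # structured target: independence

# Pick the shrinkage weight by held-out log-likelihood (lam > 0 keeps the
# blended matrix positive definite even though S itself is singular).
grid = np.linspace(0.05, 1.0, 20)
best = max(grid, key=lambda lam: gauss_loglik(test, (1 - lam) * S + lam * target))
print(f"chosen shrinkage weight toward the diagonal target: {best:.2f}")
```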

19.
Qu, P.; Qu, Y. Biometrics (2000), 56(4), 1249–1255.
After continued treatment with an insecticide, resistant strains will arise within the population of susceptible insects. It is important to know whether there are any resistant strains, what their proportions are, and what the median lethal doses of the insecticide are for them. Lwin and Martin (1989, Biometrics 45, 721–732) propose a probit mixture model and use the EM algorithm to obtain maximum likelihood estimates of the parameters. This approach has difficulties in estimating confidence intervals and in testing the number of components. We propose a Bayesian approach that obtains credible intervals for the location and scale of the tolerances in each component and for the mixture proportions by using data augmentation and the Gibbs sampler. We use Bayes factors for model selection and for determining the number of components. We illustrate the method with the data published in Lwin and Martin (1989).
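A one-component version of the Gibbs sampler with data augmentation (in the style of Albert and Chib's probit scheme; the paper extends this to a mixture of probits) can be sketched as follows. The dose-response data, flat prior on the coefficients, and chain length are all illustrative:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulated dose-response data (all values illustrative).
dose = np.log10(np.repeat([1.0, 2.0, 4.0, 8.0, 16.0], 20))
X = np.column_stack([np.ones_like(dose), dose])
y = (X @ np.array([-1.0, 2.0]) + rng.normal(size=len(dose)) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
beta, draws = np.zeros(2), []
for it in range(2000):
    mu = X @ beta
    # Data augmentation: latent tolerances, truncated to match each response.
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=len(y), random_state=rng)
    # beta | z is normal under a flat prior (unit latent variance).
    beta = rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)
    if it >= 500:                 # discard burn-in
        draws.append(beta)

draws = np.array(draws)
print("posterior mean of beta:", draws.mean(axis=0))
# Median lethal dose on the log10 scale solves beta0 + beta1*d = 0.
print("LD50 (log10 scale):", np.median(-draws[:, 0] / draws[:, 1]))
```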

20.
In randomized studies with missing outcomes, non-identifiable assumptions are required to hold for valid data analysis. As a result, statisticians have advocated the use of sensitivity analysis to evaluate the effect of varying these assumptions on study conclusions. While this approach may be useful in assessing the sensitivity of treatment comparisons to missing-data assumptions, it may be dissatisfying to some researchers and decision makers because no single summary is provided. In this paper, we present a fully Bayesian methodology that allows the investigator to draw a single conclusion by formally incorporating prior beliefs about non-identifiable, yet interpretable, selection-bias parameters. Our Bayesian model provides robustness to the prior specification of the distributional form of the continuous outcomes.
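The flavor of the approach can be sketched with a normal-outcome toy example: place a prior on a non-identifiable selection-bias parameter (here, the mean shift of the missing outcomes relative to the observed ones in each arm) and integrate it out to obtain a single posterior summary. The data, missingness counts, and N(0, 1) bias prior below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs_t = rng.normal(1.0, 1.0, 80)   # observed outcomes, treatment arm
y_obs_c = rng.normal(0.3, 1.0, 90)   # observed outcomes, control arm
n_mis_t, n_mis_c = 20, 10            # missing outcomes per arm

def arm_mean_draws(y, n_mis, n_draws=5000):
    """Posterior draws of the arm mean, mixing over a selection-bias prior."""
    n_obs = len(y)
    # Approximate normal posterior for the observed-data mean (flat prior).
    mu_obs = rng.normal(y.mean(), y.std(ddof=1) / np.sqrt(n_obs), n_draws)
    delta = rng.normal(0.0, 1.0, n_draws)   # prior on the selection bias
    # Overall mean is a missingness-weighted mix of observed and missing.
    w = n_mis / (n_obs + n_mis)
    return (1 - w) * mu_obs + w * (mu_obs + delta)

effect = arm_mean_draws(y_obs_t, n_mis_t) - arm_mean_draws(y_obs_c, n_mis_c)
print(f"treatment effect: mean {effect.mean():.2f}, 95% interval "
      f"({np.quantile(effect, 0.025):.2f}, {np.quantile(effect, 0.975):.2f})")
```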
