Similar Documents
20 similar documents found.
1.
Kottas A, Branco MD, Gelfand AE. Biometrics 2002, 58(3):593-600
In cytogenetic dosimetry, samples of cell cultures are exposed to a range of doses of a given agent. In each sample at each dose level, some measure of cell disability is recorded. The objective is to develop models that explain cell response to dose. Such models can be used to predict response at unobserved doses. More importantly, such models can provide inference for unknown exposure doses given the observed responses. Typically, cell disability is viewed as a Poisson count, but in the present work, a more appropriate response is a categorical classification. In the literature, modeling in this case is very limited; what exists is purely parametric. We propose a fully Bayesian nonparametric approach to this problem. We offer comparison with a parametric model through a simulation study and the analysis of a real dataset modeling blood cultures exposed to radiation, where classification is with regard to the number of micronuclei per cell.
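Below is a minimal sketch of the inverse-dose (calibration) inference described above, using a deliberately simplified parametric stand-in rather than the paper's Bayesian nonparametric model: a proportional-odds model links dose to a three-category damage classification, and a grid posterior is computed for an unknown exposure dose. The cutpoints, dose effect, and counts are all hypothetical.

```python
import numpy as np
from scipy.special import expit

cuts = np.array([1.0, 2.5])   # assumed ordinal cutpoints (fitted in practice)
slope = 0.8                   # assumed dose effect (fitted in practice)

def category_probs(dose):
    """P(damage category k | dose) under a proportional-odds model."""
    cdf = expit(cuts - slope * dose)                       # P(Y <= k | dose)
    return np.diff(np.concatenate(([0.0], cdf, [1.0])))

# Observed counts in the 3 damage categories for a sample of unknown dose.
counts = np.array([5, 30, 65])

# Grid posterior over the unknown dose with a flat prior on [0, 10].
dose_grid = np.linspace(0.0, 10.0, 501)
log_post = np.array([np.sum(counts * np.log(category_probs(d) + 1e-12))
                     for d in dose_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean dose:", np.sum(dose_grid * post))
```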

2.
Choroid plexuses are vascular structures located in the brain ventricles, showing specific uptake of some diagnostic and therapeutic radiopharmaceuticals currently under clinical investigation, such as integrin-binding arginine-glycine-aspartic acid (RGD) peptides. No specific geometry for choroid plexuses has been implemented in commercially available software for internal dosimetry. The aims of the present study were to assess the dependence of absorbed dose to the choroid plexuses on the organ geometry implemented in Monte Carlo simulations, and to propose an analytical model for the internal dosimetry of these structures for 18F, 64Cu, 67Cu, 68Ga, 90Y, 131I and 177Lu nuclides. A GAMOS Monte Carlo simulation based on direct organ segmentation was taken as the gold standard to validate a second simulation based on a simplified geometrical model of the choroid plexuses. Both simulations were compared with the OLINDA/EXM sphere model. The gold standard and the simplified geometrical model gave similar dosimetry results (dose difference < 3.5%), indicating that the latter can be considered a satisfactory approximation of the real geometry. In contrast, the sphere model systematically overestimated the absorbed dose compared to both Monte Carlo models (range: 4–50% dose difference), depending on the isotope energy and organ mass. Therefore, the simplified geometrical model was adopted to introduce an analytical approach for choroid plexuses dosimetry in the mass range 2–16 g. The proposed model enables the estimation of the choroid plexuses dose by a simple bi-parametric function, once the organ mass and the residence time of the radiopharmaceutical under investigation are provided.
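As a rough illustration of the bi-parametric analytical approach described above, the sketch below assumes a MIRD-style relation in which absorbed dose equals residence time multiplied by a mass-dependent S-value, with a hypothetical power-law fit S(m) = a·m^b. The coefficients and units are placeholders, not the paper's fitted, nuclide-specific values.

```python
# A minimal sketch, assuming a power-law S-value; not the published fit.
def choroid_plexus_dose(residence_time_h, mass_g, a=0.12, b=-0.95):
    """Absorbed dose per unit administered activity for organ mass in 2-16 g.

    S(m) = a * m**b is an assumed power-law form; in the actual model a and b
    would be tabulated per nuclide (18F, 64Cu, 67Cu, 68Ga, 90Y, 131I, 177Lu).
    """
    s_value = a * mass_g**b          # hypothetical Gy/(GBq*h)
    return residence_time_h * s_value

print(choroid_plexus_dose(residence_time_h=2.5, mass_g=8.0))
```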

3.
Model averaging is gaining popularity among ecologists for making inference and predictions. Methods for combining models include Bayesian model averaging (BMA) and Akaike's Information Criterion (AIC) model averaging. BMA can be implemented with different prior model weights, including the Kullback–Leibler prior associated with AIC model averaging, but it is unclear how the prior model weight affects model results in a predictive context. Here, we implemented BMA using the Bayesian Information Criterion (BIC) approximation to Bayes factors for building predictive models of bird abundance and occurrence in the Chihuahuan Desert of New Mexico. We examined how model predictive ability differed across four prior model weights, and how averaged coefficient estimates, standard errors and coefficients' posterior probabilities varied for 16 bird species. We also compared the predictive ability of BMA models to a best single-model approach. Overall, Occam's prior of parsimony provided the best predictive models; the Kullback–Leibler prior, in contrast, generally favored complex models of lower predictive ability. BMA performed better than a best single-model approach independently of the prior model weight for 6 out of 16 species. For 6 other species, the choice of the prior model weight affected whether BMA was better than the best single-model approach. Our results demonstrate that parsimonious priors may be favorable over priors that favor complexity for making predictions. The approach we present has direct applications in ecology for better predicting patterns of species' abundance and occurrence.
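The sketch below illustrates the BIC-based model averaging described above: exp(−BIC/2) approximates each model's marginal likelihood, so posterior model weights follow from the BIC values combined with a prior model weight. The BIC values, parameter counts, and the particular parsimony prior (halving a model's prior weight per extra parameter) are illustrative assumptions, not the study's actual choices.

```python
import numpy as np

bic = np.array([210.3, 212.1, 209.8, 215.0])   # hypothetical BICs for 4 models
k = np.array([2, 4, 3, 6])                     # parameters per model (assumed)

def bma_weights(bic, log_prior):
    """Posterior model weights from BIC values and log prior model weights."""
    log_w = -0.5 * (bic - bic.min()) + log_prior   # exp(-BIC/2) x prior
    w = np.exp(log_w - log_w.max())
    return w / w.sum()

uniform = np.zeros(len(bic))        # flat prior over models
parsimony = -k * np.log(2.0)        # one simple Occam-style prior
print("uniform prior:  ", bma_weights(bic, uniform).round(3))
print("parsimony prior:", bma_weights(bic, parsimony).round(3))
```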

4.
The recent development of Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) techniques has facilitated the exploration of parameter-rich evolutionary models. At the same time, stochastic models have become more realistic (and complex) and have been extended to new types of data, such as morphology. Based on this foundation, we developed a Bayesian MCMC approach to the analysis of combined data sets and explored its utility in inferring relationships among gall wasps based on data from morphology and four genes (nuclear and mitochondrial, ribosomal and protein coding). Examined models range in complexity from those recognizing only a morphological and a molecular partition to those having complex substitution models with independent parameters for each gene. Bayesian MCMC analysis deals efficiently with complex models: convergence occurs faster and more predictably for complex models, mixing is adequate for all parameters even under very complex models, and the parameter update cycle is virtually unaffected by model partitioning across sites. Morphology contributed only 5% of the characters in the data set but nevertheless influenced the combined-data tree, supporting the utility of morphological data in multigene analyses. We used Bayesian criteria (Bayes factors) to show that process heterogeneity across data partitions is a significant model component, although not as important as among-site rate variation. More complex evolutionary models are associated with more topological uncertainty and less conflict between morphology and molecules. Bayes factors sometimes favor simpler models over considerably more parameter-rich models, but the best model overall is also the most complex and Bayes factors do not support exclusion of apparently weak parameters from this model. Thus, Bayes factors appear to be useful for selecting among complex models, but it is still unclear whether their use strikes a reasonable balance between model complexity and error in parameter estimates.

5.
A popular approach to detecting positive selection is to estimate the parameters of a probabilistic model of codon evolution and perform inference based on its maximum likelihood parameter values. This approach has been evaluated intensively in a number of simulation studies and found to be robust when the available data set is large. However, uncertainties in the estimated parameter values can lead to errors in the inference, especially when the data set is small or there is insufficient divergence between the sequences. We introduce a Bayesian model comparison approach to infer whether the sequence as a whole contains sites at which the rate of nonsynonymous substitution is greater than the rate of synonymous substitution. We incorporated this probabilistic model comparison into a Bayesian approach to site-specific inference of positive selection. Using simulated sequences, we compared this approach to the commonly used empirical Bayes approach and investigated the effect of tree length on the performance of both methods. We found that the Bayesian approach outperforms the empirical Bayes method when the amount of sequence divergence is small and is less prone to false-positive inference when the sequences are saturated, while the results are indistinguishable for intermediate levels of sequence divergence.

6.
Kneib T, Fahrmeir L. Biometrics 2006, 62(1):109-118
Motivated by a space-time study on forest health with damage state of trees as the response, we propose a general class of structured additive regression models for categorical responses, allowing for a flexible semiparametric predictor. Nonlinear effects of continuous covariates, time trends, and interactions between continuous covariates are modeled by penalized splines. Spatial effects can be estimated based on Markov random fields, Gaussian random fields, or two-dimensional penalized splines. We present our approach from a Bayesian perspective, with inference based on a categorical linear mixed model representation. The resulting empirical Bayes method is closely related to penalized likelihood estimation in a frequentist setting. Variance components, corresponding to inverse smoothing parameters, are estimated using (approximate) restricted maximum likelihood. In simulation studies we investigate the performance of different choices for the spatial effect, compare the empirical Bayes approach to competing methodology, and study the bias of mixed model estimates. As an application we analyze data from the forest health survey.

7.
Bayesian methods for estimating dose response curves from linearized multi-stage models in quantal bioassay are studied. A Gibbs sampling approach with data augmentation is employed to compute the Bayes estimates. In addition, estimation of the "relative additional risk" and the "risk specific dose" is studied. Model selection based on conditional predictive ordinates from cross-validated data is developed. Model adequacy is addressed by means of a posterior predictive tail-area test.
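As an illustration of Gibbs sampling with data augmentation in quantal bioassay, the sketch below uses the classic Albert–Chib probit dose–response model as a stand-in; the paper's linearized multi-stage model is more involved. The design, true coefficients, and sample sizes are invented.

```python
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(0)
dose = np.repeat([0.0, 0.5, 1.0, 2.0, 4.0], 20)            # hypothetical design
X = np.column_stack([np.ones_like(dose), dose])
y = rng.binomial(1, norm.cdf(X @ np.array([-1.0, 0.6])))    # synthetic responses

V = np.linalg.inv(X.T @ X)        # posterior covariance under a flat prior
beta, draws = np.zeros(2), []
for _ in range(2000):
    # 1. Data augmentation: latent z_i ~ N(x_i'beta, 1), truncated by y_i.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)    # standardized truncation bounds
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=len(y), random_state=rng)
    # 2. Conjugate update: beta | z ~ N((X'X)^-1 X'z, (X'X)^-1).
    beta = rng.multivariate_normal(V @ X.T @ z, V)
    draws.append(beta)

print("posterior mean (intercept, slope):", np.mean(draws[500:], axis=0).round(2))
```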

8.
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size, thus lending a precise physiological meaning to activated regions; (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date, these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner.
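A minimal sketch of the Savage–Dickey density ratio underlying the SDT method: for a point null θ = 0 nested within the alternative, the Bayes factor in favor of the null equals the posterior density at θ = 0 divided by the prior density there. Gaussian prior and (approximately) Gaussian posterior are assumed, in the spirit of the voxel-wise Taylor approximations; the numbers are hypothetical.

```python
from scipy.stats import norm

def savage_dickey_bf01(post_mean, post_sd, prior_mean=0.0, prior_sd=1.0):
    """Bayes factor in favour of theta = 0 under Gaussian prior and posterior."""
    return norm.pdf(0.0, post_mean, post_sd) / norm.pdf(0.0, prior_mean, prior_sd)

# Hypothetical voxel: effect-size posterior N(0.8, 0.3^2), prior N(0, 1).
bf01 = savage_dickey_bf01(0.8, 0.3)
print(f"BF01 = {bf01:.4f}  ->  BF10 = {1 / bf01:.2f}")   # evidence for activation
```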

9.
Purpose: To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte-Carlo techniques. This new approach is applied to single-channel and multichannel algorithms.
Materials and methods: Two lots of Gafchromic EBT3 are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte-Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as the standard deviations and bias, are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images.
Results: The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy.
Conclusion: A multi-stage model has been presented. With this model and Monte-Carlo techniques, the uncertainty of dose estimates for single-channel and multichannel algorithms is estimated, leading to a complete characterization of the uncertainties in radiochromic film dosimetry.
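The sketch below illustrates the Monte-Carlo uncertainty technique in a single-channel setting: the inputs of an assumed dose–netOD calibration are drawn from hypothetical error distributions, each draw is pushed through the calibration, and the standard deviation and bias are read off the resulting dose sample. The calibration form and all numerical values are assumptions, not the study's fitted quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed single-channel calibration D(netOD) = a*netOD + b*netOD**c.
a, b, c = 10.0, 35.0, 2.5            # hypothetical fitted coefficients
a_s = rng.normal(a, 0.2, n)          # coefficient uncertainties (assumed)
b_s = rng.normal(b, 1.0, n)
c_s = rng.normal(c, 0.05, n)
net_od = rng.normal(0.25, 0.004, n)  # scanner reading noise (assumed)

dose = a_s * net_od + b_s * net_od**c_s          # propagate every draw
nominal = a * 0.25 + b * 0.25**c                 # noise-free reference dose
print(f"mean {dose.mean():.3f} Gy, sd {dose.std():.3f} Gy, "
      f"bias {dose.mean() - nominal:.4f} Gy")
```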

10.
Predictions of metal consumption are vital for criticality assessments and sustainability analyses. Although demand for a material varies strongly by region and end-use sector, statistical models of demand typically predict demand using regression analyses at an aggregated global level ("fully pooled models"). "Un-pooled" regression models that predict demand at a disaggregated country or regional level face challenges due to limited data availability and large uncertainty. In this paper, we propose a Bayesian hierarchical model that can simultaneously identify heterogeneous demand parameters (like price and income elasticities) for individual regions and sectors, as well as global parameters. We demonstrate the model's value by estimating income and price elasticity of copper demand in five sectors (Transportation, Electrical, Construction, Manufacturing, and Other) and five regions (North America, Europe, Japan, China, and Rest of World). To validate the benefits of the Bayesian approach, we compare the model to both a "fully pooled" and an "un-pooled" model. The Bayesian model can predict global demand with similar uncertainty as a fully pooled regression model, while additionally capturing regional heterogeneity in income elasticity of demand. Compared to un-pooled models that predict demand for individual countries and sectors separately, our model reduces the uncertainty of parameter estimates by more than 50%. The hierarchical Bayesian modeling approach we propose can be used for various commodities, improving material demand projections used to study the impact of policies on mining sector emissions and informing investment in critical material production.
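The sketch below illustrates the partial pooling at the heart of such a hierarchical model with a toy normal–normal Gibbs sampler: region-level income elasticities are drawn around a global mean, so regions with noisy un-pooled estimates are shrunk toward the pool. The elasticities, standard errors, and the fixed between-region spread are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical un-pooled elasticity estimates per region and standard errors.
beta_hat = np.array([0.9, 1.4, 0.7, 1.1, 1.0])   # N.Am, Eur, Japan, China, RoW
se = np.array([0.10, 0.30, 0.15, 0.08, 0.40])

tau = 0.2                 # assumed between-region sd; could itself be sampled
mu = 1.0
samples = []
for _ in range(5000):
    # beta_r | mu: precision-weighted compromise between data and global mean.
    prec = 1 / se**2 + 1 / tau**2
    mean = (beta_hat / se**2 + mu / tau**2) / prec
    beta = rng.normal(mean, 1 / np.sqrt(prec))
    # mu | beta: conjugate update under a flat prior on the global mean.
    mu = rng.normal(beta.mean(), tau / np.sqrt(len(beta)))
    samples.append(beta)

print("pooled (shrunken) elasticities:", np.mean(samples[1000:], axis=0).round(2))
```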

11.
Generalized relative and absolute risk models are fitted to the latest Japanese atomic bomb survivor solid cancer and leukemia mortality data (through 2000), with the latest (DS02) dosimetry, by classical (regression calibration) and Bayesian techniques, taking account of errors in dose estimates and other uncertainties. Linear-quadratic and linear-quadratic-exponential models are fitted and used to assess risks for contemporary populations of China, Japan, Puerto Rico, the U.S. and the UK. Many of these models are the same as or very similar to models used in the UNSCEAR 2006 report. For a test dose of 0.1 Sv, the solid cancer mortality for a UK population using the generalized linear-quadratic relative risk model is estimated as 5.4% Sv⁻¹ [90% Bayesian credible interval (BCI) 3.1, 8.0]. At 0.1 Sv, leukemia mortality for a UK population using the generalized linear-quadratic relative risk model is estimated as 0.50% Sv⁻¹ (90% BCI 0.11, 0.97). Risk estimates varied little between populations; at 0.1 Sv the central estimates ranged from 3.7 to 5.4% Sv⁻¹ for solid cancers and from 0.4 to 0.6% Sv⁻¹ for leukemia. Analyses using regression calibration techniques yield central estimates of risk very similar to those for the Bayesian approach. The central estimates of population risk were similar for the generalized absolute risk model and the relative risk model. Linear-quadratic-exponential models predict lower risks (at least at low test doses) and appear to fit as well, although for other (theoretical) reasons we favor the simpler linear-quadratic models.
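For concreteness, a minimal sketch of the generalized linear-quadratic dose-response form referred to above, with an optional exponential term giving the linear-quadratic-exponential variant: ERR(D) = (αD + βD²)·exp(γD). The coefficients below are illustrative placeholders, not the fitted values from the paper.

```python
import math

def err(dose_sv, alpha=0.50, beta=0.08, gamma=0.0):
    """Excess relative risk ERR(D) = (alpha*D + beta*D**2) * exp(gamma*D).

    gamma = 0 recovers the linear-quadratic model; gamma < 0 gives the
    linear-quadratic-exponential variant. Coefficients are illustrative.
    """
    return (alpha * dose_sv + beta * dose_sv**2) * math.exp(gamma * dose_sv)

for d in (0.01, 0.1, 1.0):
    print(f"ERR at {d} Sv: {err(d):.4f}")
```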

12.
A vast amount of ecological knowledge generated over the past two decades has hinged upon the ability of model selection methods to discriminate among various ecological hypotheses. The last decade has seen the rise of Bayesian hierarchical models in ecology. Consequently, commonly used tools, such as the AIC, become largely inapplicable and there appears to be no consensus about a particular model selection tool that can be universally applied. We focus on a specific class of competing Bayesian spatial capture–recapture (SCR) models and apply and evaluate some of the recommended Bayesian model selection tools: (1) Bayes Factor—using (a) Gelfand‐Dey and (b) harmonic mean methods, (2) Deviance Information Criterion (DIC), (3) Watanabe‐Akaike's Information Criterion (WAIC) and (4) posterior predictive loss criterion. In all, we evaluate 25 variants of model selection tools in our study. We evaluate these model selection tools from the standpoint of selecting the "true" model and parameter estimation. In all, we generate 120 simulated data sets using the true model and assess the frequency with which the true model is selected and how well the tool estimates N (population size), a parameter of much importance to ecologists. We find that when information content is low in the data, no particular model selection tool can be recommended to help realize, simultaneously, both the goals of model selection and parameter estimation. But, in general (when we consider both the objectives together), we recommend the use of our application of the Bayes Factor (Gelfand‐Dey with MAP approximation) for Bayesian SCR models. Our study highlights the point that although new model selection tools are emerging (e.g., WAIC) in the applied statistics literature, those tools based on sound theory even under approximation may still perform much better.
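A minimal sketch of the two marginal-likelihood estimators behind the Bayes Factor variants compared above: the harmonic mean of the likelihoods, and the Gelfand–Dey estimator, which stabilizes it with a tuning density g fitted to the posterior draws. The toy normal–normal check at the end has a known exact answer (about −1.328); the variable names and the Gaussian choice of g are our assumptions.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal, norm

def log_ml_harmonic_mean(loglik):
    """Harmonic-mean estimator of the log marginal likelihood (often unstable)."""
    return -(logsumexp(-loglik) - np.log(len(loglik)))

def log_ml_gelfand_dey(theta, loglik, logprior):
    """Gelfand-Dey estimator; theta is an (n_draws, n_params) array of draws."""
    cov = np.atleast_2d(np.cov(theta.T))
    g = multivariate_normal(theta.mean(axis=0), cov)     # tuning density
    log_ratio = g.logpdf(theta) - loglik - logprior      # g / (likelihood*prior)
    return -(logsumexp(log_ratio) - np.log(len(loglik)))

# Toy check: one observation y = 0.5, N(theta, 1) likelihood, N(0, 1) prior;
# the posterior is N(0.25, 0.5) and the exact log marginal likelihood ~ -1.328.
draws = np.random.default_rng(3).normal(0.25, np.sqrt(0.5), size=(20000, 1))
ll = norm.logpdf(0.5, draws[:, 0], 1.0)
lp = norm.logpdf(draws[:, 0], 0.0, 1.0)
print(log_ml_harmonic_mean(ll), log_ml_gelfand_dey(draws, ll, lp))
```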

13.
The application of mixed nucleotide/doublet substitution models has recently received attention in RNA‐based phylogenetics. Within a Bayesian approach, it was shown that mixed models outperformed analyses relying on simple nucleotide models. We analysed an mt RNA data set of dragonflies representing all major lineages of Anisoptera plus outgroups, using a mixed model in a Bayesian and parsimony (MP) approach. We used a published mt 16S rRNA secondary consensus structure model and inferred consensus models for the mt 12S rRNA and tRNA valine. Secondary structure information was used to set data partitions for paired and unpaired sites on which doublet or nucleotide models were applied, respectively. Several different doublet models are currently available of which we chose the most appropriate one by a Bayes factor test. The MP reconstructions relied on recoded data for paired sites in order to account for character covariance and an application of the ratchet strategy to find most parsimonious trees. Bayesian and parsimony reconstructions are partly differently resolved, indicating sensitivity of the reconstructions to model specification. Our analyses depict a tree in which the damselfly family Lestidae is sister group to a monophyletic clade Epiophlebia + Anisoptera, contradicting recent morphological and molecular work. In Bayesian analyses, we found a deep split between Libelluloidea and a clade ‘Aeshnoidea’ within Anisoptera largely congruent with Tillyard’s early ideas of anisopteran evolution, which had been based on evidently plesiomorphic character states. However, parsimony analysis did not support a clade ‘Aeshnoidea’, but instead, placed Gomphidae as sister taxon to Libelluloidea. Monophyly of Libelluloidea is only modestly supported, and many inter‐family relationships within Libelluloidea do not receive substantial support in Bayesian and parsimony analyses. We checked whether high Bayesian node support was inflated owing to either: (i) wrong secondary consensus structures; (ii) under‐sampling of the MCMC process, thereby missing other local maxima; or (iii) unrealistic prior assumptions on topologies or branch lengths. We found that different consensus structure models exert strong influence on the reconstruction, which demonstrates the importance of taxon‐specific realistic secondary structure models in RNA phylogenetics.

14.
Surveillance of drug products in the marketplace continues after approval, to identify rare potential toxicities that are unlikely to have been observed in the clinical trials carried out before approval. This surveillance accumulates large numbers of spontaneous reports of adverse events along with other information in spontaneous report databases. Recently developed empirical Bayes and Bayes methods provide a way to summarize the data in these databases, including a quantitative measure of the strength of the reporting association between the drugs and the events. Determining which of the particular drug-event associations, of which there may be many tens of thousands, are real reporting associations and which are random noise presents a substantial problem of multiplicity, because the resources available for medical and epidemiologic follow-up are limited. The issues are similar to those encountered with the evaluation of microarrays, but there are important differences. This report compares a standard empirical Bayes approach, microarray-inspired methods for controlling the False Discovery Rate, and a new Bayesian method for resolving the multiplicity problem, applying them to a relatively small database containing about 48,000 reports. The Bayesian approach appears to have attractive diagnostic properties in addition to being easy to interpret and implement computationally.
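As a point of comparison, the sketch below implements the microarray-inspired false-discovery-rate control mentioned above, the Benjamini–Hochberg step-up procedure, on hypothetical p-values for drug-event associations; the database's actual scores are not reproduced.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of discoveries at FDR level q (BH step-up)."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)   # q*k/m for rank k
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(len(p), dtype=bool)
    mask[order[:k]] = True                           # reject the k smallest
    return mask

pvals = [0.0002, 0.009, 0.012, 0.041, 0.06, 0.20, 0.74]   # hypothetical
print(benjamini_hochberg(pvals, q=0.05))
```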

15.
Bayesian inference in ecology
Bayesian inference is an important statistical tool that is increasingly being used by ecologists. In a Bayesian analysis, information available before a study is conducted is summarized in a quantitative model or hypothesis: the prior probability distribution. Bayes’ Theorem uses the prior probability distribution and the likelihood of the data to generate a posterior probability distribution. Posterior probability distributions are an epistemological alternative to P‐values and provide a direct measure of the degree of belief that can be placed on models, hypotheses, or parameter estimates. Moreover, Bayesian information‐theoretic methods provide robust measures of the probability of alternative models, and multiple models can be averaged into a single model that reflects uncertainty in model construction and selection. These methods are demonstrated through a simple worked example. Ecologists are using Bayesian inference in studies that range from predicting single‐species population dynamics to understanding ecosystem processes. Not all ecologists, however, appreciate the philosophical underpinnings of Bayesian inference. In particular, Bayesians and frequentists differ in their definition of probability and in their treatment of model parameters as random variables or estimates of true values. These assumptions must be addressed explicitly before deciding whether or not to use Bayesian methods to analyse ecological data.
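In the spirit of the simple worked example mentioned above, the sketch below applies Bayes’ Theorem in conjugate form: a Beta prior on a site-occupancy probability combined with binomial detection data yields a Beta posterior, from which a posterior mean and credible interval are read directly. All numbers are invented.

```python
from scipy.stats import beta

prior_a, prior_b = 2, 2          # prior belief about site-occupancy probability
detections, surveys = 7, 10      # hypothetical field data

# Beta prior x binomial likelihood -> Beta posterior (conjugacy).
post = beta(prior_a + detections, prior_b + surveys - detections)
print(f"posterior mean {post.mean():.3f}, "
      f"95% credible interval ({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```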

16.
We propose a Bayesian method for testing molecular clock hypotheses for use with aligned sequence data from multiple taxa. Our method utilizes a nonreversible nucleotide substitution model to avoid the necessity of specifying either a known tree relating the taxa or an outgroup for rooting the tree. We employ reversible jump Markov chain Monte Carlo to sample from the posterior distribution of the phylogenetic model parameters and conduct hypothesis testing using Bayes factors, the ratio of the posterior to prior odds of competing models. Here, the Bayes factors reflect the relative support of the sequence data for equal rates of evolutionary change between taxa versus unequal rates, averaged over all possible phylogenetic parameters, including the tree and root position. As the molecular clock model is a restriction of the more general unequal rates model, we use the Savage-Dickey ratio to estimate the Bayes factors. The Savage-Dickey ratio provides a convenient approach to calculating Bayes factors in favor of sharp hypotheses. Critical to calculating the Savage-Dickey ratio is a determination of the prior induced on the modeling restrictions. We demonstrate our method on a well-studied mtDNA sequence data set consisting of nine primates. We find strong support against a global molecular clock, but do find support for a local clock among the anthropoids. We provide mathematical derivations of the induced priors on branch length restrictions assuming equally likely trees. These derivations also have more general applicability to the examination of prior assumptions in Bayesian phylogenetics.

17.
Codon-based substitution models are routinely used to measure selective pressures acting on protein-coding genes. To this effect, the nonsynonymous to synonymous rate ratio (dN/dS = omega) is estimated. The proportion of amino-acid sites potentially under positive selection, as indicated by omega > 1, is inferred by fitting a probability distribution where some sites are permitted to have omega > 1. These sites are then inferred by means of an empirical Bayes or by a Bayes empirical Bayes approach that, respectively, ignores or accounts for sampling errors in maximum-likelihood estimates of the distribution used to infer the proportion of sites with omega > 1. Here, we extend a previous full-Bayes approach to include models with high power and low false-positive rates when inferring sites under positive selection. We propose some heuristics to alleviate the computational burden, and show that (i) full Bayes can be superior to empirical Bayes when analyzing a small data set or small simulated data, (ii) full Bayes has only a small advantage over Bayes empirical Bayes with our small test data, and (iii) Bayesian methods appear relatively insensitive to mild misspecifications of the random process generating adaptive evolution in our simulations, but in practice can prove extremely sensitive to model specification. We suggest that the codon model used to detect amino acids under selection should be carefully selected, for instance using Akaike information criterion (AIC).
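A minimal sketch of the (naive) empirical Bayes step discussed above: given fitted omega classes with their proportions and per-site likelihoods under each class, Bayes' rule gives each site's posterior probability of omega > 1. The class values, weights, and site likelihoods are hypothetical, and the sampling-error correction of Bayes empirical Bayes is deliberately omitted.

```python
import numpy as np

omega = np.array([0.1, 1.0, 3.2])        # fitted omega classes (assumed)
weights = np.array([0.70, 0.25, 0.05])   # fitted class proportions (assumed)
# Per-site likelihoods P(data_s | omega_k), shape (n_sites, n_classes):
site_lik = np.array([[2e-4, 1e-4, 5e-6],
                     [1e-5, 4e-5, 9e-5]])

post = weights * site_lik                        # Bayes' rule per site
post /= post.sum(axis=1, keepdims=True)
p_positive = post[:, omega > 1].sum(axis=1)      # mass on omega > 1 classes
print("P(omega > 1 | data) per site:", p_positive.round(3))
```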

18.
Grieve AP. Biometrics 1985, 41(4):979-990
Statisticians have been critical of the use of the two-period crossover designs for clinical trials because the estimate of the treatment difference is biased when the carryover effects of the two treatments are not equal. In the standard approach, if the null hypothesis of equal carryover effects is not rejected, data from both periods are used to estimate and test for treatment differences; if the null hypothesis is rejected, data from the first period alone are used. A Bayesian analysis based on the Bayes factor against unequal carryover effects is given. Although this Bayesian approach avoids the "all-or-nothing" decision inherent in the standard approach, it recognizes that with small trials it is difficult to provide unequivocal evidence that the carryover effects of the two treatments are equal, and thus that the interpretation of the difference between treatment effects is highly dependent on a subjective assessment of the reality or not of equal carryover effects.

19.
A common problem in molecular phylogenetics is choosing a model of DNA substitution that does a good job of explaining the DNA sequence alignment without introducing superfluous parameters. A number of methods have been used to choose among a small set of candidate substitution models, such as the likelihood ratio test, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and Bayes factors. Current implementations of any of these criteria suffer from the limitation that only a small set of models are examined, or that the test does not allow easy comparison of non-nested models. In this article, we expand the pool of candidate substitution models to include all possible time-reversible models. This set includes seven models that have already been described. We show how Bayes factors can be calculated for these models using reversible jump Markov chain Monte Carlo, and apply the method to 16 DNA sequence alignments. For each data set, we compare the model with the best Bayes factor to the best models chosen using AIC and BIC. We find that the best model under any of these criteria is not necessarily the most complicated one; models with an intermediate number of substitution types typically do best. Moreover, almost all of the models that are chosen as best do not constrain a transition rate to be the same as a transversion rate, suggesting that it is the transition/transversion rate bias that plays the largest role in determining which models are selected. Importantly, the reversible jump Markov chain Monte Carlo algorithm described here allows estimation of phylogeny (and other phylogenetic model parameters) to be performed while accounting for uncertainty in the model of DNA substitution.

20.
Bayesian inference is becoming a common statistical approach to phylogenetic estimation because, among other reasons, it allows for rapid analysis of large data sets with complex evolutionary models. Conveniently, Bayesian phylogenetic methods use currently available stochastic models of sequence evolution. However, as with other model-based approaches, the results of Bayesian inference are conditional on the assumed model of evolution: inadequate models (models that poorly fit the data) may result in erroneous inferences. In this article, I present a Bayesian phylogenetic method that evaluates the adequacy of evolutionary models using posterior predictive distributions. By evaluating a model's posterior predictive performance, an adequate model can be selected for a Bayesian phylogenetic study. Although I present a single test statistic that assesses the overall (global) performance of a phylogenetic model, a variety of test statistics can be tailored to evaluate specific features (local performance) of evolutionary models to identify sources of failure. The method presented here, unlike the likelihood-ratio test and parametric bootstrap, accounts for uncertainty in the phylogeny and model parameters.
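The sketch below illustrates the posterior predictive logic with a toy normal model standing in for a phylogenetic one: replicate data are simulated from each posterior draw, a discrepancy statistic is compared between replicates and observed data, and a tail-area probability near 0 or 1 flags model inadequacy. The paper's actual test statistic is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
observed = rng.normal(0.0, 1.5, size=50)           # hypothetical data

# Pretend posterior draws of (mu, sigma) from a fitted model that wrongly
# assumes sigma is close to 1 (a deliberately inadequate model).
mus = rng.normal(observed.mean(), 0.2, size=2000)
sigmas = np.abs(rng.normal(1.0, 0.1, size=2000))

stat = lambda x: x.var()                           # discrepancy statistic
exceed = 0
for mu, sig in zip(mus, sigmas):
    rep = rng.normal(mu, sig, size=len(observed))  # posterior predictive replicate
    exceed += stat(rep) >= stat(observed)

print("posterior predictive p-value:", exceed / len(mus))
# A value near 0 or 1 flags model inadequacy (here, underestimated variance).
```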
