Similar Documents

20 similar documents retrieved.
1.
There has been much development in Bayesian adaptive designs in clinical trials. In the Bayesian paradigm, the posterior predictive distribution characterizes the future possible outcomes given the currently observed data. Based on the interim time-to-event data, we develop a new phase II trial design by combining the strength of both Bayesian adaptive randomization and the predictive probability. By comparing the mean survival times between patients assigned to two treatment arms, more patients are assigned to the better treatment on the basis of adaptive randomization. We continuously monitor the trial using the predictive probability for early termination in the case of superiority or futility. We conduct extensive simulation studies to examine the operating characteristics of four designs: the proposed predictive probability adaptive randomization design, the predictive probability equal randomization design, the posterior probability adaptive randomization design, and the group sequential design. Adaptive randomization designs using predictive probability and posterior probability yield a longer overall median survival time than the group sequential design, but at the cost of a slightly larger sample size. The average sample size using the predictive probability method is generally smaller than that of the posterior probability design.
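As an illustration of the kind of calculation involved (a simplified sketch, not the authors' design: a two-arm comparison with exponential survival times, a conjugate Gamma prior on each hazard, and hypothetical interim numbers), the predictive probability can be approximated by simulating the remainder of the trial from the posterior predictive distribution:

    # Simplified sketch: predictive-probability monitoring for exponential survival
    # with a Gamma(a0, b0) prior on each arm's hazard rate. Numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def posterior_hazard(events, exposure, a0=0.01, b0=0.01):
        # Conjugate update of the Gamma prior with the observed event count
        # and total follow-up time.
        return a0 + events, b0 + exposure

    def prob_superiority(post_a, post_b, ndraw=4000):
        # P(hazard_A < hazard_B | data), i.e. arm A has the longer mean survival.
        lam_a = rng.gamma(post_a[0], 1.0 / post_a[1], ndraw)
        lam_b = rng.gamma(post_b[0], 1.0 / post_b[1], ndraw)
        return np.mean(lam_a < lam_b)

    def predictive_probability(interim_a, interim_b, n_future, theta=0.95, nsim=500):
        # Simulate future exponential event times from the posterior predictive,
        # then record how often the final analysis would declare superiority.
        a_a, b_a = posterior_hazard(*interim_a)
        a_b, b_b = posterior_hazard(*interim_b)
        success = 0
        for _ in range(nsim):
            lam_a = rng.gamma(a_a, 1.0 / b_a)
            lam_b = rng.gamma(a_b, 1.0 / b_b)
            t_a = rng.exponential(1.0 / lam_a, n_future)
            t_b = rng.exponential(1.0 / lam_b, n_future)
            post_a = posterior_hazard(interim_a[0] + n_future, interim_a[1] + t_a.sum())
            post_b = posterior_hazard(interim_b[0] + n_future, interim_b[1] + t_b.sum())
            success += prob_superiority(post_a, post_b) > theta
        return success / nsim

    # Hypothetical interim data per arm: (number of events, total follow-up time).
    pp = predictive_probability((12, 300.0), (20, 280.0), n_future=30)
    print("predictive probability of eventual superiority:", pp)

A trial would be stopped early for superiority if this quantity is very high, or for futility if it is very low.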

2.
Bayesian inference in ecology   (Total citations: 14; self-citations: 1; citations by others: 13)
Bayesian inference is an important statistical tool that is increasingly being used by ecologists. In a Bayesian analysis, information available before a study is conducted is summarized in a quantitative model or hypothesis: the prior probability distribution. Bayes’ Theorem uses the prior probability distribution and the likelihood of the data to generate a posterior probability distribution. Posterior probability distributions are an epistemological alternative to P‐values and provide a direct measure of the degree of belief that can be placed on models, hypotheses, or parameter estimates. Moreover, Bayesian information‐theoretic methods provide robust measures of the probability of alternative models, and multiple models can be averaged into a single model that reflects uncertainty in model construction and selection. These methods are demonstrated through a simple worked example. Ecologists are using Bayesian inference in studies that range from predicting single‐species population dynamics to understanding ecosystem processes. Not all ecologists, however, appreciate the philosophical underpinnings of Bayesian inference. In particular, Bayesians and frequentists differ in their definition of probability and in their treatment of model parameters as random variables or estimates of true values. These assumptions must be addressed explicitly before deciding whether or not to use Bayesian methods to analyse ecological data.
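A minimal worked example in the spirit of the one the authors describe (hedged: the Beta-Binomial detection-rate setting and the numbers below are hypothetical, not taken from the paper):

    # Bayes' Theorem with a conjugate Beta prior and a binomial likelihood:
    # posterior = Beta(a + detections, b + non-detections).
    from scipy import stats

    a_prior, b_prior = 2.0, 2.0      # prior belief about a detection/occupancy rate
    detections, surveys = 7, 10      # hypothetical field data

    a_post = a_prior + detections
    b_post = b_prior + (surveys - detections)
    posterior = stats.beta(a_post, b_post)

    print("posterior mean:", posterior.mean())
    print("95% credible interval:", posterior.interval(0.95))
    print("P(rate > 0.5 | data):", 1 - posterior.cdf(0.5))

The last line is the kind of direct probability statement about a parameter that the abstract contrasts with a P-value.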

3.
We have considered a Bayesian approach for the nonlinear regression model by replacing the normal distribution on the error term by some skewed distributions, which account for both skewness and heavy tails or skewness alone. The type of data considered in this paper concerns repeated measurements taken in time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes. Thus, additionally, our model does allow for a correlation between observations made from the same individual. We have illustrated the procedure using a data set to study the growth curves of a clinical measurement for a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew‐t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. As a conclusion of this study, the DIC criterion is not trustworthy for this kind of complex model.
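For reference, the CPO can be computed directly from MCMC output via the standard harmonic-mean identity; the sketch below (generic, not tied to the paper's skew-t model) also returns the log pseudo-marginal likelihood often used to compare models:

    # CPO_i = [ (1/S) * sum_s 1 / p(y_i | theta_s) ]^{-1}, computed stably on the log scale.
    import numpy as np
    from scipy.special import logsumexp

    def cpo_from_mcmc(loglik_draws):
        # loglik_draws: S x n matrix of log p(y_i | theta_s) over S posterior draws.
        S = loglik_draws.shape[0]
        log_cpo = np.log(S) - logsumexp(-loglik_draws, axis=0)
        return log_cpo, log_cpo.sum()   # per-observation log CPO and the LPML

    # Hypothetical usage: build loglik_draws from your model's pointwise
    # log-likelihoods evaluated at each posterior draw, then
    # log_cpo, lpml = cpo_from_mcmc(loglik_draws); larger LPML is preferred.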

4.
Bayesian inference is becoming a common statistical approach to phylogenetic estimation because, among other reasons, it allows for rapid analysis of large data sets with complex evolutionary models. Conveniently, Bayesian phylogenetic methods use currently available stochastic models of sequence evolution. However, as with other model-based approaches, the results of Bayesian inference are conditional on the assumed model of evolution: inadequate models (models that poorly fit the data) may result in erroneous inferences. In this article, I present a Bayesian phylogenetic method that evaluates the adequacy of evolutionary models using posterior predictive distributions. By evaluating a model's posterior predictive performance, an adequate model can be selected for a Bayesian phylogenetic study. Although I present a single test statistic that assesses the overall (global) performance of a phylogenetic model, a variety of test statistics can be tailored to evaluate specific features (local performance) of evolutionary models to identify sources of failure. The method presented here, unlike the likelihood-ratio test and parametric bootstrap, accounts for uncertainty in the phylogeny and model parameters.
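The general recipe for such a posterior predictive assessment can be sketched as follows (a generic outline with hypothetical function names, not the article's software):

    # Posterior predictive check: for each posterior draw, simulate a replicate
    # data set and compare a test statistic T on replicates against T(observed).
    import numpy as np

    def posterior_predictive_pvalue(observed, posterior_draws, simulate, T):
        # simulate(theta) -> a replicate data set; T(data) -> a scalar test statistic.
        t_obs = T(observed)
        t_rep = np.array([T(simulate(theta)) for theta in posterior_draws])
        return np.mean(t_rep >= t_obs)    # tail-area "posterior predictive p-value"

    # Values near 0 or 1 flag a feature of the data that the assumed
    # evolutionary model reproduces poorly; T can be global or targeted
    # at a specific (local) model feature.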

5.
Cho H  Ibrahim JG  Sinha D  Zhu H 《Biometrics》2009,65(1):116-124
We propose Bayesian case influence diagnostics for complex survival models. We develop case deletion influence diagnostics for both the joint and marginal posterior distributions based on the Kullback-Leibler divergence (K-L divergence). We present a simplified expression for computing the K-L divergence between the posterior with the full data and the posterior based on single case deletion, as well as investigate its relationships to the conditional predictive ordinate. All the computations for the proposed diagnostic measures can be easily done using Markov chain Monte Carlo samples from the full data posterior distribution. We consider the Cox model with a gamma process prior on the cumulative baseline hazard. We also present a theoretical relationship between our case-deletion diagnostics and diagnostics based on Cox's partial likelihood. A simulated data example and two real data examples are given to demonstrate the methodology.
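Because the case-deleted posterior differs from the full posterior only through the single likelihood contribution f(y_i | theta), this kind of K-L influence measure can be written in terms of full-data posterior expectations and evaluated from MCMC draws alone; a hedged sketch of that computation:

    # K_i = E[log f(y_i|theta) | D] + log E[1/f(y_i|theta) | D],
    # where both expectations are over the full-data posterior.
    import numpy as np
    from scipy.special import logsumexp

    def kl_case_deletion(loglik_draws):
        # loglik_draws: S x n matrix of log f(y_i | theta_s) for S posterior draws.
        S = loglik_draws.shape[0]
        term1 = loglik_draws.mean(axis=0)                      # E[log f(y_i|theta)]
        term2 = logsumexp(-loglik_draws, axis=0) - np.log(S)   # log E[1/f(y_i|theta)] = -log CPO_i
        return term1 + term2    # one K-L influence value per observation

This also makes the link to the conditional predictive ordinate explicit: the second term is simply the negative log CPO.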

6.
We propose methods for Bayesian inference for a new class of semiparametric survival models with a cure fraction. Specifically, we propose a semiparametric cure rate model with a smoothing parameter that controls the degree of parametricity in the right tail of the survival distribution. We show that such a parameter is crucial for these kinds of models and can have an impact on the posterior estimates. Several novel properties of the proposed model are derived. In addition, we propose a class of improper noninformative priors based on this model and examine the properties of the implied posterior. Also, a class of informative priors based on historical data is proposed and its theoretical properties are investigated. A case study involving a melanoma clinical trial is discussed in detail to demonstrate the proposed methodology.

7.
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered.
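A toy sketch of the nested-sampling idea itself (a synthetic two-dimensional Gaussian likelihood under a uniform prior, not the paper's within-host infection model; the rejection step is for illustration only and real implementations use constrained MCMC or slice moves):

    # Nested sampling accumulates the evidence Z = integral of L(theta)*pi(theta)
    # by repeatedly replacing the worst live point while the prior volume shrinks.
    import numpy as np
    from scipy.special import logsumexp

    rng = np.random.default_rng(1)
    loglike = lambda th: -0.5 * np.sum((th - 0.5) ** 2) / 0.01   # toy Gaussian peak
    N, steps, D = 100, 500, 2                                    # live points, iterations, dimensions

    live = rng.uniform(size=(N, D))                              # prior is U(0,1)^D
    live_ll = np.array([loglike(t) for t in live])
    log_z, log_x_prev = -np.inf, 0.0

    for k in range(1, steps + 1):
        worst = np.argmin(live_ll)
        log_x = -k / N                                           # expected log prior volume remaining
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))       # width of this likelihood shell
        log_z = np.logaddexp(log_z, live_ll[worst] + log_w)
        log_x_prev = log_x
        # Replace the worst live point by a new prior draw with higher likelihood.
        while True:
            cand = rng.uniform(size=D)
            if loglike(cand) > live_ll[worst]:
                live[worst], live_ll[worst] = cand, loglike(cand)
                break

    # Add the leftover contribution of the remaining live points.
    log_z = np.logaddexp(log_z, logsumexp(live_ll) - np.log(N) + log_x_prev)
    print("log-evidence estimate:", log_z, "(analytic value is roughly", np.log(2 * np.pi * 0.01), ")")

Model comparison then follows by running the same procedure for each candidate model and comparing the resulting evidence values.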

8.
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.

9.
Wang L  Dunson DB 《Biometrika》2011,98(3):537-551
Density regression models allow the conditional distribution of the response given predictors to change flexibly over the predictor space. Such models are much more flexible than nonparametric mean regression models with nonparametric residual distributions, and are well supported in many applications. A rich variety of Bayesian methods have been proposed for density regression, but it is not clear whether such priors have full support so that any true data-generating model can be accurately approximated. This article develops a new class of density regression models that incorporate stochastic-ordering constraints which are natural when a response tends to increase or decrease monotonely with a predictor. Theory is developed showing large support. Methods are developed for hypothesis testing, with posterior computation relying on a simple Gibbs sampler. Frequentist properties are illustrated in a simulation study, and an epidemiology application is considered.

10.
Reversible-jump Markov chain Monte Carlo (RJ-MCMC) is a technique for simultaneously evaluating multiple related (but not necessarily nested) statistical models that has recently been applied to the problem of phylogenetic model selection. Here we use a simulation approach to assess the performance of this method and compare it to Akaike weights, a measure of model uncertainty that is based on the Akaike information criterion. Under conditions where the assumptions of the candidate models matched the generating conditions, both Bayesian and AIC-based methods perform well. The 95% credible interval contained the generating model close to 95% of the time. However, the size of the credible interval differed, with the Bayesian credible set containing approximately 25% to 50% fewer models than an AIC-based credible interval. The posterior probability was a better indicator of the correct model than the Akaike weight when all assumptions were met, but both measures performed similarly when some model assumptions were violated. Models in the Bayesian posterior distribution were also more similar to the generating model in their number of parameters and were less biased in their complexity. In contrast, Akaike-weighted models were more distant from the generating model and biased towards slightly greater complexity. The AIC-based credible interval appeared to be more robust to the violation of the rate homogeneity assumption. Both AIC and Bayesian approaches suggest that substantial uncertainty can accompany the choice of model for phylogenetic analyses, suggesting that alternative candidate models should be examined in analysis of phylogenetic data. [AIC; Akaike weights; Bayesian phylogenetics; model averaging; model selection; model uncertainty; posterior probability; reversible jump.]
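For comparison, Akaike weights are straightforward to compute from the candidate models' AIC values; a brief sketch with hypothetical numbers:

    # Akaike weight of model i: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
    # where delta_i is the AIC difference from the best model.
    import numpy as np

    def akaike_weights(aic_values):
        aic = np.asarray(aic_values, dtype=float)
        delta = aic - aic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Hypothetical AIC values for three substitution models.
    print(akaike_weights([10240.3, 10122.7, 10118.9]))

An AIC-based 95% confidence set, of the kind compared in the abstract, is then the smallest set of models whose cumulative weight reaches 0.95.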

11.
To address the problem of testing the statistical significance of sequence motifs in bioinformatics, we propose a Bayesian hypothesis-testing method based on the maximum likelihood criterion. The significance test for a motif is recast as a goodness-of-fit test for a multinomial distribution. A Dirichlet distribution is chosen as the prior for the multinomial distribution, and the Newton-Raphson algorithm is used to estimate the Dirichlet hyperparameters so that the predictive distribution of the data is maximized. Bayes' theorem is then applied to obtain a Bayes factor for model selection, which is used to assess the statistical significance of the motif test. This approach avoids the difficulty, in traditional multinomial tests, of constructing a test statistic and deriving its exact distribution under the null hypothesis. Experiments on 107 transcription factor binding sites from the JASPAR database and 100 sets of randomly simulated data, using the Pearson product-moment correlation coefficient as one criterion of test quality, show that the results are better than those of several traditional motif-testing methods.
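An illustrative sketch of this type of calculation (the paper's exact model may differ, and here the Dirichlet hyperparameters are fixed rather than estimated by Newton-Raphson): the Bayes factor for one motif column compares the Dirichlet-multinomial marginal likelihood against a fixed background composition, so no null distribution of a test statistic is needed:

    # Dirichlet-multinomial marginal likelihood versus a fixed-background multinomial;
    # the multinomial coefficient cancels from the Bayes factor.
    import numpy as np
    from scipy.special import gammaln

    def log_dirichlet_multinomial(counts, alpha):
        counts, alpha = np.asarray(counts, float), np.asarray(alpha, float)
        return (gammaln(alpha.sum()) - gammaln(alpha.sum() + counts.sum())
                + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

    def log_bayes_factor(counts, alpha, background):
        # H1: base frequencies ~ Dirichlet(alpha); H0: fixed background frequencies.
        log_m1 = log_dirichlet_multinomial(counts, alpha)
        log_m0 = np.sum(np.asarray(counts) * np.log(background))
        return log_m1 - log_m0

    # One column of a hypothetical motif: counts of A, C, G, T across aligned sites.
    # In an empirical-Bayes version, alpha would be chosen to maximize log_m1.
    print(log_bayes_factor([40, 3, 5, 2], alpha=[1, 1, 1, 1], background=[0.25] * 4))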

12.
Han C  Chaloner K 《Biometrics》2004,60(1):25-33
Bayesian experimental design is investigated for Bayesian analysis of nonlinear mixed-effects models. Existence of the posterior risk for parameter estimation is shown. When the same prior distribution is used for both design and inference, existence of the preposterior risk for design is also proven. If the prior distribution used in design is different from that used for inference, sufficient conditions are established for existence of the preposterior risk for design. A case study of design for an experiment in population HIV dynamics is provided.

13.
Chen MH  Ibrahim JG 《Biometrics》2000,56(3):678-685
Correlated count data arise often in practice, especially in repeated measures situations or instances in which observations are collected over time. In this paper, we consider a parametric model for a time series of counts by constructing a likelihood-based version of a model similar to that of Zeger (1988, Biometrika 75, 621-629). The model has the advantage of incorporating both overdispersion and autocorrelation. We consider a Bayesian approach and propose a class of informative prior distributions for the model parameters that are useful for prediction. The prior specification is motivated from the notion of the existence of data from similar previous studies, called historical data, which is then quantified into a prior distribution for the current study. We derive the Bayesian predictive distribution and use a Bayesian criterion, called the predictive L measure, for assessing the predictions for a given time series model. The distribution of the predictive L measure is also derived, which will enable us to compare the predictive ability for each model under consideration. Our methodology is motivated by a real data set involving yearly pollen counts, which is examined in some detail.
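A brief sketch of a predictive L-measure-style criterion in its generic form (the paper's exact weighting and time-series structure may differ): the sum of posterior predictive variances plus a weighted sum of squared deviations of the observations from their posterior predictive means, with smaller values preferred:

    # Generic L-measure-type criterion computed from posterior predictive replicates.
    import numpy as np

    def L_measure(y_obs, y_rep, nu=0.5):
        # y_rep: S x n array of replicated observations drawn from the posterior
        # predictive distribution; nu in (0, 1) trades off variance against bias.
        pred_mean = y_rep.mean(axis=0)
        pred_var = y_rep.var(axis=0)
        return pred_var.sum() + nu * np.sum((y_obs - pred_mean) ** 2)

    # Compare candidate time-series models by computing L_measure for each.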

14.
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models.

15.
Hans C  Dunson DB 《Biometrics》2005,61(4):1018-1026
In regression applications with categorical predictors, interest often focuses on comparing the null hypothesis of homogeneity to an ordered alternative. This article proposes a Bayesian approach for addressing this problem in the setting of normal linear and probit regression models. The regression coefficients are assigned a conditionally conjugate prior density consisting of mixtures of point masses at 0 and truncated normal densities, with a (possibly unknown) changepoint parameter included to accommodate umbrella ordering. Two strategies of prior elicitation are considered: (1) a Bayesian Bonferroni approach in which the probability of the global null hypothesis is specified and local hypotheses are considered independent; and (2) an approach which treats these probabilities as random. A single Gibbs sampling chain can be used to obtain posterior probabilities for the different hypotheses and to estimate regression coefficients and predictive quantities either by model averaging or under the preferred hypothesis. The methods are applied to data from a carcinogenesis study.

16.
R D Ball 《Genetics》2001,159(3):1351-1364
We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data.
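A hedged sketch of the general idea (the published BIC-delta criterion and its prior modification differ in detail): convert BIC-type scores for marker subsets into approximate posterior model probabilities, and read the probability of a linked QTL off the mass on models containing at least one marker:

    # BIC-type score for a Gaussian regression with k marker variables, with a
    # delta factor stiffening the per-variable penalty; exp(-BIC/2) weights give
    # approximate posterior model probabilities.
    import numpy as np

    def bic_delta(rss, n, k, delta=2.0):
        # rss: residual sum of squares; n: number of observations.
        return n * np.log(rss / n) + delta * k * np.log(n)

    def model_probabilities(bics):
        b = np.asarray(bics, dtype=float)
        w = np.exp(-0.5 * (b - b.min()))
        return w / w.sum()

    # Hypothetical marker subsets: (RSS, number of markers in the model).
    probs = model_probabilities([bic_delta(r, n=200, k=k) for r, k in [(50.0, 0), (44.1, 1), (43.8, 2)]])
    print(probs, "P(at least one linked QTL) =", probs[1:].sum())

Averaging effect estimates over these model probabilities, rather than conditioning on a single selected model, is what mitigates the selection bias discussed in the abstract.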

17.
Constructing maps of dry deposition pollution levels is vital for air quality management, and presents statistical problems typical of many environmental and spatial applications. Ideally, such maps would be based on a dense network of monitoring stations, but this does not exist. Instead, there are two main sources of information for dry deposition levels in the United States: one is pollution measurements at a sparse set of about 50 monitoring stations called CASTNet, and the other is the output of the regional scale air quality models, called Models-3. A related problem is the evaluation of these numerical models for air quality applications, which is crucial for control strategy selection. We develop formal methods for combining sources of information with different spatial resolutions and for the evaluation of numerical models. We specify a simple model for both the Models-3 output and the CASTNet observations in terms of the unobserved ground truth, and we estimate the model in a Bayesian way. This provides improved spatial prediction via the posterior distribution of the ground truth, allows us to validate Models-3 via the posterior predictive distribution of the CASTNet observations, and enables us to remove the bias in the Models-3 output. We apply our methods to data on SO2 concentrations, and we obtain high-resolution SO2 distributions by combining observed data with model output. We also conclude that the numerical models perform worse in areas closer to power plants, where the SO2 values are overestimated by the models.

18.
A vast amount of ecological knowledge generated over the past two decades has hinged upon the ability of model selection methods to discriminate among various ecological hypotheses. The last decade has seen the rise of Bayesian hierarchical models in ecology. Consequently, commonly used tools, such as the AIC, become largely inapplicable and there appears to be no consensus about a particular model selection tool that can be universally applied. We focus on a specific class of competing Bayesian spatial capture–recapture (SCR) models and apply and evaluate some of the recommended Bayesian model selection tools: (1) the Bayes factor, using (a) the Gelfand-Dey and (b) the harmonic mean methods; (2) the Deviance Information Criterion (DIC); (3) the Watanabe-Akaike Information Criterion (WAIC); and (4) the posterior predictive loss criterion. In all, we evaluate 25 variants of model selection tools in our study. We evaluate these model selection tools from the standpoint of selecting the "true" model and of parameter estimation. We generate 120 simulated data sets using the true model and assess the frequency with which the true model is selected and how well the tool estimates N (population size), a parameter of much importance to ecologists. We find that when information content is low in the data, no particular model selection tool can be recommended for achieving, simultaneously, both the goals of model selection and parameter estimation. In general, however, when we consider both objectives together, we recommend the use of our application of the Bayes factor (Gelfand-Dey with MAP approximation) for Bayesian SCR models. Our study highlights the point that although new model selection tools are emerging (e.g., WAIC) in the applied statistics literature, tools based on sound theory, even under approximation, may still perform much better.
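As one concrete example of the tools compared, WAIC can be computed from an S x n matrix of pointwise log-likelihood values evaluated at posterior draws (standard formula, not specific to SCR models):

    # WAIC = -2 * (lppd - p_waic), with the variance-based effective number of parameters.
    import numpy as np
    from scipy.special import logsumexp

    def waic(loglik_draws):
        S = loglik_draws.shape[0]
        lppd = np.sum(logsumexp(loglik_draws, axis=0) - np.log(S))  # log pointwise predictive density
        p_waic = np.sum(loglik_draws.var(axis=0, ddof=1))           # effective number of parameters
        return -2 * (lppd - p_waic)

    # Lower WAIC indicates better estimated out-of-sample predictive performance.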

19.
We present a Bayesian method for deriving species-sensitivity distributions (SSDs). We employed four Bayesian statistical models to consider differences in tolerance to toxic substances among different taxonomic groups. We first used a Markov chain Monte Carlo simulation based on these models to estimate the SSD parameters. We then computed deviance information criterion values of the models and compared them in order to select the model with the best predictive ability. We applied this approach to seven substances (zinc, lead, hexavalent chromium, cadmium, nickel, short-chain chlorinated paraffin, and chloroform) as case examples, and then compared the derived SSDs from the selected models and a model that assumed no tolerance differences among taxonomic groups. We discuss the advantages and limitations of our approach on the basis of our results.
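For orientation, a much simpler non-hierarchical version of an SSD fit (hypothetical toxicity values and a plain lognormal model, without the taxonomic-group structure the paper investigates) looks like this, with the 5% hazardous concentration (HC5) read from the fitted distribution:

    # Fit a lognormal SSD to hypothetical species toxicity values and compute HC5.
    import numpy as np
    from scipy import stats

    toxicity = np.array([12., 35., 48., 90., 150., 210., 400., 950.])  # e.g. EC50 values, ug/L
    mu, sigma = np.mean(np.log(toxicity)), np.std(np.log(toxicity), ddof=1)
    hc5 = np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma))
    print("HC5 estimate:", hc5)

The hierarchical models in the paper replace the single lognormal with group-specific components and propagate parameter uncertainty through the posterior rather than using point estimates.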

20.
Assuming a lognormally distributed measure of bioavailability, individual bioequivalence is defined as originally proposed by Anderson and Hauck (1990) and Wellek (1990; 1993). For the posterior probability of the associated statistical hypothesis with respect to a noninformative reference prior, a numerically efficient algorithm is constructed which serves as the building block of a procedure for computing exact rejection probabilities of the Bayesian test under arbitrary parameter constellations. By means of this tool, the Bayesian test can be shown to maintain the significance level without being over-conservative and to yield gains in power of up to 30% as compared to the distribution-free procedure which gained some popularity under the name TIER. Moreover, it is shown that the Bayesian construction also allows scaling of the probability-based criterion with respect to the proportion of subjects exhibiting bioequivalent responses to repeated administrations of the reference formulation of the drug under study.
