Similar Articles

20 similar articles found.
1.
Chen MH, Ibrahim JG, Lam P, Yu A, Zhang Y. Biometrics 2011;67(3):1163–1170.
We develop a new Bayesian approach to sample size determination (SSD) for the design of noninferiority clinical trials. We extend the fitting and sampling priors of Wang and Gelfand (2002, Statistical Science 17, 193–208) to Bayesian SSD with a focus on controlling the type I error and power. Historical data are incorporated via a hierarchical modeling approach as well as the power prior approach of Ibrahim and Chen (2000, Statistical Science 15, 46–60). Various properties of the proposed Bayesian SSD methodology are examined and a simulation-based computational algorithm is developed. The proposed methodology is applied to the design of a noninferiority medical device clinical trial with historical data from previous trials.
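The fitting/sampling-prior idea described above can be sketched in a deliberately simplified one-arm binomial setting. This illustrates the general Wang–Gelfand scheme, not the paper's actual noninferiority model; the Beta(60, 40) sampling prior, the Beta(1, 1) fitting prior, and the 0.95 decision threshold are all hypothetical choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def bayesian_power(n, p0=0.5, n_sim=2000, threshold=0.95):
    """Monte Carlo estimate of Bayesian power: the probability that the
    posterior statement P(p > p0 | data) exceeds `threshold`, when data
    are generated from a *sampling* prior concentrated at the design
    alternative (Beta(60, 40), mean 0.6) but analyzed under a vague
    *fitting* prior Beta(1, 1)."""
    hits = 0
    for _ in range(n_sim):
        p_true = rng.beta(60, 40)            # draw from the sampling prior
        x = rng.binomial(n, p_true)          # simulate one trial
        # fitting prior Beta(1, 1) gives posterior Beta(1 + x, 1 + n - x)
        post_prob = 1 - stats.beta.cdf(p0, 1 + x, 1 + n - x)
        hits += post_prob > threshold
    return hits / n_sim

# sample size determination: increase n until the estimated power is adequate
for n in (50, 100, 200, 400):
    print(n, bayesian_power(n))
```

Replacing the sampling prior with a point mass at p0 turns the same loop into an estimate of the Bayesian type I error rate, which is how the two operating characteristics are controlled jointly.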

2.
Ibrahim JG, Chen MH, Xia HA, Liu T. Biometrics 2012;68(2):578–586.
Recent guidance from the Food and Drug Administration for the evaluation of new therapies in the treatment of type 2 diabetes (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate the historical meta-analytic survival data into the statistical design. Various properties of the proposed methodology are examined and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing various quantities, such as the power and the type I error in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program including a noninferiority clinical trial for CV risk assessment in T2DM studies.
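The power prior used above has a simple closed form in the conjugate binomial case, which is enough to see how the borrowing parameter works. This is a sketch of the standard Ibrahim–Chen construction with hypothetical historical counts, not the partial-borrowing survival version developed in the paper:

```python
from scipy import stats

def power_prior_beta(x0, n0, a0, a=1.0, b=1.0):
    """Power prior for a binomial proportion: the historical likelihood
    Binomial(x0 | n0, p) is raised to the power a0 in [0, 1] and combined
    with an initial Beta(a, b) prior, which stays conjugate:
        p ~ Beta(a + a0*x0, b + a0*(n0 - x0)).
    a0 = 0 discards the historical data; a0 = 1 pools it fully."""
    return stats.beta(a + a0 * x0, b + a0 * (n0 - x0))

# hypothetical historical trial: 30 events in 100 patients, borrow half
hist = power_prior_beta(x0=30, n0=100, a0=0.5)
print(round(hist.mean(), 3))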

3.
Consider a sample of animal abundances collected from one sampling occasion. Our focus is in estimating the number of species in a closed population. In order to conduct a noninformative Bayesian inference when modeling this data, we derive Jeffreys and reference priors from the full likelihood. We assume that the species' abundances are randomly distributed according to a distribution indexed by a finite‐dimensional parameter. We consider two specific cases which assume that the mean abundances are constant or exponentially distributed. The Jeffreys and reference priors are functions of the Fisher information for the model parameters; the information is calculated in part using the linear difference score for integer parameter models (Lindsay & Roeder 1987). The Jeffreys and reference priors perform similarly in a data example we consider. The posteriors based on the Jeffreys and reference priors are proper. (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)  相似文献   
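The species-abundance models above are involved, but the construction they rely on is the classical one: a Jeffreys prior is proportional to the square root of the Fisher information. As a reminder of that general recipe, here is the simplest textbook instance (a binomial proportion, unrelated to the paper's abundance models):

```python
from scipy import stats

def jeffreys_binomial(x, n):
    """Jeffreys posterior for a binomial proportion. The Fisher
    information is I(p) = n / (p * (1 - p)), so the prior proportional to
    sqrt(I(p)) is the Beta(1/2, 1/2) density, and the posterior after
    observing x successes in n trials is Beta(x + 1/2, n - x + 1/2),
    which is proper for any 0 <= x <= n."""
    return stats.beta(x + 0.5, n - x + 0.5)

post = jeffreys_binomial(x=7, n=20)
print(round(post.mean(), 3))
```

The abundance setting differs in that the parameter of interest (the number of species) is an integer, which is why the paper computes the information via the linear difference score instead of a derivative.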

4.
Bayesian inference for a bivariate binomial distribution

5.
6.
Posterior probabilities for choosing a regression model
Atkinson AC. Biometrika 1978;65(1):39–48.

7.
One of the most important differences between Bayesian and traditional techniques is that the former combines information available beforehand, captured in the prior distribution and reflecting the subjective state of belief before an experiment is carried out, with what the data teach us, as expressed in the likelihood function. Bayesian inference is based on the combination of prior and current information, which is reflected in the posterior distribution. The fast-growing adoption of Bayesian analysis techniques can be attributed to the development of fast computers and the availability of easy-to-use software. It has long been established that the specification of prior distributions should receive considerable attention. Unfortunately, flat distributions are often (inappropriately) used in an automatic fashion in a wide range of model types. We reiterate that the specification of the prior distribution should be done with great care, and we support this through three examples. Even in the absence of strong prior information, prior specification should be done at the appropriate scale of biological interest. This often requires incorporation of (weak) prior information based on common biological sense. Very weak and uninformative priors at one scale of the model may result in relatively strong priors at other levels, affecting the posterior distribution. We present three different examples intuitively illustrating this phenomenon, indicating that this bias can be substantial (especially in small samples) and is widely present. We argue that complete ignorance or absence of prior information may not exist. Because the central theme of the Bayesian paradigm is to combine prior information with current data, authors should be encouraged to publish their raw data so that every scientist is able to perform an analysis incorporating his or her own (subjective) prior distributions.
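The scale dependence warned about above is easy to demonstrate numerically. The sketch below shows that a "flat" prior on the logit scale is strongly informative on the probability scale; the Uniform(-10, 10) range is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "noninformative" flat prior on the logit scale, Uniform(-10, 10),
# is far from flat on the probability scale: after the inverse-logit
# transform, most of its mass piles up near 0 and 1.
logit = rng.uniform(-10, 10, size=200_000)
p = 1 / (1 + np.exp(-logit))

# fraction of prior mass on "extreme" probabilities
extreme = np.mean((p < 0.05) | (p > 0.95))
print(round(extreme, 2))
```

Roughly seven tenths of the supposedly noninformative prior mass lands on probabilities below 0.05 or above 0.95, exactly the kind of hidden informativeness the authors caution against.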

8.
Unnatural rainfall fluctuation can result in severe natural phenomena such as droughts and floods. This variability not only occurs in areas with unusual natural features such as land formations and drainage but can also be due to human intervention. Since rainfall data often contain zero values, evaluating rainfall change is an important undertaking, which can be done via confidence intervals for the difference between delta-lognormal variances using the highest posterior density-based reference (HPD-ref) and probability-matching (HPD-pm) priors. Simulation results indicate that HPD-pm performed better than the other methods in terms of coverage rates and relative average lengths for the difference in delta-lognormal variances, even with a large difference in variances. To illustrate the efficacy of our proposed methods, we applied them to daily rainfall data sets for the lower and upper regions of northern Thailand.

9.
Bayesian methods allow borrowing of historical information through prior distributions. The concept of prior effective sample size (prior ESS) facilitates quantification and communication of such prior information by equating it to a sample size. Prior information can arise from historical observations; thus, the traditional approach identifies the ESS with such a historical sample size. However, this measure is independent of newly observed data, and thus would not capture an actual "loss of information" induced by the prior in case of prior-data conflict. We build on recent work to relate prior impact to the number of (virtual) samples from the current data model and introduce the effective current sample size (ECSS) of a prior, tailored to the application in Bayesian clinical trial designs. Special emphasis is put on robust mixture, power, and commensurate priors. We apply the approach to an adaptive design in which the number of recruited patients is adjusted depending on the effective sample size at an interim analysis. We argue that the ECSS is the appropriate measure in this case, as the aim is to save current (as opposed to historical) patients from recruitment. Furthermore, the ECSS can help overcome lack of consensus in the ESS assessment of mixture priors and can, more broadly, provide further insights into the impact of priors. An R package accompanies the paper.

10.
A Bayesian approach to transformations to normality
Pericchi LR. Biometrika 1981;68(1):35–43.

11.
Follmann DA, Albert PS. Biometrics 1999;55(2):603–607.
A Bayesian approach to monitoring event rates with censored data is proposed. A Dirichlet prior for discrete-time event probabilities is blended with discrete survival times to provide a posterior distribution that is a mixture of Dirichlets. Approximation of the posterior distribution via data augmentation is discussed. Practical issues involved in implementing the procedure are discussed and illustrated with a simulation of the single-arm Cord Blood Transplantation Study, where 6-month survival is monitored.
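If censoring is ignored, the Dirichlet update at the heart of this monitoring scheme reduces to a plain conjugate one. The sketch below uses hypothetical interval counts; the paper's mixture-of-Dirichlets machinery is needed precisely because real censored data break this simple update:

```python
import numpy as np

rng = np.random.default_rng(2)

# Conjugate Dirichlet update for discrete-time event probabilities,
# ignoring censoring. Categories: death in month 1..6, or survival
# past month 6 (the monitored 6-month survival probability).
prior = np.ones(7)                          # Dirichlet(1, ..., 1) prior
counts = np.array([3, 2, 1, 1, 0, 1, 22])   # hypothetical interval counts
post = prior + counts                       # Dirichlet posterior parameters

# Monte Carlo draw from the posterior to monitor 6-month survival
draws = rng.dirichlet(post, size=50_000)
surv6 = draws[:, 6]                         # P(survive past month 6)
print(round(float(np.mean(surv6 < 0.5)), 3))  # posterior prob. survival < 50%
```

A monitoring rule would stop or flag the trial when this posterior probability of unacceptably low survival crosses a prespecified boundary.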

12.
13.
Nonparametric Bayesian bioassay including ordered polytomous response
Gelfand AE, Kuo L. Biometrika 1991;78(3):657–666.

14.
Simulated data were used to investigate the influence of the choice of priors on estimation of genetic parameters in multivariate threshold models using Gibbs sampling. We simulated additive values, residuals, and fixed effects for one continuous trait and liabilities of four binary traits, and QTL effects for one of the liabilities. Within each of four replicates, six different datasets were generated which resembled different practical scenarios in horses with respect to the number and distribution of animals with trait records and the availability of QTL information. (Co)variance components were estimated using a Bayesian threshold animal model via Gibbs sampling. The Gibbs sampler was implemented with both a flat and a proper prior for the genetic covariance matrix. Convergence problems were encountered in more than 50% of the flat-prior analyses, with indications of potential or near posterior impropriety between about rounds 10,000 and 100,000. Terminations due to a non-positive-definite genetic covariance matrix occurred in flat-prior analyses of the smallest datasets. Use of a proper prior resulted in improved mixing and convergence of the Gibbs chain. In order to avoid (near) impropriety of posteriors and extremely poorly mixing Gibbs chains, a proper prior should be used for the genetic covariance matrix when implementing the Gibbs sampler.

15.
Bayes estimation subject to uncertainty about parameter constraints
O'Hagan A, Leonard T. Biometrika 1976;63(1):201–203.

16.
Determining the sample size of an experiment can be challenging, even more so when incorporating external information via a prior distribution. Such information is increasingly used to reduce the size of the control group in randomized clinical trials. Knowing the amount of prior information, expressed as an equivalent prior effective sample size (ESS), clearly facilitates trial designs. Various methods to obtain a prior's ESS have been proposed recently. They have been justified by the fact that they give the standard ESS for one-parameter exponential families. However, despite being based on similar information-based metrics, they may lead to surprisingly different ESS for nonconjugate settings, which complicates many designs with prior information. We show that current methods fail a basic predictive consistency criterion, which requires the expected posterior-predictive ESS for a sample of size N to be the sum of the prior ESS and N. The expected local-information-ratio ESS is introduced and shown to be predictively consistent. It corrects the ESS of current methods, as shown for normally distributed data with a heavy-tailed Student-t prior and exponential data with a generalized Gamma prior. Finally, two applications are discussed: the prior ESS for the control group derived from historical data and the posterior ESS for hierarchical subgroup analyses.
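The predictive consistency criterion is easiest to see in the conjugate case, where the competing ESS definitions all agree anyway (the numbers below are arbitrary; the paper's contribution concerns nonconjugate settings, where they do not):

```python
# In the conjugate binomial case the criterion is immediate: a Beta(a, b)
# prior for a binomial proportion has ESS a + b, and after N observations
# the posterior Beta(a + x, b + N - x) has ESS a + b + N, whatever the
# observed number of successes x turns out to be.
a, b, N = 6.0, 4.0, 30
prior_ess = a + b

for x in range(N + 1):
    post_ess = (a + x) + (b + N - x)
    assert post_ess == prior_ess + N  # predictive consistency, exactly

print("prior ESS:", prior_ess, "posterior ESS:", prior_ess + N)
```

Because this holds for every possible x, the *expected* posterior-predictive ESS also equals prior ESS + N, which is the property the paper requires of any sensible ESS definition.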

17.
The successful implementation of Bayesian shrinkage analysis of high-dimensional regression models, as often encountered in quantitative trait locus (QTL) mapping, is contingent upon the choice of suitable sparsity-inducing priors. In practice, the shape (that is, the rate of tail decay) of such priors is typically preset, with no regard for the range of plausible alternatives and the fact that the most appropriate shape may depend on the data at hand. This study is presumably the first attempt to tackle this oversight, through the shape-adaptive shrinkage prior (SASP) approach, with a focus on the mapping of QTLs in experimental crosses. Simulation results showed that the separation between genuine QTL effects and spurious ones can be made clearer using the SASP-based approach than with existing competitors. This feature makes our new method a promising approach to QTL mapping, where good separation is the ultimate goal. We also discuss a re-estimation procedure intended to improve the accuracy of the estimated genetic effects of detected QTLs with regard to shrinkage-induced bias, which may be particularly important in large-scale models with collinear predictors. The re-estimation procedure is relevant to any shrinkage method and is potentially valuable for many scientific disciplines, such as bioinformatics and quantitative genetics, where oversaturated models are increasingly common.

18.
Wolfinger RD, Kass RE. Biometrics 2000;56(3):768–774.
We consider the usual normal linear mixed model for variance components from a Bayesian viewpoint. With conjugate priors and balanced data, Gibbs sampling is easy to implement; however, simulating from full conditionals can become difficult for the analysis of unbalanced data with possibly nonconjugate priors, thus leading one to consider alternative Markov chain Monte Carlo schemes. We propose and investigate a method for posterior simulation based on an independence chain. The method is customized to exploit the structure of the variance component model, and it works with arbitrary prior distributions. As a default reference prior, we use a version of Jeffreys' prior based on the integrated (restricted) likelihood. We demonstrate the ease of application and flexibility of this approach in familiar settings involving both balanced and unbalanced data.
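An independence chain, as used above, proposes each new state from a fixed distribution rather than a random walk. A toy sketch for a single variance parameter follows; this is a generic independence Metropolis–Hastings sampler with an arbitrary inverse-gamma proposal, not the paper's structured sampler for the full variance component model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy model: y_i ~ N(0, v) with prior p(v) proportional to 1/v (a
# Jeffreys-type reference prior). The exact posterior is then
# Inv-Gamma(n/2, sum(y^2)/2); we deliberately propose from a slightly
# different Inv-Gamma to mimic the generic independence-chain setting.
y = rng.normal(0.0, 2.0, size=50)
n, s = len(y), float(np.sum(y**2))

def log_post(v):
    # log of the unnormalized posterior density of v
    return -(n / 2 + 1) * np.log(v) - s / (2 * v)

prop = stats.invgamma(n / 2 - 1, scale=s / 2)   # fixed proposal distribution

v, chain = s / n, []
for _ in range(20_000):
    v_new = prop.rvs(random_state=rng)
    # Metropolis-Hastings ratio for an independence proposal
    log_ratio = (log_post(v_new) - log_post(v)
                 - prop.logpdf(v_new) + prop.logpdf(v))
    if np.log(rng.uniform()) < log_ratio:
        v = v_new
    chain.append(v)

print(round(float(np.mean(chain[2000:])), 2))   # posterior mean estimate
```

Because the proposal is close to the target here, acceptance is high and the chain mixes well; the paper's contribution is constructing such a good fixed proposal for the much harder unbalanced, nonconjugate case.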

19.
Estimating nonlinear dose-response relationships in the context of pharmaceutical clinical trials is often a challenging problem. The data in these trials are typically variable and sparse, making this a hard inference problem, despite sometimes seemingly large sample sizes. Maximum likelihood estimates often fail to exist in these situations, while for Bayesian methods, prior selection becomes a delicate issue when no carefully elicited prior is available, as the posterior distribution will often be sensitive to the priors chosen. This article provides guidance on the usage of functional uniform prior distributions in these situations. The essential idea of functional uniform priors is to employ a distribution that weights the functional shapes of the nonlinear regression function equally. By doing so one obtains a distribution that exhaustively and uniformly covers the underlying potential shapes of the nonlinear function. On the parameter scale these priors will often result in quite nonuniform prior distributions. This paper gives hints on how to implement these priors in practice and illustrates them in realistic trial examples in the context of Phase II dose-response trials as well as Phase I first-in-human studies.

20.
On priors providing frequentist validity for Bayesian inference

