Similar articles (20 results)
1.
Bayesian methods allow borrowing of historical information through prior distributions. The concept of prior effective sample size (prior ESS) facilitates quantification and communication of such prior information by equating it to a sample size. Prior information can arise from historical observations; thus, the traditional approach identifies the ESS with such a historical sample size. However, this measure is independent of newly observed data, and thus would not capture an actual “loss of information” induced by the prior in case of prior-data conflict. We build on a recent work to relate prior impact to the number of (virtual) samples from the current data model and introduce the effective current sample size (ECSS) of a prior, tailored to the application in Bayesian clinical trial designs. Special emphasis is put on robust mixture, power, and commensurate priors. We apply the approach to an adaptive design in which the number of recruited patients is adjusted depending on the effective sample size at an interim analysis. We argue that the ECSS is the appropriate measure in this case, as the aim is to save current (as opposed to historical) patients from recruitment. Furthermore, the ECSS can help overcome lack of consensus in the ESS assessment of mixture priors and can, more broadly, provide further insights into the impact of priors. An R package accompanies the paper.  相似文献   
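
As a minimal, self-contained illustration of the distinction this abstract draws (not the ECSS computation itself, which is defined relative to the current data model): for a binomial likelihood, a Beta(a, b) prior behaves like a + b pseudo-observations, and this classical ESS is unchanged whether or not the new data conflict with the prior. All counts below are hypothetical.

```python
from scipy import stats

def beta_ess(a, b):
    """Classical (data-independent) ESS of a Beta(a, b) prior for a binomial model:
    the prior acts like a pseudo-sample of a successes and b failures."""
    return a + b

# Hypothetical informative prior built from 30 responders among 50 historical patients
a_hist, b_hist = 30.0, 20.0
print("classical prior ESS:", beta_ess(a_hist, b_hist))  # 50 "virtual" patients

# Current trial in conflict with the prior: 10 responders out of 40 patients
x, n = 10, 40
posterior = stats.beta(a_hist + x, b_hist + n - x)
flat_posterior = stats.beta(1 + x, 1 + n - x)
# The classical ESS stays at 50 despite the conflict, even though the informative
# prior pulls the posterior mean well away from the observed 25% response rate;
# a data-dependent measure such as the ECSS is designed to reflect that impact.
print("posterior mean, informative prior:", round(posterior.mean(), 3))
print("posterior mean, flat prior:       ", round(flat_posterior.mean(), 3))
```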

2.
Incorporating historical information into the design and analysis of a new clinical trial has been the subject of much recent discussion. For example, in the context of clinical trials of antibiotics for drug resistant infections, where patients with specific infections can be difficult to recruit, there is often only limited and heterogeneous information available from the historical trials. To make the best use of the combined information at hand, we consider an approach based on the multiple power prior that allows the prior weight of each historical study to be chosen adaptively by empirical Bayes. This choice of weight has advantages in that it varies commensurably with differences in the historical and current data and can choose weights near 1 if the data from the corresponding historical study are similar enough to the data from the current study. Fully Bayesian approaches are also considered. The methods are applied to data from antibiotics trials. An analysis of the operating characteristics in a binomial setting shows that the proposed empirical Bayes adaptive method works well, compared to several alternative approaches, including the meta‐analytic prior.  相似文献   
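
A minimal sketch of the empirical-Bayes choice of a power-prior weight for a single historical study with a binary endpoint: the weight delta in [0, 1] is chosen to maximize the marginal likelihood of the current data under the power prior, so it moves toward 1 when the historical and current response rates agree. The counts below are hypothetical, and the multiple power prior discussed in the abstract generalizes this to one weight per historical trial.

```python
from scipy.special import betaln
from scipy.optimize import minimize_scalar

def neg_log_marginal(delta, x0, n0, x, n, a0=1.0, b0=1.0):
    """Negative log marginal likelihood of the current data (x out of n) under the
    power prior Beta(a0 + delta*x0, b0 + delta*(n0 - x0)) built from one
    historical study with x0 responders out of n0."""
    a, b = a0 + delta * x0, b0 + delta * (n0 - x0)
    return -(betaln(a + x, b + n - x) - betaln(a, b))

# Hypothetical counts: historical 24/40 responders, current 11/20 responders
x0, n0, x, n = 24, 40, 11, 20
res = minimize_scalar(neg_log_marginal, bounds=(0.0, 1.0), method="bounded",
                      args=(x0, n0, x, n))
print(f"empirical Bayes power prior weight: {res.x:.2f}")  # near 1 for similar data
```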

3.
This paper develops Bayesian sample size formulae for experiments comparing two groups, where relevant preexperimental information from multiple sources can be incorporated in a robust prior to support both the design and analysis. We use commensurate predictive priors for borrowing of information and further place Gamma mixture priors on the precisions to account for preliminary belief about the pairwise (in)commensurability between parameters that underpin the historical and new experiments. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or length of a defined density region. Our Bayesian methodology can be applied to circumstances that compare two normal means, proportions, or event times. When nuisance parameters (such as variance) in the new experiment are unknown, a prior distribution can further be specified based on preexperimental data. Exact solutions are available based on most of the criteria considered for Bayesian sample size determination, while a search procedure is described in cases for which there are no closed-form expressions. We illustrate the application of our sample size formulae in the design of clinical trials, where pretrial information is available to be leveraged. Hypothetical data examples, motivated by a rare-disease trial with an elicited expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.  相似文献   

4.
Ibrahim JG, Chen MH, Xia HA, Liu T. Biometrics. 2012;68(2):578-586.
Recent guidance from the Food and Drug Administration for the evaluation of new therapies in the treatment of type 2 diabetes (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate the historical survival meta data into the statistical design. Various properties of the proposed methodology are examined and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing various quantities, such as the power and the type I error in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program including a noninferiority clinical trial for CV risk assessment in T2DM studies.  相似文献   

5.
For the approval of biosimilars, it is, in most cases, necessary to conduct large Phase III clinical trials in patients to convince the regulatory authorities that the product is comparable in terms of efficacy and safety to the originator product. As the originator product has already been studied in several trials beforehand, it seems natural to include this historical information into the showing of equivalent efficacy. Since all studies for the regulatory approval of biosimilars are confirmatory studies, it is required that the statistical approach has reasonable frequentist properties, most importantly, that the Type I error rate is controlled—at least in all scenarios that are realistic in practice. However, it is well known that the incorporation of historical information can lead to an inflation of the Type I error rate in the case of a conflict between the distribution of the historical data and the distribution of the trial data. We illustrate this issue and confirm, using the Bayesian robustified meta‐analytic‐predictive (MAP) approach as an example, that simultaneously controlling the Type I error rate over the complete parameter space and gaining power in comparison to a standard frequentist approach that only considers the data in the new study, is not possible. We propose a hybrid Bayesian‐frequentist approach for binary endpoints that controls the Type I error rate in the neighborhood of the center of the prior distribution, while improving the power. We study the properties of this approach in an extensive simulation study and provide a real‐world example.  相似文献   
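
A small generic sketch of how a robustified mixture prior reacts to prior-data conflict (not the hybrid Bayesian-frequentist procedure proposed in the abstract): the posterior weight of each mixture component is proportional to its prior weight times the marginal likelihood of the new data under that component, so weight shifts toward the vague component when the new data conflict with the informative one. All numbers are hypothetical.

```python
import numpy as np
from scipy.special import betaln

def mixture_posterior(x, n, components, weights):
    """Posterior of a beta-mixture prior after observing x responders out of n;
    returns the updated beta components and updated mixture weights."""
    post_comps, log_m = [], []
    for (a, b) in components:
        post_comps.append((a + x, b + n - x))
        # log marginal likelihood of the data under this component
        log_m.append(betaln(a + x, b + n - x) - betaln(a, b))
    log_w = np.log(weights) + np.array(log_m)
    new_w = np.exp(log_w - np.logaddexp.reduce(log_w))
    return post_comps, new_w

# Robustified MAP-style prior: informative component from historical data plus a
# vague Beta(1, 1) component with 20% prior weight (hypothetical numbers)
components = [(24.0, 16.0), (1.0, 1.0)]
weights = [0.8, 0.2]

# Prior-data conflict: only 5/30 responders in the new trial
post, w = mixture_posterior(5, 30, components, weights)
print(post, w)  # the mixture weight shifts toward the vague component, limiting borrowing
```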

6.
Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is to effectively borrow information from historical data while maintaining a reasonable type I error and minimal bias. We propose the elastic prior approach to address this challenge. Unlike existing approaches, this approach proactively controls the behavior of information borrowing and type I errors by incorporating the well-known concept of a clinically significant difference through an elastic function, defined as a monotonic function of a congruence measure between historical data and trial data. The elastic function is constructed to satisfy a set of prespecified criteria such that the resulting prior will strongly borrow information when historical and trial data are congruent, but refrain from information borrowing when historical and trial data are incongruent. The elastic prior approach has the desirable property of being information-borrowing consistent; that is, it asymptotically controls the type I error at the nominal value regardless of whether the historical data are congruent with the trial data. Our simulation study evaluating finite-sample characteristics confirms that, compared to existing methods, the elastic prior has better type I error control and yields competitive or higher power. The proposed approach is applicable to binary, continuous, and survival endpoints.  相似文献   
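
A rough sketch of the general mechanics described above, a congruence measure feeding a monotone elastic function that scales how much historical information enters the prior, using a simple z-type congruence statistic, a logistic elastic function, and hypothetical counts. The specific congruence measure, calibration criteria, and elastic prior construction in the paper are not reproduced here.

```python
import numpy as np

def congruence(x0, n0, x, n):
    """Two-sample z-type congruence statistic between the historical and current
    response rates (smaller values mean more congruent data)."""
    p0, p1 = x0 / n0, x / n
    p = (x0 + x) / (n0 + n)
    se = np.sqrt(p * (1 - p) * (1 / n0 + 1 / n))
    return abs(p0 - p1) / se

def elastic_weight(t, c=1.0, steepness=4.0):
    """Monotonically decreasing elastic function: near 1 when the congruence
    statistic t is small, near 0 when t is large; c and steepness would be
    calibrated from prespecified congruent/incongruent scenarios."""
    return 1.0 / (1.0 + np.exp(steepness * (t - c)))

x0, n0, x, n = 24, 40, 11, 20            # hypothetical historical and current counts
g = elastic_weight(congruence(x0, n0, x, n))
# Elastic-style analysis: historical pseudo-counts are discounted by g
a_post = 1 + g * x0 + x
b_post = 1 + g * (n0 - x0) + (n - x)
print(round(g, 2), round(a_post / (a_post + b_post), 3))
```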

7.
Chen MH, Ibrahim JG, Lam P, Yu A, Zhang Y. Biometrics. 2011;67(3):1163-1170.
We develop a new Bayesian approach to sample size determination (SSD) for the design of noninferiority clinical trials. We extend the fitting and sampling priors of Wang and Gelfand (2002, Statistical Science 17, 193–208) to Bayesian SSD with a focus on controlling the type I error and power. Historical data are incorporated via a hierarchical modeling approach as well as the power prior approach of Ibrahim and Chen (2000, Statistical Science 15, 46–60). Various properties of the proposed Bayesian SSD methodology are examined and a simulation-based computational algorithm is developed. The proposed methodology is applied to the design of a noninferiority medical device clinical trial with historical data from previous trials.  相似文献   

8.
Classical power analysis for sample size determination is typically performed in clinical trials. A “hybrid” classical Bayesian or a “fully Bayesian” approach can be alternatively used in order to add flexibility to the design assumptions needed at the planning stage of the study and to explicitly incorporate prior information in the procedure. In this paper, we exploit and compare these approaches to obtain the optimal sample size of a single-arm trial based on Poisson data. We adopt exact methods to establish the rejection of the null hypothesis within a frequentist or a Bayesian perspective and suggest the use of a conservative criterion for sample size determination that accounts for the not strictly monotonic behavior of the power function in the presence of discrete data. A Shiny web app in R has been developed to provide a user-friendly interface to easily compute the optimal sample size according to the proposed criteria and to assure the reproducibility of the results.  相似文献   
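
A minimal frequentist sketch of the conservative criterion for discrete data mentioned above: because the exact power of a one-arm Poisson test is a sawtooth function of the sample size n, the smallest n reaching the target power can be followed by larger n that fall below it, so the conservative criterion picks the smallest n from which the target is met at every larger sample size on the grid. The rates and grid below are hypothetical, and the hybrid and fully Bayesian variants in the paper replace the fixed design value with a design prior.

```python
from scipy import stats

def exact_power(n, lam0, lam1, alpha=0.05):
    """Exact power of the one-arm test of H0: lambda <= lam0 against lam1,
    where the total count over n subjects is S ~ Poisson(n * lambda)."""
    # smallest critical value c with P(S >= c | lam0) <= alpha
    c = stats.poisson.ppf(1.0 - alpha, n * lam0) + 1
    return stats.poisson.sf(c - 1, n * lam1)

lam0, lam1, target = 0.5, 0.8, 0.80          # hypothetical design values
grid = range(10, 200)
power = {n: exact_power(n, lam0, lam1) for n in grid}

# Naive rule: first n reaching the target; conservative rule: first n such that
# the target is also reached at every larger n on the grid (guards against the
# non-monotone, sawtooth shape of the exact power curve)
n_naive = min(n for n in grid if power[n] >= target)
n_conservative = min(n for n in grid if all(power[m] >= target for m in grid if m >= n))
print(n_naive, n_conservative)
```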

9.
A common concern in Bayesian data analysis is that an inappropriately informative prior may unduly influence posterior inferences. In the context of Bayesian clinical trial design, well chosen priors are important to ensure that posterior-based decision rules have good frequentist properties. However, it is difficult to quantify prior information in all but the most stylized models. This issue may be addressed by quantifying the prior information in terms of a number of hypothetical patients, i.e., a prior effective sample size (ESS). Prior ESS provides a useful tool for understanding the impact of prior assumptions. For example, the prior ESS may be used to guide calibration of prior variances and other hyperprior parameters. In this paper, we discuss such prior sensitivity analyses by using a recently proposed method to compute a prior ESS. We apply this in several typical settings of Bayesian biomedical data analysis and clinical trial design. The data analyses include cross-tabulated counts, multiple correlated diagnostic tests, and ordinal outcomes using a proportional-odds model. The study designs include a phase I trial with late-onset toxicities, a phase II trial that monitors event times, and a phase I/II trial with dose-finding based on efficacy and toxicity.  相似文献   

10.
In the era of precision medicine, novel designs are developed to deal with flexible clinical trials that incorporate many treatment strategies for multiple diseases in one trial setting. This situation often leads to small sample sizes in disease-treatment combinations and has fostered the discussion about the benefits of borrowing of external or historical information for decision-making in these trials. Several methods have been proposed that dynamically discount the amount of information borrowed from historical data based on the conformity between historical and current data. Specifically, Bayesian methods have been recommended and numerous investigations have been performed to characterize the properties of the various borrowing mechanisms with respect to the gain to be expected in the trials. However, there is common understanding that the risk of type I error inflation exists when information is borrowed and many simulation studies are carried out to quantify this effect. To add transparency to the debate, we show that if prior information is conditioned upon and a uniformly most powerful test exists, strict control of type I error implies that no power gain is possible under any mechanism of incorporation of prior information, including dynamic borrowing. The basis of the argument is to consider the test decision function as a function of the current data even when external information is included. We exemplify this finding in the case of a pediatric arm appended to an adult trial and dichotomous outcome for various methods of dynamic borrowing from adult information to the pediatric arm. In conclusion, if use of relevant external data is desired, the requirement of strict type I error control has to be replaced by more appropriate metrics.  相似文献   

11.
M. Xiong, S. W. Guo. Genetics. 1997;145(4):1201-1218.
With the increasing popularity of QTL mapping in economically important animals and experimental species, the need for statistical methodology for fine-scale QTL mapping becomes increasingly urgent. The ability to disentangle several linked QTL depends on the number of recombination events. An obvious approach to increasing the number of recombination events is to increase the sample size, but this approach is often constrained by resources. Moreover, increasing the sample size beyond a certain point will not further reduce the length of the confidence interval for QTL map locations. The alternative approach is to use historical recombinations. We use analytical methods to examine the properties of fine QTL mapping using historical recombinations accumulated through repeated intercrossing from an F2 population. We demonstrate that, using the historical recombinations, both simple and multiple regression models can significantly reduce the lengths of support intervals for estimated QTL map locations and the variances of the estimated QTL map locations. We also demonstrate that, while the simple regression model using historical recombinations does not reduce the variances of the estimated additive and dominant effects, the multiple regression model does. We further determine the power and threshold values for both the simple and multiple regression models. In addition, we calculate the Kullback-Leibler distance and Fisher information for the simple regression model, in the hope of further understanding the advantages and disadvantages of using historical recombinations relative to F2 data.  相似文献   

12.
Reducing the number of animal subjects used in biomedical experiments is desirable for ethical and practical reasons. Previous reviews of the benefits of reducing sample sizes have focused on improving experimental designs and methods of statistical analysis, but reducing the size of control groups has been considered rarely. We discuss how the number of current control animals can be reduced, without loss of statistical power, by incorporating information from historical controls, i.e. subjects used as controls in similar previous experiments. Using example data from published reports, we describe how to incorporate information from historical controls under a range of assumptions that might be made in biomedical experiments. Assuming more similarities between historical and current controls yields higher savings and allows the use of smaller current control groups. We conducted simulations, based on typical designs and sample sizes, to quantify how different assumptions about historical controls affect the power of statistical tests. We show that, under our simulation conditions, the number of current control subjects can be reduced by more than half by including historical controls in the analyses. In other experimental scenarios, control groups may be unnecessary. Paying attention to both the function and to the statistical requirements of control groups would result in reducing the total number of animals used in experiments, saving time, effort and money, and bringing research with animals within ethically acceptable bounds.  相似文献   

13.
Pilot studies are often used to help design ecological studies. Ideally the pilot data are incorporated into the full-scale study data, but if the pilot study's results indicate a need for major changes to experimental design, then pooling pilot and full-scale study data is difficult. The default position is to disregard the preliminary data. But ignoring pilot study data after a more comprehensive study has been completed forgoes statistical power or costs more by sampling additional data equivalent to the pilot study's sample size. With Bayesian methods, pilot study data can be used as an informative prior for a model built from the full-scale study dataset. We demonstrate a Bayesian method for recovering information from otherwise unusable pilot study data with a case study on eucalypt seedling mortality. A pilot study of eucalypt tree seedling mortality was conducted in southeastern Australia in 2005. A larger study with a modified design was conducted the following year. The two datasets differed substantially, so they could not easily be combined. Posterior estimates from pilot dataset model parameters were used to inform a model for the second larger dataset. Model checking indicated that incorporating prior information maintained the predictive capacity of the model with respect to the training data. Importantly, adding prior information improved model accuracy in predicting a validation dataset. Adding prior information increased the precision and the effective sample size for estimating the average mortality rate. We recommend that practitioners move away from the default position of discarding pilot study data when they are incompatible with the form of their full-scale studies. More generally, we recommend that ecologists should use informative priors more frequently to reap the benefits of the additional data.  相似文献   
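
A minimal conjugate sketch of the core idea above, the posterior from the pilot data becoming the informative prior for the full-scale analysis, using a simple binomial model for seedling mortality with hypothetical counts. The case study in the paper fits a richer mortality model, but the mechanics of carrying the pilot posterior forward are the same.

```python
from scipy import stats

# Hypothetical counts: pilot study 12 deaths / 60 seedlings,
# full-scale study 45 deaths / 300 seedlings
pilot_dead, pilot_n = 12, 60
main_dead, main_n = 45, 300

# Step 1: vague Beta(1, 1) prior updated with the pilot data
a_pilot, b_pilot = 1 + pilot_dead, 1 + pilot_n - pilot_dead

# Step 2: pilot posterior reused as the informative prior for the full study
with_pilot = stats.beta(a_pilot + main_dead, b_pilot + main_n - main_dead)
without_pilot = stats.beta(1 + main_dead, 1 + main_n - main_dead)

for label, d in [("with pilot prior", with_pilot), ("flat prior", without_pilot)]:
    lo, hi = d.interval(0.95)
    # the pilot prior adds roughly the pilot sample size of information,
    # tightening the credible interval for the mortality rate
    print(f"{label}: mean={d.mean():.3f}, 95% CrI=({lo:.3f}, {hi:.3f})")
```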

14.
Fisheries stock assessment and decision analysis: the Bayesian approach
The Bayesian approach to stock assessment determines the probabilities of alternative hypotheses using information for the stock in question and from inferences for other stocks/species. These probabilities are essential if the consequences of alternative management actions are to be evaluated through a decision analysis. Using the Bayesian approach to stock assessment and decision analysis it becomes possible to admit the full range of uncertainty and use the collective historical experience of fisheries science when estimating the consequences of proposed management actions. Recent advances in computing algorithms and power have allowed methods based on the Bayesian approach to be used even for fairly complex stock assessment models and to be within the reach of most stock assessment scientists. However, to avoid coming to ill-founded conclusions, care must be taken when selecting prior distributions. In particular, selection of priors designed to be noninformative with respect to quantities of interest to management is problematic. The arguments of the paper are illustrated using New Zealand's western stock of hoki, Macruronus novaezelandiae (Merlucciidae), and the Bering–Chukchi–Beaufort Seas stock of bowhead whales as examples.  相似文献   

15.
Basket trials simultaneously evaluate the effect of one or more drugs on a defined biomarker, genetic alteration, or molecular target in a variety of disease subtypes, often called strata. A conventional approach for analyzing such trials is an independent analysis of each of the strata. This analysis is inefficient as it lacks the power to detect the effect of drugs in each stratum. To address these issues, various designs for basket trials have been proposed, centering on designs using Bayesian hierarchical models. In this article, we propose a novel Bayesian basket trial design that incorporates predictive sample size determination, early termination for inefficacy and efficacy, and the borrowing of information across strata. The borrowing of information is based on the similarity between the posterior distributions of the response probability. In general, Bayesian hierarchical models have many distributional assumptions along with multiple parameters. By contrast, our method has prior distributions for response probability and two parameters for similarity of distributions. The proposed design is easier to implement and less computationally demanding than other Bayesian basket designs. Through a simulation with various scenarios, our proposed design is compared with other designs including one that does not borrow information and one that uses a Bayesian hierarchical model.  相似文献   

16.
We review methods for detecting and assessing the strength of density dependence based on 2 types of approaches: surveys of population size and studies of life history traits, in particular demographic parameters. For the first type of studies, methods neglecting uncertainty in population size should definitely be abandoned. Bayesian approaches to simple state-space models accounting for uncertainty in population size are recommended, with some caution because of numerical difficulties and risks of model misspecification. Realistic state-space models incorporating features such as environmental covariates, age structure, etc., may lack power because of the shortness of the time series and the simultaneous presence of process and sampling variability. In all cases, complementing the population survey data with some external information, with priority on the intrinsic growth rate, is highly recommended. Methods for detecting density dependence in life history traits are generally conservative (i.e., tend to underestimate the strength of density dependence). Among approaches to correct for this effect, the state-space formulation of capture–recapture models is again the most promising. Foreseeable developments will exploit integrated monitoring combining population size surveys and individual longitudinal data in refined state-space models, for which a Bayesian approach is the most straightforward statistical treatment. One may thus expect an integration of various types of models that will make it possible to look at density dependence as a complex biological process interacting with other processes rather than in terms of a simple equation; modern statistical and modeling tools make such a synthesis within reach. © 2012 The Wildlife Society.  相似文献   

17.
Commercial whaling decimated many whale populations, including the eastern Pacific gray whale, but little is known about how population dynamics or ecology differed prior to these removals. Of particular interest is the possibility of a large population decline prior to whaling, as such a decline could explain the ~5-fold difference between genetic estimates of prior abundance and estimates based on historical records. We analyzed genetic (mitochondrial control region) and isotopic information from modern and prehistoric gray whales using serial coalescent simulations and Bayesian skyline analyses to test for a pre-whaling decline and to examine prehistoric genetic diversity, population dynamics and ecology. Simulations demonstrate that significant genetic differences observed between ancient and modern samples could be caused by a large, recent population bottleneck, roughly concurrent with commercial whaling. Stable isotopes show minimal differences between modern and ancient gray whale foraging ecology. Using rejection-based Approximate Bayesian Computation, we estimate the size of the population bottleneck at its minimum abundance and the pre-bottleneck abundance. Our results agree with previous genetic studies suggesting the historical size of the eastern gray whale population was roughly three to five times its current size.  相似文献   

18.
We review a Bayesian predictive approach for interim data monitoring and propose its application to interim sample size reestimation for clinical trials. Based on interim data, this approach predicts how the sample size of a clinical trial needs to be adjusted so as to claim a success at the conclusion of the trial with an expected probability. The method is compared with predictive power and conditional power approaches using clinical trial data. Advantages of this approach over the others are discussed.  相似文献   
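
A minimal sketch of the predictive-probability calculation that underlies this kind of sample size reestimation, for a single-arm binary endpoint with a flat Beta(1, 1) prior: given the interim data, the probability of ending with a successful Bayesian analysis is computed for candidate numbers of additional patients, and the smallest number achieving the desired assurance would be selected. The interim counts, the null rate p0, and the posterior success threshold are hypothetical, and the reviewed approach is more general.

```python
import numpy as np
from scipy import stats

def predictive_prob_success(x, n1, n2, p0=0.3, post_thresh=0.975):
    """Predictive probability that a single-arm binary trial, currently at x
    responders out of n1, ends with posterior Pr(p > p0) > post_thresh after
    n2 further patients (Beta(1, 1) prior throughout)."""
    a, b = 1 + x, 1 + n1 - x
    y = np.arange(n2 + 1)                        # possible future responder counts
    pred = stats.betabinom.pmf(y, n2, a, b)      # posterior predictive of y
    final_post = stats.beta(a + y, b + n2 - y)   # posterior after all data
    success = final_post.sf(p0) > post_thresh    # final analysis claims success
    return float(np.sum(pred[success]))

# Interim data (hypothetical): 14 responders in 30 patients
x, n1 = 14, 30
for n2 in (10, 20, 30, 40, 60):
    print(n2, round(predictive_prob_success(x, n1, n2), 3))
# Reestimation rule: take the smallest n2 whose predictive probability of a
# successful final analysis reaches the desired assurance, e.g. 0.80
```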

19.
The allocation of resources among plant structures depends on size. For example, plants need to have a certain minimum size before they allocate resources to producing seeds. Furthermore, the allometric relationship between different plant structures and size has often been found to be adequately described by power functions. Allometric power functions have traditionally led to bias when classical linear statistical methods are used to estimate and predict, for example, seed production as a function of size. The statistical problems of using linear models to estimate a power function with a threshold value have been solved, but, owing to the relative complexity of these solutions, they are rarely used in the ecological literature. Here, an intuitive and simple power model with a minimum size of allocation is investigated using a Bayesian estimation method on a simulated data set. The Bayesian estimation provided satisfactory estimates of the parameters in the model, and the model is suggested as a simple alternative when fitting allometric power functions to ecological data.  相似文献   

20.
For a Phase III randomized trial that compares survival outcomes between an experimental treatment versus a standard therapy, interim monitoring analysis is used to potentially terminate the study early based on efficacy. To preserve the nominal Type I error rate, alpha spending methods and information fractions are used to compute appropriate rejection boundaries in studies with planned interim analyses. For a one-sided trial design applied to a scenario in which the experimental therapy is superior to the standard therapy, interim monitoring should provide the opportunity to stop the trial prior to full follow-up and conclude that the experimental therapy is superior. This paper proposes a method called total control only (TCO) for estimating the information fraction based on the number of events within the standard treatment regimen. Based on theoretical derivations and simulation studies, for a maximum duration superiority design, the TCO method is not influenced by departure from the designed hazard ratio, is sensitive to detecting treatment differences, and preserves the Type I error rate compared to information fraction estimation methods that are based on total observed events. The TCO method is simple to apply, provides unbiased estimates of the information fraction, and does not rely on statistical assumptions that are impossible to verify at the design stage. For these reasons, the TCO method is a good approach when designing a maximum duration superiority trial with planned interim monitoring analyses.  相似文献   
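
A small sketch contrasting how the two definitions of the information fraction feed an O'Brien-Fleming-type alpha spending function at a single interim look. The event counts below are hypothetical, and the derivations and operating characteristics of the TCO method itself are given in the paper, not reproduced here.

```python
import numpy as np
from scipy import stats

def of_spending(t, alpha=0.025):
    """Lan-DeMets O'Brien-Fleming-type spending function for one-sided alpha,
    evaluated at information fraction t."""
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    return 2.0 * (1.0 - stats.norm.cdf(z / np.sqrt(t)))

# Hypothetical interim snapshot of a maximum-duration superiority trial
control_events_observed, control_events_planned = 120, 300
total_events_observed, total_events_planned = 200, 600

t_tco = control_events_observed / control_events_planned   # TCO information fraction
t_tot = total_events_observed / total_events_planned        # total-events fraction

for label, t in [("TCO", t_tco), ("total events", t_tot)]:
    spent = of_spending(t)
    boundary = stats.norm.ppf(1.0 - spent)   # rejection boundary at the first look
    # the TCO fraction does not shrink when the experimental arm has fewer
    # events than designed, so the interim boundary is less extreme here
    print(f"{label}: fraction={t:.2f}, alpha spent={spent:.5f}, z boundary={boundary:.2f}")
```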
