Similar Articles
A total of 20 similar articles were found (search time: 609 ms).
1.
In the era of precision medicine, novel designs are being developed to deal with flexible clinical trials that incorporate many treatment strategies for multiple diseases in one trial setting. This situation often leads to small sample sizes in disease-treatment combinations and has fostered discussion about the benefits of borrowing external or historical information for decision-making in these trials. Several methods have been proposed that dynamically discount the amount of information borrowed from historical data based on the conformity between historical and current data. Specifically, Bayesian methods have been recommended, and numerous investigations have been performed to characterize the properties of the various borrowing mechanisms with respect to the gain to be expected in the trials. However, there is a common understanding that the risk of type I error inflation exists when information is borrowed, and many simulation studies have been carried out to quantify this effect. To add transparency to the debate, we show that if prior information is conditioned upon and a uniformly most powerful test exists, strict control of the type I error implies that no power gain is possible under any mechanism of incorporating prior information, including dynamic borrowing. The basis of the argument is to consider the test decision function as a function of the current data, even when external information is included. We exemplify this finding for a pediatric arm appended to an adult trial with a dichotomous outcome, for various methods of dynamically borrowing adult information into the pediatric arm. In conclusion, if use of relevant external data is desired, the requirement of strict type I error control has to be replaced by more appropriate metrics.
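A minimal Monte Carlo sketch of the trade-off this abstract describes, not the paper's derivation: borrowing adult data into a pediatric test with the nominal critical value inflates the type I error, while recalibrating the critical value to restore strict control also removes the power gain relative to the pediatric-only test. All sample sizes, response rates, and the fixed borrowing weight `delta` are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ped, n_adult, delta = 30, 200, 0.5    # delta: power-prior-style weight (assumed)
p0, p1, p_adult = 0.30, 0.45, 0.35      # pediatric null/alternative, adult rate
n_sim = 200_000

def z_borrow(x_ped, x_adult):
    # Score-type statistic with adult counts down-weighted by delta
    n_eff = n_ped + delta * n_adult
    p_hat = (x_ped + delta * x_adult) / n_eff
    return (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n_eff)

x_adult = rng.binomial(n_adult, p_adult, n_sim)
z_null = z_borrow(rng.binomial(n_ped, p0, n_sim), x_adult)
z_alt = z_borrow(rng.binomial(n_ped, p1, n_sim), x_adult)

crit = 1.645                              # nominal one-sided 5% critical value
crit_strict = np.quantile(z_null, 0.95)   # recalibrated for strict control
for name, c in [("nominal     ", crit), ("recalibrated", crit_strict)]:
    print(f"{name}: type I = {(z_null > c).mean():.3f}, "
          f"power = {(z_alt > c).mean():.3f}")

# Reference: the pediatric-only (no borrowing) test at the same nominal level
z_ped = lambda x: (x / n_ped - p0) / np.sqrt(p0 * (1 - p0) / n_ped)
print(f"no borrowing: type I = "
      f"{(z_ped(rng.binomial(n_ped, p0, n_sim)) > crit).mean():.3f}, "
      f"power = {(z_ped(rng.binomial(n_ped, p1, n_sim)) > crit).mean():.3f}")
```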

2.
Ibrahim JG, Chen MH, Xia HA, Liu T. Biometrics 2012;68(2):578-586.
Recent guidance from the Food and Drug Administration for the evaluation of new therapies in the treatment of type 2 diabetes mellitus (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate historical survival meta-data into the statistical design. Various properties of the proposed methodology are examined, and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing various quantities, such as the power and the type I error, in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program including a noninferiority clinical trial for CV risk assessment in T2DM studies.
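A hedged conjugate sketch of the power prior idea in its simplest survival form, far simpler than the paper's survival-regression meta-analysis: historical events and exposure enter an exponential-rate model at a fractional weight a0. All counts and the fixed weight below are invented.

```python
# Gamma prior on an exponential hazard rate; historical data discounted by a0.
from scipy import stats

a0 = 0.3                      # power-prior discount in [0, 1] (assumed, fixed)
d_hist, T_hist = 40, 800.0    # historical events and person-years (illustrative)
d_cur, T_cur = 25, 600.0      # current-trial events and person-years
alpha0, beta0 = 0.001, 0.001  # vague initial Gamma prior

# Conjugate update: posterior is Gamma(alpha + events, beta + exposure),
# with historical quantities entering at fractional weight a0.
alpha_post = alpha0 + a0 * d_hist + d_cur
beta_post = beta0 + a0 * T_hist + T_cur

post = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean hazard:", post.mean())
print("95% credible interval:", post.ppf([0.025, 0.975]))
```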

3.
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this article, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is an elaboration of the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear-models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters and, in particular, confer statistical significance to the observed reduction in tumor size for the experimental regimen as compared with the control regimen.
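A toy version of commensurate borrowing for Gaussian data with known variances, to show the mechanism only; in the paper the commensurability parameter tau is itself modeled rather than fixed, and all numbers here are made up.

```python
# Current mean mu gets a N(mu_hist, 1/tau) prior centered at the historical
# estimate: large tau (commensurate data) means heavy borrowing, small tau
# (prior-data conflict) means the historical data are largely discounted.
import numpy as np

def commensurate_posterior(ybar, n, sigma2, mu_hist, tau):
    """Posterior mean/variance of mu with prior N(mu_hist, 1/tau)."""
    prec_data = n / sigma2          # precision from current data
    prec_prior = tau                # commensurability acts as prior precision
    var_post = 1.0 / (prec_data + prec_prior)
    mean_post = var_post * (prec_data * ybar + prec_prior * mu_hist)
    return mean_post, var_post

mu_hist = 1.2                        # historical estimate (invented)
for tau in [0.1, 1.0, 10.0, 100.0]:  # small tau = conflict, large = agreement
    m, v = commensurate_posterior(ybar=0.4, n=50, sigma2=4.0,
                                  mu_hist=mu_hist, tau=tau)
    print(f"tau={tau:6.1f}  posterior mean={m:.3f}  sd={np.sqrt(v):.3f}")
```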

4.
Bayesian methods allow borrowing of historical information through prior distributions. The concept of prior effective sample size (prior ESS) facilitates quantification and communication of such prior information by equating it to a sample size. Prior information can arise from historical observations; thus, the traditional approach identifies the ESS with such a historical sample size. However, this measure is independent of newly observed data, and thus would not capture an actual “loss of information” induced by the prior in case of prior-data conflict. We build on recent work to relate prior impact to the number of (virtual) samples from the current data model and introduce the effective current sample size (ECSS) of a prior, tailored to application in Bayesian clinical trial designs. Special emphasis is put on robust mixture, power, and commensurate priors. We apply the approach to an adaptive design in which the number of recruited patients is adjusted depending on the effective sample size at an interim analysis. We argue that the ECSS is the appropriate measure in this case, as the aim is to save current (as opposed to historical) patients from recruitment. Furthermore, the ECSS can help overcome lack of consensus in the ESS assessment of mixture priors and can, more broadly, provide further insights into the impact of priors. An R package accompanies the paper.
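For context, a sketch of the classical ESS conventions this abstract contrasts itself with: a Beta(a, b) prior has the textbook ESS a + b, while a robust mixture has no closed form, and the common moment-matching heuristic shown below (not the paper's ECSS) already behaves very differently. Prior parameters are invented.

```python
# Moment-match a two-component Beta mixture to a single Beta and read off
# its ESS a* + b*; a 20% vague component slashes the nominal 40 'patients'.
a, b = 12.0, 28.0
print("Beta(12, 28) prior ESS:", a + b)   # = 40 virtual patients

# Robust mixture: 0.8 * Beta(12, 28) + 0.2 * Beta(1, 1)
w, (a1, b1), (a2, b2) = 0.8, (12.0, 28.0), (1.0, 1.0)
m1, m2 = a1 / (a1 + b1), a2 / (a2 + b2)
v1 = a1 * b1 / ((a1 + b1) ** 2 * (a1 + b1 + 1))
v2 = a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1))
mean = w * m1 + (1 - w) * m2
var = w * (v1 + m1**2) + (1 - w) * (v2 + m2**2) - mean**2

ess = mean * (1 - mean) / var - 1         # ESS of the moment-matched Beta
print("moment-matched mixture ESS:", round(ess, 1))
```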

5.
Chen MH, Ibrahim JG, Lam P, Yu A, Zhang Y. Biometrics 2011;67(3):1163-1170.
We develop a new Bayesian approach to sample size determination (SSD) for the design of noninferiority clinical trials. We extend the fitting and sampling priors of Wang and Gelfand (2002, Statistical Science 17, 193–208) to Bayesian SSD with a focus on controlling the type I error and power. Historical data are incorporated via a hierarchical modeling approach as well as the power prior approach of Ibrahim and Chen (2000, Statistical Science 15, 46–60). Various properties of the proposed Bayesian SSD methodology are examined and a simulation-based computational algorithm is developed. The proposed methodology is applied to the design of a noninferiority medical device clinical trial with historical data from previous trials.
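A hedged sketch of the fitting-versus-sampling prior mechanic for a binary endpoint (the paper works with noninferiority and hierarchical/power priors; everything below, including the thresholds, is illustrative): data are generated under a sampling prior, each simulated trial is analyzed under a vague fitting prior, and Bayesian power is the rate of positive posterior conclusions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, n_sim, p_threshold = 120, 2000, 0.975
p_null = 0.5                             # efficacy boundary (assumed)

def bayes_power(sampling_prior):
    hits = 0
    for _ in range(n_sim):
        p_true = sampling_prior()        # draw the truth from the sampling prior
        x = rng.binomial(n, p_true)
        post = stats.beta(1 + x, 1 + n - x)      # Beta(1,1) fitting prior
        hits += (1 - post.cdf(p_null)) > p_threshold
    return hits / n_sim

print("power :", bayes_power(lambda: rng.beta(60, 40)))  # centered at 0.6
print("type I:", bayes_power(lambda: p_null))            # point mass at the null
```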

6.
There has been growing interest in leveraging external control data to augment the control arm data of a randomized clinical trial and enable more informative decision making. In recent years, the quality and availability of real-world data used as external controls have improved steadily. However, borrowing information by directly pooling such external controls with randomized controls may lead to biased estimates of the treatment effect. Dynamic borrowing methods under the Bayesian framework have been proposed to better control the false positive error. However, the numerical computation and, especially, the parameter tuning of these Bayesian dynamic borrowing methods remain a challenge in practice. In this paper, we present a frequentist interpretation of a Bayesian commensurate prior borrowing approach and describe the intrinsic challenges this method faces from the perspective of optimization. Motivated by this observation, we propose a new dynamic borrowing approach using the adaptive lasso. The treatment effect estimate derived from this method follows a known asymptotic distribution, which can be used to construct confidence intervals and conduct hypothesis tests. The finite-sample performance of the method is evaluated through extensive Monte Carlo simulations under different settings. We observe highly competitive performance of the adaptive lasso compared to Bayesian approaches. Methods for selecting the tuning parameters are also thoroughly discussed based on results from numerical studies and an illustrative example.
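A one-parameter cartoon of the adaptive-lasso borrowing idea, not the paper's estimator: the bias gamma of the external controls relative to the randomized controls is soft-thresholded with an adaptive weight 1/|gamma_ols|, so a small apparent bias is shrunk exactly to zero (full pooling) while a large one survives. The data, the tuning value `lam`, and the bias-correction step are all invented for illustration; in practice `lam` would be tuned as the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(3)
y_trt = rng.normal(1.0, 1, 100)   # randomized treated arm (simulated)
y_ctl = rng.normal(0.0, 1, 50)    # randomized controls
y_ext = rng.normal(0.3, 1, 200)   # external controls with possible bias

gamma_ols = y_ext.mean() - y_ctl.mean()     # apparent external-control bias
lam = 0.02                                  # tuning parameter (assumed)
w = 1.0 / abs(gamma_ols)                    # adaptive-lasso weight
# Soft-thresholding: the lasso solution for this one-parameter problem
gamma_hat = np.sign(gamma_ols) * max(abs(gamma_ols) - lam * w, 0.0)

# Bias-corrected pooling of randomized and external controls
ctl_pooled = np.concatenate([y_ctl, y_ext - gamma_hat]).mean()
print("estimated bias   :", round(gamma_hat, 3))
print("treatment effect :", round(y_trt.mean() - ctl_pooled, 3))
```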

7.
For the approval of biosimilars, it is, in most cases, necessary to conduct large Phase III clinical trials in patients to convince the regulatory authorities that the product is comparable in terms of efficacy and safety to the originator product. As the originator product has already been studied in several trials beforehand, it seems natural to incorporate this historical information into the demonstration of equivalent efficacy. Since all studies for the regulatory approval of biosimilars are confirmatory studies, it is required that the statistical approach has reasonable frequentist properties, most importantly that the Type I error rate is controlled, at least in all scenarios that are realistic in practice. However, it is well known that the incorporation of historical information can lead to an inflation of the Type I error rate in the case of a conflict between the distribution of the historical data and the distribution of the trial data. We illustrate this issue and confirm, using the Bayesian robustified meta-analytic-predictive (MAP) approach as an example, that simultaneously controlling the Type I error rate over the complete parameter space and gaining power in comparison to a standard frequentist approach that only considers the data in the new study is not possible. We propose a hybrid Bayesian-frequentist approach for binary endpoints that controls the Type I error rate in the neighborhood of the center of the prior distribution while improving the power. We study the properties of this approach in an extensive simulation study and provide a real-world example.
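The standard mechanics of a robustified MAP prior for a binary endpoint, sketched with invented numbers (the paper's hybrid Bayesian-frequentist calibration goes beyond this): the prior is a mixture of an informative component and a vague component, and the observed data re-weight the components, so prior-data conflict automatically reduces borrowing.

```python
from scipy import stats

w = 0.8                       # prior weight on the informative component
a1, b1 = 15.0, 35.0           # informative component from history (assumed)
a2, b2 = 1.0, 1.0             # vague component
x, n = 24, 50                 # current-trial responders / patients (invented)

def marglik(a, b):
    # Beta-binomial marginal likelihood of x successes in n trials
    return stats.betabinom(n, a, b).pmf(x)

w_post = w * marglik(a1, b1) / (w * marglik(a1, b1)
                                + (1 - w) * marglik(a2, b2))
post1 = stats.beta(a1 + x, b1 + n - x)
post2 = stats.beta(a2 + x, b2 + n - x)
post_mean = w_post * post1.mean() + (1 - w_post) * post2.mean()
print(f"posterior weight on informative component: {w_post:.3f}")
print(f"posterior mean response rate: {post_mean:.3f}")
```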

8.
Incorporating historical information into the design and analysis of a new clinical trial has been the subject of much recent discussion. For example, in the context of clinical trials of antibiotics for drug-resistant infections, where patients with specific infections can be difficult to recruit, there is often only limited and heterogeneous information available from historical trials. To make the best use of the combined information at hand, we consider an approach based on the multiple power prior that allows the prior weight of each historical study to be chosen adaptively by empirical Bayes. This choice of weight has the advantage that it varies commensurably with differences between the historical and current data, and it can choose weights near 1 if the data from the corresponding historical study are similar enough to the data from the current study. Fully Bayesian approaches are also considered. The methods are applied to data from antibiotics trials. An analysis of the operating characteristics in a binomial setting shows that the proposed empirical Bayes adaptive method works well compared with several alternative approaches, including the meta-analytic prior.
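A hedged sketch of empirical Bayes weight selection for a single historical binomial study (the paper handles several studies jointly): the power-prior weight delta scales the historical counts and is chosen to maximize the marginal likelihood of the current data. Counts are invented.

```python
import numpy as np
from scipy import special

x0, n0 = 30, 100     # historical responders / patients (illustrative)
x, n = 16, 50        # current responders / patients

def log_marglik(delta, a0=1.0, b0=1.0):
    # Discounted-history prior: Beta(a0 + delta*x0, b0 + delta*(n0 - x0));
    # beta-binomial marginal likelihood up to a constant not involving delta.
    a, b = a0 + delta * x0, b0 + delta * (n0 - x0)
    return special.betaln(a + x, b + n - x) - special.betaln(a, b)

deltas = np.linspace(0, 1, 201)
ll = [log_marglik(d) for d in deltas]
print("empirical Bayes delta:", deltas[int(np.argmax(ll))])
```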

9.
Estimating p-values in small microarray experiments
MOTIVATION: Microarray data typically have small numbers of observations per gene, which can result in low power for statistical tests. Test statistics that borrow information from data across all of the genes can improve power, but these statistics have non-standard distributions, and their significance must be assessed using permutation analysis. When sample sizes are small, the number of distinct permutations can be severely limited, and pooling the permutation-derived test statistics across all genes has been proposed. However, the null distribution of the test statistics under permutation is not the same for equally and differentially expressed genes. This can have a negative impact on both p-value estimation and the power of information-borrowing statistics. RESULTS: We investigate permutation-based methods for estimating p-values. One of the methods, which pools permuted statistics from a selected subset of the data, is shown to have the correct type I error rate and to provide accurate estimates of the false discovery rate (FDR). We provide guidelines for selecting an appropriate subset. We also demonstrate that information-borrowing statistics have substantially increased power compared with the t-test in small experiments.
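A small simulated sketch of pooled permutation p-values (illustrative data; note that this naive version pools across all genes, including differentially expressed ones, which is exactly the problem the paper addresses by pooling from a selected subset):

```python
import numpy as np

rng = np.random.default_rng(11)
n_genes, m, n_perm = 500, 3, 20          # 3 vs 3 samples, 20 permutations
x = rng.normal(0, 1, (n_genes, 2 * m))
x[:25, :m] += 2.0                        # 25 truly DE genes (simulated)

def tstat(d):
    a, b = d[..., :m], d[..., m:]
    return (a.mean(-1) - b.mean(-1)) / np.sqrt(
        a.var(-1, ddof=1) / m + b.var(-1, ddof=1) / m)

obs = tstat(x)
# Permute group labels within each gene, n_perm times
null = np.vstack([tstat(rng.permuted(x, axis=1)) for _ in range(n_perm)])

# Pool permuted statistics across genes into one large null distribution
pooled = np.abs(null.ravel())
pvals = (pooled[None, :] >= np.abs(obs)[:, None]).mean(axis=1)
print("genes with pooled-permutation p < 0.05:", int((pvals < 0.05).sum()))
```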

10.
As an approach to combining the phase II dose-finding trial and phase III pivotal trials, we propose a two-stage adaptive design that selects the best among several treatments in the first stage and tests the significance of the selected treatment in the second stage. The approach controls the type I error, defined as the probability of selecting a treatment and claiming its significance when the selected treatment is not different from placebo, as considered in Bischoff and Miller (2005). Our approach uses the conditional error function and allows the conditional type I error function for the second stage to be determined from information observed at the first stage, in a similar way to an ordinary adaptive design without treatment selection. We examine properties such as the expected sample size and stage-2 power of this design for a given type I error and a maximum stage-2 sample size under different hypothesis configurations. We also propose a method to find the optimal conditional error function of a simple parametric form to improve the performance of the design, and we derive optimal designs under some hypothesis configurations. Application of this approach is illustrated by a hypothetical example.
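A hedged sketch of the conditional error function idea for a two-stage design with a single treatment and no selection, which is simpler than the paper's setting: whatever happens at stage 1, stage 2 is tested at the conditional level A(z1), which preserves the overall one-sided type I error.

```python
import numpy as np
from scipy import stats

alpha, n1, n2 = 0.025, 50, 100
n = n1 + n2
z_crit = stats.norm.ppf(1 - alpha)

def conditional_error(z1):
    # P(reject overall | stage-1 statistic z1) under H0, for the fixed test
    # Z = (sqrt(n1)*Z1 + sqrt(n2)*Z2) / sqrt(n) > z_crit
    return 1 - stats.norm.cdf((z_crit * np.sqrt(n) - np.sqrt(n1) * z1)
                              / np.sqrt(n2))

for z1 in [-1.0, 0.0, 1.0, 2.0]:
    A = conditional_error(z1)
    print(f"z1={z1:+.1f}: stage-2 level {A:.4f}, "
          f"stage-2 critical value {stats.norm.ppf(1 - A):.3f}")
```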

11.
Determining the sample size of an experiment can be challenging, even more so when incorporating external information via a prior distribution. Such information is increasingly used to reduce the size of the control group in randomized clinical trials. Knowing the amount of prior information, expressed as an equivalent prior effective sample size (ESS), clearly facilitates trial designs. Various methods to obtain a prior's ESS have been proposed recently. They have been justified by the fact that they give the standard ESS for one-parameter exponential families. However, despite being based on similar information-based metrics, they may lead to surprisingly different ESS for nonconjugate settings, which complicates many designs with prior information. We show that current methods fail a basic predictive consistency criterion, which requires the expected posterior-predictive ESS for a sample of size N to be the sum of the prior ESS and N. The expected local-information-ratio ESS is introduced and shown to be predictively consistent. It corrects the ESS of current methods, as shown for normally distributed data with a heavy-tailed Student-t prior and exponential data with a generalized Gamma prior. Finally, two applications are discussed: the prior ESS for the control group derived from historical data and the posterior ESS for hierarchical subgroup analyses.
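A tiny numerical check of the predictive consistency criterion in the one case where it holds trivially, the conjugate Beta-binomial model (the interesting failures in the paper occur in nonconjugate settings such as Student-t priors, where the closed form below does not exist):

```python
# For Beta(a, b) the ESS is a + b, and after N binomial observations the
# posterior Beta(a + x, b + N - x) has ESS (a + b) + N for every outcome x.
a, b, N = 8.0, 12.0, 30
prior_ess = a + b
for x in [0, 10, 20, 30]:                 # any possible number of successes
    post_ess = (a + x) + (b + N - x)
    assert post_ess == prior_ess + N      # predictive consistency, exactly
print("prior ESS:", prior_ess, "-> posterior ESS:", prior_ess + N)
```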

12.
Identifying differentially expressed genes across various conditions or genotypes is the most typical approach to studying the regulation of gene expression. An estimate of the gene-specific variance is often needed to assess statistical significance in most differential expression (DE) detection methods, including linear models (e.g., for transformed and normalized microarray data) and generalized linear models (e.g., for count data in RNA-seq). Because of commonly limited sample sizes, the variance estimate is often unstable in small experiments. Shrinkage estimates using empirical Bayes methods have proven useful in improving the variance estimate, and hence the detection of DE. The most widely used empirical Bayes methods borrow information across genes within the same experiment. In these methods, genes are considered exchangeable, or exchangeable conditional on expression level. We propose that, with the increasing accumulation of expression data, borrowing information from historical data on the same gene can provide a better estimate of the gene-specific variance and thus further improve DE detection. Specifically, we show that the variation of gene expression is truly gene-specific and reproducible between different experiments. We present a new method to establish an informative gene-specific prior on the variance of expression using existing public data, and we illustrate how to shrink the variance estimate and detect DE. We demonstrate improvement in DE detection under our strategy compared with leading DE detection methods.
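A hedged sketch of the shrinkage step with a gene-specific historical prior (illustrative; limma-style shrinkage instead pools across genes within one experiment): with a scaled-inverse-chi-squared prior with d0 degrees of freedom and scale s0sq, the posterior variance estimate is a degrees-of-freedom-weighted average. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 2                                 # residual df in the new, small experiment
true_var = 1.5
s2 = true_var * rng.chisquare(d, 10_000) / d   # raw per-gene variance estimates

d0, s0sq = 8, 1.4   # prior df and scale from historical data on the SAME gene
s2_shrunk = (d0 * s0sq + d * s2) / (d0 + d)    # conjugate posterior estimate

print("raw    : mean %.2f, sd %.2f" % (s2.mean(), s2.std()))
print("shrunk : mean %.2f, sd %.2f" % (s2_shrunk.mean(), s2_shrunk.std()))
```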

13.
Suppose we are interested in the effect of a treatment in a clinical trial. The efficiency of inference may be limited by the small sample size. However, external control data are often available from historical studies. Motivated by an application to Helicobacter pylori infection, we show how to borrow strength from such data to improve the efficiency of inference in the clinical trial. Under an exchangeability assumption about the potential outcome mean, we show that the semiparametric efficiency bound for estimating the average treatment effect can be reduced by incorporating both the clinical trial data and the external controls. We then derive a doubly robust and locally efficient estimator. The improvement in efficiency is prominent especially when the external control data set has a large sample size and small variability. Our method allows for a relaxed overlap assumption, and we illustrate this with the case where the clinical trial contains only a treated group. We also develop doubly robust and locally efficient approaches that extrapolate the causal effect in the clinical trial to the external population and the overall population. Our results also offer meaningful implications for trial design and data collection. We evaluate the finite-sample performance of the proposed estimators via simulation. In the Helicobacter pylori infection application, our approach shows that the combination treatment has potential efficacy advantages over the triple therapy.
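A toy simulation of the efficiency gain under mean exchangeability, not the paper's doubly robust estimator (which also handles covariates): the control mean is estimated by precision-weighting the randomized and external control arms, and the variance of the effect estimate shrinks when the external arm is large and low-variance. All data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
reps, ates = 5000, {"trial only": [], "augmented ": []}
for _ in range(reps):
    y_trt = rng.normal(1.0, 2.0, 80)
    y_ctl = rng.normal(0.0, 2.0, 40)
    y_ext = rng.normal(0.0, 1.0, 400)   # exchangeable, large, low-variance
    ates["trial only"].append(y_trt.mean() - y_ctl.mean())
    w1 = len(y_ctl) / y_ctl.var(ddof=1)          # precision of each control mean
    w2 = len(y_ext) / y_ext.var(ddof=1)
    mu_ctl = (w1 * y_ctl.mean() + w2 * y_ext.mean()) / (w1 + w2)
    ates["augmented "].append(y_trt.mean() - mu_ctl)

for k, v in ates.items():
    print(f"{k}: mean {np.mean(v):+.3f}, sd {np.std(v):.3f}")
```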

14.
Statistical inference for microarray experiments usually involves the estimation of error variance for each gene. Because the sample size available for each gene is often low, the usual unbiased estimator of the error variance can be unreliable. Shrinkage methods, including empirical Bayes approaches that borrow information across genes to produce more stable estimates, have been developed in recent years. Because the same microarray platform is often used for at least several experiments to study similar biological systems, there is an opportunity to improve variance estimation further by borrowing information not only across genes but also across experiments. We propose a lognormal model for error variances that involves random gene effects and random experiment effects. Based on the model, we develop an empirical Bayes estimator of the error variance for each combination of gene and experiment and call this estimator BAGE because information is Borrowed Across Genes and Experiments. A permutation strategy is used to make inference about the differential expression status of each gene. Simulation studies with data generated from different probability models and real microarray data show that our method outperforms existing approaches.

15.
Basket trials simultaneously evaluate the effect of one or more drugs on a defined biomarker, genetic alteration, or molecular target in a variety of disease subtypes, often called strata. A conventional approach to analyzing such trials is an independent analysis of each stratum. This analysis is inefficient, as it lacks the power to detect drug effects within each stratum. To address these issues, various designs for basket trials have been proposed, centering on designs using Bayesian hierarchical models. In this article, we propose a novel Bayesian basket trial design that incorporates predictive sample size determination, early termination for inefficacy and efficacy, and borrowing of information across strata. The borrowing of information is based on the similarity between the posterior distributions of the response probability. In general, Bayesian hierarchical models involve many distributional assumptions along with multiple parameters. By contrast, our method requires only prior distributions for the response probability and two parameters governing the similarity of distributions. The proposed design is easier to implement and less computationally demanding than other Bayesian basket designs. Through simulations under various scenarios, our proposed design is compared with other designs, including one that does not borrow information and one that uses a Bayesian hierarchical model.
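A hedged two-stratum sketch of similarity-based borrowing (the paper defines its own similarity measure and decision rules; here the overlap of the two independent Beta posteriors is used as a stand-in borrowing weight, and all counts are invented):

```python
import numpy as np
from scipy import stats

x, n = np.array([8, 3]), np.array([20, 20])    # responders per stratum
post = [stats.beta(1 + x[i], 1 + n[i] - x[i]) for i in range(2)]

# Overlap of the two posterior densities, in [0, 1], via a fine grid
grid = np.linspace(0.0, 1.0, 2001)
dx = grid[1] - grid[0]
overlap = (np.minimum(post[0].pdf(grid), post[1].pdf(grid)) * dx).sum()

for i, j in [(0, 1), (1, 0)]:
    a = 1 + x[i] + overlap * x[j]              # borrow counts at weight overlap
    b = 1 + (n[i] - x[i]) + overlap * (n[j] - x[j])
    print(f"stratum {i}: borrow weight {overlap:.2f}, "
          f"posterior mean {a / (a + b):.3f}")
```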

16.
We introduce a nonparametric Bayesian model for a phase II clinical trial with patients presenting different subtypes of the disease under study. The objective is to estimate the success probability of an experimental therapy for each subtype. We consider the case when small sample sizes require extensive borrowing of information across subtypes, but the subtypes are not a priori exchangeable. The lack of a priori exchangeability hinders the straightforward use of traditional hierarchical models to implement borrowing of strength across disease subtypes. We introduce instead a random partition model for the set of disease subtypes. This is a variation of the product partition model that allows us to model a nonexchangeable prior structure. Like a hierarchical model, the proposed clustering approach considers all observations, across all disease subtypes, to estimate individual success probabilities. But in contrast to standard hierarchical models, the model considers disease subtypes a priori nonexchangeable. This implies that when assessing the success probability for a particular subtype, our model borrows more information from the outcomes of patients sharing the same prognosis than from the others. Our data arise from a phase II clinical trial of patients with sarcoma, a rare type of cancer affecting connective or supportive tissues and soft tissue (e.g., cartilage and fat). Each patient presents one subtype of the disease, and subtypes are grouped by good, intermediate, and poor prognosis. The prior model should respect the varying prognosis across disease subtypes. The practical motivation for the proposed approach is that the number of accrued patients within each disease subtype is small, so it is not possible to carry out a separate clinical study of possible new therapies for these rare conditions: one could not plan for a sufficiently large sample size to achieve the desired power. We carry out a simulation study to compare the proposed model with a model that assumes similar success probabilities for all subtypes with the same prognosis, i.e., a fixed partition of subtypes by prognosis. When the assumption is satisfied, the two models perform comparably, but the proposed model outperforms the competing model when the assumption is incorrect.

17.
Yimei Li, Ying Yuan. Biometrics 2020;76(4):1364-1373.
Pediatric phase I trials are usually carried out after the adult trial testing the same agent has started, but before it has completed. As the pediatric trial progresses, in light of the accrued interim data from the concurrent adult trial, the pediatric protocol is often amended to modify the original pediatric dose-escalation design. In practice, this is frequently done in an ad hoc way, interrupting patient accrual and slowing down the trial. We developed a pediatric-continuous reassessment method (PA-CRM) to streamline this process, providing a more efficient and rigorous way to find the maximum tolerated dose for pediatric phase I oncology trials. We use a discounted joint likelihood of the adult and pediatric data, with a discount parameter controlling information borrowing between the pediatric and adult trials. Based on the interim adult and pediatric data, the discount parameter is adaptively updated using Bayesian model averaging. Numerical studies show that the PA-CRM improves the efficiency and accuracy of the pediatric trial and is robust to various model assumptions.
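A hedged sketch of a discounted joint likelihood on a grid, using the standard CRM power model; the actual PA-CRM updates the discount by Bayesian model averaging rather than fixing it, and the skeleton, data, and weight below are all invented.

```python
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])  # assumed toxicity skeleton
w = 0.5                                              # adult-data discount (fixed)

ped = [(0, 0), (1, 0), (1, 1)]                       # (dose index, toxicity)
adult = [(1, 0), (2, 0), (2, 1), (3, 1), (2, 0), (3, 1)]

theta = np.linspace(-3, 3, 601)
log_post = -theta**2 / 2                             # N(0, 1) prior on theta

def add_loglik(data, weight, lp):
    for d, y in data:
        p = skeleton[d] ** np.exp(theta)             # CRM power model p_d(theta)
        lp = lp + weight * np.log(p if y else 1 - p)
    return lp

log_post = add_loglik(ped, 1.0, log_post)            # pediatric data, full weight
log_post = add_loglik(adult, w, log_post)            # adult data, discounted by w
post = np.exp(log_post - log_post.max())
post /= post.sum()

p_tox = (skeleton[:, None] ** np.exp(theta) * post).sum(axis=1)
print("posterior toxicity by dose:", np.round(p_tox, 3))
print("dose closest to target 0.25:", int(np.argmin(np.abs(p_tox - 0.25))))
```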

18.
This paper develops Bayesian sample size formulae for experiments comparing two groups, where relevant preexperimental information from multiple sources can be incorporated in a robust prior to support both the design and the analysis. We use commensurate predictive priors for borrowing of information and further place Gamma mixture priors on the precisions to account for preliminary belief about the pairwise (in)commensurability between parameters that underpin the historical and new experiments. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or the length of a defined density region. Our Bayesian methodology can be applied to settings that compare two normal means, proportions, or event times. When nuisance parameters (such as the variance) in the new experiment are unknown, a prior distribution can further be specified based on preexperimental data. Exact solutions are available for most of the criteria considered for Bayesian sample size determination, while a search procedure is described for cases without closed-form expressions. We illustrate the application of our sample size formulae in the design of clinical trials where pretrial information is available to be leveraged. Hypothetical data examples, motivated by a rare-disease trial with an elicited expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.
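For the flavor of a length-based criterion, a hedged sketch far simpler than the paper's commensurate predictive priors: for a normal mean with known variance, pick the smallest n whose 95% posterior interval is shorter than a target length (with known variance the length is the same for every data set, so the average-length criterion is exact here). All parameters are assumed.

```python
import numpy as np
from scipy import stats

sigma2, tau2 = 4.0, 1.0        # data variance, prior variance (assumed)
target = 0.8                   # required 95% interval length

for n in range(5, 500):
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)   # conjugate normal posterior
    length = 2 * stats.norm.ppf(0.975) * np.sqrt(post_var)
    if length <= target:
        print("required n:", n, "| interval length:", round(length, 3))
        break
```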

19.
In historical control trials (HCTs), the experimental therapy is compared with a control therapy that has been evaluated in a previously conducted trial. Makuch and Simon developed a sample size formula in which the observations from the historical control (HC) group were considered not subject to sampling variability. Many researchers have pointed out that the Makuch–Simon sample size formula does not preserve the nominal power and type I error. We develop a sample size calculation approach that properly accounts for the uncertainty in the true response rate of the HC group. We demonstrate that the empirical power and type I error, obtained over the simulated HC data, have extremely skewed distributions. We then derive a closed-form sample size formula that enables researchers to control percentiles, instead of means, of the power and type I error, accounting for the skewness of the distributions. A simulation study demonstrates that this approach preserves the operating characteristics in a more realistic scenario where the true response rate of the HC group is unknown. We also show that controlling percentiles can be used to describe the joint behavior of the power and type I error. This provides a new perspective on the assessment of HCTs.
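A hedged simulation sketch of the core observation, using a normal-approximation power formula and invented parameters: when the HC response rate is itself an estimate, the planned trial's power is a random variable over possible historical data sets, and its distribution can be badly skewed, so controlling its mean is not enough.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
p_true, n_hist, n_new, effect = 0.30, 50, 60, 0.20
z = stats.norm.ppf(0.975)

powers = []
for _ in range(5000):
    p_hc = rng.binomial(n_hist, p_true) / n_hist    # observed HC rate
    # Approximate power of a one-sample z-test against p_hc when the
    # experimental truth is p_true + effect
    p1 = p_true + effect
    se = np.sqrt(p1 * (1 - p1) / n_new)
    powers.append(1 - stats.norm.cdf(z - (p1 - p_hc) / se))

print("mean power: %.2f" % np.mean(powers))
print("5th/50th/95th percentiles:",
      np.round(np.percentile(powers, [5, 50, 95]), 2))
```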

20.
Switching between testing for superiority and testing for non-inferiority has been an important statistical issue in the design and analysis of active-controlled clinical trials. In practice, it is often handled with a two-stage testing procedure. It has been assumed that no type I error rate adjustment is required when switching to test for non-inferiority once the data fail to support the superiority claim, or when switching to test for superiority once the null hypothesis of non-inferiority is rejected with a pre-specified non-inferiority margin, in a generalized historical control approach. However, when a cross-trial comparison approach is used for non-inferiority testing, controlling the type I error rate can become an issue with the conventional two-stage procedure. We propose to adopt the single-stage simultaneous testing concept of Ng (2003) to test both the non-inferiority and superiority hypotheses simultaneously. The proposed procedure is based on Fieller's confidence interval procedure as proposed by Hauschke et al. (1999).
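A hedged sketch of a Fieller-type confidence interval for a ratio of two means and the single-stage simultaneous read-out (illustrative only; Hauschke et al. and Ng set this up more carefully, and the estimates, variances, and margin below are invented): conclude non-inferiority if the lower limit exceeds the retention margin, and superiority if it also exceeds 1.

```python
import numpy as np
from scipy import stats

def fieller_ci(m1, v1, m2, v2, alpha=0.05):
    # CI for the ratio m1/m2 of two independent normal estimates;
    # v1, v2 are the variances of the estimated means.
    z2 = stats.norm.ppf(1 - alpha / 2) ** 2
    a = m2**2 - z2 * v2          # coefficients of the quadratic in the ratio
    b = -2 * m1 * m2
    c = m1**2 - z2 * v1
    disc = np.sqrt(b**2 - 4 * a * c)
    return (-b - disc) / (2 * a), (-b + disc) / (2 * a)

lo, hi = fieller_ci(m1=0.62, v1=0.0012, m2=0.60, v2=0.0011)
margin = 0.8                                   # retention margin (assumed)
print(f"95% CI for ratio: ({lo:.3f}, {hi:.3f})")
print("noninferior:", lo > margin, "| superior:", lo > 1.0)
```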
