Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Brannath W, Bauer P. Biometrics 2004;60(3):715-723
Ethical considerations and the competitive environment of clinical trials usually require that any given trial have sufficient power to detect a treatment advance. If at an interim analysis the available data are used to decide whether the trial is promising enough to be continued, investigators and sponsors often wish to have a high conditional power, that is, a high probability of rejecting the null hypothesis given the interim data and the alternative of interest. Under this requirement a design with interim sample size recalculation, which keeps the overall and conditional power at a prespecified value and preserves the overall type I error rate, is a reasonable alternative to a classical group sequential design, in which the conditional power is often too small. In this article two-stage designs with control of overall and conditional power are constructed that minimize the expected sample size, either for a simple point alternative or for a random mixture of alternatives given by a prior density for the efficacy parameter. The presented optimality result applies to trials with and without an interim hypothesis test; in addition, one can account for constraints such as a minimal sample size for the second stage. The optimal designs are illustrated with an example and compared to the frequently considered method of using the conditional type I error level of a group sequential design.
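
A minimal sketch of the conditional power computation underlying such designs, for a two-arm z-test with drift parameter theta (e.g. delta/(sigma*sqrt(2)) per subject); the function names, parameter values, and the simple grid search are illustrative assumptions, not the authors' optimal algorithm. Note that when the stage-2 size is chosen from unblinded data, type I error control requires keeping the preplanned stage weights or applying a conditional error adjustment, as the article discusses.

```python
# Sketch only: conditional power of a two-stage z-test given interim z1,
# and the smallest stage-2 per-group size reaching a target value.
from scipy.stats import norm

def conditional_power(z1, n1, n2, theta, alpha=0.025):
    """P(final rejection | interim z1) for the pooled statistic
    Z = (sqrt(n1)*Z1 + sqrt(n2)*Z2) / sqrt(n1 + n2), where
    Z2 ~ N(theta*sqrt(n2), 1) under the alternative."""
    crit = norm.ppf(1 - alpha)
    b = (crit * (n1 + n2) ** 0.5 - n1 ** 0.5 * z1) / n2 ** 0.5
    return 1 - norm.cdf(b - theta * n2 ** 0.5)

def stage2_n_for_cp(z1, n1, theta, target=0.9, n2_max=2000):
    """Smallest stage-2 per-group size whose conditional power meets target."""
    for n2 in range(1, n2_max + 1):
        if conditional_power(z1, n1, n2, theta) >= target:
            return n2
    return None  # target unreachable within n2_max

print(stage2_n_for_cp(z1=1.2, n1=50, theta=0.2))  # ~208 for these inputs
```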

2.
Lin Y, Shih WJ. Biometrics 2004;60(2):482-490
The main purpose of a phase IIA trial of a new anticancer therapy is to determine whether the therapy has sufficient promise against a specific type of tumor to warrant its further development. The therapy will be rejected for further investigation if the true response rate is less than some uninteresting level, and the test of hypothesis is powered at a specific target response rate. Two-stage designs are commonly used for this situation. However, investigators often express concern about the uncertainty, at the planning stage, in choosing the target response rate at which to power the study. In this article, motivated by a real example, we propose a strategy for adaptive two-stage designs that uses the information from the first stage of the study either to reject the therapy or to continue testing with either an optimistic or a skeptical target response rate, while the type I error rate is controlled. We also introduce new optimality criteria to reduce the expected total sample size.
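
A hedged sketch of the fixed (non-adaptive) two-stage machinery this strategy builds on: the probability of declaring the therapy promising under a Simon-type rule, evaluated at an uninteresting rate p0 and a target rate p1. The design constants below match the classic published optimal design for p0 = 0.1 versus p1 = 0.3 but serve only as illustration; the paper's adaptive choice between optimistic and skeptical targets is not implemented here.

```python
# Operating characteristics of a two-stage binomial design: continue past
# stage 1 only if more than r1 of n1 patients respond; declare the therapy
# promising only if more than r of n1 + n2 patients respond overall.
from scipy.stats import binom

def prob_declare_promising(p, n1, r1, n2, r):
    return sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)
               for x1 in range(r1 + 1, n1 + 1))

design = dict(n1=10, r1=1, n2=19, r=5)
print(prob_declare_promising(0.1, **design))  # type I error, roughly 0.05
print(prob_declare_promising(0.3, **design))  # power, roughly 0.8
```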

3.
OBJECTIVES: The use of conventional Transmission/Disequilibrium tests in the analysis of candidate-gene association studies requires the precise and complete pre-specification of the total number of trios to be sampled to obtain sufficient power at a certain significance level (type I error risk). In most of these studies, very little information about the genetic effect size will be available beforehand, and thus it will be difficult to calculate a reasonable sample size. One would therefore wish to reassess the sample size during the course of the study. METHOD: We propose an adaptive group sequential procedure which allows both for early stopping of the study with rejection of the null hypothesis (H0) and for recalculation of the sample size based on interim effect size estimates when H0 cannot be rejected. The applicability of the method, which was developed by Müller and Schäfer [Biometrics 2001;57:886-891] in a clinical context, is demonstrated by a numerical example. Monte Carlo simulations are performed comparing the adaptive procedure with a fixed sample and a conventional group sequential design. RESULTS: The main advantage of the adaptive procedure is its flexibility to allow for design changes in order to achieve a stabilized power characteristic while controlling the overall type I error and using the information already collected. CONCLUSIONS: Given these advantages, the procedure is a promising alternative to traditional designs.

4.
In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on the unblinded interim effect size can often lead to a sample size increase, and statistical adjustment is usually needed in the final analysis to ensure that the type I error rate is appropriately controlled. In the current literature, sample size reestimation and the corresponding type I error control are discussed in the context of maintaining the original randomization ratio across treatment groups, which we refer to as “proportional increase.” In practice, not all studies are designed with an optimal randomization ratio, for practical reasons. In such cases, when the sample size is to be increased, it is more efficient to allocate the additional subjects so that the randomization ratio is brought closer to an optimal ratio. In this research, we propose an adaptive randomization ratio change when a sample size increase is warranted. We refer to this strategy as “nonproportional increase,” as the number of subjects added to each treatment group is no longer proportional to the original randomization ratio. The proposed method boosts power not only through the increase in sample size, but also via efficient allocation of the additional subjects. The control of the type I error rate is shown analytically, and simulations are performed to illustrate the theoretical results.
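
A small numerical illustration of why nonproportional allocation helps: for a two-sample z-test the standard error is sqrt(s1^2/n1 + s2^2/n2), which for a fixed total is minimized when n1:n2 matches s1:s2. The sample sizes, effect size, and variances below are assumptions for illustration, not values from the paper.

```python
# Power of a two-sample z-test under different ways of adding 60 subjects
# to a 1:2 design; with equal variances, pushing toward 1:1 is optimal.
from scipy.stats import norm

def power(n1, n2, delta, s1, s2, alpha=0.025):
    se = (s1**2 / n1 + s2**2 / n2) ** 0.5
    return 1 - norm.cdf(norm.ppf(1 - alpha) - delta / se)

delta, s1, s2 = 0.5, 1.0, 1.0
for label, n in [("original 1:2", (60, 120)),
                 ("proportional +60", (80, 160)),
                 ("nonproportional +60", (110, 130))]:
    print(label, round(power(*n, delta, s1, s2), 3))
```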

5.
Proschan and Hunsberger (1995) suggest the use of a conditional error function to construct a two-stage test that meets the α level and allows a very flexible reassessment of the sample size after the interim analysis. In this note we show that several adaptive designs can be formulated in terms of such an error function. The conditional power function, defined similarly, provides a simple method for sample size reassessment in adaptive two-stage designs.
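
A sketch of the conditional error function idea using Proschan and Hunsberger's circular form: the stage-2 test is carried out at level A(z1), and k is calibrated so that the null expectation of A equals the overall α. The bracketing interval and the α value are illustrative assumptions.

```python
# Circular conditional error function A(z1) = 1 - Phi(sqrt(k^2 - z1^2)),
# with k solved numerically so that E[A(Z1)] = alpha under the null.
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

def A(z1, k):
    if z1 >= k:
        return 1.0
    if z1 <= 0:
        return 0.0
    return 1 - norm.cdf((k**2 - z1**2) ** 0.5)

def overall_alpha(k):
    body, _ = quad(lambda z: A(z, k) * norm.pdf(z), 0, k)
    return body + norm.sf(k)          # z1 >= k rejects outright

k = brentq(lambda kk: overall_alpha(kk) - 0.025, 1.0, 4.0)
print(k, A(1.5, k))   # stage-2 significance level after observing z1 = 1.5
```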

6.
In clinical trials where several experimental treatments are of interest, the goal may be viewed as identification of the best of these and comparison of that treatment to a standard control therapy. However, it is undesirable to commit patients to a large-scale comparative trial of a new regimen without evidence that its therapeutic success rate is acceptably high. We propose a two-stage design in which patients are first randomized among the experimental treatments, and the single treatment having the highest observed success rate is identified. If this highest rate falls below a fixed cutoff then the trial is terminated. Otherwise, the "best" new treatment is compared to the control at a second stage. Locally optimal values of the cutoff and the stage-1 and stage-2 sample sizes are derived by minimizing expected total sample size. The design has both high power and high probability of terminating early when no experimental treatment is superior to the control. Numerical results for implementing the design are presented, and comparison to Dunnett's (1984, in Design of Experiments: Ranking and Selection, T. J. Santner and A. C. Tamhane (eds), 47-66; New York: Marcel Dekker) optimal one-stage procedure is made.
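
A toy Monte Carlo version of this select-and-test scheme, to show the two quantities the abstract emphasizes: the probability of early termination and the overall rejection rate under the null. The arm count, sample sizes, cutoff, and the naive pooled z-test are illustrative assumptions, not the authors' locally optimal design.

```python
# Simulate: randomize among k experimental arms at stage 1, keep the best
# arm only if its success rate clears the cutoff, then compare it (pooled
# over stages) with a stage-2 control via an unadjusted two-proportion z-test.
import numpy as np

rng = np.random.default_rng(3)
k, n1, n2, cutoff, reps = 3, 30, 60, 0.35, 20000
p_exp = np.array([0.3, 0.3, 0.3])     # null case: no arm beats the control
p_ctl, crit = 0.3, 1.959964
early = hits = 0
for _ in range(reps):
    x1 = rng.binomial(n1, p_exp)      # stage-1 successes per arm
    best = int(x1.argmax())
    if x1[best] / n1 < cutoff:
        early += 1                    # trial terminated, no arm promising
        continue
    nx, ny = n1 + n2, n2
    x = x1[best] + rng.binomial(n2, p_exp[best])
    y = rng.binomial(ny, p_ctl)
    ph = (x + y) / (nx + ny)
    z = (x / nx - y / ny) / np.sqrt(ph * (1 - ph) * (1 / nx + 1 / ny))
    hits += z > crit
print(early / reps, hits / reps)      # P(early stop), P(reject) under the null
```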

7.
A two-stage design is proposed to choose among several experimental treatments and a standard treatment in clinical trials. The first stage employs a selection procedure to select the best treatment, provided it is better than the standard. The second stage tests the hypothesis comparing the best treatment selected at the first stage (if any) with the standard treatment. All the treatments are assumed to follow normal distributions, and the best treatment is the one with the largest population mean. The level and the power are defined, and they are used to set up equations to solve for the unknown first-stage sample size, second-stage sample size, and procedure parameters. The optimal design is the one that gives the smallest average sample size. Numerical results are presented to illustrate the improvement of the proposed design over the existing one-stage design.

8.
Shen Y, Fisher L. Biometrics 1999;55(1):190-197
In the process of monitoring clinical trials, it seems appealing to use the interim findings to determine whether the sample size originally planned will provide adequate power when the alternative hypothesis is true, and to adjust the sample size if necessary. In the present paper, we propose a flexible sequential monitoring method following the work of Fisher (1998), in which the maximum sample size does not have to be specified in advance. The final test statistic is constructed based on a weighted average of the sequentially collected data, where the weight function at each stage is determined by the observed data prior to that stage. Such a weight function is used to maintain the integrity of the variance of the final test statistic so that the overall type I error rate is preserved. Moreover, the weight function plays an implicit role in termination of a trial when a treatment difference exists. Finally, the design allows the trial to be stopped early when the efficacy result is sufficiently negative. Simulation studies confirm the performance of the method.

9.
The search for the association of rare genetic variants with common diseases is of high interest, yet challenging because of cost considerations. We present an efficient two-stage design that uses diseased cases to first screen for rare variants at stage-1. If too few cases are found to carry any variants, the study stops. Otherwise, the selected variants are screened at stage-2 in a larger set of cases and controls, and the frequency of variants is compared between cases and controls by an exact test that corrects for the stage-1 ascertainment. Simulations show that our new method provides conservative Type-I error rates, similar to the conservative aspect of Fisher’s exact test. We show that the probability of stopping at stage-1 increases with a smaller number of cases screened at stage-1, a larger stage-1 continuation threshold, or a smaller carrier probability. Our simulations also show how these factors impact the power at stage-2. To balance stopping early when there are few variant carriers versus continuation to stage-2 when the variants have a reasonable effect size on the phenotype, we provide guidance on designing an optimal study that minimizes the expected sample size when the null hypothesis is true, yet achieves the desired power.

10.
Bioequivalence studies are the pivotal clinical trials submitted to regulatory agencies to support the marketing applications of generic drug products. Average bioequivalence (ABE) is used to determine whether the mean values of the pharmacokinetic measures determined after administration of the test and reference products are comparable. Two‐stage 2×2 crossover adaptive designs (TSDs) are becoming increasingly popular because they allow making assumptions on the clinically meaningful treatment effect and a reliable guess for the unknown within‐subject variability. At an interim look, if ABE is not declared with the initial sample size, they allow the sample size to be increased, depending on the estimated variability, by enrolling additional subjects at a second stage, or the study to be stopped for futility in case of a poor likelihood of bioequivalence. This is crucial because both parameters must be clearly prespecified in protocols, and the strategy agreed with regulatory agencies in advance, with emphasis on controlling the overall type I error. We present an iterative method to adjust the significance levels at each stage which preserves the overall type I error for a wide set of scenarios that should include the true unknown variability value. Simulations showed adjusted significance levels higher than 0.0300 in most cases, with the type I error always below 5% and a power of at least 80%. TSDs work particularly well for coefficients of variation below 0.3, which are especially useful due to the balance between the power and the percentage of studies proceeding to stage 2. Our approach might support discussions with regulatory agencies.

11.
Two-stage, drop-the-losers designs for adaptive treatment selection have been considered by many authors. The distributions of conditional sufficient statistics and the Rao-Blackwell technique were used to obtain an unbiased estimate and to construct an exact confidence interval for the parameter of interest. In this paper, we characterize the selection process from a binomial drop-the-losers design using a truncated binomial distribution. We propose a new estimator and show that it is asymptotically consistent with a large sample size in either the first stage or the second stage. Supported by simulation analyses, we recommend the new estimator over the naive estimator and the Rao-Blackwell-type estimator based on its robustness in the finite-sample setting. We frame the concept as a simple and easily implemented procedure for phase 2 oncology trial design that can be confirmatory in nature, and we use an example to illustrate its application.

12.
Ibrahim JG, Chen MH, Xia HA, Liu T. Biometrics 2012;68(2):578-586
Recent guidance from the Food and Drug Administration for the evaluation of new therapies in the treatment of type 2 diabetes (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate the historical survival meta-data into the statistical design. Various properties of the proposed methodology are examined, and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing various quantities, such as the power and the type I error, in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program including a noninferiority clinical trial for CV risk assessment in T2DM studies.

13.
We consider an adaptive dose‐finding study with two stages, where the doses for the second stage are chosen based on the first-stage results. Instead of considering pairwise comparisons with placebo, we apply one test to show an upward trend across doses, a possibility allowed by the ICH guideline for dose-finding studies (ICH-E4). In this article, we are interested in trend tests based on a single contrast or on the maximum of multiple contrasts, and in flexibly choosing the Stage 2 doses, including the possibility of adding doses. If certain requirements for the interim decision rules are fulfilled, the final trend test that ignores the adaptive nature of the trial (the naïve test) can control the type I error. However, in the more common case that these requirements are not fulfilled, we need to take the adaptivity into account and discuss a method for type I error control. We apply the general conditional error approach to adaptive dose‐finding and discuss special issues appearing in this application. We call the test based on this approach the Adaptive Multiple Contrast Test. For an example, we illustrate the theory discussed before and compare the performance of several tests for the adaptive design in a simulation study.

14.
A two-stage adaptive design trial is a single trial that combines the learning data from stage 1 (or phase II) and the confirming data from stage 2 (or phase III) for formal statistical testing. We call it a "Learn and Confirm" trial. The studywise type I error rate remains at issue in a "Learn and Confirm" trial. For studying multiple doses or multiple endpoints, a "Learn and Confirm" adaptive design can be more attractive than a fixed design approach. This is because, intuitively, the learning data in stage 1 should not be subjected to type I error scrutiny if no formal interim analysis is performed and only an adaptive selection of design parameters is made at stage 1. In this work, we conclude from extensive simulation studies that this intuition is most often misleading. That is, regardless of whether or not there is a formal interim analysis for making an adaptive selection, the type I error rates are always at risk of inflation. Inappropriate use of any "Learn and Confirm" strategy should not be overlooked.
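
A toy simulation of the abstract's central claim: selecting the best of several doses at stage 1 and then pooling its stage-1 data into the final unadjusted test inflates the type I error. Arm counts and sample sizes are arbitrary illustrations.

```python
# Under H0 all doses equal the control; "learn" picks the best dose at
# stage 1, "confirm" pools both stages for that dose with no adjustment.
import numpy as np

rng = np.random.default_rng(1)
m, n1, n2, reps, crit = 4, 50, 100, 20000, 1.959964
hits = 0
for _ in range(reps):
    stage1 = rng.normal(size=(m, n1))
    best = int(stage1.mean(axis=1).argmax())        # adaptive selection
    dose = np.concatenate([stage1[best], rng.normal(size=n2)])
    ctrl = rng.normal(size=n1 + n2)
    z = (dose.mean() - ctrl.mean()) / np.sqrt(2 / (n1 + n2))
    hits += z > crit
print(hits / reps)   # noticeably above the nominal one-sided 0.025
```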

15.
In oncology, single‐arm two‐stage designs with binary endpoint are widely applied in phase II for the development of cytotoxic cancer therapies. Simon's optimal design with prefixed sample sizes in both stages minimizes the expected sample size under the null hypothesis and is one of the most popular designs. The search algorithms that are currently used to identify phase II designs showing prespecified characteristics are computationally intensive. For this reason, most authors impose restrictions on their search procedure. However, it remains unclear to what extent this approach influences the optimality of the resulting designs. This article describes an extension to fixed sample size phase II designs by allowing the sample size of stage two to depend on the number of responses observed in the first stage. Furthermore, we present a more efficient numerical algorithm that allows for an exhaustive search of designs. Comparisons between designs presented in the literature and the proposed optimal adaptive designs show that while the improvements are generally moderate, notable reductions in the average sample size can be achieved for specific parameter constellations when applying the new method and search strategy.
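
A naive but exhaustive search for the classical case with prefixed stage sizes, minimizing the expected sample size under the null; it is far less efficient than the algorithm the article proposes, and it does not let the stage-2 size depend on the stage-1 responses, but it shows the structure of the problem. Search limits and error rates are illustrative.

```python
# Exhaustive search over (n1, r1, n, r): continue iff stage-1 responses
# exceed r1; reject H0 iff total responses exceed r. Keep designs with
# type I error <= alpha and power >= 1 - beta; minimize EN(p0).
import numpy as np
from scipy.stats import binom

def search(p0, p1, alpha=0.05, beta=0.2, n_max=35):
    best = None
    for n1 in range(2, n_max):
        pmf0 = binom.pmf(np.arange(n1 + 1), n1, p0)
        pmf1 = binom.pmf(np.arange(n1 + 1), n1, p1)
        for n in range(n1 + 1, n_max + 1):
            n2 = n - n1
            for r1 in range(n1):
                x = np.arange(r1 + 1, n1 + 1)
                for r in range(r1 + 1, n + 1):
                    a = float(pmf0[x] @ binom.sf(r - x, n2, p0))
                    if a > alpha:
                        continue              # still too liberal at p0
                    pw = float(pmf1[x] @ binom.sf(r - x, n2, p1))
                    if pw < 1 - beta:
                        break                 # raising r only loses power
                    en0 = n1 + (1 - pmf0[: r1 + 1].sum()) * n2
                    if best is None or en0 < best[0]:
                        best = (float(en0), n1, r1, n, r)
    return best

print(search(0.1, 0.3))  # expect roughly (15.0, 10, 1, 29, 5), Simon's optimum
```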

16.
The internal pilot study design enables the estimation of nuisance parameters required for sample size calculation on the basis of data accumulated in an ongoing trial. In this way, misspecifications made when determining the sample size in the planning phase can be corrected using updated knowledge. According to regulatory guidelines, the blinding of all personnel involved in the trial has to be preserved and the specified type I error rate has to be controlled when the internal pilot study design is applied. Especially in the late phase of drug development, most clinical studies are run in more than one centre. In these multicentre trials, one may have to deal with an unequal distribution of patient numbers among the centres. Depending on the type of analysis (weighted or unweighted), unequal centre sample sizes may lead to a substantial loss of power. Like the variance, the magnitude of the imbalance is difficult to predict in the planning phase. We propose a blinded sample size recalculation procedure for the internal pilot study design in multicentre trials with a normally distributed outcome and two balanced treatment groups that are analysed with the weighted or the unweighted approach. The method addresses both uncertainty with respect to the variance of the endpoint and the extent of disparity of the centre sample sizes. The actual type I error rate as well as the expected power and sample size of the procedure are investigated in simulation studies. For the weighted analysis as well as for the unweighted analysis, the maximal type I error rate was not or only minimally exceeded. Furthermore, application of the proposed procedure led to an expected power that achieves the specified value in many cases and is very close to it throughout.
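
A minimal sketch of the blinded re-estimation step for a single centre: the variance is estimated from the pooled interim sample without treatment labels (the "lumped" variance, which is slightly biased upward by the squared half-difference of the group means), and the per-group size is recalculated from the usual two-sample formula. The multicentre weighting that is the paper's focus is omitted; all numbers are illustrative.

```python
# Blinded internal pilot: recalculate the per-group sample size from the
# one-sample variance of the pooled interim data.
import numpy as np
from scipy.stats import norm

def recalculated_n(pooled_interim, delta, alpha=0.05, beta=0.2):
    s2 = np.var(pooled_interim, ddof=1)   # blinded (lumped) variance estimate
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return int(np.ceil(2 * z**2 * s2 / delta**2))

rng = np.random.default_rng(0)
interim = rng.normal(0.0, 1.3, size=80)   # both groups pooled, labels unknown
print(recalculated_n(interim, delta=0.5))
```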

17.
Designs incorporating more than one endpoint have become popular in drug development. One such design allows for the incorporation of short‐term information in an interim analysis if the long‐term primary endpoint has not yet been observed for some of the patients. We first consider a two‐stage design with binary endpoints allowing for futility stopping only, based on conditional power under both fixed and observed effects. Design characteristics of three estimators are compared: one using the primary long‐term endpoint only, one using the short‐term endpoint only, and one combining data from both. For each approach, equivalent cut‐off values for the fixed- and observed-effect conditional power calculations can be derived, resulting in the same overall power. While in trials stopping for futility the type I error rate cannot become inflated (it usually decreases), there is a loss of power. In this study, we consider different scenarios, including different thresholds for conditional power, different amounts of information available at the interim, and different correlations and probabilities of success. We further extend the methods to adaptive designs with unblinded sample size reassessment based on conditional power, with the inverse normal method as the combination function. Two futility stopping rules are considered: one based on the conditional power, and one based on P‐values from the Z‐statistics of the estimators. The average sample size, the probability of stopping for futility, and the overall power of the trial are compared, and the influence of the choice of weights is investigated.
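
A minimal sketch of the inverse normal combination mentioned above: stage-wise p-values are combined with prespecified weights, which is what keeps the final test valid after an unblinded sample size reassessment. The weights and p-values are illustrative.

```python
# Inverse normal combination of two stage-wise one-sided p-values.
from scipy.stats import norm

def inverse_normal(p1, p2, w1, w2):
    """Combined z; requires w1**2 + w2**2 == 1, weights fixed in advance."""
    return w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)

z = inverse_normal(p1=0.10, p2=0.02, w1=0.5**0.5, w2=0.5**0.5)
print(z, z > norm.ppf(0.975))   # reject at one-sided 2.5% if True
```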

18.
When designing clinical trials, researchers often encounter uncertainty in the treatment effect or variability assumptions, so the sample size calculated at the planning stage of a clinical trial may be questionable. Adjusting the sample size during the mid-course of a clinical trial has lately become a popular strategy. In this paper we propose a procedure for calculating the additional sample size needed based on conditional power, and for adjusting the final-stage critical value to protect the overall type I error rate. Compared to previous procedures, the proposed procedure uses the definition of the conditional type I error directly, without appealing to an extra special function for it. It has better flexibility in setting up interim decision rules, and the final-stage test is a likelihood ratio test.

19.
Zheng G, Song K, Elston RC. Human Heredity 2007;63(3-4):175-186
We study a two-stage analysis of genetic association for case-control studies. In the first stage, we compare Hardy-Weinberg disequilibrium coefficients between cases and controls and, in the second stage, we apply the Cochran-Armitage trend test. The two analyses are statistically independent when Hardy-Weinberg equilibrium holds in the population, so all the samples are used in both stages. The significance level in the first stage is adaptively determined based on its conditional power. Given the level in the first stage, the level for the second-stage analysis is determined with the overall Type I error being asymptotically controlled. For finite sample sizes, a parametric bootstrap method is used to control the overall Type I error rate. This two-stage analysis is often more powerful than the Cochran-Armitage trend test alone for a large association study. The new approach is applied to SNPs from a real study.
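
A self-contained version of the Cochran-Armitage trend test used in the second stage, with the usual scores 0, 1, 2 for the three genotypes; the counts are made up for illustration, and the stage-1 Hardy-Weinberg comparison is not shown.

```python
# Cochran-Armitage trend test for a 2 x 3 case-control genotype table.
import numpy as np
from scipy.stats import norm

def cochran_armitage(cases, controls, scores=(0, 1, 2)):
    cases, controls, s = map(np.asarray, (cases, controls, scores))
    n = cases + controls                    # genotype totals
    N, R = n.sum(), cases.sum()
    num = (s * (cases - R / N * n)).sum()   # observed minus expected, scored
    p = R / N
    var = p * (1 - p) * ((s**2 * n).sum() - (s * n).sum() ** 2 / N)
    z = num / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

print(cochran_armitage(cases=[30, 50, 20], controls=[45, 40, 15]))
```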

20.
Flexible design for following up positive findings
As more population-based studies suggest associations between genetic variants and disease risk, there is a need to improve the design of follow-up studies (stage II) in independent samples to confirm evidence of association observed at the initial stage (stage I). We propose to use flexible designs developed for randomized clinical trials in the calculation of sample size for follow-up studies. We apply a bootstrap procedure to correct the effect of regression to the mean, also called "winner's curse," resulting from choosing to follow up the markers with the strongest associations. We show how the results from stage I can improve sample size calculations for stage II adaptively. Despite the adaptive use of stage I data, the proposed method maintains the nominal global type I error for final analyses on the basis of either pure replication with the stage II data only or a joint analysis using information from both stages. Simulation studies show that sample-size calculations accounting for the impact of regression to the mean with the bootstrap procedure are more appropriate than is the conventional method. We also find that, in the context of flexible design, the joint analysis is generally more powerful than the replication analysis.
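
A toy parametric bootstrap for the winner's-curse correction described above, on the simplified assumption of independent marker-level z-statistics: resampling replays the selection of the top marker and estimates how much its observed z overstates its true mean, so the corrected value can feed the stage II sample size calculation.

```python
# Estimate the selection bias of the top-ranked marker by bootstrap.
import numpy as np

rng = np.random.default_rng(7)
m, B = 100, 2000
z_obs = rng.normal(0, 1, size=m)
z_obs[0] += 2.5                      # one truly associated marker (unknown)
top = int(np.argmax(z_obs))          # marker taken to stage II

boot = z_obs + rng.normal(size=(B, m))       # replay sampling + selection
sel = boot.argmax(axis=1)
bias = (boot.max(axis=1) - z_obs[sel]).mean()
z_corrected = z_obs[top] - bias
print(z_obs[top], z_corrected)       # size stage II from the corrected value
```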

