Similar Articles
20 similar articles retrieved.
3.
We consider an adaptive dose-finding study with two stages. The doses for the second stage are chosen based on the first-stage results. Instead of considering pairwise comparisons with placebo, we apply one test to show an upward trend across doses, an option allowed by the ICH guideline for dose-finding studies (ICH-E4). In this article, we are interested in trend tests based on a single contrast or on the maximum of multiple contrasts, and in flexibly choosing the Stage 2 doses, including the possibility of adding doses. If certain requirements for the interim decision rules are fulfilled, the final trend test that ignores the adaptive nature of the trial (the naïve test) controls the type I error. For the more common case in which these requirements are not fulfilled, the adaptivity must be taken into account, and we discuss a method for type I error control. We apply the general conditional error approach to adaptive dose-finding and discuss special issues arising in this application; we call the test based on this approach the Adaptive Multiple Contrast Test. In an example, we illustrate the theory and compare the performance of several tests for the adaptive design in a simulation study.
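The conditional error principle mentioned above can be made concrete in a stripped-down setting. The sketch below assumes a single-contrast z-test, two equally weighted stages, and no early stopping; it is only an illustration of the general idea, not the Adaptive Multiple Contrast Test itself (which works with the maximum of several contrasts).

    from math import sqrt
    from scipy.stats import norm

    def conditional_error(z1, alpha=0.025):
        # Naive final test: reject if (Z1 + Z2)/sqrt(2) > z_{1-alpha}.
        # Given Z1 = z1 and H0 (so Z2 ~ N(0, 1)), the conditional rejection
        # probability is 1 - Phi(sqrt(2)*z_{1-alpha} - z1).
        return 1 - norm.cdf(sqrt(2) * norm.ppf(1 - alpha) - z1)

    z1 = 1.1                        # hypothetical stage-1 contrast statistic
    alpha2 = conditional_error(z1)  # significance level left for the adapted stage 2
    print(f"conditional error given z1 = {z1}: {alpha2:.4f}")

Type I error control is preserved as long as the stage-2 trend test, run on the newly chosen doses, is carried out at level conditional_error(z1).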

4.
We propose a Bayesian two-stage biomarker-based adaptive randomization (AR) design for the development of targeted agents. The design has three main goals: (1) to test treatment efficacy, (2) to identify prognostic and predictive markers for the targeted agents, and (3) to provide better treatment for patients enrolled in the trial. To treat patients better, both stages are guided by Bayesian AR based on each patient's biomarker profile. The AR in the first stage is based on a known marker, and a Go/No-Go decision can be made in the first stage by testing the overall treatment effects. If a Go decision is made at the end of the first stage, a two-step Bayesian lasso strategy is implemented to select additional prognostic or predictive biomarkers to refine the AR in the second stage. We use simulations to demonstrate the good operating characteristics of the design, including control of per-comparison type I and type II errors, high probability of selecting important markers, and treating more patients with more effective treatments. Bayesian adaptive designs allow for continuous learning and are particularly suitable for the development of multiple targeted agents in the quest for personalized medicine. By estimating treatment effects and identifying relevant biomarkers, the information acquired from the interim data can be used to guide the choice of treatment for each patient enrolled in the trial in real time to achieve a better outcome. The design is being implemented in the BATTLE-2 trial in lung cancer at the MD Anderson Cancer Center.
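As a rough illustration of how biomarker-guided adaptive randomization can work for a binary endpoint, the sketch below assigns the targeted agent with a probability driven by the posterior probability, within a biomarker stratum, that it beats control. The Beta(0.5, 0.5) prior, the tempering exponent, and the interim counts are illustrative assumptions; this is not the BATTLE-2 algorithm.

    import numpy as np
    from scipy.stats import beta

    rng = np.random.default_rng(0)

    def prob_treatment_better(x_t, n_t, x_c, n_c, a=0.5, b=0.5, ndraw=10_000):
        """Posterior P(p_treatment > p_control) under independent Beta(a, b) priors."""
        p_t = beta.rvs(a + x_t, b + n_t - x_t, size=ndraw, random_state=rng)
        p_c = beta.rvs(a + x_c, b + n_c - x_c, size=ndraw, random_state=rng)
        return float(np.mean(p_t > p_c))

    def randomization_prob(x_t, n_t, x_c, n_c, gamma=0.5):
        """Probability of assigning the next patient to the targeted agent.
        gamma tempers the posterior probability (gamma = 0 gives 1:1 randomization)."""
        pi = prob_treatment_better(x_t, n_t, x_c, n_c)
        w = pi ** gamma
        return w / (w + (1 - pi) ** gamma)

    # Within one biomarker-positive stratum: 8/20 responses on the targeted
    # agent versus 3/18 on control (made-up interim counts).
    print(round(randomization_prob(8, 20, 3, 18), 3))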

5.
As an approach to combining the phase II dose-finding trial and phase III pivotal trials, we propose a two-stage adaptive design that selects the best among several treatments in the first stage and tests the significance of the selected treatment in the second stage. The approach controls the type I error, defined as the probability of selecting a treatment and claiming its significance when the selected treatment is no different from placebo, as considered in Bischoff and Miller (2005). Our approach uses the conditional error function and allows the conditional type I error function for the second stage to be determined from the information observed at the first stage, in a similar way as for an ordinary adaptive design without treatment selection. We examine properties such as expected sample size and stage-2 power of this design for a given type I error and a maximum stage-2 sample size under different hypothesis configurations. We also propose a method to find the optimal conditional error function of a simple parametric form to improve the performance of the design, and we derive optimal designs under some hypothesis configurations. Application of this approach is illustrated by a hypothetical example.

6.
Brannath W, Bauer P. Biometrics 2004, 60(3):715-723
Ethical considerations and the competitive environment of clinical trials usually require that any given trial have sufficient power to detect a treatment advance. If at an interim analysis the available data are used to decide whether the trial is promising enough to be continued, investigators and sponsors often wish to have a high conditional power, that is, the probability of rejecting the null hypothesis given the interim data and the alternative of interest. Under this requirement, a design with interim sample size recalculation, which keeps the overall and conditional power at a prespecified value and preserves the overall type I error rate, is a reasonable alternative to a classical group sequential design, in which the conditional power is often too small. In this article, two-stage designs with control of overall and conditional power are constructed that minimize the expected sample size, either for a simple point alternative or for a random mixture of alternatives given by a prior density for the efficacy parameter. The presented optimality result applies to trials with and without an interim hypothesis test; in addition, one can account for constraints such as a minimal sample size for the second stage. The optimal designs are illustrated with an example and compared to the frequently considered method of using the conditional type I error level of a group sequential design.
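The interplay of conditional power and sample size recalculation can be sketched for a one-sided two-sample comparison combined across stages with fixed inverse-normal weights; this is a simplified setting chosen for illustration and does not reproduce the optimal designs derived in the paper.

    from math import ceil, sqrt
    from scipy.stats import norm

    def stage2_sample_size(z1, delta, sigma, alpha=0.025, target_cp=0.9,
                           w1=sqrt(0.5), w2=sqrt(0.5)):
        """Per-group stage-2 sample size attaining the target conditional power.

        Final decision: reject H0 if w1*Z1 + w2*Z2 > z_{1-alpha}, with
        w1^2 + w2^2 = 1 fixed in advance. Given the interim statistic z1, stage 2
        must exceed b = (z_{1-alpha} - w1*z1)/w2 on its own; under the assumed
        effect delta the stage-2 statistic has mean delta*sqrt(n2/2)/sigma.
        """
        b = (norm.ppf(1 - alpha) - w1 * z1) / w2
        n2 = 2 * ((sigma * (b + norm.ppf(target_cp))) / delta) ** 2
        return ceil(n2)

    # Interim z-statistic 1.2, assumed effect 0.4 standard deviations:
    print(stage2_sample_size(z1=1.2, delta=0.4, sigma=1.0))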

7.
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoints of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim analysis, given the observed data, we calculate the Bayesian predictive probability of success should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
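For the special case of a single binary endpoint, the predictive probability computation reduces to a beta-binomial sum; the sketch below uses this special case with an illustrative Beta(1, 1) prior and posterior threshold, rather than the full Dirichlet-multinomial machinery of the OPP design.

    from scipy.stats import betabinom, beta

    def predictive_probability(x, n, n_max, p0, theta_t=0.95, a=1, b=1):
        """Predictive probability that the trial succeeds at the maximum sample size.

        x responders out of n at the interim; success at n_max means
        P(p > p0 | final data) > theta_t under a Beta(a, b) prior.
        """
        m = n_max - n                                    # patients still to enroll
        pp = 0.0
        for y in range(m + 1):                           # possible future responder counts
            w = betabinom.pmf(y, m, a + x, b + n - x)    # posterior predictive weight
            post = 1 - beta.cdf(p0, a + x + y, b + n_max - x - y)
            pp += w * (post > theta_t)
        return pp

    # Interim look: 12/25 responders, maximum 50 patients, null response rate 0.3
    print(round(predictive_probability(x=12, n=25, n_max=50, p0=0.3), 3))

A go decision is then made only if this predictive probability exceeds a prespecified boundary; because the interim responder count can take only finitely many values, the boundaries can be enumerated before the trial starts.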

8.
A population-enrichment adaptive design allows prospective selection of the study population. It offers the flexibility of prespecified modifications to an ongoing trial, mitigating the risk associated with assumptions made at the design stage. In this way, the trial can encompass a broader target patient population and move forward only with the subpopulations that appear to benefit from the treatment. Our work is motivated by a Phase III event-driven vaccine efficacy trial in which two target patient subpopulations were enrolled under the assumption that vaccine efficacy could be demonstrated in the combined population. It was recognized that, owing to the nature of the patients' underlying conditions, one subpopulation might respond to the treatment better than the other. To maximize the probability of demonstrating vaccine efficacy in at least one patient population while taking advantage of combining two subpopulations in a single trial, an adaptive design strategy with potential population enrichment was developed. Specifically, if the observed vaccine efficacy at the interim analysis for one subpopulation is not promising enough to warrant carrying that subpopulation forward, the population may be enriched with the other, better-performing subpopulation. Simulations were conducted to evaluate the operating characteristics of a selection of interim analysis plans. This population-enrichment design provides a more efficient approach than conventional designs when targeting multiple subpopulations. If planned and executed with caution, the strategy can increase the chance of trial success while maintaining scientific and regulatory rigor.

9.
Adaptive two-stage designs allow a data-driven change of design characteristics during an ongoing trial. One of the available options is an adaptive choice of the test statistic for the second stage based on the results of the interim analysis. Since there is often only vague knowledge of the distribution shape of the primary endpoint in the planning phase of a study, a change of the test statistic may be considered if the data indicate that the assumptions underlying the initial choice of test are not correct. Collings and Hamilton proposed a bootstrap method for estimating the power of the two-sample Wilcoxon test for shift alternatives. We use this approach for the selection of the test statistic. By means of a simulation study, we show that the gain in power may be considerable when the initial assumption about the underlying distribution was wrong, whereas the loss is relatively small when the optimal test statistic was chosen in the first instance. The results also hold true for the comparison with a one-stage design. Application of the method is illustrated by a clinical trial example.
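A minimal sketch of the bootstrap-based selection is given below: the pooled interim data are resampled, the shift alternative of interest is imposed on one arm, and the empirical power of the Wilcoxon and t tests is compared to pick the stage-2 statistic. The resampling scheme, shift size, and sample sizes are illustrative assumptions rather than the exact procedure of Collings and Hamilton.

    import numpy as np
    from scipy.stats import mannwhitneyu, ttest_ind

    def bootstrap_power(pooled, shift, n_per_arm, pvalue, n_boot=2000, alpha=0.05, seed=1):
        """Estimate the power of a two-sample test for a location shift by
        resampling the pooled interim data."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_boot):
            x = rng.choice(pooled, n_per_arm, replace=True)            # control-like arm
            y = rng.choice(pooled, n_per_arm, replace=True) + shift    # shifted arm
            hits += pvalue(x, y) < alpha
        return hits / n_boot

    wilcoxon_p = lambda x, y: mannwhitneyu(x, y, alternative="two-sided").pvalue
    t_p = lambda x, y: ttest_ind(x, y).pvalue

    interim = np.random.default_rng(2).exponential(scale=2.0, size=40)   # stand-in interim data
    pow_w = bootstrap_power(interim, shift=1.0, n_per_arm=30, pvalue=wilcoxon_p)
    pow_t = bootstrap_power(interim, shift=1.0, n_per_arm=30, pvalue=t_p)
    print(pow_w, pow_t, "stage 2 uses:", "Wilcoxon" if pow_w >= pow_t else "t-test")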

10.
In the planning stage of a clinical trial investigating a potentially targeted therapy, there is commonly a high degree of uncertainty about whether the treatment is more efficient (or efficient only) in a subgroup compared to the whole population. Recently developed adaptive designs allow efficacy to be assessed both for the whole population and for a subgroup, and the target population to be selected mid-course based on interim results (see, e.g., Wang et al., Pharm Stat 6:227-244, 2007; Brannath et al., Stat Med 28:1445-1463, 2009; Wang et al., Biom J 51:358-374, 2009; Jenkins et al., Pharm Stat 10:347-356, 2011; Friede et al., Stat Med 31:4309-4120, 2012). Frequently, predictive biomarkers are used in these trials to identify patients more likely to benefit from a drug. We consider the situation in which the selection of the patient population is based on a biomarker and in which the diagnostic that evaluates the biomarker may be perfect, i.e., with 100% sensitivity and specificity, or not. The performance of the applied subset selection rule is crucial for the overall characteristics of the design. In the setting of an adaptive enrichment design, we evaluate the properties of subgroup selection rules in terms of type I error rate and power, taking into account decision rules with a fixed ad hoc threshold and optimal decision rules developed for the situation of uncertain assumptions. In a simulation study, we demonstrate that designs with optimal decision rules are, under certain assumptions, more powerful than those with ad hoc decision rules. Throughout the results, a strong impact of the sensitivity and specificity of the biomarker on both type I error rate and power is observed.

11.
Sample sizes given in regulatory guidelines are not based on statistical reasoning. However, from an ethical, scientific, and regulatory point of view, a mutagenicity experiment must have a reasonable chance of supporting the decision as to whether a result is negative or positive. Consequently, the sample size should be based on type I and type II errors, the underlying variability, and the specific size of a treatment effect. A two-stage adaptive interim analysis is presented, which permits an adaptive choice of sample size after an interim analysis of the data from the first stage. Because the sample size of the first stage is considered to be a minimum requirement, this stage can also be regarded as a pilot study.
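The kind of statistically based planning the abstract calls for can be illustrated with the standard normal-approximation formula for a one-sided two-sample comparison; the numbers below are placeholders, not recommendations for a specific mutagenicity assay.

    from math import ceil
    from scipy.stats import norm

    def n_per_group(delta, sigma, alpha=0.05, power=0.8):
        """Per-group sample size for a one-sided two-sample z-test that detects a
        mean difference delta with common standard deviation sigma."""
        return ceil(2 * ((norm.ppf(1 - alpha) + norm.ppf(power)) * sigma / delta) ** 2)

    # Stage 1 sized as the minimum requirement; after the interim analysis the
    # observed variability replaces the planning value of sigma.
    print(n_per_group(delta=1.0, sigma=1.5))   # planning assumption
    print(n_per_group(delta=1.0, sigma=2.1))   # recalculated with a larger interim SD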

12.
A popular design for clinical trials assessing targeted therapies is the two-stage adaptive enrichment design with recruitment in stage 2 limited to a biomarker-defined subgroup chosen based on data from stage 1. The data-dependent selection leads to statistical challenges if data from both stages are used to draw inference on treatment effects in the selected subgroup. If subgroups considered are nested, as when defined by a continuous biomarker, treatment effect estimates in different subgroups follow the same distribution as estimates in a group-sequential trial. This result is used to obtain tests controlling the familywise type I error rate (FWER) for six simple subgroup selection rules, one of which also controls the FWER for any selection rule. Two approaches are proposed: one based on multivariate normal distributions suitable if the number of possible subgroups, k, is small, and one based on Brownian motion approximations suitable for large k. The methods, applicable in the wide range of settings with asymptotically normal test statistics, are illustrated using survival data from a breast cancer trial.

13.
Adaptive designs were originally developed for independent and uniformly distributed p-values. There are trial settings where independence is not satisfied or where it may not be possible to check whether it is satisfied. In these cases, the test statistics and p-values of each stage may be dependent. Since the probability of a type I error for a fixed adaptive design depends on the true dependence structure between the p-values of the stages, control of the type I error rate might be endangered if the dependence structure is not taken into account adequately. In this paper, we address the problem of controlling the type I error rate in two-stage adaptive designs when any dependence structure between the test statistics of the stages is admitted (worst-case scenario). For this purpose, we pursue a copula approach to adaptive designs. For two-stage adaptive designs without a futility stop, we derive the probability of a type I error in the worst case, that is, for the most adverse dependence structure between the p-values of the stages. Explicit analytical considerations are performed for the class of inverse normal designs, and a comparison with the significance level for independent and uniformly distributed p-values is made. For inverse normal designs without a futility stop and with equally weighted stages, it turns out that correcting for the worst case is too conservative compared with a simple Bonferroni design.
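For reference, the inverse normal design discussed in the abstract combines the stage-wise p-values as sketched below (equal weights, no futility stop). The nominal level assumes independent, uniformly distributed p-values; quantifying how far the size can be inflated under the worst-case dependence is the subject of the paper and is not reproduced here.

    from math import sqrt
    from scipy.stats import norm

    def inverse_normal_reject(p1, p2, alpha=0.025, w1=sqrt(0.5), w2=sqrt(0.5)):
        """Two-stage inverse normal combination test:
        reject H0 if w1*Phi^{-1}(1-p1) + w2*Phi^{-1}(1-p2) > z_{1-alpha}."""
        z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
        return z > norm.ppf(1 - alpha)

    print(inverse_normal_reject(0.04, 0.03))   # True for these illustrative p-values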

14.
Predictive and prognostic biomarkers play an important role in personalized medicine to determine strategies for drug evaluation and treatment selection. In the context of continuous biomarkers, identification of an optimal cutoff for patient selection can be challenging due to limited information on the biomarker's predictive value, the biomarker's distribution in the intended use population, and the complexity of the biomarker's relationship to clinical outcomes. As a result, prespecified candidate cutoffs may be rationalized based on biological and practical considerations. In this context, adaptive enrichment designs have been proposed with interim decision rules to select a biomarker-defined subpopulation to optimize study performance. With a group sequential design as a reference, the performance of several proposed adaptive designs is evaluated and compared under various scenarios (e.g., sample size, study power, enrichment effects) where type I error rates are well controlled through closed testing procedures and where subpopulation selections are based upon the predictive probability of trial success. It is found that when the treatment is more effective in a subpopulation, these adaptive designs can improve study power substantially. Furthermore, we identified one adaptive design with generally higher study power than the other designs under various scenarios.

15.
A sequential multiple assignment randomized trial (SMART) facilitates the comparison of multiple adaptive treatment strategies (ATSs) simultaneously. Previous studies have established a framework to test the homogeneity of multiple ATSs by a global Wald test through inverse probability weighting. SMARTs are generally lengthier than classical clinical trials due to the sequential nature of treatment randomization in multiple stages. Thus, it would be beneficial to add interim analyses allowing for an early stop if overwhelming efficacy is observed. We introduce group sequential methods to SMARTs to facilitate interim monitoring based on the multivariate chi-square distribution. Simulation studies demonstrate that the proposed interim monitoring in SMART (IM-SMART) maintains the desired type I error and power with reduced expected sample size compared to the classical SMART. Finally, we illustrate our method by reanalyzing a SMART assessing the effects of cognitive behavioral and physical therapies in patients with knee osteoarthritis and comorbid subsyndromal depressive symptoms.

16.
For a Phase III randomized trial that compares survival outcomes between an experimental treatment and a standard therapy, interim monitoring analysis is used to potentially terminate the study early based on efficacy. To preserve the nominal Type I error rate, alpha spending methods and information fractions are used to compute appropriate rejection boundaries in studies with planned interim analyses. For a one-sided trial design applied to a scenario in which the experimental therapy is superior to the standard therapy, interim monitoring should provide the opportunity to stop the trial prior to full follow-up and conclude that the experimental therapy is superior. This paper proposes a method called total control only (TCO) for estimating the information fraction based on the number of events within the standard treatment regimen. Based on theoretical derivations and simulation studies, for a maximum duration superiority design, the TCO method is not influenced by departure from the designed hazard ratio, is sensitive to detecting treatment differences, and preserves the Type I error rate compared to information fraction estimation methods that are based on total observed events. The TCO method is simple to apply, provides unbiased estimates of the information fraction, and does not rely on statistical assumptions that are impossible to verify at the design stage. For these reasons, the TCO method is a good approach when designing a maximum duration superiority trial with planned interim monitoring analyses.
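As a sketch of how a TCO-style information fraction feeds into interim monitoring, the example below counts only control-arm events, converts them into an information fraction, and derives two-look efficacy boundaries from an O'Brien-Fleming-type spending function. The event counts and the choice of spending function are illustrative assumptions; the paper's own derivations and simulations are not reproduced here.

    from math import sqrt
    from scipy.optimize import brentq
    from scipy.stats import norm, multivariate_normal

    def obf_spend(t, alpha=0.025):
        """O'Brien-Fleming-type alpha-spending function, one-sided level alpha."""
        return 2 - 2 * norm.cdf(norm.ppf(1 - alpha / 2) / sqrt(t))

    # TCO-style information fraction: control-arm events observed so far divided
    # by the control-arm events expected at the final analysis (made-up counts).
    t1 = 90 / 200

    alpha = 0.025
    a1 = obf_spend(t1, alpha)               # alpha spent at the interim look
    c1 = norm.ppf(1 - a1)                   # interim efficacy boundary

    # Final boundary c2 solves P(Z1 < c1, Z2 > c2) = alpha - a1, where the
    # interim and final z-statistics have correlation sqrt(t1).
    rho = sqrt(t1)
    joint = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
    gap = lambda c2: (norm.cdf(c1) - joint.cdf([c1, c2])) - (alpha - a1)
    c2 = brentq(gap, 0.5, 5.0)
    print(round(c1, 3), round(c2, 3))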

17.
Lan KK, Lachin JM. Biometrics 1990, 46(3):759-770
To control the Type I error probability in a group sequential procedure using the logrank test, it is important to know the information times (fractions) at the times of interim analyses conducted for purposes of data monitoring. For the logrank test, the information time at an interim analysis is the fraction of the total number of events to be accrued in the entire trial. In a maximum information trial design, the trial is concluded when a prespecified total number of events has been accrued. For such a design, therefore, the information time at each interim analysis is known. However, many trials are designed to accrue data over a fixed duration of follow-up on a specified number of patients. This is termed a maximum duration trial design. Under such a design, the total number of events to be accrued is unknown at the time of an interim analysis. For a maximum duration trial design, therefore, these information times need to be estimated. A common practice is to assume that a fixed fraction of information will be accrued between any two consecutive interim analyses, and then employ a Pocock or O'Brien-Fleming boundary. In this article, we describe an estimate of the information time based on the fraction of total patient exposure, which tends to be slightly negatively biased (i.e., conservative) if survival is exponentially distributed. We then present a numerical exploration of the robustness of this estimate when nonexponential survival applies. We also show that the Lan-DeMets (1983, Biometrika 70, 659-663) procedure for constructing group sequential boundaries with the desired level of Type I error control can be computed using the estimated information fraction, even though it may be biased. Finally, we discuss the implications of employing a biased estimate of study information for a group sequential procedure.

18.
In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular due to its flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to be made to the trial and/or statistical procedures of an ongoing clinical trial. However, there is a concern that the actual patient population after the adaptations could deviate from the original target patient population, and consequently the overall type I error rate (the probability of erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations of trial and/or statistical procedures of ongoing trials may result in a totally different trial that is unable to address the scientific/medical questions the trial was intended to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. Impacts of ad hoc adaptations (protocol amendments), challenges in by-design (prospective) adaptations, and obstacles of retrospective adaptations are described. Strategies for the use of adaptive design in the clinical development of rare diseases are discussed. Some examples concerning the development of Velcade, intended for multiple myeloma and non-Hodgkin's lymphoma, are given. Practical issues that are commonly encountered when implementing adaptive design methods in clinical trials are also discussed.

19.
Shen Y, Fisher L. Biometrics 1999, 55(1):190-197
In the process of monitoring clinical trials, it seems appealing to use the interim findings to determine whether the sample size originally planned will provide adequate power when the alternative hypothesis is true, and to adjust the sample size if necessary. In the present paper, we propose a flexible sequential monitoring method following the work of Fisher (1998), in which the maximum sample size does not have to be specified in advance. The final test statistic is constructed based on a weighted average of the sequentially collected data, where the weight function at each stage is determined by the observed data prior to that stage. Such a weight function is used to maintain the integrity of the variance of the final test statistic so that the overall type I error rate is preserved. Moreover, the weight function plays an implicit role in termination of a trial when a treatment difference exists. Finally, the design allows the trial to be stopped early when the efficacy result is sufficiently negative.  Simulation studies confirm the performance of the method.
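The variance-preserving weighting that underlies this construction can be sketched in a two-stage form: each weight is fixed before the corresponding data are seen and the squared weights sum to one, so the weighted statistic remains standard normal under the null hypothesis regardless of how the stage-2 sample size is chosen. This is a simplified illustration under those assumptions, not the full sequential procedure of the paper.

    from math import sqrt
    from scipy.stats import norm

    def weighted_final_statistic(z_stages, w_stages):
        """sum_k w_k * z_k with sum_k w_k^2 = 1; N(0, 1) under H0 when each
        weight is fixed before the corresponding stage's data are observed."""
        assert abs(sum(w * w for w in w_stages) - 1.0) < 1e-9
        return sum(w * z for w, z in zip(w_stages, z_stages))

    w1 = 0.6                     # weight committed to stage 1 at the design stage
    z1 = 1.3                     # observed stage-1 z-statistic
    w2 = sqrt(1 - w1 ** 2)       # remaining weight 'spent' on stage 2
    # ... the stage-2 sample size may now be chosen using z1 ...
    z2 = 2.1                     # stage-2 z-statistic from the new data
    z_final = weighted_final_statistic([z1, z2], [w1, w2])
    print(z_final, z_final > norm.ppf(0.975))   # one-sided 2.5% decision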

20.
Although linear rank statistics for the two-sample problem are distribution-free tests, their power depends on the distribution of the data. In the planning phase of an experiment, researchers are often uncertain about the shape of this distribution, so the choice of test statistic for the analysis and the determination of the required sample size are based on vague information. Adaptive designs with interim analysis can potentially overcome both problems, and adaptive tests based on a selector statistic are, in particular, a solution to the first. We investigate whether adaptive tests can be usefully implemented in flexible two-stage designs to gain power. In a simulation study, we compare several methods for choosing a test statistic for the second stage of an adaptive design based on interim data with a procedure that applies adaptive tests in both stages. We find that the latter is a sensible approach that leads to the best results in most of the situations considered here. The different methods are illustrated using a clinical trial example.
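A selector-based adaptive test can be sketched as follows: a Hogg-type tail-weight measure is computed from the pooled sample (for rank tests, such a selector does not disturb the level because it ignores the group labels), and the two-sample test is chosen accordingly. The tail-weight measure, the cutoff, and the pair of candidate tests are illustrative assumptions, not the specific selector studied in the paper.

    import numpy as np
    from scipy.stats import mannwhitneyu, ttest_ind

    def tail_weight(pooled):
        """Ratio of the central 95% range to the interquartile range:
        roughly 2.9 for normal data, larger for heavy-tailed data."""
        q = np.quantile(pooled, [0.025, 0.25, 0.75, 0.975])
        return (q[3] - q[0]) / (q[2] - q[1])

    def adaptive_two_sample_test(x, y, cutoff=3.2):
        """Choose the Wilcoxon test for heavy-tailed pooled data, else the t-test."""
        if tail_weight(np.concatenate([x, y])) > cutoff:
            return "wilcoxon", mannwhitneyu(x, y, alternative="two-sided").pvalue
        return "t", ttest_ind(x, y).pvalue

    rng = np.random.default_rng(3)
    x = rng.standard_t(df=3, size=40)          # heavy-tailed control sample
    y = rng.standard_t(df=3, size=40) + 0.8    # shifted treatment sample
    print(adaptive_two_sample_test(x, y))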
