Similar Articles
Found 20 similar articles (search time: 46 ms)
1.
Modification of sample size in group sequential clinical trials
Cui L, Hung HM, Wang SJ. Biometrics 1999;55(3):853-857.
In group sequential clinical trials, sample size reestimation can be a complicated issue when it allows for change of sample size to be influenced by an observed sample path. Our simulation studies show that increasing sample size based on an interim estimate of the treatment difference can substantially inflate the probability of type I error in most practical situations. A new group sequential test procedure is developed by modifying the weights used in the traditional repeated significance two-sample mean test. The new test has the type I error probability preserved at the target level and can provide a substantial gain in power with the increase of sample size. Generalization of the new procedure is discussed.
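The weight modification described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' exact procedure: the key idea is that the stage weights are fixed at the planned information fraction, so they do not depend on any data-driven change of the stage-2 sample size. Function names are hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def chw_statistic(z1, z2, planned_info_frac):
    """Combine independent stage-wise Z statistics with weights fixed at
    the *planned* information fraction. Because the weights do not depend
    on the observed sample path, the combination is N(0,1) under the
    null, which preserves the type I error rate."""
    w1 = math.sqrt(planned_info_frac)
    w2 = math.sqrt(1.0 - planned_info_frac)
    return w1 * z1 + w2 * z2

# Final test against the usual one-sided 2.5% boundary:
z = chw_statistic(z1=1.2, z2=1.8, planned_info_frac=0.5)
reject = z > 1.96
```

Note that if the stage-2 sample size is increased, the naive pooled Z statistic would overweight the second stage; keeping the original weights is what preserves the error rate, at the cost of a slightly non-sufficient statistic.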

2.
Predictive and prognostic biomarkers play an important role in personalized medicine to determine strategies for drug evaluation and treatment selection. In the context of continuous biomarkers, identification of an optimal cutoff for patient selection can be challenging due to limited information on biomarker predictive value, the biomarker’s distribution in the intended use population, and the complexity of the biomarker relationship to clinical outcomes. As a result, prespecified candidate cutoffs may be rationalized based on biological and practical considerations. In this context, adaptive enrichment designs have been proposed with interim decision rules to select a biomarker-defined subpopulation to optimize study performance. With a group sequential design as a reference, the performance of several proposed adaptive designs are evaluated and compared under various scenarios (e.g., sample size, study power, enrichment effects) where type I error rates are well controlled through closed testing procedures and where subpopulation selections are based upon the predictive probability of trial success. It is found that when the treatment is more effective in a subpopulation, these adaptive designs can improve study power substantially. Furthermore, we identified one adaptive design to have generally higher study power than the other designs under various scenarios.

3.
Brannath W, Bauer P. Biometrics 2004;60(3):715-723.
Ethical considerations and the competitive environment of clinical trials usually require that any given trial have sufficient power to detect a treatment advance. If at an interim analysis the available data are used to decide whether the trial is promising enough to be continued, investigators and sponsors often wish to have a high conditional power, which is the probability to reject the null hypothesis given the interim data and the alternative of interest. Under this requirement a design with interim sample size recalculation, which keeps the overall and conditional power at a prespecified value and preserves the overall type I error rate, is a reasonable alternative to a classical group sequential design, in which the conditional power is often too small. In this article two-stage designs with control of overall and conditional power are constructed that minimize the expected sample size, either for a simple point alternative or for a random mixture of alternatives given by a prior density for the efficacy parameter. The presented optimality result applies to trials with and without an interim hypothesis test; in addition, one can account for constraints such as a minimal sample size for the second stage. The optimal designs will be illustrated with an example, and will be compared to the frequently considered method of using the conditional type I error level of a group sequential design.  相似文献   
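Conditional power as used in this abstract has a simple closed form for a normal test statistic. The sketch below uses a common drift parameterization (theta is the expected value of the final Z statistic under the alternative); this parameterization is an assumption for illustration, not taken from the article.

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z1, t, theta, z_alpha=1.96):
    """Probability of crossing z_alpha at the final analysis, given the
    interim statistic Z1 = z1 observed at information fraction t, when
    theta is the expected value of the final Z statistic.
    Uses Z_final = sqrt(t)*Z1 + sqrt(1-t)*Z2 with independent stages."""
    b = (z_alpha - math.sqrt(t) * z1) / math.sqrt(1.0 - t)
    return 1.0 - norm_cdf(b - theta * math.sqrt(1.0 - t))
```

For example, a mildly promising interim result (z1 = 1.5 at t = 0.5) under a design drift of theta = 2.8 yields a conditional power of roughly 75%, whereas under the null drift (theta = 0) the same interim result gives only about 10%.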

4.
Flexible design for following up positive findings
As more population-based studies suggest associations between genetic variants and disease risk, there is a need to improve the design of follow-up studies (stage II) in independent samples to confirm evidence of association observed at the initial stage (stage I). We propose to use flexible designs developed for randomized clinical trials in the calculation of sample size for follow-up studies. We apply a bootstrap procedure to correct the effect of regression to the mean, also called "winner's curse," resulting from choosing to follow up the markers with the strongest associations. We show how the results from stage I can improve sample size calculations for stage II adaptively. Despite the adaptive use of stage I data, the proposed method maintains the nominal global type I error for final analyses on the basis of either pure replication with the stage II data only or a joint analysis using information from both stages. Simulation studies show that sample-size calculations accounting for the impact of regression to the mean with the bootstrap procedure are more appropriate than the conventional method. We also find that, in the context of flexible design, the joint analysis is generally more powerful than the replication analysis.

5.
We propose a multiple comparison procedure to identify the minimum effective dose level by sequentially comparing each dose level with the zero dose level in the dose finding test. If we can find the minimum effective dose level at an early stage in the sequential test, it is possible to terminate the procedure in the dose finding test after a few group observations up to the dose level. Thus, the procedure is viable from an economic point of view when high costs are involved in obtaining the observations. In the procedure, we present an integral formula to determine the critical values for satisfying a predefined type I familywise error rate. Furthermore, we show how to determine the required sample size in order to guarantee the power of the test in the procedure. In practice, we compare the power of the test and the required sample size for various configurations of the population means in simulation studies and apply our sequential procedure to the dose-response test in a case study.

6.
In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on unblinded interim effect size can often lead to sample size increase, and statistical adjustment is usually needed for the final analysis to ensure that type I error rate is appropriately controlled. In current literature, sample size reestimation and corresponding type I error control are discussed in the context of maintaining the original randomization ratio across treatment groups, which we refer to as “proportional increase.” In practice, not all studies are designed based on an optimal randomization ratio due to practical reasons. In such cases, when sample size is to be increased, it is more efficient to allocate the additional subjects such that the randomization ratio is brought closer to an optimal ratio. In this research, we propose an adaptive randomization ratio change when sample size increase is warranted. We refer to this strategy as “nonproportional increase,” as the number of subjects increased in each treatment group is no longer proportional to the original randomization ratio. The proposed method boosts power not only through the increase of the sample size, but also via efficient allocation of the additional subjects. The control of type I error rate is shown analytically. Simulations are performed to illustrate the theoretical results.
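For a continuous endpoint with unequal variances, the optimal randomization ratio referred to in this abstract is commonly the Neyman allocation (ratio of standard deviations). The toy sketch below steers the additional subjects toward that ratio; the helper and its rounding rule are hypothetical illustrations, not the authors' method.

```python
def optimal_ratio(sigma1, sigma2):
    """Neyman allocation: the variance of the mean-difference estimator
    is minimized when n1/n2 equals sigma1/sigma2."""
    return sigma1 / sigma2

def allocate_increase(n1, n2, n_add, sigma1, sigma2):
    """Split n_add extra subjects so the final split n1'/n2' is as close
    as possible to the optimal ratio, without removing anyone already
    randomized (hypothetical rule for illustration)."""
    r = optimal_ratio(sigma1, sigma2)
    total = n1 + n2 + n_add
    n1_target = total * r / (1.0 + r)
    add1 = min(max(round(n1_target - n1), 0), n_add)
    return add1, n_add - add1
```

With equal variances this reduces to a proportional increase; with sigma1 twice sigma2, all of a modest increase goes to group 1, pulling the realized ratio toward 2:1.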

7.
As an approach to combining the phase II dose finding trial and phase III pivotal trials, we propose a two-stage adaptive design that selects the best among several treatments in the first stage and tests significance of the selected treatment in the second stage. The approach controls the type I error defined as the probability of selecting a treatment and claiming its significance when the selected treatment is no different from placebo, as considered in Bischoff and Miller (2005). Our approach uses the conditional error function and allows determining the conditional type I error function for the second stage based on information observed at the first stage in a similar way to that for an ordinary adaptive design without treatment selection. We examine properties such as expected sample size and stage-2 power of this design with a given type I error and a maximum stage-2 sample size under different hypothesis configurations. We also propose a method to find the optimal conditional error function of a simple parametric form to improve the performance of the design and have derived optimal designs under some hypothesis configurations. Application of this approach is illustrated by a hypothetical example.

8.
Clinical trials with adaptive sample size re-assessment, based on an analysis of the unblinded interim results (ubSSR), have gained in popularity due to uncertainty regarding the value of \(\delta \) at which to power the trial at the start of the study. While the statistical methodology for controlling the type I error of such designs is well established, there remain concerns that conventional group sequential designs with no ubSSR can accomplish the same goals with greater efficiency. It has been difficult, however, to make this efficiency comparison precise and objective. In this paper, we present a methodology for making this comparison in a standard, well-accepted manner by plotting the unconditional power curves of the two approaches while holding constant their expected sample size, at each value of \(\delta \) in the range of interest. It is seen that under reasonable decision rules for increasing sample size (conservative promising zones, and no more than a 50% increase in sample size) there is little or no loss of efficiency for the adaptive designs in terms of unconditional power. The two approaches, however, have very different conditional power profiles. More generally, a methodology has been provided for comparing any design with ubSSR relative to a comparable group sequential design with no ubSSR, so one can determine whether the efficiency loss, if any, of the ubSSR design is offset by the advantages it confers for re-powering the study at the time of the interim analysis.

9.
Adaptive sample size calculations in group sequential trials
Lehmacher W, Wassmer G. Biometrics 1999;55(4):1286-1290.
A method for group sequential trials that is based on the inverse normal method for combining the results of the separate stages is proposed. Without inflating the Type I error rate, this method enables data-driven sample size reassessments during the course of the study. It uses the stopping boundaries of the classical group sequential tests. Furthermore, exact test procedures may be derived for a wide range of applications. The procedure is compared with the classical designs in terms of power and expected sample size.
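The inverse normal combination at the heart of this method can be sketched in a few lines. This is illustrative only: the equal stage weights and the final boundary value are example choices, not prescribed by the article.

```python
import math
from statistics import NormalDist

N01 = NormalDist()

def inverse_normal(p1, p2, w1, w2):
    """Combine independent one-sided stage-wise p-values. With
    prespecified weights satisfying w1**2 + w2**2 == 1, the combined
    statistic is again standard normal under the null, so classical
    group sequential stopping boundaries can be reused."""
    assert abs(w1 * w1 + w2 * w2 - 1.0) < 1e-9
    return w1 * N01.inv_cdf(1.0 - p1) + w2 * N01.inv_cdf(1.0 - p2)

w = math.sqrt(0.5)  # equal information per stage (example choice)
z = inverse_normal(0.04, 0.03, w, w)
```

Because only the stage-wise p-values enter the combination, the stage-2 sample size may be chosen freely from the interim data without disturbing the null distribution of `z`.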

10.
The internal pilot study design enables estimation of nuisance parameters required for sample size calculation on the basis of data accumulated in an ongoing trial. In this way, misspecifications made when determining the sample size in the planning phase can be corrected using updated knowledge. According to regulatory guidelines, blindness of all personnel involved in the trial has to be preserved and the specified type I error rate has to be controlled when the internal pilot study design is applied. Especially in the late phase of drug development, most clinical studies are run in more than one centre. In these multicentre trials, one may have to deal with an unequal distribution of the patient numbers among the centres. Depending on the type of the analysis (weighted or unweighted), unequal centre sample sizes may lead to a substantial loss of power. Like the variance, the magnitude of imbalance is difficult to predict in the planning phase. We propose a blinded sample size recalculation procedure for the internal pilot study design in multicentre trials with normally distributed outcome and two balanced treatment groups that are analysed applying the weighted or the unweighted approach. The method addresses both uncertainty with respect to the variance of the endpoint and the extent of disparity of the centre sample sizes. The actual type I error rate as well as the expected power and sample size of the procedure is investigated in simulation studies. For the weighted analysis as well as for the unweighted analysis, the maximal type I error rate was not exceeded, or only minimally so. Furthermore, application of the proposed procedure led to an expected power that achieves the specified value in many cases and is consistently very close to it.

11.
MOTIVATION: Sample size calculation is important in experimental design and is even more so in microarray or proteomic experiments since only a few repetitions can be afforded. In the multiple testing problems involving these experiments, it is more powerful and more reasonable to control false discovery rate (FDR) or positive FDR (pFDR) instead of type I error, e.g. family-wise error rate (FWER). When controlling FDR, the traditional approach of estimating sample size by controlling type I error is no longer applicable. RESULTS: Our proposed method applies to controlling FDR. The sample size calculation is straightforward and requires minimal computation, as illustrated with two sample t-tests and F-tests. Based on simulation with the resultant sample size, the power is shown to be achievable by the q-value procedure. AVAILABILITY: A Matlab code implementing the described methods is available upon request.
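One common approximation for FDR-based sample size planning (assumed here for illustration; not necessarily the article's exact formula) ties the marginal per-test alpha to the target FDR through the expected numbers of true and false rejections, after which the usual two-sample formula applies. All parameter values below are made up.

```python
import math
from statistics import NormalDist

N01 = NormalDist()

def per_test_alpha(m, m1, power, fdr):
    """Approximate marginal alpha so that the expected FDR is about
    fdr, using FDR ~ m0*alpha / (m0*alpha + m1*power), where
    m0 = m - m1 is the number of true nulls (assumed relation)."""
    m0 = m - m1
    return fdr * m1 * power / (m0 * (1.0 - fdr))

def n_per_group(delta, sigma, alpha, power):
    """Standard two-sample normal-approximation sample size per group
    for a one-sided test at level alpha."""
    za = N01.inv_cdf(1.0 - alpha)
    zb = N01.inv_cdf(power)
    return math.ceil(2.0 * ((za + zb) * sigma / delta) ** 2)

# Hypothetical microarray screen: 10,000 genes, 100 truly changed.
alpha = per_test_alpha(m=10_000, m1=100, power=0.8, fdr=0.05)
n = n_per_group(delta=1.0, sigma=1.0, alpha=alpha, power=0.8)
```

The per-test alpha that controls FDR at 5% here is far smaller than 0.05, yet much larger than a Bonferroni level of 0.05/10000, which is why FDR control needs markedly fewer replicates than FWER control.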

12.
A sequential multiple assignment randomized trial (SMART) facilitates the comparison of multiple adaptive treatment strategies (ATSs) simultaneously. Previous studies have established a framework to test the homogeneity of multiple ATSs by a global Wald test through inverse probability weighting. SMARTs are generally lengthier than classical clinical trials due to the sequential nature of treatment randomization in multiple stages. Thus, it would be beneficial to add interim analyses allowing for an early stop if overwhelming efficacy is observed. We introduce group sequential methods to SMARTs to facilitate interim monitoring based on the multivariate chi-square distribution. Simulation studies demonstrate that the proposed interim monitoring in SMART (IM-SMART) maintains the desired type I error and power with reduced expected sample size compared to the classical SMART. Finally, we illustrate our method by reanalyzing a SMART assessing the effects of cognitive behavioral and physical therapies in patients with knee osteoarthritis and comorbid subsyndromal depressive symptoms.

13.
When planning a two-arm group sequential clinical trial with a binary primary outcome that has severe implications for quality of life (e.g., mortality), investigators may strive to find the design that maximizes in-trial patient benefit. In such cases, Bayesian response-adaptive randomization (BRAR) is often considered because it can alter the allocation ratio throughout the trial in favor of the treatment that is currently performing better. Although previous studies have recommended using fixed randomization over BRAR based on patient benefit metrics calculated from the realized trial sample size, these previous comparisons have been limited by failures to hold type I and II error rates constant across designs or consider the impacts on all individuals directly affected by the design choice. In this paper, we propose a metric for comparing designs with the same type I and II error rates that reflects expected outcomes among individuals who would participate in the trial if enrollment is open when they become eligible. We demonstrate how to use the proposed metric to guide the choice of design in the context of two recent trials in persons suffering out-of-hospital cardiac arrest. Using computer simulation, we demonstrate that various implementations of group sequential BRAR offer modest improvements with respect to the proposed metric relative to conventional group sequential monitoring alone.

14.
Liu Q, Chi GY. Biometrics 2001;57(1):172-177.
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the Type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design comprises a main stage and an extension stage, where the main stage has sufficient power to reject the null under the anticipated effect size and the extension stage allows increasing the sample size in case the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.

15.
Study planning often involves selecting an appropriate sample size. Power calculations require specifying an effect size and estimating "nuisance" parameters, e.g. the overall incidence of the outcome. For observational studies, an additional source of randomness must be estimated: the rate of the exposure. A poor estimate of any of these parameters will produce an erroneous sample size. Internal pilot (IP) designs reduce the risk of this error, leading to better resource utilization, by using revised estimates of the nuisance parameters at an interim stage to adjust the final sample size. In the clinical trials setting, where allocation to treatment groups is predetermined, IP designs have been shown to achieve the targeted power without introducing substantial inflation of the type I error rate. It has not been demonstrated whether the same general conclusions hold in observational studies, where exposure-group membership cannot be controlled by the investigator. We extend the IP to observational settings. We demonstrate through simulations that implementing an IP, in which prevalence of the exposure can be re-estimated at an interim stage, helps ensure optimal power for observational research with little inflation of the type I error associated with the final data analysis.
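The nuisance-parameter update at the heart of an internal pilot can be sketched generically. This is a minimal two-sample version for illustration; the observational extension described in this abstract would additionally re-estimate the exposure prevalence at the interim look. All names and default values are illustrative assumptions.

```python
import math
from statistics import NormalDist

N01 = NormalDist()

def recalc_total_n(s2_interim, delta, alpha=0.025, power=0.9, n_already=0):
    """Internal pilot step: plug the interim variance estimate into the
    standard two-sample formula (1:1 allocation, one-sided alpha) and
    never shrink below the number of subjects already accrued."""
    za = N01.inv_cdf(1.0 - alpha)
    zb = N01.inv_cdf(power)
    n_total = math.ceil(4.0 * s2_interim * (za + zb) ** 2 / delta ** 2)
    return max(n_total, n_already)
```

If the interim variance estimate comes out larger than planned, the total sample size scales up proportionally; if it comes out smaller, the design simply keeps what is already accrued, which is one reason the type I error inflation of IP designs tends to be small.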

16.
Shen Y, Fisher L. Biometrics 1999;55(1):190-197.
In the process of monitoring clinical trials, it seems appealing to use the interim findings to determine whether the sample size originally planned will provide adequate power when the alternative hypothesis is true, and to adjust the sample size if necessary. In the present paper, we propose a flexible sequential monitoring method following the work of Fisher (1998), in which the maximum sample size does not have to be specified in advance. The final test statistic is constructed based on a weighted average of the sequentially collected data, where the weight function at each stage is determined by the observed data prior to that stage. Such a weight function is used to maintain the integrity of the variance of the final test statistic so that the overall type I error rate is preserved. Moreover, the weight function plays an implicit role in termination of a trial when a treatment difference exists. Finally, the design allows the trial to be stopped early when the efficacy result is sufficiently negative. Simulation studies confirm the performance of the method.

17.
OBJECTIVE: In recent years, the focus of genetic-epidemiological studies has shifted to analyzing complex diseases. Here, individual genes often contribute only a little to the manifestation of traits, so that many probands have to be included in a study to reliably detect small effects. To reduce the number of required phenotypings and genotypings and thus facilitate analyzing complex traits, sequential study designs can be applied. METHODS: For sequential analyses of complex diseases in association studies, we compare the procedure by Sobell et al. (Am J Med Genet 1993;48:28-35) with the adaptation of formal group sequential study designs by Pampallona and Tsiatis (J Stat Plan Inf 1994;42:19-35). Error rates and average sample sizes are investigated by Monte-Carlo simulations. RESULTS: Formal sequential designs have a higher power regardless of underlying genetic effects. In addition, compared with conventional designs with fixed samples, average sample sizes are reduced considerably; under the null hypothesis of no association, up to 50% of the required sample size can be spared. CONCLUSIONS: To increase the efficiency of genetic-epidemiological case-control studies, we recommend using formal group sequential study designs. The tremendous savings in average sample sizes are expected to affect both cost and time spent on large-scale studies.

18.
Liu A, Tan M, Boyett JM, Xiong X. Biometrics 2000;56(2):640-644.
Sequential monitoring in a clinical trial poses difficulties for hypothesis testing on secondary endpoints after the trial is terminated. The conventional likelihood-based testing procedure that ignores the sequential monitoring inflates Type I error and undermines power. In this article, we show that the power of the conventional testing procedure can be substantially improved while the Type I error is controlled. The method is illustrated with a real clinical trial.

19.
Lan KK, Lachin JM. Biometrics 1990;46(3):759-770.
To control the Type I error probability in a group sequential procedure using the logrank test, it is important to know the information times (fractions) at the times of interim analyses conducted for purposes of data monitoring. For the logrank test, the information time at an interim analysis is the fraction of the total number of events to be accrued in the entire trial. In a maximum information trial design, the trial is concluded when a prespecified total number of events has been accrued. For such a design, therefore, the information time at each interim analysis is known. However, many trials are designed to accrue data over a fixed duration of follow-up on a specified number of patients. This is termed a maximum duration trial design. Under such a design, the total number of events to be accrued is unknown at the time of an interim analysis. For a maximum duration trial design, therefore, these information times need to be estimated. A common practice is to assume that a fixed fraction of information will be accrued between any two consecutive interim analyses, and then employ a Pocock or O'Brien-Fleming boundary. In this article, we describe an estimate of the information time based on the fraction of total patient exposure, which tends to be slightly negatively biased (i.e., conservative) if survival is exponentially distributed. We then present a numerical exploration of the robustness of this estimate when nonexponential survival applies. We also show that the Lan-DeMets (1983, Biometrika 70, 659-663) procedure for constructing group sequential boundaries with the desired level of Type I error control can be computed using the estimated information fraction, even though it may be biased. Finally, we discuss the implications of employing a biased estimate of study information for a group sequential procedure.
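The Lan-DeMets construction discussed here needs only the (possibly estimated) information fraction. Below is a sketch using the O'Brien-Fleming-type spending function together with a simplified exposure-based estimate of the information fraction; numeric inputs are made-up examples.

```python
import math
from statistics import NormalDist

N01 = NormalDist()

def obf_spending(t, alpha=0.025):
    """O'Brien-Fleming-type spending function of Lan-DeMets:
    cumulative one-sided Type I error allowed to be spent by
    information fraction t: 2*(1 - Phi(z_{alpha/2} / sqrt(t)))."""
    return 2.0 * (1.0 - N01.cdf(N01.inv_cdf(1.0 - alpha / 2.0) / math.sqrt(t)))

def info_fraction_from_exposure(exposure_observed, exposure_expected):
    """Maximum duration design: estimate t by the fraction of total
    expected patient exposure (slightly conservative under
    exponential survival, per the article)."""
    return exposure_observed / exposure_expected

t_hat = info_fraction_from_exposure(420.0, 1000.0)  # e.g. person-years
spent = obf_spending(t_hat)
```

The spending function is very stingy early on (almost no alpha is spent before half the information is in) and releases the full alpha at t = 1, which is what makes O'Brien-Fleming-type boundaries forgiving of a biased interim estimate of t.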

20.
Thach CT, Fisher LD. Biometrics 2002;58(2):432-438.
In the design of clinical trials, the sample size for the trial is traditionally calculated from estimates of parameters of interest, such as the mean treatment effect, which can often be inaccurate. However, recalculation of the sample size based on an estimate of the parameter of interest that uses accumulating data from the trial can lead to inflation of the overall Type I error rate of the trial. The self-designing method of Fisher, also known as the variance-spending method, allows the use of all accumulating data in a sequential trial (including the estimated treatment effect) in determining the sample size for the next stage of the trial without inflating the Type I error rate. We propose a self-designing group sequential procedure to minimize the expected total cost of a trial. Cost is an important parameter to consider in the statistical design of clinical trials due to limited financial resources. Using Bayesian decision theory on the accumulating data, the design specifies sequentially the optimal sample size and proportion of the test statistic's variance needed for each stage of a trial to minimize the expected cost of the trial. The optimality is with respect to a prior distribution on the parameter of interest. Results are presented for a simple two-stage trial. This method can extend to nonmonetary costs, such as ethical costs or quality-adjusted life years.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号