Similar Documents
20 similar documents retrieved.
1.
In two-stage group sequential trials with a primary and a secondary endpoint, the overall type I error rate for the primary endpoint is often controlled by an α-level boundary, such as an O'Brien-Fleming or Pocock boundary. Following a hierarchical testing sequence, the secondary endpoint is tested only if the primary endpoint achieves statistical significance either at an interim analysis or at the final analysis. To control the type I error rate for the secondary endpoint, it is tested using a Bonferroni procedure or any α-level group sequential method. In comparison with marginal testing, there is an overall power loss for the test of the secondary endpoint, since a claim of a positive result depends on the significance of the primary endpoint in the hierarchical testing sequence. We propose two group sequential testing procedures with improved secondary power: the improved Bonferroni procedure and the improved Pocock procedure. The proposed procedures use the correlation between the interim and final statistics for the secondary endpoint while applying graphical approaches to transfer the significance level from the primary endpoint to the secondary endpoint. The procedures control the familywise error rate (FWER) strongly by construction, and this is confirmed via simulation. We also compare the proposed procedures with other commonly used group sequential procedures in terms of control of the FWER and the power of rejecting the secondary hypothesis. An example is provided to illustrate the procedures.
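
As a rough illustration of how such correlation can be exploited, the sketch below (not the authors' exact improved procedures) finds a common critical value for the secondary endpoint across the interim and final analyses, assuming the two secondary test statistics are bivariate normal with correlation equal to the square root of the information fraction; all inputs are hypothetical.

```python
# Find a common critical value c for the secondary endpoint such that the
# two-stage rejection probability under H0 equals alpha, exploiting the
# correlation sqrt(t) between the interim and final test statistics.
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

def secondary_boundary(alpha=0.025, t=0.5):
    rho = np.sqrt(t)  # corr(Z_interim, Z_final) under standard assumptions
    cov = [[1.0, rho], [rho, 1.0]]
    def excess(c):
        # P(Z1 > c or Z2 > c) = 1 - P(Z1 <= c, Z2 <= c)
        p_reject = 1.0 - multivariate_normal(mean=[0, 0], cov=cov).cdf([c, c])
        return p_reject - alpha
    return brentq(excess, norm.ppf(1 - alpha), 4.0)

# The result lies between the unadjusted and Bonferroni critical values.
print(secondary_boundary())
```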

2.
Predictive and prognostic biomarkers play an important role in personalized medicine to determine strategies for drug evaluation and treatment selection. In the context of continuous biomarkers, identification of an optimal cutoff for patient selection can be challenging due to limited information on the biomarker's predictive value, the biomarker's distribution in the intended use population, and the complexity of the biomarker's relationship to clinical outcomes. As a result, prespecified candidate cutoffs may be rationalized based on biological and practical considerations. In this context, adaptive enrichment designs have been proposed with interim decision rules to select a biomarker-defined subpopulation to optimize study performance. With a group sequential design as a reference, the performance of several proposed adaptive designs is evaluated and compared under various scenarios (e.g., sample size, study power, enrichment effects), where type I error rates are well controlled through closed testing procedures and subpopulation selections are based on the predictive probability of trial success. It is found that when the treatment is more effective in a subpopulation, these adaptive designs can improve study power substantially. Furthermore, we identified one adaptive design with generally higher study power than the other designs under the various scenarios.
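
A minimal sketch of the closed testing principle the abstract refers to, for one subpopulation and the full population: an elementary hypothesis is rejected only if it and the intersection hypothesis are rejected. Bonferroni is assumed for the intersection, which may differ from the intersection tests used in the evaluated designs.

```python
# Closed testing for the full population (F) and a biomarker subpopulation (S),
# with a Bonferroni test assumed for the intersection hypothesis.
def closed_test(p_full, p_sub, alpha=0.025):
    reject_intersection = min(p_full, p_sub) <= alpha / 2  # Bonferroni split
    return {
        "full": reject_intersection and p_full <= alpha,
        "sub": reject_intersection and p_sub <= alpha,
    }

print(closed_test(p_full=0.030, p_sub=0.004))  # only the subpopulation claim survives
```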

3.
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoints of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim analysis, given the observed data, we calculate the Bayesian predictive probability of success, should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
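
For the simple binary-endpoint case, the Dirichlet-multinomial model reduces to a beta-binomial, and the predictive probability of success can be enumerated directly. The sketch below assumes a hypothetical final success criterion P(p > p0 | data) > θ; it is an illustration, not the OPP design's full decision-boundary enumeration.

```python
# Predictive probability of success for a binary endpoint: x responses in n
# patients so far, maximum sample size N, uniform Beta(1,1) prior assumed.
from scipy.stats import betabinom, beta

def predictive_probability(x, n, N, p0=0.2, theta=0.95, a=1, b=1):
    m = N - n                      # remaining patients
    pp = 0.0
    for y in range(m + 1):         # enumerate future response counts
        # posterior predictive weight of observing y more responses
        w = betabinom.pmf(y, m, a + x, b + n - x)
        # final posterior probability that the response rate exceeds p0
        post = beta.sf(p0, a + x + y, b + N - x - y)
        pp += w * (post > theta)
    return pp

print(predictive_probability(x=7, n=20, N=40))  # go if above a chosen cutoff
```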

4.
Designs incorporating more than one endpoint have become popular in drug development. One such design allows short-term information to be incorporated in an interim analysis when the long-term primary endpoint has not yet been observed for some of the patients. We first consider a two-stage design with binary endpoints that allows futility stopping only, based on conditional power under both fixed and observed effects. Design characteristics of three estimators (using the primary long-term endpoint only, the short-term endpoint only, or combining data from both) are compared. For each approach, equivalent cutoff values for fixed- and observed-effect conditional power calculations can be derived that result in the same overall power. While the type I error rate cannot be inflated in trials stopping for futility (it usually decreases), there is a loss of power. In this study, we consider different scenarios, including different thresholds for conditional power, different amounts of information available at the interim, and different correlations and probabilities of success. We further extend the methods to adaptive designs with unblinded sample size reassessment based on conditional power, using the inverse normal method as the combination function. Two futility stopping rules are considered: one based on conditional power, and one based on P-values from the Z-statistics of the estimators. Average sample size, probability of stopping for futility, and overall power of the trial are compared, and the influence of the choice of weights is investigated.
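
Conditional power under the canonical two-stage normal model makes the fixed-effect versus observed-effect distinction concrete; a sketch under that assumption, with hypothetical inputs:

```python
# Conditional power for a one-sided z-test at information fraction t, given
# interim statistic z1: the final statistic is Z = sqrt(t)*z1 + sqrt(1-t)*Z2.
# "Fixed effect" plugs in the design drift theta; "observed effect" uses z1/sqrt(t).
from scipy.stats import norm

def conditional_power(z1, t, theta, alpha=0.025):
    z_crit = norm.ppf(1 - alpha)
    return norm.sf((z_crit - t**0.5 * z1) / (1 - t)**0.5 - (1 - t)**0.5 * theta)

z1, t = 1.0, 0.5
theta = norm.ppf(0.975) + norm.ppf(0.8)            # design drift for 80% power
print(conditional_power(z1, t, theta))              # fixed (design) effect
print(conditional_power(z1, t, theta=z1 / t**0.5))  # observed interim effect
```

Stopping for futility when the chosen conditional power falls below a threshold gives the rule discussed in the abstract.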

5.
Clinical trials are often planned with high uncertainty about the variance of the primary outcome variable. A poor estimate of the variance, however, may lead to an over- or underpowered study. In the internal pilot study design, the sample variance is calculated at an interim step and the sample size can be adjusted if necessary. The available recalculation procedures use, for sample size recalculation, only the data of those patients who have already completed the study. In this article, we consider a variance estimator that takes into account both the data at the endpoint and at an intermediate point of the treatment phase. We derive asymptotic properties of this estimator and of the corresponding sample size recalculation procedure. In a simulation study, the performance of the proposed approach is evaluated and compared with the procedure that uses only long-term data. Simulation results demonstrate that the sample size resulting from the proposed procedure generally shows smaller variability. At the same time, the type I error rate is not inflated and the achieved power is close to the desired value.
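
A simplified illustration of the recalculation step, using only a single interim variance estimate for a two-sample z-test (the paper's estimator additionally exploits intermediate-timepoint data):

```python
# Per-group sample size recomputed from the interim variance estimate s2
# for a one-sided two-sample z-test with effect delta.
import math
from scipy.stats import norm

def recalculated_n(s2, delta, alpha=0.025, power=0.8, n_min=0):
    n = 2 * s2 * (norm.ppf(1 - alpha) + norm.ppf(power)) ** 2 / delta**2
    return max(math.ceil(n), n_min)

print(recalculated_n(s2=4.2, delta=1.0))  # hypothetical interim s^2
```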

6.
In the planning stage of a clinical trial investigating a potentially targeted therapy, there is commonly a high degree of uncertainty about whether the treatment is more efficient (or efficient only) in a subgroup compared to the whole population. Recently developed adaptive designs make it possible to assess efficacy both in the whole population and in a subgroup and to select the target population mid-course based on interim results (see, e.g., Wang et al., Pharm Stat 6:227-244, 2007; Brannath et al., Stat Med 28:1445-1463, 2009; Wang et al., Biom J 51:358-374, 2009; Jenkins et al., Pharm Stat 10:347-356, 2011; Friede et al., Stat Med 31:4309-4320, 2012). Frequently, predictive biomarkers are used in these trials to identify patients more likely to benefit from a drug. We consider the situation in which the selection of the patient population is based on a biomarker and the diagnostic that evaluates the biomarker may be perfect, i.e., with 100% sensitivity and specificity, or not. The performance of the applied subset selection rule is crucial for the overall characteristics of the design. In the setting of an adaptive enrichment design, we evaluate the properties of subgroup selection rules in terms of type I error rate and power, taking into account decision rules with a fixed ad hoc threshold and optimal decision rules developed for the situation of uncertain assumptions. In a simulation study, we demonstrate that designs with optimal decision rules are, under certain assumptions, more powerful than those with ad hoc decision rules. Throughout the results, a strong impact of the sensitivity and specificity of the biomarker on both type I error rate and power is observed.
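
To see why sensitivity and specificity matter so much, one can compute how misclassification dilutes the treatment effect observed in the test-positive group; the sketch below uses a PPV-weighted mixture with hypothetical inputs.

```python
# The observed effect among patients classified biomarker-positive is a
# PPV-weighted mixture of the true subgroup effect and the complement effect.
def diluted_effect(delta_sub, delta_comp, prevalence, sensitivity, specificity):
    tp = sensitivity * prevalence                  # true positives
    fp = (1 - specificity) * (1 - prevalence)      # false positives
    ppv = tp / (tp + fp)                           # positive predictive value
    return ppv * delta_sub + (1 - ppv) * delta_comp

# Effect 0.5 in the true subgroup, 0 elsewhere, prevalence 30%:
print(diluted_effect(0.5, 0.0, 0.30, sensitivity=0.9, specificity=0.8))
```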

7.
Study planning often involves selecting an appropriate sample size. Power calculations require specifying an effect size and estimating "nuisance" parameters, e.g., the overall incidence of the outcome. For observational studies, an additional source of randomness must be estimated: the rate of the exposure. A poor estimate of any of these parameters will produce an erroneous sample size. Internal pilot (IP) designs reduce the risk of this error, leading to better resource utilization, by using revised estimates of the nuisance parameters at an interim stage to adjust the final sample size. In the clinical trials setting, where allocation to treatment groups is pre-determined, IP designs have been shown to achieve the targeted power without introducing substantial inflation of the type I error rate. It has not been demonstrated whether the same general conclusions hold in observational studies, where exposure-group membership cannot be controlled by the investigator. We extend the IP design to observational settings. We demonstrate through simulations that implementing an IP design, in which the prevalence of the exposure can be re-estimated at an interim stage, helps ensure optimal power for observational research with little inflation of the type I error rate in the final data analysis.
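
A minimal sketch of the interim step, assuming a two-group comparison of means in which the total sample size is driven by the re-estimated exposure prevalence; all inputs are hypothetical.

```python
# Total sample size for comparing means between exposed and unexposed groups,
# recomputed from the interim prevalence estimate p_hat. With prevalence p,
# Var(diff in means) = sigma^2 * (1/(N*p) + 1/(N*(1-p))).
import math
from scipy.stats import norm

def total_n(p_hat, sigma2, delta, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z**2 * sigma2 * (1 / p_hat + 1 / (1 - p_hat)) / delta**2)

print(total_n(p_hat=0.25, sigma2=1.0, delta=0.3))  # interim-updated exposure rate
```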

8.
Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies.

9.
In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on the unblinded interim effect size can often lead to a sample size increase, and statistical adjustment is usually needed in the final analysis to ensure that the type I error rate is appropriately controlled. In the current literature, sample size reestimation and the corresponding type I error control are discussed in the context of maintaining the original randomization ratio across treatment groups, which we refer to as a "proportional increase." In practice, not all studies are designed with an optimal randomization ratio, for practical reasons. In such cases, when the sample size is to be increased, it is more efficient to allocate the additional subjects such that the randomization ratio is brought closer to an optimal ratio. In this research, we propose an adaptive randomization ratio change when a sample size increase is warranted. We refer to this strategy as a "nonproportional increase," as the number of subjects added to each treatment group is no longer proportional to the original randomization ratio. The proposed method boosts power not only through the increase in sample size, but also via efficient allocation of the additional subjects. The control of the type I error rate is shown analytically. Simulations are performed to illustrate the theoretical results.
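
A back-of-the-envelope comparison of proportional versus nonproportional allocation of additional subjects for a two-sample z-test (hypothetical numbers; the paper's type I error adjustment is not shown):

```python
# Power of a one-sided two-sample z-test with group sizes n1, n2; the
# standard error shrinks fastest when added subjects pull the ratio toward
# 1:1 (optimal under equal variances).
from scipy.stats import norm

def power(n1, n2, delta, sigma=1.0, alpha=0.025):
    se = sigma * (1 / n1 + 1 / n2) ** 0.5
    return norm.sf(norm.ppf(1 - alpha) - delta / se)

n1, n2, extra, delta = 60, 180, 120, 0.35          # original 1:3 ratio
print(power(n1 + 30, n2 + 90, delta))              # proportional increase
print(power(n1 + extra, n2, delta))                # nonproportional: toward 1:1
```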

10.
Brannath W, Bauer P. Biometrics 2004;60(3):715-723.
Ethical considerations and the competitive environment of clinical trials usually require that any given trial have sufficient power to detect a treatment advance. If at an interim analysis the available data are used to decide whether the trial is promising enough to be continued, investigators and sponsors often wish to have a high conditional power, that is, the probability of rejecting the null hypothesis given the interim data and the alternative of interest. Under this requirement, a design with interim sample size recalculation, which keeps the overall and conditional power at a prespecified value and preserves the overall type I error rate, is a reasonable alternative to a classical group sequential design, in which the conditional power is often too small. In this article, two-stage designs with control of overall and conditional power are constructed that minimize the expected sample size, either for a simple point alternative or for a random mixture of alternatives given by a prior density for the efficacy parameter. The presented optimality result applies to trials with and without an interim hypothesis test; in addition, one can account for constraints such as a minimal sample size for the second stage. The optimal designs are illustrated with an example and compared to the frequently considered method of using the conditional type I error level of a group sequential design.
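
The core computation in such designs is choosing the second-stage sample size so that conditional power reaches a target, as in this sketch for a two-arm z-test with unit variance (inputs hypothetical):

```python
# Second-stage per-group sample size n2 solving CP = cp_target, given interim
# z1 at information fraction t = n1/n_planned and standardized effect.
import math
from scipy.stats import norm

def stage2_n(z1, n1, n_planned, effect, cp_target=0.9, alpha=0.025):
    t = n1 / n_planned
    z_crit = norm.ppf(1 - alpha)
    b = (z_crit - math.sqrt(t) * z1) / math.sqrt(1 - t)  # hurdle for stage-2 increment
    # CP = P(Z2 > b), with E[Z2] = effect * sqrt(n2/2) for a two-arm z-test
    need = b + norm.ppf(cp_target)
    return max(0, math.ceil(2 * (need / effect) ** 2))

print(stage2_n(z1=1.2, n1=50, n_planned=100, effect=0.4))
```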

11.
A two-stage adaptive design trial is a single trial that combines the learning data from stage 1 (or phase II) and the confirming data from stage 2 (or phase III) for formal statistical testing. We call it a "Learn and Confirm" trial. The studywise type I error rate remains at issue in a "Learn and Confirm" trial. For studying multiple doses or multiple endpoints, a "Learn and Confirm" adaptive design can be more attractive than a fixed design approach, because, intuitively, the learning data in stage 1 should not be subjected to type I error scrutiny if no formal interim analysis is performed and only an adaptive selection of design parameters is made at stage 1. In this work, we conclude from extensive simulation studies that this intuition is most often misleading: regardless of whether or not a formal interim analysis is performed for making an adaptive selection, the type I error rate is always at risk of inflation. Inappropriate use of any "Learn and Confirm" strategy should not be overlooked.
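
A small simulation in the spirit of this conclusion: selecting the better of two doses at stage 1 without any formal interim test and then testing the pooled data at the nominal level already inflates the type I error rate. Stage sizes and the number of replications are arbitrary.

```python
# Under H0 (no effect for either dose) the rejection rate of the pooled test
# for the adaptively selected dose exceeds the nominal one-sided alpha.
import numpy as np

rng = np.random.default_rng(1)
alpha, n1, n2, reps = 0.025, 50, 100, 20000
z_crit = 1.959964  # approx. norm.ppf(0.975)
w1, w2 = np.sqrt(n1 / (n1 + n2)), np.sqrt(n2 / (n1 + n2))
rejections = 0
for _ in range(reps):
    z_stage1 = rng.standard_normal(2)   # stage-1 z of two doses vs control, H0
    best = z_stage1.argmax()            # adaptive dose selection, no formal test
    # pooled final z for the selected dose (independent stage-2 increment)
    z_final = w1 * z_stage1[best] + w2 * rng.standard_normal()
    rejections += z_final > z_crit
print(rejections / reps)  # noticeably above 0.025
```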

12.
Shen Y, Fisher L. Biometrics 1999;55(1):190-197.
In the process of monitoring clinical trials, it seems appealing to use the interim findings to determine whether the sample size originally planned will provide adequate power when the alternative hypothesis is true, and to adjust the sample size if necessary. In the present paper, we propose a flexible sequential monitoring method following the work of Fisher (1998), in which the maximum sample size does not have to be specified in advance. The final test statistic is constructed based on a weighted average of the sequentially collected data, where the weight function at each stage is determined by the observed data prior to that stage. Such a weight function is used to maintain the integrity of the variance of the final test statistic so that the overall type I error rate is preserved. Moreover, the weight function plays an implicit role in termination of a trial when a treatment difference exists. Finally, the design allows the trial to be stopped early when the efficacy result is sufficiently negative. Simulation studies confirm the performance of the method.
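
The variance-preserving mechanism can be sketched as follows: stage weights may depend on earlier data, but as long as the squared weights sum to one, the combined statistic remains standard normal under the null. The weights and stage statistics below are illustrative only.

```python
# Weighted combination of stage-wise z-statistics; squared weights summing
# to one preserves unit variance of the final statistic under H0.
import numpy as np

def combined_statistic(stage_z, stage_weights):
    w = np.asarray(stage_weights, dtype=float)
    assert abs((w**2).sum() - 1.0) < 1e-9, "squared weights must sum to 1"
    return float(np.dot(w, stage_z))

# three stages; later weights were "chosen" using only earlier data
print(combined_statistic([1.1, 0.4, 2.0], [0.6, 0.5, np.sqrt(1 - 0.36 - 0.25)]))
```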

13.
OBJECTIVES: The use of conventional Transmission/Disequilibrium tests in the analysis of candidate-gene association studies requires the precise and complete pre-specification of the total number of trios to be sampled to obtain sufficient power at a certain significance level (type I error risk). In most of these studies, very little information about the genetic effect size is available beforehand, so it is difficult to calculate a reasonable sample size. One would therefore wish to reassess the sample size during the course of the study. METHOD: We propose an adaptive group sequential procedure that allows both early stopping of the study with rejection of the null hypothesis (H0) and recalculation of the sample size based on interim effect size estimates when H0 cannot be rejected. The applicability of the method, developed by Müller and Schäfer [Biometrics 2001;57:886-891] in a clinical context, is demonstrated by a numerical example. Monte Carlo simulations are performed comparing the adaptive procedure with a fixed-sample and a conventional group sequential design. RESULTS: The main advantage of the adaptive procedure is its flexibility to allow for design changes in order to achieve a stabilized power characteristic while controlling the overall type I error and using the information already collected. CONCLUSIONS: Given these advantages, the procedure is a promising alternative to traditional designs.
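
The Müller and Schäfer principle behind the procedure can be sketched for the simplest case of a fixed one-look original design: at the interim, compute the conditional type I error of the original design and spend exactly that level on any redesigned remainder. Group sequential originals require the boundary recursion instead of this closed form.

```python
# Conditional type I error of a fixed one-sided design at information
# fraction t, given interim statistic z1; any redesign of the remaining part
# of the trial is valid if tested at this level.
from scipy.stats import norm

def conditional_error(z1, t, alpha=0.025):
    z_crit = norm.ppf(1 - alpha)
    return norm.sf((z_crit - t**0.5 * z1) / (1 - t)**0.5)

print(conditional_error(z1=0.8, t=0.5))
```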

14.
Testing for random mating of a population is important in population genetics, because deviations from randomness of mating may indicate inbreeding, population stratification, natural selection, or sampling bias. However, current methods use only observed numbers of genotypes and alleles and do not take advantage of the fact that the advent of sequencing technology provides an opportunity to investigate this topic in unprecedented detail. To address this opportunity, a novel statistical test for random mating is required for population genomics studies, in which large sequencing datasets are generally available. Here, we propose a Monte-Carlo-based permutation test (MCP) as an approach to test for random mating. Computer simulations used to evaluate the performance of the permutation test indicate that its type I error is well controlled and that its statistical power is greater than that of the commonly used chi-square test (CHI). Our simulation study shows that the power of our test is greater for datasets characterized by lower levels of migration between subpopulations. In addition, test power increases with increasing recombination rate, sample size, and divergence time of subpopulations. For populations exhibiting limited migration and average levels of population divergence, the statistical power approaches 1 for sequences longer than 1 Mbp and samples of 400 individuals or more. Taken together, our results suggest that our permutation test is a valuable tool for detecting random mating of populations, especially in population genomics studies.
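
A single-locus skeleton of such a permutation test (the MCP of the paper operates on sequence data): alleles are re-paired at random among individuals to generate the null distribution of a goodness-of-fit statistic for Hardy-Weinberg genotype proportions.

```python
# Monte Carlo permutation test for random mating at one biallelic locus;
# genotypes are coded 0/1/2 copies of the alternate allele.
import numpy as np

def random_mating_pvalue(genotypes, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    # pool of 2n alleles: g copies of allele 1 and 2-g copies of allele 0
    alleles = np.concatenate([np.repeat([0, 1], [2 - g, g]) for g in genotypes])
    def stat(g):
        obs = np.bincount(g, minlength=3)
        p = g.mean() / 2                       # allele frequency
        n = len(g)
        exp = n * np.array([(1 - p) ** 2, 2 * p * (1 - p), p**2])
        return ((obs - exp) ** 2 / np.maximum(exp, 1e-12)).sum()
    t_obs = stat(np.asarray(genotypes))
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(alleles)
        perm_g = alleles.reshape(-1, 2).sum(axis=1)   # re-pair alleles at random
        exceed += stat(perm_g) >= t_obs
    return (exceed + 1) / (n_perm + 1)

print(random_mating_pvalue([0] * 30 + [1] * 20 + [2] * 10))
```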

15.
Cheung YK. Biometrics 2008;64(3):940-949.
In situations where many regimens are possible candidates for a large phase III study, but too few resources are available to evaluate each relative to the standard, conducting a multi-armed randomized selection trial is a useful strategy to remove inferior treatments from further consideration. When the study has a relatively quick endpoint, such as an imaging-based lesion volume change in acute stroke patients, frequent interim monitoring of the trial is ethically and practically appealing to clinicians. In this article, I propose a class of sequential selection boundaries for multi-armed clinical trials, in which the objective is to select a treatment with a clinically significant improvement over the control group, or to declare futility if no such treatment exists. The proposed boundaries are easy to implement in a blinded fashion and can be applied on a flexible monitoring schedule in terms of calendar time. Design calibration with respect to prespecified levels of confidence is simple and can be accomplished even when the response rate of the control group is known only up to an interval. One of the proposed methods is applied to redesign a selection trial with an imaging endpoint in acute stroke patients and is compared to an optimal two-stage design via simulations: the proposed method imposes a smaller sample size on average than the two-stage design, and this advantage is substantial when there is in fact a treatment superior to the control group.

16.
Ibrahim JG, Chen MH, Xia HA, Liu T. Biometrics 2012;68(2):578-586.
Recent guidance from the Food and Drug Administration for the evaluation of new therapies in the treatment of type 2 diabetes (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design, with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate historical survival meta-data into the statistical design. Various properties of the proposed methodology are examined, and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing various quantities, such as the power and the type I error, in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program including a noninferiority clinical trial for CV risk assessment in T2DM studies.

17.
18.
For a Phase III randomized trial that compares survival outcomes between an experimental treatment versus a standard therapy, interim monitoring analysis is used to potentially terminate the study early based on efficacy. To preserve the nominal Type I error rate, alpha spending methods and information fractions are used to compute appropriate rejection boundaries in studies with planned interim analyses. For a one-sided trial design applied to a scenario in which the experimental therapy is superior to the standard therapy, interim monitoring should provide the opportunity to stop the trial prior to full follow-up and conclude that the experimental therapy is superior. This paper proposes a method called total control only (TCO) for estimating the information fraction based on the number of events within the standard treatment regimen. Based on theoretical derivations and simulation studies, for a maximum duration superiority design, the TCO method is not influenced by departure from the designed hazard ratio, is sensitive to detecting treatment differences, and preserves the Type I error rate compared to information fraction estimation methods that are based on total observed events. The TCO method is simple to apply, provides unbiased estimates of the information fraction, and does not rely on statistical assumptions that are impossible to verify at the design stage. For these reasons, the TCO method is a good approach when designing a maximum duration superiority trial with planned interim monitoring analyses.
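
A sketch of the TCO idea combined with a Lan-DeMets O'Brien-Fleming-type spending function: the information fraction is estimated from control-arm events only. The planned and observed event counts below are hypothetical.

```python
# Alpha spent at an interim look when the information fraction is taken as
# observed control-arm events over planned control-arm events (TCO idea),
# with one-sided O'Brien-Fleming-type Lan-DeMets spending.
from scipy.stats import norm

def obf_spent_alpha(t, alpha=0.025):
    return 2 * norm.sf(norm.ppf(1 - alpha / 2) / t**0.5)

d_control, d_control_planned = 90, 200
t_tco = d_control / d_control_planned   # information fraction, control events only
print(t_tco, obf_spent_alpha(t_tco))
```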

19.
20.
I propose that the strength of selection is greatest when resource levels are intermediate. This idea was partially tested using artificial character variation for petal size in Drosera tracyi. This plant opens one flower each morning, with stigmas becoming available and pollen being presented simultaneously throughout the population. Bees remove all the pollen that can be removed and deposit all the pollen that can be deposited in one or two visits. Visitation rates track blooming density, as do pollen removal and deposition. I measured selection as the difference in pollen removal and deposition between flowers with trimmed petals and flowers with normal petals. In one meadow, I found a wide range of levels of pollination and corresponding levels of selection, and selection was strongest at intermediate levels of pollination. This may mean that organisms are adapted to conditions in which resources are at intermediate levels, even though they may experience those conditions relatively rarely.
