Similar Articles
 20 similar articles found
1.
Recently, in order to accelerate drug development, trials that use adaptive seamless designs such as phase II/III clinical trials have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives and, after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control and do not have an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent and so there is little literature on extensive comparisons of their characteristics. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.

2.
There is growing interest in integrated Phase I/II oncology clinical trials involving molecularly targeted agents (MTA). The main challenges of these trials are nontrivial dose-efficacy relationships and the administration of MTAs in combination with other agents. While some designs were recently proposed for such Phase I/II trials, the majority of them consider the case of binary toxicity and efficacy endpoints only. At the same time, a continuous efficacy endpoint can carry more information about the agent's mechanism of action, but corresponding designs have received very limited attention in the literature. In this work, an extension of a recently developed information-theoretic design for the case of a continuous efficacy endpoint is proposed. The design transforms the continuous outcome using the logistic transformation and uses an information-theoretic argument to govern selection during the trial. The performance of the design is investigated in settings of single-agent and dual-agent trials. It is found that the novel design leads to substantial improvements in operating characteristics compared to a model-based alternative under scenarios with nonmonotonic dose/combination-efficacy relationships. The robustness of the design to missing/delayed efficacy responses and to the correlation in toxicity and efficacy endpoints is also investigated.

3.
Englert S, Kieser M. Biometrics 2012, 68(3):886-892
Phase II trials in oncology are usually conducted as single-arm two-stage designs with binary endpoints. Currently available adaptive design methods are tailored to comparative studies with continuous test statistics. Direct transfer of these methods to discrete test statistics results in conservative procedures and, therefore, in a loss in power. We propose a method based on the conditional error function principle that directly accounts for the discreteness of the outcome. It is shown how the method can be used to construct new phase II designs that are more efficient than currently applied designs and that allow flexible mid-course design modifications. The proposed method is illustrated with a variety of frequently used phase II designs.
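The following is a generic statement of the conditional error principle the abstract builds on, not the authors' discrete-endpoint construction itself:

```latex
% Conditional error principle (generic form; the paper adapts it to discrete test statistics):
% compute the planned design's conditional type I error given the stage-1 data x_1 ...
\[
A(x_1) \;=\; \Pr_{H_0}\bigl(\text{reject } H_0 \text{ with the planned design} \mid X_1 = x_1\bigr),
\]
% ... and a mid-course modification is admissible provided the modified test satisfies
\[
\Pr_{H_0}\bigl(\text{reject } H_0 \text{ with the modified design} \mid X_1 = x_1\bigr) \;\le\; A(x_1),
\quad\text{which implies}\quad \Pr_{H_0}(\text{reject } H_0) \le \alpha .
\]
```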

4.
Dose-Finding Designs for HIV Studies
We present a class of simple designs that can be used in early dose-finding studies in HIV. Such designs, in contrast with Phase I designs in cancer, have a lot of the Phase II flavor about them. Information on efficacy is obtained during the trial and is as important as that relating to toxicity. The designs proposed here sequentially incorporate the information obtained on viral reduction. Initial doses are given from some fixed range of dose regimens. The doses are ordered in terms of their toxic potential. At any dose, a patient can have one of three outcomes: inability to take the treatment (toxicity), ability to take the treatment but insufficient reduction in viral load (viral failure), and ability to take the treatment as well as a sufficient reduction of viral load (success). A clear goal for some class of designs would be the identification of the dose leading to the greatest percentage of successes. Under certain assumptions, which we identify and discuss, we can obtain efficient designs for this task. Under weaker, sometimes more realistic assumptions, we can still obtain designs that have good operating characteristics in identifying a level, if such a level exists, having some given or greater success rate. In the absence of such a level, the designs will come to an early closure, indicating the ineffectiveness of the new treatment.

5.
Although there are several new designs for phase I cancer clinical trials including the continual reassessment method and accelerated titration design, the traditional algorithm-based designs, like the '3 + 3' design, are still widely used because of their practical simplicity. In this paper, we study some key statistical properties of the traditional algorithm-based designs in a general framework and derive the exact formulae for the corresponding statistical quantities. These quantities are important for the investigator to gain insights regarding the design of the trial, and are (i) the probability of a dose being chosen as the maximum tolerated dose (MTD); (ii) the expected number of patients treated at each dose level; (iii) target toxicity level (i.e. the expected dose-limiting toxicity (DLT) incidences at the MTD); (iv) expected DLT incidences at each dose level and (v) expected overall DLT incidences in the trial. Real examples of clinical trials are given, and a computer program to do the calculation can be found at the authors' website http://www2.umdnj.edu/~linyo.
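The paper derives exact formulae; purely as an illustration, the Monte Carlo sketch below approximates quantities (i) and (ii) for a common simplified variant of the '3 + 3' rule (no expansion to six patients at the declared MTD, no de-escalation), which may differ in detail from the algorithm analysed by the authors.

```python
import numpy as np

def simulate_3plus3(true_dlt_probs, n_sims=10000, seed=1):
    """Monte Carlo approximation of operating characteristics of a simplified '3 + 3' rule."""
    rng = np.random.default_rng(seed)
    k = len(true_dlt_probs)
    mtd_counts = np.zeros(k + 1)          # last slot: trial stops with no MTD declared
    patients = np.zeros(k)
    for _ in range(n_sims):
        mtd, d = -1, 0
        while d < k:
            dlt = rng.binomial(3, true_dlt_probs[d])
            patients[d] += 3
            if dlt == 0:
                mtd, d = d, d + 1          # 0/3 DLT: dose acceptable, escalate
            elif dlt == 1:
                dlt += rng.binomial(3, true_dlt_probs[d])
                patients[d] += 3
                if dlt == 1:
                    mtd, d = d, d + 1      # 1/6 DLT: still acceptable, escalate
                else:
                    break                  # >= 2/6 DLT: stop, MTD is the previous dose
            else:
                break                      # >= 2/3 DLT: stop, MTD is the previous dose
        mtd_counts[mtd if mtd >= 0 else k] += 1
    return mtd_counts / n_sims, patients / n_sims

sel, n_per_dose = simulate_3plus3([0.05, 0.10, 0.20, 0.35, 0.50])
print("P(each dose declared MTD):", np.round(sel[:-1], 3), " P(no MTD):", round(sel[-1], 3))
print("Expected patients per dose:", np.round(n_per_dose, 2))
```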

6.
A common assumption of data analysis in clinical trials is that the patient population, as well as treatment effects, do not vary during the course of the study. However, when trials enroll patients over several years, this assumption may be violated. Ignoring variations of the outcome distributions over time, under the control and experimental treatments, can lead to biased treatment effect estimates and poor control of false positive results. We propose and compare two procedures that account for possible variations of the outcome distributions over time, to correct treatment effect estimates, and to control type-I error rates. The first procedure models trends of patient outcomes with splines. The second leverages conditional inference principles, which have been introduced to analyze randomized trials when patient prognostic profiles are unbalanced across arms. These two procedures are applicable in response-adaptive clinical trials. We illustrate the consequences of trends in the outcome distributions in response-adaptive designs and in platform trials, and investigate the proposed methods in the analysis of a glioblastoma study.
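A minimal sketch of the first procedure's idea (adjusting for a calendar-time trend with a spline basis before estimating the treatment effect), using simulated data in which the allocation ratio drifts over time as it might under response-adaptive randomisation; the statsmodels/patsy formula interface and all numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
time = np.sort(rng.uniform(0, 3, n))                 # enrolment time (years)
p_treat = 0.25 + 0.5 * time / 3                      # allocation drifts over the trial (mimics RAR)
treat = rng.binomial(1, p_treat)
y = 1.0 * treat + 0.8 * np.sin(time) + rng.normal(0, 1, n)   # outcome with a calendar-time trend

df = pd.DataFrame({"y": y, "treat": treat, "time": time})
naive = smf.ols("y ~ treat", data=df).fit()                    # ignores the trend -> biased here
spline = smf.ols("y ~ treat + bs(time, df=4)", data=df).fit()  # B-spline adjustment for the trend

print("naive estimate:   %.2f (SE %.2f)" % (naive.params["treat"], naive.bse["treat"]))
print("spline-adjusted:  %.2f (SE %.2f)" % (spline.params["treat"], spline.bse["treat"]))
```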

7.
Publication bias is a major concern in conducting systematic reviews and meta-analyses. Various sensitivity analysis or bias-correction methods have been developed based on selection models, and they have some advantages over the widely used trim-and-fill bias-correction method. However, likelihood methods based on selection models may have difficulty in obtaining precise estimates and reasonable confidence intervals, or require a rather complicated sensitivity analysis process. Herein, we develop a simple publication bias adjustment method by utilizing the information on conducted but still unpublished trials from clinical trial registries. We introduce an estimating equation for parameter estimation in the selection function by regarding the publication bias issue as a missing data problem under the missing not at random assumption. With the estimated selection function, we introduce the inverse probability weighting (IPW) method to estimate the overall mean across studies. Furthermore, the IPW versions of heterogeneity measures such as the between-study variance and the I² measure are proposed. We propose methods to construct confidence intervals based on asymptotic normal approximation as well as on parametric bootstrap. Through numerical experiments, we observed that the estimators successfully eliminated bias, and the confidence intervals had empirical coverage probabilities close to the nominal level. On the other hand, the confidence interval based on asymptotic normal approximation is much wider in some scenarios than the bootstrap confidence interval. Therefore, the latter is recommended for practical use.
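A hedged sketch of the IPW step only, assuming the selection (publication) probabilities have already been estimated from registry information; the authors' estimating equation for the selection function and their variance and heterogeneity estimators are not reproduced here.

```python
import numpy as np

def ipw_common_effect(y, v, p):
    """IPW estimate of the overall mean effect across published studies.

    y : published effect estimates
    v : within-study variances
    p : estimated selection (publication) probabilities for these studies
    """
    y, v, p = map(np.asarray, (y, v, p))
    w = 1.0 / (v * p)                      # inverse-variance weight inflated by 1/p
    return np.sum(w * y) / np.sum(w)

# Toy numbers (assumed): studies with significant results were more likely to be published
y = np.array([0.45, 0.38, 0.52, 0.10, 0.05])
v = np.array([0.04, 0.05, 0.03, 0.02, 0.02])
p = np.array([0.95, 0.90, 0.95, 0.40, 0.35])
print("naive inverse-variance mean:", round(float(np.sum(y / v) / np.sum(1 / v)), 3))
print("IPW-adjusted mean:          ", round(float(ipw_common_effect(y, v, p)), 3))
```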

8.
Design and analysis of phase I clinical trials
B E Storer. Biometrics 1989, 45(3):925-937
The Phase I clinical trial is a study intended to estimate the so-called maximum tolerable dose (MTD) of a new drug. Although there exists more or less a standard type of design for such trials, its development has been largely ad hoc. As usually implemented, the trial design has no intrinsic property that provides a generally satisfactory basis for estimation of the MTD. In this paper, the standard design and several simple alternatives are compared with regard to the conservativeness of the design and with regard to point and interval estimation of an MTD (33rd percentile) with small sample sizes. Using a Markov chain representation, we found several designs to be nearly as conservative as the standard design in terms of the proportion of patients entered at higher dose levels. In Monte Carlo simulations, two two-stage designs are found to provide reduced bias in maximum likelihood estimation of the MTD in less than ideal dose-response settings. Of the three methods considered for determining confidence intervals--the delta method, a method based on Fieller's theorem, and a likelihood ratio method--none was able to provide both usefully narrow intervals and coverage probabilities close to nominal.
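As a sketch of the estimand and the first of the three interval methods mentioned, assume (for illustration) a two-parameter logistic dose-response model; the MTD, defined here as the 33rd percentile, and its delta-method variance are then:

```latex
% Working logistic dose-toxicity model (an assumption for illustration):
\[
\operatorname{logit} p(d) = \alpha + \beta d, \qquad
\widehat{\mathrm{MTD}} = g(\hat\alpha,\hat\beta)
  = \frac{\operatorname{logit}(1/3) - \hat\alpha}{\hat\beta}
  = \frac{-\log 2 - \hat\alpha}{\hat\beta}.
\]
% Delta-method variance used for a Wald-type interval:
\[
\widehat{\operatorname{Var}}\bigl(\widehat{\mathrm{MTD}}\bigr) \approx
\nabla g^{\top}\,\widehat{\Sigma}\,\nabla g, \qquad
\nabla g = \Bigl(-\tfrac{1}{\hat\beta},\; -\tfrac{-\log 2 - \hat\alpha}{\hat\beta^{2}}\Bigr)^{\!\top},
\]
% with \widehat{\Sigma} the estimated covariance matrix of (\hat\alpha, \hat\beta).
```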

9.
Adaptive design in clinical trials is an approach that modifies a trial on the basis of accumulating information, with the aim of making clinical trials and clinical development programmes more efficient and providing patients with more effective treatment. In addition, trial failures and patient deaths caused by the withholding of information have eroded public trust in the pharmaceutical industry, and views on the disclosure of clinical data have therefore begun to shift towards greater transparency. This article describes the characteristics and applications of three classes of adaptive design methods, as well as the measures taken by foreign regulatory authorities and pharmaceutical companies to improve the transparency of clinical trials.

10.
This paper discusses a number of methods for adjusting treatment effect estimates in clinical trials where differential effects in several subpopulations are suspected. In such situations, the estimates from the most extreme subpopulation are often overinterpreted. The paper focusses on the construction of simultaneous confidence intervals intended to provide a more realistic assessment regarding the uncertainty around these extreme results. The methods from simultaneous inference are compared with shrinkage estimates arising from Bayesian hierarchical models by discussing salient features of both approaches in a typical application.
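For the shrinkage side of the comparison, a standard normal-normal hierarchical model (an illustrative sketch, not necessarily the authors' exact specification) pulls each subgroup estimate towards the overall mean, most strongly for the least precise subgroups:

```latex
% Normal-normal hierarchical model for subgroup estimates (illustrative):
\[
\hat\theta_i \mid \theta_i \sim N(\theta_i, v_i), \qquad \theta_i \sim N(\mu, \tau^2),
\]
% posterior (shrinkage) estimate: imprecise, extreme subgroups are pulled hardest
\[
E[\theta_i \mid \text{data}] = (1 - B_i)\,\hat\theta_i + B_i\,\mu, \qquad
B_i = \frac{v_i}{v_i + \tau^2},
\]
% with \mu and \tau^2 given priors or estimated (empirical Bayes) in practice.
```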

11.
Dose-finding based on efficacy-toxicity trade-offs
Thall PF, Cook JD. Biometrics 2004, 60(3):684-693
We present an adaptive Bayesian method for dose-finding in phase I/II clinical trials based on trade-offs between the probabilities of treatment efficacy and toxicity. The method accommodates either trinary or bivariate binary outcomes, as well as efficacy probabilities that possibly are nonmonotone in dose. Doses are selected for successive patient cohorts based on a set of efficacy-toxicity trade-off contours that partition the two-dimensional outcome probability domain. Priors are established by solving for hyperparameters that optimize the fit of the model to elicited mean outcome probabilities. For trinary outcomes, the new algorithm is compared to the method of Thall and Russell (1998, Biometrics 54, 251-264) by application to a trial of rapid treatment for ischemic stroke. The bivariate binary outcome case is illustrated by a trial of graft-versus-host disease treatment in allogeneic bone marrow transplantation. Computer simulations show that, under a wide range of dose-outcome scenarios, the new method has high probabilities of making correct decisions and treats most patients at doses with desirable efficacy-toxicity trade-offs.

12.
A popular design for clinical trials assessing targeted therapies is the two-stage adaptive enrichment design with recruitment in stage 2 limited to a biomarker-defined subgroup chosen based on data from stage 1. The data-dependent selection leads to statistical challenges if data from both stages are used to draw inference on treatment effects in the selected subgroup. If subgroups considered are nested, as when defined by a continuous biomarker, treatment effect estimates in different subgroups follow the same distribution as estimates in a group-sequential trial. This result is used to obtain tests controlling the familywise type I error rate (FWER) for six simple subgroup selection rules, one of which also controls the FWER for any selection rule. Two approaches are proposed: one based on multivariate normal distributions suitable if the number of possible subgroups, k, is small, and one based on Brownian motion approximations suitable for large k. The methods, applicable in the wide range of settings with asymptotically normal test statistics, are illustrated using survival data from a breast cancer trial.
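To illustrate the distributional result, the sketch below computes a common one-sided critical value controlling the FWER under the global null for k nested subgroups, using the group-sequential-type correlation structure described in the abstract; the six selection rules and the Brownian-motion approximation for large k are not coded here.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

def fwer_critical_value(info_fractions, alpha=0.025):
    """One-sided critical value c with P(max_j Z_j > c) = alpha under the global null,
    for Z-statistics of nested subgroups with information fractions t_1 < ... < t_k = 1,
    using corr(Z_i, Z_j) = sqrt(t_i / t_j) for t_i <= t_j."""
    t = np.asarray(info_fractions, dtype=float)
    corr = np.sqrt(np.minimum.outer(t, t) / np.maximum.outer(t, t))
    mvn = multivariate_normal(mean=np.zeros(len(t)), cov=corr)
    f = lambda c: mvn.cdf(np.full(len(t), c)) - (1 - alpha)
    # the unadjusted and Bonferroni critical values bracket the solution
    return brentq(f, norm.isf(alpha), norm.isf(alpha / len(t)))

# Example: four nested biomarker subgroups containing 25%, 50%, 75%, 100% of patients
c = fwer_critical_value([0.25, 0.50, 0.75, 1.00])
print("common critical value for 4 nested subgroups:", round(c, 3))
```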

13.
Paired survival times with potential censoring are often observed from two treatment groups in clinical trials and other types of clinical studies. The ratio of marginal hazard rates may be used to quantify the treatment effect in these studies. In this paper, a recently proposed nonparametric kernel method is used to estimate the marginal hazard rate, and the method of variance estimates recovery (MOVER) is used for the construction of the confidence intervals of a time-dependent hazard ratio based on the confidence limits of a single marginal hazard rate. Two methods are proposed: one uses the delta method and another adopts the transformation method to construct confidence limits for the marginal hazard rate. Simulations are performed to evaluate the performance of the proposed methods. Real data from two clinical trials are analyzed using the proposed methods.
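A hedged sketch of the transformation variant of MOVER for a ratio: work on the log scale, where the ratio becomes a difference, combine the limits of the two marginal intervals, and exponentiate back. The version below treats the two marginal estimates as independent, whereas the paired setting of the paper would add a correlation term.

```latex
% MOVER via the log transformation (within-pair correlation term omitted here):
\[
\hat\eta_j = \log\hat\lambda_j(t) \ \text{with } 100(1-\alpha)\% \text{ limits } (l_j, u_j), \quad j = 1, 2,
\]
\[
L = \exp\!\Bigl\{\hat\eta_1 - \hat\eta_2 - \sqrt{(\hat\eta_1 - l_1)^2 + (u_2 - \hat\eta_2)^2}\Bigr\},
\qquad
U = \exp\!\Bigl\{\hat\eta_1 - \hat\eta_2 + \sqrt{(u_1 - \hat\eta_1)^2 + (\hat\eta_2 - l_2)^2}\Bigr\},
\]
% giving (L, U) as the interval for the hazard ratio \lambda_1(t)/\lambda_2(t).
```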

14.
A Bayesian decision-theoretic method is proposed for conducting small, randomized pre-phase II selection trials. The aim is to improve on the design of Thall and Estey (1993, Statistics in Medicine 12, 1197-1211). Designs are derived that optimize a gain function accounting for current and future patient gains, per-patient cost, and future treatment development cost. To reduce the computational burden associated with backward induction, myopic versions of the design that consider only one, two, or three future decisions at a time are also considered. The designs are compared in the context of a screening trial in acute myelogenous leukemia.

15.
J. Feifel, D. Dobler. Biometrics 2021, 77(1):175-185
Nested case-control designs are attractive in studies with a time-to-event endpoint if the outcome is rare or if interest lies in evaluating expensive covariates. The appeal is that these designs restrict attention to small subsets of all patients at risk just prior to the observed event times. Only these small subsets need to be evaluated. Typically, the controls are selected at random, and methods for time-simultaneous inference have been proposed in the literature. However, the martingale structure behind nested case-control designs allows for more powerful and flexible non-standard sampling designs. We exploit that structure to find simultaneous confidence bands based on wild bootstrap resampling procedures within this general class of designs. We show in a simulation study that the intended coverage probability is obtained for confidence bands for cumulative baseline hazard functions. We apply our methods to observational data about hospital-acquired infections.

16.
Bayesian design and analysis of active control clinical trials
Simon R. Biometrics 1999, 55(2):484-487
We consider the design and analysis of active control clinical trials, i.e., clinical trials comparing an experimental treatment E to a control treatment C considered to be effective. Direct comparison of E to placebo P, or no treatment, is sometimes ethically unacceptable. Much discussion of the design and analysis of such clinical trials has focused on whether the comparison of E to C should be based on a test of the null hypothesis of equivalence, on a test of a nonnull hypothesis that the difference is of some minimally medically important size delta, or on one- or two-sided confidence intervals. These approaches are essentially the same for study planning. They all suffer from arbitrariness in specifying the size of the difference delta that must be excluded. We propose an alternative Bayesian approach to the design and analysis of active control trials. We derive the posterior probability that E is superior to P or that E is at least k% as good as C and that C is more effective than P. We also derive approximations for use with logistic and proportional hazards models. Selection of prior distributions is discussed, and results are illustrated using data from an active control trial of a drug for the treatment of unstable angina.
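A Monte Carlo sketch of the retention idea described in the abstract, assuming approximately normal posteriors for the historical effect of C versus P and the current-trial effect of E versus C; the numbers and normal approximations are illustrative, not Simon's derivations for logistic or proportional hazards models.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 200_000
k = 0.5                                    # require E to retain at least 50% of C's effect

# Posterior draws for delta_CP = effect of C vs placebo (e.g., from historical trials)
delta_CP = rng.normal(loc=0.30, scale=0.08, size=n_draws)
# Posterior draws for delta_EC = effect of E vs C from the current active-control trial
delta_EC = rng.normal(loc=-0.05, scale=0.10, size=n_draws)

delta_EP = delta_EC + delta_CP             # implied effect of E vs placebo
prob = np.mean((delta_EP >= k * delta_CP) & (delta_CP > 0))
print("P(E retains >= 50% of C's effect and C > P | data) =", round(float(prob), 3))
```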

17.
Curve-free and model-based continual reassessment method designs
O'Quigley J. Biometrics 2002, 58(1):245-249
Gasparini and Eisele (2000, Biometrics 56, 609-615) present a development of the continual reassessment method of O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33-48). They call their development a curve-free method for Phase I clinical trials. However, unless we are dealing with informative prior information, the curve-free method coincides with the usual model-based continual reassessment method. Both methods are subject to arbitrary specification parameters, and we provide some discussion on this. Whatever choices are made for one method, there exist equivalent choices for the other method, where "equivalent" means that the operating characteristics (sequential dose allocation and final recommendation) are the same. The insightful development of Gasparini and Eisele provides clarification on some of the basic ideas behind the continual reassessment method, particularly when viewed from a Bayesian perspective. But their development does not lead to a new class of designs, and the comparative results in their article, indicating some preference for curve-free designs over model-based designs, simply reflect a more fortunate choice of arbitrary specification parameters. Other choices could equally well have reversed their conclusion. A correct conclusion should be one of operational equivalence. The story is different for the case of informative priors, a situation that is inherently much more difficult. We discuss this. We also mention the important idea of two-stage designs (Moller, 1995, Statistics in Medicine 14, 911-922; O'Quigley and Shen, 1996, Biometrics 52, 163-174), arguing, via a simple comparison with the results of Gasparini and Eisele (2000), that there is room for notable gains here. Two-stage designs also have the advantage of avoiding the issue of prior specification altogether.
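For readers unfamiliar with the model-based continual reassessment method under discussion, here is a minimal one-parameter power-model sketch; the skeleton, prior, and target below are illustrative assumptions, and no escalation restrictions or safety rules are applied.

```python
import numpy as np

def crm_next_dose(skeleton, doses_given, dlts, target=0.25, prior_sd=1.34):
    """One-parameter power-model CRM: p_i(a) = skeleton_i ** exp(a), a ~ N(0, prior_sd^2).
    Returns posterior mean toxicity at each dose and the recommended next dose index."""
    skeleton = np.asarray(skeleton, dtype=float)
    a = np.linspace(-6.0, 6.0, 2001)                     # grid for numerical integration
    da = a[1] - a[0]
    prior = np.exp(-0.5 * (a / prior_sd) ** 2)
    p_obs = skeleton[np.asarray(doses_given)][:, None] ** np.exp(a)          # patients x grid
    lik = np.prod(np.where(np.asarray(dlts)[:, None] == 1, p_obs, 1.0 - p_obs), axis=0)
    post = prior * lik
    post /= post.sum() * da                              # normalised posterior density on the grid
    post_tox = (skeleton[:, None] ** np.exp(a) * post).sum(axis=1) * da      # posterior mean per dose
    return post_tox, int(np.argmin(np.abs(post_tox - target)))

skeleton = [0.05, 0.12, 0.25, 0.40, 0.55]
post_tox, nxt = crm_next_dose(skeleton, doses_given=[0, 0, 0, 1, 1, 1], dlts=[0, 0, 0, 0, 1, 0])
print("posterior mean toxicity:", np.round(post_tox, 3), "-> recommended next dose index:", nxt)
```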

18.
Traditionally, drug development is divided into three phases, which have different aims and objectives. Recently, so-called adaptive seamless designs that allow combination of the objectives of different development phases into a single trial have gained much interest. Adaptive trials combining treatment selection typical for Phase II and confirmation of efficacy as in Phase III are referred to as adaptive seamless Phase II/III designs and are considered in this paper. We compared four methods for adaptive treatment selection, namely the classical Dunnett test, an adaptive version of the Dunnett test based on the conditional error approach, the combination test approach, and an approach within the classical group-sequential framework. The latter two approaches have only recently been published. In a simulation study we found that no one method dominates the others in terms of power, apart from the adaptive Dunnett test, which dominates the classical Dunnett test by construction. Furthermore, scenarios under which one approach outperforms others are described.
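A sketch of the combination test approach, with a Bonferroni adjustment of the stage-1 p-value standing in for the Dunnett adjustment as a simplified closed-test shortcut; the weights, alpha, and example p-values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def seamless_combination_test(p1_all, selected, p2_selected,
                              w1=np.sqrt(0.5), w2=np.sqrt(0.5), alpha=0.025):
    """Inverse-normal combination test after treatment selection at interim.

    p1_all      : one-sided stage-1 p-values of all experimental arms vs control
    selected    : index of the arm carried forward to stage 2
    p2_selected : one-sided stage-2 p-value of that arm (stage-2 data only)
    Weights must satisfy w1**2 + w2**2 = 1; Bonferroni is used here instead of Dunnett."""
    p1_adj = min(1.0, len(p1_all) * p1_all[selected])        # adjustment over stage-1 arms
    z = w1 * norm.isf(p1_adj) + w2 * norm.isf(p2_selected)   # inverse-normal combination
    return z, bool(z > norm.isf(alpha))

# Example: three arms at stage 1, arm with index 2 looks best and is continued
z, reject = seamless_combination_test(p1_all=[0.20, 0.09, 0.03], selected=2, p2_selected=0.01)
print("combined Z = %.2f, reject null for the selected arm: %s" % (z, reject))
```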

19.
In cancer clinical trials, it is often of interest to estimate the ratios of hazard rates at some specific time points during the study from two independent populations. In this paper, we consider nonparametric confidence interval procedures for the hazard ratio based on kernel estimates for the hazard rates with under-smoothing bandwidths. Two methods are used to derive the confidence intervals: one based on the asymptotic normality of the ratio of the kernel estimates for the hazard rates in two populations and another through Fieller's Theorem. The performance of the proposed confidence intervals is evaluated through Monte Carlo simulations, and the methods are applied to the analysis of data from a clinical trial on early breast cancer.
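As a sketch of the second interval method named, Fieller's Theorem gives the confidence set for the ratio from a quadratic inequality in the (assumed independent, asymptotically normal) kernel estimates:

```latex
% Fieller-type confidence set for theta = lambda_1(t)/lambda_2(t),
% with kernel estimates and variance estimates \hat v_1, \hat v_2:
\[
\Bigl\{\theta : \bigl(\hat\lambda_1(t) - \theta\,\hat\lambda_2(t)\bigr)^2
  \le z_{1-\alpha/2}^2\,\bigl(\hat v_1 + \theta^2 \hat v_2\bigr)\Bigr\}
\]
\[
\Longleftrightarrow\quad
\bigl(\hat\lambda_2^2 - z^2\hat v_2\bigr)\theta^2 - 2\hat\lambda_1\hat\lambda_2\,\theta
  + \bigl(\hat\lambda_1^2 - z^2\hat v_1\bigr) \le 0,
\]
% a finite interval given by the two roots whenever \hat\lambda_2^2 > z^2 \hat v_2.
```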

20.
Adaptive seamless designs combine confirmatory testing, a domain of phase III trials, with features such as treatment or subgroup selection, typically associated with phase II trials. They promise to increase the efficiency of development programmes of new drugs, for example, in terms of sample size and/or development time. It is well acknowledged that adaptive designs are more involved from a logistical perspective and require more upfront planning, often in the form of extensive simulation studies, than conventional approaches. Here, we present a framework for adaptive treatment and subgroup selection using the same notation, which links the somewhat disparate literature on treatment selection on one side and on subgroup selection on the other. Furthermore, we introduce a flexible and efficient simulation model that serves both designs. As primary endpoints often take a long time to observe, interim analyses are frequently informed by early outcomes. Therefore, all methods presented accommodate interim analyses informed by either the primary outcome or an early outcome. The R package asd, previously developed to simulate designs with treatment selection, was extended to include subgroup selection (so-called adaptive enrichment designs). Here, we describe the functionality of the R package asd and use it to present some worked examples motivated by clinical trials in chronic obstructive pulmonary disease and oncology. The examples both illustrate various features of the R package and provide insights into the operating characteristics of adaptive seamless studies.
