Similar Articles (20 results)
3.
In a typical comparative clinical trial the randomization scheme is fixed at the beginning of the study, and maintained throughout the course of the trial. A number of researchers have championed a randomized trial design referred to as ‘outcome‐adaptive randomization.’ In this type of trial, the likelihood of a patient being enrolled to a particular arm of the study increases or decreases as preliminary information becomes available suggesting that one treatment may be superior or inferior. While the design merits of outcome‐adaptive trials have been debated, little attention has been paid to significant ethical concerns that arise in the conduct of such studies. These include loss of equipoise, lack of processes for adequate informed consent, and inequalities inherent in the research design which could lead to perceptions of injustice that may have negative implications for patients and the research enterprise. This article examines the ethical difficulties inherent in outcome‐adaptive trials.
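The allocation mechanics under discussion can be sketched with Thompson sampling, one common outcome-adaptive rule (the article discusses the design class and its ethics in general; this specific rule, the Beta(1, 1) priors, and the interim counts below are illustrative assumptions, not taken from the article):

```python
import random

def thompson_allocation_probs(successes, failures, n_draws=10000, seed=0):
    """Estimate each arm's chance of receiving the next patient by sampling
    response rates from Beta posteriors (uniform Beta(1, 1) priors assumed)
    and counting how often each arm's draw is the largest."""
    rng = random.Random(seed)
    wins = [0] * len(successes)
    for _ in range(n_draws):
        draws = [rng.betavariate(1 + s, 1 + f) for s, f in zip(successes, failures)]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]

# Interim data: 3/10 responses on arm A vs 8/10 on arm B, so enrollment
# is skewed toward arm B -- exactly the behavior the article's ethical
# analysis is concerned with.
probs = thompson_allocation_probs(successes=[3, 8], failures=[7, 2])
```

As the interim data accumulate, the allocation probabilities drift further from 1:1, which is what gives rise to the equipoise and consent issues the article examines.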

4.
Pei L, Hughes MD. Biometrics 2008;64(4):1117-1125
Bridging clinical trials are sometimes designed to evaluate whether a proposed dose for use in one population, for example, children, gives similar pharmacokinetic (PK) levels, or has similar effects on a surrogate marker, as an established effective dose used in another population, for example, adults. For HIV bridging trials, because of the increased risk of viral resistance to drugs at low PK levels, the goal is often to determine whether the doses used in different populations result in similar percentages of patients with low PK levels. For example, it may be desired to establish that a proposed pediatric dose leaves approximately 10% of children with PK levels below the 10th percentile of PK levels for the established adult dose. However, the 10th percentile for the adult dose is often imprecisely estimated in studies of relatively small size. Little attention has been given to the statistical framework for such bridging studies. In this article, a formal framework for the design and analysis of quantile-based bridging studies is proposed. The methodology is then developed for normally distributed outcome measures from both frequentist and Bayesian directions. Sample size and other design considerations are discussed.
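The quantity at the heart of such a quantile-based comparison is easy to write down under the normality assumption the abstract mentions (the means and SDs below are made-up illustration values, and this ignores the estimation uncertainty in the adult quantile that the paper addresses):

```python
from statistics import NormalDist

def below_reference_quantile(mu_ref, sd_ref, mu_new, sd_new, q=0.10):
    """Probability that an outcome in the new population falls below the
    q-th quantile of the reference population, assuming both populations
    are normally distributed with known parameters."""
    ref_quantile = NormalDist(mu_ref, sd_ref).inv_cdf(q)
    return NormalDist(mu_new, sd_new).cdf(ref_quantile)

# Identical distributions recover the target percentage exactly:
p_same = below_reference_quantile(2.0, 0.5, 2.0, 0.5)
# A lower pediatric mean inflates the fraction with sub-threshold PK levels:
p_low = below_reference_quantile(2.0, 0.5, 1.7, 0.5)
```

The paper's contribution is the formal design and analysis framework around this quantity when the adult quantile itself is only imprecisely estimated.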

5.
Bayesian hierarchical models have been applied in clinical trials to allow for information sharing across subgroups. Traditional Bayesian hierarchical models do not have subgroup classifications; thus, information is shared across all subgroups. When the difference between subgroups is large, it suggests that the subgroups belong to different clusters. In that case, placing all subgroups in one pool and borrowing information across all subgroups can result in substantial bias for the subgroups with strong borrowing, or a lack of efficiency gain with weak borrowing. To resolve this difficulty, we propose a hierarchical Bayesian classification and information sharing (BaCIS) model for the design of multigroup phase II clinical trials with binary outcomes. We introduce subgroup classification into the hierarchical model. Subgroups are classified into two clusters on the basis of their outcomes mimicking the hypothesis testing framework. Subsequently, information sharing takes place within subgroups in the same cluster, rather than across all subgroups. This method can be applied to the design and analysis of multigroup clinical trials with binary outcomes. Compared to the traditional hierarchical models, better operating characteristics are obtained with the BaCIS model under various scenarios.
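A toy version of the classify-then-borrow idea can be sketched as follows (BaCIS does the classification and borrowing with a full Bayesian hierarchical model; the fixed cutoff, fixed shrinkage weight, and the response counts here are illustrative assumptions only):

```python
def cluster_and_borrow(responses, ns, cutoff=0.3, shrink=0.5):
    """Split subgroups into two clusters by whether the observed response
    rate exceeds `cutoff`, then shrink each subgroup's rate toward its own
    cluster's pooled rate only -- never toward the other cluster."""
    rates = [r / n for r, n in zip(responses, ns)]
    estimates = [0.0] * len(rates)
    for high in (False, True):
        idx = [i for i, p in enumerate(rates) if (p > cutoff) == high]
        if not idx:
            continue
        pooled = sum(responses[i] for i in idx) / sum(ns[i] for i in idx)
        for i in idx:
            estimates[i] = (1 - shrink) * rates[i] + shrink * pooled
    return estimates

# Two low-rate and two high-rate subgroups: borrowing happens within,
# not across, the two clusters, so the low subgroups are not dragged up.
est = cluster_and_borrow(responses=[2, 3, 12, 14], ns=[20, 20, 20, 20])
```

Pooling all four subgroups together would instead shrink every estimate toward the overall rate of 0.3875, which is the bias the abstract warns about.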

6.
Atkinson AC, Biswas A. Biometrics 2005;61(1):118-125
Adaptive designs are used in phase III clinical trials for skewing the allocation pattern toward the better treatments. We use optimum design theory to derive a skewed Bayesian biased-coin procedure for sequential designs with continuous responses. The skewed designs are used to provide adaptive designs, the performance of which is studied numerically and theoretically. Important properties are loss and the proportion of allocation to the better treatment.
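The flavor of a skewed biased coin can be conveyed with a minimal sketch (this is not Atkinson and Biswas's optimum-design-theoretic procedure, just the basic idea of a coin biased toward the arm that currently looks better; the target probability 0.7 and the interim means are assumptions for illustration):

```python
import random

def skewed_biased_coin(means, counts, p_better=0.7, rng=None):
    """Assign the next patient to the arm with the higher observed mean
    response with probability `p_better`; ties are broken toward the
    under-represented arm.  Returns 0 or 1 (the chosen arm)."""
    rng = rng or random.Random()
    if means[0] == means[1]:
        better = 0 if counts[0] <= counts[1] else 1
    else:
        better = 0 if means[0] > means[1] else 1
    return better if rng.random() < p_better else 1 - better

rng = random.Random(42)
picks = [skewed_biased_coin(means=[4.1, 5.3], counts=[12, 12], rng=rng)
         for _ in range(10000)]
share_better = picks.count(1) / len(picks)   # close to the 0.7 target
```

The "loss" and allocation-proportion properties mentioned in the abstract are exactly the trade-off such a coin creates: more patients on the better arm, at some cost in estimation efficiency.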

7.
Yimei Li, Ying Yuan. Biometrics 2020;76(4):1364-1373
Pediatric phase I trials are usually carried out after the adult trial testing the same agent has started, but has not yet been completed. As the pediatric trial progresses, in light of the accrued interim data from the concurrent adult trial, the pediatric protocol is often amended to modify the original pediatric dose escalation design. In practice, this is done frequently in an ad hoc way, interrupting patient accrual and slowing down the trial. We developed a pediatric-continuous reassessment method (PA-CRM) to streamline this process, providing a more efficient and rigorous method to find the maximum tolerated dose for pediatric phase I oncology trials. We use a discounted joint likelihood of the adult and pediatric data, with a discount parameter controlling information borrowing between pediatric and adult trials. According to the interim adult and pediatric data, the discount parameter is adaptively updated using the Bayesian model averaging method. Numerical study shows that the PA-CRM improves the efficiency and accuracy of the pediatric trial and is robust to various model assumptions.
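The discounted-likelihood idea can be sketched with a one-parameter power-model CRM and a fixed discount (PA-CRM updates the discount adaptively by Bayesian model averaging and works with posteriors rather than this grid-search MLE; the skeleton and the toy toxicity data are made up):

```python
import math

def discounted_loglik(a, ped_data, adult_data, skeleton, discount):
    """Log-likelihood of a power-model CRM, p_d = skeleton[d] ** exp(a),
    with the adult contribution down-weighted by `discount` in [0, 1]."""
    def ll(data):
        total = 0.0
        for dose, tox in data:          # (dose index, 1 = DLT observed)
            p = skeleton[dose] ** math.exp(a)
            total += math.log(p if tox else 1 - p)
        return total
    return ll(ped_data) + discount * ll(adult_data)

def estimated_tox_probs(ped_data, adult_data, skeleton, discount):
    """Grid-search MLE of the model parameter, then plug-in DLT estimates."""
    grid = [i / 100.0 - 2.0 for i in range(401)]        # a in [-2, 2]
    a_hat = max(grid, key=lambda a: discounted_loglik(
        a, ped_data, adult_data, skeleton, discount))
    return [s ** math.exp(a_hat) for s in skeleton]

skeleton = [0.05, 0.10, 0.20, 0.30]
ped = [(0, 0), (0, 0), (1, 0)]          # no pediatric DLTs so far
adult = [(2, 1), (2, 1), (2, 1)]        # adult DLTs observed at dose 3
ignore_adult = estimated_tox_probs(ped, adult, skeleton, discount=0.0)
borrow_adult = estimated_tox_probs(ped, adult, skeleton, discount=1.0)
```

With full borrowing, the adult DLTs pull the pediatric toxicity estimates up; with the discount at zero, the sparse pediatric data alone drive the fit.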

10.
In oncology, single‐arm two‐stage designs with binary endpoint are widely applied in phase II for the development of cytotoxic cancer therapies. Simon's optimal design with prefixed sample sizes in both stages minimizes the expected sample size under the null hypothesis and is one of the most popular designs. The search algorithms that are currently used to identify phase II designs showing prespecified characteristics are computationally intensive. For this reason, most authors impose restrictions on their search procedure. However, it remains unclear to what extent this approach influences the optimality of the resulting designs. This article describes an extension to fixed sample size phase II designs by allowing the sample size of stage two to depend on the number of responses observed in the first stage. Furthermore, we present a more efficient numerical algorithm that allows for an exhaustive search of designs. Comparisons between designs presented in the literature and the proposed optimal adaptive designs show that while the improvements are generally moderate, notable reductions in the average sample size can be achieved for specific parameter constellations when applying the new method and search strategy.

11.
Bayesian methods allow borrowing of historical information through prior distributions. The concept of prior effective sample size (prior ESS) facilitates quantification and communication of such prior information by equating it to a sample size. Prior information can arise from historical observations; thus, the traditional approach identifies the ESS with such a historical sample size. However, this measure is independent of newly observed data, and thus would not capture an actual “loss of information” induced by the prior in case of prior-data conflict. We build on a recent work to relate prior impact to the number of (virtual) samples from the current data model and introduce the effective current sample size (ECSS) of a prior, tailored to the application in Bayesian clinical trial designs. Special emphasis is put on robust mixture, power, and commensurate priors. We apply the approach to an adaptive design in which the number of recruited patients is adjusted depending on the effective sample size at an interim analysis. We argue that the ECSS is the appropriate measure in this case, as the aim is to save current (as opposed to historical) patients from recruitment. Furthermore, the ECSS can help overcome lack of consensus in the ESS assessment of mixture priors and can, more broadly, provide further insights into the impact of priors. An R package accompanies the paper.
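For context, the traditional ESS the abstract contrasts against is easy to compute for conjugate and mixture priors; a sketch for beta priors follows (this is the classical moment-matching measure, not the data-dependent ECSS the paper introduces, and the prior parameters are illustrative):

```python
def beta_ess(a, b):
    """Traditional ESS of a Beta(a, b) prior: a + b pseudo-observations."""
    return a + b

def mixture_ess(weights, params):
    """Moment-matching ESS of a beta mixture: match the mixture's mean and
    variance to a single Beta(alpha, beta) and read off
    alpha + beta = m * (1 - m) / v - 1."""
    means = [a / (a + b) for a, b in params]
    seconds = [a * (a + 1) / ((a + b) * (a + b + 1)) for a, b in params]
    m = sum(w * mu for w, mu in zip(weights, means))
    v = sum(w * s for w, s in zip(weights, seconds)) - m * m
    return m * (1 - m) / v - 1

# A robust mixture: 90% informative Beta(40, 60), 10% vague Beta(1, 1).
robust_ess = mixture_ess([0.9, 0.1], [(40, 60), (1, 1)])
```

Note how adding a small vague component collapses the moment-matched ESS far below the informative component's 100 pseudo-observations; the paper argues this kind of disagreement among ESS assessments is one place the ECSS helps.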

12.
We propose an adaptive two-stage Bayesian design for finding one or more acceptable dose combinations of two cytotoxic agents used together in a Phase I clinical trial. The method requires that each of the two agents has been studied previously as a single agent, which is almost invariably the case in practice. A parametric model is assumed for the probability of toxicity as a function of the two doses. Informative priors for parameters characterizing the single-agent toxicity probability curves are either elicited from the physician(s) planning the trial or obtained from historical data, and vague priors are assumed for parameters characterizing two-agent interactions. A method for eliciting the single-agent parameter priors is described. The design is applied to a trial of gemcitabine and cyclophosphamide, and a simulation study is presented.
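The role of an interaction parameter on top of known single-agent curves can be illustrated with a toy model (this is not the paper's parametric model; the independent-action baseline, the odds-scale interaction term, and the probabilities below are illustrative assumptions):

```python
import math

def combo_tox_prob(p1, p2, gamma=0.0):
    """Toy combination-toxicity model: start from the independent-action
    probability p1 + p2 - p1*p2, then tilt it on the odds scale by an
    interaction parameter (gamma = 0: no interaction; gamma > 0:
    synergistic toxicity).  Assumes p1, p2 < 1."""
    base = p1 + p2 - p1 * p2
    odds = base / (1 - base) * math.exp(gamma)
    return odds / (1 + odds)

no_interaction = combo_tox_prob(0.10, 0.20)        # 0.1 + 0.2 - 0.02
synergy = combo_tox_prob(0.10, 0.20, gamma=1.0)
```

This mirrors the paper's prior structure in spirit: the single-agent pieces (p1, p2) are well informed by earlier trials, while the interaction (gamma) is the poorly known quantity that gets a vague prior.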

13.
A common assumption of data analysis in clinical trials is that the patient population, as well as treatment effects, do not vary during the course of the study. However, when trials enroll patients over several years, this assumption may be violated. Ignoring variations of the outcome distributions over time, under the control and experimental treatments, can lead to biased treatment effect estimates and poor control of false positive results. We propose and compare two procedures that account for possible variations of the outcome distributions over time, to correct treatment effect estimates, and to control type-I error rates. The first procedure models trends of patient outcomes with splines. The second leverages conditional inference principles, which have been introduced to analyze randomized trials when patient prognostic profiles are unbalanced across arms. These two procedures are applicable in response-adaptive clinical trials. We illustrate the consequences of trends in the outcome distributions in response-adaptive designs and in platform trials, and investigate the proposed methods in the analysis of a glioblastoma study.
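The bias from a calendar-time trend, and the effect of modeling it, can be seen in a minimal sketch (the paper uses spline trend models and conditional inference; a straight-line trend fitted to the control arm stands in for both here, and the data are synthetic):

```python
from statistics import mean

def detrended_effect(times, outcomes, arms):
    """Fit a straight-line calendar-time trend to control-arm outcomes by
    least squares, subtract it from every observation, then compare arm
    means.  `arms` holds 0 (control) / 1 (experimental)."""
    ctrl = [(t, y) for t, y, a in zip(times, outcomes, arms) if a == 0]
    tbar = mean(t for t, _ in ctrl)
    ybar = mean(y for _, y in ctrl)
    slope = (sum((t - tbar) * (y - ybar) for t, y in ctrl)
             / sum((t - tbar) ** 2 for t, _ in ctrl))
    resid = [y - slope * t for t, y in zip(times, outcomes)]
    return (mean(r for r, a in zip(resid, arms) if a == 1)
            - mean(r for r, a in zip(resid, arms) if a == 0))

# Outcomes drift upward over accrual time (slope 0.5); the true treatment
# effect is 2.0, but the naive arm-mean difference here is 4.0 because the
# experimental arm happens to enroll at later times.
times = list(range(8))
arms = [0, 1, 0, 1, 0, 1, 0, 1]
outcomes = [0.5 * t + 2.0 * a for t, a in zip(times, arms)]
effect = detrended_effect(times, outcomes, arms)
```

In response-adaptive and platform trials the arm-by-time imbalance arises by design rather than by accident, which is why the paper's corrections matter there.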

14.
The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as 'valid.' However, little consideration has been given to how a trial that utilizes a newly validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multitrial historical information on the validated relationship between the surrogate and clinical endpoints, then subsequently evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively, perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O'Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly validated surrogate endpoint for overall survival.
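A crude stand-in for the "is the surrogate still plausible?" check is a regression band around the historical surrogate-clinical relationship (the paper's check is fully Bayesian and adaptive; the four effect pairs, the k = 2 band, and the least-squares fit below are all illustrative assumptions):

```python
def trust_surrogate(hist_pairs, new_surr, new_clin, k=2.0):
    """Regress historical clinical effects on surrogate effects, and keep
    trusting the surrogate in the new trial only while its observed
    (surrogate, clinical) effect pair stays within k residual SDs of the
    historical fit."""
    n = len(hist_pairs)
    xbar = sum(x for x, _ in hist_pairs) / n
    ybar = sum(y for _, y in hist_pairs) / n
    sxx = sum((x - xbar) ** 2 for x, _ in hist_pairs)
    slope = sum((x - xbar) * (y - ybar) for x, y in hist_pairs) / sxx
    intercept = ybar - slope * xbar
    resid_sd = (sum((y - (intercept + slope * x)) ** 2
                    for x, y in hist_pairs) / n) ** 0.5
    return abs(new_clin - (intercept + slope * new_surr)) <= k * resid_sd

# The 18 historical colon-cancer trials would go here; four made-up
# (surrogate effect, clinical effect) pairs serve as illustration:
hist = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]
```

A new trial whose joint effects sit on the historical line keeps the surrogate as primary; one whose clinical effect falls far off the line triggers the switch back to the original endpoint.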

15.
This paper proposes a two-stage phase I-II clinical trial design to optimize dose-schedule regimes of an experimental agent within ordered disease subgroups in terms of the toxicity-efficacy trade-off. The design is motivated by settings where prior biological information indicates it is certain that efficacy will improve with ordinal subgroup level. We formulate a flexible Bayesian hierarchical model to account for associations among subgroups and regimes, and to characterize ordered subgroup effects. Sequentially adaptive decision-making is complicated by the problem, arising from the motivating application, that efficacy is scored on day 90 and toxicity is evaluated within 30 days from the start of therapy, while the patient accrual rate is fast relative to these outcome evaluation intervals. To deal with this in a practical manner, we take a likelihood-based approach that treats unobserved toxicity and efficacy outcomes as missing values, and use elicited utilities that quantify the efficacy-toxicity trade-off as a decision criterion. Adaptive randomization is used to assign patients to regimes while accounting for subgroups, with randomization probabilities depending on the posterior predictive distributions of utilities. A simulation study is presented to evaluate the design's performance under a variety of scenarios, and to assess its sensitivity to the amount of missing data, the prior, and model misspecification.

16.
In the management of most chronic conditions characterized by the lack of universally effective treatments, adaptive treatment strategies (ATSs) have grown in popularity as they offer a more individualized approach. As a result, sequential multiple assignment randomized trials (SMARTs) have gained attention as the most suitable clinical trial design to formalize the study of these strategies. While the number of SMARTs has increased in recent years, sample size and design considerations have generally been carried out in frequentist settings. However, standard frequentist formulae require assumptions on interim response rates and variance components. Misspecifying these can lead to incorrect sample size calculations and correspondingly inadequate levels of power. The Bayesian framework offers a straightforward path to alleviate some of these concerns. In this paper, we provide calculations in a Bayesian setting to allow more realistic and robust estimates that account for uncertainty in inputs through the ‘two priors’ approach. Additionally, compared to the standard frequentist formulae, this methodology allows us to rely on fewer assumptions, integrate pre-trial knowledge, and switch the focus from the standardized effect size to the minimal detectable difference (MDD). The proposed methodology is evaluated in a thorough simulation study and is implemented to estimate the sample size for a full-scale SMART of an internet-based adaptive stress management intervention in cardiovascular disease patients using data from its pilot study conducted in two Canadian provinces.
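The core of propagating input uncertainty into a sample-size calculation can be sketched with a simple hybrid assurance computation (the paper's SMART machinery with interim response rates and the full 'two priors' construction is richer; the two-arm z-test, the design prior Normal(0.3, 0.1), and all numbers below are illustrative assumptions):

```python
from statistics import NormalDist
import random

def bayesian_assurance(n_per_arm, prior_mu, prior_sd, sigma=1.0,
                       alpha=0.05, n_sim=20000, seed=1):
    """Average the frequentist power of a one-sided two-arm z-test over a
    normal design prior on the effect size, instead of fixing one assumed
    effect.  Returns the expected (prior-averaged) power."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha)
    se = sigma * (2.0 / n_per_arm) ** 0.5
    total = 0.0
    for _ in range(n_sim):
        delta = rng.gauss(prior_mu, prior_sd)   # uncertain true effect
        total += 1 - NormalDist().cdf(z_crit - delta / se)
    return total / n_sim

small = bayesian_assurance(n_per_arm=20, prior_mu=0.3, prior_sd=0.1)
large = bayesian_assurance(n_per_arm=100, prior_mu=0.3, prior_sd=0.1)
```

Searching over n for the smallest sample size whose assurance clears a target (say 0.8) gives a sample-size rule that honestly reflects uncertainty in the inputs, which is the spirit of the paper's proposal.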

17.
One of the primary objectives of an oncology dose-finding trial for novel therapies, such as molecular-targeted agents and immune-oncology therapies, is to identify an optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. These new therapeutic agents appear more likely to induce multiple low or moderate-grade toxicities than dose-limiting toxicities. For efficacy, it is also preferable to evaluate overall response and long-term stable disease in solid tumors, and to distinguish complete from partial remission in lymphoma. It is also essential to accelerate early-stage trials to shorten the entire period of drug development. However, it is often challenging to make real-time adaptive decisions due to late-onset outcomes, fast accrual rates, and differences in outcome evaluation periods for efficacy and toxicity. To address these issues, we propose a time-to-event generalized Bayesian optimal interval design to accelerate dose finding, accounting for efficacy and toxicity grades. The new design, named “TITE-gBOIN-ET,” is model-assisted and straightforward to implement in actual oncology dose-finding trials. Simulation studies show that the TITE-gBOIN-ET design significantly shortens the trial duration compared with the designs without sequential enrollment while having comparable or higher performance in the percentage of correct OD selection and the average number of patients allocated to the ODs across various realistic settings.

18.
Although there are several new designs for phase I cancer clinical trials including the continual reassessment method and accelerated titration design, the traditional algorithm-based designs, like the '3 + 3' design, are still widely used because of their practical simplicity. In this paper, we study some key statistical properties of the traditional algorithm-based designs in a general framework and derive the exact formulae for the corresponding statistical quantities. These quantities are important for the investigator to gain insights regarding the design of the trial, and are (i) the probability of a dose being chosen as the maximum tolerated dose (MTD); (ii) the expected number of patients treated at each dose level; (iii) target toxicity level (i.e. the expected dose-limiting toxicity (DLT) incidences at the MTD); (iv) expected DLT incidences at each dose level and (v) expected overall DLT incidences in the trial. Real examples of clinical trials are given, and a computer program to do the calculation can be found at the authors' website (http://www2.umdnj.edu/~linyo).
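The quantities in (i) can be approximated by brute force for intuition (the paper derives them exactly; this is a Monte Carlo sketch of a simplified '3+3' variant without dose de-escalation, and the true toxicity curve is made up):

```python
import random

def simulate_3plus3(true_tox, n_trials=5000, seed=7):
    """Simulate a simplified '3+3' rule: escalate on 0/3 DLTs, expand to six
    on 1/3 and escalate only on 1/6, stop otherwise.  Returns the empirical
    MTD distribution; index 0 of the result means the trial stopped with no
    tolerated dose, index d means dose d was declared the MTD."""
    rng = random.Random(seed)
    counts = [0] * (len(true_tox) + 1)
    for _ in range(n_trials):
        dose, mtd = 0, -1
        while dose < len(true_tox):
            dlt = sum(rng.random() < true_tox[dose] for _ in range(3))
            if dlt >= 2:
                break
            if dlt == 1 and sum(rng.random() < true_tox[dose]
                                for _ in range(3)) >= 1:
                break
            mtd = dose          # dose cleared (0/3 or 1/6 DLTs)
            dose += 1
        counts[mtd + 1] += 1
    return [c / n_trials for c in counts]

mtd_dist = simulate_3plus3([0.05, 0.10, 0.20, 0.40, 0.60])
```

The same machinery (tracking patients per dose and DLT counts per trial) yields simulation analogues of quantities (ii)-(v); the paper's contribution is that none of this simulation is needed, since exact formulae exist.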

19.
There is growing interest in integrated Phase I/II oncology clinical trials involving molecularly targeted agents (MTA). One of the main challenges of these trials is nontrivial dose–efficacy relationships and administration of MTAs in combination with other agents. While some designs were recently proposed for such Phase I/II trials, the majority of them consider the case of binary toxicity and efficacy endpoints only. At the same time, a continuous efficacy endpoint can carry more information about the agent's mechanism of action, but corresponding designs have received very limited attention in the literature. In this work, an extension of a recently developed information‐theoretic design for the case of a continuous efficacy endpoint is proposed. The design transforms the continuous outcome using the logistic transformation and uses an information–theoretic argument to govern selection during the trial. The performance of the design is investigated in settings of single‐agent and dual‐agent trials. It is found that the novel design leads to substantial improvements in operating characteristics compared to a model‐based alternative under scenarios with nonmonotonic dose/combination–efficacy relationships. The robustness of the design to missing/delayed efficacy responses and to the correlation in toxicity and efficacy endpoints is also investigated.
