Similar Literature
20 similar records found.
1.
Wages NA, Conaway MR, O'Quigley J. Biometrics 2011;67(4):1555-1563.
Much of the statistical methodology underlying the experimental design of phase 1 trials in oncology is intended for studies involving a single cytotoxic agent. The goal of these studies is to estimate the maximum tolerated dose, the highest dose that can be administered with an acceptable level of toxicity. A fundamental assumption of these methods is monotonicity of the dose-toxicity curve. This is a reasonable assumption for single-agent trials, in which greater doses of the agent can be expected to produce dose-limiting toxicities in increasing proportions of patients. When studying multiple agents, the assumption may not hold, because the ordering of the toxicity probabilities may be unknown for several of the available drug combinations. At the same time, some of the orderings are known, so we describe the whole situation as a partial ordering. In this article, we propose a new two-dimensional dose-finding method for multiple-agent trials that simplifies to the continual reassessment method (CRM), introduced by O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33-48), when the ordering is fully known. This design enables us to relax the assumption of a monotonic dose-toxicity curve. Through simulation, we compare our approach to a CRM design in which the ordering is fully known, as well as to other proposals for partial orders.
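As an illustration of the CRM machinery this design reduces to when the ordering is known, the following is a minimal sketch of one Bayesian update under the empiric power model, assuming a hypothetical skeleton, hypothetical trial data, and the usual normal prior on the model parameter; it is not the authors' code.

```python
import numpy as np
from scipy import integrate

# Minimal CRM sketch: empiric "power" model p_j = skeleton_j ** exp(a),
# a ~ N(0, sigma^2). Skeleton values and trial data below are hypothetical.
skeleton = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # prior guesses of DLT rates
n = np.array([3, 3, 6, 0, 0])       # patients treated per dose
tox = np.array([0, 1, 2, 0, 0])     # dose-limiting toxicities observed
sigma = np.sqrt(1.34)
target = 0.25                       # target DLT probability

def likelihood(a):
    p = skeleton ** np.exp(a)
    return np.prod(p ** tox * (1 - p) ** (n - tox))

def prior(a):
    return np.exp(-a ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

norm_const, _ = integrate.quad(lambda a: likelihood(a) * prior(a), -10, 10)
post_mean_a, _ = integrate.quad(lambda a: a * likelihood(a) * prior(a), -10, 10)
post_mean_a /= norm_const

p_hat = skeleton ** np.exp(post_mean_a)            # plug-in toxicity estimates
next_dose = int(np.argmin(np.abs(p_hat - target)))
print(p_hat.round(3), "-> recommend dose index", next_dose)
```

The partial-ordering extension described in the abstract maintains one such working model per candidate ordering and weights the orderings by posterior model probability; that bookkeeping is omitted here.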

2.
Gasparini M, Eisele J. Biometrics 2000;56(2):609-615.
Consider the problem of finding the dose that is as high as possible subject to a controlled rate of toxicity. The problem is commonplace in oncology Phase I clinical trials. Such a dose is often called the maximum tolerated dose (MTD), since it represents a necessary trade-off between efficacy and toxicity. The continual reassessment method (CRM) is an improvement over traditional up-and-down schemes for estimating the MTD. It is based on a Bayesian approach and on the assumption that the dose-toxicity relationship follows a specific response curve, e.g., the logistic or power curve. The purpose of this paper is to illustrate how the assumption of a specific curve used in the CRM is not necessary and can actually hinder the efficient use of prior inputs. An alternative curve-free method is presented, in which the probabilities of toxicity are modeled directly as an unknown multidimensional parameter. To that end, a product-of-beta prior (PBP) is introduced and shown to bring about logical improvements. Practical improvements are illustrated by simulation results.
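A minimal sketch of how a product-of-beta construction yields monotone toxicity probabilities by design; the Beta parameters are hypothetical, not the paper's elicited prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "survival" factor q_i ~ Beta(a_i, b_i) independently, and the toxicity
# probability at dose j is p_j = 1 - prod_{i<=j} q_i, monotone in j by
# construction. Parameters below are illustrative only.
a = np.array([18.0, 9.0, 6.0, 4.0, 3.0])
b = np.ones(5)

q = rng.beta(a, b, size=(100_000, 5))
p = 1.0 - np.cumprod(q, axis=1)       # prior draws of (p_1, ..., p_5)

print("prior mean toxicity per dose:", p.mean(axis=0).round(3))
# Given binomial toxicity data, the factors remain conditionally conjugate,
# so posterior sampling stays simple (e.g., via Gibbs steps).
```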

3.
Yin G, Li Y, Ji Y. Biometrics 2006;62(3):777-787.
A Bayesian adaptive design is proposed for dose-finding in phase I/II clinical trials that incorporates the bivariate outcomes, toxicity and efficacy, of a new treatment. Without specifying any parametric functional form for the drug dose-response curve, we jointly model the bivariate binary data to account for the correlation between toxicity and efficacy. After observing all the responses of each cohort of patients, the dosage for the next cohort is escalated, de-escalated, or left unchanged according to the proposed odds-ratio criteria constructed from the posterior toxicity and efficacy probabilities. A novel class of prior distributions is proposed through logit transformations; it implicitly imposes a monotonic constraint on the dose-toxicity probabilities and correlates the probabilities of the bivariate outcomes. We conduct simulation studies to evaluate the operating characteristics of the proposed method. Under various scenarios, the new Bayesian design based on toxicity-efficacy odds-ratio trade-offs exhibits good properties and treats most patients at the desirable dose levels. The method is illustrated with a real trial design for a breast medical oncology study.
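The paper's joint model is not reproduced here; purely as an illustration of odds-ratio-based escalation logic, the sketch below scores one cohort using independent Beta posteriors and hypothetical counts and cutoffs.

```python
import numpy as np

# Hypothetical cohort outcomes; the paper models toxicity and efficacy
# jointly, whereas this sketch uses independent Beta(0.5, 0.5) posteriors.
n = 6
tox, eff = 1, 3
post_tox = (0.5 + tox) / (1 + n)     # posterior mean of P(toxicity)
post_eff = (0.5 + eff) / (1 + n)     # posterior mean of P(efficacy)

def odds(p):
    return p / (1 - p)

trade_off = odds(post_tox) / odds(post_eff)  # smaller is more desirable

if post_tox > 0.33:                  # hypothetical safety cutoff
    decision = "de-escalate"
elif trade_off < 0.5:                # hypothetical trade-off cutoff
    decision = "escalate"
else:
    decision = "stay"
print(round(trade_off, 3), decision)
```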

4.
A Bayesian design is proposed for randomized phase II clinical trials that screen multiple experimental treatments against an active control on the basis of ordinal categorical toxicity and response. The underlying model and design account for patient heterogeneity characterized by ordered prognostic subgroups. All decision criteria are subgroup specific, including interim rules for dropping unsafe or ineffective treatments and criteria for selecting optimal treatments at the end of the trial. The design requires an elicited utility function of the two outcomes that varies with the subgroups. Final treatment selections are based on posterior mean utilities. The methodology is illustrated by the trial of targeted agents for metastatic renal cancer that motivated it. In the context of this application, the design is evaluated by computer simulation, including comparison to three alternatives: conducting separate trials within subgroups, conducting one trial while ignoring subgroups, and basing treatment selection on estimated response rates while ignoring toxicity.
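A minimal sketch of posterior-mean-utility selection, assuming a hypothetical elicited utility table over ordinal (toxicity, response) cells and Dirichlet posteriors on the cell probabilities; the subgroup-specific bookkeeping is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Rows index toxicity (low, high); columns index response (none, PR, CR).
# Utilities and outcome counts below are hypothetical.
U = np.array([[10, 60, 100],
              [ 0, 30,  65]])
counts = {"A": np.array([[4, 3, 2], [2, 1, 0]]),
          "B": np.array([[2, 2, 4], [1, 2, 1]])}

def posterior_mean_utility(c, prior=0.5, ndraw=20_000):
    alpha = (c + prior).ravel()                  # Dirichlet posterior params
    p = rng.dirichlet(alpha, size=ndraw)         # draws of cell probabilities
    return (p * U.ravel()).sum(axis=1).mean()    # posterior mean utility

for trt, c in counts.items():
    print(trt, round(posterior_mean_utility(c), 1))
```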

5.
Some methods of statistical analysis of data on DNA fingerprinting suffer from serious weaknesses. Unlinked Mendelizing loci that are at linkage equilibrium in subpopulations may be statistically associated, not statistically independent, in the population as a whole if there is heterogeneity in gene frequencies between subpopulations. In the populations where DNA fingerprinting is used for forensic applications, the assumption that DNA fragments occur statistically independently for different probes, different loci, or different fragment size classes lacks supporting data so far; there is some contrary evidence. Statistical association of alleles may cause estimates based on the assumption of statistical independence to understate the true matching probabilities by many orders of magnitude. The assumptions that DNA fragments occur independently and with constant frequency within a size class appear to be contradicted by the available data on the mean and variance of the number of fragments per person. The mistaken use of the geometric mean instead of the arithmetic mean to compute the probability that every DNA fragment of a randomly chosen person is present among the DNA fragments of a specimen may substantially understate the probability of a match between blots, even if the other assumptions involved in the calculations are taken as correct. The conclusion is that some astronomically small probabilities of matching by chance, which have been claimed in forensic applications of DNA fingerprinting, presently lack substantial empirical and theoretical support.
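The geometric-mean error is easy to quantify. A worked illustration with hypothetical subpopulation frequencies: the population match probability is the arithmetic mean of the per-person full-match probabilities, and replacing it with a geometric-mean calculation understates it by orders of magnitude whenever the probabilities are heterogeneous (Jensen's inequality).

```python
import numpy as np

# Hypothetical numbers: per-fragment match probabilities differ between two
# subpopulations, and a match requires all 10 fragments.
p = np.array([0.50, 0.01])   # per-fragment match probability per subpopulation
w = np.array([0.5, 0.5])     # subpopulation weights
k = 10                       # fragments that must all match

correct = np.sum(w * p**k)   # E[p^k]: arithmetic mean of full-match probs
geo = np.prod(p**w)          # geometric mean of p across subpopulations
understated = geo**k

print(f"true match probability:      {correct:.2e}")      # ~4.9e-04
print(f"geometric-mean calculation:  {understated:.2e}")  # ~3.1e-12
```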

7.
Yin G, Yuan Y. Biometrics 2009;65(3):866-875.
Two-agent combination trials have recently attracted enormous attention in oncology research. There are several strong motivations for combining different agents in a treatment: to induce a synergistic treatment effect, to increase the dose intensity with nonoverlapping toxicities, and to target different tumor cell susceptibilities. To accommodate this growing trend in clinical trials, we propose a Bayesian adaptive design for dose finding based on latent 2 × 2 tables. In the search for the maximum tolerated dose combination, we continuously update the posterior estimates of the unknown parameters associated with the marginal probabilities and the correlation parameter based on the data from successive patients. By reordering the dose-toxicity probabilities in the two-dimensional space, we assign each incoming cohort of patients to the most appropriate dose combination. We conduct extensive simulation studies to examine the operating characteristics of the proposed method under various practical scenarios. Finally, we illustrate our dose-finding procedure with a clinical trial of agent combinations at M. D. Anderson Cancer Center.

8.
There is growing interest in integrated Phase I/II oncology clinical trials involving molecularly targeted agents (MTAs). Two of the main challenges of these trials are nontrivial dose-efficacy relationships and the administration of MTAs in combination with other agents. While some designs have recently been proposed for such Phase I/II trials, the majority consider only binary toxicity and efficacy endpoints. Yet a continuous efficacy endpoint can carry more information about the agent's mechanism of action, and the corresponding designs have received very limited attention in the literature. In this work, an extension of a recently developed information-theoretic design to the case of a continuous efficacy endpoint is proposed. The design transforms the continuous outcome using the logistic transformation and uses an information-theoretic argument to govern dose selection during the trial. The performance of the design is investigated in settings of single-agent and dual-agent trials. The novel design is found to yield substantial improvements in operating characteristics compared with a model-based alternative under scenarios with nonmonotonic dose/combination-efficacy relationships. The robustness of the design to missing or delayed efficacy responses and to correlation between the toxicity and efficacy endpoints is also investigated.
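A minimal sketch of the transformation step only: a continuous efficacy readout is mapped to (0, 1) with a logistic transformation before a selection criterion is applied. The centering and scale constants are hypothetical placeholders, not the paper's calibration.

```python
import numpy as np

def logistic_transform(y, center=0.0, scale=1.0):
    """Map a continuous efficacy outcome onto a (0, 1) 'success' scale."""
    return 1.0 / (1.0 + np.exp(-(y - center) / scale))

y_obs = np.array([-1.2, 0.3, 0.9, 2.5])   # hypothetical continuous responses
eff = logistic_transform(y_obs)
print(eff.round(3))   # pseudo-probabilities fed to the selection criterion
```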

9.
One of the primary objectives of an oncology dose-finding trial for novel therapies, such as molecularly targeted agents and immuno-oncology therapies, is to identify an optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. These new therapeutic agents appear more likely to induce multiple low- or moderate-grade toxicities than dose-limiting toxicities. For efficacy, moreover, it is preferable to evaluate overall response and long-term stable disease in solid tumors, and to distinguish between complete and partial remission in lymphoma. It is also essential to accelerate early-stage trials to shorten the overall period of drug development. However, it is often challenging to make real-time adaptive decisions because of late-onset outcomes, fast accrual rates, and differences in the outcome evaluation periods for efficacy and toxicity. To address these issues, we propose a time-to-event generalized Bayesian optimal interval design that accelerates dose finding while accounting for efficacy and toxicity grades. The new design, named "TITE-gBOIN-ET," is model-assisted and straightforward to implement in actual oncology dose-finding trials. Simulation studies show that the TITE-gBOIN-ET design significantly shortens the trial duration compared with designs without sequential enrollment, while having comparable or higher performance in the percentage of correct OD selection and the average number of patients allocated to the OD across various realistic settings.
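The interval comparison at the core of BOIN-type model-assisted designs can be sketched with the standard BOIN boundary formulas; the published TITE-gBOIN-ET adds toxicity/efficacy grades and time-to-event weighting on top of this. Target, tolerances, and data below are hypothetical.

```python
import numpy as np

target = 0.30
phi1, phi2 = 0.6 * target, 1.4 * target   # under/over-dosing bounds

# Standard BOIN escalation/de-escalation boundaries.
lambda_e = np.log((1 - phi1) / (1 - target)) / np.log(
    target * (1 - phi1) / (phi1 * (1 - target)))
lambda_d = np.log((1 - target) / (1 - phi2)) / np.log(
    phi2 * (1 - target) / (target * (1 - phi2)))

n, tox = 9, 2                 # patients and DLT-equivalents at current dose
p_hat = tox / n
if p_hat <= lambda_e:
    decision = "escalate"
elif p_hat >= lambda_d:
    decision = "de-escalate"
else:
    decision = "stay"
print(round(lambda_e, 3), round(lambda_d, 3), decision)
```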

10.
We propose an adaptive two-stage Bayesian design for finding one or more acceptable dose combinations of two cytotoxic agents used together in a Phase I clinical trial. The method requires that each of the two agents has been studied previously as a single agent, which is almost invariably the case in practice. A parametric model is assumed for the probability of toxicity as a function of the two doses. Informative priors for the parameters characterizing the single-agent toxicity probability curves are either elicited from the physicians planning the trial or obtained from historical data, and vague priors are assumed for the parameters characterizing two-agent interactions. A method for eliciting the single-agent parameter priors is described. The design is applied to a trial of gemcitabine and cyclophosphamide, and a simulation study is presented.
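The authors' specific parametric model is not reproduced here; the sketch below shows a generic two-agent toxicity surface built from logistic single-agent curves plus a single hypothetical interaction parameter, to illustrate the structure such models share.

```python
import numpy as np

def single_agent_prob(dose, alpha, beta):
    """Logistic single-agent toxicity curve on the log-dose scale."""
    return 1.0 / (1.0 + np.exp(-(alpha + beta * np.log(dose))))

def combo_tox_prob(d1, d2, theta1, theta2, gamma):
    p1 = single_agent_prob(d1, *theta1)
    p2 = single_agent_prob(d2, *theta2)
    # No-interaction baseline is 1 - (1-p1)(1-p2); gamma > 0 adds synergy.
    q = (1 - p1) * (1 - p2) * np.exp(-gamma * d1 * d2)
    return 1 - q

# All parameter values are hypothetical.
print(round(combo_tox_prob(1.0, 0.5, (-2.0, 1.0), (-1.5, 0.8), 0.2), 3))
```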

11.
The current drug development pathway in oncology research has led to a large attrition rate for new drugs, in part because of a general lack of appropriate preclinical studies capable of accurately predicting efficacy and/or toxicity in the target population. Because of an obvious need for novel therapeutics in many types of cancer, new compounds are being investigated in human Phase I and Phase II clinical trials before a complete understanding of their toxicity and efficacy profiles is obtained. In fact, for newer targeted molecular agents, which are often cytostatic in nature, the conventional preclinical evaluation used for traditional cytotoxic chemotherapies, with primary tumor shrinkage as an endpoint, may not be appropriate. By using an integrated pharmacokinetic/pharmacodynamic approach, along with proper selection of a model system, the drug development process in oncology research may be improved, leading to a better understanding of the determinants of efficacy and toxicity and, ultimately, fewer drugs that fail once they reach human clinical trials.

12.
Huang X, Biswas S, Oki Y, Issa JP, Berry DA. Biometrics 2007;63(2):429-436.
The use of multiple drugs in a single clinical trial or as a therapeutic strategy has become common, particularly in the treatment of cancer. Because traditional trials are designed to evaluate one agent at a time, the evaluation of therapies in combination requires specialized trial designs. In place of the traditional separate phase I and II trials, we propose a parallel phase I/II clinical trial to evaluate the safety and efficacy of combination dose levels simultaneously and to select the optimal combination dose. The trial starts with an initial period of dose escalation; patients are then randomly assigned to admissible dose levels, which are compared with each other. Bayesian posterior probabilities are used in the randomization to adaptively assign more patients to doses with higher efficacy levels. Combination doses with lower efficacy are temporarily closed, and those with intolerable toxicity are eliminated from the trial. The trial is stopped if the posterior probability for safety, efficacy, or futility crosses a prespecified boundary. For illustration, we apply the design to a combination chemotherapy trial for leukemia. We use simulation studies to assess the operating characteristics of the parallel phase I/II trial design and compare it to a conventional design with separate standard phase I and phase II trials. The simulations show that the proposed design requires a smaller sample size, has better power, and efficiently assigns more patients to doses with higher efficacy levels.
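A minimal sketch of the adaptive randomization step, assuming Beta posteriors on per-dose efficacy and hypothetical counts; the safety monitoring and stopping rules described in the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outcomes at three admissible combination doses.
eff = np.array([2, 5, 7])    # responders per dose
n = np.array([8, 10, 11])    # patients treated per dose

# Posterior probability that each dose is the most efficacious (Monte Carlo),
# used directly as the randomization weights for the next patient.
draws = rng.beta(1 + eff[:, None], 1 + (n - eff)[:, None], size=(3, 10_000))
p_best = np.bincount(draws.argmax(axis=0), minlength=3) / 10_000

print("randomization probabilities:", p_best.round(3))
next_dose = rng.choice(3, p=p_best)
print("next patient -> dose", int(next_dose))
```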

13.
Drug combination trials are increasingly common in clinical research. However, very few methods have been developed to account for toxicity attributions in the dose escalation process. We are motivated by a trial in which the clinician is able to identify certain toxicities as attributable to one of the agents. We present a Bayesian adaptive design in which toxicity attributions are modeled via copula regression and the maximum tolerated dose (MTD) curve is estimated as a function of the model parameters. The dose escalation algorithm uses cohorts of two patients, following the continual reassessment method (CRM) scheme: at each stage of the trial, we search for the dose of one agent given the current dose of the other agent. The performance of the design is studied by evaluating its operating characteristics when the underlying model is either correctly specified or misspecified. We show that the method can be extended to accommodate discrete dose combinations.
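As an illustration of the copula idea (a Clayton-type copula is used here for concreteness; it is not necessarily the paper's choice), the sketch below joins two hypothetical marginal toxicity curves and locates a point on the target-toxicity contour, the object the MTD curve generalizes.

```python
import numpy as np

def clayton_combined_tox(p1, p2, delta):
    """P(toxicity from either agent) under a Clayton-type copula model."""
    s = (1 - p1) ** (-delta) + (1 - p2) ** (-delta) - 1
    return 1 - s ** (-1 / delta)

# Hypothetical marginal toxicity curves along a standardized dose range.
grid = np.linspace(0.01, 0.99, 99)
p1 = 0.4 * grid          # agent 1 toxicity rises with its dose
p2 = 0.3 * grid[::-1]    # agent 2 dose decreases along the same path
combined = clayton_combined_tox(p1, p2, delta=1.5)

target = 0.33
mtd_idx = np.argmin(np.abs(combined - target))   # point on the MTD contour
print(round(grid[mtd_idx], 2), round(combined[mtd_idx], 3))
```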

14.
Li Z. Biometrics 1999;55(1):277-283.
A method of interim monitoring is described for survival trials in which the proportional hazards assumption may not hold. The method extends the test statistics based on the cumulative weighted difference in Kaplan-Meier estimates (Pepe and Fleming, 1989, Biometrics 45, 497-507) to the sequential setting, providing a useful alternative to group sequential linear rank tests. With an appropriate weight function, the test statistic itself provides an estimator of the cumulative weighted difference in survival probabilities, an interpretable measure of the treatment difference, especially when the proportional hazards model fails. The method is illustrated with the design of a real trial, and its operating characteristics are studied through a small simulation.
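A minimal sketch of the unweighted version of the statistic, i.e., the integrated difference between two Kaplan-Meier curves, on simulated hypothetical data; the weight function and the sequential monitoring machinery are omitted.

```python
import numpy as np

def km(time, event, grid):
    """Bare-bones Kaplan-Meier estimate evaluated on a time grid."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    at_risk, s, surv = len(t), 1.0, []
    for ti, ei in zip(t, e):
        if ei:                       # event: multiply in the KM factor
            s *= 1 - 1 / at_risk
        at_risk -= 1                 # censored or not, one leaves the risk set
        surv.append((ti, s))
    times = np.array([u for u, _ in surv])
    vals = np.array([v for _, v in surv])
    idx = np.searchsorted(times, grid, side="right") - 1
    return np.where(idx >= 0, vals[np.clip(idx, 0, None)], 1.0)

rng = np.random.default_rng(2)
t1, t2 = rng.exponential(10, 50), rng.exponential(14, 50)  # hypothetical arms
e1, e2 = rng.random(50) < 0.8, rng.random(50) < 0.8        # ~20% censoring

grid = np.linspace(0, 20, 201)
diff = km(t2, e2, grid) - km(t1, e1, grid)
stat = np.sum(diff[:-1] * np.diff(grid))   # integrated KM difference
print(round(stat, 3))
```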

15.
Bayesian Nonparametric Nonproportional Hazards Survival Modeling
We develop a dependent Dirichlet process model for survival analysis data. A major feature of the proposed approach is that the resulting survival curve estimates need not satisfy the ubiquitous proportional hazards assumption. An illustration based on a cancer clinical trial is given, in which survival probabilities at times early in the study are estimated to be lower for those on a high-dose treatment regimen than for those on the low-dose treatment, while the reverse is true at later times, possibly because of the toxic effect of the high dose on those who are less healthy at the beginning of the study.

16.
A multiple toxicity model for the quantal response of organisms is constructed based on an existing bivariate theory. The main assumption is that the tolerances follow a multivariate normal distribution function. However, any monotone tolerance distribution can be used, by mapping the integration region in the n-dimensional space of transforms onto the n-dimensional space of normal equivalent deviates. General requirements for noninteractive bivariate tolerance distributions are discussed, and it is shown that bivariate logit and Weibull distributions constructed according to the mapping procedure meet these criteria. The univariate Weibull dose-response model is given a novel interpretation in terms of reactions between toxicant molecules and a hypothetical key receptor of the organism. The application of the multiple toxicity model is demonstrated using literature data on the action of gamma-benzene hexachloride and pyrethrins on flour beetles (Tribolium castaneum). Non-normal tolerance distributions are needed when the mortality data include extreme response probabilities.
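A minimal sketch of the bivariate-normal tolerance calculation: each dose is converted to a normal equivalent deviate via its univariate probit line, and the joint response probability is a bivariate normal probability with tolerance correlation rho. The probit intercepts/slopes and rho are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def quantal_response(d1, d2, probit1, probit2, rho):
    z1 = probit1[0] + probit1[1] * np.log10(d1)   # NED for toxicant 1
    z2 = probit2[0] + probit2[1] * np.log10(d2)   # NED for toxicant 2
    cov = [[1.0, rho], [rho, 1.0]]
    # P(response) = 1 - P(T1 > z1, T2 > z2) = 1 - Phi2(-z1, -z2) by symmetry.
    no_response = multivariate_normal(mean=[0, 0], cov=cov).cdf([-z1, -z2])
    return 1 - no_response

# All parameter values below are illustrative only.
print(round(quantal_response(5.0, 2.0, (-1.0, 1.2), (-0.5, 0.9), 0.3), 3))
```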

17.
A new method is proposed to derive the size of the interspecies uncertainty factor (UF) that is both toxicologically and statistically based. The method rests on the biological/evolutionary assumption that similarity in susceptibility to toxic substances is a function of phylogenetic relatedness. This assumption is assessed via a large and highly structured aquatic database with over 500 agents tested in specific binary toxicity comparisons (i.e., two species tested with the same chemical under identical conditions) for dozens of species of wide phylogenetic relatedness. The methodology takes into account the generic need to estimate a response in any species (not just humans) and the need to predict responses for new chemical agents. The method quantifies interspecies variation in susceptibility to numerous toxic substances via binary interspecies comparisons that are converted to a 95% UF. This interspecies UF estimates the upper 95% of the population of 95% prediction intervals (PI) for binary interspecies comparisons within four categories of phylogenetic relatedness (species-within-genus, genera-within-family, families-within-order, orders-within-class). The 95% interspecies UFs range from a low of 10 for species-within-genus up to 65 for orders-within-class. Most mammalian toxicology studies involving mice, rats, cats, dogs, gerbils, and rabbits fall in the orders-within-class category for human risk assessment and would be assigned a 65-fold UF. Larger or smaller interspecies UF values could be selected based on the level of protection desired. The procedures described apply to both human and ecological risk assessment.
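A minimal sketch of the prediction-interval calculation underlying such a UF, applied to hypothetical log10 toxicity ratios for one relatedness category; the paper's second layer (taking the upper 95% of the population of PIs) is omitted.

```python
import numpy as np
from scipy import stats

# Hypothetical log10 ratios of toxicity values from binary interspecies
# comparisons on the same chemicals (one relatedness category).
log_ratios = np.log10([1.2, 0.6, 2.5, 0.9, 3.1, 1.7, 0.4, 2.0, 1.1, 0.8])

n = len(log_ratios)
m, s = log_ratios.mean(), log_ratios.std(ddof=1)
t = stats.t.ppf(0.95, df=n - 1)
upper = m + t * s * np.sqrt(1 + 1 / n)   # one-sided 95% PI bound, log10 scale

print(f"illustrative 95% interspecies UF ~ {10 ** upper:.1f}")
```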

18.
Dose-finding based on efficacy-toxicity trade-offs
Thall PF, Cook JD. Biometrics 2004;60(3):684-693.
We present an adaptive Bayesian method for dose-finding in phase I/II clinical trials based on trade-offs between the probabilities of treatment efficacy and toxicity. The method accommodates either trinary or bivariate binary outcomes, as well as efficacy probabilities that may be nonmonotone in dose. Doses are selected for successive patient cohorts based on a set of efficacy-toxicity trade-off contours that partition the two-dimensional outcome probability domain. Priors are established by solving for hyperparameters that optimize the fit of the model to elicited mean outcome probabilities. For trinary outcomes, the new algorithm is compared to the method of Thall and Russell (1998, Biometrics 54, 251-264) by application to a trial of rapid treatment for ischemic stroke. The bivariate binary outcome case is illustrated by a trial of graft-versus-host disease treatment in allogeneic bone marrow transplantation. Computer simulations show that, under a wide range of dose-outcome scenarios, the new method has high probabilities of making correct decisions and treats most patients at doses with desirable efficacy-toxicity trade-offs.
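A minimal sketch of trade-off-based dose scoring: each dose's posterior mean (efficacy, toxicity) pair is checked against a hypothetical admissibility boundary and scored by distance to the ideal point (efficacy 1, toxicity 0), a simple stand-in for the paper's elicited contours.

```python
import numpy as np

def desirability(p_eff, p_tox, eff_min=0.20, tox_max=0.40):
    """Score a dose; inadmissible doses get -inf. Cutoffs are hypothetical."""
    if p_eff < eff_min or p_tox > tox_max:
        return -np.inf
    return -np.hypot(1 - p_eff, p_tox)   # closer to (1, 0) is better

# Hypothetical posterior mean (efficacy, toxicity) pairs per dose.
doses = {"d1": (0.25, 0.10), "d2": (0.45, 0.25), "d3": (0.55, 0.45)}
scores = {d: desirability(*pq) for d, pq in doses.items()}
print(max(scores, key=scores.get), scores)   # d3 is screened out as too toxic
```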

19.
Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied so that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who have not yet had an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, in which a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies.

20.
Fan SK, Wang YG. Biometrics 2007;63(3):856-864.
The goal of this article is to provide a new design framework, and corresponding estimation methods, for phase I trials. Existing phase I designs assign each subject to one dose level based on responses from previous subjects. Yet subjects with neither toxicity nor efficacy responses can be treated at higher dose levels, and their subsequent responses to higher doses will provide more information. In addition, for some trials it may be possible to obtain multiple responses (repeated measures) from a subject at different dose levels. In this article, a nonparametric estimation method is developed for such studies. We also explore how designs with multiple doses per subject can be implemented to improve design efficiency. The gain in efficiency from "single dose per subject" to "multiple doses per subject" is evaluated for several scenarios. Our numerical study shows that using "multiple doses per subject" together with the proposed estimation method increases efficiency substantially.
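A minimal sketch of one nonparametric ingredient such designs can use: monotone smoothing of per-dose event rates by the pool-adjacent-violators algorithm (PAVA). It illustrates the flavor of nonparametric estimation from repeated per-dose observations, not the authors' exact estimator; the data are made up.

```python
import numpy as np

def pava(rates, weights):
    """Pool-adjacent-violators: weighted monotone (isotonic) smoothing."""
    r, w = list(map(float, rates)), list(map(float, weights))
    i = 0
    while i < len(r) - 1:
        if r[i] > r[i + 1]:          # adjacent violation: pool the two blocks
            r[i] = (w[i] * r[i] + w[i + 1] * r[i + 1]) / (w[i] + w[i + 1])
            w[i] += w[i + 1]
            del r[i + 1], w[i + 1]
            i = max(i - 1, 0)        # re-check the previous pair
        else:
            i += 1
    return r, w                      # pooled block values and weights

# Events / exposures per dose, counting each administered dose separately.
events = np.array([0, 2, 1, 4, 6])
exposed = np.array([10, 12, 9, 11, 10])
print(pava(events / exposed, exposed))
```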
