Similar Documents
20 similar documents found.
1.
Thall PF, Nguyen HQ, Estey EH. Biometrics 2008;64(4):1126-1136.
Summary: A Bayesian sequential dose-finding procedure based on bivariate (efficacy, toxicity) outcomes that accounts for patient covariates and dose-covariate interactions is presented. Historical data are used to obtain an informative prior on covariate main effects, with uninformative priors assumed for all dose effect parameters. Elicited limits on the probabilities of efficacy and toxicity for each of a representative set of covariate vectors are used to construct bounding functions that determine the acceptability of each dose for each patient. Elicited outcome probability pairs that are equally desirable for a reference patient are used to define two different posterior criteria, either of which may be used to select an optimal covariate-specific dose for each patient. Because the dose selection criteria are covariate specific, different patients may receive different doses at the same point in the trial, and the set of eligible patients may change adaptively during the trial. The method is illustrated by a dose-finding trial in acute leukemia, including a simulation study.

2.
Bekele BN, Shen Y. Biometrics 2005;61(2):343-354.
In this article, we propose a Bayesian approach to phase I/II dose-finding oncology trials by jointly modeling a binary toxicity outcome and a continuous biomarker expression outcome. We apply our method to a clinical trial of a new gene therapy for bladder cancer patients. In this trial, the biomarker expression indicates biological activity of the new therapy. For ethical reasons, the trial is conducted sequentially, with the dose for each successive patient chosen using both toxicity and activity data from patients previously treated in the trial. The modeling framework that we use naturally incorporates correlation between the binary toxicity and continuous activity outcome via a latent Gaussian variable. The dose-escalation/de-escalation decision rules are based on the posterior distributions of both toxicity and activity. A flexible state-space model is used to relate the activity outcome and dose. Extensive simulation studies show that the design reliably chooses the preferred dose using both toxicity and expression outcomes under various clinical scenarios.
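The latent-variable construction described above can be illustrated with a minimal simulation sketch: a bivariate normal pair induces correlation between a binary toxicity indicator (the first component exceeding a threshold) and a continuous activity outcome (a linear transform of the second). The threshold, correlation, and scale below are hypothetical values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_joint_outcomes(n, rho, tox_threshold, act_mean, act_sd):
    """Simulate correlated (toxicity, activity) outcomes via a latent Gaussian pair.

    The binary toxicity indicator is 1 when the first latent normal exceeds
    `tox_threshold`; the continuous activity outcome is a linear transform of
    the second, correlated latent normal.
    """
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    toxicity = (z[:, 0] > tox_threshold).astype(int)   # binary outcome
    activity = act_mean + act_sd * z[:, 1]             # continuous biomarker
    return toxicity, activity

tox, act = simulate_joint_outcomes(n=1000, rho=0.4, tox_threshold=1.0,
                                   act_mean=50.0, act_sd=10.0)
print(np.corrcoef(tox, act)[0, 1])  # induced correlation between the two outcomes
```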

3.
Summary: We propose a Bayesian dose-finding design that accounts for two important factors, the severity of toxicity and heterogeneity in patients' susceptibility to toxicity. We consider toxicity outcomes with various levels of severity and define appropriate scores for these severity levels. We then use a multinomial likelihood function and a Dirichlet prior to model the probabilities of these toxicity scores at each dose, and characterize the overall toxicity using an average toxicity score (ATS) parameter. To address the issue of heterogeneity in patients' susceptibility to toxicity, we categorize patients into different risk groups based on their susceptibility. A Bayesian isotonic transformation is applied to induce an order-restricted posterior inference on the ATS. We demonstrate the performance of the proposed dose-finding design using simulations based on a clinical trial in multiple myeloma.
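The multinomial-Dirichlet structure described above has a simple conjugate form: the posterior over the grade probabilities at a dose is Dirichlet(prior + observed counts), and the ATS is the score-weighted mean of those probabilities. The sketch below illustrates that step only; the severity scores, prior, and counts are hypothetical, and the isotonic transformation across risk groups is not shown.

```python
import numpy as np

# Hypothetical severity scores for toxicity grades 0-4 (not the paper's values).
severity_scores = np.array([0.0, 0.5, 1.0, 1.5, 2.0])

def posterior_average_toxicity_score(prior_alpha, observed_counts, n_draws=10_000,
                                     rng=np.random.default_rng(1)):
    """Posterior draws of the average toxicity score (ATS) at one dose.

    With a multinomial likelihood for the grade counts and a Dirichlet prior,
    the posterior is Dirichlet(prior_alpha + counts); the ATS is the
    score-weighted mean of the grade probabilities.
    """
    post_alpha = np.asarray(prior_alpha) + np.asarray(observed_counts)
    probs = rng.dirichlet(post_alpha, size=n_draws)   # posterior grade probabilities
    return probs @ severity_scores                    # one ATS value per posterior draw

ats_draws = posterior_average_toxicity_score(prior_alpha=[1, 1, 1, 1, 1],
                                             observed_counts=[4, 3, 2, 1, 0])
print(ats_draws.mean(), np.quantile(ats_draws, [0.025, 0.975]))
```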

4.
Summary: An outcome-adaptive Bayesian design is proposed for choosing the optimal dose pair of a chemotherapeutic agent and a biological agent used in combination in a phase I/II clinical trial. Patient outcome is characterized as a vector of two ordinal variables accounting for toxicity and treatment efficacy. A generalization of the Aranda-Ordaz model (1981, Biometrika 68, 357–363) is used for the marginal outcome probabilities as functions of a dose pair, and a Gaussian copula is assumed to obtain joint distributions. Numerical utilities of all elementary patient outcomes, allowing the possibility that efficacy is inevaluable due to severe toxicity, are obtained using an elicitation method aimed to establish consensus among the physicians planning the trial. For each successive patient cohort, a dose pair is chosen to maximize the posterior mean utility. The method is illustrated by a trial in bladder cancer, including simulation studies of the method's sensitivity to prior parameters, the numerical utilities, correlation between the outcomes, sample size, cohort size, and starting dose pair.
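As a rough illustration of the copula step, the sketch below evaluates one joint CDF value for two ordinal outcomes whose marginals are linked by a Gaussian copula with correlation rho; cell probabilities then follow by differencing over adjacent levels. The marginal probabilities, the correlation value, and the function name are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def joint_probability(p_tox_le, p_eff_le, rho):
    """P(toxicity level <= a, efficacy level <= b) when the two ordinal
    marginals are linked by a Gaussian copula with correlation rho.

    p_tox_le and p_eff_le are the marginal CDF values F_T(a) and F_E(b).
    """
    z = [norm.ppf(p_tox_le), norm.ppf(p_eff_le)]
    copula = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return float(copula.cdf(z))

# Hypothetical marginals at one dose pair: P(tox <= grade 1) = 0.7,
# P(eff <= "stable disease") = 0.4, with a moderate negative association.
print(joint_probability(0.7, 0.4, rho=-0.3))
```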

5.
Yin G, Li Y, Ji Y. Biometrics 2006;62(3):777-787.
A Bayesian adaptive design is proposed for dose-finding in phase I/II clinical trials to incorporate the bivariate outcomes, toxicity and efficacy, of a new treatment. Without specifying any parametric functional form for the drug dose-response curve, we jointly model the bivariate binary data to account for the correlation between toxicity and efficacy. After observing all the responses of each cohort of patients, the dosage for the next cohort is escalated, de-escalated, or unchanged according to the proposed odds ratio criteria constructed from the posterior toxicity and efficacy probabilities. A novel class of prior distributions is proposed through logit transformations which implicitly imposes a monotonic constraint on dose toxicity probabilities and correlates the probabilities of the bivariate outcomes. We conduct simulation studies to evaluate the operating characteristics of the proposed method. Under various scenarios, the new Bayesian design based on the toxicity-efficacy odds ratio trade-offs exhibits good properties and treats most patients at the desirable dose levels. The method is illustrated with a real trial design for a breast medical oncology study.
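To illustrate the odds-ratio trade-off idea in the simplest possible terms, the sketch below draws the toxicity and efficacy probabilities at one dose from independent Beta posteriors and summarizes their odds ratio. The paper models the bivariate binary outcomes jointly with a monotonicity-preserving prior, so this is only a simplified caricature of the criterion, with hypothetical counts.

```python
import numpy as np

rng = np.random.default_rng(2)

def posterior_odds_ratio_summary(n_tox, n_eff, n_patients, n_draws=10_000):
    """Posterior draws of a toxicity-efficacy odds ratio at one dose.

    Uses independent Beta(1, 1) priors on the marginal toxicity and efficacy
    probabilities as a simplification of the joint model described above.
    """
    p_tox = rng.beta(1 + n_tox, 1 + n_patients - n_tox, size=n_draws)
    p_eff = rng.beta(1 + n_eff, 1 + n_patients - n_eff, size=n_draws)
    odds_ratio = (p_tox / (1 - p_tox)) / (p_eff / (1 - p_eff))
    return odds_ratio.mean(), np.quantile(odds_ratio, [0.025, 0.975])

# A dose with 2 toxicities and 6 responses among 12 patients (hypothetical counts).
print(posterior_odds_ratio_summary(n_tox=2, n_eff=6, n_patients=12))
```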

6.
Thall PF, Sung HG, Choudhury A. Biometrics 2001;57(3):914-921.
A new modality for treatment of cancer involves the ex vivo growth of cancer-specific T-cells for subsequent infusion into the patient. The therapeutic aim is selective destruction of cancer cells by the activated infused cells. An important problem in the early phase of developing such a treatment is to determine a maximal tolerated dose (MTD) for use in a subsequent phase II clinical trial. Dose may be quantified by the number of cells infused per unit body weight, and determination of an MTD may be based on the probability of infusional toxicity as a function of dose. As in a phase I trial of a new chemotherapeutic agent, this may be done by treating successive cohorts of patients at different dose levels, with each new level chosen adaptively based on the toxicity data of the patients previously treated. Such a dose-finding strategy is inadequate in T-cell infusion trials because the number of cells grown ex vivo for a given patient may be insufficient for infusing the patient at the current targeted dose. To address this problem, we propose an algorithm for trial conduct that determines a feasible MTD based on the probabilities of both infusibility and toxicity as functions of dose. The method is illustrated by application to a dendritic cell activated lymphocyte infusion trial in the treatment of acute leukemia. A simulation study indicates that the proposed methodology is both safe and reliable.

7.
Monoclonal antibodies (mAbs) are improving the quality of life for patients suffering from serious diseases due to their high specificity for their target and low potential for off-target toxicity. The toxicity of mAbs is primarily driven by their pharmacological activity, and therefore safety testing of these drugs prior to clinical testing is performed in species in which the mAb binds and engages the target to a similar extent to that anticipated in humans. For highly human-specific mAbs, this testing often requires the use of non-human primates (NHPs) as relevant species. It has been argued that the value of these NHP studies is limited because most of the adverse events can be predicted from the knowledge of the target, data from transgenic rodents or target-deficient humans, and other sources. However, many of the mAbs currently in development target novel pathways and may comprise novel scaffolds with multi-functional domains; hence, the pharmacological effects and potential safety risks are less predictable. Here, we present a total of 18 case studies, including some of these novel mAbs, with the aim of interrogating the value of NHP safety studies in human risk assessment. These studies have identified mAb candidate molecules and pharmacological pathways with severe safety risks, leading to candidate or target program termination, as well as highlighting that some pathways with theoretical safety concerns are amenable to safe modulation by mAbs. NHP studies have also informed the rational design of safer drug candidates suitable for human testing and informed human clinical trial design (route, dose and regimen, patient inclusion and exclusion criteria and safety monitoring), further protecting the safety of clinical trial participants.

8.
Amphetamine ('Speed'), methamphetamine ('Ice') and its congener 3,4-methylenedioxymethamphetamine (MDMA; 'Ecstasy') are illicit drugs abused worldwide for their euphoric and stimulant effects. Despite compelling evidence for chronic MDMA neurotoxicity in animal models, the physiological consequences of such toxicity in humans remain unclear. In addition, distinct differences in the metabolism and pharmacokinetics of MDMA between species and different strains of animals prevent the rationalisation of realistic human dose paradigms in animal studies. Here, we attempt to review amphetamine toxicity and in particular MDMA toxicity in the pathogenesis of exemplary human pathologies, independently of confounding environmental factors such as poly-drug use and drug purity.

9.
Huang X, Biswas S, Oki Y, Issa JP, Berry DA. Biometrics 2007;63(2):429-436.
The use of multiple drugs in a single clinical trial or as a therapeutic strategy has become common, particularly in the treatment of cancer. Because traditional trials are designed to evaluate one agent at a time, the evaluation of therapies in combination requires specialized trial designs. In place of the traditional separate phase I and II trials, we propose using a parallel phase I/II clinical trial to evaluate simultaneously the safety and efficacy of combination dose levels, and select the optimal combination dose. The trial is started with an initial period of dose escalation, then patients are randomly assigned to admissible dose levels. These dose levels are compared with each other. Bayesian posterior probabilities are used in the randomization to adaptively assign more patients to doses with higher efficacy levels. Combination doses with lower efficacy are temporarily closed and those with intolerable toxicity are eliminated from the trial. The trial is stopped if the posterior probability for safety, efficacy, or futility crosses a prespecified boundary. For illustration, we apply the design to a combination chemotherapy trial for leukemia. We use simulation studies to assess the operating characteristics of the parallel phase I/II trial design, and compare it to a conventional design for a standard phase I and phase II trial. The simulations show that the proposed design saves sample size, has better power, and efficiently assigns more patients to doses with higher efficacy levels.
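The adaptive-randomization step can be sketched with conjugate Beta-binomial efficacy models: each admissible dose receives a randomization weight equal to the Monte Carlo estimate of the posterior probability that it has the highest response rate. The priors, counts, and weighting rule below are illustrative assumptions, not the design's exact rule.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_randomization_weights(successes, patients, n_draws=10_000):
    """Randomization weights proportional to the posterior probability that
    each admissible dose has the highest efficacy rate.

    Each dose gets an independent Beta(1, 1) prior on its response probability;
    the weights are Monte Carlo estimates of P(dose j is best | data).
    """
    successes = np.asarray(successes)
    patients = np.asarray(patients)
    draws = rng.beta(1 + successes, 1 + patients - successes,
                     size=(n_draws, len(successes)))
    best = np.argmax(draws, axis=1)
    return np.bincount(best, minlength=len(successes)) / n_draws

# Three admissible combination doses with hypothetical interim response counts.
print(adaptive_randomization_weights(successes=[2, 5, 4], patients=[10, 10, 8]))
```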

10.
A common concern in Bayesian data analysis is that an inappropriately informative prior may unduly influence posterior inferences. In the context of Bayesian clinical trial design, well chosen priors are important to ensure that posterior-based decision rules have good frequentist properties. However, it is difficult to quantify prior information in all but the most stylized models. This issue may be addressed by quantifying the prior information in terms of a number of hypothetical patients, i.e., a prior effective sample size (ESS). Prior ESS provides a useful tool for understanding the impact of prior assumptions. For example, the prior ESS may be used to guide calibration of prior variances and other hyperprior parameters. In this paper, we discuss such prior sensitivity analyses by using a recently proposed method to compute a prior ESS. We apply this in several typical settings of Bayesian biomedical data analysis and clinical trial design. The data analyses include cross-tabulated counts, multiple correlated diagnostic tests, and ordinal outcomes using a proportional-odds model. The study designs include a phase I trial with late-onset toxicities, a phase II trial that monitors event times, and a phase I/II trial with dose-finding based on efficacy and toxicity.
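The prior ESS idea is easiest to see in the conjugate case: a Beta(a, b) prior on a probability carries the same information as a + b hypothetical patients. The helper below shows only that special case; the cited method generalizes ESS to non-conjugate models, so treat this as an assumed simplification.

```python
def beta_prior_ess(a, b):
    """Effective sample size of a Beta(a, b) prior on a probability.

    A Beta(a, b) prior carries the information of a + b hypothetical Bernoulli
    observations, a of them successes; more elaborate ESS definitions reduce
    to this in the conjugate case.
    """
    return a + b

# A Beta(2, 8) prior on a toxicity probability behaves like 10 prior patients
# with 2 toxicities, so it will dominate a small early cohort.
print(beta_prior_ess(2, 8))      # 10
print(beta_prior_ess(0.5, 0.5))  # 1.0, a nearly non-informative prior
```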

11.
In many settings, including oncology, increasing the dose of treatment results in both increased efficacy and toxicity. With the increasing availability of validated biomarkers and prediction models, there is the potential for individualized dosing based on patient-specific factors. We consider the setting where there is an existing dataset of patients treated with heterogeneous doses and including binary efficacy and toxicity outcomes and patient factors such as clinical features and biomarkers. The goal is to analyze the data to estimate an optimal dose for each (future) patient based on their clinical features and biomarkers. We propose an optimal individualized dose-finding rule by maximizing utility functions for individual patients while limiting the rate of toxicity. The utility is defined as a weighted combination of efficacy and toxicity probabilities. This approach maximizes overall efficacy at a prespecified constraint on overall toxicity. We model the binary efficacy and toxicity outcomes using logistic regression with dose, biomarkers, and dose–biomarker interactions. To incorporate the large number of potential parameters, we use the LASSO method. We additionally constrain the dose effect to be non-negative for both efficacy and toxicity for all patients. Simulation studies show that the utility approach combined with any of the modeling methods can improve efficacy without increasing toxicity relative to fixed dosing. The proposed methods are illustrated using a dataset of patients with lung cancer treated with radiation therapy.
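A minimal sketch of the modeling and dose-selection steps, under several assumptions: the utility is taken as the efficacy probability minus a weighted toxicity probability, the L1 penalty is fit with scikit-learn, and both the non-negativity constraint on the dose effect and the overall toxicity constraint are omitted. The data, weight, and dose grid are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_outcome_model(dose, biomarkers, outcome, C=0.5):
    """L1-penalized (LASSO-type) logistic regression of a binary outcome on
    dose, biomarkers, and dose-biomarker interactions."""
    X = np.column_stack([dose, biomarkers, dose[:, None] * biomarkers])
    return LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, outcome)

def utility(p_eff, p_tox, w_tox=1.5):
    """Weighted efficacy-toxicity utility; the weight w_tox is hypothetical."""
    return p_eff - w_tox * p_tox

# Hypothetical data: 200 patients with observed doses, 3 biomarkers, and outcomes.
rng = np.random.default_rng(4)
dose = rng.uniform(1.0, 5.0, size=200)
biom = rng.normal(size=(200, 3))
eff = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * dose - 1.5))))
tox = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * dose - 3.0))))
eff_model = fit_outcome_model(dose, biom, eff)
tox_model = fit_outcome_model(dose, biom, tox)

# Pick the dose on a grid that maximizes estimated utility for one new patient.
grid = np.linspace(1.0, 5.0, 41)
x_new = rng.normal(size=3)
features = np.column_stack([grid, np.tile(x_new, (len(grid), 1)),
                            grid[:, None] * x_new])
u = utility(eff_model.predict_proba(features)[:, 1],
            tox_model.predict_proba(features)[:, 1])
print(grid[int(np.argmax(u))])
```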

12.
One of the criticisms of industry-sponsored human subject testing of toxicants is based on the perception that it is often motivated by an attempt to raise the acceptable exposure limit for the chemical. When Reference Doses (RfDs) or Reference Concentrations (RfCs) are based upon no-effect levels from human rather than animal data, an animal-to-human uncertainty factor (usually 10) is not required, which could conceivably result in a higher safe exposure limit. There has been little study of the effect of using human vs. animal data on the development of RfDs and RfCs to lend empirical support to this argument. We have recently completed an analysis comparing RfDs and RfCs derived from human data with toxicity values for the same chemicals based on animal data. The results, published in detail elsewhere, are summarized here. We found that the use of human data did not always result in higher RfDs or RfCs. In 36% of the comparisons, human-based RfDs or RfCs were lower than the corresponding animal-based toxicity values, and were more than 3-fold lower in 23% of the comparisons. In 10 of the 43 possible comparisons (23%), the readily available experimental animal data were insufficient or inappropriate for estimating either RfDs or RfCs. Although there are practical limitations in conducting this type of analysis, it nonetheless suggests that the use of human data does not routinely lead to higher toxicity values. Given the inherent ability of human data to reduce uncertainty regarding risks from human exposures, its use in conjunction with data gathered from experimental animals is a public health protective policy that should be encouraged.

13.
In April 2009, the International Life Sciences Institute (ILSI) Health and Environmental Sciences Institute's (HESI) Developmental and Reproductive Toxicology Technical Committee held a two-day workshop entitled "Developmental Toxicology-New Directions." The third session of the workshop focused on ways to refine animal studies to improve relevance and predictivity for human risk. The session included five presentations on: (1) considerations for refining developmental toxicology testing and data interpretation; (2) comparative embryology and considerations in study design and interpretation; (3) pharmacokinetic considerations in study design; (4) utility of genetically modified models for understanding mode-of-action; and (5) special considerations in reproductive testing for biologics. The presentations were followed by discussion by the presenters and attendees. Much of the discussion focused on aspects of refining current animal testing strategies, including use of toxicokinetic data, dose selection, tiered/triggered testing strategies, species selection, and use of alternative animal models. Another major area of discussion was use of non-animal-based testing paradigms, including how to define a "signal" or adverse effect, translating in vitro exposures to whole animal and human exposures, validation strategies, the need to bridge the existing gap between classical toxicology testing and risk assessment, and development of new technologies. Although there was general agreement among participants that the current testing strategy is effective, there was also consensus that traditional methods are resource-intensive and improved effectiveness of developmental toxicity testing to assess risks to human health is possible. This article provides a summary of the session's presentations and discussion and describes some key areas that warrant further consideration.

14.
The use of the iron chelator deferiprone (L1, CP20, 1,2-dimethyl-3-hydroxypyrid-4-one) for the treatment of diseases of iron overload and other disorders is problematic and requires further evaluation. In this study the efficacy, toxicity and mechanism of action of orally administered L1 were investigated in the guinea pig using the carbonyl iron model of iron overload. In an acute trial, depletion of liver non-heme iron in drug-treated guinea pigs (normal iron status) was maximal (approximately 50% of control) after a single oral dose of L1 of 200 mg/kg, suggesting a limited chelatable pool in normal tissue. There was no apparent toxicity up to 600 mg/kg. In each of two sub-acute trials, normal and iron-loaded animals were fed L1 (300 mg/kg/day) or placebo for six days. Final mortalities were 12/20 (L1) and 0/20 (placebo). Symptoms included weakness, weight loss and eye discharge. Iron-loaded as well as normal guinea pigs were affected, indicating that at this drug level iron loading was not protective. In a chronic trial guinea pigs received L1 (50 mg/kg/day) or placebo for six days per week over eight months. Liver non-heme iron was reduced in animals iron-loaded prior to the trial. The increase in a-wave latency (electroretinogram), the foci of hepatic, myocardial and musculo-skeletal necrosis, and the decrease in white blood cells in the drug-treated/normal diet group even at the low dose of 50 mg/kg/day suggest that L1 may be unsuitable for the treatment of diseases which do not involve Fe overload. However, the low level of pathology in animals treated with iron prior to the trial suggests that even a small degree of iron overload (two-fold after eight months) is protective at this drug level. We conclude that the relationship between drug dose and iron status is critical in avoiding toxicity and must be monitored rigorously as cellular iron is depleted.

15.
The use of drug combinations in clinical trials has become increasingly common in recent years because a more favorable therapeutic response may be obtained by combining drugs. In phase I clinical trials, most of the existing methodology recommends a single dose combination as "optimal," which may result in a subsequent failed phase II clinical trial since other dose combinations may present higher treatment efficacy for the same level of toxicity. We are particularly interested in the setting where it is necessary to wait a few cycles of therapy to observe an efficacy outcome and the phase I and phase II patient populations differ with respect to treatment efficacy. Under these circumstances, it is common practice to implement two-stage designs where a set of maximum tolerated dose combinations is selected in a first stage, and then studied in a second stage for treatment efficacy. In this article we present a new two-stage design for early phase clinical trials with drug combinations. In the first stage, binary toxicity data are used to guide the dose escalation and set the maximum tolerated dose combinations. In the second stage, we take the set of maximum tolerated dose combinations recommended from the first stage, which remains fixed throughout the second stage, and through adaptive randomization we allocate subsequent cohorts of patients to dose combinations that are likely to have high posterior median time to progression. The methodology is assessed with extensive simulations and exemplified with a real trial.

16.
We propose an adaptive two-stage Bayesian design for finding one or more acceptable dose combinations of two cytotoxic agents used together in a Phase I clinical trial. The method requires that each of the two agents has been studied previously as a single agent, which is almost invariably the case in practice. A parametric model is assumed for the probability of toxicity as a function of the two doses. Informative priors for parameters characterizing the single-agent toxicity probability curves are either elicited from the physician(s) planning the trial or obtained from historical data, and vague priors are assumed for parameters characterizing two-agent interactions. A method for eliciting the single-agent parameter priors is described. The design is applied to a trial of gemcitabine and cyclophosphamide, and a simulation study is presented.
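One plausible (but assumed) form for a dose-pair toxicity surface is a logistic model with single-agent intercepts and slopes plus an interaction term, which mirrors the prior structure described above: informative priors would act on the single-agent parameters and a vague prior on the interaction. The functional form and parameter values below are hypothetical, not the paper's model.

```python
import numpy as np

def toxicity_probability(d1, d2, alpha1, beta1, alpha2, beta2, gamma):
    """Toxicity probability for a dose pair (d1, d2) of two cytotoxic agents.

    A simple logistic surface in which (alpha, beta) describe each single-agent
    curve and gamma captures the two-agent interaction.
    """
    eta = alpha1 + beta1 * d1 + alpha2 + beta2 * d2 + gamma * d1 * d2
    return 1.0 / (1.0 + np.exp(-eta))

# Hypothetical single-agent parameters with a small interaction term near zero.
print(toxicity_probability(d1=0.5, d2=0.25, alpha1=-3.0, beta1=2.0,
                           alpha2=-2.5, beta2=1.5, gamma=0.2))
```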

17.
Workshops on maternal toxicity were held at the annual Society of Toxicology, Teratology Society, and European Teratology Society meetings in 2009. Speakers presented background information prior to a general discussion on this topic. The following recommendations/options are based on the outcome of the discussions at the workshops:
1. A comprehensive evaluation of all available data from general toxicity studies, range-finding Developmental and Reproductive Toxicology (DART) studies, class effects, structure–activity relationships, exposure studies, etc. is essential for appropriate dose selection for definitive DART studies. The intent is to avoid marked maternal toxicity leading to mortality or decreased body weight gains of greater than 20% for prolonged periods.
   (a) Evaluate alternative endpoints for dose selection and data interpretation (e.g., target tissue effects and pharmacology) for biotherapeutics.
   (b) Evaluate additional maternal parameters based on effects and/or target organs observed in short-term (e.g., 2- or 4-week) general toxicity studies.
2. Evaluate all available data to determine a cause–effect relationship for developmental toxicity.
   (a) Conduct a pair-feeding/pair-watering study as a follow-up.
   (b) Evaluate individual data demonstrating maternal toxicity in the mother with adverse embryo–fetal outcomes in the litter associated with the affected mother.
   (c) Conduct single-dose studies at increasing doses as a complement to conventional embryo–fetal toxicity studies for certain classes of compounds that affect the hERG channel.
3. Support statements that embryo–fetal effects are caused by maternal toxicity and/or exaggerated pharmacology, especially for malformations.
   (a) Provide mechanistic or other supporting data.
   (b) Establish the relevance of the DART findings in animals for human exposures.
Birth Defects Res (Part B) 92:36–51, 2010.

18.
As alternative models and scientific advancements improve the ability to predict developmental toxicity, the challenge is how best to use this information to support safe use of pharmaceuticals in humans. While in vivo experimental data are often expected, there are other important considerations that drive the impact of developmental toxicity data on human risk assessment and product labeling. These considerations include three key elements: (1) the drug's likelihood of producing off-target toxicities, (2) risk tolerance of adverse effects based on indication and patient population, and (3) how much is known about the effects of modulating the target in pregnancy and developmental biology. For example, there is little impact or value of a study in pregnant monkeys to inform the risk assessment for a highly specific monoclonal antibody indicated for a life-threatening indication against a target known to be critical for pregnancy maintenance and fetal survival. In contrast, a small molecule to a novel biological target for a chronic lifestyle indication would warrant more safety data than simply in vitro studies and a literature review. Rather than accounting for innumerable theoretical possibilities surrounding each potential submission's profile, we consolidated most of the typical situations into eight possible scenarios across these three elements, and present a discussion of these scenarios here. We hope that this framework will facilitate a rational approach to determining what new information is required to inform developmental toxicity risk of pharmaceuticals in the context of the specific needs of each program while reducing animal use where possible.

19.
Braun TM, Yuan Z, Thall PF. Biometrics 2005;61(2):335-343.
Most phase I clinical trials are designed to determine a maximum-tolerated dose (MTD) for one initial administration or treatment course of a cytotoxic experimental agent. Toxicity usually is defined as the indicator of whether one or more particular adverse events occur within a short time period from the start of therapy. However, physicians often administer an agent to the patient repeatedly and monitor long-term toxicity due to cumulative effects. We propose a new method for such settings. It is based on the time to toxicity rather than a binary outcome, and the goal is to determine a maximum-tolerated schedule (MTS) rather than a conventional MTD. The model and method account for a patient's entire sequence of administrations, with the overall hazard of toxicity modeled as the sum of a sequence of hazards, each associated with one administration. Data monitoring and decision making are done continuously throughout the trial. We illustrate the method with an allogeneic bone marrow transplantation (BMT) trial to determine how long a recombinant human growth factor can be administered as prophylaxis for acute graft-versus-host disease (aGVHD), and we present a simulation study in the context of this trial.
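The additive hazard structure is straightforward to compute: the overall hazard at time t is the sum of one hazard contribution per administration already given. The exponentially decaying per-administration hazard used below is an assumed form for illustration, not the paper's model.

```python
import numpy as np

def total_hazard(t, administration_times, scale=0.05, decay=0.1):
    """Overall toxicity hazard at time t as the sum of one hazard per
    administration already given; each contribution decays exponentially
    with the time elapsed since that administration (an assumed form)."""
    times = np.asarray(administration_times, dtype=float)
    active = times[times <= t]
    return float(np.sum(scale * np.exp(-decay * (t - active))))

# Weekly administrations: the hazard accumulates with each additional dose.
schedule = [0, 7, 14, 21]
for t in (1, 8, 15, 22):
    print(t, round(total_hazard(t, schedule), 4))
```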

20.
Although there are several new designs for phase I cancer clinical trials, including the continual reassessment method and accelerated titration design, the traditional algorithm-based designs, like the '3 + 3' design, are still widely used because of their practical simplicity. In this paper, we study some key statistical properties of the traditional algorithm-based designs in a general framework and derive the exact formulae for the corresponding statistical quantities. These quantities are important for the investigator to gain insights regarding the design of the trial, and are (i) the probability of a dose being chosen as the maximum tolerated dose (MTD); (ii) the expected number of patients treated at each dose level; (iii) target toxicity level (i.e. the expected dose-limiting toxicity (DLT) incidences at the MTD); (iv) expected DLT incidences at each dose level and (v) expected overall DLT incidences in the trial. Real examples of clinical trials are given, and a computer program to do the calculation can be found at the authors' website http://www2.umdnj.edu/~linyo.
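The exact formulae are not reproduced in the abstract, but quantities (i) and (ii) can be approximated by directly simulating the standard '3 + 3' algorithm, as in the sketch below. The escalation rules coded here are the common textbook version (escalate on 0/3 DLTs, expand to 6 on 1/3, stop on 2 or more), which may differ in detail from the paper's more general framework.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_3plus3(true_tox, n_trials=20_000):
    """Monte Carlo operating characteristics of the standard '3+3' design.

    Returns (a) the probability each dose is declared the MTD, with the last
    slot counting trials in which no MTD was declared because the lowest dose
    was too toxic, and (b) the expected number of patients treated per dose.
    """
    k = len(true_tox)
    mtd_counts = np.zeros(k + 1)   # last slot: no MTD declared
    patients = np.zeros(k)
    for _ in range(n_trials):
        d, mtd = 0, None
        while d < k:
            dlt = rng.binomial(3, true_tox[d])
            patients[d] += 3
            if dlt == 0:
                d += 1                               # escalate after 0/3 DLTs
            elif dlt == 1:
                dlt += rng.binomial(3, true_tox[d])  # expand cohort to 6
                patients[d] += 3
                if dlt <= 1:
                    d += 1                           # escalate after 1/6 DLTs
                else:
                    mtd = d - 1
                    break
            else:
                mtd = d - 1
                break
        if mtd is None:
            mtd = k - 1                              # all doses cleared
        mtd_counts[mtd] += 1                         # mtd == -1 falls in the "no MTD" slot
    return mtd_counts / n_trials, patients / n_trials

probs, n_per_dose = simulate_3plus3(true_tox=[0.05, 0.15, 0.30, 0.50])
print(probs)        # P(dose chosen as MTD); last entry = P(no MTD declared)
print(n_per_dose)   # expected number of patients treated at each dose
```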
