Similar Articles
20 similar articles found (search time: 31 ms)
1.
Cheung YK, Chappell R. Biometrics 2000;56(4):1177-1182
Traditional designs for phase I clinical trials require each patient (or small group of patients) to be completely followed before the next patient or group is assigned. In situations such as when evaluating late-onset effects of radiation or toxicities from chemopreventive agents, this may result in trials of impractically long duration. We propose a new method, called the time-to-event continual reassessment method (TITE-CRM), that allows patients to be entered in a staggered fashion. It is an extension of the continual reassessment method (CRM; O'Quigley, Pepe, and Fisher, 1990, Biometrics 46, 33-48). We also note that this time-to-toxicity approach can be applied to extend other designs for studies of short-term toxicities. We prove that the recommended dose given by the TITE-CRM converges to the correct level under certain conditions. A simulation study shows that our method's accuracy and safety are comparable with those of the CRM while requiring a much shorter trial duration: a trial that would take up to 12 years to complete with the CRM could be reduced to 2-4 years by our method.
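The core of the TITE-CRM, the weighted binomial likelihood, is compact enough to sketch. Below is a minimal Python illustration of the idea, not the authors' implementation: a one-parameter power model over a dose skeleton, with each toxicity-free patient's contribution down-weighted by the fraction of the observation window completed. The function name, the normal prior on the model parameter, the grid integration, and the target rate are all illustrative assumptions.

```python
import numpy as np

def tite_crm_select(skeleton, doses_given, tox, followup, window,
                    target=0.25, theta_grid=None):
    """Sketch of a TITE-CRM dose recommendation.

    Power model: p_d(theta) = skeleton[d] ** exp(theta).
    Toxicity-free patients enter the likelihood down-weighted by
    w = followup / window (the linear TITE weight); observed
    toxicities get full weight 1.
    """
    if theta_grid is None:
        theta_grid = np.linspace(-3, 3, 601)
    prior = np.exp(-theta_grid ** 2 / 2)  # N(0,1) prior, unnormalized
    w = np.where(tox == 1, 1.0, np.minimum(followup / window, 1.0))
    loglik = np.zeros_like(theta_grid)
    for t, d, wt in zip(tox, doses_given, w):
        p = skeleton[d] ** np.exp(theta_grid)
        loglik += np.log(p) if t == 1 else np.log(1 - wt * p)
    post = prior * np.exp(loglik - loglik.max())
    post /= post.sum()
    # posterior mean toxicity probability at each dose level
    p_hat = np.array([(skeleton[d] ** np.exp(theta_grid) * post).sum()
                      for d in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target))), p_hat
```

With a staggered-entry trial, this function can be re-run whenever a new patient arrives, using each prior patient's current follow-up time, which is exactly what lets accrual continue without full follow-up.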

2.
One of the primary objectives of an oncology dose-finding trial for novel therapies, such as molecular-targeted agents and immuno-oncology therapies, is to identify an optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. These new therapeutic agents appear more likely to induce multiple low- or moderate-grade toxicities than dose-limiting toxicities. In addition, for efficacy it is preferable to evaluate overall response and long-term stable disease in solid tumors, and to distinguish complete from partial remission in lymphoma. It is also essential to accelerate early-stage trials to shorten the entire period of drug development. However, it is often challenging to make real-time adaptive decisions due to late-onset outcomes, fast accrual rates, and differences in the outcome evaluation periods for efficacy and toxicity. To address these issues, we propose a time-to-event generalized Bayesian optimal interval design that accelerates dose finding while accounting for efficacy and toxicity grades. The new design, named "TITE-gBOIN-ET," is model-assisted and straightforward to implement in actual oncology dose-finding trials. Simulation studies show that the TITE-gBOIN-ET design significantly shortens trial duration compared with designs without sequential enrollment while having comparable or higher performance in the percentage of correct OD selection and the average number of patients allocated to the ODs across various realistic settings.
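The full TITE-gBOIN-ET machinery (toxicity grades, efficacy categories, time-to-event weighting) is more than a short example can carry, but the model-assisted interval decision rule it generalizes can be illustrated with the standard BOIN boundaries of Liu and Yuan (2015). The defaults phi1 = 0.6*phi and phi2 = 1.4*phi below are common conventions, assumed here rather than taken from this abstract.

```python
import math

def boin_boundaries(phi, phi1=None, phi2=None):
    """Optimal escalation/de-escalation boundaries of the BOIN design.
    phi: target toxicity rate; phi1/phi2: rates deemed clearly
    sub-therapeutic / overly toxic (defaults are common conventions)."""
    phi1 = 0.6 * phi if phi1 is None else phi1
    phi2 = 1.4 * phi if phi2 is None else phi2
    lam_e = math.log((1 - phi1) / (1 - phi)) / \
            math.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = math.log((1 - phi) / (1 - phi2)) / \
            math.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

def boin_decision(n_tox, n_treated, phi):
    """Compare the observed toxicity rate at the current dose with the
    boundaries: escalate below lam_e, de-escalate above lam_d, else stay."""
    lam_e, lam_d = boin_boundaries(phi)
    rate = n_tox / n_treated
    if rate <= lam_e:
        return "escalate"
    if rate >= lam_d:
        return "de-escalate"
    return "stay"
```

The appeal of this family of designs, which "TITE-gBOIN-ET" inherits, is that the decision at each dose reduces to comparing an observed rate with two pre-tabulated constants.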

3.
Thall PF, Nguyen HQ, Estey EH. Biometrics 2008;64(4):1126-1136
A Bayesian sequential dose-finding procedure based on bivariate (efficacy, toxicity) outcomes that accounts for patient covariates and dose-covariate interactions is presented. Historical data are used to obtain an informative prior on covariate main effects, with uninformative priors assumed for all dose effect parameters. Elicited limits on the probabilities of efficacy and toxicity for each of a representative set of covariate vectors are used to construct bounding functions that determine the acceptability of each dose for each patient. Elicited outcome probability pairs that are equally desirable for a reference patient are used to define two different posterior criteria, either of which may be used to select an optimal covariate-specific dose for each patient. Because the dose selection criteria are covariate specific, different patients may receive different doses at the same point in the trial, and the set of eligible patients may change adaptively during the trial. The method is illustrated by a dose-finding trial in acute leukemia, including a simulation study.

4.
A common concern in Bayesian data analysis is that an inappropriately informative prior may unduly influence posterior inferences. In the context of Bayesian clinical trial design, well-chosen priors are important to ensure that posterior-based decision rules have good frequentist properties. However, it is difficult to quantify prior information in all but the most stylized models. This issue may be addressed by quantifying the prior information in terms of a number of hypothetical patients, i.e., a prior effective sample size (ESS). Prior ESS provides a useful tool for understanding the impact of prior assumptions. For example, the prior ESS may be used to guide calibration of prior variances and other hyperprior parameters. In this paper, we discuss such prior sensitivity analyses by using a recently proposed method to compute a prior ESS. We apply this in several typical settings of Bayesian biomedical data analysis and clinical trial design. The data analyses include cross-tabulated counts, multiple correlated diagnostic tests, and ordinal outcomes using a proportional-odds model. The study designs include a phase I trial with late-onset toxicities, a phase II trial that monitors event times, and a phase I/II trial with dose-finding based on efficacy and toxicity.
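In the conjugate beta-binomial case the prior ESS has a simple closed form: a Beta(a, b) prior carries as much information as a + b Bernoulli observations. The general method the abstract refers to handles non-conjugate models; this sketch covers only the conjugate case, and the helper names are illustrative.

```python
def beta_prior_ess(a, b):
    """ESS of a Beta(a, b) prior for a binomial probability: the prior is
    worth a + b pseudo-observations (a successes, b failures)."""
    return a + b

def beta_from_mean_ess(mean, ess):
    """Calibrate a Beta prior to a target mean and effective sample size:
    Beta(mean * ess, (1 - mean) * ess)."""
    return mean * ess, (1 - mean) * ess
```

This is how the calibration mentioned in the abstract typically proceeds in practice: fix the prior mean from elicitation, then shrink or inflate the ESS until the prior's influence relative to the planned sample size is acceptable.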

5.
Delayed dose-limiting toxicities (i.e., occurring beyond the first cycle of treatment) are a challenge for phase I trials. The time-to-event continual reassessment method (TITE-CRM) is a Bayesian dose-finding design that addresses long observation times and early patient drop-out. It uses a weighted binomial likelihood, with weights assigned to observations by the unknown time-to-toxicity distribution, and remains continually open to accrual. To avoid dosing at overly toxic levels while retaining accuracy and efficiency for dose-limiting toxicity (DLT) evaluation that involves multiple cycles, we propose an adaptive weight function that incorporates cyclical data of the experimental treatment, with parameters updated continually. This provides a reasonable estimate of the time-to-toxicity distribution by accounting for inter-cycle variability, and maintains the statistical properties of consistency and coherence. A case study of a first-in-human oncology trial of an experimental biologic is presented using the proposed design. Design calibrations for the clinical and statistical parameters are conducted to ensure good operating characteristics. Simulation results show that the proposed TITE-CRM design with an adaptive weight function yields significantly shorter trial duration, does not expose patients to additional risk, is competitive against existing weighting methods, and possesses some desirable properties.

6.
Yin G, Yuan Y. Biometrics 2009;65(3):866-875
Two-agent combination trials have recently attracted enormous attention in oncology research. There are several strong motivations for combining different agents in a treatment: to induce the synergistic treatment effect, to increase the dose intensity with nonoverlapping toxicities, and to target different tumor cell susceptibilities. To accommodate this growing trend in clinical trials, we propose a Bayesian adaptive design for dose finding based on latent 2 × 2 tables. In the search for the maximum tolerated dose combination, we continuously update the posterior estimates for the unknown parameters associated with marginal probabilities and the correlation parameter based on the data from successive patients. By reordering the dose toxicity probabilities in the two-dimensional space, we assign each coming cohort of patients to the most appropriate dose combination. We conduct extensive simulation studies to examine the operating characteristics of the proposed method under various practical scenarios. Finally, we illustrate our dose-finding procedure with a clinical trial of agent combinations at M. D. Anderson Cancer Center.

7.
For most antivenoms there is little information from clinical studies to infer the relationship between dose and efficacy or dose and toxicity. Antivenom dose-finding studies usually recruit too few patients (e.g. fewer than 20) relative to clinically significant event rates (e.g. 5%). Model-based adaptive dose-finding studies make efficient use of accrued patient data by sharing information across dosing levels, and converge rapidly to the contextually defined 'optimal dose'. Adequate sample sizes for adaptive dose-finding trials can be determined by simulation. We propose a model-based, Bayesian phase 2 type, adaptive clinical trial design for the characterisation of optimal initial antivenom doses in contexts where both efficacy and toxicity are measured as binary endpoints. This design is illustrated in the context of dose-finding for Daboia siamensis (Eastern Russell's viper) envenoming in Myanmar. The design formalises the optimal initial dose of antivenom as the dose closest to that giving a pre-specified desired efficacy, but resulting in less than a pre-specified maximum toxicity. For Daboia siamensis envenoming, efficacy is defined as the restoration of blood coagulability within six hours, and toxicity is defined as anaphylaxis. Comprehensive simulation studies compared the expected behaviour of the model-based design to a simpler rule-based design (a modified '3+3' design). The model-based design can identify an optimal dose after fewer patients than the rule-based design. Open-source code for the simulations is made available in order to determine adequate sample sizes for future adaptive snakebite trials. Antivenom dose-finding trials would benefit from using standard model-based adaptive designs. Dose-finding trials where rare events (e.g. 5% occurrence) are of clinical importance necessitate larger sample sizes than current practice.
We will apply the model-based design to determine a safe and efficacious dose for a novel lyophilised antivenom to treat Daboia siamensis envenoming in Myanmar.
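The rule-based comparator in this study is a modified '3+3'; the modification is not described in the abstract, so the sketch below shows the classic, unmodified 3+3 decision rule for reference only.

```python
def three_plus_three(cohort):
    """Classic 3+3 decision at the current dose.

    cohort: (n_treated, n_dlt) at this dose so far.
    Returns 'escalate', 'expand' (treat 3 more at the same dose),
    or 'stop' (dose exceeds the MTD)."""
    n, d = cohort
    if n == 3:
        if d == 0:
            return "escalate"
        if d == 1:
            return "expand"
        return "stop"
    if n == 6:
        return "escalate" if d <= 1 else "stop"
    raise ValueError("3+3 evaluates after cohorts of 3 or 6 patients")
```

A rule like this uses only the current dose's data, which is precisely why the abstract's model-based design, which borrows information across dose levels, reaches the optimal dose with fewer patients.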

8.
In addition to the IRB (institutional review board), the DSMB (data safety and monitoring board) takes an increasing role in the monitoring of clinical trials, especially in large multicenter trials. The DSMB is an expert committee, independent of the investigators and the sponsor of the trial, which periodically examines the safety data accumulated during the progress of the trial and ensures that the benefit/risk ratio remains acceptable for participating patients. The DSMB is also a safeguard for the scientific integrity of the trial. It is the only committee that can have access to unblinded data from the trial. The DSMB may recommend termination of the trial in three situations: (1) occurrence of unanticipated adverse events which may pose a serious risk to participating patients; (2) demonstration of efficacy before the planned accrual is complete; and (3) "futility" (the most difficult situation), i.e., the absence of a reasonable probability that the trial will reach a conclusion within its planned time frame.

9.
10.
Dose-finding based on efficacy-toxicity trade-offs
Thall PF, Cook JD. Biometrics 2004;60(3):684-693
We present an adaptive Bayesian method for dose-finding in phase I/II clinical trials based on trade-offs between the probabilities of treatment efficacy and toxicity. The method accommodates either trinary or bivariate binary outcomes, as well as efficacy probabilities that possibly are nonmonotone in dose. Doses are selected for successive patient cohorts based on a set of efficacy-toxicity trade-off contours that partition the two-dimensional outcome probability domain. Priors are established by solving for hyperparameters that optimize the fit of the model to elicited mean outcome probabilities. For trinary outcomes, the new algorithm is compared to the method of Thall and Russell (1998, Biometrics 54, 251-264) by application to a trial of rapid treatment for ischemic stroke. The bivariate binary outcome case is illustrated by a trial of graft-versus-host disease treatment in allogeneic bone marrow transplantation. Computer simulations show that, under a wide range of dose-outcome scenarios, the new method has high probabilities of making correct decisions and treats most patients at doses with desirable efficacy-toxicity trade-offs.
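The published method uses elicited trade-off contours; as a crude illustrative stand-in (not the Thall-Cook criterion), one can rank acceptable doses by Euclidean distance from the ideal outcome point (efficacy 1, toxicity 0). The acceptability cut-offs below are assumptions for the example.

```python
import math

def select_dose(prob_eff, prob_tox, eff_min=0.2, tox_max=0.4):
    """Pick the acceptable dose whose (efficacy, toxicity) probability
    pair lies closest to the ideal point (1, 0).

    A simplified stand-in for elicited trade-off contours: doses failing
    the minimum-efficacy or maximum-toxicity cut-off are excluded.
    Returns the dose index, or None if no dose is acceptable."""
    best_dist, best_dose = None, None
    for d, (pe, pt) in enumerate(zip(prob_eff, prob_tox)):
        if pe < eff_min or pt > tox_max:
            continue  # dose unacceptable
        dist = math.hypot(1.0 - pe, pt)
        if best_dist is None or dist < best_dist:
            best_dist, best_dose = dist, d
    return best_dose
```

In the actual design the probabilities fed to such a criterion are posterior quantities updated after each cohort, and the contours encode the clinicians' elicited equivalences rather than raw distance.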

11.
Ionizing radiation plays a central role in several medical and industrial purposes. In spite of the beneficial effects of ionizing radiation, there are some concerns related to accidental exposure that could pose a threat to the lives of exposed people. This issue is also critical for triage of injured people in a possible terrorist attack or nuclear disaster. The most common side effects of ionizing radiation are experienced in cancer patients who have undergone radiotherapy. For complete eradication of tumors, there is a need for high doses of ionizing radiation. However, these high doses lead to severe toxicities in adjacent organs. Management of normal tissue toxicity may be achieved via modulation of radiation responses in both normal and malignant cells. It has been suggested that treatment of patients with some adjuvant agents may be useful for amelioration of radiation toxicity or sensitization of tumor cells. However, there are always some concerns about possible severe toxicities and protection of tumor cells, which in turn affect radiotherapy outcomes. Selenium is a trace element in the body that has long shown potent antioxidant and radioprotective effects. Selenium can potently stimulate the antioxidant defense of cells, especially via upregulation of glutathione (GSH) levels and glutathione peroxidase activity. Some studies in recent years have shown that selenium is able to mitigate radiation toxicity when administered after exposure. These studies suggest that selenium may be a useful radiomitigator for an accidental radiation event. Molecular and cellular studies have revealed that selenium protects different normal cells against radiation, while it may sensitize tumor cells. These differential effects of selenium have also been revealed in some clinical studies. In the present study, we aimed to review the radiomitigative and radioprotective effects of selenium on normal cells/tissues, as well as its radiosensitizing effect on cancer cells.

12.
The release of inflammatory cytokines has been implicated in the toxicity of conventional radiotherapy (CRT). Transforming growth factor β (TGF-β) has been suggested to be a risk marker for pulmonary toxicity following radiotherapy. Pulsed low-dose rate radiotherapy (PLDR) is a technique that involves spreading out a conventional radiotherapy dose into short pulses of dose with breaks in between to reduce toxicities. We hypothesized that the more tolerable toxicity profile of PLDR compared with CRT may be related to differential expression of inflammatory cytokines such as TGF-β in normal tissues. To address this, we analyzed tissues from mice that had been subjected to lethal doses of CRT and PLDR by histology and immunohistochemistry (IHC). Equivalent physical doses of CRT triggered more cellular atrophy in the bone marrow, intestine, and pancreas than PLDR, as indicated by hematoxylin and eosin staining. IHC data indicate that TGF-β expression is increased in the bone marrow, intestine, and lungs of mice subjected to CRT as compared with tissues from mice subjected to PLDR. Our in vivo data suggest that differential expression of inflammatory cytokines such as TGF-β may play a role in the more favorable normal tissue late response following treatment with PLDR.

13.
Brookmeyer R, Self SG. Biometrics 1985;41(1):129-136
A method called partial completion is proposed for predicting the gain in precision of the Kaplan-Meier survival curve associated with additional follow-up and accrual. This is accomplished by using the initial data to predict the numbers of patients who would be at risk at the observed death times by the end of the proposed second follow-up period. A consistency result ensures that the predictors will be accurate in large samples while simulation results suggest that the predictors are accurate with moderate sample sizes. The procedures are applied to a bone marrow transplant study and the Channing House data set.
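The quantity whose future precision the partial-completion method predicts is the Kaplan-Meier curve with its Greenwood variance. A self-contained sketch of that estimator (standard formulas, illustrative function name):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate with Greenwood variance.

    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (distinct death times, survival estimates, Greenwood variances).
    Censorings tied with deaths are treated as still at risk at that time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s, gw_sum = 1.0, 0.0
    out_t, out_s, out_v = [], [], []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for tt, e in pairs if tt == t)   # deaths at t
        m = sum(1 for tt, _ in pairs if tt == t)   # subjects leaving risk set
        if d > 0:
            s *= (n_at_risk - d) / n_at_risk       # KM product term
            gw_sum += d / (n_at_risk * (n_at_risk - d))
            out_t.append(t)
            out_s.append(s)
            out_v.append(s * s * gw_sum)           # Greenwood's formula
        n_at_risk -= m
        i += m
    return out_t, out_s, out_v
```

The paper's idea is then to project, from the interim risk sets, how the denominators in Greenwood's formula would grow under further follow-up and accrual, and hence how much the variances would shrink.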

14.
Bekele BN, Shen Y. Biometrics 2005;61(2):343-354
In this article, we propose a Bayesian approach to phase I/II dose-finding oncology trials by jointly modeling a binary toxicity outcome and a continuous biomarker expression outcome. We apply our method to a clinical trial of a new gene therapy for bladder cancer patients. In this trial, the biomarker expression indicates biological activity of the new therapy. For ethical reasons, the trial is conducted sequentially, with the dose for each successive patient chosen using both toxicity and activity data from patients previously treated in the trial. The modeling framework that we use naturally incorporates correlation between the binary toxicity and continuous activity outcome via a latent Gaussian variable. The dose-escalation/de-escalation decision rules are based on the posterior distributions of both toxicity and activity. A flexible state-space model is used to relate the activity outcome and dose. Extensive simulation studies show that the design reliably chooses the preferred dose using both toxicity and expression outcomes under various clinical scenarios.

15.
Thall PF, Sung HG, Choudhury A. Biometrics 2001;57(3):914-921
A new modality for treatment of cancer involves the ex vivo growth of cancer-specific T-cells for subsequent infusion into the patient. The therapeutic aim is selective destruction of cancer cells by the activated infused cells. An important problem in the early phase of developing such a treatment is to determine a maximal tolerated dose (MTD) for use in a subsequent phase II clinical trial. Dose may be quantified by the number of cells infused per unit body weight, and determination of an MTD may be based on the probability of infusional toxicity as a function of dose. As in a phase I trial of a new chemotherapeutic agent, this may be done by treating successive cohorts of patients at different dose levels, with each new level chosen adaptively based on the toxicity data of the patients previously treated. Such a dose-finding strategy is inadequate in T-cell infusion trials because the number of cells grown ex vivo for a given patient may be insufficient for infusing the patient at the current targeted dose. To address this problem, we propose an algorithm for trial conduct that determines a feasible MTD based on the probabilities of both infusibility and toxicity as functions of dose. The method is illustrated by application to a dendritic cell activated lymphocyte infusion trial in the treatment of acute leukemia. A simulation study indicates that the proposed methodology is both safe and reliable.

16.
Yin G, Li Y, Ji Y. Biometrics 2006;62(3):777-787
A Bayesian adaptive design is proposed for dose-finding in phase I/II clinical trials to incorporate the bivariate outcomes, toxicity and efficacy, of a new treatment. Without specifying any parametric functional form for the drug dose-response curve, we jointly model the bivariate binary data to account for the correlation between toxicity and efficacy. After observing all the responses of each cohort of patients, the dosage for the next cohort is escalated, de-escalated, or unchanged according to the proposed odds ratio criteria constructed from the posterior toxicity and efficacy probabilities. A novel class of prior distributions is proposed through logit transformations; it implicitly imposes a monotonic constraint on dose-toxicity probabilities and correlates the probabilities of the bivariate outcomes. We conduct simulation studies to evaluate the operating characteristics of the proposed method. Under various scenarios, the new Bayesian design based on the toxicity-efficacy odds ratio trade-offs exhibits good properties and treats most patients at the desirable dose levels. The method is illustrated with a real trial design for a breast medical oncology study.

17.
As alternative models and scientific advancements improve the ability to predict developmental toxicity, the challenge is how to best use this information to support safe use of pharmaceuticals in humans. While in vivo experimental data are often expected, there are other important considerations that drive the impact of developmental toxicity data on human risk assessment and product labeling. These considerations include three key elements: (1) the drug's likelihood of producing off-target toxicities, (2) risk tolerance of adverse effects based on indication and patient population, and (3) how much is known about the effects of modulating the target in pregnancy and developmental biology. For example, there is little impact or value of a study in pregnant monkeys to inform the risk assessment for a highly specific monoclonal antibody indicated for a life-threatening indication against a target known to be critical for pregnancy maintenance and fetal survival. In contrast, a small molecule to a novel biological target for a chronic lifestyle indication would warrant more safety data than simply in vitro studies and a literature review. Rather than accounting for innumerable theoretical possibilities surrounding each potential submission's profile, we consolidated most of the typical situations into eight possible scenarios across these three elements, and present a discussion of these scenarios here. We hope that this framework will facilitate a rational approach to determining what new information is required to inform developmental toxicity risk of pharmaceuticals in context of the specific needs of each program while reducing animal use where possible.

18.
This study provides a method for characterizing the effects of concentration variability and correlation among co-acting compounds on mixture toxicity, considering the implications of missing chemical data. The method is explored by developing a set of multiple occurrence scenarios for mixtures of related chemicals. The calculations are performed for hypothetical mixtures of a group of ten synthetic antibiotics that have been tested on marine bacterium to fit dose-response relationships for long-term bioluminescence inhibition of Vibrio fischeri. Mixture toxicities are computed and compared for the assumptions of independent joint action theory and concentration/dose addition theory. The study results show that higher variability in concentrations is associated with higher effective (average) mixture toxicity, in this application by as much as a factor of ten for mixtures with highly variable component concentrations. Moreover, omitting the most toxic compounds caused mixture toxicities to be underestimated by a factor of approximately two. We recommend a pre-assessment of the effect of different chemical occurrence patterns and variability on mixture toxicity to help prioritize needs for further co-occurrence data and toxicity studies.
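The two mixture assumptions the study compares have simple textbook forms, sketched below (function names are illustrative): independent joint action combines the component effects as complementary probabilities, while concentration addition sums toxic units, i.e. each concentration scaled by the concentration producing the reference effect alone.

```python
def independent_action(effects):
    """Mixture effect under independent joint action:
    E_mix = 1 - prod(1 - E_i), with each E_i a fractional effect in [0, 1]."""
    survive = 1.0
    for e in effects:
        survive *= (1.0 - e)
    return 1.0 - survive

def toxic_unit_sum(concentrations, ec_values):
    """Toxic-unit sum under concentration addition: sum(c_i / EC_i).
    A sum of 1 means the mixture just reaches the reference effect level."""
    return sum(c / ec for c, ec in zip(concentrations, ec_values))
```

Under concentration addition, omitting a component with a small EC value (i.e. a highly toxic compound) removes a large toxic-unit contribution, which is the mechanism behind the factor-of-two underestimation the study reports.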

19.
Ivanova A, Kim SH. Biometrics 2009;65(1):307-315
In many phase I trials, the design goal is to find the dose associated with a certain target toxicity rate. In some trials, the goal can be to find the dose with a certain weighted sum of rates of various toxicity grades. For others, the goal is to find the dose with a certain mean value of a continuous response. In this article, we describe a dose-finding design that can be used in any of the dose-finding trials described above, trials where the target dose is defined as the dose at which a certain monotone function of the dose is a prespecified value. At each step of the proposed design, the normalized difference between the current dose and the target is computed. If that difference is close to zero, the dose is repeated. Otherwise, the dose is increased or decreased, depending on the sign of the difference.
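The decision step described above can be sketched in a few lines. This is an illustration of the idea only, not the published rule: the threshold `delta` and the sample standard-deviation normalization are assumptions standing in for the design's calibrated quantities.

```python
import math

def next_dose(current, values, target, n_levels, delta=1.0):
    """Sketch of a normalized-difference dose-finding step.

    values: observed responses at the current dose. Compute a z-like
    normalized difference between their mean and the target; stay if it
    is small, otherwise step down (above target) or up (below target).
    delta is an assumed design threshold to be calibrated."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / max(n - 1, 1)
    sd = var ** 0.5 or 1.0          # guard against zero spread
    z = (mean - target) / (sd / math.sqrt(n))
    if abs(z) <= delta:
        return current              # close to target: repeat the dose
    step = -1 if z > 0 else 1       # above target -> de-escalate
    return min(max(current + step, 0), n_levels - 1)
```

Because the rule only needs a monotone summary of response versus dose, the same step applies whether the outcome is a toxicity rate, a grade-weighted score, or a continuous mean, which is the unification the article emphasizes.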

20.
A primary objective in quantitative risk or safety assessment is characterization of the severity and likelihood of an adverse effect caused by a chemical toxin or pharmaceutical agent. In many cases data are not available at low doses or low exposures to the agent, and inferences at those doses must be based on the high-dose data. A modern method for making low-dose inferences is known as benchmark analysis, where attention centers on the dose at which a fixed benchmark level of risk is achieved. Both upper confidence limits on the risk and lower confidence limits on the "benchmark dose" are of interest. In practice, a number of possible benchmark risks may be under study; if so, corrections must be applied to adjust the limits for multiplicity. In this short note, we discuss approaches for doing so with quantal response data.
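For quantal response data, the point estimate of a benchmark dose has a closed form under common parametric models. The sketch below uses a logistic dose-response model with the extra-risk definition of the benchmark response (BMR); the model choice and parameter values are illustrative, and the confidence-limit and multiplicity machinery the note discusses sits on top of this.

```python
import math

def logistic_risk(a, b, dose):
    """Logistic dose-response model: P(response at dose) = 1/(1+exp(-(a+b*dose)))."""
    return 1.0 / (1.0 + math.exp(-(a + b * dose)))

def benchmark_dose(a, b, bmr=0.10):
    """Benchmark dose under extra risk: solve for the dose at which
    P(d) = p0 + (1 - p0) * bmr, where p0 = P(0) is background risk.
    Inverting the logistic gives a + b*d = logit(p_star)."""
    p0 = logistic_risk(a, b, 0.0)
    p_star = p0 + (1.0 - p0) * bmr
    return (math.log(p_star / (1.0 - p_star)) - a) / b
```

In a real benchmark analysis one would then report a lower confidence limit on this dose (the BMDL), and, as the note points out, adjust that limit when several BMR levels are examined simultaneously.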
