Similar Articles
20 similar articles found (search time: 31 ms)
1.
Drug combination trials are increasingly common in clinical research. However, very few methods have been developed to consider toxicity attributions in the dose escalation process. We are motivated by a trial in which the clinician is able to identify certain toxicities that can be attributed to one of the agents. We present a Bayesian adaptive design in which toxicity attributions are modeled via copula regression and the maximum tolerated dose (MTD) curve is estimated as a function of model parameters. The dose escalation algorithm uses cohorts of two patients, following the continual reassessment method (CRM) scheme, where at each stage of the trial, we search for the dose of one agent given the current dose of the other agent. The performance of the design is studied by evaluating its operating characteristics when the underlying model is either correctly specified or misspecified. We show that this method can be extended to accommodate discrete dose combinations.
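One way to see how a copula links two single-agent toxicity curves into a joint toxicity probability is the sketch below. The Clayton copula and the parameter value are illustrative assumptions only, not the copula regression model used in the design above.

```python
def clayton_joint_tox(p1, p2, gamma=1.0):
    """Probability of toxicity from either of two agents, obtained by
    joining the single-agent toxicity probabilities p1, p2 through a
    Clayton copula on the non-toxicity probabilities.
    gamma > 0 controls the dependence (illustrative choice)."""
    s1, s2 = 1.0 - p1, 1.0 - p2  # single-agent non-toxicity probabilities
    # Clayton copula C(s1, s2); joint toxicity = 1 - C(s1, s2)
    return 1.0 - (s1 ** (-gamma) + s2 ** (-gamma) - 1.0) ** (-1.0 / gamma)
```

Note that when one agent's dose contributes no toxicity, the joint probability collapses to the other agent's marginal, which is the property that makes copula models attractive here.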

2.
We propose an adaptive two-stage Bayesian design for finding one or more acceptable dose combinations of two cytotoxic agents used together in a Phase I clinical trial. The method requires that each of the two agents has been studied previously as a single agent, which is almost invariably the case in practice. A parametric model is assumed for the probability of toxicity as a function of the two doses. Informative priors for parameters characterizing the single-agent toxicity probability curves are either elicited from the physician(s) planning the trial or obtained from historical data, and vague priors are assumed for parameters characterizing two-agent interactions. A method for eliciting the single-agent parameter priors is described. The design is applied to a trial of gemcitabine and cyclophosphamide, and a simulation study is presented.

3.
Yin G, Yuan Y. Biometrics 2009, 65(3):866-875
Two-agent combination trials have recently attracted enormous attention in oncology research. There are several strong motivations for combining different agents in a treatment: to induce the synergistic treatment effect, to increase the dose intensity with nonoverlapping toxicities, and to target different tumor cell susceptibilities. To accommodate this growing trend in clinical trials, we propose a Bayesian adaptive design for dose finding based on latent 2 × 2 tables. In the search for the maximum tolerated dose combination, we continuously update the posterior estimates for the unknown parameters associated with marginal probabilities and the correlation parameter based on the data from successive patients. By reordering the dose toxicity probabilities in the two-dimensional space, we assign each coming cohort of patients to the most appropriate dose combination. We conduct extensive simulation studies to examine the operating characteristics of the proposed method under various practical scenarios. Finally, we illustrate our dose-finding procedure with a clinical trial of agent combinations at M. D. Anderson Cancer Center.

4.
Toxicity screening and testing of chemical mixtures for interaction effects is a potentially onerous task due to the sheer volume of combinations that may be of interest. We propose an economical approach for assessing the interaction effects of chemical mixtures that is guided by risk-based considerations. We describe the statistical underpinnings of the approach and use examples from the published literature to illustrate concepts of local versus global mixture assessment. Our approach employs a sequential testing procedure to find the dose combinations that define the dose boundary for a specified acceptable risk level. The first test is conducted for a dose combination consisting of the acceptable doses of each individual chemical in the mixture. The outcome of this first test indicates the dose combination that should be tested next. Continuing in this manner, the boundary of dose combinations for the specified acceptable risk level can be approximated based on measurements for relatively few dose combinations. Dose combinations on one side of the boundary would have responses less than the response associated with the acceptable risk level, and dose combinations on the boundary would be acceptable levels of exposure for the mixture.
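Assuming a monotone response along a fixed-ratio ray through the mixture space, the sequential idea of letting each test outcome pick the next dose combination can be caricatured as a bisection search for the boundary dose. The response function and target below are hypothetical stand-ins for actual toxicity tests.

```python
def find_boundary_dose(response, target, d_lo=0.0, d_hi=1.0, tol=1e-4):
    """Bisection along a fixed-ratio ray of a mixture: locate the total
    dose whose (monotone increasing) response equals the acceptable-risk
    target. Each evaluation of `response` plays the role of one test in
    the sequential procedure; this is an illustrative sketch, not the
    article's statistical testing rule."""
    while d_hi - d_lo > tol:
        mid = 0.5 * (d_lo + d_hi)
        if response(mid) < target:
            d_lo = mid  # below the boundary: test a higher combination next
        else:
            d_hi = mid  # at or above the boundary: test a lower combination
    return 0.5 * (d_lo + d_hi)
```

Repeating such a search along several rays traces out an approximation to the whole acceptable-risk boundary from relatively few tested combinations.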

5.
Braun TM, Yuan Z, Thall PF. Biometrics 2005, 61(2):335-343
Most phase I clinical trials are designed to determine a maximum-tolerated dose (MTD) for one initial administration or treatment course of a cytotoxic experimental agent. Toxicity usually is defined as the indicator of whether one or more particular adverse events occur within a short time period from the start of therapy. However, physicians often administer an agent to the patient repeatedly and monitor long-term toxicity due to cumulative effects. We propose a new method for such settings. It is based on the time to toxicity rather than a binary outcome, and the goal is to determine a maximum-tolerated schedule (MTS) rather than a conventional MTD. The model and method account for a patient's entire sequence of administrations, with the overall hazard of toxicity modeled as the sum of a sequence of hazards, each associated with one administration. Data monitoring and decision making are done continuously throughout the trial. We illustrate the method with an allogeneic bone marrow transplantation (BMT) trial to determine how long a recombinant human growth factor can be administered as prophylaxis for acute graft-versus-host disease (aGVHD), and we present a simulation study in the context of this trial.
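The additive-hazard idea (each administration contributes its own hazard to the overall hazard of toxicity) can be sketched as follows. The constant per-administration hazard is a deliberate simplification; the actual method uses a richer parametric hazard and continuous monitoring.

```python
import math

def tox_probability(t, admin_times, hazard_per_admin):
    """P(toxicity by time t) when the overall hazard is the sum of
    per-administration hazards. Each administration at time s contributes
    a constant hazard `hazard_per_admin` from s onward, so its cumulative
    hazard at time t is hazard_per_admin * (t - s). Illustrative only."""
    cumulative = sum(hazard_per_admin * max(0.0, t - s) for s in admin_times)
    return 1.0 - math.exp(-cumulative)
```

Adding an administration to the schedule can only increase the cumulative hazard, which is exactly why a schedule, rather than a single dose, is the quantity being calibrated.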

6.
Wages NA, Conaway MR, O'Quigley J. Biometrics 2011, 67(4):1555-1563
Much of the statistical methodology underlying the experimental design of phase 1 trials in oncology is intended for studies involving a single cytotoxic agent. The goal of these studies is to estimate the maximally tolerated dose, the highest dose that can be administered with an acceptable level of toxicity. A fundamental assumption of these methods is monotonicity of the dose–toxicity curve. This is a reasonable assumption for single-agent trials in which the administration of greater doses of the agent can be expected to produce dose-limiting toxicities in increasing proportions of patients. When studying multiple agents, the assumption may not hold because the ordering of the toxicity probabilities could possibly be unknown for several of the available drug combinations. At the same time, some of the orderings are known and so we describe the whole situation as that of a partial ordering. In this article, we propose a new two-dimensional dose-finding method for multiple-agent trials that simplifies to the continual reassessment method (CRM), introduced by O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33–48), when the ordering is fully known. This design enables us to relax the assumption of a monotonic dose–toxicity curve. We compare our approach and some simulation results to a CRM design in which the ordering is known as well as to other suggestions for partial orders.
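For readers unfamiliar with the CRM that this partial-ordering design generalizes, a minimal one-parameter power-model version is sketched below, with a normal prior on the model parameter and a grid approximation to the posterior. The skeleton, prior variance, and target are textbook-style illustrative choices, not those of the article.

```python
import math

def crm_recommend(skeleton, data, target=0.25):
    """One-parameter power-model CRM: dose i has toxicity probability
    skeleton[i] ** exp(a), with a ~ N(0, 1.34) prior (a common default).
    `data` is a list of (dose_index, toxicity_indicator) pairs.
    Returns the index of the dose whose posterior mean toxicity
    probability is closest to `target`. Grid approximation for clarity."""
    grid = [-3 + 6 * k / 400 for k in range(401)]

    def prior(a):
        return math.exp(-a * a / (2 * 1.34))  # unnormalized N(0, 1.34)

    def likelihood(a):
        L = 1.0
        for i, y in data:
            p = skeleton[i] ** math.exp(a)
            L *= p if y else (1.0 - p)
        return L

    w = [prior(a) * likelihood(a) for a in grid]
    Z = sum(w)
    post_mean_tox = [
        sum(wi * (s ** math.exp(a)) for a, wi in zip(grid, w)) / Z
        for s in skeleton
    ]
    return min(range(len(skeleton)),
               key=lambda i: abs(post_mean_tox[i] - target))
```

With repeated toxicities the recommendation drops toward the lowest dose; with a run of non-toxic outcomes it escalates, which is the behavior the partial-ordering extension must reproduce under each candidate ordering.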

7.
Optimizing combination chemotherapy by controlling drug ratios
Cancer chemotherapy treatments typically employ drug combinations in which the dose of each agent is pushed to the brink of unacceptable toxicity; however, emerging evidence indicates that this approach may not be providing optimal efficacy due to the manner in which drugs interact. Specifically, whereas certain ratios of combined drugs can be synergistic, other ratios of the same agents may be antagonistic, implying that the most efficacious combinations may be those that utilize certain agents at reduced doses. Advances in nano-scale drug delivery vehicles now enable the translation of in vitro information on synergistic drug ratios into improved anticancer combination therapies in which the desired drug ratio can be controlled and maintained following administration in vivo, so that synergistic effects can be exploited. This "ratiometric" approach to combination chemotherapy opens new opportunities to enhance the effectiveness of existing and future treatment regimens across a spectrum of human diseases.

8.
We propose a hierarchical model for the probability of dose-limiting toxicity (DLT) for combinations of doses of two therapeutic agents. We apply this model to an adaptive Bayesian trial algorithm whose goal is to identify combinations with DLT rates close to a prespecified target rate. We describe methods for generating prior distributions for the parameters in our model from a basic set of information elicited from clinical investigators. We survey the performance of our algorithm in a series of simulations of a hypothetical trial that examines combinations of four doses of two agents. We also compare the performance of our approach to two existing methods and assess the sensitivity of our approach to the chosen prior distribution.

9.
Benchmark analysis is a widely used tool in biomedical and environmental risk assessment. Therein, estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a prespecified benchmark response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This paper demonstrates how the benchmark modeling paradigm can be expanded from the single-agent setting to joint-action, two-agent studies. Focus is on continuous response outcomes. Extending the single-exposure setting, representations of risk are based on a joint-action dose–response model involving both agents. Based on such a model, the concept of a benchmark profile—a two-dimensional analog of the single-dose BMD at which both agents achieve the specified BMR—is defined for use in quantitative risk characterization and assessment.
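With a linear joint-action model for a continuous response, the benchmark profile reduces to a straight line in the dose plane. The sketch below, with hypothetical coefficients, illustrates the concept of the two-dimensional analog of a BMD; the article's dose–response models are more general.

```python
def benchmark_profile(b0, b1, b2, bmr, d1_grid):
    """Benchmark profile for the linear joint-action model
    m(d1, d2) = b0 + b1*d1 + b2*d2 with relative benchmark response
    `bmr`: the set of (d1, d2) pairs satisfying
    b1*d1 + b2*d2 = bmr * b0. Returns the nonnegative-dose portion of
    the profile evaluated over `d1_grid`. Coefficients are hypothetical."""
    rhs = bmr * b0
    profile = []
    for d1 in d1_grid:
        d2 = (rhs - b1 * d1) / b2
        if d2 >= 0:
            profile.append((d1, d2))
    return profile
```

Every point on the profile delivers the same benchmark response, so the curve plays the role that a single BMD number plays in the one-agent setting.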

10.
In some clinical trials or clinical practice, the therapeutic agent is administered repeatedly, and doses are adjusted in each patient based on repeatedly measured continuous responses, to maintain the response levels in a target range. Because a lower dose tends to be selected for patients with a better outcome, simple summarizations may wrongly show a better outcome for the lower dose, producing an incorrect dose–response relationship. In this study, we consider the dose–response relationship under these situations. We show that maximum-likelihood estimates are consistent without modeling the dose-modification mechanisms when the selection of the dose as a time-dependent covariate is based only on observed, but not on unobserved, responses, and measurements are generated based on administered doses. We confirmed this property by performing simulation studies under several dose-modification mechanisms. We examined an autoregressive linear mixed effects model. The model represents profiles approaching each patient's asymptote when identical doses are repeatedly administered. The model takes into account the previous dose history and provides a dose–response relationship of the asymptote as a summary measure. We also examined a linear mixed effects model assuming all responses are measured at steady state. In the simulation studies, the estimates of both the models were unbiased under the dose modification based on observed responses, but biased under the dose modification based on unobserved responses. In conclusion, the maximum-likelihood estimates of the dose–response relationship are consistent under the dose modification based only on observed responses.
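A stylized version of an autoregressive model whose expected response approaches a dose-dependent asymptote under repeated identical dosing is given below. The coefficients are hypothetical and the random effects of the article's mixed model are omitted; only the asymptote-approaching mean structure is illustrated.

```python
def expected_profile(doses, y0, rho, b0, b1):
    """Expected responses under the simple autoregressive recursion
    y_t = rho * y_{t-1} + (1 - rho) * (b0 + b1 * dose_t), 0 < rho < 1.
    When the same dose is repeated, y_t converges geometrically to the
    asymptote b0 + b1 * dose, mirroring the profile behavior described
    above. Illustrative sketch with hypothetical coefficients."""
    ys, y = [], y0
    for d in doses:
        y = rho * y + (1 - rho) * (b0 + b1 * d)
        ys.append(y)
    return ys
```

The asymptote b0 + b1 * dose is the summary dose–response relationship: the previous dose history only determines how far along the approach to it each measurement sits.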

11.
Recent success of sequential administration of immunotherapy following radiotherapy (RT), often referred to as immunoRT, has sparked the urgent need for novel clinical trial designs to accommodate the unique features of immunoRT. For this purpose, we propose a Bayesian phase I/II design for immunotherapy administered after standard-dose RT to identify the optimal dose that is personalized for each patient according to his/her measurements of PD-L1 expression at baseline and post-RT. We model the immune response, toxicity, and efficacy as functions of dose and patient's baseline and post-RT PD-L1 expression profile. We quantify the desirability of the dose using a utility function and propose a two-stage dose-finding algorithm to find the personalized optimal dose. Simulation studies show that our proposed design has good operating characteristics, with a high probability of identifying the personalized optimal dose.

12.
Huang X, Biswas S, Oki Y, Issa JP, Berry DA. Biometrics 2007, 63(2):429-436
The use of multiple drugs in a single clinical trial or as a therapeutic strategy has become common, particularly in the treatment of cancer. Because traditional trials are designed to evaluate one agent at a time, the evaluation of therapies in combination requires specialized trial designs. In place of the traditional separate phase I and II trials, we propose using a parallel phase I/II clinical trial to evaluate simultaneously the safety and efficacy of combination dose levels, and select the optimal combination dose. The trial is started with an initial period of dose escalation, then patients are randomly assigned to admissible dose levels. These dose levels are compared with each other. Bayesian posterior probabilities are used in the randomization to adaptively assign more patients to doses with higher efficacy levels. Combination doses with lower efficacy are temporarily closed and those with intolerable toxicity are eliminated from the trial. The trial is stopped if the posterior probability for safety, efficacy, or futility crosses a prespecified boundary. For illustration, we apply the design to a combination chemotherapy trial for leukemia. We use simulation studies to assess the operating characteristics of the parallel phase I/II trial design, and compare it to a conventional design for a standard phase I and phase II trial. The simulations show that the proposed design saves sample size, has better power, and efficiently assigns more patients to doses with higher efficacy levels.
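The posterior-weighted randomization step can be sketched with independent Beta-Binomial efficacy models, one per admissible combination dose. Assigning patients in proportion to posterior mean efficacy is an illustrative rule, not the authors' exact allocation probability.

```python
import random

def adaptive_assign(successes, failures, a0=1.0, b0=1.0, rng=None):
    """Assign the next patient among admissible dose levels with
    probability proportional to the posterior mean efficacy under a
    Beta(a0, b0) prior on each dose's response rate. `successes` and
    `failures` are per-dose counts observed so far. Sketch only."""
    rng = rng or random.Random(0)
    means = [(a0 + s) / (a0 + b0 + s + f)
             for s, f in zip(successes, failures)]
    total = sum(means)
    u, acc = rng.random() * total, 0.0
    for i, m in enumerate(means):
        acc += m
        if u <= acc:
            return i
    return len(means) - 1  # guard against floating-point rounding
```

Doses with more observed responses accumulate higher posterior means and therefore receive more patients, which is the self-reinforcing behavior the design exploits.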

13.
Hu B, Ji Y, Tsui KW. Biometrics 2008, 64(4):1223-1230
Inverse dose-response estimation refers to the inference of an effective dose of some agent that gives a desired probability of response, say 0.5. We consider inverse dose response for two agents, an application that has not received much attention in the literature. Through the posterior profiling technique (Hsu, 1995, The Canadian Journal of Statistics 23, 399-410), we propose a Bayesian method in which we approximate the marginal posterior distribution of an effective dose using a profile posterior distribution, and obtain the maximum a posteriori (MAP) estimate for the effective dose. We then employ an adaptive direction sampling algorithm to obtain the highest posterior density (HPD) credible region for the effective dose. Using the MAP and HPD estimates, investigators will be able to simultaneously calibrate the levels of two agents in dose-response studies. We illustrate our proposed Bayesian method through a simulation study and two practical examples.

14.
The effects of the combinations of dexmedetomidine-fentanyl and dexmedetomidine-diazepam on the righting reflex were studied in rats. The doses that block the righting reflex for the agents given alone and for their combinations were determined with a probit procedure and compared with an isobolographic analysis. The interactions between dexmedetomidine and fentanyl or diazepam were found to be synergistic. In the dexmedetomidine-diazepam combination studies, less than one-fourth of the single drug dose (for each of two agents) was needed to produce the required effect. These data confirm synergistic interactions between dexmedetomidine and fentanyl or diazepam in producing hypnotic-anesthetic action.
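Isobolographic analysis summarizes a combination with an interaction index: the sum of each combination dose expressed as a fraction of that agent's single-agent effective dose. Values below 1 indicate synergy, 1 additivity, and above 1 antagonism. The doses and effective doses in the example are hypothetical.

```python
def interaction_index(dose_a, dose_b, ed50_a, ed50_b):
    """Isobolographic interaction index for a fixed combination:
    I = da/ED50_a + db/ED50_b, where (da, db) is the combination that
    produces the endpoint and ED50_a, ED50_b are the single-agent
    effective doses. I < 1: synergy; I = 1: additivity; I > 1:
    antagonism. Illustrative values only."""
    return dose_a / ed50_a + dose_b / ed50_b
```

A finding that "less than one-fourth of each single-agent dose" suffices corresponds to an index below 0.5, well inside the synergy region of the isobologram.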

15.
Fryer HR, McLean AR. PLoS ONE 2011, 6(8):e23664
Understanding the circumstances under which exposure to transmissible spongiform encephalopathies (TSEs) leads to infection is important for managing risks to public health. Based upon ideas in toxicology and radiology, it is plausible that exposure to harmful agents, including TSEs, is completely safe if the dose is low enough. However, the existence of a threshold, below which infection probability is zero has never been demonstrated experimentally. Here we explore this question by combining data and mathematical models that describe scrapie infections in mice following experimental challenge over a broad range of doses. We analyse data from 4338 mice inoculated at doses ranging over ten orders of magnitude. These data are compared to results from a within-host model in which prions accumulate according to a stochastic birth-death process. Crucially, this model assumes no threshold on the dose required for infection. Our data reveal that infection is possible at the very low dose of a 1000-fold dilution of the dose that infects half the challenged animals (ID50). Furthermore, the dose response curve closely matches that predicted by the model. These findings imply that there is no safe dose of prions and that assessments of the risk from low dose exposure are right to assume a linear relationship between dose and probability of infection. We also refine two common perceptions about TSE incubation periods: that their mean values decrease linearly with logarithmic decreases in dose and that they are highly reproducible between hosts. The model and data both show that the linear decrease in incubation period holds only for doses above the ID50. Furthermore, variability in incubation periods is greater than predicted by the model, not smaller. This result poses new questions about the sources of variability in prion incubation periods. It also provides insight into the limitations of the incubation period assay.
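A birth-death model with no dose threshold yields an infection probability that is linear in dose at low doses, consistent with the findings above. In the sketch, each of a Poisson number of initiating prions escapes stochastic extinction with probability 1 - death/birth; all parameter values are hypothetical, and the article's within-host model is richer.

```python
import math

def infection_probability(dose, uptake, birth, death):
    """P(infection) for a thresholdless birth-death model: the number of
    initiating prions is N ~ Poisson(uptake * dose), and each escapes
    extinction with probability 1 - death/birth (for birth > death), so
    P = 1 - exp(-uptake * dose * (1 - death/birth)).
    Linear in dose as dose -> 0: there is no safe dose."""
    p_escape = max(0.0, 1.0 - death / birth)
    return 1.0 - math.exp(-uptake * dose * p_escape)
```

Setting the expression equal to 0.5 recovers the ID50; any strictly positive dose gives a strictly positive infection probability, which is the no-threshold property the data support.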

16.
The effect of combinations of penicillin, tetracycline and rifampicin on R. prowazekii (the causative agent of typhus) and R. sibirica (the causative agent of tick-borne rickettsiosis of the North Asia) was studied. It was shown that tetracycline and penicillin used in combination had a summation effect on both R. sibirica and R. prowazekii. The dose of each antibiotic was 2 times lower than the doses of the antibiotics used alone. However, R. sibirica was less sensitive to this combination than R. prowazekii: the minimum rickettsiocidic doses of the combination were 0.5 mg of tetracycline + 10000 units of penicillin per embryo with respect to R. sibirica and 0.1 mg of tetracycline + 10000 units of penicillin per embryo with respect to R. prowazekii. The combinations of rifampicin with penicillin or tetracycline in the concentrations used had no rickettsiocidic effect on either R. sibirica or R. prowazekii. However, it should be noted that these combinations had a synergistic action and provided a rickettsiostatic effect on R. prowazekii: the dose of rifampicin in its combination with penicillin was decreased 10 times and in the combination of rifampicin with tetracycline the doses of both rifampicin and tetracycline were decreased 10 times. Still, penicillin even in a dose of 20000 units per embryo had only a rickettsiostatic effect on R. sibirica and R. prowazekii.

17.
The number of cholera vaccine doses required for immunity is a constraint during epidemic cholera. Protective immunity following one dose of multiple Vibrio cholerae (Vc) colonization factors (Inaba LPS El Tor, TcpA, TcpF, and CBP-A) has not been directly tested even though individual Vc colonization factors are the protective antigens. Inaba LPS consistently induced vibriocidal and protective antibodies at low doses. A LPS booster, regardless of dose, induced highly protective secondary sera. Vc protein immunogens emulsified in adjuvant were variably immunogenic. CBP-A was proficient at inducing high IgG serum titers compared with TcpA or TcpF. After one immunization, TcpA or TcpF antisera protected only when the toxin co-regulated pilus operon of the challenge Vc was induced by AKI culture conditions. CBP-A was not consistently able to induce protection independent of the challenge Vc culture conditions. These results reveal the need to understand how best to leverage the 'right' Vc immunogens to obtain durable immunity after one dose of a cholera subunit vaccine. The dominance of the protective anti-LPS antibody response over other Vc antigen antibody responses needs to be controlled to find other protective antigens that can add to anti-LPS antibody-based immunity.

18.
Experimental Zika virus infection in non-human primates results in acute viral load dynamics that can be well-described by mathematical models. The inoculum dose that would be received in a natural infection setting is likely lower than the experimental infections and how this difference affects the viral dynamics and immune response is unclear. Here we study a dataset of experimental infection of non-human primates with a range of doses of Zika virus. We develop new models of infection incorporating both an innate immune response and viral interference with that response. We find that such a model explains the data better than models with no interaction between virus and the immune response. We also find that larger inoculum doses lead to faster dynamics of infection, but approximately the same total amount of viral production.

19.
Müller HG, Schmitt T. Biometrics 1990, 46(1):117-129
We address the question of how to choose the number of doses when estimating the median effective dose (ED50) of a symmetric dose-response curve by the maximum likelihood method. One criterion for this choice is the asymptotic mean squared error (determined by the asymptotic variance) of the estimated ED50 of a dose-response relationship with qualitative responses. The choice is based on an analysis of the inverse of the information matrix. We find that in many cases, assuming various symmetric dose-response curves and various design densities, choice of as many doses as possible, i.e., the allocation of one subject per dose, is optimal. The theoretical and numerical results are supported by simulations and by an example concerning choice of design in an adolescence study.

20.
Intravascular brachytherapy (IVBT) has rapidly gained acceptance as a new treatment modality for reducing restenosis and improving the success rate of percutaneous transluminal coronary angioplasty (PTCA). Recent clinical results on patients treated with beta-emitting 32P stents suggest that radiation reduces in-stent restenosis but may exacerbate neointimal growth at the edges of the stents. This has been referred to as the "candy wrapper effect." It is well known that radioactive stents yield extremely inhomogeneous dose distributions, with low doses delivered to tissues in between stent struts, at the ends of the stent, and also at depth. Some animal model studies suggest that low doses of radiation may stimulate rather than inhibit neointimal growth in an injured vessel, and it is hypothesized that dose inhomogeneity at the ends of a stent may contribute to the candy wrapper effect. We present here a theoretical study comparing dose distributions for beta stents vs. gamma stents; "dumbbell" radioactive loaded stents vs. uniformly loaded stents; and stents with alternate strut design. Calculations demonstrate that dose inhomogeneities between stent struts, at the ends of stents, and at depth can be reduced by better stent design and isotope selection. Prior to the introduction of radioactive stents, criteria for stent design included factors such as trackability, flexibility, strength, etc. We show here that if stent design also includes criteria for strut shape and spacing, improved dose distributions are possible, which in turn could reduce the candy wrapper effect.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号