1.
We propose a novel response-adaptive randomization procedure for multi-armed trials with continuous outcomes that are assumed to be normally distributed. Our proposed rule is non-myopic and oriented toward a patient-benefit objective, yet remains computationally feasible. We derive our response-adaptive algorithm from the Gittins index for the multi-armed bandit problem, as a modification of the method first introduced in Villar et al. (Biometrics, 71, pp. 969-978). The resulting procedure can be implemented under either known or unknown variance. We illustrate the proposed procedure by simulation in the context of phase II cancer trials. Our results show that, in a multi-armed setting, there are efficiency and patient-benefit gains from using a response-adaptive allocation procedure with a continuous endpoint instead of a binary one. These gains persist even if an anticipated low rate of missing data due to deaths, dropouts, or complete responses is imputed online through a procedure first introduced in this paper. Additionally, we discuss response-adaptive designs that outperform the traditional equal-randomization design in the multi-armed trial context in terms of both efficiency and patient-benefit measures.
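A minimal sketch of response-adaptive allocation for normally distributed outcomes, for orientation only: it uses a simple upper-confidence-bound index as a computationally cheap stand-in for the Gittins index described in the abstract, and the arm names, effect sizes, and tuning constant are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-arm trial with normally distributed outcomes.
true_means = {"control": 0.0, "arm_A": 0.3, "arm_B": 0.6}
sigma = 1.0
n_patients = 150

obs = {arm: [] for arm in true_means}

def ucb_index(values, t, c=2.0):
    """Upper-confidence-bound index: a crude stand-in for the Gittins index."""
    n = len(values)
    if n == 0:
        return np.inf  # force at least one observation per arm
    return np.mean(values) + c * np.sqrt(np.log(t + 1) / n)

for t in range(n_patients):
    # Allocate the next patient to the arm with the highest current index.
    arm = max(obs, key=lambda a: ucb_index(obs[a], t))
    outcome = rng.normal(true_means[arm], sigma)
    obs[arm].append(outcome)

for arm, values in obs.items():
    print(f"{arm}: n={len(values)}, mean={np.mean(values):.2f}")
```

As expected for an index policy, most patients end up on the better-performing arms while each arm retains enough observations for estimation.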
2.
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoint of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim, given the observed interim data, we calculate the Bayesian predictive probability of success, should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
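For the special case of a single binary endpoint, the predictive-probability idea reduces to a beta-binomial calculation, sketched below. The Beta(1, 1) prior, null rate, posterior cut-off, and sample sizes are illustrative assumptions, not values from the OPP paper.

```python
from scipy.stats import beta, betabinom

def predictive_probability(x, n, n_max, p0=0.3, theta_T=0.9, a=1.0, b=1.0):
    """Bayesian predictive probability of trial success at an interim look.

    x       responses observed so far among n patients
    n_max   maximum planned sample size
    p0      null response rate; success means P(p > p0 | final data) > theta_T
    a, b    Beta(a, b) prior on the response rate
    """
    m = n_max - n  # future patients still to be enrolled
    pp = 0.0
    for y in range(m + 1):
        # Posterior predictive probability of y future responses.
        w = betabinom.pmf(y, m, a + x, b + n - x)
        # Posterior probability of efficacy given all n_max patients.
        post = 1.0 - beta.cdf(p0, a + x + y, b + n_max - x - y)
        pp += w * (post > theta_T)
    return pp

# Interim look: 12 responses among 30 patients, 50 patients planned in total.
print(f"predictive probability of success: {predictive_probability(12, 30, 50):.3f}")
```

A go/no-go rule then compares this predictive probability against prespecified futility and efficacy boundaries.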
3.
There is growing interest in integrated Phase I/II oncology clinical trials involving molecularly targeted agents (MTA). Among the main challenges of these trials are nontrivial dose–efficacy relationships and the administration of MTAs in combination with other agents. While some designs have recently been proposed for such Phase I/II trials, most of them consider only binary toxicity and efficacy endpoints. At the same time, a continuous efficacy endpoint can carry more information about the agent's mechanism of action, but corresponding designs have received very limited attention in the literature. In this work, an extension of a recently developed information-theoretic design to the case of a continuous efficacy endpoint is proposed. The design transforms the continuous outcome using the logistic transformation and uses an information-theoretic argument to govern selection during the trial. The performance of the design is investigated in settings of single-agent and dual-agent trials. It is found that the novel design leads to substantial improvements in operating characteristics compared to a model-based alternative under scenarios with nonmonotonic dose/combination–efficacy relationships. The robustness of the design to missing/delayed efficacy responses and to correlation between the toxicity and efficacy endpoints is also investigated.
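A small sketch of the logistic-transformation step, paired with a generic Bernoulli Kullback–Leibler distance to a target efficacy level as the selection criterion. The criterion, target, transformation constants, and dose–efficacy values are all assumptions for illustration; this is not the paper's information-theoretic rule.

```python
import numpy as np

def logistic_transform(y, center, scale):
    """Map a continuous efficacy outcome onto (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(y - center) / scale))

def bernoulli_kl(p, q, eps=1e-9):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# Hypothetical mean continuous efficacy per dose (nonmonotonic, as with MTAs).
dose_means = {1: 0.8, 2: 1.6, 3: 1.4, 4: 1.1}
center, scale = 1.0, 0.5    # assumed transformation constants
target = 0.9                # desired transformed efficacy level

scores = {d: bernoulli_kl(logistic_transform(m, center, scale), target)
          for d, m in dose_means.items()}
best = min(scores, key=scores.get)
print("selected dose:", best, "scores:", {d: round(s, 3) for d, s in scores.items()})
```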
4.
In the era of targeted therapies and immunotherapies, the traditional drug development paradigm of testing one drug at a time in one indication has become increasingly inefficient. Motivated by a real-world application, we propose a master-protocol-based Bayesian platform trial design with mixed endpoints (PDME) to simultaneously evaluate multiple drugs in multiple indications, where different subsets of efficacy measures (e.g., objective response and landmark progression-free survival) may be used by different indications as single or multiple endpoints. We propose a Bayesian hierarchical model to accommodate mixed endpoints and reflect the trial structure of indications nested within treatments. We develop a two-stage approach that first clusters the indications into homogeneous subgroups and then applies the Bayesian hierarchical model to each subgroup to achieve precise information borrowing. Patients are enrolled in a group-sequential way and adaptively assigned to treatments according to their efficacy estimates. At each interim analysis, the posterior probabilities that the treatment effect exceeds prespecified clinically relevant thresholds are used to drop ineffective treatments and "graduate" effective treatments. Simulations show that the PDME design has desirable operating characteristics compared to existing methods.
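As a simplified illustration of the interim decision rule, the posterior probability that a response rate exceeds a clinically relevant threshold has a closed form under a beta-binomial model. The prior, threshold, and drop/graduate cut-offs below are assumptions, and the sketch ignores the hierarchical borrowing and mixed endpoints of the PDME design.

```python
from scipy.stats import beta

def interim_decision(x, n, p_threshold=0.25, a=0.5, b=0.5,
                     futility_cut=0.05, graduation_cut=0.95):
    """Drop or graduate a treatment-indication arm at an interim analysis
    based on P(response rate > p_threshold | data) under a Beta(a, b) prior."""
    post_prob = 1.0 - beta.cdf(p_threshold, a + x, b + n - x)
    if post_prob < futility_cut:
        return post_prob, "drop (futile)"
    if post_prob > graduation_cut:
        return post_prob, "graduate (effective)"
    return post_prob, "continue enrolment"

for x, n in [(2, 20), (6, 20), (11, 20)]:
    prob, action = interim_decision(x, n)
    print(f"{x}/{n} responses: P(p > 0.25 | data) = {prob:.3f} -> {action}")
```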
5.
Huan Yin, Weizhen Wang, Zhongzhan Zhang. Biometrical Journal (Biometrische Zeitschrift), 2019, 61(6): 1462-1476
When establishing a treatment in clinical trials, it is important to evaluate both effectiveness and toxicity. In phase II clinical trials, multinomial data are collected in m-stage designs, especially the two-stage (m = 2) design. Exact tests on the two proportions, the response rate and the nontoxicity rate, should be employed because of the limited sample sizes. However, existing tests use certain parameter configurations at the boundary of the null hypothesis space to determine rejection regions, without showing that the maximum Type I error rate is actually achieved on that boundary. In this paper, we show that the power function of each test in a large family of tests is nondecreasing in both proportions; identify the parameter configurations at which the maximum Type I error rate and the minimum power are achieved and derive level-α tests; provide optimal two-stage designs with the smallest expected total sample size, together with the optimization algorithm; and extend the results to designs with more than two stages. R code is given in the Supporting Information.
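A brief sketch of the kind of exact calculation at issue: for a single-stage test that rejects when both the response count and the nontoxicity count are large, the exact rejection probability is evaluated over the null region to locate the maximum Type I error. The sample size, critical values, null rates, and the simplifying assumption of independent binomial counts (the paper works with correlated multinomial data and two-stage designs) are all illustrative.

```python
import numpy as np
from scipy.stats import binom

n, r_cut, s_cut = 25, 10, 18   # assumed sample size and critical values
p1_null, p2_null = 0.3, 0.6    # null response rate and nontoxicity rate

def reject_prob(p1, p2):
    """P(X >= r_cut and Y >= s_cut) with X ~ Bin(n, p1), Y ~ Bin(n, p2),
    treated as independent for simplicity."""
    return binom.sf(r_cut - 1, n, p1) * binom.sf(s_cut - 1, n, p2)

# Scan the null region {p1 <= p1_null or p2 <= p2_null} for the maximum
# rejection probability, i.e., the exact Type I error of the test.
grid = np.linspace(0.01, 0.99, 99)
type1 = max(reject_prob(p1, p2)
            for p1 in grid for p2 in grid
            if p1 <= p1_null or p2 <= p2_null)
print(f"exact Type I error over the null region: {type1:.4f}")
```

Because the rejection probability is nondecreasing in both proportions, the maximum found by this brute-force scan lies on the boundary of the null region, which is exactly the property the paper establishes formally.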
6.
Satya Prakash Singh. Biometrical Journal (Biometrische Zeitschrift), 2024, 66(1): 2300168
Recently, there has been growing interest in designing cluster trials using the stepped wedge design (SWD). An SWD is a type of cluster-crossover design in which clusters of individuals are randomized unidirectionally from a control to an intervention at certain time points. The intraclass correlation coefficient (ICC), which measures the dependency of subjects within a cluster, plays an important role in the design and analysis of stepped wedge trials. In this paper, we discuss a Bayesian approach to address the dependency of the SWD on the ICC and propose robust Bayesian SWDs. The Bayesian design is shown to be more robust to misspecification of the parameter values than the locally optimal design. Designs are obtained for various choices of priors assigned to the ICC. A detailed sensitivity analysis is performed to assess the robustness of the proposed optimal designs. The power superiority of the Bayesian design over the commonly used balanced design is demonstrated numerically using hypothetical as well as real scenarios.
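A toy sketch of the prior-averaging idea behind a Bayesian design criterion. Here the criterion is the ordinary cluster-trial design effect 1 + (m - 1)ρ rather than the stepped-wedge-specific variance used in the paper, and the Beta prior on the ICC, cluster size, and baseline sample size are all assumptions.

```python
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(1)

m = 20              # assumed cluster size
n_individual = 400  # assumed sample size needed under individual randomization

def required_clusters(icc):
    """Clusters needed once the design effect 1 + (m - 1) * ICC is applied."""
    design_effect = 1.0 + (m - 1) * icc
    return np.ceil(n_individual * design_effect / m)

# Locally optimal planning: plug in a single point guess of the ICC.
point_guess = 0.02
print("clusters at point guess  :", required_clusters(point_guess))

# Bayesian planning: average the criterion over a prior on the ICC.
icc_draws = beta_dist.rvs(2, 48, size=10_000, random_state=rng)  # prior mean 0.04
print("expected clusters (prior):", np.ceil(required_clusters(icc_draws).mean()))
print("clusters if ICC is 0.10  :", required_clusters(0.10))
```

Averaging over the prior guards against the sharp under-sizing that occurs when the point guess of the ICC turns out to be too optimistic.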
7.
Given a randomized treatment Z, a clinical outcome Y, and a biomarker S measured some fixed time after Z is administered, we may be interested in addressing the surrogate endpoint problem by evaluating whether S can be used to reliably predict the effect of Z on Y. Several recent proposals for the statistical evaluation of surrogate value have been based on the framework of principal stratification. In this article, we consider two principal stratification estimands: joint risks and marginal risks. Joint risks measure causal associations (CAs) of treatment effects on S and Y, providing insight into the surrogate value of the biomarker, but are not statistically identifiable from vaccine trial data. Although marginal risks do not measure CAs of treatment effects, they nevertheless provide guidance for future research, and we describe a data collection scheme and assumptions under which the marginal risks are statistically identifiable. We show how different sets of assumptions affect the identifiability of these estimands; in particular, we depart from previous work by considering the consequences of relaxing the assumption of no individual treatment effects on Y before S is measured. Based on algebraic relationships between joint and marginal risks, we propose a sensitivity analysis approach for assessment of surrogate value, and show that in many cases the surrogate value of a biomarker may be hard to establish, even when the sample size is large.
8.
Johanna vander Spek, Larry Cosenza, Thasia Woodworth, Jean C. Nichols, John R. Murphy. Molecular and Cellular Biochemistry, 1994, 138(1-2): 151-156
We have used protein engineering and recombinant DNA methodologies to construct a fusion protein in which human interleukin-2 (IL-2) is genetically linked to the catalytic and transmembrane domains of diphtheria toxin. The fusion toxin, DAB486IL-2, is highly cytotoxic only for those cells that display the high-affinity interleukin-2 receptor (IL-2R) on their surface. In phase I/II clinical studies, intravenous administration of DAB486IL-2 has been found to be safe and well tolerated, and it may induce durable remissions in patients presenting with a variety of IL-2R-positive lymphomas.
9.
Most phase II screening designs available in the literature consider one treatment at a time, with each study considered in isolation. We propose a more systematic decision-making approach to the phase II screening process. The sequential design allows for greater efficiency and greater learning about treatments. The approach incorporates a Bayesian hierarchical model that combines information across several related studies in a formal way and improves estimation in small data sets by borrowing strength from other treatments. The design incorporates a utility function that includes sampling costs and possible future payoff. Computer simulations show that this method has a high probability of discarding treatments with low success rates and of moving treatments with high success rates to phase III trials.
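A minimal empirical-Bayes, method-of-moments sketch of borrowing strength across related studies; it is a crude stand-in for the fully Bayesian hierarchical model described above, and the response counts are hypothetical.

```python
import numpy as np

# Hypothetical phase II results for several related treatments:
# x responses among n patients for each treatment.
x = np.array([4, 7, 2, 9, 5])
n = np.array([20, 25, 15, 30, 22])

p_hat = x / n
m = p_hat.mean()
v = p_hat.var(ddof=1)

# Method-of-moments fit of a Beta(a, b) "parent" distribution to the observed
# rates (ignoring binomial sampling noise, so heterogeneity is overstated).
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# Shrunken (borrowed-strength) estimate for each treatment.
p_shrunk = (x + a) / (n + a + b)

for i, (raw, shrunk) in enumerate(zip(p_hat, p_shrunk), start=1):
    print(f"treatment {i}: raw rate {raw:.2f} -> shrunken rate {shrunk:.2f}")
```

Treatments with small sample sizes are pulled most strongly toward the overall mean, which is the stabilizing effect the hierarchical model exploits.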
10.
11.
Identifying a biomarker or treatment-dose threshold that marks a specified level of risk is an important problem, especially in clinical trials. In view of this goal, we consider a covariate-adjusted threshold-based interventional estimand, which happens to equal the binary treatment-specific mean estimand from the causal inference literature obtained by dichotomizing the continuous biomarker or treatment as above or below a threshold. The unadjusted version of this estimand was considered in Donovan et al. Expanding upon Stitelman et al., we show that this estimand, under certain conditions, identifies the expected outcome of a stochastic intervention that sets the treatment dose of all participants above the threshold. We propose a novel nonparametric efficient estimator for the covariate-adjusted threshold-response function for the case of informative outcome missingness, which utilizes machine learning and targeted minimum-loss estimation (TMLE). We prove that the estimator is efficient and characterize its asymptotic distribution and robustness properties. Construction of simultaneous 95% confidence bands for the threshold-specific estimand across a set of thresholds is discussed. In the Supporting Information, we discuss how to adjust our estimator when the biomarker is missing at random, as occurs in clinical trials with biased sampling designs, using inverse probability weighting. Efficiency and bias reduction of the proposed estimator are assessed in simulations. The methods are employed to estimate neutralizing antibody thresholds for virologically confirmed dengue risk in the CYD14 and CYD15 dengue vaccine trials.
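A minimal sketch of the unadjusted threshold-response idea: the mean outcome among participants whose marker is at or above a threshold, with inverse-probability weights for outcomes missing at random. This is far simpler than the covariate-adjusted TMLE estimator proposed in the paper, and the simulated data and logistic missingness model are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000

# Simulated marker (e.g., antibody titer) and binary outcome (e.g., disease).
marker = rng.normal(2.0, 1.0, n)
risk = 1.0 / (1.0 + np.exp(2.5 * (marker - 1.0)))   # risk decreases with marker
outcome = rng.binomial(1, risk)

# Outcome observed with probability depending on the marker (MAR missingness).
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 0.4 * marker)))
observed = rng.binomial(1, p_obs).astype(bool)

# Fit a logistic model for the probability of being observed.
X = sm.add_constant(marker)
obs_model = sm.GLM(observed.astype(float), X, family=sm.families.Binomial()).fit()
weights = 1.0 / obs_model.predict(X)

def threshold_response(v):
    """IPW estimate of E[outcome | marker >= v] using observed outcomes only."""
    idx = (marker >= v) & observed
    return np.sum(weights[idx] * outcome[idx]) / np.sum(weights[idx])

for v in [1.0, 2.0, 3.0]:
    print(f"threshold {v:.1f}: estimated risk above threshold = {threshold_response(v):.3f}")
```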
13.
The development and use of vaccine adjuvants
Edelman R. Molecular Biotechnology, 2002, 21(2): 129-148
Interest in vaccine adjuvants is intense and growing, because many of the new subunit vaccine candidates lack sufficient immunogenicity to be clinically useful. In this review, I have emphasized modern vaccine adjuvants injected parenterally, or administered orally, intranasally, or transcutaneously with licensed or experimental vaccines in humans. Every adjuvant has a complex and often multi-factorial immunological mechanism, usually poorly understood in vivo. Many determinants of adjuvanticity exist, and each adjuvanted vaccine is unique. Adjuvant safety is critical and can enhance, retard, or stop development of an adjuvanted vaccine. The choice of an adjuvant often depends upon expensive experimental trial and error, upon cost, and upon commercial availability. Extensive regulatory and administrative support is required to conduct clinical trials of adjuvanted vaccines. Finally, comparative adjuvant trials, where one antigen is formulated with different adjuvants and administered by a common protocol to animals and humans, can accelerate vaccine development.
14.
Despite their crucial role in the translation of pre-clinical research into new clinical applications, phase 1 trials involving patients continue to prompt ethical debate. At the heart of the controversy is the question of whether the risks of administering experimental drugs are therapeutically justified. We suggest that prior attempts to address this question have been muddled, in part because it cannot be answered adequately without first attending to the way labor is divided in managing risk in clinical trials. In what follows, we approach the question of therapeutic justification for phase 1 trials from the viewpoint of five different stakeholders: the drug regulatory authority, the IRB, the clinical investigator, the referring physician, and the patient. Our analysis shows that the question of therapeutic justification actually raises multiple questions corresponding to the roles and responsibilities of the different stakeholders involved. By attending to these contextual differences, we provide more coherent guidance for the ethical negotiation of risk in phase 1 trials involving patients. We close by discussing the implications of our argument for various perennial controversies in phase 1 trial practice.
15.
Wang Q, Jia R, Ye C, Garcia M, Li J, Hidalgo IJ. In Vitro Cellular & Developmental Biology - Animal, 2005, 41(3-4): 97-103
Uridine 5′-diphospho-glucuronosyltransferases (UGTs) and sulfotransferases (SULTs) are two phase II enzyme families that are actively involved in detoxification processes as well as in drug metabolism. Compared with cytochrome P450 enzymes, the role of UGTs and SULTs in drug metabolism has received little attention. Liver microsomes, S9 fractions, and cryopreserved hepatocytes from human, dog, cynomolgus monkey, mouse, and rat were used as matrices in this study. A single compound, 7-hydroxycoumarin (7-HC), along with the necessary cofactors, was dosed into each matrix and incubated at 37°C; the formation of two metabolites, 7-HC-glucuronide and 7-HC-sulfate, was determined by liquid chromatography–tandem mass spectrometry. Within the same species, UGT activities in microsomes and S9 fractions were comparable, whereas UGT activities in cryopreserved hepatocytes were lower than in the other matrices. SULT activities were much higher in S9 fractions than in cryopreserved hepatocytes and microsomes. Species differences in UGT and SULT activities were also observed. The results indicate that S9 fractions, microsomes, and cryopreserved hepatocytes may all be useful for studying UGT-mediated metabolism, whereas S9 fractions appear to be the most appropriate matrix for studying both UGT- and SULT-mediated metabolism. Species differences with respect to phase II metabolism also need to be taken into consideration when selecting an in vitro system to evaluate various aspects of drug metabolism.
16.
Carolina Scaraffuni Gomes, Jens-Uwe Repke, Michael Meyer. Engineering in Life Sciences, 2020, 20(3-4): 79-89
During leather manufacture, large amounts of chromium shavings, wet by-products of the leather industry, are produced worldwide. They are stable at temperatures of up to 110°C and resistant to enzymatic degradation, which prevents anaerobic digestion in a biogas plant. To date, chromium shavings have not been used industrially to produce biogas. To ease the enzymatic degradation necessary for biogas production, the native structure must first be denatured. In our projects, chromium shavings were pre-treated thermally and mechanically by extrusion and hydrothermal methods. In previous work, we intensively studied the use of these shavings to produce biogas at batch scale, and significant improvement was achieved when using pre-treated shavings. In this work, the process was scaled up in a continuous reactor using pre-treated and untreated chromium shavings to examine the feasibility of the method. By measuring different parameters during anaerobic digestion, namely organic matter, collagen content, and volatile fatty acid content, it was possible to show that higher methane production can be reached and a higher loading rate can be used when feeding the reactor with pre-treated rather than untreated chromium shavings, which means a more economical and efficient process in an industrial setting.
17.
In a quantal response study, there may be insufficient knowledge of the response relationship for the stimulus (or dose) levels to be chosen properly. Information from such a study can be scanty or even unreliable. A two-stage design is proposed for such studies, which can determine whether and how a follow-up (i.e., second-stage) study should be conducted to select additional stimulus levels to compensate for the scarcity of information in the initial study. These levels are determined by using optimal design theory and are based on the fitted model from the data in the initial study. Its advantages are demonstrated using a fishery study.
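A sketch of how follow-up stimulus levels might be chosen after an initial stage: fit a two-parameter logistic dose-response model to the first-stage data, then place second-stage levels at the locally D-optimal points, which for this model lie where the fitted response probability is approximately 0.176 and 0.824. The simulated first-stage data are assumptions, and the paper's actual optimality criterion may differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# First-stage data: doses chosen without much prior knowledge.
doses = np.repeat([1.0, 2.0, 4.0, 8.0], 10)
true_alpha, true_beta = -3.0, 1.0
p = 1.0 / (1.0 + np.exp(-(true_alpha + true_beta * doses)))
responses = rng.binomial(1, p)

# Fit the two-parameter logistic model to the initial study.
X = sm.add_constant(doses)
fit = sm.GLM(responses, X, family=sm.families.Binomial()).fit()
alpha_hat, beta_hat = fit.params

# Locally D-optimal second-stage doses for the logistic model sit where the
# fitted response probability is about 0.176 and 0.824 (logit = +/- 1.5434).
eta = 1.5434
stage2_doses = [(-eta - alpha_hat) / beta_hat, (eta - alpha_hat) / beta_hat]
print("fitted parameters:", alpha_hat.round(2), beta_hat.round(2))
print("suggested second-stage dose levels:", [round(d, 2) for d in stage2_doses])
```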
18.
There are several approaches to producing enantiomerically pure drug substances, such as recrystallization, catalytic processes (ligand- and enzyme-based), indirect chromatographic resolution, and direct chromatographic resolution. However, preparative chromatography with chiral stationary phases seems to be the most effective option for early-phase projects, where the time and resources spent on development need to be minimized to get drug candidates into clinical studies. We showed that, by following a well-defined process, chiral chromatography can be easily scaled up from an analytical system to a pilot-plant system. We also used the results from a multicolumn continuous chromatography (MCC) study to conclude that MCC can be a cost-effective production method for chiral manufacturing.
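A back-of-the-envelope sketch of a common chromatographic scale-up rule: keep the linear velocity and the loading per volume of stationary phase constant, so flow rate scales with the column cross-sectional area and injection mass with the column volume. The column dimensions and loadings below are illustrative assumptions, not values from the study.

```python
def scale_up(flow_analytical_ml_min, load_analytical_mg,
             d1_mm, l1_mm, d2_mm, l2_mm):
    """Scale flow rate and injection mass from an analytical column (d1 x l1)
    to a preparative column (d2 x l2), keeping linear velocity and loading
    per volume of stationary phase constant."""
    area_ratio = (d2_mm / d1_mm) ** 2
    volume_ratio = area_ratio * (l2_mm / l1_mm)
    return flow_analytical_ml_min * area_ratio, load_analytical_mg * volume_ratio

# Illustrative numbers: 4.6 x 250 mm analytical column -> 50 x 250 mm prep column.
flow, load = scale_up(flow_analytical_ml_min=1.0, load_analytical_mg=0.5,
                      d1_mm=4.6, l1_mm=250, d2_mm=50, l2_mm=250)
print(f"preparative flow rate ~ {flow:.0f} mL/min, injection ~ {load:.0f} mg")
```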
19.
20.
In this paper, we consider multiplicity testing approaches, mainly for phase 3 trials with two doses. We review a few available approaches and propose some new ones. The doses selected for phase 3 usually have the same or a similar efficacy profile, so they exhibit some degree of consistency in efficacy. We review the Hochberg procedure, the Bonferroni procedure, and a few consistency-adjusted procedures, and suggest new ones by applying the available procedures to the pooled dose and the high dose, the dose thought to be more efficacious of the two. The rationale is that the pooled dose and the high dose are more consistent than the two original doses when the high dose is more efficacious than the low dose. We compare all approaches via simulations and recommend a procedure combining 4A and the pooling approach. We also briefly discuss the testing strategy for trials with more than two doses.
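A small sketch of the basic multiplicity procedures discussed above for two dose-versus-control comparisons. The p-values are illustrative, and applying the procedures to the pooled dose and the high dose is shown only schematically; the paper's specific "4A" combination is not reproduced here.

```python
ALPHA = 0.05

def bonferroni(p_low, p_high, alpha=ALPHA):
    """Reject each dose-vs-control hypothesis at level alpha / 2."""
    return {"low": p_low <= alpha / 2, "high": p_high <= alpha / 2}

def hochberg(p_low, p_high, alpha=ALPHA):
    """Hochberg step-up procedure for two hypotheses."""
    if max(p_low, p_high) <= alpha:        # both rejected
        return {"low": True, "high": True}
    return {"low": p_low <= alpha / 2,     # only the smaller p-value can be rejected
            "high": p_high <= alpha / 2}

# Illustrative p-values: high dose vs control, low dose vs control,
# and the pooled (low + high) dose vs control.
p_high, p_low, p_pooled = 0.012, 0.060, 0.030

print("Bonferroni on (low, high):", bonferroni(p_low, p_high))
print("Hochberg on (low, high):  ", hochberg(p_low, p_high))
# Variant in the spirit of the paper: test the pooled dose and the high dose.
print("Hochberg on (pooled, high):", hochberg(p_pooled, p_high))
```

With these illustrative numbers, testing the pooled dose together with the high dose allows both rejections under Hochberg, whereas the low dose alone would not have been rejected, which mirrors the consistency argument made in the abstract.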