Similar Articles
20 similar articles found.
1.
For most antivenoms there is little information from clinical studies to infer the relationship between dose and efficacy or dose and toxicity. Antivenom dose-finding studies usually recruit too few patients (e.g. fewer than 20) relative to clinically significant event rates (e.g. 5%). Model-based adaptive dose-finding studies make efficient use of accrued patient data by sharing information across dosing levels, and converge rapidly to the contextually defined ‘optimal dose’. Adequate sample sizes for adaptive dose-finding trials can be determined by simulation. We propose a model-based, Bayesian phase-2-type, adaptive clinical trial design for the characterisation of optimal initial antivenom doses in contexts where both efficacy and toxicity are measured as binary endpoints. This design is illustrated in the context of dose-finding for Daboia siamensis (Eastern Russell’s viper) envenoming in Myanmar. The design formalises the optimal initial dose of antivenom as the dose closest to that giving a pre-specified desired efficacy, but resulting in less than a pre-specified maximum toxicity. For Daboia siamensis envenoming, efficacy is defined as the restoration of blood coagulability within six hours, and toxicity is defined as anaphylaxis. Comprehensive simulation studies compared the expected behaviour of the model-based design to a simpler rule-based design (a modified ‘3+3’ design). The model-based design can identify an optimal dose after fewer patients than the rule-based design. Open-source code for the simulations is made available in order to determine adequate sample sizes for future adaptive snakebite trials. Antivenom dose-finding trials would benefit from using standard model-based adaptive designs. Dose-finding trials where rare events (e.g. 5% occurrence) are of clinical importance necessitate larger sample sizes than current practice.
We will apply the model-based design to determine a safe and efficacious dose for a novel lyophilised antivenom to treat Daboia siamensis envenoming in Myanmar.
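The simulation-based sample-size argument above can be made concrete for the rule-based comparator. The sketch below simulates a classic ‘3+3’ escalation (a simplification — the paper uses a modified ‘3+3’ and publishes its own open-source code) to estimate how often each dose level is selected under a hypothetical dose-toxicity curve; repeating such runs across scenarios is how simulation determines whether a design's operating characteristics, and hence its sample size, are adequate.

```python
import random

def run_3plus3(true_tox, rng):
    """One simulated classic 3+3 trial; returns the selected dose index
    (-1 if even the lowest dose is too toxic)."""
    dose = 0
    while True:
        tox = sum(rng.random() < true_tox[dose] for _ in range(3))
        if tox >= 2:
            return dose - 1
        if tox == 1:  # expand the cohort to 6 at the same dose
            tox += sum(rng.random() < true_tox[dose] for _ in range(3))
            if tox >= 2:
                return dose - 1
        if dose == len(true_tox) - 1:
            return dose  # highest dose tolerated
        dose += 1

def selection_probs(true_tox, n_sims=4000, seed=7):
    """Monte Carlo estimate of the probability each dose is selected."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_sims):
        d = run_3plus3(true_tox, rng)
        counts[d] = counts.get(d, 0) + 1
    return {d: c / n_sims for d, c in sorted(counts.items())}

# Hypothetical dose-toxicity curve for four dose levels.
probs = selection_probs([0.05, 0.15, 0.30, 0.50])
```

With rare events of interest (e.g. 5% rates), the same machinery shows why small cohorts give very imprecise selection probabilities, which is the paper's argument for larger, simulation-sized trials.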

2.
VanderWeele TJ, Chen Y, Ahsan H. Biometrics (2011), 67(4): 1414–1421
Dichotomization of continuous exposure variables is a common practice in medical and epidemiological research. The practice has been cautioned against on the grounds of efficiency and bias. Here we consider the consequences of dichotomization of a continuous covariate for the study of interactions. We show that when a continuous exposure has been dichotomized certain inferences concerning causal interactions can be drawn with regard to the original continuous exposure scale. Within the context of interaction analyses, dichotomization and the use of the results in this article can furthermore help prevent incorrect conclusions about the presence of interactions that result simply from erroneous modeling of the exposure variables. By considering different dichotomization points one can gain considerable insight concerning the presence of causal interaction between exposures at different levels. The results in this article are applied to a study of the interactive effects between smoking and arsenic exposure from well water in producing skin lesions.
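The idea of probing interaction across several dichotomization points can be sketched with a toy simulation. Everything below is hypothetical (the data, the cutpoints, and the use of the simple additive interaction contrast p11 − p10 − p01 + p00 as a stand-in for the paper's formal causal-interaction results):

```python
import random

def interaction_contrast(data, cut):
    """Additive interaction contrast p11 - p10 - p01 + p00 after
    dichotomizing the continuous exposure x at `cut` (z is binary)."""
    cells = {(a, b): [0, 0] for a in (0, 1) for b in (0, 1)}
    for x, z, y in data:
        key = (int(x > cut), z)
        cells[key][0] += y      # outcome count
        cells[key][1] += 1      # cell size
    p = {k: n_y / n for k, (n_y, n) in cells.items() if n > 0}
    return p[1, 1] - p[1, 0] - p[0, 1] + p[0, 0]

# Hypothetical cohort: risk rises with exposure x only when z == 1,
# i.e. a true interaction on the additive scale.
rng = random.Random(3)
data = []
for _ in range(20000):
    x, z = rng.random(), rng.randint(0, 1)
    risk = 0.05 + (0.30 * x if z else 0.0)
    data.append((x, z, int(rng.random() < risk)))

contrasts = {cut: interaction_contrast(data, cut) for cut in (0.25, 0.5, 0.75)}
```

Because the simulated interaction is genuine, the contrast stays positive at every cutpoint; a contrast that appears at one cut but vanishes at others is the kind of modeling artifact the abstract warns about.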

3.
In the field of pharmaceutical drug development, there have been extensive discussions on the establishment of statistically significant results that demonstrate the efficacy of a new treatment with multiple co‐primary endpoints. When designing a clinical trial with such multiple co‐primary endpoints, it is critical to determine the appropriate sample size for establishing statistical significance on all the co‐primary endpoints while preserving the desired overall power, because the type II error rate increases with the number of co‐primary endpoints. We consider overall power functions and sample size determinations with multiple co‐primary endpoints that consist of mixed continuous and binary variables, and provide numerical examples to illustrate the behavior of the overall power functions and sample sizes. In formulating the problem, we assume that response variables follow a multivariate normal distribution, where binary variables are observed as dichotomized normal variables with a certain point of dichotomy. Numerical examples show that the sample size decreases as the correlation increases when the individual powers of the endpoints are approximately equal.
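The qualitative behavior described here — overall power for co-primary endpoints falls as endpoints multiply, and positive correlation between them recovers part of the loss — can be checked with a minimal Monte Carlo sketch. The effect sizes, one-sided alpha, and correlation below are illustrative, not taken from the paper:

```python
import math
import random
from statistics import NormalDist

def overall_power(delta1, delta2, rho, alpha=0.025, n_sims=100000, seed=11):
    """Monte Carlo power for rejecting BOTH one-sided tests when the two
    test statistics are bivariate normal with means delta1, delta2 and
    correlation rho."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        u, v = rng.gauss(0, 1), rng.gauss(0, 1)
        z1 = delta1 + u
        z2 = delta2 + rho * u + math.sqrt(1 - rho * rho) * v
        hits += (z1 > z_crit) and (z2 > z_crit)
    return hits / n_sims

power_indep = overall_power(3.0, 3.0, rho=0.0)
power_corr = overall_power(3.0, 3.0, rho=0.8)
```

Under independence the overall power is roughly the product of the marginal powers, which is why the required sample size grows with the number of co-primary endpoints; the gain under rho = 0.8 mirrors the abstract's observation that the sample size decreases as the correlation increases.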

4.
In dose-finding clinical studies, it is common that multiple endpoints are of interest; for instance, efficacy and toxicity endpoints are both primary in many clinical trials. In this article, we propose a joint model for correlated efficacy-toxicity outcomes constructed with an Archimedean copula, and extend the continual reassessment method (CRM) to a bivariate trial design in which the optimal dose for phase III is based on both efficacy and toxicity. In particular, because continuous and discrete outcomes are both commonly observed in drug studies, we extend our joint model to mixed correlated outcomes. We demonstrate through simulations that our algorithm based on the Archimedean copula model has excellent operating characteristics.
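To make the copula construction concrete, here is a sketch with a Clayton copula — one member of the Archimedean family; the abstract does not specify which generator it uses, and the marginals and theta below are hypothetical — linking binary efficacy and toxicity at a single dose:

```python
def clayton(u, v, theta):
    """Clayton Archimedean copula C(u, v), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_probs(p_eff, p_tox, theta):
    """Joint cell probabilities for binary efficacy/toxicity outcomes with
    the given marginals, correlated through a Clayton copula."""
    p11 = clayton(p_eff, p_tox, theta)     # P(efficacy = 1, toxicity = 1)
    return {
        (1, 1): p11,
        (1, 0): p_eff - p11,
        (0, 1): p_tox - p11,
        (0, 0): 1.0 - p_eff - p_tox + p11,
    }

# Hypothetical marginals at one dose: 50% efficacy, 20% toxicity.
cells = joint_probs(0.5, 0.2, theta=2.0)
```

A CRM-style design would put dose-response models on the two marginals and update theta (or a fixed dependence) from the accumulating 2×2 counts; with theta > 0 the copula induces positive dependence, so P(1,1) exceeds the independence value p_eff × p_tox.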

5.
There is growing interest in integrated Phase I/II oncology clinical trials involving molecularly targeted agents (MTA). One of the main challenges of these trials is nontrivial dose–efficacy relationships and administration of MTAs in combination with other agents. While some designs were recently proposed for such Phase I/II trials, the majority of them consider only the case of binary toxicity and efficacy endpoints. At the same time, a continuous efficacy endpoint can carry more information about the agent's mechanism of action, but corresponding designs have received very limited attention in the literature. In this work, an extension of a recently developed information‐theoretic design to the case of a continuous efficacy endpoint is proposed. The design transforms the continuous outcome using the logistic transformation and uses an information‐theoretic argument to govern dose selection during the trial. The performance of the design is investigated in settings of single‐agent and dual‐agent trials. It is found that the novel design leads to substantial improvements in operating characteristics compared to a model‐based alternative under scenarios with nonmonotonic dose/combination–efficacy relationships. The robustness of the design to missing/delayed efficacy responses and to correlation between the toxicity and efficacy endpoints is also investigated.

6.
Regarding the paper “Sample size determination in clinical trials with multiple co‐primary endpoints including mixed continuous and binary variables” by T. Sozu, T. Sugimoto, and T. Hamasaki, Biometrical Journal (2012) 54(5): 716–729. Article: http://dx.doi.org/10.1002/bimj.201100221 Authors' Reply: http://dx.doi.org/10.1002/bimj.201300032 This paper recently introduced a methodology for calculating the sample size in clinical trials with multiple mixed binary and continuous co‐primary endpoints modeled by the so‐called conditional grouped continuous model (CGCM). The purpose of this note is to clarify certain aspects of the methodology and to propose an alternative approach based on latent means tests for the binary endpoints. We demonstrate that our approach is more powerful, yielding smaller sample sizes at powers comparable to those reported in the paper.

7.
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoint of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim, given the observed interim data, we calculate the Bayesian predictive probability of success, should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
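The interim go/no-go rule can be sketched for the simplest case of a single binary endpoint with a Beta prior (the paper's Dirichlet-multinomial model generalizes this, and its success criterion is posterior-probability based; the simple rule here — reaching a fixed response count at the maximum sample size — is a stand-in, and all numbers are hypothetical):

```python
from math import exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(k, m, a, b):
    """P(k responses among m future patients) under a Beta(a, b) posterior."""
    log_choose = lgamma(m + 1) - lgamma(k + 1) - lgamma(m - k + 1)
    return exp(log_choose + log_beta(a + k, b + m - k) - log_beta(a, b))

def predictive_probability(x, n, n_max, r_success, a=1.0, b=1.0):
    """Predictive probability that the final response count reaches
    r_success, given x responses in n patients so far (prior Beta(a, b))."""
    m = n_max - n
    a_post, b_post = a + x, b + n - x
    need = max(0, r_success - x)
    return sum(beta_binom_pmf(k, m, a_post, b_post) for k in range(need, m + 1))

# Hypothetical interim look: 4/17 responses, n_max = 40, success needs >= 13.
pp = predictive_probability(x=4, n=17, n_max=40, r_success=13)
```

A go/no-go boundary then compares pp to pre-specified futility and efficacy thresholds; because n_max and the thresholds are fixed in advance, the entire decision table can be enumerated and written into the protocol before the trial starts, as the abstract notes.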

8.

Objective

Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, the most commonly used scale is the modified Rankin Scale (mRS), for which analysis over a range of scores (“shift”) is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by these uncertainties could be quantified by applying information theory. Using Shannon’s model, we quantified errors of the “shift” approach compared to dichotomized outcomes using published distributions of mRS uncertainties, and applied this model to clinical trials.

Methods

We identified 35 randomized stroke trials that met inclusion criteria. Each trial’s mRS distribution was multiplied by the noise distribution from published mRS inter-rater variability to generate an error percentage for “shift” and for dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by “shift” mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account.

Results

Considering the full mRS range, the error rate was 26.1% ± 5.31 (mean ± SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS ≤ 1: 6.8% ± 2.89; overall p < 0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized.

Conclusion

We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of the mRS is conceptually appealing, the gain in information is counter-balanced by a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
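The core calculation — propagating an inter-rater noise distribution through a trial's mRS distribution to obtain an error rate for the full scale versus a dichotomized cut — can be sketched as follows. The mRS distribution and the tridiagonal noise matrix here are invented for illustration, not taken from the paper:

```python
def error_rates(dist, noise, cut):
    """dist[i] = P(true category i); noise[i][j] = P(observed j | true i).
    Returns (full-scale error rate, error rate after dichotomizing at
    category <= cut)."""
    full = sum(dist[i] * p
               for i in range(len(dist))
               for j, p in enumerate(noise[i]) if j != i)
    dich = sum(dist[i] * p
               for i in range(len(dist))
               for j, p in enumerate(noise[i]) if (j <= cut) != (i <= cut))
    return full, dich

K = 7  # mRS categories 0..6
dist = [0.10, 0.15, 0.20, 0.20, 0.15, 0.12, 0.08]   # hypothetical trial mix
noise = [[0.0] * K for _ in range(K)]
for i in range(K):                 # 80% agreement, spill to neighbors only
    noise[i][i] = 0.8
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < K]
    for j in nbrs:
        noise[i][j] = 0.2 / len(nbrs)

full, dich = error_rates(dist, noise, cut=1)
```

Misclassifications only matter for a dichotomized endpoint when they cross the cut, so the dichotomized error rate can never exceed the full-scale rate; this is the reliability side of the information-versus-reliability trade-off the Conclusion weighs.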

9.
Ivanova A, Kim SH. Biometrics (2009), 65(1): 307–315
In many phase I trials, the design goal is to find the dose associated with a certain target toxicity rate. In some trials, the goal can be to find the dose with a certain weighted sum of rates of various toxicity grades. For others, the goal is to find the dose with a certain mean value of a continuous response. In this article, we describe a dose-finding design that can be used in any of the dose-finding trials described above: trials where the target dose is defined as the dose at which a certain monotone function of the dose takes a prespecified value. At each step of the proposed design, the normalized difference between the current dose and the target is computed. If that difference is close to zero, the dose is repeated. Otherwise, the dose is increased or decreased, depending on the sign of the difference.
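The transition rule reads almost directly as code. The sketch below specializes it to binary toxicity with a target rate; the tolerance delta and the small-sample variance floor are illustrative choices, not taken from the paper:

```python
import math

def next_dose(dose, events, n, target, n_doses, delta=1.0):
    """Normalized-difference transition: stay if the standardized distance
    between the observed event rate and the target is within +/- delta,
    otherwise move one dose level toward the target."""
    p_hat = events / n
    var = max(p_hat * (1 - p_hat), 0.25 / n)   # floor avoids a zero variance
    z = (p_hat - target) / math.sqrt(var / n)
    if abs(z) <= delta:
        return dose                            # repeat the current dose
    step = -1 if z > 0 else 1                  # above target -> de-escalate
    return min(max(dose + step, 0), n_doses - 1)

# 1/10 toxicities at dose 2, target 0.30: well below target, so escalate.
# 7/10 toxicities: well above target, so de-escalate.
```

The same skeleton covers the other targets the abstract mentions (a weighted sum of grade rates, or the mean of a continuous response) by swapping in the appropriate estimate and standard error.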

10.
In this paper, we propose a Bayesian design framework for a biosimilars clinical program that entails conducting concurrent trials in multiple therapeutic indications to establish equivalent efficacy for a proposed biologic compared to a reference biologic in each indication to support approval of the proposed biologic as a biosimilar. Our method facilitates information borrowing across indications through the use of a multivariate normal correlated parameter prior (CPP), which is constructed from easily interpretable hyperparameters that represent direct statements about the equivalence hypotheses to be tested. The CPP accommodates different endpoints and data types across indications (eg, binary and continuous) and can, therefore, be used in a wide context of models without having to modify the data (eg, rescaling) to provide reasonable information-borrowing properties. We illustrate how one can evaluate the design using Bayesian versions of the type I error rate and power with the objective of determining the sample size required for each indication such that the design has high power to demonstrate equivalent efficacy in each indication, reasonably high power to demonstrate equivalent efficacy simultaneously in all indications (ie, globally), and reasonable type I error control from a Bayesian perspective. We illustrate the method with several examples, including designing biosimilars trials for follicular lymphoma and rheumatoid arthritis using binary and continuous endpoints, respectively.

11.
Bekele BN, Shen Y. Biometrics (2005), 61(2): 343–354
In this article, we propose a Bayesian approach to phase I/II dose-finding oncology trials by jointly modeling a binary toxicity outcome and a continuous biomarker expression outcome. We apply our method to a clinical trial of a new gene therapy for bladder cancer patients. In this trial, the biomarker expression indicates biological activity of the new therapy. For ethical reasons, the trial is conducted sequentially, with the dose for each successive patient chosen using both toxicity and activity data from patients previously treated in the trial. The modeling framework that we use naturally incorporates correlation between the binary toxicity and continuous activity outcome via a latent Gaussian variable. The dose-escalation/de-escalation decision rules are based on the posterior distributions of both toxicity and activity. A flexible state-space model is used to relate the activity outcome and dose. Extensive simulation studies show that the design reliably chooses the preferred dose using both toxicity and expression outcomes under various clinical scenarios.

12.
Pragmatic trials evaluating health care interventions often adopt cluster randomization due to scientific or logistical considerations. Systematic reviews have shown that coprimary endpoints are not uncommon in pragmatic trials but are seldom recognized in sample size or power calculations. While methods for power analysis based on K (K ≥ 2) binary coprimary endpoints are available for cluster randomized trials (CRTs), to our knowledge, methods for continuous coprimary endpoints are not yet available. Assuming a multivariate linear mixed model (MLMM) that accounts for multiple types of intraclass correlation coefficients among the observations in each cluster, we derive the closed-form joint distribution of K treatment effect estimators to facilitate sample size and power determination with different types of null hypotheses under equal cluster sizes. We characterize the relationship between the power of each test and different types of correlation parameters. We further relax the equal cluster size assumption and approximate the joint distribution of the K treatment effect estimators through the mean and coefficient of variation of cluster sizes. Our simulation studies with a finite number of clusters indicate that the predicted power by our method agrees well with the empirical power, when the parameters in the MLMM are estimated via the expectation-maximization algorithm. An application to a real CRT is presented to illustrate the proposed method.
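The paper's MLMM machinery for K coprimary endpoints is involved, but the cluster-size quantities it works with — the mean and coefficient of variation of cluster sizes — already appear in the familiar single-endpoint design effect. A sketch using those standard formulas (not the paper's method; all inputs are illustrative):

```python
import math
from statistics import NormalDist

def crt_sample_size(delta, sigma, icc, m_bar, cv=0.0, alpha=0.05, power=0.8):
    """Clusters per arm for a two-arm CRT with one continuous endpoint,
    inflating the individually randomized sample size by the design
    effect 1 + ((cv^2 + 1) * m_bar - 1) * icc, where m_bar and cv are the
    mean and coefficient of variation of cluster sizes."""
    z = NormalDist()
    n_ind = 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sigma / delta) ** 2
    deff = 1 + ((cv ** 2 + 1) * m_bar - 1) * icc
    return math.ceil(n_ind * deff / m_bar)

k_equal = crt_sample_size(delta=0.3, sigma=1.0, icc=0.05, m_bar=20)          # equal sizes
k_var = crt_sample_size(delta=0.3, sigma=1.0, icc=0.05, m_bar=20, cv=0.6)   # variable sizes
```

Even modest cluster-size variation (cv = 0.6) inflates the required number of clusters, which is the same mechanism the paper's approximate joint distribution captures for K correlated endpoints at once.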

13.
A Bayesian approach to the design of phase II clinical trials
R. J. Sylvester. Biometrics (1988), 44(3): 823–836
A new strategy for the design of Phase II clinical trials is presented which utilizes the information provided by the prior distribution of the response rate, the costs of treating a patient, and the losses or gains resulting from the decisions taken at the completion of the study. A risk function is derived from which one may determine the optimal Bayes sampling plan. The decision theoretic/Bayesian approach is shown to provide a formal justification for the sample sizes often used in practice and shows the conditions under which such sample sizes are clearly inappropriate.

14.
Englert S, Kieser M. Biometrics (2012), 68(3): 886–892
Phase II trials in oncology are usually conducted as single-arm two-stage designs with binary endpoints. Currently available adaptive design methods are tailored to comparative studies with continuous test statistics. Direct transfer of these methods to discrete test statistics results in conservative procedures and, therefore, in a loss of power. We propose a method based on the conditional error function principle that directly accounts for the discreteness of the outcome. It is shown how the method can be used to construct new phase II designs that are more efficient than currently applied designs and that allow flexible mid-course design modifications. The proposed method is illustrated with a variety of frequently used phase II designs.

15.
One of the primary objectives of an oncology dose-finding trial for novel therapies, such as molecularly targeted agents and immuno-oncology therapies, is to identify an optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. These new therapeutic agents appear more likely to induce multiple low- or moderate-grade toxicities than dose-limiting toxicities. Moreover, for efficacy it is preferable to evaluate overall response and long-term stable disease in solid tumors, and to distinguish between complete and partial remission in lymphoma. It is also essential to accelerate early-stage trials to shorten the overall period of drug development. However, it is often challenging to make real-time adaptive decisions due to late-onset outcomes, fast accrual rates, and differences in the outcome evaluation periods for efficacy and toxicity. To address these issues, we propose a time-to-event generalized Bayesian optimal interval design that accelerates dose finding while accounting for efficacy and toxicity grades. The new design, named “TITE-gBOIN-ET,” is model-assisted and straightforward to implement in actual oncology dose-finding trials. Simulation studies show that the TITE-gBOIN-ET design significantly shortens the trial duration compared with designs without sequential enrollment while having comparable or higher performance in the percentage of correct OD selection and the average number of patients allocated to the ODs across various realistic settings.
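For intuition, the plain binary BOIN rule underlying this family of model-assisted designs fits in a few lines (the paper's TITE-gBOIN-ET additionally handles toxicity grades, efficacy, and late-onset outcomes, all of which this sketch omits):

```python
import math

def boin_boundaries(phi, phi1=None, phi2=None):
    """BOIN escalation/de-escalation boundaries for target DLT rate phi.
    The defaults phi1 = 0.6*phi and phi2 = 1.4*phi follow common practice."""
    phi1 = 0.6 * phi if phi1 is None else phi1
    phi2 = 1.4 * phi if phi2 is None else phi2
    lam_e = math.log((1 - phi1) / (1 - phi)) / math.log(
        phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = math.log((1 - phi) / (1 - phi2)) / math.log(
        phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

def boin_decision(n_tox, n_treated, phi):
    """Compare the observed toxicity rate at the current dose with the
    pre-tabulated boundaries."""
    lam_e, lam_d = boin_boundaries(phi)
    p_hat = n_tox / n_treated
    if p_hat <= lam_e:
        return "escalate"
    if p_hat >= lam_d:
        return "de-escalate"
    return "stay"
```

Because the boundaries depend only on phi, the whole decision table can be printed before the trial starts — the "straightforward to implement" property the abstract emphasizes; the time-to-event extension replaces the observed counts with expected counts for patients whose assessment windows are still open.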

16.
A basket trial simultaneously evaluates a treatment in multiple cancer subtypes, offering an effective way to accelerate drug development in multiple indications. Many basket trials are designed and monitored based on a single efficacy endpoint, primarily the tumor response. For molecular targeted or immunotherapy agents, however, a single efficacy endpoint cannot adequately characterize the treatment effect. It is increasingly important to use more complex endpoints to comprehensively assess the risk–benefit profile of such targeted therapies. We extend the calibrated Bayesian hierarchical modeling approach to monitor phase II basket trials with multiple endpoints. We propose two generalizations, one based on the latent variable approach and the other based on the multinomial–normal hierarchical model, to accommodate different types of endpoints and dependence assumptions regarding information sharing. We introduce shrinkage parameters as functions of statistics measuring homogeneity among subgroups and propose a general calibration approach to determine the functional forms. Theoretical properties of the generalized hierarchical models are investigated. Simulation studies demonstrate that the monitoring procedure based on the generalized approach yields desirable operating characteristics.

17.
When there is a predictive biomarker, enrichment can focus the clinical trial on a benefiting subpopulation. We describe a two-stage enrichment design, in which the first stage is designed to efficiently estimate a threshold and the second stage is a “phase III-like” trial on the enriched population. The goal of this paper is to explore design issues: sample size in Stages 1 and 2, and re-estimation of the Stage 2 sample size following Stage 1. By treating these as separate trials, we can gain insight into how the predictive nature of the biomarker specifically impacts the sample size. We also show that failure to adequately estimate the threshold can have disastrous consequences in the second stage. While any bivariate model could be used, we assume a continuous outcome and continuous biomarker, described by a bivariate normal model. The correlation coefficient between the outcome and biomarker is the key to understanding the behavior of the design, both for predictive and prognostic biomarkers. Through a series of simulations we illustrate the impact of model misspecification, consequences of poor threshold estimation, and requisite sample sizes that depend on the predictive nature of the biomarker. Such insight should be helpful in understanding and designing enrichment trials.

18.
Adaptive clinical trials are becoming very popular because of their flexibility in allowing mid‐stream changes of sample size, endpoints, populations, etc. At the same time, they have been regarded with mistrust because they can produce bizarre results in very extreme settings. Understanding the advantages and disadvantages of these rapidly developing methods is a must. This paper reviews flexible methods for sample size re‐estimation when the outcome is continuous.

19.
It is common in epidemiologic analyses to summarize continuous outcomes as falling above or below a threshold. With such a dichotomized outcome, the usual chi-square statistics for association or trend can be used to test for equality of proportions across strata of the study population. However, if the threshold is chosen to maximize the test statistic, the nominal chi-square reference distributions are incorrect. In this paper, the asymptotic distributions of maximally selected chi-square statistics for association and for trend for the k × 2 table are derived. The methodology is illustrated with data from an AIDS clinical trial. The results of simulation experiments that assess the accuracy of the asymptotic distributions in moderate sample sizes are also reported.
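The statistic itself is easy to compute by brute force; the paper's contribution is the corrected asymptotic reference distribution, which the nominal chi-square tables do not provide. A sketch of the scan for the 2-group case of the k × 2 setting (the toy data are invented):

```python
def chi2_stat(table):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def max_selected_chi2(values, groups):
    """Scan every cutpoint of a continuous outcome and return the maximal
    2x2 chi-square together with the cutpoint achieving it."""
    best, best_cut = 0.0, None
    for cut in sorted(set(values))[:-1]:
        a = sum(1 for v, g in zip(values, groups) if g == 0 and v <= cut)
        b = sum(1 for v, g in zip(values, groups) if g == 0 and v > cut)
        c = sum(1 for v, g in zip(values, groups) if g == 1 and v <= cut)
        d = sum(1 for v, g in zip(values, groups) if g == 1 and v > cut)
        stat = chi2_stat([[a, b], [c, d]])
        if stat > best:
            best, best_cut = stat, cut
    return best, best_cut

# Toy example: group 1's outcomes are uniformly larger than group 0's.
values = [1, 2, 3, 4, 5, 6, 7, 8]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
best_stat, best_cut = max_selected_chi2(values, groups)
```

Comparing best_stat with a chi-square(1) critical value would be anti-conservative, because the maximum over cutpoints inflates the statistic; the paper's derived distributions are what make a valid p-value possible.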

20.
Cytotherapy (2022), 24(2): 193–204
Immune effector cell (IEC) therapies have revolutionized our approach to relapsed B-cell malignancies, and interest in the investigational use of IECs is rapidly expanding into other diseases. Current challenges in the analysis of IEC therapies include small sample sizes, limited access to clinical trials and a paucity of predictive biomarkers of efficacy and toxicity associated with IEC therapies. Retrospective and prospective multi-center cell therapy trials can assist in overcoming these barriers through harmonization of clinical endpoints and correlative assays for immune monitoring, allowing additional cross-trial analysis to identify biomarkers of failure and success. The Consortium for Pediatric Cellular Immunotherapy (CPCI) offers a unique platform to address the aforementioned challenges by delivering cutting-edge cell and gene therapies for children through multi-center clinical trials. Here the authors discuss some of the important pre-analytic variables, such as biospecimen collection and initial processing procedures, that affect biomarker assays commonly used in IEC trials across participating CPCI sites. The authors review the recent literature and provide data to support recommendations for alignment and standardization of practices that can affect flow cytometry assays measuring immune effector function as well as interpretation of cytokine/chemokine data. The authors also identify critical gaps that often make parallel comparisons between trials difficult or impossible.
