Similar Literature
20 similar documents found (search time: 31 ms)
1.
There has been much development in Bayesian adaptive designs in clinical trials. In the Bayesian paradigm, the posterior predictive distribution characterizes the future possible outcomes given the currently observed data. Based on the interim time-to-event data, we develop a new phase II trial design by combining the strengths of both Bayesian adaptive randomization and the predictive probability. Adaptive randomization compares the mean survival times between patients assigned to the two treatment arms and assigns more patients to the better treatment. We continuously monitor the trial using the predictive probability for early termination in the case of superiority or futility. We conduct extensive simulation studies to examine the operating characteristics of four designs: the proposed predictive probability adaptive randomization design, the predictive probability equal randomization design, the posterior probability adaptive randomization design, and the group sequential design. Adaptive randomization designs using predictive probability and posterior probability yield a longer overall median survival time than the group sequential design, but at the cost of a slightly larger sample size. The average sample size using the predictive probability method is generally smaller than that of the posterior probability design.
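A minimal sketch of the adaptive-randomization step described above, under assumptions the abstract does not spell out: exponential event times with a conjugate Gamma prior on each arm's hazard rate, and a tempering exponent to keep assignment probabilities from becoming extreme. The prior parameters, draw count, and tempering power are illustrative choices, not the authors' calibrated values.

```python
import random

def prob_arm1_better(events1, time1, events2, time2,
                     a=1.0, b=1.0, draws=4000, seed=7):
    """Monte Carlo estimate of Pr(mean survival on arm 1 > mean survival on arm 2).

    Assumes exponential event times with a Gamma(a, b) prior on each hazard
    rate, so the posterior for arm k is Gamma(a + events_k, b + follow_up_k);
    arm 1 has the longer mean survival exactly when its hazard rate is smaller.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # gammavariate takes (shape, scale); posterior scale is 1/(b + time)
        r1 = rng.gammavariate(a + events1, 1.0 / (b + time1))
        r2 = rng.gammavariate(a + events2, 1.0 / (b + time2))
        if r1 < r2:
            wins += 1
    return wins / draws

def randomization_prob(p, power=0.5):
    # Temper the posterior probability: power=0 gives equal randomization,
    # power=1 assigns with the raw posterior probability.
    num = p ** power
    return num / (num + (1.0 - p) ** power)
```

For example, with 5 events in 100 patient-months on arm 1 against 20 events in 100 on arm 2, the posterior probability that arm 1 is better is close to 1, so most newly accrued patients would be assigned to arm 1.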

2.
We review a Bayesian predictive approach for interim data monitoring and propose its application to interim sample size reestimation for clinical trials. Based on interim data, this approach predicts how the sample size of a clinical trial needs to be adjusted so as to claim a success at the conclusion of the trial with an expected probability. The method is compared with predictive power and conditional power approaches using clinical trial data. Advantages of this approach over the others are discussed.

3.
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoint of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim, given the observed interim data, we calculate the Bayesian predictive probability of success, should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
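The predictive-probability calculation is easiest to see in the binary-endpoint special case (the Dirichlet-multinomial generalization for complex endpoints is not shown). The sketch below assumes a Beta-Binomial model with a uniform prior; the null response rate `p0`, posterior cutoff `theta`, and sample sizes are illustrative, not the OPP design's calibrated values.

```python
from math import exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(y, m, a, b):
    # P(Y = y) for Y ~ Beta-Binomial(m, a, b): future responses among m patients
    return exp(lgamma(m + 1) - lgamma(y + 1) - lgamma(m - y + 1)
               + log_beta(a + y, b + m - y) - log_beta(a, b))

def beta_tail(a, b, p0, grid=4000):
    # Pr(p > p0) for p ~ Beta(a, b), midpoint rule on the unnormalized density
    f = [((i + 0.5) / grid) ** (a - 1) * (1 - (i + 0.5) / grid) ** (b - 1)
         for i in range(grid)]
    total = sum(f)
    return sum(f[i] for i in range(grid) if (i + 0.5) / grid > p0) / total

def predictive_probability(x, n, N, p0=0.2, theta=0.95, a=1.0, b=1.0):
    """Pr(final analysis succeeds | x responses in n patients at interim).

    Success at the maximum sample size N means Pr(p > p0 | data) > theta;
    the go/no-go decision compares this predictive probability with
    pre-specified cutoffs.
    """
    m = N - n
    pp = 0.0
    for y in range(m + 1):
        # would the trial succeed if y of the remaining m patients respond?
        if beta_tail(a + x + y, b + N - x - y, p0) > theta:
            pp += beta_binom_pmf(y, m, a + x, b + n - x)
    return pp
```

Because `x`, `n`, and `N` are the only inputs, the go/no-go boundaries can indeed be enumerated for every interim look and written into the protocol before the trial starts.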

4.
Basket trials simultaneously evaluate the effect of one or more drugs on a defined biomarker, genetic alteration, or molecular target in a variety of disease subtypes, often called strata. A conventional approach for analyzing such trials is an independent analysis of each of the strata. This analysis is inefficient, as it lacks the power to detect the effect of drugs in each stratum. To address this inefficiency, various designs for basket trials have been proposed, centering on designs using Bayesian hierarchical models. In this article, we propose a novel Bayesian basket trial design that incorporates predictive sample size determination, early termination for inefficacy and efficacy, and the borrowing of information across strata. The borrowing of information is based on the similarity between the posterior distributions of the response probability. In general, Bayesian hierarchical models have many distributional assumptions along with multiple parameters. By contrast, our method has prior distributions for the response probability and two parameters for the similarity of distributions. The proposed design is easier to implement and less computationally demanding than other Bayesian basket designs. Through a simulation with various scenarios, our proposed design is compared with other designs, including one that does not borrow information and one that uses a Bayesian hierarchical model.

5.
Dietary oils are a significant contributor to overall energy and fatty acid intakes. Changes in the amount and/or type of dietary oils consumed have the potential to impact human health. Clinical trials represent the gold standard for testing the health impacts of such changes in dietary oils. The objective of this review is to explore best practices for clinical trials examining impacts of dietary oils including 1) pre-clinical topics such as research question generation, study design, participant population, outcome measures and intervention product selection and/or preparation; 2) clinical trial implementation topics such as recruitment, trial management, record keeping and compliance monitoring; and 3) post-clinical trial topics dealing with sample analysis and storage as well as management, publication and data access. The use of digital case report forms, and the best practices in reporting and publishing results are also addressed. In summary, properly designed and implemented clinical trials studying dietary oils produce strong scientific evidence guiding their use.

6.
The double-blind randomized controlled trial (DBRCT) is the gold standard of medical research. We show that DBRCTs fail to fully account for the efficacy of treatment if there are interactions between treatment and behavior, for example, if a treatment is more effective when patients change their exercise or diet. Since behavioral or placebo effects depend on patients’ beliefs that they are receiving treatment, clinical trials with a single probability of treatment are poorly suited to estimate the additional treatment benefit that arises from such interactions. Here, we propose methods to identify interaction effects, and use those methods in a meta-analysis of data from blinded anti-depressant trials in which participant-level data were available. Out of six eligible studies, which included three for the selective serotonin re-uptake inhibitor paroxetine, and three for the tricyclic imipramine, three studies had a high (>65%) probability of treatment. We found strong evidence that treatment probability affected the behavior of trial participants, specifically the decision to drop out of a trial. In the case of paroxetine, but not imipramine, there was an interaction between treatment and behavioral changes that enhanced the effectiveness of the drug. These data show that standard blind trials can fail to account for the full value added when there are interactions between a treatment and behavior. We therefore suggest that a new trial design, two-by-two blind trials, will better account for treatment efficacy when interaction effects may be important.

7.
In the linear model for cross‐over trials, with fixed subject effects and normal i.i.d. random errors, the residual variability corresponds to the intraindividual variability. While population variances are in general unknown, an estimate can be derived that follows a gamma distribution, where the scale parameter is based on the true unknown variability. This gamma distribution is often used for the sample size calculation for trial planning with the precision approach, where the aim is to achieve in the next trial a predefined precision with a given probability. But then the imprecision in the estimated residual variability or, from a Bayesian perspective, the uncertainty of the unknown variability is not taken into account. Here, we present the predictive distribution for the residual variability, and we investigate a link to the F distribution. The consequence is that in the precision approach more subjects will be necessary than with the conventional calculation. For values of the intraindividual variability that are typical of human pharmacokinetics, that is, a gCV of 17–36%, approximately one-sixth more subjects would be needed.
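The link to the F distribution can be illustrated by Monte Carlo. The sketch below is not the paper's derivation; it assumes a Jeffreys-type posterior for the residual variance given the observed estimate (scaled inverse chi-squared) and simulates a future variance estimate, whose ratio to the observed one then follows an F distribution. The degrees of freedom are illustrative.

```python
import random

def predictive_variance_ratios(df_obs, df_new, draws=20000, seed=3):
    """Simulate the predictive distribution of a future residual-variance
    estimate, relative to the observed one (set to 1 without loss of
    generality).

    Given df_obs * s2_obs / sigma2 ~ chi2(df_obs) (Jeffreys-type posterior)
    and s2_new ~ sigma2 * chi2(df_new) / df_new for the next trial, the
    ratio s2_new / s2_obs follows an F(df_new, df_obs) distribution.
    """
    rng = random.Random(seed)
    ratios = []
    for _ in range(draws):
        chi_obs = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df_obs))
        sigma2 = df_obs / chi_obs          # posterior draw, since s2_obs = 1
        chi_new = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df_new))
        ratios.append(sigma2 * chi_new / df_new)
    return ratios
```

The mean of an F(d1, d2) distribution is d2/(d2 - 2), so with df_obs = 10 the simulated ratios average near 1.25, visibly above 1: ignoring the uncertainty in the variance estimate systematically under-sizes the next trial, which is the paper's point.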

8.
When there is a predictive biomarker, enrichment can focus the clinical trial on a benefiting subpopulation. We describe a two-stage enrichment design, in which the first stage is designed to efficiently estimate a threshold and the second stage is a “phase III-like” trial on the enriched population. The goal of this paper is to explore design issues: sample size in Stages 1 and 2, and re-estimation of the Stage 2 sample size following Stage 1. By treating these as separate trials, we can gain insight into how the predictive nature of the biomarker specifically impacts the sample size. We also show that failure to adequately estimate the threshold can have disastrous consequences in the second stage. While any bivariate model could be used, we assume a continuous outcome and continuous biomarker, described by a bivariate normal model. The correlation coefficient between the outcome and biomarker is the key to understanding the behavior of the design, both for predictive and prognostic biomarkers. Through a series of simulations we illustrate the impact of model misspecification, consequences of poor threshold estimation, and requisite sample sizes that depend on the predictive nature of the biomarker. Such insight should be helpful in understanding and designing enrichment trials.

9.
Meta-analyses and re-analyses of trial data have not been able to answer some of the essential questions that would allow prediction of placebo responses in clinical trials. We will confront these questions with current empirical evidence. The most important question asks whether the placebo response rates in the drug arm and in the placebo arm are equal. This 'additive model' is a general assumption in almost all placebo-controlled drug trials but has rarely been tested. Secondly, we would like to address whether the placebo response is a function of the likelihood of receiving drug/placebo. Evidence suggests that the number of study arms in a trial may determine the size of the placebo and the drug response. Thirdly, we ask what the size of the placebo response is in 'comparator' studies with a direct comparison of a (novel) drug against another drug. Meta-analytic and experimental evidence suggests that comparator studies may produce higher placebo response rates when compared with placebo-controlled trials. Finally, we address the placebo response rate outside the laboratory and outside of trials in clinical routine. This question poses a serious challenge to whether the drug response in trials can be taken as evidence of drug effects in clinical routine.

10.
Many confidence intervals calculated in practice are potentially not exact, either because the requirements for the interval estimator to be exact are known to be violated, or because the (exact) distribution of the data is unknown. If a confidence interval is approximate, the crucial question is how well its true coverage probability approximates its intended coverage probability. In this paper we propose to use the bootstrap to calculate an empirical estimate for the (true) coverage probability of a confidence interval. In the first instance, the empirical coverage can be used to assess whether a given type of confidence interval is adequate for the data at hand. More generally, when planning the statistical analysis of future trials based on existing data pools, the empirical coverage can be used to study the coverage properties of confidence intervals as a function of type of data, sample size, and analysis scale, and thus inform the statistical analysis plan for the future trial. In this sense, the paper proposes an alternative to the problematic pretest of the data for normality, followed by selection of the analysis method based on the results of the pretest. We apply the methodology to a data pool of bioequivalence studies, and in the selection of covariance patterns for repeated measures data.
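A minimal sketch of the bootstrap coverage estimate for the simplest case, a t-interval for a mean. Assumptions beyond the abstract: the pooled data stand in for the population (its mean is treated as the "true" value), and the t critical value is passed in by the caller rather than computed.

```python
import math
import random
import statistics

def t_interval(sample, t_crit):
    # Standard t-interval for the mean of a sample
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return m - t_crit * se, m + t_crit * se

def empirical_coverage(pool, n, t_crit, B=2000, seed=1):
    """Bootstrap estimate of the true coverage of a t-interval for the mean.

    Each bootstrap resample of size n from the data pool plays the role of a
    future trial; coverage is the fraction of resamples whose interval
    contains the pool mean.
    """
    rng = random.Random(seed)
    truth = statistics.mean(pool)
    hits = 0
    for _ in range(B):
        resample = [rng.choice(pool) for _ in range(n)]
        lo, hi = t_interval(resample, t_crit)
        if lo <= truth <= hi:
            hits += 1
    return hits / B
```

With near-normal pool data, n = 30, and t_crit ≈ 2.045 (the 97.5% point of t with 29 degrees of freedom), the empirical coverage sits close to the nominal 95%; a clearly lower value flags an interval type or analysis scale that is inadequate for the data at hand.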

11.
This study investigated whether the variability of the sequence length of the go trials preceding a stop trial enhanced or interfered with inhibitory control. The hypotheses tested were that inhibitory control either improves when the sequence length of the go trials varies, as a consequence of increased preparatory effort, or degrades as a consequence of the switching cost from the go trial to the stop trial. The right-handed participants abducted the left or right index finger in response to a go cue during the go trials. A stop cue was given at 50, 90, or 130 ms after the go cue, with probability 0.25 in the stop trial. In the less variable session, a stop trial was presented after two, three, or four consecutive go trials. In the variable session, a stop trial was presented after one, two, three, four, or five consecutive go trials. The reaction time and stop-signal reaction time were not significantly different between the sessions or between the response sides. Nevertheless, the probability of successful inhibition of the right-hand response in the variable session was higher than that in the less variable session when the stop cue was given 50 ms after a go cue. This finding supports the view that the preparatory effort due to the lower predictability of a forthcoming response inhibition enhances right-hand response inhibition when the stop process begins earlier.

12.
Experimental data in human movement science commonly consist of repeated measurements under comparable conditions. One can face the question of how to identify a single trial, a set of trials, or erroneous trials from the entire data set. This study presents and evaluates a Selection Method for a Representative Trial (SMaRT) based on principal component analysis. SMaRT was tested on 1841 data sets containing 11 joint angle curves of gait analysis. The automatically detected characteristic trials were compared with the choice of three independent experts. SMaRT required 1.4 s to analyse 100 data sets consisting of 8±3 trials each. The robustness against outliers reached 98.8% (standard visual control). We conclude that SMaRT is a powerful tool to determine a representative, uncontaminated trial in movement analysis data sets with multiple parameters.
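The general idea of a PCA-based representative-trial pick can be sketched as follows. This is a much-simplified stand-in, not the authors' SMaRT implementation: it assumes equal-length, time-normalized curves (one per trial) and selects the trial closest to the mean curve in the space of the first few principal components.

```python
import numpy as np

def representative_trial(trials, k=2):
    """Index of the most representative trial: the one closest to the mean
    curve in the space of the first k principal components.

    trials: (n_trials, n_samples) array, each row one time-normalized curve.
    """
    X = np.asarray(trials, dtype=float)
    Xc = X - X.mean(axis=0)                  # center across trials
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(k, len(S))
    scores = U[:, :k] * S[:k]                # PC scores of each trial
    # After centering, the mean curve sits at the origin of PC space,
    # so the representative trial has the smallest score norm.
    return int(np.argmin(np.linalg.norm(scores, axis=1)))
```

Outlier trials land far from the origin in PC space, which is why a method of this shape can also be used to flag erroneous trials rather than select one.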

13.
A. P. Grieve, Biometrics, 1991, 47(1):323-329; discussion 330
In a recent paper, Choi and Pepple (1989, Biometrics 45, 317-323) consider the use of predictive probabilities in the monitoring of clinical trials. In particular, they characterise the predictive probability as a "useful conservative measure" for monitoring purposes. In this note the nature and source of this "conservatism" are investigated.

14.
Ivanova A, Qaqish BF, Schell MJ, Biometrics, 2005, 61(2):540-545
The goal of a phase II trial in oncology is to evaluate the efficacy of a new therapy. The dose investigated in a phase II trial is usually an estimate of a maximum-tolerated dose obtained in a preceding phase I trial. Because this estimate is imprecise, stopping rules for toxicity are used in many phase II trials. We give recommendations on how to construct stopping rules to monitor toxicity continuously. A table is provided from which Pocock stopping boundaries can be easily obtained for a range of toxicity rates and sample sizes. Estimation of the probability of toxicity and response is also discussed.
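The shape of a continuous toxicity-monitoring rule can be illustrated with a Bayesian stand-in (this is not the Pocock-type table from the paper): with a Beta prior on the toxicity rate, enumerate, for each number of patients treated, the smallest toxicity count at which the posterior probability of exceeding the acceptable rate crosses a cutoff. The acceptable rate `p0`, cutoff, and uniform prior are illustrative choices.

```python
def beta_tail(a, b, p0, grid=4000):
    # Pr(p > p0) for p ~ Beta(a, b), midpoint rule on the unnormalized density
    f = [((i + 0.5) / grid) ** (a - 1) * (1 - (i + 0.5) / grid) ** (b - 1)
         for i in range(grid)]
    total = sum(f)
    return sum(f[i] for i in range(grid) if (i + 0.5) / grid > p0) / total

def toxicity_boundary(p0=0.3, cutoff=0.90, nmax=20, a=1.0, b=1.0):
    """Smallest toxicity count that triggers stopping after each of the first
    nmax patients: stop when Pr(toxicity rate > p0 | data) >= cutoff under a
    Beta(a, b) prior."""
    bounds = {}
    for n in range(1, nmax + 1):
        for x in range(n + 1):
            if beta_tail(a + x, b + n - x, p0) >= cutoff:
                bounds[n] = x
                break
    return bounds
```

As in the paper's table, the resulting boundary can be enumerated in advance and written into the protocol, so toxicity can be checked after every patient without any interim computation.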

15.
Predictive and postdictive success of statistical analyses of yield trials
The accuracy of a yield trial can be increased by improved experimental techniques, more replicates, or more efficient statistical analyses. The third option involves nominal fixed costs, and is therefore very attractive. The statistical analysis recommended here combines the additive main effects and multiplicative interaction (AMMI) model with a predictive assessment of accuracy. AMMI begins with the usual analysis of variance (ANOVA) to compute genotype and environment additive effects. It then applies principal components analysis (PCA) to analyze non-additive interaction effects. Tests with a New York soybean yield trial show that the predictive accuracy of AMMI with only two replicates is equal to the predictive accuracy of means based on five replicates. The effectiveness of AMMI increases with the size of the yield trial and with the noisiness of the data. Statistical analysis of yield trials with the AMMI model has a number of promising implications for agronomy and plant breeding research programs. This research was supported by the Rhizobotany Project of the USDA-ARS.
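The ANOVA-plus-PCA decomposition described above can be sketched in a few lines. This is a minimal illustration of the AMMI fit on a genotype-by-environment table of mean yields, not the paper's full predictive assessment (which cross-validates over replicates); the rank `k` of the retained interaction is the analyst's choice.

```python
import numpy as np

def ammi_fit(Y, k=1):
    """AMMI: additive main effects (ANOVA) plus a rank-k SVD of the
    genotype-by-environment interaction residuals.

    Y: (genotypes, environments) matrix of mean yields, one value per cell.
    """
    Y = np.asarray(Y, dtype=float)
    mu = Y.mean()
    g = Y.mean(axis=1, keepdims=True) - mu      # genotype main effects
    e = Y.mean(axis=0, keepdims=True) - mu      # environment main effects
    resid = Y - mu - g - e                      # interaction table
    # PCA of the interaction via SVD; keep only the first k components
    U, S, Vt = np.linalg.svd(resid, full_matrices=False)
    k = min(k, len(S))
    return mu + g + e + (U[:, :k] * S[:k]) @ Vt[:k]
```

Keeping only the first one or two multiplicative terms denoises the interaction, which is how AMMI with two replicates can match the predictive accuracy of raw means from five.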

16.
In addition to the IRB (institutional review board), the DSMB (data safety and monitoring board) takes an increasing role in the monitoring of clinical trials, especially in large multicenter trials. The DSMB is an expert committee, independent of the investigators and the sponsor of the trial, which periodically examines the safety data accumulated during the progress of the trial and ensures that the benefit/risk ratio remains acceptable for participating patients. The DSMB is also a safeguard for the scientific integrity of the trial. It is the only committee that can have access to unblinded data from the trial. The DSMB may recommend termination of the trial in three situations: (1) occurrence of unanticipated adverse events that may pose a serious risk to participating patients; (2) demonstration of efficacy before the planned accrual; (3) "futility" (the most difficult situation), i.e., the absence of a reasonable probability that the trial will reach a conclusion within its planned frame.

17.
Despite their crucial role in the translation of pre‐clinical research into new clinical applications, phase 1 trials involving patients continue to prompt ethical debate. At the heart of the controversy is the question of whether risks of administering experimental drugs are therapeutically justified. We suggest that prior attempts to address this question have been muddled, in part because it cannot be answered adequately without first attending to the way labor is divided in managing risk in clinical trials. In what follows, we approach the question of therapeutic justification for phase 1 trials from the viewpoint of five different stakeholders: the drug regulatory authority, the IRB, the clinical investigator, the referring physician, and the patient. Our analysis shows that the question of therapeutic justification actually raises multiple questions corresponding to the roles and responsibilities of the different stakeholders involved. By attending to these contextual differences, we provide more coherent guidance for the ethical negotiation of risk in phase 1 trials involving patients. We close by discussing the implications of our argument for various perennial controversies in phase 1 trial practice.

18.
Cheung YK, Thall PF, Biometrics, 2002, 58(1):89-97
In many phase II clinical trials, interim monitoring is based on the probability of a binary event, response, defined in terms of one or more time-to-event variables within a time period of fixed length. Such outcome-adaptive methods may require repeated interim suspension of accrual in order to follow each patient for the time period required to evaluate response. This may increase trial duration, and eligible patients arriving during such delays either must wait for accrual to reopen or be treated outside the trial. Alternatively, monitoring may be done continuously by ignoring censored data each time the stopping rule is applied, which wastes information. We propose an adaptive Bayesian method that eliminates these problems. At each patient's accrual time, an approximate posterior for the response probability based on all of the event-time data is used to compute an early stopping criterion. Application to a leukemia trial with a composite event shows that the method can reduce trial duration substantially while maintaining the reliability of interim decisions.

19.
The development of biomarkers of cell death to reflect tumor biology and drug-induced response has garnered interest with the development of several classes of drugs aimed at decreasing the cellular threshold for apoptosis and exploiting pre-existing oncogenic stresses. These novel anticancer drugs, directly targeted to the apoptosis regulatory machinery and aimed at abrogating survival signaling pathways, are entering early clinical trials provoking the question of how to monitor their impact on cancer patients. The parallel development of drugs with predictive biomarkers and their incorporation into early clinical trials are anticipated to support the pharmacological audit trail, to speed the development and reduce the attrition rate of novel drugs whose objective is to provoke tumor cell death. Tumor biopsies are an ideal matrix to measure apoptosis, but surrogate less invasive biomarkers such as blood samples and functional imaging are less challenging to acquire clinically. Archetypal and exploratory examples illustrating the importance of biomarkers to drug development are given. This review explores the substantive challenges associated with the validation, deployment, interpretation and utility of biomarkers of cell death and reviews recent advances in their incorporation in preclinical and early clinical trial contexts.

20.
BACKGROUND TO THE DEBATE: Systematic reviews that combine high-quality evidence from several trials are now widely considered to be at the top of the hierarchy of clinical evidence. Given the primacy of systematic reviews, and the fact that individual clinical trials rarely provide definitive answers to a clinical research question, some commentators question whether the sample size calculation for an individual trial still matters. Others point out that small trials can still be potentially misleading.
