Similar Articles
20 similar articles found
1.
A Bayesian design is proposed for randomized phase II clinical trials that screen multiple experimental treatments compared to an active control based on ordinal categorical toxicity and response. The underlying model and design account for patient heterogeneity characterized by ordered prognostic subgroups. All decision criteria are subgroup specific, including interim rules for dropping unsafe or ineffective treatments, and criteria for selecting optimal treatments at the end of the trial. The design requires an elicited utility function of the two outcomes that varies with the subgroups. Final treatment selections are based on posterior mean utilities. The methodology is illustrated by a trial of targeted agents for metastatic renal cancer, which motivated the design methodology. In the context of this application, the design is evaluated by computer simulation, including comparison to three designs that conduct separate trials within subgroups, or conduct one trial while ignoring subgroups, or base treatment selection on estimated response rates while ignoring toxicity.

2.
We consider a response adaptive design of clinical trials with a variance-penalized criterion. It is shown that this criterion evaluates the performance of a response adaptive design based on both the number of patients assigned to the better treatment and the power of the statistical test. A new proportion of treatment allocation is proposed and the doubly biased coin procedure is used to target the proposed proportion. Under reasonable assumptions, the proposed design is demonstrated to generate an asymptotic variance of allocation proportions, which is smaller than that of the drop-the-loser design. Simulation comparisons of the proposed design with some existing designs are presented.
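A doubly biased coin procedure of this kind can be sketched with the standard allocation function of Hu and Zhang; a minimal illustration (the target proportion and the tuning constant gamma below are generic placeholders, not the paper's proposed values):

```python
def dbcd_alloc_prob(x, rho, gamma=2.0):
    """Probability of assigning the next patient to arm A under a doubly
    (adaptive) biased coin design, given the currently observed allocation
    proportion x and the current target proportion rho (the Hu-Zhang
    allocation function; gamma controls the degree of randomization)."""
    if x <= 0.0:
        return 1.0
    if x >= 1.0:
        return 0.0
    num = rho * (rho / x) ** gamma
    den = num + (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
    return num / den
```

If the observed proportion on arm A falls below the target, the next assignment favors A, and vice versa, so the allocation converges to the target while remaining randomized.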

3.
K. Kim and A. A. Tsiatis, Biometrics, 1990, 46(1):81-92
A comparative clinical trial with built-in sequential stopping rules allows earlier-than-scheduled stopping, should there be a significant indication of treatment difference. In a clinical trial where the major outcome is time (survival time or response) to a certain event such as failure, the design of the study should determine how long one needs to accrue patients and follow them until a sufficient number of events has been observed over the study duration. This paper proposes a unified design procedure for group sequential clinical trials with survival response. The time to event is assumed to be exponentially distributed, but the arguments extend naturally to the proportional hazards model after suitable transformation on the time scale. An example from the Eastern Cooperative Oncology Group (ECOG) is given to illustrate how this procedure can be implemented. The same example is used to explore the overall operating characteristics and the robustness of the proposed group sequential design.
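Under the exponential (or, more generally, proportional hazards) assumption, it is the number of events rather than the number of enrolled patients that determines power; a minimal sketch using Schoenfeld's well-known approximation (an illustration of the general idea, not the paper's exact group sequential calculation):

```python
import math
from statistics import NormalDist

def required_events(hazard_ratio, alpha=0.05, power=0.80):
    """Approximate number of events needed for a two-sided level-alpha
    logrank test with 1:1 allocation (Schoenfeld's formula). Accrual and
    follow-up must then be long enough to observe this many events."""
    z = NormalDist().inv_cdf
    return math.ceil(4 * (z(1 - alpha / 2) + z(power)) ** 2
                     / math.log(hazard_ratio) ** 2)
```

For example, detecting a hazard ratio of 0.5 with 80% power at a two-sided 5% level requires on the order of 66 events, regardless of how many patients must be accrued to observe them.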

4.
Data-driven methods for personalizing treatment assignment have garnered much attention from clinicians and researchers. Dynamic treatment regimes formalize this through a sequence of decision rules that map individual patient characteristics to a recommended treatment. Observational studies are commonly used for estimating dynamic treatment regimes due to the potentially prohibitive costs of conducting sequential multiple assignment randomized trials. However, estimating a dynamic treatment regime from observational data can lead to bias in the estimated regime due to unmeasured confounding. Sensitivity analyses are useful for assessing how robust the conclusions of the study are to a potential unmeasured confounder. A Monte Carlo sensitivity analysis is a probabilistic approach that involves positing and sampling from distributions for the parameters governing the bias. We propose a method for performing a Monte Carlo sensitivity analysis of the bias due to unmeasured confounding in the estimation of dynamic treatment regimes. We demonstrate the performance of the proposed procedure with a simulation study and apply it to an observational study examining the tailoring of antidepressant medication to reduce symptoms of depression, using data from Kaiser Permanente Washington.
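A minimal sketch of the Monte Carlo sensitivity idea, assuming a deliberately simplified additive-bias model with a normal prior on the bias (the paper's bias parameterization for dynamic treatment regimes is more elaborate; the prior and its scale here are illustrative assumptions):

```python
import random

def mc_sensitivity(effect_hat, se, bias_sd, n_draws=20000, seed=3):
    """Monte Carlo sensitivity analysis under an additive-bias model:
    repeatedly draw a bias from the posited N(0, bias_sd) prior and a
    sampling error from N(0, se), subtract the bias from the observed
    effect, and summarize the corrected effect across draws."""
    random.seed(seed)
    draws = sorted(effect_hat - random.gauss(0.0, bias_sd)
                   + random.gauss(0.0, se)
                   for _ in range(n_draws))
    median = draws[n_draws // 2]
    lo, hi = draws[int(0.025 * n_draws)], draws[int(0.975 * n_draws)]
    return median, lo, hi
```

The resulting interval is wider than the ordinary confidence interval because it folds the posited confounding uncertainty into the sampling uncertainty.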

5.
Group sequential stopping rules are often used during the conduct of clinical trials in order to attain more ethical treatment of patients and to better address efficiency concerns. Because the use of such stopping rules materially affects the frequentist operating characteristics of the hypothesis test, it is necessary to choose an appropriate stopping rule during the planning of the study. It is often the case, however, that the number and timing of interim analyses are not precisely known at the time of trial design, and thus the implementation of a particular stopping rule must allow for flexible determination of the schedule of interim analyses. In this article, we consider the use of constrained stopping boundaries in the implementation of stopping rules. We compare this approach when used on various scales for the test statistic. When implemented on the scale of boundary crossing probabilities, this approach is identical to the error spending function approach of Lan and DeMets (1983).
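For illustration, the O'Brien-Fleming-type spending function of Lan and DeMets, which allocates very little type I error to early looks, can be computed as:

```python
from statistics import NormalDist

def obf_spending(t, alpha=0.05):
    """O'Brien-Fleming-type error spending function of Lan and DeMets:
    cumulative two-sided type I error spent by information fraction t,
    alpha(t) = 4 * (1 - Phi(z_{1 - alpha/4} / sqrt(t)))."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 4)
    return 4.0 * (1.0 - nd.cdf(z / t ** 0.5))
```

Evaluating it at whatever information fractions the interim analyses actually occur at gives the cumulative error available to spend, which is precisely what makes the schedule of looks flexible.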

6.
The three-arm design with a test treatment, an active control, and a placebo group is the gold standard design for non-inferiority trials if it is ethically justifiable to expose patients to placebo. In this paper, we first use the closed testing principle to establish the hierarchical testing procedure for the multiple comparisons involved in the three-arm design. For the effect preservation test we derive an explicit formula for the optimal allocation ratios. We propose a group sequential type design, which naturally accommodates the hierarchical testing procedure. Under this proposed design, Monte Carlo simulations are conducted to evaluate the performance of the sequential effect preservation test when the variance of the test statistic is estimated based on the restricted maximum likelihood estimators of the response rates under the null hypothesis. When there are uncertainties about the placebo response rate, the proposed design demonstrates better operating characteristics than the fixed sample design.

7.
The study of HIV dynamics is one of the most important developments in recent AIDS research. It greatly improves our understanding of the pathogenesis of HIV infection. Recently it has been proposed to use HIV dynamics to evaluate the efficacy of antiviral treatments. Currently a large number of AIDS clinical trials on HIV dynamics are in development worldwide. However, many design issues that arise from HIV dynamic studies have not been addressed. In this paper, we study these problems using intensive Monte Carlo simulations and analytic methods. We evaluate, from several perspectives, a finite set of feasible candidate designs currently used or proposed in AIDS clinical trials. We compare the viral dynamic marker and classical viral load change markers in terms of power for identifying treatment differences, asymptotic relative efficiency, and sensitivity. Finally, we offer practical suggestions for practitioners based on our results.

8.
Three-arm noninferiority trials (involving an experimental treatment, a reference treatment, and a placebo), called the "gold standard" noninferiority trials, are conducted in patients with mental disorders whenever feasible, but often fail to show superiority of the experimental treatment and/or the reference treatment over the placebo. One possible reason is that some of the patients receiving the placebo show apparent improvement in the clinical condition. An approach to addressing this problem is the use of the sequential parallel comparison design (SPCD). Nonetheless, the SPCD has not yet been discussed in relation to gold standard noninferiority trials. In this article, our aim was to develop a hypothesis-testing method and its corresponding sample size calculation method for gold standard noninferiority trials with the SPCD. In a simulation, we show that the proposed hypothesis-testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy.

9.
A popular design for clinical trials assessing targeted therapies is the two-stage adaptive enrichment design with recruitment in stage 2 limited to a biomarker-defined subgroup chosen based on data from stage 1. The data-dependent selection leads to statistical challenges if data from both stages are used to draw inference on treatment effects in the selected subgroup. If subgroups considered are nested, as when defined by a continuous biomarker, treatment effect estimates in different subgroups follow the same distribution as estimates in a group-sequential trial. This result is used to obtain tests controlling the familywise type I error rate (FWER) for six simple subgroup selection rules, one of which also controls the FWER for any selection rule. Two approaches are proposed: one based on multivariate normal distributions suitable if the number of possible subgroups, k, is small, and one based on Brownian motion approximations suitable for large k. The methods, applicable in the wide range of settings with asymptotically normal test statistics, are illustrated using survival data from a breast cancer trial.

10.
Many clinical trials compare two or more treatment groups by using a binary outcome measure. For example, the goal could be to determine whether the frequency of pain episodes is significantly reduced in the treatment group (arm A) as compared to the control group (arm B). However, for ethical or regulatory reasons, group sequential designs are commonly employed. Then, based on a binomial distribution, the stopping boundaries for the interim analyses are constructed for assessing the difference in the response probabilities between the two groups. This is easily accomplished by using any of the standard procedures, e.g., those discussed by Jennison and Turnbull (2000), and one of the most commonly used software packages, East (2000). Several factors are known to affect the primary outcome of interest, but their true distributions are not known in advance. In addition, these factors may cause heterogeneous treatment responses among individuals in a group, and their exact effect size may be unknown. To limit the effect of such factors on the comparison of the two arms, stratified randomization is used in the actual conduct of the trial. Then, a stratified analysis based on the odds ratio proposed by Jennison and Turnbull (2000, pages 251-252), consistent with the stratified design, is undertaken. However, the stopping rules used for the interim analyses are those obtained for determining the differences in response rates under an unstratified design. The purpose of this paper is to assess the robustness of such an approach, in terms of the performance of the odds ratio test, when the underlying distribution and effect size of the factors that influence the outcome vary.
The simulation studies indicate that, in general, the stratified approach offers consistently better results than the unstratified approach, as long as the difference between the two groups in the weighted average of the response probabilities across strata remains close to the hypothesized value, irrespective of differences in the allocation distributions and heterogeneity in response rates. However, if the response probabilities deviate from the hypothesized values enough that the difference in the weighted average falls below the hypothesized value, then the proposed study could be significantly underpowered.
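As a concrete, simplified illustration of a stratified odds-ratio analysis (the paper follows Jennison and Turnbull's test; the Mantel-Haenszel common odds ratio below is a standard stand-in, not the paper's exact statistic):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across strata. Each table is
    (a, b, c, d) = (responders in arm A, non-responders in arm A,
    responders in arm B, non-responders in arm B) for one stratum."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

Pooling within-stratum contributions this way keeps the comparison of the two arms free of confounding by the stratification factors, which is why the analysis should match the stratified randomization.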

11.
For phase II oncology trials, Simon's two-stage design is the most commonly used strategy. However, when clinically unevaluable patients occur, the total number of patients included at each stage differs from what was initially planned. Such situations raise concerns about the operating characteristics of the trial design. This paper evaluates three classical ad hoc strategies and a novel one proposed in this work for handling unevaluable patients. The latter, called the rescue strategy, adapts the critical stopping rules to the number of unevaluable patients at each stage without modifying the planned sample size. Simulations show that none of these strategies perfectly matches the original target constraints for type I and II error rates. Our rescue strategy is nevertheless the one that best approaches the target error rates. A re-analysis of a real phase II clinical trial on metastatic cancer illustrates the use of the proposed strategy.

12.
The design of clinical trials is typically based on marginal comparisons of a primary response under two or more treatments. The considerable gains in efficiency afforded by models conditional on one or more baseline responses have been extensively studied for Gaussian models. The purpose of this article is to present methods for the design and analysis of clinical trials in which the response is a count or a point process, and a corresponding baseline count is available prior to randomization. The methods are based on a conditional negative binomial model for the response given the baseline count and can be used to examine the effect of introducing selection criteria on power and sample size requirements. We show that designs based on this approach are more efficient than those proposed by McMahon et al. (1994).

13.
Clinical trials research is mainly conducted for the purpose of evaluating the relative efficacy of two or more treatments. However, a positive response due to treatment is not sufficient to put forward a new product because one must also demonstrate safety. In such cases, clinical trials which show a positive effect would need to accrue enough patients to also demonstrate that the new treatment is safe. It is our purpose to show how the efficacy and safety problems can be combined to yield a more practical clinical trial design. In this paper we propose an asymmetric stopping rule which allows the experimenter to terminate a clinical trial early for a sufficiently negative result and to continue to a specified number of patients otherwise. As it turns out, a few interim tests will have negligible effects on the overall significance level.
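The claim that a few asymmetric (futility-only) interim tests barely change the significance level can be checked by simulation; a sketch with illustrative boundaries (two interim looks at equally spaced information fractions, stopping only for a sufficiently negative z-statistic):

```python
import math
import random

def simulate_alpha(n_sims=40000, futility_z=-1.0, final_z=1.959964, seed=1):
    """Monte Carlo estimate of the attained one-sided significance level
    under H0 when the trial may stop early only for a negative result.
    The test statistic is modeled as standardized Brownian motion observed
    at information fractions 1/3, 2/3, and 1."""
    random.seed(seed)
    rejections = 0
    looks = (1 / 3, 2 / 3, 1.0)
    for _ in range(n_sims):
        w, t_prev, stopped = 0.0, 0.0, False
        for t in looks:
            w += random.gauss(0.0, math.sqrt(t - t_prev))
            t_prev = t
            z = w / math.sqrt(t)
            if t < 1.0 and z < futility_z:  # early stop for a negative result
                stopped = True
                break
        if not stopped and z > final_z:
            rejections += 1
    return rejections / n_sims
```

Because early stopping here can only accept and never reject, the attained level stays at or slightly below the nominal one-sided 0.025, in line with the abstract's observation.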

14.
Basket trials simultaneously evaluate the effect of one or more drugs on a defined biomarker, genetic alteration, or molecular target in a variety of disease subtypes, often called strata. A conventional approach for analyzing such trials is an independent analysis of each of the strata. This analysis is inefficient, as it lacks power to detect the effect of drugs in each stratum. To address this, various designs for basket trials have been proposed, centering on designs using Bayesian hierarchical models. In this article, we propose a novel Bayesian basket trial design that incorporates predictive sample size determination, early termination for inefficacy and efficacy, and the borrowing of information across strata. The borrowing of information is based on the similarity between the posterior distributions of the response probability. In general, Bayesian hierarchical models have many distributional assumptions along with multiple parameters. By contrast, our method has prior distributions for response probability and two parameters for similarity of distributions. The proposed design is easier to implement and less computationally demanding than other Bayesian basket designs. Through a simulation with various scenarios, our proposed design is compared with other designs including one that does not borrow information and one that uses a Bayesian hierarchical model.

15.
Longitudinal studies are often applied in biomedical research and clinical trials to evaluate the treatment effect. The association pattern within subjects must be considered in both sample size calculation and the analysis. One of the most important approaches to analyzing such studies is the generalized estimating equation (GEE) method proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern within subjects depends on a vector of association parameters denoted by ρ. Explicit sample size formulas for two-group comparisons in linear and logistic regression models were obtained by Liu and Liang based on the GEE method. For cluster randomized trials (CRTs), researchers have proposed optimal sample sizes at both the cluster and individual level as a function of sampling costs and the intracluster correlation coefficient (ICC). In these approaches, the optimal sample sizes depend strongly on the ICC. However, the ICC is usually unknown for CRTs and multicenter trials. To overcome this shortcoming, Van Breukelen et al. consider a range of possible ICC values identified from literature reviews and present maximin designs (MMDs) based on relative efficiency (RE) and efficiency under budget and cost constraints. In this paper, the optimal sample size and number of repeated measurements using GEE models with an exchangeable working correlation matrix are derived under a fixed budget, where "optimal" refers to maximum power for a given sampling budget. The equations for the sample size and number of repeated measurements for a known parameter value ρ are derived, and a straightforward algorithm for unknown ρ is developed. Applications in practice are discussed. We also discuss the existence of the optimal design when an AR(1) working correlation matrix is assumed. Our proposed method can be extended to scenarios in which the true and working correlation matrices differ.
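For the exchangeable working correlation case, the core ingredient of such sample-size formulas is the design effect 1 + (m - 1)ρ; a minimal sketch for a two-group comparison of subject means of m repeated measurements (an illustration of the design-effect logic, not the paper's budget-constrained optimum):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, m, rho, alpha=0.05, power=0.80):
    """Sample size per group for a two-sample z-test comparing subject
    means of m repeated measurements with common SD sigma and exchangeable
    within-subject correlation rho: the usual two-sample formula, with the
    variance of a subject mean inflated by the design effect 1+(m-1)rho."""
    z = NormalDist().inv_cdf
    deff = 1.0 + (m - 1) * rho
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2
                     * sigma ** 2 * deff / (m * delta ** 2))
```

With m = 1 this reduces to the familiar two-sample formula; as ρ grows, extra repeated measurements buy less and less power, which is what drives the budget trade-off between subjects and measurements.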

16.
In many clinical trials, the primary endpoint is time to an event of interest, for example, time to heart attack or tumor progression, and the statistical power of these trials is primarily driven by the number of events observed during the trials. In such trials, the number of events observed is affected not only by the number of subjects enrolled but also by other factors, including the event rate and the follow-up duration. Consequently, it is important for investigators to be able to accurately monitor and predict patient accrual and event times so as to predict the times of interim and final analyses and enable efficient allocation of research resources, which have long been recognized as important aspects of trial design and conduct. The existing methods for prediction of event times all assume that patient accrual follows a Poisson process with a constant rate over time; however, it is fairly common in real-life clinical trials that the Poisson rate changes over time. In this paper, we propose a Bayesian joint modeling approach for monitoring and prediction of accrual and event times in clinical trials. We employ a nonhomogeneous Poisson process to model patient accrual and a parametric or nonparametric model for the event and loss-to-follow-up processes. Compared to existing methods, our proposed methods are more flexible and robust in that we model accrual and event/loss-to-follow-up times jointly and allow the underlying accrual rates to change over time. We evaluate the performance of the proposed methods through simulation studies and illustrate the methods using data from a real oncology trial.
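A nonhomogeneous Poisson accrual process with a time-varying rate can be simulated by Lewis-Shedler thinning; a sketch (the ramp-up intensity in the usage example is invented for illustration and is not from the paper):

```python
import random

def nhpp_accrual(rate, rate_max, horizon, seed=42):
    """Simulate accrual times on [0, horizon] from a nonhomogeneous Poisson
    process with intensity rate(t) bounded above by rate_max, via
    Lewis-Shedler thinning: generate candidate arrivals at the constant
    envelope rate and keep each with probability rate(t) / rate_max."""
    random.seed(seed)
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate_max)
        if t > horizon:
            return times
        if random.random() < rate(t) / rate_max:
            times.append(t)
```

For example, `nhpp_accrual(lambda t: min(t, 5.0), 5.0, 24.0)` simulates two years of accrual (in months) that ramps up linearly over the first five months and then plateaus at five patients per month.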

17.
This paper explores the extent to which application of statistical stopping rules in clinical trials can create an artificial heterogeneity of treatment effects in overviews (meta-analyses) of related trials. For illustration, we concentrate on overviews of identically designed group sequential trials, using either fixed nominal or O'Brien and Fleming two-sided boundaries. Some analytic results are obtained for two-group designs and simulation studies are otherwise used, with the following overall findings. The use of stopping rules leads to biased estimates of treatment effect so that the assessment of heterogeneity of results in an overview of trials, some of which have used stopping rules, is confounded by this bias. If the true treatment effect being studied is small, as is often the case, then artificial heterogeneity is introduced, thus increasing the Type I error rate in the test of homogeneity. This could lead to erroneous use of a random effects model, producing exaggerated estimates and confidence intervals. However, if the true mean effect is large, then between-trial heterogeneity may be underestimated. When undertaking or interpreting overviews, one should ascertain whether stopping rules have been used (either formally or informally) and should consider whether their use might account for any heterogeneity found.
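The bias mechanism is easy to reproduce: in a trial that stops early for efficacy, the naive estimate over-states a small true effect on average. A minimal two-stage simulation sketch (the boundary, stage sizes, and unit-variance outcomes are illustrative choices, not the paper's designs):

```python
import math
import random

def mean_estimate(delta, n1=50, n2=50, z_stop=2.5, n_sims=20000, seed=7):
    """Average naive effect estimate in a two-stage design on unit-variance
    outcomes with true mean delta: stop after stage 1 if the interim
    z-statistic exceeds z_stop (reporting the stage-1 mean), otherwise
    continue and report the pooled mean of both stages."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_sims):
        s1 = sum(random.gauss(delta, 1.0) for _ in range(n1))
        m1 = s1 / n1
        if m1 * math.sqrt(n1) > z_stop:   # early efficacy stop
            total += m1
        else:
            s2 = sum(random.gauss(delta, 1.0) for _ in range(n2))
            total += (s1 + s2) / (n1 + n2)
    return total / n_sims
```

For a small true effect the average reported estimate exceeds the truth, and a collection of such trials would look more heterogeneous than it really is.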

18.
Traditionally, a clinical trial is conducted comparing treatment to standard care for all patients. However, this can be inefficient given patients' heterogeneous responses to treatments, and rapid advances in the molecular understanding of diseases have made biomarker-based clinical trials increasingly popular. We propose a new targeted clinical trial design, termed the Max-Impact design, which selects the appropriate subpopulation for a clinical trial and aims to optimize population impact once the trial is completed. The proposed design not only gains insight into the patients who would be included in the trial but also considers the benefit to the excluded patients. We develop novel algorithms to construct enrollment rules for optimizing population impact; these are fairly general and can be applied to various types of outcomes. Simulation studies and a data example from the SWOG Cancer Research Network demonstrate the competitive performance of our proposed method compared to traditional untargeted and targeted designs.

19.
A. Ivanova and S. H. Kim, Biometrics, 2009, 65(1):307-315
In many phase I trials, the design goal is to find the dose associated with a certain target toxicity rate. In some trials, the goal can be to find the dose with a certain weighted sum of rates of various toxicity grades. For others, the goal is to find the dose with a certain mean value of a continuous response. In this article, we describe a dose-finding design that can be used in any of the dose-finding trials described above, that is, trials where the target dose is defined as the dose at which a certain monotone function of the dose takes a prespecified value. At each step of the proposed design, the normalized difference between the current dose and the target is computed. If that difference is close to zero, the dose is repeated. Otherwise, the dose is increased or decreased, depending on the sign of the difference.
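The described step rule can be sketched directly; the function below uses hypothetical names and an illustrative tolerance, and assumes the monitored endpoint is increasing in dose (so an estimate below target calls for escalation):

```python
def next_dose_level(level, estimate, target, se, n_levels, tol=1.0):
    """One step of a dose-finding rule of the kind described: compute the
    normalized (standardized) difference between the current estimate of
    the monotone endpoint and its target; repeat the dose if the
    difference is near zero, otherwise move one dose level toward the
    target, respecting the lowest and highest available levels."""
    t = (estimate - target) / se
    if abs(t) <= tol:
        return level                       # close to target: repeat dose
    if t > tol:
        return max(level - 1, 0)           # estimate too high: de-escalate
    return min(level + 1, n_levels - 1)    # estimate too low: escalate
```

Because the same rule only looks at a standardized distance from target, it applies unchanged whether the endpoint is a toxicity rate, a weighted sum of grade-specific rates, or a continuous mean.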

20.
The crossover design is often used in biomedical trials since it eliminates between-subject variability. This paper is concerned with the statistical analysis of data arising from such trials when assumptions like normality do not necessarily apply. Nonparametric analysis of the two-period, two-treatment design was first described by Koch in 1972. The purpose of this paper is to study nonparametric methods in crossover designs with three or more treatments and an equal number of periods. The proposed test for direct treatment effects is based on within-subject comparisons after removing a possible period effect. With only two treatments this test reduces to the two-sided Wilcoxon signed rank test. Simulation experiments confirm the validity of the significance level of the test when using the asymptotic distribution of the test statistic and illustrate its power against different alternatives. A test for first-order carryover effects can be constructed by a straightforward generalization of the test proposed by Koch in 1972. However, since this test is based on between-subject comparisons, its power will be low. Our recommendation is to consider the crossover design rather than the parallel group design if the carryover effects are assumed to be negligible, or positive and smaller than the direct treatment effects.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号