Similar Documents
20 similar documents found (search time: 31 ms).
1.
We consider a response-adaptive design for clinical trials with a variance-penalized criterion. This criterion is shown to evaluate the performance of a response-adaptive design through both the number of patients assigned to the better treatment and the power of the statistical test. A new target allocation proportion is proposed, and the doubly biased coin procedure is used to target it. Under reasonable assumptions, the proposed design is shown to yield an asymptotic variance of the allocation proportion that is smaller than that of the drop-the-loser design. Simulation comparisons of the proposed design with several existing designs are presented.
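As orientation for how a biased-coin procedure steers allocation toward a target proportion, here is a minimal Python sketch. It uses a Hu–Zhang-type allocation function with tuning parameter gamma; the abstract does not specify the exact allocation function or the proposed target, so the functional form and the numbers below are assumptions for illustration only.

```python
import numpy as np

def dbcd_prob(x, rho, gamma=2.0):
    """Hu-Zhang-type allocation function: probability of assigning the next
    patient to treatment A when the current proportion on A is x and the
    target proportion is rho (gamma controls the degree of randomness)."""
    if x <= 0.0:
        return 1.0
    if x >= 1.0:
        return 0.0
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

def simulate_allocation(n, rho, gamma=2.0, seed=0):
    """Sequentially allocate n patients; return the final proportion on A."""
    rng = np.random.default_rng(seed)
    n_a, n_b = 1, 1  # start with one patient per arm to avoid division by zero
    for _ in range(n):
        p_a = dbcd_prob(n_a / (n_a + n_b), rho, gamma)
        if rng.random() < p_a:
            n_a += 1
        else:
            n_b += 1
    return n_a / (n_a + n_b)

print(simulate_allocation(500, rho=0.6))  # final proportion should be close to 0.6
```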

2.
Atkinson AC, Biswas A. Biometrics 2005, 61(1): 118-125.
Adaptive designs are used in phase III clinical trials to skew the allocation pattern toward the better treatments. We use optimum design theory to derive a skewed Bayesian biased-coin procedure for sequential designs with continuous responses. The skewed designs are used to provide adaptive designs whose performance is studied numerically and theoretically. Important properties are the loss and the proportion of allocation to the better treatment.

3.
We derive the optimal allocation between two treatments in a clinical trial based on the following optimality criterion: for fixed variance of the test statistic, what allocation minimizes the expected number of treatment failures? A sequential design is described that leads asymptotically to the optimal allocation and is compared with the randomized play-the-winner rule, sequential Neyman allocation, and equal allocation at similar power levels. We find that the sequential procedure generally results in fewer treatment failures than the other procedures, particularly when the success probabilities of the treatments are small.
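For binary outcomes with success probabilities p_A, p_B and failure probabilities q_A = 1 − p_A, q_B = 1 − p_B, the stated criterion admits a closed-form target. The short derivation below is a standard Lagrange-multiplier argument reconstructed for orientation, not quoted from the paper.

```latex
% Minimize expected failures at fixed variance of \hat p_A - \hat p_B:
%   minimize    q_A n_A + q_B n_B
%   subject to  \frac{p_A q_A}{n_A} + \frac{p_B q_B}{n_B} = C .
% Stationarity of the Lagrangian gives n_A^2 = \lambda p_A and n_B^2 = \lambda p_B, hence
\frac{n_A}{n_B} = \sqrt{\frac{p_A}{p_B}},
\qquad
\rho_A = \frac{\sqrt{p_A}}{\sqrt{p_A} + \sqrt{p_B}} .
```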

4.
Zhang L, Rosenberger WF. Biometrics 2006, 62(2): 562-569.
We provide an explicit asymptotic method to evaluate the performance of different response-adaptive randomization procedures in clinical trials with continuous outcomes. We use this method to investigate four response-adaptive randomization procedures. Their performance, especially power and the skewing of treatment assignments toward the better treatment, is evaluated thoroughly on theoretical grounds. These results are then verified by simulation. Our analysis concludes that the doubly adaptive biased coin design targeting optimal allocation is the best procedure for practical use. We also consider the effects of delayed responses and of nonstandard responses, for example, Cauchy-distributed responses. We illustrate our procedure by redesigning a real clinical trial.

5.
Many late-phase clinical trials recruit subjects at multiple study sites. This introduces a hierarchical structure into the data that can result in a loss of power compared to a more homogeneous single-center trial. Building on a recently proposed approach to sample size determination, we suggest a sample size recalculation procedure for multicenter trials with continuous endpoints. The procedure estimates nuisance parameters at interim from noncomparative data and recalculates the required sample size based on these estimates. In contrast to other sample size calculation methods for multicenter trials, our approach assumes a mixed effects model and does not rely on balanced data within centers. It is therefore advantageous, especially for sample size recalculation at interim. We illustrate the proposed methodology with a study evaluating a diabetes management system. Monte Carlo simulations are carried out to evaluate the operating characteristics of the sample size recalculation procedure using comparative as well as noncomparative data, assessing their dependence on parameters such as between-center heterogeneity, residual variance of the observations, treatment effect size, and number of centers. We compare two estimators of between-center heterogeneity, an unadjusted and a bias-adjusted estimator, both based on quadratic forms. For the proposed unadjusted estimator, the type I error probability and statistical power are close to their nominal levels for all parameter combinations considered in our simulation study, whereas the adjusted estimator exhibits some type I error rate inflation. Overall, the sample size recalculation procedure can be recommended to mitigate risks arising from misspecified nuisance parameters at the planning stage.
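A deliberately simplified sketch of the general idea of blinded sample size recalculation: an interim nuisance-variance estimate from pooled, noncomparative data is plugged back into the standard two-sample formula. The paper's mixed-model machinery (separate between-center and residual variance components estimated via quadratic forms) is omitted, and all numbers below are hypothetical, so this is illustrative only.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(sigma2, delta, alpha=0.05, power=0.8):
    """Standard two-sample sample size per arm for a mean difference delta,
    given an estimate sigma2 of the outcome variance."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * sigma2 * z**2 / delta**2))

# Interim, noncomparative data pooled over arms (blinded); values are simulated.
rng = np.random.default_rng(1)
interim = rng.normal(loc=0.0, scale=1.3, size=120)
sigma2_hat = interim.var(ddof=1)          # interim nuisance-parameter estimate
print(n_per_arm(sigma2_hat, delta=0.5))   # recalculated sample size per arm
```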

6.
We assess the efficiency of balanced treatment allocation methods in clinical trials for comparing treatments with respect to survival. We compare optimal designs for each of three standard survival analysis techniques (maximum partial likelihood estimation, the log-rank test, and exponential regression) with balanced designs over a range of hypothetical trials. Although balanced designs are not optimal, we find them to be very efficient. In view of the high efficiency demonstrated here and in a previous paper (Begg and Kalish, 1984, Biometrics 40, 409-420), and of the practical difficulties in implementing an optimal design, we recommend the use of balanced allocation methods in practice.

7.
Begg CB, Kalish LA. Biometrics 1984, 40(2): 409-420.
Many clinical trials have a binary outcome variable. If covariate adjustment is necessary in the analysis, the logistic-regression model is frequently used. Optimal designs for allocating treatments for this model, or for any nonlinear or heteroscedastic model, are generally unbalanced with regard to overall treatment totals and totals within strata. However, all treatment-allocation methods that have been recommended for clinical trials in the literature are designed to balance treatments within strata, either directly or asymptotically. In this paper, the efficiencies of balanced sequential allocation schemes are measured relative to sequential Ds-optimal designs for the logistic model, using as examples completed trials conducted by the Eastern Cooperative Oncology Group and systematic simulations. The results demonstrate that stratified, balanced designs are quite efficient, in general. However, complete randomization is frequently inefficient, and will occasionally result in a trial that is very inefficient.

8.
In this work, we propose a novel method for individualized treatment selection when the treatment response is multivariate. Our method covers any number of treatments and can be applied to a broad set of models. The proposed method uses a Mahalanobis-type distance measure to establish an ordering of treatments based on treatment performance measures. Our investigation focuses on mean responses conditional on lower-dimensional composite scores of the covariates, where these scores are built using single-index models that approximate the mean responses as functions of patient covariates. Smoothed estimates of these conditional means are combined to construct an estimate of the distance measure, which is then used to estimate the optimal treatment. An empirical study demonstrates the performance of the proposed method in finite samples. We also present an analysis of data from an HIV clinical trial to show the applicability of the proposed procedure to real data.
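A minimal sketch of the distance-based ordering step only: treatments are ranked by a Mahalanobis-type distance between each treatment's (estimated) multivariate mean response and a clinically desirable target vector. The single-index score construction and smoothing described in the abstract are omitted, and the means, target, and covariance below are hypothetical.

```python
import numpy as np

def mahalanobis_order(treatment_means, target, cov):
    """Rank treatments by Mahalanobis-type distance to a target response
    vector: smaller distance = preferred treatment."""
    prec = np.linalg.inv(cov)
    dists = {}
    for k, mu in treatment_means.items():
        d = np.asarray(mu) - np.asarray(target)
        dists[k] = float(d @ prec @ d)
    return sorted(dists, key=dists.get), dists

means = {"A": [1.2, 0.8], "B": [0.9, 1.1], "C": [0.4, 0.3]}   # hypothetical estimates
order, d = mahalanobis_order(means, target=[1.0, 1.0],
                             cov=np.array([[1.0, 0.3], [0.3, 1.0]]))
print(order, d)   # treatments ordered from best (closest to target) to worst
```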

9.
Donner A, Klar N, Zou G. Biometrics 2004, 60(4): 919-925.
Split-cluster designs are frequently used in the health sciences when naturally occurring clusters such as multiple sites or organs in the same subject are assigned to different treatments. However, statistical methods for the analysis of binary data arising from such designs are not well developed. The purpose of this article is to propose and evaluate a new procedure for testing the equality of event rates in a design dividing each of k clusters into two segments having multiple sites (e.g., teeth, lesions). The test statistic proposed is a generalization of a previously published procedure based on adjusting the standard Pearson chi-square statistic, but can also be derived as a score test using the approach of generalized estimating equations.
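As a generic illustration of the adjustment idea (not the paper's split-cluster statistic), a Pearson chi-square comparing two event rates can be deflated by an estimated design effect 1 + (m̄ − 1)ρ̂ to account for within-cluster correlation. The counts, mean cluster segment size, and intracluster correlation below are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def adjusted_chi_square(s1, n1, s2, n2, m_bar, icc):
    """Pearson chi-square for equality of two event rates, deflated by a
    design effect 1 + (m_bar - 1) * icc; a generic cluster adjustment,
    not the split-cluster score test of the paper."""
    p1, p2 = s1 / n1, s2 / n2
    p = (s1 + s2) / (n1 + n2)                       # pooled event rate
    x2 = (p1 - p2) ** 2 / (p * (1 - p) * (1 / n1 + 1 / n2))
    deff = 1 + (m_bar - 1) * icc                    # variance inflation factor
    x2_adj = x2 / deff
    return x2_adj, chi2.sf(x2_adj, df=1)            # adjusted statistic, p-value

print(adjusted_chi_square(s1=42, n1=120, s2=27, n2=118, m_bar=4.0, icc=0.15))
```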

10.
A common assumption of data analysis in clinical trials is that the patient population and the treatment effects do not vary during the course of the study. However, when trials enroll patients over several years, this assumption may be violated. Ignoring variations of the outcome distributions over time, under the control and experimental treatments, can lead to biased treatment effect estimates and poor control of false positive results. We propose and compare two procedures that account for possible variations of the outcome distributions over time, to correct treatment effect estimates and to control type I error rates. The first procedure models trends of patient outcomes with splines. The second leverages conditional inference principles, which have been introduced to analyze randomized trials when patient prognostic profiles are unbalanced across arms. These two procedures are applicable in response-adaptive clinical trials. We illustrate the consequences of trends in the outcome distributions in response-adaptive designs and in platform trials, and investigate the proposed methods in the analysis of a glioblastoma study.
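A minimal sketch of the spline-adjustment idea: regress the outcome on the treatment indicator plus a spline in enrollment time, so the treatment coefficient is estimated net of the temporal drift. The truncated-power basis, knots, and simulated drift below are stand-ins chosen for illustration, not the specification used in the paper.

```python
import numpy as np

def spline_basis(t, knots, degree=3):
    """Truncated-power spline basis in enrollment time t (a simple stand-in
    for the spline trend model described in the abstract)."""
    cols = [t ** d for d in range(1, degree + 1)]
    cols += [np.clip(t - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
n = 400
t = np.sort(rng.uniform(0, 1, n))                 # enrollment times
trt = rng.integers(0, 2, n)                       # randomized assignment
drift = 0.8 * np.sin(2 * np.pi * t)               # time trend in outcomes
y = 0.5 * trt + drift + rng.normal(0, 1, n)       # true treatment effect = 0.5

# Design matrix: intercept, treatment, spline-in-time trend adjustment
X = np.column_stack([np.ones(n), trt, spline_basis(t, knots=[0.25, 0.5, 0.75])])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("trend-adjusted treatment effect estimate:", beta[1])
```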

11.
This paper considers the problem of identifying which treatments are strictly inferior to the best treatment or treatments in a balanced one-way layout with three treatments, which has important applications in screening trials for new product development. A stepdown procedure is constructed that selects a subset of the treatments containing only treatments known to be strictly inferior to the best treatment or treatments. This stepdown procedure uses feedback from the first stage to the second stage, which improves its operating characteristics. The advantages accruing from this feedback are demonstrated.

12.
In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on the unblinded interim effect size can often lead to a sample size increase, and a statistical adjustment is usually needed for the final analysis to ensure that the type I error rate is appropriately controlled. In the current literature, sample size reestimation and the corresponding type I error control are discussed in the context of maintaining the original randomization ratio across treatment groups, which we refer to as "proportional increase." In practice, not all studies are designed with an optimal randomization ratio, for practical reasons. In such cases, when the sample size is to be increased, it is more efficient to allocate the additional subjects such that the randomization ratio is brought closer to an optimal ratio. In this research, we propose an adaptive randomization ratio change when a sample size increase is warranted. We refer to this strategy as "nonproportional increase," as the number of subjects added to each treatment group is no longer proportional to the original randomization ratio. The proposed method boosts power not only through the increase in sample size but also through efficient allocation of the additional subjects. Control of the type I error rate is shown analytically. Simulations are performed to illustrate the theoretical results.
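A small numerical sketch of why a nonproportional increase can help: with unequal arm variances, the Neyman-optimal allocation is proportional to the standard deviations, so directing the additional subjects to the noisier arm buys more power than splitting them 1:1. The sample sizes, effect size, and standard deviations are hypothetical, and the type I error adjustment after unblinded reestimation, which the paper addresses, is not covered here.

```python
import numpy as np
from scipy.stats import norm

def power(n_a, n_b, delta, sd_a, sd_b, alpha=0.05):
    """Approximate two-sample z-test power for a mean difference delta."""
    se = np.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return norm.sf(norm.ppf(1 - alpha / 2) - delta / se)

n_a0, n_b0, add = 100, 100, 60          # original 1:1 design, 60 extra subjects
delta, sd_a, sd_b = 0.4, 1.0, 2.0       # Neyman-optimal ratio would be 1:2

# (a) proportional increase: keep the original 1:1 ratio
p_prop = power(n_a0 + add // 2, n_b0 + add // 2, delta, sd_a, sd_b)

# (b) nonproportional increase: give the extra subjects to the noisier arm,
#     moving the overall ratio toward sd_a : sd_b
p_nonprop = power(n_a0, n_b0 + add, delta, sd_a, sd_b)

print(f"proportional: {p_prop:.3f}  nonproportional: {p_nonprop:.3f}")
```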

13.
In many clinical trials, the primary endpoint is time to an event of interest, for example, time to heart attack or tumor progression, and the statistical power of these trials is primarily driven by the number of events observed during the trials. In such trials, the number of events observed is affected not only by the number of subjects enrolled but also by other factors, including the event rate and the follow-up duration. Consequently, it is important for investigators to be able to monitor and accurately predict patient accrual and event times, so as to anticipate the times of interim and final analyses and enable efficient allocation of research resources; these have long been recognized as important aspects of trial design and conduct. Existing methods for predicting event times all assume that patient accrual follows a Poisson process with a constant rate over time; however, it is fairly common in real-life clinical trials that the accrual rate changes over time. In this paper, we propose a Bayesian joint modeling approach for monitoring and prediction of accrual and event times in clinical trials. We employ a nonhomogeneous Poisson process to model patient accrual and a parametric or nonparametric model for the event and loss-to-follow-up processes. Compared to existing methods, our proposed methods are more flexible and robust in that we model accrual and event/loss-to-follow-up times jointly and allow the underlying accrual rate to change over time. We evaluate the performance of the proposed methods through simulation studies and illustrate the methods using data from a real oncology trial.
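To make the accrual component concrete, here is a minimal simulation of a nonhomogeneous Poisson accrual process using the standard thinning algorithm. The ramp-up rate function is an arbitrary illustrative choice; the paper embeds such a process in a Bayesian joint model with event and loss-to-follow-up components, which this sketch does not attempt.

```python
import numpy as np

def simulate_nhpp(rate_fn, horizon, rate_max, seed=0):
    """Simulate arrival times of a nonhomogeneous Poisson process on
    [0, horizon] by thinning a homogeneous process with rate rate_max."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > horizon:
            break
        if rng.random() < rate_fn(t) / rate_max:   # accept with prob lambda(t)/lambda_max
            times.append(t)
    return np.array(times)

# Accrual rate that ramps up over the first year and then plateaus (illustrative)
rate = lambda t: 5.0 * min(t / 12.0, 1.0)          # patients per month
arrivals = simulate_nhpp(rate, horizon=36.0, rate_max=5.0, seed=3)
print(len(arrivals), "patients accrued over 36 months")
```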

14.
Das (1960) gave a method for constructing confounded balanced asymmetrical factorial designs of the type v × 2² using BIB designs. In the present paper, a method is given for constructing balanced asymmetrical factorial designs of the type (v − t) × 2² using truncated balanced incomplete block designs obtained by omitting t treatments. Likewise, partially balanced asymmetrical factorial designs can be obtained by omitting any particular treatment along with its first or second associate treatments from the v treatments of a PBIB design. A large number of new designs not available in the literature can be obtained through this technique. These designs are well suited for varietal trials with multiple basals.

15.
The stepped wedge design (SWD) is a form of cluster randomized trial, usually comparing two treatments, which is divided into time periods and sequences, with clusters allocated to sequences. Typically all sequences start with the standard treatment and end with the new treatment, with the change happening at different times in the different sequences. The clusters will usually differ in size but this is overlooked in much of the existing literature. This paper considers the case when clusters have different sizes and determines how efficient designs can be found. The approach uses an approximation to the variance of the treatment effect, which is expressed in terms of the proportions of clusters and of individuals allocated to each sequence of the design. The roles of these sets of proportions in determining an efficient design are discussed and illustrated using two SWDs, one in the treatment of sexually transmitted diseases and one in renal replacement therapy. Cluster-balanced designs, which allocate equal numbers of clusters to each sequence, are shown to have excellent statistical and practical properties; suggestions are made about the practical application of the results for these designs. The paper concentrates on the cross-sectional case, where subjects are measured once, but it is briefly indicated how the methods can be extended to the closed-cohort design.

16.
Targeted therapies based on analysis of the tumor's genomic aberrations have shown promising results in cancer prognosis and treatment. Regardless of tumor type, trials that match patients to targeted therapies for their particular genomic aberrations have become a mainstream direction in the therapeutic management of patients with cancer. Therefore, finding the subpopulation of patients who can most benefit from an aberration-specific targeted therapy across multiple cancer types is important. We propose an adaptive Bayesian clinical trial design for patient allocation and subpopulation identification. We start with a decision-theoretic approach, including a utility function and a probability model across all possible subpopulation models. The main features of the proposed design and population-finding method are the use of a flexible nonparametric Bayesian survival regression based on a random covariate-dependent partition of patients, decisions based on a flexible utility function that reflects the requirements of the clinicians appropriately and realistically, and the adaptive allocation of patients to their superior treatments. Through extensive simulation studies, the new method is demonstrated to achieve desirable operating characteristics and to compare favorably against the alternatives.

17.
Zhao Y, Zeng D, Socinski MA, Kosorok MR. Biometrics 2011, 67(4): 1422-1433.
Typical regimens for advanced metastatic stage IIIB/IV non-small cell lung cancer (NSCLC) consist of multiple lines of treatment. We present an adaptive reinforcement learning approach to discover optimal individualized treatment regimens from a specially designed clinical trial (a "clinical reinforcement trial") of an experimental treatment for patients with advanced NSCLC who have not previously been treated with systemic therapy. In addition to the complexity of selecting optimal compounds for first- and second-line treatments based on prognostic factors, another primary goal is to determine the optimal time to initiate second-line therapy, either immediately or delayed after induction therapy, yielding the longest overall survival time. A reinforcement learning method called Q-learning is utilized, which involves learning an optimal regimen from patient data generated by the clinical reinforcement trial. The Q-function, with time-indexed parameters, is approximated using a modification of support vector regression that can handle censored data. Within this framework, a simulation study shows that the procedure can extract optimal regimens for two lines of treatment directly from clinical data without prior knowledge of the treatment effect mechanism. In addition, we demonstrate that the design reliably selects the best initial time for second-line therapy while taking into account the heterogeneity of NSCLC across patients.
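A minimal sketch of the backward-induction idea behind two-stage Q-learning. To stay self-contained it uses simulated data, a simplified continuous reward without censoring, and linear Q-functions fitted by least squares in place of the paper's censored support vector regression; the data-generating model is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
x1 = rng.normal(size=n)                       # baseline prognostic factor
a1 = rng.integers(0, 2, n)                    # first-line treatment (0/1)
x2 = 0.5 * x1 + rng.normal(size=n)            # intermediate disease status
a2 = rng.integers(0, 2, n)                    # second-line treatment (0/1)
# simplified reward: second-line treatment helps when x2 is high (illustrative)
y = 1 + x1 + 0.3 * a1 + a2 * (x2 - 0.2) + rng.normal(size=n)

def fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Stage 2: regress reward on (1, x2, a2, a2*x2); optimal a2 maximizes fitted Q2
X2 = np.column_stack([np.ones(n), x2, a2, a2 * x2])
b2 = fit(X2, y)
q2 = lambda x, a: b2[0] + b2[1] * x + a * (b2[2] + b2[3] * x)
v2 = np.maximum(q2(x2, 0), q2(x2, 1))         # value under the optimal stage-2 rule

# Stage 1: regress the stage-2 value on (1, x1, a1, a1*x1)
X1 = np.column_stack([np.ones(n), x1, a1, a1 * x1])
b1 = fit(X1, v2)
print("give second-line treatment when x2 >", -b2[2] / b2[3])  # decision boundary
print("stage-1 rule coefficients:", b1)
```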

18.
The present paper describes a controlled clinical trial investigating the effect of pulsed electromagnetic field therapy for the treatment of sequentially entering patients with rheumatoid arthritis. According to the study design, repeated monitorings of the patients are carried out to assess their clinical status. The allocation of entering patients to either a therapy group or a placebo group is done using a newly designed adaptive allocation rule termed the randomized longitudinal play-the-winner rule. A discussion of some inference procedures is followed by the analysis of a real data set, which shows a clear preference for the therapy over the placebo.
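For context, here is a minimal simulation of the basic randomized play-the-winner urn, RPW(u, β), from which the longitudinal rule in the abstract is extended; the longitudinal updating from repeated monitorings is not reproduced here, and the success probabilities are hypothetical.

```python
import numpy as np

def rpw_trial(n, p_a, p_b, u=1, beta=1, seed=5):
    """Randomized play-the-winner RPW(u, beta) urn: start with u balls per
    treatment; after a success add beta balls of the same type, after a
    failure add beta balls of the opposite type. Returns allocation counts."""
    rng = np.random.default_rng(seed)
    balls = {"A": u, "B": u}
    counts = {"A": 0, "B": 0}
    for _ in range(n):
        total = balls["A"] + balls["B"]
        arm = "A" if rng.random() < balls["A"] / total else "B"
        counts[arm] += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        other = "B" if arm == "A" else "A"
        balls[arm if success else other] += beta
    return counts

print(rpw_trial(200, p_a=0.7, p_b=0.4))   # allocation skews toward arm A
```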

19.
We propose a dynamic allocation procedure that increases power and efficiency when measuring an average treatment effect in sequential randomized trials by exploiting some subjects' previously assessed responses. Subjects arrive sequentially and are either randomized or paired with a previously randomized subject and administered the alternate treatment. The pairing is made via a dynamic matching criterion that iteratively learns which specific covariates are important to the response. We develop estimators for the average treatment effect as well as an exact test. We illustrate our method's gains in efficiency and power over other allocation procedures in both simulated scenarios and a clinical trial dataset. An R package, "SeqExpMatch", for use by practitioners is available on CRAN.
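A simplified sketch of the sequential "randomize or pair" allocation: each arriving subject is matched to the closest earlier unpaired subject on the covariates and given the opposite treatment, otherwise randomized. A fixed Euclidean distance and a hypothetical threshold stand in for the paper's adaptively learned matching criterion, and the estimators and exact test are not shown.

```python
import numpy as np

def sequential_match(covariates, threshold=1.0, seed=6):
    """Sequentially either randomize a subject or pair it with the closest
    earlier unpaired subject (Euclidean distance on the covariates), giving
    the new pair member the opposite treatment. Returns 0/1 assignments."""
    rng = np.random.default_rng(seed)
    X = np.asarray(covariates, float)
    assign, unpaired = [], []                        # unpaired: indices of lone subjects
    for i, x in enumerate(X):
        best, best_d = None, np.inf
        for j in unpaired:
            d = np.linalg.norm(x - X[j])
            if d < best_d:
                best, best_d = j, d
        if best is not None and best_d < threshold:  # close match found: pair them
            assign.append(1 - assign[best])
            unpaired.remove(best)
        else:                                        # no close match: randomize
            assign.append(int(rng.random() < 0.5))
            unpaired.append(i)
    return assign

X = np.random.default_rng(7).normal(size=(20, 2))    # hypothetical covariates
print(sequential_match(X))
```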

20.
Restricted randomization designs in clinical trials.
Simon R. Biometrics 1979, 35(2): 503-512.
Though therapeutic clinical trials are often categorized as using either "randomization" or "historical controls" as a basis for treatment evaluation, pure random assignment of treatments is rarely employed. Instead various restricted randomization designs are used. The restrictions include the balancing of treatment assignments over time and the stratification of the assignment with regard to covariates that may affect response. Restricted randomization designs for clinical trials differ from those of other experimental areas because patients arrive sequentially and a balanced design cannot be ensured. The major restricted randomization designs and arguments concerning the proper role of stratification are reviewed here. The effect of randomization restrictions on the validity of significance tests is discussed.
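As one concrete example of the restrictions reviewed above, permuted-block randomization forces balance over time by constraining each block of assignments to contain equal numbers per arm. The block size below is an arbitrary illustrative choice.

```python
import numpy as np

def permuted_blocks(n, block_size=4, seed=8):
    """Permuted-block randomization for two treatments: within each block of
    size block_size, exactly half the assignments go to each arm, so the
    allocation stays balanced over time."""
    rng = np.random.default_rng(seed)
    assignments = []
    while len(assignments) < n:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)                 # random order within the block
        assignments.extend(block)
    return assignments[:n]

print(permuted_blocks(10))
```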
