Similar Literature
20 similar records found.
1.
Point estimation in group sequential and adaptive trials is an important issue in the analysis of a clinical trial. Most literature in this area is only concerned with estimation after completion of a trial. Since adaptive designs allow reassessment of sample size during the trial, reliable point estimation of the true effect when continuing the trial is additionally needed. We present a bias-adjusted estimator which allows a more exact sample size determination based on the conditional power principle than the naive sample mean does.
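As a loose illustration of the conditional power principle mentioned above (a hedged sketch, not the estimator of the cited paper), the following snippet reassesses the second-stage sample size so that conditional power, evaluated at an interim effect estimate `delta_hat` (which, per the abstract, should ideally be bias-adjusted rather than the naive sample mean), reaches a target value under a two-stage inverse normal combination rule with equal weights.

```python
import numpy as np
from scipy.stats import norm

def reassess_n2(z1, delta_hat, sigma, n_max, alpha=0.025, target_cp=0.8):
    """Smallest per-arm second-stage sample size n2 such that the conditional
    power, evaluated at the interim effect estimate delta_hat, reaches target_cp.
    Combination rule: reject if w1*z1 + w2*Z2 > z_{1-alpha} (equal weights)."""
    w1 = w2 = np.sqrt(0.5)
    crit = norm.ppf(1 - alpha)
    for n2 in range(2, n_max + 1):
        drift = delta_hat / (sigma * np.sqrt(2.0 / n2))    # E[Z2] at delta_hat
        cp = 1 - norm.cdf((crit - w1 * z1) / w2 - drift)   # P(reject | z1, delta_hat)
        if cp >= target_cp:
            return n2, cp
    return n_max, cp

# e.g. an interim z of 1.1 and an (adjusted) effect estimate of 0.25 on a unit-SD scale:
print(reassess_n2(z1=1.1, delta_hat=0.25, sigma=1.0, n_max=500))
```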

2.
We consider estimation after a group sequential test. An estimator that is unbiased or has small bias may have substantial conditional bias (Troendle and Yu, 1999; Coburger and Wassmer, 2001). In this paper we derive the conditional maximum likelihood estimators of both the primary parameter and a secondary parameter, and investigate their properties within a conditional inference framework. The method applies to both the usual and adaptive group sequential test designs.

3.
Adaptive designs were originally developed for independent and uniformly distributed p-values. There are trial settings where independence is not satisfied or where it may not be possible to check whether it is satisfied. In these cases, the test statistics and p-values of each stage may be dependent. Since the probability of a type I error for a fixed adaptive design depends on the true dependence structure between the p-values of the stages, control of the type I error rate might be endangered if the dependence structure is not taken into account adequately. In this paper, we address the problem of controlling the type I error rate in two-stage adaptive designs if any dependence structure between the test statistics of the stages is admitted (worst case scenario). For this purpose, we pursue a copula approach to adaptive designs. For two-stage adaptive designs without futility stop, we derive the probability of a type I error in the worst case, that is for the most adverse dependence structure between the p-values of the stages. Explicit analytical considerations are performed for the class of inverse normal designs. A comparison with the significance level for independent and uniformly distributed p-values is performed. For inverse normal designs without futility stop and equally weighted stages, it turns out that correcting for the worst case is too conservative as compared to a simple Bonferroni design.
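For reference, here is a minimal sketch (assumptions mine, not taken from the cited paper) of the two designs being compared: the inverse normal combination test, whose level is guaranteed only for independent uniform stage-wise p-values, and the Bonferroni design, which remains valid under any dependence structure.

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_reject(p1, p2, alpha=0.025, w1=np.sqrt(0.5), w2=np.sqrt(0.5)):
    """Two-stage inverse normal combination test without futility stop.
    Controls alpha when p1, p2 are independent U(0,1) under H0; under an
    arbitrary (worst-case) dependence this is no longer guaranteed."""
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return z > norm.ppf(1 - alpha)

def bonferroni_reject(p1, p2, alpha=0.025):
    """Bonferroni design: rejects if either stage-wise p-value is below alpha/2.
    Valid for any dependence between the stages."""
    return min(p1, p2) < alpha / 2
```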

4.
Proschan and Hunsberger (1995) suggest the use of a conditional error function to construct a two-stage test that meets the α level and allows a very flexible reassessment of the sample size after the interim analysis. In this note we show that several adaptive designs can be formulated in terms of such an error function. The conditional power function, defined similarly, provides a simple method for sample size reassessment in adaptive two-stage designs.
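As a hedged illustration of the general idea (not the specific function proposed in the cited note), the conditional error function of a fixed-sample level-α z-test can be written in closed form; any second stage whose p-value is compared against this function preserves the overall level, because the function integrates to α under the null.

```python
import numpy as np
from scipy.stats import norm

def conditional_error(z1, alpha=0.025, t=0.5):
    """Conditional error function A(z1) of a one-sided fixed-sample level-alpha
    z-test, evaluated at interim information fraction t: the conditional
    probability, given z1, of rejecting at the end under H0."""
    crit = norm.ppf(1 - alpha)
    return 1 - norm.cdf((crit - np.sqrt(t) * z1) / np.sqrt(1 - t))

# After the interim analysis the second stage may be redesigned freely
# (e.g. with a reassessed sample size); rejecting iff the stage-2 p-value is
# below A(z1) keeps the overall type I error at alpha, since E[A(Z1)] = alpha
# when Z1 ~ N(0, 1) under the null hypothesis.
```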

5.
In order to maximize control of heterogeneity within complete blocks, an experimenter could use incomplete blocks of size k = 2 or 3. In certain situations, incomplete blocks of this nature would eliminate the need for spatial analyses such as nearest neighbor methods. The intrablock efficiency factors for such designs are relatively low. However, with recovery of interblock information, Federer and Speed (1987) have presented measures of design efficiency which demonstrate that efficiency factors approach unity for certain ratios of the intrablock and interblock variance components. Hence, with recovery of interblock information, even incomplete block designs with k = 2 or 3 have relatively high efficiency factors. The reduction in the intrablock error variance over the complete block error variance will in many situations provide designs with high efficiency. A simple procedure for constructing incomplete blocks of sizes 2 and 3 is presented. It is shown how to obtain additional zero-one association confounding arrangements when v = 4t, with t an integer, and for v = pk, k ≤ p. It is indicated how the statistical analysis for these designs is carried out.

6.
Many variables and their interactions can affect a biotechnological process. Testing a large number of variables and all their possible interactions is a cumbersome task and its cost can be prohibitive. Several screening strategies, with a relatively low number of experiments, can be used to find which variables have the largest impact on the process and to estimate the magnitude of their effect. One approach for process screening is the use of experimental designs, among which fractional factorial and Plackett–Burman designs are frequent choices. Other screening strategies involve the use of artificial neural networks (ANNs). The advantage of ANNs is that they make fewer assumptions than experimental designs, but they render black-box models (i.e., little information can be extracted about the process mechanics). In this paper, we simulate a biotechnological process (fed-batch growth of baker's yeast) to analyze and compare the effect of random experimental errors of different magnitudes and statistical distributions on experimental designs and ANNs. Except for the situation in which the error has a normal distribution and the standard deviation is constant, it was not possible to determine a clear-cut rule for favoring one screening strategy over the other. Instead, we found that the data can be better analyzed using both strategies simultaneously.
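To make the screening-design idea concrete, here is a hedged sketch (a textbook construction, not taken from the cited study): a saturated two-level fractional factorial screening seven factors in eight runs, built with the standard 2^(7-4) generators D = AB, E = AC, F = BC, G = ABC, with main effects estimated by simple contrasts (aliased with two-factor interactions at resolution III).

```python
import itertools
import numpy as np

# Full 2^3 design in the base factors A, B, C; the remaining four columns
# are generated as products (standard 2^(7-4) resolution III generators).
base = np.array(list(itertools.product([-1, 1], repeat=3)))
A, B, C = base[:, 0], base[:, 1], base[:, 2]
design = np.column_stack([A, B, C, A * B, A * C, B * C, A * B * C])   # 8 runs x 7 factors

def main_effects(y):
    """Main-effect estimates: mean response at the +1 level minus mean at the
    -1 level for each factor column (y holds the 8 responses in run order)."""
    y = np.asarray(y, dtype=float)
    return design.T @ y / (len(y) / 2)
```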

7.
Currently, the design of group sequential clinical trials requires choosing among several distinct design categories, design scales, and strategies for determining stopping rules. This approach can limit the design selection process so that clinical issues are not fully addressed. This paper describes a family of designs that unifies previous approaches and allows continuous movement among the previous categories. This unified approach facilitates the process of tailoring the design to address important clinical issues. The unified family of designs is constructed from a generalization of a four-boundary group sequential design in which the shape and location of each boundary can be independently specified. Methods for implementing the design using error-spending functions are described. Examples illustrating the use of the design family are also presented.
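As a hedged, much simpler illustration of the error-spending machinery mentioned above (a single one-sided efficacy boundary at two looks, not the four-boundary family of the paper), the following sketch derives interim and final critical values from an O'Brien–Fleming-type spending function, using the canonical joint normality of the stage-wise statistics.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def obf_spending(t, alpha=0.025):
    """O'Brien-Fleming-like alpha-spending function (Lan-DeMets form)."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

def two_look_boundaries(t1=0.5, alpha=0.025):
    """One-sided efficacy boundaries (c1, c2) for looks at information
    fractions t1 and 1, spending obf_spending(t1) at the interim look."""
    a1 = obf_spending(t1, alpha)
    c1 = norm.ppf(1 - a1)
    rho = np.sqrt(t1)                      # corr(Z1, Z2) in the canonical model
    joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    def remaining_spend(c2):
        # P(continue at look 1 and reject at look 2 | H0) minus the alpha left to spend
        return (norm.cdf(c1) - joint.cdf([c1, c2])) - (alpha - a1)
    c2 = brentq(remaining_spend, 0.0, 6.0)
    return c1, c2

print(two_look_boundaries())   # roughly (2.96, 1.97) for t1 = 0.5, alpha = 0.025
```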

8.
Group Divisible Rotatable (GDR) designs are designs in which the factors are divided into groups such that, for the factors within a group, the design is rotatable. In the present paper we obtain a series of Group Divisible Second Order Rotatable designs by decomposing the v-dimensional space corresponding to the v factors under consideration into three mutually orthogonal spaces. We give the least squares estimates of the parameters and the analysis and construction of such designs.

9.
Brannath W, Bauer P, Maurer W, Posch M. Biometrics 2003;59(1):106-114
The problem of simultaneous sequential tests for noninferiority and superiority of a treatment, as compared to an active control, is considered in terms of continuous hierarchical families of one-sided null hypotheses, in the framework of group sequential and adaptive two-stage designs. The crucial point is that the decision boundaries for the individual null hypotheses may vary over the parameter space. This allows one to construct designs where, e.g., a rigid stopping criterion is chosen, rejecting or accepting all individual null hypotheses simultaneously. Another possibility is to use monitoring type stopping boundaries, which leave some flexibility to the experimenter: he can decide, at the interim analysis, whether he is satisfied with the noninferiority margin achieved at this stage, or wants to go for more at the second stage. In the case where he proceeds to the second stage, he may perform midtrial design modifications (e.g., reassess the sample size). The proposed approach allows one to "spend," e.g., less of alpha for an early proof of noninferiority than for an early proof of superiority, and is illustrated by typical examples.

10.
A simple and operationally convenient approximation is proposed for the Bayes optimal multistage design within the Colton decision-theoretic model for the comparison of two medical treatments. The two- and three-stage designs are developed in full; the latter is found to be superior to several existing designs including some involving sequential adaptive sampling.

11.
Cheng Y, Shen Y. Biometrics 2004;60(4):910-918
For confirmatory trials of regulatory decision making, it is important that adaptive designs under consideration provide inference with the correct nominal level, as well as unbiased estimates and confidence intervals for the treatment comparisons in the actual trials. However, the naive point estimate and its confidence interval are often biased in adaptive sequential designs. We develop a new procedure for estimation following a test from a sample size reestimation design. The method for obtaining an exact confidence interval and point estimate is based on a general distribution property of a pivot function of the self-designing group sequential clinical trial by Shen and Fisher (1999, Biometrics 55, 190-197). A modified estimate is proposed that explicitly accounts for the futility stopping boundary, with reduced bias when block sizes are small. The proposed estimates are shown to be consistent. The computation of the estimates is straightforward. We also provide a modified weight function to improve the power of the test. Extensive simulation studies show that the exact confidence intervals have accurate nominal probability of coverage, and the proposed point estimates are nearly unbiased with practical sample sizes.

12.
OBJECTIVES: The use of conventional Transmission/Disequilibrium tests in the analysis of candidate-gene association studies requires the precise and complete pre-specification of the total number of trios to be sampled to obtain sufficient power at a certain significance level (type I error risk). In most of these studies, very little information about the genetic effect size will be available beforehand and thus it will be difficult to calculate a reasonable sample size. One would therefore wish to reassess the sample size during the course of a study. METHOD: We propose an adaptive group sequential procedure which allows for both early stopping of the study with rejection of the null hypothesis (H0) and recalculation of the sample size based on interim effect size estimates when H0 cannot be rejected. The applicability of the method, which was developed by Müller and Schäfer [Biometrics 2001;57:886-891] in a clinical context, is demonstrated by a numerical example. Monte Carlo simulations are performed comparing the adaptive procedure with a fixed sample and a conventional group sequential design. RESULTS: The main advantage of the adaptive procedure is its flexibility to allow for design changes in order to achieve a stabilized power characteristic while controlling the overall type I error and using the information already collected. CONCLUSIONS: Given these advantages, the procedure is a promising alternative to traditional designs.

13.
Inverse Adaptive Cluster Sampling
Consider a population in which the variable of interest tends to be at or near zero for many of the population units but a subgroup exhibits values distinctly different from zero. Such a population can be described as rare in the sense that the proportion of elements having nonzero values is very small. Obtaining an estimate of a population parameter such as the mean or total that is nonzero is difficult under classical fixed sample-size designs since there is a reasonable probability that a fixed sample size will yield all zeroes. We consider inverse sampling designs that use stopping rules based on the number of rare units observed in the sample. We look at two stopping rules in detail and derive unbiased estimators of the population total. The estimators do not rely on knowing what proportion of the population exhibits the rare trait but instead use an estimated value. Hence, the estimators are similar to those developed for poststratification sampling designs. We also incorporate adaptive cluster sampling into the sampling design to allow for the case where the rare elements tend to cluster within the population in some manner. The formulas for the variances of the estimators do not allow direct analytic comparison of the efficiency of the various designs and stopping rules, so we provide the results of a small simulation study to obtain some insight into the differences among the stopping rules and sampling approaches. The results indicate that a modified stopping rule that incorporates an adaptive sampling component and utilizes an initial random sample of fixed size is the best in the sense of having the smallest variance.
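The following hedged simulation illustrates only the motivation stated above: with a fixed sample size from a rare population there is a non-negligible chance of observing nothing but zeroes, whereas an inverse-sampling rule keeps drawing until a preset number of rare units has been seen. The population parameters and stopping rule below are illustrative, and the paper's unbiased estimators are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def compare_rules(N=10_000, n_rare=50, fixed_n=100, stop_r=3, reps=2_000):
    """Probability that a fixed-size simple random sample from a rare population
    is all zeroes, versus the sample size needed by an inverse-sampling rule
    that stops once stop_r rare (nonzero) units have been drawn."""
    y = np.zeros(N)
    y[:n_rare] = rng.lognormal(mean=2.0, sigma=0.5, size=n_rare)   # the rare nonzero subgroup
    all_zero, stop_sizes = 0, []
    for _ in range(reps):
        order = rng.permutation(N)
        all_zero += not np.any(y[order[:fixed_n]] > 0)
        seen, k = 0, 0
        while seen < stop_r:              # inverse sampling: draw until stop_r rare units seen
            seen += y[order[k]] > 0
            k += 1
        stop_sizes.append(k)
    print(f"P(fixed sample of {fixed_n} is all zero) ~ {all_zero / reps:.3f}")
    print(f"inverse sampling: mean sample size ~ {np.mean(stop_sizes):.0f}")

compare_rules()
```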

14.
Predictive and prognostic biomarkers play an important role in personalized medicine to determine strategies for drug evaluation and treatment selection. In the context of continuous biomarkers, identification of an optimal cutoff for patient selection can be challenging due to limited information on biomarker predictive value, the biomarker's distribution in the intended use population, and the complexity of the biomarker relationship to clinical outcomes. As a result, prespecified candidate cutoffs may be rationalized based on biological and practical considerations. In this context, adaptive enrichment designs have been proposed with interim decision rules to select a biomarker-defined subpopulation to optimize study performance. With a group sequential design as a reference, the performance of several proposed adaptive designs is evaluated and compared under various scenarios (e.g., sample size, study power, enrichment effects) where type I error rates are well controlled through closed testing procedures and where subpopulation selections are based upon the predictive probability of trial success. It is found that when the treatment is more effective in a subpopulation, these adaptive designs can improve study power substantially. Furthermore, we identified one adaptive design to have generally higher study power than the other designs under various scenarios.

15.
Inference after two-stage single-arm designs with binary endpoint is challenging due to the nonunique ordering of the sampling space in multistage designs. We illustrate the problem of specifying test-compatible confidence intervals for designs with nonconstant second-stage sample size and present two approaches that guarantee confidence intervals consistent with the test decision. Firstly, we extend the well-known Clopper–Pearson approach of inverting a family of two-sided hypothesis tests from the group-sequential case to designs with fully adaptive sample size. Test compatibility is achieved by using a sample space ordering that is derived from a test-compatible estimator. The resulting confidence intervals tend to be conservative but assure the nominal coverage probability. In order to assess the possibility of further improving these confidence intervals, we pursue a direct optimization approach minimizing the mean width of the confidence intervals. While the latter approach produces more stable coverage probabilities, it is also slightly anti-conservative and yields only negligible improvements in mean width. We conclude that the Clopper–Pearson-type confidence intervals based on a test-compatible estimator are the best choice if the nominal coverage probability is not to be undershot and compatibility of test decision and confidence interval is to be preserved.
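For orientation, the single-stage Clopper–Pearson interval that the abstract extends to two-stage adaptive designs can be computed from the beta-quantile form of the inverted equal-tailed exact binomial tests; the two-stage, test-compatible version needs the sample-space ordering described above and is not reproduced in this hedged sketch.

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Classical Clopper-Pearson interval for a binomial proportion, obtained by
    inverting the family of two-sided (equal-tailed) exact binomial tests."""
    lower = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    upper = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lower, upper

print(clopper_pearson(7, 24))   # e.g. 7 responses observed in 24 patients
```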

16.
Clinical trials with adaptive sample size re-assessment, based on an analysis of the unblinded interim results (ubSSR), have gained in popularity due to uncertainty regarding the value of δ at which to power the trial at the start of the study. While the statistical methodology for controlling the type-1 error of such designs is well established, there remain concerns that conventional group sequential designs with no ubSSR can accomplish the same goals with greater efficiency. The precise manner in which this efficiency comparison can be objectified has been difficult to quantify, however. In this paper, we present a methodology for making this comparison in a standard, well-accepted manner by plotting the unconditional power curves of the two approaches while holding constant their expected sample size, at each value of δ in the range of interest. It is seen that under reasonable decision rules for increasing sample size (conservative promising zones, and no more than a 50% increase in sample size) there is little or no loss of efficiency for the adaptive designs in terms of unconditional power. The two approaches, however, have very different conditional power profiles. More generally, a methodology has been provided for comparing any design with ubSSR relative to a comparable group sequential design with no ubSSR, so one can determine whether the efficiency loss, if any, of the ubSSR design is offset by the advantages it confers for re-powering the study at the time of the interim analysis.
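As a hedged sketch of the ubSSR side of such a comparison (the promising-zone thresholds, stage sizes, and 50% cap below are illustrative assumptions, not the rules of the cited paper), the following simulation estimates the unconditional power and expected sample size of a two-stage design that enlarges the second stage when interim conditional power is middling, using the inverse normal combination to keep the type-1 error at its nominal level. Matching a group sequential design to the same expected sample size at each δ, as the abstract describes, would be the next step and is not shown.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, n1, n2_plan = 0.025, 100, 100              # per-arm stage sizes (illustrative)
w1 = w2 = np.sqrt(0.5)
crit = norm.ppf(1 - alpha)

def cond_power(z1, n2, delta, sigma=1.0):
    drift = delta / (sigma * np.sqrt(2.0 / n2))
    return 1 - norm.cdf((crit - w1 * z1) / w2 - drift)

def simulate(delta, reps=20_000, sigma=1.0):
    rejects, total_n = 0, 0
    for _ in range(reps):
        z1 = rng.normal(delta / (sigma * np.sqrt(2.0 / n1)), 1.0)
        delta_obs = z1 * sigma * np.sqrt(2.0 / n1)            # naive interim effect estimate
        cp = cond_power(z1, n2_plan, delta_obs)
        n2 = int(1.5 * n2_plan) if 0.3 <= cp < 0.8 else n2_plan   # promising zone, 50% cap
        z2 = rng.normal(delta / (sigma * np.sqrt(2.0 / n2)), 1.0)
        rejects += (w1 * z1 + w2 * z2) > crit                 # inverse normal keeps the level under ubSSR
        total_n += n1 + n2
    return rejects / reps, total_n / reps

for d in (0.0, 0.2, 0.3, 0.4):
    power, exp_n = simulate(d)
    print(f"delta={d:.1f}: unconditional power={power:.3f}, expected per-arm N={exp_n:.0f}")
```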

17.
Curve-free and model-based continual reassessment method designs
O'Quigley J. Biometrics 2002;58(1):245-249
Gasparini and Eisele (2000, Biometrics 56, 609-615) present a development of the continual reassessment method of O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33-48). They call their development a curve-free method for Phase I clinical trials. However, unless we are dealing with informative prior information, the curve-free method coincides with the usual model-based continual reassessment method. Both methods are subject to arbitrary specification parameters, and we provide some discussion on this. Whatever choices are made for one method, there exist equivalent choices for the other method, where "equivalent" means that the operating characteristics (sequential dose allocation and final recommendation) are the same. The insightful development of Gasparini and Eisele provides clarification on some of the basic ideas behind the continual reassessment method, particularly when viewed from a Bayesian perspective. But their development does not lead to a new class of designs, and the comparative results in their article, indicating some preference for curve-free designs over model-based designs, simply reflect a more fortunate choice of arbitrary specification parameters. Other choices could equally well have reversed their conclusion. A correct conclusion should be one of operational equivalence. The story is different for the case of informative priors, a situation that is inherently much more difficult. We discuss this. We also mention the important idea of two-stage designs (Moller, 1995, Statistics in Medicine 14, 911-922; O'Quigley and Shen, 1996, Biometrics 52, 163-174), arguing, via a simple comparison with the results of Gasparini and Eisele (2000), that there is room for notable gains here. Two-stage designs also have the advantage of avoiding the issue of prior specification altogether.
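To illustrate the kind of arbitrary specification parameters being discussed (skeleton, prior, target), here is a hedged sketch of one allocation step of a model-based CRM using the one-parameter power model p_i(a) = skeleton_i^exp(a), a normal prior on a, a grid approximation to the posterior, and selection of the dose whose posterior-mean toxicity probability is closest to the target. All numerical choices below are illustrative, not taken from the cited papers.

```python
import numpy as np
from scipy.stats import norm

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])   # prior toxicity guesses per dose (arbitrary)
target = 0.25

def crm_next_dose(doses_given, tox, prior_sd=1.34):
    """One CRM allocation step: posterior-mean toxicity per dose under the model
    p_i(a) = skeleton_i ** exp(a) with a ~ N(0, prior_sd^2) (prior_sd arbitrary),
    computed by a simple grid approximation to the posterior of a."""
    a = np.linspace(-4.0, 4.0, 2001)
    p = skeleton[np.asarray(doses_given)][:, None] ** np.exp(a)   # per-patient toxicity probs over the grid
    t = np.asarray(tox)[:, None]
    post = np.prod(np.where(t == 1, p, 1 - p), axis=0) * norm.pdf(a, scale=prior_sd)
    post /= post.sum()
    p_hat = np.array([(skeleton[i] ** np.exp(a) * post).sum() for i in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target))), p_hat

# e.g. three patients treated at dose level 1 (0-indexed), one with toxicity:
level, p_hat = crm_next_dose(doses_given=[1, 1, 1], tox=[0, 0, 1])
print(level, np.round(p_hat, 3))
```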

18.
Due to increasing discoveries of biomarkers and observed diversity among patients, there is growing interest in personalized medicine for the purpose of increasing the well-being of patients (ethics) and extending human life. In fact, these biomarkers and the observed heterogeneity among patients are useful covariates that can be used to achieve the ethical goals of clinical trials and to improve the efficiency of statistical inference. Covariate-adjusted response-adaptive (CARA) designs were developed to use the information in such covariates in randomization, both to maximize the well-being of participating patients and to increase the efficiency of statistical inference at the end of a clinical trial. In this paper, we establish conditions for consistency and asymptotic normality of maximum likelihood (ML) estimators of generalized linear models (GLM) for a general class of adaptive designs. We prove that the ML estimators are consistent and asymptotically follow a multivariate Gaussian distribution. The efficiency of the estimators and the performance of response-adaptive (RA), CARA, and completely randomized (CR) designs are examined based on the well-being of patients under a logit model with categorical covariates. Results from our simulation studies and an application to data from a clinical trial on stroke prevention in atrial fibrillation (SPAF) show that RA designs lead to ethically desirable outcomes as well as higher statistical efficiency compared to CARA designs if there is no treatment-by-covariate interaction in an ideal model. CARA designs were, however, more ethical than RA designs when there was a significant interaction.

19.
Designs incorporating more than one endpoint have become popular in drug development. One such design allows short-term information to be incorporated in an interim analysis if the long-term primary endpoint has not yet been observed for some of the patients. First, we consider a two-stage design with binary endpoints allowing for futility stopping only, based on conditional power under both fixed and observed effects. Design characteristics of three estimators are compared: one using the long-term primary endpoint only, one using the short-term endpoint only, and one combining data from both. For each approach, equivalent cut-off values for the fixed-effect and observed-effect conditional power calculations can be derived, resulting in the same overall power. While the type I error rate cannot be inflated in trials that stop for futility (it usually decreases), there is a loss of power. In this study, we consider different scenarios, including different thresholds for conditional power, different amounts of information available at the interim, and different correlations and probabilities of success. We further extend the methods to adaptive designs with unblinded sample size reassessment based on conditional power, with the inverse normal method as the combination function. Two different futility stopping rules are considered: one based on the conditional power, and one based on P-values from the Z-statistics of the estimators. Average sample size, the probability to stop for futility, and the overall power of the trial are compared, and the influence of the choice of weights is investigated.

20.
Traditionally, drug development is divided into three phases with different aims and objectives. Recently, so-called adaptive seamless designs, which allow the objectives of different development phases to be combined into a single trial, have gained much interest. Adaptive trials combining treatment selection typical of Phase II with confirmation of efficacy as in Phase III are referred to as adaptive seamless Phase II/III designs and are considered in this paper. We compared four methods for adaptive treatment selection, namely the classical Dunnett test, an adaptive version of the Dunnett test based on the conditional error approach, the combination test approach, and an approach within the classical group-sequential framework. The latter two approaches have only recently been published. In a simulation study we found that no one method dominates the others in terms of power, apart from the adaptive Dunnett test, which dominates the classical Dunnett test by construction. Furthermore, scenarios under which one approach outperforms the others are described.
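For context, the classical (single-stage) Dunnett test used as the baseline in the comparison above is available in recent SciPy releases as scipy.stats.dunnett (SciPy 1.11+, to the best of my knowledge); the adaptive variants discussed in the abstract are not reproduced in this hedged sketch, and the data below are simulated purely for illustration.

```python
import numpy as np
from scipy.stats import dunnett

rng = np.random.default_rng(42)
control = rng.normal(0.0, 1.0, size=60)       # illustrative simulated data
arm_a = rng.normal(0.3, 1.0, size=60)
arm_b = rng.normal(0.5, 1.0, size=60)

# Classical Dunnett test: each treatment arm is compared against the common
# control, adjusting for the multiplicity of the many-to-one comparisons.
res = dunnett(arm_a, arm_b, control=control, alternative="greater")
print(res.statistic, res.pvalue)
```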

