1.
Hilton JF. Biometrical Journal. Biometrische Zeitschrift 2006, 48(6): 934-947
To better understand the design of noninferiority trials for binary data, we identify analogies and contrasts between this and the more familiar superiority trial design. We restrict attention to the problem of detecting a difference between experimental and control response rates in the setting where there is no difference (π_E − π_C = 0) under the noninferiority alternative hypothesis and under the superiority null, and a matching difference between groups under the complementary hypotheses (|π_E − π_C| = δ). Our derivation of the constrained maximum likelihood estimates (MLEs) reveals that superiority and noninferiority trials have different nuisance parameters: the marginal response rate and the control-group response rate, respectively. Our empirical results show that when individuals are allocated to treatment groups in the ratio that minimizes the overall sample size, balanced allocation is optimal only for superiority trials when the error rates are equal; otherwise imbalanced allocation is optimal. Different allocation ratios between trial types lead to different variances, and thus to different sample sizes. Finally, since the value of the marginal response rate, a design parameter in noninferiority trials, typically cannot be obtained from preliminary or published studies, we suggest a means of identifying a value that can be used. We conclude that full documentation of the design of a trial requires specification not only of the design parameters but also of the allocation ratio and the nuisance parameter, the value of which is not obvious under unequal allocation.
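As a rough companion to the sample-size discussion above, the sketch below computes per-group sizes for a noninferiority comparison of two proportions under an allocation ratio r = n_E/n_C, using the plain normal approximation with equal true response rates. It is a generic textbook calculation, not the constrained-MLE-based design in the paper; the function name and default error rates are illustrative assumptions.

```python
import math
from scipy.stats import norm

def ni_sample_sizes(p, delta, alpha=0.025, beta=0.10, r=1.0):
    """Per-group sample sizes for a noninferiority trial on the risk
    difference, assuming pi_E = pi_C = p under the alternative, a
    noninferiority margin delta, and allocation ratio r = n_E / n_C.
    Normal-approximation formula only (illustrative, not the paper's
    constrained-MLE calculation)."""
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    n_c = math.ceil((z / delta) ** 2 * p * (1 - p) * (1 + 1 / r))
    n_e = math.ceil(r * n_c)
    return n_e, n_c

# Example: 80% control response rate, 10-point margin, 2:1 allocation.
print(ni_sample_sizes(p=0.80, delta=0.10, r=2.0))
```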
3.
A novel confidence interval estimator is proposed for the risk difference in noninferiority binomial trials. The proposed confidence interval, which is dependent on the prespecified noninferiority margin, is consistent with an exact unconditional test that preserves the type I error and has improved power, particularly for smaller sample sizes, compared to the confidence interval by Chan and Zhang. The improved performance of the proposed confidence interval is theoretically justified and demonstrated with simulations and examples. An R package is also distributed that implements the proposed methods along with other confidence interval estimators.
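For orientation only, the sketch below shows how a confidence interval for a risk difference is read against a noninferiority margin. It uses a plain Wald interval as a stand-in; it is not the margin-dependent exact unconditional interval proposed in the paper, nor the Chan–Zhang interval, and the function name is invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def wald_ci_and_ni_decision(x_t, n_t, x_c, n_c, margin, level=0.95):
    """Wald interval for p_T - p_C and the corresponding noninferiority
    call at the given margin (baseline method, for illustration only)."""
    p_t, p_c = x_t / n_t, x_c / n_c
    diff = p_t - p_c
    se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    z = norm.ppf(0.5 + level / 2)
    lower, upper = diff - z * se, diff + z * se
    # Noninferior if the lower limit stays above -margin.
    return (lower, upper), lower > -margin

print(wald_ci_and_ni_decision(x_t=82, n_t=100, x_c=85, n_c=100, margin=0.10))
```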
6.
The problem of simultaneous sequential tests for noninferiority and superiority of a treatment, as compared to an active control, is considered in terms of continuous hierarchical families of one-sided null hypotheses, in the framework of group sequential and adaptive two-stage designs. The crucial point is that the decision boundaries for the individual null hypotheses may vary over the parameter space. This allows one to construct designs where, e.g., a rigid stopping criterion is chosen, rejecting or accepting all individual null hypotheses simultaneously. Another possibility is to use monitoring-type stopping boundaries, which leave some flexibility to the experimenter: he can decide, at the interim analysis, whether he is satisfied with the noninferiority margin achieved at this stage, or wants to go for more at the second stage. In the case where he proceeds to the second stage, he may perform midtrial design modifications (e.g., reassess the sample size). The proposed approach allows one to "spend," e.g., less of alpha for an early proof of noninferiority than for an early proof of superiority, and is illustrated by typical examples.
7.
In the process of monitoring clinical trials, it seems appealing to use the interim findings to determine whether the sample size originally planned will provide adequate power when the alternative hypothesis is true, and to adjust the sample size if necessary. In the present paper, we propose a flexible sequential monitoring method following the work of Fisher (1998), in which the maximum sample size does not have to be specified in advance. The final test statistic is constructed based on a weighted average of the sequentially collected data, where the weight function at each stage is determined by the observed data prior to that stage. Such a weight function is used to maintain the integrity of the variance of the final test statistic so that the overall type I error rate is preserved. Moreover, the weight function plays an implicit role in termination of a trial when a treatment difference exists. Finally, the design allows the trial to be stopped early when the efficacy result is sufficiently negative. Simulation studies confirm the performance of the method.
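The key constraint described above, that the data-dependent stage weights must keep the variance of the final statistic equal to one, can be shown with a minimal sketch. The combination rule below is generic (the function name and fixed-weights interface are assumptions for illustration); it is not Fisher's (1998) variance-spending procedure itself, in which each weight is chosen from the data observed before that stage.

```python
import numpy as np
from scipy.stats import norm

def combine_stage_statistics(z_stats, weights, alpha=0.025):
    """Combine stage-wise z-statistics with weights whose squares sum
    to one, so the combined statistic is standard normal under H0 even
    if each weight was chosen from data observed before its stage.
    Illustrative sketch of the variance-preserving idea only."""
    w = np.asarray(weights, dtype=float)
    z = np.asarray(z_stats, dtype=float)
    if not np.isclose(np.sum(w ** 2), 1.0):
        raise ValueError("squared weights must sum to 1 to preserve the variance")
    z_final = float(np.sum(w * z))
    return z_final, z_final > norm.ppf(1 - alpha)

# Two-stage example: 60% of the variance 'spent' at stage 1.
print(combine_stage_statistics([1.1, 2.0], [np.sqrt(0.6), np.sqrt(0.4)]))
```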
8.
A comparison of methods for constructing confidence intervals after phase II/III clinical trials
Peter K. Kimani, Susan Todd, Nigel Stallard. Biometrical Journal. Biometrische Zeitschrift 2014, 56(1): 107-128
Recently, in order to accelerate drug development, trials that use adaptive seamless designs such as phase II/III clinical trials have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives and after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control and do not have an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent and so there is little literature on extensive comparisons of their characteristics. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.
9.
Switching between testing for superiority and testing for non-inferiority has been an important statistical issue in the design and analysis of active-controlled clinical trials. In practice, it is often handled with a two-stage testing procedure. It has been assumed that no type I error rate adjustment is required when either switching to test for non-inferiority once the data fail to support the superiority claim, or switching to test for superiority once the null hypothesis of non-inferiority is rejected with a pre-specified non-inferiority margin in a generalized historical control approach. However, when a cross-trial comparison approach is used for non-inferiority testing, controlling the type I error rate can become an issue with the conventional two-stage procedure. We propose to adopt the single-stage simultaneous testing concept of Ng (2003) to test both non-inferiority and superiority hypotheses simultaneously. The proposed procedure is based on Fieller's confidence interval procedure as proposed by Hauschke et al. (1999).
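As a loose illustration of reading both conclusions from a single confidence bound rather than from a two-stage switch, the sketch below checks noninferiority and superiority against one one-sided lower limit for the treatment difference. It is only a schematic of the simultaneous-testing idea; the paper's actual procedure is built on Fieller's interval for a cross-trial comparison, which is not reproduced here, and the function and argument names are invented.

```python
from scipy.stats import norm

def simultaneous_ni_superiority(estimate, std_error, margin, alpha=0.025):
    """One-sided lower confidence limit for the treatment difference,
    read against both the noninferiority margin and zero (schematic
    illustration only, not the paper's Fieller-based procedure)."""
    lower = estimate - norm.ppf(1 - alpha) * std_error
    return {"lower_limit": lower,
            "noninferior": lower > -margin,
            "superior": lower > 0.0}

print(simultaneous_ni_superiority(estimate=0.03, std_error=0.02, margin=0.10))
```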
10.
We consider the problem of drawing superiority inferences on individual endpoints following non-inferiority testing. Röhmel et al. (2006) pointed out this as an important problem which had not been addressed by the previous procedures that only tested for global superiority. Röhmel et al. objected to incorporating the non-inferiority tests in the assessment of the global superiority test by exploiting the relationship between the two, since the results of the latter test then depend on the non-inferiority margins specified for the former test. We argue that this is justified, besides the fact that it enhances the power of the global superiority test. We provide a closed testing formulation which generalizes the three-step procedure proposed by Röhmel et al. for two endpoints. For the global superiority test, Röhmel et al. suggest using the Läuter (1996) test which is modified to make it monotone. The resulting test not only is complicated to use, but the modification does not readily extend to more than two endpoints, and it is less powerful in general than several of its competitors. This is verified in a simulation study. Instead, we suggest applying the one-sided likelihood ratio test used by Perlman and Wu (2004) or the union-intersection t_max test used by Tamhane and Logan (2004).
12.
Bristol DR. Biometrical Journal. Biometrische Zeitschrift 2005, 47(1): 75-81; discussion 99-107
Noninferiority of a new treatment to a reference treatment with respect to efficacy is usually accompanied by superiority of the new treatment to the reference treatment with respect to other aspects not related to efficacy. When the superiority of the new treatment to the reference treatment concerns a specified safety variable, it may be necessary to perform both between-treatment comparisons. The efficacy and safety comparisons may be considered separately, or simultaneous comparisons may be performed. Here, techniques are discussed for the simultaneous consideration of both aspects.
13.
Hartung J. Biometrical Journal. Biometrische Zeitschrift 2006, 48(4): 521-536
Flexible designs are provided by adaptive planning of sample sizes as well as by the weighted inverse normal combining method and the generalized inverse chi-square combining method, in the context of conducting trials consecutively step by step. These general combining methods allow quite different weighting of sequential study parts, also in a completely adaptive way, based on full information from unblinded data in previously performed stages. In reviewing some basic developments of flexible designs, we consider a generalizing approach to group sequentially performed clinical trials of Pocock type, of O'Brien-Fleming type, and of self-designing type. A clinical trial may originally be planned either to show non-inferiority or to show superiority. The proposed flexible designs, however, allow the planning to be changed at each interim analysis from showing non-inferiority to showing superiority and vice versa. Several examples of clinical trials with normal and binary outcomes are worked out in detail. We demonstrate the practicable performance of the discussed approaches, confirmed in an extensive simulation study. Our flexible designs are a useful tool when a priori information about the parameters involved in the trial is not available or is subject to uncertainty.
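The weighted inverse normal combining method named above has a compact generic form, sketched below for one-sided stage-wise p-values. The weights may be chosen adaptively from earlier, unblinded stages as long as each weight is fixed before the stage it applies to; the function name and two-stage example are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def weighted_inverse_normal(p_values, weights, alpha=0.025):
    """Weighted inverse normal combination of one-sided stage-wise
    p-values: Z = sum(w_k * Phi^{-1}(1 - p_k)) / sqrt(sum(w_k^2)).
    Under H0 (with weights fixed before their stage), Z is standard
    normal, so H0 is rejected if Z exceeds the (1 - alpha) quantile."""
    w = np.asarray(weights, dtype=float)
    z = norm.ppf(1.0 - np.asarray(p_values, dtype=float))
    z_comb = float(np.sum(w * z) / np.sqrt(np.sum(w ** 2)))
    return z_comb, z_comb > norm.ppf(1 - alpha)

# Two stages with unequal weighting of the study parts.
print(weighted_inverse_normal(p_values=[0.08, 0.02], weights=[1.0, 2.0]))
```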
14.
Robert Schall. Biometrical Journal. Biometrische Zeitschrift 2012, 54(4): 537-551
Many confidence intervals calculated in practice are potentially not exact, either because the requirements for the interval estimator to be exact are known to be violated, or because the (exact) distribution of the data is unknown. If a confidence interval is approximate, the crucial question is how well its true coverage probability approximates its intended coverage probability. In this paper we propose to use the bootstrap to calculate an empirical estimate for the (true) coverage probability of a confidence interval. In the first instance, the empirical coverage can be used to assess whether a given type of confidence interval is adequate for the data at hand. More generally, when planning the statistical analysis of future trials based on existing data pools, the empirical coverage can be used to study the coverage properties of confidence intervals as a function of type of data, sample size, and analysis scale, and thus inform the statistical analysis plan for the future trial. In this sense, the paper proposes an alternative to the problematic pretest of the data for normality, followed by selection of the analysis method based on the results of the pretest. We apply the methodology to a data pool of bioequivalence studies, and in the selection of covariance patterns for repeated measures data.
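A minimal sketch of the empirical-coverage idea follows: treat an existing data pool as the population, resample from it, and count how often a given interval procedure covers the pool-level parameter. The helper below does this for a mean with a user-supplied interval function; all names are illustrative, and the paper's bioequivalence and repeated-measures applications are not reproduced.

```python
import numpy as np
from scipy import stats

def empirical_coverage(pool, ci_fn, n_boot=2000, seed=1):
    """Bootstrap estimate of the true coverage probability of ci_fn for
    the mean: resample the data pool with replacement and record how
    often the returned interval covers the pool mean."""
    rng = np.random.default_rng(seed)
    pool = np.asarray(pool, dtype=float)
    target = pool.mean()                  # 'true' value within the pool
    hits = 0
    for _ in range(n_boot):
        sample = rng.choice(pool, size=pool.size, replace=True)
        lo, hi = ci_fn(sample)
        hits += int(lo <= target <= hi)
    return hits / n_boot

def t_interval(sample, level=0.95):
    """Ordinary t interval for the mean, as one candidate procedure."""
    return stats.t.interval(level, df=sample.size - 1,
                            loc=sample.mean(), scale=stats.sem(sample))

skewed_pool = np.random.default_rng(7).lognormal(mean=0.0, sigma=1.0, size=200)
print(empirical_coverage(skewed_pool, t_interval))
```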
16.
Tsiatis, Rosner, and Mehta (1984, Biometrics 40, 797-803) proposed a procedure for constructing confidence intervals following group sequential tests of a normal mean. This method is first extended for group sequential tests for which the sample sizes between interim analyses are not identical or the times are not equally spaced. Then properties of this confidence interval estimation procedure are studied by simulation. The extension accommodates the flexible procedure by Lan and DeMets (1983, Biometrika 70, 659-663) for constructing discrete group sequential boundaries to form a structure for monitoring and estimation following a class of group sequential tests. Finally, it is demonstrated how to combine the procedures by Lan and DeMets and by Tsiatis, Rosner, and Mehta using a FORTRAN program.
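As a small aid to the description above, the sketch below evaluates the Lan–DeMets error-spending function of O'Brien–Fleming type at unequally spaced information fractions and returns the cumulative and incremental alpha spent at each look. This covers only the spending step (the group sequential boundaries themselves would still need to be solved for numerically), and it stands in for the cited FORTRAN program rather than reproducing it.

```python
import numpy as np
from scipy.stats import norm

def obf_alpha_spending(information_fractions, alpha=0.025):
    """Lan-DeMets spending function of O'Brien-Fleming type,
    alpha*(t) = 2 * (1 - Phi(z_{1-alpha/2} / sqrt(t))),
    evaluated at (possibly unequally spaced) information fractions.
    Returns cumulative spent alpha and per-look increments."""
    t = np.asarray(information_fractions, dtype=float)
    cumulative = 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))
    increments = np.diff(np.concatenate(([0.0], cumulative)))
    return cumulative, increments

# Three unequally spaced looks at 30%, 70%, and 100% information.
cum, inc = obf_alpha_spending([0.3, 0.7, 1.0])
print(np.round(cum, 5), np.round(inc, 5))
```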
18.
“Covariate adjustment” in the randomized trial context refers to an estimator of the average treatment effect that adjusts for chance imbalances between study arms in baseline variables (called “covariates”). The baseline variables could include, for example, age, sex, disease severity, and biomarkers. According to two surveys of clinical trial reports, there is confusion about the statistical properties of covariate adjustment. We focus on the analysis of covariance (ANCOVA) estimator, which involves fitting a linear model for the outcome given the treatment arm and baseline variables, and trials that use simple randomization with equal probability of assignment to treatment and control. We prove the following new (to the best of our knowledge) robustness property of ANCOVA to arbitrary model misspecification: Not only is the ANCOVA point estimate consistent (as proved by Yang and Tsiatis, 2001) but so is its standard error. This implies that confidence intervals and hypothesis tests conducted as if the linear model were correct are still asymptotically valid even when the linear model is arbitrarily misspecified, for example, when the baseline variables are nonlinearly related to the outcome or there is treatment effect heterogeneity. We also give a simple, robust formula for the variance reduction (equivalently, sample size reduction) from using ANCOVA. By reanalyzing completed randomized trials for mild cognitive impairment, schizophrenia, and depression, we demonstrate how ANCOVA can achieve variance reductions of 4% to 32%.
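A minimal sketch of the ANCOVA estimator described above follows, using a linear model of the outcome on the treatment indicator and baseline covariates. The column names and the use of a heteroskedasticity-robust (sandwich) standard error are illustrative choices, not taken from the paper, which in fact shows that even the usual model-based standard error remains asymptotically valid under misspecification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def ancova_treatment_effect(df):
    """ANCOVA estimate of the average treatment effect: regress the
    outcome on the treatment indicator (0/1) plus baseline covariates
    and read off the treatment coefficient. Column names are
    placeholders for this sketch."""
    fit = smf.ols("outcome ~ treat + age + severity", data=df).fit(cov_type="HC1")
    return fit.params["treat"], fit.bse["treat"]

# Usage (hypothetical trial data frame with those columns):
# estimate, std_err = ancova_treatment_effect(trial_df)
```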
19.
Flandre P. PLoS ONE 2011, 6(9): e22871
Background
In recent years the “noninferiority” trial has emerged as the new standard design for HIV drug development among antiretroviral patients, often with a primary endpoint based on the difference in success rates between the two treatment groups. Different statistical methods have been introduced to provide confidence intervals for that difference. The main objective is to investigate whether the choice of the statistical method changes the conclusion of the trials.
Methods
We presented 11 trials published in 2010 that used a difference in proportions as the primary endpoint. In these trials, 5 different statistical methods have been used to estimate such confidence intervals. The five methods are described and applied to data from the 11 trials. Noninferiority of the new treatment is not demonstrated if the confidence interval of the treatment difference includes the prespecified noninferiority margin.
Results
Results indicated that confidence intervals can be quite different according to the method used. In many situations, however, the conclusions of the trials are not altered because the point estimates of the treatment difference were too far from the prespecified noninferiority margins. Nevertheless, in a few trials the use of different statistical methods led to different conclusions. In particular, the use of “exact” methods can be very confusing.
Conclusion
Statistical methods used to estimate confidence intervals in noninferiority trials have a strong impact on the conclusions of such trials.
20.
Patcharee Maneerat, Sa-Aat Niwitpong, Suparat Niwitpong. Biometrical Journal. Biometrische Zeitschrift 2020, 62(7): 1769-1790
Unnatural rainfall fluctuation can result in severe natural phenomena such as drought and floods. This variability not only occurs in areas with unusual natural features such as land formations and drainage but can also be due to human intervention. Since rainfall data often contain zero values, evaluating rainfall change is an important undertaking, which can be done via confidence intervals for the difference between delta-lognormal variances using the highest posterior density method with reference (HPD-ref) and probability-matching (HPD-pm) priors. Simulation results indicate that HPD-pm performed better than the other methods in terms of coverage rates and relative average lengths for the difference between delta-lognormal variances, even when the difference in variances was large. To illustrate the efficacy of our proposed methods, we applied them to daily rainfall data sets for the lower and upper regions of northern Thailand.
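For readers unfamiliar with the HPD construction used above, the small helper below extracts a highest posterior density interval from posterior draws as the shortest interval containing the requested probability. It is a generic utility, written under the assumption that posterior samples for the variance difference are already available; the delta-lognormal posteriors themselves, and the reference and probability-matching priors, are not derived here.

```python
import numpy as np

def hpd_interval(posterior_draws, prob=0.95):
    """Highest posterior density interval from posterior draws: the
    shortest interval among sorted samples that contains `prob` of
    them (generic helper, not the paper's full procedure)."""
    x = np.sort(np.asarray(posterior_draws, dtype=float))
    n = x.size
    m = int(np.ceil(prob * n))
    widths = x[m - 1:] - x[: n - m + 1]
    start = int(np.argmin(widths))
    return x[start], x[start + m - 1]

# Example with draws from a skewed (gamma) posterior.
draws = np.random.default_rng(3).gamma(shape=2.0, scale=1.0, size=5000)
print(hpd_interval(draws))
```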