Similar Literature
A total of 18 similar documents were found (search time: 265 ms).
1.
    
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to statistically test for non-inferiority rather than for superiority. When the endpoint is binary, one usually compares two treatments using either an odds ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts. One first defines the non-inferiority margin using an odds ratio and one ultimately proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known (with good precision) and high (e.g., a success rate above 56%). The gain in power achieved may in turn lead to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
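A minimal numerical sketch of this idea, assuming simple Wald-type z-statistics (the paper's exact procedure may differ): the odds-ratio margin `psi0` is translated into a difference-of-proportions margin at an assumed control success rate, and non-inferiority is then tested on the difference scale. All inputs below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def or_margin_to_diff_margin(p_control, psi0):
    """Translate an odds-ratio non-inferiority margin psi0 (< 1) into a
    difference-of-proportions margin at the assumed control success rate."""
    odds_c = p_control / (1 - p_control)
    p_lower = psi0 * odds_c / (1 + psi0 * odds_c)   # treatment rate at the OR bound
    return p_control - p_lower                       # margin on the difference scale

def ni_test_difference(x_t, n_t, x_c, n_c, delta, alpha=0.025):
    """One-sided Wald z-test of H0: p_t - p_c <= -delta (non-inferiority)."""
    p_t, p_c = x_t / n_t, x_c / n_c
    se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    z = (p_t - p_c + delta) / se
    return z, z > norm.ppf(1 - alpha)

# Example: OR margin 0.5 at an assumed 70% control success rate
delta = or_margin_to_diff_margin(p_control=0.70, psi0=0.5)
print(delta)                                         # derived difference margin
print(ni_test_difference(135, 200, 142, 200, delta))
```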

2.
    
A generalization of the Behrens‐Fisher problem for two samples is examined in a nonparametric model. It is not assumed that the underlying distribution functions are continuous, so that data with arbitrary ties can be handled. A rank test is considered where the asymptotic variance is estimated consistently by using the ranks over all observations as well as the ranks within each sample. The consistency of the estimator is derived in the appendix. For small samples (n1, n2 ≥ 10), a simple approximation by a central t‐distribution is suggested where the degrees of freedom are taken from the Satterthwaite‐Smith‐Welch approximation in the parametric Behrens‐Fisher problem. It is demonstrated by means of a simulation study that the Wilcoxon‐Mann‐Whitney test may be conservative or liberal depending on the ratio of the sample sizes and the variances of the underlying distribution functions. For the suggested approximation, however, it turns out that the nominal level is maintained rather accurately. The suggested nonparametric procedure is applied to a data set from a clinical trial. Moreover, a confidence interval for the nonparametric treatment effect is given.
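The rank test described here (midranks over all observations, consistently estimated variance, t-approximation with Satterthwaite/Welch-type degrees of freedom) corresponds closely to the Brunner-Munzel test, which SciPy implements; a minimal usage sketch with made-up tied data:

```python
import numpy as np
from scipy.stats import brunnermunzel

# Hypothetical ordinal scores from two treatment groups, with ties
x = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 6])
y = np.array([2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 7, 7, 7])

# Rank-based test allowing ties and unequal variances; p-value from a
# t-distribution with Satterthwaite/Welch-type degrees of freedom
stat, pvalue = brunnermunzel(x, y, alternative='two-sided', distribution='t')
print(stat, pvalue)
```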

3.
4.
    
The three‐arm design with a test treatment, an active control and a placebo group is the gold standard design for non‐inferiority trials if it is ethically justifiable to expose patients to placebo. In this paper, we first use the closed testing principle to establish the hierarchical testing procedure for the multiple comparisons involved in the three‐arm design. For the effect preservation test we derive the explicit formula for the optimal allocation ratios. We propose a group sequential type design, which naturally accommodates the hierarchical testing procedure. Under this proposed design, Monte Carlo simulations are conducted to evaluate the performance of the sequential effect preservation test when the variance of the test statistic is estimated based on the restricted maximum likelihood estimators of the response rates under the null hypothesis. When there are uncertainties for the placebo response rate, the proposed design demonstrates better operating characteristics than the fixed sample design.
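For orientation, a minimal fixed-sample sketch of a retention-of-effect (effect preservation) Wald test in a three-arm design with binary outcomes. Here `theta` (the fraction of the control-vs-placebo effect to be preserved) and all counts are hypothetical, the variance is taken from unrestricted sample proportions rather than the restricted maximum likelihood estimates used in the paper, and the group sequential machinery is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def effect_preservation_z(x_t, n_t, x_c, n_c, x_p, n_p, theta=0.5, alpha=0.025):
    """One-sided Wald test of H0: p_t - p_p <= theta * (p_c - p_p)
    in a three-arm (test / active control / placebo) trial with binary outcomes.
    Variance from unrestricted sample proportions (a simplification; the paper
    uses restricted ML estimates under the null)."""
    p_t, p_c, p_p = x_t / n_t, x_c / n_c, x_p / n_p
    est = p_t - theta * p_c - (1 - theta) * p_p
    var = (p_t * (1 - p_t) / n_t
           + theta**2 * p_c * (1 - p_c) / n_c
           + (1 - theta)**2 * p_p * (1 - p_p) / n_p)
    z = est / np.sqrt(var)
    return z, z > norm.ppf(1 - alpha)

print(effect_preservation_z(130, 200, 140, 200, 40, 100, theta=0.5))
```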

5.
    

6.
In a clinical trial with an active treatment and a placebo, two (or even more) primary endpoints may be necessary to describe the active treatment's benefit. The focus of our interest is a more specific situation with two primary endpoints in which superiority in one of them would suffice given that non-inferiority is observed in the other. Several proposals exist in the literature for dealing with this or similar problems, but prove insufficient or inadequate on closer inspection (e.g. Bloch et al. (2001, 2006) or Tamhane and Logan (2002, 2004)). For example, we were unable to find a good reason why a bootstrap p-value for superiority should depend on the initially selected non-inferiority margins or on the initially selected type I error alpha. We propose a hierarchical three-step procedure, where non-inferiority in both variables must be proven in the first step, superiority has to be shown by a bivariate test (e.g. Holm (1979), O'Brien (1984), Hochberg (1988), a bootstrap (Wang (1998)), or Läuter (1996)) in the second step, and then superiority in at least one variable has to be verified in the third step by a corresponding univariate test. All statistical tests are performed at the same one-sided significance level alpha. Among the above-mentioned bivariate superiority tests, we prefer Läuter's SS test and the Holm procedure because these have been proven to control the type I error strictly, irrespective of the correlation structure among the primary variables and the sample size applied. A simulation study reveals that the power of the bivariate test depends to a considerable degree on the correlation and on the magnitude of the expected effects of the two primary endpoints. Therefore, the recommendation of which test to choose depends on knowledge of the possible correlation between the two primary endpoints. In general, Läuter's SS procedure in step 2 shows the best overall properties, whereas Holm's procedure shows an advantage if both a positive correlation between the two variables and a considerable difference between their standardized effect sizes can be expected.
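A schematic sketch of the three-step logic for two endpoints, using simple one-sided z-statistics and the Holm/Bonferroni option at step 2 (the Läuter SS and bootstrap alternatives are not reproduced); all statistics passed in are hypothetical.

```python
from scipy.stats import norm

def hierarchical_two_endpoint_test(z_ni, z_sup, alpha=0.025):
    """Schematic three-step procedure for two primary endpoints.
    z_ni:  one-sided z-statistics for non-inferiority (endpoint 1, endpoint 2)
    z_sup: one-sided z-statistics for superiority     (endpoint 1, endpoint 2)
    All steps use the same one-sided level alpha."""
    crit = norm.ppf(1 - alpha)
    p_sup = [1 - norm.cdf(z) for z in z_sup]

    # Step 1: non-inferiority must hold for BOTH endpoints
    if not all(z > crit for z in z_ni):
        return "non-inferiority not shown"

    # Step 2: global (bivariate) superiority test; here the Holm/Bonferroni
    # variant: reject the intersection hypothesis if min p <= alpha / 2
    if min(p_sup) > alpha / 2:
        return "non-inferior on both endpoints, global superiority not shown"

    # Step 3: univariate superiority claims, Holm step-down at level alpha
    order = sorted(range(2), key=lambda j: p_sup[j])
    superior = []
    if p_sup[order[0]] <= alpha / 2:
        superior.append(order[0] + 1)
        if p_sup[order[1]] <= alpha:
            superior.append(order[1] + 1)
    return f"non-inferior on both endpoints, superior on endpoint(s) {superior}"

print(hierarchical_two_endpoint_test(z_ni=(2.8, 2.3), z_sup=(2.5, 1.7)))
```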

7.
    
Sample size calculations in the planning of clinical trials depend on good estimates of the model parameters involved. When the estimates of these parameters carry a high degree of uncertainty, it is advantageous to reestimate the sample size after an internal pilot study. For non-inferiority trials with a binary outcome, we compare the Type I error rate and power of fixed-size designs and designs with sample size reestimation. The latter design proves effective in correcting the sample size and power of the tests when nuisance parameters are misspecified in the former design.
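A minimal sketch of the reestimation idea, assuming the usual asymptotic sample-size formula for a non-inferiority test on the difference of proportions; the assumed control rate plays the role of the nuisance parameter that the internal pilot revises. The margin and rates below are hypothetical.

```python
import math
from scipy.stats import norm

def ni_sample_size_per_group(p_c, delta, alpha=0.025, power=0.80, true_diff=0.0):
    """Approximate per-group sample size for a non-inferiority test of
    H0: p_t - p_c <= -delta, assuming a true difference `true_diff`
    (often 0) and a control success rate p_c (the nuisance parameter)."""
    p_t = p_c + true_diff
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    return math.ceil(z**2 * var / (true_diff + delta)**2)

# Planning stage: assumed control rate 0.80
n_planned = ni_sample_size_per_group(p_c=0.80, delta=0.10)

# Internal pilot: pooled observed rate suggests 0.70 instead -> reestimate
n_reestimated = ni_sample_size_per_group(p_c=0.70, delta=0.10)
print(n_planned, n_reestimated)
```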

8.
    
New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

9.
    
We consider statistical testing for non-inferiority of a new treatment compared with the standard one under a matched-pair setting in a stratified study or in several trials. A non-inferiority test based on the efficient scores and a Mantel-Haenszel (M-H) like procedure with restricted maximum likelihood estimators (RMLEs) of nuisance parameters, together with their corresponding sample size formulae, are presented. We evaluate the above tests and the M-H type Wald test in terms of level and power. The stratified score test is conservative and provides the best power. The M-H like procedure with RMLEs gives an accurate level. However, the Wald test is anti-conservative and we suggest caution when it is used. The unstratified score test is not biased, but it is less powerful than the stratified score test when the baseline probabilities related to strata are not the same. This investigation shows that the stratified score test possesses optimal statistical properties for testing non-inferiority. A common difference between two proportions across strata is the basic assumption of the stratified tests; we present appropriate tests to validate this assumption, along with related remarks.

10.
Neural factors appear to play a major role in the pathogenesis of vitiligo. To investigate the possible correlation between vitiligo and peripheral monoaminergic system activity, we used high‐pressure liquid chromatography and electrochemical detection methods to evaluate the basal urine excretion values of catecholamines [norepinephrine (NE), epinephrine and dopamine (DA)], their relative metabolites [3‐methoxy‐4‐hydroxyphenylglycol (MHPG), normetanephrine (NMN), metanephrine (MN), vanilmandelic acid (VMA) and homovanillic acid], as well as 5‐hydroxyindoleacetic acid (5‐HIAA), in 35 healthy subjects and in 70 patients suffering from non‐segmental vitiligo at different stages of the disease. Levels of NE, DA, NMN, MN, MHPG, VMA and 5‐HIAA were found to be significantly higher in patients than in controls. The patients with progressive vitiligo (n = 56) presented higher urinary excretion values for all parameters (in particular, NE) than the other patients. Interestingly, in patients with more recent vitiligo onset (<1 yr), NE values differed from those of subjects affected for 1 to 5 yr and for 6 to 10 yr. This result was confirmed by the significant negative relationship detected between NE excretion values and disease duration. In both the vitiligo and control groups, significant correlations were found between monoamines as well as between these monoamines and their metabolites. The increase in catecholamine turnover, mainly occurring at the onset of the disease, is probably due to the stress associated with the appearance of lesions. Moreover, considering that these compounds readily produce toxic free radicals and that vitiliginous subjects have a defective free radical defence mechanism, they may also contribute to the disappearance of melanocytes in the early phases of vitiligo.

11.
    
The application of stabilized multivariate tests is demonstrated in the analysis of a two‐stage adaptive clinical trial with three treatment arms. Due to the clinical problem, the multiple comparisons include tests of superiority as well as a test for non‐inferiority, where non‐inferiority is (in the absence of absolute tolerance limits) expressed as a linear contrast of the three treatments. Special emphasis is placed on the combination of the three sources of multiplicity – multiple endpoints, multiple treatments, and the two stages of the adaptive design. In particular, the adaptation after the first stage comprises a change of the a priori order of hypotheses.

12.
    
There has been growing interest, when comparing an experimental treatment with an active control with respect to a binary outcome, in allowing the non-inferiority margin to depend on the unknown success rate in the control group. It does not seem universally recognized, however, that the statistical test should appropriately adjust for the uncertainty surrounding the non-inferiority margin. In this paper, we inspect a naive procedure that treats an "observed margin" as if it were fixed a priori, and explain why it might not be valid. We then derive a class of tests based on the delta method, including the Wald test and the score test, for a smooth margin. An alternative derivation is given for the asymptotic distribution of the likelihood ratio statistic, again for a smooth margin. We discuss the asymptotic behavior of these tests when applied to a piecewise smooth margin. A simple condition on the margin function is given which allows the likelihood ratio test to carry over to a piecewise smooth margin using the same critical value as for a smooth margin. Simulation experiments are conducted, under a smooth margin and a piecewise linear margin, to evaluate the finite-sample performance of the asymptotic tests studied.
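A minimal sketch of a delta-method Wald test for a smooth margin function delta(p_c), in the spirit described here (assuming higher success rates are better); the margin function and data below are hypothetical, and the score and likelihood ratio versions are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def wald_ni_smooth_margin(x_t, n_t, x_c, n_c, margin, margin_deriv, alpha=0.025):
    """Wald-type test of H0: p_t <= p_c - delta(p_c) for a smooth margin
    function delta(.), with the uncertainty in the 'observed margin'
    delta(p_c_hat) accounted for via the delta method."""
    p_t, p_c = x_t / n_t, x_c / n_c
    est = p_t - p_c + margin(p_c)
    # d/dp_c of [-p_c + delta(p_c)] is (margin_deriv(p_c) - 1)
    var = (p_t * (1 - p_t) / n_t
           + (1 - margin_deriv(p_c))**2 * p_c * (1 - p_c) / n_c)
    z = est / np.sqrt(var)
    return z, z > norm.ppf(1 - alpha)

# Hypothetical smooth margin: 20% of the control success rate
margin = lambda p: 0.2 * p
margin_deriv = lambda p: 0.2
print(wald_ni_smooth_margin(150, 220, 165, 210, margin, margin_deriv))
```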

13.
    
A class of nonignorable models is presented for handling nonmonotone missingness in categorical longitudinal responses. This class of models includes the traditional selection models and shared parameter models. This allows us to perform a broader than usual sensitivity analysis. In particular, instead of considering variations to a chosen nonignorable model, we study sensitivity between different missing data frameworks. An appealing feature of the developed class is that parameters with a marginal interpretation are obtained, while algebraically simple models are considered. Specifically, marginalized mixed‐effects models (Heagerty, 1999, Biometrics 55, 688–698) are used for the longitudinal process, modeling the marginal mean and the correlation structure separately. For the correlation structure, random effects are introduced and their distribution is modeled either parametrically or non‐parametrically to avoid potential misspecifications.

14.
Cited by: 1 (self-citations: 0; citations by others: 1)
Holm's (1979) step-down multiple-testing procedure (MTP) is appealing for its flexibility, transparency, and general validity, but the derivation of corresponding simultaneous confidence regions has remained an unsolved problem. This article provides such confidence regions. In fact, simultaneous confidence regions are provided for any MTP in the class of short-cut consonant closed-testing procedures based on marginal p-values and weighted Bonferroni tests for intersection hypotheses considered by Hommel, Bretz and Maurer (2007). In addition to Holm's MTP, this class includes the fixed-sequence MTP, recently proposed gatekeeping MTPs, and the fallback MTP. The simultaneous confidence regions are generally valid if the underlying marginal p-values and corresponding marginal confidence regions (assumed to be available) are valid. The marginal confidence regions and estimated quantities are not assumed to be of any particular kinds/dimensions. Compared to the rejections made by the MTP for the family of null hypotheses H under consideration, the proposed confidence regions provide extra free information. In particular, with Holm's MTP, such extra information is provided: for all non-rejected H's, in case not all H's are rejected; or for certain (possibly all) H's, in case all H's are rejected. In case not all H's are rejected, no extra information is provided for rejected H's. This drawback, however, seems difficult to overcome. Illustrations concerning clinical studies are given.
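The simultaneous confidence regions themselves are not reproduced here; for orientation, a minimal implementation of the underlying Holm (1979) step-down procedure on marginal p-values (the p-values in the example are hypothetical):

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Holm (1979) step-down procedure: returns the set of rejected
    hypotheses (by index) given marginal p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    rejected = set()
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):
            rejected.add(i)
        else:
            break            # stop at the first non-rejection
    return rejected

print(holm_stepdown([0.001, 0.04, 0.012, 0.30]))   # hypothetical p-values
```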

15.
    
For a non‐inferiority trial without a placebo arm, the direct comparison between the test treatment and the selected positive control is in principle the only basis for statistical inference. Therefore, evaluating the test treatment relative to the non‐existent placebo presents extreme challenges and requires some kind of bridging from the past to the present with no current placebo data. For such inference based partly on an indirect bridging manipulation, the fixed margin method and the synthesis method are the two most widely discussed methods in the recent literature. There are major differences in statistical inference paradigm between the two methods. The fixed margin method employs the historical data that assess the performance of the active control versus a placebo to guide the selection of the non‐inferiority margin. Such guidance is not part of the ultimate statistical inference in the non‐inferiority trial. In contrast, the synthesis method connects the historical data to the non‐inferiority trial data to make broader inferences relating the test treatment to the non‐existent current placebo. On the other hand, the type I error rate associated with the direct comparison between the test treatment and the active control cannot shed any light on the appropriateness of the indirect inference comparing the test treatment against the non‐existent placebo. This work explores an approach for assessing the impact of potential bias due to violation of a key statistical assumption, to guide determination of the non‐inferiority margin.
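A schematic sketch of the two approaches on a generic effect scale where larger values favor the test treatment; the discounting fraction `lam`, the current-trial estimate `d_tc`, and the historical control-vs-placebo estimate `d_cp_hist` are hypothetical inputs, and the bias-assessment approach proposed in the paper is not reproduced.

```python
import math
from scipy.stats import norm

def fixed_margin_test(d_tc, se_tc, d_cp_hist, se_cp_hist, lam=0.5, alpha=0.025):
    """'Fixed margin' (95-95) method: M1 is a conservative (lower 95%) bound of
    the historical control-vs-placebo effect; M2 = (1 - lam) * M1 is the
    non-inferiority margin actually tested in the current T-vs-C trial."""
    m1 = d_cp_hist - norm.ppf(0.975) * se_cp_hist
    m2 = (1 - lam) * m1
    z = (d_tc + m2) / se_tc
    return z, z > norm.ppf(1 - alpha)

def synthesis_test(d_tc, se_tc, d_cp_hist, se_cp_hist, lam=0.5, alpha=0.025):
    """Synthesis method: combine the current T-vs-C estimate with the historical
    C-vs-P estimate into one statistic for the retention hypothesis
    H0: (mu_T - mu_P) <= lam * (mu_C - mu_P)."""
    est = d_tc + (1 - lam) * d_cp_hist
    se = math.sqrt(se_tc**2 + (1 - lam)**2 * se_cp_hist**2)
    z = est / se
    return z, z > norm.ppf(1 - alpha)

# Hypothetical current-trial and historical estimates with standard errors
print(fixed_margin_test(d_tc=-0.05, se_tc=0.10, d_cp_hist=0.40, se_cp_hist=0.08))
print(synthesis_test(d_tc=-0.05, se_tc=0.10, d_cp_hist=0.40, se_cp_hist=0.08))
```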

16.
    
We consider the problem of drawing superiority inferences on individual endpoints following non-inferiority testing. Röhmel et al. (2006) pointed out that this is an important problem which had not been addressed by previous procedures that only tested for global superiority. Röhmel et al. objected to incorporating the non-inferiority tests in the assessment of the global superiority test by exploiting the relationship between the two, since the results of the latter test then depend on the non-inferiority margins specified for the former test. We argue that this is justified, besides the fact that it enhances the power of the global superiority test. We provide a closed testing formulation which generalizes the three-step procedure proposed by Röhmel et al. for two endpoints. For the global superiority test, Röhmel et al. suggest using the Läuter (1996) test, which is modified to make it monotone. The resulting test is not only complicated to use, but the modification does not readily extend to more than two endpoints, and it is less powerful in general than several of its competitors. This is verified in a simulation study. Instead, we suggest applying the one-sided likelihood ratio test used by Perlman and Wu (2004) or the union-intersection t_max test used by Tamhane and Logan (2004).
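For the union-intersection t_max test mentioned here, a minimal sketch using a conservative Bonferroni calibration of the critical value (an exact calibration would account for the correlation between endpoints); the t-statistics and degrees of freedom are hypothetical.

```python
from scipy.stats import t as t_dist

def ui_tmax_test(tstats, df, alpha=0.025):
    """Union-intersection t_max test of the global superiority hypothesis:
    reject if the largest one-sided t-statistic exceeds a Bonferroni-calibrated
    critical value (conservative; ignores correlation between endpoints)."""
    k = len(tstats)
    crit = t_dist.ppf(1 - alpha / k, df)
    return max(tstats) > crit

print(ui_tmax_test([2.4, 1.1], df=98))   # hypothetical t-statistics, n1+n2-2 df
```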

17.
    
Methods to examine whether genetic and/or environmental sources can account for the residual variation in ordinal family data usually assume proportional odds. However, standard software to fit the non‐proportional odds model to ordinal family data is limited because the correlation structure of family data is more complex than for other types of clustered data. To perform these analyses we propose the non‐proportional odds multivariate logistic regression model and take a simulation‐based approach to model fitting using Markov chain Monte Carlo methods, such as partially collapsed Gibbs sampling and the Metropolis algorithm. We applied the proposed methodology to male pattern baldness data from the Victorian Family Heart Study.

18.
    
Let d = p2 − p1 be the difference between two binomial proportions obtained from two independent trials. For the parameter d, three pairs of hypotheses may be of interest: H1: d ≤ δ vs. K1: d > δ; H2: d ∉ (δ1, δ2) vs. K2: d ∈ (δ1, δ2); and H3: d ∈ [δ1, δ2] vs. K3: d ∉ [δ1, δ2], where Hi is the null hypothesis and Ki is the alternative hypothesis. These tests are useful in clinical trials, pharmacological and vaccine studies, and in statistics generally. The three problems may be investigated by exact unconditional tests when the sample sizes are moderate. Otherwise, one should use approximate (asymptotic) tests, generally based on Z-statistics like those suggested in the paper. The article defines a new procedure for testing H2 or H3, demonstrates that it is more powerful than tests based on confidence intervals (the classic TOST, i.e., two one-sided tests), defines two corrections for continuity which reduce the liberality of the three tests, and selects the one that behaves better. The programs for executing the unconditional exact and asymptotic tests described in the paper can be loaded at http://www.ugr.es/~bioest/software.htm. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
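A minimal sketch of the classic TOST comparison mentioned here for the equivalence problem H2 vs. K2, using simple asymptotic Wald z-statistics; the paper's exact unconditional tests and continuity corrections are not reproduced, and all counts and margins are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def tost_two_proportions(x1, n1, x2, n2, delta1, delta2, alpha=0.05):
    """Classic TOST (two one-sided tests) for the equivalence problem
    H2: d <= delta1 or d >= delta2  vs  K2: delta1 < d < delta2,
    where d = p2 - p1, using Wald z-statistics (an asymptotic approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    d = p2 - p1
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z_lower = (d - delta1) / se      # tests H: d <= delta1
    z_upper = (d - delta2) / se      # tests H: d >= delta2
    crit = norm.ppf(1 - alpha)
    return (z_lower > crit) and (z_upper < -crit)

# Hypothetical example with equivalence margins (-0.10, 0.10)
print(tost_two_proportions(106, 200, 108, 200, delta1=-0.10, delta2=0.10))
```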
