Similar Articles
20 similar articles found (search time: 15 ms)
1.
Jiang Q  Snapinn S  Iglewicz B 《Biometrics》2004,60(3):800-806
Sample size calculations for survival trials typically include an adjustment to account for the expected rate of noncompliance, or discontinuation from study medication. Existing sample size methods assume that when patients discontinue, they do so independently of their risk of an endpoint; that is, that noncompliance is noninformative. However, this assumption is not always true, as we illustrate using results from a published clinical trial database. In this article, we introduce a modified version of the method proposed by Lakatos (1988, Biometrics 44, 229-241) that can be used to calculate sample size under informative noncompliance. This method is based on the concept of two subpopulations: one with high rates of endpoint and discontinuation and another with low rates. Using this new method, we show that failure to consider the impact of informative noncompliance can lead to a considerably underpowered study.

2.
In a randomized clinical trial (RCT), noncompliance with an assigned treatment can occur due to serious side effects, while missing outcomes on patients may happen due to patients' withdrawal or loss to follow-up. To avoid a possible loss of power to detect a given risk difference (RD) of interest between two treatments, it is essential to incorporate the information on noncompliance and missing outcomes into the sample size calculation. Under the compound exclusion restriction model proposed elsewhere, we first derive the maximum likelihood estimator (MLE) of the RD among compliers between two treatments for an RCT with noncompliance and missing outcomes, and its asymptotic variance in closed form. Based on the MLE with the tanh^{-1}(x) transformation, we develop an asymptotic test procedure for testing equality of two treatment effects among compliers. We further derive a sample size calculation formula accounting for both noncompliance and missing outcomes for a desired power 1 - beta at a nominal alpha level. To evaluate the performance of the test procedure and the accuracy of the sample size calculation formula, we employ Monte Carlo simulation to calculate the estimated Type I error and power of the proposed test procedure corresponding to the resulting sample size in a variety of situations. We find that both the test procedure and the sample size formula developed here perform well. Finally, we include a discussion on the effects of various parameters, including the proportion of compliers, the probability of non-missing outcomes, and the ratio of sample size allocation, on the minimum required sample size.
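The tanh^{-1} (Fisher-type) transformation used in the test above can be sketched in a few lines. This is an illustrative delta-method version, not the authors' exact MLE-based procedure; the function name and interface are hypothetical:

```python
import math
from statistics import NormalDist

def arctanh_rd_test(rd_hat, se_rd, alpha=0.05):
    """Two-sided z-test for H0: RD = 0 on the tanh^{-1} (Fisher) scale.

    By the delta method, Var(atanh(rd_hat)) ~= Var(rd_hat) / (1 - rd_hat^2)^2,
    so the z-statistic is atanh(rd_hat) / (se_rd / (1 - rd_hat^2)).
    """
    z = math.atanh(rd_hat) / (se_rd / (1.0 - rd_hat ** 2))
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p, p < alpha
```

The transformation maps the RD from (-1, 1) onto the whole real line, which typically improves the normal approximation for moderate sample sizes.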

3.
Venkatraman ES  Begg CB 《Biometrics》1999,55(4):1171-1176
A nonparametric test is derived for comparing treatments with respect to the final endpoint in clinical trials in which the final endpoint has been observed for a random subset of patients, but results are available for a surrogate endpoint for a larger sample of patients. The test is an adaptation of the Wilcoxon-Mann-Whitney two-sample test, with an adjustment that involves a comparison of the ranks of the surrogate endpoints between patients with and without final endpoints. The validity of the test depends on the assumption that the patients with final endpoints represent a random sample of the patients registered in the study. This assumption is viable in trials in which the final endpoint is evaluated at a "landmark" timepoint in the patients' natural history. A small sample simulation study demonstrates that the test has a size that is close to the nominal value for all configurations evaluated. When compared with the conventional test based only on the final endpoints, the new test delivers substantial increases in power only when the surrogate endpoint is highly correlated with the true endpoint. Our research indicates that, in the absence of modeling assumptions, auxiliary information derived from surrogate endpoints can provide significant additional information only under special circumstances.

4.
Clinical trials are often planned with high uncertainty about the variance of the primary outcome variable. A poor estimate of the variance, however, may lead to an over‐ or underpowered study. In the internal pilot study design, the sample variance is calculated at an interim step and the sample size can be adjusted if necessary. The available recalculation procedures use the data of those patients for sample size recalculation that have already completed the study. In this article, we consider a variance estimator that takes into account both the data at the endpoint and at an intermediate point of the treatment phase. We derive asymptotic properties of this estimator and the relating sample size recalculation procedure. In a simulation study, the performance of the proposed approach is evaluated and compared with the procedure that uses only long‐term data. Simulation results demonstrate that the sample size resulting from the proposed procedure shows in general a smaller variability. At the same time, the Type I error rate is not inflated and the achieved power is close to the desired value.
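The basic internal-pilot recalculation step can be illustrated with the standard two-sample approximation n = 2(z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2 and a rule that never drops below the originally planned size. This is a minimal sketch of the generic procedure only; it does not implement the paper's refinement of combining endpoint and intermediate-timepoint data:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sigma2, delta, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sample comparison of
    means: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2."""
    z = NormalDist().inv_cdf
    return ceil(2.0 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma2 / delta ** 2)

def recalculate(n_planned, sigma2_interim, delta, alpha=0.05, power=0.8):
    """Internal-pilot style rule: replace the planning variance by the
    interim estimate, but never go below the originally planned size."""
    return max(n_planned, n_per_group(sigma2_interim, delta, alpha, power))
```

For example, a trial planned with sigma^2 = 1 would grow if the interim variance estimate comes out at 1.5, and stay at its planned size if the interim estimate is smaller than anticipated.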

5.
We consider sample size calculations for testing differences in means between two samples and allowing for different variances in the two groups. Typically, the power functions depend on the sample size and a set of parameters assumed known, and the sample size needed to obtain a prespecified power is calculated. Here, we account for two sources of variability: we allow the sample size in the power function to be a stochastic variable, and we consider estimating the parameters from preliminary data. An example of the first source of variability is nonadherence (noncompliance). We assume that the proportion of subjects who will adhere to their treatment regimen is not known before the study, but that the proportion is a stochastic variable with a known distribution. Under this assumption, we develop simple closed form sample size calculations based on asymptotic normality. The second source of variability is in parameter estimates that are estimated from prior data. For example, we account for variability in estimating the variance of the normal response from existing data which are assumed to have the same variance as the study for which we are calculating the sample size. We show that we can account for the variability of the variance estimate by simply using a slightly larger nominal power in the usual sample size calculation, which we call the calibrated power. We show that the calculation of the calibrated power depends only on the sample size of the existing data, and we give a table of calibrated power by sample size. Further, we consider the calculation of the sample size in the rarer situation where we account for the variability in estimating the standardized effect size from some existing data. This latter situation, as well as several of the previous ones, is motivated by sample size calculations for a Phase II trial of a malaria vaccine candidate.
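A sketch of the closed-form starting point, the two-sample sample size with unequal variances, together with a crude adherence adjustment that simply dilutes the effect by the expected adherence proportion. The dilution rule is an illustrative assumption, not the paper's stochastic-sample-size or calibrated-power formula:

```python
from math import ceil
from statistics import NormalDist

def n_two_sample(sigma1_sq, sigma2_sq, delta, alpha=0.05, power=0.9):
    """Per-group n for a two-sample z-test with unequal variances:
    n = (z_{1-alpha/2} + z_{1-beta})^2 * (sigma1^2 + sigma2^2) / delta^2."""
    z = NormalDist().inv_cdf
    return ceil((z(1 - alpha / 2) + z(power)) ** 2
                * (sigma1_sq + sigma2_sq) / delta ** 2)

def n_with_nonadherence(sigma1_sq, sigma2_sq, delta, mean_adherence,
                        alpha=0.05, power=0.9):
    """Crude adjustment (an assumption for illustration): nonadherence
    dilutes the intention-to-treat effect to mean_adherence * delta."""
    return n_two_sample(sigma1_sq, sigma2_sq, mean_adherence * delta,
                        alpha, power)
```

Because the effect enters the formula squared, an expected adherence of 80% inflates the required sample size by a factor of roughly 1/0.8^2, i.e. about 56%.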

6.
Designs incorporating more than one endpoint have become popular in drug development. One such design allows short‐term information to be incorporated in an interim analysis when the long‐term primary endpoint has not yet been observed for some of the patients. We first consider a two‐stage design with binary endpoints allowing for futility stopping only, based on conditional power under both fixed and observed effects. Design characteristics of three estimators are compared: using the primary long‐term endpoint only, using the short‐term endpoint only, and combining data from both. For each approach, equivalent cut‐off values for fixed‐ and observed‐effect conditional power calculations can be derived that result in the same overall power. While the type I error rate cannot be inflated in trials that stop for futility (it usually decreases), there is a loss of power. In this study, we consider different scenarios, including different thresholds for conditional power, different amounts of information available at the interim, and different correlations and probabilities of success. We further extend the methods to adaptive designs with unblinded sample size reassessment based on conditional power, using the inverse normal method as the combination function. Two futility stopping rules are considered: one based on conditional power, and one based on P‐values from the Z‐statistics of the estimators. Average sample size, probability of stopping for futility, and overall power of the trial are compared, and the influence of the choice of weights is investigated.
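The fixed-effect versus observed-effect conditional power distinction can be sketched with the standard z-test formulation, where information is proportional to sample size. This is a generic textbook version of conditional power, not the paper's specific short-/long-term endpoint combination; the futility threshold is illustrative:

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z1, n1, n, alpha=0.025, theta=None):
    """Conditional power of a one-sided z-test given interim statistic z1
    on n1 of n planned observations.

    theta is the standardized drift per sqrt(observation); if None, the
    'observed effect' version plugs in theta_hat = z1 / sqrt(n1)."""
    nd = NormalDist()
    if theta is None:
        theta = z1 / sqrt(n1)  # observed-effect conditional power
    z_alpha = nd.inv_cdf(1 - alpha)
    # Remaining-data statistic must exceed b for the final test to reject.
    b = (z_alpha * sqrt(n) - z1 * sqrt(n1)) / sqrt(n - n1)
    return 1 - nd.cdf(b - theta * sqrt(n - n1))

def stop_for_futility(z1, n1, n, threshold=0.2, **kw):
    """Futility rule: stop when conditional power falls below threshold."""
    return conditional_power(z1, n1, n, **kw) < threshold
```

With a null interim result at half the information the trial is stopped, while a strong interim signal yields high conditional power and continues.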

7.
The internal pilot study design enables nuisance parameters required for sample size calculation to be estimated from data accumulated in an ongoing trial. In this way, misspecifications made when determining the sample size in the planning phase can be corrected using updated knowledge. According to regulatory guidelines, the blinding of all personnel involved in the trial has to be preserved and the specified type I error rate has to be controlled when the internal pilot study design is applied. Especially in the late phase of drug development, most clinical studies are run in more than one centre. In these multicentre trials, one may have to deal with an unequal distribution of patient numbers among the centres. Depending on the type of analysis (weighted or unweighted), unequal centre sample sizes may lead to a substantial loss of power. Like the variance, the magnitude of this imbalance is difficult to predict in the planning phase. We propose a blinded sample size recalculation procedure for the internal pilot study design in multicentre trials with a normally distributed outcome and two balanced treatment groups that are analysed with either the weighted or the unweighted approach. The method addresses both uncertainty with respect to the variance of the endpoint and the extent of disparity among the centre sample sizes. The actual type I error rate as well as the expected power and sample size of the procedure are investigated in simulation studies. For both the weighted and the unweighted analysis, the maximal type I error rate was not exceeded, or only minimally so. Furthermore, the proposed procedure led to an expected power that achieves the specified value in many cases and is otherwise very close to it.

8.
Ibrahim JG  Chen MH  Xia HA  Liu T 《Biometrics》2012,68(2):578-586
Recent guidance from the Food and Drug Administration for the evaluation of new therapies in the treatment of type 2 diabetes (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate historical survival metadata into the statistical design. Various properties of the proposed methodology are examined, and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing various quantities, such as the power and the type I error, in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program including a noninferiority clinical trial for CV risk assessment in T2DM studies.

9.
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoint of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim, given the observed interim data, we calculate the Bayesian predictive probability of success, should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
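For the binary-endpoint special case, the Dirichlet-multinomial model reduces to a beta-binomial, and the predictive probability of success has a simple closed form. The sketch below assumes integer Beta prior parameters (so the posterior tail probability can be computed exactly via a binomial identity) and an illustrative success rule P(p > p0 | data) > cut; it is not the OPP design's full multi-endpoint machinery:

```python
from math import comb, lgamma, exp

def beta_tail(a, b, p0):
    """P(X > p0) for X ~ Beta(a, b) with integer a, b, using the identity
    P(Beta(a, b) > p0) = P(Binomial(a + b - 1, p0) <= a - 1)."""
    n = a + b - 1
    return sum(comb(n, k) * p0 ** k * (1 - p0) ** (n - k) for k in range(a))

def log_beta_fn(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(k, m, a, b):
    """Beta-binomial predictive pmf: k future responses out of m."""
    return comb(m, k) * exp(log_beta_fn(a + k, b + m - k) - log_beta_fn(a, b))

def predictive_probability(x, n, n_max, p0, a=1, b=1, cut=0.95):
    """With x responses in n patients so far, the predictive probability
    that at n_max patients the posterior satisfies P(p > p0) > cut."""
    m = n_max - n
    pp = 0.0
    for k in range(m + 1):  # k = number of future responses
        if beta_tail(a + x + k, b + n_max - x - k, p0) > cut:
            pp += beta_binom_pmf(k, m, a + x, b + n - x)
    return pp
```

Because the decision depends only on (x, n), the go/no-go boundaries can indeed be enumerated in advance, as the abstract notes.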

10.
We develop sample size formulas for studies aiming to test mean differences between a treatment and control group when all-or-none nonadherence (noncompliance) and selection bias are expected. Recent work by Fay, Halloran, and Follmann (2007, Biometrics 63, 465-474) addressed the increased variances within groups defined by treatment assignment when nonadherence occurs, compared to the scenario of full adherence, under the assumption of no selection bias. In this article, we extend that approach to allow selection bias in the form of systematic differences in means and variances among latent adherence subgroups. We illustrate the approach by performing sample size calculations to plan clinical trials with and without pilot adherence data. Sample size formulas and tests for normally distributed outcomes that account for the uncertainty of estimates from external or internal pilot data are also developed in a Web Appendix.

11.
Sample sizes based on the log-rank statistic in complex clinical trials (total citations: 1; self-citations: 0; citations by others: 1)
E Lakatos 《Biometrics》1988,44(1):229-241
The log-rank test is frequently used to compare survival curves. While sample size estimation for comparison of binomial proportions has been adapted to typical clinical trial conditions such as noncompliance, lag time, and staggered entry, the estimation of sample size when the log-rank statistic is to be used has not been generalized to these types of clinical trial conditions. This paper presents a method of estimating sample sizes for the comparison of survival curves by the log-rank statistic in the presence of unrestricted rates of noncompliance, lag time, and so forth. The method applies to stratified trials in which the above conditions may vary across the different strata, and does not assume proportional hazards. Power and duration, as well as sample sizes, can be estimated. The method also produces estimates for binomial proportions and the Tarone-Ware class of statistics.
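For orientation, the simplest log-rank sample size calculation, Schoenfeld's events formula under proportional hazards, can be written in a few lines. Note this is explicitly NOT Lakatos's Markov approach, which additionally handles noncompliance, lag time, staggered entry and nonproportional hazards; it is the baseline that such methods generalize:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.9, p_alloc=0.5):
    """Required number of events for a two-arm log-rank test under
    proportional hazards (Schoenfeld's formula):
        d = (z_{1-alpha/2} + z_{1-beta})^2 / (p(1-p) * (log hr)^2)
    where p is the allocation proportion."""
    z = NormalDist().inv_cdf
    d = (z(1 - alpha / 2) + z(power)) ** 2 / (
        p_alloc * (1 - p_alloc) * log(hr) ** 2)
    return ceil(d)

def sample_size_from_events(d_events, prob_event):
    """Total patients so the expected number of events reaches d_events,
    given the overall probability that a patient has an event."""
    return ceil(d_events / prob_event)
```

For a hazard ratio of 0.7 at 90% power this gives a few hundred events; the patient count then follows from the expected event probability over the trial duration.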

12.
Many late-phase clinical trials recruit subjects at multiple study sites. This introduces a hierarchical structure into the data that can result in a loss of power compared to a more homogeneous single-center trial. Building on a recently proposed approach to sample size determination, we suggest a sample size recalculation procedure for multicenter trials with continuous endpoints. The procedure estimates nuisance parameters at interim from noncomparative data and recalculates the required sample size based on these estimates. In contrast to other sample size calculation methods for multicenter trials, our approach assumes a mixed effects model and does not rely on balanced data within centers. It is therefore advantageous, especially for sample size recalculation at interim. We illustrate the proposed methodology with a study evaluating a diabetes management system. Monte Carlo simulations are carried out to evaluate the operating characteristics of the sample size recalculation procedure using comparative as well as noncomparative data, assessing their dependence on parameters such as between-center heterogeneity, residual variance of observations, treatment effect size, and number of centers. We compare two different estimators for between-center heterogeneity, an unadjusted and a bias-adjusted estimator, both based on quadratic forms. The type I error probability as well as the statistical power are close to their nominal levels for all parameter combinations considered in our simulation study for the proposed unadjusted estimator, whereas the adjusted estimator exhibits some type I error rate inflation. Overall, the sample size recalculation procedure can be recommended to mitigate risks arising from misspecified nuisance parameters at the planning stage.

13.
Sample size calculations in the planning of clinical trials depend on good estimates of the model parameters involved. When the estimates of these parameters carry a high degree of uncertainty, it is advantageous to reestimate the sample size after an internal pilot study. For non-inferiority trials with a binary outcome, we compare the Type I error rate and power of fixed-size designs and designs with sample size reestimation. The latter design proves effective in correcting the sample size and power of the tests when nuisance parameters are misspecified in the former design.

14.
Taylor L  Zhou XH 《Biometrics》2009,65(1):88-95
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are often problems of noncompliance, which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE), which estimates the average causal treatment effect among the subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data, we investigate the finite sample properties of these estimators, as well as of competing procedures, in a simple setting. Finally, we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.

15.
We review a Bayesian predictive approach for interim data monitoring and propose its application to interim sample size reestimation for clinical trials. Based on interim data, this approach predicts how the sample size of a clinical trial needs to be adjusted so as to claim a success at the conclusion of the trial with an expected probability. The method is compared with predictive power and conditional power approaches using clinical trial data. Advantages of this approach over the others are discussed.

16.
In clinical trials with patients in a critical state, death may preclude measurement of a quantitative endpoint of interest, and even early measurements, for example for an intention‐to‐treat analysis, may not be available. For example, a non‐negligible proportion of patients with acute pulmonary embolism will die before 30-day measurements of the efficacy of thrombolysis can be obtained. As excluding such patients may introduce bias, alternative analyses, and corresponding means of sample size calculation, are needed. We specifically consider power analysis in a randomized clinical trial setting in which the goal is to demonstrate noninferiority of a new treatment compared to a reference treatment. A nonparametric approach may also be needed, owing to the distribution of the quantitative endpoint of interest. While some approaches have been developed in a composite endpoint setting, our focus is on a continuous endpoint affected by death‐related censoring, for which no approach for noninferiority is available. We propose a solution based on ranking the quantitative outcome and assigning worst-rank scores to the patients without a quantitative outcome because of death. Based on this, we derive power formulae for a noninferiority test in the presence of death‐censored observations, considering settings with and without ties. The approach is illustrated with an exemplary clinical trial in pulmonary embolism. The results show a substantial effect of death on power, depending also on differential effects in the two trial arms. Therefore, use of the proposed formulae is advisable whenever death is to be expected before measurement of a quantitative primary outcome of interest.
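The worst-rank idea can be sketched as follows: outcomes censored by death are replaced by a score worse than any observed value, and a rank test is then applied to the completed data. This is an illustrative simplification (all deaths share one worst band; the paper's formulae also distinguish ties and could rank deaths by survival time), and the Wilcoxon-Mann-Whitney implementation uses the plain normal approximation:

```python
from math import sqrt
from statistics import NormalDist

def worst_rank_transform(outcome, died):
    """Replace endpoints censored by death with a score below every
    observed value, so deaths carry the worst ranks. For a two-arm
    analysis the floor should be computed from both arms pooled."""
    observed = [y for y, d in zip(outcome, died) if not d]
    floor = (min(observed) if observed else 0.0) - 1.0
    return [floor if d else y for y, d in zip(outcome, died)]

def wmw_z(x, y):
    """Normal approximation to the Wilcoxon-Mann-Whitney statistic,
    using midranks for ties but no tie correction in the variance."""
    pooled = sorted(x + y)

    def rank(v):  # midrank of value v in the pooled sample
        lo = pooled.index(v) + 1
        hi = len(pooled) - pooled[::-1].index(v)
        return (lo + hi) / 2.0

    n1, n2 = len(x), len(y)
    u = sum(rank(v) for v in x) - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mu) / sd
```

A noninferiority version would shift the rejection boundary by the margin on the Mann-Whitney probability scale rather than testing against one half.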

17.
In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on unblinded interim effect size can often lead to sample size increase, and statistical adjustment is usually needed for the final analysis to ensure that type I error rate is appropriately controlled. In current literature, sample size reestimation and corresponding type I error control are discussed in the context of maintaining the original randomization ratio across treatment groups, which we refer to as “proportional increase.” In practice, not all studies are designed based on an optimal randomization ratio due to practical reasons. In such cases, when sample size is to be increased, it is more efficient to allocate the additional subjects such that the randomization ratio is brought closer to an optimal ratio. In this research, we propose an adaptive randomization ratio change when sample size increase is warranted. We refer to this strategy as “nonproportional increase,” as the number of subjects increased in each treatment group is no longer proportional to the original randomization ratio. The proposed method boosts power not only through the increase of the sample size, but also via efficient allocation of the additional subjects. The control of type I error rate is shown analytically. Simulations are performed to illustrate the theoretical results.

18.
Classical power analysis for sample size determination is typically performed in clinical trials. A “hybrid” classical Bayesian or a “fully Bayesian” approach can be alternatively used in order to add flexibility to the design assumptions needed at the planning stage of the study and to explicitly incorporate prior information in the procedure. In this paper, we exploit and compare these approaches to obtain the optimal sample size of a single-arm trial based on Poisson data. We adopt exact methods to establish the rejection of the null hypothesis within a frequentist or a Bayesian perspective and suggest the use of a conservative criterion for sample size determination that accounts for the not strictly monotonic behavior of the power function in the presence of discrete data. A Shiny web app in R has been developed to provide a user-friendly interface to easily compute the optimal sample size according to the proposed criteria and to assure the reproducibility of the results.
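The non-monotone ("sawtooth") power of exact tests on discrete data, and the conservative criterion that guards against it, can be illustrated for the frequentist one-sided Poisson test. This is a generic sketch under the assumption of a one-sided alternative lam1 > lam0, not the paper's hybrid or fully Bayesian criteria:

```python
from math import exp

def pois_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), accumulated from the pmf."""
    p = exp(-mu)  # pmf at 0
    cdf = 0.0
    for i in range(k):
        cdf += p
        p *= mu / (i + 1)
    return 1.0 - cdf

def exact_power(n, lam0, lam1, alpha=0.05):
    """Exact one-sided test on the total count S ~ Poisson(n * lam):
    reject H0: lam = lam0 when S >= c, with c the smallest cutoff
    keeping the exact size <= alpha; returns the power at lam1 > lam0."""
    c = 0
    while pois_sf(c, n * lam0) > alpha:
        c += 1
    return pois_sf(c, n * lam1)

def conservative_n(lam0, lam1, alpha=0.05, power=0.8, n_max=1000):
    """Conservative criterion: smallest n such that the exact power
    meets the target for EVERY n' >= n (up to n_max), which guards
    against the sawtooth, non-monotone power function."""
    ok = [exact_power(n, lam0, lam1, alpha) >= power
          for n in range(1, n_max + 1)]
    n_star = n_max
    for n in range(n_max, 0, -1):
        if ok[n - 1]:
            n_star = n
        else:
            break
    return n_star
```

Because the critical value c jumps in integer steps, adding one subject can occasionally lower the power; the conservative search above avoids reporting such an unstable n.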

19.
Chen H  Geng Z  Zhou XH 《Biometrics》2009,65(3):675-682
In this article, we first study parameter identifiability in randomized clinical trials with noncompliance and missing outcomes. We show that under certain conditions the parameters of interest are identifiable even under different types of completely nonignorable missing data, that is, when the missing mechanism depends on the outcome. We then derive their maximum likelihood and moment estimators and evaluate their finite-sample properties in simulation studies in terms of bias, efficiency, and robustness. Our sensitivity analysis shows that the assumed nonignorable missing-data model has an important impact on the estimated complier average causal effect (CACE) parameter. Our new method provides useful alternative nonignorable missing-data models, beyond the existing latent ignorable model, that guarantee parameter identifiability for estimating the CACE in a randomized clinical trial with noncompliance and missing data.
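As a point of reference for the CACE, the classical moment (instrumental-variable, Wald) estimator under one-sided noncompliance, complete outcomes and the exclusion restriction is a one-liner: the intention-to-treat effect divided by the compliance proportion. This is a simpler special case than the nonignorable missing-data models studied above, shown only for illustration:

```python
def cace_wald(y1, y0, treated_received):
    """Moment (Wald) estimator of the complier average causal effect
    under one-sided noncompliance and the exclusion restriction:
        CACE = ITT effect / P(complier).

    y1: outcomes in the arm assigned treatment
    y0: outcomes in the arm assigned control
    treated_received: 0/1 indicators (same order as y1) that the
    assigned treatment was actually taken."""
    itt = sum(y1) / len(y1) - sum(y0) / len(y0)
    pc = sum(treated_received) / len(treated_received)
    if pc == 0:
        raise ValueError("no compliers observed")
    return itt / pc
```

With missing outcomes the ITT means in this ratio are no longer directly estimable, which is exactly where the identifiability conditions and nonignorable missing-data models of the article come in.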

20.
Glaucoma, a disease of the optic nerve and retina, causes blindness in millions of people worldwide. Currently available therapies for this disease attempt only to reduce intraocular pressure, the major risk factor, without addressing the associated optic neuropathy and retinopathy. Development of a neuroprotective treatment for glaucoma is therefore a pressing unmet medical need. Unfortunately, many challenges hinder this effort, including an incomplete understanding of the mechanism of pathogenesis, which leaves therapeutic targets uncertain, compounded by preclinical models that have not yet been validated. Most importantly, with slow disease progression and less-than-ideal endpoint measurement methods, clinical trials are necessarily large, lengthy, expensive and, to many, prohibitive. No easy solution is available to overcome these challenges. Increased commitment to basic mechanistic research is an essential foundation for dealing with this problem. Innovations in clinical trials with novel surrogate endpoints, nontraditional study designs, and the use of surrogate diseases might shorten the study time, reduce the patient sample size, and consequently lower the budgetary hurdle for the development of new therapies.
