Similar Documents
20 similar documents found.
1.
To optimize resources, randomized clinical trials with multiple arms can be an attractive option for simultaneously testing various treatment regimens in pharmaceutical drug development. The motivation for this work was the successful conduct and positive final outcome of a three-arm randomized clinical trial primarily assessing whether obinutuzumab plus chlorambucil is superior to chlorambucil alone, based on a time-to-event endpoint, in patients with chronic lymphocytic leukemia and coexisting conditions. The inference strategy of this trial was based on a closed testing procedure. We compare this strategy to three potential alternatives for running a three-arm clinical trial with a time-to-event endpoint. The primary goal is to quantify the differences between these strategies in terms of the time until the first analysis (and thus potential approval of a new drug), the number of required events, and power. Operational aspects of implementing the various strategies are discussed. In conclusion, using a closed testing procedure results in the shortest time to the first analysis with a minimal loss in power. Therefore, closed testing procedures should be part of the statistician's standard clinical trials toolbox when planning multiarm clinical trials.
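As a hedged illustration of the closure principle mentioned above (not the trial's actual test strategy), the following sketch implements a closed test for two treatment-versus-control comparisons, using a Bonferroni test for the intersection hypothesis; all p-values are hypothetical placeholders.

```python
# A minimal sketch of a closed testing procedure for two treatment-vs-control
# comparisons (e.g., combination arm and mono arm against a control), using a
# Bonferroni test for the intersection hypothesis. P-values are hypothetical.

def closed_test_two_hypotheses(p1, p2, alpha=0.025):
    """Return which elementary hypotheses H1, H2 are rejected at level alpha."""
    # Step 1: test the intersection hypothesis H12 with Bonferroni.
    reject_intersection = min(p1, p2) <= alpha / 2
    # Step 2: by the closure principle, an elementary hypothesis is rejected
    # only if the intersection and the elementary test both reject.
    reject_h1 = reject_intersection and p1 <= alpha
    reject_h2 = reject_intersection and p2 <= alpha
    return {"H1": reject_h1, "H2": reject_h2}

print(closed_test_two_hypotheses(p1=0.004, p2=0.030))  # {'H1': True, 'H2': False}
```

With the Bonferroni intersection test, this closure reduces to the familiar Holm procedure, which is why the shortcut needs only the two elementary p-values.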

2.
Designs incorporating more than one endpoint have become popular in drug development. One such design allows short-term information to be incorporated in an interim analysis when the long-term primary endpoint has not yet been observed for some of the patients. We first consider a two-stage design with binary endpoints that allows stopping for futility only, based on conditional power under both fixed and observed effects. We compare the design characteristics of three estimators: one using the long-term primary endpoint only, one using the short-term endpoint only, and one combining data from both. For each approach, equivalent cut-off values for the fixed- and observed-effect conditional power calculations can be derived that result in the same overall power. While the type I error rate cannot be inflated in trials that stop for futility (it usually decreases), there is a loss of power. We consider different scenarios, including different thresholds for conditional power, different amounts of information available at the interim, and different correlations and probabilities of success. We further extend the methods to adaptive designs with unblinded sample size reassessment based on conditional power, using the inverse normal method as the combination function. Two futility stopping rules are considered: one based on conditional power, and one based on P-values from the Z-statistics of the estimators. We compare the average sample size, the probability of stopping for futility, and the overall power of the trial, and investigate the influence of the choice of weights.
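For readers unfamiliar with the two flavors of conditional power mentioned above, the following minimal sketch computes both under the standard Brownian-motion approximation; the interim z-value, information fraction, and planning effect are hypothetical.

```python
# A minimal sketch of conditional power at an interim analysis under the
# Brownian-motion approximation. "Fixed effect" plugs in the drift assumed at
# planning; "observed effect" estimates the drift from the interim z-statistic.
from scipy.stats import norm

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """P(final Z >= z_{1-alpha} | interim data); drift = E[final Z] under the assumed effect."""
    z_alpha = norm.ppf(1 - alpha)
    t = info_frac
    b_t = z_interim * t ** 0.5            # Brownian-motion value B(t) = Z(t) * sqrt(t)
    mean_increment = drift * (1 - t)      # E[B(1) - B(t)] under the assumed drift
    return 1 - norm.cdf((z_alpha - b_t - mean_increment) / (1 - t) ** 0.5)

z1, t = 1.2, 0.5                                        # hypothetical interim results
planned_drift = norm.ppf(0.975) + norm.ppf(0.8)         # drift assumed at planning (80% power)
cp_fixed = conditional_power(z1, t, drift=planned_drift)
cp_observed = conditional_power(z1, t, drift=z1 / t ** 0.5)  # drift estimated as B(t)/t
print(f"CP (fixed effect): {cp_fixed:.3f}, CP (observed effect): {cp_observed:.3f}")
```

A futility rule of the kind discussed in the abstract would stop the trial when the chosen conditional power falls below a prespecified threshold, e.g., 0.2.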

3.
Clinical trials with Poisson-distributed count data as the primary outcome are common in various medical areas, such as relapse counts in multiple sclerosis trials or the number of attacks in trials for the treatment of migraine. In this article, we present approximate sample size formulae for testing noninferiority with asymptotic tests based on restricted or unrestricted maximum likelihood estimators of the Poisson rates. The Poisson outcomes may be observed under unequal follow-up schemes, and we consider noninferiority margins expressed both as a difference and as a ratio of rates. We evaluate the exact type I error rates and powers of these tests and examine the accuracy of the approximate sample size formulae. The test statistic using the restricted maximum likelihood estimators (for the difference problem) and the test statistic based on the logarithmic transformation with the unrestricted maximum likelihood estimators (for the ratio problem) show favorable type I error control and can be recommended for practical application. The approximate sample size formulae are highly accurate even for small sample sizes and provide power values identical or close to the target values. The methods are illustrated with a clinical trial example from anesthesia.
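As a rough, hedged sketch of what such an approximate formula looks like, the following computes a per-group sample size for the rate-difference test using the unrestricted (Wald-type) variance; the abstract's recommended restricted-MLE version differs in the variance term, and all rates, margins, and follow-up times below are hypothetical.

```python
# A minimal sketch of an approximate per-group sample size for Poisson
# noninferiority on the rate-difference scale, using the unrestricted
# (Wald-type) variance. Events are assumed harmful, so H0: lam_e - lam_c >= margin.
from math import ceil
from scipy.stats import norm

def n_per_group_poisson_ni(lam_e, lam_c, margin, t_e=1.0, t_c=1.0,
                           alpha=0.025, power=0.8):
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    variance = lam_e / t_e + lam_c / t_c   # n * Var(rate-difference estimate)
    return ceil(z ** 2 * variance / (margin - (lam_e - lam_c)) ** 2)

# Hypothetical: annual relapse rate 0.7 in both arms, NI margin 0.35,
# two years of follow-up per patient.
print(n_per_group_poisson_ni(0.7, 0.7, margin=0.35, t_e=2, t_c=2))  # 45
```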

4.
Adaptive two-stage designs allow a data-driven change of design characteristics during an ongoing trial. One of the available options is an adaptive choice of the test statistic for the second stage of the trial based on the results of the interim analysis. Since there is often only vague knowledge of the distribution shape of the primary endpoint in the planning phase of a study, a change of the test statistic may be considered if the data indicate that the assumptions underlying the initial choice of test are incorrect. Collings and Hamilton proposed a bootstrap method for estimating the power of the two-sample Wilcoxon test for shift alternatives. We use this approach for the selection of the test statistic. By means of a simulation study, we show that the gain in power may be considerable when the initial assumption about the underlying distribution was wrong, whereas the loss is relatively small when the optimal test statistic was chosen initially. The results also hold for the comparison with a one-stage design. Application of the method is illustrated with a clinical trial example.
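The following is a minimal, simplified sketch of bootstrap power estimation for the two-sample Wilcoxon test in the spirit of Collings and Hamilton: the samples are pooled after a crude median alignment (a simplification of their shift-model alignment) and both groups are resampled from the pool, with the shift delta added to one group; all data are simulated placeholders.

```python
# A simplified bootstrap power estimate for the two-sample Wilcoxon test under
# a shift alternative: resample both groups from a median-aligned pool and add
# the target shift to one group. Data below are hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu

def bootstrap_wilcoxon_power(x, y, delta, alpha=0.05, n_boot=2000, seed=1):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y - np.median(y) + np.median(x)])  # aligned pool
    rejections = 0
    for _ in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True) + delta
        if mannwhitneyu(xb, yb, alternative="two-sided").pvalue <= alpha:
            rejections += 1
    return rejections / n_boot

x = np.random.default_rng(0).normal(0.0, 1.0, 30)
y = np.random.default_rng(2).normal(0.3, 1.0, 30)
print(f"Estimated power at shift 0.5: {bootstrap_wilcoxon_power(x, y, 0.5):.2f}")
```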

5.
Yi Li, Lu Tian, Lee-Jen Wei. Biometrics 2011, 67(2):427-435
In a longitudinal study, suppose that the primary endpoint is the time to a specific event. This response variable, however, may be censored by an independent censoring variable or by the occurrence of one of several dependent competing events. For each study subject, a set of baseline covariates is collected. The question is how to construct a reliable prediction rule for a future subject's profile of all competing risks of interest at a specific time point for risk-benefit decision making. In this article, we propose a two-stage procedure to make inferences about such subject-specific profiles. In the first stage, we use a parametric model to obtain a univariate risk index score system. We then consistently estimate the average competing risks for subjects who have the same parametric index score via a nonparametric function estimation procedure. We illustrate this new proposal with data from a randomized clinical trial evaluating the efficacy of a treatment for prostate cancer. The primary endpoint for this study was the time to prostate-cancer death, with two types of dependent competing events: cardiovascular death and death from other causes.

6.
A surrogate endpoint is an endpoint that is obtained sooner, at lower cost, or less invasively than the true endpoint for a health outcome, and is used to draw conclusions about the effect of an intervention on the true endpoint. In this approach, each previous trial with both surrogate and true endpoints contributes an estimated predicted effect of the intervention on the true endpoint in the trial of interest, based on the surrogate endpoint observed in the trial of interest. These predicted quantities are combined in a simple random-effects meta-analysis to estimate the predicted effect of the intervention on the true endpoint in the trial of interest. Validation involves comparing the average prediction error of this approach with (i) the average prediction error of a standard meta-analysis using only the true endpoints in the other trials, and (ii) the average clinically meaningful difference in true endpoints implicit in the trials. Validation is illustrated using data from multiple randomized trials of patients with advanced colorectal cancer, in which the surrogate endpoint was tumor response and the true endpoint was median survival time.
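As a hedged sketch of the combination step, the following implements the simple DerSimonian–Laird random-effects meta-analysis used to pool trial-specific effect estimates; the effects and standard errors are hypothetical placeholders.

```python
# A minimal DerSimonian-Laird random-effects meta-analysis: pool per-trial
# effect estimates with standard errors. Inputs are hypothetical placeholders.
import numpy as np

def dersimonian_laird(effects, ses):
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1 / ses ** 2                                  # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mu_fe) ** 2)            # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-trial variance
    w_re = 1 / (ses ** 2 + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    return mu_re, se_re, tau2

print(dersimonian_laird([0.12, 0.30, 0.22, 0.05], [0.08, 0.10, 0.07, 0.12]))
```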

7.
Three-arm noninferiority trials (involving an experimental treatment, a reference treatment, and a placebo), called "gold standard" noninferiority trials, are conducted in patients with mental disorders whenever feasible, but often fail to show superiority of the experimental treatment and/or the reference treatment over the placebo. One possible reason is that some of the patients receiving the placebo show apparent improvement in their clinical condition. An approach to addressing this problem is the sequential parallel comparison design (SPCD). Nonetheless, the SPCD has not yet been discussed in relation to gold standard noninferiority trials. In this article, our aim is to develop a hypothesis-testing method and a corresponding sample size calculation method for gold standard noninferiority trials with the SPCD. In a simulation study, we show that the proposed hypothesis-testing method achieves the nominal type I error rate and power, and that the proposed sample size calculation method has adequate power accuracy.

8.
In many clinical trials, multiple time-to-event endpoints, including the primary endpoint (e.g., time to death) and secondary endpoints (e.g., progression-related endpoints), are commonly used to determine treatment efficacy. These endpoints are often biologically related. This work is motivated by a study of bone marrow transplantation (BMT) for leukemia patients, who may experience acute graft-versus-host disease (GVHD), relapse of leukemia, and death after an allogeneic BMT. Acute GVHD is associated with relapse-free survival, and both acute GVHD and relapse of leukemia are intermediate nonterminal events subject to dependent censoring by the informative terminal event death, but not vice versa, giving rise to survival data subject to two sets of semi-competing risks. It is important to assess the impact of prognostic factors on these three time-to-event endpoints. We propose a novel statistical approach that jointly models such data via a pair of copulas to account for the multiple dependence structures, while the marginal distribution of each endpoint is formulated by a Cox proportional hazards model. We develop an estimation procedure based on pseudo-likelihood and carry out simulation studies to examine the performance of the proposed method in finite samples. The practical utility of the method is further illustrated with data from the motivating example.

9.
Chen MH, Ibrahim JG, Lam P, Yu A, Zhang Y. Biometrics 2011, 67(3):1163-1170
We develop a new Bayesian approach to sample size determination (SSD) for the design of noninferiority clinical trials. We extend the fitting and sampling priors of Wang and Gelfand (2002, Statistical Science 17, 193–208) to Bayesian SSD with a focus on controlling the type I error and power. Historical data are incorporated via a hierarchical modeling approach as well as the power prior approach of Ibrahim and Chen (2000, Statistical Science 15, 46–60). Various properties of the proposed Bayesian SSD methodology are examined, and a simulation-based computational algorithm is developed. The proposed methodology is applied to the design of a noninferiority medical device clinical trial with historical data from previous trials.
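The following is a minimal, hypothetical sketch of simulation-based Bayesian SSD in the spirit of fitting and sampling priors, specialized to a noninferiority comparison of two normal means with known variance and a vague fitting prior; it illustrates the general recipe, not the paper's algorithm.

```python
# A minimal sketch of simulation-based Bayesian sample size determination for
# noninferiority: draw the true effect from a sampling prior, simulate a trial,
# and check whether the posterior probability of noninferiority exceeds a
# threshold. All settings are hypothetical.
import numpy as np
from scipy.stats import norm

def bayes_power(n, margin=1.0, sigma=4.0, thresh=0.975, n_sim=4000,
                sampling_prior=lambda rng: rng.normal(0.0, 0.2), seed=3):
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_sim):
        theta = sampling_prior(rng)                  # true effect draw
        se = sigma * (2.0 / n) ** 0.5
        diff_hat = rng.normal(theta, se)             # observed mean difference
        # With a vague fitting prior, the posterior of theta is N(diff_hat, se^2).
        post_prob_ni = 1 - norm.cdf((-margin - diff_hat) / se)
        successes += post_prob_ni >= thresh
    return successes / n_sim

for n in (100, 200, 300):                            # candidate per-group sizes
    print(n, f"{bayes_power(n):.3f}")
```

Repeating the same simulation with the sampling prior concentrated at the noninferiority boundary gives the corresponding Bayesian type I error.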

10.
Noninferiority trials are conducted for a variety of reasons, including showing that a new treatment has a negligible reduction in efficacy or safety compared to the current standard treatment, or, in a more complex setting, showing that a new treatment has a negligible reduction in efficacy compared to the current standard yet is superior in terms of other treatment characteristics. The latter reason presents the challenge of balancing a suitable reduction in efficacy, known as the noninferiority margin, against a gain in other important treatment characteristics or findings. It would be ideal to alleviate the dilemma over the choice of margin in this setting by reverting to a traditional superiority trial design in which a single p-value is provided for superiority on both the most important endpoint (efficacy) and the most important finding (treatment characteristic). We discuss how this can be done using the information-preserving composite endpoint (IPCE) approach, and consider binary outcome cases in which the combination of efficacy and treatment characteristics, but neither one alone, paints a clear picture that the novel treatment is superior to the active control.

11.
12.
Ibrahim JG, Chen MH, Xia HA, Liu T. Biometrics 2012, 68(2):578-586
Recent guidance from the Food and Drug Administration for the evaluation of new therapies for type 2 diabetes mellitus (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate historical survival meta-analytic data into the statistical design. Various properties of the proposed methodology are examined, and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing quantities such as the power and the type I error in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program, including a noninferiority clinical trial for CV risk assessment in T2DM studies.

13.
In clinical trials with time-to-event outcomes, it is of interest to predict when a prespecified number of events will be reached. An interim analysis is conducted to estimate the underlying survival function. When another, correlated time-to-event endpoint is available, both outcome variables can be used to improve estimation efficiency. In this paper, we propose to use the convolution of the two time-to-event variables to estimate the survival function of interest. Propositions and examples are provided based on exponential models that accommodate possible change points. We further propose a new estimating equation for the expected time that exploits the relationship between the two endpoints. Simulations and the analysis of real data show that the proposed methods, which use the bivariate information, yield significant improvements in prediction over the univariate method.
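For context, the following hedged sketch shows the basic univariate exponential-model calculation that such event-count predictions build on (the paper's bivariate convolution method refines this); the patient entry times, hazard estimate, and event target are hypothetical.

```python
# A minimal univariate sketch of event-count prediction under an exponential
# survival model: given staggered entry times and a hazard estimate, find the
# calendar time at which the expected number of events reaches a target.
import numpy as np
from scipy.optimize import brentq

def expected_events(t, entry_times, hazard):
    """E[# events by calendar time t] for exponentially distributed event times."""
    exposure = np.clip(t - entry_times, 0.0, None)
    return np.sum(1.0 - np.exp(-hazard * exposure))

entry = np.linspace(0, 12, 200)      # 200 patients accrued uniformly over a year
target, lam = 120, 0.04              # target event count; monthly hazard estimate

t_star = brentq(lambda t: expected_events(t, entry, lam) - target, 0.1, 600)
print(f"Expected to reach {target} events at month {t_star:.1f}")
```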

14.
The development of clinical prediction models requires the selection of suitable predictor variables. Techniques for objective Bayesian variable selection in the linear model are well developed and have been extended to the generalized linear model setting as well as to the Cox proportional hazards model. Here, we consider discrete time-to-event data with competing risks and propose methodology to develop a clinical prediction model for the daily risk of acquiring a ventilator-associated pneumonia (VAP) attributed to P. aeruginosa (PA) in intensive care units. The competing events for a PA VAP are extubation, death, and VAP due to other bacteria. Baseline variables are potentially important for predicting the outcome at the start of ventilation but may lose some of their predictive power after a certain time. Therefore, we use a landmark approach for dynamic Bayesian variable selection, in which the set of relevant predictors depends on the time already spent at risk. Finally, we determine the direct impact of a variable on each competing event through cause-specific variable selection.

15.
A new statistical testing approach is developed for rodent tumorigenicity assays that have a single terminal sacrifice or, occasionally, interim sacrifices, but no cause-of-death data. For experiments that lack cause-of-death data, statistically imputed numbers of fatal and incidental tumors are used to modify Peto's cause-of-death test, which is usually implemented using pathologist-assigned cause-of-death information. The numbers of fatal tumors are estimated using a constrained nonparametric maximum likelihood estimation method. A new Newton-based approach under inequality constraints is proposed for finding the global maximum likelihood estimates. In this study, the proposed method concentrates on data from a single-sacrifice experiment without additional assumptions. The new testing approach may be more reliable than Peto's test because of the potential for misclassification of the cause of death by pathologists. A Monte Carlo simulation study is conducted to assess the size and power of the proposed test, and the asymptotic normality of its test statistic is also investigated. The approach is illustrated using a real data set.

16.
FAM3B has been suggested to play important roles in the progression of many cancers, such as gastric, oral, colon, and prostate cancer. However, little is known about the role of FAM3B in human esophageal squamous cell carcinoma (ESCC). In the present study, we found that FAM3B expression was higher in ESCC tissues than in adjacent normal tissues. Using quantitative real-time polymerase chain reaction, we found similar results in cell lines. FAM3B expression was significantly related to T/TNM stage. Importantly, Kaplan–Meier analysis revealed that a high expression level of FAM3B predicted a poor outcome for ESCC patients. Overexpression of FAM3B inhibits ESCC cell death, increases esophageal tumor growth in xenografted nude mice, and promotes ESCC cell migration and invasion. Further studies confirmed that FAM3B regulates the AKT–MDM2–p53 pathway and two core markers of the epithelial-to-mesenchymal transition, Snail and E-cadherin. Our results provide new insights into the role of FAM3B in the progression of ESCC and suggest that FAM3B may be a promising molecular target and diagnostic marker for ESCC.

17.
Clinical trials are often planned with high uncertainty about the variance of the primary outcome variable. A poor estimate of the variance, however, may lead to an over- or underpowered study. In the internal pilot study design, the sample variance is calculated at an interim step and the sample size can be adjusted if necessary. The available recalculation procedures use the data of those patients who have already completed the study for the sample size recalculation. In this article, we consider a variance estimator that takes into account both the data at the endpoint and the data at an intermediate point of the treatment phase. We derive asymptotic properties of this estimator and of the related sample size recalculation procedure. In a simulation study, the performance of the proposed approach is evaluated and compared with the procedure that uses only long-term data. The simulation results demonstrate that the sample size resulting from the proposed procedure generally shows a smaller variability. At the same time, the type I error rate is not inflated and the achieved power is close to the desired value.
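As a hedged sketch of the recalculation step, the following recomputes the per-group sample size for a two-arm comparison of means from an interim variance estimate using the standard normal-approximation formula; the article's estimator additionally exploits intermediate-timepoint data, which is not reproduced here, and all numbers are hypothetical.

```python
# A minimal sketch of internal pilot sample size recalculation for a two-arm
# comparison of means: re-estimate the standard deviation at the interim and
# recompute the per-group n for the originally targeted effect delta.
from math import ceil
from scipy.stats import norm

def recalculated_n_per_group(sigma_hat, delta, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (sigma_hat * z / delta) ** 2)

n_planned = recalculated_n_per_group(sigma_hat=10.0, delta=5.0)   # planning value
n_updated = recalculated_n_per_group(sigma_hat=12.5, delta=5.0)   # interim estimate
print(f"planned n/group: {n_planned}, recalculated n/group: {n_updated}")
```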

18.
In a typical comparative clinical trial, the randomization scheme is fixed at the beginning of the study and maintained throughout the course of the trial. A number of researchers have championed a randomized trial design referred to as 'outcome-adaptive randomization.' In this type of trial, the likelihood of a patient being enrolled in a particular arm of the study increases or decreases as preliminary information becomes available suggesting that the treatment may be superior or inferior. While the design merits of outcome-adaptive trials have been debated, little attention has been paid to the significant ethical concerns that arise in the conduct of such studies. These include loss of equipoise, lack of processes for adequate informed consent, and inequalities inherent in the research design that could lead to perceptions of injustice, with negative implications for patients and the research enterprise. This article examines the ethical difficulties inherent in outcome-adaptive trials.

19.
J. Feifel, D. Dobler. Biometrics 2021, 77(1):175-185
Nested case-control designs are attractive in studies with a time-to-event endpoint if the outcome is rare or if interest lies in evaluating expensive covariates. Their appeal is that they restrict attention to small subsets of all patients at risk just prior to the observed event times; only these small subsets need to be evaluated. Typically, the controls are selected at random, and methods for time-simultaneous inference have been proposed in the literature. However, the martingale structure behind nested case-control designs allows for more powerful and flexible nonstandard sampling designs. We exploit that structure to find simultaneous confidence bands based on wild bootstrap resampling procedures within this general class of designs. We show in a simulation study that the intended coverage probability is obtained for confidence bands for cumulative baseline hazard functions. We apply our methods to observational data on hospital-acquired infections.
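The following minimal sketch illustrates the wild-bootstrap idea for a cumulative-hazard confidence band in the simplest case of a fully observed cohort (classical Nelson–Aalen, no nested case-control sampling, no censoring); the event data are simulated placeholders.

```python
# A minimal wild-bootstrap confidence band for the cumulative hazard: perturb
# each Nelson-Aalen increment with an iid standard normal multiplier and take
# the supremum of the resulting process over time. Simulated placeholder data.
import numpy as np

rng = np.random.default_rng(7)
times = np.sort(rng.exponential(10, 100))          # hypothetical event times
n_at_risk = np.arange(100, 0, -1)                  # risk-set sizes (no censoring)
increments = 1.0 / n_at_risk                       # Nelson-Aalen increments d/Y
na_estimate = np.cumsum(increments)

sups = np.array([
    np.max(np.abs(np.cumsum(rng.standard_normal(len(times)) * increments)))
    for _ in range(2000)
])
c = np.quantile(sups, 0.95)                        # critical value for the band
lower, upper = na_estimate - c, na_estimate + c
print(f"A-hat at last event: {na_estimate[-1]:.2f}, "
      f"95% band: [{lower[-1]:.2f}, {upper[-1]:.2f}]")
```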

20.
In scientific research, many hypotheses relate to the comparison of two independent groups. Usually, it is of interest to use a design (i.e., the allocation of sample sizes m and n for a fixed total sample size m + n) that maximizes the power of the applied statistical test. It is known that the two-sample t-tests for homogeneous and heterogeneous variances may lose substantial power when variances are unequal but equally large samples are used. We demonstrate that this is not the case for the nonparametric Wilcoxon–Mann–Whitney test, whose application in biometrical research fields is motivated by two examples from cancer research. We prove the optimality of the balanced design in the case of symmetric and identically shaped distributions using normal approximations, and show that this design generally offers power only negligibly lower than the optimal design for a wide range of distributions.
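As a hedged illustration of the t-test side of this comparison, the following normal-approximation power sketch contrasts a balanced allocation with the SD-proportional allocation that is optimal for the mean difference under unequal variances; the abstract's point is that the Wilcoxon–Mann–Whitney test does not share this sensitivity. All numbers are hypothetical.

```python
# A minimal normal-approximation power comparison for a two-sample mean
# difference under unequal variances: balanced allocation vs. the allocation
# m/n proportional to sigma1/sigma2. Numbers are hypothetical.
from scipy.stats import norm

def approx_power(delta, s1, s2, m, n, alpha=0.05):
    se = (s1 ** 2 / m + s2 ** 2 / n) ** 0.5
    return 1 - norm.cdf(norm.ppf(1 - alpha / 2) - delta / se)

delta, s1, s2, N = 1.0, 3.0, 1.0, 80
balanced = approx_power(delta, s1, s2, N // 2, N // 2)
m_opt = round(N * s1 / (s1 + s2))                 # allocation proportional to SDs
optimal = approx_power(delta, s1, s2, m_opt, N - m_opt)
print(f"balanced: {balanced:.3f}, optimal (m={m_opt}): {optimal:.3f}")
```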
