Similar Documents
Found 20 similar documents.
1.
In the management of chronic conditions that lack universally effective treatments, adaptive treatment strategies (ATSs) have grown in popularity because they offer a more individualized approach. As a result, sequential multiple assignment randomized trials (SMARTs) have gained attention as the most suitable clinical trial design for formalizing the study of these strategies. While the number of SMARTs has increased in recent years, sample size and design considerations have generally been carried out in frequentist settings. However, standard frequentist formulae require assumptions about interim response rates and variance components, and misspecifying these can lead to incorrect sample size calculations and correspondingly inadequate power. The Bayesian framework offers a straightforward path to alleviating some of these concerns. In this paper, we provide calculations in a Bayesian setting that allow more realistic and robust estimates accounting for uncertainty in inputs through the 'two priors' approach. Compared to the standard frequentist formulae, this methodology relies on fewer assumptions, integrates pre-trial knowledge, and shifts the focus from the standardized effect size to the minimum detectable difference (MDD). The proposed methodology is evaluated in a thorough simulation study and is applied to estimate the sample size for a full-scale SMART of an internet-based adaptive stress management intervention for cardiovascular disease patients, using data from its pilot study conducted in two Canadian provinces.
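To make the 'two priors' idea concrete, the sketch below simulates a two-arm binary-outcome comparison: a design prior expresses pre-trial uncertainty about the true response rates, while a vague analysis prior drives the posterior decision at the end. All priors, thresholds, and the success criterion are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_success(n_per_arm, n_sim=2000, eta=0.95):
    """Proportion of design-prior draws for which the trial 'succeeds',
    i.e. Pr(p_trt > p_ctl | data) > eta under Beta(1, 1) analysis priors."""
    wins = 0
    for _ in range(n_sim):
        p_ctl = rng.beta(20, 30)      # design prior: control rate near 0.40
        p_trt = rng.beta(27, 27)      # design prior: treatment rate near 0.50
        x_ctl = rng.binomial(n_per_arm, p_ctl)
        x_trt = rng.binomial(n_per_arm, p_trt)
        # posterior draws under independent Beta(1, 1) analysis priors
        post_ctl = rng.beta(1 + x_ctl, 1 + n_per_arm - x_ctl, 2000)
        post_trt = rng.beta(1 + x_trt, 1 + n_per_arm - x_trt, 2000)
        wins += (post_trt > post_ctl).mean() > eta
    return wins / n_sim

# smallest n per arm (on a coarse grid) reaching a target success probability
for n in (100, 200, 300, 400):
    print(n, prob_success(n))
```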

2.
Personalized intervention strategies, in particular those that modify treatment based on a participant's own response, are a core component of precision medicine. Sequential multiple assignment randomized trials (SMARTs) are growing in popularity and are specifically designed to facilitate the evaluation of sequential adaptive strategies, particularly those embedded within the SMART. Advances in efficient estimation approaches that can incorporate machine learning while retaining valid inference allow for more precise estimates of the effectiveness of these embedded regimes. However, to the best of our knowledge, such approaches have not yet been applied as the primary analysis in SMART trials. In this paper, we present a robust and efficient approach using targeted maximum likelihood estimation (TMLE) for estimating and contrasting expected outcomes under the dynamic regimes embedded in a SMART, together with simultaneous confidence intervals for the resulting estimates. We contrast this method with two alternatives (G-computation and inverse probability weighting estimators). The precision gains and robust inference achievable through the use of TMLE to evaluate the effects of embedded regimes are illustrated using both outcome-blind simulations and a real-data analysis from the Adaptive Strategies for Preventing and Treating Lapses of Retention in Human Immunodeficiency Virus (HIV) Care (ADAPT-R) trial (NCT02338739), a SMART whose primary aim was to identify strategies to improve retention in HIV care among people living with HIV in sub-Saharan Africa.
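As a point of contrast with TMLE, the following is a minimal sketch of one of the two comparison estimators mentioned above, inverse probability weighting, for the mean outcome under a single embedded regime of a prototypical two-stage SMART. The column names, randomization probabilities, and regime structure (responders continue initial treatment; non-responders are re-randomized) are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def ipw_regime_mean(df, a1, a2_nonresp, p1=0.5, p2=0.5):
    """Hajek-type IPW mean of outcome Y under the embedded regime
    'start with a1; if no response (R == 0), switch to a2_nonresp'.
    df: pandas DataFrame with columns A1, R, A2, Y.
    p1, p2: stage-1 and stage-2 randomization probabilities."""
    consistent = (df["A1"] == a1) & (df["R"].eq(1) | (df["A2"] == a2_nonresp))
    # responders are never re-randomized, so only p1 enters their weight
    w = consistent / (p1 * np.where(df["R"] == 1, 1.0, p2))
    return np.sum(w * df["Y"]) / np.sum(w)
```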

3.
The classical group sequential test procedures proposed by Pocock (1977) and O'Brien and Fleming (1979) rest on the assumption of equal sample sizes between the interim analyses. It is well known that in most situations monitoring with unequal sample sizes between the stages adds little Type I error. In some cases, however, the test procedure can become unacceptably liberal. In this article, worst-case scenarios of sample size imbalance between the inspection times are considered. Exact critical values for the Pocock and the O'Brien and Fleming group sequential designs are derived for arbitrary, and for varying but bounded, sample sizes. The approach is a reasonable alternative to the flexible method based on the Type I error rate spending function. SAS syntax for performing the calculations is provided. When using these procedures, the inspection times or the sample sizes in the consecutive stages must be chosen independently of the data observed so far.
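For the baseline case of equal sample sizes between stages, the constant Pocock boundary can be approximated by simulation, as in this sketch (a Monte Carlo stand-in for the exact numerical computations used in the article):

```python
import numpy as np

def pocock_constant(K=5, alpha=0.05, n_sim=500_000, seed=7):
    """Critical value c with P(max_k |Z_k| >= c) = alpha under H0,
    for K equally spaced looks (two-sided)."""
    rng = np.random.default_rng(seed)
    s = np.cumsum(rng.standard_normal((n_sim, K)), axis=1)  # partial sums S_k
    z = s / np.sqrt(np.arange(1, K + 1))                    # Z_k = S_k / sqrt(k)
    return np.quantile(np.abs(z).max(axis=1), 1 - alpha)

print(round(pocock_constant(), 3))   # close to 2.41 (Pocock 1977, K=5)
```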

4.
Hellmich M. Biometrics 2001, 57(3): 892-898
To benefit from the substantial overhead expenses of a large group sequential clinical trial, the simultaneous investigation of several competing treatments has become more popular. If at some interim analysis a treatment arm reveals itself to be inferior to another treatment under investigation, that arm may be, or for ethical and/or economic reasons may even need to be, dropped. Recently proposed methods for monitoring and analyzing group sequential clinical trials with multiple treatment arms are compared and discussed. The main focus of the article is the application and extension of (adaptive) closed testing procedures in the group sequential setting that strongly control the familywise error rate. A numerical example is given for illustration.

5.
Müller HH, Schäfer H. Biometrics 2001, 57(3): 886-891
A general method is presented that integrates the concept of adaptive interim analyses into classical group sequential testing. It allows the researcher to represent every group sequential plan as an adaptive trial design and to make design changes during the course of the trial, after every interim analysis, in the same way as with adaptive designs. The concept of adaptive trial design is thereby generalized to a large variety of possible sequential plans.

6.
Brannath W, Bauer P. Biometrics 2004, 60(3): 715-723
Ethical considerations and the competitive environment of clinical trials usually require that any given trial have sufficient power to detect a treatment advance. If at an interim analysis the available data are used to decide whether the trial is promising enough to be continued, investigators and sponsors often wish to have high conditional power, that is, a high probability of rejecting the null hypothesis given the interim data and the alternative of interest. Under this requirement, a design with interim sample size recalculation, which keeps the overall and conditional power at a prespecified value and preserves the overall type I error rate, is a reasonable alternative to a classical group sequential design, in which the conditional power is often too small. In this article, two-stage designs with control of overall and conditional power are constructed that minimize the expected sample size, either for a simple point alternative or for a random mixture of alternatives given by a prior density for the efficacy parameter. The optimality result applies to trials with and without an interim hypothesis test; in addition, one can account for constraints such as a minimal sample size for the second stage. The optimal designs are illustrated with an example and compared to the frequently considered method of using the conditional type I error level of a group sequential design.
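The conditional power referred to above has a closed form on the Brownian-motion (information-fraction) scale. The sketch below is the standard textbook formula for a one-sided test, not the optimization machinery of the article:

```python
import math
from scipy.stats import norm

def conditional_power(z1, t, drift, alpha=0.025):
    """P(reject at the final analysis | interim z-statistic z1 observed at
    information fraction t), where drift is the expected final z-statistic
    under the alternative of interest."""
    z_crit = norm.ppf(1 - alpha)
    num = z_crit - math.sqrt(t) * z1 - drift * (1 - t)
    return 1 - norm.cdf(num / math.sqrt(1 - t))

print(round(conditional_power(z1=1.5, t=0.5, drift=2.8), 3))  # about 0.76
```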

7.
A simple shift algorithm is described that enables the exact determination of power functions and sample size distributions for a large variety of closed sequential two-sample designs with a binary outcome variable. The test statistics are assumed to be based on relative frequencies of successes or failures, but the number of interim analyses, the monitoring times, and the continuation regions may be specified as desired. As examples, exact properties of designs proposed by the program package EaSt (Cytel, 1992) are determined, and plans with interim analyses are considered in which decisions are based on the conditional power given the observations obtained so far.

8.
Many group-sequential test procedures have been proposed to meet the ethical need for interim analyses. All of these papers, however, focus on the situation with only one standard control and one experimental treatment. In this paper, we consider a trial with one standard control but more than one experimental treatment. We develop a group-sequential test procedure that accommodates any finite number of experimental treatments. To facilitate practical application of the proposed procedure, we derive, on the basis of Monte Carlo simulation, the critical values for α-levels of 0.01, 0.05, and 0.10, for two to four experimental treatments and for one to ten group sequential analyses. Compared with a single non-sequential analysis with reasonable power (say, 0.80), we demonstrate that the proposed procedure may substantially reduce the required sample size without seriously sacrificing power.

9.
For a Phase III randomized trial comparing survival outcomes between an experimental treatment and a standard therapy, interim monitoring is used to potentially terminate the study early for efficacy. To preserve the nominal Type I error rate, alpha spending methods and information fractions are used to compute appropriate rejection boundaries in studies with planned interim analyses. For a one-sided design in which the experimental therapy is superior to the standard therapy, interim monitoring should provide the opportunity to stop the trial before full follow-up and conclude that the experimental therapy is superior. This paper proposes a method, called total control only (TCO), for estimating the information fraction based on the number of events within the standard treatment arm. Based on theoretical derivations and simulation studies, for a maximum duration superiority design the TCO method is not influenced by departures from the designed hazard ratio, is sensitive to detecting treatment differences, and preserves the Type I error rate, in contrast to information fraction estimators based on total observed events. The TCO method is simple to apply, provides unbiased estimates of the information fraction, and does not rely on statistical assumptions that are impossible to verify at the design stage. For these reasons, it is a good approach when designing a maximum duration superiority trial with planned interim monitoring analyses.
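A minimal sketch of the estimator described above: the information fraction is the number of control-arm events observed so far divided by the planned total number of control-arm events, which can then be fed into a spending function to obtain the spent alpha (the O'Brien-Fleming-type spending function below is one common choice, not necessarily the one used in the paper):

```python
from scipy.stats import norm

def tco_fraction(control_events_observed, control_events_planned):
    """TCO-style information fraction, capped at 1."""
    return min(control_events_observed / control_events_planned, 1.0)

def obf_spent_alpha(t, alpha=0.05):
    """O'Brien-Fleming-type spending (two-sided): 2 - 2*Phi(z_{a/2}/sqrt(t))."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / t ** 0.5))

t = tco_fraction(60, 150)          # hypothetical interim: 60 of 150 events
print(t, round(obf_spent_alpha(t), 5))
```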

10.
We introduce a new sequential monitoring approach to facilitate the use of observational electronic healthcare utilization databases in comparative drug safety surveillance studies that compare the safety of two approved medical products. The new approach enhances the confounder adjustment capabilities of the conditional sequential sampling procedure (CSSP), an existing group sequential method for sequentially monitoring excess risks of adverse events following the introduction of a new medical product. It applies to a prospective cohort setting in which information for both treatment and comparison groups accumulates concurrently over time. CSSP adjusts for covariates through stratification and thus has limited capacity to control confounding, as it can accommodate only a few categorical covariates. To address this issue, we propose the propensity score (PS)-stratified CSSP, in which strata are constructed from selected percentiles of the estimated PSs. The PS is the conditional probability of being treated given measured baseline covariates and is commonly used in epidemiological studies to adjust for confounding bias. The PS-stratified CSSP integrates this more flexible confounding adjustment, PS stratification, with the sequential analytic approach, CSSP, and thus inherits CSSP's attractive features: (i) it accommodates varying amounts of person follow-up time, (ii) it uses exact conditional inference, which can be important when studying rare safety outcomes, and (iii) it allows a large number of interim tests. Further, it overcomes CSSP's difficulty in adjusting for multiple categorical and continuous confounders.
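A minimal sketch of the stratification step: estimate the PS with a logistic regression on baseline covariates, then cut the estimated scores at selected percentiles (quintiles here). Column names are placeholders, and the sequential CSSP testing machinery itself is not reproduced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_strata(df, covariates, treatment="treated", n_strata=5):
    """Return a stratum label (0..n_strata-1) per row of a pandas
    DataFrame, based on percentiles of the estimated propensity score."""
    X = df[covariates].to_numpy()
    a = df[treatment].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, a).predict_proba(X)[:, 1]
    # quintile edges of the estimated PS define the analysis strata
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    return np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, n_strata - 1)
```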

11.
In two-stage group sequential trials with a primary and a secondary endpoint, the overall type I error rate for the primary endpoint is often controlled by an α-level boundary, such as an O'Brien-Fleming or Pocock boundary. Following a hierarchical testing sequence, the secondary endpoint is tested only if the primary endpoint achieves statistical significance either at an interim analysis or at the final analysis. To control its type I error rate, the secondary endpoint is tested using a Bonferroni procedure or any α-level group sequential method. In comparison with marginal testing, there is an overall power loss for the test of the secondary endpoint, since a claim of a positive result depends on the significance of the primary endpoint in the hierarchical testing sequence. We propose two group sequential testing procedures with improved secondary power: the improved Bonferroni procedure and the improved Pocock procedure. The proposed procedures use the correlation between the interim and final statistics for the secondary endpoint while applying graphical approaches to transfer the significance level from the primary endpoint to the secondary endpoint. The procedures control the familywise error rate (FWER) strongly by construction, which is confirmed via simulation. We also compare the proposed procedures with other commonly used group sequential procedures in terms of FWER control and power to reject the secondary hypothesis. An example illustrates the procedures.

12.
In many clinical trials, it is desirable to establish a sequential monitoring plan, whereby the test statistic is computed at one or more interim points in the trial and a decision is made whether to stop early due to evidence of treatment efficacy. In this article, we set up a sequential monitoring plan for randomization-based inference under the permuted block design, the stratified block design, and the stratified urn design. We also propose a definition of the information fraction in these settings and discuss its calculation under the different designs.
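As a concrete reference point for the first of these designs, here is a minimal sketch of permuted-block randomization for two arms (block size and arm coding are arbitrary choices):

```python
import numpy as np

def permuted_blocks(n, block_size=4, seed=11):
    """1:1 two-arm assignment; each block contains exactly block_size/2
    of each arm in random order (0 = control, 1 = treatment)."""
    rng = np.random.default_rng(seed)
    blocks = []
    for _ in range(-(-n // block_size)):          # ceil(n / block_size) blocks
        block = np.repeat([0, 1], block_size // 2)
        rng.shuffle(block)
        blocks.append(block)
    return np.concatenate(blocks)[:n]

print(permuted_blocks(10))
```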

13.
This paper proposes dynamic treatment regimes (DTRs) as effective individualized treatment strategies for managing chronic periodontitis. The proposed DTRs are studied via SMARTp, a two-stage sequential multiple assignment randomized trial (SMART) design. For this design, we propose a statistical analysis plan and a novel cluster-level sample size calculation method that accounts for typical features of periodontal responses such as non-Gaussianity, spatial clustering, and nonrandom missingness. Each patient is viewed as a cluster, and each tooth within a patient's mouth as an individual unit inside the cluster, with the tooth-level covariance described by a conditionally autoregressive structure. To accommodate possible skewness and heavy tails, the tooth-level clinical attachment level (CAL) response is assumed to be skew-t, with the nonrandom missingness captured via a shared parameter model for the missingness indicator. The proposed method considers mean comparisons for regimes that do or do not share an initial treatment, where the expected values and the corresponding variances or covariances of the sample means for a pair of DTRs are derived by inverse probability weighting and the method of moments. Simulation studies investigate the finite-sample performance of the proposed sample size formulas under a variety of outcome-generating scenarios. An R package, SMARTp, implementing the sample size formula is available for free download from the Comprehensive R Archive Network.

14.
Group sequential stopping rules are often used during the conduct of clinical trials to treat patients more ethically and to address efficiency concerns. Because the use of such stopping rules materially affects the frequentist operating characteristics of the hypothesis test, an appropriate stopping rule must be chosen during the planning of the study. Often, however, the number and timing of interim analyses are not precisely known at the time of trial design, so the implementation of a particular stopping rule must allow flexible determination of the schedule of interim analyses. In this article, we consider the use of constrained stopping boundaries in the implementation of stopping rules, and we compare this approach across various scales for the test statistic. When implemented on the scale of boundary crossing probabilities, the approach is identical to the error spending function approach of Lan and DeMets (1983).
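The error spending idea mentioned above can be illustrated by a Monte Carlo sketch: boundaries are solved one analysis at a time so that the cumulative crossing probability under the null matches the spent alpha at each look. The linear spending function is chosen purely for illustration; production code would use numerical integration rather than simulation.

```python
import numpy as np

def spending_boundaries(t, alpha=0.025, n_sim=400_000, seed=3):
    """One-sided boundaries at information fractions t (increasing),
    spending alpha(t) = alpha * t (linear, for illustration only)."""
    rng = np.random.default_rng(seed)
    incr = rng.standard_normal((n_sim, len(t))) * np.sqrt(np.diff([0.0, *t]))
    z = np.cumsum(incr, axis=1) / np.sqrt(t)      # null z-statistics at each look
    alive = np.ones(n_sim, dtype=bool)            # paths not yet stopped
    bounds = []
    for k in range(len(t)):
        spend = alpha * (t[k] - (t[k - 1] if k else 0.0))
        # pick c so that a 'spend' fraction of ALL paths first cross at look k
        c = np.quantile(z[alive, k], 1 - spend * n_sim / alive.sum())
        bounds.append(round(float(c), 3))
        alive &= z[:, k] < c
    return bounds

print(spending_boundaries([0.25, 0.5, 0.75, 1.0]))
```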

15.
Li Z. Biometrics 1999, 55(1): 277-283
A method of interim monitoring is described for survival trials in which the proportional hazards assumption may not hold. This method extends the test statistics based on the cumulative weighted difference in Kaplan-Meier estimates (Pepe and Fleming, 1989, Biometrics 45, 497-507) to the sequential setting, providing a useful alternative to group sequential linear rank tests. With an appropriate weight function, the test statistic itself provides an estimator of the cumulative weighted difference in survival probabilities, an interpretable measure of the treatment difference, especially when the proportional hazards model fails. The method is illustrated with the design of a real trial, and its operating characteristics are studied through a small simulation.
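A minimal sketch of the building block of such a statistic, the cumulative weighted difference in Kaplan-Meier estimates, with a constant weight for simplicity (the sequential boundary machinery and variance estimation are omitted, and tie handling is simplified):

```python
import numpy as np

def km_curve(time, event, grid):
    """Kaplan-Meier survival estimate evaluated at the points in grid."""
    order = np.argsort(time)
    t = np.asarray(time, float)[order]
    d = np.asarray(event, float)[order]
    at_risk = len(t) - np.arange(len(t))          # risk set just before each time
    surv = np.concatenate([[1.0], np.cumprod(1 - d / at_risk)])
    return surv[np.searchsorted(t, grid, side="right")]

def weighted_km_difference(t1, e1, t2, e2, tau, n_grid=200):
    """Integral of S1(t) - S2(t) over [0, tau] (constant weight w = 1)."""
    grid = np.linspace(0.0, tau, n_grid)
    diff = km_curve(t1, e1, grid) - km_curve(t2, e2, grid)
    return diff.sum() * (grid[1] - grid[0])       # simple Riemann sum
```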

16.
Many clinical trials compare two or more treatment groups using a binary outcome measure. For example, the goal could be to determine whether the frequency of pain episodes is significantly reduced in the treatment group (arm A) compared to the control group (arm B). For ethical or regulatory reasons, group sequential designs are commonly employed; based on a binomial distribution, stopping boundaries for the interim analyses are constructed for assessing the difference in response probabilities between the two groups. This is easily accomplished with any of the standard procedures, e.g., those discussed by Jennison and Turnbull (2000), and with one of the most commonly used software packages, East (2000). Several factors are known to affect the primary outcome of interest, but their true distributions are not known in advance. In addition, these factors may cause heterogeneous treatment responses among individuals in a group, and their exact effect size may be unknown. To limit the effect of such factors on the comparison of the two arms, stratified randomization is used in the actual conduct of the trial, followed by a stratified analysis based on the odds ratio proposed in Jennison and Turnbull (2000, pages 251-252) and consistent with the stratified design. However, the stopping rules used for the interim analyses are those obtained for detecting differences in response rates under an unstratified design. The purpose of this paper is to assess the robustness of this approach, that is, the performance of the odds ratio test when the underlying distribution and effect size of the factors that influence the outcome vary. The simulation studies indicate that, in general, the stratified approach offers consistently better results than the unstratified approach as long as the difference between the two groups in the weighted average of the response probabilities across strata remains close to the hypothesized value, irrespective of differences in the allocation distributions and heterogeneous response rates. However, if the response probabilities deviate significantly from the hypothesized values so that the difference in the weighted average is less than hypothesized, the proposed study could be significantly underpowered.
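For reference, one standard way to pool stratified 2×2 tables into a single odds ratio is the Mantel-Haenszel estimator sketched below. This is a generic illustration; the paper's test statistic follows Jennison and Turnbull (2000) and may differ in detail.

```python
def mantel_haenszel_or(tables):
    """tables: iterable of (a, b, c, d) per stratum, where a, b are
    treatment successes/failures and c, d are control successes/failures."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# two hypothetical strata
print(round(mantel_haenszel_or([(20, 30, 10, 40), (15, 10, 12, 13)]), 3))
```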

17.
Modification of sample size in group sequential clinical trials
Cui L, Hung HM, Wang SJ. Biometrics 1999, 55(3): 853-857
In group sequential clinical trials, sample size reestimation can be complicated when the change in sample size is allowed to depend on the observed sample path. Our simulation studies show that increasing the sample size based on an interim estimate of the treatment difference can substantially inflate the probability of type I error in most practical situations. A new group sequential test procedure is developed by modifying the weights used in the traditional repeated significance two-sample mean test. The new test preserves the type I error probability at the target level and can provide a substantial gain in power when the sample size is increased. Generalizations of the new procedure are discussed.
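A minimal sketch of the weighting idea described above: the two stage-wise z-statistics are combined with weights fixed at the originally planned information split, so a data-driven increase of the stage-2 sample size cannot inflate the type I error (variance estimation and boundary details are omitted):

```python
import math

def weighted_two_stage_z(z1, z2, t_planned):
    """z1: stage-1 z-statistic; z2: z-statistic from the (possibly
    re-sized) stage-2 data alone; t_planned: originally planned
    fraction of information in stage 1, fixed at the design stage."""
    return math.sqrt(t_planned) * z1 + math.sqrt(1 - t_planned) * z2

print(round(weighted_two_stage_z(1.2, 2.0, 0.5), 3))  # vs 1.96 at one-sided 2.5%
```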

18.
Shen Y, Fisher L. Biometrics 1999, 55(1): 190-197
In monitoring clinical trials, it is appealing to use the interim findings to determine whether the sample size originally planned will provide adequate power when the alternative hypothesis is true, and to adjust the sample size if necessary. In this paper, we propose a flexible sequential monitoring method following the work of Fisher (1998), in which the maximum sample size does not have to be specified in advance. The final test statistic is constructed from a weighted average of the sequentially collected data, where the weight function at each stage is determined by the data observed prior to that stage. This weight function maintains the integrity of the variance of the final test statistic so that the overall type I error rate is preserved; it also plays an implicit role in terminating the trial when a treatment difference exists. Finally, the design allows the trial to be stopped early when the efficacy result is sufficiently negative. Simulation studies confirm the performance of the method.

19.
Lan KK, Lachin JM. Biometrics 1990, 46(3): 759-770
To control the Type I error probability in a group sequential procedure using the logrank test, it is important to know the information times (fractions) at the interim analyses conducted for data monitoring. For the logrank test, the information time at an interim analysis is the fraction of the total number of events to be accrued in the entire trial. In a maximum information trial design, the trial is concluded when a prespecified total number of events has been accrued, so the information time at each interim analysis is known. However, many trials are designed to accrue data over a fixed duration of follow-up on a specified number of patients; this is termed a maximum duration trial design. Under such a design, the total number of events to be accrued is unknown at the time of an interim analysis, so the information times must be estimated. A common practice is to assume that a fixed fraction of information will be accrued between any two consecutive interim analyses and then employ a Pocock or O'Brien-Fleming boundary. In this article, we describe an estimate of the information time based on the fraction of total patient exposure, which tends to be slightly negatively biased (i.e., conservative) if survival is exponentially distributed. We then present a numerical exploration of the robustness of this estimate under nonexponential survival. We also show that the Lan-DeMets (1983, Biometrika 70, 659-663) procedure for constructing group sequential boundaries with the desired level of Type I error control can be computed using the estimated information fraction, even though it may be biased. Finally, we discuss the implications of employing a biased estimate of study information in a group sequential procedure.
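A minimal sketch of the exposure-based estimate described above; the projected total exposure is a design-stage input, so the code treats it as a given constant:

```python
def exposure_information_fraction(follow_up_times, projected_total_exposure):
    """Estimated information time: observed total patient exposure divided
    by the exposure projected for the scheduled end of the trial."""
    return min(sum(follow_up_times) / projected_total_exposure, 1.0)

# hypothetical interim look: 120 patients averaging 1.5 years of follow-up,
# against a design-stage projection of 600 patient-years in total
print(exposure_information_fraction([1.5] * 120, 600.0))   # -> 0.3
```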

20.
Background and Aims: The eradication rate of proton-pump inhibitor-based triple therapy for Helicobacter pylori infection is low due to increasing antibiotic resistance, especially to clarithromycin. It was recently reported in Europe that a 10-day sequential strategy produced good outcomes. The aim of this study was to assess the efficacy of sequential therapy as first-line treatment for eradication of H. pylori in clinical practice in Korea. Materials and Methods: A total of 98 patients (mean age 55.2 years; 47 male, 51 female) with proven H. pylori infection received 10-day sequential therapy (20 mg of rabeprazole and 1 g of amoxicillin, twice daily for the first 5 days, followed by 20 mg of rabeprazole, 500 mg of clarithromycin, and 500 mg of metronidazole, twice daily for the remaining 5 days). Eradication was evaluated by 13C-urea breath testing 4 weeks after completion of treatment. Eradication rates were calculated by intention-to-treat (ITT) and per protocol (PP) analyses. Compliance and adverse events were also assessed in the study group. Results: The eradication rate was 91.8% (90/98) by ITT and 91.8% (89/97) by PP analysis. The study group consisted of 66 patients with H. pylori-associated gastritis, 7 with gastric ulcer, and 25 with duodenal ulcer (67.3%, 7.1%, and 25.5%, respectively). Mild adverse events were frequent (21.4%), but the treatment was well tolerated. The most common adverse event was a bitter taste (9.2%), followed by nausea and diarrhea (4.1%). Conclusions: Ten-day sequential therapy was found to effectively eradicate H. pylori infection as first-line treatment in Korea.
