Similar articles
20 similar articles found (search time: 15 ms)
1.
We consider a clinical trial with a primary and a secondary endpoint where the secondary endpoint is tested only if the primary endpoint is significant. The trial uses a group sequential procedure with two stages. The familywise error rate (FWER) of falsely concluding significance on either endpoint is to be controlled at a nominal level α. The type I error rate for the primary endpoint is controlled by choosing any α-level stopping boundary, e.g., the standard O'Brien–Fleming or the Pocock boundary. Given any particular α-level boundary for the primary endpoint, we study the problem of determining the boundary for the secondary endpoint to control the FWER. We study this FWER analytically and numerically and find that it is maximized when the correlation coefficient ρ between the two endpoints equals 1. For the four combinations of O'Brien–Fleming and Pocock boundaries for the primary and secondary endpoints, the critical constants required to control the FWER are computed for different values of ρ. An ad hoc boundary is proposed for the secondary endpoint to address a practical concern that may be at issue in some applications. Numerical studies indicate that the O'Brien–Fleming boundary for the primary endpoint combined with the Pocock boundary for the secondary endpoint generally gives the best primary as well as secondary power performance. The Pocock boundary may be replaced by the ad hoc boundary for the secondary endpoint with very little loss of secondary power if the practical concern is at issue. A clinical trial example is given to illustrate the methods.
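The maximization of the FWER at ρ = 1 can be checked by simulation. The sketch below is an illustration, not the paper's analytic computation: it uses the standard two-stage O'Brien–Fleming and Pocock z-boundaries at one-sided α = 0.025 and estimates the secondary-endpoint type I error when the primary endpoint has an assumed per-stage drift delta.

```python
import numpy as np

def secondary_error(rho, delta, c, d, n_sim=200_000, seed=1):
    """Monte Carlo type I error for the secondary endpoint in a two-stage
    gatekeeping design: the secondary is tested against boundary d only at
    the stage where the primary first crosses its boundary c. H2 is true;
    delta is the primary's per-stage drift, rho the endpoint correlation."""
    rng = np.random.default_rng(seed)
    x = delta + rng.standard_normal((n_sim, 2))              # primary stage increments
    e = rng.standard_normal((n_sim, 2))
    y = rho * (x - delta) + np.sqrt(1 - rho**2) * e          # secondary increments (null)
    X1, Y1 = x[:, 0], y[:, 0]
    X2 = (x[:, 0] + x[:, 1]) / np.sqrt(2)                    # cumulative z at stage 2
    Y2 = (y[:, 0] + y[:, 1]) / np.sqrt(2)
    reject = ((X1 > c[0]) & (Y1 > d[0])) | ((X1 <= c[0]) & (X2 > c[1]) & (Y2 > d[1]))
    return reject.mean()

obf = (2.797, 1.977)   # two-stage O'Brien-Fleming z-bounds, one-sided alpha = 0.025
poc = (2.178, 2.178)   # two-stage Pocock z-bounds, one-sided alpha = 0.025
err_rho1 = secondary_error(rho=1.0, delta=1.5, c=obf, d=poc)
err_rho0 = secondary_error(rho=0.0, delta=1.5, c=obf, d=poc)
```

With this boundary pair, the estimated secondary error is larger at ρ = 1 than at ρ = 0, consistent with the worst case occurring at perfect correlation.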

2.
In the field of pharmaceutical drug development, there have been extensive discussions on the establishment of statistically significant results that demonstrate the efficacy of a new treatment with multiple co-primary endpoints. When designing a clinical trial with such multiple co-primary endpoints, it is critical to determine the appropriate sample size for establishing statistical significance on all the co-primary endpoints while preserving the desired overall power, because the type II error rate increases with the number of co-primary endpoints. We consider overall power functions and sample size determinations with multiple co-primary endpoints consisting of mixed continuous and binary variables, and provide numerical examples to illustrate the behavior of the overall power functions and sample sizes. In formulating the problem, we assume that the response variables follow a multivariate normal distribution, where each binary variable is obtained by dichotomizing a normal variable at a fixed cut point. Numerical examples show that the sample size decreases as the correlation increases when the individual powers of the endpoints are approximately equal to one another.
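The overall power for mixed continuous/binary co-primary endpoints can be approximated by simulating the dichotomized-normal model described above. The effect sizes, cut point, and tests below are illustrative assumptions, not the paper's analytic formulas.

```python
import numpy as np
from scipy.stats import norm

def overall_power(n, delta_c, delta_b, tau, rho, alpha=0.025,
                  n_sim=10_000, seed=2):
    """Simulated probability of one-sided significance on BOTH co-primary
    endpoints: one continuous (two-sample z-test, unit variance) and one
    binary, obtained by dichotomizing a correlated latent normal at tau."""
    rng = np.random.default_rng(seed)
    z_a = norm.ppf(1 - alpha)
    cov = [[1.0, rho], [rho, 1.0]]
    t = rng.multivariate_normal([delta_c, delta_b], cov, size=(n_sim, n))
    c = rng.multivariate_normal([0.0, 0.0], cov, size=(n_sim, n))
    # continuous endpoint: two-sample z-statistic (known unit variance)
    z1 = (t[:, :, 0].mean(axis=1) - c[:, :, 0].mean(axis=1)) / np.sqrt(2 / n)
    # binary endpoint: dichotomize the latent normal at tau, compare proportions
    p1 = (t[:, :, 1] > tau).mean(axis=1)
    p0 = (c[:, :, 1] > tau).mean(axis=1)
    pbar = (p1 + p0) / 2
    z2 = (p1 - p0) / np.sqrt(2 * pbar * (1 - pbar) / n)
    return float(np.mean((z1 > z_a) & (z2 > z_a)))

pow_rho8 = overall_power(100, delta_c=0.4, delta_b=0.4, tau=0.0, rho=0.8)
pow_rho0 = overall_power(100, delta_c=0.4, delta_b=0.4, tau=0.0, rho=0.0)
```

For a fixed n, the joint power rises with the correlation, which is why the required sample size falls as ρ grows.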

3.
Designs incorporating more than one endpoint have become popular in drug development. One such design allows short-term information to be incorporated in an interim analysis if the long-term primary endpoint has not yet been observed for some of the patients. We first consider a two-stage design with binary endpoints allowing for futility stopping only, based on conditional power under both fixed and observed effects. Design characteristics of three estimators are compared: one using the long-term primary endpoint only, one using the short-term endpoint only, and one combining data from both. For each approach, equivalent cut-off values for fixed-effect and observed-effect conditional power calculations can be derived that result in the same overall power. While the type I error rate cannot be inflated in trials that stop only for futility (it usually decreases), there is a loss of power. In this study, we consider different scenarios, including different thresholds for conditional power, different amounts of information available at the interim, and different correlations and probabilities of success. We further extend the methods to adaptive designs with unblinded sample size reassessment based on conditional power, using the inverse normal method as the combination function. Two different futility stopping rules are considered: one based on the conditional power, and one based on P-values from the Z-statistics of the estimators. Average sample size, the probability of stopping for futility, and the overall power of the trial are compared, and the influence of the choice of weights is investigated.
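The conditional power calculations driving such a futility rule can be sketched with the usual Brownian-motion approximation; the interim z-score, information fraction, and futility threshold below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_t, t, zeta, alpha=0.025):
    """Conditional power P(Z(1) > z_alpha | Z(t) = z_t) under the
    Brownian-motion approximation of the z-test process, where t is the
    interim information fraction and zeta the assumed expected final
    z-score: the planned effect gives 'fixed effect' CP, while
    zeta = z_t / sqrt(t) gives 'observed effect' CP."""
    z_alpha = norm.ppf(1 - alpha)
    num = z_alpha - z_t * np.sqrt(t) - zeta * (1 - t)
    return float(1 - norm.cdf(num / np.sqrt(1 - t)))

# hypothetical interim look: z = 0.5 at half the information
cp_fixed = conditional_power(z_t=0.5, t=0.5, zeta=2.8)               # planned effect
cp_obs = conditional_power(z_t=0.5, t=0.5, zeta=0.5 / np.sqrt(0.5))  # observed effect
stop_futility = cp_obs < 0.20   # illustrative futility threshold
```

Observed-effect conditional power is much lower here than fixed-effect conditional power, so the two rules need different cut-offs to give the same overall power, as the abstract notes.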

4.
Albert PS, Follmann DA, Wang SA, Suh EB. Biometrics 2002, 58(3):631-642
Longitudinal clinical trials often collect long sequences of binary data. Our application is a recent clinical trial in opiate addicts that examined the effect of a new treatment on repeated binary urine tests assessing opiate use over an extended follow-up. The dataset had two sources of missingness: dropout and intermittent missing observations. The primary endpoint of the study was the comparison of the marginal probability of a positive urine test over follow-up across treatment arms. We present a latent autoregressive model for longitudinal binary data subject to informative missingness. In this model, a Gaussian autoregressive process is shared between the binary response and missing-data processes, thereby inducing informative missingness. Our approach extends the work of others who have developed models that link the various processes through a shared random effect but do not allow for autocorrelation. We discuss parameter estimation using Monte Carlo EM and demonstrate through simulations that incorporating within-subject autocorrelation through a latent autoregressive process can be very important when longitudinal binary data are subject to informative missingness. We illustrate the new methodology using the opiate clinical trial data.
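A minimal simulation of the shared latent AR(1) mechanism may help fix ideas: one Gaussian autoregressive process drives both the binary responses and the missingness indicators, inducing informative missingness. All coefficients are illustrative, and this sketches only the data generation, not the Monte Carlo EM fitting.

```python
import numpy as np

def simulate_subject(T, phi, beta, gamma, seed=None):
    """Simulate one subject's binary responses y_t and missingness
    indicators m_t from a shared latent Gaussian AR(1) process b_t: the
    same b_t enters both logistic models, so response and missingness are
    dependent (non-ignorable missingness). Coefficients are illustrative."""
    rng = np.random.default_rng(seed)
    b = np.empty(T)
    b[0] = rng.standard_normal() / np.sqrt(1 - phi**2)   # stationary start
    for s in range(1, T):
        b[s] = phi * b[s - 1] + rng.standard_normal()
    expit = lambda v: 1.0 / (1.0 + np.exp(-v))
    y = (rng.random(T) < expit(beta[0] + beta[1] * b)).astype(int)    # response
    m = (rng.random(T) < expit(gamma[0] + gamma[1] * b)).astype(int)  # missing visit
    return y, m, b

y, m, b = simulate_subject(200, phi=0.8, beta=(0.0, 1.0), gamma=(-1.0, 1.0), seed=3)
```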

5.
Non-inferiority trials are conducted for a variety of reasons, including to show that a new treatment has a negligible reduction in efficacy or safety when compared to the current standard treatment, or, in a more complex setting, to show that a new treatment has a negligible reduction in efficacy compared to the current standard yet is superior in terms of other treatment characteristics. The latter reason for conducting a non-inferiority trial presents the challenge of deciding on a balance between a suitable reduction in efficacy, known as the non-inferiority margin, in return for a gain in other important treatment characteristics. It would be ideal to alleviate the dilemma over the choice of margin in this setting by reverting to a traditional superiority trial design in which a single p-value for superiority on both the most important endpoint (efficacy) and the most important finding (treatment characteristic) is provided. We discuss how this can be done using the information-preserving composite endpoint (IPCE) approach and consider binary outcome cases in which the combination of efficacy and treatment characteristics, but neither one by itself, paints a clear picture that the novel treatment is superior to the active control.

6.
Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. The predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from the historical trials and the surrogate endpoint of the target trial, and (2) a random extrapolation error estimated by successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of an observed result for the estimated treatment effect, we compute its standard error multiplier. Software is available for download.
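The leave-one-trial-out idea can be sketched as follows, using a simple trial-level linear prediction model (one of the model choices mentioned) on hypothetical historical effect estimates; this is an illustration, not the paper's exact estimator or interval.

```python
import numpy as np

def predict_true_effect(surr_hist, true_hist, surr_new):
    """Predict the treatment effect on the true endpoint in a target trial
    from its surrogate effect: fit a trial-level linear model to historical
    effects, and gauge extrapolation error by successively leaving out each
    historical trial and recording its prediction error."""
    s = np.asarray(surr_hist, float)
    t = np.asarray(true_hist, float)
    slope, intercept = np.polyfit(s, t, 1)
    pred = intercept + slope * surr_new
    errs = []
    for i in range(len(s)):                      # leave-one-trial-out errors
        keep = np.arange(len(s)) != i
        sl, ic = np.polyfit(s[keep], t[keep], 1)
        errs.append(t[i] - (ic + sl * s[i]))
    se = np.std(errs, ddof=1)                    # extrapolation-error scale
    return pred, (pred - 1.96 * se, pred + 1.96 * se)

# hypothetical trial-level effect estimates (surrogate, true) from 5 trials
pred, ci = predict_true_effect([0.1, 0.2, 0.3, 0.4, 0.5],
                               [0.19, 0.42, 0.61, 0.78, 1.02], 0.25)
```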

7.
ABSTRACT: BACKGROUND: Postoperative surgical site infections cause substantial morbidity, prolonged hospitalization, costs and even mortality, and remain one of the most frequent surgical complications. Approximately 14% to 30% of all patients undergoing elective open abdominal surgery are affected, and methods to reduce surgical site infection rates warrant further investigation and evaluation in randomized controlled trials. METHODS: The trial investigates whether the application of a circular plastic wound protector reduces the rate of surgical site infections by 50% in general and visceral surgical patients undergoing midline or transverse laparotomy. BaFO is a randomized, controlled, patient-blinded and observer-blinded multicenter clinical trial with two parallel surgical groups. The primary outcome measure will be the rate of surgical site infections within 45 days postoperatively, assessed according to the definition of the Centers for Disease Control and Prevention. Statistical analysis of the primary endpoint will be based on the intention-to-treat population. The global level of significance is set at 5% (two-sided), and the sample size (n = 258 per group) is determined to assure a power of 80%, with a planned interim analysis for the primary endpoint after the inclusion of 340 patients. DISCUSSION: The BaFO trial will explore whether the rate of surgical site infections can be reduced by a single, simple, inexpensive intervention in patients undergoing open elective abdominal surgery. Its pragmatic design guarantees high external validity and clinical relevance. Trial registration: http://www.clinicaltrials.gov NCT01181206. Date of registration: 11 August 2010; date of first patient randomized: 8 September 2010.
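For orientation, a generic two-proportion sample-size formula shows the order of magnitude involved. The 14% baseline rate below is an assumption; the protocol's n = 258 per group additionally reflects design details (e.g., the planned interim analysis) that the abstract does not restate.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-proportion z-test
    (simple unpooled-variance formula)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return math.ceil((z_a + z_b) ** 2
                     * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

# assumed 14% control SSI rate halved by the wound protector
n = n_per_group(0.14, 0.07)
```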

8.
ABSTRACT: BACKGROUND: The optimal strategy for abdominal wall closure has been an issue of ongoing debate. Available studies do not specifically enroll patients who undergo emergency laparotomy and thus do not consider the distinct biological characteristics of these patients. The present randomized controlled trial evaluates the efficacy and safety of two commonly applied abdominal wall closure strategies in patients undergoing primary emergency midline laparotomy. METHODS/DESIGN: The CONTINT trial is a multicenter, open-label, randomized controlled trial with a two-group parallel design. Patients undergoing a primary emergency midline laparotomy are enrolled in the trial. The two most commonly applied strategies of abdominal wall closure after midline laparotomy are compared: the continuous, all-layer suture technique using slowly absorbable monofilament material (two Monoplus(R) loops) and the interrupted suture technique using rapidly absorbable braided material (Vicryl(R) sutures). The primary endpoint of the CONTINT trial is an incisional hernia within 12 months or a burst abdomen within 30 days after surgery. As reliable data on this primary endpoint are not available for patients undergoing emergency surgery, an adaptive interim analysis will be conducted after the inclusion of 80 patients, allowing early termination of the trial or modification of design characteristics, such as a recalculation of the sample size, if necessary. DISCUSSION: This is a randomized controlled multicenter trial with a two-group parallel design to assess the efficacy and safety of two commonly applied abdominal wall closure strategies in patients undergoing primary emergency midline laparotomy. Trial registration: NCT00544583.

9.
The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as 'valid.' However, little consideration has been given to how a trial that utilizes a newly validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multitrial historical information on the validated relationship between the surrogate and clinical endpoints, and then evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively, perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design against a standard O'Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly validated surrogate endpoint for overall survival.

10.
To optimize resources, randomized clinical trials with multiple arms can be an attractive option for simultaneously testing various treatment regimens in pharmaceutical drug development. The motivation for this work was the successful conduct and positive final outcome of a three-arm randomized clinical trial primarily assessing whether obinutuzumab plus chlorambucil is superior to chlorambucil alone, based on a time-to-event endpoint, in patients with chronic lymphocytic leukemia and coexisting conditions. The inference strategy of this trial was based on a closed testing procedure. We compare this strategy to three potential alternatives for running a three-arm clinical trial with a time-to-event endpoint. The primary goal is to quantify the differences between these strategies in terms of the time until the first analysis, and thus potential approval of a new drug, the number of required events, and power. Operational aspects of implementing the various strategies are discussed. In conclusion, using a closed testing procedure results in the shortest time to the first analysis with a minimal loss in power. Therefore, closed testing procedures should be part of the statistician's standard clinical trials toolbox when planning multiarm clinical trials.
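The closure principle underlying such an inference strategy can be sketched generically: an elementary hypothesis is rejected only if every intersection hypothesis containing it is rejected by a local α-level test. Bonferroni is used below purely for illustration; the trial's actual local tests were time-to-event tests.

```python
from itertools import combinations

def closed_test(p_values, alpha=0.05):
    """Closure principle: reject elementary hypothesis H_i at FWER level
    alpha iff every intersection hypothesis containing H_i is rejected by
    its local test (here a Bonferroni test over the subset)."""
    k = len(p_values)
    rejected = []
    for i in range(k):
        ok = all(
            min(p_values[j] for j in S) <= alpha / len(S)
            for r in range(1, k + 1)
            for S in combinations(range(k), r)
            if i in S
        )
        rejected.append(ok)
    return rejected
```

For example, with p-values (0.01, 0.04, 0.5) only the first hypothesis survives closure: the second fails because the intersection with the third is not rejected at level 0.025.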

11.
The approach to early termination for efficacy in a trial where events occur over time but the primary question of interest relates to a long-term binary endpoint is not straightforward. This article considers comparison of treatment groups with Kaplan-Meier (KM) proportions evaluated at increasing times from randomization, at increasing calendar testing times. This strategy is employed to improve the ability to detect important treatment effects and provide critical treatments to patients in a timely manner. This dynamic Kaplan-Meier (DKM) approach is shown to be robust; that is, it produces high power and early termination time across a wide range of circumstances. In contrast, a fixed time KM comparison and the log-rank test are both shown to be more variable in performance. Practical considerations of implementing the DKM method are discussed.
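The fixed-time building block of the DKM approach, a z-test comparing Kaplan-Meier proportions with Greenwood variances, might be sketched as follows; the dynamic version would re-apply this at increasing horizons as calendar time accrues.

```python
import numpy as np

def km_estimate(times, events, t):
    """Kaplan-Meier survival estimate at horizon t with Greenwood variance
    (events processed one at a time; event = 1, censored = 0)."""
    order = np.argsort(times)
    times = np.asarray(times, float)[order]
    events = np.asarray(events)[order]
    n = len(times)
    s, gw = 1.0, 0.0
    for i, (u, d) in enumerate(zip(times, events)):
        if u > t:
            break
        at_risk = n - i
        if d:
            s *= (at_risk - 1) / at_risk
            gw += 1.0 / (at_risk * (at_risk - 1))
    return s, s**2 * gw

def dkm_z(times1, events1, times0, events0, t):
    """Z-statistic comparing the two arms' KM proportions at horizon t --
    the fixed-time comparison that the dynamic KM approach repeats at
    increasing horizons."""
    s1, v1 = km_estimate(times1, events1, t)
    s0, v0 = km_estimate(times0, events0, t)
    return (s1 - s0) / np.sqrt(v1 + v0)
```

With no censoring before t, the Greenwood variance reduces to the usual binomial variance of the observed proportion, a quick sanity check on the code.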

12.
Adaptive two-stage designs allow a data-driven change of design characteristics during the ongoing trial. One of the available options is an adaptive choice of the test statistic for the second stage of the trial based on the results of the interim analysis. Since there is often only vague knowledge of the distribution shape of the primary endpoint in the planning phase of a study, a change of the test statistic may be considered if the data indicate that the assumptions underlying the initial choice of the test are incorrect. Collings and Hamilton proposed a bootstrap method for estimating the power of the two-sample Wilcoxon test for shift alternatives. We use this approach for the selection of the test statistic. By means of a simulation study, we show that the gain in power may be considerable when the initial assumption about the underlying distribution was wrong, whereas the loss is relatively small when the optimal test statistic was chosen in the first instance. The results also hold for comparison with a one-stage design. Application of the method is illustrated by a clinical trial example.
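A Collings-Hamilton-style bootstrap power estimate for the two-sample Wilcoxon test can be sketched as below; the pilot data, shift, and sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def bootstrap_power(pilot_x, pilot_y, shift, n, alpha=0.05, B=400, seed=4):
    """Bootstrap power estimate for the two-sample Wilcoxon (Mann-Whitney)
    test under a shift alternative: resample the pooled pilot data, add the
    shift to one resampled group, and record the rejection rate."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([pilot_x, pilot_y])
    hits = 0
    for _ in range(B):
        x = rng.choice(pooled, size=n, replace=True)
        y = rng.choice(pooled, size=n, replace=True) + shift
        if mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / B

pilot = np.random.default_rng(0).standard_normal(40)   # hypothetical pilot sample
pw_alt = bootstrap_power(pilot[:20], pilot[20:], shift=1.5, n=30)
pw_null = bootstrap_power(pilot[:20], pilot[20:], shift=0.0, n=30)
```

In a two-stage design, the same machinery can be run at the interim for each candidate test statistic, and the statistic with the highest estimated power selected for stage two.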

14.
Neoadjuvant endocrine therapy trials for breast cancer are now a widely accepted investigational approach for oncology cooperative group and pharmaceutical company research programs. However, there remains considerable uncertainty regarding the most suitable endpoints for these studies, in part because short-term clinical, radiological or biomarker responses have not been fully validated as surrogate endpoints that closely relate to long-term breast cancer outcome. This shortcoming must be addressed before neoadjuvant endocrine treatment can be used as a triage strategy designed to identify patients with endocrine therapy “curable” disease. In this summary, information from published studies is used as a basis to critique clinical trial designs and to suggest experimental endpoints for future validation studies. Three aspects of neoadjuvant endocrine therapy designs are considered: the determination of response; the assessment of surgical outcomes; and biomarker endpoint analysis. Data from the letrozole 024 (LET 024) trial, which compared letrozole and tamoxifen, are used to illustrate a combined endpoint analysis that integrates both clinical and biomarker information. In addition, the concept of a “cell cycle response” is explored as a simple post-treatment endpoint based on Ki67 analysis that might have properties similar to the pathological complete response endpoint used in neoadjuvant chemotherapy trials.

15.
Hung et al. (2007) considered the problem of controlling the type I error rate for a primary and secondary endpoint in a clinical trial using a gatekeeping approach in which the secondary endpoint is tested only if the primary endpoint crosses its monitoring boundary. They considered a two-look trial and showed by simulation that the naive method of testing the secondary endpoint at full level α at the time the primary endpoint reaches statistical significance does not control the familywise error rate at level α. Tamhane et al. (2010) derived analytic expressions for familywise error rate and power and confirmed the inflated error rate of the naive approach. Nonetheless, many people mistakenly believe that the closure principle can be used to prove that the naive procedure controls the familywise error rate. The purpose of this note is to explain in greater detail why there is a problem with the naive approach and show that the degree of alpha inflation can be as high as that of unadjusted monitoring of a single endpoint.
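The inflation caused by the naive procedure is easy to reproduce by simulation: monitor the primary with two-look O'Brien-Fleming bounds, test the secondary at full level whenever the primary is significant, and take the worst case over the primary drift. ρ = 1 is used below as an unfavorable configuration; the drift grid is illustrative.

```python
import numpy as np

def naive_fwer(delta, n_sim=200_000, seed=5):
    """Secondary-endpoint type I error of the naive two-look procedure:
    the primary is monitored with O'Brien-Fleming bounds (2.797, 1.977;
    one-sided alpha = 0.025) and the secondary is tested at FULL level
    (z > 1.96) at whichever look the primary first crosses. rho = 1 and
    H2 true; delta is the primary's per-stage drift."""
    rng = np.random.default_rng(seed)
    x = delta + rng.standard_normal((n_sim, 2))   # primary stage increments
    X1 = x[:, 0]
    X2 = (x[:, 0] + x[:, 1]) / np.sqrt(2)         # cumulative primary z
    Y1 = X1 - delta                               # secondary, rho = 1, null mean
    Y2 = X2 - delta * np.sqrt(2)
    hit = ((X1 > 2.797) & (Y1 > 1.96)) | ((X1 <= 2.797) & (X2 > 1.977) & (Y2 > 1.96))
    return hit.mean()

# worst case over a grid of primary drifts clearly exceeds the nominal 0.025
err = max(naive_fwer(d) for d in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0))
```

The worst-case error approaches what unadjusted two-look monitoring of a single endpoint at z > 1.96 would give, matching the note's conclusion.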

16.
Adaptive seamless phase II/III designs combine a phase II and a phase III study into one single confirmatory clinical trial. Several examples of such designs are presented, where the primary endpoint is binary, time-to-event or continuous. The interim adaptations considered include the selection of treatments and the selection of hypotheses related to a pre-specified subgroup of patients. Practical aspects concerning the planning and implementation of adaptive seamless confirmatory studies are also discussed.
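A core ingredient of such seamless designs is a combination test. The inverse normal combination of stage-wise p-values, sketched below, preserves the type I error level because stage-2 data enter only through p2 even after interim adaptations; for treatment or subgroup selection it would be embedded in a closed test, which is omitted here.

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=np.sqrt(0.5)):
    """Inverse normal combination of stage-wise one-sided p-values with
    pre-specified weights satisfying w1^2 + w2^2 = 1. Returns the combined
    one-sided p-value."""
    w2 = np.sqrt(1.0 - w1**2)
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return 1 - norm.cdf(z)
```

With equal weights and p1 = p2 = 0.5 the combination is again 0.5, while two moderately small stage-wise p-values combine into strong overall evidence.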

17.
This paper addresses treatment effect heterogeneity (also referred to, more compactly, as 'treatment heterogeneity') in the context of a controlled clinical trial with binary endpoints. Treatment heterogeneity, variation in the true (causal) individual treatment effects, is explored using the concept of the potential outcome. This framework supposes the existence of latent responses for each subject corresponding to each possible treatment. In the context of a binary endpoint, treatment heterogeneity may be represented by the parameter π2, the probability that an individual would have a failure on the experimental treatment, if received, and a success on control, if received. Previous research derived bounds for π2 based on matched-pairs data. The present research extends this method to the blocked-data context. Estimates (and their variances) and confidence intervals for the bounds are derived. We apply the new method to data from a renal disease clinical trial. In this example, the bounds based on the blocked data are narrower than the corresponding bounds based only on the marginal success proportions. Some remaining challenges (including the possibility of further reducing bound widths) are discussed.
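The "marginal" bounds that the blocked-data bounds improve upon are simple Fréchet-type bounds computed from the two marginal probabilities alone; writing p_f for the failure probability on treatment and p_s for the success probability on control:

```python
def pi2_bounds(p_fail_treat, p_succ_ctrl):
    """Frechet-type bounds on pi2 = P(failure on treatment AND success on
    control) from the two marginal probabilities alone -- the 'marginal'
    bounds that the blocked-data bounds in the paper are designed to
    tighten."""
    lo = max(0.0, p_fail_treat + p_succ_ctrl - 1.0)
    hi = min(p_fail_treat, p_succ_ctrl)
    return lo, hi
```

The lower bound is nonzero only when the two marginals sum to more than one, which is one reason marginal bounds alone are often too wide to be informative.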

18.
In oncology studies with immunotherapies, populations of “super-responders” (patients in whom the treatment works particularly well) are often suspected to be related to biomarkers. In this paper, we explore various ways of performing confirmatory statistical hypothesis testing for joint inference on the subpopulation of putative “super-responders” and the full study population. A model-based testing framework is proposed that allows the strength of evidence required from both the full population and the subpopulation to be defined up-front in terms of clinical efficacy. This framework is based on a two-way analysis of variance (ANOVA) model with an interaction, in combination with multiple comparison procedures. The ease of implementation of this model-based approach is emphasized, and details are provided for the practitioner who would like to adopt it. The discussion is exemplified by a hypothetical trial that uses an immune marker in oncology to define the subpopulation and tumor growth as the primary endpoint.

19.
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to statistically test for non-inferiority rather than for superiority. When the endpoint is binary, one usually compares two treatments using either an odds ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts. One first defines the non-inferiority margin using an odds ratio and one ultimately proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known (with good precision) and high (e.g., a success rate above 56%). The gain in power may in turn lead to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
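The conversion at the heart of the mixed approach, turning an odds-ratio margin ψ into a difference-of-proportions margin at a given reference success rate, can be sketched as follows; the 0.80 reference rate and ψ = 0.5 below are illustrative.

```python
def margin_from_or(p_ref, psi):
    """Translate a non-inferiority margin stated as an odds ratio psi into
    the corresponding difference-of-proportions margin at a reference
    success rate p_ref (the scale on which non-inferiority is then tested)."""
    odds = p_ref / (1 - p_ref) * psi      # lowest acceptable odds of success
    p_margin = odds / (1 + odds)          # lowest acceptable success rate
    return p_ref - p_margin               # implied difference margin

delta = margin_from_or(0.80, psi=0.5)     # illustrative: 80% reference rate, OR margin 0.5
```

The implied difference margin depends strongly on the reference success rate, which is why the relative performance of the two scales changes with the established treatment's efficacy.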

20.
ABSTRACT: BACKGROUND: Antithrombotic treatment is a continuous therapy that is often managed in general practice and requires careful safety management. The aim of this study is to investigate whether a best-practice model that applies major elements of case management, including patient education, can improve antithrombotic management in primary health care in terms of reducing major thromboembolic and bleeding events. METHODS: This 24-month cluster-randomized trial will be performed in 690 adult patients from 46 practices. The trial intervention will be a complex intervention involving general practitioners, health care assistants and patients with an indication for oral anticoagulation. To assess adherence to medication and symptoms in patients, as well as to detect complications early, health care assistants will be trained in case management and will use the Coagulation-Monitoring-List (Co-MoL) to regularly monitor patients. Patients will receive information (leaflets and a video) and treatment monitoring via the Co-MoL, and will be encouraged to perform self-management. Patients in the control group will continue to receive treatment as usual from their general practitioners. The primary endpoint is the combined endpoint of all thromboembolic events requiring hospitalization and all major bleeding complications. Secondary endpoints are mortality, hospitalization, strokes, major bleeding and thromboembolic complications, severe treatment interactions, the number of adverse events, quality of anticoagulation, health-related quality of life, and costs. Further secondary objectives will be investigated to explain the mechanism by which the intervention is effective: patients' assessment of chronic illness care, self-reported adherence to medication, general practitioners' and health care assistants' knowledge, and patients' knowledge and satisfaction with shared decision making. Practice recruitment is expected to take place between July and December 2012. Recruitment of eligible patients will start in July 2012. Assessment will occur at three time points: baseline (T0), follow-up after 12 months (T1) and after 24 months (T2). DISCUSSION: The efficacy and effectiveness of individual elements of the intervention, such as antithrombotic interventions, self-management concepts in orally anticoagulated patients, and the methodological tool of case management, have already been demonstrated extensively. This project combines several proven instruments, and we therefore expect a reduction in the major complications associated with antithrombotic treatment.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号