Similar Articles
20 similar articles found.
1.
Yujie Zhao  Rui Tang  Yeting Du  Ying Yuan 《Biometrics》2023,79(2):1459-1471
In the era of targeted therapies and immunotherapies, the traditional drug development paradigm of testing one drug at a time in one indication has become increasingly inefficient. Motivated by a real-world application, we propose a master-protocol-based Bayesian platform trial design with mixed endpoints (PDME) to simultaneously evaluate multiple drugs in multiple indications, where different subsets of efficacy measures (e.g., objective response and landmark progression-free survival) may be used by different indications as single or multiple endpoints. We propose a Bayesian hierarchical model to accommodate mixed endpoints and reflect the trial structure of indications nested within treatments. We develop a two-stage approach that first clusters the indications into homogeneous subgroups and then applies the Bayesian hierarchical model to each subgroup to achieve precision information borrowing. Patients are enrolled in a group-sequential way and adaptively assigned to treatments according to their efficacy estimates. At each interim analysis, the posterior probabilities that the treatment effect exceeds prespecified clinically relevant thresholds are used to drop ineffective treatments and "graduate" effective treatments. Simulations show that the PDME design has desirable operating characteristics compared to existing methods.
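The interim decision rule described above can be sketched in a few lines. This is an illustrative Beta-Binomial simplification, not the authors' hierarchical PDME model: the Beta(0.5, 0.5) prior, the null response rate `p0 = 0.2`, and the graduation/drop cutoffs are all hypothetical choices.

```python
import math

def beta_sf(x, a, b, n_grid=20000):
    """Pr(p > x) for p ~ Beta(a, b), via midpoint-rule integration (stdlib only)."""
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    step = (1.0 - x) / n_grid
    total = 0.0
    for i in range(n_grid):
        p = x + (i + 0.5) * step
        total += math.exp(logc + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))
    return min(total * step, 1.0)

def interim_decision(responders, n, p0=0.2, graduate_cut=0.975, drop_cut=0.05):
    """Map the posterior Pr(response rate > p0), under a Beta(0.5, 0.5) prior,
    to an interim decision: graduate, drop, or continue enrollment."""
    post_prob = beta_sf(p0, 0.5 + responders, 0.5 + n - responders)
    if post_prob > graduate_cut:
        return "graduate"
    if post_prob < drop_cut:
        return "drop"
    return "continue"
```

A strong signal (14/30 responders) graduates, a weak one (1/30) is dropped, and intermediate data keep the arm enrolling.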

2.
There is growing interest in integrated Phase I/II oncology clinical trials involving molecularly targeted agents (MTAs). One of the main challenges of these trials is nontrivial dose-efficacy relationships and the administration of MTAs in combination with other agents. While some designs were recently proposed for such Phase I/II trials, the majority of them consider only binary toxicity and efficacy endpoints. At the same time, a continuous efficacy endpoint can carry more information about the agent's mechanism of action, but corresponding designs have received very limited attention in the literature. In this work, an extension of a recently developed information-theoretic design to the case of a continuous efficacy endpoint is proposed. The design transforms the continuous outcome using the logistic transformation and uses an information-theoretic argument to govern selection during the trial. The performance of the design is investigated in settings of single-agent and dual-agent trials. It is found that the novel design leads to substantial improvements in operating characteristics compared to a model-based alternative under scenarios with nonmonotonic dose/combination-efficacy relationships. The robustness of the design to missing or delayed efficacy responses and to correlation between the toxicity and efficacy endpoints is also investigated.
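A minimal sketch of the transform-then-select idea: a logistic map squashes the continuous efficacy outcome onto (0, 1), and selection proceeds among toxicity-admissible doses. The centre `y0`, scale `s`, toxicity cap, and the mean-transformed-efficacy selection rule are all illustrative assumptions; the paper's actual information-theoretic criterion is not reproduced here.

```python
import math

def logistic_transform(y, y0=0.0, s=1.0):
    """Map a continuous efficacy outcome onto (0, 1) via the logistic function."""
    return 1.0 / (1.0 + math.exp(-(y - y0) / s))

def select_dose(doses, tox_limit=0.30):
    """Among doses whose estimated toxicity probability is below the cap,
    pick the one with the highest mean transformed efficacy."""
    admissible = [d for d in doses if d["tox"] <= tox_limit]
    if not admissible:
        return None
    return max(admissible,
               key=lambda d: sum(logistic_transform(y) for y in d["eff"]) / len(d["eff"]))
```

Because the logistic map is monotone, the admissible dose with the larger raw efficacy outcomes is selected, while overly toxic doses are excluded outright.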

3.
In many clinical trials, multiple time‐to‐event endpoints including the primary endpoint (e.g., time to death) and secondary endpoints (e.g., progression‐related endpoints) are commonly used to determine treatment efficacy. These endpoints are often biologically related. This work is motivated by a study of bone marrow transplant (BMT) for leukemia patients, who may experience the acute graft‐versus‐host disease (GVHD), relapse of leukemia, and death after an allogeneic BMT. The acute GVHD is associated with the relapse free survival, and both the acute GVHD and relapse of leukemia are intermediate nonterminal events subject to dependent censoring by the informative terminal event death, but not vice versa, giving rise to survival data that are subject to two sets of semi‐competing risks. It is important to assess the impacts of prognostic factors on these three time‐to‐event endpoints. We propose a novel statistical approach that jointly models such data via a pair of copulas to account for multiple dependence structures, while the marginal distribution of each endpoint is formulated by a Cox proportional hazards model. We develop an estimation procedure based on pseudo‐likelihood and carry out simulation studies to examine the performance of the proposed method in finite samples. The practical utility of the proposed method is further illustrated with data from the motivating example.  相似文献   

4.
A surrogate endpoint is an endpoint that is obtained sooner, at lower cost, or less invasively than the true endpoint for a health outcome and is used to draw conclusions about the effect of intervention on the true endpoint. In the meta-analytic approach considered here, each previous trial with surrogate and true endpoints contributes an estimate of the predicted effect of intervention on the true endpoint in the trial of interest, based on the surrogate endpoint observed in the trial of interest. These predicted quantities are combined in a simple random-effects meta-analysis to estimate the predicted effect of intervention on the true endpoint in the trial of interest. Validation involves comparing the average prediction error of this approach with (i) the average prediction error of a standard meta-analysis using only true endpoints in the other trials and (ii) the average clinically meaningful difference in true endpoints implicit in the trials. Validation is illustrated using data from multiple randomized trials of patients with advanced colorectal cancer, in which the surrogate endpoint was tumor response and the true endpoint was median survival time.
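The pooling step above ("a simple random-effects meta-analysis") can be sketched with inverse-variance weights and a DerSimonian-Laird estimate of between-trial variance. The choice of the DL estimator is an assumption for illustration, not necessarily the paper's.

```python
import math

def dl_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-trial predicted effects.
    Returns (pooled estimate, standard error)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = 0.0 if c <= 0 else max(0.0, (q - df) / c)  # between-trial variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se
```

With homogeneous inputs the estimator collapses to fixed-effect pooling; heterogeneity inflates `tau2` and hence the standard error.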

5.
Basket trials simultaneously evaluate the effect of one or more drugs on a defined biomarker, genetic alteration, or molecular target in a variety of disease subtypes, often called strata. A conventional approach for analyzing such trials is an independent analysis of each stratum. This analysis can be inefficient, as it lacks the power to detect drug effects within individual strata. To address this issue, various designs for basket trials have been proposed, centering on designs using Bayesian hierarchical models. In this article, we propose a novel Bayesian basket trial design that incorporates predictive sample size determination, early termination for inefficacy and efficacy, and the borrowing of information across strata. The borrowing of information is based on the similarity between the posterior distributions of the response probability. In general, Bayesian hierarchical models entail many distributional assumptions along with multiple parameters. By contrast, our method has prior distributions for the response probability and two parameters for the similarity of distributions. The proposed design is easier to implement and less computationally demanding than other Bayesian basket designs. Through simulations under various scenarios, the proposed design is compared with other designs, including one that does not borrow information and one that uses a Bayesian hierarchical model.
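A toy illustration of borrowing driven by posterior similarity: the overlap coefficient between two Beta posteriors serves as the similarity measure, and a simple overlap-weighted shrinkage stands in for the paper's two-parameter rule. Both choices are assumptions made for the sketch.

```python
import math

def beta_logpdf(p, a, b):
    """Log density of Beta(a, b) at p."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

def posterior_overlap(a1, b1, a2, b2, n_grid=4000):
    """Overlap coefficient (integral of the pointwise minimum of the two
    densities): 1 = identical posteriors, near 0 = clearly different strata."""
    step = 1.0 / n_grid
    total = 0.0
    for i in range(n_grid):
        p = (i + 0.5) * step
        total += min(math.exp(beta_logpdf(p, a1, b1)),
                     math.exp(beta_logpdf(p, a2, b2))) * step
    return min(total, 1.0)

def borrowed_mean(a1, b1, a2, b2):
    """Shrink stratum 1's posterior mean toward the two-stratum average
    in proportion to the posterior overlap."""
    w = posterior_overlap(a1, b1, a2, b2)
    m1, m2 = a1 / (a1 + b1), a2 / (a2 + b2)
    return (1 - w) * m1 + w * (m1 + m2) / 2
```

Similar posteriors borrow heavily; disparate posteriors leave each stratum's estimate essentially untouched.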

6.
An analysis for transient states with application to tumor shrinkage
N R Temkin 《Biometrics》1978,34(4):571-580
The evaluation of therapies for chronic diseases is often based on the frequency and/or the duration of improvement. Treated separately, these endpoints may give contradictory impressions of the efficacy of the therapy. We propose a more unified method of summarizing improvement-related data: the probability of being in response (i.e., improved) as a function of time. Although improvement is not the only endpoint considered in most trials, and this function will not always provide a clear answer to the question of which treatment has better improvement-related characteristics, it does combine information on several endpoints usually considered separately into a single, easily interpreted item. This function is estimated by maximum likelihood under a distribution-free stochastic model of times to improvement and failure. Censored observations are taken into account. A detailed example using data from a cancer clinical trial is presented.
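A naive empirical version of the probability-of-being-in-response function: a patient counts at time t if improvement occurred by t and the response has not yet failed. Unlike Temkin's maximum-likelihood estimator, this sketch ignores censoring, and the `(time_of_improvement_or_None, time_of_failure)` data layout is an illustrative assumption.

```python
def prob_in_response(t, patients):
    """Empirical Pr(in response at time t).

    patients: list of (t_improve, t_fail) tuples; t_improve is None for
    patients who never improved, and t_fail is the time response (or the
    patient) failed. Censoring is ignored in this sketch."""
    in_resp = sum(1 for (t_imp, t_fail) in patients
                  if t_imp is not None and t_imp <= t < t_fail)
    return in_resp / len(patients)
```

Evaluated over a time grid this traces the "in response" curve that combines frequency and duration of improvement in one summary.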

7.
Weight-of-evidence is the process by which multiple measurement endpoints are related to an assessment endpoint to evaluate whether significant risk of harm is posed to the environment. In this paper, a methodology is offered for reconciling or balancing multiple lines of evidence pertaining to an assessment endpoint. Weight-of-evidence is reflected in three characteristics of measurement endpoints: (a) the weight assigned to each measurement endpoint; (b) the magnitude of response observed in the measurement endpoint; and (c) the concurrence among outcomes of multiple measurement endpoints. First, weights are assigned to measurement endpoints based on attributes related to: (a) strength of association between assessment and measurement endpoints; (b) data quality; and (c) study design and execution. Second, the magnitude of response in the measurement endpoint is evaluated with respect to whether it indicates the presence or absence of harm and, where harm is indicated, its magnitude. Third, concurrence among measurement endpoints is evaluated by plotting the findings of the two preceding steps on a matrix for each measurement endpoint evaluated. The matrix allows easy visual examination of agreements or divergences among measurement endpoints, facilitating interpretation of the collection of measurement endpoints with respect to the assessment endpoint. A qualitative adaptation of the weight-of-evidence approach is also presented.

8.
BACKGROUND: Surrogate measures for cardiovascular disease events have the potential to greatly increase the efficiency of clinical trials. A leading candidate for such a surrogate is the progression of intima-media thickness (IMT) of the carotid artery; much experience has been gained with this endpoint in trials of HMG-CoA reductase inhibitors (statins). METHODS AND RESULTS: We examine two separate systems of criteria, based on clinical and statistical arguments, that have been proposed to define surrogate endpoints. We use published results and a formal meta-analysis to evaluate whether progression of carotid IMT meets these criteria for statins. IMT meets clinically based criteria to serve as a surrogate endpoint for cardiovascular events in statin trials, based on relative efficiency, linkage to endpoints, and congruency of effects. Results from a meta-analysis and post-trial follow-up from a single published study suggest that IMT meets established statistical criteria by accounting for intervention effects in regression models. CONCLUSION: Carotid IMT progression meets accepted definitions of a surrogate for cardiovascular disease endpoints in statin trials. This does not, however, establish that it may serve universally as a surrogate marker in trials of other agents.

9.
Neoadjuvant endocrine therapy trials for breast cancer are now a widely accepted investigational approach for oncology cooperative group and pharmaceutical company research programs. However, there remains considerable uncertainty regarding the most suitable endpoints for these studies, in part, because short-term clinical, radiological or biomarker responses have not been fully validated as surrogate endpoints that closely relate to long-term breast cancer outcome. This shortcoming must be addressed before neoadjuvant endocrine treatment can be used as a triage strategy designed to identify patients with endocrine therapy “curable” disease. In this summary, information from published studies is used as a basis to critique clinical trial designs and to suggest experimental endpoints for future validation studies. Three aspects of neoadjuvant endocrine therapy designs are considered: the determination of response; the assessment of surgical outcomes; and biomarker endpoint analysis. Data from the letrozole 024 (LET 024) trial that compared letrozole and tamoxifen is used to illustrate a combined endpoint analysis that integrates both clinical and biomarker information. In addition, the concept of a “cell cycle response” is explored as a simple post-treatment endpoint based on Ki67 analysis that might have properties similar to the pathological complete response endpoint used in neoadjuvant chemotherapy trials.

10.
Ghosh D 《Biometrics》2009,65(2):521-529
There has been a recent emphasis on the identification of biomarkers and other biologic measures that may be potentially used as surrogate endpoints in clinical trials. We focus on the setting of data from a single clinical trial. In this article, we consider a framework in which the surrogate must occur before the true endpoint. This suggests viewing the surrogate and true endpoints as semicompeting risks data; this approach is new to the literature on surrogate endpoints and leads to an asymmetrical treatment of the surrogate and true endpoints. However, such a data structure also conceptually complicates many of the previously considered measures of surrogacy in the literature. We propose novel estimation and inferential procedures for the relative effect and adjusted association quantities proposed by Buyse and Molenberghs (1998, Biometrics 54, 1014–1029). The proposed methodology is illustrated with application to simulated data, as well as to data from a leukemia study.

11.
The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as 'valid.' However, little consideration has been given to how a trial that utilizes a newly validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multitrial historical information on the validated relationship between the surrogate and clinical endpoints, and then evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively, perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O'Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly validated surrogate endpoint for overall survival.

12.
Xu J  Zeger SL 《Biometrics》2001,57(1):81-87
Surrogate endpoints are desirable because they typically result in smaller, faster efficacy studies than those using the clinical endpoints. Research on surrogate endpoints has received substantial attention lately, but most investigations have focused on the validity of using a single biomarker as a surrogate. Our paper studies whether the use of multiple markers can improve inferences about a treatment's effect on a clinical endpoint. We propose a joint model for the time to a clinical event and for repeated measures over time on multiple biomarkers that are potential surrogates. This model extends the formulations of Xu and Zeger (2001, in press) and Faucett and Thomas (1996, Statistics in Medicine 15, 1663-1685). We propose two complementary measures of the relative benefit of multiple surrogates as opposed to a single one. Markov chain Monte Carlo is implemented to estimate model parameters. The methodology is illustrated with an analysis of data from a schizophrenia clinical trial.

13.
Colorectal cancer is the second leading cause of cancer-related deaths in the United States, with more than 130,000 new cases diagnosed each year. Clinical studies have shown that genetic alterations lead to different responses to the same treatment, despite the morphologic similarities of tumors. A molecular test prior to treatment could help in determining an optimal treatment for a patient with regard to both toxicity and efficacy. This article introduces a statistical method for predicting and comparing multiple endpoints given different treatment options and the molecular profile of an individual. A latent variable-based multivariate regression model with a structured variance-covariance matrix is considered. The latent variables account for the correlated nature of multiple endpoints and accommodate the fact that some clinical endpoints are categorical variables and others are censored variables. The mixture normal hierarchical structure admits a natural variable selection rule. Inference is conducted by sampling from the posterior distribution via Markov chain Monte Carlo. We analyze the finite-sample properties of the proposed method using simulation studies. The application to the advanced colorectal cancer study revealed associations between multiple endpoints and particular biomarkers, demonstrating the potential of individualizing treatment based on genetic profiles.

14.
The validation of surrogate endpoints has been studied by Prentice (1989), who presented a definition as well as a set of criteria that are equivalent only if the surrogate and true endpoints are binary. Freedman et al. (1992) supplemented these criteria with the so-called 'proportion explained'. Buyse and Molenberghs (1998) proposed replacing the proportion explained by two quantities: (1) the relative effect, linking the effect of treatment on both endpoints, and (2) an individual-level measure of agreement between both endpoints. The latter quantity carries over when data are available on several randomized trials, while the former can be extended to a trial-level measure of agreement between the effects of treatment on both endpoints. This approach suggests a new method for the validation of surrogate endpoints and naturally leads to the prediction of the effect of treatment upon the true endpoint, given its observed effect upon the surrogate endpoint. These ideas are illustrated using data from two sets of multicenter trials: one comparing chemotherapy regimens for patients with advanced ovarian cancer, the other comparing interferon-alpha with placebo for patients with age-related macular degeneration.

15.
Multiple endpoints are tested to assess an overall treatment effect and also to identify which endpoints or subsets of endpoints contributed to treatment differences. Conventional p-value adjustment methods, such as single-step, step-up, or step-down procedures, sequentially identify each significant individual endpoint. Closed test procedures can also detect individual endpoints that have effects via a step-by-step closed strategy. This paper proposes a global-based statistic for testing an a priori number, say r, of the k endpoints, as opposed to the conventional approach of testing one endpoint (r = 1). The proposed test statistic is an extension of the single-step p-value-based statistic derived from the distribution of the smallest p-value. The test maintains strong control of the familywise error (FWE) rate under the null hypothesis of no difference in any (sub)set of r endpoints among all possible combinations of the k endpoints. After rejecting the null hypothesis, the individual endpoints in the rejected sets can be tested further, using a univariate test statistic in a second step, if desired. However, the second-step test controls the FWE only weakly. The proposed method is illustrated by application to a psychosis data set.
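Under independence, the single-step adjusted p-value based on the smallest of k p-values has a closed form, 1 - (1 - p_min)^k (a Šidák-type bound). The paper's statistic generalizes this to subsets of r endpoints and to correlated endpoints; this stdlib sketch shows only the independence case.

```python
def minp_adjusted(p_values):
    """Single-step adjusted p-value from the null distribution of the
    smallest of k independent p-values: Pr(min P <= p_min) = 1 - (1 - p_min)^k."""
    k = len(p_values)
    p_min = min(p_values)
    return 1.0 - (1.0 - p_min) ** k
```

For p-values (0.01, 0.5, 0.8), the adjusted value is 1 - 0.99³ ≈ 0.0297, still significant at the 5% level.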

16.
Regarding the paper “Sample size determination in clinical trials with multiple co-primary endpoints including mixed continuous and binary variables” by T. Sozu, T. Sugimoto, and T. Hamasaki, Biometrical Journal (2012) 54(5): 716-729. Article: http://dx.doi.org/10.1002/bimj.201100221. Authors' Reply: http://dx.doi.org/10.1002/bimj.201300032. That paper introduced a methodology for calculating the sample size in clinical trials with multiple mixed binary and continuous co-primary endpoints modeled by the so-called conditional grouped continuous model (CGCM). The purpose of this note is to clarify certain aspects of the methodology and to propose an alternative approach based on latent means tests for the binary endpoints. We demonstrate that our approach is more powerful, yielding smaller sample sizes at powers comparable to those reported in the paper.

17.
We consider the problem of comparing two treatments on multiple endpoints, where the goal is to identify the endpoints that have treatment effects while controlling the familywise error rate. Two current approaches for this are (i) applying a global test within a closed testing procedure and (ii) adjusting individual endpoint p-values for multiplicity. We propose combining the two methods and compare the combined method with several competing methods in a simulation study. It is concluded that the combined approach maintains higher power than the other methods under a variety of treatment-effect configurations and is thus more power-robust.

18.
Clinical trials are often concerned with the comparison of two treatment groups with multiple endpoints. As alternatives to the commonly used methods, the T2 test and the Bonferroni method, O'Brien (1984, Biometrics 40, 1079-1087) proposes tests based on statistics that are simple or weighted sums of the single endpoints. This approach turns out to be powerful if all treatment differences are in the same direction [compare Pocock, Geller, and Tsiatis (1987, Biometrics 43, 487-498)]. The disadvantage of these multivariate methods is that they are suitable only for demonstrating a global difference, whereas the clinician is further interested in which specific endpoints or sets of endpoints actually caused this difference. It is shown here that all tests are suitable for the construction of a closed multiple test procedure where, after the rejection of the global hypothesis, all lower-dimensional marginal hypotheses and finally the single hypotheses are tested step by step. This procedure controls the experimentwise error rate. It is just as powerful as the multivariate test and, in addition, it is possible to detect significant differences between the endpoints or sets of endpoints.
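O'Brien's unweighted sum statistic has a simple form: sum the per-endpoint z-statistics and standardize by the square root of the sum of all entries of their correlation matrix. A sketch (in practice the z-statistics and correlation matrix are estimated from the data):

```python
import math

def obrien_ols(z_stats, corr):
    """O'Brien-type global statistic: sum of per-endpoint z-statistics divided
    by the standard deviation of that sum, sqrt(sum of all correlation entries).
    Large positive values indicate a consistent treatment benefit."""
    k = len(z_stats)
    denom = math.sqrt(sum(corr[i][j] for i in range(k) for j in range(k)))
    return sum(z_stats) / denom
```

With four independent endpoints each at z = 1, the global statistic is 4 / √4 = 2, already suggestive even though no single endpoint is; positive correlation inflates the denominator and weakens the pooled evidence.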

19.
Many assessment instruments used in the evaluation of toxicity, safety, pain, or disease progression consider multiple ordinal endpoints to fully capture the presence and severity of treatment effects. Contingency tables underlying these correlated responses are often sparse and imbalanced, rendering asymptotic results unreliable or model fitting prohibitively complex without overly simplistic assumptions on the marginal and joint distributions. Instead of a modeling approach, we look at stochastic order and marginal inhomogeneity as an expression or manifestation of a treatment effect under much weaker assumptions. Often, endpoints are grouped together into physiological domains or by the body function they describe. We derive tests based on these subgroups, which might supplement or replace the individual endpoint analysis because they are more powerful. The permutation or bootstrap distribution is used throughout to obtain global, subgroup, and individual significance levels, as it naturally incorporates the correlation among endpoints. We provide a theorem that establishes a connection between marginal homogeneity and the stronger exchangeability assumption under the permutation approach. Multiplicity adjustments for the individual endpoints are obtained via stepdown procedures, while subgroup significance levels are adjusted via the full closed testing procedure. The proposed methodology is illustrated using a collection of 25 correlated ordinal endpoints, grouped into six domains, to evaluate the toxicity of a chemical compound.

20.
Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from the historical trials and the surrogate endpoint of the target trial, and (2) a random extrapolation error estimated by successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted rather than a true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download.
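The leave-one-trial-out estimation of extrapolation error can be sketched with the linear prediction model (one of the model choices the abstract mentions): each trial is held out in turn, a surrogate-to-true-effect line is fit on the remaining trials by least squares, and the prediction error on the held-out trial is recorded.

```python
def fit_line(points):
    """Least-squares line y = a + b*x through (surrogate, true) effect pairs."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def loo_errors(trials):
    """Leave-one-trial-out prediction errors (predicted minus observed true
    effect) for a list of (surrogate_effect, true_effect) pairs."""
    errors = []
    for i, (x_i, y_i) in enumerate(trials):
        a, b = fit_line(trials[:i] + trials[i + 1:])
        errors.append(a + b * x_i - y_i)
    return errors
```

The spread of these errors is what feeds the "random extrapolation error" term: trials lying exactly on a common line produce zero error, while scatter about the line widens the predicted-result interval.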
