Similar Articles
1.
We discuss the strengths and weaknesses of the meta-analytic approach to estimating the effect of a new treatment on a true clinical outcome measure, T, from the effect of treatment on a surrogate response, S. The meta-analytic approach (see Daniels and Hughes (1997) 16, 1965-1982) uses data from a series of previous studies of interventions similar to the new treatment. The data are used to estimate relationships between summary measures of treatment effects on T and S that can be used to infer the magnitude of the effect of the new treatment on T from its effects on S. We extend the class of models to cover a broad range of applications in which the parameters define features of the marginal distribution of (T, S). We present a new bootstrap procedure to allow for the variability in estimating the distribution that governs the between-study variation. Ignoring this variability can lead to confidence intervals that are much too narrow. The meta-analytic approach relies on quite different data and assumptions than procedures that depend, for example, on the conditional independence, at the individual level, of treatment and T, given S (see Prentice (1989) 8, 431-440). Meta-analytic calculations in this paper can be used to determine whether a new study, based only on S, will yield estimates of the treatment effect on T that are precise enough to be useful. Compared to direct measurement on T, the meta-analytic approach has a number of limitations, including likely serious loss of precision and difficulties in defining the class of previous studies to be used to predict the effects on T for a new intervention.
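The core calculation can be illustrated with a small sketch: regress trial-level treatment effects on T against those on S across historical trials, predict the effect on T for a new study from its observed effect on S, and bootstrap over trials so the interval reflects between-study variability. This is a minimal illustration with made-up effect estimates, not the exact model of this paper or of Daniels and Hughes (1997).

```python
# Minimal sketch of a trial-level meta-analytic prediction with a bootstrap over
# trials (hypothetical effects; not the authors' exact model).
import numpy as np

rng = np.random.default_rng(0)

# Estimated treatment effects on the surrogate (beta_S) and true endpoint (beta_T)
# from eight hypothetical historical trials of similar interventions.
beta_S = np.array([0.20, 0.35, 0.10, 0.50, 0.28, 0.42, 0.15, 0.38])
beta_T = np.array([0.15, 0.30, 0.05, 0.41, 0.22, 0.37, 0.12, 0.33])

def predict_effect_on_T(bS, bT, bS_new):
    """Fit beta_T ~ a + b*beta_S across trials and predict for a new trial."""
    b, a = np.polyfit(bS, bT, 1)
    return a + b * bS_new

bS_new = 0.30                                  # observed surrogate effect in the new trial
point = predict_effect_on_T(beta_S, beta_T, bS_new)

# Bootstrap over trials so the interval reflects between-study variability;
# ignoring this source of uncertainty makes the interval far too narrow.
boot = []
K = len(beta_S)
for _ in range(2000):
    idx = rng.integers(0, K, size=K)
    if np.ptp(beta_S[idx]) == 0:               # degenerate resample, skip
        continue
    boot.append(predict_effect_on_T(beta_S[idx], beta_T[idx], bS_new))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"predicted effect on T: {point:.3f}  (95% bootstrap interval {lo:.3f}, {hi:.3f})")
```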

2.
The validation of surrogate endpoints has been studied by Prentice (1989). He presented a definition as well as a set of criteria, which are equivalent only if the surrogate and true endpoints are binary. Freedman et al. (1992) supplemented these criteria with the so-called 'proportion explained'. Buyse and Molenberghs (1998) proposed replacing the proportion explained by two quantities: (1) the relative effect linking the effect of treatment on both endpoints and (2) an individual-level measure of agreement between both endpoints. The latter quantity carries over when data are available on several randomized trials, while the former can be extended to be a trial-level measure of agreement between the effects of treatment on both endpoints. This approach suggests a new method for the validation of surrogate endpoints, and naturally leads to the prediction of the effect of treatment upon the true endpoint, given its observed effect upon the surrogate endpoint. These ideas are illustrated using data from two sets of multicenter trials: one comparing chemotherapy regimens for patients with advanced ovarian cancer, the other comparing interferon-alpha with placebo for patients with age-related macular degeneration.
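As a rough illustration of the two single-trial quantities mentioned above, the relative effect can be taken as the ratio of the estimated treatment effects on the true and surrogate endpoints, and the individual-level association as the correlation between the two endpoints after removing the treatment effect from each. The sketch below uses simulated continuous endpoints and simplified estimators; it is not the bivariate model of Buyse and Molenberghs (1998).

```python
# Hedged sketch of relative effect and treatment-adjusted association for
# continuous endpoints in a single trial (simulated data, simplified estimators).
import numpy as np

rng = np.random.default_rng(1)
n = 400
Z = rng.integers(0, 2, n)                    # randomized treatment indicator
S = 0.5 * Z + rng.normal(size=n)             # surrogate endpoint
T = 0.4 * Z + 0.8 * S + rng.normal(size=n)   # true endpoint

alpha = S[Z == 1].mean() - S[Z == 0].mean()  # treatment effect on S
beta  = T[Z == 1].mean() - T[Z == 0].mean()  # treatment effect on T
relative_effect = beta / alpha               # ratio of the two treatment effects

# Individual-level association: correlation of S and T after subtracting the
# estimated treatment effect from each endpoint (residual correlation).
S_res = S - alpha * Z
T_res = T - beta * Z
adjusted_assoc = np.corrcoef(S_res, T_res)[0, 1]

print(f"relative effect ~ {relative_effect:.2f}, adjusted association ~ {adjusted_assoc:.2f}")
```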

3.
Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download.

4.
Zigler CM, Belin TR. Biometrics 2012; 68(3): 922-932
Summary: The literature on potential outcomes has shown that traditional methods for characterizing surrogate endpoints in clinical trials based only on observed quantities can fail to capture causal relationships between treatments, surrogates, and outcomes. Building on the potential-outcomes formulation of a principal surrogate, we introduce a Bayesian method to estimate the causal effect predictiveness (CEP) surface and quantify a candidate surrogate's utility for reliably predicting clinical outcomes. In considering the full joint distribution of all potentially observable quantities, our Bayesian approach has the following features. First, our approach illuminates implicit assumptions embedded in previously-used estimation strategies that have been shown to result in poor performance. Second, our approach provides tools for making explicit and scientifically-interpretable assumptions regarding associations about which observed data are not informative. Through simulations based on an HIV vaccine trial, we found that the Bayesian approach can produce estimates of the CEP surface with improved performance compared to previous methods. Third, our approach can extend principal-surrogate estimation beyond the previously considered setting of a vaccine trial where the candidate surrogate is constant in one arm of the study. We illustrate this extension through an application to an AIDS therapy trial where the candidate surrogate varies in both treatment arms.

5.
When the true end points (T) are difficult or costly to measure, surrogate markers (S) are often collected in clinical trials to help predict the effect of the treatment (Z). There is great interest in understanding the relationship among S, T, and Z. A principal stratification (PS) framework has been proposed by Frangakis and Rubin (2002) to study their causal associations. In this paper, we extend the framework to a multiple trial setting and propose a Bayesian hierarchical PS model to assess surrogacy. We apply the method to data from a large collection of colon cancer trials in which S and T are binary. We obtain the trial-specific causal measures among S, T, and Z, as well as their overall population-level counterparts that are invariant across trials. The method allows for information sharing across trials and reduces the nonidentifiability problem. We examine the frequentist properties of our model estimates and the impact of the monotonicity assumption using simulations. We also illustrate the challenges in evaluating surrogacy in the counterfactual framework that result from nonidentifiability.

6.
Gilbert PB, Hudgens MG. Biometrics 2008; 64(4): 1146-1154
SUMMARY: Frangakis and Rubin (2002, Biometrics 58, 21-29) proposed a new definition of a surrogate endpoint (a "principal" surrogate) based on causal effects. We introduce an estimand for evaluating a principal surrogate, the causal effect predictiveness (CEP) surface, which quantifies how well causal treatment effects on the biomarker predict causal treatment effects on the clinical endpoint. Although the CEP surface is not identifiable due to missing potential outcomes, it can be identified by incorporating a baseline covariate(s) that predicts the biomarker. Given case-cohort sampling of such a baseline predictor and the biomarker in a large blinded randomized clinical trial, we develop an estimated likelihood method for estimating the CEP surface. This estimation assesses the "surrogate value" of the biomarker for reliably predicting clinical treatment effects for the same or similar setting as the trial. A CEP surface plot provides a way to compare the surrogate value of multiple biomarkers. The approach is illustrated by the problem of assessing an immune response to a vaccine as a surrogate endpoint for infection.

7.
One of the key features of network meta-analysis is ranking of interventions according to outcomes of interest. Ranking metrics are prone to misinterpretation because of two limitations associated with the current ranking methods. First, differences in relative treatment effects might not be clinically important, and this is not reflected in the ranking metrics. Second, there are no established methods to include several health outcomes in the ranking assessments. To address these two issues, we extended the P-score method to allow for multiple outcomes and modified it to measure the mean extent of certainty that a treatment is better than the competing treatments by a certain amount, for example, the minimum clinically important difference. We suggest presenting the tradeoff between beneficial and harmful outcomes, allowing stakeholders to consider how much adverse effect they are willing to tolerate for specific gains in efficacy. We used a published network of 212 trials comparing 15 antipsychotics and placebo using a random effects network meta-analysis model, focusing on three outcomes: reduction in symptoms of schizophrenia on a standardized scale, all-cause discontinuation, and weight gain.
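A hedged sketch of the single-outcome building block: given posterior (or resampled) draws of relative effects versus a common comparator, the modified P-score of a treatment is the mean probability, across competitors, that it is better by at least a minimum clinically important difference. The treatment names, draws, and MCID below are made up; this is not the paper's multi-outcome extension.

```python
# Illustrative modified P-score for one outcome: mean probability of being better
# than each competitor by at least an MCID (hypothetical posterior draws).
import numpy as np

rng = np.random.default_rng(2)
# Posterior draws of treatment effects vs placebo (higher = better), three hypothetical drugs.
draws = {
    "drug_A": rng.normal(0.50, 0.10, 5000),
    "drug_B": rng.normal(0.40, 0.12, 5000),
    "drug_C": rng.normal(0.35, 0.08, 5000),
}
mcid = 0.05  # assumed minimum clinically important difference on this scale

def modified_p_score(name, draws, mcid):
    others = [k for k in draws if k != name]
    probs = [np.mean(draws[name] - draws[k] > mcid) for k in others]
    return float(np.mean(probs))

for name in draws:
    print(name, round(modified_p_score(name, draws, mcid), 3))
```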

8.
Valid surrogate endpoints S can be used as a substitute for a true outcome of interest T to measure treatment efficacy in a clinical trial. We propose a causal inference approach to validate a surrogate by incorporating longitudinal measurements of the true outcomes using a mixed modeling approach, and we define models and quantities for validation that may vary across the study period using principal surrogacy criteria. We consider a surrogate-dependent treatment efficacy curve that allows us to validate the surrogate at different time points. We extend these methods to accommodate a delayed-start treatment design where all patients eventually receive the treatment. Not all parameters are identified in the general setting. We apply a Bayesian approach for estimation and inference, utilizing more informative prior distributions for selected parameters. We consider the sensitivity of these prior assumptions as well as assumptions of independence among certain counterfactual quantities conditional on pretreatment covariates to improve identifiability. We examine the frequentist properties (bias of point and variance estimates, credible interval coverage) of a Bayesian imputation method. Our work is motivated by a clinical trial of a gene therapy where the functional outcomes are measured repeatedly throughout the trial.

9.
The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as 'valid.' However, little consideration has been given to how a trial that utilizes a newly validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multitrial historical information on the validated relationship between the surrogate and clinical endpoints, then subsequently evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively, perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O'Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly validated surrogate endpoint for overall survival.

10.
One method for demonstrating disease modification is a delayed-start design, consisting of a placebo-controlled period followed by a delayed-start period wherein all patients receive active treatment. To address methodological issues in previous delayed-start approaches, we propose a new method that is robust across conditions of drug effect, discontinuation rates, and missing data mechanisms. We propose a modeling approach and test procedure to test the hypothesis of noninferiority, comparing the treatment difference at the end of the delayed-start period with that at the end of the placebo-controlled period. We conducted simulations to identify the optimal noninferiority testing procedure to ensure the method was robust across scenarios and assumptions, and to evaluate the appropriate modeling approach for analyzing the delayed-start period. We then applied this methodology to Phase 3 solanezumab clinical trial data for mild Alzheimer's disease patients. Simulation results showed that a testing procedure using a proportional noninferiority margin was robust for detecting disease-modifying effects under conditions of high and moderate discontinuation rates and under various missing data mechanisms. Using all data from all randomized patients in a single model over both the placebo-controlled and delayed-start study periods demonstrated good statistical performance. In analysis of solanezumab data using this methodology, the noninferiority criterion was met, indicating that the treatment difference at the end of the placebo-controlled studies was preserved at the end of the delayed-start period within a pre-defined margin. The proposed noninferiority method for delayed-start analysis controls the Type I error rate well and addresses many challenges posed by previous approaches. Delayed-start studies employing the proposed analysis approach could be used to provide evidence of a disease-modifying effect. This method has been communicated with the FDA and has been successfully applied to actual clinical trial data accrued from the Phase 3 clinical trials of solanezumab.
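The noninferiority comparison described above can be sketched as a one-sided test that a stated fraction of the placebo-controlled period difference is preserved at the end of the delayed-start period. The numbers, the margin fraction, and the simplification to independent normal estimates are all assumptions; this is not the trial's actual analysis model.

```python
# Hedged sketch of a proportional-margin noninferiority comparison in a
# delayed-start design (hypothetical estimates; independence is a simplification).
# H0: delta_delayed <= gamma * delta_placebo ; reject if the preserved fraction exceeds gamma.
from math import sqrt
from scipy.stats import norm

delta_placebo, se_placebo = 1.80, 0.45   # difference at end of placebo-controlled period
delta_delayed, se_delayed = 1.55, 0.50   # difference at end of delayed-start period
gamma = 0.5                              # assumed fraction of the earlier difference to preserve

diff = delta_delayed - gamma * delta_placebo
se = sqrt(se_delayed**2 + (gamma * se_placebo)**2)   # treats the two estimates as independent
z = diff / se
p_one_sided = 1 - norm.cdf(z)
print(f"z = {z:.2f}, one-sided p = {p_one_sided:.3f}")
```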

11.
Identifying effective and valid surrogate markers to make inference about a treatment effect on long-term outcomes is an important step in improving the efficiency of clinical trials. Replacing a long-term outcome with short-term and/or cheaper surrogate markers can potentially shorten study duration and reduce trial costs. There is a sizable statistical literature on methods to quantify the effectiveness of a single surrogate marker. Both parametric and nonparametric approaches have been well developed for different outcome types. However, when there are multiple markers available, methods for combining markers to construct a composite marker with improved surrogacy remain limited. In this paper, building on the optimal transformation framework of Wang et al. (2020), we propose a novel calibrated model fusion approach to optimally combine multiple markers to improve surrogacy. Specifically, we obtain two initial estimates of optimal composite scores of the markers based on two sets of models, with one set approximating the underlying data distribution and the other directly approximating the optimal transformation function. We then estimate an optimal calibrated combination of the two estimated scores, which ensures both validity of the final combined score and optimality with respect to the proportion of treatment effect explained by the final combined score. This approach is unique in that it identifies an optimal combination of the multiple surrogates without strictly relying on parametric assumptions, while borrowing modeling strategies to avoid fully nonparametric estimation, which is subject to the curse of dimensionality. Our identified optimal transformation can also be used to directly quantify the surrogacy of this identified combined score. Theoretical properties of the proposed estimators are derived, and the finite sample performance of the proposed method is evaluated through simulation studies. We further illustrate the proposed method using data from the Diabetes Prevention Program study.

12.
Summary: Four major frameworks have been developed for evaluating surrogate markers in randomized trials: one based on conditional independence of observable variables, another based on direct and indirect effects, a third based on a meta-analysis, and a fourth based on principal stratification. The first two of these fit into a paradigm we call the causal-effects (CE) paradigm, in which, for a good surrogate, the effect of treatment on the surrogate, combined with the effect of the surrogate on the clinical outcome, allow prediction of the effect of the treatment on the clinical outcome. The last two approaches fall into the causal-association (CA) paradigm, in which the effect of the treatment on the surrogate is associated with its effect on the clinical outcome. We consider the CE paradigm first, and consider identifying assumptions and some simple estimation procedures; we then consider the CA paradigm. We examine the relationships among these approaches and associated estimators. We perform a small simulation study to illustrate properties of the various estimators under different scenarios, and conclude with a discussion of the applicability of both paradigms.

13.
Ghosh D. Biometrics 2009; 65(2): 521-529
Summary: There has been a recent emphasis on the identification of biomarkers and other biologic measures that may be potentially used as surrogate endpoints in clinical trials. We focus on the setting of data from a single clinical trial. In this article, we consider a framework in which the surrogate must occur before the true endpoint. This suggests viewing the surrogate and true endpoints as semicompeting risks data; this approach is new to the literature on surrogate endpoints and leads to an asymmetrical treatment of the surrogate and true endpoints. However, such a data structure also conceptually complicates many of the previously considered measures of surrogacy in the literature. We propose novel estimation and inferential procedures for the relative effect and adjusted association quantities proposed by Buyse and Molenberghs (1998, Biometrics 54, 1014-1029). The proposed methodology is illustrated with application to simulated data, as well as to data from a leukemia study.

14.
Zhang L, Rosenberger WF. Biometrics 2006; 62(2): 562-569
We provide an explicit asymptotic method to evaluate the performance of different response-adaptive randomization procedures in clinical trials with continuous outcomes. We use this method to investigate four different response-adaptive randomization procedures. Their performance, especially in power and treatment assignment skewing to the better treatment, is thoroughly evaluated theoretically. These results are then verified by simulation. Our analysis concludes that the doubly adaptive biased coin design procedure targeting optimal allocation is the best one for practical use. We also consider the effect of delay in responses and nonstandard responses, for example, Cauchy distributed response. We illustrate our procedure by redesigning a real clinical trial.

15.
SUMMARY: We consider two-armed clinical trials in which the response and/or the covariates are observed on either a binary, ordinal, or continuous scale. A new general nonparametric (NP) approach for covariate adjustment is presented using the notion of a relative effect to describe treatment effects. The relative effect is defined by the probability of observing a higher response in the experimental than in the control arm. The notion is invariant under monotone transformations of the data and is therefore especially suitable for ordinal data. For a normal or binary distributed response the relative effect is the transformed effect size or the difference of response probability, respectively. An unbiased and consistent NP estimator for the relative effect is presented. Further, we suggest an NP procedure for correcting the relative effect for covariate imbalance and random covariate imbalance, yielding a consistent estimator for the adjusted relative effect. Asymptotic theory has been developed to derive test statistics and confidence intervals. The test statistic is based on the joint behavior of the estimated relative effect for the response and the covariates. It is shown that the test statistic can be used to evaluate the treatment effect in the presence of (random) covariate imbalance. Approximations for small sample sizes are considered as well. The sampling behavior of the estimator of the adjusted relative effect is examined. We also compare the probability of a type I error and the power of our approach to standard covariate adjustment methods by means of a simulation study. Finally, our approach is illustrated on three studies involving ordinal responses and covariates.
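The unadjusted relative effect described above, the probability that a randomly chosen patient in the experimental arm shows a higher response than one in the control arm (ties counted half), can be estimated directly from all between-arm pairs. The sketch below does this for an ordinal response with illustrative data and omits the covariate adjustment that is the paper's main contribution.

```python
# Minimal estimator of the unadjusted relative effect p = P(X_exp > X_ctl) + 0.5*P(tie)
# for an ordinal or continuous response (illustrative data; no covariate adjustment).
import numpy as np

control      = np.array([1, 2, 2, 3, 1, 2, 4, 3, 2, 1])   # ordinal scores, control arm
experimental = np.array([2, 3, 4, 3, 2, 4, 4, 3, 5, 2])   # ordinal scores, experimental arm

def relative_effect(ctl, exp):
    gt  = (exp[:, None] > ctl[None, :]).mean()    # fraction of pairs where experimental is higher
    tie = (exp[:, None] == ctl[None, :]).mean()   # fraction of tied pairs
    return gt + 0.5 * tie

p_hat = relative_effect(control, experimental)
print(f"estimated relative effect: {p_hat:.3f}")   # 0.5 corresponds to no treatment effect
```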

16.
A. B. Miller. CMAJ 1980; 122(7): 776-779
Prophylaxis is likely to be of increasing importance in the control of lung cancer. Areas already suggested as potential fields of investigation include the use of specific immunoprophylaxis or synthetic retinoids. Thorough evaluation of the effects and hazards of such prophylaxis will be required, and the most satisfactory approach is to conduct controlled prophylactic trials. Such trials must rest on the same basic principles as those established for therapeutic trials. However, new problems bearing on ethical requirements, the sampling procedure, and the practical conduct of the trials will arise. Compliance, adverse effects, end points for analysis of the prophylactic effects, and rules for when to stop a trial are discussed.

17.
Layla Parast, Tianxi Cai, Lu Tian. Biometrics 2019; 75(4): 1253-1263
The development of methods to identify, validate, and use surrogate markers to test for a treatment effect has been an area of intense research interest, given the potential for valid surrogate markers to reduce the required costs and follow-up times of future studies. Several quantities and procedures have been proposed to assess the utility of a surrogate marker. However, few methods have been proposed to address how one might use the surrogate marker information to test for a treatment effect at an earlier time point, especially in settings where the primary outcome and the surrogate marker are subject to censoring. In this paper, we propose a novel test statistic to test for a treatment effect using surrogate marker information measured prior to the end of the study in a time-to-event outcome setting. We propose a robust nonparametric estimation procedure and corresponding inference procedures. In addition, we evaluate the power for the design of a future study based on surrogate marker information. We illustrate the proposed procedure and the relative power of the proposed test compared to a test performed at the end of the study using simulation studies and an application to data from the Diabetes Prevention Program.

18.
One of the central aims in randomized clinical trials is to find well-validated surrogate endpoints to reduce the sample size and/or duration of trials. Clinical researchers and practitioners have proposed various surrogacy measures for assessing candidate surrogate endpoints. However, most existing surrogacy measures have the following shortcomings: (i) they often fall outside the range [0,1], (ii) they are imprecisely estimated, and (iii) they ignore the interaction associations between a treatment and candidate surrogate endpoints in the evaluation of the surrogacy level. To overcome these difficulties, we propose a new surrogacy measure, the proportion of treatment effect mediated by candidate surrogate endpoints (PMS), based on the decomposition of the treatment effect into direct, indirect, and interaction associations mediated by candidate surrogate endpoints. In addition, we validate the advantages of PMS through Monte Carlo simulations and the application to empirical data from ORIENT (the Olmesartan Reducing Incidence of Endstage Renal Disease in Diabetic Nephropathy Trial).
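As a rough point of reference for the kind of quantity being proposed, the classical linear-model "proportion of treatment effect explained" compares the treatment coefficient with and without the candidate surrogate in the outcome model; PMS refines this by also accounting for the treatment-surrogate interaction. The sketch below shows only the classical version on simulated data and is not the PMS estimator itself.

```python
# Classical linear-model "proportion of treatment effect explained/mediated"
# on simulated data; shown only as a baseline for the PMS idea, not the PMS estimator.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
Z = rng.integers(0, 2, n)                     # randomized treatment
S = 0.6 * Z + rng.normal(size=n)              # candidate surrogate
T = 0.3 * Z + 0.7 * S + rng.normal(size=n)    # true endpoint

def ols_coef(X, y):
    """Ordinary least squares coefficients via lstsq."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
total  = ols_coef(np.column_stack([ones, Z]), T)[1]      # effect of Z ignoring S
direct = ols_coef(np.column_stack([ones, Z, S]), T)[1]   # effect of Z adjusting for S
proportion_mediated = 1 - direct / total
print(f"total {total:.2f}, direct {direct:.2f}, proportion mediated ~ {proportion_mediated:.2f}")
```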

19.
Dose-finding based on efficacy-toxicity trade-offs
Thall PF, Cook JD. Biometrics 2004; 60(3): 684-693
We present an adaptive Bayesian method for dose-finding in phase I/II clinical trials based on trade-offs between the probabilities of treatment efficacy and toxicity. The method accommodates either trinary or bivariate binary outcomes, as well as efficacy probabilities that possibly are nonmonotone in dose. Doses are selected for successive patient cohorts based on a set of efficacy-toxicity trade-off contours that partition the two-dimensional outcome probability domain. Priors are established by solving for hyperparameters that optimize the fit of the model to elicited mean outcome probabilities. For trinary outcomes, the new algorithm is compared to the method of Thall and Russell (1998, Biometrics 54, 251-264) by application to a trial of rapid treatment for ischemic stroke. The bivariate binary outcome case is illustrated by a trial of graft-versus-host disease treatment in allogeneic bone marrow transplantation. Computer simulations show that, under a wide range of dose-outcome scenarios, the new method has high probabilities of making correct decisions and treats most patients at doses with desirable efficacy-toxicity trade-offs.

20.
In recent years, serum S100B has been used as a secondary endpoint in some clinical trials, in which serum S100B has successfully indicated the benefits or harm done by the tested agents. Compared to clinical stroke studies, few experimental stroke studies report using serum S100B as a surrogate marker for estimating the long-term effects of neuroprotectants. This study sought to observe serum S100B kinetics in PIT stroke models and to clarify the association between serum S100B and both final infarct volumes and long-term neurological outcomes. Furthermore, to demonstrate that early elevations in serum S100B reflect successful neuroprotective treatment, a pharmacological study was performed with a non-competitive NMDA glutamate receptor antagonist, MK-801. Serum S100B levels were significantly elevated after PIT stroke, reaching peak values 48 h after the onset and declining thereafter. Single measurements of serum S100B as early as 48 h after PIT stroke correlated significantly with final infarct volumes and long-term neurological outcomes. Elevated serum S100B was significantly attenuated by MK-801, correlating significantly with long-term beneficial effects of MK-801 on infarct volumes and neurological outcomes. Our results showed that single measurements of serum S100B 48 h after PIT stroke would serve as an early and simple surrogate marker for long-term evaluation of histological and neurological outcomes in PIT stroke rat models.
