Similar Documents
 20 similar documents found.
1.
The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as 'valid.' However, little consideration has been given to how a trial that uses a newly validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multitrial historical information on the validated relationship between the surrogate and clinical endpoints, and then evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively, perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O'Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly validated surrogate endpoint for overall survival.
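As a rough, hedged illustration of the adaptive logic described above, the sketch below checks at an interim look whether the observed clinical effect is consistent with what a historical surrogate-clinical regression would predict from the observed surrogate effect, and bases the stopping decision on whichever endpoint is currently trusted. The historical slope, thresholds, and normal approximations are assumptions for illustration, not the authors' actual design.

```python
# Illustrative sketch (not the authors' exact design); all names and thresholds are hypothetical.
import numpy as np
from scipy import stats

def interim_decision(surr_effect, surr_se, clin_effect, clin_se,
                     hist_slope=0.9, hist_intercept=0.0, hist_pred_sd=0.15,
                     plausibility_alpha=0.05, efficacy_prob=0.975, futility_prob=0.05):
    """One interim analysis of an adaptive surrogate design (normal approximations)."""
    # 1. Is the observed clinical effect consistent with what the historical
    #    surrogate-clinical relationship predicts from the surrogate effect?
    predicted_clin = hist_intercept + hist_slope * surr_effect
    pred_sd = np.sqrt(hist_pred_sd**2 + clin_se**2)
    tail_prob = 2 * stats.norm.sf(abs(clin_effect - predicted_clin) / pred_sd)
    trust_surrogate = tail_prob > plausibility_alpha

    # 2. Base the stopping rule on whichever endpoint is currently trusted.
    effect, se = (surr_effect, surr_se) if trust_surrogate else (clin_effect, clin_se)
    post_prob_benefit = stats.norm.cdf(effect / se)   # P(effect > 0 | data), flat prior
    if post_prob_benefit > efficacy_prob:
        action = "stop for success"
    elif post_prob_benefit < futility_prob:
        action = "stop for futility/inferiority"
    else:
        action = "continue"
    return trust_surrogate, post_prob_benefit, action

print(interim_decision(surr_effect=0.30, surr_se=0.10, clin_effect=0.25, clin_se=0.12))
```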

2.
Carey VJ, Baker CJ, Platt R. Biometrics. 2001;57(1):135-142.
In the study of immune responses to infectious pathogens, the minimum protective antibody concentration (MPAC) is a quantity of great interest. We use case-control data to estimate the posterior distribution of the conditional risk of disease given a lower bound on antibody concentration in an at-risk subject. The concentration bound beyond which there is high credibility that infection risk is zero or nearly so is a candidate for the MPAC. A very simple Gibbs sampling procedure that permits inference on the risk of disease given antibody level is presented. In problems involving small numbers of patients, the procedure is shown to have favorable accuracy and robustness to choice/misspecification of priors. Frequentist evaluation indicates good coverage probabilities of credibility intervals for antibody-dependent risk, and rules for estimation of the MPAC are illustrated with epidemiological data.
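A toy sketch of the kind of Gibbs-based inference described above (not the authors' procedure): antibody log-concentrations in cases and controls are given a simple Normal model with conjugate updates, and each parameter draw is converted, via Bayes' rule with an assumed population attack rate, into the risk of disease given antibody concentration above a bound c. All data and priors are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Hypothetical antibody log-concentrations for infected cases and healthy controls
log_cases = rng.normal(0.2, 0.6, 25)
log_ctrls = rng.normal(0.9, 0.6, 40)
attack_rate = 0.01            # assumed marginal risk of disease in the population

def gibbs_risk(c, n_iter=4000, mu0=0.0, tau2=10.0, a0=1.0, b0=1.0):
    """Posterior draws of P(disease | antibody concentration >= c)."""
    mu1, mu2, sig2 = log_cases.mean(), log_ctrls.mean(), 0.5
    risks = []
    for _ in range(n_iter):
        # conjugate Normal updates for each group mean, given the current variance
        for data, which in ((log_cases, "case"), (log_ctrls, "ctrl")):
            v = 1.0 / (len(data) / sig2 + 1.0 / tau2)
            m = v * (data.sum() / sig2 + mu0 / tau2)
            if which == "case":
                mu1 = rng.normal(m, np.sqrt(v))
            else:
                mu2 = rng.normal(m, np.sqrt(v))
        # conjugate inverse-gamma update for the shared variance
        resid = np.concatenate((log_cases - mu1, log_ctrls - mu2))
        sig2 = 1.0 / rng.gamma(a0 + resid.size / 2, 1.0 / (b0 + 0.5 * (resid ** 2).sum()))
        # convert the draw into a conditional risk via Bayes' rule
        p_case = norm.sf(np.log(c), mu1, np.sqrt(sig2))   # P(antibody >= c | case)
        p_ctrl = norm.sf(np.log(c), mu2, np.sqrt(sig2))   # P(antibody >= c | control)
        num = p_case * attack_rate
        risks.append(num / (num + p_ctrl * (1 - attack_rate)))
    return np.array(risks)

post = gibbs_risk(c=2.0)
print("posterior mean risk above c=2:", post.mean(),
      "95% interval:", np.percentile(post, [2.5, 97.5]))
```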

3.
We examine situations where interest lies in the conditional association between outcome and exposure variables, given potential confounding variables. Concern arises that some potential confounders may not be measured accurately, whereas others may not be measured at all. Some form of sensitivity analysis might be employed, to assess how this limitation in available data impacts inference. A Bayesian approach to sensitivity analysis is straightforward in concept: a prior distribution is formed to encapsulate plausible relationships between unobserved and observed variables, and posterior inference about the conditional exposure–disease relationship then follows. In practice, though, it can be challenging to form such a prior distribution in both a realistic and simple manner. Moreover, it can be difficult to develop an attendant Markov chain Monte Carlo (MCMC) algorithm that will work effectively on a posterior distribution arising from a highly nonidentified model. In this article, a simple prior distribution for acknowledging both poorly measured and unmeasured confounding variables is developed. It requires that only a small number of hyperparameters be set by the user. Moreover, a particular computational approach for posterior inference is developed, because application of MCMC in a standard manner is seen to be ineffective in this problem.
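A minimal sketch of a Monte Carlo sensitivity analysis for a single unmeasured binary confounder, in the same spirit as, but much simpler than, the article's approach; the priors, bias-formula inputs, and crude estimate below are all hypothetical.

```python
# Probabilistic bias analysis for one unmeasured binary confounder U (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(0)
obs_logor, obs_se = np.log(1.8), 0.15          # hypothetical crude log odds ratio
n_draws = 20000

p_u_exposed   = rng.beta(4, 6, n_draws)        # prior on P(U=1 | exposed)
p_u_unexposed = rng.beta(2, 8, n_draws)        # prior on P(U=1 | unexposed)
or_u_outcome  = np.exp(rng.normal(np.log(2.0), 0.3, n_draws))  # prior on OR(U, outcome)

# Standard external-adjustment bias factor for an unmeasured confounder
bias = np.log((p_u_exposed * (or_u_outcome - 1) + 1) /
              (p_u_unexposed * (or_u_outcome - 1) + 1))
adjusted = rng.normal(obs_logor, obs_se, n_draws) - bias   # add sampling uncertainty

print("adjusted OR:", np.exp(np.median(adjusted)),
      "95% interval:", np.exp(np.percentile(adjusted, [2.5, 97.5])))
```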

4.
Bayesian inference allows the transparent communication and systematic updating of model uncertainty as new data become available. When applied to material flow analysis (MFA), however, Bayesian inference is undermined by the difficulty of defining proper priors for the MFA parameters and quantifying the noise in the collected data. We start to address these issues by first deriving and implementing an expert elicitation procedure suitable for generating MFA parameter priors. Second, we propose to learn the data noise concurrent with the parametric uncertainty. These methods are demonstrated using a case study on the 2012 US steel flow. Eight experts are interviewed to elicit distributions on steel flow uncertainty from raw materials to intermediate goods. The experts' distributions are combined and weighted according to the expertise demonstrated in response to seeding questions. These aggregated distributions form our model parameters' informative priors. Sensible, weakly informative priors are adopted for learning the data noise. Bayesian inference is then performed to update the parametric and data noise uncertainty given MFA data collected from the United States Geological Survey and the World Steel Association. The results show a reduction in MFA parametric uncertainty when incorporating the collected data. Only a modest reduction in data noise uncertainty was observed using 2012 data; however, greater reductions were achieved when using data from multiple years in the inference. These methods generate transparent MFA and data noise uncertainties learned from data rather than pre-assumed data noise levels, providing a more robust basis for decision-making that affects the system.
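The sketch below illustrates, under heavy simplification and with hypothetical numbers, the two ingredients described above: a linear opinion pool of expert-elicited priors on one transfer coefficient, and a joint Bayesian update of that coefficient together with an unknown data-noise standard deviation.

```python
# All expert distributions, weights, and the observed value are hypothetical.
import numpy as np
from scipy import stats

# 1. Linear opinion pool of three experts (weights would come from seeding questions).
weights = np.array([0.5, 0.3, 0.2])
expert_priors = [stats.beta(8, 2), stats.beta(6, 4), stats.beta(10, 3)]

theta = np.linspace(0.01, 0.99, 400)          # transfer coefficient grid
sigma = np.linspace(0.01, 0.30, 120)          # data-noise standard deviation grid
prior_theta = sum(w * d.pdf(theta) for w, d in zip(weights, expert_priors))
prior_sigma = stats.halfnorm(scale=0.1).pdf(sigma)   # weakly informative noise prior

# 2. Joint posterior on the grid, given a hypothetical observed flow ratio of 0.72.
T, S = np.meshgrid(theta, sigma, indexing="ij")
post = prior_theta[:, None] * prior_sigma[None, :] * stats.norm(T, S).pdf(0.72)
post /= post.sum()

theta_marginal = post.sum(axis=1)
print("posterior mean coefficient:", (theta * theta_marginal).sum() / theta_marginal.sum())
```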

5.
Atkinson AC, Biswas A. Biometrics. 2005;61(1):118-125.
Adaptive designs are used in phase III clinical trials for skewing the allocation pattern toward the better treatments. We use optimum design theory to derive a skewed Bayesian biased-coin procedure for sequential designs with continuous responses. The skewed designs are used to provide adaptive designs, the performance of which is studied numerically and theoretically. Important properties are the loss and the proportion of allocations to the better treatment.
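The following is only a stand-in for the optimum-design-based rule in the paper: a generic adaptive biased coin whose allocation probability is skewed toward the treatment with the better running mean via a logistic function. The skewing function, burn-in rule, and response model are assumptions made for illustration.

```python
# Simplified adaptive biased-coin allocation with continuous responses (hypothetical setup).
import numpy as np

rng = np.random.default_rng(7)
true_means = {"A": 1.0, "B": 0.6}           # treatment A is truly better (hypothetical)
responses = {"A": [], "B": []}

def allocation_prob(skew=1.5):
    """P(assign A), skewed toward the treatment with the higher running mean."""
    if min(len(responses["A"]), len(responses["B"])) < 2:
        return 0.5                          # burn-in: equal randomization
    diff = np.mean(responses["A"]) - np.mean(responses["B"])
    return 1.0 / (1.0 + np.exp(-skew * diff))   # logistic skewing of the biased coin

for _ in range(200):
    arm = "A" if rng.random() < allocation_prob() else "B"
    responses[arm].append(rng.normal(true_means[arm], 1.0))

print("proportion allocated to the better treatment:", len(responses["A"]) / 200)
```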

6.
7.
8.
9.
10.
11.
Groundwater modeling typically relies on hypotheses and approximations of reality, because real hydrologic systems are far more complex than we can characterize mathematically. Such model errors cannot be neglected when analyzing the uncertainty of model predictions in practical applications, and the associated uncertainties grow dramatically as scale and complexity increase. In this study, a Bayesian uncertainty analysis method for a deterministic model's predictions is presented. The geostatistics of hydrogeologic parameters obtained from site characterization are treated as the prior parameter distribution in Bayes' theorem. The Markov chain Monte Carlo method is then used to generate the posterior statistical distribution of the model's predictions, conditional on the observed behavior of the hydrologic system. Finally, a series of synthetic examples is given by applying this method to a MODFLOW pumping-test model, to test its capability and efficiency in assessing various sources of prediction uncertainty. The impacts of parameter sensitivity, model simplification, and observation errors on prediction uncertainty are evaluated, respectively. The results are analyzed statistically to provide deterministic predictions with associated prediction errors. A risk analysis is also derived from the Bayesian results to draw tradeoff curves for decision-making about the exploitation of groundwater resources.
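A schematic sketch of the workflow, with a toy analytical drawdown model standing in for MODFLOW and with hypothetical priors and data: a lognormal prior on transmissivity is conditioned on observed drawdowns by random-walk Metropolis, and the posterior draws are pushed through the model to give a prediction with an associated error.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(logT, r=np.array([10.0, 50.0, 100.0]), Q=300.0):
    """Toy steady-state Thiem-type drawdown at radii r for pumping rate Q."""
    T = np.exp(logT)
    return Q / (2 * np.pi * T) * np.log(500.0 / r)

obs = forward(np.log(25.0)) + rng.normal(0, 0.05, 3)   # synthetic observations
prior_mean, prior_sd, noise_sd = np.log(20.0), 0.5, 0.05   # assumed "geostatistical" prior

def log_post(logT):
    lp = -0.5 * ((logT - prior_mean) / prior_sd) ** 2            # lognormal prior on T
    ll = -0.5 * np.sum(((obs - forward(logT)) / noise_sd) ** 2)  # Gaussian likelihood
    return lp + ll

samples, x = [], prior_mean
for _ in range(20000):                        # random-walk Metropolis
    prop = x + rng.normal(0, 0.1)
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    samples.append(x)

# Push post-burn-in draws through the model for a prediction at a new location.
pred = np.array([forward(s, r=np.array([200.0]))[0] for s in samples[5000:]])
print("predicted drawdown at r=200 m:", pred.mean(), "+/-", pred.std())
```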

12.
Ignorable and informative designs in survey sampling inference
Sugden RA, Smith TMF. Biometrika. 1984;71(3):495-506.

13.
The assessment of human and ecological risks and associated risk-management decisions are characterized by only partial knowledge of the relevant systems. Typically, large variability and measurement errors in data create challenges for estimating risks and identifying appropriate management strategies. The formal quantitative method of decision analysis can help deal with these challenges because it takes uncertainties into account explicitly and quantitatively. In recent years, research in several areas of natural resource management has demonstrated that decision analysis can identify policies that are appropriate in the presence of uncertainties. More importantly, the resulting optimal decision is often different from the one that would have been chosen had the uncertainties not been taken into account quantitatively. However, challenges still exist to effective implementation of decision analysis.
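A toy numerical illustration of the central point, that the optimal action can change once uncertainty is treated quantitatively rather than through a point estimate; the loss functions and hazard distribution below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
# Uncertain hazard rate with a heavy right tail (hypothetical distribution)
risk = rng.lognormal(mean=np.log(0.004), sigma=1.2, size=100_000)

def loss(action, r):
    # "mitigate": fixed cost plus capped damage; "ignore": damage grows linearly with r
    if action == "mitigate":
        return 10.0 + 200.0 * np.minimum(r, 0.01)
    return 2000.0 * r

point = np.median(risk)                       # point estimate of the hazard
for action in ("mitigate", "ignore"):
    print(action,
          "| loss at point estimate:", round(float(loss(action, point)), 2),
          "| expected loss:", round(float(loss(action, risk).mean()), 2))
# "ignore" looks cheaper at the point estimate, but "mitigate" has the lower expected
# loss once the heavy upper tail of the uncertainty is accounted for.
```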

14.
Quantitative risk assessment (QRA) approaches systematically evaluate the likelihood, impacts, and risk of adverse events. QRA using fault tree analysis (FTA) rests on the assumptions that failure events have crisp probabilities and that they are statistically independent. Crisp event probabilities are often unavailable, which leads to data uncertainty, while the independence assumption introduces model uncertainty. Experts' knowledge can be used to obtain unknown failure data, but this process is itself subject to issues such as imprecision, incompleteness, and lack of consensus. For this reason, to minimize the overall uncertainty in QRA, it is important not only to address the uncertainties in the elicited knowledge but also to combine the opinions of multiple experts and to update prior beliefs as new evidence arrives. In this article, a novel methodology is proposed for QRA that combines fuzzy set theory and evidence theory with Bayesian networks to describe the uncertainties, aggregate experts' opinions, and update prior probabilities when new evidence becomes available. Additionally, sensitivity analysis is performed to identify the most critical events in the fault tree. The effectiveness of the proposed approach is demonstrated through application to a practical system.
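A compact sketch of the general idea, with a tiny two-gate fault tree, hypothetical numbers, and simple linear opinion pooling standing in for the paper's full fuzzy/evidence-theory treatment: experts give triangular fuzzy probabilities for the basic events, the pooled values are defuzzified, the top-event probability is computed, and the posterior of a basic event given that the top event occurred is obtained by exact enumeration, as inference in the equivalent Bayesian network would give.

```python
import itertools
import numpy as np

# Expert opinions: triangular fuzzy probabilities (low, mode, high) with expert weights.
weights = np.array([0.6, 0.4])
experts = {                      # basic events E1, E2, E3 (hypothetical)
    "E1": [(0.01, 0.02, 0.04), (0.02, 0.03, 0.05)],
    "E2": [(0.05, 0.08, 0.12), (0.04, 0.07, 0.10)],
    "E3": [(0.001, 0.002, 0.004), (0.002, 0.003, 0.006)],
}

def defuzzify(tri):                          # centroid of a triangular fuzzy number
    return sum(tri) / 3.0

p = {e: float(np.dot(weights, [defuzzify(t) for t in tris])) for e, tris in experts.items()}

def top_event(e1, e2, e3):
    return (e1 and e2) or e3                 # TOP = (E1 AND E2) OR E3

# Exact enumeration over basic-event states, i.e. brute-force Bayesian-network inference.
p_top = p_top_and_e1 = 0.0
for s in itertools.product([0, 1], repeat=3):
    prob = np.prod([p[e] if x else 1 - p[e] for e, x in zip(p, s)])
    if top_event(*s):
        p_top += prob
        if s[0]:
            p_top_and_e1 += prob

print("top-event probability:", p_top)
print("P(E1 | top event occurred):", p_top_and_e1 / p_top)
```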

15.
Large amounts of longitudinal health records are now available for dynamic monitoring of the underlying processes governing the observations. However, the health status progression across time is not typically observed directly: records are observed only when a subject interacts with the system, yielding irregular and often sparse observations. This suggests that the observed trajectories should be modeled via a latent continuous-time process, potentially as a function of time-varying covariates. We develop a continuous-time hidden Markov model to analyze longitudinal data accounting for irregular visits and different types of observations. By employing a specific missing data likelihood formulation, we can construct an efficient computational algorithm. We focus on Bayesian inference for the model: this is facilitated by an expectation-maximization algorithm and Markov chain Monte Carlo methods. Simulation studies demonstrate that these approaches can be implemented efficiently for large data sets in a fully Bayesian setting. We apply this model to a real cohort where patients suffer from chronic obstructive pulmonary disease with the outcome being the number of drugs taken, using health care utilization indicators and patient characteristics as covariates.
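A minimal sketch of the likelihood machinery behind such a continuous-time hidden Markov model: transition probabilities between irregularly spaced visits come from the matrix exponential of the generator, and a scaled forward pass accumulates the likelihood of the observed outcomes. The two-state generator, Poisson emission model, and data are hypothetical, and the EM/MCMC estimation itself is not shown.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

Q = np.array([[-0.3,  0.3],      # generator of the latent health process (states 0, 1)
              [ 0.1, -0.1]])
emission_rates = np.array([1.0, 4.0])   # mean number of drugs taken in each latent state
init = np.array([0.8, 0.2])             # initial state distribution

visit_times = np.array([0.0, 0.4, 1.5, 1.9, 3.2])   # irregular, sparse visits (years)
drug_counts = np.array([1, 2, 5, 4, 6])             # observed outcome at each visit

def log_likelihood(times, obs):
    """Scaled forward algorithm over irregularly spaced observation times."""
    alpha = init * poisson.pmf(obs[0], emission_rates)   # forward variable at visit 1
    scale = alpha.sum()
    loglik, alpha = np.log(scale), alpha / scale
    for k in range(1, len(times)):
        P = expm(Q * (times[k] - times[k - 1]))          # transition over the visit gap
        alpha = (alpha @ P) * poisson.pmf(obs[k], emission_rates)
        scale = alpha.sum()                              # rescale to avoid underflow
        loglik, alpha = loglik + np.log(scale), alpha / scale
    return loglik

print("log-likelihood of the visit history:", log_likelihood(visit_times, drug_counts))
```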

16.
17.
In the decade since their invention, spotted microarrays have been undergoing technical advances that have increased the utility, scope and precision of their ability to measure gene expression. At the same time, more researchers are taking advantage of the fundamentally quantitative nature of these tools with refined experimental designs and sophisticated statistical analyses. These new approaches utilise the power of microarrays to estimate differences in gene expression levels, rather than just categorising genes as up- or down-regulated, and allow the comparison of expression data across multiple samples. In this review, some of the technical aspects of spotted microarrays that can affect statistical inference are highlighted, and a discussion is provided of how several methods for estimating gene expression level across multiple samples deal with these challenges. The focus is on a Bayesian analysis method, BAGEL, which is easy to implement and produces easily interpreted results.
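A small sketch in the spirit of estimating per-sample expression levels with credible intervals rather than up/down calls. This is not the BAGEL implementation; it is a generic Bayesian normal model with a Gibbs sampler and hypothetical log-ratio data.

```python
import numpy as np

rng = np.random.default_rng(5)
log_ratios = {                    # replicate log2 ratios vs. a common reference (hypothetical)
    "sample_A": np.array([0.9, 1.1, 1.3, 0.8]),
    "sample_B": np.array([0.2, -0.1, 0.3, 0.1]),
}

def gibbs(data, n_iter=5000, a0=1.0, b0=1.0):
    """Gibbs sampler: flat priors on sample means, inverse-gamma prior on shared variance."""
    mus = {k: v.mean() for k, v in data.items()}
    sig2, draws = 0.2, {k: [] for k in data}
    n_total = sum(len(v) for v in data.values())
    for _ in range(n_iter):
        for k, v in data.items():             # conditional update of each sample mean
            mus[k] = rng.normal(v.mean(), np.sqrt(sig2 / len(v)))
            draws[k].append(mus[k])
        ss = sum(((v - mus[k]) ** 2).sum() for k, v in data.items())
        sig2 = 1.0 / rng.gamma(a0 + n_total / 2, 1.0 / (b0 + ss / 2))  # shared variance
    return {k: np.array(v) for k, v in draws.items()}

posterior = gibbs(log_ratios)
for k, d in posterior.items():
    print(k, "posterior mean:", round(float(d.mean()), 2),
          "95% interval:", np.percentile(d, [2.5, 97.5]).round(2))
```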

18.
Individual growth is an important parameter and is linked to a number of other biological processes. It is commonly modeled using the von Bertalanffy growth function (VBGF), which is regularly fitted to age data where the ages of the animals are not known exactly but are binned into yearly age groups, such as fish survey data. Current methods of fitting the VBGF to these data treat all the binned ages as the actual ages. We present a new VBGF model that combines data from multiple surveys and allows the actual age of an animal to be inferred. By fitting to survey data for Atlantic herring (Clupea harengus) and Atlantic cod (Gadus morhua), we compare our model with two other ways of combining data from multiple surveys but where the ages are as reported in the survey data. We use the fitted parameters as inputs into a yield-per-recruit model to see what would happen to advice given to management. We found that each of the ways of combining the data leads to different parameter estimates for the VBGF and advice for policymakers. Our model fitted the data better than either of the other models and also reduced the uncertainty in the parameter estimates and models used to inform management. Our model is a robust way of fitting the VBGF and can be used to combine data from multiple sources. The model is general enough to fit other growth curves for any taxon when the age of individuals is binned into groups.
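A condensed sketch of the core idea, not the authors' full multi-survey model: the von Bertalanffy growth function is fitted to lengths whose ages are only known to yearly bins by marginalizing a latent true age uniformly over each bin, rather than treating the bin label as the exact age. The data and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
true = dict(Linf=100.0, k=0.3, t0=-0.5, sd=4.0)            # hypothetical "true" values
ages = rng.uniform(1, 10, 300)                              # true (unobserved) ages
lengths = (true["Linf"] * (1 - np.exp(-true["k"] * (ages - true["t0"])))
           + rng.normal(0, true["sd"], ages.size))
age_bins = np.floor(ages)                                   # what the survey records

def vbgf(a, Linf, k, t0):
    return Linf * (1 - np.exp(-k * (a - t0)))

def neg_loglik(params):
    Linf, k, t0, sd = params
    grid = np.linspace(0.0, 1.0, 11)                        # latent age offset within the bin
    like = np.zeros_like(lengths)
    for g in grid:                                          # marginalize uniformly over the bin
        like += norm.pdf(lengths, vbgf(age_bins + g, Linf, k, t0), sd) / grid.size
    return -np.sum(np.log(like + 1e-300))

fit = minimize(neg_loglik, x0=[80.0, 0.2, 0.0, 5.0], method="Nelder-Mead")
print("estimated (Linf, k, t0, sd):", np.round(fit.x, 3))
```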

19.
20.