Similar literature
20 similar articles retrieved (search time: 78 ms)
1.

Objectives

Definitive sample sizes for clinical trials in rare diseases are usually infeasible. Bayesian methodology can be used to maximise what is learnt from clinical trials in these circumstances. We elicited expert prior opinion for a future Bayesian randomised controlled trial for a rare inflammatory paediatric disease, polyarteritis nodosa (MYPAN, Mycophenolate mofetil for polyarteritis nodosa).

Methods

A Bayesian prior elicitation meeting was convened. Opinion was sought on the probability that a patient in the MYPAN trial treated with cyclophosphamide would achieve disease remission within 6 months, and on the relative efficacies of mycophenolate mofetil and cyclophosphamide. Expert opinion was combined with previously unseen data from a recently completed randomised controlled trial in ANCA-associated vasculitis.

Results

A pan-European group of fifteen experts participated in the elicitation meeting. Consensus expert prior opinion was that the most likely rates of disease remission within 6 months on cyclophosphamide (CYC) or mycophenolate mofetil (MMF) were 74% and 71%, respectively. This prior opinion will now be taken forward and modified to form a Bayesian posterior opinion once data become available from the 40 MYPAN trial patients randomised 1:1 to either CYC or MMF.

Conclusions

We suggest that the methodological template we propose could be applied to trial design for other rare diseases.
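The conjugate machinery behind this kind of design can be sketched in a few lines. The following Python snippet is a minimal illustration, assuming Beta priors whose modes match the elicited 74% (CYC) and 71% (MMF) values; the effective prior sample sizes and the trial outcome counts are hypothetical placeholders, not MYPAN data.

```python
from scipy import stats

def beta_from_mode(mode, ess):
    """Beta(a, b) whose mode is `mode` and whose a + b - 2 equals `ess`."""
    return 1 + mode * ess, 1 + (1 - mode) * ess

priors = {"CYC": beta_from_mode(0.74, ess=10),   # elicited mode 74%
          "MMF": beta_from_mode(0.71, ess=10)}   # elicited mode 71%

# Hypothetical remission counts for 20 patients per arm (40 randomised 1:1).
outcomes = {"CYC": 15, "MMF": 14}
n_per_arm = 20

for arm, (a, b) in priors.items():
    s = outcomes[arm]
    post = stats.beta(a + s, b + n_per_arm - s)   # conjugate Beta update
    lo, hi = post.ppf([0.025, 0.975])
    print(f"{arm}: posterior mean {post.mean():.2f}, 95% CrI ({lo:.2f}, {hi:.2f})")
```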

2.
A variable-density sampling pattern based on Bayesian statistics is presented and compared with a uniform-density statistical pattern and a judgmental approach in a real case study. The Bayesian approach, supported by a software tool, produced a soil sampling plan similar to the judgmental one, especially in the number of sampling points and their locations. It allowed statistical goals to be set and expert judgment to be included in the sampling strategy in a transparent and systematic procedure. For these reasons, it appears well suited for inclusion in Quality Assurance/Quality Control plans.

3.
4.
Bayesian phylogenetic methods require the selection of prior probability distributions for all parameters of the model of evolution. These distributions allow one to incorporate prior information into a Bayesian analysis, but even in the absence of meaningful prior information, a prior distribution must be chosen. In such situations, researchers typically seek to choose a prior that will have little effect on the posterior estimates produced by an analysis, allowing the data to dominate. Sometimes a prior that is uniform (assigning equal prior probability density to all points within some range) is chosen for this purpose. In reality, the appropriate prior depends on the parameterization chosen for the model of evolution, a choice that is largely arbitrary. There is an extensive Bayesian literature on appropriate prior choice, and it has long been appreciated that there are parameterizations for which uniform priors can have a strong influence on posterior estimates. Here we discuss the relationship between model parameterization and prior specification, using the general time-reversible model of nucleotide evolution as an example. We present Bayesian analyses of 10 simulated data sets obtained using a variety of prior distributions and parameterizations of the general time-reversible model. Uniform priors can produce biased parameter estimates under realistic conditions, and a variety of alternative priors avoid this bias.
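The parameterization-dependence the authors describe is easy to demonstrate numerically. Below is a small illustrative sketch (ours, not the paper's analysis): placing independent Uniform(0, 100) priors on the six unnormalized GTR exchangeability rates induces a prior on the normalized rates that differs from a flat Dirichlet on the simplex.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

raw = rng.uniform(0.0, 100.0, size=(n, 6))       # uniform on unnormalized rates
implied = raw / raw.sum(axis=1, keepdims=True)   # induced prior on normalized rates
flat = rng.dirichlet(np.ones(6), size=n)         # uniform on the simplex itself

# Compare the marginal prior of one normalized rate under the two choices.
for name, draws in [("uniform on raw rates", implied),
                    ("Dirichlet(1,...,1)", flat)]:
    q = np.percentile(draws[:, 0], [2.5, 50.0, 97.5])
    print(f"{name}: first normalized rate quantiles = {np.round(q, 3)}")
```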

5.
Using validation sets for outcomes can greatly improve the estimation of vaccine efficacy (VE) in the field (Halloran and Longini, 2001; Halloran and others, 2003). Most statistical methods for using validation sets rely on the assumption that outcomes on those with no cultures are missing at random (MAR). However, often the validation sets will not be chosen at random. For example, confirmational cultures are often done on people with influenza-like illness as part of routine influenza surveillance. VE estimates based on such non-MAR validation sets could be biased. Here we propose frequentist and Bayesian approaches for estimating VE in the presence of validation bias. Our work builds on the ideas of Rotnitzky and others (1998, 2001), Scharfstein and others (1999, 2003), and Robins and others (2000). Our methods require expert opinion about the nature of the validation selection bias. In a re-analysis of an influenza vaccine study, we found, using the beliefs of a flu expert, that within any plausible range of selection bias the VE estimate based on the validation sets is much higher than the point estimate using just the non-specific case definition. Our approach is generally applicable to studies with missing binary outcomes with categorical covariates.
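The flavour of such a sensitivity analysis can be sketched with a toy calculation (hypothetical counts, not the study's data): a bias factor gamma encodes expert opinion about how much more or less likely unvalidated illness episodes are to be true influenza than validated ones, with gamma = 1 recovering the MAR assumption.

```python
def attack_rate(n, ili, cultured, confirmed, gamma):
    """Estimated influenza attack rate under a validation-selection bias gamma."""
    pos_rate = confirmed / cultured                 # positivity in validation set
    est_cases = confirmed + gamma * pos_rate * (ili - cultured)
    return est_cases / n

# Hypothetical cohort: influenza-like illness (ILI) episodes, the subset sent
# for culture (the validation set), and culture-confirmed influenza cases.
arms = {"vaccinated":   dict(n=1000, ili=80,  cultured=40, confirmed=10),
        "unvaccinated": dict(n=1000, ili=120, cultured=60, confirmed=30)}

for gamma in (0.5, 1.0, 1.5):                       # gamma = 1 corresponds to MAR
    ar = {k: attack_rate(gamma=gamma, **v) for k, v in arms.items()}
    ve = 1 - ar["vaccinated"] / ar["unvaccinated"]
    print(f"gamma = {gamma}: estimated VE = {ve:.2f}")
```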

6.
When a dataset is imbalanced, predictions for the scarcely sampled subpopulation can be over-influenced by the population contributing the majority of the data. The aim of this study was to develop a Bayesian modelling approach with a balancing informative prior, so that the influence of imbalance on the overall prediction is minimised; the new approach weighs the data in favour of the smaller subset(s). The method was assessed in terms of bias and precision in predicting model parameter estimates from simulated datasets, and was further evaluated in a motivating example: predicting optimal tobramycin dose levels for various age groups. The bias estimates using the balancing informative prior approach were smaller than those from the conventional approach, which took no account of the imbalance in the datasets, and the precision estimates were also superior. The dose predictions in the motivating example agreed well with what has been reported in the literature. The proposed Bayesian balancing informative prior approach thus shows real potential to weigh the data adequately in favour of the smaller subset(s) and to generate robust prediction models.
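A stylized sketch of the balancing idea follows (our illustration, not the authors' pharmacokinetic model): a pooled estimate from imbalanced data is dominated by the majority subgroup, whereas an informative prior centred on preliminary knowledge about the minority subgroup recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
majority = rng.normal(5.0, 1.0, 900)   # e.g. adult parameter values (true mean 5)
minority = rng.normal(8.0, 1.0, 15)    # e.g. neonatal values (true mean 8)

pooled_est = np.concatenate([majority, minority]).mean()

def post_mean(x, m0, s0, sigma=1.0):
    """Conjugate normal-mean posterior with prior N(m0, s0^2), known sd sigma."""
    prec = 1 / s0**2 + len(x) / sigma**2
    return (m0 / s0**2 + x.sum() / sigma**2) / prec

# Informative prior for the minority subgroup (m0 = 7.5 is hypothetical
# preliminary knowledge, e.g. from the literature on that age group).
balanced_est = post_mean(minority, m0=7.5, s0=1.0)
print(f"pooled estimate applied to minority:  {pooled_est:.2f} (truth 8.0)")
print(f"balancing informative prior estimate: {balanced_est:.2f}")
```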

7.
The Council of the European Communities has issued Directive 86/469/EEC on the examination of animals and fresh meat for residues of several hormones, drugs and environmental pollutants. The purpose of this paper is to compare the sampling method used by the EEC with a Bayesian approach that calculates the necessary sample size using prior information in different situations (assuming Beta, Normal or Gamma prior distributions). The Bayesian approach is shown to require substantially smaller sample sizes at all levels of inspection. Furthermore, an example is given concerning the examination of pigs for drug residues in pork.
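The contrast can be reproduced for the simplest inspection setting: sampling to rule out a residue prevalence of p when all sampled animals test negative. A minimal sketch with illustrative numbers, not the directive's actual inspection levels: the frequentist plan needs n with (1 - p)^n <= alpha, while a Bayesian plan needs the smallest n for which the posterior probability that prevalence is below p reaches 1 - alpha under a Beta prior.

```python
import math
from scipy import stats

p, alpha = 0.01, 0.05        # rule out 1% prevalence with 95% assurance

n_freq = math.ceil(math.log(alpha) / math.log(1 - p))

def n_bayes(a, b):
    """Smallest n with zero positives s.t. P(prevalence < p | data) >= 1 - alpha."""
    n = 0
    while stats.beta(a, b + n).cdf(p) < 1 - alpha:   # posterior after n negatives
        n += 1
    return n

print(f"frequentist sample size: {n_freq}")
for a, b in [(1, 1), (1, 99)]:   # flat prior vs. informative prior with mean 1%
    print(f"Beta({a}, {b}) prior: n = {n_bayes(a, b)}")
```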

8.
New scientific problems arising from the human genome project are challenging the classical ways of using statistics. Yet quantified knowledge in the form of rules and rule strengths based on real relationships in data, as opposed to expert opinion, is urgently required for researcher and physician decision support. The problem is that with many parameters the space to be analyzed is highly dimensional: the combinations of data to examine are subject to a combinatorial explosion as the number of possible events (entries, items, sub-records) (a), (b), (c), ... per record (a,b,c,...) increases, and hence much of the space is sparsely populated. These combinatorial considerations are particularly problematic for identifying those associations called "unicorn events", which occur so much less often than expected that they are never actually seen to be counted. To cope with the combinatorial explosion, a novel numerical bookkeeping approach is taken to generate information terms relating to the combinatorial subsets of events (a,b,c,...), and, most importantly, the zeta function is employed. The incomplete zeta function zeta(s,n) with s = 1, in which frequencies of occurrence such as n = n(a,b,c,...) determine the range of summation, is argued to be the natural choice of information function: it emerges from a Bayesian integration over the distribution of possible values of information measures, for sparse and ample data alike. Expected mutual information I(a;b;c) in nats (natural units analogous to bits but based on the natural logarithm), as available to the observer, is measured as, e.g., the difference zeta(s, o(a,b,c,...)) - zeta(s, e(a,b,c,...)), where o(a,b,c,...) and e(a,b,c,...) are, or relate to, the observed and expected frequencies of occurrence, respectively. For real values of s > 1 the qualitative impact of strongly (positively or negatively) ranked data is preserved despite several numerical approximations, and as real s increases the outputs of the information functions converge to three values, +1, 0 and -1 nats, representing a trinary logic system. For quantitative data, a useful ad hoc method of reporting sigma-normalized covariations analogously to mutual information, for significance comparison purposes, is demonstrated. Finally, the potential to use mutual information in a complex biomedical study, incorporating Bayesian prior information derived from statistical, tabular, anecdotal and expert opinion, is briefly illustrated.
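The central device is concrete enough to sketch. Below, zeta(s, n) is taken as the truncated sum of k^(-s) for k = 1..n (the harmonic number when s = 1), and the information in an association is the difference of two such sums. For large counts this approaches ln(o/e), but unlike the logarithm it stays finite for zero counts, which is the point of the construction; the counts used are hypothetical.

```python
import math

def zeta_inc(s, n):
    """Incomplete zeta: sum of k**(-s) for k = 1..n (returns 0.0 for n = 0)."""
    return sum(k ** -s for k in range(1, int(n) + 1))

def info_nats(observed, expected, s=1.0):
    """zeta(s, o) - zeta(s, e): a sparse-data-safe analogue of ln(o/e)."""
    return zeta_inc(s, observed) - zeta_inc(s, expected)

for o, e in [(20, 5), (5, 20), (0, 3), (1000, 1000)]:
    ref = math.log(o / e) if o > 0 else float("-inf")   # what ln(o/e) would give
    print(f"o={o:4d}, e={e:4d}: I = {info_nats(o, e):+.3f} nats "
          f"(ln(o/e) = {ref:+.3f})")
```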

9.
Fisheries stock assessment and decision analysis: the Bayesian approach (cited by 4: 0 self-citations, 4 by others)
The Bayesian approach to stock assessment determines the probabilities of alternative hypotheses using information for the stock in question and from inferences for other stocks and species. These probabilities are essential if the consequences of alternative management actions are to be evaluated through a decision analysis. Using the Bayesian approach to stock assessment and decision analysis, it becomes possible to admit the full range of uncertainty and to use the collective historical experience of fisheries science when estimating the consequences of proposed management actions. Recent advances in computing algorithms and power have allowed methods based on the Bayesian approach to be used even for fairly complex stock assessment models and to be within the reach of most stock assessment scientists. However, to avoid coming to ill-founded conclusions, care must be taken when selecting prior distributions; in particular, the selection of priors designed to be noninformative with respect to quantities of interest to management is problematic. The arguments of the paper are illustrated using New Zealand's western stock of hoki, Macruronus novaezelandiae (Merlucciidae), and the Bering-Chukchi-Beaufort Seas stock of bowhead whales as examples.

10.
This paper develops Bayesian sample size formulae for experiments comparing two groups, where relevant preexperimental information from multiple sources can be incorporated in a robust prior to support both the design and analysis. We use commensurate predictive priors for borrowing of information and further place Gamma mixture priors on the precisions to account for preliminary belief about the pairwise (in)commensurability between parameters that underpin the historical and new experiments. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or length of a defined density region. Our Bayesian methodology can be applied to circumstances that compare two normal means, proportions, or event times. When nuisance parameters (such as variance) in the new experiment are unknown, a prior distribution can further be specified based on preexperimental data. Exact solutions are available based on most of the criteria considered for Bayesian sample size determination, while a search procedure is described in cases for which there are no closed-form expressions. We illustrate the application of our sample size formulae in the design of clinical trials, where pretrial information is available to be leveraged. Hypothetical data examples, motivated by a rare-disease trial with an elicited expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.
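One of the criteria described above, controlling the length of a posterior credible region, has a closed form in the simplest two-arm normal case with known variance. A minimal sketch with hypothetical prior settings (the paper's commensurate and Gamma-mixture priors are not reproduced here):

```python
import math

def cri_length(n, sigma=1.0, s0=2.0):
    """Length of the central 95% posterior interval for mu1 - mu2, given n per
    arm, known data sd sigma, and independent N(m, s0^2) priors on each mean."""
    post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)   # per-arm posterior variance
    return 2 * 1.96 * math.sqrt(2 * post_var)       # difference of the two arms

target = 0.5                 # required interval length (hypothetical)
n = 1
while cri_length(n) > target:
    n += 1
print(f"smallest per-arm n with 95% CrI length <= {target}: {n}")
```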

11.
There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the “contraction bias”, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of the most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance with atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of Bayesian inference. We propose a biologically plausible model in which the decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations in failing to fully adapt to novel environments.
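The proposed comparison model can be simulated in a few lines. The sketch below (parameter values illustrative, not fitted to the participants' data) compares the second tone against a trace in which the first tone is mixed with an exponentially decaying average of past tones, and reproduces the contraction-bias signature: accuracy depends jointly on whether the first tone lies below or above the stimulus mean and on the trial type.

```python
import numpy as np

rng = np.random.default_rng(2)
w, noise = 0.3, 0.2      # weight of stimulus history and sensory noise (assumed)
past = 0.0               # exponentially decaying average of past first tones
acc = {}                 # accuracy keyed by (f1 half, trial type)

for _ in range(50_000):
    f1 = rng.uniform(-1.0, 1.0)              # first tone (log-frequency units)
    higher = rng.random() < 0.5
    f2 = f1 + (0.15 if higher else -0.15)    # second tone above or below f1
    trace = (1 - w) * f1 + w * past          # memory of f1 contracts to the past
    resp_higher = f2 + rng.normal(0.0, noise) > trace
    key = ("f1 low" if f1 < 0 else "f1 high",
           "f2 higher" if higher else "f2 lower")
    acc.setdefault(key, []).append(resp_higher == higher)
    past = 0.8 * past + 0.2 * f1             # update the decaying history

for key in sorted(acc):
    print(key, f"accuracy = {np.mean(acc[key]):.3f}")
```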

12.
When making Bayesian inferences we need to elicit an expert's opinion to set up the prior distribution. For applications in clinical trials, we study this problem with binary variables. A critical and often ignored issue in the process of eliciting priors in clinical trials is that medical investigators can seldom specify the prior quantities with precision. In this paper, we discuss several methods of eliciting beta priors from clinical information, and we use simulations to conduct sensitivity analyses of the effect of imprecise assessment of the prior information. These results provide useful guidance for choosing methods of eliciting the prior information in practice.
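A minimal sketch of one elicitation route discussed in this literature (illustrative, not the paper's exact method): recover a Beta prior from a clinician's stated most-likely response rate (the mode) and a statement like "90% sure the rate is below q", then perturb the inputs to probe sensitivity to imprecise assessment.

```python
from scipy import stats, optimize

def beta_from_mode_quantile(mode, q, prob=0.90):
    """Find Beta(a, b) with the given mode whose `prob` quantile equals q."""
    def gap(ess):                      # ess = a + b - 2 controls concentration
        a = 1 + mode * ess
        b = 1 + (1 - mode) * ess
        return stats.beta(a, b).ppf(prob) - q
    ess = optimize.brentq(gap, 0.01, 1e4)
    return 1 + mode * ess, 1 + (1 - mode) * ess

# Perturbed elicited inputs (hypothetical) to check sensitivity of the prior.
for mode, q in [(0.30, 0.50), (0.30, 0.45), (0.35, 0.50)]:
    a, b = beta_from_mode_quantile(mode, q)
    print(f"mode={mode}, P90={q}: Beta({a:.1f}, {b:.1f}), mean={a / (a + b):.3f}")
```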

13.
One prevalent goal within clinical trials is to determine whether a combination of two drugs is more effective than each of its components. Many researchers have addressed this issue for fixed-dose combination trials using frequentist hypothesis testing techniques, and several have incorporated prior information from sources such as Phase II trials or expert opinion. The Bayesian approach to the general selection problem naturally accommodates the need to utilize such information. It is useful in the dose combination problem because it does not rely on a nuisance parameter that affects the power of frequentist procedures. We show that hierarchical Bayesian methods may be easily applied to this problem, yielding the probability that a drug combination is superior to its components. Moreover, we present methods that may be implemented using readily available software for numerical integration, as well as ones that incorporate Markov chain Monte Carlo methods.
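The quantity these methods deliver, the posterior probability that the combination beats both components, is easy to approximate by Monte Carlo in the simplest conjugate setting. A sketch with hypothetical counts and independent flat-prior Beta posteriors (the paper's hierarchical model shares information across arms and is richer than this):

```python
import numpy as np

rng = np.random.default_rng(3)
counts = {"A": (18, 60), "B": (21, 60), "AB": (30, 60)}  # (responders, n), toy

draws = {
    arm: rng.beta(1 + r, 1 + n - r, size=100_000)        # Beta posterior draws
    for arm, (r, n) in counts.items()
}
p_superior = np.mean((draws["AB"] > draws["A"]) & (draws["AB"] > draws["B"]))
print(f"P(combination superior to both components | data) = {p_superior:.3f}")
```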

14.
The degree of overdiagnosis in common cancer screening trials is uncertain owing to inadequate trial designs and to the varying definitions and methods used to estimate overdiagnosis. We therefore aimed to quantify the risk of overdiagnosis for the most widely implemented cancer screening programmes, and to assess how design limitations and biases in cancer screening trials affect estimates of overdiagnosis, by conducting an overview and re-analysis of systematic reviews of cancer screening. We searched PubMed and the Cochrane Library from their inception dates to November 29, 2021. Eligible studies were systematic reviews of randomised trials comparing cancer screening interventions to no screening that reported cancer incidence for both trial arms. We extracted data on study characteristics and cancer incidence and assessed risk of bias using the Cochrane Collaboration's risk of bias tool. We included 19 trials described in 30 articles, reporting results for the following types of screening: mammography for breast cancer; chest X-ray or low-dose CT for lung cancer; alpha-foetoprotein and ultrasound for liver cancer; digital rectal examination, prostate-specific antigen, and transrectal ultrasound for prostate cancer; and CA-125 testing and/or ultrasound for ovarian cancer. No trials on screening for melanoma were eligible. Only one trial (5%) was at low risk of bias in all domains, which led us to conduct a post-hoc meta-analysis excluding trials at high risk of bias in critical domains; the extent of overdiagnosis ranged from 17% to 38% across cancer screening programmes. We conclude that there is a significant risk of overdiagnosis in the included randomised trials of cancer screening. Trials were generally not designed to estimate overdiagnosis, and many had a high risk of biases that may draw estimates of overdiagnosis towards the null. In effect, the true extent of overdiagnosis due to cancer screening is likely underestimated.

15.
Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common, because it often represents the sole source of information. There is thus a need for statistical methods that explicitly incorporate expert knowledge and can leverage this information while properly accounting for the associated uncertainty. Studies of cause-specific mortality provide an example of the implicit use of expert knowledge: when causes of death are uncertain, they are assigned based on the observer's view of the most likely cause. To make this use of expert knowledge and the associated uncertainty explicit, we developed a statistical model for estimating cause-specific mortality using a data-augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event we elicited the observer's belief about the cause of death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, the method extends to any event-time analysis with multiple event types for which there is uncertainty about the true outcome. We conducted simulations to determine how our framework compares with traditional approaches that use expert knowledge implicitly and assume that cause of death is specified accurately. The simulation results supported including observer uncertainty in cause-of-death assignment when modelling cause-specific mortality, improving model performance and inference. Finally, we applied both our model and a traditional method to cause-specific survival data for white-tailed deer and compared the results. Model selection results changed between the two approaches, and incorporating observer uncertainty about cause of death increased the variability associated with parameter estimates relative to the traditional approach. Such differences can affect reported results, so it is critical to incorporate expert knowledge explicitly in statistical methods to ensure rigorous inference.
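The data-augmentation step can be sketched directly (toy numbers, not the white-tailed deer data): each mortality carries the observer's elicited probability vector over causes, and repeatedly drawing cause assignments from those vectors propagates observer uncertainty into the cause-specific estimates, unlike hard assignment to the most likely cause.

```python
import numpy as np

rng = np.random.default_rng(4)
causes = ["predation", "starvation", "vehicle"]
# Observer-elicited P(cause) for each of 6 deaths (rows sum to 1).
elicited = np.array([
    [0.9, 0.1, 0.0], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
    [0.5, 0.4, 0.1], [0.1, 0.1, 0.8], [0.3, 0.6, 0.1],
])

hard = elicited.argmax(axis=1)                     # "traditional" hard assignment
hard_freq = np.bincount(hard, minlength=3) / len(hard)

# Monte Carlo over cause assignments drawn from the elicited probabilities.
samples = np.array([
    np.bincount([rng.choice(3, p=p) for p in elicited], minlength=3) / len(elicited)
    for _ in range(2000)
])

for j, c in enumerate(causes):
    lo, hi = np.percentile(samples[:, j], [2.5, 97.5])
    print(f"{c:10s}: hard = {hard_freq[j]:.2f}, "
          f"augmented mean = {samples[:, j].mean():.2f} ({lo:.2f} to {hi:.2f})")
```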

16.
A major drawback of epidemiological ecological studies, in which associations between area-level summaries of risk and exposure are used to make inferences about individual risk, is the difficulty of characterizing within-area variability in exposure and confounder variables. To avoid ecological bias, samples of individual exposure/confounder data within each area are required; unfortunately, these may be difficult or expensive to obtain, particularly if large samples are needed. In this paper we propose a new approach suitable for use with small samples. We combine a Bayesian nonparametric Dirichlet process prior with an estimating-functions approach and show that this model gives a compromise between two previously described methods. The method is investigated using simulated data, and a practical illustration is provided through an analysis of lung cancer mortality and residential radon exposure in counties of Minnesota. We conclude that good quality prior information about the exposure/confounder distributions and a large between- to within-area variability ratio are required for an ecological study to be feasible using only small samples of individual data.

17.
We present a new method to efficiently estimate very large numbers of p-values using empirically constructed null distributions of a test statistic. The need to evaluate a very large number of p-values is increasingly common with modern genomic data, and when interaction effects are of interest, the number of tests can easily run into billions. When the asymptotic distribution is not easily available, permutations are typically used to obtain p-values but these can be computationally infeasible in large problems. Our method constructs a prediction model to obtain a first approximation to the p-values and uses Bayesian methods to choose a fraction of these to be refined by permutations. We apply and evaluate our method on the study of association between 2-way interactions of genetic markers and colorectal cancer using the data from the first phase of a large, genome-wide case-control study. The results show enormous computational savings as compared to evaluating a full set of permutations, with little decrease in accuracy.
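The screen-then-refine logic can be illustrated with a toy stand-in (ours, not the authors' prediction model): approximate every p-value with a cheap asymptotic formula, then spend permutations only on the tests flagged as small.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_tests, n_obs, n_perm = 5_000, 50, 2_000
x = rng.normal(size=(n_tests, n_obs))            # toy data, one row per test

t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n_obs))
p_approx = 2 * stats.norm.sf(np.abs(t))          # cheap first approximation

refine = np.where(p_approx < 0.01)[0]            # only ~1% go to permutations
p_final = p_approx.copy()
for i in refine:
    null_t = np.empty(n_perm)
    for b in range(n_perm):
        xb = x[i] * rng.choice([-1.0, 1.0], size=n_obs)   # sign-flip null
        null_t[b] = xb.mean() / (xb.std(ddof=1) / np.sqrt(n_obs))
    p_final[i] = (1 + np.sum(np.abs(null_t) >= abs(t[i]))) / (n_perm + 1)

print(f"refined {len(refine)} of {n_tests} tests by permutation")
```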

18.
We propose a new statistical method for constructing a genetic network from microarray gene expression data by using a Bayesian network. An essential point of Bayesian network construction is the estimation of the conditional distribution of each random variable. We consider fitting nonparametric regression models with heterogeneous error variances to the microarray gene expression data to capture the nonlinear structures between genes. Selecting the optimal graph, which gives the best representation of the system among genes, remains a problem to be solved; we theoretically derive a new graph selection criterion from a Bayesian approach in general situations. The proposed method includes previous methods based on Bayesian networks. We demonstrate its effectiveness through the analysis of Saccharomyces cerevisiae gene expression data newly obtained by disrupting 100 genes.

19.
Statistical analyses are used in many fields of genetic research. Most geneticists are taught classical statistics, which includes hypothesis testing, estimation and the construction of confidence intervals; this framework has proved more than satisfactory in many ways. What does a Bayesian framework have to offer geneticists? Its utility lies in offering a more direct approach to some questions and the incorporation of prior information. It can also provide a more straightforward interpretation of results. The utility of a Bayesian perspective, especially for complex problems, is becoming increasingly clear to the statistics community; geneticists are also finding this framework useful and are increasingly utilizing the power of this approach.

20.
Evaluating the sustainability of hunting is key to the conservation of species exploited for bushmeat. Researchers are often hampered by a lack of basic biological data, the usual response to which is to develop sustainability indices based on highly simplified population models. However, the standard indices in the bushmeat literature do not perform well under realistic conditions of uncertainty, bias in parameter estimation, and habitat loss. Another possible approach to estimating the sustainability of hunting under uncertainty is to use Bayesian statistics, but this is mathematically demanding. Red listing of threatened species has to be carried out in extremely data-poor situations: uncertainty has been incorporated into this process in a relatively simple and intuitive way using fuzzy numbers. The current methods for estimating sustainability of bushmeat hunting also do not incorporate spatial heterogeneity. No-take areas are one management tool that can address uncertainty in a spatially explicit way.
