Similar Articles
20 similar articles found (search time: 15 ms)
1.
Uncertainty management for the evaluation of evidence based on linguistic and conceptual data is taking advantage of developments in the Dempster-Shafer (DS) theory of evidence, possibility theory and fuzzy logic. The DS theory offers the capability to assess the uncertainty of different subsets of assertions in a domain and the way in which uncertainty is affected by accumulating evidence. The DS theory goes beyond probability theory in its ability to represent ignorance about certain aspects of a situation. However, the theory is very sensitive to the numerical assessments provided by users and can lead to intuitively unexpected and even undesirable results. Certainty factors are widely used in various expert systems. Their definition and updating may follow either a probabilistic model or a fuzzy set-theoretic concept.
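As an illustration of how DS theory pools accumulating evidence, the following is a minimal sketch of Dempster's rule of combination; the frame of discernment and the mass values are hypothetical, not drawn from any particular expert system. Because the combined masses are renormalised by one minus the conflict, small changes in the input assessments can noticeably shift the result, which is the sensitivity noted above.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions given as dicts
    mapping frozenset subsets of the frame to mass values."""
    unnormalised = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            unnormalised[inter] = unnormalised.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in unnormalised.items()}

# Hypothetical frame of discernment {A, B, C} and two items of evidence.
frame = frozenset({"A", "B", "C"})
m1 = {frozenset({"A"}): 0.6, frame: 0.4}        # evidence 1: 0.6 for A, 0.4 ignorance
m2 = {frozenset({"A", "B"}): 0.7, frame: 0.3}   # evidence 2: 0.7 for {A, B}, 0.3 ignorance

print(combine(m1, m2))
```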

2.
Using models to simulate and analyze biological networks requires principled approaches to parameter estimation and model discrimination. We use Bayesian and Monte Carlo methods to recover the full probability distributions of free parameters (initial protein concentrations and rate constants) for mass‐action models of receptor‐mediated cell death. The width of the individual parameter distributions is largely determined by non‐identifiability but covariation among parameters, even those that are poorly determined, encodes essential information. Knowledge of joint parameter distributions makes it possible to compute the uncertainty of model‐based predictions whereas ignoring it (e.g., by treating parameters as a simple list of values and variances) yields nonsensical predictions. Computing the Bayes factor from joint distributions yields the odds ratio (~20‐fold) for competing ‘direct’ and ‘indirect’ apoptosis models having different numbers of parameters. Our results illustrate how Bayesian approaches to model calibration and discrimination combined with single‐cell data represent a generally useful and rigorous approach to discriminate between competing hypotheses in the face of parametric and topological uncertainty.
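To see why covariation among poorly determined parameters matters, here is a minimal sketch on a hypothetical two-parameter toy model (not the mass-action apoptosis model of the paper): sampling from the joint posterior versus treating the parameters as an independent list of values and variances gives very different prediction spreads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior over two rate-like parameters that are individually
# poorly determined but strongly anti-correlated (their product is well determined).
mean = np.array([1.0, 1.0])
cov = np.array([[0.5, -0.45],
                [-0.45, 0.5]])
joint = rng.multivariate_normal(mean, cov, size=10_000)

# A model prediction that depends on the product of the two parameters.
pred_joint = joint[:, 0] * joint[:, 1]

# "List of values and variances": resample each parameter independently
# from its marginal, discarding the covariation.
indep = np.column_stack([rng.normal(mean[i], np.sqrt(cov[i, i]), 10_000)
                         for i in range(2)])
pred_indep = indep[:, 0] * indep[:, 1]

print("prediction sd, joint posterior    :", round(pred_joint.std(), 3))
print("prediction sd, independent margins:", round(pred_indep.std(), 3))
```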

3.
Mathematical models have substantially improved our ability to predict the response of a complex biological system to perturbation, but their use is typically limited by difficulties in specifying model topology and parameter values. Additionally, incorporating entities across different biological scales ranging from molecular to organismal in the same model is not trivial. Here, we present a framework called "querying quantitative logic models" (Q2LM) for building and asking questions of constrained fuzzy logic (cFL) models. cFL is a recently developed modeling formalism that uses logic gates to describe influences among entities, with transfer functions to describe quantitative dependencies. Q2LM does not rely on dedicated data to train the parameters of the transfer functions, and it permits straightforward incorporation of entities at multiple biological scales. The Q2LM framework can be employed to ask questions such as: Which therapeutic perturbations accomplish a designated goal, and under what environmental conditions will these perturbations be effective? We demonstrate the utility of this framework for generating testable hypotheses in two examples: (i) an intracellular signaling network model, and (ii) a model for the pharmacokinetics and pharmacodynamics of cell-cytokine interactions; in the latter, we validate hypotheses concerning the molecular design of granulocyte colony-stimulating factor.
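A minimal sketch of the constrained fuzzy logic idea — logic gates over normalised activities with transfer functions on the edges. The gate semantics (min/max), the Hill-type transfer function, the parameter values and the species names are illustrative assumptions and are not taken from the Q2LM implementation.

```python
def hill(x, k=0.5, n=3.0):
    """Hill-type transfer function mapping an input activity in [0, 1]
    to an edge weight in [0, 1]."""
    return x**n / (k**n + x**n)

def fuzzy_and(*inputs):   # fuzzy AND: minimum of the incoming edge weights
    return min(inputs)

def fuzzy_or(*inputs):    # fuzzy OR: maximum of the incoming edge weights
    return max(inputs)

# Hypothetical three-input motif: the downstream node is active when
# (ligand AND receptor) are present OR the inhibitor is absent.
ligand, receptor, inhibitor = 0.9, 0.7, 0.2
downstream = fuzzy_or(fuzzy_and(hill(ligand), hill(receptor)),
                      1.0 - hill(inhibitor))
print(round(downstream, 3))
```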

4.
As a model system for the understanding of human cancer, the mouse has proved immensely valuable. Indeed, studies of mouse models have helped to define the nature of cancer as a genetic disease and demonstrated the causal role of genetic events found in tumors. As the scientific and medical community's understanding of human cancer becomes more sophisticated, however, limitations and potential weaknesses of existing models are revealed. How valid are these murine models for the understanding and treatment of human cancer? The answer, it appears, depends on the nature of the research requirement. Certain models are better suited for particular applications. Using novel molecular tools and genetic strategies, improved models have recently been described that accurately mimic many aspects of human cancer.

6.
Bayes factors comparing two or more competing hypotheses are often estimated by constructing a Markov chain Monte Carlo (MCMC) sampler to explore the joint space of the hypotheses. To obtain efficient Bayes factor estimates, Carlin and Chib (1995, Journal of the Royal Statistical Society, Series B, 57, 473-484) suggest adjusting the prior odds of the competing hypotheses so that the posterior odds are approximately one, then estimating the Bayes factor by simple division. A byproduct is that one often produces several independent MCMC chains, only one of which is actually used for estimation. We extend this approach to incorporate output from multiple chains by proposing three statistical models. The first assumes independent sampler draws and models the hypothesis indicator function using logistic regression for various choices of the prior odds. The two more complex models relax the independence assumption by allowing for higher-lag dependence within the MCMC output. These models allow us to estimate the uncertainty in our Bayes factor calculation and to fully use several different MCMC chains even when the prior odds of the hypotheses vary from chain to chain. We apply these methods to calculate Bayes factors for tests of monophyly in two phylogenetic examples. The first example explores the relationship of an unknown pathogen to a set of known pathogens. Identification of the unknown's monophyletic relationship may affect antibiotic choice in a clinical setting. The second example focuses on HIV recombination detection. For potential clinical application, these types of analyses must be completed as efficiently as possible.
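The core identity exploited here is that the Bayes factor equals the posterior odds divided by the prior odds, so a sampler run with adjusted prior odds can be converted back by simple division. A minimal sketch with hypothetical indicator counts follows; the logistic-regression and higher-lag models of the paper are not reproduced.

```python
# Hypothetical output of a model-jumping MCMC sampler: counts of iterations
# spent in hypothesis H1 versus H2, run under adjusted prior odds.
n_h1, n_h2 = 5200, 4800            # posterior visits (approximately balanced by design)
prior_odds_h1_vs_h2 = 1.0 / 20.0   # the prior odds the sampler was actually run with

posterior_odds = n_h1 / n_h2
bayes_factor = posterior_odds / prior_odds_h1_vs_h2   # BF = posterior odds / prior odds
print(f"estimated Bayes factor for H1 over H2: {bayes_factor:.1f}")
```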

7.
Parallel processing of laboratory tests across more than one instrument platform permits dealing with increasing workloads, but it broadens the uncertainty of measurement; minimising measurement uncertainty means keeping assay performances continuously aligned. Important questions are: Why is there a need to demonstrate "acceptable alignment" between methods/instruments? What methods/tools can be used to test method/instrument alignment, and how can adjustments be made? What is an "acceptable" alignment? How often should alignments be checked, and what is the reasoning for this?
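One common tool for testing alignment between two instruments is to run split patient samples on both and summarise the paired differences (mean bias and limits of agreement). The sketch below assumes hypothetical paired results and a hypothetical allowable bias; in practice the acceptance criterion would come from analytical performance specifications, not from this example.

```python
import numpy as np

# Hypothetical paired results for the same specimens on two analysers (same units).
analyser_a = np.array([4.1, 5.6, 7.2, 3.9, 6.1, 8.4, 5.0, 4.7])
analyser_b = np.array([4.3, 5.5, 7.6, 4.0, 6.4, 8.2, 5.2, 4.9])

diff = analyser_b - analyser_a
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

print(f"mean bias: {bias:.3f}")
print(f"95% limits of agreement: {loa[0]:.3f} to {loa[1]:.3f}")

# Hypothetical acceptance criterion: flag the analyser pair for re-alignment
# if the mean bias exceeds a predefined allowable bias.
allowable_bias = 0.25
print("re-alignment needed" if abs(bias) > allowable_bias else "alignment acceptable")
```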

8.
Li Z, Sillanpää MJ. Genetics. 2012;190(1):231-249.
Bayesian hierarchical shrinkage methods have been widely used for quantitative trait locus mapping. From the computational perspective, the application of the Markov chain Monte Carlo (MCMC) method is not optimal for high-dimensional problems such as the ones arising in epistatic analysis. Maximum a posteriori (MAP) estimation can be a faster alternative, but it usually produces only point estimates without providing any measures of uncertainty (i.e., interval estimates). The variational Bayes method, stemming from the mean field theory in theoretical physics, is regarded as a compromise between MAP and MCMC estimation, which can be efficiently computed and produces the uncertainty measures of the estimates. Furthermore, variational Bayes methods can be regarded as the extension of traditional expectation-maximization (EM) algorithms and can be applied to a broader class of Bayesian models. Thus, the use of variational Bayes algorithms based on three hierarchical shrinkage models, including Bayesian adaptive shrinkage, Bayesian LASSO, and extended Bayesian LASSO, is proposed here. These methods performed generally well and were found to be highly competitive with their MCMC counterparts in our example analyses. The use of posterior credible intervals and permutation tests is considered for deciding between quantitative trait loci (QTL) and non-QTL. The performance of the presented models is also compared with the R/qtlbim and R/BhGLM packages, using a previously studied simulated public epistatic data set.
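To illustrate what the variational Bayes compromise looks like in the simplest setting, the sketch below runs mean-field updates on the textbook problem of inferring the mean and precision of a Gaussian. It is not the hierarchical shrinkage QTL model of the paper, and the data and hyperparameters are hypothetical; the point is that, unlike a MAP point estimate, the factorised posterior also delivers an interval.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=50)          # hypothetical data
N, xbar = len(x), x.mean()

# Hyperparameters of a normal-gamma-style prior:
# mu ~ N(mu0, (lam0*tau)^-1), tau ~ Gamma(a0, b0)
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0

# Mean-field factorisation q(mu, tau) = q(mu) q(tau); iterate the coupled updates.
E_tau = a0 / b0
for _ in range(50):
    # q(mu) = Normal(mu_N, 1/lam_N)
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # q(tau) = Gamma(a_N, b_N)
    a_N = a0 + (N + 1) / 2
    b_N = b0 + 0.5 * (np.sum((x - mu_N) ** 2) + N / lam_N
                      + lam0 * ((mu_N - mu0) ** 2 + 1 / lam_N))
    E_tau = a_N / b_N

# Approximate posterior mean and a cheap 95% interval for mu — the kind of
# uncertainty measure that plain MAP estimation would not provide.
sd_mu = np.sqrt(1 / lam_N)
print(f"E[mu] = {mu_N:.3f}, 95% interval ~ ({mu_N - 1.96*sd_mu:.3f}, {mu_N + 1.96*sd_mu:.3f})")
```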

9.
Background: Prostate-specific antigen (PSA) testing for prostate cancer is controversial. There are unresolved tensions and disagreements amongst experts, and clinical guidelines conflict. This both reflects and generates significant uncertainty about the appropriateness of screening. Little is known about general practitioners’ (GPs’) perspectives and experiences in relation to PSA testing of asymptomatic men. In this paper we asked the following questions: (1) What are the primary sources of uncertainty as described by GPs in the context of PSA testing? (2) How do GPs experience and respond to different sources of uncertainty? Methods: This was a qualitative study that explored general practitioners’ current approaches to, and reasoning about, PSA testing of asymptomatic men. We draw on accounts generated from interviews with 69 general practitioners located in Australia (n = 40) and the United Kingdom (n = 29). The interviews were conducted in 2013–2014. Data were analysed using grounded theory methods. Uncertainty in PSA testing was identified as a core issue. Findings: Australian GPs reported experiencing substantially more uncertainty than UK GPs. This seemed partly explainable by notable differences in conditions of practice between the two countries. Using Han et al.’s taxonomy of uncertainty as an initial framework, we first outline the different sources of uncertainty GPs (mostly Australian) described encountering in relation to prostate cancer screening and what the uncertainty was about. We then suggest an extension to Han et al.’s taxonomy based on our analysis of data relating to the varied ways that GPs manage uncertainties in the context of PSA testing. We outline three broad strategies: (1) taking charge of uncertainty; (2) engaging others in managing uncertainty; and (3) transferring the responsibility for reducing or managing some uncertainties to other parties. Conclusion: Our analysis suggests some GPs experienced the uncertainties associated with ambiguous guidance and the complexities of their situation as professionals with responsibilities to patients as considerably burdensome. This raises important questions about responsibility for uncertainty. In Australia in particular, GPs feel insufficiently supported by the health care system to practice in ways that are recognisably consistent with ‘evidence based’ professional standards and appropriate for patients. More work is needed to clarify under what circumstances and how uncertainty should be communicated. Closer attention to different types and aspects of the uncertainty construct could be useful.

10.
Application of uncertainty and variability in LCA
As yet, the application of an uncertainty and variability analysis is not common practice in LCAs. A proper analysis will be facilitated when it is clear which types of uncertainties and variabilities exist in LCAs and which tools are available to deal with them. Therefore, a framework is developed to classify types of uncertainty and variability in LCAs. Uncertainty is divided into (1) parameter uncertainty, (2) model uncertainty, and (3) uncertainty due to choices, while variability covers (4) spatial variability, (5) temporal variability, and (6) variability between objects and sources. A tool to deal with parameter uncertainty and variability between objects and sources in both the inventory and the impact assessment is probabilistic simulation. Uncertainty due to choices can be dealt with in a scenario analysis or reduced by standardisation and peer review. The feasibility of dealing with temporal and spatial variability is limited, implying model uncertainty in LCAs. Other model uncertainties can be reduced partly by more sophisticated modelling, such as the use of non-linear inventory models in the inventory and multimedia models in the characterisation phase.
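A minimal sketch of the probabilistic-simulation tool mentioned for parameter uncertainty: sample uncertain inventory parameters from assumed distributions and propagate them to an impact score. The emission amounts, lognormal spreads and characterisation factors below are hypothetical and stand in for a real inventory.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Hypothetical inventory parameters with lognormal uncertainty
# (kg emitted per functional unit).
co2 = rng.lognormal(mean=np.log(12.0), sigma=0.10, size=n)
ch4 = rng.lognormal(mean=np.log(0.30), sigma=0.25, size=n)

# Hypothetical characterisation factors (kg CO2-eq per kg emitted).
gwp = {"co2": 1.0, "ch4": 28.0}

# Impact score per functional unit, one value per Monte Carlo draw.
score = gwp["co2"] * co2 + gwp["ch4"] * ch4

low, high = np.percentile(score, [2.5, 97.5])
print(f"median impact: {np.median(score):.2f} kg CO2-eq")
print(f"95% interval : {low:.2f} to {high:.2f} kg CO2-eq")
```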

11.
The recent development of Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) techniques has facilitated the exploration of parameter-rich evolutionary models. At the same time, stochastic models have become more realistic (and complex) and have been extended to new types of data, such as morphology. Based on this foundation, we developed a Bayesian MCMC approach to the analysis of combined data sets and explored its utility in inferring relationships among gall wasps based on data from morphology and four genes (nuclear and mitochondrial, ribosomal and protein coding). Examined models range in complexity from those recognizing only a morphological and a molecular partition to those having complex substitution models with independent parameters for each gene. Bayesian MCMC analysis deals efficiently with complex models: convergence occurs faster and more predictably for complex models, mixing is adequate for all parameters even under very complex models, and the parameter update cycle is virtually unaffected by model partitioning across sites. Morphology contributed only 5% of the characters in the data set but nevertheless influenced the combined-data tree, supporting the utility of morphological data in multigene analyses. We used Bayesian criteria (Bayes factors) to show that process heterogeneity across data partitions is a significant model component, although not as important as among-site rate variation. More complex evolutionary models are associated with more topological uncertainty and less conflict between morphology and molecules. Bayes factors sometimes favor simpler models over considerably more parameter-rich models, but the best model overall is also the most complex and Bayes factors do not support exclusion of apparently weak parameters from this model. Thus, Bayes factors appear to be useful for selecting among complex models, but it is still unclear whether their use strikes a reasonable balance between model complexity and error in parameter estimates.

12.
Use of models for integrated assessment of ecosystem health
An argument is presented for a greater use of numerical models in integrated assessment of ecosystem health. Ecosystem health has many facets which are interconnected and interact, and which can only be measured in integrated assessments. Modelling is an essential feature of integrated assessment, being one of the few ways human groups can form a consensus understanding of the complex dynamics which occur. Functional assumptions are made explicit. The argument is expanded in response to a series of key questions: What is ecosystem health? How do we do integrated assessments? What is modelling? What are some successful examples? What should one conclude? The answers are illustrated with references to the International Joint Commission's program to develop and implement Remedial Action Plans for the Great Lakes' Areas of Concern, particularly in the Bay of Quinte, Lake Ontario. Three recommendations are offered: (i) increase the use of models, (ii) build models with existing data and hypotheses before initiating new programs, and (iii) allow for iterative model development but be prepared to build a new model when a new problem arises.

13.
A common problem in molecular phylogenetics is choosing a model of DNA substitution that does a good job of explaining the DNA sequence alignment without introducing superfluous parameters. A number of methods have been used to choose among a small set of candidate substitution models, such as the likelihood ratio test, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and Bayes factors. Current implementations of any of these criteria suffer from the limitation that only a small set of models are examined, or that the test does not allow easy comparison of non-nested models. In this article, we expand the pool of candidate substitution models to include all possible time-reversible models. This set includes seven models that have already been described. We show how Bayes factors can be calculated for these models using reversible jump Markov chain Monte Carlo, and apply the method to 16 DNA sequence alignments. For each data set, we compare the model with the best Bayes factor to the best models chosen using AIC and BIC. We find that the best model under any of these criteria is not necessarily the most complicated one; models with an intermediate number of substitution types typically do best. Moreover, almost all of the models that are chosen as best do not constrain a transition rate to be the same as a transversion rate, suggesting that it is the transition/transversion rate bias that plays the largest role in determining which models are selected. Importantly, the reversible jump Markov chain Monte Carlo algorithm described here allows estimation of phylogeny (and other phylogenetic model parameters) to be performed while accounting for uncertainty in the model of DNA substitution.
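The space of "all possible time-reversible models" can be read as the set of ways of grouping the six exchangeability rates into shared classes, i.e. the set partitions of {AC, AG, AT, CG, CT, GT}. The sketch below enumerates that space; the reversible-jump MCMC used to average over it is not reproduced here, and the mapping of groupings to named models in the comments is only indicative.

```python
def partitions(items):
    """Generate all set partitions of a list of items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # place `first` into each existing block ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + part

rates = ["AC", "AG", "AT", "CG", "CT", "GT"]   # the six exchangeability rates
models = list(partitions(rates))
print(len(models))   # each grouping of rate classes defines one time-reversible model

# Familiar members of this space (base frequencies treated separately):
#   all rates in one class           -> JC69/F81-style
#   transitions vs transversions     -> K80/HKY85-style
#   all six rates in their own class -> GTR
```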

14.
MOTIVATION: Molecular biology databases hold a large number of empirical facts about many different aspects of biological entities. That data is static in the sense that one cannot ask a database 'What effect has protein A on gene B?' or 'Do gene A and gene B interact, and if so, how?'. Those questions require an explicit model of the target organism. Traditionally, biochemical systems are modelled using kinetics and differential equations in a quantitative simulator. For many biological processes, however, detailed quantitative information is not available, only qualitative or fuzzy statements about the nature of interactions. RESULTS: We designed and implemented a qualitative simulation model of lambda phage growth control in Escherichia coli based on the existing simulation environment QSim. Qualitative reasoning can serve as the basis for automatic transformation of the contents of genomic databases into interactive modelling systems that can reason about the relations and interactions of biological entities.
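A minimal sketch of the kind of qualitative statement such a simulator reasons over — signed influences with no rate constants. The gene names come from the lambda phage circuit, but the interaction signs are simplified illustrations and the code is a generic sketch, not the QSim interface.

```python
# Qualitative model: each interaction is only a sign, not a rate constant.
influences = {
    ("cI", "cro"): "-",     # repressor cI inhibits cro (simplified sign)
    ("cro", "cI"): "-",     # cro inhibits cI (simplified sign)
    ("cII", "cI"): "+",     # cII activates cI transcription (simplified sign)
}

def qualitative_effect(source, target, direction):
    """Answer 'what effect has an increase/decrease of `source` on `target`?'
    using only the sign of the recorded interaction."""
    sign = influences.get((source, target))
    if sign is None:
        return "no direct interaction recorded"
    same = (sign == "+")
    goes_up = (direction == "up") == same
    return f"{target} is pushed {'up' if goes_up else 'down'}"

print(qualitative_effect("cII", "cI", "up"))   # -> cI is pushed up
print(qualitative_effect("cI", "cro", "up"))   # -> cro is pushed down
```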

15.
Bayesian inference in ecology
Bayesian inference is an important statistical tool that is increasingly being used by ecologists. In a Bayesian analysis, information available before a study is conducted is summarized in a quantitative model or hypothesis: the prior probability distribution. Bayes’ Theorem uses the prior probability distribution and the likelihood of the data to generate a posterior probability distribution. Posterior probability distributions are an epistemological alternative to P‐values and provide a direct measure of the degree of belief that can be placed on models, hypotheses, or parameter estimates. Moreover, Bayesian information‐theoretic methods provide robust measures of the probability of alternative models, and multiple models can be averaged into a single model that reflects uncertainty in model construction and selection. These methods are demonstrated through a simple worked example. Ecologists are using Bayesian inference in studies that range from predicting single‐species population dynamics to understanding ecosystem processes. Not all ecologists, however, appreciate the philosophical underpinnings of Bayesian inference. In particular, Bayesians and frequentists differ in their definition of probability and in their treatment of model parameters as random variables or estimates of true values. These assumptions must be addressed explicitly before deciding whether or not to use Bayesian methods to analyse ecological data.
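A minimal worked sketch of the prior-to-posterior step described here, using a conjugate beta–binomial example with hypothetical survival data; it shows the kind of direct probability statement about a parameter that the abstract contrasts with P-values.

```python
from scipy import stats

# Prior knowledge about a survival probability, expressed as a Beta distribution
# (hypothetical: roughly centred on 0.5 but fairly vague).
a_prior, b_prior = 2.0, 2.0

# Hypothetical new data: 18 of 25 marked individuals survived the season.
survived, n = 18, 25

# Beta prior + binomial likelihood -> Beta posterior (conjugacy).
posterior = stats.beta(a_prior + survived, b_prior + (n - survived))

print(f"posterior mean survival: {posterior.mean():.3f}")
print(f"95% credible interval  : {posterior.ppf(0.025):.3f} to {posterior.ppf(0.975):.3f}")
# A direct probability statement about the parameter, e.g. P(survival > 0.6 | data):
print(f"P(survival > 0.6 | data) = {1 - posterior.cdf(0.6):.3f}")
```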

16.
This article explains estimation of gene frequencies from a Bayesian viewpoint using prior information. How to obtain Bayes estimators and the highest posterior density credible sets (the Bayesian counterpart to classical confidence intervals) for gene frequencies is described. Tests of hypotheses are also discussed. A readily available mathematical software package is used to demonstrate the computations.
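A minimal sketch of a Bayes estimator and a highest posterior density (HPD) credible set for an allele frequency under a beta–binomial model; the sample counts and the uniform prior are hypothetical, and the HPD set is found by a simple grid search rather than any particular package's routine.

```python
import numpy as np
from scipy import stats

# Hypothetical sample: 80 gene copies scored, 28 carry allele A.
copies, count_a = 80, 28

# Uniform Beta(1, 1) prior on the allele frequency -> Beta posterior.
post = stats.beta(1 + count_a, 1 + copies - count_a)
print(f"Bayes estimator (posterior mean): {post.mean():.3f}")

# Highest posterior density 95% set: the shortest interval containing 0.95 mass.
# For a unimodal Beta this can be found by scanning the lower tail probability.
mass = 0.95
lowers = np.linspace(0.0, 1.0 - mass, 2001)
intervals = np.array([(post.ppf(p), post.ppf(p + mass)) for p in lowers])
widths = intervals[:, 1] - intervals[:, 0]
lo, hi = intervals[np.argmin(widths)]
print(f"95% HPD credible set: ({lo:.3f}, {hi:.3f})")
```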

17.
Information on past land cover in terms of absolute areas of different landscape units (forest, open land, pasture land, cultivated land, etc.) at local to regional scales is needed to test hypotheses and answer questions related to climate change (e.g. feedback effects of land-cover change), archaeological research, and nature conservancy (e.g. management strategy). The palaeoecological technique best suited to achieve quantitative reconstruction of past vegetation is pollen analysis. A simulation approach developed by Sugita (the computer model POLLSCAPE) which uses models based on the theory of pollen analysis is presented together with examples of application. POLLSCAPE has been adopted as the central tool for POLLANDCAL (POLlen/LANdscape CALibration), an international research network focusing on this topic. The theory behind models of the pollen–vegetation relationship and POLLSCAPE is reviewed. The two model outputs which receive greatest attention in this paper are the relevant source area of pollen (RSAP) and pollen loading in mires and lakes. Six examples of application of POLLSCAPE are presented, each of which explores a possible use of the POLLANDCAL tools and a means of validating or evaluating the models with empirical data. The landscape and vegetation factors influencing the size of the RSAP, the importance of pollen productivity estimates (PPEs) for the model outputs, the detection of small and rare patches of plant taxa in pollen records, and quantitative reconstructions of past vegetation and landscapes are discussed on the basis of these examples. The simulation approach is seen to be useful both for exploring different vegetation/landscape scenarios and for refuting hypotheses.

18.
The STAR project: context, objectives and approaches
STAR is a European Commission Framework V project (EVK1-CT-2001-00089). The project aim is to provide practical advice and solutions with regard to many of the issues associated with the Water Framework Directive. This paper provides a context for the STAR research programme through a review of the requirements of the directive and the Common Implementation Strategy responsible for guiding its implementation. The scientific and strategic objectives of STAR are set out in the form of a series of research questions and the reader is referred to the papers in this volume that address those objectives, which include: (a) Which methods or biological quality elements are best able to indicate certain stressors? (b) Which method can be used on which scale? (c) Which method is suited for early and late warnings? (d) How are different assessment methods affected by errors and uncertainty? (e) How can data from different assessment methods be intercalibrated? (f) How can the cost-effectiveness of field and laboratory protocols be optimised? (g) How can boundaries of the five classes of Ecological Status be best set? (h) What contribution can STAR make to the development of European standards? The methodological approaches adopted to meet these objectives are described. These include the selection of the 22 stream-types and 263 sites sampled in 11 countries, the sampling protocols used to sample and survey phytobenthos, macrophytes, macroinvertebrates, fish and hydromorphology, the quality control and uncertainty analyses that were applied, including training, replicate sampling and audit of performance, the development of bespoke software and the project outputs. This paper provides the detailed background information to be referred to in conjunction with most of the other papers in this volume. These papers are divided into seven sections: (1) typology, (2) organism groups, (3) macrophytes and diatoms, (4) hydromorphology, (5) tools for assessing European streams with macroinvertebrates, (6) intercalibration and comparison and (7) errors and uncertainty. The principal findings of the papers in each section and their relevance to the Water Framework Directive are synthesised in short summary papers at the beginning of each section. Additional outputs, including all sampling and laboratory protocols and project deliverables, together with a range of freely downloadable software are available from the project website at www.eu_star.at.

19.
Aim: When hypotheses of historical biogeography are evaluated, age estimates of individual nodes in a phylogeny often have a direct impact on what explanation is concluded to be most likely. Confidence intervals of estimated divergence times obtained in molecular dating analyses are usually very large, but the uncertainty is rarely incorporated in biogeographical analyses. The aim of this study is to use the group Urophylleae, which has a disjunct pantropical distribution, to explore how the uncertainty in estimated divergence times affects conclusions in biogeographical analysis. Two hypotheses are evaluated: (1) long‐distance dispersal from Africa to Asia and the Neotropics, and (2) a continuous distribution in the boreotropics, probably involving migration across the North Atlantic Land Bridge, followed by isolation in equatorial refugia. Location: Tropical and subtropical Asia, tropical Africa, and central and southern tropical America. Methods: This study uses parsimony and Bayesian phylogenetic analyses of chloroplast DNA and nuclear ribosomal DNA data from 56 ingroup species, BEAST molecular dating and a Bayesian approach to dispersal–vicariance analysis (Bayes‐DIVA) to reconstruct the ancestral area of the group, and the dispersal–extinction–cladogenesis method to test biogeographical hypotheses. Results: When the two models of geographic range evolution were compared using the maximum likelihood (ML) tree with mean estimates of divergence times, boreotropical migration was indicated to be much more likely than long‐distance dispersal. Analyses of a large sample of dated phylogenies did, however, show that this result was not consistent. The age estimate of one specific node had a major impact on likelihood values and on which model performed best. The results show that boreotropical migration provides a slightly better explanation of the geographical distribution patterns of extant Urophylleae than long‐distance dispersal. Main conclusions: This study shows that results from biogeographical analyses based on single phylogenetic trees, such as a ML or consensus tree, can be misleading, and that it may be very important to take the uncertainty in age estimates into account. Methods that account for the uncertainty in topology, branch lengths and estimated divergence times are not commonly used in biogeographical inference today but should definitely be preferred in order to avoid unwarranted conclusions.

20.
MOTIVATION: There are often many alternative models of a biochemical system. Distinguishing models and finding the most suitable ones is an important challenge in Systems Biology, since ranking models by experimental evidence helps to judge the support for the working hypotheses underlying each model. Bayes factors are employed as a measure of evidential preference for one model over another. The marginal likelihood is a key component of Bayes factors; however, computing the marginal likelihood is a difficult problem, as it involves integration of nonlinear functions in multidimensional space. There are a number of methods available to compute the marginal likelihood approximately. A detailed investigation of such methods is required to find ones that perform appropriately for biochemical modelling. RESULTS: We assess four methods for estimation of the marginal likelihoods required for computing Bayes factors. The Prior Arithmetic Mean estimator, the Posterior Harmonic Mean estimator, the Annealed Importance Sampling and the Annealing-Melting Integration methods are investigated and compared on a typical case study in Systems Biology. This allows us to understand the stability of the analysis results and make reliable judgements in an uncertain context. We investigate the variance of Bayes factor estimates, and highlight the stability of the Annealed Importance Sampling and the Annealing-Melting Integration methods for the purposes of comparing nonlinear models. AVAILABILITY: Models used in this study are available in SBML format as the supplementary material to this article.
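A minimal sketch of the first two estimators on a conjugate toy model where the exact marginal likelihood is known in closed form, so the estimates can be checked against it; the nonlinear SBML models of the paper are not reproduced, and the data here are hypothetical. The instability of the posterior harmonic mean is exactly the kind of behaviour such a comparison exposes.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=10)          # hypothetical data: y_i ~ N(theta, 1), theta ~ N(0, 1)
n, S = len(y), 100_000

def loglik(theta):                          # log p(y | theta) for an array of theta values
    return norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

# Exact log marginal likelihood for this conjugate model: y ~ N(0, I + 11^T).
exact = multivariate_normal(mean=np.zeros(n), cov=np.eye(n) + 1.0).logpdf(y)

# Prior Arithmetic Mean estimator: average the likelihood over prior draws.
theta_prior = rng.normal(0.0, 1.0, size=S)
pam = logsumexp(loglik(theta_prior)) - np.log(S)

# Posterior Harmonic Mean estimator: harmonic-average the likelihood over posterior draws.
post_mean, post_sd = n * y.mean() / (n + 1), np.sqrt(1.0 / (n + 1))
theta_post = rng.normal(post_mean, post_sd, size=S)
phm = -(logsumexp(-loglik(theta_post)) - np.log(S))

print(f"exact log marginal likelihood : {exact:.3f}")
print(f"prior arithmetic mean estimate: {pam:.3f}")
print(f"posterior harmonic mean est.  : {phm:.3f}   (typically high-variance)")
```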
