Similar Articles
20 similar articles found (search time: 15 ms)
1.
For the computational analysis of biological problems, such as analyzing data, inferring networks and complex models, and estimating model parameters, it is common to use a range of methods based on probabilistic logic constructions, sometimes collectively called machine learning methods. Probabilistic modeling methods such as Bayesian Networks (BN) fall into this class, as do Hierarchical Bayesian Networks (HBN), Probabilistic Boolean Networks (PBN), Hidden Markov Models (HMM), and Markov Logic Networks (MLN). In this review, we describe the most general of these (MLN), and show how the above-mentioned methods are related to MLN and one another by the imposition of constraints and restrictions. This approach allows us to illustrate a broad landscape of constructions and methods, and describe some of the attendant strengths, weaknesses, and constraints of many of these methods. We then provide some examples of their applications to problems in biology and medicine, with an emphasis on genetics. The key concepts needed to picture this landscape of methods are the ideas of probabilistic graphical models, the structures of the graphs, and the scope of the logical language repertoire used (from First-Order Logic [FOL] to Boolean logic). These concepts are interlinked and together define the nature of each of the probabilistic logic methods. Finally, we discuss the initial applications of MLN to genetics, show the relationship to less general methods like BN, and then mention several examples where such methods could be effective in new applications to specific biological and medical problems.
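To make the family of models surveyed in this abstract concrete, the following is a minimal sketch, in plain Python, of the simplest member of that family: a two-node Bayesian Network queried by exact enumeration. The variables and probability tables (Rain, WetGrass) are invented for illustration and are not drawn from the review.

```python
# A two-node Bayesian Network Rain -> WetGrass, the simplest member of the
# probabilistic graphical model family the review surveys. All probability
# values here are illustrative, not taken from any cited study.

P_rain = {True: 0.2, False: 0.8}            # P(Rain)
P_wet_given_rain = {True: 0.9, False: 0.1}  # P(Wet = True | Rain)

def joint(rain: bool, wet: bool) -> float:
    """P(Rain, Wet) factorises along the graph: P(Rain) * P(Wet | Rain)."""
    p_wet = P_wet_given_rain[rain]
    return P_rain[rain] * (p_wet if wet else 1.0 - p_wet)

def posterior_rain_given_wet() -> float:
    """Bayes' rule by enumeration: P(Rain = True | Wet = True)."""
    num = joint(True, True)
    den = joint(True, True) + joint(False, True)
    return num / den

print(round(posterior_rain_given_wet(), 3))  # 0.18 / (0.18 + 0.08) = 0.692
```

The more general formalisms in the review relax or extend exactly this factorised-joint structure; an MLN, for instance, replaces the fixed conditional tables with weighted first-order formulas.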

2.
The growing availability of high-quality genomic annotation has increased the potential for mechanistic insights when the specific variants driving common genome-wide association signals are accurately localized. A range of fine-mapping strategies have been advocated, and specific successes reported, but the overall performance of such approaches, in the face of the extensive linkage disequilibrium that characterizes the human genome, is not well understood. Using simulations based on sequence data from the 1000 Genomes Project, we quantify the extent to which fine-mapping, here conducted using an approximate Bayesian approach, can be expected to lead to useful improvements in causal variant localization. We show that resolution is highly variable between loci, and that performance is severely degraded as the statistical power to detect association is reduced. We confirm that, where causal variants are shared between ancestry groups, further improvements in performance can be obtained in a trans-ethnic fine-mapping design. Finally, using empirical data from a recently published genome-wide association study for ankylosing spondylitis, we provide empirical confirmation of the behaviour of the approximate Bayesian approach and demonstrate that seven of twenty-six loci can be fine-mapped to fewer than ten variants.

3.
Dynamic Causal Modelling (DCM) and the theory of autopoietic systems are two important conceptual frameworks. In this review, we suggest that they can be combined to answer important questions about self-organising systems like the brain. DCM has been developed recently by the neuroimaging community to explain, using biophysical models, how non-invasive brain imaging data are caused by neural processes. It allows one to ask mechanistic questions about the implementation of cerebral processes. In DCM the parameters of biophysical models are estimated from measured data and the evidence for each model is evaluated. This enables one to test different functional hypotheses (i.e., models) for a given data set. Autopoiesis and related formal theories of biological systems as autonomous machines represent a body of concepts with many successful applications. However, autopoiesis has remained largely theoretical and has not penetrated the empiricism of cognitive neuroscience. In this review, we try to show the connections that exist between DCM and autopoiesis. In particular, we propose a simple modification to standard formulations of DCM that includes autonomous processes. The idea is to exploit the system-identification machinery of DCM in neuroimaging to test the face validity of autopoietic theory applied to neural subsystems. We illustrate the theoretical concepts and their implications for interpreting electroencephalographic signals acquired during amygdala stimulation in an epileptic patient. The results suggest that DCM represents a relevant biophysical approach to brain functional organisation, with a potential that is yet to be fully evaluated.

4.
Learning is often understood as an organism's gradual acquisition of the association between a given sensory stimulus and the correct motor response. Mathematically, this corresponds to regressing a mapping between the set of observations and the set of actions. Recently, however, it has been shown both in cognitive and motor neuroscience that humans are not only able to learn particular stimulus-response mappings, but are also able to extract abstract structural invariants that facilitate generalization to novel tasks. Here we show how such structure learning can enhance facilitation in a sensorimotor association task performed by human subjects. Using regression and reinforcement learning models we show that the observed facilitation cannot be explained by these basic models of learning stimulus-response associations. We show, however, that the observed data can be explained by a hierarchical Bayesian model that performs structure learning. In line with previous results from cognitive tasks, this suggests that hierarchical Bayesian inference might provide a common framework to explain both the learning of specific stimulus-response associations and the learning of abstract structures that are shared by different task environments.

5.
Scientists and managers are not the only holders of knowledge regarding environmental issues: other stakeholders such as farmers or fishers also hold relevant empirical knowledge. Thus, new approaches are needed for representing knowledge drawn from multiple sources while still enabling reasoning. Cognitive maps and Bayesian networks are two useful formalisms for knowledge representation. Cognitive maps are powerful graphical models for gathering and displaying knowledge. While they offer an easy means to express individual judgments, drawing inferences from cognitive maps remains a difficult task. Bayesian networks are widely used in decision-making processes that face uncertain information or diagnosis, but they are difficult to elicit. To take advantage of each formalism and to overcome their drawbacks, Bayesian causal maps have been developed. In this approach, cognitive maps are used to build the network and obtain conditional probability tables. We propose here a complete framework applied to a real problem. From the different views of a group of shellfish dredgers about their activity, we derive a decision-facilitating tool that enables scenario testing for fisheries management.

6.
During the last two decades, it has become widely accepted that GABA, the main inhibitory neurotransmitter in the mammalian nervous system, exhibits excitatory action at the early stages of postnatal development. This results from a high intracellular Cl− concentration at these stages of development and is associated with spontaneous synchronized network discharges known as Giant Depolarizing Potentials (GDPs). It has been hypothesized that the excitatory action of the GABAergic system stimulates synaptogenesis and the development of neuronal networks. However, accumulating recent observations challenge this view. Here we present a brief review of the current concepts and the problems they face in the light of new data.

7.
Asymmetric regression is an alternative to conventional linear regression that allows us to model the relationship between predictor variables and the response variable while accommodating skewness. Advantages of asymmetric regression include incorporating realistic ecological patterns observed in data, robustness to model misspecification and less sensitivity to outliers. Bayesian asymmetric regression relies on asymmetric distributions such as the asymmetric Laplace (ALD) or asymmetric normal (AND) in place of the normal distribution used in classic linear regression models. Asymmetric regression concepts can be used for process and parameter components of hierarchical Bayesian models and have a wide range of applications in data analyses. In particular, asymmetric regression allows us to fit more realistic statistical models to skewed data and pairs well with Bayesian inference. We first describe asymmetric regression using the ALD and AND. Second, we show how the ALD and AND can be used for Bayesian quantile and expectile regression for continuous response data. Third, we consider an extension to generalize Bayesian asymmetric regression to survey data consisting of counts of objects. Fourth, we describe a regression model using the ALD, and show that it can be applied to add needed flexibility, resulting in better predictive models compared to Poisson or negative binomial regression. We demonstrate concepts by analyzing a data set consisting of counts of Henslow’s sparrows following prescribed fire and provide annotated computer code to facilitate implementation. Our results suggest Bayesian asymmetric regression is an essential component of a scientist’s statistical toolbox.
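As a concrete illustration of the ALD-quantile connection this abstract describes, here is a small, dependency-free Python sketch: maximising the ALD likelihood in its location parameter is equivalent to minimising the "check" loss, whose minimiser is the τ-th sample quantile. The data values and the simple grid-search estimator are invented for illustration.

```python
# Link between the asymmetric Laplace distribution (ALD) and quantile
# regression: the ALD negative log-likelihood kernel in the location
# parameter is the check loss rho_tau, so its minimiser is the tau-th
# sample quantile. Toy data; a grid search stands in for a real optimiser.

def check_loss(u: float, tau: float) -> float:
    """rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def ald_location_estimate(ys, tau, grid_steps=2000):
    """Grid-search the location mu minimising the summed check loss."""
    lo, hi = min(ys), max(ys)
    best_mu, best_loss = lo, float("inf")
    for k in range(grid_steps + 1):
        mu = lo + (hi - lo) * k / grid_steps
        loss = sum(check_loss(y - mu, tau) for y in ys)
        if loss < best_loss:
            best_mu, best_loss = mu, loss
    return best_mu

data = [1.0, 2.0, 3.0, 4.0, 100.0]          # heavy right skew
print(ald_location_estimate(data, tau=0.5))  # close to the median, 3.0
```

With τ = 0.5 the check loss reduces to scaled absolute error, so the estimate sits at the median and is insensitive to the outlier at 100, which is exactly the robustness property the abstract highlights.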

8.

Background

Logic models are becoming an increasingly common feature of systematic reviews, as is the use of programme theory more generally in systematic reviewing. Logic models offer a framework to help reviewers to ‘think’ conceptually at various points during the review, and can be a useful tool in defining study inclusion and exclusion criteria, guiding the search strategy, identifying relevant outcomes, identifying mediating and moderating factors, and communicating review findings.

Methods and Findings

In this paper we critique the use of logic models in systematic reviews and protocols drawn from two databases representing reviews of health interventions and international development interventions. Programme theory featured in only a minority of the reviews and protocols included. Despite drawing from different disciplinary traditions, reviews and protocols from both sources shared several limitations in their use of logic models and theories of change, which were used almost exclusively to depict pictorially the way in which the intervention worked. Logic models and theories of change were consequently rarely used to communicate the findings of the review.

Conclusions

Logic models have the potential to be an integral aid throughout the systematic reviewing process. The absence of good practice around their use and development may be one reason for the apparent limited utility of logic models in many existing systematic reviews. These concerns are addressed in the second half of this paper, where we offer a set of principles for the use of logic models and an example of how we constructed a logic model for a review of school-based asthma interventions.

9.
How size is controlled during animal development remains a fascinating problem despite decades of research. Here we review key concepts in size biology and develop our thesis that much can be learned by studying how different organ sizes are differentially scaled by homeotic selector genes. A common theme from initial studies using this approach is that morphogen pathways are modified in numerous ways by selector genes to effect size control. We integrate these results with other pathways known to regulate organ size in developing a comprehensive model for organ size control.

10.
11.
Complex processes resulting from interaction of multiple elements can rarely be understood by analytical scientific approaches alone; additional, mathematical models of system dynamics are required. This insight, which disciplines like physics have embraced for a long time already, is gradually gaining importance in the study of cognitive processes by functional neuroimaging. In this field, causal mechanisms in neural systems are described in terms of effective connectivity. Recently, dynamic causal modelling (DCM) was introduced as a generic method to estimate effective connectivity from neuroimaging data in a Bayesian fashion. One of the key advantages of DCM over previous methods is that it distinguishes between neural state equations and modality-specific forward models that translate neural activity into a measured signal. Another strength is its natural relation to Bayesian model selection (BMS) procedures. In this article, we review the conceptual and mathematical basis of DCM and its implementation for functional magnetic resonance imaging data and event-related potentials. After introducing the application of BMS in the context of DCM, we conclude with an outlook to future extensions of DCM. These extensions are guided by the long-term goal of using dynamic system models for pharmacological and clinical applications, particularly with regard to synaptic plasticity.
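The neural state equation at the heart of DCM is the well-known bilinear form dx/dt = (A + Σ_j u_j B_j) x + C u, where A holds fixed connectivity, the B_j hold input-dependent modulations, and C holds driving inputs. The sketch below integrates it with a simple Euler scheme for a hypothetical two-region system; the connectivity values and input are invented, and real DCM estimates such parameters from data through a modality-specific forward model rather than simulating them forward.

```python
# Euler integration of the bilinear DCM state equation
#   dx/dt = (A + sum_j u_j * B_j) x + C u
# for an invented two-region example. No observation model is included.

def dcm_step(x, u, A, B, C, dt=0.01):
    """One Euler step of the bilinear state equation for n regions."""
    n = len(x)
    x_new = list(x)
    for i in range(n):
        dxdt = sum(C[i][k] * u[k] for k in range(len(u)))
        for j in range(n):
            a_eff = A[i][j] + sum(u[k] * B[k][i][j] for k in range(len(u)))
            dxdt += a_eff * x[j]
        x_new[i] = x[i] + dt * dxdt
    return x_new

# Two regions, one input: self-decay, forward connection 1 -> 2,
# and the input modulates that forward connection.
A = [[-1.0, 0.0], [0.5, -1.0]]
B = [[[0.0, 0.0], [0.4, 0.0]]]   # B[0] modulates A via input u[0]
C = [[1.0], [0.0]]               # the input drives region 1 only

x = [0.0, 0.0]
for _ in range(200):             # 2 s of sustained stimulation, u = 1
    x = dcm_step(x, [1.0], A, B, C)
print([round(v, 3) for v in x])  # both regions driven above baseline
```

This separation between the state equation above and a forward model mapping x to measurements is exactly the advantage over earlier effective-connectivity methods that the abstract emphasises.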

12.
The error management model of altruism in one-shot interactions provides an influential explanation for one of the most controversial behaviors in evolutionary social science. The model posits that one-shot altruism arises from a domain-specific cognitive bias that avoids the error of mistaking a long-term relationship for a one-shot interaction. One-shot altruism is thus, in an intriguingly paradoxical way, a form of reciprocity. We examine the logic behind this idea in detail. In its most general form the error management model is exceedingly flexible, and restrictions about the psychology of agents are necessary for selection to be well-defined. Once these restrictions are in place, selection is well defined, but it leads to behavior that is perfectly consistent with an unbiased rational benchmark. Thus, the evolution of one-shot reciprocity does not require an evoked cognitive bias based on repeated interactions and reputation. Moreover, in spite of its flexibility in terms of psychology, the error management model assumes that behavior is exceedingly rigid when individuals face a new interaction partner. Reciprocity can only take the form of tit-for-tat, and individuals cannot adjust their behavior in response to new information about the duration of a relationship. Zefferman (2014) showed that one-shot reciprocity does not reliably evolve if one relaxes the first restriction, and we show that the behavior does not reliably evolve if one relaxes the second restriction. Altogether, these theoretical results on one-shot reciprocity do not square well with experiments showing increased altruism in the presence of payoff-irrelevant stimuli that suggest others are watching.

13.
Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.

14.
We consider a new frequentist gene expression index for Affymetrix oligonucleotide DNA arrays, using a probe intensity model similar to that suggested by Hein and others (2005) for the Bayesian gene expression index (BGX). According to this model, the perfect match and mismatch values are assumed to be correlated as a result of sharing a common gene expression signal. Rather than a Bayesian approach, we develop a maximum likelihood algorithm for estimating the underlying common signal. In this way, estimation is explicit and much faster than the BGX implementation. The observed Fisher information matrix, rather than a posterior credibility interval, gives an idea of the accuracy of the estimators. We evaluate our method using benchmark spike-in data sets from Affymetrix and GeneLogic by analyzing the relationship between estimated signal and concentration, i.e. true signal, and compare our results with other commonly used methods.

15.
16.
We all expect our students to learn facts and concepts, but more importantly, we want them to learn how to evaluate new information from an educated and skeptical perspective; that is, we want them to become critical thinkers. For many of us who are scientists and teachers, critical thought is either intuitive or we learned it so long ago that it is not at all obvious how to pass on the skills to our students. Explicitly discussing the logic that underlies the experimental basis of developmental biology is an easy and very successful way to teach critical thinking skills. Here, I describe some simple changes to a lecture course that turn the practice of critical thinking into the centerpiece of the learning process. My starting point is the "Evidence and Antibodies" sidelight in Gilbert's Developmental Biology (2000), which I use as an introduction to the ideas of correlation, necessity and sufficiency, and to the kinds of experiments required to gather each type of evidence: observation ("show it"), loss of function ("block it") and gain of function ("move it"). Thereafter, every experiment can be understood quickly by the class and discussed intelligently with a common vocabulary. Both verbal and written reinforcement of these ideas dramatically improve the students' ability to evaluate new information. In particular, they are able to evaluate claims about cause and effect; they become experts at distinguishing between correlation and causation. Because the intellectual techniques are so powerful and the logic so satisfying, the students come to view the critical assessment of knowledge as a fun puzzle and the rigorous thinking behind formulating a question as an exciting challenge.

17.
Large-scale association studies hold promise for discovering the genetic basis of common human disease. These studies will consist of a large number of individuals, as well as a large number of genetic markers, such as single nucleotide polymorphisms (SNPs). The potential size of the data and the resulting model space require the development of efficient methodology to unravel associations between phenotypes and SNPs in dense genetic maps. Our approach uses a genetic algorithm (GA) to construct logic trees consisting of Boolean expressions involving strings or blocks of SNPs. These blocks or nodes of the logic trees consist of SNPs in high linkage disequilibrium (LD), that is, SNPs that are highly correlated with each other due to evolutionary processes. At each generation of our GA, a population of logic tree models is modified using selection, cross-over and mutation moves. Logic trees are selected for the next generation using a fitness function based on the marginal likelihood in a Bayesian regression framework. Mutation and cross-over moves use LD measures to propose changes to the trees, and facilitate the movement through the model space. We demonstrate our method and the flexibility of logic tree structure with variable nodal lengths on simulated data from a coalescent model, as well as data from a candidate gene study of quantitative genetic variation.

18.
Gilet E, Diard J, Bessière P. PLoS ONE. 2011;6(6):e20387
In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception-action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action-Perception (BAP) model. Being a model of both perception and action processes, the purpose of this model is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments.

19.
Inferring genetic regulatory logic from expression data
MOTIVATION: High-throughput molecular genetics methods allow the collection of data about the expression of genes at different time points and under different conditions. The challenge is to infer gene regulatory interactions from these data and to get an insight into the mechanisms of genetic regulation. RESULTS: We propose a model for genetic regulatory interactions, which has a biologically motivated Boolean logic semantics, but is of a probabilistic nature, and is hence able to confront noisy biological processes and data. We propose a method for learning the model from data based on the Bayesian approach and utilizing Gibbs sampling. We tested our method with previously published data of the Saccharomyces cerevisiae cell cycle and found relations between genes consistent with biological knowledge.
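A minimal sketch of the kind of probabilistic Boolean semantics this abstract describes: a target gene follows a Boolean function of its regulators, but its observed state flips with a noise probability ε. The regulator roles, noise level, and toy expression table below are invented, and the paper's Gibbs-sampling learning procedure is not reproduced; this only illustrates the likelihood model such a procedure would sample under.

```python
# Probabilistic Boolean regulatory logic: a deterministic Boolean core
# (activator AND NOT repressor) whose output is corrupted by noise eps,
# giving a likelihood for noisy expression data.

def regulatory_logic(activator: bool, repressor: bool) -> bool:
    """Boolean core: target is on iff activator present AND repressor absent."""
    return activator and not repressor

def p_target_on(activator: bool, repressor: bool, eps: float = 0.1) -> float:
    """P(target = on | regulators): the Boolean output flips with prob eps."""
    ideal = regulatory_logic(activator, repressor)
    return (1.0 - eps) if ideal else eps

# Likelihood of a small invented expression table (activator, repressor, target):
observations = [(True, False, True), (True, True, False), (False, False, False)]
lik = 1.0
for act, rep, tgt in observations:
    p_on = p_target_on(act, rep)
    lik *= p_on if tgt else 1.0 - p_on
print(round(lik, 3))  # 0.9 * 0.9 * 0.9 = 0.729
```

Learning in the paper's setting amounts to scoring alternative Boolean cores (and noise levels) by exactly this kind of likelihood, with Gibbs sampling exploring the posterior over them.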

20.
Bayesian spatial modeling of haplotype associations
We review methods for relating the risk of disease to a collection of single nucleotide polymorphisms (SNPs) within a small region. Association studies using case-control designs with unrelated individuals could be used either to test for a direct effect of a candidate gene and characterize the responsible variant(s), or to fine map an unknown gene by exploiting the pattern of linkage disequilibrium (LD). We consider a flexible class of logistic penetrance models based on haplotypes and compare them with an alternative formulation based on unphased multilocus genotypes. The likelihood for haplotype-based models requires summation over all possible haplotype assignments consistent with the observed genotype data, and can be fitted using either Expectation-Maximization (E-M) or Markov chain Monte Carlo (MCMC) methods. Subtleties involving ascertainment correction for case-control studies are discussed. There has been great interest in methods for LD mapping based on the coalescent or ancestral recombination graphs as well as methods based on haplotype sharing, both of which we review briefly. Because of their computational complexity, we propose some alternative empirical modeling approaches using techniques borrowed from the Bayesian spatial statistics literature. Here, space is interpreted in terms of a distance metric describing the similarity of any pair of haplotypes to each other, and hence their presumed common ancestry. Specifically, we discuss the conditional autoregressive model and two spatial clustering models: Potts and Voronoi. We conclude with a discussion of the implications of these methods for modeling cryptic relatedness, haplotype blocks, and haplotype tagging SNPs, and suggest a Bayesian framework for the HapMap project.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号