Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
Many achievements in medicine have come from applying linear theory to problems. Most current methods of data analysis use linear models, which are based on proportionality between two variables and/or relationships described by linear differential equations. However, nonlinear behavior commonly occurs within human systems due to their complex dynamic nature; this cannot be described adequately by linear models. Nonlinear thinking has grown among physiologists and physicians over the past century, and nonlinear system theories are beginning to be applied to assist in interpreting, explaining, and predicting biological phenomena. Chaos theory describes elements manifesting behavior that is extremely sensitive to initial conditions, does not repeat itself, and yet is deterministic. Complexity theory goes one step beyond chaos and attempts to explain the complex behavior that emerges within dynamic nonlinear systems. Nonlinear modeling has not yet explained all of the complexity present in human systems, and further models still need to be refined and developed. However, nonlinear modeling is helping to explain some system behaviors that linear models cannot, and it will thus augment our understanding of the nature of complex dynamic systems within the human body in health and in disease states.
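As a hedged illustration of the sensitivity to initial conditions described above, the following sketch iterates the standard logistic map (a textbook chaotic system chosen for illustration, not a model taken from this article) from two nearly identical starting points and shows how quickly the trajectories diverge.

```python
# Minimal demonstration of sensitive dependence on initial conditions
# using the logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4.0
# (a textbook chaotic system; illustrative only, not from the article).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # initial condition differs by 1e-6

for n in range(0, 51, 10):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
# The two deterministic trajectories separate to order one within a few
# dozen iterations, despite the tiny initial difference.
```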

3.
Biological systems are traditionally studied by focusing on a specific subsystem, building an intuitive model for it, and refining the model using results from carefully designed experiments. Modern experimental techniques provide massive data on the global behavior of biological systems, and systematically using these large datasets to refine existing knowledge is a major challenge. Here we introduce an extended computational framework that combines formalization of existing qualitative models, probabilistic modeling, and integration of high-throughput experimental data. Using our methods, it is possible to interpret genome-wide measurements in the context of prior knowledge on the system, to assign statistical meaning to the accuracy of such knowledge, and to learn refined models with improved fit to the experiments. Our model is represented as a probabilistic factor graph, and the framework accommodates partial measurements of diverse biological elements. We study the performance of several probabilistic inference algorithms and show that hidden model variables can be reliably inferred even in the presence of feedback loops and complex logic. We show how to refine prior knowledge on combinatorial regulatory relations using hypothesis testing and derive p-values for learned model features. We test our methodology and algorithms on a simulated model and on two real yeast models. In particular, we use our method to explore uncharacterized relations among regulators in the yeast response to hyper-osmotic shock and in the yeast lysine biosynthesis system. Our integrative approach to the analysis of biological regulation is demonstrated to synergistically combine qualitative and quantitative evidence into concrete biological predictions.
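The abstract describes representing a regulatory model as a probabilistic factor graph and inferring hidden variables. The sketch below is a toy, brute-force version of that idea (enumerating all assignments of three binary variables rather than using message passing); the variable names and factor values are hypothetical and not taken from the paper.

```python
import itertools

# Toy factor graph over three binary variables: a regulator R, a hidden
# activity state H, and an observed target gene T. Factors encode
# "R tends to switch H on" and "H tends to switch T on".
# Values are illustrative, not from the article.

def f_RH(r, h):
    return 0.9 if r == h else 0.1

def f_HT(h, t):
    return 0.8 if h == t else 0.2

def f_R(r):
    return 0.5  # uniform prior on the regulator

def posterior_H(observed_t):
    """Brute-force P(H | T = observed_t) by enumerating all assignments."""
    weights = {0: 0.0, 1: 0.0}
    for r, h in itertools.product([0, 1], repeat=2):
        weights[h] += f_R(r) * f_RH(r, h) * f_HT(h, observed_t)
    z = weights[0] + weights[1]
    return {h: w / z for h, w in weights.items()}

print(posterior_H(observed_t=1))  # hidden activity is likely "on" when T is on
```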

4.
Rule-based modeling provides a means to represent cell signaling systems in a way that captures site-specific details of molecular interactions. For rule-based models to be more widely understood and (re)used, conventions for model visualization and annotation are needed. We have developed the concepts of an extended contact map and a model guide for illustrating and annotating rule-based models. An extended contact map represents the scope of a model by providing an illustration of each molecule, molecular component, direct physical interaction, post-translational modification, and enzyme-substrate relationship considered in a model. A map can also illustrate allosteric effects, structural relationships among molecular components, and compartmental locations of molecules. A model guide associates elements of a contact map with annotation and elements of an underlying model, which may be fully or partially specified. A guide can also serve to document the biological knowledge upon which a model is based. We provide examples of a map and guide for a published rule-based model that characterizes early events in IgE receptor (FcεRI) signaling. We also provide examples of how to visualize a variety of processes that are common in cell signaling systems but not considered in the example model, such as ubiquitination. An extended contact map and an associated guide can document knowledge of a cell signaling system in a form that is visual as well as executable. As a tool for model annotation, a map and guide can communicate the content of a model clearly and with precision, even for large models.
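As a hedged sketch of the kind of information an extended contact map captures, the dictionary below records molecules, their components, a post-translational modification, and one enzyme-substrate relationship; the entries are generic placeholders rather than the published FcεRI map.

```python
# Illustrative, minimal data structure for an extended contact map.
# Molecule, component, and site names are hypothetical placeholders,
# not the contents of the published FcεRI model.

contact_map = {
    "molecules": {
        "ReceptorX": {"components": ["ligand_site", "Y_phospho_site"]},
        "KinaseY":   {"components": ["SH2", "catalytic"]},
        "LigandL":   {"components": ["binding_site"]},
    },
    "interactions": [
        # direct physical interactions between named components
        ("LigandL.binding_site", "ReceptorX.ligand_site"),
        ("KinaseY.SH2", "ReceptorX.Y_phospho_site"),
    ],
    "modifications": [
        # post-translational modifications tracked by the model
        {"target": "ReceptorX.Y_phospho_site", "type": "phosphorylation"},
    ],
    "enzyme_substrate": [
        {"enzyme": "KinaseY", "substrate": "ReceptorX.Y_phospho_site"},
    ],
}

# A model guide could then attach annotation (citations, assumptions)
# to each of these entries by key.
for rel in contact_map["enzyme_substrate"]:
    print(f'{rel["enzyme"]} phosphorylates {rel["substrate"]}')
```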

5.
Giavitto JL, Michel O. Bio Systems. 2003;70(2):149-163.
The cell as a dynamical system presents the characteristic of having a dynamical structure. That is, the exact phase space of the system cannot be fixed before the evolution, and integrative cell models must state the evolution of the structure jointly with the evolution of the cell state. This kind of dynamical system is very challenging to model and simulate, and new programming concepts must be developed to ease its modeling and simulation. In this context, the goal of the MGS project is to develop an experimental programming language dedicated to the simulation of this kind of system. MGS proposes a unified view of several computational mechanisms (CHAM, Lindenmayer systems, Paun systems, cellular automata), enabling the specification of spatially localized computations on heterogeneous entities. The evolution of a dynamical structure is handled through the concept of transformation, which relies on the topological organization of the system components. An example based on the modeling of spatially distributed biochemical networks is used to illustrate how these notions can be used to model the spatial and temporal organization of intracellular processes.
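As a hedged illustration of the "transformation" idea that MGS generalizes (here reduced to the simplest case, a CHAM-style multiset rewriting), the sketch below repeatedly applies a rule A + B -> C to a bag of molecules. It is a toy in plain Python, not MGS syntax.

```python
from collections import Counter

# Toy multiset rewriting in the spirit of the CHAM: the state is a bag of
# molecule names, and a rule consumes one A and one B to produce one C.
# This is plain Python, not MGS syntax; names are illustrative.

def apply_rule(bag):
    """Apply A + B -> C once if possible; return True if the rule fired."""
    if bag["A"] > 0 and bag["B"] > 0:
        bag["A"] -= 1
        bag["B"] -= 1
        bag["C"] += 1
        return True
    return False

state = Counter({"A": 5, "B": 3, "C": 0})
step = 0
while apply_rule(state):
    step += 1
    print(f"step {step}: {dict(state)}")
# The structure of the collection (how many entities exist) evolves
# together with its contents, which is the point MGS generalizes to
# richer topologies (sequences, grids, nested compartments).
```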

6.
NMDA receptors are among the crucial elements of central nervous system models. Recent studies show that both the conductance and the kinetics of these receptors change in a voltage-dependent manner in some parts of the brain, and several models have therefore been introduced to simulate their current. However, kinetic models, which are able to simulate these voltage-dependent phenomena, are computationally expensive for modeling large neural networks, while classic exponential models, which are computationally less expensive, cannot accurately simulate the voltage dependency of these receptors. In this study, we have modified these classic models to endow them with voltage-dependent conductance and time constants. Temperature sensitivity and desensitization of these receptors are also taken into account. We show that it is possible to simulate the most important physiological aspects of NMDA receptor behavior using only three to four differential equations, significantly fewer than in previous kinetic models. Consequently, our model appears to be both fast and physiologically plausible and is therefore a suitable candidate for the modeling of large neural networks.
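For orientation, here is a minimal sketch of the widely used phenomenological NMDA synapse: a double-exponential gating variable scaled by the Jahr-Stevens magnesium block. This is the standard baseline that the abstract refers to as a "classic exponential model", not the authors' modified version; making the time constants themselves voltage dependent, as the paper proposes, would amount to replacing the fixed tau_rise/tau_decay below with functions of V. Parameter values are typical textbook numbers, assumed for illustration.

```python
import math

# Minimal phenomenological NMDA synapse (illustrative sketch):
#   g(t) = gmax * (x_decay - x_rise) * B(V)
# with first-order rise/decay state variables and the standard
# Jahr-Stevens magnesium block B(V). Parameters are typical textbook
# values, not fitted values from the article.

gmax      = 1.0      # nS, peak conductance scale
tau_rise  = 3.0      # ms
tau_decay = 100.0    # ms
mg        = 1.0      # mM extracellular Mg2+
e_rev     = 0.0      # mV reversal potential

def mg_block(v):
    """Voltage-dependent Mg2+ block (Jahr-Stevens form)."""
    return 1.0 / (1.0 + math.exp(-0.062 * v) * mg / 3.57)

def simulate(v_clamp=-40.0, dt=0.1, t_end=300.0):
    """Euler integration of the two gating states after a single spike at t = 0."""
    x_rise, x_decay = 1.0, 1.0   # both states set to 1 by the presynaptic spike
    t, trace = 0.0, []
    while t < t_end:
        g = gmax * (x_decay - x_rise) * mg_block(v_clamp)
        i_syn = g * (v_clamp - e_rev)        # synaptic current (arbitrary units here)
        trace.append((t, g, i_syn))
        x_rise  += dt * (-x_rise  / tau_rise)
        x_decay += dt * (-x_decay / tau_decay)
        t += dt
    return trace

peak_g = max(g for _, g, _ in simulate())
print(f"peak NMDA conductance at -40 mV clamp: {peak_g:.3f} nS")
```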

7.
Most biological processes are orchestrated by large-scale molecular networks that are described in large-scale model repositories and whose dynamics are extremely complex. An observed phenotype is a state of this system that results from control mechanisms whose identification is key to its understanding. The Biological Pathway Exchange (BioPAX) format is widely used to standardize the biological information relative to regulatory processes. However, few modeling approaches developed so far enable computing the events that control a phenotype in large-scale networks. Here we developed an integrated approach to build large-scale dynamic networks from BioPAX knowledge databases in order to analyse trajectories and to identify sets of biological entities that control a phenotype. The Cadbiom approach relies on the guarded transitions formalism, a discrete modeling approach that models a system's dynamics by taking into account competition and cooperation events in chains of reactions. The method can be applied to every (large-scale) BioPAX model thanks to a specific package that automatically generates Cadbiom models from BioPAX files. The Cadbiom framework was applied to the BioPAX version of two resources (PID, KEGG) of the Pathway Commons database and to the Atlas of Cancer Signalling Network (ACSN). As a case study, it was used to characterize sets of biological entities implicated in the epithelial-mesenchymal transition. Our results highlight the similarities between the PID and ACSN resources in terms of biological content, and underline the heterogeneous usage of the BioPAX semantics, which limits the fusion of models and requires curation. Causality analyses demonstrate the complementarity of the databases in terms of the combinatorics of controllers that explain a phenotype. From a biological perspective, our results show the specificity of controllers for epithelial and mesenchymal phenotypes, which is consistent with the literature, and identify a novel signature for intermediate states.
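As a hedged toy of the guarded-transition idea (not the Cadbiom package or its BioPAX translator), the sketch below represents a system as Boolean places and fires a transition only when its guard holds, so that two reactions competing for the same input cannot both fire.

```python
# Toy guarded-transition system (illustrative only; not the Cadbiom API).
# Places are Booleans; a transition moves a token from source to target
# when its guard over the current state is satisfied.

state = {"A": True, "B": False, "C": False, "inhibitor": False}

transitions = [
    # (source, target, guard): guard is a function of the whole state
    ("A", "B", lambda s: not s["inhibitor"]),   # A -> B unless inhibited
    ("A", "C", lambda s: s["inhibitor"]),       # A -> C competes for the same A
    ("B", "C", lambda s: True),
]

def step(s):
    """Fire the first enabled transition (simple deterministic scheduler)."""
    for src, dst, guard in transitions:
        if s[src] and not s[dst] and guard(s):
            s[src], s[dst] = False, True
            return (src, dst)
    return None

trajectory = []
while (fired := step(state)) is not None:
    trajectory.append(fired)

print("fired transitions:", trajectory)   # [('A', 'B'), ('B', 'C')]
print("final state:", state)
```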

8.
9.
In previous work on dynamic system modeling, the authors formalized basic ecosystem concepts and their application to ecological modeling theory. Measuring how a variable affects certain processes leads to improvements in dynamic systems modeling and facilitates the authors' study of system diversity, in which model sensitivity is a key theme. Initially, some variables and their numeric data are used for modeling, and predictions from the constructed models depend on these data. A generic study of sensitivity aims to show to what degree model behavior is altered by modifying specific data. If small variations cause large changes in the model's global behavior, then the model is highly sensitive to the variables used. When uncertain systems are considered, it is important to subject the system to extreme situations and analyze the resulting behavior. In this article, indexes of uncertainty are defined in order to determine the variables' influence under extreme changes, permitting analysis of the system's sensitivity across several simulations.
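As a hedged sketch of the kind of sensitivity index alluded to above, the code below runs a simple logistic-growth model, perturbs each parameter by a large ("extreme") factor, and reports a normalized index of how much the final state changes; the toy model and the index definition are illustrative assumptions, not the authors' formulation.

```python
# One-at-a-time sensitivity under extreme perturbations (illustrative).
# The toy model is logistic growth; the index definition is an assumption,
# not the specific uncertainty indexes defined in the article.

def logistic_final_state(r, K, x0=1.0, dt=0.1, t_end=50.0):
    """Final population of dx/dt = r*x*(1 - x/K), Euler integration."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * r * x * (1.0 - x / K)
        t += dt
    return x

baseline = {"r": 0.5, "K": 100.0}
y0 = logistic_final_state(**baseline)

for name in baseline:
    for factor in (0.5, 2.0):                 # "extreme" halving / doubling
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        y = logistic_final_state(**perturbed)
        index = abs(y - y0) / abs(y0)         # normalized change in output
        print(f"{name} x{factor}: sensitivity index = {index:.3f}")
```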

10.
The main problem in ecological data modeling is the interpretation and correct understanding of the data. This problem cannot be solved by large data collections alone. To understand ecosystems sufficiently, we need to know how their processes behave and how they respond to internal and external factors. Similarly, we need to know the behavior of the processes involved in the climate system and the biosphere of the Earth. In order to characterize precisely the behavior of individual elements and ecosystems, we need to account for deterministic, stochastic, and chaotic behavior. Unfortunately, the chaotic part of these systems is ignored in almost all approaches, which leads to many biased outcomes. To close this gap, we model chaotic system behavior with a random iterated function system, which provides a generic guideline for such data management and makes it possible to replicate the complexity and chaos of an ecosystem.
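As a hedged illustration of a random iterated function system (the general technique named in the abstract, with maps chosen here purely for illustration), the sketch below repeatedly applies one of several affine maps selected at random and collects the resulting orbit.

```python
import random

# Random iterated function system (IFS): at each step one affine map
# x -> a*x + b is chosen at random and applied. The maps and weights here
# are arbitrary illustrative choices, not those used in the article.

maps = [
    (0.5, 0.0),    # contraction toward 0
    (0.5, 0.5),    # contraction toward 1
    (0.9, 0.05),   # weak contraction, adds long-range memory
]
weights = [0.45, 0.45, 0.10]

def random_ifs_orbit(x0=0.3, steps=10_000, seed=42):
    random.seed(seed)
    x, orbit = x0, []
    for _ in range(steps):
        a, b = random.choices(maps, weights=weights, k=1)[0]
        x = a * x + b
        orbit.append(x)
    return orbit

orbit = random_ifs_orbit()
print(f"mean = {sum(orbit)/len(orbit):.3f}, "
      f"min = {min(orbit):.3f}, max = {max(orbit):.3f}")
# The orbit never settles to a fixed point or a periodic cycle, yet its
# statistics are reproducible, which is the property used to mimic the
# chaotic component of ecosystem time series.
```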

11.
Many load-bearing soft tissues exhibit mechanical anisotropy. In order to understand the behavior of natural tissues and to create tissue engineered replacements, quantitative relationships must be developed between the tissue structures and their mechanical behavior. We used a novel collagen gel system to test the hypothesis that collagen fiber alignment is the primary mechanism for the mechanical anisotropy we have reported in structurally anisotropic gels. Loading constraints applied during culture were used to control the structural organization of the collagen fibers of fibroblast populated collagen gels. Gels constrained uniaxially during culture developed fiber alignment and a high degree of mechanical anisotropy, while gels constrained biaxially remained isotropic with randomly distributed collagen fibers. We hypothesized that the mechanical anisotropy that developed in these gels was due primarily to collagen fiber orientation. We tested this hypothesis using two mathematical models that incorporated measured collagen fiber orientations: a structural continuum model that assumes affine fiber kinematics and a network model that allows for nonaffine fiber kinematics. Collagen fiber mechanical properties were determined by fitting biaxial mechanical test data from isotropic collagen gels. The fiber properties of each isotropic gel were then used to predict the biaxial mechanical behavior of paired anisotropic gels. Both models accurately described the isotropic collagen gel behavior. However, the structural continuum model dramatically underestimated the level of mechanical anisotropy in aligned collagen gels despite incorporation of measured fiber orientations; when estimated remodeling-induced changes in collagen fiber length were included, the continuum model slightly overestimated mechanical anisotropy. The network model provided the closest match to experimental data from aligned collagen gels, but still did not fully explain the observed mechanics. Two different modeling approaches showed that the level of collagen fiber alignment in our uniaxially constrained gels cannot explain the high degree of mechanical anisotropy observed in these gels. Our modeling results suggest that remodeling-induced redistribution of collagen fiber lengths, nonaffine fiber kinematics, or some combination of these effects must also be considered in order to explain the dramatic mechanical anisotropy observed in this collagen gel model system.
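As a hedged, schematic sketch of the affine structural (angular-integration) approach mentioned above, the code below stretches a planar fiber population biaxially, computes each fiber's affine stretch from its initial orientation, and sums a placeholder exponential fiber law weighted by an assumed orientation density; the fiber law, density, and parameters are illustrative assumptions, not the fitted values from the study.

```python
import math

# Schematic 2D affine angular-integration model of a fiber network.
# Fiber law, orientation density, and parameters are illustrative
# placeholders, not the fitted properties from the study.

A, B, KAPPA = 10.0, 8.0, 2.0     # fiber stiffness, nonlinearity, alignment

def fiber_stress(lam):
    """Placeholder exponential fiber law; fibers do not support compression."""
    return A * (math.exp(B * (lam - 1.0)) - 1.0) if lam > 1.0 else 0.0

def density(theta):
    """Assumed von Mises-type orientation density, peaked along axis 1."""
    return math.exp(KAPPA * math.cos(2.0 * theta))

def tissue_stress(lam1, lam2, n=720):
    """Sum fiber contributions along both axes under affine kinematics."""
    thetas = [math.pi * (k + 0.5) / n for k in range(n)]
    norm = sum(density(t) for t in thetas)
    s11 = s22 = 0.0
    for t in thetas:
        lam_f = math.sqrt((lam1 * math.cos(t))**2 + (lam2 * math.sin(t))**2)
        sf = fiber_stress(lam_f)
        # deformed fiber direction components (affine assumption)
        m1 = lam1 * math.cos(t) / lam_f
        m2 = lam2 * math.sin(t) / lam_f
        w = density(t) / norm
        s11 += w * sf * m1 * m1
        s22 += w * sf * m2 * m2
    return s11, s22

s11, s22 = tissue_stress(1.10, 1.10)   # equibiaxial 10% stretch
print(f"aligned network, equibiaxial stretch: s11 = {s11:.2f}, s22 = {s22:.2f}")
print(f"predicted anisotropy ratio s11/s22 = {s11/s22:.2f}")
```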

12.
UML as a cell and biochemistry modeling language
Webb K, White T. Bio Systems. 2005;80(3):283-302.
The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and it is beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODEs). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the Unified Modeling Language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and, by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.
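As a hedged, language-agnostic sketch of the object-oriented decomposition described above (rendered in Python rather than UML/ROOM, with class and attribute names invented for illustration), the snippet below shows containment of compartments and organelles inside a cell and a simple reaction rule attached to an enzyme; it is not the CellAK class model.

```python
# Illustrative object-oriented decomposition of a cell model.
# Class names, attributes, and the reaction rule are invented for this
# sketch; they are not the CellAK classes from the article.

class Enzyme:
    def __init__(self, name, substrate, product, rate):
        self.name, self.substrate, self.product, self.rate = name, substrate, product, rate

    def react(self, pool, dt):
        """Convert substrate to product at a simple mass-action rate."""
        available = pool.get(self.substrate, 0.0)
        flux = min(available * self.rate * dt, available)
        pool[self.substrate] = available - flux
        pool[self.product] = pool.get(self.product, 0.0) + flux

class Compartment:
    def __init__(self, name, metabolites=None, enzymes=None):
        self.name = name
        self.metabolites = dict(metabolites or {})
        self.enzymes = list(enzymes or [])

    def step(self, dt):
        for enzyme in self.enzymes:
            enzyme.react(self.metabolites, dt)

class Mitochondrion(Compartment):      # inheritance: a specialized compartment
    pass

class Cell:
    def __init__(self, compartments):
        self.compartments = compartments   # containment hierarchy

    def step(self, dt):
        for compartment in self.compartments:
            compartment.step(dt)

cell = Cell([
    Compartment("cytosol", {"glucose": 10.0},
                [Enzyme("hexokinase", "glucose", "g6p", rate=0.2)]),
    Mitochondrion("mito_1", {"pyruvate": 2.0}, []),
])
for _ in range(10):
    cell.step(dt=1.0)
print(cell.compartments[0].metabolites)
```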

13.
Jung S, Lee KH, Lee D. Bio Systems. 2007;90(1):197-210.
The Bayesian network is a popular tool for describing relationships between data entities by representing probabilistic (in)dependencies with a directed acyclic graph (DAG) structure. Relationships have been inferred between biological entities using the Bayesian network model with high-throughput data from biological systems in diverse fields. However, the scalability of those approaches is seriously restricted because of the huge search space for finding an optimal DAG structure in the process of Bayesian network learning. For this reason, most previous approaches limit the number of target entities or use additional knowledge to restrict the search space. In this paper, we use the hierarchical clustering and order restriction (H-CORE) method for the learning of large Bayesian networks by clustering entities and restricting edge directions between those clusters, with the aim of overcoming the scalability problem and thus making it possible to perform genome-scale Bayesian network analysis without additional biological knowledge. We use simulations to show that H-CORE is much faster than the widely used sparse candidate method, whilst being of comparable quality. We have also applied H-CORE to retrieving gene-to-gene relationships in a biological system (The 'Rosetta compendium'). By evaluating learned information through literature mining, we demonstrate that H-CORE enables the genome-scale Bayesian analysis of biological systems without any prior knowledge.
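As a hedged sketch of the restriction idea behind H-CORE (only the clustering and edge-direction-restriction steps are shown; the actual Bayesian network structure search and scoring are omitted), the code below clusters synthetic variables by correlation and then enumerates which directed edges a subsequent search would be allowed to consider. The data and the cluster-ordering rule are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Sketch of the "cluster, then restrict edge directions" idea.
# Only this restriction step is shown; the Bayesian network structure
# search and scoring that H-CORE performs afterwards are omitted.
# The data are synthetic and the cluster ordering rule is an assumption.

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 8
upstream = rng.normal(size=(n_samples, 4))
downstream = upstream @ rng.normal(size=(4, 4)) + 0.3 * rng.normal(size=(n_samples, 4))
data = np.hstack([upstream, downstream])          # 8 "gene" expression columns

# 1. Hierarchical clustering of variables on correlation distance.
corr = np.corrcoef(data, rowvar=False)
dist = squareform(1.0 - np.abs(corr), checks=False)
clusters = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")

# 2. Impose an ordering on clusters (here simply by cluster label) and only
#    allow edges from earlier clusters to later ones, or within a cluster.
allowed = [
    (i, j)
    for i in range(n_genes)
    for j in range(n_genes)
    if i != j and clusters[i] <= clusters[j]
]
print(f"edges allowed for the structure search: {len(allowed)} "
      f"of {n_genes * (n_genes - 1)} possible directed edges")
```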

14.
Several models of flocking have been proposed based on simulations with qualitatively naturalistic behavior. In this paper we provide the first direct application of computational modeling methods to infer flocking behavior from experimental field data. We show that this approach is able to infer general rules for interaction, or lack of interaction, among members of a flock or, more generally, any community. Using experimental field measurements of homing pigeons in flight, we demonstrate the existence of a basic distance-dependent attraction/repulsion relationship and show that this rule is sufficient to explain the collective behavior observed in nature. Positional data of individuals over time are used as input to a computational algorithm capable of building complex nonlinear functions that can represent the system behavior. Topological nearest-neighbor interactions are considered to characterize the components within this model. The efficacy of this method is demonstrated with simulated noisy data generated from the classical (two-dimensional) Vicsek model. When applied to experimental data from homing pigeon flights, we show that the more complex three-dimensional models are capable of simulating trajectories as well as exhibiting realistic collective dynamics. The simulations of the reconstructed models are used to extract properties of the collective behavior in pigeons and how it is affected by changing the initial conditions of the system. Our results demonstrate that this approach may be applied to construct models capable of simulating trajectories and collective dynamics using experimental field measurements of herd movement. From these models, the behavior of the individual agents (animals) may be inferred.
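Since the method is validated on the classical two-dimensional Vicsek model, here is a hedged, minimal NumPy sketch of that model (a standard formulation with metric neighborhoods and angular noise; the parameter values are typical illustrative choices, not those used in the paper) that could generate the kind of synthetic noisy positional data described.

```python
import numpy as np

# Minimal 2D Vicsek model: each agent moves at constant speed and aligns
# its heading with the mean heading of neighbors within radius r, plus
# angular noise. Parameters are typical illustrative values.

def vicsek(n=100, box=10.0, speed=0.3, radius=1.0, eta=0.2,
           steps=200, seed=1):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, box, size=(n, 2))
    theta = rng.uniform(-np.pi, np.pi, size=n)
    history = []
    for _ in range(steps):
        # pairwise displacements with periodic boundary conditions
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)
        neighbors = (d ** 2).sum(axis=-1) <= radius ** 2   # includes self
        # mean heading of neighbors via vector averaging
        mean_sin = (neighbors * np.sin(theta)[None, :]).sum(axis=1)
        mean_cos = (neighbors * np.cos(theta)[None, :]).sum(axis=1)
        theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, size=n)
        pos = (pos + speed * np.column_stack([np.cos(theta), np.sin(theta)])) % box
        history.append(pos.copy())
    return np.array(history), theta

trajectories, headings = vicsek()
order = np.abs(np.exp(1j * headings).mean())   # polar order parameter in [0, 1]
print(f"final polar order parameter: {order:.2f}")
```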

15.
Studies of developmental biology are often facilitated by diagram “models” that summarize the current understanding of underlying mechanisms. The increasing complexity of our understanding of development necessitates computational models that can extend these representations to include their dynamic behavior. Here we present a prototype model of Caenorhabditis elegans vulval precursor cell fate specification that represents many processes crucial for this developmental event but hard to integrate using other modeling methodologies. We demonstrate the integrative capabilities of our methodology by comprehensively incorporating the contents of three seminal papers, showing that it can lead to comprehensive models of developmental biology. The prototype computational model was built and is run using a language (Live Sequence Charts) and tool (the Play-Engine) that facilitate the same conceptual processes biologists use to construct and probe diagram-type models. We demonstrate that this modeling approach permits rigorous tests of mutual consistency between experimental data and mechanistic hypotheses and can identify specific conflicting results, providing a useful approach to probe developmental systems.

16.
Individual-based modeling is widely applied to investigate the ecological mechanisms driving microbial community dynamics. In such models, the population or community dynamics emerge from the behavior and interplay of individual entities, which are simulated according to a predefined set of rules. If the rules that govern the behavior of individuals are based on generic and mechanistically sound principles, the models are referred to as next-generation individual-based models. These models perform particularly well in recapitulating actual ecological dynamics. However, implementing such models is time-consuming and requires proficiency in programming or in using specific software, which likely hinders broader application of this powerful method. Here we present McComedy, a modeling tool designed to facilitate the development of next-generation individual-based models of microbial consumer-resource systems. This tool allows flexible combination of pre-implemented building blocks that represent physical and biological processes. The ability of McComedy to capture the essential dynamics of microbial consumer-resource systems is demonstrated by reproducing and extending the results of two distinct studies from the literature. With this article, we provide a versatile tool for developing next-generation individual-based models that can foster understanding of microbial ecology in both research and education.
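As a hedged, minimal individual-based consumer-resource sketch in plain Python (a generic illustration of the kind of building blocks such a tool composes: Monod uptake, growth, division, and mortality; it does not use or reproduce the McComedy software), see below. All parameter values are invented for illustration.

```python
import random

# Minimal individual-based microbial consumer-resource model (illustrative).
# Cells take up a shared resource with Monod kinetics, grow, divide above a
# mass threshold, and die with a small probability. Not the McComedy tool.

V_MAX, K_S, YIELD = 1.0, 0.5, 0.4        # uptake and growth parameters
DIVIDE_MASS, DEATH_P = 2.0, 0.01
DT = 0.1

def simulate(n0=20, resource=200.0, steps=500, seed=7):
    random.seed(seed)
    cells = [1.0 + random.random() for _ in range(n0)]   # individual masses
    for _ in range(steps):
        new_cells = []
        for mass in cells:
            if random.random() < DEATH_P * DT:
                continue                                  # cell dies
            uptake = V_MAX * resource / (K_S + resource) * mass * DT
            uptake = min(uptake, resource)
            resource -= uptake
            mass += YIELD * uptake
            if mass >= DIVIDE_MASS:                       # binary fission
                new_cells.extend([mass / 2.0, mass / 2.0])
            else:
                new_cells.append(mass)
        cells = new_cells
    return len(cells), resource

n_final, r_final = simulate()
print(f"final population: {n_final} cells, remaining resource: {r_final:.1f}")
```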

17.
Oceanography and marine ecology have a considerable history in the use of computers for modeling both physical and ecological processes. With increasing stress on the marine environment due to human activities such as fisheries and numerous forms of pollution, the analysis of marine problems must increasingly and jointly consider physical, ecological, and socio-economic aspects in a broader systems framework that transcends more traditional disciplinary boundaries. This often introduces difficult-to-quantify, “soft” elements, such as values and perceptions, into formal analysis. Thus, the problem domain combines a solid foundation in the physical sciences with strong elements of ecological, socio-economic, and political considerations. At the same time, the domain is also characterized by both a very large volume of some data and an extremely data-poor situation for other variables, as well as a very high degree of uncertainty, partly due to the temporal and spatial heterogeneity of the marine environment. Consequently, marine systems analysis and management require tools that can integrate these diverse aspects into efficient information systems that can support research as well as planning, policy-making, and decision-making processes. Supporting scientific research, as well as decision-making processes and the diverse groups and actors involved, requires better access to and direct understanding of the information basis, as well as easy-to-use but powerful tools for analysis. Advanced information technology provides the tools to design and implement smart software where, in a broad sense, the emphasis is on the man-machine interface. Symbolic and analogous graphical interaction, visual representation of problems, integrated data sources, and built-in domain knowledge can effectively support users of complex and complicated software systems. Integration, interaction, visualization, and intelligence are key concepts that are discussed in detail, using an operational software example of a coastal water quality model. The model comprises components of a geographical information and mapping system, databases, dynamic simulation models, and an integrated expert system. An interactive graphical user interface, dynamic visualization of model results, and a hypertext-based help-and-explain system illustrate some of the features of new and powerful software tools for marine systems analysis and modeling.

18.
19.
Mechanism-based chemical kinetic models are increasingly being used to describe biological signaling. Such models serve to encapsulate current understanding of pathways and to enable insight into complex biological processes. One challenge in model development is that, with limited experimental data, multiple models can be consistent with known mechanisms and existing data. Here, we address the problem of model ambiguity by providing a method for designing dynamic stimuli that, in stimulus–response experiments, distinguish among parameterized models with different topologies, i.e., reaction mechanisms, in which only some of the species can be measured. We develop the approach by presenting two formulations of a model-based controller that is used to design the dynamic stimulus. In both formulations, an input signal is designed for each candidate model and parameterization so as to drive the model outputs through a target trajectory. The quality of a model is then assessed by the ability of the corresponding controller, informed by that model, to drive the experimental system. We evaluated our method on models of antibody–ligand binding, mitogen-activated protein kinase (MAPK) phosphorylation and de-phosphorylation, and larger models of the epidermal growth factor receptor (EGFR) pathway. For each of these systems, the controller informed by the correct model is the most successful at designing a stimulus to produce the desired behavior. Using these stimuli, we were able to distinguish between models with subtle mechanistic differences or models whose inputs and outputs were multiple reactions removed from the model differences. An advantage of this method of model discrimination is that it does not require novel reagents or altered measurement techniques; the only change to the experiment is the time course of stimulation. Taken together, these results provide a strong basis for using designed input stimuli as a tool for the development of cell signaling models.
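As a hedged, much-simplified sketch of the underlying idea (two candidate one-state models; a feedforward input is computed from each candidate so that the model output follows a target trajectory, and the input designed from the wrong model tracks the "true" system poorly), the code below illustrates the discrimination logic with invented dynamics, not the paper's signaling models or its controller formulations.

```python
import math

# Simplified model-based stimulus design for model discrimination.
# Dynamics are invented single-state linear models, not the signaling
# models from the article; the controller here is pure feedforward.

candidates = {"model_A": (1.0, 2.0),   # (degradation rate a, input gain b)
              "model_B": (0.3, 0.5)}
true_params = candidates["model_A"]     # model A happens to be "correct"

def target(t):                          # desired output trajectory
    return 1.0 - math.exp(-t)

def target_dot(t):
    return math.exp(-t)

def design_input(a, b):
    """Feedforward input that makes dx/dt = -a*x + b*u follow the target."""
    return lambda t: (target_dot(t) + a * target(t)) / b

def run_true_system(u, dt=0.01, t_end=5.0):
    a, b = true_params
    x, t, sq_err = 0.0, 0.0, 0.0
    while t < t_end:
        sq_err += (x - target(t)) ** 2 * dt
        x += dt * (-a * x + b * u(t))
        t += dt
    return sq_err

for name, (a, b) in candidates.items():
    err = run_true_system(design_input(a, b))
    print(f"stimulus designed from {name}: tracking error = {err:.4f}")
# The stimulus designed from the correct model drives the true system
# along the target trajectory; the other stimulus does not, which is the
# signal used to discriminate between the candidate models.
```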

20.
We present a qualitative reasoning model of how plant colonization of land during the mid-Paleozoic era (450–300 million years ago) altered the long-term carbon cycle, resulting in a dramatic decrease in global atmospheric carbon dioxide levels. This model is aimed at facilitating learning and communication about how interactions between biological and geological processes drove system behavior. The model is developed in three submodels of the main system components, namely how competition for limited land habitat drove natural selection for increasing adaptations to life on land; how these adaptations resulted in increased formation of organic-rich sedimentary rocks (coal); and how these adaptations altered weathering of calcium and magnesium silicate rocks, resulting in increased deposition of inorganic carbonates in oceans. These separate submodels are then assembled to derive the full dynamic model of plant macroevolution, colonization of land, and plummeting carbon dioxide levels during the mid-Paleozoic. The qualitative reasoning framework supports explicit representation of causal feedbacks, as in previously developed systems analysis models, but it also supports simulation of system dynamics arising from the configuration of entities in the system. The ability of qualitative reasoning to provide causal accounts (explanations) of why and when certain phenomena occurred is a powerful advantage over numerical simulation such as the complex GEOCARB models, where explanation must be left to interpretation by experts.
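As a hedged toy of qualitative sign propagation (a generic illustration of this style of causal-influence reasoning, not the article's qualitative reasoning model or the GEOCARB equations), the sketch below propagates a "+" perturbation in land plant cover through a small signed influence graph and reports the qualitative direction of change of atmospheric CO2.

```python
# Toy qualitative reasoning: propagate the sign of a perturbation through a
# signed influence graph. The graph below is a simplified illustration of the
# causal story in the abstract, not the article's actual QR model.

influences = {
    # (cause, effect): +1 means "more cause -> more effect", -1 the opposite
    ("land_plants", "organic_burial"): +1,      # coal formation
    ("land_plants", "silicate_weathering"): +1, # root-enhanced weathering
    ("organic_burial", "atmospheric_CO2"): -1,
    ("silicate_weathering", "carbonate_deposition"): +1,
    ("carbonate_deposition", "atmospheric_CO2"): -1,
}

def propagate(source, perturbation=+1):
    """Breadth-first propagation of a qualitative change (+1, -1, or 0)."""
    signs = {source: perturbation}
    frontier = [source]
    while frontier:
        node = frontier.pop(0)
        for (cause, effect), sign in influences.items():
            if cause != node:
                continue
            derived = signs[node] * sign
            if effect in signs and signs[effect] != derived:
                signs[effect] = 0          # conflicting influences: ambiguous
            elif effect not in signs:
                signs[effect] = derived
                frontier.append(effect)
    return signs

result = propagate("land_plants", +1)
label = {+1: "increases", -1: "decreases", 0: "ambiguous"}
print("atmospheric CO2", label[result["atmospheric_CO2"]])   # decreases
```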
