Similar Literature
20 similar documents found
1.
In the era of systems biology, high-throughput omics technologies generate vast amounts of data. Computational modelling techniques aimed at mining these data and integrating the resulting information are increasingly being used for quantitative, systems-level analysis of cellular metabolism. Models help guide experimental design, while experimental results in turn validate and refine the models; this interplay between in silico and experimental work advances a systems-level understanding of complex metabolic processes. With this information, the metabolic characteristics of industrial microorganisms can be designed and optimized for high-level production of target metabolites. This article reviews recent progress in applying systems biotechnology to the breeding and high-throughput screening of industrial (pharmaceutical) microorganisms.

2.
In addition to traditional and novel experimental approaches to study host–pathogen interactions, mathematical and computer modelling have recently been applied to address open questions in this area. These modelling tools not only offer an additional avenue for exploring disease dynamics at multiple biological scales, but also complement and extend knowledge gained via experimental tools. In this review, we outline four examples where modelling has complemented current experimental techniques in a way that can push, or already has pushed, our knowledge of host–pathogen dynamics forward. Two of the modelling approaches presented go hand in hand with articles in this issue exploring fluorescence resonance energy transfer and two-photon intravital microscopy. Two others explore virtual or 'in silico' deletion and depletion, as well as a new method to understand and guide studies in genetic epidemiology. In each of these examples, the complementary nature of modelling and experiment is discussed. We further note that multi-scale modelling may allow us to integrate information across length (molecular, cellular, tissue, organism, population) and time (e.g. seconds to lifetimes). In sum, when combined, these compatible approaches offer new opportunities for understanding host–pathogen interactions.

3.
4.
5.
This review addresses strategies for the generation of ligands for G-protein-coupled receptors outside classical high-throughput screening and literature based approaches. These range from the chemical intuition-based strategies of endogenous ligand elaboration and privileged structure decoration to the in silico approaches of virtual screening and de novo design. Examples are cited where supporting pharmacological data has been presented.

6.
Computational modelling of whole biological systems from cells to organs is gaining momentum in cell biology and disease studies. This pathway is essential for the derivation of explanatory frameworks that will facilitate the development of a predictive capacity for estimating outcomes or risk associated with particular disease processes and therapeutic or stressful treatments. This article introduces a series of invited papers covering a hierarchy of issues and modelling problems, ranging from crucial conceptual considerations of the validity of cellular modelling through to multi-scale modelling up to the organ level. The challenges and approaches in cellular modelling are described, including the potential of in silico modelling applications for receptor–ligand interactions in cell signalling, simulated organ dysfunction (e.g., the heart), human and environmental toxicity, and the progress of the IUPS Physiome Project. A major challenge now facing biologists is how to translate the wealth of reductionist detail about cells and tissues into a real understanding of how these systems function and are perturbed in disease processes. In biomedicine, simulation models of biological systems now contain sufficient detail not only to reconstruct normal functions, but also to reconstruct major disease states. More widely, simulation modelling will aid the targeting of current knowledge gaps and how to fill them, and also provide a research tool for selecting critical factors from multiple simulated experiments for real experimental design. The envisaged longer-term end-product is the creation of simulation models for predicting drug interactions and harmful side-effects, and their use in therapeutic and environmental health risk management. Finally, we take a speculative look at possible future scenarios in cellular modelling, in which it is envisioned that integrative biology will move from being largely qualitative to become a highly quantitative, computer-intensive discipline.

7.
In this paper we present a multiscale, individual-based simulation environment that integrates CompuCell3D for lattice-based modelling at the cellular level and Bionetsolver for intracellular modelling. CompuCell3D (CC3D) provides an implementation of the lattice-based Cellular Potts Model, or CPM (also known as the Glazier–Graner–Hogeweg or GGH model), together with a Monte Carlo method based on the Metropolis algorithm for system evolution. The integration of CC3D for cellular systems with Bionetsolver for subcellular systems enables us to develop a multiscale mathematical model and to study the evolution of cell behaviour due to the dynamics inside the cells, capturing aspects of cell behaviour and interaction that are not accessible to continuum approaches. We then apply this multiscale modelling technique to a model of cancer growth and invasion, based on a previously published model of Ramis-Conde et al. (2008) in which individual cell behaviour is driven by a molecular network describing the dynamics of E-cadherin and β-catenin. In that model, which we refer to as the centre-based model, an alternative individual-based modelling technique was used, namely a lattice-free approach. In many respects, the GGH or CPM methodology and the approach of the centre-based model have the same overall goal: to mimic the behaviours and interactions of biological cells. Although the mathematical foundations and computational implementations of the two approaches are very different, the results of the presented simulations are compatible with each other, suggesting that individual-based approaches offer a natural way of describing complex multi-cell, multiscale models. The ability to easily reproduce results of one modelling approach using an alternative approach is also essential from a model cross-validation standpoint, and helps to identify any modelling artefacts specific to a given computational approach.
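The Metropolis dynamics of the CPM/GGH approach described above can be illustrated with a minimal sketch: a single cell on a small 2D lattice evolves by spin copies accepted under a contact energy and a volume constraint. All parameter values (J, LAMBDA, V_TARGET, T) are invented for illustration and are not taken from the CC3D model in the paper.

```python
import math
import random

# Minimal Cellular Potts sketch: one cell (spin 1) in medium (spin 0) on a
# small 2D lattice, evolved by Metropolis-accepted spin copies.
J = 2.0        # contact energy per cell-medium boundary link (illustrative)
LAMBDA = 1.0   # strength of the volume constraint (illustrative)
V_TARGET = 9   # target cell volume in lattice sites (illustrative)
T = 4.0        # "temperature" in the Metropolis acceptance rule

def neighbours(i, j, n):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield i + di, j + dj

def energy(grid):
    n = len(grid)
    contact = sum(J for i in range(n) for j in range(n)
                  for ni, nj in neighbours(i, j, n)
                  if grid[i][j] != grid[ni][nj]) / 2  # each link counted twice
    volume = sum(row.count(1) for row in grid)
    return contact + LAMBDA * (volume - V_TARGET) ** 2

def metropolis_step(grid, rng):
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    ni, nj = rng.choice(list(neighbours(i, j, n)))
    if grid[i][j] == grid[ni][nj]:
        return False                    # copying changes nothing
    e_old = energy(grid)
    old_spin = grid[i][j]
    grid[i][j] = grid[ni][nj]           # propose copying the neighbour's spin
    d_e = energy(grid) - e_old
    if d_e <= 0 or rng.random() < math.exp(-d_e / T):
        return True                     # accept the copy
    grid[i][j] = old_spin               # reject: restore the old spin
    return False

rng = random.Random(0)
n = 8
grid = [[1 if 2 <= i <= 5 and 2 <= j <= 5 else 0 for j in range(n)]
        for i in range(n)]              # start as a 4x4 square (volume 16)
for _ in range(2000):
    metropolis_step(grid, rng)
volume = sum(row.count(1) for row in grid)
print("final volume:", volume)          # should fluctuate near V_TARGET
```

Production frameworks such as CC3D add cell types, chemotaxis and other energy terms, but the acceptance rule is the same.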

8.
Random mutagenesis and selection approaches used traditionally for the development of industrial strains have largely been complemented by metabolic engineering, which allows purposeful modification of metabolic and cellular characteristics using recombinant DNA and other molecular biological techniques. As systems biology advances as a new paradigm of research, thanks to the development of genome-scale computational tools and high-throughput experimental technologies including omics, systems metabolic engineering, which allows modification of the metabolic, regulatory and signaling networks of the cell at the systems level, is becoming possible. In silico genome-scale metabolic models and their simulation play an increasingly important role in providing systematic strategies for metabolic engineering. An in silico genome-scale metabolic model is developed using genomic annotation, metabolic reactions, literature information, and experimental data. The advent of such models has brought about the development of various algorithms to simulate the metabolic status of the cell as a whole. In this paper, we review the algorithms developed for the system-wide simulation and perturbation of cellular metabolism, discuss the characteristics of these algorithms, and suggest future research directions.
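The system-wide simulations reviewed above typically take the form of flux balance analysis: maximize an objective flux subject to the steady-state constraint S·v = 0 and capacity bounds. A toy sketch with an invented four-reaction network follows; genome-scale models solve the same problem with a linear programming solver, whereas here the single degree of freedom left by S·v = 0 is scanned directly.

```python
# Toy flux balance analysis on an invented network.
# Reactions (flux vector v = [v_upt, v1, v2, v_bio]):
#   v_upt:        -> A       (uptake, 0 <= v_upt <= UPTAKE_MAX)
#   v1:    A      -> B
#   v2:    A      -> C
#   v_bio: B + C  -> biomass (objective to maximize)
S = [
    [1, -1, -1,  0],   # metabolite A
    [0,  1,  0, -1],   # metabolite B
    [0,  0,  1, -1],   # metabolite C
]
UPTAKE_MAX = 10.0      # illustrative uptake bound

def at_steady_state(v, tol=1e-9):
    # S·v = 0: no metabolite accumulates or is depleted.
    return all(abs(sum(s * x for s, x in zip(row, v))) < tol for row in S)

# S·v = 0 forces v1 = v2 = v_bio and v_upt = 2*v_bio, leaving one degree
# of freedom, which we scan directly instead of calling an LP solver.
best_biomass = 0.0
steps = 1000
for k in range(steps + 1):
    v_bio = 10.0 * k / steps
    v = [2 * v_bio, v_bio, v_bio, v_bio]
    if v[0] <= UPTAKE_MAX and at_steady_state(v):
        best_biomass = max(best_biomass, v_bio)

print("maximal biomass flux:", best_biomass)  # 5.0: uptake bound is limiting
```

The uptake bound caps v_upt at 10, so the maximal biomass flux is 5; knockout and perturbation algorithms repeat this optimization with reactions removed or constrained.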

9.
Modelling and simulation techniques are valuable tools for the understanding of complex biological systems. The design of a computer model necessarily has many diverse inputs, such as information on the model topology, reaction kinetics and experimental data, derived either from the literature, databases or direct experimental investigation. In this review, we describe different data resources, standards and modelling and simulation tools that are relevant to integrative systems biology.

10.
In this article, we define systems biology of virus entry in mammalian cells as the discipline that combines several approaches to comprehensively understand the collective physical behaviour of virus entry routes, and to understand the coordinated operation of the functional modules and molecular machineries that lead to this physical behaviour. Clearly, these are extremely ambitious aims, but recent developments in different life science disciplines slowly allow us to set them as realistic, although very distant, goals. Besides classical approaches to obtain high-resolution information of the molecules, particles and machines involved, we require approaches that can monitor collective behaviour of many molecules, particles and machines simultaneously, in order to reveal design principles of the systems as a whole. Here we will discuss approaches that fall in the latter category, namely time-lapse imaging and single-particle tracking (SPT) combined with computational analysis and modelling, and genome-wide RNA interference approaches to reveal the host components required for virus entry. These techniques should in the future allow us to assign host genes to the systems' functions and characteristics, and allow emergence-driven, in silico assembly of networks that include interactions with increasing hierarchy (molecules-multiprotein complexes-vesicles and organelles), and kinetics and subcellular spatiality, in order to allow realistic simulations of virus entry in real time.
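The computational analysis step of the single-particle tracking mentioned above usually reduces a trajectory to its mean squared displacement (MSD), from which an apparent diffusion coefficient follows. A minimal sketch on a simulated 2D Brownian track; the diffusion coefficient and frame count are illustrative, not values from any particular study.

```python
import random

# Simulate a 2D Brownian trajectory and estimate its diffusion coefficient
# from the time-averaged mean squared displacement (MSD).
rng = random.Random(1)
D_TRUE = 0.05    # diffusion coefficient, um^2 per frame (illustrative)
N_FRAMES = 2000

# Per-frame displacements are Gaussian with variance 2*D per axis.
sigma = (2 * D_TRUE) ** 0.5
x = y = 0.0
track = [(0.0, 0.0)]
for _ in range(N_FRAMES):
    x += rng.gauss(0, sigma)
    y += rng.gauss(0, sigma)
    track.append((x, y))

def msd(track, lag):
    """Time-averaged MSD at a given frame lag."""
    disp = [(track[i + lag][0] - track[i][0]) ** 2 +
            (track[i + lag][1] - track[i][1]) ** 2
            for i in range(len(track) - lag)]
    return sum(disp) / len(disp)

# For free 2D diffusion, MSD(lag) = 4 * D * lag, so D can be read off
# from the MSD at lag 1 (in frame units).
d_est = msd(track, 1) / 4
print(f"true D = {D_TRUE}, estimated D = {d_est:.4f}")
```

Real SPT analyses fit several lags and compare the MSD curve shape to distinguish free diffusion from confined or directed motion of virus particles.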

11.
12.
A significant challenge facing high-throughput phenotyping of in vivo knockout mice is ensuring that phenotype calls are robust and reliable. Central to this problem is selecting an appropriate statistical analysis that models both the experimental design (the workflow and the way control mice are selected for comparison with knockout animals) and the sources of variation. Recently we proposed a mixed model suitable for small batch-oriented studies, where controls are not phenotyped concurrently with mutants. Here we evaluate this method, both for its sensitivity to detect phenotypic effects and for its control of false positives, across a range of workflows used at mouse phenotyping centers. We found that the sensitivity and the control of false positives depend on the workflow. We show that the phenotypes of control mice fluctuate unexpectedly between batches, and that this can inflate the false positive rate of phenotype calls when only a small number of batches are tested, as the effect of the knockout becomes confounded with temporal fluctuations in the control mice. This effect was observed in both behavioural and physiological assays. Based on this analysis, we recommend two approaches (workflow and accompanying control strategy) and associated analyses that would be robust for use in high-throughput phenotyping pipelines. Our results show the importance of modelling all sources of variability in high-throughput phenotyping studies.
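The batch confounding described above can be reproduced in a few lines: when the knockouts occupy a single new batch and the analysis ignores batch structure, the false positive rate under the null far exceeds the nominal 5%. All parameter values below are illustrative, not taken from the study.

```python
import random
import statistics

# Simulate the confounded workflow: controls from several historical
# batches, knockouts from one new batch, analysed with a naive two-sample
# t statistic that ignores the batch effect.
rng = random.Random(42)
SIGMA_BATCH = 1.0                 # between-batch standard deviation
N_CTRL_BATCHES, CTRL_PER_BATCH = 10, 5
N_KO = 7
T_CRIT = 2.0                      # roughly the 5% two-sided threshold

def one_study():
    # Controls: each historical batch has its own random batch effect.
    ctrl = []
    for _ in range(N_CTRL_BATCHES):
        b = rng.gauss(0, SIGMA_BATCH)
        ctrl += [b + rng.gauss(0, 1) for _ in range(CTRL_PER_BATCH)]
    # Knockouts: one new batch; the knockout has NO true effect (null case).
    b_new = rng.gauss(0, SIGMA_BATCH)
    ko = [b_new + rng.gauss(0, 1) for _ in range(N_KO)]
    # Naive pooled two-sample t statistic, batch structure ignored.
    n1, n2 = len(ctrl), len(ko)
    sp2 = ((n1 - 1) * statistics.variance(ctrl)
           + (n2 - 1) * statistics.variance(ko)) / (n1 + n2 - 2)
    t = (statistics.mean(ko) - statistics.mean(ctrl)) / (
        (sp2 * (1 / n1 + 1 / n2)) ** 0.5)
    return abs(t) > T_CRIT

n_sim = 500
fpr = sum(one_study() for _ in range(n_sim)) / n_sim
print(f"false positive rate under the null: {fpr:.2f}")  # well above 0.05
```

Because all knockouts share the single batch effect b_new, the group difference inherits the batch fluctuation while the naive test treats the animals as independent; a mixed model with batch as a random effect removes this inflation.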

13.
The cellular environment creates numerous obstacles to efficient chemistry, as molecular components must navigate through a complex, densely crowded, heterogeneous, and constantly changing landscape in order to function at the appropriate times and places. Such obstacles are especially challenging to self-organizing or self-assembling molecular systems, which often need to build large structures in confined environments and typically have high-order kinetics that should make them exquisitely sensitive to concentration gradients, stochastic noise, and other non-ideal reaction conditions. Yet cells nonetheless manage to maintain a finely tuned network of countless molecular assemblies constantly forming and dissolving with a robustness and efficiency generally beyond what human engineers currently can achieve under even carefully controlled conditions. Significant advances in high-throughput biochemistry and genetics have made it possible to identify many of the components and interactions of this network, but its scale and complexity will likely make it impossible to understand at a global, systems level without predictive computational models. It is thus necessary to develop a clear understanding of how the reality of cellular biochemistry differs from the ideal models classically assumed by simulation approaches and how simulation methods can be adapted to accurately reflect biochemistry in the cell, particularly for the self-organizing systems that are most sensitive to these factors. In this review, we present approaches that have been undertaken from the modeling perspective to address various ways in which self-organization in the cell differs from idealized models.

14.
The functioning of even a simple biological system is much more complicated than the sum of its genes, proteins and metabolites. A premise of systems biology is that molecular profiling will facilitate the discovery and characterization of important disease pathways. However, as multiple levels of effector pathway regulation appear to be the norm rather than the exception, a significant challenge presented by high-throughput genomics and proteomics technologies is the extraction of the biological implications of complex data. Thus, integration of heterogeneous types of data generated from diverse global technology platforms represents the first challenge in developing the necessary foundational databases needed for predictive modelling of cell and tissue responses. Given the apparent difficulty in defining the correspondence between gene expression and protein abundance measured in several systems to date, how do we make sense of these data and design the next experiment? In this review, we highlight current approaches and challenges associated with integration and analysis of heterogeneous data sets, focusing on global analysis obtained from high-throughput technologies.

15.
Redefining plant systems biology: from cell to ecosystem
Molecular biologists typically restrict systems biology to cellular levels. By contrast, ecologists define biological systems as communities of interacting individuals at different trophic levels that process energy, nutrient and information flows. Modern plant breeding needs to increase agricultural productivity while decreasing the ecological footprint. This requires a holistic systems biology approach that couples different aggregation levels while considering the variables that affect these biological systems from cell to community. The challenge is to generate accurate experimental data that can be used together with modelling concepts and techniques that allow in silico predictions to be verified experimentally. The coupling of aggregation levels in the plant sciences, termed Integral Quantification of Biological Organization (IQ(BiO)), might enhance our ability to generate new desired plant phenotypes.

16.
RNA molecules are important cellular components involved in many fundamental biological processes. Understanding the mechanisms behind their functions requires RNA tertiary structure knowledge. Although modeling approaches for the study of RNA structures and dynamics lag behind efforts in protein folding, much progress has been achieved in the past two years. Here, we review recent advances in RNA folding algorithms, RNA tertiary motif discovery, applications of graph theory approaches to RNA structure and function, and in silico generation of RNA sequence pools for aptamer design. Advances within each area can be combined to impact many problems in RNA structure and function.

17.
Using the transcriptome to annotate the genome
A remaining challenge for the human genome project involves the identification and annotation of expressed genes. The public and private sequencing efforts have identified approximately 15,000 sequences that meet stringent criteria for genes, such as correspondence with known genes from humans or other species, and have made another approximately 10,000-20,000 gene predictions of lower confidence, supported by various types of in silico evidence, including homology studies, domain searches, and ab initio gene predictions. These computational methods have limitations, both because they are unable to identify a significant fraction of genes and exons and because they are unable to provide definitive evidence about whether a hypothetical gene is actually expressed. As the in silico approaches identified a smaller number of genes than anticipated, we wondered whether high-throughput experimental analyses could be used to provide evidence for the expression of hypothetical genes and to reveal previously undiscovered genes. We describe here the development of such a method, long serial analysis of gene expression (LongSAGE), an adaptation of the original SAGE approach that can be used to rapidly identify novel genes and exons.
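The tag-extraction step at the heart of SAGE-style methods can be sketched as follows, assuming NlaIII (recognition site CATG) as the anchoring enzyme and a 17-bp tag downstream of the 3'-most site, as in LongSAGE; the transcript sequences are made up for illustration.

```python
from collections import Counter

# Sketch of the core LongSAGE idea: each transcript is represented by a
# short tag immediately 3' of the 3'-most anchoring-enzyme site, and tag
# counts across many transcripts give a digital expression profile.
ANCHOR = "CATG"   # NlaIII recognition site (assumed anchoring enzyme)
TAG_LEN = 17      # bases taken downstream of the site

def longsage_tag(cdna):
    """Return the tag after the 3'-most usable anchoring site, or None."""
    pos = cdna.rfind(ANCHOR)
    while pos != -1 and len(cdna) - (pos + len(ANCHOR)) < TAG_LEN:
        pos = cdna.rfind(ANCHOR, 0, pos)   # site too close to the 3' end
    if pos == -1:
        return None
    start = pos + len(ANCHOR)
    return cdna[start:start + TAG_LEN]

# Made-up transcripts: the first two share a tag, so that tag counts twice.
transcripts = [
    "GGCATGAAATTTCCCGGGAAATTTCCAAGTTAA",
    "TTCATGAAATTTCCCGGGAAATTTCCAAGTTAA",  # yields the same tag as above
    "AACATGTTTGGGCCCAAATTTGGGACGTACGTA",
]
counts = Counter(longsage_tag(t) for t in transcripts)
print(counts.most_common(1)[0])  # ('AAATTTCCCGGGAAATT', 2)
```

Mapping such tags back to genomic sequence is what provides experimental evidence that a predicted gene is actually expressed.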

18.
This review is devoted to describing, summarizing, and analyzing the dynamic proteomics data obtained over the last few years concerning the role of protein-protein interactions in modeling of the living cell. Principles of modern high-throughput experimental methods for the investigation of protein-protein interactions are described. Systems biology approaches based on an integrative view of cellular processes are used to analyze the organization of protein interaction networks. It is proposed that the presence of some proteins in different protein complexes can be explained by their multi-modular and polyfunctional properties; the different protein modules can be located at the nodes of protein interaction networks. Mathematical and computational approaches to modeling of the living cell, with an emphasis on molecular dynamics simulation, are provided. The role of network analysis in fundamental medicine is also briefly reviewed.

19.
Many essential cellular processes such as signal transduction, transport, cellular motion and most regulatory mechanisms are mediated by protein-protein interactions. In recent years, new experimental techniques have been developed to discover the protein-protein interaction networks of several organisms. However, the accuracy and coverage of these techniques have proven to be limited, and computational approaches remain essential both to assist in the design and validation of experimental studies and for the prediction of interaction partners and detailed structures of protein complexes. Here, we provide a critical overview of existing structure-independent and structure-based computational methods. Although these techniques have significantly advanced in the past few years, we find that most of them are still in their infancy. We also provide an overview of experimental techniques for the detection of protein-protein interactions. Although the developments are promising, false positive and false negative results are common, and reliable detection is possible only by taking a consensus of different experimental approaches. The shortcomings of experimental techniques affect both the further development and the fair evaluation of computational prediction methods. For an adequate comparative evaluation of prediction and high-throughput experimental methods, an appropriately large benchmark set of biophysically characterized protein complexes would be needed, but is sorely lacking.

20.
Peptides in solution exist as an equilibrium of several conformations, an equilibrium which varies with solvent polarity. Despite, or because of, this structural versatility, peptides can be selective biological tools: they can adapt to a target, vary conformation with solvents, and so on. These capacities are crucial for cargo carriers. One promising way of using peptides in biotechnology is to decipher their medium-sequence-structure-function relationships, and one approach is molecular modelling. Only a few in silico methods of peptide design are described in the literature. Most are used in support of experimental screening of peptide libraries; however, the way they are constructed does not teach us much for future research. In this paper, we describe an in silico method (PepDesign) which starts by analysing the native interaction of a peptide with a target molecule in order to define which interaction points are important. From there, a modelling protocol for the design of 'better' peptides is set up. The PepDesign procedure calculates new peptides fulfilling the hypothesis, tests the conformational space of these peptides in interaction with the target by angular dynamics, and selects the best peptide based on the analysis of complex structure properties. Experimental biological assays are finally used to test the selected peptides and hence validate the approach. Applications of PepDesign are wide, because the procedure remains similar irrespective of the target, which can be a protein, a drug or a nucleic acid. In this paper, we describe the design of peptides which bind to the fusogenic helical form of the C-terminal domain of the Abeta peptide (Abeta29-42).

