Background
Protein-protein interactions (PPIs) play a central role in cellular function. Massively parallel supercomputing systems have been actively developed over the past few years, enabling large-scale biological problems such as tertiary-structure-based PPI network prediction to be solved.
Results
We have developed a high-throughput, ultra-fast PPI prediction system based on rigid docking, "MEGADOCK", by employing a hybrid (MPI/OpenMP) parallelization technique designed for massively parallel supercomputing systems. MEGADOCK performs the rigid-body docking process significantly faster than existing tools, which allows protein tertiary structural data to be fully exploited for large-scale, network-level problems in systems biology. Moreover, the system was shown to be scalable in measurements carried out on two supercomputing environments. We then predicted biological PPI networks using post-docking analysis.
Conclusions
We present a new protein-protein docking engine aimed at exhaustive docking of mega-order numbers of protein pairs. The system was shown to be scalable when run on thousands of nodes. The software package is available at: http://www.bi.cs.titech.ac.jp/megadock/k/.

Areas covered: This article reviews existing Boolean network modeling approaches, which offer several advantages over alternative modeling techniques for processing proteomics data. The application of methods for the inference, reduction, and validation of protein co-expression networks derived from quantitative high-throughput proteomics measurements is presented. It is also shown how Boolean models can be used to derive system-theoretic characteristics that describe both the dynamical behavior of such networks as a whole and the properties of different cell states (e.g. healthy or diseased). Furthermore, the application of methods derived from control theory is proposed in order to simulate the effects of therapeutic interventions on such networks, a promising approach for the computer-assisted discovery of biomarkers and drug targets. Finally, the clinical application of Boolean modeling analyses is discussed.
Expert commentary: Boolean modeling of proteomics data is still in its infancy. Progress in this field strongly depends on the provision of a repository with public access to relevant reference models. Also required are community-supported standards that facilitate the input of both proteomics and patient-related data (e.g. age, gender, laboratory results).
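The synchronous update-and-attractor analysis at the core of Boolean network modeling can be sketched in a few lines. The three-protein toy network and its rules below are purely illustrative, not drawn from any reference model:

```python
from itertools import product

# Hypothetical 3-protein Boolean network (illustrative rules only).
def update(state):
    a, b, c = state
    return (not c,      # A is inhibited by C
            a,          # B is activated by A
            a and b)    # C requires both A and B

def attractor(state, max_steps=64):
    """Iterate the synchronous update until a state repeats; return the cycle."""
    seen, trajectory = {}, []
    for _ in range(max_steps):
        if state in seen:
            return trajectory[seen[state]:]
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state)
    return None

def canonical(cycle):
    """Rotate a cycle to start at its smallest state so attractors compare equal."""
    i = cycle.index(min(cycle))
    return tuple(cycle[i:] + cycle[:i])

# Exhaustively enumerate attractors over all 2^3 initial states.
attractors = {canonical(attractor(s)) for s in product([False, True], repeat=3)}
```

Attractors found this way are commonly interpreted as stable cell states (e.g. healthy versus diseased); for realistic network sizes, exhaustive state-space enumeration is replaced by sampling or symbolic methods.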
Background
Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. Using a CAD application, it would be possible to construct models using available biological "parts" and directly generate the DNA sequence that represents the model, thus increasing the efficiency of design and construction of synthetic networks.
Results
An application named TinkerCell has been developed in order to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models that are constructed using these parts can be analyzed using various third-party C and Python programs that are hosted by TinkerCell via an extensive C and Python application programming interface (API). TinkerCell supports the notion of modules, which are networks with defined interfaces. Such modules can be connected to each other to form larger modular networks. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at http://www.tinkercell.com.
Conclusion
An ideal CAD application for engineering biological systems would provide features such as building and simulating networks, analyzing network robustness, and searching databases for components that meet design criteria. At the current state of synthetic biology, there are no established methods for measuring robustness or identifying components that fit a design; the same is true for databases of biological parts. TinkerCell's flexible modeling framework allows it to cope with changes in the field, such as changes in the way parts are characterized or the way synthetic networks are modeled and analyzed computationally. TinkerCell can readily accept third-party algorithms, allowing it to serve as a platform for testing different methods relevant to synthetic biology.

Purpose: In this contribution, we critically evaluate the various efforts, and the (limited) success thereof, to introduce standards for defining, designing, assembling, characterizing, and sharing synthetic biology parts. The causes of this limited success, as well as possible solutions, are discussed.
Conclusion: Akin to other engineering disciplines, extensive standardization will undoubtedly speed up and reduce the cost of bioprocess development. In this respect, further implementation of synthetic biology standards will be crucial for the field to deliver on its promise, i.e. to enable predictable forward engineering.
Areas covered: Here, we provide a general introduction to the systems biology approach and to mechanistic insights recently obtained by over-representation analysis of proteomics data from cellular and animal models of Alzheimer's disease, Parkinson's disease, and other neurodegenerative disorders, as well as from affected human tissues.
Expert commentary: As an inductive method, proteomics is based on unbiased observations that further require validation of the generated hypotheses. Pathway databases and over-representation analysis tools allow researchers to assign an expectation value to pathogenetic mechanisms linked to neurodegenerative diseases. A systems biology approach based on omics data may be the key to unraveling the complex mechanisms underlying neurodegeneration.
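The over-representation analysis mentioned above typically reduces to a hypergeometric test per pathway: how surprising is the observed overlap between a hit list and a pathway's members? A minimal sketch, where all counts (background size, pathway size, hit list) are made-up illustrative numbers:

```python
from math import comb

def ora_pvalue(N, K, n, k):
    """Hypergeometric over-representation p-value: the probability of seeing
    k or more pathway members among n regulated proteins, given a pathway of
    size K within a background of N quantified proteins."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Illustrative numbers: 1000 quantified proteins, a 50-protein pathway,
# 20 regulated proteins, 5 of which fall in the pathway.
p = ora_pvalue(1000, 50, 20, 5)
```

In practice the test is repeated over hundreds of pathways, so the resulting p-values must be corrected for multiple testing (e.g. Benjamini-Hochberg) before mechanisms are ranked.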
Purpose: For rational metabolic engineering, elucidating metabolic pathways in fine detail and manipulating them according to requirements is key to exploiting microalgae. The emergence of site-specific nucleases has revolutionized applied research, leading to biotechnological gains. Genome engineering, as well as high-precision modulation of the endogenous genome using CRISPR systems, is gradually being employed in microalgal research. Further, to optimize and produce better algal platforms, systems biology network analysis and the integration of omics data are required. This review discusses two important approaches, systems biology and gene editing strategies, as applied to microalgal systems, with a focus on biofuel production and sustainable solutions. It also emphasizes that integrating such systems would contribute to and complement applied research on microalgae.
Conclusions: Recent advances in microalgae are discussed, including systems biology, gene editing approaches in lipid biosynthesis, and antenna engineering. Lastly, we attempt to show why CRISPR/Cas systems are better editing tools than existing techniques for gene modulation and engineering in biofuel production.
Chemical biology is a research field that uses small molecules to investigate biological phenomena. One of its most important aims is to find such small molecules, and natural products are ideal screening sources due to their structural diversity. Natural product screening informed by the progress of chemical biology has therefore prompted us to search for small molecules targeting cancer characteristics. Another contribution of chemical biology is to facilitate the target identification of small molecules: among the variety of methods for uncovering protein function, chemical biology is a remarkable approach in which small molecules are used as probes to elucidate protein functions related to cancer development.
Abbreviations: EGF: Epidermal growth factor; PDGF: Platelet-derived growth factor; CRPC: Castration-resistant prostate cancer; AR: Androgen receptor; FTase: Farnesyl transferase; 5-LOX: 5-Lipoxygenase; LT: Leukotriene; CysLT1: Cysteinyl leukotriene receptor 1; GPA: Glucopiericidin A; PA: Piericidin A; XN: Xanthohumol; VCP: Valosin-containing protein; ACACA: Acetyl-CoA carboxylase-α.
Areas covered: Proteomics-based methods to investigate proteolysis activity, focusing on substrate identification, protease specificity and their applications in systems biology are reviewed. Their quantification strategies, challenges and pitfalls are underlined and the biological implications of protease malfunction are highlighted.
Expert commentary: Dysregulated protease activity is a hallmark of some disease pathologies, such as cancer. Current biochemical approaches are low throughput, and some are limited by the amount of sample required to obtain reliable results. Mass spectrometry-based proteomics provides a suitable platform for investigating protease activity, yielding information about substrate specificity and mapping cleavage sites.
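The substrate-specificity rules such approaches probe can be illustrated with a toy in silico digest. The rule encoded below (trypsin cleaves C-terminal to K or R, but not before proline) is the classical simplification, and the sequence is an arbitrary example:

```python
import re

def tryptic_peptides(sequence):
    """Toy in silico tryptic digest: zero-width split after every K/R
    that is not followed by P (the classical specificity rule)."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", sequence) if p]

# Arbitrary example sequence; note the uncut K-P site near the C-terminus.
peptides = tryptic_peptides("MKWVTFISLLLLFSSAYSRGVFRRDTHKPK")
```

Real workflows additionally model missed cleavages and non-canonical sites; cleavage-site mapping then compares observed peptide termini against such predicted digests.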
Background
Given the complex mechanisms underlying biochemical processes, systems biology researchers tend to build ever larger computational models. However, dealing with complex systems entails a variety of problems, e.g. limited intuitive understanding, widely differing time scales, or non-identifiable parameters. Therefore, methods are needed that, at least semi-automatically, help to elucidate how the complexity of a model can be reduced such that important behavior is maintained and the predictive capacity of the model is increased. The results should be easily accessible and interpretable. In the best case, such methods may also provide insight into fundamental biochemical mechanisms.
Results
We have developed a strategy based on the Computational Singular Perturbation (CSP) method which can be used to perform a "biochemically driven" model reduction of even large and complex kinetic ODE systems. We provide an implementation of the original CSP algorithm in COPASI (a COmplex PAthway SImulator) and applied the strategy to two example models of differing complexity: a simple one-enzyme system and a full-scale model of yeast glycolysis.
Conclusion
The results show the usefulness of the method for model simplification as well as for analyzing fundamental biochemical mechanisms. COPASI is freely available at http://www.copasi.org.

Areas covered: Here, we review the advancement of the middle-down MS strategy applied to histones, which consists of analyzing intact histone N-terminal tails (aa 50–60). Middle-down MS has reached sufficient robustness and reliability, and it is far less technically challenging than PTM quantification on intact histones (top-down). However, PubMed searches return very few chromatin biology studies applying middle-down MS, indicating that it is still scarcely exploited, potentially owing to the apparent complexity of the method and its analysis.
Expert commentary: We discuss the state-of-the-art workflow and examples of existing studies, aiming to highlight the potential and feasibility of middle-down MS for cell biologists interested in chromatin and epigenetics.
Areas covered: In this paper, we review a variety of existing methods for tracing 3WIs. Furthermore, we provide a comprehensive review of previous biological studies based on 3WI models.
Expert commentary: A comparison of the features of these methods indicates that the modified liquid association algorithm is the most efficient for tracing 3WIs. The limited number of biological studies based on 3WIs suggests that the high computational demand of the available algorithms is a major obstacle to applying this approach to high-throughput omics data.
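The basic liquid-association score underlying these methods can be sketched for a single gene triplet. This is a simplified version of Li's E[X'Y'Z'] formulation, using plain standardization in place of the normal-score transform; the modified algorithm discussed above adds refinements not shown here:

```python
import statistics

def standardize(values):
    """Center to zero mean and scale to unit (population) standard deviation."""
    m = statistics.fmean(values)
    s = statistics.pstdev(values)
    return [(v - m) / s for v in values]

def liquid_association(x, y, z):
    """Basic LA score E[X'Y'Z'] over standardized expression profiles:
    measures how the X-Y co-expression changes with the level of gene Z."""
    xs, ys, zs = standardize(x), standardize(y), standardize(z)
    return sum(a * b * c for a, b, c in zip(xs, ys, zs)) / len(xs)
```

The computational burden noted above arises because tracing 3WIs naively scores all O(n^3) gene triplets, which is what the modified algorithms try to reduce.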
Areas covered: New bioinformatic tools and pipelines for the integration of data from different omics disciplines continue to emerge, and will help scientists reliably interpret data in the context of biological processes. Comprehensive data integration strategies will fundamentally improve systems biology and systems medicine. To present recent developments in integrative omics, the Göttingen Proteomics Forum (GPF) organized its 6th symposium on 23 November 2017, as part of its series of regular GPF symposia. More than 140 scientists attended the event, which highlighted the challenges and opportunities, but also the caveats, of integrating data from different omics disciplines.
Expert commentary: The continuous exponential growth of omics data requires a corresponding development of software solutions. Integrative omics tools offer a way to meet this challenge, but profound investigation and coordinated effort are required to advance the field.
Objective: To characterize the NBDHEX analogues MC2752 and MC2753 by in vitro biochemical and in silico studies.
Materials and methods: Synthesis of MC2752 and MC2753, biochemical assays, and in silico docking and normal-mode analyses.
Results: The presence of a hydrophobic moiety in the side chain of MC2753 confers unique features to this molecule. Unlike its parent drug NBDHEX, MC2753 does not require GSH to trigger the dissociation of the complex between GSTP1-1 and TRAF2, and displays high stability towards the nucleophilic attack of the tripeptide under physiological conditions.
Discussion and conclusion: MC2753 may represent a lead compound for the development of novel GSTP1-1 inhibitors whose anticancer action is unaffected by fluctuations in cellular GSH levels and which display an increased in vivo half-life.