Similar articles
20 similar articles found (search time: 31 ms)
1.
In recent years, the deluge of complex molecular and cellular microscopy images has created compelling challenges for the image computing community. There has been an increasing focus on developing novel image processing, data mining, database and visualization techniques to extract, compare, search and manage the biological knowledge in these data-intensive problems. This emerging area of bioinformatics can be called 'bioimage informatics'. This article reviews advances in the field from several aspects, including applications, key techniques, and available tools and resources. Application examples such as high-throughput/high-content phenotyping and atlas building for model organisms demonstrate the importance of bioimage informatics. The techniques essential to the success of these applications, such as bioimage feature identification, segmentation and tracking, registration, annotation, mining, image data management and visualization, are further summarized, along with a brief overview of the available bioimage databases, analysis tools and other resources.

2.
The field of lipidomics, as coined in 2003, has made profound advances and expanded rapidly. The mass spectrometry-based strategies of this analytical methodology-oriented research discipline largely fall into three categories: direct infusion-based shotgun lipidomics, liquid chromatography-mass spectrometry-based platforms, and matrix-assisted laser desorption/ionization mass spectrometry-based approaches (particularly for imaging lipid distribution in tissues or cells). This review focuses on shotgun lipidomics. After briefly introducing its fundamentals, the article covers its recent advances, including novel methods of lipid extraction, novel shotgun lipidomics strategies for identifying and quantifying previously hard-to-access lipid classes and molecular species (including isomers), and novel tools for processing and interpreting lipidomics data. Representative applications of advanced shotgun lipidomics to biological and biomedical research are also presented. We believe that with these advances, shotgun lipidomics should become more comprehensive and higher throughput, thereby greatly accelerating the field's efforts to characterize aberrant lipid metabolism, signaling, trafficking, and homeostasis under pathological conditions and the biochemical mechanisms that underpin them.

3.
4.
Metabolic network analysis has attracted much attention in systems biology. It plays a profound role in understanding the key features of an organism's metabolic network and has been successfully applied in several areas of systems biology, including in silico gene knockouts, production yield improvement using engineered microbial strains, drug target identification, and phenotype prediction. A variety of metabolic network databases and tools have been developed to assist research in these fields. Databases comprising biochemical data are normally used in combination with metabolic network analysis tools to give more comprehensive results. This paper reviews and compares eight databases and twenty-one recent tools. The aim of this review is to examine the tools in terms of features and usability, and the databases in terms of scope and the data they provide. The tools can be categorised into three main types: standalone tools, toolbox-based tools, and web-based tools. Comparisons of the databases and the tools are also provided to help software developers and users gain a clearer insight into and better understanding of metabolic network analysis. Additionally, this review provides useful guidance in choosing tools and databases for a particular research interest.
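The steady-state assumption underlying the network analyses these tools perform (e.g. flux balance analysis) can be illustrated with a toy stoichiometric model. The network, matrix, and flux values below are invented for illustration and do not come from any of the reviewed databases or tools:

```python
import numpy as np

# Toy linear pathway (illustrative only):
#   R1: -> A        R2: A -> B        R3: B ->
# Stoichiometric matrix S: rows are metabolites (A, B), columns are reactions.
S = np.array([
    [1, -1,  0],   # A: produced by R1, consumed by R2
    [0,  1, -1],   # B: produced by R2, consumed by R3
])

# A candidate flux distribution: equal flux through the whole pathway.
v = np.array([2.0, 2.0, 2.0])

# Steady state requires S @ v = 0 (no net accumulation of any metabolite).
print(np.allclose(S @ v, 0))
```

Most of the reviewed tools operate on exactly this kind of matrix, adding an objective function and flux bounds to select one flux vector from the null space of S.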

5.
6.
An architecture for biological information extraction and representation
MOTIVATION: Technological advances in biomedical research are generating a plethora of heterogeneous data at a high rate. There is a critical need for extraction, integration and management tools that support information discovery and synthesis from these heterogeneous data. RESULTS: In this paper, we present a general architecture, called ALFA, for information extraction and representation from diverse biological data. The ALFA architecture consists of: (i) a networked, hierarchical, hyper-graph object model for representing information from heterogeneous data sources in a standardized, structured format; and (ii) a suite of integrated, interactive software tools for information extraction and representation from diverse biological data sources. As part of our research efforts to explore this space, we have prototyped the ALFA object model and a set of interactive software tools for searching, filtering, and extracting information from scientific text. In particular, we describe BioFerret, a meta-search tool for searching and filtering relevant information from the web, and ALFA Text Viewer, an interactive tool for user-guided extraction, disambiguation, and representation of information from scientific text. We further demonstrate the potential of our tools for integrating the extracted information with experimental data and diagrammatic biological models via the common underlying ALFA representation. CONTACT: aditya_vailaya@agilent.com
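The abstract does not show the ALFA data model itself; the essential idea of a hierarchical hyper-graph (nodes that nest, and edges that connect any number of nodes) can be sketched as below. All class and field names here are hypothetical, not taken from the ALFA paper:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A hierarchical node: attributes plus nested children (hypothetical sketch)."""
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)   # hierarchy within the graph

@dataclass
class HyperEdge:
    """An edge linking any number of nodes under one relation label."""
    relation: str
    members: list

# A hyper-edge can relate a document to the gene and protein it mentions
# in a single, typed connection -- something a binary edge cannot express.
gene = Node("TP53", {"type": "gene"})
protein = Node("p53", {"type": "protein"})
paper = Node("PMID:12345", {"type": "document"})
mention = HyperEdge("mentions", [paper, gene, protein])
print(len(mention.members))  # -> 3
```

A structure like this is one plausible way heterogeneous sources (text, experimental data, diagrams) could be mapped onto a common representation, as the ALFA architecture describes.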

7.
The evolution of advanced manufacturing technologies and the new manufacturing paradigm have enriched the computer integrated manufacturing (CIM) methodology. These advances place greater demands on CIM integration technology and its supporting tools, among them the need to provide CIM systems with better software architecture, more flexible integration mechanisms, and powerful support platforms. In this paper, we present an integrating infrastructure for CIM implementation in manufacturing enterprises to form an integrated automation system. A research prototype of this integrating infrastructure has been developed for the development, integration, and operation of integrated CIM systems. It is based on a client/server structure and employs object-oriented and agent technology. System openness, scalability, and maintainability are ensured by conforming to international standards and by using effective system design software and management tools.

8.
A recent article about genomic filtering highlights exciting new opportunities for antiparasitic drug discovery resulting from major advances in genomic technologies. In this article, we discuss several approaches in which model-organism genomics and proteomics could be applied to the identification and validation of novel targets for antiparasitic drug discovery in veterinary medicine.

9.
Modern chemotherapy has significantly improved patient outcomes against drug-sensitive tuberculosis. However, the rapid emergence of drug-resistant tuberculosis, together with the bacterium's ability to persist and remain latent, presents a major public health challenge. To overcome this problem, research into novel anti-tuberculosis targets and drug candidates is of paramount importance. This review provides an overview of tuberculosis, highlighting the recent advances and tools employed in anti-tuberculosis drug discovery. The predominant focus is on anti-tuberculosis agents currently in the pipeline, i.e., in clinical trials.

10.
Biomedical applications of protein chips
The development of microchips involving proteins has accelerated within the past few years. Although DNA chip technologies formed the precedent, many different strategies and technologies have been used because proteins are inherently a more complex type of molecule. This review covers the various biomedical applications of protein chips in diagnostics, drug screening and testing, disease monitoring, drug discovery (proteomics), and medical research. The proteomics and drug discovery section is further subdivided to cover drug discovery tools (on-chip separations, expression profiling, and antibody arrays), molecular interactions and signaling pathways, the identification of protein function, and the identification of novel therapeutic compounds. Although largely focused on protein chips, this review includes chips involving cells and tissues as a logical extension of the type of data that can be generated from these microchips.

11.
Chen Q, Liu T, Chen G. Current Genomics, 2011, 12(6): 380-390.
Proteomics will contribute greatly to the understanding of gene functions in the post-genomic era. In proteome research, protein digestion is a key procedure prior to mass spectrometry identification. During the past decade, a variety of electromagnetic waves have been employed to accelerate proteolysis. This review focuses on the recent advances and the key strategies of these novel proteolysis approaches for digesting and identifying proteins. The subjects covered include microwave-accelerated protein digestion, infrared-assisted proteolysis, ultraviolet-enhanced protein digestion, laser-assisted proteolysis, and future prospects. It is expected that these novel proteolysis strategies accelerated by various electromagnetic waves will become powerful tools in proteome research and will find wide applications in high throughput protein digestion and identification.

12.
13.
Recent advances in DNA sequencing technology have enabled the elucidation of whole-genome information from a plethora of organisms. In parallel, various bioinformatics tools have driven the comparative analysis of genome sequences between species and within isolates. While drawing meaningful conclusions from a large amount of raw material, computer-aided identification of suitable targets for further experimental analysis and characterization has also led to the prediction of non-human-homologous essential genes in bacteria as promising candidates for novel drug discovery. Here, we present a comparative genomic analysis to identify essential genes in Burkholderia pseudomallei. Our in silico prediction identified 312 essential genes that could also be potential drug candidates. These genes encode proteins essential for the survival of B. pseudomallei, including outer and inner membrane and surface structures, regulators, proteins involved in pathogenicity, adaptation and chaperone functions, degradation of small molecules and macromolecules, energy metabolism, information transfer, central/intermediate/miscellaneous metabolic pathways, and some conserved hypothetical proteins of unknown function. Our in silico approach has thus enabled rapid screening and identification of potential drug targets for further characterization in the laboratory.
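The core filtering logic of this kind of in silico screen, keeping genes that are essential while discarding those with a significant human homolog, can be sketched as below. The gene IDs, record layout, and E-value cutoff are all illustrative assumptions, not values from the study:

```python
# Hypothetical records: (gene_id, essential_in_reference, best_human_hit_evalue)
# An evalue of None means BLAST against the human proteome returned no hit.
records = [
    ("BPSL0001", True, None),     # essential, no human hit -> candidate
    ("BPSL0002", True, 1e-40),    # essential but strong human homolog -> excluded
    ("BPSL0003", False, None),    # non-essential -> excluded
]

EVALUE_CUTOFF = 1e-10  # hits with smaller E-values count as homologs (illustrative)

def candidate_targets(recs, cutoff=EVALUE_CUTOFF):
    """Keep genes that are essential and lack a significant human homolog."""
    return [
        gene
        for gene, essential, evalue in recs
        if essential and (evalue is None or evalue > cutoff)
    ]

print(candidate_targets(records))  # -> ['BPSL0001']
```

In the actual study, the essentiality calls would come from comparison against databases of known essential genes, and the homology filter from sequence searches against the human proteome; the list comprehension above only captures the final intersection step.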

14.
Protein tyrosine phosphorylation is a fundamental mechanism for controlling many cellular processes, as well as aspects of human health and disease. Compared with phosphoserine and phosphothreonine, phosphotyrosine signaling is more tightly regulated but often more challenging to characterize, owing to the significantly lower level of tyrosine phosphorylation (a relative abundance of 1800:200:1 has been estimated for phosphoserine/phosphothreonine/phosphotyrosine in vertebrate cells). In this review, we outline recent advances in analytical methodologies for the enrichment, identification and accurate quantitation of tyrosine-phosphorylated proteins and peptides. Advances in antibody-based technologies, capillary liquid chromatography coupled with mass spectrometry, and various stable isotope labeling strategies are discussed, as well as non-mass-spectrometry-based methods such as protein/peptide arrays. As a result of these advances, tools are now available to crack signal transduction codes at the system level and to provide a basis for discovering novel drug targets for human diseases.

15.
Biomedical research relies increasingly on large collections of data sets and knowledge whose generation, representation and analysis often require large collaborative and interdisciplinary efforts. This dimension of 'big data' research calls for the development of computational tools to manage such a vast amount of data, as well as tools that can improve communication and access to information for collaborating researchers and for the wider community. Whenever research projects have a defined temporal scope, an additional data management issue arises, namely how the knowledge generated within the project can be made available beyond its boundaries and lifetime. DC-THERA is a European 'Network of Excellence' (NoE) that spawned a very large collaborative and interdisciplinary research community focused on the development of novel immunotherapies derived from fundamental research in dendritic cell immunobiology. In this article we introduce the DC-THERA Directory, an information system designed to support knowledge management for this research community and beyond. We present how the use of metadata and Semantic Web technologies can effectively help to organize the knowledge generated by modern collaborative research, how these technologies can enable effective data management solutions during and beyond the project lifecycle, and how resources such as the DC-THERA Directory fit into the larger context of e-science.

16.
MOTIVATION: Bioinformatics clustering tools are useful at all levels of proteomic data analysis. Proteomics studies can provide a wealth of information and rapidly generate large quantities of data from the analysis of biological specimens. The high dimensionality of the data generated by these studies requires the development of improved bioinformatics tools for efficient and accurate analyses. For proteome profiling of a particular system or organism, a number of specialized software tools are needed; indeed, significant advances are needed in the informatics and software tools that support the analysis and management of these massive amounts of data. Clustering algorithms based on probabilistic and Bayesian models provide an alternative to heuristic algorithms: the number of clusters (diseased and non-diseased groups) reduces to the choice of the number of components of a mixture of underlying probability distributions, and the Bayesian approach incorporates information from the data into the analysis while providing an estimate of the uncertainties of both the data and the parameters involved. RESULTS: We present novel algorithms that can organize, cluster and derive meaningful patterns of expression from large-scale proteomics experiments. We processed the raw data using a graphical-based algorithm, transforming each spectrum from a real-space to a complex-space representation via the discrete Fourier transform, and then applied a thresholding approach to denoise and reduce the length of each spectrum. Bayesian clustering was applied to the reconstructed data. In comparison with several other algorithms used in this study, including K-means, the Kohonen self-organizing map (SOM), and linear discriminant analysis, the Bayesian-Fourier model-based approach consistently displayed superior performance in selecting the correct model and the number of clusters, thus providing a novel approach for accurate diagnosis of the disease.
Using this approach, we were able to successfully denoise proteomic spectra and achieve up to a 99% reduction in the number of peaks compared with the original data. In addition, the Bayesian-based approach produced a better classification rate than the other classification algorithms. This finding will allow us to apply the Fourier transform to the selection of the protein profile for each sample, and to develop a novel bioinformatic strategy based on Bayesian clustering for biomarker discovery and optimal diagnosis.
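The Fourier-thresholding denoising step described above (transform, keep only the dominant coefficients, transform back) can be sketched as follows. The synthetic two-peak "spectrum" and the `keep_fraction` parameter are illustrative choices, not values from the paper:

```python
import numpy as np

def denoise_spectrum(spectrum, keep_fraction=0.05):
    """Denoise a 1-D spectrum by Fourier thresholding.

    Transform to the frequency domain, zero all but the
    largest-magnitude coefficients, and transform back.
    """
    coeffs = np.fft.rfft(spectrum)
    n_keep = max(1, int(len(coeffs) * keep_fraction))
    # Indices of the largest-magnitude coefficients
    idx = np.argsort(np.abs(coeffs))[::-1][:n_keep]
    kept = np.zeros_like(coeffs)
    kept[idx] = coeffs[idx]
    return np.fft.irfft(kept, n=len(spectrum))

# Synthetic spectrum: two Gaussian peaks plus white noise
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 512)
clean = np.exp(-((x - 0.3) ** 2) / 0.01) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.01)
noisy = clean + 0.05 * rng.standard_normal(512)

denoised = denoise_spectrum(noisy)
# Denoising should bring the spectrum closer to the clean signal
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The paper's pipeline then feeds such reconstructed, shortened spectra into Bayesian mixture-model clustering; the sketch above covers only the denoising and length-reduction step.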

17.
Challenges and solutions in proteomics
The accelerated growth of proteomics data presents both opportunities and challenges. Large-scale proteomic profiling of biological samples such as cells, organelles or biological fluids has led to the discovery of numerous key and novel proteins involved in many biological and disease processes, including cancers, as well as to the identification of novel disease biomarkers and potential therapeutic targets. While proteomic data analysis has been greatly assisted by the many bioinformatics tools developed in recent years, a careful analysis of the major steps and flow of data in a typical high-throughput analysis reveals a few gaps that still need to be filled to fully realize the value of the data. To facilitate functional and pathway discovery for large-scale proteomic data, we have developed an integrated proteomic expression analysis system, iProXpress, which facilitates protein identification using a comprehensive sequence library and functional interpretation using integrated data. With its modular design, iProXpress complements, and can be integrated with, other software in a proteomic data analysis pipeline. This approach to complex biological questions involves the interrogation of multiple data sources, thereby facilitating hypothesis generation and knowledge discovery from genomic-scale studies and fostering disease diagnosis and drug development.

18.
王明凤, 曹佳莉, 袁权, 夏宁邵. 微生物学报, 2019, 59(12): 2263-2275.
Chronic hepatitis B virus (HBV) infection is a worldwide public health problem that seriously threatens human life and health. Treatment strategies based on existing anti-HBV drugs achieve a functional cure of chronic hepatitis B in only a very small proportion of patients. Developing more effective anti-HBV drugs requires a more thorough and comprehensive understanding of the functions and mechanisms of each viral component and of key host factors across the HBV infection and replication life cycle, and, on that basis, the discovery and identification of new therapeutic targets. Cell models that support HBV infection and replication in vitro are important tools for studying the HBV life cycle and play a key role in work such as the discovery of new therapeutic targets and the evaluation of the efficacy of candidate drugs. This article reviews recent research progress on cell models supporting HBV infection and replication, and systematically describes and discusses the application characteristics and limitations of these models, recent advances, and future directions.

19.
The drastic increase in the cost of discovering and developing a new drug, along with the high attrition rate of development candidates, has shifted drug-discovery strategy toward parallel assessment of comprehensive physicochemical and absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties alongside efficacy. With the proposal of a profiling paradigm and the use of integrated risk assessment, one can greatly enhance the predictive power of in vitro tools by taking into consideration the interplay among profiling parameters. In particular, this article reviews recent advances in the accurate assessment of solubility and other physicochemical parameters. Proper interpretation of these experimental data is crucial for rapid and meaningful risk assessment and rational optimization of drug candidates in drug discovery. The impact of these tools in assisting drug-discovery teams to establish in vitro-in vivo correlations (IVIVC) as well as structure-property relationships (SPR) is also presented.

20.
