Similar Literature
20 similar documents found; search time: 15 ms.
1.
Microarray analysis has become a widely used method for generating gene expression data on a genomic scale. Microarrays have been enthusiastically applied in many fields of biological research, even though several open questions remain about the analysis of such data. A wide range of computational approaches is available, but there is no general consensus on a standard protocol for microarray data analysis. Consequently, the choice of analysis technique is a crucial decision that depends both on the data and on the goals of the experiment, and a basic understanding of bioinformatics is required for optimal experimental design and meaningful interpretation of the results. This review summarizes some of the common themes in DNA microarray data analysis, including data normalization and detection of differential expression. The algorithms are demonstrated on cDNA microarray data from an experiment monitoring gene expression in T helper cells. Several computational biology strategies are surveyed, their relative merits compared, and potential areas for further research discussed. The goal of the review is to provide a computational framework for applying and evaluating such bioinformatics strategies; solid knowledge of microarray informatics supports the implementation of more efficient computational protocols for the data obtained from microarray experiments.
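The normalization and differential-expression steps mentioned above can be illustrated with a minimal sketch. The code below is not taken from the reviewed paper; it assumes a hypothetical log2 expression matrix with four arrays per condition, uses simple per-array median centring as one of many possible normalization choices, and applies a per-gene t-test with a crude Bonferroni correction.

```python
import numpy as np
from scipy import stats

# Hypothetical data: rows are genes, columns are arrays (log2 intensities),
# with four control and four stimulated T helper cell samples.
rng = np.random.default_rng(0)
expr = rng.normal(8.0, 1.0, size=(1000, 8))
expr[:50, 4:] += 2.0                      # spike in 50 "differential" genes

# One of many possible normalization choices: per-array median centring.
expr -= np.median(expr, axis=0, keepdims=True)

# Detection of differential expression: per-gene two-sample t-test.
t_stat, p_val = stats.ttest_ind(expr[:, :4], expr[:, 4:], axis=1)

# Crude multiple-testing control (Bonferroni) on the raw p-values.
significant = np.where(p_val * expr.shape[0] < 0.05)[0]
print(f"{significant.size} genes called differentially expressed")
```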

2.
Taverna: a tool for the composition and enactment of bioinformatics workflows
MOTIVATION: In silico experiments in bioinformatics involve the coordinated use of computational tools and information repositories. A growing number of these resources are being made available with programmatic access in the form of Web services, and bioinformatics scientists will need to orchestrate these Web services in workflows as part of their analyses. RESULTS: The Taverna project has developed a tool for the composition and enactment of bioinformatics workflows for the life sciences community. The tool includes a workbench application that provides a graphical user interface for the composition of workflows. These workflows are written in a new language called the simple conceptual unified flow language (Scufl), whereby each step within a workflow represents one atomic task. Two examples illustrate the ease with which in silico experiments can be represented as Scufl workflows using the workbench application.
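The sketch below is not Scufl and does not use the Taverna API; it is a small, hypothetical Python illustration of the underlying idea that a workflow is a set of named atomic tasks whose input ports are wired to the outputs of upstream tasks, with each task standing in for one service call.

```python
# Illustrative sketch only: this is not Scufl and not the Taverna API, just the
# idea that a workflow is a set of named atomic tasks joined by data links.
from typing import Callable, Dict

class Task:
    def __init__(self, name: str, func: Callable[..., dict]):
        self.name, self.func = name, func

def run_workflow(tasks: Dict[str, Task], links: Dict[str, str], seed: dict) -> dict:
    """links maps 'task.port' -> 'source_task.port'; seed feeds the 'workflow' source."""
    results = {"workflow": seed}
    for name, task in tasks.items():          # tasks are assumed to be in execution order
        kwargs = {}
        for port, src in links.items():
            if port.startswith(name + "."):
                src_task, src_port = src.split(".")
                kwargs[port.split(".")[1]] = results[src_task][src_port]
        results[name] = task.func(**kwargs)
    return results

# Hypothetical two-step in silico experiment: fetch a sequence, then search it.
fetch = Task("fetch", lambda accession: {"sequence": f"ATGCGT... for {accession}"})
search = Task("search", lambda sequence: {"hits": [f"hit matching {sequence[:6]}"]})
out = run_workflow(
    {"fetch": fetch, "search": search},
    {"fetch.accession": "workflow.accession", "search.sequence": "fetch.sequence"},
    {"accession": "P12345"},
)
print(out["search"]["hits"])
```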

3.
T-cell recognition of peptide/major histocompatibility complex (MHC) is a prerequisite for cellular immunity. Recently, there has been an influx of bioinformatics tools that facilitate the identification of T-cell epitopes for specific MHC alleles. This article examines existing computational strategies for the study of peptide/MHC interactions. The most important bioinformatics tools and methods relevant to the study of peptide/MHC interactions are reviewed, and guidelines are provided for predicting antigenic peptides based on the available experimental data.
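One classical strategy covered by such reviews is scoring candidate 9-mer peptides against a position-specific scoring matrix (PSSM) derived from known binders. The sketch below is purely illustrative: the matrix entries, the anchor positions, and the threshold are invented, not taken from any published allele model.

```python
# Hypothetical PSSM for a 9-mer binding motif: a log-odds score per
# (position, amino acid). Real matrices are trained on known binders;
# the values and threshold below are invented purely for illustration.
pssm = [{aa: 0.0 for aa in "ACDEFGHIKLMNPQRSTVWY"} for _ in range(9)]
pssm[1]["L"] = 2.0   # e.g. a leucine anchor at peptide position 2
pssm[8]["V"] = 1.5   # e.g. a valine anchor at the C-terminus

def score_peptide(peptide: str) -> float:
    """Sum the per-position log-odds contributions of a 9-mer."""
    return sum(pssm[i][aa] for i, aa in enumerate(peptide))

def predict_epitopes(protein: str, threshold: float = 2.5):
    """Slide a 9-residue window along the protein and keep high-scoring peptides."""
    for start in range(len(protein) - 8):
        pep = protein[start:start + 9]
        s = score_peptide(pep)
        if s >= threshold:
            yield start + 1, pep, s      # 1-based start position

for pos, pep, s in predict_epitopes("MSLLTEVETYVLSIVPSGPLKAEIAQRLEDV"):
    print(pos, pep, round(s, 2))
```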

4.
The post-'omics' era has resulted in the development of many primary, secondary, and derived databases, and many analytical and visualization bioinformatics tools have been developed to manage and analyze the data produced by large sequencing projects. The availability of heterogeneous databases and tools makes it difficult for researchers to access information from varied sources and to run the different bioinformatics tools needed for a desired analysis. Building integrated bioinformatics platforms, which requires the integration of diverse databases, tools, and algorithms, is therefore one of the most challenging tasks facing the bioinformatics community. This article describes the bioinformatics analysis workflow management systems that have been developed in the areas of gene sequence analysis and phylogeny. It will be useful for biotechnologists, molecular biologists, computer scientists, and statisticians engaged in computational biology and bioinformatics research.

5.
Being a relatively new addition to the 'omics' field, metabolomics is still evolving its own computational infrastructure and assessing its own computational needs. Because of its strong emphasis on chemical information, and because of the importance of linking that chemical data to biological consequences, metabolomics must combine elements of traditional bioinformatics with traditional cheminformatics. This is a significant challenge, as these two fields have evolved quite separately and require very different computational tools and skill sets. This review is intended to familiarize readers with the field of metabolomics and to outline the needs, challenges, and recent progress in four areas of computational metabolomics: (i) metabolomics databases; (ii) metabolomics LIMS; (iii) spectral analysis tools for metabolomics; and (iv) metabolic modeling.
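A core operation of the spectral-analysis tools mentioned in item (iii) is assigning observed m/z peaks to candidate metabolites within a mass tolerance. The sketch below assumes positive-mode [M+H]+ ions and a tiny, hypothetical reference table of monoisotopic masses; real tools use far larger databases and handle many adduct types.

```python
# Hypothetical reference table of metabolite monoisotopic masses (Da).
# Values are approximate and for illustration only.
reference = {
    "glucose":   180.0634,
    "citrate":   192.0270,
    "glutamate": 147.0532,
}

def match_peaks(peaks_mz, tol_ppm=10.0, adduct_mass=1.00728):
    """Assign [M+H]+ peaks to metabolites within a ppm mass tolerance."""
    matches = []
    for mz in peaks_mz:
        neutral = mz - adduct_mass                 # assume a protonated ion
        for name, mass in reference.items():
            if abs(neutral - mass) / mass * 1e6 <= tol_ppm:
                matches.append((mz, name))
    return matches

print(match_peaks([181.0705, 148.0601, 205.0000]))
```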

6.
7.
As proteomics research continues to advance, mass spectrometry-based selected reaction monitoring (SRM) has become an important approach in targeted proteomics, most notably for biomarker discovery. Guided by prior hypotheses, SRM specifically acquires the mass spectrometric signals that match the hypothesized conditions and removes interfering ion signals that do not, thereby yielding quantitative information for specific proteins. SRM offers higher sensitivity and precision as well as a wider dynamic range. The technique can be divided into three steps: experimental design, data acquisition, and data analysis. Among these steps, the most important is to use bioinformatics to summarize the results of existing experimental data and to apply machine learning methods, together with the derived empirical rules, to predict precursor/fragment ion pairs for SRM experiments. Research on bioinformatics methods for data quality control and quantification also plays an important role in improving the reliability of SRM data. In addition, to facilitate SRM research, this article collects and summarizes SRM-related software, tools, and database resources. As mass spectrometry instrumentation continues to develop, new SRM experimental strategies, analysis methods, and computational tools are emerging. By combining better-optimized experimental strategies and methods with more accurate bioinformatics algorithms and tools, SRM will play an increasingly important role in the future development of proteomics.
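To make the transition-selection idea concrete, here is a minimal, hypothetical sketch of rule-based ranking of candidate y-ion transitions for a tryptic peptide. The residue masses are standard monoisotopic values, but the scoring rules (prefer fragments above the precursor m/z and longer y-ions) are simplified stand-ins for the empirical rules and machine-learning predictors discussed above.

```python
# Hypothetical rule-based ranking of candidate SRM transitions (precursor ->
# y-ion pairs) for a tryptic peptide. Residue masses are standard monoisotopic
# values; the scoring rules are simplified and purely illustrative.
RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
       "T": 101.04768, "L": 113.08406, "I": 113.08406, "N": 114.04293, "D": 115.02694,
       "Q": 128.05858, "K": 128.09496, "E": 129.04259, "R": 156.10111, "F": 147.06841}
H2O, PROTON = 18.01056, 1.00728

def precursor_mz(peptide: str, charge: int = 2) -> float:
    return (sum(RES[a] for a in peptide) + H2O + charge * PROTON) / charge

def y_ions(peptide: str):
    """Singly charged y-ion m/z values, y1 .. y(n-1)."""
    for i in range(1, len(peptide)):
        yield i, sum(RES[a] for a in peptide[-i:]) + H2O + PROTON

def rank_transitions(peptide: str, top_n: int = 3):
    prec = precursor_mz(peptide)
    scored = []
    for i, mz in y_ions(peptide):
        score = 0.0
        if mz > prec:     # fragments above the precursor m/z suffer less interference
            score += 1.0
        if i >= 4:        # longer y-ions tend to be more peptide-specific
            score += 0.5
        scored.append((score, f"y{i}", round(mz, 3)))
    scored.sort(reverse=True)
    return [(round(prec, 3), ion, mz) for _, ion, mz in scored[:top_n]]

for prec, ion, mz in rank_transitions("ELVISLIVESK"):
    print(f"precursor m/z {prec} -> {ion} m/z {mz}")
```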

8.
Post-translational modifications (PTMs) play important roles in regulating protein conformational changes, activity, and function, and are involved in almost all cellular pathways and processes. The identification of protein PTMs is fundamental to elucidating intracellular molecular mechanisms. Compared with labor-intensive and time-consuming experimental work, PTM prediction using various bioinformatics methods provides accurate, convenient, and rapid research protocols and generates valuable information to guide further experimental studies. This article reviews the research progress made by Chinese bioinformaticians in the field of PTM bioinformatics, including the design and refinement of computational methods for predicting modification substrates and sites, the design and maintenance of online or locally installed tools, the construction of modification-related databases and data resources, and bioinformatics analyses based on modification proteomics data. By comparing similar research at home and abroad, the article identifies strengths and weaknesses and offers an outlook on future research.

9.
Novel and improved computational tools are required to transform large-scale proteomics data into valuable information of biological relevance. To this end, we developed ProteoConnections, a bioinformatics platform tailored to the pressing needs of proteomics analyses. The primary focus of this platform is to organize peptide and protein identifications, evaluate the quality of the acquired data set, profile abundance changes, and accelerate data interpretation. Peptide and protein identifications are stored in a relational database to facilitate data mining and to evaluate data-set quality using graphical reports. We integrated databases of known PTMs and other bioinformatics tools to facilitate the analysis of phosphoproteomics data sets and to provide insights for subsequent biological validation experiments. Phosphorylation sites are also annotated according to kinase consensus motifs, contextual environment, protein domains, binding motifs, and evolutionary conservation across species. The practical application of ProteoConnections is demonstrated by analyzing phosphoproteomics data sets from rat intestinal IEC-6 cells, in which we identified 9615 phosphorylation sites on 2108 phosphoproteins. Combined proteomics and bioinformatics analyses revealed valuable biological insights into the regulation of phosphoprotein functions via the introduction of new binding sites on scaffold proteins or the modulation of protein-protein, protein-DNA, or protein-RNA interactions. Quantitative proteomics data can be integrated into ProteoConnections to determine changes in protein phosphorylation under different cell stimulation conditions or kinase inhibitors, as demonstrated here for the MEK inhibitor PD184352.
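Annotating phosphosites with kinase consensus motifs, one of the features described above, can be sketched with simple regular expressions. The motifs below are textbook-style simplifications (proline-directed, basophilic, acidophilic) chosen for illustration; they are not the actual rules implemented in ProteoConnections.

```python
import re

# Illustrative consensus motifs (simplified, not the platform's real rules).
MOTIFS = {
    "proline-directed (e.g. MAPK/CDK)": re.compile(r"[ST]P"),
    "basophilic (e.g. PKA/PKC)":        re.compile(r"R.{2}[ST]"),
    "acidophilic (e.g. CK2)":           re.compile(r"[ST].{2}[DE]"),
}

def annotate_site(protein_seq: str, site: int, flank: int = 7):
    """Return motif names whose pattern covers a phosphosite (1-based position)."""
    window = protein_seq[max(0, site - 1 - flank): site + flank]
    centre = min(site - 1, flank)            # index of the phosphosite in the window
    hits = []
    for name, pattern in MOTIFS.items():
        for m in pattern.finditer(window):
            # keep the motif only if the phosphoacceptor S/T itself is inside the match
            if m.start() <= centre < m.end() and window[centre] in "ST":
                hits.append(name)
                break
    return hits

print(annotate_site("MKRAASPQRDDSEEE", 6))   # S6 sits in both an "SP" and an "R..S" context
```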

10.
The adoption of agent technologies and multi-agent systems constitutes an emerging area in bioinformatics. In this article, we report on the activity of the Working Group on Agents in Bioinformatics (BIOAGENTS), founded during the first AgentLink III Technical Forum meeting on 2 July 2004 in Rome. The meeting provided an opportunity to seed collaborations between the agent and bioinformatics communities to develop a different, agent-based approach to computational frameworks, both for data analysis and management in bioinformatics and for systems modelling and simulation in computational and systems biology. These collaborations gave rise to applications and integrated tools, which we summarize and discuss in the context of the state of the art in this area. We examine future challenges and argue that the field should still be explored from many perspectives, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages to be used by information agents, to the adoption of agents for computational grids.

11.
12.
13.
The advent of genome-wide RNA interference (RNAi)-based screens puts us in a position to identify genes for all functions that human cells carry out. However, for many functions, assay complexity and cost make genome-scale knockdown experiments impossible. Methods to predict genes required for cell functions are therefore needed to focus RNAi screens from the whole genome onto the most likely candidates. Although different bioinformatics tools for gene function prediction exist, they lack experimental validation and are therefore rarely used by experimentalists. To address this, we developed an effective computational gene selection strategy that represents public data about genes as graphs and then analyzes these graphs using kernels on graph nodes to predict functional relationships. To demonstrate its performance, we predicted human genes required for a poorly understood cellular function, mitotic chromosome condensation, and experimentally validated the top 100 candidates with a focused RNAi screen by automated microscopy. Quantitative analysis of the images demonstrated that the candidates were indeed strongly enriched in condensation genes and led to the discovery of several new factors. By combining bioinformatics prediction with experimental validation, our study shows that kernels on graph nodes are powerful tools for integrating public biological data and predicting genes involved in cellular functions of interest.
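As a concrete illustration of a kernel on graph nodes, the sketch below builds the diffusion kernel K = exp(-βL), with L the graph Laplacian, on a toy five-gene association graph and ranks unannotated genes by their kernel similarity to known positives. The graph, the gene labels, and the value of β are invented for illustration and do not reproduce the study's data integration.

```python
import numpy as np
from scipy.linalg import expm

# Toy undirected "functional association" graph over 5 genes (adjacency matrix).
genes = ["SMC2", "SMC4", "NCAPD2", "geneX", "geneY"]     # labels for illustration only
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Diffusion kernel on graph nodes: K = exp(-beta * L), with L = D - A the Laplacian.
L = np.diag(A.sum(axis=1)) - A
K = expm(-0.5 * L)                       # beta = 0.5, chosen arbitrarily here

# Rank unannotated genes by their total kernel similarity to known positives.
known = [0, 1, 2]                        # e.g. genes already linked to condensation
candidates = [3, 4]
scores = {genes[c]: K[c, known].sum() for c in candidates}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```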

14.
15.
MOTIVATION: The analysis and simulation of pathway data is of high importance in bioinformatics, and standards for representing information about pathways are necessary for integrating and analyzing data from various sources. Recently, a number of representation formats for pathway data, namely SBML, PSI MI and BioPAX, have been proposed. RESULTS: In this paper we compare these formats and evaluate them with respect to their underlying models, information content, and the ease with which tools can be created. The evaluation shows that the main structure of the formats is similar. However, SBML is tuned towards simulation models of molecular pathways, while PSI MI is more suitable for representing details about particular interactions and experiments; BioPAX is the most general and expressive of the formats. These differences are apparent in the information allowed and in the structure used to represent interactions. We discuss the impact of these differences with respect both to the information content of existing databases and to the computational properties of data import and analysis.

16.
The availability of hundreds of complete bacterial genomes has created new challenges and, simultaneously, new opportunities for bioinformatics. In the area of statistical analysis of genomic sequences, studies of nucleotide compositional bias and of gene bias between strands and replichores paved the way for the development of tools to predict bacterial replication origins. Only a few (about 20) origin regions for eubacteria and archaea have been proven experimentally. One reason may be that this is now considered an essentially bioinformatic problem, where predictions are sufficiently reliable that labor-intensive experiments are not run unless specifically needed. Here we describe the main existing approaches to the identification of replication origin (oriC) and termination (terC) loci in prokaryotic chromosomes and characterize a number of computational tools based on various skew types and other kinds of evidence. We also classify eubacterial and archaeal chromosomes by the predictability of their replication origins from skew plots. Finally, we discuss possible combined approaches to the identification of oriC sites that may improve the prediction tools, in particular the analysis of DnaA binding sites using comparative genomic methods.
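The skew-based approach can be sketched in a few lines: compute the GC skew (G-C)/(G+C) in consecutive windows, accumulate it along the chromosome, and take the minimum of the cumulative curve as the oriC candidate (and the maximum as the terC candidate). The example below runs on a synthetic sequence whose strand bias flips halfway, mimicking a replication origin; it is a heuristic sketch, not one of the published tools.

```python
import random

def cumulative_gc_skew(sequence: str, window: int = 1000):
    """Cumulative (G - C)/(G + C) per window; the minimum is a candidate oriC,
    the maximum a candidate terC (the classical skew-based heuristic)."""
    cumulative, total = [], 0.0
    for start in range(0, len(sequence) - window + 1, window):
        w = sequence[start:start + window]
        g, c = w.count("G"), w.count("C")
        total += (g - c) / (g + c) if (g + c) else 0.0
        cumulative.append((start, total))
    return cumulative

# Synthetic "chromosome": C-rich in the first half, G-rich in the second,
# mimicking the strand bias that flips at the replication origin.
random.seed(1)
first_half = "".join(random.choices("ACGT", weights=[25, 30, 20, 25], k=100_000))
second_half = "".join(random.choices("ACGT", weights=[25, 20, 30, 25], k=100_000))
chromosome = first_half + second_half

skew = cumulative_gc_skew(chromosome)
ori_candidate = min(skew, key=lambda x: x[1])[0]
print(f"predicted oriC near position {ori_candidate}")
```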

17.
18.
The ever-increasing application of bioinformatics to the effective interpretation of large and complex biological data requires expertise in the use of sophisticated computational tools and advanced statistical tests, skills that are largely lacking in the Sudanese research community. This can be attributed to the limited development and promotion of bioinformatics, a lack of senior bioinformaticians, and the general status quo of inadequate research funding in Sudan. In this paper, we describe the challenges that have hindered the development of bioinformatics as a discipline in Sudan. Additionally, we highlight specific actions that may help develop and promote its education and training. The paper takes the National University Biomedical Research Institute (NUBRI) as an example of an institute that has tackled many of these challenges and strives to drive powerful efforts in the development of bioinformatics in the country.

19.
ABSTRACT

Introduction: Discovery proteomics for cancer research generates complex datasets of diagnostic, prognostic, and therapeutic significance in human cancer. With the advent of high-resolution mass spectrometers able to identify thousands of proteins in complex biological samples, only the application of bioinformatics can turn these data into interpretations that are relevant for cancer research.

Areas covered: Here, we give an overview of the current bioinformatic tools used in cancer proteomics. Moreover, we describe their applications in cancer proteomics studies of cell lines, serum, and tissues, highlighting recent results and critically evaluating their outcomes.

Expert opinion: The use of bioinformatic tools is a fundamental step in managing the large number of proteins (from hundreds to thousands) that can be identified and quantified in cancer biological samples by proteomics. To handle this challenge and obtain data useful for translational medicine, the combined use of different bioinformatic tools is important. Moreover, particular attention to the overall experimental design and the integration of multidisciplinary skills are essential for the best setting of tool parameters and the best interpretation of the bioinformatics output.

20.
After the progress made during the genomics era, bioinformatics was tasked with supporting the flow of information generated by nanobiotechnology efforts. This challenge requires adapting classical bioinformatics and computational chemistry tools to store, standardize, analyze, and visualize nanobiotechnological information. Thus, old and new bioinformatics and computational chemistry tools have been merged into a new sub-discipline: nanoinformatics. This review takes a second look at the development of this new and exciting area from the perspective of the evolution of nanobiotechnology applied to the life sciences. The knowledge obtained at the nano-scale implies answers to new questions and the development of new concepts in different fields. The rapid convergence of technologies around nanobiotechnology has spun off collaborative networks and web platforms created for sharing and discussing the knowledge generated in nanobiotechnology. The implementation of new database schemes suitable for storing, processing, and integrating the physical, chemical, and biological properties of nanoparticles will be a key element in achieving the promise of this convergent field. In this work, we review some applications of nanobiotechnology to the life sciences that generate new requirements for diverse scientific fields, such as bioinformatics and computational chemistry.

