Similar Articles
20 similar records found.
1.
Bioinformatics tools for proteomics, also called proteome informatics tools, today span a wide range of applications, from simple tools for comparing protein amino acid compositions to sophisticated software for large-scale protein structure determination. This review covers the available, ready-to-use tools that help end users interpret, validate and generate biological information from their experimental data. It concentrates on bioinformatics tools for 2-DE analysis, for LC followed by MS analysis, for protein identification by PMF, peptide fragment fingerprinting and de novo sequencing, and for quantitation from MS data. It also describes initiatives that aim to automate MS analysis and enhance the quality of the results obtained.

2.
The systematic characterization of the whole interactomes of different model organisms has revealed that the eukaryotic proteome is highly interconnected. Biological research is therefore shifting away from classical approaches that focus on only a few proteins toward whole protein interaction networks that describe the relationships among proteins in biological processes. In this minireview, we survey the most common methods for the systematic identification of protein interactions and exemplify different strategies for generating protein interaction networks. In particular, we focus on the recent development of protein interaction networks derived from quantitative proteomics data sets.

3.
Allan R. Brasier, BioTechniques (2002) 32(1):100-102, 104, 106, 108-109
High-density oligonucleotide arrays are widely employed for detecting global changes in the gene expression profiles of cells or tissues exposed to specific stimuli. Presented with large amounts of data, investigators can spend significant time analyzing and interpreting these array data. In our application of GeneChip arrays to analyze changes in gene expression in virus-infected epithelium, we needed to develop additional computational tools that may be of use to other investigators using this methodology. Here, I describe two executable programs that facilitate data extraction and multiple-data-point analysis. These programs run in a virtual DOS environment on Microsoft Windows 95/98/2K operating systems on a desktop PC, and both can be freely downloaded from the BioTechniques Software Library (www.BioTechniques.com). The first program, Retriever, extracts primary data from an array experiment contained in an Affymetrix text file using user-supplied identification strings (e.g., the probe set identification numbers). With specific data retrieved for individual genes, hybridization profiles can be examined and the data normalized. The second program, CompareTable, facilitates comparison of two experimental replicates: it compares two lists of genes, identifies common entries, extracts their data, and writes an output text file containing only those genes present in both experiments. The output files generated by these two programs can be opened and manipulated by any software application that recognizes tab-delimited text files (e.g., Microsoft NotePad or Excel).
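As a rough illustration of the comparison step described above, the sketch below intersects two tab-delimited GeneChip result files on their probe set identifiers and writes only the shared entries to a new tab-delimited file. It is not the published CompareTable code; the file names and the assumption that the probe set ID sits in the first column and the signal value in the second are hypothetical.

```python
import csv

def load_table(path):
    """Return the header row and a {probe_set_id: row} mapping for a tab-delimited file."""
    with open(path, newline="") as handle:
        reader = csv.reader(handle, delimiter="\t")
        header = next(reader)
        rows = {row[0]: row for row in reader if row}
    return header, rows

def compare_tables(path_a, path_b, out_path):
    """Write only the probe sets present in both input files to out_path (tab-delimited)."""
    header_a, table_a = load_table(path_a)
    _, table_b = load_table(path_b)
    common_ids = sorted(set(table_a) & set(table_b))  # genes found in both replicates
    with open(out_path, "w", newline="") as handle:
        writer = csv.writer(handle, delimiter="\t")
        # Hypothetical layout: column 0 = probe set ID, column 1 = signal value.
        writer.writerow(header_a + ["replicate_2_signal"])
        for probe_id in common_ids:
            writer.writerow(table_a[probe_id] + [table_b[probe_id][1]])

if __name__ == "__main__":
    compare_tables("replicate1.txt", "replicate2.txt", "common_genes.txt")
```

The resulting output file is plain tab-delimited text, so it can be opened directly in Excel or any other tool that reads such files, mirroring the workflow the abstract describes.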

4.
5.
6.
Nakai K, Vert JP, Genome Biology (2002) 3(4):reports4010.1-reports4010.3
A report on the 12th International Conference on Genome Informatics, Tokyo, Japan, 17-19 December 2001.

7.
More and more antibody therapeutics are being approved every year, mainly owing to their high efficacy and antigen selectivity. However, it is still difficult to identify the antigen, and thereby the function, of an antibody if no other information is available. Every antibody drug discovery project faces obstacles inherent to antibody science. Recent experimental technologies allow the rapid generation of large-scale data on antibody sequences, affinity, potency, structures, and biological functions, which should accelerate drug discovery research. A robust bioinformatic infrastructure for these large data sets has therefore become necessary. In this article, we first identify and discuss the typical obstacles faced during the antibody drug discovery process. We then summarize the current status of three sub-fields of antibody informatics: (i) recent progress in technologies for rational antibody design using computational approaches to affinity and stability improvement, as well as ab initio and homology-based antibody modeling; (ii) resources for antibody sequences, structures, and immune epitopes, and open drug discovery resources for the development of antibody drugs; and (iii) antibody numbering and IMGT. Here, we review “antibody informatics,” which may integrate these three fields and accelerate the bridging of gaps between industrial needs and academic solutions. This article is part of a Special Issue entitled: Recent advances in molecular engineering of antibody.

8.
Public health surveillance is undergoing a revolution driven by advances in information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind because of challenges in information technology infrastructure, public health resources, and the cost of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications, or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations.

9.
10.
The Open Microscopy Environment (OME) defines a data model and a software implementation to serve as an informatics framework for imaging in biological microscopy experiments, including representation of acquisition parameters, annotations and image analysis results. OME is designed to support high-content cell-based screening as well as traditional image analysis applications. The OME Data Model, expressed in Extensible Markup Language (XML) and realized in a traditional database, is both extensible and self-describing, allowing it to meet emerging imaging and analysis needs.
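To make the idea of a self-describing, extensible XML record concrete, here is a toy sketch built with Python's standard xml.etree module. The element and attribute names are illustrative only and do not follow the actual OME-XML schema.

```python
import xml.etree.ElementTree as ET

# Build a small, self-describing record: acquisition parameters, an annotation,
# and an analysis result all hang off the same Image element, so new kinds of
# metadata can be added without breaking existing consumers.
image = ET.Element("Image", ID="Image:1", Name="well_A01_field_3")
acquisition = ET.SubElement(image, "AcquisitionParameters")
ET.SubElement(acquisition, "Objective", Magnification="40x", NA="0.95")
ET.SubElement(acquisition, "Channel", Name="GFP", ExcitationWavelength="488", Unit="nm")
ET.SubElement(image, "Annotation", Namespace="lab/screen", Value="positive control")
result = ET.SubElement(image, "AnalysisResult", Module="NucleiCounter", Version="0.1")
ET.SubElement(result, "Measurement", Name="nucleus_count", Value="312")

print(ET.tostring(image, encoding="unicode"))
```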

11.
12.
13.
14.
Katsumi Isono, Genome Biology (2001) 2(3):reports4006.1-reports4006.3
A report on the 11th Workshop on Genome Informatics, the annual meeting on genome informatics and related subjects supported by the Genome Informatics Society of Japan, Tokyo, Japan, 18-19 December 2000.

15.
16.
Whereas genomic data are universally machine-readable, data from imaging, multiplex biochemistry, flow cytometry and other cell- and tissue-based assays usually reside in loosely organized files of poorly documented provenance. This arises because the relational databases used in genomic research are difficult to adapt to rapidly evolving experimental designs, data formats and analytic algorithms. Here we describe an adaptive approach to managing experimental data based on semantically typed data hypercubes (SDCubes) that combine Hierarchical Data Format 5 (HDF5) and Extensible Markup Language (XML) file types. We demonstrate SDCube-based storage using ImageRail, a software package for high-throughput microscopy. Experimental design and its day-to-day evolution, not rigid standards, determine how ImageRail data are organized in SDCubes. We applied ImageRail to collect and analyze drug dose-response landscapes in human cell lines at single-cell resolution.
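As a rough illustration of the HDF5-plus-XML pairing described above, the sketch below stores a toy single-cell measurement cube in an HDF5 file and writes an XML sidecar naming each axis and feature. It is not the actual ImageRail or SDCube file layout; all group, dataset, tag, and file names are hypothetical, and it assumes the h5py and NumPy packages are installed.

```python
import h5py
import numpy as np
import xml.etree.ElementTree as ET

# Toy single-cell data cube: plates x wells x cells x features.
data = np.random.rand(2, 96, 500, 3)

# Numeric measurements go into HDF5.
with h5py.File("experiment.h5", "w") as h5:
    dset = h5.create_dataset("dose_response/single_cell_features", data=data)
    dset.attrs["axes"] = "plate,well,cell,feature"

# A small XML sidecar records what each axis and each feature means.
root = ET.Element("SDCube", Name="dose_response")
for axis, size in zip(["plate", "well", "cell", "feature"], data.shape):
    ET.SubElement(root, "Axis", Name=axis, Length=str(size))
for feature in ["nuclear_intensity", "cytoplasm_intensity", "cell_area"]:
    ET.SubElement(root, "Feature", Name=feature)
ET.ElementTree(root).write("experiment.xml", encoding="utf-8", xml_declaration=True)
```

Keeping the semantics in a small, editable sidecar rather than in a rigid database schema is what lets the layout evolve with the experiment, which is the point the abstract makes about ImageRail.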

17.
With DNA sequencing now getting cheaper more quickly than data storage or computation, the time may have come for genome informatics to migrate to the cloud.

18.
Public informatics resources for rice and other grasses (total citations: 1; self-citations: 0; citations by others: 1)
As an emerging model system, rice will benefit from an informatics infrastructure which organizes genome data and makes it available worldwide. RiceGenes and other Internet-accessible resources are evolving to meet these goals. Grass crops such as rice, maize, millet, sorghum and wheat are closely related but are represented by independent database projects; interlinking these resources would create a broad view of grass genetics and make it easier to compare data across genomes. The future success of grass informatics depends on the development of new comparative mapping displays as well as the participation of the research community in assembling and curating comparative map data.

19.
This review covers the most recent initiatives directed towards representing, storing, displaying and processing protein-related data for "comparative proteomics" studies, with a focus on data interpretation. Efforts invested in analysing and interpreting experimental data increasingly express the need to add meaning; this trend is perceptible in work dedicated to defining ontologies, modelling interaction networks, and so on. In parallel, technical advances in computer science are spurred by the development of the Web and the growing need to channel and understand massive volumes of data. Biology benefits from these advances as an application domain of choice for many generic solutions. Some examples of bioinformatics solutions are discussed, and directions for ongoing and future work conclude the review.

20.
caCORE: a common infrastructure for cancer informatics (total citations: 4; self-citations: 0; citations by others: 0)
MOTIVATION: Sites with substantive bioinformatics operations are challenged to build data processing and delivery infrastructure that provides reliable access and enables data integration. Locally generated data must be processed and stored such that relationships to external data sources can be presented. Consistency and comparability across data sets requires annotation with controlled vocabularies and, further, metadata standards for data representation. Programmatic access to the processed data should be supported to ensure the maximum possible value is extracted. Confronted with these challenges at the National Cancer Institute Center for Bioinformatics, we decided to develop a robust infrastructure for data management and integration that supports advanced biomedical applications. RESULTS: We have developed an interconnected set of software and services called caCORE. Enterprise Vocabulary Services (EVS) provide controlled vocabulary, dictionary and thesaurus services. The Cancer Data Standards Repository (caDSR) provides a metadata registry for common data elements. Cancer Bioinformatics Infrastructure Objects (caBIO) implements an object-oriented model of the biomedical domain and provides Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. caCORE has been used to develop scientific applications that bring together data from distinct genomic and clinical science sources. AVAILABILITY: caCORE downloads and web interfaces can be accessed from links on the caCORE web site (http://ncicb.nci.nih.gov/core). caBIO software is distributed under an open source license that permits unrestricted academic and commercial use. Vocabulary and metadata content in the EVS and caDSR, respectively, is similarly unrestricted, and is available through web applications and FTP downloads. SUPPLEMENTARY INFORMATION: http://ncicb.nci.nih.gov/core/publications contains links to the caBIO 1.0 class diagram and the caCORE 1.0 Technical Guide, which provide detailed information on the present caCORE architecture, data sources and APIs. Updated information appears on a regular basis on the caCORE web site (http://ncicb.nci.nih.gov/core).
