20 similar articles found (search time: 15 ms).
Jürgen Hartler, Gerhard G Thallinger, Gernot Stocker, Alexander Sturn, Thomas R Burkard, Erik Körner, Robert Rader, Andreas Schmidt, Karl Mechtler, Zlatko Trajanoski. BMC Bioinformatics 2007, 8(1):197
Background
Advances in proteomics technologies have led to a rapid increase in the number, size, and rate at which datasets are generated. Managing and extracting valuable information from such datasets requires the use of data management platforms and computational approaches.
Background
Surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI) is a proteomics tool for biomarker discovery and other high-throughput applications. Previous studies have identified various areas for improvement in the preprocessing algorithms used for protein peak detection. Bottom-up approaches to preprocessing that emphasize modeling SELDI data acquisition are promising avenues of research for finding the needed improvements in reproducibility.
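As an aside on what such preprocessing involves, here is a minimal, self-contained sketch of naive peak calling: a point is flagged if it is a local maximum and exceeds a crude signal-to-noise cutoff. The function name, spectrum values, and thresholds are invented for illustration and do not come from the paper.

```python
# Toy peak detection for a mass spectrum: call a point a peak if it is
# a local maximum and its ratio to a crude noise estimate exceeds `snr`.

def detect_peaks(intensities, snr=3.0):
    # crude noise estimate: median absolute intensity
    sorted_i = sorted(abs(x) for x in intensities)
    noise = sorted_i[len(sorted_i) // 2] or 1.0
    peaks = []
    for i in range(1, len(intensities) - 1):
        x = intensities[i]
        if x > intensities[i - 1] and x >= intensities[i + 1] and x / noise >= snr:
            peaks.append(i)
    return peaks

spectrum = [1, 2, 1, 9, 2, 1, 8, 1, 2, 1]
print(detect_peaks(spectrum))   # indices of called peaks → [3, 6]
```

Real SELDI preprocessing pipelines add baseline subtraction, calibration, and smoothing before a step like this; the sketch only shows the final local-maximum test.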
Lars Malmström, György Marko-Varga, Gunilla Westergren-Thorsson, Thomas Laurell, Johan Malmström. BMC Bioinformatics 2006, 7(1):158
Background
We present 2DDB, a bioinformatics solution for the storage, integration, and analysis of quantitative proteomics data. As data complexity and the rate at which data are produced increase in the proteomics field, the need for flexible analysis software grows accordingly.
Recursive SVM feature selection and sample classification for mass-spectrometry and microarray data (cited 3 times: 0 self-citations, 3 by others)
Xuegong Zhang, Xin Lu, Qian Shi, Xiu-qin Xu, Hon-chiu E Leung, Lyndsay N Harris, James D Iglehart, Alexander Miron, Jun S Liu, Wing H Wong. BMC Bioinformatics 2006, 7(1):197
Background
Like microarray-based investigations, high-throughput proteomics techniques require machine learning algorithms to identify biomarkers that are informative for biological classification problems. Feature selection and classification algorithms must be robust to noise and outliers in the data.
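To illustrate the recursive feature-selection idea this entry describes, the sketch below repeatedly trains a linear classifier, ranks features by absolute weight, and drops the weakest half. A simple perceptron stands in for the SVM so the example needs no external libraries; the toy data, learning rate, and epoch count are made up.

```python
# Recursive feature elimination (RFE) sketch: train a linear model,
# rank features by |weight|, discard the weakest half, and repeat.

def train_perceptron(X, y, feats, epochs=50, lr=0.1):
    w = {f: 0.0 for f in feats}
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # yi in {-1, +1}
            score = b + sum(w[f] * xi[f] for f in feats)
            if yi * score <= 0:           # misclassified: update weights
                for f in feats:
                    w[f] += lr * yi * xi[f]
                b += lr * yi
    return w

def rfe(X, y, n_keep):
    feats = list(range(len(X[0])))
    while len(feats) > n_keep:
        w = train_perceptron(X, y, feats)
        feats.sort(key=lambda f: abs(w[f]), reverse=True)
        feats = feats[:max(n_keep, len(feats) // 2)]   # keep strongest half
    return sorted(feats)

# Toy data: feature 0 separates the classes; features 1-3 are noise.
X = [[1, 0.2, -0.3, 0.1], [0.9, -0.1, 0.2, 0.0],
     [-1, 0.1, 0.3, -0.2], [-0.8, -0.2, -0.1, 0.1]]
y = [1, 1, -1, -1]
print(rfe(X, y, 1))   # feature 0, the informative one, should survive
```

The halving schedule mirrors the common RFE practice of removing a fixed fraction of features per round rather than one at a time, which keeps the number of retraining passes logarithmic in the feature count.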
Background
With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases.
Background
Liquid chromatography coupled to mass spectrometry (LC/MS) has been widely used in proteomics and metabolomics research. In this context, the technology has been increasingly used for differential profiling, i.e. broad screening of biomolecular components across multiple samples in order to elucidate observed phenotypes and discover biomarkers. One of the major challenges in this domain remains the development of better solutions for processing LC/MS data.
Paulo C Carvalho, Juliana SG Fischer, Emily I Chen, John R Yates III, Valmir C Barbosa. BMC Bioinformatics 2008, 9(1):316
Background
A goal of proteomics is to distinguish between states of a biological system by identifying protein expression differences. Liu et al. demonstrated a method to perform semi-relative protein quantitation in shotgun proteomics data by correlating the number of tandem mass spectra obtained for each protein, or "spectral count", with its abundance in a mixture; however, two issues have remained open: how to normalize spectral counting data and how to efficiently pinpoint differences between profiles. Moreover, Chen et al. recently showed how to increase the number of identified proteins in shotgun proteomics by analyzing samples with different MS-compatible detergents while performing proteolytic digestion. The latter introduced new challenges from a data analysis perspective, since replicate readings are not acquired.
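As background on the normalization question this entry raises, one widely used scheme (not necessarily the one adopted in the paper) is the normalized spectral abundance factor (NSAF): each protein's spectral count is divided by its length and then by the sum of these ratios across all proteins. The counts and lengths below are hypothetical.

```python
# NSAF sketch: NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j),
# where SpC is a protein's spectral count and L its length in residues.
# Length-normalizing corrects for longer proteins yielding more spectra.

def nsaf(spectral_counts, lengths):
    """Return NSAF values; inputs are parallel lists."""
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    return [s / total for s in saf]

# Hypothetical counts for three proteins of different lengths.
counts = [30, 30, 10]
lengths = [300, 150, 100]
values = nsaf(counts, lengths)
print([round(v, 3) for v in values])   # → [0.25, 0.5, 0.25]
```

Note how the two proteins with equal raw counts (30 each) receive different NSAF values because the shorter one produces more spectra per residue; the resulting values sum to 1 and are comparable across runs.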
Development and implementation of an algorithm for detection of protein complexes in large interaction networks (cited 4 times: 0 self-citations, 4 by others)
Md Altaf-Ul-Amin, Yoko Shinbo, Kenji Mihara, Ken Kurokawa, Shigehiko Kanaya. BMC Bioinformatics 2006, 7(1):207
Background
After the complete sequencing of a number of genomes, the focus has now turned to proteomics. Advanced proteomics technologies such as the two-hybrid assay and mass spectrometry are producing huge datasets of protein-protein interactions, which can be portrayed as networks; one of the pressing issues is to find protein complexes in such networks. The enormous size of protein-protein interaction (PPI) networks warrants the development of efficient computational methods for the extraction of significant complexes.
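The general flavour of density-based complex detection can be sketched as follows. This is an illustration of the idea, not the authors' algorithm; the toy network, seed choice, and density cutoff are invented.

```python
# Grow a candidate complex greedily from a seed node, adding the
# neighbour with the most links into the cluster, as long as the
# subgraph density 2|E| / (|V|(|V|-1)) stays above a cutoff.

def density(nodes, adj):
    nodes = set(nodes)
    edges = sum(1 for u in nodes for v in adj[u] if v in nodes) // 2
    n = len(nodes)
    return 1.0 if n < 2 else 2 * edges / (n * (n - 1))

def grow_complex(seed, adj, min_density=0.75):
    cluster = {seed}
    candidates = set(adj[seed])
    while candidates:
        # pick the candidate with the most links into the cluster
        best = max(candidates, key=lambda v: len(adj[v] & cluster))
        if density(cluster | {best}, adj) < min_density:
            break
        cluster.add(best)
        candidates = {v for u in cluster for v in adj[u]} - cluster
    return cluster

# Toy PPI network: a, b, c, d form a clique (a dense "complex");
# e hangs off it by a single edge and dilutes the density if added.
adj = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c', 'd'},
       'c': {'a', 'b', 'd'}, 'd': {'a', 'b', 'c', 'e'}, 'e': {'d'}}
print(sorted(grow_complex('a', adj)))   # → ['a', 'b', 'c', 'd']
```

Real methods additionally choose seeds systematically, merge overlapping clusters, and validate candidates against known complexes; the sketch shows only the core grow-until-sparse loop.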
Monica Chagoyen, Pedro Carmona-Saez, Hagit Shatkay, Jose M Carazo, Alberto Pascual-Montano. BMC Bioinformatics 2006, 7(1):41
Background
Experimental techniques such as DNA microarrays, serial analysis of gene expression (SAGE), and mass spectrometry proteomics, among others, are generating large amounts of data related to genes and proteins at different levels. As in any other experimental approach, it is necessary to analyze these data in the context of previously known information about the biological entities under study. The literature is a particularly valuable source of information for experiment validation and interpretation. Therefore, the development of automated text mining tools to assist in such interpretation is one of the main challenges in current bioinformatics research.
Background
Mascot™ is a commonly used protein identification program for MS as well as for tandem MS data. When analyzing huge shotgun proteomics datasets with Mascot™'s native tools, the limits of computing resources are easily reached. Until now, no open-source application has been available that is capable of converting the full content of Mascot™ result files from the original MIME format into a database-compatible tabular format, allowing direct import into database management systems and efficient handling of huge datasets analyzed by Mascot™.
Jignesh R Parikh, Manor Askenazi, Scott B Ficarro, Tanya Cashorali, James T Webber, Nathaniel C Blank, Yi Zhang, Jarrod A Marto. BMC Bioinformatics 2009, 10(1):364
Background
Efficient analysis of results from mass spectrometry-based proteomics experiments requires access to disparate data types, including native mass spectrometry files, output from algorithms that assign peptide sequence to MS/MS spectra, and annotation for proteins and pathways from various database sources. Moreover, proteomics technologies and experimental methods are not yet standardized; hence a high degree of flexibility is necessary for efficient support of high- and low-throughput data analytic tasks. Development of a desktop environment that is sufficiently robust for deployment in data analytic pipelines, and that simultaneously supports customization for programmers and non-programmers alike, has proven to be a significant challenge.
Background
Recent advances in proteomics technologies such as SELDI-TOF mass spectrometry have shown promise in the detection of early-stage cancers. However, dimensionality reduction and classification remain considerable challenges in statistical machine learning. We therefore propose a novel approach for dimensionality reduction and tested it using published high-resolution SELDI-TOF data for ovarian cancer.
Background
OFFGEL isoelectric focussing (IEF) has become a popular tool in proteomics for fractionating peptides or proteins. As a consequence, there is a need for software solutions supporting data mining, interpretation, and characterisation of experimental quality.
Background
Proteogenomics aims to utilize experimental proteome information for refinement of genome annotation. Since mass spectrometry-based shotgun proteomics approaches provide large-scale peptide sequencing data with high throughput, a data repository for shotgun proteogenomics would represent a valuable source of gene expression evidence at the translational level for genome re-annotation.
Background
Recent progress in high-throughput proteomics has provided a first opportunity to characterize protein interaction networks (PINs), but it has also raised new challenges in interpreting the accumulating data.
Eisuke Chikayama, Atsushi Kurotani, Takanori Tanaka, Takashi Yabuki, Satoshi Miyazaki, Shigeyuki Yokoyama, Yutaka Kuroda. BMC Bioinformatics 2010, 11(1):113
Background
Efficient dissection of large proteins into their structural domains is critical for high-throughput proteome analysis. So far, no study has focused on mathematically modeling a protein dissection protocol in terms of a production system. Here, we report a mathematical model for empirically optimizing the cost of large-scale domain production in proteomics research.
Fan Mo, Qun Mo, Yuanyuan Chen, David R Goodlett, Leroy Hood, Gilbert S Omenn, Song Li, Biaoyang Lin. BMC Bioinformatics 2010, 11(1):219
Background
Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one such quantification technology. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise and achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, the Savitzky-Golay filter, and the Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for better or alternative MS-based proteomic quantification.
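One of the traditional filters this entry mentions, the moving average, can be sketched in a few lines (the paper's own contribution is the wavelet-based method, which is not shown here). The signal values are made up, and edge windows are simply truncated rather than padded.

```python
# Centred moving-average smoothing: each output point is the mean of
# the samples in a window around it; windows are clipped at the edges.

def moving_average(signal, window=3):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A noisy peak: smoothing suppresses the jitter around the apex,
# but also lowers and widens the peak -- the kind of distortion that
# motivates alternatives such as wavelet-based filtering.
noisy = [0, 1, 0, 4, 10, 4, 0, 1, 0]
print([round(v, 2) for v in moving_average(noisy)])
```

The broadening visible in the output is exactly the limitation the entry alludes to: averaging trades peak fidelity for noise suppression, which can bias area-based peptide quantification.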
Johan Palmfeldt, Søren Vang, Vibeke Stenbroen, Christina B Pedersen, Jane H Christensen, Peter Bross, Niels Gregersen. Proteome Science 2009, 7(1):20