Similar Articles
20 similar articles retrieved.
1.
A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications.

LC-MS/MS provides the most widely used technology platform for proteomics analyses of purified proteins, simple mixtures, and complex proteomes. In a typical analysis, protein mixtures are proteolytically digested, the peptide digest is fractionated, and the resulting peptide fractions then are analyzed by LC-MS/MS (1, 2). Database searches of the MS/MS spectra yield peptide identifications and, by inference and assembly, protein identifications. Depending on protein sample load and the extent of peptide fractionation used, LC-MS/MS analytical systems can generate from hundreds to thousands of peptide and protein identifications (3). Many variations of LC-MS/MS analytical platforms have been described, and the performance of these systems is influenced by a number of experimental design factors (4).

Comparison of data sets obtained by LC-MS/MS analyses provides a means to evaluate the proteomic basis for biologically significant states or phenotypes. For example, data-dependent LC-MS/MS analyses of tumor and normal tissues enabled unbiased discovery of proteins whose expression is enhanced in cancer (5–7). Comparison of data-dependent LC-MS/MS data sets from phosphotyrosine peptides in drug-responsive and -resistant cell lines identified differentially regulated phosphoprotein signaling networks (8, 9). Similarly, activity-based probes and data-dependent LC-MS/MS analysis were used to identify differentially regulated enzymes in normal and tumor tissues (10). All of these approaches assume that the observed differences reflect differences in the proteomic composition of the samples analyzed rather than analytical system variability. The validity of this assumption is difficult to assess because of a lack of objective criteria to assess analytical system performance.

The problem of variability poses three practical questions for analysts using LC-MS/MS proteomics platforms. First, is the analytical system performing optimally for the reproducible analysis of complex proteomes? Second, can the sources of suboptimal performance and variability be identified, and can the impact of changes or improvements be evaluated? Third, can system performance metrics provide documentation to support the assessment of proteomic differences between biologically interesting samples?

Currently, the most commonly used measure of variability in LC-MS/MS proteomics analyses is the number of confident peptide identifications (11–13). Although consistency in numbers of identifications may indicate repeatability, the numbers do not indicate whether system performance is optimal or which components require optimization. One well characterized source of variability in peptide identifications is the automated sampling of peptide ion signals for acquisition of MS/MS spectra by instrument control software, which results in stochastic sampling of lower abundance peptides (14). Variability certainly also arises from sample preparation methods (e.g. protein extraction and digestion). A largely unexplored source of variability is the performance of the core LC-MS/MS analytical system, which includes the LC system, the MS instrument, and system software. The configuration, tuning, and operation of these system components govern sample injection, chromatography, electrospray ionization, MS signal detection, and sampling for MS/MS analysis. These characteristics all are subject to manipulation by the operator and thus provide means to optimize system performance.

Here we describe the development of 46 metrics for evaluating the performance of LC-MS/MS system components. We have implemented a freely available software pipeline that generates these metrics directly from LC-MS/MS data files. We demonstrate their use in characterizing sources of variability in proteomics platforms, both for replicate analyses on a single instrument and in the context of large interlaboratory studies conducted by the National Cancer Institute-supported Clinical Proteomic Technology Assessment for Cancer (CPTAC) Network.
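The abstract does not reproduce the formulas of the 46 metrics; the sketch below is only a minimal illustration of the underlying idea of quantifying run-to-run variability, computing a coefficient of variation for one hypothetical metric across replicate runs and comparing it against the ~10% figure cited above. Function and variable names, and all values, are assumptions for illustration.

```python
import numpy as np

def metric_cv(values):
    """Coefficient of variation (%) of one performance metric across replicate runs."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical values of one metric (e.g. median peak width, s) in four replicate runs
replicate_peak_widths = [14.2, 13.8, 14.6, 14.1]
cv = metric_cv(replicate_peak_widths)
print(f"CV = {cv:.1f}%")  # flag the component for inspection if CV exceeds ~10%
```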

2.
Here, we describe the novel use of a volatile surfactant, perfluorooctanoic acid (PFOA), for shotgun proteomics. PFOA was found to solubilize membrane proteins as effectively as sodium dodecyl sulfate (SDS). PFOA concentrations up to 0.5% (w/v) did not significantly inhibit trypsin activity. The unique features of PFOA allowed us to develop a single-tube shotgun proteomics method that used all volatile chemicals that could easily be removed by evaporation prior to mass spectrometry analysis. The experimental procedures involved: 1) extraction of proteins in 2% PFOA; 2) reduction of cystine residues with triethyl phosphine and their S-alkylation with iodoethanol; 3) trypsin digestion of proteins in 0.5% PFOA; 4) removal of PFOA by evaporation; and 5) LC-MS/MS analysis of the resulting peptides. The general applicability of the method was demonstrated with the membrane preparation of photoreceptor outer segments. We identified 75 proteins from 1 μg of the tryptic peptides in a single 1-hour LC-MS/MS run. About 67% of the proteins identified were classified as membrane proteins. We also demonstrate that a proteolytic ¹⁸O labeling procedure can be incorporated after the PFOA removal step for quantitative proteomic experiments. The present method does not require sample clean-up devices such as solid-phase extractions and membrane filters, so no proteins/peptides are lost in any experimental steps. Thus, this single-tube shotgun proteomics method overcomes the major drawbacks of surfactant use in proteomic experiments.

3.
Matros A, Kaspar S, Witzel K, Mock HP. Phytochemistry 2011, 72(10):963–974
Recent innovations in liquid chromatography-mass spectrometry (LC-MS)-based methods have facilitated quantitative and functional proteomic analyses of large numbers of proteins derived from complex samples without any need for protein or peptide labelling. Despite its great potential, the application of these proteomics techniques to plant science started only recently. Here we present an overview of label-free quantitative proteomics features and their employment for analysing plants. Recent methods used for quantitative protein analyses by MS techniques are summarized and major challenges associated with label-free LC-MS-based approaches, including sample preparation, peptide separation, quantification and kinetic studies, are discussed. Database search algorithms and specific aspects regarding protein identification of non-sequenced organisms are also addressed. So far, label-free LC-MS in plant science has been used to establish cellular or subcellular proteome maps, to characterize plant-pathogen interactions or stress defence reactions, and to profile protein patterns during developmental processes. Improvements in both analytical platforms (separation technology and bioinformatics/statistical analysis) and high-throughput nucleotide sequencing technologies will enhance the power of this method.

4.
Comparing a protein's concentrations across two or more treatments is the focus of many proteomics studies. A frequent source of measurements for these comparisons is a mass spectrometry (MS) analysis of a protein's peptide ions separated by liquid chromatography (LC) following its enzymatic digestion. Alas, LC-MS identification and quantification of equimolar peptides can vary significantly due to their unequal digestion, separation, and ionization. This unequal measurability of peptides, the largest source of LC-MS nuisance variation, stymies confident comparison of a protein's concentration across treatments. Our objective is to introduce a mixed-effects statistical model for comparative LC-MS proteomics studies. We describe LC-MS peptide abundance with a linear model featuring pivotal terms that account for unequal peptide LC-MS measurability. We advance fitting this model to an often incomplete LC-MS data set with REstricted Maximum Likelihood (REML) estimation, producing estimates of model goodness-of-fit, treatment effects, standard errors, confidence intervals, and protein relative concentrations. We illustrate the model with an experiment featuring a known dilution series of a filamentous ascomycete fungus Trichoderma reesei protein mixture. For 781 of the 1546 T. reesei proteins with sufficient data coverage, the fitted mixed-effects models capably described the LC-MS measurements. The LC-MS measurability terms effectively accounted for this major source of uncertainty. Ninety percent of the relative concentration estimates were within 0.5-fold of the true relative concentrations. Akin to the common ratio method, this model also produced biased estimates, albeit less biased. Bias decreased significantly, both absolutely and relative to the ratio method, as the number of observed peptides per protein increased. Mixed-effects statistical modeling offers a flexible, well-established methodology for comparative proteomics studies integrating common experimental designs with LC-MS sample processing plans. It favorably accounts for the unequal LC-MS measurability of peptides and produces informative quantitative comparisons of a protein's concentration across treatments with objective measures of uncertainties.
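The abstract does not give the model equation or the software used; as a hedged sketch of the general approach it describes (a linear mixed-effects model with peptide-specific terms for unequal measurability, fit by REML), the example below uses statsmodels on simulated data. All names, the simulated values, and the model parameterization are my assumptions, not the authors' implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per observed peptide ion per LC-MS run
rng = np.random.default_rng(0)
peptides = [f"pep{i}" for i in range(1, 7)]
rows = []
for treatment, shift in [("control", 0.0), ("treated", 1.0)]:   # true log2 fold change = 1
    for _ in range(3):                                           # three replicate runs
        for j, pep in enumerate(peptides):
            measurability = j * 0.8                              # unequal peptide response
            rows.append({"peptide": pep, "treatment": treatment,
                         "log_abundance": 10 + measurability + shift + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

# Fixed treatment effect; random peptide intercepts absorb unequal measurability; REML fit
model = smf.mixedlm("log_abundance ~ treatment", df, groups=df["peptide"])
fit = model.fit(reml=True)
print(fit.summary())   # the treatment coefficient estimates the protein's relative concentration
```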

5.
6.
Optimal performance of LC-MS/MS platforms is critical to generating high quality proteomics data. Although individual laboratories have developed quality control samples, there is no widely available performance standard of biological complexity (and associated reference data sets) for benchmarking of platform performance for analysis of complex biological proteomes across different laboratories in the community. Individual preparations of the yeast Saccharomyces cerevisiae proteome have been used extensively by laboratories in the proteomics community to characterize LC-MS platform performance. The yeast proteome is uniquely attractive as a performance standard because it is the most extensively characterized complex biological proteome and the only one associated with several large scale studies estimating the abundance of all detectable proteins. In this study, we describe a standard operating protocol for large scale production of the yeast performance standard and offer aliquots to the community through the National Institute of Standards and Technology where the yeast proteome is under development as a certified reference material to meet the long term needs of the community. Using a series of metrics that characterize LC-MS performance, we provide a reference data set demonstrating typical performance of commonly used ion trap instrument platforms in expert laboratories; the results provide a basis for laboratories to benchmark their own performance, to improve upon current methods, and to evaluate new technologies. Additionally, we demonstrate how the yeast reference, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix, thereby providing a metric to evaluate and minimize preanalytical and analytical variation in comparative proteomics experiments.

Access to proteomics performance standards is essential for several reasons. First, to generate the highest quality data possible, proteomics laboratories routinely benchmark and perform quality control (QC) monitoring of the performance of their instrumentation using standards. Second, appropriate standards greatly facilitate the development of improvements in technologies by providing a timeless standard with which to evaluate new protocols or instruments that claim to improve performance. For example, it is common practice for an individual laboratory considering purchase of a new instrument to require the vendor to run “demo” samples so that data from the new instrument can be compared head to head with existing instruments in the laboratory. Third, large scale proteomics studies designed to aggregate data across laboratories can be facilitated by the use of a performance standard to measure reproducibility across sites or to compare the performance of different LC-MS configurations or sample processing protocols used between laboratories to facilitate development of optimized standard operating procedures (SOPs).

Most individual laboratories have adopted their own QC standards, which range from mixtures of known synthetic peptides to digests of bovine serum albumin or more complex mixtures of several recombinant proteins (1). However, because each laboratory performs QC monitoring in isolation, it is difficult to compare the performance of LC-MS platforms throughout the community.

Several standards for proteomics are available for request or purchase (2, 3). RM8327 is a mixture of three peptides developed as a reference material in collaboration between the National Institute of Standards and Technology (NIST) and the Association of Biomolecular Resource Facilities. Mixtures of 15–48 purified human proteins are also available, such as the HUPO (Human Proteome Organisation) Gold MS Protein Standard (Invitrogen), the Universal Proteomics Standard (UPS1; Sigma), and CRM470 from the European Union Institute for Reference Materials and Measurements. Although defined mixtures of peptides or proteins can address some benchmarking and QC needs, there is an additional need for more complex reference materials to fully represent the challenges of LC-MS data acquisition in complex matrices encountered in biological samples (2, 3).

Although it has not been widely distributed as a reference material, the yeast Saccharomyces cerevisiae proteome has been extensively used by the proteomics community to characterize the capabilities of a variety of LC-MS-based approaches (4–15). Yeast provides a uniquely attractive complex performance standard for several reasons. Yeast encodes a complex proteome consisting of ∼4,500 proteins expressed during normal growth conditions (7, 16–18). The concentration range of yeast proteins is sufficient to challenge the dynamic range of conventional mass spectrometers; the abundance of proteins ranges from fewer than 50 to more than 10⁶ molecules per cell (4, 15, 16). Additionally, it is the most extensively characterized complex biological proteome and the only one associated with several large scale studies estimating the abundance of all detectable proteins (5, 9, 16, 17, 19, 20) as well as LC-MS/MS data sets showing good correlation between LC-MS/MS detection efficiency and the protein abundance estimates (4, 11, 12, 15). Finally, it is inexpensive and easy to produce large quantities of yeast protein extract for distribution.

In this study, we describe large scale production of a yeast S. cerevisiae performance standard, which we offer to the community through NIST. Through a series of interlaboratory studies, we created a reference data set characterizing the yeast performance standard and defining reasonable performance of ion trap-based LC-MS platforms in expert laboratories using a series of performance metrics. This publicly available data set provides a basis for additional laboratories using the yeast standard to benchmark their own performance as well as to improve upon the current status by evolving protocols, improving instrumentation, or developing new technologies. Finally, we demonstrate how the yeast performance standard, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix.

7.
8.
Visualization tools that allow both optimization of instrument's parameters for data acquisition and specific quality control (QC) for a given sample prior to time-consuming database searches have been scarce until recently and are currently still not freely available. To address this need, we have developed the visualization tool LogViewer, which uses diagnostic data from the RAW files of the Thermo Orbitrap and linear trap quadrupole-Fourier transform (LTQ-FT) mass spectrometers to monitor relevant metrics. To summarize and visualize the performance on our test samples, log files from RawXtract are imported and displayed. LogViewer is a visualization tool that allows a specific and fast QC for a given sample without time-consuming database searches. QC metrics displayed include: mass spectrometry (MS) ion-injection time histograms, MS ion-injection time versus retention time, MS2 ion-injection time histograms, MS2 ion-injection time versus retention time, dependent scan histograms, charge-state histograms, mass-to-charge ratio (M/Z) distributions, M/Z histograms, mass histograms, mass distribution, summary, repeat analyses, Raw MS, and Raw MS2. Systematically optimizing all metrics allowed us to increase our protein identification rates from 600 proteins to routinely determine up to 1400 proteins in any 160-min analysis of a complex mixture (e.g., yeast lysate) at a false discovery rate of <1%. Visualization tools, such as LogViewer, make QC of complex liquid chromatography (LC)-MS and LC-MS/MS data and optimization of the instrument's parameters accessible to users.
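LogViewer itself parses log files derived from Thermo RAW data; purely as a toy illustration of one of the displays listed above (a precursor charge-state histogram), and assuming the charge states have already been extracted from the scan headers, a minimal sketch might look like this:

```python
from collections import Counter

# Hypothetical precursor charge states read from the MS2 scan headers of one run
charges = [2, 2, 3, 2, 4, 3, 2, 2, 3, 2, 5, 2, 3]

hist = Counter(charges)
for z in sorted(hist):
    # simple text histogram: one '#' per MS2 scan at that charge state
    print(f"z = {z}: {'#' * hist[z]} ({hist[z]})")
```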

9.
We introduce the computer tool “Know Your Samples” (KYSS) for assessment and visualisation of large-scale proteomics datasets obtained by mass spectrometry (MS) experiments. KYSS facilitates the evaluation of sample preparation protocols, LC peptide separation, and MS and MS/MS performance by monitoring the number of missed cleavages, precursor ion charge states, number of protein identifications and peptide mass error in experiments. KYSS generates several different protein profiles based on protein abundances, and allows for comparative analysis of multiple experiments. KYSS was adapted for blood plasma proteomics and provides concentrations of identified plasma proteins. We demonstrate the utility of the KYSS tool for MS-based proteome analysis of blood plasma and for assessment of hydrogel particles for depletion of abundant proteins in plasma. The KYSS software is open source and is freely available at http://kyssproject.github.io/.
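KYSS's own implementation is available at the URL above; the snippet below is an independent sketch of two of the quantities it is said to monitor, missed tryptic cleavages and peptide mass error in ppm. The function names are mine, not KYSS's API, and the example values are hypothetical.

```python
import re

def missed_cleavages(peptide):
    """Count internal tryptic sites (K or R not followed by P) left uncleaved."""
    return len(re.findall(r"[KR](?!P)", peptide[:-1]))

def ppm_error(observed_mass, theoretical_mass):
    """Peptide mass error in parts per million."""
    return 1e6 * (observed_mass - theoretical_mass) / theoretical_mass

print(missed_cleavages("LSEPAELTDAVKR"))          # 1 missed cleavage (internal K)
print(round(ppm_error(1385.7081, 1385.7030), 2))  # ~3.68 ppm
```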

10.
MOTIVATION: Mass spectrometry (MS) data are impaired by noise, as are data from many other analytical methods. Therefore, proteomics requires statistical approaches to determine the reliability of regulatory information if protein quantification is based on ion intensities observed in MS. RESULTS: We suggest a procedure to model instrument- and workflow-specific noise behaviour of iTRAQ reporter ions that can provide regulatory information during automated peptide sequencing by LC-MS/MS. The established mathematical model representatively predicts possible variations of iTRAQ reporter ions in an MS data-dependent manner. The model can be utilized to calculate the robustness of regulatory information systematically at the peptide level in so-called bottom-up proteome approaches. It allows determination of the best-fitting regulation factor and, in addition, calculation of the probability of alternative regulations. The result can be visualized as likelihood curves summarizing both the quantity and quality of regulatory information. Likelihood curves can in principle be calculated from all peptides belonging to different regions of proteins if they are detected in LC-MS/MS experiments. Therefore, this approach offers excellent opportunities to detect and statistically validate dynamic post-translational modifications usually affecting only particular regions of the whole protein. The detection of known phosphorylation events at protein kinases served as a first proof of concept in this study and underscores the potential for noise models in quantitative proteomics.
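The abstract gives neither the noise model nor its parameters; the sketch below only illustrates the final step it describes, turning per-peptide reporter-ion ratios with peptide-specific noise estimates into a likelihood curve over candidate regulation factors. Gaussian noise is assumed here purely for illustration, and all numbers are hypothetical.

```python
import numpy as np

# Hypothetical observed log2 reporter-ion ratios for peptides of one protein,
# each with an intensity-dependent noise estimate from a fitted noise model
log2_ratios = np.array([0.9, 1.1, 0.7, 1.0])
sigmas      = np.array([0.20, 0.15, 0.35, 0.25])   # larger sigma = weaker reporter signal

def log_likelihood(candidate_log2_fc):
    """Gaussian log-likelihood of a candidate regulation factor given all peptides."""
    resid = log2_ratios - candidate_log2_fc
    return np.sum(-0.5 * (resid / sigmas) ** 2 - np.log(sigmas * np.sqrt(2 * np.pi)))

grid = np.linspace(-1, 3, 401)
curve = np.array([log_likelihood(x) for x in grid])
best = grid[curve.argmax()]
print(f"best-fitting log2 regulation factor ≈ {best:.2f}")
```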

11.

Background  

Liquid chromatography coupled to mass spectrometry (LC-MS) has become a prominent tool for the analysis of complex proteomics and metabolomics samples. In many applications multiple LC-MS measurements need to be compared, e.g. to improve reliability or to combine results from different samples in a statistical comparative analysis. As in all physical experiments, LC-MS data are affected by uncertainties, and variability of retention time is encountered in all data sets. It is therefore necessary to estimate and correct the underlying distortions of the retention time axis to search for corresponding compounds in different samples. To this end, a variety of so-called LC-MS map alignment algorithms have been developed during the last four years. Most of these approaches are well documented, but they are usually evaluated on very specific samples only. So far, no publication has assessed different alignment algorithms using a standard LC-MS sample along with commonly used quality criteria.
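As background to the alignment problem described, here is a deliberately simplified sketch of retention-time correction between two runs using matched landmark features and a linear warp. Published map-alignment algorithms use more flexible warping functions (splines, LOWESS, dynamic programming); all names and values below are hypothetical.

```python
import numpy as np

# Hypothetical retention times (min) of landmark peptides matched between two runs
rt_reference = np.array([10.2, 18.5, 25.1, 33.7, 41.0, 52.3])
rt_sample    = np.array([10.9, 19.6, 26.4, 35.2, 42.8, 54.5])   # sample run drifts later

# Fit a simple linear warp mapping the sample run onto the reference time axis
slope, intercept = np.polyfit(rt_sample, rt_reference, deg=1)

def align(rt):
    """Map a retention time from the sample run onto the reference time axis."""
    return slope * rt + intercept

print(f"corrected RT for 30.0 min: {align(30.0):.2f} min")
```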

12.
We have developed a proteomics technology featuring on-line three-dimensional liquid chromatography coupled to tandem mass spectrometry (3D LC-MS/MS). Using 3D LC-MS/MS, the yeast-soluble, urea-solubilized peripheral membrane and SDS-solubilized membrane protein samples collectively yielded 3019 unique yeast protein identifications with an average of 5.5 peptides per protein from the 6300-gene Saccharomyces Genome Database searched with SEQUEST. A single run of the urea-solubilized sample yielded 2255 unique protein identifications, suggesting high peak capacity and resolving power of 3D LC-MS/MS. After precipitation of SDS from the digested membrane protein sample, 3D LC-MS/MS allowed the analysis of membrane proteins. Among 1221 proteins containing two or more predicted transmembrane domains, 495 such proteins were identified. The improved yeast proteome data allowed the mapping of many metabolic pathways and functional categories. The 3D LC-MS/MS technology provides a suitable tool for global proteome discovery.

13.
Researchers have several options when designing proteomics experiments. Primary among these are choices of experimental method, instrumentation and spectral interpretation software. To evaluate these choices on a proteome scale, we compared triplicate measurements of the yeast proteome by liquid chromatography tandem mass spectrometry (LC-MS/MS) using linear ion trap (LTQ) and hybrid quadrupole time-of-flight (QqTOF; QSTAR) mass spectrometers. Acquired MS/MS spectra were interpreted with Mascot and SEQUEST algorithms with and without the requirement that all returned peptides be tryptic. Using a composite target decoy database strategy, we selected scoring criteria yielding 1% estimated false positive identifications at maximum sensitivity for all data sets, allowing reasonable comparisons between them. These comparisons indicate that Mascot and SEQUEST yield similar results for LTQ-acquired spectra but less so for QSTAR spectra. Furthermore, low reproducibility between replicate data acquisitions made on one or both instrument platforms can be exploited to increase sensitivity and confidence in large-scale protein identifications.
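The target-decoy thresholding mentioned here is a standard technique; as an illustrative sketch (not the authors' exact scoring procedure), the function below picks the most permissive score cutoff whose estimated FDR, decoys divided by targets at or above the cutoff, is at or below 1%. The scores are simulated and all names are hypothetical.

```python
import numpy as np

def score_threshold_at_fdr(target_scores, decoy_scores, fdr=0.01):
    """Most permissive cutoff whose estimated FDR (decoys/targets >= cutoff) is <= fdr."""
    threshold = None
    for c in np.sort(np.concatenate([target_scores, decoy_scores]))[::-1]:
        n_target = np.sum(target_scores >= c)
        n_decoy = np.sum(decoy_scores >= c)
        if n_target > 0 and n_decoy / n_target <= fdr:
            threshold = c   # remember the lowest cutoff whose FDR estimate is still acceptable
    return threshold

rng = np.random.default_rng(1)
targets = rng.normal(3.0, 1.0, 2000)   # hypothetical search-engine scores for target PSMs
decoys = rng.normal(1.0, 1.0, 2000)    # scores for decoy PSMs
print(f"score cutoff at 1% FDR ≈ {score_threshold_at_fdr(targets, decoys):.2f}")
```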

14.
A proteomics approach to understanding protein ubiquitination
There is a growing need for techniques that can identify and characterize protein modifications on a large or global scale. We report here a proteomics approach to enrich, recover, and identify ubiquitin conjugates from Saccharomyces cerevisiae lysate. Ubiquitin conjugates from a strain expressing 6xHis-tagged ubiquitin were isolated, proteolyzed with trypsin and analyzed by multidimensional liquid chromatography coupled with tandem mass spectrometry (LC/LC-MS/MS) for amino acid sequence determination. We identified 1,075 proteins from the sample. In addition, we detected 110 precise ubiquitination sites present in 72 ubiquitin-protein conjugates. Finally, ubiquitin itself was found to be modified at seven lysine residues providing evidence for unexpected diversity in polyubiquitin chain topology in vivo. The methodology described here provides a general tool for the large-scale analysis and characterization of protein ubiquitination.

15.
16.
Surface plasmon resonance mass spectrometry in proteomics
Due to the enormous complexity of the proteome, focus in proteomics shifts more and more from the study of the complete proteome to the targeted analysis of part of the proteome. The isolation of this specific part of the proteome generally includes an affinity-based enrichment. Surface plasmon resonance (SPR), a label-free technique able to follow enrichment in real-time and in a semiquantitative manner, is an emerging tool for targeted affinity enrichment. Furthermore, in combination with mass spectrometry (MS), SPR can be used to both selectively enrich for and identify proteins from a complex sample. Here we illustrate the use of SPR-MS to solve proteomics-based research questions, describing applications that use very different types of immobilized components, such as small (drug or messenger) molecules, peptides, DNA and proteins. We evaluate the current possibilities and limitations and discuss the future developments of the SPR-MS technique.

17.

18.
Proteomics, the study of the protein complement of a biological system, is generating increasing quantities of data from rapidly developing technologies employed in a variety of different experimental workflows. Experimental processes, e.g. for comparative 2D gel studies or LC-MS/MS analyses of complex protein mixtures, involve a number of steps: from experimental design, through wet and dry lab operations, to publication of data in repositories and finally to data annotation and maintenance. The presence of inaccuracies throughout the processing pipeline, however, results in data that can be untrustworthy, thus offsetting the benefits of high-throughput technology. While researchers and practitioners are generally aware of some of the information quality issues associated with public proteomics data, there are few accepted criteria and guidelines for dealing with them. In this article, we highlight factors that impact on the quality of experimental data and review current approaches to information quality management in proteomics. Data quality issues are considered throughout the lifecycle of a proteomics experiment, from experiment design and technique selection, through data analysis, to archiving and sharing.

19.
Mass spectrometry-based proteomics holds great promise as a discovery tool for biomarker candidates in the early detection of diseases. Recently much emphasis has been placed upon producing highly reliable data for quantitative profiling, for which highly reproducible methodologies are indispensable. The main problems that affect experimental reproducibility stem from variations introduced by sample collection, preparation, and storage protocols and LC-MS settings and conditions. On the basis of a formally precise and quantitative definition of similarity between LC-MS experiments, we have developed Chaorder, a fully automatic software tool that can assess experimental reproducibility of sets of large scale LC-MS experiments. By visualizing the similarity relationships within a set of experiments, this tool can form the basis of systematic quality control and thus help assess the comparability of mass spectrometry data over time, across different laboratories, and between instruments. Applying Chaorder to data from multiple laboratories and a range of instruments, experimental protocols, and sample complexities revealed biases introduced by the sample processing steps, experimental protocols, and instrument choices. Moreover, we show that reducing bias by correcting for just a few steps, for example randomizing the run order, does not provide much gain in statistical power for biomarker discovery.
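Chaorder's formal similarity definition is not reproduced in this abstract; purely as an illustration of the kind of pairwise run comparison involved, the sketch below scores two runs by the cosine similarity of their binned total-ion-current profiles. This particular measure is an assumption of mine, not the tool's actual definition, and all values are hypothetical.

```python
import numpy as np

def run_similarity(tic_a, tic_b):
    """Cosine similarity (0..1) of two runs' binned total-ion-current profiles."""
    a, b = np.asarray(tic_a, dtype=float), np.asarray(tic_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical TIC intensities summed into the same retention-time bins for two runs
run1 = [1.0, 4.2, 9.8, 7.5, 3.1, 1.2]
run2 = [1.1, 3.9, 10.5, 7.0, 2.8, 1.0]
print(f"similarity = {run_similarity(run1, run2):.3f}")
```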

20.
Multiparameter optimization of an LC-MS/MS shotgun proteomics experiment was performed without any hardware or software modification of the commercial instrument. Under the optimized experimental conditions, with a 50-cm-long separation column and a 4-h LC-MS run (including a 3-h optimized gradient), 4,825 protein groups and 37,550 peptides were identified in a single run and 5,354 protein groups and 56,390 peptides in a triplicate analysis of the A375 human cell line, for approximately 50% coverage of the expressed proteome. The major steps enabling such performance included optimization of the cell lysis and protein extraction, digestion of even insoluble cell debris, tailoring the LC gradient profile, and choosing the optimal dynamic exclusion window in data-dependent MS/MS, as well as the optimal m/z scan window.

LC-MS-based proteomics has by now become an analytical method of choice in biological studies that demand deep proteome coverage (1–3). In order to increase the number of identified proteins, LC-MS analysis is commonly preceded by sample fractionation on the level of proteins or proteolytic peptides, or both (e.g. using two-dimensional gel electrophoresis, strong anion exchange, or isoelectric focusing) (4–7). These multidimensional approaches greatly reduce the complexity of the protein or peptide mixture in each fraction prior to MS detection, which enables comprehensive analysis of nearly the entire human proteome (>10,000 proteins) (6). The reverse side of the coin is the substantial operational cost, sample consumption (up to milligrams), and integral instrument time spent in these analyses (typically several days or longer). This puts severe limitations on high-throughput biological and clinical research.

In recent years, the power of the core analytical methods employed in proteomics, liquid chromatography and mass spectrometry, has sizably increased. Owing to the technological developments in packing materials of analytical columns and coupling interfaces, LC is now entering the era of ultra-high-pressure liquid chromatography (UPLC) characterized by unparalleled peak capacity and speed of separation (8–11). High-resolution MS is progressing at a fast rate with regard to sequencing capabilities and sensitivity of detection (12–15). Apart from that, notable improvements have been achieved in related areas, such as sample preparation methods and MS data processing (16–21).

The improving performance of shotgun LC-MS proteomics reduces the gap between the analytical capabilities of one-dimensional and multidimensional approaches. This trend is likely to continue in the near future, in view of the ongoing rapid technology developments. Considering the evident advantages of one-dimensional proteomics (i.e. the ease and speed of operation, lower sample consumption, and lower cost per run), it may regain the dominant position in many biological and clinical applications that it lost with the advent of multidimensional strategies. A wide selection of one-dimensional LC-MS platforms is commercially available nowadays for routine protein analyses with complete automation of the operational workflow, allowing large arrays of biological samples to be screened without attendance. In contrast, multidimensional analyses often involve interruptions in the experimental procedure for important steps that need to be performed manually by experienced personnel.

Most recent one-dimensional proteomics studies employing the combination of UPLC separation and high-resolution MS detection demonstrate remarkable progress in protein coverage, as well as in sensitivity and speed of analysis. In a very recent study, Nagaraj et al. reported an average of 3,923 protein groups identified in a single 4-h LC-MS analysis of 4 μg of yeast cell lysate (22). Combined analysis of six single runs increased the number of identifications to more than 4,000, which is close to the total number of proteins expressed in yeast under normal conditions. The median coverage of proteins in pathways with at least 10 members in the Kyoto Encyclopedia of Genes and Genomes was 88%, and the pathways that were not covered have not been expected to be active under the conditions used (22). But relative to the yeast proteome, the comprehensive analysis of the human proteome is considerably more challenging in view of its greater complexity and large dynamic range (at least 7 orders of magnitude, compared with 4 orders of magnitude for yeast). Nonetheless, significant progress has recently been achieved in the field of one-dimensional LC-MS shotgun human proteomics. For example, in a single 8-h LC-MS run of proteolytic digest from a human cancer cell line, Cristobal et al. identified over 4,500 proteins and more than 26,000 unique peptides from as little as 1 μg of loaded sample (23). Thakur et al. reported an average of 4,695 proteins in a single LC-MS run of a human embryonic kidney cell line (HEK293) with a 480-min gradient time, and 5,376 proteins after a combined triplicate analysis (∼1 day of total MS time). The identified proteins covered in total 173 out of the 200 metabolic and signaling pathways in the Kyoto Encyclopedia of Genes and Genomes (24).

In this study, we set out to investigate, and if possible extend, the current limits of one-dimensional shotgun human proteomics using the advanced commercial LC-MS instrumentation available today. All experiments utilized multiparameter optimization, without any hardware or software modification of the vendor-provided installation.

In a single UPLC-MS run of proteolytic digest from the A375 cancer cell line (3 μg) with a 3-h gradient time, we were able to identify 37,554 peptides and 4,825 protein groups. These numbers increased to 56,390 peptides and 5,354 proteins in a triplicate analysis, which is likely to be over 50% of the expressed cellular proteome. To the best of our knowledge, this is the deepest proteome coverage ever reported for such a low amount of loaded material and such a short replicate analysis time.

This level of analytical performance required careful optimization of the entire experimental workflow, including sample preparation, LC separation, MS detection, and data processing. Here we describe the optimization steps that made the greatest contribution to the ultimate analytical performance and discuss the avenues remaining for further improvement in one-dimensional human proteomics.
