Similar Literature
20 similar documents found
1.
The Plasma Proteome: "Pathfinder" of the Human Proteome Project   (Total citations: 10; self-citations: 0; citations by others: 10)
This article summarizes the current status, difficulties, and strategies of plasma protein research. Plasma, the amorphous liquid component of blood, is a highly complex and diverse matrix containing millions of species of proteins and small peptides, as well as salts, lipids, amino acids, and sugars. Plasma proteins participate in many important physiological functions, including immunity, coagulation and anticoagulation, transport of substances, nutrition, and the regulation of growth signals. Pathological changes in human organs can alter plasma proteins in both structure and abundance, and these characteristic changes are of great significance for disease diagnosis and the monitoring of therapeutic efficacy. To date, however, our knowledge of plasma proteins remains very limited, and only a small fraction of them are used in routine clinical diagnosis. A comprehensive and systematic understanding of the properties of circulating plasma proteins in health and disease would greatly accelerate the development of plasma marker proteins for disease diagnosis and treatment monitoring. In 2002, the international Human Proteome Organisation selected the plasma proteome as one of the first initiatives of the Human Proteome Project, with the initial goals of: (a) comparing the advantages and limitations of the various proteome analysis platforms; (b) using these platforms to analyze reference samples of human plasma and serum; and (c) building a knowledge base of the human plasma proteome.

2.
The lack of sensitive, specific, multiplexable assays for most human proteins is the major technical barrier impeding development of candidate biomarkers into clinically useful tests. Recent progress in mass spectrometry-based assays for proteotypic peptides, particularly those with specific affinity peptide enrichment, offers a systematic and economical path to comprehensive quantitative coverage of the human proteome. A complete suite of assays, e.g. two peptides from the protein product of each of the ∼20,500 human genes (here termed the human Proteome Detection and Quantitation project), would enable rapid and systematic verification of candidate biomarkers and lay a quantitative foundation for subsequent efforts to define the larger universe of splice variants, post-translational modifications, protein-protein interactions, and tissue localization.

There is growing interest in the idea of a comprehensive Human Proteome Project (1) to exploit and extend the successful effort to sequence the human genome. Major challenges in defining a comprehensive Human Proteome Project (and distinguishing it from the genome effort) are 1) the potentially very large number of proteins with modified forms; 2) the diversity of technology platforms involved in their study; 3) the variety of overlapping biological “units” into which the proteome might be divided for organized conquest; and 4) sensitivity limitations in detecting proteins present in trace amounts. The process of analyzing and discussing these issues may (and ought to) be lengthy, as it addresses core scientific unknowns as well as decisions about the organization and scale of biomedical research in the future. The benefits of taking time to involve the entire biological research community, and especially the medical research segment, in these discussions are substantial.

Progress in systematically measuring proteins, however, need not wait for the conclusion of such discussions. We propose a near-term tactical approach, called the human Proteome Detection and Quantitation (hPDQ) project, that will enable measurement of the human proteome in a way that yields immediately useful results while the strategy for a comprehensive Human Proteome Project is worked out. The hPDQ project is aimed at overcoming present difficulties in answering basic biological questions about the relationship between protein abundance (or concentration) and gene expression, phenotype, disease, and treatment response, i.e., the growing field of protein biomarkers. It is thus focused on the study of biological variation affecting protein expression rather than the study of structure and mechanism, and in this initial form it does not directly address splice variants or most post-translational modifications. It is aimed at providing immediately useful capabilities to the human biology research community, in a way that does not adversely impact funding for individual investigators and does not generate administrative constraints on their ability to set and change course in the conduct of research. Specifically, the goal of the hPDQ is to enable individual biological researchers to measure defined collections of human proteins in biological samples with 1 ng/ml sensitivity and absolute specificity, at throughput and cost levels that permit the study of meaningfully large biological populations (∼500–5,000 samples). We clearly do not have this capability today.
If an investigator defines a set of 20 proteins hypothesized to change in relation to some biological process or event, assays for only a minority (often none!) will typically be available. Further, these assays will lack absolute specificity and will not easily be multiplexed. Current proteomics research platforms are focused mainly on discovery, providing increasingly broad protein sampling surveys, generally at low throughput and high cost. Such approaches generally do not yield an economical or accurate measurement of a defined set of proteins in every sample. There is thus a fundamental barrier to hypothesis testing in quantitative proteomics, where relationships between protein abundance and biology are sought. A particularly important instance of this limitation occurs in the effort to establish useful biomarkers of disease, for diagnosis, for measuring efficacy of treatment, and for monitoring of disease recurrence. This limitation is largely responsible for the research community's failure in recent years to bring forward significant numbers of new proteins as Food and Drug Administration approved diagnostic tests (2). However, if a robust, economical, and widely diffused capability to measure all human proteins existed, the research community would have the collective means to assess the utility of all human proteins as biomarkers in hundreds of diseases and other processes in the most efficient way.

The need for new or improved biomarkers in many areas of healthcare has become critical. Early detection of cancer, coupled with surgical intervention, has the potential to radically improve survival (3), provided early markers exist and can be found. Without good biomarkers, degenerative diseases such as Alzheimer's disease and chronic obstructive pulmonary disease (COPD) are difficult to detect early enough to benefit from potential therapies. Clinical development of new drugs increasingly depends on the identification of biomarkers for pharmacodynamic assessment of drug action to help guide dose and schedule, and on predictive biomarkers for selection of patients who will benefit from therapy (4). Companion diagnostics are the currency of personalized medicine and represent those predictive or response biomarkers that are linked to specific therapeutics, substantially increasing their clinical value. Surrogate biomarkers (those that substitute for a clinical outcome or response) are the most difficult to discover and to verify because of the long timeframe required, but they can radically shorten appropriate clinical trials. The impact of a vigorous increase in clinical biomarkers could thus be enormous, both in terms of patient well-being and the financial viability of healthcare systems worldwide.

Protein measurements are also likely to play an important role in assessing the quality of material stored in large clinical sample collections (biobanks). Much discussion has occurred recently regarding the value of banked samples because of unknown degrees of protein degradation occurring during acquisition, processing, and storage. This matter is of acute concern in the case of serum, where coagulation initiates a plethora of proteolytic cleavage events.
The hPDQ may provide the opportunity to determine the value of each sample through the development of proteotypic peptides tracking the stability of labile proteins.

An attractive technology for achieving the objective of hPDQ is quantitative mass spectrometry, the sensitivity and specificity of which are well established in the measurement of small molecules (5, 6) and peptides (7, 8). To achieve comprehensive quantitation of proteins, given the immense variability in their physical properties, these larger molecules are digested to component peptides using an enzyme such as trypsin, and protein amount is measured using proteotypic peptides (9, 10) as specific stoichiometric surrogates. Multiple peptides from a target protein provide independent confirmation of this stoichiometry (equivalent to having multiple enzyme-linked immunosorbent assays with different antibody pairs), serving to control for the possibility of incomplete digestion or subsequent losses. Accurate calibration is achieved by spiking digested samples with known quantities of synthetic stable-isotope labeled peptides as internal standards (11, 12). The sensitivity of this approach for multiplexed analysis of proteins in plasma has been extended from the microgram (13) to the nanogram/ml level by depletion of abundant proteins and limited peptide fractionation prior to analysis (14), or by capture of the subset of glycopeptides (15). The sensitivity and throughput of peptide MS measurements can be further increased to the levels required in hPDQ by specific enrichment of the target peptides using anti-peptide antibodies. This method, called SISCAPA (for “stable isotope standards and capture by anti-peptide antibodies”) (16) or iMALDI (for immuno-MALDI) (17), combines the enhanced sensitivity of immunoassays with the specificity of mass spectrometry while maintaining multiplexing capability. For these reasons we emphasize SISCAPA and iMALDI in this hPDQ proposal, although proteins at concentrations of 100 ng/ml or higher are readily accessible by targeted MS in plasma without antibody enrichment. Combining these elements results in a measurement system with the potential to measure 10–100 selected proteins at ng/ml levels in small (∼10 μl) samples of human plasma in a single short analytical run. Sensitivity can be further increased through the use of larger samples and/or advances in MS sensitivity. In comparison to the conventional ELISA approach, MS-based SISCAPA assays are less expensive to develop (one antibody instead of a carefully matched pair), easier to multiplex (off-target interactions being less likely with peptides than proteins), and provide absolute structural specificity (by reading the masses of multiple specific peptide fragments). This improved specificity solves a major problem plaguing clinical immunoassays for proteins such as thyroglobulin (18) and has led to the development of the first clinical SISCAPA assay (19). In addition, since the mass spectrometer functions as a “second antibody” that identifies the captured peptides, the anti-peptide antibody used for peptide enrichment need not have perfect specificity. This greatly reduces the cost of affinity reagents, currently a limiting factor in developing ELISA assays for large numbers of protein analytes.

Achieving the hPDQ goal by this approach would require that four resources be generally available.
1) A comprehensive database of proteotypic (protein-unique) peptides for each of the 21,500 human proteins (20), coupled with experimental or computational data identifying the best peptides for MS measurement and the associated optimized MS instrument parameters.

2) At least two synthetic proteotypic peptides, labeled with stable isotope(s) and available in accurately quantitated aliquots, for use as internal measurement standards for quantitation of each protein. Such peptides are readily available today through custom order, at rapidly declining prices.

3) Anti-peptide antibodies specific for the same two proteotypic peptides per target protein, capable of binding the peptides with dissociation constants below 10⁻⁹ M (the level required in theory and practice to enrich low-abundance peptides from complex sample digests). Such antibodies are now being made for a variety of targets, and a robust production pipeline is being developed. Monoclonal antibodies would be preferred, despite their higher development cost, to establish a stable reagent supply, especially for those targets that prove useful as biomarkers.

4) Robust and affordable instrument platforms for quantitative analysis of small (amol to fmol) amounts of tryptic peptides and for sample preparation. Existing triple-quadrupole mass spectrometers (with a current worldwide installed base of more than 6,000 instruments) coupled with nanoflow (∼300–600 nl/min) liquid chromatography systems can meet this requirement and are undergoing rapid improvement with declining cost. MALDI platforms may provide similar capabilities at even higher throughput.

We estimate that an initial pilot phase targeting 2,000 proteins selected for biomarker potential could be completed in two years at a cost of less than $50 million through funding of existing academic and commercial resources in a distributed network. In the following five years, the remaining 18,500 proteins could be targeted for $250 million, making use of anticipated technical improvements, particularly in strategies for generating suitable high-affinity monoclonal antibodies (21) in large numbers at low cost (22).

Although the natural mechanism for providing the hPDQ database (resource 1 above) is an academic collaboration, perhaps modeled on the successful Global Proteome Machine (23) and PeptideAtlas (24) databases, the other resources would benefit from commercial distribution by experienced providers of instruments and reagents. The required instrument platforms (resource 4 above) serve existing markets, and their further development is unlikely to require additional funding for hPDQ applications. However, business economics does not presently justify the expense of developing well-characterized antibodies and peptides for quantitation of proteins that are not already recognized as pivotal in biological research (i.e., precisely those in need of the attention of the research community). Hence a substantial portion of the funding required for such antibody and peptide reagents will need to come from government and philanthropic sources. A significant advantage of such diversified support would be the leverage it would provide in keeping the identities of the selected peptides, their measurement parameters, and the basic measurement protocols in the public domain.

The value of a general protein measurement capability for research is very substantial, but the proposed effort would not solve several larger issues that must await definition of a broader human proteome program.
For example, the hPDQ project does not address the basic process of de novo proteome-wide discovery; the comprehensive exploration of splice forms, post-translational modifications, active fragments of preproteins, or genetic variants (although once known, most of these can be targeted by the methods used here); interactions among proteins or with other molecules; or the spatial arrangement of proteins in organs and tissues. Each of these areas would benefit from the resources proposed in hPDQ, but each will likely require separate, coordinated large-scale efforts that are likely to identify additional sets of biomarkers. Thus, although a complete suite of targeted assays is only a first step toward the complete human proteome, we feel that its fundamental importance for progress in biomarker research and its value as a foundation for protein quantitation justify its consideration as an initial step.

At the beginning of the study of protein diagnostics, investigators at the Behring Institute discovered many of the well-known plasma proteins and made the associated specific antibodies and antibody-based quantitative tests available to the research community worldwide, spurring the initial round of plasma biomarker research. The application of monoclonal antibodies sparked additional discoveries through close coupling of protein “discovery” with simple quantitative monoclonal antibody-based assays: this “shortcut” to clinical measurement allowed investigators to publish more than 1,000 papers referring to the ovarian cancer marker CA125 (measured by ELISA) before the sequence of the protein was finally identified in 2001 (25). The broader proteomics technologies (beginning with the two-dimensional electrophoresis technology that formed the basis of the Human Protein Index Project (26), formulated by two of us almost 30 years ago, and extending to modern shotgun-style MS-based approaches) have radically expanded the universe of observable proteins. However, quantitative specific assay capabilities have not kept pace with this expansion, leading to the current gap between biomarker proteomics and clinical biomarker output. It is now time to address this gap and realize the benefits of a clinically accessible human proteome. Effective translation of basic research into tangible medical benefit requires it.
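As a rough numerical illustration of the stable-isotope dilution arithmetic underlying the SISCAPA-style assays described above, the sketch below computes an endogenous peptide concentration from the light/heavy MS peak-area ratio and a known spike of heavy-labeled internal standard, then converts it to a protein mass concentration. All function names and numbers are invented for illustration, assuming complete tryptic digestion and one target peptide released per protein molecule:

def peptide_fmol_per_ul(light_area, heavy_area, heavy_spike_fmol, sample_volume_ul):
    # Endogenous ("light") peptide amount from the light/heavy peak-area ratio
    # and the known amount of heavy-labeled standard spiked into the digest.
    return (light_area / heavy_area) * heavy_spike_fmol / sample_volume_ul

def protein_ng_per_ml(fmol_per_ul, protein_kda):
    # 1 fmol/ul = 1 nM, and 1 nM of a 1 kDa protein = 1 ng/ml, so ng/ml = nM x kDa.
    # Assumes complete digestion and one target peptide per protein molecule.
    return fmol_per_ul * protein_kda

# Example: 50 fmol heavy standard in a 10 ul plasma digest, light/heavy = 0.8
c = peptide_fmol_per_ul(light_area=8.0e5, heavy_area=1.0e6,
                        heavy_spike_fmol=50.0, sample_volume_ul=10.0)  # 4.0 fmol/ul
print(protein_ng_per_ml(c, protein_kda=50.0))  # 200.0 ng/ml

With these illustrative numbers the assay reads 4 fmol/μl of peptide, i.e. about 200 ng/ml for a 50 kDa protein, comfortably within the ng/ml sensitivity range the hPDQ targets.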

3.
4.
5.
There are an estimated 285 million people with visual impairment worldwide, of whom 39 million are blind. The pathogenesis of many eye diseases remains poorly understood. The human eye is an emerging proteome that may provide key insight into the biological pathways of disease. We review proteomic investigations of the human eye and present a catalogue of 4842 nonredundant proteins identified in human eye tissues and biofluids to date. We highlight the need to identify new biomarkers for eye diseases using proteomics. Recent advances in proteomics now allow the identification of hundreds to thousands of proteins in tissues and fluids, the characterization of various PTMs, and the simultaneous quantification of multiple proteins. To facilitate proteomic studies of the eye, the Human Eye Proteome Project (HEPP) was organized in September 2012. The HEPP is one of the most recent components of the Biology/Disease-driven Human Proteome Project (B/D-HPP), whose overarching goal is to support the broad application of state-of-the-art measurements of proteins and proteomes by life scientists studying the molecular mechanisms of biological processes and human disease. The large repertoire of investigative proteomic tools has great potential to transform vision science and enhance understanding of the physiology and disease processes that affect sight.

6.
Introduction: The mission of the Chromosome-Centric Human Proteome Project (C-HPP) is to map and annotate the entire predicted human protein set (~20,000 proteins) encoded by each chromosome. The initial steps of the project are focused on ‘missing proteins’ (MPs), which lack documented evidence of existence at the protein level. In addition to the remaining 2,579 MPs, we also target annotated proteins with unknown function (uPE1 proteins), alternative splice isoforms, and post-translational modifications. We also consider how to investigate protein functions involved in cis-regulatory phenomena, amplicons, lncRNAs, and smORFs.

Areas covered: We cover the scope, historical background, progress, challenges, and future prospects of the C-HPP. This review also addresses how we can best improve the methodological approaches, select the optimal biological samples, and recommend stringent protocols for the identification and characterization of MPs. A new strategy for functional analysis of annotated proteins with unknown function is also discussed.

Expert commentary: If the project progresses well, reshaping its original goals, current working modules, and teamwork over the proposed extended planning period, a progressively more detailed draft of an accurate chromosome-based proteome map, enriched with functional information, is anticipated.


7.
The international Human Proteome Project (HPP), a logical continuation of the Human Genome Project, was launched on 23 September 2010 in Sydney, Australia. In accordance with the gene-centric approach, the goals of the HPP are to prepare an inventory of all human proteins and decipher the network of cellular protein interactions. The greater complexity of the proteome in comparison to the genome gives rise to three bottlenecks in the implementation of the HPP. The main bottleneck is the insufficient sensitivity of proteomic technologies, hampering the detection of proteins with low- and ultra-low copy numbers. The second bottleneck is related to poor reproducibility of proteomic methods and the lack of a so-called ‘gold’ standard. The last bottleneck is the dynamic nature of the proteome: its instability over time. The authors here discuss approaches to overcome these bottlenecks in order to improve the success of the HPP.

8.
Hamacher M, Meyer HE. Proteomics 2005, 5(2):334-336
More than 1200 attendees came together at the 3rd HUPO World Congress in Beijing, October 25-27, 2004. The wide range of proteomics areas was on display across the numerous sessions. The HUPO Brain Proteome Project (HUPO BPP) organized an evening session on October 23, presenting the first results of two pilot studies as well as the newest, very positive international developments in this field. The field's rising importance became even more apparent in the plenary presentation of all HUPO initiatives and in the subsequent congress activities.

9.
In the summer of 2013, distinguished global representatives of proteome science gathered to discuss the future visions of the Chromosome-Centric Human Proteome Project (C-HPP) (co-chairs: Y. K. Paik, G. Omenn; hosted by A. Archakov, Institute of Biomedical Chemistry, Russia) in a meeting that was broadcast to the annual Federation of European Biochemical Societies Congress (St. Petersburg, Russia, July 10–11, 2013). Technology breakthroughs presented included a new ultra-sensitive Tribrid mass spectrometer from Thermo and SOMAmers (Slow Off-rate Modified Aptamers; SOMAlogic, USA), a new type of protein capture reagent. Professor Archakov's group introduced the “rectangle” concept of proteome size as the product of proteome width and depth. The discussion on proteome width culminated with the introduction of digital biomarkers: low-copy aberrant proteins that differ from their typical forms by PTMs, alternative splicing, or single amino acid polymorphisms. These aberrant proteoforms, a complement to whole-genome proteomic surveys, were presented as an ultimate goal for the proteomics community.

10.
11.
The pilot phase of the Human Brain Proteome Project, part of the Human Proteome Organisation, has just started. In two pilot studies, 18 different laboratories are analyzing mouse brains at three age stages and human brain autopsy versus biopsy material, respectively. The overall aim is to assess the portfolio of available techniques and to elaborate common standards. As a first step, it was decided to use the common bioinformatics platform ProteinScape, which was introduced to the participating groups in a two-day course in Bochum, Germany.

12.
This report describes the 17th Chromosome-Centric Human Proteome Project symposium, held in Tehran, Iran, April 27 and 28, 2017. A brief summary of the symposium's talks is presented, covering new technical and computational approaches for the identification of novel proteins from non-coding genomic regions, the physicochemical and biological causes of missing proteins, and the close interactions between the Chromosome- and Biology/Disease-driven Human Proteome Projects. A synopsis is also given of the decisions made on prospective programs to maintain collaborative work, share resources and information, and establish a newly organized working group, the task force for missing protein analysis.

13.
Introduction: The technological and scientific progress achieved in the Human Proteome Project (HPP) has provided the scientific community with a new set of experimental and bioinformatic methods in the challenging field of shotgun and SRM/MRM-based proteomics. The requirements for a protein to be considered experimentally validated are now well established, and information about the human proteome is available in the neXtProt database, while targeted proteomic assays are stored in SRMAtlas. However, the study of the missing proteins remains an outstanding issue.

Areas covered: This review focuses on the implementation of proteogenomic methods designed to improve the detection and validation of missing proteins. The evolution of methodological strategies based on the combination of different omics technologies and the use of large publicly available datasets is illustrated, taking the Chromosome 16 Consortium as a reference.

Expert commentary: Proteogenomics and the other data-analysis strategies implemented within the C-HPP initiative could serve as guidance for completing the catalog of human proteins in the near future. In the coming years, we will probably also witness their use in the B/D-HPP initiative, taking a step forward in understanding the roles of these proteins in human biology and disease.


14.
Heeren RM. Proteomics 2005, 5(17):4316-4326
Imaging the proteome is a term used in many different contexts. The term implies that the entire cohort of proteins and their modifications is visualized; unfortunately, this is not the case. In this mini-review, a concise overview is provided of the different imaging technologies currently used to investigate the structure, function, dynamics, and organization of proteins. These techniques were selected for review based on the unique insights they provide into subsets of the proteome, and each is illustrated with practical examples of its merits. Mass spectrometry-based imaging technologies are playing a key role in proteome research and are reviewed in more detail: they hold the promise of detailed molecular insight into the spatial organization of living systems.

15.
Omenn GS. Proteomics 2004, 4(5):1235-1240
A comprehensive, systematic characterization of circulating proteins in health and disease will greatly facilitate the development of biomarkers for the prevention, diagnosis, and therapy of cancers and other diseases. The pilot phase of the Human Proteome Organization Plasma Proteome Project aims to (1) compare the advantages and limitations of many technology platforms; (2) contrast reference specimens of human plasma (ethylenediaminetetraacetic acid-, heparin-, and citrate-anticoagulated) and serum, in terms of the numbers of proteins identified and any interferences with the various technology platforms; and (3) create a global knowledge base/data repository.

16.
The Human Proteome Organisation Brain Proteome Project aims at coordinating neuroproteomic activities with respect to the analysis of development, aging, and evolution in humans and mice, and at analysing normal aging processes as well as neurodegenerative diseases. Our group participated in the mouse pilot study of this project using two different 2-DE systems, to determine the optimal conditions for comprehensive gel-based differential proteome analysis. Besides assessing the best methodological conditions, we addressed the question of how many biological replicate analyses have to be performed to obtain reliable, statistically validated results. In total, 420 differences were detected across all analyses. Both 2-DE methods were found to be suitable for comprehensive differential proteome analysis. Nevertheless, each method showed substantial advantages and disadvantages, so modification of both systems is essential. From our results we conclude that, for optimal quantitative differential gel-based brain proteome analyses in the future, sample preparation has to be slightly changed, the resolution of both the first and the second dimension has to be improved, the number of experiments has to be increased, and the 2D-DIGE system should be applied.
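The replicate-number question raised above can be made concrete with a standard power calculation. The sketch below is illustrative only: the coefficient of variation, fold-change, and significance settings are assumptions rather than values from the study, and the log-scale standard deviation is approximated by the CV:

import numpy as np
from statsmodels.stats.power import TTestIndPower

cv = 0.30                                # assumed biological coefficient of variation
fold_change = 1.5                        # smallest spot-volume change to detect
effect_size = np.log(fold_change) / cv   # approximate Cohen's d on the log scale

n = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05,
                                power=0.8, alternative='two-sided')
print(f"~{int(np.ceil(n))} biological replicates per group")  # roughly 10 here

Under these assumptions, roughly ten biological replicates per group would be needed to detect a 1.5-fold change, which illustrates why the study concludes that the number of experiments has to be increased.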

17.
The Human Proteome Organisation (HUPO) Brain Proteome Project (BPP) pilot studies have generated over 200 2-D gels from eight participating laboratories. This data set includes 67 single-channel and 60 DIGE gels comparing 30 whole frozen C57BL/6 female mouse brains, ten each at embryonic day 16, postnatal day 7 (juvenile), and postnatal day 54-56 (adult); and ten single-channel and three DIGE gels comparing human temporal-lobe tissue from epilepsy surgery with corresponding post-mortem specimens. The samples were generated centrally and distributed to the participating laboratories, but otherwise no restrictions were placed on sample preparation, running, and staining protocols, nor on the 2-D gel analysis packages used. Spots were characterised by MS and the annotated gel images published on a ProteinScape web server. In order to examine the resulting differential expression and protein identifications, we have reprocessed a large subset of the gels using the newly developed RAIN (Robust Automated Image Normalisation) 2-D gel matching algorithm. Traditional approaches use a symbolic representation of spots at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in spot modelling and matching. With RAIN, image intensity distributions, rather than selected features, are used, and smooth geometric deformation and expression bias are modelled using multi-resolution image registration and bias-field correction. The method includes a new approach, volume-invariant warping, which ensures that the volume of protein expression is preserved under transformation. An image-based statistical expression analysis phase is then proposed, in which small, individually insignificant expression changes in one gel pair can be revealed when reinforced by the same consistent changes in others. Results of the proposed method applied to the HUPO BPP data show significant intra-laboratory improvements in matching accuracy over a previous state-of-the-art technique, Multi-resolution Image Registration (MIR), and the commercial Progenesis PG240 package.
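The volume-invariant warping idea can be sketched compactly: when a gel image is resampled through a smooth deformation, multiplying the pulled-back intensities by the local Jacobian determinant of the mapping keeps the integrated spot volume unchanged, by the change-of-variables theorem. The code below is a minimal sketch of that principle, not the published RAIN implementation; the function and variable names are ours, and the deformation is assumed smooth and orientation-preserving:

import numpy as np
from scipy import ndimage

def volume_invariant_warp(image, disp_y, disp_x):
    # Resample the gel image through the mapping x -> x + u(x), then rescale
    # by the Jacobian determinant so total integrated intensity is conserved.
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    warped = ndimage.map_coordinates(image, [yy + disp_y, xx + disp_x],
                                     order=1, mode='nearest')
    dyy = 1.0 + np.gradient(disp_y, axis=0)   # d(y + u_y)/dy
    dyx = np.gradient(disp_y, axis=1)         # d(y + u_y)/dx
    dxy = np.gradient(disp_x, axis=0)         # d(x + u_x)/dy
    dxx = 1.0 + np.gradient(disp_x, axis=1)   # d(x + u_x)/dx
    jacobian = dyy * dxx - dyx * dxy          # assumed > 0 (orientation-preserving)
    return warped * jacobian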

18.
In 2001, the German Federal Ministry of Education and Research (BMBF) initiated the National Genome Research Network (NGFN; www.ngfn.de) as a nation-wide multidisciplinary networking platform aimed at the analysis of common human diseases and aging. Within the NGFN, the Human Brain Proteome Project (HBPP; www.smp-proteomics.de) focuses on the analysis of the human brain in health and disease. The concept is based on two consecutive steps: (i) elaborating and establishing the necessary technology platforms, and (ii) applying the established technologies to research on Alzheimer's disease and Parkinson's disease. In the first funding period, HBPP1, running from 2001 to 2004, the necessary technologies were established and optimized. In HBPP2, which started in 2004 and will end in May 2008, the developed technologies are being used for large-scale experiments, offering new links for disease-related research and therapies. The following overview describes the structure, aims, and outcomes of this unique German Brain Proteome Project.

19.
20.
