Similar Documents
20 similar documents found.
1.
The role of the Human Proteome Organisation Proteomics Standards Initiative (HUPO-PSI) is to produce and release community-accepted reporting requirements, interchange formats and controlled vocabularies for mass spectrometry proteomics and related technologies such as gel electrophoresis, column chromatography and molecular interactions. A number of significant advances were made at this workshop, with the new MS standard, mzML, being finalised prior to release on 1st June 2008, and analysisXML, which will allow protein and peptide identifications and post-translational modifications to be captured, being prepared to enter the review process this summer. The accompanying controlled vocabularies are continuing to evolve and a number of standards papers are now being finalised prior to publication.

2.
3.
In order to understand the relevance of microbial communities to crop productivity, the identification and characterization of the rhizosphere soil microbial community is necessary. Characteristic profiles of the microbial communities are obtained by denaturing gradient gel electrophoresis (DGGE) of polymerase chain reaction (PCR)-amplified 16S rDNA from soil-extracted DNA. These characteristic profiles, commonly called community DNA fingerprints, can be represented in the form of high-dimensional binary vectors. We address the problem of modeling and variable selection in high-dimensional multivariate binary data and present an application of our methodology in the context of a controlled agricultural experiment.
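The binary-vector representation the abstract describes can be sketched as follows. This is a minimal illustration: the sample names, band positions and the presence/absence filter are invented, and the filter is only a stand-in for the paper's statistical variable-selection method.

```python
# Sketch of representing DGGE community fingerprints as binary vectors,
# plus a simple variable (band) selection step. All data are hypothetical.

# Each sample's fingerprint: 1 = band present at a gel position, 0 = absent.
fingerprints = {
    "rhizosphere_A": [1, 0, 1, 1, 0, 1],
    "rhizosphere_B": [1, 1, 1, 0, 0, 1],
    "bulk_soil_C":   [1, 1, 0, 0, 0, 0],
    "bulk_soil_D":   [1, 1, 0, 1, 0, 0],
}

n_bands = len(next(iter(fingerprints.values())))

# Toy selection heuristic: keep bands that are neither absent from every
# sample nor present in all of them (constant columns carry no contrast).
def informative_bands(profiles):
    cols = list(zip(*profiles.values()))
    return [i for i, col in enumerate(cols) if 0 < sum(col) < len(col)]

selected = informative_bands(fingerprints)
print(selected)
```

Here bands 0 and 4 are dropped because they are present in all or absent from all samples; a real analysis would replace the heuristic with a multivariate model over the binary vectors.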

4.
The global analysis of proteins is now feasible due to improvements in techniques such as two-dimensional gel electrophoresis (2-DE), mass spectrometry, yeast two-hybrid systems and the development of bioinformatics applications. The experiments form the basis of proteomics, and present significant challenges in data analysis, storage and querying. We argue that a standard format for proteome data is required to enable the storage, exchange and subsequent re-analysis of large datasets. We describe the criteria that must be met for the development of a standard for proteomics. We have developed a model to represent data from 2-DE experiments, including difference gel electrophoresis along with image analysis and statistical analysis across multiple gels. This part of proteomics analysis is not represented in current proposals for proteomics standards. We are working with the Proteomics Standards Initiative to develop a model encompassing biological sample origin, experimental protocols, a number of separation techniques and mass spectrometry. The standard format will facilitate the development of central repositories of data, enabling results to be verified or re-analysed, and the correlation of results produced by different research groups using a variety of laboratory techniques.

5.
Informatics standards and controlled vocabularies are essential for allowing information technology to help exchange, manage, interpret and compare large data collections. In a rapidly evolving field, the challenge is to work out how best to describe, but not prescribe, the use of these technologies and methods. A Metabolomics Standards Workshop was held by the US National Institutes of Health (NIH) to bring together multiple ongoing standards efforts in metabolomics with the NIH research community. The goals were to discuss metabolomics workflows (methods, technologies and data treatments) and the needs, challenges and potential approaches to developing a Metabolomics Standards Initiative that will help facilitate this rapidly growing field, which has been a focus of the NIH roadmap effort. This report highlights specific aspects of what was presented and discussed at the 1st and 2nd August 2005 Metabolomics Standards Workshop.

6.
7.
The theme of the third annual Spring workshop of the HUPO-PSI was "proteomics and beyond" and its underlying goal was to reach beyond the boundaries of the proteomics community to interact with groups working on the similar issues of developing interchange standards and minimal reporting requirements. Significant developments in many of the HUPO-PSI XML interchange formats, minimal reporting requirements and accompanying controlled vocabularies were reported, with many of these now feeding into the broader efforts of the Functional Genomics Experiment (FuGE) data model and Functional Genomics Ontology (FuGO) ontologies.

8.
The Microarray Gene Expression Data (MGED) society is an international organization established in 1999 to facilitate the sharing of functional genomics and proteomics array data. To this end, the MGED society has been working to establish the relevant data standards. The three main components of the MGED standards, described in more detail later, are: Minimum Information About a Microarray Experiment (MIAME), a document that outlines the minimum information that should be reported about a microarray experiment to enable its unambiguous interpretation and reproduction; MAGE, which consists of three parts: the Microarray Gene Expression Object Model (MAGE-OM), an XML-based document exchange format (MAGE-ML) derived directly from the object model, and the supporting toolkit MAGEstk; and MO, the MGED Ontology, which defines sets of common terms and annotation rules for microarray experiments, enabling unambiguous annotation and efficient queries, data analysis and data exchange without loss of meaning. We discuss here how these standards were established, how they have evolved, and how they are used.

9.
Referring to European history of natural sciences as an example, I discuss the relation between development of standards and the emergence of new epistemic virtues. I distinguish standards relating to scientific argumentation from standards relating to data production. The former are based on truth-seeking epistemic virtues and use criteria of logical coherence and empirical grounding. They are important for the justification of an explanatory hypothesis. Data and metadata standards, on the other hand, concern the data record itself and all steps and actions taken during data production and are based on virtues of objectivity. In the second part I focus on data and metadata standards and argue that, in order to meet the requirements of eScience, the specification of the currently popular minimum information checklists should be complemented to cover four aspects: (i) content standards, which increase reproducibility and operational transparency of data production, (ii) concept standards, which increase the semantic transparency of the terms used in data records, (iii) nomenclatural standards, which provide stable and unambiguous links between the terms used and their underlying definitions or their real referents, and (iv) format standards, which increase compatibility and computer-parsability of data records. I discuss the role of scientific terminology for standardizing data and the need for using semantically standardized and formalized data-reporting languages in the form of controlled vocabularies and ontologies for establishing content standards for data in the life sciences. Finally I comment on the necessity of community participation in the development and application of standards and in making data openly available.

10.
The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving productivity from stored data. Through proper data integration tools and algorithms, researchers may correlate relationships that enable them to make better and faster decisions. Data integration is essential for the present -omics community, because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one important technique often used is XML. XML provides a document markup language that is easy to learn, retrieve, store and transmit, and is semantically richer than HTML. Here, we describe biowarehousing, database federation and controlled vocabularies, highlighting the application of XML to store, migrate and validate -omics data.
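As a minimal illustration of the abstract's point about XML as a storage and validation layer for -omics data: the element names (`omicsRecord`, `sample`, `measurement`) and the cross-reference rule below are hypothetical, not taken from any published schema.

```python
# Toy sketch: an -omics record serialized as XML, then a validation pass
# that checks referential integrity before the record is migrated.
import xml.etree.ElementTree as ET

record = """<omicsRecord>
  <sample id="S1" organism="Zea mays"/>
  <measurement sample="S1" type="transcript" value="12.5"/>
</omicsRecord>"""

root = ET.fromstring(record)

# Validation rule: every <measurement> must reference a declared <sample>.
sample_ids = {s.get("id") for s in root.findall("sample")}
valid = all(m.get("sample") in sample_ids for m in root.findall("measurement"))
print(valid)
```

In practice such structural rules would live in an XML Schema or a dedicated validator rather than ad hoc code, but the parse-then-check pattern is the same.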

11.
Community databases have become crucial to the collection, ordering and retrieval of data gathered on model organisms, as well as to the ways in which these data are interpreted and used across a range of research contexts. This paper analyses the impact of community databases on research practices in model organism biology by focusing on the history and current use of four community databases: FlyBase, Mouse Genome Informatics, WormBase and The Arabidopsis Information Resource. We discuss the standards used by the curators of these databases for what counts as reliable evidence, acceptable terminology, appropriate experimental set-ups and adequate materials (e.g., specimens). On the one hand, these choices are informed by the collaborative research ethos characterising most model organism communities. On the other hand, the deployment of these standards in databases reinforces this ethos and gives it concrete and precise instantiations by shaping the skills, practices, values and background knowledge required of the database users. We conclude that the increasing reliance on community databases as vehicles to circulate data is having a major impact on how researchers conduct and communicate their research, which affects how they understand the biology of model organisms and its relation to the biology of other species.

12.
The Human Proteome Organization's Proteomics Standards Initiative (PSI) promotes the development of exchange standards to improve data integration and interoperability. PSI specifies the suitable level of detail required when reporting a proteomics experiment (via the Minimum Information About a Proteomics Experiment), and provides extensible markup language (XML) exchange formats and dedicated controlled vocabularies (CVs) that must be combined to generate a standard-compliant document. The framework presented here tackles the issue of checking that experimental data reported using a specific format, CVs and public bio-ontologies (e.g. Gene Ontology, NCBI taxonomy) are compliant with the Minimum Information About a Proteomics Experiment recommendations. The semantic validator not only checks the XML syntax but it also enforces rules regarding the use of an ontology class or CV terms by checking that the terms exist in the resource and that they are used in the correct location of a document. Moreover, this framework is extremely fast, even on sizable data files, and flexible, as it can be adapted to any standard by customizing the parameters it requires: an XML Schema Definition, one or more CVs or ontologies, and a mapping file describing in a formal way how the semantic resources and the format are interrelated. As such, the validator provides a general solution to the common problem in data exchange: how to validate the correct usage of a data standard beyond simple XML Schema Definition validation. The framework source code and its various applications can be found at http://psidev.info/validator.
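The kind of rule such a semantic validator enforces beyond schema validation can be sketched in a few lines. The two PSI-MS accessions below are real CV terms, but the document structure and the in-memory "mapping rules" are simplified stand-ins for the validator's actual XSD, full CV files and formal mapping file.

```python
# Toy semantic validation: check that each cvParam accession (a) exists in
# the controlled vocabulary and (b) is allowed at its location in the
# document. Structure and mapping are invented for illustration.
import xml.etree.ElementTree as ET

# Tiny excerpt standing in for the PSI-MS CV: accession -> preferred name.
cv = {"MS:1000031": "instrument model", "MS:1000041": "charge state"}

# Mapping rules: which parent elements may carry which accessions.
mapping = {"instrument": {"MS:1000031"}, "ion": {"MS:1000041"}}

doc = ET.fromstring(
    '<run><instrument><cvParam accession="MS:1000031"/></instrument>'
    '<ion><cvParam accession="MS:1000041"/></ion></run>'
)

errors = []
for parent in doc:
    allowed = mapping.get(parent.tag, set())
    for param in parent.findall("cvParam"):
        acc = param.get("accession")
        if acc not in cv:
            errors.append(f"unknown term {acc}")
        elif acc not in allowed:
            errors.append(f"{acc} not allowed under <{parent.tag}>")

print(errors)  # an empty list means the document passes both checks
```

Swapping the accession in `<ion>` for `MS:1000031` would pass schema-level validation but fail the mapping check, which is precisely the gap the semantic validator closes.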

13.
Sodium dodecyl sulfate (NaDodSO4)-polyacrylamide gel electrophoresis and gel filtration chromatography of protein-NaDodSO4 complexes are frequently used to characterize collagen-like polypeptide components in mixtures obtained from extracts of basement membranes. However, electrophoresis yields anomalously high apparent molecular weights for collagenous polypeptides when typical globular proteins are used as molecular weight standards, and the use of gel filtration chromatography for this purpose was suspect because Nozaki et al. [Nozaki, Y., Schechter, N. M., Reynolds, J. A., & Tanford, C. (1976) Biochemistry 15, 3884-3890] found that asymmetric particles, including NaDodSO4-protein complexes, coeluted with native globular proteins of lower Stokes radius when Sepharose 4B was used. To understand these effects and to improve the characterization of collagenous polypeptides, we investigated the secondary structure of NaDodSO4-collagen complexes with the use of circular dichroism, measured the NaDodSO4 content, studied the dependence of electrophoretic mobility on gel concentration, and extended work on gel filtration by use of a more porous gel, Sepharose CL-4B. We found that the anomalous behavior of collagen chains on NaDodSO4-polyacrylamide gel electrophoresis is due in large part to the treatment of data, and that the method can be used to determine rather accurate values for the number of residues per polypeptide chain. Our gel filtration results indicated that reliable molecular weights can be obtained when Sepharose CL-4B is used. These methods can be applied equally well to collagenous and noncollagenous polypeptides.

14.
Since its conception in April 2002, the Human Proteome Organisation Proteomics Standards Initiative has contributed to the development of community standards for proteomics in a collaborative and very dynamic manner, resulting in the publication and increasing adoption of a number of interchange formats and controlled vocabularies. Repositories supporting these formats are being established or are already operational. In parallel with this, minimum reporting requirements have been developed and are now maturing to the point where they have been submitted for journal publication after prolonged exposure to community input via the PSI website.

15.

Background  

Large-scale genomic studies often identify large gene lists, for example, the genes sharing the same expression patterns. The interpretation of these gene lists is generally achieved by extracting concepts overrepresented in the gene lists. This analysis often depends on manual annotation of genes based on controlled vocabularies, in particular, Gene Ontology (GO). However, the annotation of genes is a labor-intensive process, and the vocabularies are generally incomplete, leaving some important biological domains inadequately covered.
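The overrepresentation analysis mentioned above is commonly carried out as a hypergeometric test: given N annotated genes of which K carry a GO term, how surprising is it that a list of n genes contains k or more term-bearing genes? A minimal sketch, with all gene counts invented for illustration:

```python
# Hypergeometric overrepresentation test for a GO term in a gene list.
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    total = comb(N, n)
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / total

# 10,000 annotated genes, 100 carrying the term; a 50-gene list contains 5.
# The expected count is only 50 * 100 / 10000 = 0.5, so 5 is a strong excess.
p = hypergeom_pvalue(10_000, 100, 50, 5)
print(f"p = {p:.2e}")
```

Real tools additionally correct for testing many GO terms at once (e.g. Bonferroni or false discovery rate control), which this sketch omits.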

16.

Background  

With the vast amounts of biomedical data being generated by high-throughput analysis methods, controlled vocabularies and ontologies are becoming increasingly important to annotate units of information for ease of search and retrieval. Each scientific community tends to create its own locally available ontology. The interfaces to query these ontologies tend to vary from group to group. We saw the need for a centralized location to perform controlled vocabulary queries that would offer both a lightweight web-accessible user interface as well as a consistent, unified SOAP interface for automated queries.

17.
Lars Vogt, Zoomorphology (2009) 128(3): 201-217
Due to a lack of common data standards, the communicability and comparability of biological data across various levels of organization and taxonomic groups are continuously decreasing. However, the interdependence between molecular and higher levels of organization is of growing interest and calls for co-operation between biologists from different methodological and theoretical backgrounds. A general data standard in biology would greatly facilitate such co-operation. This article examines the role that defined and formalized vocabularies (i.e., ontologies) could have in developing such a data standard. I suggest basic criteria for developing data standards on the grounds of distinguishing content, concept, nomenclatural and format standards, and discuss the role of databases and their use of bio-ontologies in current activities for data standardization in biology. General principles of ontology development are introduced, including foundational ontology properties (e.g. class-subclass, parthood) and how concepts are defined. After addressing problems that are specific to morphological data, the notion of a general structure concept for morphology is introduced, along with why it is required for developing a morphological ontology. The necessity for a general morphological ontology to be taxon-independent and free of homology assumptions is discussed, as is how it can solve the problems of morphology. The article concludes with an outlook on how the use of ontologies will likely establish some sort of general data standard in biology, and why the development of a set of commonly used foundational ontology properties and the use of globally unique identifiers for all classes defined in ontologies are crucial for its success.

18.
Immunoblotting is a commonly used technique for the immunodetection of specific proteins which have been fractionated by polyacrylamide gel electrophoresis. We describe here a simple procedure for the double staining of immunoblots, first to detect the immunoreactive component(s) by histochemistry using enzyme-conjugated secondary antibodies, and second to visualize the general protein electrophoretogram using India ink. This procedure permits the direct comparison of electrophoretic mobilities between the immunoreactive protein(s) and the total protein population as well as protein standards of known Mr. The experimental advantage of the procedure is that no additional manipulation of the protein samples or the standards is necessary prior to electrophoretic fractionation. In this report, detection of the vitamin D-dependent calcium-binding protein, calbindin-D28K, is used to illustrate the application of the procedure.

19.
20.
The use of internal standards during both DNA extraction and the PCR-DGGE procedure makes it possible to relate the relative abundance of individual species back to the original sample, thereby facilitating relative comparative analysis of diversity. Internal standards were used throughout DNA extraction and PCR-DGGE to compensate for experimental variability. Such variability decreases reproducibility among replicate samples and compromises comparisons between samples, since experimental errors cannot be differentiated from actual changes in community abundance and structure. The use of internal standards during DNA extraction and PCR-DGGE is suitable for ecological and ecotoxicological experiments with microbial communities, where relative changes in community abundance and structure are studied. We have developed a protocol, Internal Standards in Molecular Analysis of Diversity (ISMAD), that is simple to use, inexpensive and rapid to perform, and does not require additional samples to be processed. The internal standard for DNA extraction (ExtrIS) is a fluorescent 510-basepair PCR product which is added to the samples prior to DNA extraction, recovered together with the extracted DNA, and analysed by fluorescence spectrophotometry. The use of ExtrIS during isolation of sample DNA significantly reduced variation among replicate samples. The PCR internal standard (PCR-IS) originates from the Drosophila melanogaster genome and is a 140-basepair PCR product, which is amplified by non-competitive primers in the same PCR reaction tubes as the target DNA and analysed together with the target PCR product on the same DGGE gel. The use of PCR-IS during PCR significantly reduced variation among replicate samples, both when assessing total PCR product and when comparing bands representing species on a DGGE gel.
The entire ISMAD protocol was shown to accurately describe changes in relative abundance in an environmental sample using PCR-DGGE. It should, however, be mentioned that despite the use of ISMAD some inherent biases still exist in DNA extraction and PCR-DGGE, and these should be taken into consideration when interpreting the diversity in a sample based on a DGGE gel.
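The normalization idea behind a spiked internal standard such as ExtrIS can be sketched as follows. The fluorescence values are invented for illustration, and the single division is a simplification of the full protocol, which measures standard recovery by fluorescence spectrophotometry.

```python
# Toy internal-standard normalization: dividing each sample's raw signal by
# the fractional recovery of a spiked standard compensates for losses during
# DNA extraction. All numbers are hypothetical.

SPIKED = 100.0  # fluorescence units of internal standard added per sample

# Per sample: (recovered internal standard, raw target signal).
samples = {
    "s1": (80.0, 400.0),  # 80% recovery
    "s2": (50.0, 300.0),  # 50% recovery
}

# Corrected signal = raw * (spiked / recovered); lower recovery scales up more.
normalized = {name: raw * SPIKED / rec for name, (rec, raw) in samples.items()}
print(normalized)
```

After correction the two samples become directly comparable even though they lost different fractions of DNA during extraction, which is what makes relative abundance comparisons across samples meaningful.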


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号