Similar Documents
20 similar documents found (search time: 31 ms)
1.
The growing availability of software tools has increased the speed of generating LCA studies. Databases and visual tools for constructing material balance modules greatly facilitate the process of analyzing the environmental aspects of product systems over their life cycle. A robust software tool, containing a large LCI dataset and functions for performing LCIA and sensitivity analysis, will allow companies and LCA practitioners to conduct systems analyses efficiently and reliably. This paper discusses how the GaBi 3 software tool can be used to perform LCA and Life Cycle Engineering (LCE), a methodology that combines life cycle economic, environmental, and technology assessment. The paper highlights important attributes of LCA software tools, including high-quality, well-documented data, transparency in modeling, and data analysis functionality. An example of a regional power grid mix model is used to illustrate the versatility of GaBi 3.
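To make the grid-mix example concrete, here is a minimal sketch of what such a model computes: the CO2 intensity of a regional grid is the generation-share-weighted average of per-source life cycle emission factors. All shares and factors below are hypothetical placeholders, not GaBi data.

```python
# Illustrative grid-mix LCI aggregation; numbers are made-up placeholders.

# Generation shares of a hypothetical regional grid (must sum to 1.0)
grid_mix = {"coal": 0.45, "natural_gas": 0.25, "nuclear": 0.20, "hydro": 0.10}

# Hypothetical cradle-to-gate emission factors, kg CO2-eq per kWh delivered
emission_factors = {"coal": 1.00, "natural_gas": 0.45, "nuclear": 0.012, "hydro": 0.004}

def grid_co2_intensity(mix, factors):
    """Share-weighted average emission factor of the mix (kg CO2-eq/kWh)."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(share * factors[source] for source, share in mix.items())

print(f"{grid_co2_intensity(grid_mix, emission_factors):.3f} kg CO2-eq/kWh")
```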

2.
Massively parallel signature sequencing (MPSS) is one of the newest tools available for conducting in-depth expression profiling. MPSS is an open-ended platform that analyses the expression level of virtually all genes in a sample by counting the number of individual mRNA molecules produced from each gene. There is no requirement that genes be identified and characterised prior to conducting an experiment. MPSS routinely achieves sensitivity at the level of a few molecules of mRNA per cell, and the datasets are in a digital format that simplifies data management and analysis. Therefore, of the various microarray and non-microarray technologies currently available, MPSS provides many advantages for generating the type of complete datasets that will help to facilitate hypothesis-driven experiments in the era of digital biology.
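Because MPSS output is digital molecule counts per gene, a standard first step before comparing samples is depth normalization. The sketch below shows the idea with made-up counts scaled to counts per million (CPM); it is an illustration of handling count-style data, not the MPSS pipeline itself.

```python
import numpy as np

# Made-up signature counts: rows are genes, columns are samples.
counts = np.array([
    [1200, 950],   # gene A
    [  15,  40],   # gene B
    [   0,   3],   # gene C (a few molecules per cell is still detectable)
])

library_sizes = counts.sum(axis=0)    # total signatures sequenced per sample
cpm = counts / library_sizes * 1e6    # counts per million, comparable across depths
print(cpm.round(1))
```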

3.
New methods for performing quantitative proteome analyses based on differential labeling protocols or label-free techniques are reported in the literature on an almost monthly basis. In parallel, a correspondingly vast number of software tools for the analysis of quantitative proteomics data have also been described in the literature and produced by private companies. In this article we focus on reviewing some of the most popular techniques in the field and present a critical appraisal of several software packages available to process and analyze the data produced. We also describe the importance of community standards to support the wide range of software, which may assist researchers in the analysis of data using different platforms and protocols. It is intended that this review will serve bench scientists both as a useful reference and as a guide to the selection and use of different pipelines to perform quantitative proteomics data analysis. We have produced a web-based tool (http://www.proteosuite.org/?q=other_resources) to help researchers find appropriate software for their local instrumentation, available file formats, and quantitative methodology.

4.
QGENE: software for marker-based genomic analysis and breeding (cited 15 times: 0 self-citations, 15 by others)
Efficient use of DNA markers for genomic research and crop improvement will depend as much on computational tools as on laboratory technology. The large size and multidimensional character of marker datasets invite novel approaches to data visualization. Described here is a software application embodying two design principles: conventional reduction of raw genetic marker data to numerical summary statistics, and fast, interactive graphical display of both data and statistics. The program performs various analyses for mapping quantitative-trait loci in real or simulated datasets and other analyses in aid of phenotypic and marker-assisted breeding. Functionality is described and some output is illustrated.
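The "reduction of raw marker data to summary statistics" can be illustrated with a single-marker QTL scan: for each marker, compare the phenotype distributions of the two genotype classes and keep one test statistic per marker. This is a generic sketch on simulated data, not QGENE's actual algorithm or input format.

```python
import numpy as np
from scipy import stats

# Simulated biallelic markers (0/1) and a phenotype with one true QTL.
rng = np.random.default_rng(0)
n = 200
genotypes = rng.integers(0, 2, size=(n, 50))               # 200 lines x 50 markers
phenotype = 2.0 * genotypes[:, 10] + rng.normal(0, 1, n)   # QTL at marker 10

scores = []
for m in range(genotypes.shape[1]):
    g0 = phenotype[genotypes[:, m] == 0]
    g1 = phenotype[genotypes[:, m] == 1]
    t, p = stats.ttest_ind(g0, g1)
    scores.append(-np.log10(p))            # one LOD-like score per marker

print("best marker:", int(np.argmax(scores)))   # should recover marker 10
```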

5.
With the advances of genome-wide sequencing technologies and bioinformatics approaches, a large number of datasets of normal and malignant erythropoiesis have been generated and made public to researchers around the world. Collection and integration of these datasets greatly facilitate basic research and the clinical diagnosis and treatment of blood disorders. Here we provide a brief introduction to the most popular omics data resources for normal and malignant hematopoiesis, including some integrated web tools, to help users become better equipped to perform common analyses. We hope this review will raise awareness and facilitate the use of public database resources in hematology research.

6.
Advances in high-throughput sequencing (HTS) have fostered rapid developments in the field of microbiome research, and massive microbiome datasets are now being generated. However, the diversity of software tools and the complexity of analysis pipelines make it difficult to access this field. Here, we systematically summarize the advantages and limitations of microbiome methods. Then, we recommend specific pipelines for amplicon and metagenomic analyses, and describe commonly used software and databases, to help researchers select the appropriate tools. Furthermore, we introduce statistical and visualization methods suitable for microbiome analysis, including alpha- and beta-diversity, taxonomic composition, difference comparisons, correlation, networks, machine learning, evolution, source tracing, and common visualization styles, to help researchers make informed choices. Finally, a step-by-step reproducible analysis guide is introduced. We hope this review will allow researchers to carry out data analysis more effectively and to quickly select the appropriate tools in order to efficiently mine the biological significance behind the data.
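Two of the statistics named above are simple enough to show directly: Shannon alpha diversity (within-sample) and Bray-Curtis beta diversity (between-sample). The sketch below uses a made-up taxon count table; real pipelines add rarefaction or other normalization first.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

# Made-up OTU/ASV count table: rows are samples, columns are taxa.
otu_table = np.array([
    [120, 30,  0, 50],   # sample 1
    [ 10, 80, 60, 40],   # sample 2
])

def shannon(counts):
    """Shannon alpha diversity H = -sum(p * ln p) over observed taxa."""
    p = counts / counts.sum()
    p = p[p > 0]                 # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))

print("alpha (Shannon):", [round(shannon(s), 3) for s in otu_table])
print("beta (Bray-Curtis):", round(braycurtis(otu_table[0], otu_table[1]), 3))
```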

7.
The FaceBase Consortium consists of ten interlinked research and technology projects whose goal is to generate craniofacial research data and technology for use by the research community through a central data management and integrated bioinformatics hub. Funded by the National Institute of Dental and Craniofacial Research (NIDCR) and currently focused on studying the development of the middle region of the face, the Consortium will produce comprehensive datasets of global gene expression patterns, regulatory elements, and sequencing; generate anatomical and molecular atlases; provide human normative facial data and other phenotypes; conduct follow-up studies of a completed genome-wide association study; generate independent data on the genetics of craniofacial development; build repositories of animal models and of human samples and data for community access and analysis; and develop software tools and animal models for analyzing, functionally testing, and integrating these data. The FaceBase website (http://www.facebase.org) will serve as a web home for these efforts, providing interactive tools for exploring these datasets, together with discussion forums and other services to support and foster collaboration within the craniofacial research community.

8.

Background  

As a high-throughput technology that offers rapid quantification of multidimensional characteristics for millions of cells, flow cytometry (FCM) is widely used in health research, medical diagnosis and treatment, and vaccine development. Nevertheless, there is increasing concern about the lack of appropriate software tools that provide an automated analysis platform able to keep pace with the high-throughput data-generation platform. Currently, FCM data analysis relies to a large extent on the manual selection of sequential regions in 2-D graphical projections to extract the cell populations of interest. This is a time-consuming task that ignores the high dimensionality of FCM data.
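The automated alternative to manual 2-D gating is typically model-based clustering of events in the full marker space. Below is a minimal sketch of that idea on simulated three-marker data using a Gaussian mixture model; it illustrates the approach generically and is not any particular FCM package's gating algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulate two cell populations as clusters in 3-marker space.
rng = np.random.default_rng(1)
pop_a = rng.normal([2, 5, 1], 0.5, size=(500, 3))
pop_b = rng.normal([6, 1, 4], 0.5, size=(300, 3))
events = np.vstack([pop_a, pop_b])

# Fit a mixture model to all dimensions at once -- no hand-drawn regions.
gmm = GaussianMixture(n_components=2, random_state=0).fit(events)
labels = gmm.predict(events)
print("population sizes:", np.bincount(labels))   # roughly 500 and 300 events
```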

9.
10.
11.
The global analysis of proteins is now feasible due to improvements in techniques such as two-dimensional gel electrophoresis (2-DE), mass spectrometry, and yeast two-hybrid systems, and the development of bioinformatics applications. These experiments form the basis of proteomics and present significant challenges in data analysis, storage, and querying. We argue that a standard format for proteome data is required to enable the storage, exchange, and subsequent re-analysis of large datasets. We describe the criteria that must be met for the development of a standard for proteomics. We have developed a model to represent data from 2-DE experiments, including difference gel electrophoresis, along with image analysis and statistical analysis across multiple gels. This part of proteomics analysis is not represented in current proposals for proteomics standards. We are working with the Proteomics Standards Initiative to develop a model encompassing biological sample origin, experimental protocols, a number of separation techniques, and mass spectrometry. The standard format will facilitate the development of central data repositories, enabling results to be verified or re-analysed, and the correlation of results produced by different research groups using a variety of laboratory techniques.
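To show what "a model to represent data from 2-DE experiments" has to capture, here is a hypothetical minimal schema: sample origin, per-gel protocol, and spots whose statistics can later be linked across gels. This is an illustrative data structure, not the PSI model itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Spot:
    spot_id: str
    x_mm: float             # position on the gel image
    y_mm: float
    volume: float           # integrated stain intensity
    protein_acc: str = ""   # accession once identified; empty if unknown

@dataclass
class Gel:
    gel_id: str
    sample_origin: str      # e.g. tissue, strain, or treatment
    protocol: str           # reference to separation/staining protocol
    spots: List[Spot] = field(default_factory=list)

@dataclass
class Experiment:
    name: str
    gels: List[Gel] = field(default_factory=list)  # statistics span these gels
```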

12.
13.
Bioinformatics software for biologists in the genomics era (cited 1 time: 0 self-citations, 1 by others)

14.
While the Minimum Information About a Microarray Experiment (MIAME) standard has helped to increase the value of the microarray data deposited into public databases like ArrayExpress and Gene Expression Omnibus (GEO), limited means have been available to assess the quality of these data or to identify the procedures used to normalize and transform raw data. The EMERALD FP6 Coordination Action was designed to deliver approaches to assess and enhance the overall quality of microarray data and to disseminate these approaches to the microarray community through an extensive series of workshops, tutorials, and symposia. Tools were developed for assessing data quality and used to demonstrate how the removal of poor-quality data could improve the power of statistical analyses and facilitate the joint analysis of multiple microarray data sets. These quality metrics tools have been disseminated through publications and through the software package arrayQualityMetrics. Within the framework provided by the Ontology for Biomedical Investigations, an ontology was developed to describe data transformations, and a software ontology was developed for gene expression analysis software. In addition, the consortium has advocated the development and use of external reference standards in microarray hybridizations and created the Molecular Methods (MolMeth) database, which provides a central source for methods and protocols focusing on microarray-based technologies.
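One representative quality metric of the kind such tools report is between-array distance: an array whose mean distance to all other arrays is unusually large is flagged as a candidate outlier. The sketch below implements that idea generically in Python on simulated data; it is not the arrayQualityMetrics (R) implementation, and the cutoff rule is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
arrays = rng.normal(0, 1, size=(8, 1000))   # 8 arrays x 1000 probes
arrays[3] += 2.0                            # array 3 is a simulated bad array

# Pairwise mean absolute distance between arrays, then mean distance to others.
d = np.abs(arrays[:, None, :] - arrays[None, :, :]).mean(axis=2)
score = d.sum(axis=1) / (len(arrays) - 1)

# Flag scores far above the median (median + 3 * median absolute deviation).
med = np.median(score)
cutoff = med + 3 * np.median(np.abs(score - med))
print("flagged arrays:", np.where(score > cutoff)[0])   # expect [3]
```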

15.
DNA arrays and chips are powerful new tools for gene expression profiling. Current arrays contain hundreds or thousands of probes, and large-scale sequencing and screening projects will likely lead to the creation of global genomic arrays. DNA arrays and chips will be key to understanding how genes respond to specific changes of environment and will also greatly assist drug discovery and molecular diagnostics. To facilitate widespread realization of the quantitative potential of this approach, we have designed procedures and software that facilitate the analysis of autoradiography films with accuracy comparable to phosphorimaging devices. Algorithms designed for the analysis of DNA array autoradiographs incorporate 3-D peak fitting of features on films and estimation of local backgrounds. This software has a flexible grid geometry and can be applied to different types of DNA arrays, including custom arrays.
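The core operation described here, fitting a 3-D peak plus a local background to one film feature, can be sketched as a 2-D Gaussian fit over a pixel window. This is a minimal illustration of the technique on simulated data, assuming a constant local background; the actual software adds flexible grids and more robust background models.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak(coords, amp, x0, y0, sigma, bg):
    """2-D Gaussian peak over pixel coordinates plus constant local background."""
    x, y = coords
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + bg

# Simulated 15x15 pixel window around one spot on the film.
y, x = np.mgrid[0:15, 0:15]
rng = np.random.default_rng(3)
truth = peak((x, y), amp=100.0, x0=7.2, y0=6.8, sigma=2.0, bg=10.0)
image = truth + rng.normal(0, 2, truth.shape)

p0 = [image.max(), 7, 7, 2, image.min()]   # rough initial guess
popt, _ = curve_fit(peak, (x.ravel(), y.ravel()), image.ravel(), p0=p0)
amp, x0, y0, sigma, bg = popt
# Integrated signal of a 2-D Gaussian is 2*pi*amp*sigma^2.
print(f"signal volume ~ {2 * np.pi * amp * sigma**2:.0f}, background ~ {bg:.1f}")
```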

16.
DNA microarrays were originally devised and described as a convenient technology for the global analysis of plant gene expression. Over the past decade, their use has expanded enormously to cover all kingdoms of living organisms. At the same time, the scope of applications of microarrays has increased beyond expression analyses, with plant genomics playing a leadership role in the ongoing development of this technology. As the field has matured, the rate-limiting step has moved from the technical process of data generation to data analysis. We currently face major problems in dealing with the accumulating datasets, not simply with respect to how to archive, access, and process the huge amounts of data that have been and are being produced, but also in determining the relative quality of the different datasets. A major recognized concern is the appropriate use of statistical design in microarray experiments, without which the datasets are rendered useless. A vigorous area of current research involves the development of novel statistical tools specifically for microarray experiments. This article describes, in a necessarily selective manner, the types of platforms currently employed in microarray research and provides an overview of recent activities using these platforms in plant biology.

17.
Comprehensive understanding of biological systems requires efficient and systematic assimilation of high-throughput datasets in the context of the existing knowledge base. A major limitation in the field of proteomics is the lack of a software platform that can synthesize a large number of experimental datasets in that context. Here, we describe a software platform, termed PROTEOME-3D, that provides three essential features for the systematic analysis of proteomics data: creation of a scalable, queryable, customized database for proteins identified in the published literature; graphical tools for displaying proteome landscapes and trends from multiple large-scale experiments; and interactive data analysis that facilitates the identification of crucial networks and pathways. Thus, PROTEOME-3D offers a standardized platform for analyzing high-throughput experimental datasets to identify crucial players in co-regulated pathways and cellular processes.
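The "scalable, queryable database" idea can be sketched in a few lines: a table of protein identifications keyed by experiment, queryable for proteins that recur across datasets. This is a hypothetical toy schema for illustration, not PROTEOME-3D's actual design; accessions are placeholders.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE identifications (
    protein_acc TEXT, experiment TEXT, abundance REAL)""")
con.executemany(
    "INSERT INTO identifications VALUES (?, ?, ?)",
    [("P12345", "exp1", 3.2), ("P12345", "exp2", 4.1), ("Q99999", "exp1", 0.7)],
)

# Cross-dataset query: which proteins were identified in more than one experiment?
for row in con.execute("""SELECT protein_acc, COUNT(DISTINCT experiment) AS n
                          FROM identifications
                          GROUP BY protein_acc
                          HAVING COUNT(DISTINCT experiment) > 1"""):
    print(row)   # ('P12345', 2)
```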

18.

Background  

Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, require sophisticated knowledge of mathematics and statistics, as well as programming skills, to implement. Commercially available software can provide a user-friendly interface, but at considerable cost. To make these tools available on an open platform, we developed WebArray, an online microarray data analysis platform that lets bench biologists use them to explore data from single- and dual-color microarray experiments.
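The statistical step such tools wrap can be sketched with plain per-gene t-tests and Benjamini-Hochberg multiple-testing correction on simulated data. Note this is a simplified stand-in: limma's moderated statistics borrow information across genes and are more powerful, and this is not WebArray's code.

```python
import numpy as np
from scipy import stats

# Simulated expression: 1000 genes x 4 arrays per condition.
rng = np.random.default_rng(4)
control = rng.normal(0, 1, size=(1000, 4))
treated = rng.normal(0, 1, size=(1000, 4))
treated[:50] += 2.0                      # first 50 genes truly change

t, p = stats.ttest_ind(treated, control, axis=1)

# Benjamini-Hochberg false discovery rate adjustment.
order = np.argsort(p)
ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
q = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
significant = order[q < 0.05]
print("genes called significant at FDR 0.05:", len(significant))
```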

19.
Microarray technology has become a standard molecular biology tool. Experimental data have been generated on a huge number of organisms, tissue types, treatment conditions, and disease states. The Gene Expression Omnibus (GEO; Barrett et al., 2005), developed by the National Center for Biotechnology Information (NCBI) at the National Institutes of Health, is a repository of nearly 140,000 gene expression experiments. The BioConductor project (Gentleman et al., 2004) is an open-source and open-development software project built in the R statistical programming environment (R Development Core Team, 2005) for the analysis and comprehension of genomic data. The tools contained in the BioConductor project represent many state-of-the-art methods for the analysis of microarray and genomics data. We have developed a software tool that allows access to the wealth of information within GEO directly from BioConductor, eliminating many of the formatting and parsing problems that have made such analyses labor-intensive in the past. The software, called GEOquery, effectively establishes a bridge between GEO and BioConductor. Easy access to GEO data from BioConductor will likely lead to new analyses of GEO data using novel and rigorous statistical and bioinformatic tools. Facilitating analyses and meta-analyses of microarray data will increase the efficiency with which biologically important conclusions can be drawn from published genomic data. Availability: GEOquery is available as part of the BioConductor project.
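GEOquery itself is an R/Bioconductor package; for readers working in Python, a comparable bridge is sketched below, assuming the third-party GEOparse package (installable via pip) and its get_GEO entry point. The accession is an arbitrary public example, and the exact attribute names are assumptions based on GEOparse's documented interface.

```python
import GEOparse  # assumed available: pip install GEOparse

# Download and parse one GEO series (SOFT format) directly by accession.
gse = GEOparse.get_GEO(geo="GSE1563", destdir="/tmp")

# Each sample (GSM) arrives pre-parsed; its expression table is a DataFrame.
for name, gsm in list(gse.gsms.items())[:3]:
    print(name, gsm.metadata.get("title"), gsm.table.shape)
```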

20.
BIAS: Bioinformatics Integrated Application Software (cited 2 times: 0 self-citations, 2 by others)
