Similar Documents (20 results)
1.

Background

Venn diagrams are commonly used to display comparisons between lists. In biology, for instance, they are widely used to show the differences between gene lists originating from different differential analyses. They thus allow comparisons between experimental conditions or between methods. However, when the number of input lists exceeds four, the diagram becomes difficult to read. Alternative layouts and dynamic display features can improve its usability and readability.

Results

jvenn is a new JavaScript library. It processes lists and produces Venn diagrams. It handles up to six input lists and presents results using classical or Edwards-Venn layouts. User interactions can be controlled and customized. Finally, jvenn can easily be embedded in a web page, allowing dynamic Venn diagrams to be displayed.

Conclusions

jvenn is an open source component for web environments helping scientists to analyze their data. The library package, which comes with full documentation and an example, is freely available at http://bioinfo.genotoul.fr/jvenn.
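The core computation behind any Venn tool is counting the exclusive regions formed by the input lists. jvenn itself is JavaScript, but the idea can be sketched in Python; the function and the example gene lists below are illustrative, not part of jvenn:

```python
def venn_regions(named_sets):
    """Count elements in every exclusive region of a Venn diagram.

    Returns a dict mapping a frozenset of list names to the number of
    elements that belong to exactly those lists and no others.
    """
    names = list(named_sets)
    universe = set().union(*named_sets.values())
    regions = {}
    for element in universe:
        members = frozenset(n for n in names if element in named_sets[n])
        regions[members] = regions.get(members, 0) + 1
    return regions

# Hypothetical gene lists from three experimental conditions.
lists = {
    "cond_A": {"geneA", "geneB", "geneC"},
    "cond_B": {"geneB", "geneC", "geneD"},
    "cond_C": {"geneC", "geneE"},
}
counts = venn_regions(lists)
print(counts[frozenset({"cond_A", "cond_B", "cond_C"})])  # geneC is in all three lists -> 1
```

Each region count is what a Venn layout (classical or Edwards) then places into its corresponding cell.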

2.

Background

A minor but significant fraction of samples subjected to next-generation sequencing methods are either mixed-up or cross-contaminated. These events can lead to false or inconclusive results. We have therefore developed SASI-Seq; a process whereby a set of uniquely barcoded DNA fragments are added to samples destined for sequencing. From the final sequencing data, one can verify that all the reads derive from the original sample(s) and not from contaminants or other samples.

Results

By adding a mixture of three uniquely barcoded amplicons of different sizes, spanning the range of insert sizes one would normally use for Illumina sequencing, at a spike-in level of approximately 0.1%, we demonstrate that these fragments remain intimately associated with the sample. They can be detected following even the tightest size-selection regimes or exome enrichment and can report the occurrence of sample mix-ups and cross-contamination. As a consequence of this work, we have designed a set of 384 eleven-base Illumina barcode sequences that are at least 5 changes apart from each other, allowing single-error correction and very low levels of barcode misallocation due to sequencing error.
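The error-correction property follows from the minimum pairwise distance: with barcodes at least 5 changes apart, a read carrying a single sequencing error is still strictly closer to its true barcode than to any other. A minimal sketch (the 11-base sequences here are invented for illustration; the real 384-barcode set is in the paper's supplementary material):

```python
def hamming(a, b):
    """Number of positions at which two equal-length barcodes differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def assign_read(observed, barcodes, max_mismatches=1):
    """Assign an observed barcode to the unique reference barcode within
    max_mismatches changes, or None if there is no unique match."""
    hits = [bc for bc in barcodes if hamming(observed, bc) <= max_mismatches]
    return hits[0] if len(hits) == 1 else None

# Hypothetical 11-base barcodes, pairwise Hamming distance >= 5, so any
# single error still maps back to exactly one barcode.
barcodes = ["ACGTACGTACG", "TTTTTGGGGGA", "CCCCCAAAAAT"]
read = "ACGTACGTACC"  # one sequencing error in the last base
print(assign_read(read, barcodes))  # -> ACGTACGTACG
```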

Conclusion

SASI-Seq is a simple, inexpensive and flexible tool that enables sample assurance, allows deconvolution of sample mix-ups and reports levels of cross-contamination between samples throughout NGS workflows.

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2164-15-110) contains supplementary material, which is available to authorized users.

3.
4.

Background

Driver mutations are positively selected during the evolution of cancers. The relative frequency of a particular mutation within a gene is typically used as a criterion for identifying a driver mutation. However, driver mutations may occur with relative infrequency at a particular site, but cluster within a region of the gene. When analyzing across different cancers, particular mutation sites or mutations within a particular region of the gene may be of relatively low frequency in some cancers, but still provide selective growth advantage.

Results

This paper presents a method that allows rapid and easy visualization of mutation data sets and identification of potential gene mutation hotspot sites and/or regions. As an example, we identified hotspot regions in the NFE2L2 gene that are potentially functionally relevant in endometrial cancer, but would be missed using other analyses.

Conclusions

HotSpotter is a quick, easy-to-use visualization tool that delivers gene identities with associated mutation locations and frequencies overlaid upon a large cancer mutation reference set. This allows the user to identify potential driver mutations that are less frequent in a cancer or are localized in a hotspot region of relatively infrequent mutations.

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2164-15-1044) contains supplementary material, which is available to authorized users.

5.

Background

Ontology-based enrichment analysis aids in the interpretation and understanding of large-scale biological data. Ontologies are hierarchies of biologically relevant groupings. Using ontology annotations, which link ontology classes to biological entities, enrichment analysis methods assess whether there is a significant over- or under-representation of entities for ontology classes. While many tools exist that run enrichment analysis for protein sets annotated with the Gene Ontology, only a few can be used for small-molecule enrichment analysis.
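The statistical core of such enrichment analyses is typically a hypergeometric test: given a background of N entities of which K carry an annotation, how surprising is it to see at least k annotated entities in a sample of n? A sketch under that standard assumption, with invented numbers (BiNChE's exact weighting schemes differ):

```python
from math import comb

def hypergeom_over_pvalue(N, K, n, k):
    """One-sided p-value for over-representation: the probability of
    drawing at least k annotated entities when sampling n from a
    background of N that contains K annotated entities."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Invented numbers: 1000 background molecules, 50 annotated with some
# ChEBI role, and 6 of the 20 molecules in our set carry that annotation.
p = hypergeom_over_pvalue(N=1000, K=50, n=20, k=6)
print(f"p = {p:.2e}")
```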

Results

We describe BiNChE, an enrichment analysis tool for small molecules based on the ChEBI Ontology. BiNChE displays an interactive graph that can be exported as a high-resolution image or in network formats. The tool provides plain, weighted and fragment analysis based on either the ChEBI Role Ontology or the ChEBI Structural Ontology.

Conclusions

BiNChE aids in the exploration of large sets of small molecules produced within metabolomics or other systems biology research contexts. The open-source tool provides easy and highly interactive web access to enrichment analysis with the ChEBI ontology and is additionally available as a standalone library.

Electronic supplementary material

The online version of this article (doi:10.1186/s12859-015-0486-3) contains supplementary material, which is available to authorized users.

6.

Background

Mass spectrometry analyses of complex protein samples yield large amounts of data and specific expertise is needed for data analysis, in addition to a dedicated computer infrastructure. Furthermore, the identification of proteins and their specific properties require the use of multiple independent bioinformatics tools and several database search algorithms to process the same datasets. In order to facilitate and increase the speed of data analysis, there is a need for an integrated platform that would allow a comprehensive profiling of thousands of peptides and proteins in a single process through the simultaneous exploitation of multiple complementary algorithms.

Results

We have established a new proteomics pipeline designated as APP that fulfills these objectives using a complete series of tools freely available from open sources. APP automates the processing of proteomics tasks such as peptide identification, validation and quantitation from LC-MS/MS data and allows easy integration of many separate proteomics tools. Distributed processing is at the core of APP, allowing the processing of very large datasets using any combination of Windows/Linux physical or virtual computing resources.

Conclusions

APP provides distributed computing nodes that are simple to set up, greatly relieving the need for separate IT competence when handling large datasets. The modular nature of APP allows complex workflows to be managed and distributed, speeding up throughput and setup. Additionally, APP logs execution information on all executed tasks and generated results, simplifying information management and validation.

Electronic supplementary material

The online version of this article (doi:10.1186/s12859-014-0441-8) contains supplementary material, which is available to authorized users.

7.

Background

Epilepsy is a common chronic neurological disorder characterized by recurrent unprovoked seizures. Electroencephalogram (EEG) signals play a critical role in the diagnosis of epilepsy. Multichannel EEGs contain more information than do single-channel EEGs. Automatic detection algorithms for spikes or seizures have traditionally been implemented on single-channel EEG, and algorithms for multichannel EEG are unavailable.

Methodology

This study proposes a physiology-based detection system for epileptic seizures that uses multichannel EEG signals. The proposed technique was tested on two EEG data sets acquired from 18 patients. Both unipolar and bipolar EEG signals were analyzed. We employed sample entropy (SampEn), statistical values, and concepts used in clinical neurophysiology (e.g., phase reversals and potential fields of a bipolar EEG) to extract the features. We further tested the performance of a genetic algorithm cascaded with a support vector machine and post-classification spike matching.
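Sample entropy quantifies signal irregularity as the negative log ratio of length-(m+1) template matches to length-m template matches within a tolerance r. A minimal Python sketch of the standard definition (not the authors' implementation, and far too slow for real multichannel EEG):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal: -ln(A/B), where B counts pairs of
    length-m templates within tolerance and A does the same for length
    m+1. Here r is a fraction of the signal's standard deviation."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd

    def count_pairs(mm):
        templates = [x[i:i + mm] for i in range(n - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < tol:
                    c += 1
        return c

    b, a = count_pairs(m), count_pairs(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)

regular = [1, 2] * 10  # perfectly periodic -> low SampEn
irregular = [0, 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8]
print(sample_entropy(regular) < sample_entropy(irregular))  # True
```

Low SampEn indicates a regular, predictable signal; epileptic spikes and seizures change this regularity, which is why it is useful as a feature.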

Principal Findings

We obtained 86.69% spike detection and 99.77% seizure detection for Data Set I. The detection system was further validated using the model trained by Data Set I on Data Set II. The system again showed high performance, with 91.18% detection of spikes and 99.22% seizure detection.

Conclusion

We report a de novo EEG classification system for seizure and spike detection on multichannel EEG that includes physiology-based knowledge to enhance the performance of this type of system.

8.

Background

The Edinburgh Postnatal Depression Scale (EPDS) is a widely used screening tool for postpartum depression (PPD). Although the reliability and validity of the EPDS in Japanese have been confirmed and the prevalence of PPD has been found to be about the same as in Western countries, the factor structure of the Japanese version of the EPDS has not yet been elucidated.

Methods

690 Japanese mothers completed all items of the EPDS at 1 month postpartum. We randomly divided them into two sample sets: the first (n = 345) was used for exploratory factor analysis and the second (n = 345) for confirmatory factor analysis.
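This split-half design (one random half for exploratory, the other for confirmatory analysis) can be reproduced with a simple seeded partition; a sketch with placeholder participant IDs:

```python
import random

def split_half(sample_ids, seed=0):
    """Randomly split a sample into two equal halves, e.g. one half for
    exploratory and one for confirmatory factor analysis. Seeding makes
    the partition reproducible."""
    rng = random.Random(seed)
    ids = list(sample_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

# 690 placeholder participant IDs, as in the study's sample size.
efa_set, cfa_set = split_half(range(690))
print(len(efa_set), len(cfa_set))  # -> 345 345
```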

Results

The exploratory factor analysis indicated a three-factor model consisting of anxiety, depression and anhedonia. The confirmatory factor analysis suggested that the anxiety and anhedonia factors were present in the EPDS responses of Japanese women at 1 month postpartum; the depression factor varied across the models of acceptable fit.

Conclusions

Our examination of EPDS scores shows that, as already reported in Western countries, “anxiety” and “anhedonia” factors exist for the EPDS among postpartum women in Japan. Cross-cultural research is needed in future work.

9.
10.

Background

The growing field of formalin-fixed paraffin-embedded (FFPE) tissue proteomics holds promise for improving translational research. Direct tissue trypsinization (DT) and protein extraction followed by in solution digestion (ISD) or filter-aided sample preparation (FASP) are the most common workflows for shotgun analysis of FFPE samples, but a critical comparison of the different methods is currently lacking.

Experimental design

DT, FASP and ISD workflows were compared by subjecting three independent technical replicates of each method, applied to FFPE liver tissue, to the same label-free quantitative approach. Data were evaluated in terms of method reproducibility and protein/peptide distribution according to localization, MW, pI and hydrophobicity.

Results

DT showed lower reproducibility, good preservation of high-MW proteins, a general bias towards hydrophilic and acidic proteins, much lower keratin contamination, as well as higher abundance of non-tryptic peptides. Conversely, FASP and ISD proteomes were depleted in high-MW proteins and enriched in hydrophobic and membrane proteins; FASP provided higher identification yields, while ISD exhibited higher reproducibility.

Conclusions

These results highlight that diverse sample preparation strategies provide significantly different proteomic information, and present typical biases that should be taken into account when dealing with FFPE samples. When a sufficient amount of tissue is available, the complementary use of different methods is suggested to increase proteome coverage and depth.

11.

Background

Next-Generation Sequencing (NGS) has emerged as a widely used tool in molecular biology. While time and cost for the sequencing itself are decreasing, the analysis of the massive amounts of data remains challenging. Since multiple algorithmic approaches for the basic data analysis have been developed, there is now an increasing need to efficiently use these tools to obtain results in reasonable time.

Results

We have developed QuickNGS, a new workflow system for laboratories with the need to analyze data from multiple NGS projects at a time. QuickNGS takes advantage of parallel computing resources, a comprehensive back-end database, and a careful selection of previously published algorithmic approaches to build fully automated data analysis workflows. We demonstrate the efficiency of our new software by a comprehensive analysis of 10 RNA-Seq samples which we can finish in only a few minutes of hands-on time. The approach we have taken is suitable to process even much larger numbers of samples and multiple projects at a time.

Conclusion

Our approach considerably reduces the barriers that still limit the usability of the powerful NGS technology and finally decreases the time to be spent before proceeding to further downstream analysis and interpretation of the data.

Electronic supplementary material

The online version of this article (doi:10.1186/s12864-015-1695-x) contains supplementary material, which is available to authorized users.

12.

Background

With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML.

Methods

Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software.

Results

Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities.

Conclusions

This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.  相似文献   

13.
14.

Background

Phylogenetic comparative methods (PCMs) have been applied widely in analyzing data from related species but their fit to data is rarely assessed.

Question

Can one determine whether any particular comparative method is typically more appropriate than others by examining comparative data sets?

Data

I conducted a meta-analysis of 122 phylogenetic data sets found by searching all papers in JEB, Blackwell Synergy and JSTOR published in 2002–2005 for the purpose of assessing the fit of PCMs. The number of species in these data sets ranged from 9 to 117.

Analysis Method

I used the Akaike information criterion to compare PCMs, and then fit PCMs to bivariate data sets through REML analysis. Correlation estimates between two traits and bootstrapped confidence intervals of correlations from each model were also compared.
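AIC-based model comparison reduces to penalizing each model's maximized log-likelihood by its parameter count and comparing the results; Akaike weights then express relative support. A sketch with invented REML log-likelihood values (not the study's actual fits):

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: lower values indicate better fit
    after penalizing model complexity."""
    return 2 * n_params - 2 * log_likelihood

def akaike_weights(aics):
    """Relative support for each model, derived from AIC differences."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical (log-likelihood, n_params) for three PCMs on one data set.
models = {"independent": (-120.3, 2), "contrasts": (-118.9, 2), "OU": (-118.5, 3)}
aics = {name: aic(ll, k) for name, (ll, k) in models.items()}
print(min(aics, key=aics.get))  # -> contrasts
```

Note how the OU model's higher log-likelihood is outweighed by its extra parameter, which is exactly the trade-off AIC formalizes.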

Conclusions

For phylogenies of fewer than one hundred taxa, the independent contrasts method and the independent, non-phylogenetic models provide the best fit. For bivariate analysis, correlations from different PCMs are qualitatively similar, so actual correlations from real data appear robust to the PCM chosen for the analysis. Therefore, researchers might apply the PCM they believe best describes the evolutionary mechanisms underlying their data.

15.

Background

Despite several recent advances in the automated generation of draft metabolic reconstructions, the manual curation of these networks to produce high quality genome-scale metabolic models remains a labour-intensive and challenging task.

Results

We present PathwayBooster, an open-source software tool to support the manual comparison and curation of metabolic models. It combines gene annotations from GenBank files and other sources with information retrieved from the metabolic databases BRENDA and KEGG to produce a set of pathway diagrams and reports summarising the evidence for the presence of a reaction in a given organism’s metabolic network. By comparing multiple sources of evidence within a common framework, PathwayBooster assists the curator in the identification of likely false positive (misannotated enzyme) and false negative (pathway hole) reactions. Reaction evidence may be taken from alternative annotations of the same genome and/or a set of closely related organisms.
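The comparison logic can be reduced to set operations on reaction identifiers: reactions expected in a reference pathway but missing from the model are candidate pathway holes, while reactions supported by only one evidence source are candidate misannotations. A sketch with invented EC numbers and sources (PathwayBooster's actual evidence model is richer):

```python
from collections import Counter

def pathway_holes(model_reactions, pathway_reactions):
    """Reactions expected in a reference pathway but absent from the
    draft model: candidate false negatives (pathway holes)."""
    return pathway_reactions - model_reactions

def weakly_supported(evidence_by_source):
    """Reactions backed by only one annotation source: candidate false
    positives (possible misannotations) meriting manual review."""
    support = Counter(r for reactions in evidence_by_source.values()
                      for r in reactions)
    return {r for r, n in support.items() if n == 1}

# Hypothetical evidence: reaction sets (EC numbers) per annotation source.
evidence = {
    "genbank": {"1.1.1.1", "2.7.1.1", "4.2.1.11"},
    "kegg":    {"1.1.1.1", "2.7.1.1"},
}
model = set().union(*evidence.values())
reference_pathway = {"1.1.1.1", "2.7.1.1", "2.7.1.11"}
print(sorted(pathway_holes(model, reference_pathway)))  # -> ['2.7.1.11']
print(sorted(weakly_supported(evidence)))               # -> ['4.2.1.11']
```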

Conclusions

By integrating and visualising evidence from multiple sources, PathwayBooster reduces the manual effort required in the curation of a metabolic model. The software is available online at http://www.theosysbio.bio.ic.ac.uk/resources/pathwaybooster/.

Electronic supplementary material

The online version of this article (doi:10.1186/s12859-014-0447-2) contains supplementary material, which is available to authorized users.

16.
17.

Background

Predictions of MHC binding affinity are commonly used in immunoinformatics for T cell epitope prediction. There are multiple available methods, some of which provide web access. However, there is currently no convenient way to access the results from multiple methods at the same time or to execute predictions for an entire proteome at once.

Results

We designed a web application that allows integration of multiple epitope prediction methods for any number of proteins in a genome. The tool is a front-end for various freely available methods. Features include visualisation of results from multiple predictors within proteins in one plot, genome-wide analysis and estimates of epitope conservation.

Conclusions

We present a self-contained web application, Epitopemap, for calculating and viewing epitope predictions made with multiple methods. The tool is easy to use and will assist in the computational screening of viral or bacterial genomes.

Electronic supplementary material

The online version of this article (doi:10.1186/s12859-015-0659-0) contains supplementary material, which is available to authorized users.

18.

Background

DAVID is the most popular tool for interpreting the large lists of genes/proteins classically produced in high-throughput experiments. However, the DAVID website becomes difficult to use when analyzing multiple gene lists, as it does not provide an adequate visualization tool to show and compare multiple enrichment results in a concise and informative manner.

Result

We implemented a new R-based graphical tool, BACA (Bubble chArt to Compare Annotations), which uses the DAVID web service for cross-comparing enrichment analysis results derived from multiple large gene lists. BACA is implemented in R and is freely available at the CRAN repository (http://cran.r-project.org/web/packages/BACA/).

Conclusion

The BACA package allows R users to combine multiple annotation charts into one output graph, bypassing the DAVID website.

Electronic supplementary material

The online version of this article (doi:10.1186/s12859-015-0477-4) contains supplementary material, which is available to authorized users.

19.

Purpose

The metaphase karyotype is often used as a diagnostic tool in the setting of early miscarriage; however, this technique has several limitations. We evaluate a new karyotyping technique that uses single nucleotide polymorphism (SNP) microarrays. This technique was compared, in a blinded, prospective fashion, to the traditional metaphase karyotype.

Methods

Patients undergoing dilation and curettage for first trimester miscarriage between February and August 2010 were enrolled. Samples of chorionic villi were equally divided and sent for microarray testing in parallel with routine cytogenetic testing.

Results

Thirty samples were analyzed, with only four discordant results. Discordant results occurred when the entire genome was duplicated or when a balanced rearrangement was present. Cytogenetic karyotyping took an average of 29 days, while microarray-based karyotyping took an average of 12 days.

Conclusions

Molecular karyotyping of products of conception (POC) after missed abortion using SNP microarray analysis allows for the detection of maternal cell contamination and provides rapid results with good concordance to standard cytogenetic analysis.

20.
VennPainter is a program for depicting unique and shared sets among gene lists and generating Venn diagrams, built with the Qt C++ framework. The software produces Classic Venn, Edwards’ Venn and Nested Venn diagrams and allows up to eight sets in graph mode and 31 sets in data-processing mode. In comparison, previous programs produce Classic Venn and Edwards’ Venn diagrams and allow a maximum of six sets. The software incorporates user-friendly features and works on Windows, Linux and Mac OS. Its graphical interface does not require the user to have programming skills. Users can modify diagram content for up to eight datasets thanks to the Scalable Vector Graphics output. VennPainter can provide output in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a more comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)