Found 20 similar documents (search time: 343 ms)
1.
Background
Misclassification has been shown to be highly prevalent in binary responses in both livestock and human populations. Leaving these errors uncorrected before analysis will negatively affect the overall goals of genome-wide association studies (GWAS), including reducing predictive power. A liability threshold model that accounts for misclassification was developed to assess the effects of misdiagnostic errors on GWAS. Four simulated case–control scenarios were generated. Each dataset consisted of 2000 individuals and was analyzed with varying odds ratios of the influential SNPs and misclassification rates of 5% and 10%.
Results
Analyses of binary responses subject to misclassification resulted in underestimation of influential SNPs and failed to estimate the true magnitude and direction of the effects. Once the misclassification algorithm was applied, there was a 12% to 29% increase in accuracy and a substantial reduction in bias. The proposed method was able to capture the majority of the most significant SNPs that were not identified in the analysis of the misclassified data. In fact, in one of the simulation scenarios, 33% of the influential SNPs were not identified using the misclassified data, compared with the analysis using the data without misclassification; using the proposed method, only 13% were not identified. Furthermore, the proposed method was able to identify, with high probability, a large portion of the truly misclassified observations.
Conclusions
The proposed model provides a statistical tool to correct, or at least attenuate, the negative effects of misclassified binary responses in GWAS. The model proved robust across different levels of misclassification probability as well as odds ratios of significant SNPs: SNP effects and the misclassification probability were accurately estimated, and the truly misclassified observations were identified with high probability compared to non-misclassified responses. This study was limited to situations where the misclassification probability was assumed to be the same in cases and controls, which is not always the case in real human disease data. It is therefore of interest to evaluate the performance of the proposed model in that situation, which is the current focus of our research.
2.
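To make entry 1's setup concrete, the following sketch simulates how random misclassification of a binary phenotype attenuates an apparent SNP effect. All parameter values (allele frequency, effect size, flip rate) are illustrative, not taken from the study, and the crude effect measure is not the paper's liability threshold model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# illustrative SNP genotypes (0/1/2 copies of the risk allele, frequency 0.3)
g = rng.binomial(2, 0.3, n)

# true binary phenotype from a logistic liability model, effect 0.5 per allele
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * g)))
y = rng.binomial(1, p)

# observed phenotype: 10% of responses randomly flipped (misclassification)
flip = rng.random(n) < 0.10
y_obs = np.where(flip, 1 - y, y)

def effect(labels, genotypes):
    # crude effect measure: mean genotype in cases minus controls
    return genotypes[labels == 1].mean() - genotypes[labels == 0].mean()

clean = effect(y, g)      # effect estimated from the true labels
noisy = effect(y_obs, g)  # attenuated effect under misclassification
```

With these settings the case–control genotype difference shrinks once labels are flipped, mirroring the underestimation of influential SNPs described above.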
Background
Alternative splicing is a critical process in gene coding that removes introns and joins exons, and splicing branchpoints are indicators of alternative splicing. Wet experiments have identified a great number of human splicing branchpoints, but many branchpoints are still unknown. To guide wet experiments, we develop computational methods to predict human splicing branchpoints.
Results
Considering that an intron may have multiple branchpoints, we formulate branchpoint prediction as a multi-label learning problem and attempt to predict branchpoint sites from intron sequences. First, we investigate a variety of intron sequence-derived features, such as the sparse profile, dinucleotide profile, position weight matrix profile, Markov motif profile and polypyrimidine tract profile. Second, we consider several multi-label learning methods: partial least squares regression, canonical correlation analysis and regularized canonical correlation analysis, and use them as the basic classification engines. Third, we propose two ensemble learning schemes that integrate different features and different classifiers to build ensemble learning systems for branchpoint prediction. One is a genetic algorithm-based weighted average ensemble method; the other is a logistic regression-based ensemble method.
Conclusions
In the computational experiments, the two ensemble learning methods outperform benchmark branchpoint prediction methods and produce high-accuracy results on the benchmark dataset.
3.
Background
Hot spot residues are functional sites in protein interaction interfaces. Identifying hot spot residues experimentally is time-consuming and laborious. To address this issue, many computational methods have been developed to predict hot spot residues, most of them based on structural features, sequence characteristics, and/or other protein features.
Results
This paper proposes an ensemble learning method to predict hot spot residues that uses only sequence features and the relative accessible surface area of amino acid sequences. In this work, a novel feature selection technique was developed, an auto-correlation function combined with a sliding window technique was applied to characterize amino acid residues in protein sequences, and an ensemble classifier with SVM and KNN base classifiers was built to achieve the best classification performance.
Conclusion
The experimental results showed that our model yields the highest F1 score of 0.92 and an MCC value of 0.87 on the ASEdb dataset. Compared with other machine learning methods, our model achieves a substantial improvement in hot spot prediction.
Availability
http://deeplearner.ahu.edu.cn/web/HotspotEL.htm
4.
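The SVM + KNN ensemble classifier described in entry 3 can be sketched with generic off-the-shelf components. The synthetic data and soft-voting combiner below are illustrative stand-ins for the paper's sequence-derived features and tuned ensemble, not its actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# synthetic feature matrix standing in for sequence-derived features
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# soft-voting ensemble of an SVM and a KNN base classifier
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
accuracy = ensemble.score(X_te, y_te)
```

Soft voting averages the base classifiers' class probabilities, so a confident SVM can outvote an uncertain KNN and vice versa.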
Background
A DNA sequence can be viewed as an unknown language with words as its functional units. Given that most sequence alignment algorithms, such as motif discovery algorithms, depend on the quality of background information about sequences, it is necessary to develop an ab initio algorithm for extracting the “words” based only on the DNA sequences.
Methods
We considered non-uniform distribution and integrity to be two important features of a word, and based on these we developed an ab initio algorithm to extract “DNA words” with potential functional meaning. A Kolmogorov–Smirnov test was used to test the positions of candidate words for uniform distribution along DNA sequences, and integrity was judged by sequence and position alignment. Two random base sequences were adopted as negative controls, and an English book was used as a positive control to verify our algorithm. We applied our algorithm to the genomes of Saccharomyces cerevisiae and 10 strains of Escherichia coli to show the utility of the methods.
Results
The results provide strong evidence that the algorithm is a promising tool for building a DNA dictionary ab initio.
Conclusions
Our method provides a fast way for large-scale screening of important DNA elements and offers potential insights into the understanding of a genome.
5.
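Entry 4's key criterion, that a functional "word" is non-uniformly distributed along the sequence, can be illustrated with a Kolmogorov–Smirnov test on simulated word positions. The genome length and hit distributions below are hypothetical, not from the paper.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(1)
L = 100_000  # hypothetical genome length

# positions of two candidate "words": one scattered uniformly (background-like),
# one clustered around a locus (potentially functional)
uniform_hits = rng.integers(0, L, 200)
clustered_hits = np.clip(rng.normal(L / 2, L / 50, 200), 0, L - 1)

def uniformity_pvalue(positions, length):
    # KS test of word positions against the uniform distribution on [0, 1]
    return kstest(positions / length, "uniform").pvalue

p_uniform = uniformity_pvalue(uniform_hits, L)
p_clustered = uniformity_pvalue(clustered_hits, L)
```

A small p-value rejects uniformity, flagging the clustered word as a candidate dictionary entry, while the scattered word is retained as background.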
6.
Background
Human cancers are complex ecosystems composed of cells with distinct molecular signatures. Such intratumoral heterogeneity poses a major challenge to cancer diagnosis and treatment. Recent advances in single-cell techniques such as scRNA-seq have brought unprecedented insights into cellular heterogeneity. A consequent computational challenge is to cluster high-dimensional, noisy datasets with substantially fewer cells than genes.
Methods
In this paper, we introduce conCluster, a consensus clustering framework for cancer subtype identification from single-cell RNA-seq data. Using an ensemble strategy, conCluster fuses multiple basic partitions into consensus clusters.
Results
Applied to real cancer scRNA-seq datasets, conCluster detects cancer subtypes more accurately than the widely used scRNA-seq clustering methods. Further, we conducted co-expression network analysis for the identified melanoma subtypes.
Conclusions
Our analysis demonstrates that these subtypes exhibit distinct gene co-expression networks and significant gene sets with different functional enrichment.
7.
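The partition-fusion step in entry 6 can be sketched with a standard co-association (evidence accumulation) scheme. This is a generic consensus-clustering recipe on toy data, not conCluster's exact algorithm; the group sizes and separation are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy "expression" matrix: 60 cells from 3 well-separated groups, 50 genes
X = np.vstack([rng.normal(c, 1.0, size=(20, 50)) for c in (0, 4, 8)])

# step 1: generate multiple basic partitions with differently seeded k-means
partitions = [
    KMeans(n_clusters=3, n_init=1, random_state=s).fit_predict(X) for s in range(10)
]

# step 2: co-association matrix -- fraction of runs placing two cells together
co = np.mean([np.equal.outer(p, p) for p in partitions], axis=0)

# step 3: consensus partition via hierarchical clustering of 1 - co
Z = linkage(squareform(1.0 - co, checks=False), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
```

Because co-membership is compared run by run, the scheme is insensitive to the arbitrary label permutations of each individual k-means run.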
Background
DNase I hypersensitive sites (DHSs) are associated with cis-regulatory DNA elements. An efficient method for identifying DHSs can enhance our understanding of chromatin accessibility. Despite a multitude of resources available online, including experimental datasets and computational tools, the complex language of DHSs remains incompletely understood.
Methods
Here, we address this challenge using an approach based on a state-of-the-art machine learning method. We present a novel convolutional neural network (CNN) that combines Inception-like networks with a gating mechanism to respond to multiple patterns and long-term associations in DNA sequences, in order to predict multi-scale DHSs in Arabidopsis, rice and Homo sapiens.
Results
Our method obtains an area under the curve (AUC) of 0.961 on Arabidopsis, 0.969 on rice and 0.918 on Homo sapiens.
Conclusions
Our method provides an efficient and accurate way to identify multi-scale DHS sequences by deep learning.
8.
Rachel A. Spicer, Christoph Steinbeck. Metabolomics: Official Journal of the Metabolomic Society, 2018, 14(1):16
Introduction
Data sharing is being increasingly required by journals and has been heralded as a solution to the ‘replication crisis’.
Objectives
(i) Review data sharing policies of journals publishing the most metabolomics papers associated with open data and (ii) compare these journals’ policies to those that publish the most metabolomics papers.
Methods
A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.
Results
Journals that support data sharing are not necessarily those with the most papers associated with open metabolomics data.
Conclusion
Further efforts are required to improve data sharing in metabolomics.
9.
N. Cesbron, A.-L. Royer, Y. Guitton, A. Sydor, B. Le Bizec, G. Dervilly-Pinel. Metabolomics: Official Journal of the Metabolomic Society, 2017, 13(8):99
Introduction
Collecting feces is easy, and it offers a direct readout of endogenous and microbial metabolites.
Objectives
Given the lack of consensus about fecal sample preparation, especially in animal species, we developed a robust protocol enabling untargeted LC-HRMS fingerprinting.
Methods
The conditions of extraction (quantity, preparation, solvents, dilutions) were investigated in bovine feces.
Results
A rapid and simple protocol was developed, involving feces extraction with methanol (1/3, M/V) followed by centrifugation and a filtration step (10 kDa).
Conclusion
The workflow generated repeatable and informative fingerprints for robust metabolome characterization.
10.
Chao Xie, Chin Lui, Wesley Goi, Daniel H. Huson, Peter F. R. Little, Rohan B. H. Williams. BMC Bioinformatics, 2016, 17(19):508
Background
Taxonomic profiling of microbial communities is often performed using small subunit ribosomal RNA (SSU) amplicon sequencing (16S or 18S), while environmental shotgun sequencing is often focused on functional analysis. Large shotgun datasets contain a significant number of SSU sequences, and these can be exploited to perform an unbiased SSU-based taxonomic analysis.
Results
Here we present a new program called RiboTagger that identifies and extracts taxonomically informative ribotags located in a specified variable region of the SSU gene in a high-throughput fashion.
Conclusions
RiboTagger permits fast recovery of SSU-RNA sequences from shotgun nucleic acid surveys of complex microbial communities. The program targets all three domains of life, exhibits high sensitivity and specificity, and is substantially faster than comparable programs.
11.
12.
Background
Many protein regions, and some entire proteins, have no definite tertiary structure, presenting instead as dynamic, disordered ensembles under different physicochemical circumstances. These proteins and regions are known as Intrinsically Unstructured Proteins (IUP). IUP have been associated with a wide range of protein functions, along with roles in diseases characterized by protein misfolding and aggregation.
Results
Identifying IUP is an important task in structural and functional genomics. We extract useful features from sequences and develop machine learning algorithms for this task. We compare our IUP predictor with PONDRs (mainly neural-network-based predictors), disEMBL (also based on neural networks) and Globplot (based on disorder propensity).
Conclusion
We find that augmenting features derived from physicochemical properties of amino acids (such as hydrophobicity and complexity) and using an ensemble method proved beneficial. The IUP predictor is a viable alternative software tool for identifying IUP protein regions and proteins.
13.
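One of the physicochemical features mentioned in entry 12, hydrophobicity, is typically computed over a sliding window. The sketch below uses the standard Kyte–Doolittle hydropathy scale; the window size and the toy sequence are chosen only for illustration and are not the paper's settings.

```python
# Kyte-Doolittle hydropathy values (standard published scale)
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def window_hydropathy(seq, window=7):
    """Mean hydropathy around each residue; windows are truncated at the ends."""
    half = window // 2
    profile = []
    for i in range(len(seq)):
        win = seq[max(0, i - half): i + half + 1]
        profile.append(sum(KD[aa] for aa in win) / len(win))
    return profile

profile = window_hydropathy("MKKLLILVAAE")  # toy sequence
```

Stretches of strongly negative (hydrophilic) windows are the kind of signal disorder predictors combine with other sequence features.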
14.
Background
Chicken anemia virus (CAV) is the causative agent of chicken infectious anemia. Putative intergenotypic CAV recombinants have been reported previously, based on the earlier classification of CAV sequences into three genotypes. However, it is unknown whether intersubtype recombination occurs among the recently reported four CAV genotypes and five subtypes of genome sequences.
Results
Phylogenetic analysis, together with a variety of computational recombination detection algorithms, was used to investigate approximately full-length CAV genomes. Statistically significant evidence of intersubtype recombination was detected in the parent-like and two putative CAV recombinant sequences. This event was shown to occur between CAV subgroup A1 and A2 sequences in the phylogenetic trees.
Conclusions
We revealed that intersubtype recombination in CAV genome sequences played a role in generating genetic diversity within the natural population of CAV.
15.
Johannes Hertel, Sandra Van der Auwera, Nele Friedrich, Katharina Wittfeld, Maik Pietzner, Kathrin Budde, Alexander Teumer, Thomas Kocher, Matthias Nauck, Hans Jörgen Grabe. Metabolomics: Official Journal of the Metabolomic Society, 2017, 13(4):42
Introduction
Different normalization methods are available for urinary data. However, it is unclear which method performs best in minimizing error variance on a certain data-set, as no generally applicable empirical criteria have been established so far.
Objectives
The main aim of this study was to develop an applicable and formally correct algorithm to decide on the normalization method without using phenotypic information.
Methods
We proved mathematically, for two classical measurement error models, that the optimal normalization method generates the highest correlation between the normalized urinary metabolite concentrations and their blood concentrations or, respectively, their raw urinary concentrations. We then applied the two criteria to the 1H-NMR-measured urinary metabolomic data from the Study of Health in Pomerania (SHIP-0; n = 4068) under different normalization approaches, and compared the results with in silico experiments to explore the effects of inflated error variance in the dilution estimation.
Results
In SHIP-0, we demonstrated consistently that probabilistic quotient normalization based on aligned spectra outperforms all other tested normalization methods. Creatinine normalization performed worst, while for unaligned data integral normalization seemed to be the most reasonable. The simulated and the actual data were in line with the theoretical modeling, underlining the general validity of the proposed criteria.
Conclusions
The problem of choosing the best normalization procedure for a certain data-set can be solved empirically. Thus, we recommend applying different normalization procedures to the data and comparing their performances via the statistical methodology explicated in this work. On the basis of classical measurement error models, the proposed algorithm will find the optimal normalization method.
16.
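The probabilistic quotient normalization that performed best in entry 15 can be sketched in a few lines. This is a minimal textbook version on toy data (integral pre-normalization, median reference spectrum, median quotient as the dilution factor), not the SHIP-0 pipeline.

```python
import numpy as np

def pqn(X):
    """Probabilistic quotient normalization; rows are spectra/samples."""
    X = np.asarray(X, dtype=float)
    # step 1: integral (total-area) normalization of each spectrum
    X = X / X.sum(axis=1, keepdims=True)
    # step 2: reference spectrum = median across samples
    reference = np.median(X, axis=0)
    # step 3: per-sample dilution factor = median of feature-wise quotients
    dilution = np.median(X / reference, axis=1)
    return X / dilution[:, None]

# toy check: a 3x-diluted copy of a spectrum normalizes back to the original
base = np.array([1.0, 2.0, 3.0, 4.0])
X_norm = pqn(np.vstack([base, base / 3.0]))
```

Taking the median of the feature-wise quotients makes the estimated dilution factor robust to the handful of metabolites that genuinely differ between samples.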
Jacopo Troisi, Laura Sarno, Pasquale Martinelli, Costantino Di Carlo, Annamaria Landolfi, Giovanni Scala, Maurizio Rinaldi, Pietro D’Alessandro, Carla Ciccone, Maurizio Guida. Metabolomics: Official Journal of the Metabolomic Society, 2017, 13(11):140
Introduction
Chromosomal anomalies (CA) are the most frequent fetal anomalies.
Objective
To evaluate the diagnostic performance of a machine learning ensemble model based on the maternal serum metabolomic fingerprint of fetal aneuploidies during the second trimester.
Methods
This is a case-control pilot study. Metabolomic profiles were obtained from the serum of 328 mothers (220 controls and 108 cases) using gas chromatography coupled to mass spectrometry. Eight machine learning classification models were built and optimized. An ensemble model was built using a voting scheme. All samples were randomly divided into two sets: one was used as the training set, the other for diagnostic performance assessment.
Results
The ensemble machine learning model correctly classified all cases and controls. The accuracy was the same for trisomy 21 and 18; the other CA were also correctly detected. Elaidic, stearic, linolenic, myristic, benzoic, citric and glyceric acid, mannose, 2-hydroxybutyrate, phenylalanine, proline, alanine and 3-methylhistidine were selected as the most relevant metabolites in class separation.
Conclusion
The proposed model, based on the maternal serum metabolomic fingerprint of fetal aneuploidies during the second trimester, correctly identifies all the cases of chromosomal abnormalities. Overall, this preliminary analysis appeared suggestive of a metabolic environment conducive to increased oxidative stress and a disturbance in fetal central nervous system development. Maternal serum metabolomics can be a promising tool in the screening of chromosomal defects. Moreover, metabolomics allows us to extend our knowledge of the biochemical alterations caused by aneuploidies and responsible for the observed phenotypes.
17.
Introduction
Untargeted metabolomics is a powerful tool for biological discoveries. To analyze the complex raw data, significant advances in computational approaches have been made, yet it is not clear how exhaustive and reliable the data analysis results are.
Objectives
Assessment of the quality of raw data processing in untargeted metabolomics.
Methods
Five published untargeted metabolomics studies were reanalyzed.
Results
Omissions of at least 50 relevant compounds from the original results, as well as examples of representative mistakes, were reported for each study.
Conclusion
Incomplete raw data processing reveals unexplored potential in current and legacy data.
18.
Background
Many methods have been developed for metagenomic sequence classification, and most of them depend heavily on the genome sequences of known organisms. A large portion of sequenced reads may be classified as unknown, which greatly impairs our understanding of the whole sample.
Results
Here we present MetaBinG2, a fast method for metagenomic sequence classification, especially for samples with a large number of unknown organisms. MetaBinG2 is based on sequence composition and uses GPUs to accelerate its speed. A million 100 bp Illumina sequences can be classified in about 1 min on a computer with one GPU card. We evaluated MetaBinG2 by comparing it to multiple popular existing methods. We then applied MetaBinG2 to the dataset of the MetaSUB Inter-City Challenge provided by the CAMDA data analysis contest and compared community composition structures for environmental samples from different public places across cities.
Conclusion
Compared to existing methods, MetaBinG2 is fast and accurate, especially for those samples with significant proportions of unknown organisms.
Reviewers
This article was reviewed by Drs. Eran Elhaik, Nicolas Rascovan, and Serghei Mangul.
19.
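Entry 18's classifier works from sequence composition. A common minimal representation of that signal is the normalized k-mer frequency vector sketched below; this illustrates the composition idea only, not MetaBinG2's actual scoring model or GPU implementation.

```python
from collections import Counter
from itertools import product

def kmer_composition(seq, k=4):
    """Normalized k-mer frequency vector over the ACGT alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)
    return [counts[m] / total for m in kmers]

# dinucleotide (k=2) composition of a toy read
vec = kmer_composition("ACGTACGTACGTACGT", k=2)
```

Because the vector depends only on the read itself, composition-based classifiers can still assign reads from organisms with no close reference genome.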
Nadine Strehmel, David Strunk, Veronika Strehmel. Metabolomics: Official Journal of the Metabolomic Society, 2017, 13(11):135