Similar literature: 20 records found.
1.

Introduction

Microbial cells secrete many metabolites during growth, including important intermediates of the central carbon metabolism. This has not been taken into account by researchers when modeling microbial metabolism for metabolic engineering and systems biology studies.

Materials and Methods

The uptake of metabolites by microorganisms is well studied, but our knowledge of how and why they secrete different intracellular compounds is poor. The secretion of metabolites by microbial cells has traditionally been regarded as a consequence of intracellular metabolic overflow.

Conclusions

Here, we provide evidence based on time-series metabolomics data that microbial cells eliminate some metabolites in response to environmental cues, independent of metabolic overflow. Moreover, we review the different mechanisms of metabolite secretion and explore how this knowledge can benefit metabolic modeling and engineering.

2.

Background

In recent years, the visualization of biomagnetic measurement data by so-called pseudo current density maps, or Hosaka-Cohen (HC) transformations, has become popular.

Methods

The physical basis of these intuitive maps is clarified by means of analytically solvable problems.

Results

Examples in magnetocardiography, magnetoencephalography and magnetoneurography demonstrate the usefulness of this method.

Conclusion

Hardware realizations of the HC transformation and some similar transformations are discussed; these could advantageously support cross-platform comparability of biomagnetic measurements.

3.

Introduction

Data processing is one of the biggest challenges in metabolomics, given the high number of samples analyzed and the need for multiple software packages at each step of the processing workflow.

Objectives

To merge the steps required for metabolomics data processing into a single platform.

Methods

KniMet is a workflow for the processing of mass spectrometry-metabolomics data based on the KNIME Analytics platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.
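
The steps listed above can be pictured with a minimal, hypothetical pandas sketch (KniMet itself is a KNIME workflow; the thresholds, table layout, and batch-correction scheme below are illustrative assumptions, not its actual implementation):

```python
# Hypothetical sketch of the listed steps on a feature table
# (rows = samples, columns = features); all thresholds are illustrative only.
import pandas as pd

def process(features: pd.DataFrame, batch: pd.Series) -> pd.DataFrame:
    # Feature filtering: drop features missing in more than 50% of samples
    kept = features.loc[:, features.isna().mean() <= 0.5]
    # Missing value imputation: replace NaNs with half the feature minimum
    imputed = kept.apply(lambda col: col.fillna(col.min() / 2))
    # Normalization: scale each sample to its total signal
    normalized = imputed.div(imputed.sum(axis=1), axis=0)
    # Batch correction: centre each batch, then restore the global feature means
    corrected = normalized.groupby(batch).transform(lambda x: x - x.mean()) + normalized.mean()
    # Annotation would follow, e.g. matching feature m/z against a compound library
    return corrected
```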

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.

4.

Introduction

Data sharing is being increasingly required by journals and has been heralded as a solution to the ‘replication crisis’.

Objectives

(i) Review data sharing policies of journals publishing the most metabolomics papers associated with open data and (ii) compare these journals’ policies to those that publish the most metabolomics papers.

Methods

A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.

Results

Journals that support data sharing are not necessarily those with the most papers associated with open metabolomics data.

Conclusion

Further efforts are required to improve data sharing in metabolomics.

5.

Introduction

Metabolomics is a well-established tool in systems biology, especially in the top-down approach. Metabolomics experiments often result in discovery studies that provide intriguing biological hypotheses but rarely offer a mechanistic explanation of the findings. In this light, the interpretation of metabolomics data can be boosted by deploying systems biology approaches.

Objectives

This review aims to provide an overview of systems biology approaches that are relevant to metabolomics and to discuss some successful applications of these methods.

Methods

We review the most recent applications of systems biology tools in the field of metabolomics, such as network inference and analysis, metabolic modelling and pathways analysis.

Results

We offer a broad overview of systems biology tools that can be applied to address metabolomics problems. The characteristics and application results of these tools are also discussed in a comparative manner.

Conclusions

Systems biology-enhanced analysis of metabolomics data can provide insights into the molecular mechanisms originating the observed metabolic profiles and enhance the scientific impact of metabolomics studies.

6.

Introduction

Untargeted metabolomics is a powerful tool for biological discoveries. To analyze the complex raw data, significant advances in computational approaches have been made, yet it is not clear how exhaustive and reliable the data analysis results are.

Objectives

Assessment of the quality of raw data processing in untargeted metabolomics.

Methods

Five published untargeted metabolomics studies were reanalyzed.

Results

For each study, we report the omission of at least 50 relevant compounds from the original results, together with examples of representative mistakes.

Conclusion

Incomplete raw data processing reveals the unexplored potential of current and legacy data.

7.

Introduction

Although cultured cells are nowadays regularly analyzed by metabolomics technologies, some issues in study setup and data processing are still not resolved to complete satisfaction: a suitable harvesting method for adherent cells, a fast and robust method for data normalization, and the proof that metabolite levels can be normalized to cell number.

Objectives

We intended to develop a fast method for normalization of cell culture metabolomics samples, to analyze how metabolite levels correlate with cell numbers, and to elucidate the impact of the kind of harvesting on measured metabolite profiles.

Methods

We cultured four different human cell lines and used them to develop a fluorescence-based method for DNA quantification. Further, we assessed the correlation between metabolite levels and cell numbers and focused on the impact of the harvesting method (scraping or trypsinization) on the metabolite profile.

Results

We developed a fast, sensitive and robust fluorescence-based method for DNA quantification showing excellent linear correlation between fluorescence intensities and cell numbers for all cell lines. Furthermore, 82–97 % of the measured intracellular metabolites displayed linear correlation between metabolite concentrations and cell numbers. We observed differences in amino acids, biogenic amines, and lipid levels between trypsinized and scraped cells.
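
As a rough illustration of the normalization idea (fluorescence-based DNA quantification as a proxy for cell number), here is a hedged sketch; all calibration numbers are invented and the paper's actual assay details are not reproduced:

```python
# Sketch: fit a linear calibration of fluorescence vs. known cell numbers,
# then normalize metabolite intensities to the estimated cell number.
# All numbers are invented for illustration.
import numpy as np

cells = np.array([1e4, 5e4, 1e5, 5e5, 1e6])
fluorescence = np.array([120.0, 610.0, 1190.0, 6050.0, 11900.0])
slope, intercept = np.polyfit(cells, fluorescence, 1)  # linear calibration

def estimated_cells(fluor: float) -> float:
    """Invert the calibration line to estimate cell number from a fluorescence reading."""
    return (fluor - intercept) / slope

# Normalize a sample's metabolite intensities to its estimated cell number
sample_fluorescence = 3200.0
raw_intensities = np.array([1.8e6, 4.2e5, 9.1e4])
normalized = raw_intensities / estimated_cells(sample_fluorescence)
```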

Conclusion

We offer a fast, robust, and validated normalization method for cell culture metabolomics samples and demonstrate that metabolomics data can appropriately be normalized to cell number. We show a cell line- and metabolite-specific impact of the harvesting method on metabolite concentrations.

8.

Background

Existing clustering approaches for microarray data do not adequately differentiate between subsets of co-expressed genes. We devised a novel approach that integrates expression and sequence data in order to generate functionally coherent and biologically meaningful subclusters of genes. Specifically, the approach clusters co-expressed genes on the basis of similar content and distributions of predicted statistically significant sequence motifs in their upstream regions.

Results

We applied our method to several sets of co-expressed genes and were able to define subsets with enrichment in particular biological processes and specific upstream regulatory motifs.

Conclusions

These results show the potential of our technique for functional prediction and regulatory motif identification from microarray data.

9.

Background

Many methods have been developed for metagenomic sequence classification, and most of them depend heavily on the genome sequences of known organisms. A large portion of sequencing reads may be classified as unknown, which greatly impairs our understanding of the whole sample.

Results

Here we present MetaBinG2, a fast method for metagenomic sequence classification, especially for samples with a large number of unknown organisms. MetaBinG2 is based on sequence composition, and uses GPUs to accelerate its speed. A million 100 bp Illumina sequences can be classified in about 1 min on a computer with one GPU card. We evaluated MetaBinG2 by comparing it to multiple popular existing methods. We then applied MetaBinG2 to the dataset of MetaSUB Inter-City Challenge provided by CAMDA data analysis contest and compared community composition structures for environmental samples from different public places across cities.
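
For readers unfamiliar with composition-based classification, the toy sketch below illustrates the general principle (assigning a read to the reference whose k-mer frequency profile is most similar); it is not MetaBinG2's actual Markov-model scoring or GPU implementation:

```python
# Illustrative composition-based classification: represent each sequence by its
# k-mer frequency vector and assign it to the most similar reference genome.
from itertools import product
import numpy as np

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

def kmer_profile(seq: str) -> np.ndarray:
    """Relative k-mer frequencies of a sequence (ambiguous bases are skipped)."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        idx = INDEX.get(seq[i:i + K])
        if idx is not None:
            counts[idx] += 1
    total = counts.sum()
    return counts / total if total else counts

def classify(read: str, references: dict) -> str:
    """Assign the read to the reference with the most similar (cosine) k-mer profile."""
    r = kmer_profile(read)
    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0
    return max(references, key=lambda name: cosine(r, kmer_profile(references[name])))
```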

Conclusion

Compared to existing methods, MetaBinG2 is fast and accurate, especially for those samples with significant proportions of unknown organisms.

Reviewers

This article was reviewed by Drs. Eran Elhaik, Nicolas Rascovan, and Serghei Mangul.

10.

Background

Uncertainties exist in many biological systems, which can be classified as random uncertainties and fuzzy uncertainties. The former can usually be dealt with using stochastic methods, while the latter have to be handled with such approaches as fuzzy methods.

Results

In this paper, we focus on a special type of biological system that can be described using ordinary differential equations or continuous Petri nets (CPNs) but whose kinetic parameters are partly missing or inaccurate. To address this, we propose a class of fuzzy continuous Petri nets (FCPNs) that combine CPNs with fuzzy logic. We also present and implement a simulation algorithm for FCPNs and illustrate our method with the heat shock response system.
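
The FCPN construction itself is detailed in the paper; as a generic illustration of propagating a fuzzy kinetic parameter through a dynamic model, the following sketch samples a triangular fuzzy rate constant at several alpha-cuts and simulates a toy decay reaction (not the heat shock response system):

```python
# Toy illustration of fuzzy-uncertainty propagation: simulate a one-reaction
# model whose rate constant is a triangular fuzzy number, evaluated at the
# endpoints of several alpha-cut intervals.
import numpy as np

def triangular_alpha_cut(low, mode, high, alpha):
    """Interval of a triangular fuzzy number at membership level alpha (0..1)."""
    return low + alpha * (mode - low), high - alpha * (high - mode)

def simulate_decay(k, x0=1.0, t_end=10.0, dt=0.01):
    """Euler integration of the toy kinetics dx/dt = -k * x."""
    steps = int(t_end / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        x[i + 1] = x[i] + dt * (-k * x[i])
    return x

# Propagate the fuzzy rate constant k ~ triangular(0.1, 0.3, 0.6) through the model
for alpha in (0.0, 0.5, 1.0):
    k_lo, k_hi = triangular_alpha_cut(0.1, 0.3, 0.6, alpha)
    finals = [simulate_decay(k)[-1] for k in (k_lo, k_hi)]
    print(f"alpha={alpha}: final concentration in [{min(finals):.3f}, {max(finals):.3f}]")
```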

Conclusions

This approach can be used to model biological systems where some kinetic parameters are not available or their values vary due to some environmental factors.

11.

Introduction

Mass spectrometry imaging (MSI) experiments result in complex multi-dimensional datasets, which require specialist data analysis tools.

Objectives

We have developed massPix—an R package for analysing and interpreting data from MSI of lipids in tissue.

Methods

massPix produces single ion images, performs multivariate statistics and provides putative lipid annotations based on accurate mass matching against generated lipid libraries.

Results

Classification of tissue regions with high spectral similarity can be carried out by principal components analysis (PCA) or k-means clustering.
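
As an illustration of this classification step, the sketch below treats each pixel's spectrum as a feature vector, compresses it with PCA and segments the pixels with k-means; scikit-learn, the array sizes and the cluster count are illustrative choices, not massPix defaults:

```python
# Illustrative PCA + k-means segmentation of an MSI pixel-by-m/z matrix.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# One row per pixel, one column per m/z bin (random data stands in for real spectra)
rng = np.random.default_rng(0)
spectra = rng.random((500, 2000))

scores = PCA(n_components=5).fit_transform(spectra)            # spectral compression
labels = KMeans(n_clusters=4, n_init=10).fit_predict(scores)   # group similar pixels

# Reshape the labels back onto the image grid to visualise tissue regions
image_labels = labels.reshape(25, 20)
```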

Conclusion

massPix is an open-source tool for the analysis and statistical interpretation of MSI data, and is particularly useful for lipidomics applications.

12.

Purpose

This paper introduces the new EcoSpold data format for life cycle inventory (LCI).

Methods

A short historical retrospect on data formats in the life cycle assessment (LCA) field is given. The guiding principles for the revision and implementation are explained. Some technical basics of the data format are described, and changes to the previous data format are explained.

Results

The EcoSpold 2 data format caters for new requirements that have arisen in the LCA field in recent years.

Conclusions

The new data format is the basis for the Ecoinvent v3 database, but since it is an open data format, it is expected to be adopted by other LCI databases. Several new concepts in the EcoSpold 2 data format open up new possibilities for LCA practitioners and expand the application of the datasets to fields beyond LCA (e.g., Material Flow Analysis, Energy Balancing).

13.

Background

During the last few years, knowledge of drugs, disease phenotypes, and proteins has accumulated rapidly, and more and more scientists have turned their attention to inferring drug-disease associations by computational methods. Developing an integrated approach that systematically discovers drug-disease associations from these data is an important issue.

Methods

We combine three different networks (drug, genomic, and disease phenotype) and assign edge weights based on available experimental data and knowledge. Given a specific disease, we use our network propagation approach to infer the drug-disease associations.
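
Network propagation can be sketched generically as a random walk with restart over the weighted network; the toy matrix, restart probability and node layout below are assumptions for illustration and do not reproduce the paper's weighting scheme:

```python
# Generic random-walk-with-restart propagation on a weighted adjacency matrix.
import numpy as np

def propagate(W: np.ndarray, seeds: np.ndarray, restart: float = 0.7, tol: float = 1e-8):
    """Propagate the seed scores over the network until convergence."""
    col_sums = W.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # avoid division by zero for isolated nodes
    T = W / col_sums                       # column-normalised transition matrix
    p = seeds.astype(float).copy()
    while True:
        p_next = (1 - restart) * T @ p + restart * seeds
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy heterogeneous network: nodes 0-1 are drugs, node 2 a gene, node 3 the disease
W = np.array([[0, 0, 1, 0],
              [0, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
scores = propagate(W, seeds=np.array([0.0, 0.0, 0.0, 1.0]))
drug_ranking = np.argsort(scores[:2])[::-1]   # drugs ranked by propagated score
```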

Results

We use prostate cancer and colorectal cancer as test cases, with manually curated drug-disease associations from the Comparative Toxicogenomics Database as our benchmark. The ranked results show that our proposed method achieves higher specificity and sensitivity and clearly outperforms previous methods. Our results also show that including off-target information yields higher performance than using only primary drug targets in both test cases.

Conclusions

We clearly demonstrate the feasibility and benefits of using network-based analyses of chemical, genomic and phenotype data to reveal drug-disease associations. The potential associations inferred by our method provide new perspectives for toxicogenomics and drug repositioning evaluation.

14.
15.

Introduction

Intrahepatic cholestasis of pregnancy (ICP) is a common maternal liver disease; development can result in devastating consequences, including sudden fetal death and stillbirth. Currently, recognition of ICP only occurs following onset of clinical symptoms.

Objective

To investigate the maternal hair metabolome for predictive biomarkers of ICP.

Methods

The maternal hair metabolome (gestational age of sampling between 17 and 41 weeks) of 38 Chinese women with ICP and 46 pregnant controls was analysed using gas chromatography–mass spectrometry.

Results

Of 105 metabolites detected in hair, none were significantly associated with ICP.

Conclusion

Hair samples represent accumulative environmental exposure over time. Samples collected at the onset of ICP did not reveal any metabolic shifts, suggesting rapid development of the disease.

16.

Introduction

The metabolome of a biological system is affected by multiple factors, including the factor of interest (e.g. metabolic perturbation due to disease) and unwanted factors that are not the primary focus of the study (e.g. batch effects, gender, and level of physical activity). Removing these unwanted variations is advantageous, as they may complicate biological interpretation of the data.

Objectives

We aim to develop a new unwanted variations elimination (UVE) method called clustering-based unwanted residuals elimination (CURE) to reduce metabolic variation caused by unwanted/hidden factors in metabolomic data.

Methods

A mean-centered metabolomic dataset can be viewed as the sum of a studied-factor matrix and a residual matrix. The CURE method assumes that the residual should be normally distributed if it contains only inter-individual variation. If, however, the residual forms multiple clusters in the feature subspace of principal components analysis or partial least squares discriminant analysis, it may contain variation due to unwanted factors. This unwanted variation is removed by K-means clustering of the residuals and subtraction of each cluster's mean. The process is iterated until the residual no longer forms multiple clusters in the feature subspace.
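
A condensed sketch of this iteration (cluster the residuals in PCA space, subtract each cluster's mean, repeat) is given below; the silhouette threshold used here to decide when the residual no longer forms clusters is a stand-in assumption, not the criterion used by CURE:

```python
# CURE-like iteration: cluster residuals in PCA space and remove cluster means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

def cure_like(residual: np.ndarray, k: int = 2, max_iter: int = 10,
              silhouette_cutoff: float = 0.4) -> np.ndarray:
    """Iteratively remove cluster means from the residual matrix (rows = samples)."""
    R = residual.astype(float).copy()
    for _ in range(max_iter):
        scores = PCA(n_components=2).fit_transform(R)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(scores)
        if silhouette_score(scores, labels) < silhouette_cutoff:
            break                                   # residual no longer clearly clustered
        for c in np.unique(labels):
            R[labels == c] -= R[labels == c].mean(axis=0)   # remove that cluster's mean
    return R
```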

Results

Three simulated datasets and a human metabolomic dataset were used to demonstrate the performance of the proposed CURE method. CURE was found able to remove most of the variations caused by unwanted factors, while preserving inter-individual variation between samples.

Conclusion

The CURE method can effectively remove unwanted data variation, and can serve as an alternative UVE method for metabolomic data.

17.

Introduction

It is difficult to elucidate the metabolic and regulatory factors causing lipidome perturbations.

Objectives

This work simplifies this process.

Methods

A method has been developed to query an online holistic lipid metabolic network (of 7923 metabolites) to extract the pathways that connect the input list of lipids.
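
The kind of query described here can be sketched with a small graph example: collect the shortest paths connecting each pair of input lipids and return the induced subnetwork. networkx and the toy reaction edges below are illustrative assumptions; the actual tool queries an online network of 7923 metabolites:

```python
# Extract the subnetwork of shortest paths connecting a list of input lipids.
import itertools
import networkx as nx

def connecting_subnetwork(network: nx.Graph, lipids: list) -> nx.Graph:
    """Subgraph induced by shortest paths between every pair of input lipids."""
    nodes = set()
    for a, b in itertools.combinations(lipids, 2):
        try:
            nodes.update(nx.shortest_path(network, a, b))
        except nx.NetworkXNoPath:
            continue                    # skip pairs that are not connected
    return network.subgraph(nodes).copy()

# Toy, entirely hypothetical reaction edges
G = nx.Graph([("PC(34:1)", "LPC(16:0)"), ("LPC(16:0)", "FA(16:0)"),
              ("FA(16:0)", "TG(52:2)"), ("PC(34:1)", "DG(34:1)")])
sub = connecting_subnetwork(G, ["PC(34:1)", "TG(52:2)"])
```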

Results

The output enables pathway visualisation and the querying of other databases to identify potential regulators. When used to study a plasma lipidome dataset of polycystic ovary syndrome, 14 enzymes were identified, of which 3 are linked to ELAVL1, an mRNA stabiliser.

Conclusion

This method provides a simplified approach to identifying potential regulators causing lipid-profile perturbations.

18.

Introduction

Untargeted metabolomics workflows include numerous points where variance and systematic errors can be introduced. Given the diversity of the lipidome, manual peak picking and quantitation using molecule-specific internal standards are unrealistic; therefore, quality peak-picking algorithms and downstream feature processing and normalization algorithms are important. Subsequent normalization, data filtering, statistical analysis, and biological interpretation are simplified when quality data acquisition and feature processing are employed.

Objectives

Metrics for QC are important throughout the workflow. The robust workflow presented here provides techniques to ensure that QC checks are implemented throughout sample preparation, data acquisition, pre-processing, and analysis.

Methods

The untargeted lipidomics workflow includes sample standardization prior to acquisition, blocks of QC standards and blanks run at systematic intervals between randomized blocks of experimental data, blank feature filtering (BFF) to remove features not originating from the sample, and QC analysis of data acquisition and processing.
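
Blank feature filtering can be sketched as keeping only features whose mean sample intensity sufficiently exceeds their mean intensity in blank injections; the 3-fold cutoff below is an assumed placeholder, not the paper's exact criterion:

```python
# Illustrative blank feature filtering on sample and blank feature tables.
import numpy as np
import pandas as pd

def blank_feature_filter(samples: pd.DataFrame, blanks: pd.DataFrame,
                         fold_over_blank: float = 3.0) -> pd.DataFrame:
    """Keep features whose mean sample intensity is well above the blank mean."""
    sample_mean = samples.mean()
    blank_mean = blanks.mean().replace(0, np.nan)
    ratio = sample_mean / blank_mean
    keep = ratio.isna() | (ratio >= fold_over_blank)   # NaN ratio = absent from blanks
    return samples.loc[:, keep]
```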

Results

The workflow was successfully applied to mouse liver samples, which were investigated to discern lipidomic changes throughout the development of nonalcoholic fatty liver disease (NAFLD). The workflow, including a novel filtering method, BFF, allows improved confidence in results and conclusions for lipidomic applications.

Conclusion

Using a mouse model developed to study the transition of NAFLD from an early stage, known as simple steatosis, to the later stage, nonalcoholic steatohepatitis, in combination with our novel workflow, we identified phosphatidylcholines, phosphatidylethanolamines, and triacylglycerols that may contribute to disease onset and/or progression.

19.

Introduction

The generic metabolomics data processing workflow consists of a serial set of processes, including peak picking, quality assurance, normalisation, missing value imputation, transformation and scaling. The combination of these processes should present the experimental data in an appropriate structure so that biological changes can be identified in a valid and robust manner.

Objectives

Currently, different researchers apply different data processing methods and no assessment of the permutations applied to UHPLC-MS datasets has been published. Here we wish to define the most appropriate data processing workflow.

Methods

We assess the influence of normalisation, missing value imputation, transformation and scaling methods on univariate and multivariate analysis of UHPLC-MS datasets acquired for different mammalian samples.

Results

Our studies have shown that once data are filtered, missing values are not correlated with m/z, retention time or response. Following an exhaustive evaluation, we recommend PQN normalisation with no missing value imputation and no transformation or scaling for univariate analysis. For PCA we recommend applying PQN normalisation with Random Forest missing value imputation, glog transformation and no scaling method. For PLS-DA we recommend PQN normalisation, KNN as the missing value imputation method, generalised logarithm transformation and no scaling. These recommendations are based on searching for the biologically important metabolite features independent of their measured abundance.
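
Two of the recommended steps, PQN normalisation and the generalised logarithm (glog) transform, can be sketched as follows; the glog lambda here is an arbitrary placeholder, whereas in practice it would be optimised (e.g. on QC samples):

```python
# Sketch of probabilistic quotient normalisation (PQN) followed by a glog transform.
import numpy as np

def pqn_normalise(X: np.ndarray) -> np.ndarray:
    """PQN: X is a samples x features intensity matrix."""
    reference = np.nanmedian(X, axis=0)               # median spectrum as reference
    quotients = X / reference
    dilution = np.nanmedian(quotients, axis=1)        # most probable dilution per sample
    return X / dilution[:, None]

def glog(X: np.ndarray, lam: float = 1e-8) -> np.ndarray:
    """One common form of the generalised logarithm: log(x + sqrt(x^2 + lambda))."""
    return np.log(X + np.sqrt(X ** 2 + lam))

# Toy positive intensity matrix (10 samples x 50 features)
X = np.abs(np.random.default_rng(1).normal(1000.0, 200.0, size=(10, 50)))
X_processed = glog(pqn_normalise(X))
```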

Conclusion

The appropriate choice of normalisation, missing value imputation, transformation and scaling methods differs depending on the data analysis method and the choice of method is essential to maximise the biological derivations from UHPLC-MS datasets.

20.

Introduction

Feces are easy to collect and give direct access to both endogenous and microbial metabolites.

Objectives

Given the lack of consensus on fecal sample preparation, especially in animal species, we developed a robust protocol allowing untargeted LC-HRMS fingerprinting.

Methods

The conditions of extraction (quantity, preparation, solvents, dilutions) were investigated in bovine feces.

Results

A rapid and simple protocol was developed, involving feces extraction with methanol (1/3, M/V) followed by centrifugation and a filtration step (10 kDa).

Conclusion

The workflow generated repeatable and informative fingerprints for robust metabolome characterization.
