Similar Articles: 20 results found
1.

Background

Until recently, plant metabolomics has provided a deep understanding of metabolic regulation in individual plants as experimental units. Applying these techniques to agricultural systems subject to more complex interactions is a step towards implementing translational metabolomics in crop breeding.

Aim of Review

We review advances in knowledge gained in recent years from the application of metabolomic techniques, which have evolved from biomarker discovery towards improving crop yield and quality.

Key Scientific Concepts of Review

Translational metabolomics applied to crop breeding programs.

2.

Introduction

Metabolomics is a well-established tool in systems biology, especially in the top-down approach. Metabolomics experiments often result in discovery studies that provide intriguing biological hypotheses but rarely offer a mechanistic explanation of the findings. In this light, the interpretation of metabolomics data can be boosted by deploying systems biology approaches.

Objectives

This review aims to provide an overview of systems biology approaches that are relevant to metabolomics and to discuss some successful applications of these methods.

Methods

We review the most recent applications of systems biology tools in the field of metabolomics, such as network inference and analysis, metabolic modelling and pathway analysis.
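As an illustration of the simplest of these tools, the sketch below (Python, with hypothetical variable names; it assumes a samples-by-metabolites intensity matrix) infers an association network by thresholding pairwise Spearman correlations between metabolites.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

def correlation_network(X, metabolite_names, r_threshold=0.7):
    """Build a metabolite association network from an (n_samples, n_metabolites)
    intensity matrix by linking pairs whose |Spearman r| exceeds a threshold."""
    rho, _ = spearmanr(X)                  # columns are treated as variables
    graph = nx.Graph()
    graph.add_nodes_from(metabolite_names)
    n = len(metabolite_names)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(rho[i, j]) >= r_threshold:
                graph.add_edge(metabolite_names[i], metabolite_names[j],
                               weight=float(rho[i, j]))
    return graph
```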

Results

We offer a broad overview of systems biology tools that can be applied to metabolomics problems. The characteristics and application results of these tools are also discussed in a comparative manner.

Conclusions

Systems biology-enhanced analysis of metabolomics data can provide insights into the molecular mechanisms underlying the observed metabolic profiles and enhance the scientific impact of metabolomics studies.

3.

Introduction

Untargeted metabolomics studies for biomarker discovery often involve hundreds to thousands of human samples. Data acquisition for such large sample sets has to be divided into several batches and may span from months to several years. The signal drift of metabolites during data acquisition (intra- and inter-batch) is unavoidable and is a major confounding factor for large-scale metabolomics studies.

Objectives

We aim to develop a data normalization method to reduce unwanted variations and integrate multiple batches in large-scale metabolomics studies prior to statistical analyses.

Methods

We developed a machine learning-based method using support vector regression (SVR) for the normalization and integration of large-scale metabolomics data. An R package named MetNormalizer was developed and is provided for data processing using SVR normalization.
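The sketch below is a minimal Python illustration of QC-based SVR drift correction (it is not the MetNormalizer R code; it assumes pooled QC samples injected throughout the run and uses scikit-learn's SVR): each feature's drift is modelled on the QC injections as a function of injection order and divided out of all samples.

```python
import numpy as np
from sklearn.svm import SVR

def svr_normalize(X, injection_order, qc_mask):
    """QC-based drift correction: fit each feature's intensity drift on the
    pooled-QC injections with SVR, then divide the predicted drift out of
    every sample and rescale to the QC median.

    X               : (n_samples, n_features) peak intensity matrix
    injection_order : (n_samples,) acquisition order of each injection
    qc_mask         : boolean (n_samples,) flagging pooled QC injections
    """
    order = np.asarray(injection_order, dtype=float).reshape(-1, 1)
    X_norm = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        model = SVR(kernel="rbf", C=1.0, gamma="scale")
        model.fit(order[qc_mask], X[qc_mask, j])
        drift = np.clip(model.predict(order), np.finfo(float).eps, None)
        X_norm[:, j] = X[:, j] / drift * np.median(X[qc_mask, j])
    return X_norm
```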

Results

After SVR normalization, the proportion of metabolite ion peaks with relative standard deviations (RSDs) below 30 % increased to more than 90 % of all peaks, a much better result than with other common normalization methods. Reducing unwanted analytical variation improves the performance of both unsupervised and supervised multivariate statistical analyses in terms of classification and prediction accuracy, so that subtle metabolic changes in epidemiological studies can be detected.

Conclusion

SVR normalization can effectively remove unwanted intra- and inter-batch variation and performs much better than other common normalization methods.

4.

Introduction

Untargeted metabolomics is a powerful tool for biological discoveries. To analyze the complex raw data, significant advances in computational approaches have been made, yet it is not clear how exhaustive and reliable the data analysis results are.

Objectives

Assessment of the quality of raw data processing in untargeted metabolomics.

Methods

Five published untargeted metabolomics studies were reanalyzed.

Results

Omissions of at least 50 relevant compounds from the original results as well as examples of representative mistakes were reported for each study.

Conclusion

Incomplete raw data processing leaves unexplored potential in current and legacy data.

5.

Introduction

Metabolomics is an emerging approach for early detection of cancer. Along with the development of metabolomics, high-throughput technologies and statistical learning, the integration of multiple biomarkers has significantly improved clinical diagnosis and management for patients.

Objectives

In this study, we conducted a systematic review to examine recent advances in oncometabolomics-based diagnostic biomarker discovery and validation in pancreatic cancer.

Methods

PubMed, Scopus, and Web of Science were searched for relevant studies published before September 2017. We examined the study designs, the metabolomics approaches, and the methodological quality of reporting, following the PRISMA statement.

Results and Conclusion

The 25 included studies primarily focused on the identification rather than the validation of the predictive capacity of potential biomarkers. Sample sizes ranged from 10 to 8760. External validation of biomarker panels was reported in nine studies. The diagnostic area under the curve ranged from 0.68 to 1.00 (sensitivity: 0.43–1.00, specificity: 0.73–1.00). The context-dependent effects of patients' bio-parameters on metabolome alterations have not been thoroughly elucidated. The most frequently reported candidates were glutamic acid and histidine (seven studies) and glutamine and isoleucine (five studies), leading to predominant enrichment of amino acid-related pathways. Notably, 46 metabolites were estimated in at least two studies. Specific challenges and potential pitfalls are discussed to provide better insight into future research directions. Our investigation suggests that metabolomics is a robust approach that will improve the diagnostic assessment of pancreatic cancer. Further studies are warranted to confirm the validity of these biomarkers in multi-center clinical settings.
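For readers unfamiliar with how the predictive capacity of such panels is validated, the sketch below (Python with scikit-learn; the data layout is purely illustrative and not taken from any of the reviewed studies) estimates a panel's diagnostic AUC with cross-validation rather than on the training data alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

def panel_auc(X_panel, y, n_splits=5, seed=0):
    """Cross-validated diagnostic AUC for a candidate metabolite panel.

    X_panel : (n_patients, n_panel_metabolites) metabolite abundances
    y       : (n_patients,) binary labels (1 = case, 0 = control)
    """
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    model = LogisticRegression(max_iter=1000)
    # out-of-fold probabilities avoid the optimism of resubstitution estimates
    probs = cross_val_predict(model, X_panel, y, cv=cv, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)
```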

6.

Introduction

Data sharing is being increasingly required by journals and has been heralded as a solution to the ‘replication crisis’.

Objectives

(i) To review the data sharing policies of the journals publishing the most metabolomics papers associated with open data, and (ii) to compare these policies with those of the journals publishing the most metabolomics papers overall.

Methods

A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.

Results

Journals that support data sharing are not necessarily those with the most papers associated with open metabolomics data.

Conclusion

Further efforts are required to improve data sharing in metabolomics.

7.

Introduction

Data processing is one of the biggest problems in metabolomics, given the large number of samples analyzed and the need for multiple software packages at each step of the processing workflow.

Objectives

To merge the steps required for metabolomics data processing into a single platform.

Methods

KniMet is a workflow for the processing of mass spectrometry-based metabolomics data built on the KNIME Analytics Platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.
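As a rough illustration of what three of these steps involve (not the KniMet/KNIME implementation; the thresholds and the imputation rule are illustrative choices), the Python sketch below filters features by missing-value fraction, imputes with half the minimum observed value, and normalizes each sample to the median total intensity.

```python
import numpy as np

def process_feature_table(X, max_missing_fraction=0.5):
    """Toy version of three core processing steps on a peak table that uses
    NaN for missing values: filter, impute, normalize."""
    # 1. feature filtering: drop features missing in too many samples
    keep = np.mean(np.isnan(X), axis=0) <= max_missing_fraction
    X = X[:, keep]
    # 2. missing value imputation: half of each feature's minimum observed value
    half_min = np.nanmin(X, axis=0) / 2.0
    X = np.where(np.isnan(X), half_min, X)
    # 3. normalization: scale each sample to the median total intensity
    totals = X.sum(axis=1, keepdims=True)
    X = X / totals * np.median(totals)
    return X, keep
```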

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.

8.

Introduction

Human plasma metabolomics offers powerful tools for understanding disease mechanisms and identifying clinical biomarkers for diagnosis, efficacy prediction and patient stratification. Although storage conditions can affect the reliability of metabolite data, strict control of these conditions remains challenging, particularly when clinical samples are collected from multiple centers. Therefore, it is necessary to consider the stability profile of each analyte.

Objectives

The purpose of this study was to extract unstable metabolites from vast metabolome data and identify factors that cause instability.

Method

Plasma samples obtained from five healthy volunteers were stored under ten different combinations of time and temperature and were quantified using leading-edge metabolomics. Instability was evaluated by comparing quantitation values under each storage condition with those obtained after storage at −80 °C.
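The sketch below (Python; the fold-change cutoff and the paired t-test are illustrative assumptions, since the study's exact statistics are not given in this abstract) shows the kind of per-metabolite comparison against the −80 °C reference that such an evaluation entails.

```python
import numpy as np
from scipy.stats import ttest_rel

def flag_unstable(ref, test, fold_cutoff=1.2, alpha=0.05):
    """Flag metabolites whose levels differ between a test storage condition
    and the -80 °C reference.

    ref, test : (n_donors, n_metabolites) quantitation values from the same
                donors under the reference and the test condition.
    """
    log_ref, log_test = np.log2(ref), np.log2(test)
    _, pvals = ttest_rel(log_test, log_ref, axis=0)      # paired test per metabolite
    mean_log_fold = (log_test - log_ref).mean(axis=0)
    unstable = (pvals < alpha) & (np.abs(mean_log_fold) > np.log2(fold_cutoff))
    return unstable, 2.0 ** mean_log_fold, pvals
```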

Result

Stability profiling of the 992 metabolites showed time- and temperature-dependent increases in numbers of significantly changed metabolites. This large volume of data enabled comparisons of unstable metabolites with their related molecules and allowed identification of causative factors, including compound-specific enzymatic activity in plasma and chemical reactivity. Furthermore, these analyses indicated extreme instability of 1-docosahexaenoylglycerol, 1-arachidonoylglycerophosphate, cystine, cysteine and N6-methyladenosine.

Conclusion

A large volume of data regarding storage stability was obtained. These data contribute to the discovery of biomarker candidates without mis-selection based on unreliable values and to the establishment of suitable handling procedures for targeted biomarker quantification.

9.

Introduction

Natural products from culture collections have enormous impact in advancing discovery programs for metabolites of biotechnological importance. These discovery efforts rely on the metabolomic characterization of strain collections.

Objective

Many emerging approaches compare metabolomic profiles of such collections, but few enable the analysis and prioritization of thousands of samples from diverse organisms while delivering chemistry-specific readouts.

Method

In this work we utilize untargeted LC–MS/MS-based metabolomics together with molecular networking to inventory the chemistries associated with 1000 marine microorganisms.
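Molecular networking links MS/MS spectra whose fragmentation patterns resemble each other. The Python sketch below illustrates the core idea with a plain cosine score on binned spectra; GNPS-style molecular networking uses a more elaborate modified-cosine score, so this is only a simplified sketch.

```python
import numpy as np
import networkx as nx

def bin_spectrum(peaks, bin_width=0.5, max_mz=2000.0):
    """Turn a list of (m/z, intensity) pairs into a unit-length binned vector."""
    vec = np.zeros(int(max_mz / bin_width))
    for mz, intensity in peaks:
        if mz < max_mz:
            vec[int(mz / bin_width)] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def spectral_network(spectra, cosine_threshold=0.7):
    """Link MS/MS spectra whose binned cosine similarity exceeds a threshold.

    spectra : dict mapping spectrum id -> list of (m/z, intensity) peaks
    """
    ids = list(spectra)
    vecs = {s: bin_spectrum(spectra[s]) for s in ids}
    graph = nx.Graph()
    graph.add_nodes_from(ids)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            cos = float(np.dot(vecs[a], vecs[b]))
            if cos >= cosine_threshold:
                graph.add_edge(a, b, cosine=cos)
    return graph
```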

Result

This approach annotated 76 molecular families (a spectral match rate of 28 %), including clinically and biotechnologically important molecules such as valinomycin, actinomycin D, and desferrioxamine E. Targeting a molecular family produced primarily by one microorganism led to the isolation and structure elucidation of two new molecules designated maridric acids A and B.

Conclusion

Molecular networking-guided exploration of large culture collections allows for rapid dereplication of known molecules and can highlight producers of unique metabolites. These methods, together with large culture collections and growing databases, allow for data-driven strain prioritization with a focus on novel chemistry.

10.

Introduction

The availability of large cohorts of samples with related metadata provides scientists with extensive material for studies. At the same time, the recent development of modern high-throughput 'omics' technologies, including metabolomics, has made the analysis of large sample sizes feasible. Representative subset selection becomes critical when selecting samples from larger cohorts and dividing them into analytical batches. This holds especially true when relative quantification of compound levels is used.

Objectives

We present a multivariate strategy for representative sample selection and integration of results from multi-batch experiments in metabolomics.

Methods

Multivariate characterization was applied for design-of-experiment-based sample selection and subsequent subdivision into four analytical batches, which were analyzed on different days by metabolomic profiling using gas chromatography time-of-flight mass spectrometry (GC–TOF–MS). For each batch, OPLS-DA® was used, and the p(corr) vectors were averaged to obtain a combined metabolic profile. Jackknifed standard errors were used to calculate confidence intervals for each metabolite in the averaged p(corr) profile.
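The sketch below (Python; it assumes the per-batch p(corr) vectors have already been computed by OPLS-DA) shows how a leave-one-batch-out jackknife yields standard errors and approximate confidence intervals for the averaged profile.

```python
import numpy as np

def jackknife_pcorr(pcorr_per_batch, z=1.96):
    """Average per-batch p(corr) vectors and attach jackknife confidence intervals.

    pcorr_per_batch : (n_batches, n_metabolites) array holding one OPLS-DA
                      p(corr) vector per analytical batch.
    """
    n = pcorr_per_batch.shape[0]
    mean_profile = pcorr_per_batch.mean(axis=0)
    # leave-one-batch-out estimates of the averaged profile
    loo = np.array([np.delete(pcorr_per_batch, i, axis=0).mean(axis=0)
                    for i in range(n)])
    se = np.sqrt((n - 1) / n * ((loo - loo.mean(axis=0)) ** 2).sum(axis=0))
    return mean_profile, mean_profile - z * se, mean_profile + z * se
```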

Results

A combined, representative metabolic profile describing differences between systemic lupus erythematosus (SLE) patients and controls was obtained and used for elucidation of metabolic pathways that could be disturbed in SLE.

Conclusion

Design-of-experiment-based representative sample selection ensured diversity and minimized the bias that could be introduced at this step. The combined metabolic profile enabled unified analysis and interpretation.

11.

Introduction

The Metabolomics Society Data Quality Task Group (DQTG) developed a questionnaire regarding quality assurance (QA) and quality control (QC) to provide baseline information about current QA and QC practices applied in the international metabolomics community.

Objectives

The DQTG has a long-term goal of promoting robust QA and QC in the metabolomics community through increased awareness via communication, outreach and education, and through the promotion of best working practices. An assessment of current QA and QC practices will serve as a foundation for future activities and development of appropriate guidelines.

Method

QA was defined as the set of procedures performed in advance of sample analysis to improve data quality. QC was defined as the set of activities that a laboratory performs during or immediately after analysis to demonstrate the quality of project data. A questionnaire of 70 questions was developed, covering demographic information, QA approaches and QC approaches, and respondents could answer a subset or all of the questions.

Result

The DQTG questionnaire received 97 individual responses from 84 institutions in all fields of metabolomics covering NMR, LC-MS, GC-MS, and other analytical technologies.

Conclusion

There was a vast range of responses concerning the use of QA and QC approaches that indicated the limited availability of suitable training, lack of Standard Operating Procedures (SOPs) to review and make decisions on quality, and limited use of standard reference materials (SRMs) as QC materials. The DQTG QA/QC questionnaire has for the first time demonstrated that QA and QC usage is not uniform across metabolomics laboratories. Here we present recommendations on how to address the issues concerning QA and QC measurements and reporting in metabolomics.

12.

Introduction

Urine is one of the body fluids often used in metabolomics studies. The concentrations of metabolites in urine are affected by an individual's hydration status, resulting in dilution differences. The data therefore require normalization to correct for such differences. Two normalization techniques are commonly applied to urine samples prior to further statistical analysis. The first, AUC normalization, standardizes the area under the curve (AUC) of the signals within a sample to the median, mean or another suitable representation of the amount of dilution. The second approach uses specific end-product metabolites such as creatinine, expressing all intensities within a sample relative to the creatinine intensity.

Objectives

Another way of looking at urine metabolomics data is by realizing that the ratios between peak intensities are the information-carrying features. This opens up possibilities to use another class of data analysis techniques designed to deal with such ratios: compositional data analysis. The aim of this paper is to develop PARAFAC modeling of three-way urine metabolomics data in the context of compositional data analysis and compare this with standard normalization techniques.

Methods

In the compositional data analysis approach, special coordinate systems are defined to deal with the ratio problem. In essence, it comes down to using distance measures other than the Euclidean distance used in the conventional analysis of metabolomics data.
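The sketch below (Python; it assumes strictly positive intensities, so zeros would need replacement first) shows the centred log-ratio coordinates typically used for compositional data, under which the Aitchison distance is simply the Euclidean distance between clr-transformed samples.

```python
import numpy as np

def clr(X):
    """Centred log-ratio transform of an (n_samples, n_features) matrix of
    strictly positive intensities: each row is expressed relative to its
    geometric mean, so only the ratios between peaks carry information."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

def aitchison_distance(x, y):
    """Aitchison distance between two samples = Euclidean distance between
    their clr coordinates."""
    return float(np.linalg.norm(clr(x[None, :]) - clr(y[None, :])))
```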

Results

We illustrate the use of this approach, in combination with three-way methods (i.e., PARAFAC), on a longitudinal urine metabolomics study and two simulations. In both cases, the advantage of the compositional approach is established in terms of improved interpretability of the scores and loadings of the PARAFAC model.

Conclusion

For urine metabolomics studies, we advocate the use of compositional data analysis approaches. They are easy to use, well established and proven to give reliable results.

13.

Introduction

Mass spectrometry and computational biology have advanced significantly in the past ten years, bringing the field of metabolomics a step closer to personalized medicine applications. Despite these analytical advancements, collection of blood samples for routine clinical analysis is still performed through traditional blood draws.

Objective

TAP capillary blood collection has been recently introduced for the rapid, painless draw of small volumes of blood (~100 μL), though little is known about the comparability of metabolic phenotypes of blood drawn via traditional venipuncture and TAP devices.

Methods

UHPLC-MS-targeted metabolomics analyses were performed on blood drawn traditionally or through TAP devices from 5 healthy volunteers. Absolute quantitation of 45 clinically-relevant metabolites was calculated against stable heavy isotope-labeled internal standards.

Results

Ranges for 39 out of 45 quantified metabolites overlapped between drawing methods. Pyruvate and succinate were over threefold higher in the TAP samples than in traditional blood draws. No significant changes were observed for other carboxylates, glucose or lactate. TAP samples were characterized by increases in reduced glutathione and decreases in urate and cystine, markers of oxidation of purines and cysteine, overall suggesting decreased oxidation during draws. The absolute levels of bile acids and acyl-carnitines, as well as almost all amino acids, correlated almost perfectly between methods (Spearman r ≥ 0.95).
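The per-metabolite agreement check reported above can be expressed compactly; the Python sketch below (hypothetical paired arrays, one row per volunteer) computes the Spearman correlation between the two collection methods for each metabolite.

```python
import numpy as np
from scipy.stats import spearmanr

def draw_method_agreement(venipuncture, tap):
    """Per-metabolite Spearman correlation between paired collection methods.

    venipuncture, tap : (n_volunteers, n_metabolites) concentrations measured
                        on the same volunteers with each method.
    """
    rhos = []
    for j in range(venipuncture.shape[1]):
        rho, _ = spearmanr(venipuncture[:, j], tap[:, j])
        rhos.append(rho)
    return np.array(rhos)
```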

Conclusion

Though further, more extensive studies will be needed, this pilot suggests that TAP-derived blood may be a logistically friendly source of blood for large-scale metabolomics studies, especially those addressing amino acids, glycemia and lactatemia, as well as bile acid and acyl-carnitine levels.

14.

Introduction

In metabolomics studies, unwanted variation inevitably arises from various sources. Normalization, that is the removal of unwanted variation, is an essential step in the statistical analysis of metabolomics data. However, metabolomics normalization is often considered an imprecise science due to the diverse sources of variation and the availability of a number of alternative strategies that may be implemented.

Objectives

We highlight the need for comparative evaluation of different normalization methods and present software strategies to help ease this task for both data-oriented and biological researchers.

Methods

We present NormalizeMets, a graphical user interface for the comparative evaluation of different normalization methods, available both within the familiar Microsoft Excel environment and as freely available R software. The NormalizeMets R package, along with a vignette describing the workflow, can be downloaded from https://cran.r-project.org/web/packages/NormalizeMets/. The Excel interface and the Excel user guide are available at https://metabolomicstats.github.io/ExNormalizeMets.

Results

NormalizeMets allows for comparative evaluation of normalization methods using criteria that depend on the given dataset and the ultimate research question. It thus guides researchers in assessing, selecting and implementing a suitable normalization method using either the familiar Microsoft Excel or the freely available R software. In addition, the package can be used to visualise metabolomics data with interactive graphical displays and to obtain final statistical results for clustering, classification, biomarker identification adjusting for confounding variables, and correlation analysis.

Conclusion

NormalizeMets is designed for comparative evaluation of normalization methods, and can also be used to obtain end statistical results. The use of freely-available R software offers an attractive proposition for programming-oriented researchers, and the Excel interface offers a familiar alternative to most biological researchers. The package handles the data locally in the user’s own computer allowing for reproducible code to be stored locally.

15.

Background

Centrifugation is an indispensable procedure for plasma sample preparation, but applied conditions can vary between labs.

Aim

To determine whether routinely used plasma centrifugation protocols (1500×g for 10 min; 3000×g for 5 min) influence non-targeted metabolomic analyses.

Methods

Nuclear magnetic resonance spectroscopy (NMR) and High Resolution Mass Spectrometry (HRMS) data were evaluated with sparse partial least squares discriminant analyses and compared with cell count measurements.

Results

Besides significant differences in platelet count, we identified substantial alterations in NMR and HRMS data related to the different centrifugation protocols.

Conclusion

Even minor differences in plasma centrifugation can significantly influence metabolomic patterns and potentially bias metabolomics studies.

16.

Introduction

A common problem in metabolomics data analysis is the existence of a substantial number of missing values, which can complicate, bias, or even prevent certain downstream analyses. One of the most widely-used solutions to this problem is imputation of missing values using a k-nearest neighbors (kNN) algorithm to estimate missing metabolite abundances. kNN implicitly assumes that missing values are uniformly distributed at random in the dataset, but this is typically not true in metabolomics, where many values are missing because they are below the limit of detection of the analytical instrumentation.

Objectives

Here, we explore the impact of nonuniformly distributed missing values (missing not at random, or MNAR) on imputation performance. We present a new model for generating synthetic missing data and a new algorithm, No-Skip kNN (NS-kNN), that accounts for MNAR values to provide more accurate imputations.

Methods

We compare the imputation errors of the original kNN algorithm (using two distance metrics), NS-kNN, and the recently developed KNN-TN algorithm when applied to multiple experimental datasets with different types and levels of missing data.
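NS-kNN itself is not reproduced here; the Python sketch below (standard kNN imputation from scikit-learn plus a simple left-censoring rule to mimic below-detection-limit MNAR values) illustrates the evaluation setup such a comparison relies on.

```python
import numpy as np
from sklearn.impute import KNNImputer

def censor_mnar(X, quantile=0.2):
    """Simulate 'missing not at random' values by censoring low abundances:
    values below each feature's given quantile are set to NaN, mimicking
    signals below the limit of detection."""
    X_miss = X.astype(float).copy()
    thresholds = np.quantile(X, quantile, axis=0)
    X_miss[X < thresholds] = np.nan
    return X_miss

def knn_imputation_rmse(X_true, X_miss, n_neighbors=5):
    """Root-mean-square error of kNN imputation on the censored entries."""
    X_imputed = KNNImputer(n_neighbors=n_neighbors).fit_transform(X_miss)
    mask = np.isnan(X_miss)
    return float(np.sqrt(np.mean((X_imputed[mask] - X_true[mask]) ** 2)))
```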

Results

Our results show that NS-kNN typically outperforms kNN when at least 20–30% of missing values in a dataset are MNAR. NS-kNN also has lower imputation errors than KNN-TN on realistic datasets when at least 50% of missing values are MNAR.

Conclusion

Accounting for the nonuniform distribution of missing values in metabolomics data can significantly improve the results of imputation algorithms. The NS-kNN method imputes missing metabolomics data more accurately than existing kNN-based approaches when used on realistic datasets.

17.

Introduction

Bisphenol A (BPA), 2,2-bis(4-hydroxyphenyl)propane, is a common industrial chemical produced worldwide in extremely large quantities and is ubiquitous in the environment. Humans are at high risk of exposure to BPA, and the health problems caused by BPA exposure have aroused public concern. However, biomarkers for BPA exposure are lacking. As a rapidly developing field, metabolomics has accumulated a large amount of valuable data across many areas. The secondary use of published metabolomics data could be a very promising avenue for generating novel biomarkers while furthering the understanding of toxicity mechanisms.

Objectives

To summarize the published literature on the use of metabolomics as a tool to study BPA exposure and to provide a systematic perspective on current research into biomarker screening for BPA exposure.

Methods

We conducted a systematic search of MEDLINE (PubMed) through June 25, 2017 using the key terms 'metabolomics', 'metabonomics', 'mass spectrometry', 'nuclear magnetic spectroscopy', 'metabolic profiling' and 'amino acid profile' combined with 'BPA exposure'. Additional articles were identified by searching the reference lists of included studies.

Results

This systematic review included 15 articles. Intermediates of glycolysis, the Krebs cycle, β-oxidation of long-chain fatty acids, the pentose phosphate pathway, nucleoside metabolism, branched-chain amino acid metabolism, aromatic amino acid metabolism and sulfur-containing amino acid metabolism were significantly changed after BPA exposure, suggesting that BPA has highly complex toxic effects on organisms, consistent with existing studies. The biomarkers most consistently associated with BPA exposure were lactate and choline.

Conclusion

Existing metabolomics studies of BPA exposure present heterogeneous findings regarding metabolite profile characteristics. More evidence from targeted metabolomics and epidemiological studies is needed to further examine the reliability of these biomarkers as indicators of low, environmentally relevant BPA exposure in the human body.

18.

Introduction

Metabolomics analysis depends on the identification and validation of specific metabolites. This task is significantly hampered by the absence of well-characterized reference standards. The one-carbon carrier 10-formyltetrahydrofolate acts as a donor of formyl groups in anabolism, where it is a substrate in formyltransferase reactions in purine biosynthesis. It has been reported as an unstable substance and is currently unavailable as a reference standard for metabolomics analysis.

Objectives

The current study was undertaken to provide the metabolomics community with thoroughly characterized 10-formyltetrahydrofolate, along with analytical methodology and guidelines for its storage and handling.

Methods

Anaerobic base treatment of 5,10-methenyltetrahydrofolate chloride in the presence of antioxidant was utilized to prepare 10-formyltetrahydrofolate.

Results

Pure 10-formyltetrahydrofolate has been prepared and physicochemically characterized. Conditions toward maintaining the stability of a solution of the dipotassium salt of 10-formyltetrahydrofolate have been determined.

Conclusion

This study describes the facile preparation of pure (>90%) 10-formyltetrahydrofolate, its qualitative physicochemical characterization, as well as conditions to enable its use as a reference standard in physiologic samples.

19.
20.

Introduction

Molecular factors are differentially observed in various bent sectors of poplar (Populus nigra) woody taproots. Responses to stress are modulated by a complex interplay among different hormones and signal transduction pathways. In recent years, metabolomics has been recognized as a powerful tool to characterize metabolic network regulation, and it has been widely applied to investigate plant responses to biotic and abiotic stresses.

Objectives

In this paper we used metabolomics to understand whether long-term bending stress induces a “spatial” and a “temporal” metabolic reprogramming in woody poplar roots.

Methods

Using NMR spectroscopy and statistical analysis, we investigated unstressed root sectors and three portions of the stressed root (above-bent, bent, and below-bent sectors) collected at 12 (T0), 13 (T1) and 14 (T2) months after stress induction.

Results

The data indicate a clear between-class separation of control and stressed regions based on metabolite regulation, across both spatial and temporal changes. We found that, as a consequence of the stress, taproots try to restore homeostasis and normal metabolic fluxes through the synthesis and/or accumulation of specific compounds related to the distribution of mechanical forces along the bent taproot.

Conclusion

The data demonstrate that the impact of mechanical stress on plant biology can efficiently be studied by NMR-based metabolomics.
