Similar Articles (20 results)
1.

Introduction

Data processing is one of the biggest problems in metabolomics, given the large number of samples analyzed and the need for multiple software packages to cover each step of the processing workflow.

Objectives

To merge the steps required for metabolomics data processing into a single platform.

Methods

KniMet is a workflow for the processing of mass-spectrometry metabolomics data based on the KNIME Analytics Platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.
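The processing steps listed above can be illustrated with a minimal pandas sketch: generic feature filtering, half-minimum imputation, and total-intensity normalization. This is not KniMet's actual KNIME implementation; the 50% missingness threshold and the half-minimum imputation rule are common heuristics assumed here for illustration.

```python
import numpy as np
import pandas as pd

def process_features(df, missing_frac=0.5):
    # 1. Feature filtering: drop features missing in too many samples
    df = df.loc[:, df.isna().mean(axis=0) <= missing_frac]
    # 2. Missing value imputation: half-minimum per feature (common heuristic)
    df = df.apply(lambda col: col.fillna(col.min() / 2))
    # 3. Normalization: total-intensity scaling per sample (row)
    return df.div(df.sum(axis=1), axis=0)

# toy intensity matrix: rows = samples, columns = features
raw = pd.DataFrame({"f1": [100.0, 120.0, np.nan],
                    "f2": [np.nan, np.nan, 50.0],
                    "f3": [10.0, 12.0, 11.0]})
clean = process_features(raw)  # "f2" is dropped (missing in 2/3 samples)
```

Batch correction and annotation, the remaining steps named above, depend on study design and spectral libraries and are omitted from this sketch.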

2.

Background

High-carbohydrate diets (HCD) are favoured by the aquaculture industry for economic reasons, but they can produce negative impacts on growth and induce hepatic steatosis. We hypothesised that the mechanism behind this is the reduction of hepatic betaine content.

Objective

We further explored this mechanism by supplementing betaine (1%) in the diet of the farmed fish Megalobrama amblycephala.

Methods

Four diet groups were designed: control (CD, 27.11% carbohydrates), high-carbohydrate (HCD, 36.75% carbohydrates), long-term betaine (LBD, 35.64% carbohydrates) and short-term betaine diet (SBD; 12 weeks HCD + 4 weeks LBD). We analysed growth performance, body composition, liver condition, and the expression of genes and profiles of metabolites associated with betaine metabolism.

Results

HCD resulted in poorer growth and liver health (compared to CD), whereas LBD improved these parameters (compared to HCD). HCD induced the expression of genes associated with glucose, serine and cystathionine metabolism, and (non-significantly, p = .20) of the betaine-catabolising enzyme betaine-homocysteine methyltransferase; it also decreased the content of betaine, methionine, S-adenosylhomocysteine and carnitine. Betaine supplementation (LBD) reversed these patterns, and elevated betaine-homocysteine methyltransferase, S-adenosylmethionine and S-adenosylhomocysteine (all p ≤ .05).

Conclusion

We hypothesise that HCD reduced the content of hepatic betaine by enhancing the activity of metabolic pathways from glucose to homocysteine, reflected in increased glycolysis, serine metabolism, cystathionine metabolism and homocysteine remethylation. Long-term dietary betaine supplementation ameliorated the negative impacts of HCD on growth parameters, body composition, liver condition, and betaine metabolism. However, betaine supplementation may have caused a temporary disruption of metabolic homeostasis.

3.

Background

This study estimates atrial repolarization activities (Ta waves), which are hidden most of the time from body surface electrocardiography when diagnosing cardiovascular diseases. The morphology of Ta waves has been proven to be an important early marker of inferior injury, such as acute atrial infarction, or of arrhythmia, such as atrial fibrillation. However, Ta waves are usually invisible except during conduction system malfunction, such as a long QT interval or atrioventricular block. Diagnosing heart disease from atrial repolarization is therefore impossible in sinus rhythm.

Methods

We obtain transmembrane potentials (TMPs) in the atrial part of the myocardium, which reflect the correct excitation sequence running from the atrium to the apex.

Results

The resulting TMPs reveal the hidden atrial component of the ECG waves.

Conclusions

This extraction makes diseases such as acute atrial infarction and arrhythmia much easier to diagnose.

4.

Introduction

Feces are easy to collect and give direct access to endogenous and microbial metabolites.

Objectives

Given the lack of consensus on fecal sample preparation, especially for animal species, we developed a robust protocol enabling untargeted LC-HRMS fingerprinting.

Methods

The conditions of extraction (quantity, preparation, solvents, dilutions) were investigated in bovine feces.

Results

A rapid and simple protocol involving feces extraction with methanol (1/3, M/V), followed by centrifugation and a filtration step (10 kDa cut-off), was developed.

Conclusion

The workflow generated repeatable and informative fingerprints for robust metabolome characterization.

5.

Introduction

Intrahepatic cholestasis of pregnancy (ICP) is a common maternal liver disease whose development can result in devastating consequences, including sudden fetal death and stillbirth. Currently, ICP is only recognized after the onset of clinical symptoms.

Objective

To investigate the maternal hair metabolome for predictive biomarkers of ICP.

Methods

The maternal hair metabolome (gestational age of sampling between 17 and 41 weeks) of 38 Chinese women with ICP and 46 pregnant controls was analysed using gas chromatography–mass spectrometry.

Results

Of 105 metabolites detected in hair, none were significantly associated with ICP.

Conclusion

Hair samples represent accumulative environmental exposure over time. Samples collected at the onset of ICP did not reveal any metabolic shifts, suggesting rapid development of the disease.

6.

Introduction

Quantification of tetrahydrofolates (THFs), important metabolites in the Wood–Ljungdahl pathway (WLP) of acetogens, is challenging given their sensitivity to oxygen.

Objective

To develop a simple anaerobic protocol to enable reliable THFs quantification from bioreactors.

Methods

Anaerobic cultures were mixed with anaerobic acetonitrile for extraction. Targeted LC–MS/MS was used for quantification.

Results

Tetrahydrofolates can only be quantified if sampled anaerobically. THF levels showed a strong correlation with acetyl-CoA, the end product of the WLP.

Conclusion

Our method is useful for relative quantification of THFs across different growth conditions. Absolute quantification of THFs requires the use of labelled standards.

7.

Introduction

Untargeted metabolomics workflows include numerous points where variance and systematic errors can be introduced. Given the diversity of the lipidome, manual peak picking and quantitation using molecule-specific internal standards are unrealistic; quality peak-picking algorithms and downstream feature-processing and normalization algorithms are therefore important. Subsequent normalization, data filtering, statistical analysis, and biological interpretation are simplified when quality data acquisition and feature processing are employed.

Objectives

Metrics for QC are important throughout the workflow. The robust workflow presented here provides techniques to ensure that QC checks are implemented throughout sample preparation, data acquisition, pre-processing, and analysis.

Methods

The untargeted lipidomics workflow includes sample standardization prior to acquisition, blocks of QC standards and blanks run at systematic intervals between randomized blocks of experimental data, blank feature filtering (BFF) to remove features not originating from the sample, and QC analysis of data acquisition and processing.

Results

The workflow was successfully applied to mouse liver samples, which were investigated to discern lipidomic changes throughout the development of nonalcoholic fatty liver disease (NAFLD). The workflow, including a novel filtering method, BFF, allows improved confidence in results and conclusions for lipidomic applications.

Conclusion

Using a mouse model developed for the study of the transition of NAFLD from an early stage known as simple steatosis, to the later stage, nonalcoholic steatohepatitis, in combination with our novel workflow, we have identified phosphatidylcholines, phosphatidylethanolamines, and triacylglycerols that may contribute to disease onset and/or progression.
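The blank feature filtering (BFF) idea above can be illustrated with a minimal sketch: a feature is kept only when its mean intensity in experimental samples clearly exceeds its mean intensity in blank injections. The threefold threshold and the column names are illustrative assumptions, not the paper's exact criterion.

```python
import pandas as pd

def blank_feature_filter(samples, blanks, fold=3.0):
    # keep only features whose mean sample intensity exceeds `fold` times
    # their mean intensity in blank injections
    keep = samples.mean(axis=0) > fold * blanks.mean(axis=0)
    return samples.loc[:, keep]

# toy data: one genuine lipid feature, one solvent background feature
samples = pd.DataFrame({"lipid_a": [900.0, 1100.0],
                        "solvent_peak": [300.0, 320.0]})
blanks = pd.DataFrame({"lipid_a": [50.0, 60.0],
                       "solvent_peak": [290.0, 310.0]})
filtered = blank_feature_filter(samples, blanks)  # "solvent_peak" removed
```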

8.

Introduction

Data sharing is being increasingly required by journals and has been heralded as a solution to the ‘replication crisis’.

Objectives

(i) To review the data sharing policies of journals publishing the most metabolomics papers associated with open data, and (ii) to compare these policies with those of the journals publishing the most metabolomics papers overall.

Methods

A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.

Results

Journals that support data sharing are not necessarily those with the most papers associated with open metabolomics data.

Conclusion

Further efforts are required to improve data sharing in metabolomics.

9.
10.

Background

Until recently, plant metabolomics has provided a deep understanding of metabolic regulation in individual plants as experimental units. Applying these techniques to agricultural systems subject to more complex interactions is a step towards implementing translational metabolomics in crop breeding.

Aim of Review

We review advances achieved in recent years through the application of metabolomic techniques, which have evolved from biomarker discovery towards improving crop yield and quality.

Key Scientific Concepts of Review

Translational metabolomics applied to crop breeding programs.

11.

Introduction

Aqueous–methanol mixtures have successfully been applied to extract a broad range of metabolites from plant tissue. However, a certain amount of material remains insoluble.

Objectives

To enlarge the metabolic compendium, two ionic liquids were selected to extract the methanol insoluble part of trunk from Betula pendula.

Methods

The extracted compounds were analyzed by LC/MS and GC/MS.

Results

The results show that 1-butyl-3-methylimidazolium acetate (IL-Ac) predominantly extracted fatty acids, whereas 1-ethyl-3-methylimidazolium tosylate (IL-Tos) mostly yielded phenolic structures. Interestingly, bark yielded more ionic-liquid-soluble metabolites than interior wood.

Conclusion

We conclude that the application of ionic liquids may expand the metabolic snapshot.

12.

Background

In recent years the visualization of biomagnetic measurement data by so-called pseudo current density maps or Hosaka-Cohen (HC) transformations became popular.

Methods

The physical basis of these intuitive maps is clarified by means of analytically solvable problems.

Results

Examples in magnetocardiography, magnetoencephalography and magnetoneurography demonstrate the usefulness of this method.

Conclusion

Hardware realizations of the HC-transformation and some similar transformations are discussed which could advantageously support cross-platform comparability of biomagnetic measurements.
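A pseudo current density map of this kind is commonly computed from the in-plane gradient of the normal field component Bz. The following is a minimal numerical sketch assuming the standard rotated-gradient form c = (dBz/dy, -dBz/dx) on a regular grid; the toy field is an assumption for illustration, not data from the paper.

```python
import numpy as np

def pseudo_current_density(Bz, dx=1.0, dy=1.0):
    # pseudo current c = (dBz/dy, -dBz/dx): the 90-degree-rotated gradient
    dBz_dy, dBz_dx = np.gradient(Bz, dy, dx)
    return dBz_dy, -dBz_dx

# toy Bz map resembling the field of a tangential current element at the origin
y, x = np.mgrid[-5:6, -5:6].astype(float)
Bz = x / (x**2 + y**2 + 1.0) ** 1.5
cx, cy = pseudo_current_density(Bz)  # magnitude concentrates near the source
```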

13.

Background

Seeds host bacterial inhabitants, but only limited knowledge is available on which taxa inhabit seeds, which niches can be colonized, and what the routes of colonization are.

Scope

This commentary discusses seed bacterial inhabitants, their taxa, and where seed colonizers derive from.

Conclusions

Seeds and grains host specific bacteria deriving from the anthosphere, the carposphere, or the cones of gymnosperms, as well as from inner plant tissues after a long colonization route from the soil to the reproductive organs.

14.

Background

Centrifugation is an indispensable procedure for plasma sample preparation, but applied conditions can vary between labs.

Aim

To determine whether routinely used plasma centrifugation protocols (1500×g, 10 min; 3000×g, 5 min) influence non-targeted metabolomic analyses.

Methods

Nuclear magnetic resonance (NMR) spectroscopy and high-resolution mass spectrometry (HRMS) data were evaluated with sparse partial least squares discriminant analyses and compared with cell count measurements.

Results

Besides significant differences in platelet count, we identified substantial alterations in NMR and HRMS data related to the different centrifugation protocols.

Conclusion

Even minor differences in plasma centrifugation can significantly influence metabolomic patterns and thus potentially bias metabolomics studies.

15.

Introduction

Untargeted metabolomics is a powerful tool for biological discoveries. To analyze the complex raw data, significant advances in computational approaches have been made, yet it is not clear how exhaustive and reliable the data analysis results are.

Objectives

Assessment of the quality of raw data processing in untargeted metabolomics.

Methods

Five published untargeted metabolomics studies were reanalyzed.

Results

For each study, omissions of at least 50 relevant compounds from the original results, as well as examples of representative mistakes, were identified.

Conclusion

Incomplete raw data processing leaves unexplored potential in current and legacy data.

16.

Background

New technologies for the acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing.

Methods

The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought to reduce the multiple-testing burden through various approaches to the aggregation of high-dimensional data in pathways informed by prior biological knowledge.

Results

Experimental methods tested included: the use of "synthetic pathways" (random sets of genes) to estimate the power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and the use of gene sets to estimate genetic similarity; and a general assessment of the efficacy of prior biological knowledge in reducing the dimensionality of complex genomic data.

Conclusions

The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.
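The "synthetic pathways" idea can be sketched with a toy calibration experiment: draw random gene sets from null gene-level scores and check how often a simple aggregate test is declared significant. Under the null, the empirical rate should approximate the chosen alpha. The mean-score statistic and all parameters are illustrative assumptions, not GAW19's exact procedures.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.standard_normal(5000)  # null gene-level association scores
set_size, alpha = 20, 0.05

def random_set_mean():
    # aggregate statistic for one "synthetic pathway":
    # the mean score of a randomly drawn gene set
    return scores[rng.choice(scores.size, set_size, replace=False)].mean()

# empirical null distribution of the statistic and its 5% cut-off
null = np.array([random_set_mean() for _ in range(2000)])
threshold = np.quantile(null, 1.0 - alpha)

# fraction of fresh synthetic pathways called significant under the null
fpr = np.mean([random_set_mean() > threshold for _ in range(500)])  # ~alpha
```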

17.

Introduction

Untargeted and targeted analyses are two classes of metabolic study. Both strategies have been advanced by high-resolution mass spectrometers coupled with chromatography, which offer high mass sensitivity and accuracy. However, state-of-the-art methods for mass spectrometric data sets do not always quantify metabolites of interest in a targeted assay efficiently and accurately.

Objectives

TarMet can quantify targeted metabolites as well as their isotopologues through a reactive and user-friendly graphical user interface.

Methods

TarMet accepts vendor-neutral data files (NetCDF, mzXML and mzML) as input. It then extracts ion chromatograms, detects peak positions and bounds, and confirms metabolites via their isotope patterns. It can integrate peak areas for all isotopologues automatically.

Results

TarMet detects more isotopologues and quantifies them better than state-of-the-art methods, and it handles isotope tracer assays well.

Conclusion

TarMet is a better tool for targeted metabolic and stable isotope tracer analyses.
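The isotopologue series that such tools extract can be computed from the monoisotopic m/z alone: successive isotopologues of an ion are spaced by roughly the 13C-12C mass difference divided by the charge. This is an illustrative sketch, not TarMet's actual code; the glucose example value is an assumption for demonstration.

```python
C13_C12_DELTA = 1.00336  # mass difference between 13C and 12C, in Da

def isotopologue_mz(mono_mz, n, charge=1):
    # m/z values for the M+0 ... M+n isotopologues of an ion
    return [mono_mz + i * C13_C12_DELTA / charge for i in range(n + 1)]

# worked example: glucose [M-H]- (monoisotopic m/z assumed to be 179.0561)
mzs = isotopologue_mz(179.0561, 2)  # approx. [179.0561, 180.0595, 181.0628]
```

In a targeted assay, each of these m/z values would define a narrow extraction window for its ion chromatogram.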

18.

Background

Accurately predicting pathogenic human genes has been challenging in recent research. Given the extensive gene–disease data verified by biological experiments, computational methods can deliver accurate predictions at reduced time and expense.

Methods

We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data from humans and other species, are integrated into our model. First, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Second, we develop a modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned model. This model also uses vector space similarity or Pearson correlation coefficient metrics and data on related species.

Results

We compared the results of PCFM with those of four state-of-the-art approaches. The results show that PCFM performs better than the other advanced approaches.

Conclusions

The PCFM model can be leveraged for the prediction of disease genes, especially for new human genes or diseases with no known relationships.
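The kind of regularized latent factorization that PCFM builds on can be sketched on a toy association matrix: factor the observed gene-disease matrix as U V^T, penalize factor norms, and read the completed entry for a held-out pair. This is an illustrative toy, not the paper's model I or II; the matrix, rank and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def factorize(R, mask, k=2, lam=0.1, lr=0.05, epochs=1000):
    # gradient descent on ||mask * (U V^T - R)||^2 + lam (||U||^2 + ||V||^2)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    for _ in range(epochs):
        E = mask * (U @ V.T - R)      # error on observed entries only
        U -= lr * (E @ V + lam * U)
        V -= lr * (E.T @ U + lam * V)
    return U, V

# toy gene-disease association matrix with one association held out
R = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
mask = np.ones_like(R)
mask[0, 1] = 0.0                      # hide one known association
U, V = factorize(R, mask)
pred = (U @ V.T)[0, 1]                # completed score for the hidden pair
```

The low-rank structure lets the model recover a high score for the hidden pair from the remaining observations, which is the mechanism behind predicting unseen gene-disease links.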

19.

Introduction

Mass spectrometry imaging (MSI) experiments result in complex multi-dimensional datasets, which require specialist data analysis tools.

Objectives

We have developed massPix, an R package for analysing and interpreting data from MSI of lipids in tissue.

Methods

massPix produces single ion images, performs multivariate statistics and provides putative lipid annotations based on accurate mass matching against generated lipid libraries.

Results

Classification of tissue regions with high spectral similarity can be carried out by principal components analysis (PCA) or k-means clustering.

Conclusion

massPix is an open-source tool for the analysis and statistical interpretation of MSI data, and is particularly useful for lipidomics applications.
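The classification idea, pixels with similar spectra grouping together under PCA, can be demonstrated with a small numpy sketch (massPix itself is an R package; the synthetic two-region data and three-ion spectra here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# two synthetic tissue regions with different lipid profiles
# (50 pixels each, intensities of 3 ions per pixel)
region_a = rng.normal([10.0, 1.0, 5.0], 0.5, size=(50, 3))
region_b = rng.normal([1.0, 10.0, 5.0], 0.5, size=(50, 3))
spectra = np.vstack([region_a, region_b])

# PCA via SVD on the mean-centred intensity matrix
X = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]  # scores on the first principal component

# pixels from the two regions fall on opposite sides of PC1
split = np.sign(pc1[:50].mean()) != np.sign(pc1[50:].mean())
```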

20.

Background

Time course measurement of single molecules on a cell surface provides detailed information about the dynamics of the molecules that would otherwise be inaccessible. To extract quantitative information, single particle tracking (SPT) is typically performed. However, trajectories extracted by SPT inevitably contain linking errors when the diffusion speed of single molecules is high relative to the particle density.

Methods

To circumvent this problem, we develop an algorithm to estimate diffusion constants without relying on SPT. The proposed algorithm is based on a probabilistic model of the distance to the nearest point in subsequent frames. This probabilistic model generalizes the model of single particle Brownian motion under an isolated environment into the one surrounded by indistinguishable multiple particles, with a mean field approximation.

Results

We demonstrate that the proposed algorithm provides reasonable estimation of diffusion constants, even when other methods suffer due to high particle density or inhomogeneous particle distribution. In addition, our algorithm can be used for visualization of time course data from single molecular measurements.

Conclusions

The proposed algorithm, based on the probabilistic model of indistinguishable Brownian particles, provides accurate estimates of diffusion constants even in the regime where traditional SPT methods underestimate them due to linking errors.
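The core idea, estimating D from distances to the nearest point in the subsequent frame without linking trajectories, can be sketched in the sparse regime where the nearest point is almost always the same particle (for 2-D Brownian motion, the mean squared displacement is 4 D dt). This is a simplified illustration under assumed simulation parameters; the paper's full estimator additionally corrects for surrounding indistinguishable particles via a mean-field term.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_D(frames, dt):
    # mean squared distance to the nearest point in the next frame;
    # for sparse 2-D Brownian particles, E[d^2] = 4 * D * dt
    sq = []
    for a, b in zip(frames[:-1], frames[1:]):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        sq.append(d2.min(axis=1))  # nearest point, no trajectory linking
    return np.concatenate(sq).mean() / (4.0 * dt)

# simulate 20 well-separated Brownian particles with D = 0.1
D_true, dt = 0.1, 1.0
pos = rng.uniform(0.0, 100.0, size=(20, 2))
frames = [pos]
for _ in range(199):
    pos = pos + rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=pos.shape)
    frames.append(pos)

D_hat = estimate_D(frames, dt)  # close to D_true at this low density
```

At higher densities the nearest point increasingly belongs to a different particle, which is exactly the regime the paper's mean-field correction addresses.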
