Similar Literature

20 similar documents retrieved.
1.

Introduction

Global metabolomics analyses using body fluids provide valuable results for the understanding and prediction of diseases. However, the mechanism of a disease is often tissue-based, and it is therefore advantageous to analyze metabolomic changes directly in the tissue. Metabolomics from tissue samples faces many challenges, such as tissue collection, homogenization, and metabolite extraction.

Objectives

We aimed to establish a metabolite extraction protocol optimized for tissue metabolite quantification by the targeted metabolomics AbsoluteIDQ® p180 Kit (Biocrates). The extraction method should be non-selective, applicable to different kinds and amounts of tissues, monophasic, reproducible, and amenable to high throughput.

Methods

We quantified metabolites in samples of eleven murine tissues after extraction with three solvents (methanol, phosphate buffer, and an ethanol/phosphate buffer mixture) at two tissue-to-solvent ratios, and analyzed the extraction yield, ionization efficiency, and reproducibility.
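The selection logic behind such a comparison can be sketched in a few lines. This is a hedged illustration only, not the authors' analysis code: the replicate peak areas in `yields` are invented, and the ranking criteria (mean yield, then coefficient of variation as a reproducibility proxy) simply mirror the metrics named above.

```python
import statistics

# Invented replicate peak areas for one metabolite per extraction solvent.
yields = {
    "methanol":          [8.1e5, 7.9e5, 8.4e5],
    "phosphate_buffer":  [2.3e5, 3.1e5, 1.8e5],
    "ethanol_phosphate": [7.6e5, 7.8e5, 7.5e5],
}

def summarize(areas):
    """Return mean extraction yield and coefficient of variation (CV, %)."""
    mean = statistics.mean(areas)
    cv = 100 * statistics.stdev(areas) / mean
    return mean, cv

for solvent, areas in yields.items():
    mean, cv = summarize(areas)
    print(f"{solvent:>17}: mean yield {mean:.3g}, CV {cv:.1f}%")

# Rank solvents: prefer high mean yield, break ties by low CV.
best = max(yields, key=lambda s: (summarize(yields[s])[0], -summarize(yields[s])[1]))
print("best solvent for this metabolite:", best)
```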

Results

We found methanol and ethanol/phosphate buffer to be superior to phosphate buffer in regard to extraction yield, reproducibility, and ionization efficiency for the majority of the metabolites measured. Phosphate buffer, however, outperformed both organic solvents for amino acids and biogenic amines, but yielded unsatisfactory results for lipids. The observed matrix effects of tissue extracts were smaller than, or in a similar range to, those of human plasma.

Conclusion

We provide, for each murine tissue type, an optimized high-throughput metabolite extraction protocol that yields the best results for extraction, reproducibility, and quantification of the metabolites in the p180 kit. Although the performance of the extraction protocols was monitored with the p180 kit, the protocols should be applicable to other targeted metabolomics assays as well.

2.

Background

In recent years, the visualization of biomagnetic measurement data by so-called pseudo current density maps, or Hosaka-Cohen (HC) transformations, has become popular.

Methods

The physical basis of these intuitive maps is clarified by means of analytically solvable problems.
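For orientation, the pseudo current density map usually attributed to Hosaka and Cohen rotates the in-plane gradient of the measured normal field component by 90°, so that the arrows mimic the underlying current distribution. The expression below is the standard textbook form, not necessarily the exact notation used in the paper:

```latex
% Hosaka-Cohen pseudo current density computed from the normal component B_z:
\vec{c}(x,y)
  = \frac{\partial B_z}{\partial y}\,\hat{e}_x
  - \frac{\partial B_z}{\partial x}\,\hat{e}_y
  = \nabla B_z \times \hat{e}_z .
```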

Results

Examples in magnetocardiography, magnetoencephalography and magnetoneurography demonstrate the usefulness of this method.

Conclusion

Hardware realizations of the HC transformation and some similar transformations are discussed; these could advantageously support cross-platform comparability of biomagnetic measurements.

3.

Introduction

Collecting feces is easy, and it offers direct access to endogenous and microbial metabolites.

Objectives

Given the lack of consensus about fecal sample preparation, especially for animal species, we developed a robust protocol enabling untargeted LC-HRMS fingerprinting.

Methods

The conditions of extraction (quantity, preparation, solvents, dilutions) were investigated in bovine feces.

Results

A rapid and simple protocol was developed, involving feces extraction with methanol (1/3, M/V) followed by centrifugation and a filtration step (10 kDa cut-off).

Conclusion

The workflow generated repeatable and informative fingerprints for robust metabolome characterization.

4.

Background

The significant advancement in mobile sensing technologies has generated great interest in application development for the Internet of Things (IoT). With the advantages of contactless data retrieval and efficient data processing offered by intelligent IoT-based objects, versatile and innovative types of on-demand medical services have promptly been developed and deployed. Critical characteristics of the underlying data processing and operation must be considered thoroughly. To achieve efficient data retrieval and robust communication among IoT-based objects, sturdy security primitives are required to preserve data confidentiality and entity authentication.

Methods

A robust nursing-care support system is developed for efficient and secure communication among mobile bio-sensors, active intelligent objects, the IoT gateway, and the backend nursing-care server, on which further data analysis can be performed to provide high-quality, on-demand nursing-care services.

Results

We implemented the system on an IoT-based testbed, the Raspberry Pi 2 platform, to demonstrate the practicability of the proposed IoT-oriented nursing-care support system, in which a modest computation cost of only 6.33 ms is required for a normal session. Based on the protocol analysis we conducted, the security robustness of the proposed nursing-care support system is guaranteed.
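The abstract does not spell out the security primitives, so the sketch below shows only the generic pattern of lightweight entity authentication between a bio-sensor and an IoT gateway with a pre-shared key; all names (`SENSOR_KEY`, `challenge`, the message format) are illustrative assumptions, not the paper's protocol.

```python
import hmac
import hashlib
import os

# Pre-shared key provisioned to both the bio-sensor and the IoT gateway
# (an assumption for this sketch; the proposed scheme may differ).
SENSOR_KEY = os.urandom(32)

def gateway_issue_challenge() -> bytes:
    """The gateway sends a fresh random nonce to prevent replay attacks."""
    return os.urandom(16)

def sensor_respond(key: bytes, challenge: bytes, reading: bytes) -> bytes:
    """The sensor authenticates its reading by MACing nonce plus payload."""
    return hmac.new(key, challenge + reading, hashlib.sha256).digest()

def gateway_verify(key: bytes, challenge: bytes, reading: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, challenge + reading, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

challenge = gateway_issue_challenge()
reading = b"heart_rate=72"
tag = sensor_respond(SENSOR_KEY, challenge, reading)
assert gateway_verify(SENSOR_KEY, challenge, reading, tag)
```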

Conclusions

According to the protocol analysis and performance evaluation, the practicability of the proposed method is demonstrated. In brief, our proposed system is well suited to IoT-based environments and is a highly competitive candidate for the next generation of nursing-care service systems.

5.

Introduction

Data sharing is being increasingly required by journals and has been heralded as a solution to the ‘replication crisis’.

Objectives

(i) To review the data sharing policies of the journals publishing the most metabolomics papers associated with open data, and (ii) to compare these journals' policies with those of the journals publishing the most metabolomics papers overall.

Methods

A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.

Results

Journals that support data sharing are not necessarily those with the most papers associated with open metabolomics data.

Conclusion

Further efforts are required to improve data sharing in metabolomics.

6.

Introduction

Adoption of automatic profiling tools for 1H-NMR-based metabolomic studies still lags behind other approaches, because current tools lack the flexibility and interactivity necessary to adapt to the properties of study data sets of complex matrices.

Objectives

To provide an open source tool that fully integrates these needs and enables the reproducibility of the profiling process.

Methods

rDolphin incorporates novel techniques to optimize exploratory analysis, metabolite identification, and validation of profiling output quality.

Results

rDolphin maximized the information recovered and the quality of the profiling output in two public data sets of complex matrices.

Conclusion

rDolphin is an open-source R package (http://github.com/danielcanueto/rDolphin) able to provide the best balance between accuracy, reproducibility and ease of use.

7.

Introduction

We present the first study to critically appraise the quality of reporting of the data analysis step in metabolomics studies since the publication of minimum reporting guidelines in 2007.

Objectives

The aim of this study was to assess the standard of reporting of the data analysis step in metabolomics biomarker discovery studies and to investigate whether the level of detail supplied allows basic understanding of the steps employed and/or reuse of the protocol. For the purposes of this review we define the data analysis step to include the data pretreatment step and the actual data analysis step, which covers algorithm selection, univariate analysis and multivariate analysis.

Method

We reviewed the literature to identify metabolomic studies of biomarker discovery that were published between January 2008 and December 2014. Studies were examined for completeness in reporting the various steps of the data pretreatment phase and data analysis phase and also for clarity of the workflow of these sections.

Results

We analysed 27 papers published between January 2008 and the end of 2014 in the area of biomarker discovery in serum metabolomics. The results of this review showed that the data analysis step in metabolomics biomarker discovery studies is plagued by unclear and incomplete reporting. Major omissions and a lack of logical flow render the data analysis workflows in these studies impossible to follow, and therefore to replicate or even imitate.

Conclusions

While we await the holy grail of computational reproducibility in data analysis to become standard, we propose that, at a minimum, the data analysis section of a metabolomics study should be readable and interpretable without omissions, such that a data analysis workflow diagram can be extrapolated from the study and the data analysis protocol reused by the reader. That inconsistent and patchy reporting obfuscates reproducibility is a given; however, even basic understanding and reuse of protocols are hampered by the low level of detail supplied in the data analysis sections of the studies that we reviewed.

8.

Objective

To compare four enzymatic protocols for mesenchymal stem cell (MSC) isolation from amniotic (A-MSC) and chorionic (C-MSC) membranes, umbilical cord (UC-MSC) and placental decidua (D-MSC), in order to define a robust, practical and low-cost protocol for each tissue.

Results

A-MSCs and UC-MSCs could be isolated from all samples using trypsin/collagenase-based protocols; C-MSCs could be isolated from all samples with collagenase- and trypsin/collagenase-based protocols; D-MSCs were isolated from all samples exclusively with a collagenase-based protocol.

Conclusions

The trypsin-only protocol was the least efficient; the collagenase-only protocol was best for C-MSCs and D-MSCs; the combination of trypsin and collagenase was best for UC-MSCs; and none of the tested protocols was adequate for A-MSC isolation.

9.
10.

Introduction

Quantification of tetrahydrofolates (THFs), important metabolites in the Wood–Ljungdahl pathway (WLP) of acetogens, is challenging given their sensitivity to oxygen.

Objective

To develop a simple anaerobic protocol to enable reliable THFs quantification from bioreactors.

Methods

Anaerobic cultures were mixed with anaerobic acetonitrile for extraction. Targeted LC–MS/MS was used for quantification.

Results

Tetrahydrofolates can only be quantified if sampled anaerobically. THF levels showed a strong correlation to acetyl-CoA, the end product of the WLP.

Conclusion

Our method is useful for relative quantification of THFs across different growth conditions. Absolute quantification of THFs requires the use of labelled standards.

11.

Introduction

Untargeted metabolomics is a powerful tool for biological discoveries. To analyze the complex raw data, significant advances in computational approaches have been made, yet it is not clear how exhaustive and reliable the data analysis results are.

Objectives

Assessment of the quality of raw data processing in untargeted metabolomics.

Methods

Five published untargeted metabolomics studies were reanalyzed.

Results

For each study, we report omissions of at least 50 relevant compounds from the original results, as well as examples of representative mistakes.

Conclusion

Incomplete raw data processing means that current and legacy data hold substantial unexplored potential.

12.

Background

Applications in biomedical science and life science produce large data sets using increasingly powerful imaging devices and computer simulations. It is becoming increasingly difficult for scientists to explore and analyze these data using traditional tools. Interactive data processing and visualization tools can support scientists to overcome these limitations.

Results

We show that new data processing tools and visualization systems can be used successfully in biomedical and life science applications. We present an adaptive high-resolution display system suitable for biomedical image data, algorithms for analyzing and visualizing protein surfaces and retinal optical coherence tomography data, and visualization tools for 3D gene expression data.

Conclusion

We demonstrated that interactive processing and visualization methods and systems can support scientists in a variety of biomedical and life science application areas concerned with massive data analysis.

13.

Introduction

Data processing is one of the biggest problems in metabolomics, given the high number of samples analyzed and the need for multiple software packages at each step of the processing workflow.

Objectives

To merge the steps required for metabolomics data processing into a single platform.

Methods

KniMet is a workflow for the processing of mass spectrometry metabolomics data, based on the KNIME Analytics Platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.
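KniMet itself is distributed as a KNIME workflow; as a language-neutral illustration of the five listed steps, here is a minimal pandas sketch. The thresholds, the imputation rule (half-minimum), and the median-centring batch correction are common defaults chosen for brevity, not necessarily KniMet's own settings.

```python
import numpy as np
import pandas as pd

# Hypothetical feature table: rows = samples, columns = metabolite features.
data = pd.DataFrame(np.random.lognormal(size=(6, 4)),
                    columns=["f1", "f2", "f3", "f4"])
data.iloc[0, 1] = np.nan                       # simulate a missing value
batch = pd.Series([1, 1, 1, 2, 2, 2], index=data.index)

# 1. Feature filtering: drop features missing in more than 50% of samples.
data = data.loc[:, data.isna().mean() <= 0.5]

# 2. Missing value imputation: half of the feature's minimum observed value.
data = data.apply(lambda col: col.fillna(col.min() / 2))

# 3. Normalization: scale each sample to its total signal.
data = data.div(data.sum(axis=1), axis=0)

# 4. Batch correction (simplified): remove per-batch median shifts,
#    then restore each feature's overall median level.
grand_median = data.median()
data = data.groupby(batch).transform(lambda g: g - g.median()) + grand_median

# 5. Annotation would follow: matching features against a reference
#    library of metabolite identities (not shown).
print(data.round(3))
```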

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.
  相似文献   

14.

Introduction

Processing delays after blood collection are a common pre-analytical condition in large epidemiologic studies. It is critical to evaluate the suitability of blood samples with processing delays for metabolomics analysis, as such delays are a potential source of variation that could attenuate associations between metabolites and disease outcomes.

Objectives

We aimed to evaluate the reproducibility of metabolites over extended processing delays of up to 48 h. We also aimed to test the reproducibility of the metabolomics platform.

Methods

Blood samples were collected from 18 healthy volunteers. Blood was stored in the refrigerator and processed for plasma at 0, 15, 30, and 48 h after collection. Plasma samples were metabolically profiled using an untargeted, ultrahigh performance liquid chromatography–tandem mass spectrometry (UPLC–MS/MS) platform. Reproducibility of 1012 metabolites over processing delays and reproducibility of the platform were determined by intraclass correlation coefficients (ICCs) with variance components estimated from mixed-effects models.
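The reported ICCs can be made concrete: with the variance split into between-subject and residual (within-subject) components, the ICC is the fraction of total variance attributable to subjects. The method-of-moments estimator below is a simplified one-way stand-in for the paper's mixed-effects fit, with simulated data in place of the real measurements.

```python
import numpy as np

def icc_oneway(x: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) from ANOVA mean squares.

    x: array of shape (n_subjects, k_replicates), e.g. one metabolite
    measured per volunteer at each processing-delay time point.
    """
    n, k = x.shape
    subject_means = x.mean(axis=1)
    ms_between = k * np.sum((subject_means - x.mean()) ** 2) / (n - 1)
    ms_within = np.sum((x - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
subject_effect = rng.normal(0.0, 2.0, size=(18, 1))   # 18 volunteers
noise = rng.normal(0.0, 0.5, size=(18, 4))            # 0-, 15-, 30-, 48-h samples
print(f"ICC = {icc_oneway(10 + subject_effect + noise):.2f}")  # near 1 = reproducible
```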

Results

The majority of metabolites (approximately 70% of 1012) were highly reproducible (ICCs ≥ 0.75) over 15-, 30- or 48-h processing delays. Nucleotides, energy-related metabolites, peptides, and carbohydrates were most affected by processing delays. The platform was highly reproducible, with a median technical ICC of 0.84 (interquartile range 0.68–0.93).

Conclusion

Most metabolites measured by the UPLC–MS/MS platform show acceptable reproducibility for processing delays of up to 48 h. Metabolites of certain pathways need to be interpreted cautiously in relation to outcomes in epidemiologic studies with prolonged processing delays.

15.

Background

The identification of suitable patients is a common problem in clinical trials that is especially evident in tertiary care hospitals.

Methods

We developed and analysed a workflow that uses routine data captured during patient care in a hospital information system (HIS) to identify potential trial subjects. Study nurses or physicians are notified automatically by email and then verify eligibility.
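A stripped-down version of this notification workflow might look as follows. The record fields, the eligibility rule, and the addresses are hypothetical; the actual system queried live HIS data rather than an in-memory list.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical routine records (in reality queried from the HIS database).
patients = [
    {"id": "P001", "diagnosis": "AML", "age": 54, "enrolled": False},
    {"id": "P002", "diagnosis": "CLL", "age": 61, "enrolled": False},
]

def is_candidate(p: dict) -> bool:
    """Screening rule (illustrative): not-yet-enrolled adult AML patients."""
    return p["diagnosis"] == "AML" and p["age"] >= 18 and not p["enrolled"]

def notify(candidate: dict) -> None:
    """Email the study nurse, who then verifies eligibility manually."""
    msg = EmailMessage()
    msg["Subject"] = f"Potential AML trial subject: {candidate['id']}"
    msg["From"] = "his-screening@example.org"
    msg["To"] = "study-nurse@example.org"
    msg.set_content("Please verify trial eligibility in the HIS.")
    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
        smtp.send_message(msg)

for p in filter(is_candidate, patients):
    notify(p)
```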

Results

As a case study, we implemented the system for acute myeloid leukemia (AML) trials in Münster. During a test period of 50 days, 41 patients were identified by the system; 13 could be included as new trial patients, and 7 had already been included during earlier visits. According to a review of paper records, no AML trial patient was missed by the system. In addition, the hospital information system allowed patients to be preselected for specific trials based on their disease status and individual characteristics.

Conclusion

Routine HIS data can be used to support patient recruitment for clinical trials by means of an automated notification workflow.

16.

Aims

The readily available global rock phosphate (P) reserves may be depleted within the next 50–130 years, warranting careful use of this finite resource. We develop a model that allows us to assess a range of P fertiliser and soil management strategies for barley in order to find the one that maximises plant P uptake under given climate conditions.

Methods

Our model describes the development of the P and water profiles within the soil. Current cultivation techniques, such as ploughing and reduced tillage, are simulated, along with fertiliser options that feed the topsoil or the soil directly below the seed.
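The governing equations are not given in the abstract; models of this kind are commonly built on a buffered convection-diffusion equation for the soil-solution P concentration c, of the general form below (θ: volumetric water content, b: soil buffer power, D: diffusion coefficient, **u**: Darcy water flux, F(c): root uptake sink). This is a generic textbook form, not necessarily the paper's exact formulation:

```latex
(\theta + b)\,\frac{\partial c}{\partial t}
  = \nabla \cdot \left( D\,\theta\,\nabla c - \mathbf{u}\,c \right) - F(c)
```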

Results

Our model was able to fit data from two barley field trials, achieving a good fit at early growth stages but a poor fit at late growth stages, where the model underestimated plant P uptake. A well-mixed soil (inverted and 25 cm ploughing) is important for optimal plant P uptake and provides the best environment for the root system.

Conclusions

The model is sensitive to the initial state of P and its distribution within the soil profile, experimental parameters that are only sparsely measured. The combination of modelling and experimental data provides useful agricultural predictions for site-specific locations.

17.

Background

A clinical decision support system can effectively overcome the limitations of an individual doctor's knowledge and reduce the possibility of misdiagnosis, thereby enhancing health care. Traditional genetic data storage and analysis methods based on stand-alone environments cannot meet the computational requirements of rapidly growing genetic data because of their limited scalability.

Methods

In this paper, we propose a distributed gene clinical decision support system, named GCDSS, and implement a prototype based on cloud computing technology. We also present CloudBWA, a novel distributed read-mapping algorithm that leverages a batch processing strategy to map reads on Apache Spark.
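CloudBWA's internals are not detailed in the abstract; the PySpark sketch below illustrates only the batch-processing pattern the description names: reads are partitioned, each partition is aligned as one batch, and results are collected. The `align_batch` stub stands in for a real aligner invocation (e.g. piping the batch to BWA-MEM) and is not CloudBWA's API.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-read-mapping-sketch").getOrCreate()

def align_batch(reads):
    """Stub batch aligner: a real system would hand the whole partition
    to BWA-MEM (e.g. via a subprocess) instead of formatting strings."""
    return [(read_id, f"aligned({seq[:8]}...)") for read_id, seq in reads]

# Hypothetical FASTQ-like input: (read id, sequence) pairs.
reads = [(f"read{i}", "ACGT" * 25) for i in range(1000)]

alignments = (
    spark.sparkContext
         .parallelize(reads, numSlices=16)   # 16 partitions = 16 batches
         .mapPartitions(align_batch)         # one aligner call per batch
         .collect()
)
print(len(alignments), "reads mapped")
spark.stop()
```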

Results

Experiments show that the distributed gene clinical decision support system GCDSS and the distributed read mapping algorithm CloudBWA have outstanding performance and excellent scalability. Compared with state-of-the-art distributed algorithms, CloudBWA achieves up to 2.63 times speedup over SparkBWA. Compared with stand-alone algorithms, CloudBWA with 16 cores achieves up to 11.59 times speedup over BWA-MEM with 1 core.

Conclusions

GCDSS is a distributed gene clinical decision support system based on cloud computing techniques, into which we incorporated a distributed genetic data analysis pipeline framework. To boost the data processing of GCDSS, we propose CloudBWA, a novel distributed read-mapping algorithm that leverages a batch processing technique in the mapping stage on the Apache Spark platform.

18.
19.

Introduction

The generic metabolomics data processing workflow is constructed from a serial set of processes, including peak picking, quality assurance, normalisation, missing value imputation, transformation and scaling. The combination of these processes should present the experimental data in an appropriate structure so as to identify the biological changes in a valid and robust manner.

Objectives

Currently, different researchers apply different data processing methods, and no assessment of the permutations applied to UHPLC-MS datasets has been published. Here we wish to define the most appropriate data processing workflow.

Methods

We assess the influence of normalisation, missing value imputation, transformation and scaling methods on univariate and multivariate analysis of UHPLC-MS datasets acquired for different mammalian samples.

Results

Our studies have shown that once data are filtered, missing values are not correlated with m/z, retention time or response. Following an exhaustive evaluation, we recommend PQN normalisation with no missing value imputation and no transformation or scaling for univariate analysis. For PCA we recommend applying PQN normalisation with Random Forest missing value imputation, glog transformation and no scaling method. For PLS-DA we recommend PQN normalisation, KNN as the missing value imputation method, generalised logarithm transformation and no scaling. These recommendations are based on searching for the biologically important metabolite features independent of their measured abundance.
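Two of the recommended operations are compact enough to state in code. The numpy sketch below implements PQN normalisation and one common parameterisation of the generalised logarithm; the reference spectrum (feature-wise median across samples) and the λ default are conventional choices, not necessarily those used in this study.

```python
import numpy as np

def pqn_normalise(X: np.ndarray) -> np.ndarray:
    """Probabilistic quotient normalisation (PQN).

    X: (n_samples, n_features) intensity matrix. Each sample is divided
    by the median quotient of its features relative to a reference
    spectrum, here the feature-wise median of all samples.
    """
    X = X / X.sum(axis=1, keepdims=True)          # total-signal pre-scaling
    reference = np.median(X, axis=0)
    quotients = X / reference
    dilution = np.median(quotients, axis=1, keepdims=True)
    return X / dilution

def glog(X: np.ndarray, lam: float = 1e-8) -> np.ndarray:
    """Generalised logarithm: log-like for large values, stable near zero."""
    return np.log((X + np.sqrt(X ** 2 + lam)) / 2)

rng = np.random.default_rng(1)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(10, 50))
print(glog(pqn_normalise(X)).shape)               # (10, 50)
```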

Conclusion

The appropriate choice of normalisation, missing value imputation, transformation and scaling methods differs depending on the data analysis method and the choice of method is essential to maximise the biological derivations from UHPLC-MS datasets.

20.