Similar Articles (20 results)
1.

Introduction

Feces are easy to collect and provide direct access to endogenous and microbial metabolites.

Objectives

Given the lack of consensus on fecal sample preparation, especially for animal species, we developed a robust protocol enabling untargeted LC-HRMS fingerprinting.

Methods

The extraction conditions (sample quantity, preparation, solvents, and dilutions) were investigated in bovine feces.

Results

A rapid and simple protocol involving feces extraction with methanol (1/3, M/V) followed by centrifugation and a filtration step (10 kDa) was developed.

Conclusion

The workflow generated repeatable and informative fingerprints for robust metabolome characterization.

2.

Introduction

Untargeted metabolomics workflows include numerous points where variance and systematic errors can be introduced. Due to the diversity of the lipidome, manual peak picking and quantitation using molecule-specific internal standards are unrealistic, and therefore high-quality peak-picking, feature-processing, and normalization algorithms are important. Subsequent normalization, data filtering, statistical analysis, and biological interpretation are simplified when quality data acquisition and feature processing are employed.

Objectives

Metrics for QC are important throughout the workflow. The robust workflow presented here provides techniques to ensure that QC checks are implemented throughout sample preparation, data acquisition, pre-processing, and analysis.

Methods

The untargeted lipidomics workflow includes sample standardization prior to acquisition, blocks of QC standards and blanks run at systematic intervals between randomized blocks of experimental data, blank feature filtering (BFF) to remove features not originating from the sample, and QC analysis of data acquisition and processing.
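For illustration, the blank feature filtering (BFF) step can be sketched in a few lines. The snippet below is a minimal, hypothetical version assuming feature intensities stored in pandas DataFrames and a simple sample-to-blank ratio rule; the threshold and ratio criterion are assumptions, not the authors' exact BFF algorithm.

```python
import pandas as pd

def blank_feature_filter(samples: pd.DataFrame, blanks: pd.DataFrame,
                         min_ratio: float = 3.0) -> pd.DataFrame:
    """Keep only features (rows) whose mean sample intensity exceeds
    `min_ratio` times the mean blank intensity (columns = injections).
    The ratio rule and threshold are illustrative assumptions."""
    sample_mean = samples.mean(axis=1)
    blank_mean = blanks.mean(axis=1).replace(0, 1e-9)  # avoid division by zero
    keep = sample_mean / blank_mean >= min_ratio
    return samples.loc[keep]

# Toy example: three features, two of which are real signal.
samples = pd.DataFrame({"s1": [1000, 50, 800], "s2": [1200, 60, 900]},
                       index=["feat1", "feat2", "feat3"])
blanks = pd.DataFrame({"b1": [10, 40, 5], "b2": [12, 55, 8]},
                      index=["feat1", "feat2", "feat3"])
print(blank_feature_filter(samples, blanks).index.tolist())  # ['feat1', 'feat3']
```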

Results

The workflow was successfully applied to mouse liver samples, which were investigated to discern lipidomic changes throughout the development of nonalcoholic fatty liver disease (NAFLD). The workflow, including a novel filtering method, BFF, allows improved confidence in results and conclusions for lipidomic applications.

Conclusion

Using a mouse model developed to study the transition of NAFLD from an early stage, simple steatosis, to the later stage, nonalcoholic steatohepatitis, in combination with our novel workflow, we identified phosphatidylcholines, phosphatidylethanolamines, and triacylglycerols that may contribute to disease onset and/or progression.

3.

Introduction

Although still at a very early stage compared with its mass spectrometry (MS) counterpart, proton nuclear magnetic resonance (NMR) lipidomics is worth investigating as an original and complementary approach to lipidomics. Dedicated sample preparation protocols and adapted data acquisition methods have to be developed to set up an NMR lipidomics workflow; in particular, the considerable overlap observed for lipid signals in 1D spectra may hamper its applicability.

Objectives

The study describes the development of a complete proton NMR lipidomics workflow for application to serum fingerprinting. It includes the assessment of fast 2D NMR strategies, which, besides reducing signal overlap by spreading the signals along a second dimension, offer compatibility with the high-throughput requirements of food quality characterization.

Method

The robustness of the developed sample preparation protocol is assessed in terms of repeatability and its ability to provide informative fingerprints; further, different NMR acquisition schemes—including classical 1D and fast 2D schemes based on non-uniform sampling or ultrafast acquisition—are evaluated and compared. Finally, as a proof of concept, the developed workflow is applied to characterize lipid profile disruption in serum from pigs fed a β-agonist-supplemented diet.

Results

Our results show the ability of the workflow to efficiently discriminate sample groups based on their lipid profiles while using fast 2D NMR methods in an automated acquisition framework.

Conclusion

This work demonstrates the potential of fast multidimensional 1H NMR—combined with an appropriate sample preparation—for lipidomic fingerprinting, as well as its applicability to chemical food safety issues.

4.

Background

Tuberculosis (TB) is a contagious infectious disease caused by Mycobacterium tuberculosis (Mtb). With roughly two million deaths per year, it has the highest mortality rate among bacterial infections. The only available vaccine against TB is the BCG vaccine. BCG is effective against TB in childhood; however, owing to several limitations, it is not sufficiently efficacious in adults. BCG also cannot produce an adequately protective response against reactivation of latent infections.

Objective

In the present study we review the most recent findings on the contribution of the HspX protein to vaccines against tuberculosis.

Methods

Many attempts have therefore been made to improve BCG or to find a replacement for it. Most of the subunit TB vaccines in various phases of clinical trials were constructed as prophylactic vaccines using Mtb proteins expressed during the replicating stage. These vaccines might prevent active TB but not reactivation of latent tuberculosis infection (LTBI). A literature search on the roles of the HspX protein in tuberculosis vaccines was performed in several online databases (PubMed, Scopus, and Google Scholar).

Results

Ideal subunit post-exposure vaccines should target all forms of TB infection, including the active symptomatic and dormant (latent) asymptomatic forms. Among the candidate antigens, HspX is the most important latent-phase antigen of M. tuberculosis and elicits a strong immunological response. Many studies have evaluated the immunogenicity of this protein with a view to improving TB vaccines.

Conclusion

According to these studies, the HspX protein is a good candidate for the development of subunit vaccines against TB infection.

5.

Background

Needle-free, painless, and localized drug delivery has long been a coveted technology in biomedical research. We present an innovative approach to transdermal vaccine delivery using a miniature detonation-driven shock tube device. The device uses ~2.5 bar of in situ generated oxyhydrogen mixture to produce a strong shock wave that accelerates liquid jets to velocities of about 94 m/s.

Method

The oxyhydrogen-driven shock tube was optimized for efficient intradermal vaccine delivery in vivo. Vaccination efficiency was evaluated by pathogen challenge and host immune response. Expression levels of molecular markers were checked by qRT-PCR.

Results

High-efficiency vaccination was achieved using the device. After pathogen challenge with Mycobacterium tuberculosis, 100% survival was observed in vaccinated animals. The immune response was significantly higher in animals vaccinated using the device than in those vaccinated by the conventional route.

Conclusion

A novel device was developed and optimized for intradermal vaccine delivery in a murine model. Conventional as well as in-house developed vaccine strains were used to test the system. Vaccine delivery and the resulting immune response were on par with conventional routes of vaccination. The device can therefore be used for delivering live attenuated vaccines in the future.

6.

Background

One of the current challenges in computational biology is the development of new algorithms, tools, and software to facilitate predictive modeling of the big data generated by high-throughput technologies in biomedical research.

Results

To meet these demands we developed PROPER, a package for the visual evaluation of ranking classifiers for biological big-data mining studies in the MATLAB environment.

Conclusion

PROPER is an efficient tool for optimization and comparison of ranking classifiers, providing over 20 different two- and three-dimensional performance curves.
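As a generic illustration of the kind of performance curve such tools produce, the sketch below plots an ROC curve for a toy ranking classifier using scikit-learn and matplotlib. It does not reproduce PROPER's MATLAB implementation or its 20+ curve types; the synthetic data and classifier are placeholders.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split

# Toy ranking classifier: logistic-regression scores on synthetic data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

fpr, tpr, _ = roc_curve(y_te, scores)
plt.plot(fpr, tpr, label=f"ROC (AUC = {auc(fpr, tpr):.2f})")
plt.plot([0, 1], [0, 1], "--", color="grey")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```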

7.

Background

Influenza virus infections are responsible for significant morbidity worldwide, and it therefore remains a high priority to develop more broadly protective vaccines. Adjuvantation of current seasonal influenza vaccines has the potential to achieve this goal.

Methods

To assess the immune-potentiating properties of Matrix-M, mice were immunized with a virosomal trivalent seasonal vaccine adjuvanted with Matrix-M. Serum samples were isolated to determine the hemagglutination inhibition (HAI) antibody titers against vaccine-homologous and heterologous strains. Furthermore, we assessed whether adjuvantation with Matrix-M broadens the protective efficacy of the virosomal trivalent seasonal vaccine against vaccine-homologous and heterologous influenza viruses.

Results

Matrix-M adjuvantation enhanced HAI antibody titers and protection against vaccine-homologous strains. Interestingly, Matrix-M adjuvantation also resulted in HAI antibody titers against heterologous influenza B strains, but not against the tested influenza A strains. Although the adjuvanted vaccine induced protection against heterologous influenza A, in the absence of HAI titers this protection was accompanied by severe clinical scores and body-weight loss. In contrast, in the presence of heterologous HAI titers, full protection against the heterologous influenza B strain was obtained without any disease symptoms.

Conclusion

The results of this study emphasize the promising potential of a Matrix-M-adjuvanted seasonal trivalent virosomal influenza vaccine. Adjuvantation of the trivalent virosomal vaccine not only enhances homologous protection but also induces protection against heterologous strains, thereby providing more potent and broadly protective immunity.

8.

Background

Viral vaccine target discovery requires understanding the diversity of both the virus and the human immune system. The readily available and rapidly growing pool of viral sequence data in the public domain enables the identification and characterization of immune targets relevant to adaptive immunity. A systematic bioinformatics approach is necessary to facilitate the analysis of such large datasets for the selection of potential candidate vaccine targets.

Results

This work describes a computational methodology to achieve this analysis, with data from dengue, West Nile, hepatitis A, HIV-1, and influenza A viruses as examples. Our methodology has been implemented as an analytical pipeline that brings significant advancement to the field of reverse vaccinology by enabling systematic screening of known sequence data in nature for the identification of vaccine targets. It comprises the key steps of (i) comprehensive and extensive collection of sequence data of viral proteomes (the virome), (ii) data cleaning, (iii) large-scale sequence alignment, (iv) peptide entropy analysis, (v) intra- and inter-species variation analysis of conserved sequences, including human homology analysis, and (vi) functional and immunological relevance analysis.
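One of these steps, peptide entropy analysis (iv), can be sketched as follows. The snippet assumes a multiple sequence alignment supplied as equal-length strings and computes Shannon entropy per k-mer window; the window length and alignment handling are illustrative choices, not the pipeline's exact parameters.

```python
import math
from collections import Counter

def peptide_entropy(alignment: list[str], k: int = 9) -> list[float]:
    """Shannon entropy (bits) of each k-mer window across aligned sequences.
    Low entropy marks conserved candidate vaccine targets."""
    n_pos = len(alignment[0])
    entropies = []
    for start in range(n_pos - k + 1):
        kmers = Counter(seq[start:start + k] for seq in alignment)
        total = sum(kmers.values())
        h = -sum((c / total) * math.log2(c / total) for c in kmers.values())
        entropies.append(h)
    return entropies

# Toy alignment: the first window is fully conserved (entropy 0).
aln = ["MKTIIALSYIFCLVFA", "MKTIIALSYIWCLVFA", "MKTIIALSYIFCLVLA"]
print([round(h, 2) for h in peptide_entropy(aln)])
```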

Conclusion

These steps are combined into the pipeline ensuring that a more refined process, as compared to a simple evolutionary conservation analysis, will facilitate a better selection of vaccine targets and their prioritization for subsequent experimental validation.

9.
Gao S, Xu S, Fang Y, Fang J. Proteome Science 2012, 10(S1): S7

Background

Identification of phosphorylation sites by computational methods is becoming increasingly important because it reduces labor-intensive and costly experiments and can improve our understanding of the common properties and underlying mechanisms of protein phosphorylation.

Methods

A multitask learning framework for learning four kinase families simultaneously, instead of studying each kinase family of phosphorylation sites separately, is presented in the study. The framework includes two multitask classification methods: the Multi-Task Least Squares Support Vector Machines (MTLS-SVMs) and the Multi-Task Feature Selection (MT-Feat3).
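As a loose analogue of multitask feature selection (not the authors' MTLS-SVM or MT-Feat3 implementations), the sketch below uses scikit-learn's MultiTaskLasso, which selects or drops each feature jointly across all tasks. The shared design matrix and four binary task columns are simplifying assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))      # 200 candidate sites, 50 sequence-derived features
W = np.zeros((50, 4))                   # ground-truth weights for 4 kinase-family "tasks"
W[:10] = rng.standard_normal((10, 4))   # only the first 10 features matter, for all tasks
Y = X @ W + 0.1 * rng.standard_normal((200, 4))

model = MultiTaskLasso(alpha=0.1).fit(X, Y)
# coef_ has shape (n_tasks, n_features); a feature is selected if any task uses it.
shared = np.where(np.abs(model.coef_).sum(axis=0) > 1e-8)[0]
print("features selected jointly across tasks:", shared)
```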

Results

Using the multitask learning framework, we successfully identify 18 common features shared by the four kinase families of phosphorylation sites. The reliability of the selected features is demonstrated by their consistent performance in the two multitask learning methods.

Conclusions

The selected features can be used to build efficient multitask classifiers with good performance, suggesting that they are important to protein phosphorylation across the four kinase families.

10.
Lyu Chuqiao, Wang Lei, Zhang Juhua. BMC Genomics 2018, 19(10): 905

Background

DNase I hypersensitive sites (DHSs) are associated with cis-regulatory DNA elements. An efficient method for identifying DHSs can enhance our understanding of chromatin accessibility. Despite a multitude of resources available online, including experimental datasets and computational tools, the complex language of DHSs remains incompletely understood.

Methods

Here, we address this challenge with an approach based on a state-of-the-art machine learning method. We present a novel convolutional neural network (CNN) that combines Inception-like modules with a gating mechanism to capture the response of multiple patterns and long-range associations in DNA sequences, and we use it to predict multi-scale DHSs in Arabidopsis, rice, and Homo sapiens.
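A minimal PyTorch sketch of this general idea—parallel convolutions of different widths whose concatenated output is modulated by a learned sigmoid gate—is shown below. Layer sizes, kernel widths, and the single-block depth are illustrative assumptions and do not reproduce the published architecture.

```python
import torch
import torch.nn as nn

class InceptionGate(nn.Module):
    """Inception-style block: parallel convolutions of different widths,
    concatenated and modulated by a learned per-channel sigmoid gate."""
    def __init__(self, in_ch: int, branch_ch: int = 32):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, branch_ch, k, padding=k // 2) for k in (3, 7, 11)])
        self.gate = nn.Conv1d(3 * branch_ch, 3 * branch_ch, 1)  # 1x1 conv -> gates

    def forward(self, x):
        h = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return h * torch.sigmoid(self.gate(h))

class ToyDHSNet(nn.Module):
    """Toy DHS classifier over one-hot encoded DNA of shape (batch, 4, seq_len)."""
    def __init__(self):
        super().__init__()
        self.block = InceptionGate(in_ch=4)
        self.head = nn.Linear(96, 1)

    def forward(self, x):
        h = self.block(x).max(dim=-1).values  # global max pooling over the sequence
        return torch.sigmoid(self.head(h))

# Score a random batch of 8 one-hot sequences of length 600.
print(ToyDHSNet()(torch.rand(8, 4, 600)).shape)  # torch.Size([8, 1])
```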

Results

Our method achieves an area under the curve (AUC) of 0.961 on Arabidopsis, 0.969 on rice, and 0.918 on Homo sapiens.

Conclusions

Our method provides an efficient and accurate way to identify multi-scale DHS sequences by deep learning.

11.

Introduction

Data processing is one of the biggest problems in metabolomics, given the high number of samples analyzed and the need for multiple software packages in each step of the processing workflow.

Objectives

To merge into a single platform the steps required for metabolomics data processing.

Methods

KniMet is a workflow for the processing of mass spectrometry–based metabolomics data built on the KNIME Analytics Platform.

Results

The approach includes the key steps to follow in metabolomics data processing: feature filtering, missing-value imputation, normalization, batch correction, and annotation.
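To make these steps concrete, the sketch below runs a bare-bones version of the same sequence—feature filtering, missing-value imputation, normalization, and batch correction—on a toy feature table with pandas. The specific rules (75% presence filter, half-minimum imputation, total-intensity normalization, per-batch median centering) are common choices assumed for illustration, not KniMet's fixed defaults, and annotation is omitted.

```python
import numpy as np
import pandas as pd

def minimal_processing(intensities: pd.DataFrame, batches: pd.Series) -> pd.DataFrame:
    """Rows = samples, columns = features; `batches` labels each sample's batch."""
    # 1. Feature filtering: keep features detected in at least 75% of samples.
    data = intensities.loc[:, intensities.notna().mean() >= 0.75]
    # 2. Missing-value imputation: half of each feature's minimum observed value.
    data = data.fillna(data.min() / 2)
    # 3. Normalization: scale each sample to unit total intensity.
    data = data.div(data.sum(axis=1), axis=0)
    # 4. Batch correction: subtract the per-batch median of each feature.
    return data - data.groupby(batches).transform("median")

samples = pd.DataFrame(
    {"f1": [1.0, 1.2, np.nan, 1.1],
     "f2": [5.0, 4.8, 5.2, 5.1],
     "f3": [np.nan, np.nan, np.nan, 0.3]},
    index=["s1", "s2", "s3", "s4"])
print(minimal_processing(samples, pd.Series(["A", "A", "B", "B"], index=samples.index)))
```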

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.

12.

Introduction

Data sharing is increasingly being required by journals and has been heralded as a solution to the ‘replication crisis’.

Objectives

(i) Review data sharing policies of journals publishing the most metabolomics papers associated with open data and (ii) compare these journals’ policies to those that publish the most metabolomics papers.

Methods

A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.

Results

Journals that support data sharing are not necessarily those publishing the most papers associated with open metabolomics data.

Conclusion

Further efforts are required to improve data sharing in metabolomics.

13.

Introduction

In NMR-based metabolomics, the processing of 1D spectra often requires an expert eye to disentangle intertwined peaks.

Objectives

The objective of NMRProcFlow is to assist the expert in this task as effectively as possible, without requiring programming skills.

Methods

NMRProcFlow was developed to be a graphical and interactive 1D NMR (1H & 13C) spectra processing tool.

Results

NMRProcFlow (http://nmrprocflow.org), dedicated to metabolic fingerprinting and targeted metabolomics, covers all spectra processing steps including baseline correction, chemical shift calibration and alignment.
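The three processing steps named here can be mimicked in a few lines of numpy; the sketch below uses a low-order polynomial baseline, calibration to a reference peak expected near 0 ppm (e.g., TSP), and cross-correlation alignment to a reference spectrum. These are deliberately crude stand-ins for NMRProcFlow's interactive tools, not its actual algorithms.

```python
import numpy as np

def baseline_correct(spectrum: np.ndarray, order: int = 3) -> np.ndarray:
    """Subtract a low-order polynomial fitted to the whole trace (crude baseline)."""
    x = np.arange(spectrum.size)
    return spectrum - np.polyval(np.polyfit(x, spectrum, order), x)

def calibrate(ppm: np.ndarray, spectrum: np.ndarray, ref_ppm: float = 0.0,
              window: float = 0.2) -> np.ndarray:
    """Shift the ppm axis so the tallest peak within `window` of `ref_ppm` sits at `ref_ppm`."""
    region = np.abs(ppm - ref_ppm) <= window
    observed = ppm[region][np.argmax(spectrum[region])]
    return ppm - (observed - ref_ppm)

def align_to_reference(spectrum: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Integer-point shift maximizing cross-correlation with the reference (wrap-around ignored)."""
    lag = np.argmax(np.correlate(reference, spectrum, mode="full")) - (spectrum.size - 1)
    return np.roll(spectrum, lag)

# Example on a synthetic spectrum with peaks at 1.2 and 0 ppm on a sloping baseline.
ppm = np.linspace(10, -1, 2048)
spec = np.exp(-((ppm - 1.2) ** 2) / 0.001) + np.exp(-(ppm ** 2) / 0.001) + 0.01 * ppm
corrected = baseline_correct(spec)
ppm_cal = calibrate(ppm, corrected)
aligned = align_to_reference(corrected, corrected)
```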

Conclusion

Biologists and NMR spectroscopists can easily interact and develop synergies by visualizing the NMR spectra along with their corresponding experimental-factor levels, thus building a bridge between experimental design and subsequent statistical analyses.

14.

Background

Despite progress in neuroblastoma therapies, the mortality of high-risk patients is still high (40–50%) and the molecular basis of the disease remains poorly understood. Recently, a mathematical model was used to demonstrate that the network regulating stress signaling through the c-Jun N-terminal kinase pathway plays a crucial role in the survival of patients with neuroblastoma, irrespective of their MYCN amplification status. This demonstrates the enormous potential of computational models of biological modules for discovering the molecular mechanisms underlying disease.

Results

Since signaling is known to be highly relevant in cancer, we used a computational model of the whole-cell signaling network to understand the molecular determinants of poor prognosis in neuroblastoma. Our model produced a comprehensive view of the molecular mechanisms of neuroblastoma tumorigenesis and progression.

Conclusion

We have also shown how the activity of signaling circuits can be considered a reliable model-based prognostic biomarker.

Reviewers

This article was reviewed by Tim Beissbarth, Wenzhong Xiao and Joanna Polanska. For the full reviews, please go to the Reviewers’ comments section.

15.

Introduction

Both reverse-phase and HILIC chemistries are deployed for liquid chromatography–mass spectrometry (LC–MS) metabolomics analyses; however, HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is further complicated by the physiochemical diversity of metabolites and the array of tunable analytical parameters.

Objective

Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using design of experiments (DoE).

Methods

We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term Comprehensive optimization of LC–MS metabolomics methods using design of experiments (COLMeD). Multivariate statistical analysis guided our decision process in the method optimizations.
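The design-of-experiments component can be illustrated generically. The sketch below builds a small full-factorial design over hypothetical LC–MS factors and picks the best-responding condition, standing in for one optimization round; the factor names, levels, and response function are invented for illustration and the design is kept to 18 runs.

```python
from itertools import product

# Hypothetical factors for one optimization round (names and levels are invented).
factors = {
    "column_temp_C": [25, 35, 45],
    "gradient_min": [10, 15, 20],
    "capillary_voltage_kV": [2.5, 3.0],
}

def measured_response(condition: dict) -> float:
    """Placeholder for the median metabolite response measured under a condition."""
    return (-abs(condition["column_temp_C"] - 35)
            - abs(condition["gradient_min"] - 15)
            - 10 * abs(condition["capillary_voltage_kV"] - 3.0))

design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
best = max(design, key=measured_response)
print(f"{len(design)} runs in the full-factorial design; best condition: {best}")
```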

Results

LC–MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5 % (p < 0.0001) over initial conditions with a 13.3 % increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8 and 57.3 %, with median metabolite response increases of 106.1 and 10.3 % (p < 0.0001 and p < 0.05 respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8 % response increase (p < 0.0001) over initial conditions.

Conclusions

The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method.

16.

Background

Mixtures of beta distributions are a flexible tool for modeling data with values on the unit interval, such as methylation levels. However, maximum likelihood parameter estimation with beta distributions suffers from problems because of singularities in the log-likelihood function if some observations take the values 0 or 1.

Methods

While ad-hoc corrections have been proposed to mitigate this problem, we propose a different approach to parameter estimation for beta mixtures where such problems do not arise in the first place. Our algorithm combines latent variables with the method of moments instead of maximum likelihood, which has computational advantages over the popular EM algorithm.
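The moment-based step can be made concrete. For a beta distribution with mean μ and variance σ², the method of moments gives α = μν and β = (1 − μ)ν, where ν = μ(1 − μ)/σ² − 1. The sketch below implements this estimator with optional weights (as would be supplied by latent-variable responsibilities during un-mixing); it is a simplified illustration, not the betamix implementation.

```python
import numpy as np

def beta_method_of_moments(x, weights=None):
    """Estimate (alpha, beta) from the (weighted) sample mean and variance."""
    w = np.ones_like(x) if weights is None else weights
    mu = np.average(x, weights=w)
    var = np.average((x - mu) ** 2, weights=w)
    nu = mu * (1 - mu) / var - 1          # common "concentration" term
    return mu * nu, (1 - mu) * nu          # alpha, beta

rng = np.random.default_rng(1)
x = rng.beta(2.0, 5.0, size=10_000)
print(beta_method_of_moments(x))  # roughly (2.0, 5.0)
```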

Results

As an application, we demonstrate that methylation state classification is more accurate when using adaptive thresholds from beta mixtures than non-adaptive thresholds on observed methylation levels. We also demonstrate that we can accurately infer the number of mixture components.

Conclusions

The hybrid algorithm between likelihood-based component un-mixing and moment-based parameter estimation is a robust and efficient method for beta mixture estimation. We provide an implementation of the method (“betamix”) as open source software under the MIT license.

17.

Introduction

Botanicals containing iridoid and phenylethanoid/phenylpropanoid glycosides are used worldwide for the treatment of inflammatory musculoskeletal conditions, such as arthritis and lower back pain, that are primary causes of human years lived with disability.

Objectives

We report the analysis of candidate anti-inflammatory metabolites of several endemic Scrophularia species and Verbascum thapsus used medicinally by peoples of North America.

Methods

Leaves, stems, and roots were analyzed by ultra-performance liquid chromatography-mass spectrometry (UPLC-MS) and partial least squares-discriminant analysis (PLS-DA) was performed in MetaboAnalyst 3.0 after processing the datasets in Progenesis QI.
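As a generic stand-in for the PLS-DA step (MetaboAnalyst and Progenesis QI are not reproduced here), the sketch below fits a two-class PLS-DA with scikit-learn by regressing one-hot class labels on the scaled feature matrix and classifying by the largest predicted column; the toy data are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy feature table: 40 tissue extracts x 100 metabolite features, two groups.
X = rng.standard_normal((40, 100))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 2.0                           # group 2 accumulates the first 5 metabolites

Xs = StandardScaler().fit_transform(X)
Y = np.eye(2)[y]                               # one-hot class membership matrix
pls = PLSRegression(n_components=2).fit(Xs, Y)
pred = pls.predict(Xs).argmax(axis=1)          # classify by the largest predicted column
scores = pls.transform(Xs)                     # latent scores, as shown in PLS-DA score plots
print("training accuracy:", (pred == y).mean())
```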

Results

Comparison of the datasets revealed significant and differential accumulation of iridoid and phenylethanoid/phenylpropanoid glycosides in the tissues of the endemic Scrophularia species and Verbascum thapsus.

Conclusions

Our investigation identified several species of pharmacological interest as good sources for harpagoside and other important anti-inflammatory metabolites.

18.

Purpose

This discussion article aims to highlight two problematic aspects in the International Reference Life Cycle Data System (ILCD) Handbook: its guidance to the choice between attributional and consequential modeling and to the choice between average and marginal data as input to the life cycle inventory (LCI) analysis.

Methods

We analyze the ILCD guidance by comparing different statements in the handbook with each other and with previous research in this area.

Results and discussion

We find that the ILCD handbook is internally inconsistent when it comes to recommendations on how to choose between attributional and consequential modeling. We also find that the handbook is inconsistent with much of previous research in this matter, as well as in its recommendations on how to choose between average and marginal data in the LCI.

Conclusions

Because of the inconsistencies in the ILCD handbook, we recommend that the handbook be revised.

19.
20.

Background

Long-term exposure to drugs of abuse causes an upregulation of the cAMP-signaling pathway in the nucleus accumbens and other forebrain regions; this common neuroadaptation is thought to underlie aspects of drug tolerance and dependence. Phosphodiesterase 4 (PDE4) is an enzyme that selectively hydrolyzes intracellular cAMP and is expressed in several brain regions that regulate the reinforcing effects of drugs of abuse.

Objective

Here, we review the current knowledge about central nervous system (CNS) distribution of PDE4 isoforms and the effects of systemic and brain-region specific inhibition of PDE4 on behavioral models of drug addiction.

Methods

A systematic literature search was performed using PubMed.

Results

Using behavioral sensitization, conditioned place preference, and drug self-administration as behavioral models, a large number of studies have shown that local or systemic administration of PDE4 inhibitors reduces drug intake and/or drug seeking for psychostimulants, alcohol, and opioids in rats and mice.

Conclusions

Preclinical studies suggest that PDE4 could be a therapeutic target for several classes of substance use disorder. We conclude by identifying opportunities for the development of subtype-selective PDE4 inhibitors that may reduce addiction liability and minimize the side effects that limit the clinical potential of non-selective PDE4 inhibitors. Several PDE4 inhibitors have already been clinically approved for other diseases, so there is a promising possibility of repurposing them for the treatment of drug addiction, as they are safe and well tolerated in patients.
