Similar documents
20 similar documents found.
1.

Background

Mathematical modeling has attracted broad interest in the field of biology. Such models describe the associations within the metabolism of a biological phenomenon using mathematical equations, so that the observed time-course profile of the biological data fits the model. However, estimating the unknown parameters of the model is a challenging task. Many algorithms have been developed for parameter estimation, but none of them is entirely capable of finding the best solution. The purpose of this paper is to develop a method for precise estimation of the parameters of a biological model.

Methods

In this paper, a novel particle swarm optimization algorithm based on a decomposition technique is developed. Its root mean square error is then compared with those of simple particle swarm optimization, the iterative unscented Kalman filter and simulated annealing for two simulation scenarios and a real data set related to the metabolism of the CAD system.
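To make the idea concrete, the following minimal Python sketch fits two parameters of a hypothetical exponential-decay model by plain particle swarm optimization with root mean square error as the objective; the model, the synthetic data and all settings are illustrative assumptions, not the paper's decomposition-based algorithm.

import numpy as np

def rmse(params, t, observed, model):
    return np.sqrt(np.mean((model(t, *params) - observed) ** 2))

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    # Plain (non-decomposed) particle swarm optimization over box-bounded parameters.
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2,) + pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Hypothetical model y = a * exp(-k * t) and noisy synthetic observations
model = lambda t, a, k: a * np.exp(-k * t)
t = np.linspace(0.0, 10.0, 50)
observed = model(t, 2.0, 0.3) + np.random.default_rng(1).normal(0.0, 0.05, t.size)
best_params, best_err = pso(lambda p: rmse(p, t, observed, model), bounds=[(0, 5), (0, 2)])
print(best_params, best_err)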

Results

Our proposed algorithm results in 54.39% and 26.72% average reduction in root mean square error when applied to the simulation and experimental data, respectively.

Conclusion

The results show that metaheuristic approaches such as the proposed method are well suited to solving nonlinear problems with many unknown parameters.

2.
Xia Xiaoxuan, Weng Haoyi, Men Ruoting, Sun Rui, Zee Benny Chung Ying, Chong Ka Chun, Wang Maggie Haitian. BMC Genetics 2018, 19(1):67.

Background

Association studies using a single type of omics data have been successful in identifying disease-associated genetic markers, but the underlying mechanisms are unaddressed. To provide a possible explanation of how these genetic factors affect the disease phenotype, integration of multiple omics data is needed.

Results

We propose a novel method, LIPID (likelihood inference proposal for indirect estimation), that uses both single nucleotide polymorphism (SNP) and DNA methylation data jointly to analyze the association between a trait and SNPs. The total effect of SNPs is decomposed into direct and indirect effects, where the indirect effects are the focus of our investigation. Simulation studies show that LIPID performs better in various scenarios than existing methods. Application to the GAW20 data also leads to encouraging results, as the genes identified appear to be biologically relevant to the phenotype studied.
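As a rough analogue of this decomposition (not the LIPID likelihood itself), the sketch below splits a simulated SNP effect on a trait into a direct part and a part mediated through DNA methylation using the classical product-of-coefficients approach; all data and effect sizes are synthetic assumptions.

import numpy as np

def ols_slopes(y, X):
    # Ordinary least squares with an intercept; returns the slope coefficients only.
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

rng = np.random.default_rng(0)
n = 1000
snp = rng.integers(0, 3, size=n).astype(float)        # genotype coded 0/1/2
meth = 0.4 * snp + rng.normal(size=n)                  # methylation partly driven by the SNP
trait = 0.3 * snp + 0.5 * meth + rng.normal(size=n)    # phenotype

a = ols_slopes(meth, snp)[0]                                  # SNP -> methylation
direct, b = ols_slopes(trait, np.column_stack([snp, meth]))   # SNP -> trait, methylation -> trait
indirect = a * b                                              # effect mediated through methylation
print(f"direct = {direct:.2f}, indirect = {indirect:.2f}, total = {direct + indirect:.2f}")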

Conclusions

The proposed LIPID method is shown to be meritorious in extensive simulations and in real-data analyses.

3.

Introduction

Aqueous–methanol mixtures have successfully been applied to extract a broad range of metabolites from plant tissue. However, a certain amount of material remains insoluble.

Objectives

To enlarge the metabolic compendium, two ionic liquids were selected to extract the methanol-insoluble part of trunk material from Betula pendula.

Methods

The extracted compounds were analyzed by LC/MS and GC/MS.

Results

The results show that extraction with 1-butyl-3-methylimidazolium acetate (IL-Ac) predominantly yielded fatty acids, whereas 1-ethyl-3-methylimidazolium tosylate (IL-Tos) mostly yielded phenolic structures. Interestingly, bark yielded more ionic-liquid-soluble metabolites than interior wood.

Conclusion

From this one can conclude that the application of ionic liquids may expand the metabolic snapshot.

4.

Background

Time course measurement of single molecules on a cell surface provides detailed information about the dynamics of the molecules that would otherwise be inaccessible. To extract quantitative information, single particle tracking (SPT) is typically performed. However, trajectories extracted by SPT inevitably contain linking errors when single molecules diffuse quickly relative to the typical spacing between particles.

Methods

To circumvent this problem, we develop an algorithm that estimates diffusion constants without relying on SPT. The proposed algorithm is based on a probabilistic model of the distance to the nearest point in subsequent frames. This probabilistic model generalizes the model of a single Brownian particle in isolation to one surrounded by multiple indistinguishable particles, using a mean-field approximation.
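For context, the isolated-particle baseline that the model generalizes can be written down directly: in two dimensions the frame-to-frame displacement r of a Brownian particle with diffusion constant D and frame interval tau follows p(r) = r/(2*D*tau) * exp(-r^2/(4*D*tau)), whose maximum-likelihood estimator is D = <r^2>/(4*tau). The short sketch below illustrates only this baseline with simulated numbers; the mean-field extension to nearest-point distances among indistinguishable particles is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
D_true, tau, n_frames = 0.5, 0.1, 10_000      # illustrative values: um^2/s, s, frame count

# Frame-to-frame displacements of a single, isolated 2-D Brownian particle
steps = rng.normal(0.0, np.sqrt(2.0 * D_true * tau), size=(n_frames, 2))
r = np.linalg.norm(steps, axis=1)

# Maximum-likelihood estimate of D under the Rayleigh displacement law above
D_hat = np.mean(r ** 2) / (4.0 * tau)
print(f"true D = {D_true}, estimated D = {D_hat:.3f}")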

Results

We demonstrate that the proposed algorithm provides reasonable estimates of diffusion constants, even when other methods suffer from high particle density or inhomogeneous particle distributions. In addition, our algorithm can be used to visualize time-course data from single-molecule measurements.

Conclusions

The proposed algorithm, based on a probabilistic model of indistinguishable Brownian particles, provides accurate estimates of diffusion constants even in the regime where traditional SPT methods underestimate them due to linking errors.

5.

Background

Studies of intrinsically disordered proteins, which lack a stable tertiary structure but still have important biological functions, critically rely on computational methods that predict this property from sequence information. Although a number of fairly successful models for prediction of protein disorder have been developed over the last decade, the quality of their predictions is limited by the available cases of confirmed disorder.

Results

To estimate protein disorder more reliably from protein sequences, an iterative algorithm is proposed that integrates the predictions of multiple disorder models without relying on any protein sequences with confirmed disorder annotation. The iterative method alternates between the maximum a posteriori (MAP) estimate of the disorder prediction and the maximum-likelihood (ML) estimate of the quality of the individual disorder predictors. Experiments on data used at CASP7, CASP8 and CASP9 show the effectiveness of the proposed algorithm.
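A rough, hypothetical analogue of such an alternating scheme is sketched below: given per-residue disorder probabilities from several predictors, it alternates between a weighted consensus (the MAP-like step) and per-predictor weights derived from agreement with that consensus (the ML-like step). This is an illustration of the idea only, not the published estimator.

import numpy as np

def iterative_consensus(P, n_iters=20):
    # P: (n_predictors, n_residues) array of disorder probabilities in [0, 1].
    weights = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iters):
        consensus = weights @ P                                  # weighted consensus per residue
        labels = (consensus >= 0.5).astype(float)                # provisional disorder calls
        agreement = 1.0 - np.mean(np.abs(P - labels), axis=1)    # crude per-predictor quality
        weights = agreement / agreement.sum()
    return consensus, weights

# Illustrative predictions from three hypothetical disorder predictors on four residues
P = np.array([[0.9, 0.8, 0.2, 0.1],
              [0.7, 0.6, 0.4, 0.3],
              [0.2, 0.9, 0.1, 0.2]])
consensus, weights = iterative_consensus(P)
print(consensus, weights)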

Conclusions

The proposed algorithm can potentially be used to predict protein disorder and provide helpful suggestions on choosing suitable disorder predictors for unknown protein sequences.

6.

Introduction

Data sharing is being increasingly required by journals and has been heralded as a solution to the ‘replication crisis’.

Objectives

(i) Review the data sharing policies of journals publishing the most metabolomics papers associated with open data and (ii) compare these journals' policies to those of the journals that publish the most metabolomics papers.

Methods

A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.

Results

Journals that support data sharing are not necessarily those with the most papers associated with open metabolomics data.

Conclusion

Further efforts are required to improve data sharing in metabolomics.

7.

Introduction

Collecting feces is easy, and it offers direct access to both endogenous and microbial metabolites.

Objectives

Given the lack of consensus on fecal sample preparation, especially in animal species, we developed a robust protocol enabling untargeted LC-HRMS fingerprinting.

Methods

The extraction conditions (quantity, preparation, solvents, dilutions) were investigated in bovine feces.

Results

A rapid and simple protocol was developed, involving feces extraction with methanol (1/3, m/v) followed by centrifugation and a filtration step (10 kDa).

Conclusion

The workflow generated repeatable and informative fingerprints for robust metabolome characterization.

8.

Background

Integrative analysis on multi-omics data has gained much attention recently. To investigate the interactive effect of gene expression and DNA methylation on cancer, we propose a directed random walk-based approach on an integrated gene-gene graph that is guided by pathway information.

Methods

Our approach first extracts a single pathway profile matrix out of the gene expression and DNA methylation data by performing the random walk over the integrated graph. We then apply a denoising autoencoder to the pathway profile to further identify important pathway features and genes. The extracted features are validated in the survival prediction task for breast cancer patients.
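A minimal sketch of the graph-propagation step is given below as a random walk with restart on a toy gene-gene graph with made-up seed scores; the pathway-guided directed graph construction and the denoising-autoencoder stage of the actual method are omitted.

import numpy as np

def random_walk_with_restart(W, seed_scores, restart=0.7, tol=1e-8, max_iter=1000):
    # W: (n, n) adjacency matrix of the gene-gene graph; seed_scores: per-gene initial
    # weights (e.g. statistics derived from expression and methylation data).
    col_sums = W.sum(axis=0, keepdims=True)
    T = np.divide(W, col_sums, out=np.zeros_like(W, dtype=float), where=col_sums > 0)
    p0 = seed_scores / seed_scores.sum()
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * T @ p + restart * p0
        converged = np.abs(p_next - p).sum() < tol
        p = p_next
        if converged:
            break
    return p   # stationary gene weights used to build the pathway profile

# Tiny illustrative graph of four genes and hypothetical seed scores
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(random_walk_with_restart(W, seed_scores=np.array([1.0, 0.5, 0.1, 0.1])))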

Results

The results show that the proposed method substantially improves the survival prediction performance compared to that of other pathway-based prediction methods, revealing that the combined effect of gene expression and methylation data is well reflected in the integrated gene-gene graph combined with pathway information. Furthermore, we show that our joint analysis on the methylation features and gene expression profile identifies cancer-specific pathways with genes related to breast cancer.

Conclusions

In this study, we proposed a directed random walk (DRW)-based method on an integrated gene-gene graph with expression and methylation profiles in order to exploit the interactions between them. The results showed that the constructed integrated gene-gene graph successfully reflects the combined effect of methylation features on gene expression profiles. We also found that the features selected by the denoising autoencoder effectively capture topologically important pathways and genes specifically related to breast cancer.

9.
10.

Background

Formalin-fixed paraffin-embedded (FFPE) tumor samples are a major source of patient DNA in cancer research. However, FFPE is a challenging material to work with due to macromolecular fragmentation and nucleic acid crosslinking. FFPE tissue poses particular challenges for methylation analysis and for preparing sequencing-based libraries that rely on bisulfite conversion. Successful bisulfite conversion is a key requirement for sequencing-based methylation analysis.

Methods

Here we describe a complete and streamlined workflow for preparing next-generation sequencing libraries for methylation analysis from FFPE tissues. This includes counting cells from FFPE blocks, extracting DNA from FFPE slides, testing bisulfite conversion efficiency with a polymerase chain reaction (PCR)-based test, preparing reduced representation bisulfite sequencing libraries, and massively parallel sequencing.

Results

The main features and advantages of this protocol are:
  • An optimized method for extracting good quality DNA from FFPE tissues.
  • An efficient bisulfite conversion and next generation sequencing library preparation protocol that uses 50 ng DNA from FFPE tissue.
  • Incorporation of a PCR-based test to assess bisulfite conversion efficiency prior to sequencing.

Conclusions

We provide a complete workflow and an integrated protocol for performing DNA methylation analysis at the genome-scale and we believe this will facilitate clinical epigenetic research that involves the use of FFPE tissue.

11.

Background

The immunotoxicity of engine exhausts is of high concern to human health due to the increasing prevalence of immune-related diseases. However, the evaluation of immunotoxicity of engine exhausts is currently based on expensive and time-consuming experiments. It is desirable to develop efficient methods for immunotoxicity assessment.

Methods

To accelerate the development of safe alternative fuels, this study proposed a computational method for identifying informative features for predicting proinflammatory potentials of engine exhausts. A principal component regression (PCR) algorithm was applied to develop prediction models. The informative features were identified by a sequential backward feature elimination (SBFE) algorithm.
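As an illustration of how principal component regression can be wrapped in sequential backward feature elimination (with random synthetic data and made-up sizes; the actual FS-CBM features and settings are not reproduced here), consider the following sketch.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def sbfe(X, y, n_keep, n_components=5):
    # Sequential backward feature elimination around a PCR model: repeatedly drop the
    # feature whose removal hurts cross-validated performance the least.
    features = list(range(X.shape[1]))
    while len(features) > n_keep:
        scores = []
        for f in features:
            trial = [g for g in features if g != f]
            pcr = make_pipeline(PCA(n_components=min(n_components, len(trial))),
                                LinearRegression())
            scores.append((cross_val_score(pcr, X[:, trial], y, cv=5, scoring="r2").mean(), f))
        _, worst = max(scores)
        features.remove(worst)
    return features

# Illustrative random data: 40 samples, 12 chemical/biological features
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))
y = 2 * X[:, 0] + X[:, 3] - X[:, 7] + rng.normal(scale=0.1, size=40)
print(sbfe(X, y, n_keep=4))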

Results

A total of 19 informative chemical and biological features were successfully identified by the SBFE algorithm. These features were used to develop a computational method, named FS-CBM, for predicting the proinflammatory potentials of engine exhausts. The FS-CBM model achieved high performance, with correlation coefficients of 0.997 and 0.943 on the training and independent test sets, respectively.

Conclusions

The FS-CBM model predicts the proinflammatory potentials of engine exhausts with a large improvement in prediction performance compared with our previous CBM model. The proposed method could be further applied to construct models for the bioactivities of mixtures.

12.

Objectives

Biomass subpopulations in mammalian cell culture processes cause impurities and influence productivity, so this critical process parameter needs to be monitored in real time.

Results

For this reason, a novel soft sensor concept for estimating viable, dead and lysed cell concentrations was developed, based on robust and inexpensive in situ measurements of permittivity and turbidity combined with a simple model. The turbidity measurements were shown to contain information about all investigated biomass subpopulations. The novelty of the developed soft sensor is the real-time estimation of lysed cell concentration, which is directly correlated with process-related impurities such as DNA and host cell protein in the supernatant. The soft sensor is described and discussed on the basis of data from two fed-batch processes.

Conclusions

The presented soft sensor concept provides a tool for estimating viable, dead and lysed cell concentrations in real time with adequate accuracy and enables further applications in process optimization and control.

13.

Introduction

Zonisamide is a new-generation anticonvulsant antiepileptic drug metabolized primarily in the liver, with subsequent elimination via the renal route.

Objectives

Our objective was to evaluate the utility of pharmacometabolomics in the detection of zonisamide metabolites that could be related to its disposition and therefore, to its efficacy and toxicity.

Methods

This study was nested within a bioequivalence clinical trial involving 28 healthy volunteers. Each participant received a single dose of zonisamide on two separate occasions (period 1 and period 2), with a washout period between them. For the metabolomics analysis, blood samples were obtained from all volunteers at baseline in each period, before any medication was administered.

Results

After Lasso regression, age, height, branched-chain amino acids, steroids, triacylglycerols, diacyl glycerophosphoethanolamines, glycerophospholipids susceptible to methylation, phosphatidylcholines with 20:4 fatty acid (arachidonic acid), cholesterol esters and lysophosphatidylcholines were selected in both periods.

Conclusion

To our knowledge, this is the only research study to date that has attempted to link basal metabolomic status with pharmacokinetic parameters of zonisamide.

14.

Introduction

Experiments in metabolomics rely on the identification and quantification of metabolites in complex biological mixtures, which remains one of the major challenges in NMR- and mass spectrometry-based analysis of metabolic profiles. These capabilities are essential if metabolomics is to become a general approach for testing a priori formulated hypotheses on the basis of exhaustive metabolome characterization, rather than an exploratory tool dealing with unknown metabolic features.

Objectives

In this article we propose a method, named ASICS, grounded in statistical theory, that automatically handles metabolite identification and quantification in proton NMR spectra.

Methods

A statistical linear model is built to explain a complex spectrum using a library of pure metabolite spectra. The model can handle local or global chemical shift variations due to experimental conditions using a warping function. A lasso-type estimator identifies and quantifies the metabolites in the complex spectrum; it has good statistical properties and handles peak overlap.
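A minimal sketch of the core fitting step, leaving out the chemical-shift warping, might look as follows: the complex spectrum is modelled as a sparse, nonnegative combination of library spectra and fitted with a lasso-type estimator. The library, concentrations and tuning values below are synthetic assumptions.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_points, n_metabolites = 2000, 50

# Illustrative library of pure-metabolite reference spectra (one row per metabolite)
library = np.abs(rng.normal(size=(n_metabolites, n_points))) ** 3
library /= library.sum(axis=1, keepdims=True)

# Synthetic complex spectrum: five metabolites present, plus noise
true_conc = np.zeros(n_metabolites)
true_conc[[3, 11, 22, 30, 41]] = [5.0, 2.0, 1.0, 0.5, 3.0]
mixture = true_conc @ library + rng.normal(scale=1e-4, size=n_points)

# Sparse, nonnegative fit of the mixture against the library (lasso-type estimator)
fit = Lasso(alpha=1e-6, positive=True, max_iter=50_000).fit(library.T, mixture)
identified = np.flatnonzero(fit.coef_ > 1e-3)
print(identified, np.round(fit.coef_[identified], 2))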

Results

The performance of the method was investigated on known mixtures (such as synthetic urine) and on plasma datasets from duck and human. The results show noteworthy performance, outperforming existing methods.

Conclusion

ASICS is a fully automated procedure for identifying and quantifying metabolites in 1H NMR spectra of biological mixtures. It will empower NMR-based metabolomics by helping experts obtain metabolic profiles quickly and accurately.

15.

Introduction

Untargeted metabolomics is a powerful tool for biological discovery. Significant advances in computational approaches for analyzing the complex raw data have been made, yet it is not clear how exhaustive and reliable the results of these analyses are.

Objectives

Assessment of the quality of raw data processing in untargeted metabolomics.

Methods

Five published untargeted metabolomics studies were reanalyzed.

Results

For each study, we report the omission of at least 50 relevant compounds from the original results, as well as examples of representative mistakes.

Conclusion

Incomplete raw data processing leaves unexplored potential in current and legacy data.

16.

Background

Methylation analysis of cell-free DNA is a promising tool for tumor diagnosis, monitoring and prognosis. Sensitivity is a critical issue because of the tiny amounts of cell-free DNA available in plasma. Most current methods of DNA methylation analysis rely on the difference in bisulfite-mediated deamination between cytosine and 5-methylcytosine. However, with current methods the recovery of bisulfite-converted cell-free DNA is very poor.

Results

We optimized a rapid method for the crucial steps of bisulfite conversion with high recovery of cell-free DNA. A rapid deamination step and alkaline desulfonation were combined with purification of the DNA on a silica column. The conversion efficiency and recovery of bisulfite-treated DNA were assessed by droplet digital PCR. The optimized reaction achieves complete cytosine conversion in 30 min at 70 °C and about 65% recovery of bisulfite-treated cell-free DNA, which is higher than that of current methods.

Conclusions

The method allows high recovery of low amounts of bisulfite-treated cell-free DNA, enhancing the sensitivity of methylation detection from cell-free DNA.

17.

Introduction

It is difficult to elucidate the metabolic and regulatory factors causing lipidome perturbations.

Objectives

This work simplifies this process.

Methods

A method has been developed to query an online holistic lipid metabolic network (of 7923 metabolites) to extract the pathways that connect the input list of lipids.
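A minimal sketch of the path-extraction idea, on a tiny made-up network fragment rather than the 7923-metabolite resource, could look like this: collect the shortest paths connecting every pair of input lipids and report the enzymes annotated on the connecting edges.

import itertools
import networkx as nx

# Hypothetical fragment of a lipid metabolic network: nodes are lipids, edges are
# conversions annotated with an (illustrative) enzyme identifier.
G = nx.Graph()
G.add_edge("PC(36:4)", "LPC(16:0)", enzyme="PLA2")
G.add_edge("LPC(16:0)", "PC(34:1)", enzyme="LPCAT1")
G.add_edge("PC(34:1)", "DAG(34:1)", enzyme="PLC")
G.add_edge("DAG(34:1)", "TAG(52:2)", enzyme="DGAT1")

perturbed = ["PC(36:4)", "TAG(52:2)"]      # input list of altered lipids

# Sub-network of shortest paths connecting every pair of input lipids
nodes = set()
for a, b in itertools.combinations(perturbed, 2):
    nodes.update(nx.shortest_path(G, a, b))
sub = G.subgraph(nodes)
print(list(sub.edges(data="enzyme")))      # candidate enzymes linking the perturbed lipids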

Results

The output enables pathway visualisation and the querying of other databases to identify potential regulators. When used to study a plasma lipidome dataset of polycystic ovary syndrome, 14 enzymes were identified, of which 3 are linked to ELAVL1, an mRNA stabiliser.

Conclusion

This method provides a simplified approach to identifying potential regulators causing lipid-profile perturbations.

18.

Introduction

Data processing is one of the biggest problems in metabolomics, given the high number of samples analyzed and the need for multiple software packages for each step of the processing workflow.

Objectives

To merge the steps required for metabolomics data processing into a single platform.

Methods

KniMet is a workflow for the processing of mass spectrometry-metabolomics data based on the KNIME Analytics platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.
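A stripped-down sketch of three of these steps (feature filtering, half-minimum imputation and total-signal normalization) on a toy feature table is shown below; the thresholds and the table itself are illustrative, and this is not the KniMet code.

import numpy as np
import pandas as pd

def process(features: pd.DataFrame) -> pd.DataFrame:
    # Rows are samples, columns are features (peak areas); NaN marks missing values.
    # 1. Feature filtering: keep features detected in at least 50% of samples.
    kept = features.loc[:, features.notna().mean() >= 0.5]
    # 2. Missing value imputation: half of each feature's minimum observed value.
    kept = kept.fillna(kept.min() / 2)
    # 3. Normalization: divide each sample by its total signal.
    return kept.div(kept.sum(axis=1), axis=0)

# Illustrative table: 4 samples x 3 features
table = pd.DataFrame({"m1": [100.0, 120.0, np.nan, 90.0],
                      "m2": [np.nan, np.nan, np.nan, 5.0],
                      "m3": [50.0, 55.0, 40.0, 60.0]})
print(process(table))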

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.

19.

Background

Random forest (RF) is a machine-learning method that generally works well with high-dimensional problems and allows for nonlinear relationships between predictors; however, the presence of correlated predictors has been shown to impact its ability to identify strong predictors. The Random Forest-Recursive Feature Elimination algorithm (RF-RFE) mitigates this problem in smaller data sets, but this approach has not been tested in high-dimensional omics data sets.

Results

We integrated 202,919 genotypes and 153,422 methylation sites in 680 individuals and compared the abilities of RF and RF-RFE to detect simulated causal associations, including simulated genotype–methylation interactions, between these variables and triglyceride levels. RF identified strong causal variables along with a few highly correlated variables, but it did not detect the remaining causal variables.
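On simulated data of this kind, the two approaches can be compared with standard tooling; the sketch below uses scikit-learn on a much smaller made-up data set (one causal variable with a correlated block plus a second causal variable) and is only an illustration of the comparison, not the study's code.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X[:, 2:5] = X[:, [0]] + rng.normal(scale=0.2, size=(200, 3))    # block correlated with feature 0
y = 2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200)     # trait (e.g. triglycerides)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
print("RF top features:", np.argsort(rf.feature_importances_)[::-1][:5])

# Recursive feature elimination wrapped around the same forest (RF-RFE)
rfe = RFE(RandomForestRegressor(n_estimators=300, random_state=0),
          n_features_to_select=5, step=5).fit(X, y)
print("RF-RFE selected:", np.flatnonzero(rfe.support_))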

Conclusions

Although RF-RFE decreased the importance of correlated variables, in the presence of many correlated variables, it also decreased the importance of causal variables, making both hard to detect. These findings suggest that RF-RFE may not scale to high-dimensional data.

20.

Background

An important feature of many genomic studies is quality control and normalization. This is particularly important when analyzing epigenetic data, where the measurement process can be prone to bias. The GAW20 data came from the Genetics of Lipid Lowering Drugs and Diet Network (GOLDN), a study of multigenerational families in which DNA cytosine-phosphate-guanine (CpG) methylation was measured pre- and posttreatment with fenofibrate. We performed quality control assessment of the GAW20 DNA methylation data, including normalization, assessment of batch effects and detection of sample swaps.

Results

We show that even after normalization, the GOLDN methylation data have systematic differences pre- and posttreatment. Through investigation of (a) CpG sites containing a single nucleotide polymorphism, (b) the stability of breeding values for methylation across time points, and (c) autosomal gender-associated CpGs, 13 sample swaps were detected, 11 of them posttreatment.
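One way such swaps can be flagged, sketched below with made-up data, is to correlate each posttreatment sample with every pretreatment sample across identity-informative CpGs (for example CpGs overlapping SNPs) and report samples whose best match carries a different label; this is an illustration, not the paper's exact procedure.

import numpy as np
import pandas as pd

def detect_swaps(pre: pd.DataFrame, post: pd.DataFrame) -> list:
    # Rows are samples, columns are identity-informative CpGs (beta values);
    # both frames must share the same columns.
    corr = np.corrcoef(post.to_numpy(), pre.to_numpy())[:len(post), len(post):]
    best_match = pre.index[corr.argmax(axis=1)]
    return [(s, m) for s, m in zip(post.index, best_match) if s != m]

# Illustrative data: four samples, with a B/C swap introduced posttreatment
rng = np.random.default_rng(0)
cpgs = [f"cg{i:07d}" for i in range(20)]
pre = pd.DataFrame(rng.uniform(size=(4, 20)), index=["A", "B", "C", "D"], columns=cpgs)
post = pre + rng.normal(scale=0.02, size=(4, 20))
post.index = ["A", "C", "B", "D"]
print(detect_swaps(pre, post))     # expected: [('C', 'B'), ('B', 'C')]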

Conclusions

This paper demonstrates several ways to perform quality control of methylation data in the absence of raw data files and highlights the importance of normalization and quality control of the GAW20 methylation data from the GOLDN study.
