20 similar documents found (search time: 265 ms)
1.
Rachel A. Spicer Christoph Steinbeck 《Metabolomics : Official journal of the Metabolomic Society》2018,14(1):16
Introduction
Data sharing is increasingly required by journals and has been heralded as a solution to the ‘replication crisis’.
Objectives
(i) Review the data sharing policies of the journals publishing the most metabolomics papers associated with open data and (ii) compare these journals’ policies to those of the journals that publish the most metabolomics papers.
Methods
A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.
Results
Journals that support data sharing are not necessarily those with the most papers associated with open metabolomics data.
Conclusion
Further efforts are required to improve data sharing in metabolomics.
2.
Izabella Surowiec Erik Johansson Frida Torell Helena Idborg Iva Gunnarsson Elisabet Svenungsson Per-Johan Jakobsson Johan Trygg 《Metabolomics : Official journal of the Metabolomic Society》2017,13(10):114
Introduction
Availability of large cohorts of samples with related metadata provides scientists with extensive material for studies. At the same time, the recent development of modern high-throughput ‘omics’ technologies, including metabolomics, has made the analysis of large sample sizes possible. Representative subset selection becomes critical when selecting samples from larger cohorts and dividing them into analytical batches, especially when relative quantification of compound levels is used.
Objectives
We present a multivariate strategy for representative sample selection and integration of results from multi-batch experiments in metabolomics.
Methods
Multivariate characterization was applied for design-of-experiment-based sample selection and subsequent subdivision into four analytical batches, which were analyzed on different days by metabolomics profiling using gas chromatography time-of-flight mass spectrometry (GC–TOF–MS). OPLS-DA® was used for each batch, and its p(corr) vectors were averaged to obtain a combined metabolic profile. Jackknifed standard errors were used to calculate confidence intervals for each metabolite in the average p(corr) profile.
Results
A combined, representative metabolic profile describing differences between systemic lupus erythematosus (SLE) patients and controls was obtained and used for elucidation of metabolic pathways that could be disturbed in SLE.
Conclusion
Design-of-experiment-based representative sample selection ensured diversity and minimized bias that could be introduced at this step. The combined metabolic profile enabled unified analysis and interpretation.
3.
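The batch-combination step described above (averaging per-batch p(corr) vectors and attaching jackknifed confidence intervals) can be sketched as follows; this is a minimal illustration with hypothetical batch vectors, not the authors' code:

```python
import math

def jackknife_ci(pcorr_per_batch, z=1.96):
    """Average p(corr) loadings across batches and attach jackknifed
    confidence intervals. `pcorr_per_batch` is a list of equal-length
    numeric vectors, one per analytical batch (hypothetical input)."""
    n = len(pcorr_per_batch)
    m = len(pcorr_per_batch[0])
    mean = [sum(b[j] for b in pcorr_per_batch) / n for j in range(m)]
    results = []
    for j in range(m):
        # Leave-one-batch-out means for metabolite j
        total = sum(b[j] for b in pcorr_per_batch)
        loo = [(total - b[j]) / (n - 1) for b in pcorr_per_batch]
        # Jackknife standard error around the full-sample mean
        se = math.sqrt((n - 1) / n * sum((x - mean[j]) ** 2 for x in loo))
        results.append((mean[j], mean[j] - z * se, mean[j] + z * se))
    return results
```

Each result triple is (average p(corr), lower bound, upper bound); metabolites whose interval excludes zero would be the ones flagged as consistently discriminating across batches.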
Douglas B. Kell Stephen G. Oliver 《Metabolomics : Official journal of the Metabolomic Society》2016,12(9):148
Background
The term ‘metabolome’ was introduced to the scientific literature in September 1998.
Aim and key scientific concepts of the review
To mark its 18-year-old ‘coming of age’, two of the co-authors of that paper review the genesis of metabolomics, whence it has come and where it may be going.
4.
Nicholas J. Bond Albert Koulman Julian L. Griffin Zoe Hall 《Metabolomics : Official journal of the Metabolomic Society》2017,13(11):128
Introduction
Mass spectrometry imaging (MSI) experiments result in complex multi-dimensional datasets, which require specialist data analysis tools.
Objectives
We have developed massPix, an R package for analysing and interpreting data from MSI of lipids in tissue.
Methods
massPix produces single ion images, performs multivariate statistics and provides putative lipid annotations based on accurate mass matching against generated lipid libraries.
Results
Classification of tissue regions with high spectral similarity can be carried out by principal components analysis (PCA) or k-means clustering.
Conclusion
massPix is an open-source tool for the analysis and statistical interpretation of MSI data, and is particularly useful for lipidomics applications.
5.
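The k-means option mentioned in the Results can be illustrated with a minimal clustering of pixel spectra; this sketch is generic Python, not massPix's R implementation:

```python
import random

def kmeans(spectra, k, iters=20, seed=0):
    """Minimal k-means over pixel spectra (each a list of intensities).
    Illustrative of the clustering step only."""
    rng = random.Random(seed)
    centroids = rng.sample(spectra, k)
    labels = [0] * len(spectra)
    for _ in range(iters):
        # Assign each spectrum to its nearest centroid (squared Euclidean)
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(s, centroids[c])))
                  for s in spectra]
        # Recompute each centroid as the mean of its cluster members
        for c in range(k):
            members = [s for s, l in zip(spectra, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels, centroids
```

Pixels whose spectra land in the same cluster would be shaded alike in the resulting tissue-region map.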
Benjamin M. Delory Caroline Baudson Yves Brostaux Guillaume Lobet Patrick du Jardin Loïc Pagès Pierre Delaplace
Background and aims
To analyse root system architectures (RSAs) from captured images, a variety of manual (e.g. Data Analysis of Root Tracings, DART), semi-automated and fully automated software packages have been developed. These tools offer complementary approaches to study RSAs, and the use of the Root System Markup Language (RSML) to store RSA data makes it easier to compare measurements obtained with different (semi-)automated root imaging platforms. The throughput of the data analysis process using exported RSA data, however, should benefit greatly from batch analysis in a generic data analysis environment (the R software).
Methods
We developed an R package (archiDART) with five functions. It computes global RSA traits, root growth rates, root growth directions and trajectories, and lateral root distribution from DART-generated and/or RSML files. It also has specific plotting functions designed to visualise the dynamics of root system growth.
Results
The results demonstrated the ability of the package’s functions to compute relevant traits for three contrasted RSAs (Brachypodium distachyon [L.] P. Beauv., Hevea brasiliensis Müll. Arg. and Solanum lycopersicum L.).
Conclusions
This work extends the DART software package and other image analysis tools supporting the RSML format, enabling users to easily calculate a number of RSA traits in a generic data analysis environment.
6.
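As a hedged illustration of one global RSA trait such a package might compute, total root length can be taken as the sum of segment lengths along digitized root polylines (the coordinate-list input here is hypothetical and is not the RSML schema):

```python
import math

def total_root_length(roots):
    """Sum of Euclidean segment lengths over a list of root polylines,
    each given as a list of (x, y) points. One of the simplest global
    RSA traits; a sketch, not archiDART's code."""
    total = 0.0
    for points in roots:
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            total += math.hypot(x1 - x0, y1 - y0)
    return total
```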
7.
David Ameisen Christophe Deroulers Valérie Perrier Fatiha Bouhidel Maxime Battistella Luc Legrès Anne Janin Philippe Bertheau Jean-Baptiste Yunès 《Diagnostic pathology》2014,9(Z1):S3
Background
Since microscopic slides can now be automatically digitized and integrated into the clinical workflow, quality assessment of Whole Slide Images (WSI) has become a crucial issue. We present a no-reference quality assessment method that has been thoroughly tested since 2010 and is being implemented at multiple sites, both public university hospitals and private entities. It is part of the FlexMIm R&D project, which aims to improve the global workflow of digital pathology. For these uses, we have developed two programming libraries, in Java and Python, which can be integrated in various types of WSI acquisition systems, viewers and image analysis tools.
Methods
Development and testing were carried out on a MacBook Pro i7 and on a bi-Xeon 2.7 GHz server. Libraries implementing the blur assessment method have been developed in Java, Python, PHP5 and MySQL5. For web applications, JavaScript, Ajax, JSON and Sockets were also used, as well as the Google Maps API. Aperio SVS files were converted into the Google Maps format using the VIPS and OpenSlide libraries.
Results
We designed the Java library as a Service Provider Interface (SPI), extendable by third parties. Analysis is computed in real time (3 billion pixels per minute). Tests were made on 5000 single images, 200 NDPI WSI, and 100 Aperio SVS WSI converted to the Google Maps format.
Conclusions
Applications based on our method and libraries can be used upstream, as calibration and quality control tools for WSI acquisition systems, or as tools to reacquire tiles while the WSI is being scanned. They can also be used downstream to reacquire complete slides that are below the quality threshold for surgical pathology analysis. WSI may also be displayed in a smarter way by sending and displaying the regions of highest quality before other regions. Such quality assessment scores could be integrated as WSI metadata shared in clinical, research or teaching contexts, for a more efficient medical informatics workflow.
9.
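The abstract does not specify its blur measure; a common stand-in for no-reference sharpness scoring of a greyscale tile is the variance of a discrete Laplacian, sketched below (purely illustrative, not the FlexMIm method; low variance suggests a blurred tile):

```python
def laplacian_blur_score(img):
    """Variance of a 4-neighbour Laplacian over a greyscale tile given
    as a list of rows of pixel values. A generic sharpness proxy, not
    the paper's algorithm: sharp edges yield large Laplacian responses,
    so higher variance indicates a sharper tile."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```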
A. J. J. IJsselmuiden E. M. Zwaan R. M. Oemrawsingh M. J. Bom F. J. W. M. Dankers M. J. de Boer C. Camaro R. J. M. van Geuns J. Daemen D. J. van der Heijden J. W. Jukema A. O. Kraaijeveld M. Meuwissen B. E. Schölzel G. Pundziute P. van der Harst J. van Ramshorst M. T. Dirksen C. Zivelonghi P. Agostoni J. A. S. van der Heyden J. J. Wykrzykowska M. J. Scholte H. M. Nef M. J. M. Kofflard N. van Royen M. Alings E. Kedhi 《Netherlands heart journal》2018,26(10):473-483
Introduction
Optical coherence tomography (OCT) enables detailed imaging of the coronary wall, lumen and intracoronary implanted devices. Responding to the lack of specific appropriate use criteria (AUC) for this technique, we conducted a literature review and a procedure for establishing appropriate use criteria.
Methods
Twenty-one of the 184 members of the Dutch Working Group on Interventional Cardiology agreed to evaluate 49 pre-specified cases. During a meeting, factual indications were established, whereupon members individually rated the indications on a 9-point scale, with the opportunity to substantiate their scoring.
Results
Twenty-six indications were rated ‘Appropriate’, eighteen ‘May be appropriate’, and five ‘Rarely appropriate’. Use of OCT was unanimously considered ‘Appropriate’ in stent thrombosis, and ‘Appropriate’ for guidance in PCI, especially in the distal left main coronary artery and proximal left anterior descending coronary artery, for unexplained angiographic abnormalities, and for use of a bioresorbable vascular scaffold (BVS). OCT was considered ‘Rarely appropriate’ on top of fractional flow reserve (FFR) for treatment indication, for assessment of strut coverage, for bypass anastomoses, and for assessment of the proximal left main coronary artery.
Conclusions
The use of OCT in stent thrombosis is unanimously considered ‘Appropriate’ by these experts. Varying degrees of consensus exist on the appropriate use of OCT in other settings.
10.
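Appropriateness categories are conventionally derived from the panel's median rating on the 9-point scale (7 to 9 ‘Appropriate’, 4 to 6 ‘May be appropriate’, 1 to 3 ‘Rarely appropriate’); assuming that common RAND-style rule, which the abstract does not confirm as this working group's exact procedure, the mapping is:

```python
import statistics

def rate_indication(scores):
    """Map a panel's 1-9 ratings to an appropriateness category via the
    median score, following the common RAND-style convention (an
    assumption here, not necessarily this working group's rule)."""
    med = statistics.median(scores)
    if med >= 7:
        return 'Appropriate'
    if med >= 4:
        return 'May be appropriate'
    return 'Rarely appropriate'
```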
Mu Wang Ouyan Rang Fang Liu Wei Xia Yuanyuan Li Yu Zhang Songfeng Lu Shunqing Xu 《Metabolomics : Official journal of the Metabolomic Society》2018,14(4):45
Introduction
Bisphenol A (BPA), 2,2-bis(4-hydroxyphenyl)propane, is a common industrial chemical produced in extremely large quantities worldwide and is ubiquitous in the environment. Humans are at high risk of exposure to BPA, and the health problems caused by BPA exposure have aroused public concern. However, biomarkers of BPA exposure are lacking. As a rapidly developing field, metabolomics has accumulated a large amount of valuable data in various areas. The secondary use of published metabolomics data could be a very promising way to generate novel biomarkers while furthering our understanding of toxicity mechanisms.
Objectives
To summarize the published literature on the use of metabolomics as a tool to study BPA exposure and to provide a systematic perspective on current research on biomarker screening for BPA exposure.
Methods
We conducted a systematic search of MEDLINE (PubMed) up to June 25, 2017 with the key term combinations ‘metabolomics’, ‘metabonomics’, ‘mass spectrometry’, ‘nuclear magnetic spectroscopy’, ‘metabolic profiling’ and ‘amino acid profile’ combined with ‘BPA exposure’. Additional articles were identified by searching the reference lists of included studies.
Results
This systematic review included 15 articles. Intermediates of glycolysis, the Krebs cycle, β-oxidation of long-chain fatty acids, the pentose phosphate pathway, nucleoside metabolism, branched-chain amino acid metabolism, aromatic amino acid metabolism and sulfur-containing amino acid metabolism were significantly changed after BPA exposure, suggesting that BPA has highly complex toxic effects on organisms, consistent with existing studies. The biomarkers most consistently associated with BPA exposure were lactate and choline.
Conclusion
Existing metabolomics studies of BPA exposure present heterogeneous findings regarding metabolite profile characteristics. More evidence from targeted metabolomics and epidemiological studies is needed to further examine the reliability of these biomarkers linked to low, environmentally relevant BPA exposure in the human body.
11.
Background
Sequence alignment data is often ordered by coordinate (the id of the reference sequence plus the position on that sequence where the fragment was mapped) when stored in BAM files, as this simplifies the extraction of variants between the mapped data and the reference, or of variants within the mapped data. In this order, paired reads are usually separated in the file, which complicates other applications, such as duplicate marking or conversion to the FastQ format, that require access to the full information of each pair.
Results
In this paper we introduce biobambam, a set of tools based on the efficient collation of alignments in BAM files by read name. The collation algorithm employed avoids time- and space-consuming sorting of alignments by read name where this is possible without using more than a specified amount of main memory. Using this algorithm, tasks like duplicate marking in BAM files and conversion of BAM files to the FastQ format can be performed very efficiently with limited resources. We also make the collation algorithm available in the form of an API for other projects. This API is part of the libmaus package.
Conclusions
In comparison with previous approaches to problems involving the collation of alignments by read name, such as BAM to FastQ conversion or duplicate marking utilities, our approach can often perform an equivalent task more efficiently in terms of the required main memory and run time. Our BAM to FastQ conversion is faster than all widely known alternatives, including Picard and bamUtil. Our duplicate marking is about as fast as the closest competitor, bamUtil, for small data sets and faster than all known alternatives on large and complex data sets.
12.
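The core collation idea, pairing reads by name as the file streams past and holding unmatched mates until their partner arrives, can be sketched as follows; biobambam additionally bounds memory by spilling to temporary files, which this illustration omits:

```python
def collate_pairs(records):
    """Stream (read_name, data) records and yield each mate pair as soon
    as both mates have been seen, holding unmatched mates in a dict.
    A sketch of the collation idea only, not biobambam's algorithm."""
    pending = {}
    for name, data in records:
        if name in pending:
            # Partner already seen: emit the completed pair
            yield name, pending.pop(name), data
        else:
            pending[name] = data
    # Any reads left in `pending` are orphans; ignored in this sketch.
```

In a coordinate-sorted file mates are usually close enough that `pending` stays small, which is why streaming collation can beat a full sort by read name.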
Background
Next-generation sequencing can determine DNA bases, and the results of sequence alignments are generally stored in files in the Sequence Alignment/Map (SAM) format or its compressed binary version (BAM). SAMtools is a typical tool for dealing with files in the SAM/BAM format. SAMtools has various functions, including detection of variants, visualization of alignments, indexing, extraction of parts of the data and loci, and conversion of file formats. It is written in C and executes quickly. However, SAMtools requires an additional implementation to be used in parallel with, for example, OpenMP (Open Multi-Processing) libraries. Given the accumulation of next-generation sequencing data, a simple parallelization program that can support cloud and PC cluster environments is required.
Results
We have developed cljam using the Clojure programming language, which simplifies parallel programming, to handle SAM/BAM data. Cljam can run in a Java runtime environment (e.g., Windows, Linux, Mac OS X) with Clojure.
Conclusions
Cljam can process and analyze SAM/BAM files in parallel and at high speed. The execution time with cljam is almost the same as with SAMtools. The cljam code is written in Clojure and has fewer lines than other similar tools.
13.
Background
Children receiving Total Body Irradiation (TBI) in preparation for Hematopoietic Stem Cell Transplantation (HSCT) are at risk of Growth Hormone Deficiency (GHD), which sometimes severely compromises their Final Height (FH). To better represent the impact of such therapies on growth, we apply a mathematical model that accounts both for the gompertzian-like growth trend and for the hormone-related ‘spurts’, and evaluate how the parameter values estimated for children undergoing TBI differ from those of the matched normal population.
Methods
25 long-term survivors of childhood lymphoblastic and myeloid acute leukaemia followed at the Pediatric Onco-Hematology, Stem Cell Transplantation and Cellular Therapy Division, Regina Margherita Children’s Hospital (Turin, Italy) were retrospectively analysed to assess the influence of TBI on their longitudinal growth and to validate a new method for estimating the effects of GH therapy. Six were treated with GH therapy after a GHD diagnosis.
Results
We show that when TBI was performed before puberty, overall growth and pubertal duration were significantly impaired, but these growth limitations were completely reverted in the small sample (6 of 25) of children who underwent GH replacement therapy.
Conclusion
Since in principle the model can account for any additional growth ‘spurt’ induced by therapy, it may become a useful ‘simulation’ tool for paediatricians to compare the predicted effectiveness of a therapy depending on its timing and dosage.
14.
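The gompertzian-like baseline of such a model can be written as height(t) = A * exp(-b * exp(-c * t)); a minimal evaluation is shown below (the hormone-related 'spurt' terms the paper adds on top of this baseline are omitted, and the parameter values in the test are hypothetical):

```python
import math

def gompertz_height(t, A, b, c):
    """Gompertzian growth trend: height(t) = A * exp(-b * exp(-c * t)).
    A is the asymptotic (final) height; b and c shape the curve.
    Illustrative of the baseline term only, not the paper's full model."""
    return A * math.exp(-b * math.exp(-c * t))
```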
Minggang Wang Arjen Biere Wim H. van der Putten T. Martijn Bezemer E. Pernilla Brinkman 《Plant and Soil》2017,412(1-2):215-233
Aims
A soil-plant-atmosphere continuum (SPAC) model for simulating tree transpiration (Ep) under variable water stress and water distribution in the soil is presented. The model couples a sun/shade approach for the canopy with a discrete representation of the soil in different layers and compartments.
Methods
To test its performance, the outputs of the simulations are compared to those from an experiment using olive ‘Picual’ and almond ‘Marinada’ trees with the root system split into two. The trees are subjected to different irrigation phases in which one side of the root system is dried out while the other is kept wet.
Results
The model is able to accurately predict Ep (R2 and the efficiency factor (EF) around 0.9) in the two species studied. A function that modulates the uptake capacity of a root according to the soil water content was necessary to track the fluxes observed from each split part. Accounting for root clumping was also needed to match the measured and modelled leaf water potential.
Conclusions
Coupling the sun/shade approach with the multi-compartment soil solution provides a useful tool to explore tree Ep under different degrees of water availability and distribution.
15.
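The efficiency factor is commonly computed as the Nash-Sutcliffe model efficiency (an assumption here, since the abstract does not define EF):

```python
def nash_sutcliffe_ef(observed, modelled):
    """Nash-Sutcliffe model efficiency: 1 minus the ratio of the sum of
    squared model errors to the variance of the observations about their
    mean. EF = 1 means a perfect fit; values near 0.9, as reported for
    Ep here, indicate close agreement."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - m) ** 2 for o, m in zip(observed, modelled))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - sse / sst
```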
Hongchao Ji Zhimin Zhang Hongmei Lu 《Metabolomics : Official journal of the Metabolomic Society》2018,14(5):68
Introduction
Untargeted and targeted analyses are two classes of metabolic study. Both strategies have been advanced by high-resolution mass spectrometers coupled with chromatography, which offer high mass sensitivity and accuracy. State-of-the-art methods for mass spectrometric data sets do not always quantify metabolites of interest in a targeted assay efficiently and accurately.
Objectives
TarMet can quantify targeted metabolites as well as their isotopologues through a reactive and user-friendly graphical user interface.
Methods
TarMet accepts vendor-neutral data files (NetCDF, mzXML and mzML) as inputs. It then extracts ion chromatograms, detects peak positions and bounds, and confirms the metabolites via their isotope patterns. It can integrate peak areas for all isotopologues automatically.
Results
TarMet detects more isotopologues and quantifies them better than state-of-the-art methods, and it handles isotope tracer assays well.
Conclusion
TarMet is a better tool for targeted metabolic and stable isotope tracer analyses.
16.
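The chromatogram-extraction step can be illustrated generically: for each scan, sum the intensities of peaks falling within a ppm tolerance of the target m/z. This is a sketch of the general technique, not TarMet's implementation:

```python
def extract_eic(scans, target_mz, tol_ppm=10.0):
    """Build an extracted ion chromatogram from a list of
    (retention_time, peak_list) scans, where each peak is an
    (mz, intensity) pair. Intensities of peaks within a ppm window of
    the target m/z are summed per scan. Generic sketch only."""
    tol = target_mz * tol_ppm / 1e6
    eic = []
    for rt, peaks in scans:
        intensity = sum(i for mz, i in peaks if abs(mz - target_mz) <= tol)
        eic.append((rt, intensity))
    return eic
```

Peak detection and isotopologue confirmation would then operate on the resulting (retention time, intensity) trace.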
17.
Background
Structural variations (SVs) are widespread in human genomes and may have important implications in disease-related and evolutionary studies. High-throughput sequencing (HTS) has become a major platform for SV detection, and simulation serves as a powerful and cost-effective approach for benchmarking SV detection algorithms. Accurate performance assessment by simulation requires a simulator capable of generating data with all the important features of real data, such as GC biases in HTS data and the various complexities of tumor data. However, no available package has systematically addressed all these issues in data simulation for SV benchmarking.
Results
Pysim-sv is a package for simulating HTS data to evaluate the performance of SV detection algorithms. Pysim-sv can introduce a wide spectrum of germline and somatic genomic variations. The package contains functionality to simulate tumor data with aneuploidy and heterogeneous subclones, which is very useful for assessing algorithm performance in tumor studies. Furthermore, Pysim-sv can introduce GC bias, the most important and prevalent bias in HTS data, into the simulated HTS data.
Conclusions
Pysim-sv provides an unbiased toolkit for evaluating HTS-based SV detection algorithms.
18.
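GC bias of the kind mentioned can be mimicked by thinning simulated fragments with a GC-dependent acceptance probability; the sketch below is illustrative only and is not Pysim-sv's bias model:

```python
import random

def gc_biased_sample(fragments, bias, seed=0):
    """Thin a list of DNA fragment strings according to a GC-dependent
    acceptance probability, mimicking GC bias in HTS coverage. `bias`
    maps a GC fraction in [0, 1] to an acceptance probability in [0, 1].
    Purely illustrative."""
    rng = random.Random(seed)
    kept = []
    for frag in fragments:
        gc = sum(b in 'GCgc' for b in frag) / len(frag)
        if rng.random() < bias(gc):
            kept.append(frag)
    return kept
```

A benchmark would compare an SV caller's performance on unbiased versus GC-thinned simulated reads.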
Background
One of the recent challenges of computational biology is the development of new algorithms, tools and software to facilitate predictive modeling of the big data generated by high-throughput technologies in biomedical research.
Results
To meet these demands we developed PROPER, a package for the visual evaluation of ranking classifiers for biological big-data mining studies in the MATLAB environment.
Conclusion
PROPER is an efficient tool for the optimization and comparison of ranking classifiers, providing over 20 different two- and three-dimensional performance curves.
19.
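One standard ranking-classifier performance curve of the kind such packages draw, the ROC curve, can be computed by sweeping a decision threshold over the sorted scores; this generic sketch is Python rather than PROPER's MATLAB:

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) points for a ranking classifier by lowering
    the decision threshold one ranked example at a time. `labels` are
    1 for positives, 0 for negatives; ties in score are not handled
    specially in this sketch."""
    pos = sum(labels)
    neg = len(labels) - pos
    ranked = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in ranked:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points
```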
V. Carlassara E. Lampo B. Degryse C. Van Audenhove N. Spruytte 《Tijdschrift voor gerontologie en geriatrie》2017,48(2):67-76