Similar Literature
 20 similar documents retrieved.
1.
《遗传学报》2021,48(7):520-530
Genetic, epigenetic, and metabolic alterations are all hallmarks of cancer. However, the epigenome and metabolome are both highly complex and dynamic biological networks in vivo. The interplay between the epigenome and metabolome contributes to a biological system that is responsive to the tumor microenvironment and possesses a wealth of unknown biomarkers and targets of cancer therapy. From this perspective, we first review the state of high-throughput biological data acquisition (i.e., multiomics data) and analysis (i.e., computational tools) and then propose a conceptual in silico metabolic and epigenetic regulatory network (MER-Net) that is based on these current high-throughput methods. The conceptual MER-Net is aimed at linking metabolomic and epigenomic networks through observation of biological processes, omics data acquisition, analysis of network information, and integration with validated database knowledge. Thus, MER-Net could be used to reveal new potential biomarkers and therapeutic targets using deep learning models to integrate and analyze large multiomics networks. We propose that MER-Net can serve as a tool to guide integrated metabolomics and epigenomics research or can be modified to answer other complex biological and clinical questions using multiomics data.

2.
Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.  相似文献   
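The abstract does not detail the image-analysis algorithms. As a minimal sketch of the kind of quantitative readout such tools produce — hypothetical, not the CPIB software's actual API — the following computes the root tip angle relative to the gravity vector from tip positions tracked across a time-lapse series, the basic measurement in a gravitropic-response experiment; the coordinates are invented.

```python
import numpy as np

def tip_angle_from_vertical(p_prev, p_curr):
    """Angle (degrees) between the root tip growth direction and the downward
    gravity vector, from two tracked tip positions (x, y) in image coordinates
    where y increases downwards."""
    dx, dy = np.subtract(p_curr, p_prev)
    growth = np.array([dx, dy], dtype=float)
    gravity = np.array([0.0, 1.0])  # straight down in image coordinates
    cosang = np.dot(growth, gravity) / (np.linalg.norm(growth) + 1e-12)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical time-lapse track of a tip reorienting after a 90-degree rotation
track = [(100, 50), (108, 52), (115, 57), (120, 64), (123, 72)]
angles = [tip_angle_from_vertical(a, b) for a, b in zip(track, track[1:])]
print(angles)  # decreasing angle = tip bending back toward the gravity vector
```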

3.
Currently, the vital impact of environmental pollution on economic, social and health dimensions has been recognized. The need for theoretical and implementation frameworks for the acquisition, modeling and analysis of environmental data, as well as tools to conceive and validate scenarios, is becoming increasingly important. For these reasons, different environmental simulation models have been developed. Researchers and stakeholders need efficient tools to store, display, compare and analyze data that are produced by simulation models. One common way to manage simulation results is to use text files; however, text files make it difficult to explore the data. Spreadsheet tools (e.g., OpenOffice, MS Excel) can help to display and analyze model results, but they are not suitable for very large volumes of information. Recently, some studies have shown the feasibility of using Data Warehouse (DW) and On-Line Analytical Processing (OLAP) technologies to store model results and to facilitate model visualization, analysis and comparisons. These technologies allow model users to easily produce graphical reports and charts. In this paper, we address the analysis of pesticide transfer simulation results by warehousing and applying OLAP to data produced by the MACRO simulation model. This model simulates hydrological transfers of pesticides at the plot scale. We demonstrate how the simulation results can be managed using DW technologies. We also demonstrate how the use of integrity constraints can improve OLAP analysis. These constraints are used to maintain the quality of the warehoused data as well as to maintain the aggregations and queries, which will lead to better analysis, conclusions and decisions.
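As a minimal sketch of the warehousing idea described above — not the paper's actual schema — the following builds a tiny star schema for simulation outputs in SQLite, adds an integrity constraint on the fact table, and runs an OLAP-style roll-up query; the table names, columns and values are invented for illustration.

```python
import sqlite3

# Hypothetical star schema: a fact table of simulated pesticide losses keyed
# to time and scenario dimensions.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_time (time_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_scenario (scenario_id INTEGER PRIMARY KEY, pesticide TEXT, soil TEXT);
CREATE TABLE fact_transfer (
    time_id INTEGER REFERENCES dim_time(time_id),
    scenario_id INTEGER REFERENCES dim_scenario(scenario_id),
    leached_mg_per_ha REAL CHECK (leached_mg_per_ha >= 0)  -- integrity constraint
);
""")
con.executemany("INSERT INTO dim_time VALUES (?,?,?)", [(1, 2020, 5), (2, 2020, 6)])
con.executemany("INSERT INTO dim_scenario VALUES (?,?,?)",
                [(1, "atrazine", "clay"), (2, "atrazine", "sand")])
con.executemany("INSERT INTO fact_transfer VALUES (?,?,?)",
                [(1, 1, 0.8), (2, 1, 1.4), (1, 2, 2.1), (2, 2, 3.0)])

# OLAP-style roll-up: total simulated loss per soil type per year
for row in con.execute("""
    SELECT s.soil, t.year, SUM(f.leached_mg_per_ha)
    FROM fact_transfer f
    JOIN dim_time t ON f.time_id = t.time_id
    JOIN dim_scenario s ON f.scenario_id = s.scenario_id
    GROUP BY s.soil, t.year
"""):
    print(row)
```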

4.
Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it.
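A minimal sketch of the rule-to-SQL idea, assuming a simplified rule format: declarative mappings (plain dictionaries standing in for ontology-level transformation rules) are compiled into SQL INSERT...SELECT statements. The table names, concept codes and unit conversions below are invented for illustration and are not the paper's actual constructs.

```python
# Hypothetical declarative mapping rules, compiled into SQL ETL statements.
RULES = [
    {"source_table": "ehr_labs", "source_column": "gluc_mgdl",
     "target_table": "research_obs", "target_concept": "LOINC:2345-7",
     "transform": "value / 18.0"},                  # mg/dL -> mmol/L
    {"source_table": "ehr_vitals", "source_column": "temp_f",
     "target_table": "research_obs", "target_concept": "LOINC:8310-5",
     "transform": "(value - 32) * 5.0 / 9.0"},      # Fahrenheit -> Celsius
]

def rule_to_sql(rule):
    """Turn one declarative rule into an INSERT...SELECT ETL statement."""
    expr = rule["transform"].replace("value", rule["source_column"])
    return (f"INSERT INTO {rule['target_table']} (concept_id, value_num) "
            f"SELECT '{rule['target_concept']}', {expr} FROM {rule['source_table']};")

for r in RULES:
    print(rule_to_sql(r))
```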

5.
Marker-assisted selection (MAS) uses genetic marker genotypes to predict an animal's production potential and will provide additional selection information for progeny testing. With the discovery of highly polymorphic microsatellite markers, the tools now exist to begin the search for economic trait loci (ETL), which is the first step toward MAS. The objective of this study was to identify ETL for somatic cell score in an existing Holstein population. Using the granddaughter design, sons from seven grandsire families were genotyped with 20 autosomal microsatellites from five chromosomes (4, 8, 13, 17, 23), with an emphasis on chromosome 23, which is the location of the bovine major histocompatibility complex (BoLA). Selective genotyping was used to reduce the number of genotypes required, in which the 10 highest and 10 lowest sons from the phenotypic distribution curve were tested (140 sons in seven families). One marker (513), located near BoLA, showed evidence of an ETL in three of five polymorphic families. Additional sons were genotyped from the five families to estimate the effect and to compare selective and ‘complete’ genotyping. Both methods detected an ETL at marker 513, but in different families. This study provides evidence of the usefulness of microsatellite markers and the granddaughter design in the detection of ETL; however, additional markers need to be evaluated to determine the usefulness of selective genotyping. Based on the results from the 20 studied markers, the most likely position of a somatic cell score ETL lies near marker 513, located on chromosome 23.  相似文献   

6.
Many animals use tools but only humans are generally considered to have the cognitive sophistication required for cumulative technological evolution. Three important characteristics of cumulative technological evolution are: (i) the diversification of tool design; (ii) cumulative change; and (iii) high-fidelity social transmission. We present evidence that crows have diversified and cumulatively changed the design of their pandanus tools. In 2000 we carried out an intensive survey in New Caledonia to establish the geographical variation in the manufacture of these tools. We documented the shapes of 5550 tools from 21 sites throughout the range of pandanus tool manufacture. We found three distinct pandanus tool designs: wide tools, narrow tools and stepped tools. The lack of ecological correlates of the three tool designs and their different, continuous and overlapping geographical distributions make it unlikely that they evolved independently. The similarities in the manufacture method of each design further suggest that pandanus tools have gone through a process of cumulative change from a common historical origin. We propose a plausible scenario for this rudimentary cumulative evolution.  相似文献   

7.
8.
The 'crafting' of tools involves (i) selection of appropriate raw material, (ii) preparatory trimming and (iii) fine, three-dimensional sculpting. Its evolution is technologically important because it allows the open-ended development of tools. New Caledonian crows manufacture an impressive range of stick and leaf tools. We previously reported that their toolkit included hooked implements made from leafy twigs, although their manufacture had never been closely observed. We describe the manufacture of 10 hooked-twig tools by an adult crow and its dependent juvenile. To make all 10 tools, the crows carried out a relatively invariant three-step sequence of complex manipulations that involved (i) the selection of raw material, (ii) trimming and (iii) a lengthy sculpting of the hook. Hooked-twig manufacture contrasts with the lack of sculpting in the making of wooden tools by other non-humans such as chimpanzees and woodpecker finches. This fine, three-stage crafting process removes another alleged difference between humans and other animals.  相似文献   

9.
In this paper, we discuss the properties of biological data and challenges it poses for data management, and argue that, in order to meet the data management requirements for 'digital biology', careful integration of the existing technologies and the development of new data management techniques for biological data are needed. Based on this premise, we present PathCase: Case Pathways Database System. PathCase is an integrated set of software tools for modelling, storing, analysing, visualizing and querying biological pathways data at different levels of genetic, molecular, biochemical and organismal detail. The novel features of the system include: (i) genomic information integrated with other biological data and presented starting from pathways; (ii) design for biologists who are possibly unfamiliar with genomics, but whose research is essential for annotating gene and genome sequences with biological functions; (iii) database design, implementation and graphical tools which enable users to visualize pathways data in multiple abstraction levels and to pose exploratory queries; (iv) a wide range of different types of queries including, 'path' and 'neighbourhood queries' and graphical visualization of query outputs; and (v) an implementation that allows for web (XML)-based dissemination of query outputs (i.e. pathways data in BIOPAX format) to researchers in the community, giving them control on the use of pathways data.  相似文献   
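To illustrate what "path" and "neighbourhood" queries over pathways data can look like — a generic graph sketch, not PathCase's actual data model or query language — the following uses a small invented glycolysis fragment.

```python
import networkx as nx

# Hypothetical fragment of a metabolic pathway graph (invented edges).
g = nx.DiGraph()
g.add_edges_from([
    ("glucose", "glucose-6-P"), ("glucose-6-P", "fructose-6-P"),
    ("fructose-6-P", "fructose-1,6-bisP"), ("glucose-6-P", "6-P-gluconolactone"),
])

# "Path query": how does glucose reach fructose-1,6-bisP?
print(nx.shortest_path(g, "glucose", "fructose-1,6-bisP"))

# "Neighbourhood query": everything within two reaction steps of glucose-6-P
print(sorted(nx.single_source_shortest_path_length(
    g.to_undirected(), "glucose-6-P", cutoff=2)))
```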

10.
An architecture for biological information extraction and representation
MOTIVATION: Technological advances in biomedical research are generating a plethora of heterogeneous data at a high rate. There is a critical need for extraction, integration and management tools for information discovery and synthesis from these heterogeneous data. RESULTS: In this paper, we present a general architecture, called ALFA, for information extraction and representation from diverse biological data. The ALFA architecture consists of: (i) a networked, hierarchical, hyper-graph object model for representing information from heterogeneous data sources in a standardized, structured format; and (ii) a suite of integrated, interactive software tools for information extraction and representation from diverse biological data sources. As part of our research efforts to explore this space, we have currently prototyped the ALFA object model and a set of interactive software tools for searching, filtering, and extracting information from scientific text. In particular, we describe BioFerret, a meta-search tool for searching and filtering relevant information from the web, and ALFA Text Viewer, an interactive tool for user-guided extraction, disambiguation, and representation of information from scientific text. We further demonstrate the potential of our tools in integrating the extracted information with experimental data and diagrammatic biological models via the common underlying ALFA representation. CONTACT: aditya_vailaya@agilent.com.

11.
In vivo microscopy generates images that contain complex information on the dynamic behaviour of three-dimensional (3D) objects. As a result, adapted mathematical and computational tools are required to help in their interpretation. Ideally, a complete software chain to study the dynamics of a complex 3D object should include: (i) the acquisition, (ii) the preprocessing and (iii) segmentation of the images, followed by (iv) a reconstruction in time and space and (v) the final quantitative analysis. Here, we have developed such a protocol to study cell dynamics at the shoot apical meristem in Arabidopsis. The protocol uses serial optical sections made with the confocal microscope. It includes specially designed algorithms to automate the identification of cell lineage and to analyse the quantitative behaviour of the meristem surface.  相似文献   

12.
Brunger AT 《Nature protocols》2007,2(11):2728-2733
Version 1.2 of the software system, termed Crystallography and NMR system (CNS), for crystallographic and NMR structure determination has been released. Since its first release, the goals of CNS have been (i) to create a flexible computational framework for exploration of new approaches to structure determination, (ii) to provide tools for structure solution of difficult or large structures, (iii) to develop models for analyzing structural and dynamical properties of macromolecules and (iv) to integrate all sources of information into all stages of the structure determination process. Version 1.2 includes an improved model for the treatment of disordered solvent for crystallographic refinement that employs a combined grid search and least-squares optimization of the bulk solvent model parameters. The method is more robust than previous implementations, especially at lower resolution, generally resulting in lower R values. Other advances include the ability to apply thermal factor sharpening to electron density maps. Consistent with the modular design of CNS, these additions and changes were implemented in the high-level computing language of CNS.  相似文献   
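For orientation, thermal-factor (B-factor) sharpening of map coefficients amounts to multiplying structure-factor amplitudes by exp(-B_sharp·s²/4), where s = 1/d and a negative B_sharp boosts the high-resolution terms. The sketch below illustrates that standard relation with invented numbers; it is not CNS code.

```python
import numpy as np

def sharpen_amplitudes(f_obs, d_spacing, b_sharp=-50.0):
    """Apply a B-factor sharpening term exp(-B_sharp * s^2 / 4), s = 1/d (1/Angstrom),
    to structure-factor amplitudes before map calculation. A negative B_sharp
    up-weights high-resolution reflections (illustrative values only)."""
    s = 1.0 / np.asarray(d_spacing)
    return np.asarray(f_obs) * np.exp(-b_sharp * s**2 / 4.0)

# Hypothetical reflections: amplitudes at 8, 4, and 2 Angstrom resolution
print(sharpen_amplitudes([1000.0, 400.0, 120.0], [8.0, 4.0, 2.0]))
```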

13.
Perovskite solar cells (PSCs) have shown great potential for photovoltaic applications with their unprecedented advances in power conversion efficiency. Such devices generally have a complex structure design, with high-temperature-processed TiO2 as the electron transport layer (ETL). Further careful design of the device configuration to fully tap the potential of perovskite materials is expected. In particular, for the practical application of PSCs, it is crucial to simplify their device structures, and thus the associated manufacturing processes and costs, while keeping their efficiency comparable with that of conventional devices. But how simple is simple? ETL-free PSCs promise the simplest device structure, and thus simple manufacturing processes and low-cost, large-area PSCs in practical applications. They can also help to further explore the great potential of perovskite materials and to understand the working principles of PSCs. Within this review, the evolution of the PSC is outlined by discussing recent advances in the simplification of device configurations and processes for cost-effective, highly efficient, and robust PSCs, with a focus on ETL-free PSCs. Their advancement, key issues, working mechanisms, existing problems, and possible future performance enhancements are discussed. This review aims to promote the future development of low-cost and robust ETL-free PSCs toward more efficient power output.

14.
Mass spectrometry (MS) is a technique widely used in biological studies: it associates a spectrum with a biological sample. A spectrum consists of pairs of values (intensity, m/z), where the intensity measures the abundance of biomolecules (such as proteins) with a given mass-to-charge ratio (m/z) present in the originating sample. In proteomics experiments, MS spectra are used to identify expression patterns in clinical samples that may be responsible for diseases. Recently, to improve the identification of peptides/proteins related to such patterns, the MS/MS process has been used, which performs a cascade of mass spectrometric analyses on selected peaks; this technique has been shown to improve the identification and quantification of proteins/peptides in samples. Nevertheless, MS analysis produces a huge amount of data, often affected by noise, and therefore requires automatic data management systems. Tools, often supplied with the instruments, have been developed for (i) spectra analysis and visualization, (ii) pattern recognition, (iii) protein database querying, and (iv) peptide/protein quantification and identification. Currently, most of the tools supporting these phases need to be optimized to improve the identification of proteins and their functions. In this article we survey applications that support spectrometrists and biologists in obtaining information from biological samples, analyzing the available software for the different phases. We consider different mass spectrometry techniques, and thus different requirements. We focus on tools for (i) data preprocessing, which prepares the results obtained from spectrometers for analysis; (ii) spectra analysis, representation and mining, aimed at identifying common and/or hidden patterns in spectra sets or at classifying data; (iii) database querying to identify peptides; and (iv) improving and boosting the identification and quantification of selected peaks. We outline some open problems and report on requirements that represent new challenges for bioinformatics.
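As a toy illustration of the data-preprocessing phase in point (i) — not any of the surveyed tools — the following picks peaks from an (m/z, intensity) trace using a crude local-maximum and noise-threshold rule; the threshold and data are invented.

```python
import numpy as np

def pick_peaks(mz, intensity, snr=3.0):
    """Toy peak picking: keep local maxima whose intensity exceeds snr times a
    crude noise estimate. Real preprocessing would also smooth, subtract the
    baseline and calibrate m/z."""
    mz, intensity = np.asarray(mz), np.asarray(intensity)
    noise = np.median(intensity)
    is_max = (intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] >= intensity[2:])
    idx = np.where(is_max & (intensity[1:-1] > snr * noise))[0] + 1
    return list(zip(mz[idx], intensity[idx]))

# Hypothetical raw trace
mz = np.linspace(1000, 1010, 11)
inten = np.array([5, 6, 40, 7, 5, 90, 8, 6, 5, 30, 4])
print(pick_peaks(mz, inten))  # three peaks survive the threshold
```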

15.
MOTIVATION: Metabolic flux analysis of biochemical reaction networks using isotope tracers requires software tools that can analyze the dynamics of isotopic isomer (isotopomer) accumulation in metabolites and reveal the underlying kinetic mechanisms of metabolism regulation. Since existing tools are restricted by the isotopic steady state and remain disconnected from the underlying kinetic mechanisms, we have recently developed a novel approach for the analysis of tracer-based metabolomic data that meets these requirements. The present contribution describes the last step of this development: implementation of (i) the algorithms for the determination of the kinetic parameters and respective metabolic fluxes consistent with the experimental data and (ii) statistical analysis of both fluxes and parameters, thereby lending it a practical application. RESULTS: The C++ applications package for dynamic isotopomer distribution data analysis was supplemented by (i) five distinct methods for resolving a large system of differential equations; (ii) the 'simulated annealing' algorithm adopted to estimate the set of parameters and metabolic fluxes, which corresponds to the global minimum of the difference between the computed and measured isotopomer distributions; and (iii) the algorithms for statistical analysis of the estimated parameters and fluxes, which use the covariance matrix evaluation, as well as Monte Carlo simulations. An example of using this tool for the analysis of (13)C distribution in the metabolites of glucose degradation pathways has demonstrated the evaluation of optimal set of parameters and fluxes consistent with the experimental pattern, their range and statistical significance, and also the advantages of using dynamic rather than the usual steady-state method of analysis. AVAILABILITY: Software is available free from http://www.bq.ub.es/bioqint/selivanov.htm  相似文献   
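The abstract names simulated annealing as the optimizer used to fit parameters and fluxes. Below is a generic, self-contained sketch of that algorithm applied to a stand-in objective (squared error against a "measured" distribution); it is not the package's actual C++ implementation or its isotopomer model.

```python
import math, random

def anneal(cost, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Generic simulated annealing over a parameter vector. In the flux-fitting
    setting, cost() would be the discrepancy between simulated and measured
    isotopomer distributions."""
    rng = random.Random(seed)
    x, best = list(x0), list(x0)
    fx = fbest = cost(x)
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]   # random local move
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc                             # accept (maybe uphill)
            if fc < fbest:
                best, fbest = cand, fc
        t *= cooling                                     # cool the temperature
    return best, fbest

# Stand-in objective: squared error against a "measured" distribution
measured = [0.6, 0.3, 0.1]
cost = lambda p: sum((pi - mi) ** 2 for pi, mi in zip(p, measured))
print(anneal(cost, [0.33, 0.33, 0.34]))
```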

16.
17.
Roe MR  Griffin TJ 《Proteomics》2006,6(17):4678-4687
Revolutionary advances in biological mass spectrometry (MS) have provided a basic tool to make comprehensive proteomic analysis possible. Traditionally, two-dimensional gel electrophoresis has been used as a separation method coupled with MS to facilitate analysis of complex protein mixtures. Despite the utility of this method, the many challenges of comprehensive proteomic analysis have motivated the development of gel-free MS-based strategies to obtain information not accessible using two-dimensional gel separations. These advanced strategies have enabled researchers to dig deeper into complex proteomes, gaining insights into the composition, quantitative response, covalent modifications and macromolecular interactions of proteins that collectively drive cellular function. This review describes the current state of gel-free, high-throughput proteomic strategies using MS, including (i) the separation approaches commonly used for complex mixture analysis; (ii) strategies for large-scale quantitative analysis; (iii) analysis of post-translational modifications; and (iv) recent advances and future directions. The use of these strategies to make new discoveries at the proteome level into the effects of disease or other cellular perturbations is discussed in a variety of contexts, providing information on the potential of these tools in electromagnetic field research.

18.
We consider the general properties of developing systems, the approaches to their modeling, and the question of their complexity. The notion “complex system” is vague; somewhat more distinct is the complexity of the model describing a phenomenon. We propose to discuss two pertinent issues. (i) The complexity of basic models is minimal; in other words, complicated basic models are needless. (ii) Living systems are simpler than inanimate ones. Though developing systems are seen in abiotic as well as in biotic nature, the fundamental difference is that living beings are capable of goal-setting and purposeful development; hence they can be described with simpler basic models.  相似文献   

19.
The modeling of lifetime (i.e. cumulative) medical cost data in the presence of censored follow-up is complicated by induced informative censoring, rendering standard survival analysis tools invalid. With few exceptions, recently proposed nonparametric estimators for such data do not extend easily to handle covariate information. We propose to model the hazard function for lifetime cost endpoints using an adaptation of the HARE methodology (Kooperberg, Stone, and Truong, Journal of the American Statistical Association, 1995, 90, 78-94). Linear splines and their tensor products are used to adaptively build a model that incorporates covariates and covariate-by-cost interactions without restrictive parametric assumptions. The informative censoring problem is handled using inverse probability of censoring weighted estimating equations. The proposed method is illustrated using simulation and also with data on the cost of dialysis for patients with end-stage renal disease.  相似文献   
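For readers unfamiliar with inverse probability of censoring weighting (IPCW), its generic estimating-equation form is sketched below; the paper's exact HARE-based equations are not reproduced here and may differ in detail.

```latex
% Generic IPCW score equation (orientation only, not the paper's exact form):
% \hat{K}(t) is the Kaplan--Meier estimate of P(C > t) for the censoring time C,
% \Delta_i indicates that subject i's follow-up (and hence lifetime cost) is complete,
% and U_i(\theta) is the subject-level score from the spline-based hazard model.
\sum_{i=1}^{n} \frac{\Delta_i}{\hat{K}(T_i)}\, U_i(\theta) = 0
```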

20.
The assumption of Hardy-Weinberg equilibrium (HWE) is generally required for association analysis using a case-control design on autosomes; otherwise, the size (type I error rate) may be inflated. There has been increasing interest in exploring the association between diseases and markers on the X chromosome and the effect of departure from HWE on association analysis on the X chromosome. Note that there are two hypotheses of interest regarding the X chromosome: (i) the frequencies of the same allele at a locus in males and females are equal, and (ii) the inbreeding coefficient in females is zero (no excess homozygosity). Thus, excess homozygosity and significantly different minor allele frequencies between males and females are used to filter X-linked variants. There are two existing methods to test for (i) and (ii), respectively. However, their size and power have not yet been studied, and there is as yet no method to test both hypotheses simultaneously. Therefore, in this article, we propose a novel likelihood ratio test for both (i) and (ii) on the X chromosome. To further investigate the underlying reason why the null hypothesis is statistically rejected, we also develop two likelihood ratio tests for detecting (i) and (ii), respectively. Moreover, we explore the effect of population stratification on the proposed tests. In our simulation study, the size of the test for (i) is close to the nominal significance level; however, the sizes of the excess homozygosity test and of the test for both (i) and (ii) are conservative. We therefore propose parametric bootstrap techniques to evaluate their validity and performance. Simulation results show that the proposed methods with bootstrap techniques control the size well under the respective null hypotheses. Power comparisons demonstrate that the methods with bootstrap techniques are more powerful than those without the bootstrap procedure and than the existing methods. The application of the proposed methods to a rheumatoid arthritis dataset indicates their utility.
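As a hedged illustration of a likelihood ratio test for hypothesis (i) alone — equal allele frequencies in males and females at an X-linked locus — the following treats allele counts as binomial (so it ignores female inbreeding, which the paper's joint test handles separately); the counts are invented.

```python
import numpy as np
from scipy.stats import chi2

def lrt_equal_allele_freq(male_a, male_n, female_a, female_n):
    """Toy LRT that the frequency of allele A is the same in males and females
    at an X-linked locus. male_a/female_a: counts of allele A; male_n/female_n:
    total alleles (one X per male, two per female)."""
    def loglik(a, n, p):
        p = min(max(p, 1e-12), 1 - 1e-12)
        return a * np.log(p) + (n - a) * np.log(1 - p)
    p_m, p_f = male_a / male_n, female_a / female_n        # unrestricted MLEs
    p_pool = (male_a + female_a) / (male_n + female_n)     # MLE under H0
    lr = 2 * (loglik(male_a, male_n, p_m) + loglik(female_a, female_n, p_f)
              - loglik(male_a, male_n, p_pool) - loglik(female_a, female_n, p_pool))
    return lr, chi2.sf(lr, df=1)

# 500 males (one X each) and 500 females (two X's each), invented counts
print(lrt_equal_allele_freq(male_a=180, male_n=500, female_a=420, female_n=1000))
```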
