Similar articles
20 similar articles found (search time: 31 ms)
1.
Nowadays, biomedicine is characterised by a growing need to process large amounts of data in real time. This leads to new requirements for information and communication technologies (ICT). Cloud computing offers a solution to these requirements and provides many advantages, such as cost savings, elasticity and scalability of ICT use. The aim of this paper is to explore the concept of cloud computing and its use in biomedicine. The authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into infrastructure, platform and service layers, and a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, the high-end computing paradigm of cloud computing is analysed. Finally, the current and potential applications of this technology in biomedical research are discussed.

2.
The functioning of even a simple biological system is much more complicated than the sum of its genes, proteins and metabolites. A premise of systems biology is that molecular profiling will facilitate the discovery and characterization of important disease pathways. However, as multiple levels of effector pathway regulation appear to be the norm rather than the exception, a significant challenge presented by high-throughput genomics and proteomics technologies is the extraction of the biological implications of complex data. Thus, integration of heterogeneous types of data generated from diverse global technology platforms represents the first challenge in developing the necessary foundational databases needed for predictive modelling of cell and tissue responses. Given the apparent difficulty in defining the correspondence between gene expression and protein abundance measured in several systems to date, how do we make sense of these data and design the next experiment? In this review, we highlight current approaches and challenges associated with integration and analysis of heterogeneous data sets, focusing on global analysis obtained from high-throughput technologies.

3.
Ecologists are increasingly asking large‐scale and/or broad‐scope questions that require vast datasets. In response, various top‐down efforts and incentives have been implemented to encourage data sharing and integration. However, despite general consensus on the critical need for more open ecological data, several roadblocks still discourage compliance and participation in these projects; as a result, ecological data remain largely unavailable. Grassroots initiatives (i.e. efforts initiated and led by cohesive groups of scientists focused on specific goals) have thus far been overlooked as a powerful means to meet these challenges. These bottom‐up collaborative data integration projects can play a crucial role in making high quality datasets available because they tackle the heterogeneity of ecological data at a scale where it is still manageable, all the while offering the support and structure to do so. These initiatives foster best practices in data management and provide tangible rewards to researchers who choose to invest time in sound data stewardship. By maintaining proximity between data generators and data users, grassroots initiatives improve data interpretation and ensure high‐quality data integration while providing fair acknowledgement to data generators. We encourage researchers to formalize existing collaborations and to engage in local activities that improve the availability and distribution of ecological data. By fostering communication and interaction among scientists, we are convinced that grassroots initiatives can significantly support the development of global‐scale data repositories. In doing so, these projects help address important ecological questions and support policy decisions.

4.
In recent years, the idea of "cancer big data" has emerged as a result of the significant expansion of fields such as clinical research, genomics, proteomics and public health records. Advances in omics technologies are making a significant contribution to cancer big data in biomedicine and disease diagnosis. The increasing availability of extensive cancer big data has set the stage for the development of multimodal artificial intelligence (AI) frameworks. These frameworks aim to analyze high-dimensional multi-omics data, extracting meaningful information that is challenging to obtain manually. Although interpretability and data quality remain critical challenges, these methods hold great promise for advancing our understanding of cancer biology and improving patient care and clinical outcomes. Here, we provide an overview of cancer big data and explore the applications of both traditional machine learning and deep learning approaches in cancer genomic and proteomic studies. We briefly discuss the challenges and potential of AI techniques in the integrated analysis of omics data, as well as future directions for personalized cancer treatment.

5.
With the fast development of high-throughput sequencing technologies, a new generation of genome-wide gene expression measurements is under way. This is based on mRNA sequencing (RNA-seq), which complements the already mature technology of microarrays and is expected to overcome some of the latter's disadvantages. These RNA-seq data pose new challenges, however, as their strengths and weaknesses have yet to be fully identified. Ideally, next (or second) generation sequencing measures can be integrated for more comprehensive gene expression investigation to facilitate analysis of whole regulatory networks. At present, however, the nature of these data is not very well understood. In this paper we study three alternative gene expression time series datasets for Drosophila melanogaster embryo development, in order to compare three measurement techniques: RNA-seq, single-channel and dual-channel microarrays. The aim is to assess the state of the art for the three technologies, with a view to evaluating overlapping features, data compatibility and integration potential in the context of time series measurements. This involves using established tools for each of the three technologies, and technical and biological replicates (for RNA-seq and microarrays, respectively), owing to the limited availability of biological RNA-seq replicates for time series data. The approach consists of a sensitivity analysis for differential expression and clustering. In general, the RNA-seq dataset displayed the highest sensitivity to differential expression. The single-channel data performed similarly for the differentially expressed genes common to the gene sets considered. Cluster analysis was used to identify different features of the gene space for the three datasets, with higher similarities found between the RNA-seq and single-channel microarray datasets.

6.
Four predictions are made on the future of space age technologies in human and cultural ecology: first, remote sensing systems will generate a need for more fieldwork, not less; second, the services and skills of anthropologists will become essential to the interpretation of satellite data, especially as these relate to areas characterized by non-Western cultural practices; third, training in remote sensing and the use of geographic information systems will become a regular offering for anthropology students; and fourth, since these new systems and methods can be applied retrospectively to the re-analysis of earlier ethnographic works, space age technologies will be with us for some time to come.

7.
A number of fundamental technical developments, such as the advent of oligonucleotide microarrays, new sequencing technologies and gene synthesis, have considerably changed the character of genomic biological resource centres in recent years. While genomic biological resource centres traditionally served mainly as providers of sparsely characterized cDNA clones and clone sets, there is nowadays a clear tendency towards well-characterized, high-quality clones. In addition, major new service units, such as microarray services, have developed that are completely independent of clone collections, reflecting the co-evolution of data generation and technology development. The new technologies require an increasingly high degree of specialization, data integration and quality standards. Altogether, these developments result in spin-offs of highly specialized biotech companies, some of which will take a prominent position in translational medicine.

8.
Biological data integration using Semantic Web technologies
Pasquier C. Biochimie 2008, 90(4): 584-594
Current research in biology depends heavily on the availability and efficient use of information. In order to build new knowledge, various sources of biological data must often be combined. Semantic Web technologies, which provide a common framework allowing data to be shared and reused between applications, can be applied to the management of disseminated biological data. However, due to some specificities of biological data, applying these technologies to the life sciences constitutes a real challenge. Through a use case of biological data integration, we show in this paper that current Semantic Web technologies are starting to mature and can be applied to the development of large applications. However, in order to get the best from these technologies, improvements are needed both in tool performance and in knowledge modeling.
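The triple-based integration that this abstract describes can be sketched in plain Python, using tuples in place of a real RDF store. The identifiers and predicates below are illustrative assumptions, not a real ontology:

```python
# Sketch of triple-based data integration (the RDF idea, in plain Python).
# All identifiers and predicates are illustrative assumptions.

# Source 1: genomic facts as (subject, predicate, object) triples.
genomics = {
    ("TP53", "type", "Gene"),
    ("TP53", "encodes", "P04637"),
}
# Source 2: protein annotations, published independently.
proteomics = {
    ("P04637", "type", "Protein"),
    ("P04637", "hasFunction", "tumor suppressor"),
}

# Because both sources share identifiers, merging is a simple set union...
graph = genomics | proteomics

# ...and a cross-source "query" is a join on the shared identifier,
# analogous to a two-pattern SPARQL query over the merged graph.
def gene_functions(triples):
    encodes = {(s, o) for s, p, o in triples if p == "encodes"}
    funcs = {(s, o) for s, p, o in triples if p == "hasFunction"}
    return {(g, f) for g, prot in encodes for s, f in funcs if s == prot}

print(gene_functions(graph))  # {('TP53', 'tumor suppressor')}
```

The point of the sketch is that once sources agree on identifiers, integration reduces to a union plus joins; the hard part the paper discusses (heterogeneous identifiers, knowledge modelling, tool performance) is exactly what this toy example assumes away.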

9.
Transgenic animals are produced by introducing 'foreign' DNA into the genetic material of pre-implantation embryos. This DNA is present in all tissues of the resulting individual. This technique is of great importance to many aspects of biomedical science, including gene regulation, the immune system, cancer research, developmental biology, biomedicine, manufacturing and agriculture. The production of transgenic animals is one of several new and developing technologies that will have a profound impact on the genetic improvement of livestock. The rate at which these technologies are incorporated into production schemes will determine the speed at which we will be able to achieve our goal of more efficiently producing livestock that meets consumer and market demand.

10.
The recent sequencing and annotation of the human genome enables a new era in biomedicine that will be based on an interdisciplinary, systemic approach to the elucidation and treatment of human disease. Reconstruction of genome-scale metabolic networks is an important part of this approach since networks represent the integration of diverse biological data such as genome annotations, high-throughput data, and legacy biochemical knowledge. This article will describe Homo sapiens Recon 1, a functionally tested, genome-scale reconstruction of human cellular metabolism, and its capabilities for facilitating the understanding of physiological and disease metabolic states.

11.
Data integration is needed in order to cope with the huge amounts of biological information now available and to perform data mining effectively. Current data integration systems have strict limitations, mainly due to the number of resources, their size and frequency of updates, and their heterogeneity and distribution on the Internet. Integration must therefore be achieved by accessing network services through flexible and extensible data integration and analysis tools. The eXtensible Markup Language (XML), Web Services and Workflow Management Systems (WMS) can support the creation and deployment of such systems. Many XML languages and Web Services for bioinformatics have already been designed and implemented, and some WMS have been proposed. In this article, we review a methodology for data integration in biomedical research that is based on these technologies. We also briefly describe some of the available WMS and discuss the current limitations of this methodology and the ways in which they can be overcome.

12.
The subjects of analysis are the fundamental scientific and applied aspects of a new sphere of biomedicine, i.e., the prophylaxis of aging. It is shown that this presupposes a fundamentally new class of medical technologies that enable physicians to set and accomplish the task of exceeding the limits of the average species life expectancy of humans.

13.
Transgenic animals in biomedicine and agriculture: outlook for the future
Transgenic animals are produced by introducing 'foreign' deoxyribonucleic acid (DNA) into preimplantation embryos. The foreign DNA is inserted into the genetic material and may be expressed in tissues of the resulting individual. This technique is of great importance to many aspects of biomedical science, including gene regulation, the immune system, cancer research, developmental biology, biomedicine, manufacturing and agriculture. The production of transgenic animals is one of a number of new and developing technologies that will have a profound impact on the genetic improvement of livestock. The rate at which these technologies are incorporated into production schemes will determine the speed at which we will be able to achieve our goal of more efficiently producing livestock that meets consumer and market demand.

14.
Data integration is key to functional and comparative genomics because integration allows diverse data types to be evaluated in new contexts. To achieve data integration in a scalable and sensible way, semantic standards are needed, both for naming things (standardized nomenclatures, use of key words) and also for knowledge representation. The Mouse Genome Informatics database and other model organism databases help to close the gap between information and understanding of biological processes because these resources enforce well-defined nomenclature and knowledge representation standards. Model organism databases have a critical role to play in ensuring that diverse kinds of data, especially genome-scale data sets and information, remain useful to the biological community in the long-term. The efforts of model organism database groups ensure not only that organism-specific data are integrated, curated and accessible but also that the information is structured in such a way that comparison of biological knowledge across model organisms is facilitated.

15.
陈建平, 许哲平. 广西植物 (Guihaia) 2022, 42(Z1): 52-61
Specimen digitization is an important foundation for the conservation and use of biodiversity: through the integration and analysis of specimen data, it provides data support for taxonomy, ecology, bioengineering, biological conservation, food security, biodiversity assessment, teaching and education, and human social activities. To understand the current state of specimen digitization worldwide and the trends in data-sharing strategies and technologies, this paper surveys specimen digitization and platform construction in North America, South America, Europe, Africa, Asia and Oceania, and compares and analyses the status and trends of specimen data sharing in terms of data-use agreements, new technologies and methods, and citizen science. It then offers recommendations for specimen digitization in China: (1) strengthen coordination mechanisms for digitization, management and dynamic updating, so that physical collections and their digital records stay synchronized; (2) strengthen data curation and publication to improve data quality, adopt fully open data-use licences, and reduce barriers to data use; (3) learn from and adopt new technologies, especially open-source software, machine learning and artificial intelligence, which can assist with rapid label recognition, automated identification and attribute-data extraction; (4) strengthen regional and international cooperation to promote the integrated application of data; and (5) promote citizen-science projects to support field collection, indoor curation, online error correction, and the development of data products.

16.
To mitigate some of the potentially deleterious environmental and agricultural consequences associated with current land-based-biofuel feedstocks, we propose the use of biofuels derived from aquatic microbial oxygenic photoautotrophs (AMOPs), more commonly known as cyanobacteria, algae, and diatoms. Herein we review their demonstrated productivity in mass culturing and aspects of their physiology that are particularly attractive for integration into renewable biofuel applications. Compared with terrestrial crops, AMOPs are inherently more efficient solar collectors, use less or no land, can be converted to liquid fuels using simpler technologies than cellulose, and offer secondary uses that fossil fuels do not provide. AMOPs pose a new set of technological challenges if they are to contribute as biofuel feedstocks.

17.
Dramatic improvements in high throughput sequencing technologies have led to a staggering growth in the number of predicted genes. However, a large fraction of these newly discovered genes do not have a functional assignment. Fortunately, a variety of novel high-throughput genome-wide functional screening technologies provide important clues that shed light on gene function. The integration of heterogeneous data to predict protein function has been shown to improve the accuracy of automated gene annotation systems. In this paper, we propose and evaluate a probabilistic approach for protein function prediction that integrates protein-protein interaction (PPI) data, gene expression data, protein motif information, mutant phenotype data, and protein localization data. First, functional linkage graphs are constructed from PPI data and gene expression data, in which an edge between nodes (proteins) represents evidence for functional similarity. The assumption here is that graph neighbors are more likely to share protein function than proteins that are not neighbors. The functional linkage graph model is then used in concert with protein domain, mutant phenotype and protein localization data to produce a functional prediction. Our method is applied to the functional prediction of Saccharomyces cerevisiae genes, using Gene Ontology (GO) terms as the basis of our annotation. In a cross-validation study we show that the integrated model increases recall by 18% at 50% precision, compared to using PPI data alone. We also show that the integrated predictor is significantly better than each individual predictor. However, the observed improvement over PPI alone depends on both the new source of data and the functional category to be predicted. Surprisingly, in some contexts integration hurts overall prediction accuracy. Lastly, we provide a comprehensive assignment of putative GO terms to 463 proteins that currently have no assigned function.
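The core guilt-by-association assumption on a functional linkage graph (neighbors tend to share function) can be sketched as a neighbor majority vote. The edges and GO labels below are toy assumptions, not the yeast data or the probabilistic model used in the paper:

```python
# Sketch of guilt-by-association prediction on a functional linkage graph:
# an unannotated protein inherits the most common GO term among its neighbors.
# Edges and annotations are toy assumptions, not real S. cerevisiae data.
from collections import Counter

# Undirected edges from (hypothetical) PPI / co-expression evidence.
edges = [("YFG1", "A"), ("YFG1", "B"), ("YFG1", "C"), ("B", "C")]
annotations = {"A": "GO:0006412", "B": "GO:0006412", "C": "GO:0006281"}

def neighbors(node):
    return {b for a, b in edges if a == node} | {a for a, b in edges if b == node}

def predict(node):
    # Majority vote over the GO terms of annotated neighbors.
    labels = [annotations[n] for n in neighbors(node) if n in annotations]
    return Counter(labels).most_common(1)[0][0] if labels else None

print(predict("YFG1"))  # GO:0006412 (two of three neighbors carry it)
```

The paper's method replaces this hard vote with a probabilistic combination and folds in domain, phenotype and localization evidence, but the graph-neighbor intuition is the same.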

18.
Real-time monitoring of bioprocesses by the integration of analytics at critical unit operations is one of the paramount necessities for quality-by-design manufacturing and real-time release (RTR) of biopharmaceuticals. A well-defined process analytical technology (PAT) roadmap enables the monitoring of critical process parameters and quality attributes at appropriate unit operations to develop an analytical paradigm that is capable of providing real-time data. We believe a comprehensive PAT roadmap should entail not only the integration of analytical tools into the bioprocess but also automated data piping, analysis, aggregation, visualization, and smart use of data for advanced analytics such as machine and deep learning for holistic process understanding. In this review, we discuss a broad spectrum of PAT technologies spanning vibrational spectroscopy, multivariate data analysis, multiattribute chromatography, mass spectrometry, sensors, and automated sampling technologies. We also provide insights, based on our experience in clinical and commercial manufacturing, into data automation, data visualization, and smart use of data for advanced analytics in PAT. This review is catered to a broad audience, from those new to the field to those well versed in applying these technologies. The article is also intended to give some insight into the strategies we have undertaken to implement PAT tools in biologics process development, with the vision of realizing RTR testing in biomanufacturing and of meeting regulatory expectations.
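At its simplest, monitoring a critical process parameter in real time means checking each reading against control limits learned from an in-control run. The readings below are synthetic and the univariate 3-sigma rule is a deliberate simplification; the PAT stacks the review describes use multivariate models over many parameters:

```python
# Minimal sketch of a PAT-style univariate check: flag readings of a critical
# process parameter that fall outside 3-sigma limits learned from an
# in-control batch. All numbers are synthetic; real pipelines add
# multivariate methods (e.g. PCA-based monitoring) and automated data piping.
import statistics

reference = [7.01, 7.03, 6.98, 7.00, 7.02, 6.99, 7.01]  # in-control pH readings
mu = statistics.mean(reference)
sigma = statistics.stdev(reference)
lo_lim, hi_lim = mu - 3 * sigma, mu + 3 * sigma

def out_of_control(stream):
    """Return indices of readings breaching the control limits."""
    return [i for i, x in enumerate(stream) if not (lo_lim <= x <= hi_lim)]

live = [7.00, 7.02, 7.31, 6.99]  # third reading drifts high
print(out_of_control(live))      # [2]
```

In a deployed system the same check would run on streaming sensor data and feed the alerting, visualization, and data-aggregation layers the abstract calls for.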

19.
Lyu Y, Li Q. BMC Bioinformatics 2016, 17(1): 51-60
Determining differentially expressed genes (DEGs) between biological samples is key to understanding how genotype gives rise to phenotype. RNA-seq and microarray are the two main technologies for profiling gene expression levels. However, considerable discrepancy has been found between DEGs detected using the two technologies. Integrating data across these two platforms has the potential to improve the power and reliability of DEG detection. We propose a rank-based semi-parametric model to determine DEGs using information across different sources and apply it to the integration of RNA-seq and microarray data. By incorporating both the significance of differential expression and the consistency across platforms, our method effectively detects DEGs with moderate but consistent signals. We demonstrate the effectiveness of our method using simulation studies, MAQC/SEQC data and a synthetic microRNA dataset. Our integration method is not only robust to noise and heterogeneity in the data, but also adaptive to the structure of the data. In our simulations and real data studies, our approach shows higher discriminative power and identifies more biologically relevant DEGs than eBayes, DESeq and some commonly used meta-analysis methods.
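The rank-based intuition here, that combining per-platform ranks rewards moderate-but-consistent signals over a strong signal on a single platform, can be sketched with a simple average of ranks. The gene names and scores are synthetic, and averaging ranks is a simplification of the paper's semi-parametric model:

```python
# Sketch of rank-based evidence combination across platforms. Genes are
# ranked by differential-expression score on each platform; the average
# rank favors genes that score consistently on both. Synthetic data; a
# simplification of the semi-parametric model described in the abstract.

# Per-platform DE scores (higher = stronger evidence), hypothetical genes.
rnaseq = {"g1": 5.0, "g2": 2.5, "g3": 0.3, "g4": 2.4, "g5": 1.0}
array  = {"g1": 0.2, "g2": 2.2, "g3": 0.3, "g4": 2.0, "g5": 1.5}

def ranks(scores):
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {g: i + 1 for i, g in enumerate(ordered)}

r1, r2 = ranks(rnaseq), ranks(array)
avg = {g: (r1[g] + r2[g]) / 2 for g in rnaseq}

# Best (lowest) average rank first.
combined = sorted(avg, key=avg.get)
print(combined)  # ['g2', 'g4', 'g1', 'g5', 'g3']
```

Here g2 and g4 (moderate on both platforms) outrank g1, which is strong on RNA-seq but near the bottom on the microarray, illustrating why cross-platform consistency matters for reliable DEG calls.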

20.
A common way to think about scientific practice involves classifying it as hypothesis- or data-driven. We argue that although such distinctions might illuminate scientific practice very generally, they are not sufficient to understand the day-to-day dynamics of scientific activity and the development of programmes of research. One aspect of everyday scientific practice that is beginning to gain more attention is integration. This paper outlines what is meant by this term and how it has been discussed from scientific and philosophical points of view. We focus on methodological, data and explanatory integration, and show how they are connected. Then, using some examples from molecular systems biology, we will show how integration works in a range of inquiries to generate surprising insights and even new fields of research. From these examples we try to gain a broader perspective on integration in relation to the contexts of inquiry in which it is implemented. In today's environment of data-intensive large-scale science, integration has become both a practical and normative requirement with corresponding implications for meta-methodological accounts of scientific practice. We conclude with a discussion of why an understanding of integration and its dynamics is useful for philosophy of science and scientific practice in general.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号