Found 20 similar documents; search time: 87 ms
1.
The massive data processing and high-performance computing demands of the life sciences, and of bioinformatics in particular, far exceed the computing capacity of any single institution. The emergence of grid computing resolves these difficulties, and grids are gradually becoming a standard network infrastructure for the life sciences. This article reviews the latest grid technologies from three perspectives: the computational grid, the data grid, and the knowledge grid. Computational grids are already quite mature for solving high-throughput problems in the life sciences; data grids have shown strong advantages in handling bioinformatics-related problems; and knowledge grids offer an entirely new concept for the creation and sharing of knowledge.
2.
A survey of bioinformatics data analysis of alternative splicing (Cited by: 1; self-citations: 0; citations by others: 1)
Alternative splicing of precursor mRNA is an important gene-regulation mechanism that expands the diversity of the eukaryotic proteome. Misregulation of alternative splicing can cause a variety of human diseases. With the development of high-throughput technologies, bioinformatics has become the principal means of studying alternative splicing. This article summarizes the bioinformatics methods used in alternative-splicing research, and analyzes and predicts the directions in which the field is likely to develop.
3.
The massive data processing and high-performance computing demands of the life sciences, and of bioinformatics in particular, far exceed the computing capacity of any single institution; the emergence of grid technology resolves these difficulties, and grids are gradually becoming a standard network infrastructure for the life sciences. This article reviews the latest grid technologies from three perspectives: the computational grid, the data grid, and the knowledge grid. It concludes with an outlook on the broad application prospects of grid technology in the life sciences.
4.
5.
To address the problems that researchers in the biological sciences face when using research software tools (not knowing which tools exist, being unable to choose among them, finding them hard to learn, and being unable to afford them), and to provide teaching support for staff and students at biology-oriented institutions, a web-based Biological Study Service Platform (BSP) serving biological research and teaching was built on a high-performance server using grid technology. The platform comprises three parts: a tool catalog, a tool forum, and a web workbench, which are intended to solve the above problems from the three angles of finding, learning, and using tools, while also giving biology teachers and students a platform for teaching and learning. It is accessible at http://218.57.145.30:9999/biosp/. The research group will continue to update and improve the platform and to develop more of its own application software to better serve biological research.
6.
This article analyzes the current state of, and problems facing, China's bioinformatics industry; introduces the state of the art and new technologies in bioinformatics and biochip research, and the development of the bioinformatics industry; and discusses intellectual-property protection in the bioinformatics industry. It offers a reference for how to develop China's bioinformatics industry and what strategies and measures to adopt.
7.
8.
With the wide application of high-throughput biological technologies in biomedical research, bioinformatics is being applied ever more broadly in biomedicine. In research on metabolic cardiovascular disease, successful applications of bioinformatics can be found at every level, from DNA to RNA, from proteins to small molecules, up to the systems level. This chapter briefly introduces representative bioinformatics methods and tools at the DNA, RNA, protein, drug, and biological-network levels and their application to research on metabolic cardiovascular disease.
9.
Design of a grid-based medical information platform (Cited by: 1; self-citations: 0; citations by others: 1)
To address the limitations of current models for medical information applications, a grid-based platform technology is proposed to promote the sharing and interoperation of medical resources in a networked environment. A middleware design oriented toward grid toolkits simplifies service integration and invocation. An experimental model was built to verify the platform's feasibility and practical value.
10.
This article briefly analyzes the topics established, and their implementation, under the "Bioinformatics and Biocomputing Technology" special program of the National High-Tech Research and Development Program of China (863 Program) during the Eleventh Five-Year Plan period. It analyzes and summarizes the program's research directions and topic structure, the institutions and personnel undertaking the projects, project completion, and representative research results, as a reference for researchers.
11.
Grid computing systems are emerging as a computing infrastructure that will enable the use of wide-area network computing systems for a variety of challenging applications. One of these is the ever increasing demand for multimedia from users engaging in a wide range of activities such as scientific research, education, commerce, and entertainment. To provide an adequate level of service to multimedia applications, it is often necessary to simultaneously allocate resources, including predetermined capacities from interconnecting networks, to the applications. The simultaneous allocation of resources is often referred to as co-allocation in the Grid literature. In this paper, we formally define the co-allocation problem and propose a novel scheme called synchronous queuing (SQ) for implementing co-allocation with quality of service (QoS) assurances in Grids. Unlike existing approaches, SQ does not require advance reservation capabilities at the resources. This enables an SQ-based approach to oversubscribe the resources and hence improve resource utilization. The simulation studies performed to evaluate SQ indicate that it outperforms a QoS-based scheme with strict admission control by a significant margin.
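As a rough illustration of the co-allocation idea, the sketch below (hypothetical names, not the authors' SQ implementation) holds each job in a queue until every resource it needs can be granted at the same time:

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Resource:
    name: str
    capacity: int
    in_use: int = 0
    def can_allocate(self, amount):
        return self.in_use + amount <= self.capacity

class CoAllocationQueue:
    """Hold each job until *all* of its resource demands can be met at once."""
    def __init__(self):
        self.pending = deque()
    def submit(self, job, demands):
        self.pending.append((job, demands))
    def dispatch(self, resources):
        started = []
        for job, demands in list(self.pending):
            # A job starts only if every demanded resource has spare capacity now.
            if all(resources[r].can_allocate(amt) for r, amt in demands.items()):
                for r, amt in demands.items():
                    resources[r].in_use += amt
                started.append(job)
                self.pending.remove((job, demands))
        return started

resources = {"cpu": Resource("cpu", 4), "net": Resource("net", 100)}
q = CoAllocationQueue()
q.submit("render", {"cpu": 2, "net": 80})
q.submit("stream", {"cpu": 2, "net": 80})   # must wait: the network would be oversubscribed
started = q.dispatch(resources)
print(started)  # ['render']
```

Note that because no advance reservation is made, a second `dispatch` call after `render` releases its resources would start `stream`; this mirrors, very loosely, how SQ trades reservations for better utilization.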
12.
Cactus Tools for Grid Applications (Cited by: 3; self-citations: 0; citations by others: 3)
Gabrielle Allen Werner Benger Thomas Dramlitsch Tom Goodale Hans-Christian Hege Gerd Lanfermann André Merzky Thomas Radke Edward Seidel John Shalf 《Cluster computing》2001,4(3):179-188
Cactus is an open source problem solving environment designed for scientists and engineers. Its modular structure facilitates parallel computation across different architectures and collaborative code development between different groups. The Cactus Code originated in the academic research community, where it has been developed and used over many years by a large international collaboration of physicists and computational scientists. We discuss here how the intensive computing requirements of physics applications now using the Cactus Code encourage the use of distributed and metacomputing, and detail how its design makes it an ideal application test-bed for Grid computing. We describe the development of tools, and the experiments which have already been performed in a Grid environment with Cactus, including distributed simulations, remote monitoring and steering, and data handling and visualization. Finally, we discuss how Grid portals, such as those already developed for Cactus, will open the door to global computing resources for scientific users.
13.
The Rutgers Computational Grid (RCG) project is aimed at providing high throughput performance to Rutgers university faculty and students. The RCG employs dual processor PCs, with Pentium II and III processors, as computational nodes, running the Red Hat Linux operating system. The Load Sharing Facility (LSF) scheduling system from Platform Computing is used for job control and monitoring. The nodes are grouped into subclusters physically located in several departments and controlled by a single master node through LSF. The hardware and software used in RCG are described. Utilization and performance issues, including parallel performance, are discussed based on the experience of the first two years of RCG operation.
14.
Grid Computing consists of a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.
15.
In genomic prediction, common analysis methods rely on a linear mixed-model framework to estimate SNP marker effects and breeding values of animals or plants. Ridge regression–best linear unbiased prediction (RR-BLUP) is based on the assumptions that SNP marker effects are normally distributed, are uncorrelated, and have equal variances. We propose DAIRRy-BLUP, a parallel, Distributed-memory RR-BLUP implementation, based on single-trait observations (y), that uses the Average Information algorithm for restricted maximum-likelihood estimation of the variance components. The goal of DAIRRy-BLUP is to enable the analysis of large-scale data sets to provide more accurate estimates of marker effects and breeding values. A distributed-memory framework is required since the dimensionality of the problem, determined by the number of SNP markers, can become too large to be analyzed by a single computing node. Initial results show that DAIRRy-BLUP enables the analysis of very large-scale data sets (up to 1,000,000 individuals and 360,000 SNPs) and indicate that increasing the number of phenotypic and genotypic records has a more significant effect on the prediction accuracy than increasing the density of SNP arrays.
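The RR-BLUP model underlying this work can be sketched in a few lines of NumPy. This is an illustrative single-node version with simulated data and a fixed shrinkage parameter, not DAIRRy-BLUP itself (which estimates the variance components by AI-REML and distributes the computation across nodes):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_snp = 200, 1000

# Simulated SNP genotypes coded 0/1/2, column-centered; illustrative effect sizes.
Z = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)
Z -= Z.mean(axis=0)
true_effects = rng.normal(0.0, 0.05, n_snp)
y = Z @ true_effects + rng.normal(0.0, 1.0, n_ind)
y -= y.mean()

# RR-BLUP shrinks all markers equally: lambda = sigma_e^2 / sigma_m^2.
# The value is fixed here for illustration only.
lam = 1.0 / 0.05**2

# Solve the n x n system (Z Z' + lam I) a = y, then u_hat = Z' a,
# using the identity (Z'Z + lam I)^{-1} Z' y = Z' (Z Z' + lam I)^{-1} y,
# which is cheaper when individuals are far fewer than markers.
a = np.linalg.solve(Z @ Z.T + lam * np.eye(n_ind), y)
u_hat = Z.T @ a                 # estimated marker effects
gebv = Z @ u_hat                # genomic estimated breeding values

acc = np.corrcoef(gebv, Z @ true_effects)[0, 1]
print(acc)                      # in-sample prediction accuracy
```

The dense n x n (or marker x marker) systems here are exactly what becomes intractable on one node at the scales the paper targets, motivating the distributed-memory design.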
16.
Programming the Grid: Distributed Software Components, P2P and Grid Web Services for Scientific Applications (Cited by: 3; self-citations: 0; citations by others: 3)
Dennis Gannon Randall Bramley Geoffrey Fox Shava Smallen Al Rossi Rachana Ananthakrishnan Felipe Bertrand Ken Chiu Matt Farrellee Madhu Govindaraju Sriram Krishnan Lavanya Ramakrishnan Yogesh Simmhan Alek Slominski Yu Ma Caroline Olariu Nicolas Rey-Cenvaz 《Cluster computing》2002,5(3):325-336
Computational Grids [17,25] have become an important asset in large-scale scientific and engineering research. By providing a set of services that allow a widely distributed collection of resources to be tied together into a relatively seamless computing framework, teams of researchers can collaborate to solve problems that they could not have attempted before. Unfortunately the task of building Grid applications remains extremely difficult because there are few tools available to support developers. To build reliable and re-usable Grid applications, programmers must be equipped with a programming framework that hides the details of most Grid services and allows the developer a consistent, non-complex model in which applications can be composed from well tested, reliable sub-units. This paper describes experiences with using a software component framework for building Grid applications. The framework, which is based on the DOE Common Component Architecture (CCA) [1,2,3,8], allows individual components to export function/service interfaces that can be remotely invoked by other components. The framework also provides a simple messaging/event system for asynchronous notification between application components. The paper also describes how the emerging Web-services [52] model fits with a component-oriented application design philosophy. To illustrate the connection between Web services and Grid application programming we describe a simple design pattern for application factory services which can be used to simplify the task of building reliable Grid programs. Finally we address several issues of Grid programming that are better understood from the perspective of Peer-to-Peer (P2P) systems. In particular we describe how models for collaboration and resource sharing fit well with many Grid application scenarios.
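The application-factory pattern mentioned in this abstract can be shown in miniature (hypothetical class names; the paper's factories are remote Grid services reached over Web-service protocols, not in-process objects):

```python
# Minimal sketch of the "application factory" pattern: clients ask a factory
# service for a new application-component instance and receive an opaque
# handle, instead of constructing and launching the component themselves.
from abc import ABC, abstractmethod
import uuid

class Component(ABC):
    @abstractmethod
    def invoke(self, request): ...

class SimulationComponent(Component):
    def invoke(self, request):
        return f"simulated {request}"

class ApplicationFactory:
    """Creates, tracks, and mediates access to component instances by name."""
    def __init__(self):
        self._registry = {}        # component name -> class
        self._instances = {}       # handle -> live instance
    def register(self, name, cls):
        self._registry[name] = cls
    def create(self, name):
        handle = str(uuid.uuid4())
        self._instances[handle] = self._registry[name]()
        return handle              # client holds a handle, never the object
    def invoke(self, handle, request):
        return self._instances[handle].invoke(request)

factory = ApplicationFactory()
factory.register("sim", SimulationComponent)
h = factory.create("sim")
out = factory.invoke(h, "galaxy-merger")
print(out)  # simulated galaxy-merger
```

Keeping creation behind the factory is what lets a Grid deployment add reliability concerns (authorization, restart, placement) in one place, which is the simplification the pattern is meant to buy.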
17.
M. Parashar H. Liu Z. Li V. Matossian C. Schmidt G. Zhang S. Hariri 《Cluster computing》2006,9(2):161-174
The increasing complexity, heterogeneity, and dynamism of emerging pervasive Grid environments and applications have necessitated the development of autonomic self-managing solutions that are inspired by biological systems and deal with similar challenges of complexity, heterogeneity, and uncertainty. This paper introduces Project AutoMate and describes its key components. The overall goal of Project AutoMate is to investigate conceptual models and implementation architectures that can enable the development and execution of such self-managing Grid applications. Illustrative autonomic scientific and engineering Grid applications enabled by AutoMate are presented.
The research presented in this paper is supported in part by the National Science Foundation via grant numbers ACI 9984357, EIA 0103674, EIA 0120934, ANI 0335244, CNS 0305495, CNS 0426354 and IIS 0430826. The authors would like to acknowledge the contributions of M. Agarwal, V. Bhat and N. Jiang to this research.
18.
The goal of this work is to create a tool that allows users to easily distribute large scientific computations on computational grids. Our tool MW relies on the simple master–worker paradigm. MW provides both a top-level interface to application software and a bottom-level interface to existing Grid computing toolkits. Both interfaces are briefly described. We conclude with a case study, where the necessary Grid services are provided by the Condor high-throughput computing system, and the MW-enabled application code is used to solve a combinatorial optimization problem of unprecedented complexity.
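The master–worker paradigm MW builds on can be sketched with an in-process thread pool (illustrative only; MW distributes its workers across Grid resources through toolkits such as Condor rather than threads):

```python
# Minimal in-process sketch of the master-worker paradigm: the master keeps a
# task queue, workers repeatedly pull a task, compute, and report a result.
import queue
import threading

def worker(tasks, results):
    while True:
        task = tasks.get()
        if task is None:                    # poison pill: no more work
            break
        results.put((task, task * task))    # stand-in for a real computation
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(4)]
for w in workers:
    w.start()

for n in range(10):                         # master distributes work units
    tasks.put(n)
for _ in workers:                           # one poison pill per worker
    tasks.put(None)
for w in workers:
    w.join()

pairs = sorted(results.get() for _ in range(10))
print(pairs)  # [(0, 0), (1, 1), (2, 4), ..., (9, 81)]
```

In MW the same three roles appear, but the "queue" is mediated by the Grid scheduler and workers may join or leave mid-run, which is why the master must track outstanding tasks and reassign them.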
19.
This paper discusses a number of aspects of using grid computing methods in support of molecular simulations, with examples drawn from the eMinerals project. A number of components for a useful grid infrastructure are discussed, including the integration of compute and data grids, automatic metadata capture from simulation studies, interoperability of data between simulation codes, management of data and data accessibility, management of jobs and workflow, and tools to support collaboration. Use of a grid infrastructure also brings certain challenges, which are discussed. These include making effective use of seemingly boundless computing resources, the changes in working practice this requires, and the need to manage computational experiments.
20.
Christine Carapito Alexandre Burel Patrick Guterl Alexandre Walter Fabrice Varrier Fabrice Bertile Alain Van Dorsselaer 《Proteomics》2014,14(9):1014-1019
One of the major bottlenecks in the proteomics field today resides in the computational interpretation of the massive data generated by the latest generation of high-throughput MS instruments. MS/MS datasets are constantly increasing in size and complexity, and it becomes challenging to comprehensively process such huge datasets and afterwards deduce the most relevant biological information. The Mass Spectrometry Data Analysis (MSDA, https://msda.unistra.fr) online software suite provides a series of modules for in-depth MS/MS data analysis. It includes a custom databases generation toolbox, modules for filtering and extracting high-quality spectra, for running high-performance database and de novo searches, and for extracting modified peptides spectra and functional annotations. Additionally, MSDA enables running the most computationally intensive steps, namely database and de novo searches, on a computer grid, thus providing a net time gain of up to 99% for data processing.