Similar Documents
20 similar documents retrieved.
1.
Cactus Tools for Grid Applications   (Total citations: 3; self-citations: 0; cited by others: 3)
Cactus is an open source problem solving environment designed for scientists and engineers. Its modular structure facilitates parallel computation across different architectures and collaborative code development between different groups. The Cactus Code originated in the academic research community, where it has been developed and used over many years by a large international collaboration of physicists and computational scientists. We discuss here how the intensive computing requirements of physics applications now using the Cactus Code encourage the use of distributed computing and metacomputing, and detail how its design makes it an ideal application test-bed for Grid computing. We describe the development of tools, and the experiments which have already been performed in a Grid environment with Cactus, including distributed simulations, remote monitoring and steering, and data handling and visualization. Finally, we discuss how Grid portals, such as those already developed for Cactus, will open the door to global computing resources for scientific users.
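Purely as an illustration of the modular structure mentioned above, the following minimal Python sketch shows a Cactus-like core that runs routines registered by independent modules. Cactus itself is written in C and Fortran; the Flesh class, phase names, and register API here are invented for this example:

```python
# Hypothetical sketch of the Cactus "flesh + thorns" idea: a core scheduler
# (flesh) into which independent modules (thorns) register routines.
# All names here are invented for illustration; real Cactus is C/Fortran.

class Flesh:
    def __init__(self):
        self.schedule = {"startup": [], "evolve": [], "analysis": []}

    def register(self, phase, routine):
        self.schedule[phase].append(routine)

    def run(self, steps):
        for routine in self.schedule["startup"]:
            routine()
        for _ in range(steps):
            for routine in self.schedule["evolve"]:
                routine()
        for routine in self.schedule["analysis"]:
            routine()

flesh = Flesh()
flesh.register("startup", lambda: print("wave thorn: allocating grid"))
flesh.register("evolve", lambda: print("wave thorn: one evolution step"))
flesh.register("analysis", lambda: print("IO thorn: writing output"))
flesh.run(steps=2)
```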

2.
Development of NPACI Grid Application Portals and Portal Web Services   (Total citations: 2; self-citations: 0; cited by others: 2)
Grid portals and services are emerging as convenient mechanisms for providing the scientific community with familiar and simplified interfaces to the Grid. Our experiences in implementing computational grid portals, and the services needed to support them, have led to the creation of GridPort: a unique, integrated, layered software system for building portals and hosting portal services that access Grid services. The usefulness of this system has been successfully demonstrated with the implementation of several application portals. This system has several unique features: the software is portable and runs on most web servers; written in Perl/CGI, it is easy to support and modify; a single API provides access to a host of Grid services; it is flexible and adaptable; it supports single login between multiple portals; and portals built with it may run across multiple sites and organizations. In this paper we summarize our experiences in building this system, including philosophy and design choices, and we describe the software we are building to support portal development and portal services. Finally, we discuss our experiences in developing the GridPort Client Toolkit in support of remote Web client portals and Grid Web services.
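The single-API and single-login features described above can be sketched as a thin facade over separate back-end services. GridPort itself is written in Perl/CGI; the Python classes, method names, and actions below are invented stand-ins, not the GridPort API:

```python
# Hypothetical sketch of GridPort's layered idea: one portal-facing API that
# dispatches to separate back-end Grid services. All names are invented.

class JobService:
    def submit(self, user, script):
        return f"job-42 submitted for {user}: {script}"

class FileService:
    def list_files(self, user):
        return [f"/home/{user}/input.dat"]

class PortalAPI:
    """Single entry point a portal page would call (Perl/CGI in GridPort)."""
    def __init__(self):
        self._jobs, self._files = JobService(), FileService()
        self._sessions = {}           # token -> user, enabling single login

    def login(self, user, token):
        self._sessions[token] = user

    def call(self, token, action, **kwargs):
        user = self._sessions[token]  # the same session works across portals
        if action == "submit":
            return self._jobs.submit(user, kwargs["script"])
        if action == "ls":
            return self._files.list_files(user)
        raise ValueError(f"unknown action: {action}")

api = PortalAPI()
api.login("alice", token="t1")
print(api.call("t1", "submit", script="run.sh"))
print(api.call("t1", "ls"))
```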

3.
The increasing complexity, heterogeneity, and dynamism of emerging pervasive Grid environments and applications have necessitated the development of autonomic self-managing solutions that are inspired by biological systems and deal with similar challenges of complexity, heterogeneity, and uncertainty. This paper introduces Project AutoMate and describes its key components. The overall goal of Project AutoMate is to investigate conceptual models and implementation architectures that can enable the development and execution of such self-managing Grid applications. Illustrative autonomic scientific and engineering Grid applications enabled by AutoMate are presented. The research presented in this paper is supported in part by the National Science Foundation via grant numbers ACI 9984357, EIA 0103674, EIA 0120934, ANI 0335244, CNS 0305495, CNS 0426354 and IIS 0430826. The authors would like to acknowledge the contributions of M. Agarwal, V. Bhat and N. Jiang to this research.
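Autonomic elements of the kind AutoMate targets are commonly structured as a monitor-analyze-plan-execute loop. The sketch below shows that generic pattern only; the sensor, threshold, and actuator are invented and this is not AutoMate's actual code:

```python
# Generic autonomic-control (MAPE) loop, sketched to illustrate what a
# "self-managing" Grid element does; this is not AutoMate's implementation.
import random

def monitor():
    return {"load": random.uniform(0.0, 1.0)}      # invented sensor

def analyze(state):
    return "overloaded" if state["load"] > 0.8 else "ok"

def plan(symptom):
    return ["migrate_task"] if symptom == "overloaded" else []

def execute(actions):
    for action in actions:
        print("actuator:", action)                 # invented actuator

for _ in range(5):                                 # the autonomic loop
    execute(plan(analyze(monitor())))
```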

4.
Software Distributed Shared Memory (DSM) systems can be used to provide a coherent shared address space on multicomputers and other parallel systems without support for shared memory in hardware. The coherency software automatically translates shared memory accesses to explicit messages exchanged among the nodes in the system. Many applications exhibit good performance on such systems, but it has been shown that, for some applications, performance critical messages can be delayed behind less important messages because of the enqueuing behavior in the communication libraries used in current systems. In this paper we present a new portable communication library that supports priorities to remedy this situation. We describe an implementation of the communication library and a quantitative model that is used to estimate the performance impact of priorities for a typical situation. Using the model, we show that the use of high-priority communication reduces the latency of performance critical messages substantially over a wide range of network design parameters. The latency is reduced by up to 10–25% for each delaying low-priority message ahead in the queue.
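The enqueuing idea is easy to sketch with a toy prioritized channel: high-priority coherence messages overtake queued low-priority bulk transfers. The class and constants are invented; the actual library operates inside the communication layer:

```python
# Minimal sketch of priority-based message queuing: high-priority coherence
# messages overtake queued low-priority bulk messages. Names are invented;
# the paper's library works inside the communication layer, not in Python.
import heapq
import itertools

HIGH, LOW = 0, 1          # smaller number = higher priority

class PriorityChannel:
    def __init__(self):
        self._queue, self._order = [], itertools.count()

    def send(self, msg, priority=LOW):
        # The tie-breaking counter keeps FIFO order within one priority.
        heapq.heappush(self._queue, (priority, next(self._order), msg))

    def deliver_next(self):
        _, _, msg = heapq.heappop(self._queue)
        return msg

ch = PriorityChannel()
ch.send("bulk page transfer #1")
ch.send("bulk page transfer #2")
ch.send("lock release (performance critical)", priority=HIGH)
print(ch.deliver_next())   # the critical message jumps the queue
```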

5.
This paper describes the design and implementation of a parallel programming environment called Distributed Shared Array (DSA), which provides a shared global array abstraction across different machines connected by a network. In DSA, users can define and use global arrays that can be accessed uniformly from any machine in the network. Array area allocation, replication, and migration are managed through explicit calls for array manipulation: defining array regions, reading and writing array regions, synchronization, and control of replication and migration. The DSA is integrated with Grid (Globus) services. This paper also describes the use of our model for gene cluster analysis, multiple alignment, and molecular dynamics simulation. In these applications, global arrays are used for storing the distance matrix, the alignment matrix, and atom coordinates, respectively. Large array areas, which cannot be stored in the memory of individual machines, are made available by the DSA. DSA achieved scalable performance compared to that of conventional parallel programs written in MPI.
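A single-process mock can illustrate the call pattern of the DSA interface described above (define an array, write and read regions, synchronize). The class and method names are invented, and the real system would distribute, replicate, and migrate the regions across Globus-connected machines:

```python
# Hypothetical mock of a DSA-style API: define a global array, read/write
# regions, synchronize. This single-process stub only shows the call
# pattern; all names are invented.
import numpy as np

class DSA:
    def __init__(self):
        self._arrays = {}

    def define_array(self, name, shape, dtype=float):
        self._arrays[name] = np.zeros(shape, dtype=dtype)

    def write_region(self, name, region, data):
        self._arrays[name][region] = data        # would ship to owner node

    def read_region(self, name, region):
        return self._arrays[name][region]        # would fetch or replicate

    def sync(self):
        pass                                     # barrier + coherence here

dsa = DSA()
dsa.define_array("distance_matrix", (1000, 1000))   # e.g. gene clustering
dsa.write_region("distance_matrix", np.s_[0:2, 0:2],
                 [[0.0, 1.5], [1.5, 0.0]])
dsa.sync()
print(dsa.read_region("distance_matrix", np.s_[0:2, 0:2]))
```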

6.
Software Component Frameworks are well known in the commercial business application world, and this technology is now being explored with great interest as a way to build large-scale scientific applications on parallel computers. In the case of Grid systems, the current architectural model is based on the emerging web services framework. In this paper we describe progress that has been made on the Common Component Architecture model (CCA) and discuss its successes and limitations when applied to problems in Grid computing. Our primary conclusion is that a component model fits very well with a services-oriented Grid, but the model of composition must allow for a very dynamic (both in space and in time) control of composition. We note that this adds a new dimension to conventional service workflow and extends the "Inversion of Control" aspects of most component systems.

Dennis Gannon is a professor of Computer Science at Indiana University. He received his Ph.D. in Computer Science from the University of Illinois in 1980 and his Ph.D. in Mathematics from the University of California in 1974. From 1980 to 1985, he was on the faculty at Purdue University. His research interests include software tools for high performance distributed systems and problem solving environments for scientific computation. Sriram Krishnan received his Ph.D. in Computer Science from Indiana University in 2004. He is currently in the Grid Development Group at the San Diego Supercomputer Center, where he is working on designing a Web services based architecture for biomedical applications that is secure and scalable, and is conducive to the creation of complex workflows. He received his undergraduate degree in Computer Engineering from the University of Mumbai, India. Liang Fang is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid computing, Web services, portals, and their security and scalability issues. He is a Research Assistant in Computer Science at Indiana University, currently responsible for investigating authorization and other security solutions for the Linked Environments for Atmospheric Discovery (LEAD) project. Gopi Kandaswamy is a Ph.D. student in the Computer Science Department at Indiana University, where he is currently a Research Assistant. His research interests include Web services and workflow systems for the Grid. Yogesh Simmhan received his B.E. degree in Computer Science from Madras University, India in 2000, and is a doctoral candidate in Computer Science at Indiana University. He is currently working as a Research Assistant at Indiana University, investigating data management issues in the LEAD project. His interests lie in data provenance for workflow systems and its use in data quality estimation. Aleksander Slominski is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid and Web Services, streaming XML Pull Parsing and performance, Grid security, asynchronous messaging, events and notification brokers, component technologies, and workflow composition. He is currently working as a Research Assistant investigating the creation and execution of dynamic workflows using the Grid Process Execution Language (GPEL), based on WS-BPEL.
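The provides/uses port model at the heart of CCA, and the late framework-mediated wiring that makes dynamic composition possible, can be sketched as follows. The real CCA defines these interactions through its own specified interfaces; the Python names below are invented for illustration:

```python
# Illustrative sketch of the CCA provides/uses port idea: components expose
# "provides" ports and obtain "uses" ports from a framework that wires them
# at run time, which is what enables dynamic (re)composition.
# All names are invented; real CCA specifies its own interfaces.

class Framework:
    def __init__(self):
        self._provided = {}

    def add_provides(self, port_name, impl):
        self._provided[port_name] = impl

    def get_port(self, port_name):
        return self._provided[port_name]

class MeshComponent:
    def cells(self):
        return [1.0, 2.0, 3.0]

class SolverComponent:
    def __init__(self, fw):
        self.mesh = fw.get_port("MeshPort")   # "uses" port, bound late

    def solve(self):
        return sum(self.mesh.cells())

fw = Framework()
fw.add_provides("MeshPort", MeshComponent())  # "provides" port
print(SolverComponent(fw).solve())            # framework-mediated wiring
```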

7.
The dramatic growth of distributed computing applications is creating both an opportunity and a daunting challenge for users seeking to build applications that will play critical roles in their organization. Here, we discuss the use of a new system, Astrolabe, to automate self-configuration and monitoring and to control adaptation. Astrolabe operates by creating a virtual system-wide hierarchical database, which evolves as the underlying information changes. Astrolabe is secure, robust under a wide range of failure and attack scenarios, and imposes low loads even under stress. To focus the discussion, we structure it around a hypothetical Web Services scenario. One of the major opportunities created by Astrolabe is to allow Web Services client systems to adapt autonomically when a data center becomes slow or unreachable. The authors were supported by Intel Corporation, DARPA/AFRL grant RADC F30602-99-1-0532, AFOSR/MURI grant F49620-02-1-0233, Microsoft Research BARC, and the Cornell/AFRL Information Assurance Institute.
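Astrolabe's core mechanism, summarizing the attributes of many nodes up a zone hierarchy with aggregation functions, can be sketched in a few lines. The zone tree, attributes, and aggregation choices below are invented examples:

```python
# Sketch of Astrolabe-style hierarchical aggregation: leaf nodes report
# attributes, and each zone summarizes its children with an aggregation
# function, yielding a small system-wide view. Data and names are invented.

def aggregate(zone, agg_funcs):
    """Recursively fold leaf attribute dicts up the zone tree."""
    if isinstance(zone, dict):                  # leaf node: raw attributes
        return zone
    child_views = [aggregate(child, agg_funcs) for child in zone]
    return {attr: fn(view[attr] for view in child_views)
            for attr, fn in agg_funcs.items()}

# Two data centers, each with two hosts reporting load and free disk (GB).
tree = [
    [{"load": 0.9, "free_gb": 10}, {"load": 0.4, "free_gb": 80}],
    [{"load": 0.2, "free_gb": 55}, {"load": 0.3, "free_gb": 70}],
]
summary = aggregate(tree, {"load": min, "free_gb": max})
print(summary)   # a client can pick a lightly loaded, roomy data center
```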

8.
This paper discusses a number of aspects of using grid computing methods in support of molecular simulations, with examples drawn from the eMinerals project. A number of components of a useful grid infrastructure are discussed, including the integration of compute and data grids, automatic metadata capture from simulation studies, interoperability of data between simulation codes, management of data and data accessibility, management of jobs and workflow, and tools to support collaboration. Use of a grid infrastructure also brings certain challenges, which are discussed; these include making effective use of what are effectively boundless computing resources, the changes in working practice this implies, and the need to be able to manage the resulting experimentation.
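As one concrete illustration of the automatic metadata capture component, the sketch below wraps a simulation call so its parameters and outputs are logged without user effort. The store location, record fields, and toy simulation are invented, not the eMinerals implementation:

```python
# Sketch of automatic metadata capture: wrap each simulation run so that its
# parameters, code version, and outputs are recorded without user effort.
# The store and field names are invented for illustration.
import json
import time

METADATA_STORE = "runs.jsonl"   # stand-in for a project metadata service

def run_with_metadata(simulate, params, code_version):
    started = time.time()
    outputs = simulate(**params)
    record = {
        "code_version": code_version,
        "params": params,
        "outputs": outputs,
        "wall_seconds": round(time.time() - started, 3),
    }
    with open(METADATA_STORE, "a") as f:       # append-only provenance log
        f.write(json.dumps(record) + "\n")
    return outputs

def toy_energy_minimization(spring_k, steps):
    return {"final_energy": 0.5 * spring_k / steps}

print(run_with_metadata(toy_energy_minimization,
                        {"spring_k": 2.0, "steps": 100},
                        code_version="1.4.2"))
```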

9.
The goal of this work is to create a tool that allows users to easily distribute large scientific computations on computational grids. Our tool, MW, relies on the simple master–worker paradigm. MW provides both a top-level interface to application software and a bottom-level interface to existing Grid computing toolkits. Both interfaces are briefly described. We conclude with a case study, where the necessary Grid services are provided by the Condor high-throughput computing system, and the MW-enabled application code is used to solve a combinatorial optimization problem of unprecedented complexity.
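The master–worker paradigm itself can be shown in a few lines. MW is a C++ framework layered over systems such as Condor; the local process pool and toy subproblems below are stand-ins used only to show a master decomposing work, farming it out, and combining results:

```python
# Minimal master-worker skeleton in the spirit of MW's "top level" interface:
# the application supplies tasks and a work function; the runtime farms the
# tasks out to workers. This local process pool is only a stand-in for a
# Condor-backed worker pool.
from multiprocessing import Pool

def do_task(task):
    """Worker side: evaluate one toy branch-and-bound subproblem."""
    lo, hi = task
    return min(x * x - 4 * x for x in range(lo, hi))

if __name__ == "__main__":
    tasks = [(0, 10), (10, 20), (20, 30)]    # master decomposes the problem
    with Pool(processes=3) as pool:
        results = pool.map(do_task, tasks)   # farm out, gather
    print("global minimum bound:", min(results))  # master combines results
```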

10.
Using the RefSeq database and sequenced genome sequences as templates, a "standard transcription database" representing transcription information at all levels was built through large-scale computation, and a Web service system for standard transcription datasets of human and model organisms was established using Common Gateway Interface (CGI) technology. By submitting a RefSeq accession number or free annotation terms, users can retrieve the complete information for a sequence and perform online computation for gene structure analysis. The system currently covers six species (human, Arabidopsis, rice, rat, mouse, and zebrafish) and holds more than 180,000 data records. It provides an important tool for in-depth study of the transcriptomes of humans and other species, and a solid data foundation for further analysis of the alternative splicing patterns of eukaryotic genes.
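A hypothetical sketch of the lookup the service performs, mapping a RefSeq accession to a stored transcript record, is shown below; the in-memory "database" and record fields are invented stand-ins for the real system:

```python
# Hypothetical sketch of a query-by-accession lookup like the one the CGI
# service performs. The dictionary "database" and record fields are invented
# stand-ins; the real system holds 180,000+ records.

TRANSCRIPT_DB = {
    "NM_000518": {                      # illustrative accession only
        "species": "human",
        "gene": "HBB",
        "exon_count": 3,
    },
}

def query(refseq_id):
    record = TRANSCRIPT_DB.get(refseq_id)
    if record is None:
        return f"{refseq_id}: no record found"
    return (f"{refseq_id}: {record['species']} {record['gene']}, "
            f"{record['exon_count']} exons")

print(query("NM_000518"))
```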

11.
Industrial ecology (IE) is a maturing scientific discipline. The field is becoming more data- and computation-intensive, which requires IE researchers to develop scientific software to tackle novel research questions. We review the current state of software programming and use in our field and find challenges regarding transparency, reproducibility, reusability, and ease of collaboration. Our response to that problem is fourfold: First, we propose how existing general principles for the development of good scientific software could be implemented in IE and related fields. Second, we argue that collaborating on open source software could make IE research more productive and increase its quality, and we present guidelines for the development and distribution of such software. Third, we call for stricter requirements regarding general access to the source code used to produce research results and scientific claims published in the IE literature. Fourth, we describe a set of open source modules for standard IE modeling tasks that represent our first attempt at turning our recommendations into practice. We introduce a Python toolbox for IE that includes the life cycle assessment (LCA) framework Brightway2, the ecospold2matrix module that parses unallocated data in ecospold format, the pySUT and pymrio modules for building and analyzing multiregion input-output models and supply and use tables, and the dynamic_stock_model class for dynamic stock modeling. Widespread use of open access software can, at the same time, increase the quality, transparency, and reproducibility of IE research.
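As a hedged example of using one of the named modules, the sketch below runs pymrio on its built-in synthetic test system, assuming the load_test/calc_all entry points behave as in pymrio's documentation (attribute names may differ between versions):

```python
# Sketch of using one of the named modules, pymrio, on its built-in small
# test MRIO system. Assumes pymrio's documented load_test()/calc_all() API;
# exact attribute names may differ between versions.
import pymrio

mrio = pymrio.load_test()   # small synthetic multiregion IO system
mrio.calc_all()             # derives A, L, multipliers, footprint accounts

print(mrio.L.shape)                   # Leontief inverse
print(mrio.emissions.D_cba.head())    # consumption-based (footprint) accounts
```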

12.
In 2002 the Environmental Services Scheme (ESS) was launched in New South Wales, Australia. Its aim was to pilot a process to provide financial incentives to landholders to undertake changes in land use or land management that improved the status of environmental services (e.g. provision of clean water, healthy soils, biodiversity conservation). To guide the direction of incentive funds, metrics were developed for use by departmental staff to score the benefits of land use or land management changes to a range of environmental services. The purpose of this paper is to (i) report on the development of one of these metrics – the biodiversity benefits index; (ii) present the data generated by field application of the metric to 20 properties contracted to the ESS; and (iii) discuss the lessons learned and recent developments of the metric that aim to make it accessible to a wider range of end-users and applications.
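Purely to illustrate what a benefits-scoring metric of this kind can look like, here is a toy weighted index; the criteria, weights, and scores are invented and are not the ESS biodiversity benefits index:

```python
# Purely illustrative toy of a "biodiversity benefits index" style score:
# weighted criteria scored per proposed land-management change. The
# criteria, weights, and scores are invented, not the ESS metric.

WEIGHTS = {"vegetation_condition": 0.4, "connectivity": 0.3, "rarity": 0.3}

def benefits_index(scores):
    """Weighted sum of 0-10 criterion scores, scaled to 0-100."""
    return 10 * sum(WEIGHTS[c] * s for c, s in scores.items())

proposal = {"vegetation_condition": 7, "connectivity": 5, "rarity": 9}
print(f"biodiversity benefits index: {benefits_index(proposal):.1f}")
```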

13.
Taking the polder fields of the Jiangnan region as its research object, this paper argues that, in the context of landscape architecture ecology, ecosystem service performance serves as the "principle" (fa) and spatial form as the "pattern" (shi), so that the landscape follows discernible principles, patterns, and rules. Through the relationships among water, greenery, and people, the paper offers a holistic reading of the original spatial form of traditional Jiangnan polders and analyzes the eight dimensions of ecosystem services embedded in them. It then reflects on the conflicts and contradictions in the current development of polder areas, and draws out the insights that the ecological wisdom of traditional Jiangnan polders offers for contemporary development and planning practice in polder regions: the idea of "going with the water" and modifying nature only moderately; the formation of an integrated humanistic ecosystem that fuses production, living, and ecology; and a traditional social governance system coordinated with water.

14.
Ecological risk assessment (ERA) is a scientific tool used to support ecosystem-based management (EBM), but most current ERA methods consider only a few indices of particular species or components. Such limitations restrict the scope of results so that they are insufficient to reflect the integrated risk characterization of an ecosystem, thereby inhibiting the application of ERA in EBM. We incorporate the concept of ecosystem services into ERA and develop an improved ERA framework to create a comprehensive risk map of an ecosystem, accounting for multiple human activities and ecosystem services. Using the Yellow River as a case study, we show how this framework enables the implementation of integrated risk characterization and prioritization of the most important ecological risk issues in the ecosystem-based river management of the Yellow River. This framework can help practitioners better implement ERA within EBM in rivers or any other target ecosystem.
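A toy example of the integrated risk characterization described above: score each human activity against each ecosystem service and rank activities by aggregate risk. The activities, services, and numbers are invented, not results for the Yellow River:

```python
# Toy illustration of integrated risk characterization: score the risk of
# each human activity to each ecosystem service, then rank issues by
# aggregate score. Activities, services, and numbers are all invented.
import numpy as np

activities = ["damming", "irrigation withdrawal", "pollution discharge"]
services = ["water supply", "fish habitat", "sediment transport"]

# risk[i, j] = exposure x consequence of activity i for service j (0-1 scale)
risk = np.array([[0.2, 0.7, 0.9],
                 [0.8, 0.5, 0.3],
                 [0.6, 0.9, 0.1]])

for name, total in sorted(zip(activities, risk.sum(axis=1)),
                          key=lambda pair: -pair[1]):
    print(f"{name}: aggregate risk {total:.1f}")   # prioritization list
```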

15.
16.
17.
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application – SpheroidSizer – which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different forms in spreadsheets for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis engine adapted for large numbers of images, providing a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse image qualities. This high-throughput image analysis software markedly reduces labor and speeds up the analysis process; implementing it will help make 3D tumor spheroids a routine in vitro model for drug screens in industry and academia.
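The Snakes-based measurement step can be sketched with scikit-image's active_contour on a synthetic image. The PCA-based axis estimate and the volume formula V = 0.5·L·W² are common choices assumed here for illustration; they are not necessarily SpheroidSizer's exact procedure:

```python
# Sketch of Snakes-based spheroid measurement using scikit-image's
# active_contour on a synthetic bright "spheroid". The PCA axis estimate
# and the volume formula V = 0.5 * L * W**2 are assumed common choices,
# not necessarily SpheroidSizer's exact ones.
import numpy as np
from skimage.draw import ellipse
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = np.zeros((200, 200))
rr, cc = ellipse(100, 100, 60, 40)           # synthetic spheroid image
img[rr, cc] = 1.0

theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 90 * np.sin(theta),   # circular initial snake
                        100 + 90 * np.cos(theta)])
snake = active_contour(gaussian(img, 3), init,
                       alpha=0.015, beta=10, gamma=0.001)

centered = snake - snake.mean(axis=0)
var_minor, var_major = np.linalg.eigvalsh(np.cov(centered.T))  # PCA of contour
major = 2 * np.sqrt(2 * var_major)   # full axis length for contour points
minor = 2 * np.sqrt(2 * var_minor)
print(f"major {major:.0f} px, minor {minor:.0f} px, "
      f"volume ~ {0.5 * major * minor**2:.0f} px^3")
```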

18.
19.
20.
Batch anaerobic codigestion of municipal household solid waste (MHSW) and digested manure under mesophilic conditions was carried out. Different waste-to-biomass ratios and mixing intensities were studied theoretically and experimentally. The experiments showed that when organic loading was high, intensive mixing resulted in acidification and failure of the process, while low mixing intensity was crucial for successful digestion. However, when loading was low, mixing intensity had no significant effect on the process. We hypothesized that mixing was preventing the establishment of methanogenic zones in the reactor space. The methanogenic zones are important for withstanding the inhibition caused by acids formed during acidogenesis. 2D distributed models of a symmetrical cylindrical reactor are presented, based on the hypothesis that methanogenic zones must reach a minimum size before they can propagate and establish a good methanogenic environment. The model showed that at high organic loading rates, spatial separation of the initial methanogenic centers from active acidogenic areas is the key factor for efficient conversion of solids to methane. The initial level of methanogenic biomass in the initiation centers is a critical factor for the survival of these centers. At low mixing, most of the initial methanogenic centers survive and expand over the reactor volume. However, at vigorous mixing the initial methanogenic centers are reduced in size, averaged over the reactor volume, and finally dissipate. Using fluorescence in situ hybridization, large irregular cocci of microorganisms were observed in the case with minimal mixing, while in the case with high stirring mainly dead cells were found.
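A toy 2D model can illustrate the hypothesis, though it is far simpler than the paper's distributed model: seeded methanogenic centers persist when mixing (modeled as diffusion) is weak, but are diluted below a viability level and dissipate when mixing is strong. The Allee-type growth term and all parameter values are invented:

```python
# Toy 2D illustration of the paper's hypothesis, not its actual model:
# seeded methanogenic centers persist when mixing (modeled as diffusion)
# is weak, but are diluted below a viability density and dissipate when
# mixing is strong. The Allee-type kinetics and all numbers are invented.
import numpy as np

def simulate(mixing, steps=300, r=0.3, allee=0.2):
    m = np.zeros((40, 40))
    m[9:12, 9:12] = m[28:31, 23:26] = 1.0       # two methanogenic centers
    for _ in range(steps):
        # 5-point Laplacian (periodic edges) represents mixing intensity
        lap = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
               np.roll(m, 1, 1) + np.roll(m, -1, 1) - 4 * m)
        # growth is negative below the Allee density: diluted centers decay
        m = np.clip(m + mixing * lap + r * m * (m - allee) * (1 - m), 0, 1)
    return m

for mixing in (0.02, 0.24):
    frac = (simulate(mixing) > 0.5).mean()
    print(f"mixing={mixing:.2f}: {100 * frac:.1f}% of cells methanogenic")
```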

