20 similar documents found; search took 15 ms
1.
Cactus Tools for Grid Applications (total citations: 3; self-citations: 0; citations by others: 3)
Gabrielle Allen, Werner Benger, Thomas Dramlitsch, Tom Goodale, Hans-Christian Hege, Gerd Lanfermann, André Merzky, Thomas Radke, Edward Seidel, John Shalf 《Cluster computing》2001,4(3):179-188
Cactus is an open source problem solving environment designed for scientists and engineers. Its modular structure facilitates parallel computation across different architectures and collaborative code development between different groups. The Cactus Code originated in the academic research community, where it has been developed and used over many years by a large international collaboration of physicists and computational scientists. We discuss here how the intensive computing requirements of physics applications now using the Cactus Code encourage the use of distributed and metacomputing, and detail how its design makes it an ideal application test-bed for Grid computing. We describe the development of tools, and the experiments which have already been performed in a Grid environment with Cactus, including distributed simulations, remote monitoring and steering, and data handling and visualization. Finally, we discuss how Grid portals, such as those already developed for Cactus, will open the door to global computing resources for scientific users.
2.
Grid portals and services are emerging as convenient mechanisms for providing the scientific community with familiar and simplified interfaces to the Grid. Our experiences in implementing computational grid portals, and the services needed to support them, have led to the creation of GridPort: a unique, integrated, layered software system for building portals and hosting portal services that access Grid services. The usefulness of this system has been successfully demonstrated with the implementation of several application portals. This system has several unique features: the software is portable and runs on most webservers; written in Perl/CGI, it is easy to support and modify; a single API provides access to a host of Grid services; it is flexible and adaptable; it supports single login between multiple portals; and portals built with it may run across multiple sites and organizations. In this paper we summarize our experiences in building this system, including philosophy and design choices, and we describe the software we are building to support portal development and portal services. Finally, we discuss our experiences in developing the GridPort Client Toolkit in support of remote Web client portals and Grid Web services.
3.
M. Parashar, H. Liu, Z. Li, V. Matossian, C. Schmidt, G. Zhang, S. Hariri 《Cluster computing》2006,9(2):161-174
The increasing complexity, heterogeneity, and dynamism of emerging pervasive Grid environments and applications have necessitated the development of autonomic self-managing solutions, which are inspired by biological systems and deal with similar challenges of complexity, heterogeneity, and uncertainty. This paper introduces Project AutoMate and describes its key components. The overall goal of Project AutoMate is to investigate conceptual models and implementation architectures that can enable the development and execution of such self-managing Grid applications. Illustrative autonomic scientific and engineering Grid applications enabled by AutoMate are presented.
The research presented in this paper is supported in part by the National Science Foundation via grant numbers ACI 9984357, EIA 0103674, EIA 0120934, ANI 0335244, CNS 0305495, CNS 0426354 and IIS 0430826. The authors would like to acknowledge the contributions of M. Agarwal, V. Bhat and N. Jiang to this research.
4.
Software Distributed Shared Memory (DSM) systems can be used to provide a coherent shared address space on multicomputers and other parallel systems without support for shared memory in hardware. The coherency software automatically translates shared memory accesses to explicit messages exchanged among the nodes in the system. Many applications exhibit good performance on such systems, but it has been shown that, for some applications, performance-critical messages can be delayed behind less important messages because of the enqueuing behavior in the communication libraries used in current systems. We present in this paper a new portable communication library that supports priorities to remedy this situation. We describe an implementation of the communication library and a quantitative model that is used to estimate the performance impact of priorities for a typical situation. Using the model, we show that the use of high-priority communication reduces the latency of performance-critical messages substantially over a wide range of network design parameters. The latency is reduced by up to 10–25% for each delaying low-priority message queued ahead.
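The queuing effect described above, where a performance-critical message overtakes previously enqueued low-priority traffic, can be sketched with a toy two-level priority queue. The class and method names below are hypothetical illustrations, not the paper's library API.

```python
import heapq
import itertools


class PriorityChannel:
    """Toy two-level message queue: HIGH-priority messages are delivered
    before any queued LOW-priority ones; FIFO order within each level."""

    HIGH, LOW = 0, 1

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves FIFO order

    def send(self, msg, priority=LOW):
        heapq.heappush(self._heap, (priority, next(self._seq), msg))

    def recv(self):
        return heapq.heappop(self._heap)[2]


ch = PriorityChannel()
ch.send("bulk-1")
ch.send("bulk-2")
ch.send("sync-ack", priority=PriorityChannel.HIGH)
print(ch.recv())  # sync-ack: overtakes the two earlier bulk messages
```

With a single FIFO queue, `sync-ack` would wait behind both bulk messages; the priority level lets it bypass them, which is the latency reduction the paper models.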
5.
Akira Nomoto, Yasuo Watanabe, Wataru Kaneko, Shugo Nakamura, Kentaro Shimizu 《Cluster computing》2004,7(1):65-72
This paper describes the design and implementation of a parallel programming environment called Distributed Shared Array (DSA), which provides a shared global array abstraction across different machines connected by a network. In DSA, users can define and use global arrays that can be accessed uniformly from any machine in the network. Explicit management of array area allocation, replication, and migration is achieved by explicit calls for array manipulation: defining array regions, reading and writing array regions, synchronization, and control of replication and migration. The DSA is integrated with Grid (Globus) services. This paper also describes the use of our model for gene cluster analysis, multiple alignment and molecular dynamics simulation. In these applications, global arrays are used for storing the distance matrix, alignment matrix and atom coordinates, respectively. Large array areas, which cannot be stored in the memory of individual machines, are made available by the DSA. DSA achieved scalable performance comparable to that of conventional parallel programs written in MPI.
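The global-array idea above can be sketched minimally: a 1-D array whose contiguous blocks are owned by different (simulated) nodes, with reads and writes routed to the owning block. The class and its methods are hypothetical stand-ins; real DSA adds replication, migration, synchronization, and Globus integration.

```python
class GlobalArray:
    """Toy DSA-style global array: each simulated node owns one
    contiguous block; element accesses are routed to the owner."""

    def __init__(self, size, nodes):
        self.block = size // nodes  # elements per node (assumes even split)
        self.store = {n: [0.0] * self.block for n in range(nodes)}

    def _locate(self, i):
        # (owning node, local offset) for global index i
        return divmod(i, self.block)

    def write(self, start, values):
        for k, v in enumerate(values):
            node, off = self._locate(start + k)
            self.store[node][off] = v

    def read(self, start, length):
        out = []
        for k in range(length):
            node, off = self._locate(start + k)
            out.append(self.store[node][off])
        return out


ga = GlobalArray(size=8, nodes=4)  # 2 elements per node
ga.write(3, [1.5, 2.5])            # write spans nodes 1 and 2
print(ga.read(2, 4))               # [0.0, 1.5, 2.5, 0.0]
```

The point of the abstraction is that the caller addresses the array uniformly; which node holds each element is hidden inside `_locate`, just as DSA hides distribution behind its region read/write calls.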
6.
Dennis Gannon, Sriram Krishnan, Liang Fang, Gopi Kandaswamy, Yogesh Simmhan, Aleksander Slominski 《Cluster computing》2005,8(4):271-277
Software Component Frameworks are well known in the commercial business application world and now this technology is being explored with great interest as a way to build large-scale scientific applications on parallel computers. In the case of Grid systems, the current architectural model is based on the emerging web services framework. In this paper we describe progress that has been made on the Common Component Architecture model (CCA) and discuss its success and limitations when applied to problems in Grid computing. Our primary conclusion is that a component model fits very well with a services-oriented Grid, but the model of composition must allow for a very dynamic (both in space and in time) control of composition. We note that this adds a new dimension to conventional service workflow and it extends the “Inversion of Control” aspects of most component systems. Dennis Gannon is a professor of Computer Science at Indiana University. He received his Ph.D. in Computer Science from the University of Illinois in 1980 and his Ph.D. in Mathematics from the University of California in 1974. From 1980 to 1985, he was on the faculty at Purdue University. His research interests include software tools for high performance distributed systems and problem solving environments for scientific computation. Sriram Krishnan received his Ph.D. in Computer Science from Indiana University in 2004. He is currently in the Grid Development Group at the San Diego Supercomputer Center where he is working on designing a Web services based architecture for biomedical applications that is secure and scalable, and is conducive to the creation of complex workflows. He received his undergraduate degree in Computer Engineering from the University of Mumbai, India. Liang Fang is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid computing, Web services, portals, and their security and scalability issues.
He is a Research Assistant in Computer Science at Indiana University, currently responsible for investigating authorization and other security solutions for the Linked Environments for Atmospheric Discovery (LEAD) project. Gopi Kandaswamy is a Ph.D. student in the Computer Science Department at Indiana University, where he is currently a Research Assistant. His research interests include Web services and workflow systems for the Grid. Yogesh Simmhan received his B.E. degree in Computer Science from Madras University, India in 2000, and is a doctoral candidate in Computer Science at Indiana University. He is currently working as a Research Assistant at Indiana University, investigating data management issues in the LEAD project. His interests lie in data provenance for workflow systems and its use in data quality estimation. Aleksander Slominski is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid and Web Services, streaming XML Pull Parsing and performance, Grid security, asynchronous messaging, events, and notification brokers, component technologies, and workflow composition. He is currently working as a Research Assistant investigating the creation and execution of dynamic workflows using the Grid Process Execution Language (GPEL), based on WS-BPEL.
7.
The dramatic growth of distributed computing applications is creating both an opportunity and a daunting challenge for users seeking to build applications that will play critical roles in their organization. Here, we discuss the use of a new system, Astrolabe, to automate self-configuration and monitoring, and to control adaptation. Astrolabe operates by creating a virtual system-wide hierarchical database, which evolves as the underlying information changes. Astrolabe is secure, robust under a wide range of failure and attack scenarios, and imposes low loads even under stress. To focus the discussion, we structure it around a hypothetical Web Services scenario. One of the major opportunities created by Astrolabe is to allow Web Services client systems to autonomically adapt when a data center becomes slow or unreachable. The authors were supported by Intel Corporation, DARPA/AFRL grant RADC F30602-99-1-0532, by AFOSR/MURI grant F49620-02-1-0233, Microsoft Research BARC and the Cornell/AFRL Information Assurance Institute.
8.
Using the RefSeq database and sequenced genome sequences as templates, a standard transcript database representing information at every level of transcription was built through large-scale computation, and a Web service system for standard transcript datasets of human and model organisms was established using Common Gateway Interface (CGI) technology. By submitting a RefSeq accession number or free-text annotation terms, users can retrieve the complete information for a sequence and perform online computation of gene structure analysis. The system currently covers six species (human, Arabidopsis, rice, rat, mouse, and zebrafish) and holds more than 180,000 data records. It provides an important tool for in-depth study of the transcriptomes of humans and other species, and a solid data foundation for further analysis of alternative splicing in eukaryotic genes.
9.
Ian Oliver, Alan Ede, Wendy Hawes, Alastair Grieve 《Ecological Management & Restoration》2005,6(3):197-205
Summary: In 2002 the Environmental Services Scheme (ESS) was launched in New South Wales, Australia. Its aim was to pilot a process to provide financial incentives to landholders to undertake changes in land use or land management that improved the status of environmental services (e.g. provision of clean water, healthy soils, biodiversity conservation). To guide the direction of incentive funds, metrics were developed for use by departmental staff to score the benefits of land use or land management changes to a range of environmental services. The purpose of this paper is to (i) report on the development of one of these metrics – the biodiversity benefits index; (ii) present the data generated by field application of the metric to 20 properties contracted to the ESS; and (iii) discuss the lessons learned and recent developments of the metric that aim to make it accessible to a wider range of end-users and applications.
10.
Ecological risk assessment (ERA) is a scientific tool used to support ecosystem-based management (EBM), but most current ERA methods consider only a few indices of particular species or components. Such limitations restrict the scope of results so that they are insufficient to reflect the integrated risk characterization of an ecosystem, thereby inhibiting the application of ERA in EBM. We incorporate the concept of ecosystem services into ERA and develop an improved ERA framework to create a comprehensive risk map of an ecosystem, accounting for multiple human activities and ecosystem services. Using the Yellow River as a case study, we show how this framework enables the implementation of integrated risk characterization and prioritization of the most important ecological risk issues in the ecosystem-based river management of the Yellow River. This framework can help practitioners facilitate better implementation of ERA within EBM in rivers or any target ecosystem.
11.
12.
13.
Wenjin Chen, Chung Wong, Evan Vosburgh, Arnold J. Levine, David J. Foran, Eugenia Y. Xu 《Journal of visualized experiments : JoVE》2014,(89)
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application – SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and a quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary “Manual Initialize” and “Hand Draw” tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Adopting this software should help make 3D tumor spheroids a routine in vitro model for drug screens in industry and academia.
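The volume step mentioned above (computing spheroid volume from the measured major and minor axial lengths) is commonly done with a prolate-ellipsoid approximation. The formula below is an assumption for illustration, not quoted from the paper.

```python
import math


def spheroid_volume(major, minor):
    """Prolate-ellipsoid approximation of spheroid volume from its
    major (L) and minor (W) axial lengths: V = pi/6 * L * W**2.
    Assumed formula for illustration; not quoted from the paper."""
    return math.pi / 6.0 * major * minor ** 2


# Hypothetical measurement: a spheroid imaged at 500 um x 400 um.
v = spheroid_volume(500.0, 400.0)
print(f"{v:.0f} cubic micrometers")
```

A quick sanity check on the formula: when the two axes are equal the spheroid is a sphere, and `spheroid_volume(d, d)` reduces to the sphere volume `4/3 * pi * (d/2)**3`.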
14.
15.
16.
Alam Jahangir, Muzaffar Alam, David S. Carter, Michael P. Dillon, Daisy Joe Du Bois, Anthony P.D.W. Ford, Joel R. Gever, Clara Lin, Paul J. Wagner, Yansheng Zhai, Jeff Zira 《Bioorganic & medicinal chemistry letters》2009,19(6):1632-1635
The purinoceptor subtypes P2X3 and P2X2/3 have been shown to play a pivotal role in models of various pain conditions. Identification of RO-4, a potent and selective dual P2X3/P2X2/3 diaminopyrimidine antagonist, prompted subsequent optimization of the template. This paper describes the SAR and optimization of the diaminopyrimidine ring, and particularly the substitution of the 2-amino group. The discovery of the highly potent and drug-like dual P2X3/P2X2/3 antagonist RO-51 is presented.
17.
《Dendrochronologia》2014,32(4):343-356
A number of processing options associated with the use of a “regional curve” to standardise tree-ring measurements and generate a chronology representing changing tree growth over time are discussed. It is shown that failing to use pith offset estimates can generate a small but systematic chronology error. Where chronologies contain long-timescale signal variance, tree indices created by division of the raw measurements by RCS curve values produce chronologies with a skewed distribution. A simple empirical method of converting tree indices to have a normal distribution is proposed. The Expressed Population Signal, which is widely used to estimate the statistical confidence of chronologies created using curve-fitting methods of standardisation, is not suitable for use with RCS-generated chronologies. An alternative implementation, which takes account of the uncertainty associated with long-timescale as well as short-timescale chronology variance, is proposed. The need to assess the homogeneity of differently-sourced sets of measurement data and their suitability for amalgamation into a single data set for RCS standardisation is discussed. The possible use of multiple growth-rate based RCS curves is considered, where a potential gain in chronology confidence must be balanced against the potential loss of long-timescale variance. An approach to the use of the “signal-free” method for generating artificial measurement series with the ‘noise’ characteristics of real data series but with a known chronology signal applied for testing standardisation performance is also described.
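The division step described above (tree indices formed by dividing each raw measurement by the regional-curve value for the same cambial age) can be sketched as follows. The series are hypothetical, alignment at cambial age 0 is assumed, and the paper's pith-offset handling is omitted.

```python
def rcs_indices(ring_widths, rcs_curve):
    """Toy RCS standardisation: divide each raw ring width by the
    regional-curve (mean growth) value at the same cambial age.
    Assumes both series start at cambial age 0; pith offsets omitted."""
    return [w / rcs_curve[age] for age, w in enumerate(ring_widths)]


# Hypothetical regional curve (mean growth by cambial age) and one tree.
rcs = [1.0, 0.5, 0.25, 0.5]
tree = [1.5, 0.5, 0.5, 0.25]
print(rcs_indices(tree, rcs))  # [1.5, 1.0, 2.0, 0.5]
```

An index above 1 means the tree grew faster than the regional expectation at that age; because the indices are ratios, their distribution is skewed, which motivates the normalisation method the paper proposes.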
18.
Attachment sites for bacteriophage P2 on the Escherichia coli chromosome: DNA sequences, localization on the physical map, and detection of a P2-like remnant in E. coli K-12 derivatives.
Integration of bacteriophage P2 into the Escherichia coli genome involves recombination between two attachment sites, attP and attB, one on the phage and one on the host genome, respectively. At least 10 different attB sites have been identified over the years. In E. coli C, one site, called locI, is preferred, being occupied before any of the others. In E. coli K-12, no such preference is seen (reviewed in L. E. Bertani and E. W. Six, p. 73-143, in R. Calendar, ed., The Bacteriophages, vol. 2, 1988). The DNA sequence of locI has been determined, and it shows a core sequence of 27 nucleotides identical to attP (A. Yu, L. E. Bertani, and E. Haggård-Ljungquist, Gene 80:1-12, 1989). By inverse polymerase chain reactions, the prophage-host junctions of DNA extracted from P2 lysogenic strains have been amplified, cloned, and sequenced. By combining the attL and attR sequences, the attB sequences of locations II, III, and H have been deduced. The core sequence of location II had 20 matches to the 27-nucleotide core sequence of attP; the sequences of locations III and H had 17 matches. Thus, the P2 integrase accepts at least up to 37% mismatches within the core sequence. The E. coli K-12 strains examined all contain a 639-nucleotide-long cryptic remnant of P2 at a site with a sequence similar to that of locI but that may have a different map position. The P2 remnant consists of the C-terminal part of gene D, all of gene ogr, and attR. Locations II, III, and H have been located on Kohara's physical map to positions 3670, 1570 to 1575, and 2085, respectively.
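The mismatch arithmetic above (17 of 27 core positions matching implies the integrase tolerates 10/27, roughly 37%, mismatches) can be illustrated with a toy positionwise match counter. The sequences below are placeholders, not the published attP or locII cores.

```python
def core_matches(seq_a, seq_b):
    """Count positionwise matches between two equal-length att core
    sequences. Toy illustration; real cores are not reproduced here."""
    assert len(seq_a) == len(seq_b)
    return sum(a == b for a, b in zip(seq_a, seq_b))


attP_core = "A" * 27             # placeholder for the 27-nt attP core
locII_core = "A" * 20 + "C" * 7  # placeholder with 20 of 27 matches
m = core_matches(attP_core, locII_core)
print(m, f"({(27 - m) / 27:.0%} mismatch)")  # 20 (26% mismatch)
```

With 17 matches, the same computation gives 10 mismatches out of 27, about 37%, which is the tolerance figure quoted in the abstract.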
19.
David S. Carter, Muzaffar Alam, Haiying Cai, Michael P. Dillon, Anthony P.D.W. Ford, Joel R. Gever, Alam Jahangir, Clara Lin, Amy G. Moore, Paul J. Wagner, Yansheng Zhai 《Bioorganic & medicinal chemistry letters》2009,19(6):1628-1631
P2X purinoceptors are ligand-gated ion channels whose endogenous ligand is ATP. Both the P2X3 and P2X2/3 receptor subtypes have been shown to play an important role in the regulation of sensory function and dual P2X3/P2X2/3 antagonists offer significant potential for the treatment of pain. A high-throughput screen of the Roche compound collection resulted in the identification of a novel series of diaminopyrimidines; subsequent optimization resulted in the discovery of RO-4, a potent, selective and drug-like dual P2X3/P2X2/3 antagonist.