1.
Load balancing in a workstation-based cluster system has been investigated extensively, mainly focusing on the effective usage
of global CPU and memory resources. However, if a significant portion of applications running in the system is I/O-intensive,
traditional load balancing policies can cause system performance to decrease substantially. In this paper, two I/O-aware load-balancing
schemes, referred to as IOCM and WAL-PM, are presented to improve the overall performance of a cluster system with a general
and practical workload including I/O activities. The proposed schemes dynamically detect I/O load imbalance of nodes in a
cluster, and determine whether to migrate some I/O load from overloaded nodes to less-loaded or under-loaded nodes. Currently
running jobs are eligible for migration in WAL-PM only if the migration improves overall performance. Besides balancing I/O load, the scheme
judiciously takes into account both CPU and memory load sharing in the system, thereby maintaining the same level of performance
as existing schemes when I/O load is low or well balanced. Extensive trace-driven simulations for both synthetic and real
I/O-intensive applications show that: (1) Compared with existing schemes that only consider CPU and memory, the proposed schemes
improve the performance with respect to mean slowdown by up to a factor of 20; (2) When compared to the existing approaches
that only consider I/O with non-preemptive job migrations, the proposed schemes achieve improvements in mean slowdown by up
to a factor of 10; (3) Under CPU-memory intensive workloads, our scheme improves the performance over the existing approaches
that only consider I/O by up to 47.5%.
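The abstract does not spell out the IOCM/WAL-PM cost models, but the core decision it describes, detecting I/O imbalance and migrating load from an overloaded node to a lighter one while also weighing CPU and memory, can be sketched roughly as follows. The weights, threshold, and node representation are illustrative assumptions, not the paper's:

```python
# Hypothetical sketch of an I/O-aware migration decision; not the published
# IOCM/WAL-PM algorithms, whose cost models are not given in the abstract.

def composite_load(node, w_io=0.6, w_cpu=0.2, w_mem=0.2):
    """Weighted load index; the weights are illustrative, not from the paper."""
    return w_io * node["io"] + w_cpu * node["cpu"] + w_mem * node["mem"]

def pick_migration(nodes, imbalance_ratio=1.5):
    """Return (source, target) node names if load is imbalanced enough to
    justify migrating some I/O load; otherwise return None."""
    by_load = sorted(nodes, key=composite_load)
    lightest, heaviest = by_load[0], by_load[-1]
    if composite_load(heaviest) > imbalance_ratio * max(composite_load(lightest), 1e-9):
        return heaviest["name"], lightest["name"]
    return None

nodes = [
    {"name": "n0", "io": 0.9, "cpu": 0.4, "mem": 0.3},
    {"name": "n1", "io": 0.1, "cpu": 0.3, "mem": 0.2},
]
print(pick_migration(nodes))  # ('n0', 'n1') under these illustrative weights
```

A real scheme would additionally estimate the migration cost itself, since WAL-PM migrates running jobs only when the move pays off overall.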
Xiao Qin received the BSc and MSc degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999,
respectively. He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. Currently, he
is an assistant professor in the department of computer science at the New Mexico Institute of Mining and Technology. His
research interests include parallel and distributed systems, storage systems, real-time computing, performance evaluation,
and fault-tolerance. He served on program committees of international conferences like CLUSTER, ICPP, and IPCCC. During 2000–2001,
he was on the editorial board of IEEE Distributed Systems Online. He is a member of the IEEE.
Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China;
the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in
Computer Science in 1991 from the Texas A&M University, College Station, Texas, USA. Since August 1991 he has been at the
University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Associate Professor and Vice Chair in the Department of
Computer Science and Engineering. His present research interests are computer architecture, parallel/distributed computing,
computer storage systems and parallel I/O, performance evaluation, middleware, networking, and computational engineering.
He has over 70 publications in major journals and international conferences in these areas and his research has been supported
by NSF, DOD and the State of Nebraska. Dr. Jiang is a Member of ACM, the IEEE Computer Society, and the ACM SIGARCH and ACM
SIGCOMM.
Yifeng Zhu received the B.E. degree in Electrical Engineering from Huazhong University of Science and Technology in 1998 and the M.S.
degree in computer science from the University of Nebraska–Lincoln (UNL) in 2002. Currently he is working towards his Ph.D. degree
in the department of computer science and engineering at UNL. His main fields of research interests are parallel I/O, networked
storage, parallel scheduling, and cluster computing. He is a student member of IEEE.
David Swanson received a Ph.D. in physical (computational) chemistry at the University of Nebraska-Lincoln (UNL) in 1995, after which he
worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National
Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997–1998. In early 1999 he returned
to UNL where he has coordinated the Research Computing Facility and currently serves as an Assistant Research Professor in
the Department of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the
State of Nebraska have supported his research in areas such as large-scale parallel simulation and distributed systems.
2.
While aggregating the throughput of existing disks on cluster nodes is a cost-effective approach to alleviate the I/O bottleneck
in cluster computing, this approach suffers from potential performance degradation due to contention for shared resources
on the same node between storage data processing and user task computation. This paper proposes to judiciously utilize the
storage redundancy, in the form of mirroring in a RAID-10-style file system, to alleviate this performance degradation.
More specifically, a heuristic scheduling algorithm, motivated by observations of a simple cluster configuration, is developed
to spatially schedule write operations on the less-loaded node of each mirroring pair. The duplication of modified
data to the mirroring nodes is performed asynchronously in the background. The read performance is improved by two techniques:
doubling the degree of parallelism and hot-spot skipping. A synthetic benchmark is used to evaluate these algorithms in a
real cluster environment and the proposed algorithms are shown to be very effective in performance enhancement.
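The write-scheduling heuristic described above, sending each write to the less-loaded node of a mirroring pair and duplicating to the mirror asynchronously in the background, might be sketched like this. The class, node names, and load metric are invented for illustration; the paper's actual heuristic is not given in the abstract:

```python
# Illustrative sketch only: route each write to the currently less-loaded node
# of its mirror pair and queue the copy for asynchronous background duplication.
from collections import deque

class MirrorPair:
    def __init__(self, primary, mirror):
        self.load = {primary: 0, mirror: 0}          # outstanding writes per node
        self.pending_copies = deque()                # (block, target) copied later

    def write(self, block):
        # Pick the less-loaded node; ties go to the first node in the pair.
        target = min(self.load, key=self.load.get)
        other = next(n for n in self.load if n != target)
        self.load[target] += 1
        self.pending_copies.append((block, other))   # async mirror update
        return target

pair = MirrorPair("nodeA", "nodeB")
print([pair.write(b) for b in ("b0", "b1", "b2")])  # ['nodeA', 'nodeB', 'nodeA']
```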
Yifeng Zhu received his B.Sc. degree in Electrical Engineering in 1998 from Huazhong University of Science and Technology, Wuhan, China;
the M.S. and Ph.D. degrees in Computer Science from the University of Nebraska–Lincoln in 2002 and 2005, respectively. He is an
assistant professor in the Electrical and Computer Engineering department at University of Maine. His main research interests
are cluster computing, grid computing, computer architecture and systems, and parallel I/O storage systems. Dr. Zhu is a Member
of ACM, IEEE, the IEEE Computer Society, and the Francis Crowe Society.
Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China;
the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in
Computer Science in 1991 from the Texas A&M University, College Station, Texas, USA. Since August 1991 he has been at the
University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Professor and Vice Chair in the Department of Computer
Science and Engineering. His present research interests are computer architecture, parallel/distributed computing, cluster
and Grid computing, computer storage systems and parallel I/O, performance evaluation, real-time systems, middleware, and
distributed systems for distance education. He has over 100 publications in major journals and international conferences in
these areas and his research has been supported by NSF, DOD and the State of Nebraska. Dr. Jiang is a Member of ACM, the IEEE
Computer Society, and the ACM SIGARCH.
Xiao Qin received the BS and MS degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999, respectively.
He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. Currently, he is an assistant
professor in the department of computer science at the New Mexico Institute of Mining and Technology. He served as a subject
area editor of IEEE Distributed Systems Online (2000–2001). His research interests are in parallel and distributed systems, storage systems, real-time computing, performance
evaluation, and fault-tolerance. He is a member of the IEEE.
Dan Feng received the Ph.D degree from Huazhong University of Science and Technology, Wuhan, China, in 1997. She is currently a professor
of the School of Computer Science, Huazhong University of Science and Technology, Wuhan, China. She is the principal scientist of
the National Grand Fundamental Research 973 Program of China, “Research on the organization and key technologies of the Storage
System on the next generation Internet.” Her research interests include computer architecture, storage system, parallel I/O,
massive storage and performance evaluation.
David Swanson received a Ph.D. in physical (computational) chemistry at the University of Nebraska-Lincoln (UNL) in 1995, after which he
worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National
Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997–1998. In 1999 he returned
to UNL where he directs the Research Computing Facility and currently serves as an Assistant Research Professor in the Department
of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the State of Nebraska
have supported his research in areas such as large-scale scientific simulation and distributed systems.
3.
The adequate location of wells in oil and environmental applications has a significant economic impact on reservoir management.
However, the determination of optimal well locations is both challenging and computationally expensive. The overall goal of
this research is to use the emerging Grid infrastructure to realize an autonomic self-optimizing reservoir framework. In this
paper, we present a policy-driven peer-to-peer Grid middleware substrate to enable the use of the Simultaneous Perturbation
Stochastic Approximation (SPSA) optimization algorithm, coupled with the Integrated Parallel Accurate Reservoir Simulator
(IPARS) and an economic model to find the optimal solution for the well placement problem.
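SPSA itself is a standard algorithm: it estimates a gradient from only two objective evaluations per iteration, regardless of dimension, by perturbing all parameters simultaneously with a random ±1 vector. A minimal sketch on a toy objective follows; the reservoir simulator and economic model are replaced by a quadratic, and the gain constants are illustrative defaults, not those used in the paper:

```python
# Minimal SPSA sketch; constants are illustrative, not the paper's settings.
import random

def spsa_minimize(f, theta, iters=2000, a=0.1, c=0.1, seed=1):
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # commonly used gain-sequence exponents
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        # Two evaluations yield one simultaneous-perturbation gradient estimate.
        g_scale = (f(plus) - f(minus)) / (2.0 * ck)
        theta = [t - ak * g_scale / d for t, d in zip(theta, delta)]
    return theta

# Toy "well placement": minimize squared distance to the sweet spot (3, -2).
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2
est = spsa_minimize(f, [0.0, 0.0])
print(est)  # converges close to [3, -2]
```

The two-evaluation property is what makes SPSA attractive when each objective evaluation is a full reservoir simulation.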
Wolfgang Bangerth is a postdoctoral research fellow at both the Institute for Computational Engineering and Sciences, and the Institute for
Geophysics, at the University of Texas at Austin. He obtained his Ph.D. in applied mathematics from the University of Heidelberg,
Germany in 2002. He is the project leader for the deal.II finite element library (http://www.dealii.org). Wolfgang is a member
of SIAM, AAAS, and ACM.
Hector Klie obtained his Ph.D. degree in Computational Science and Engineering at Rice University in 1996. He completed his master's and
undergraduate degrees in Computer Science at the Simon Bolivar University, Venezuela, in 1991 and 1989, respectively. Hector
Klie's main research interests are in the development of efficient parallel linear and nonlinear solvers and optimization
algorithms for large-scale transport and flow problems in porous media. He currently holds the position of Associate Director
and Senior Research Associate in the Center for Subsurface Modeling at the Institute of Computational Science and Engineering
at The University of Texas at Austin. Dr. Klie is currently a member of SIAM, SPE, and SEG.
Vincent Matossian obtained a Masters in applied physics from the French Université Pierre et Marie Curie. Vincent is currently pursuing a Ph.D.
degree in distributed systems at the Department of Electrical and Computer Engineering at Rutgers University under the guidance
of Manish Parashar. His research interests include information discovery and ad-hoc communication paradigms in decentralized
systems.
Manish Parashar is Professor of Electrical and Computer Engineering at Rutgers University, where he also is director of the Applied Software
Systems Laboratory. He received a BE degree in Electronics and Telecommunications from Bombay University, India and MS and
Ph.D. degrees in Computer Engineering from Syracuse University. He has received the Rutgers Board of Trustees Award for Excellence
in Research (2004–2005), NSF CAREER Award (1999) and the Enrico Fermi Scholarship from Argonne National Laboratory (1996).
His research interests include autonomic computing, parallel & distributed computing (including peer-to-peer and Grid computing),
scientific computing, and software engineering. He is a senior member of IEEE, a member of the IEEE Computer Society Distinguished
Visitor Program (2004–2007), and a member of ACM.
Mary Fanett Wheeler obtained her Ph.D. at Rice University in 1971. Her primary research interest is in the numerical solutions of partial differential
systems with applications to flow in porous media, geomechanics, surface flow, and parallel computation. Her numerical work
includes formulation, analysis and implementation of finite-difference/finite-element discretization schemes for nonlinear,
coupled PDE's as well as domain decomposition iterative solution methods. She has directed the Center for Subsurface Modeling,
The University of Texas at Austin, since its creation in 1990. Dr. Wheeler is recipient of the Ernest and Virginia Cockrell
Chair in Engineering and is Professor in the Department of Aerospace Engineering & Engineering Mechanics and in the Department
of Petroleum & Geosystems Engineering of The University of Texas at Austin.
4.
Clusters of workstations are a practical approach to parallel computing that provide high performance at a low cost for many
scientific and engineering applications. In order to handle problems with increasing data sets, methods supporting parallel
out-of-core computations must be investigated. Since writing an out-of-core version of a program is a difficult task and virtual
memory systems do not perform well in some cases, we have developed a parallel programming interface and the support library
to provide efficient and convenient access to the out-of-core data. This paper focuses on how these components extend the
range of problem sizes that can be solved on the cluster of workstations. Execution time of Jacobi iteration when using our
interface, virtual memory and PVFS are compared to characterize the performance for various problem sizes, and it is concluded
that our new interface significantly increases the sizes of problems that can be efficiently solved.
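The out-of-core idea, keeping the data set on disk and exposing it to the program one chunk at a time through a convenient interface, can be illustrated with a small sketch. The chunk size and function names are assumptions for illustration, not the paper's actual interface:

```python
# Toy out-of-core access sketch: the data set lives in a binary file and is
# exposed chunk by chunk, so only one chunk is ever resident in memory.
import array, os, tempfile

CHUNK = 1024  # elements per in-core chunk (illustrative)

def write_dataset(path, n):
    """Materialize a 'large' data set of n doubles on disk, chunk by chunk."""
    with open(path, "wb") as f:
        for start in range(0, n, CHUNK):
            buf = array.array("d", range(start, min(start + CHUNK, n)))
            buf.tofile(f)

def iter_chunks(path):
    """Yield successive in-core chunks; the caller computes on each in turn."""
    itemsize = array.array("d").itemsize
    with open(path, "rb") as f:
        while True:
            raw = f.read(CHUNK * itemsize)
            if not raw:
                break
            buf = array.array("d")
            buf.frombytes(raw)
            yield buf

path = os.path.join(tempfile.mkdtemp(), "ooc.bin")
write_dataset(path, 5000)
total = sum(sum(chunk) for chunk in iter_chunks(path))
print(total)  # equals sum(range(5000)) without holding all 5000 values at once
```

An out-of-core Jacobi iteration would follow the same pattern, streaming row blocks (plus boundary rows) through memory instead of relying on virtual memory paging.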
Jianqi Tang received the B.Sc. and M.Sc. degrees from Harbin Institute of Technology in 1997 and 1999, respectively, both in computer application.
Currently, she is a Ph.D. candidate at the Department of Computer Science and Engineering, Harbin Institute of Technology.
She has participated in several National research projects. Her research interests include parallel computing, parallel I/O
and grid computing.
Binxing Fang received the M.Sc. degree in 1984 from Tsinghua University and the Ph.D. degree from Harbin Institute of Technology in 1989, both in computer science.
From 1990 to 1993 he was a postdoctoral fellow at the National University of Defense Technology. Since 1984, he has been a faculty member
at the Department of Computer Science and Engineering of Harbin Institute of Technology, where he is presently a Professor.
He is a Member of the National Information Expert Consultant Group and a Standing Member of the Council of Chinese Society
of Communications. His research efforts focus on parallel computing, computer network and information security. Professor
Fang has implemented over 30 projects from the state and ministry/province.
Mingzeng Hu was born in 1935. He has been with the Department of Computer Science and Engineering at Harbin Institute of Technology since
1958, where he is currently a Professor. He was a visiting scholar in the Siemens Company, Germany from 1978 to 1979, a visiting
associate professor in Chiba University, Japan from 1984 to 1985, and a visiting professor in York University, Canada from
1989 to 1995. He is the Director of the National Key Laboratory of Computer Information Content Security. He is also a Member
of 3rd Academic Degree Committee under the State Council of China. Professor Hu’s research interests include high performance
computer architecture and parallel processing technology, fault-tolerant computing, network systems, VLSI design, and computer
system security technology. He has implemented many projects from the state and ministry/province and has won several Ministry
Science and Technology Progress Awards. He has published over 100 papers in core journals at home and abroad, and one book. Professor
Hu has supervised over 20 doctoral students.
Hongli Zhang received the M.Sc. degree in computer system software in 1996 and the Ph.D. degree in computer architecture in 1999, both from Harbin Institute of Technology.
Currently, she is an Associate Professor at the Department of Computer Science and Engineering, Harbin Institute of Technology.
Her research interests include computer network security and parallel computing.
5.
Dennis Gannon, Sriram Krishnan, Liang Fang, Gopi Kandaswamy, Yogesh Simmhan, Aleksander Slominski. Cluster Computing, 2005, 8(4): 271–277.
Software Component Frameworks are well known in the commercial business application world and now this technology is being
explored with great interest as a way to build large-scale scientific applications on parallel computers. In the case of Grid
systems, the current architectural model is based on the emerging web services framework. In this paper we describe progress
that has been made on the Common Component Architecture model (CCA) and discuss its success and limitations when applied to
problems in Grid computing. Our primary conclusion is that a component model fits very well with a services-oriented Grid,
but the model of composition must allow for a very dynamic (both in space and in time) control of composition. We note that
this adds a new dimension to conventional service workflow and it extends the “Inversion of Control” aspects of most component
systems.
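A toy sketch of the component-composition idea discussed above: components expose "provides" ports and call peers through "uses" ports, and because the wiring is data rather than code, the framework can recompose it dynamically at run time, the "Inversion of Control" the abstract refers to. The class and port names are invented for illustration and are not the CCA API:

```python
# Hand-rolled sketch of provides/uses ports with dynamic rewiring; names are
# invented for illustration, not taken from the CCA specification.

class Component:
    def __init__(self, name):
        self.name = name
        self.provides = {}   # port name -> callable implementing the port
        self.uses = {}       # port name -> provider component

    def connect(self, port, provider):
        # Inversion of control: components never hard-code their peers;
        # the framework rewires this mapping, even while the system runs.
        self.uses[port] = provider

    def call(self, port, *args):
        return self.uses[port].provides[port](*args)

solver = Component("solver")
fast_io = Component("fast_io"); fast_io.provides["read"] = lambda k: "fast:" + k
slow_io = Component("slow_io"); slow_io.provides["read"] = lambda k: "slow:" + k

solver.connect("read", slow_io)
print(solver.call("read", "grid"))   # slow:grid
solver.connect("read", fast_io)      # dynamic re-composition, mid-run
print(solver.call("read", "grid"))   # fast:grid
```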
Dennis Gannon is a professor of Computer Science at Indiana University. He received his Ph.D. in Computer Science from the University of
Illinois in 1980 and his Ph.D. in Mathematics from the University of California in 1974. From 1980 to 1985, he was on the
faculty at Purdue University. His research interests include software tools for high performance distributed systems and problem
solving environments for scientific computation.
Sriram Krishnan received his Ph.D. in Computer Science from Indiana University in 2004. He is currently in the Grid Development Group at
the San Diego Supercomputer Center where he is working on designing a Web services based architecture for biomedical applications
that is secure and scalable, and is conducive to the creation of complex workflows. He received his undergraduate degree in
Computer Engineering from the University of Mumbai, India.
Liang Fang is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid computing, Web services,
portals, their security and scalability issues. He is a Research Assistant in Computer Science at Indiana University, currently
responsible for investigating authorization and other security solutions to the project of Linked Environments for Atmospheric
Discovery (LEAD).
Gopi Kandaswamy is a Ph.D. student in the Computer Science Department at Indiana University, where he is currently a Research Assistant. His
research interests include Web services and workflow systems for the Grid.
Yogesh Simmhan received his B.E. degree in Computer Science from Madras University, India in 2000, and is a doctoral candidate in Computer
Science at Indiana University. He is currently working as a Research Assistant at Indiana University, investigating data management
issues in the LEAD project. His interests lie in data provenance for workflow systems and its use in data quality estimation.
Aleksander Slominski is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid and Web Services, streaming
XML Pull Parsing and performance, Grid security, asynchronous messaging, events, and notifications brokers, component technologies,
and workflow composition. He is currently working as a Research Assistant investigating creation and execution of dynamic
workflows using Grid Process Execution Language (GPEL) based on WS-BPEL.
6.
Impact of Admission and Cache Replacement Policies on Response Times of Jobs on Data Grids
Caching techniques have been used widely to improve the performance gaps of storage hierarchies in computing systems. Little
is known about the impact of policies on the response times of jobs that access and process very large files in data grids,
particularly when data and computations on the data have to be co-located on the same host. In data intensive applications
that access large data files over wide area network environment, such as data-grids, the combination of policies for job servicing
(or scheduling), caching and cache replacement can significantly impact the performance of grid jobs. We present preliminary
results of a simulation study that combines an admission policy with a cache replacement policy when servicing jobs submitted
to a storage resource manager. The results show that, in comparison to a first-come-first-serve policy, the response times
of jobs are significantly improved, for practical limits of disk cache sizes, when the jobs that are back-logged to access
the same files are taken into consideration in scheduling the next file to be retrieved into the disk cache. Not only are
the response times of jobs improved, but also the metric measures for caching policies, such as the hit ratio and the average
cost per retrieval, are improved irrespective of the cache replacement policy used.
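The scheduling idea reported here, preferring to stage next the file with the most back-logged jobs rather than serving strictly first-come-first-serve, can be sketched as follows. The cost model and the cache replacement policy are simplified away; the function names are invented:

```python
# Sketch of the admission-scheduling idea: fetching the most-demanded file
# services many back-logged jobs with a single retrieval into the disk cache.
from collections import Counter

def next_file_fcfs(queue):
    """First-come-first-serve: stage the file of the oldest waiting job."""
    return queue[0][1]

def next_file_backlog(queue):
    """Stage the file with the most back-logged jobs; ties go to the
    earliest arrival."""
    waiting = Counter(f for _, f in queue)
    return max(queue, key=lambda job: (waiting[job[1]], -job[0]))[1]

# Jobs as (arrival_time, file_requested): one early job wants A, three want B.
queue = [(0, "A"), (1, "B"), (2, "B"), (3, "B")]
print(next_file_fcfs(queue), next_file_backlog(queue))  # A B
```

Staging B first completes three jobs for one retrieval, which is why both response times and cache metrics such as hit ratio improve in the reported simulations.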
Ekow Otoo is research staff scientist with the scientific data management group at Lawrence Berkeley National Laboratory, University
of California, Berkeley. He received his B.Sc. degree in Electrical Engineering from the University of Science and Technology,
Kumasi, Ghana and a post graduate diploma in Computer Science from the University of Ghana, Legon. In 1977, he received his
M.Sc. degree in Computer Science from the University of Newcastle Upon Tyne in Britain and his Ph.D. degree in Computer Science
from McGill University, Montreal, Canada in 1983. He joined the faculty of the School of Computer Science, Carleton University,
in 1983 and from 1987 to 1999, he was a tenured faculty member of the School of Computer Science, Carleton University, Ottawa,
Canada. He has served as research consultant to Bell Northern Research, Ottawa, Canada, and as a research project consultant
to the GIS Division, Geomatics Canada, Natural Resources Canada, from 1990 to 1998. Ekow Otoo is a member of the ACM and IEEE.
His research interests include database management systems, data structures and algorithms, parallel I/O for high performance
computing, parallel and distributed computing.
Doron Rotem is currently a senior staff scientist and a member of the Data Management group at the Lawrence Berkeley National Lab. His
research interests include Grid Computing, Workflow, Scientific Data Management, and Parallel and Distributed Computing and
Algorithms. He has published over 80 papers in international journals and conferences in these areas. Prior to that, Dr Rotem
co-founded and served as a CTO of a startup company, called CommerceRoute, that made software products in the area of workflow
and data integration and before that, he was an Associate Professor in the Department of Computer Science, University of Waterloo,
Canada. Dr. Rotem holds a B.Sc degree in Mathematics and Statistics from the Hebrew University, Jerusalem, Israel and a Ph.D.
in Computer Science from the University of the Witwatersrand, Johannesburg, South Africa.
Arie Shoshani is a senior staff scientist at Lawrence Berkeley National Laboratory. He joined LBNL in 1976. He heads the Scientific Data
Management Group. He received his Ph.D. from Princeton University in 1969. From 1969 to 1976, he was a researcher at System
Development Corporation, where he worked on the Network Control Program for the ARPAnet, distributed databases, database conversion,
and natural language interfaces to data management systems. His current areas of work include data models, query languages,
temporal data, statistical and scientific database management, storage management on tertiary storage, and grid storage middleware.
Arie is also the director of a Scientific Data Management (SDM) Integrated Software Infrastructure Center (ISIC), one of seven
centers selected by the SciDAC program at DOE in 2001. In this capacity, he is coordinating the work of collaborators from
4 DOE laboratories and 4 universities (see: http://sdmcenter.lbl.gov). Dr. Shoshani has published over 65 technical papers
in refereed journals and conferences, chaired several workshops, conferences, and panels in database management; and served
on numerous program committees for various database conferences. He also served as an associate editor for the ACM Transactions
on Database Systems. He was elected a member of the VLDB Endowment Board, served as the Publication Board Chairperson for
the VLDB Journal, and as the Vice-President of the VLDB Endowment. His home page is http://www.lbl.gov/arie.
7.
Jiannong Cao, Alvin T. S. Chan, Yudong Sun, Sajal K. Das, Minyi Guo. Cluster Computing, 2006, 9(3): 355–371.
Application scheduling plays an important role in high-performance cluster computing. Application scheduling can be classified
as job scheduling and task scheduling. This paper presents a survey on the software tools for the graph-based scheduling on
cluster systems with the focus on task scheduling. The tasks of a parallel or distributed application can be properly scheduled
onto multi-processors in order to optimize the performance of the program (e.g., execution time or resource utilization).
In general, scheduling algorithms are designed based on the notion of task graph that represents the relationship of parallel
tasks. The scheduling algorithms map the nodes of a graph to the processors in order to minimize overall execution time. Although
many scheduling algorithms have been proposed in the literature, surprisingly few practical tools can be found in actual
use. After discussing the fundamental scheduling techniques, we propose a framework and taxonomy for the scheduling tools
on clusters. Using this framework, the features of existing scheduling tools are analyzed and compared. We also discuss the
important issues in improving the usability of the scheduling tools.
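The graph-based scheduling the survey covers can be illustrated with a minimal list-scheduling sketch: order tasks topologically, then greedily place each on the processor that can start it earliest. Real algorithms such as HEFT add ranking heuristics and communication costs, which are omitted here; all names below are illustrative:

```python
# Minimal list-scheduling sketch for a task graph on homogeneous processors.

def _depth(t, deps):
    """Longest chain of predecessors ending at t (gives a topological order)."""
    return 1 + max((_depth(p, deps) for p in deps.get(t, [])), default=0)

def list_schedule(tasks, deps, cost, n_procs):
    """tasks: iterable of ids; deps: {task: [predecessors]}; cost: {task: time}.
    Returns (task -> processor mapping, overall makespan)."""
    finish = {}                      # task -> finish time
    proc_free = [0.0] * n_procs     # earliest free time per processor
    placement = {}
    for t in sorted(tasks, key=lambda t: _depth(t, deps)):
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        proc = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[proc], ready)
        finish[t] = start + cost[t]
        proc_free[proc] = finish[t]
        placement[t] = proc
    return placement, max(finish.values())

# Diamond graph a -> {b, c} -> d on two processors: b and c run in parallel.
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
cost = {"a": 1, "b": 2, "c": 2, "d": 1}
print(list_schedule("abcd", deps, cost, 2))  # makespan 4
```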
This work is supported by the Hong Kong Polytechnic University under grant H-ZJ80 and by NASA Ames Research Center by a cooperative
grant agreement with the University of Texas at Arlington.
Jiannong Cao received the BSc degree in computer science from Nanjing University, Nanjing, China in 1982, and the MSc and the Ph.D degrees
in computer science from Washington State University, Pullman, WA, USA, in 1986 and 1990 respectively. He is currently an
associate professor in the Department of Computing at the Hong Kong Polytechnic University, Hong Kong. He is also the director
of the Internet and Mobile Computing Lab in the department. He was on the faculty of computer science at James Cook University
and University of Adelaide in Australia, and City University of Hong Kong. His research interests include parallel and distributed
computing, networking, mobile computing, fault tolerance, and distributed software architecture and tools. He has published
over 120 technical papers in the above areas. He has served as a member of editorial boards of several international journals,
a reviewer for international journals/conference proceedings, and also as an organizing/programme committee member for many
international conferences. Dr. Cao is a member of the IEEE Computer Society, the IEEE Communication Society, IEEE, and ACM.
He is also a member of the IEEE Technical Committee on Distributed Processing, IEEE Technical Committee on Parallel Processing,
IEEE Technical Committee on Fault Tolerant Computing, and Computer Architecture Professional Committee of the China Computer
Federation.
Alvin Chan is currently an assistant professor at the Hong Kong Polytechnic University. He graduated from the University of New South
Wales with a Ph.D. degree in 1995 and was subsequently employed as a Research Scientist by the CSIRO, Australia. From 1997
to 1998, he was employed by the Centre for Wireless Communications, National University of Singapore as a Program Manager.
Dr. Chan is one of the founding members and director of a university spin-off company, Information Access Technology Limited.
He is an active consultant and has been providing consultancy services to both local and overseas companies. His research
interests include mobile computing, context-aware computing and smart card applications.
Yudong Sun received the B.S. and M.S. degrees from Shanghai Jiao Tong University, China. He received Ph.D. degree from the University
of Hong Kong in 2002, all in computer science. From 1988 to 1996, he was on the teaching staff in the Department of Computer
Science and Engineering at Shanghai Jiao Tong University. From 2002 to 2003, he held a research position at the Hong Kong
Polytechnic University. At present, he is a Research Associate in School of Computing Science at University of Newcastle upon
Tyne, UK. His research interests include parallel and distributed computing, Web services, Grid computing, and bioinformatics.
Sajal K. Das is currently a Professor of Computer Science and Engineering and the Founding Director of the Center for Research in Wireless
Mobility and Networking (CReWMaN) at the University of Texas at Arlington. His current research interests include resource
and mobility management in wireless networks, mobile and pervasive computing, sensor networks, mobile internet, parallel processing,
and grid computing. He has published over 250 research papers, and holds four US patents in wireless mobile networks. He received
the Best Paper Awards in ACM MobiCom’99, ICOIN-16, ACM MSWiM’00, and ACM/IEEE PADS’97. Dr. Das serves on the Editorial Boards
of IEEE Transactions on Mobile Computing, ACM/Kluwer Wireless Networks, Parallel Processing Letters, Journal of Parallel Algorithms
and Applications. He served as General Chair of IEEE PerCom’04, IWDC’04, MASCOTS’02, and ACM WoWMoM’00-02; General Vice Chair of
IEEE PerCom’03, ACM MobiCom’00 and IEEE HiPC’00-01; Program Chair of IWDC’02, WoWMoM’98-99; TPC Vice Chair of ICPADS’02; and
as TPC member of numerous IEEE and ACM conferences.
Minyi Guo received his Ph.D. degree in information science from the University of Tsukuba, Japan, in 1998. From 1998 to 2000, Dr. Guo was
a research scientist at NEC Soft, Ltd., Japan. He is currently a professor at the Department of Computer Software, The
University of Aizu, Japan. From 2001 to 2003, he was a visiting professor of Georgia State University, USA, Hong Kong Polytechnic
University, Hong Kong. Dr. Guo has served as general chair, program committee or organizing committee chair for many international
conferences, and delivered more than 20 invited talks in USA, Australia, China, and Japan. He is the editor-in-chief of the
Journal of Embedded Systems. He also serves on the editorial boards of International Journal of High Performance Computing and Networking,
Journal of Embedded Computing, Journal of Parallel and Distributed Scientific and Engineering Computing, and International
Journal of Computer and Applications.
Dr. Guo’s research interests include parallel and distributed processing, parallelizing compilers, data parallel languages,
data mining, molecular computing and software engineering. He is a member of the ACM, IEEE, IEEE Computer Society, and IEICE.
He is listed in Marquis Who’s Who in Science and Engineering.
8.
High-performance computing increasingly occurs on “computational grids” composed of heterogeneous and geographically distributed
systems of computers, networks, and storage devices that collectively act as a single “virtual” computer. A key challenge
in this environment is to provide efficient access to data distributed across remote data servers. Our parallel I/O framework,
called Armada, allows application and data-set providers to flexibly compose graphs of processing modules that describe the
distribution, application interfaces, and processing required of the dataset before computation. Although the framework provides
a simple programming model for the application programmer and the data-set provider, the resulting graph may contain bottlenecks
that prevent efficient data access. In this paper, we present an algorithm used to restructure Armada graphs that distributes
computation and data flow to improve performance in the context of a wide-area computational grid.
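The restructuring idea above can be illustrated with a toy sketch: modules that shrink their input (filters, subsamplers) are pushed toward the data servers so less data crosses the wide-area link. The module names and the `reduces_data` heuristic are illustrative assumptions, not Armada's actual API.

```python
# Hypothetical sketch of restructuring a data-flow graph: data-reducing
# modules are moved server-side to cut wide-area traffic. This assumes
# the modules commute; Armada's real algorithm checks such conditions.

class Module:
    def __init__(self, name, reduces_data):
        self.name = name
        self.reduces_data = reduces_data  # True if output <= input size

def restructure(pipeline):
    """Reorder commuting modules so data-reducing ones run closest to
    the data server. `pipeline` is ordered from server to client; a
    stable sort keeps the relative order of non-reducing modules."""
    return sorted(pipeline, key=lambda m: not m.reduces_data)

pipeline = [Module("decrypt", False),
            Module("subsample", True),   # filter: cuts data volume
            Module("reformat", False)]
optimized = restructure(pipeline)
print([m.name for m in optimized])  # ['subsample', 'decrypt', 'reformat']
```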
This work was supported by Sandia National Laboratories under DOE contract DOE-AV6184.
Ron A. Oldfield is a senior member of the technical staff at Sandia National Laboratories in Albuquerque, NM. He received the B.Sc. in computer
science from the University of New Mexico in 1993. From 1993 to 1997, he worked in the computational sciences department of
Sandia National Laboratories, where he specialized in seismic research and parallel I/O. He was the primary developer for
the GONII-SSD (Gas and Oil National Information Infrastructure–Synthetic Seismic Dataset) project and a co-developer for the
R&D 100 award winning project “Salvo”, a project to develop a 3D finite-difference prestack-depth migration algorithm for
massively parallel architectures. From 1997 to 2003 he attended graduate school at Dartmouth College and received his Ph.D.
in June, 2003. In September of 2003, he returned to Sandia to work in the Scalable Computing Systems department. His research
interests include parallel and distributed computing, parallel I/O, and mobile computing.
David Kotz is a Professor of Computer Science at Dartmouth College in Hanover NH. After receiving his A.B. in Computer Science and Physics
from Dartmouth in 1986, he completed his Ph.D. in Computer Science at Duke University in 1991. He returned to Dartmouth to
join the faculty in 1991, where he is now Professor of Computer Science, Director of the Center for Mobile Computing, and
Executive Director of the Institute for Security Technology Studies. His research interests include context-aware mobile computing,
pervasive computing, wireless networks, and intrusion detection. He is a member of the ACM, IEEE Computer Society, and USENIX
associations, and of Computer Professionals for Social Responsibility. For more information see http://www.cs.dartmouth.edu/dfk/.
9.
In this paper, we present a new task scheduling algorithm, called Contention-Aware Scheduling (CAS) algorithm, with the objective
of delivering good quality of schedules in low running-time by considering contention on links of arbitrarily-connected, heterogeneous
processors. The CAS algorithm schedules tasks on processors and messages on links by considering the earliest finish time
attribute under virtual cut-through (VCT) or store-and-forward (SAF) switching. Three variants of the CAS algorithm are
presented in this paper, which differ in how they order the messages from immediate predecessor tasks. As part of the experimental
study, the performance of the CAS algorithm is compared with two well-known APN (arbitrary processor network) scheduling algorithms.
Experiments on synthetic benchmarks and on task graphs of well-known problems clearly show that the CAS algorithm outperforms
the related work with respect to both performance (normalized schedule length) and the cost of generating output
schedules (running time).
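The earliest-finish-time attribute under link contention can be sketched as follows. This is an illustrative single-parent special case, not the authors' code: a message must wait until both its parent task has finished and the link into the candidate processor is free.

```python
# List-scheduling sketch: pick the processor that minimizes the task's
# finish time, where the data-arrival estimate accounts for a busy link.

def earliest_finish(task_cost, msg_size, procs, link_free, proc_free, parent_done):
    """Return (best_proc, finish_time) for one task.

    task_cost[p] : execution time of the task on processor p
    msg_size     : transfer time of the message from the parent task
    link_free[p] : time the link into p becomes idle (contention)
    proc_free[p] : time processor p becomes idle
    parent_done  : finish time of the parent task
    """
    best = None
    for p in procs:
        # the message starts only when the parent is done AND the link is free
        arrival = max(parent_done, link_free[p]) + msg_size
        start = max(arrival, proc_free[p])
        finish = start + task_cost[p]
        if best is None or finish < best[1]:
            best = (p, finish)
    return best

best = earliest_finish(task_cost={0: 5, 1: 3}, msg_size=4,
                       procs=[0, 1], link_free={0: 0, 1: 6},
                       proc_free={0: 0, 1: 0}, parent_done=2)
print(best)  # (0, 11): contention on the link makes the slower processor win
```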
Ali Fuat Alkaya received the B.Sc. degree in mathematics from Koc University, Istanbul, Turkey in 1998, and the M.Sc. degree in computer
engineering from Marmara University, Istanbul, Turkey in 2002. He is currently a Ph.D. student in the engineering management department
at the same university. His research interests include task scheduling and analysis of algorithms.
Haluk Rahmi Topcuoglu received the B.Sc. and M.Sc. degrees in computer engineering from Bogazici University, Istanbul, Turkey, in 1991 and 1993,
respectively. He received the Ph.D. degree in computer science from Syracuse University in 1999. He has been on the faculty
at Marmara University, Istanbul, Turkey since Fall 1999, where he is currently an Associate Professor in the computer engineering
department. His main research interests are task scheduling and mapping in parallel and distributed systems; parallel processing;
evolutionary algorithms and their applicability for stationary and dynamic environments. He is a member of the ACM, the IEEE,
and the IEEE Computer Society.
e-mail: haluk@eng.marmara.edu.tr
e-mail: falkaya@eng.marmara.edu.tr
10.
Chun-Hsi Huang Sanguthevar Rajasekaran Laurence Tianruo Yang Xin He 《Cluster computing》2006,9(3):345-353
This paper presents a general methodology for the communication-efficient parallelization of graph algorithms using the divide-and-conquer
approach and shows that this class of problems can be solved in cluster environments with good communication efficiency. Specifically,
the first practical parallel algorithm, based on a general coarse-grained model, for finding Hamiltonian paths in tournaments is presented.
On any such parallel machine, this algorithm uses only 3 log p + 1 communication rounds, where p is the number of processors; this
count is independent of the tournament size, and the algorithm can reuse the existing linear-time algorithm in the sequential setting.
For theoretical completeness, the algorithm is revised for fine-grained models, in which the ratio of computation and communication
throughputs is low or the local memory size of each individual processor is extremely limited, solving the problem with O(log p)
communication rounds, while the hidden constant grows with the scalability factor 1/∊. Experiments have been carried out
on a Linux cluster of 32 Sun Ultra5 computers and an SGI Origin 2000 with 32 R10000 processors. The algorithm performance
on the Linux Cluster reaches 75% of the performance on the SGI Origin 2000 when the tournament size is about one million.
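As background for the sequential subroutine the parallel algorithm reuses, the classical insertion argument shows that every tournament has a Hamiltonian path. The quadratic sketch below is for illustration only; it is not the linear-time routine or the parallel algorithm from the paper, and `beats` is an assumed edge-orientation oracle.

```python
# Every tournament (complete directed graph) has a Hamiltonian path:
# insert each vertex at the first position where it beats the next
# vertex in the path; if it loses to everyone, append it at the end.

def hamiltonian_path(vertices, beats):
    path = []
    for v in vertices:
        for i, u in enumerate(path):
            if beats(v, u):          # v beats path[i]; all earlier beat v
                path.insert(i, v)
                break
        else:
            path.append(v)           # everyone in the path beats v
    return path

# A 4-player tournament: lower numbers beat higher ones, except that
# 3 beats 0 (an upset).
def beats(u, v):
    if {u, v} == {0, 3}:
        return u == 3
    return u < v

p = hamiltonian_path([0, 1, 2, 3], beats)
# every consecutive pair respects the edge orientation
assert all(beats(p[i], p[i + 1]) for i in range(len(p) - 1))
print(p)  # [3, 0, 1, 2]
```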
Computational resources and technical support are provided by the Center for Computational Research (CCR) at the State University
of New York at Buffalo.
Chun-Hsi Huang received his Ph.D. degree in Computer Science from the State University of New York at Buffalo in 2001. He is currently
an Assistant Professor of Computer Science and Engineering at the University of Connecticut. His interests include High Performance
Parallel Computing, Cluster and Grid Computing, Biomedical and Health Informatics, Algorithm Design and Analysis, Experimental
Algorithms and Computational Biology.
Sanguthevar Rajasekaran received his Ph.D. degree in Computer Science from Harvard University in 1988. Currently he is the UTC Chair Professor of
Computer Science and Engineering at the University of Connecticut and the Director of Booth Engineering Center for Advanced
Technologies (BECAT). His research interests include Parallel Algorithms, Bioinformatics, Data Mining, Randomized Computing,
Computer Simulations, and Combinatorial Optimization.
Laurence Tianruo Yang received his Ph.D. degree in Computer Science from Oxford University. He is currently a professor of Computer Science
at St. Francis Xavier University in Canada. His research interests include high-performance computing, embedded systems,
computer architecture, and high-speed networking.
Xin He received his Ph.D. degree in Computer Science from the Ohio State University in 1987. He is currently Professor of Computer
Science and Engineering at the State University of New York at Buffalo. His research interests include Algorithms, Data Structures,
Combinatorics and Computational Geometry.
11.
A flexible multi-dimensional QoS performance measure framework for distributed heterogeneous systems
Jong-Kook Kim Debra A. Hensgen Taylor Kidd Howard Jay Siegel David St. John Cynthia Irvine Tim Levin N. Wayne Porter Viktor K. Prasanna Richard F. Freund 《Cluster computing》2006,9(3):281-296
When users’ tasks in a distributed heterogeneous computing environment (e.g., cluster of heterogeneous computers) are allocated
resources, the total demand placed on some system resources by the tasks, for a given interval of time, may exceed the availability
of those resources. In such a case, some tasks may receive degraded service or be dropped from the system. One part of a measure
to quantify the success of a resource management system (RMS) in such a distributed environment is the collective value of
the tasks completed during an interval of time, as perceived by the user, application, or policy maker. The Flexible Integrated
System Capability (FISC) measure presented here is a measure for quantifying this collective value. The FISC measure is a
flexible multi-dimensional measure such that any task attribute can be inserted and may include priorities, versions of a
task or data, deadlines, situational mode, security, application- and domain-specific QoS, and task dependencies. For an environment
where it is important to investigate how well data communication requests are satisfied, the data communication requests satisfied
can be the basis of the FISC measure instead of tasks completed. The motivation behind the FISC measure is to determine the
performance of resource management schemes when tasks have multiple attributes that need to be satisfied. The goal of this
measure is to compare the results of different resource management heuristics that are trying to achieve the same performance
objective but with different approaches.
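The idea of a flexible multi-dimensional value measure can be sketched as follows: each completed task contributes a value combined from several attributes. The attribute names and the multiplicative combination below are illustrative assumptions, not the exact FISC formula from the paper.

```python
# Toy collective-value measure: sum, over completed tasks, a per-task
# value combining a priority weight with a deadline factor. Real FISC
# can also fold in versions, situational mode, security, QoS, etc.

def task_value(task):
    priority_weight = {"high": 4, "medium": 2, "low": 1}[task["priority"]]
    deadline_factor = 1.0 if task["finished"] <= task["deadline"] else 0.5
    return priority_weight * deadline_factor

def collective_value(tasks):
    return sum(task_value(t) for t in tasks if t["completed"])

tasks = [
    {"completed": True,  "priority": "high", "finished": 5,    "deadline": 10},
    {"completed": True,  "priority": "low",  "finished": 12,   "deadline": 10},
    {"completed": False, "priority": "medium", "finished": None, "deadline": 8},
]
print(collective_value(tasks))  # 4*1.0 + 1*0.5 = 4.5
```

Two resource management heuristics can then be compared by the collective value each accumulates over the same interval, which is the comparison the measure is designed to support.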
This research was supported by the DARPA/ITO Quorum Program, by the DARPA/ISO BADD Program and the Office of Naval Research
under ONR grant number N00014-97-1-0804, by the DARPA/ITO AICE program under contract numbers DABT63-99-C-0010 and DABT63-99-C-0012,
and by the Colorado State University George T. Abell Endowment. Intel and Microsoft donated some of the equipment used in
this research.
Jong-Kook Kim is pursuing a Ph.D. degree from the School of Electrical and Computer Engineering at Purdue University (expected in August
2004). Jong-Kook received his M.S. degree in electrical and computer engineering from Purdue University in May 2000. He received
his B.S. degree in electronic engineering from Korea University, Seoul, Korea in 1998. He has presented his work at several
international conferences and has been a reviewer for numerous conferences and journals. His research interests include heterogeneous
distributed computing, computer architecture, performance measure, resource management, evolutionary heuristics, and power-aware
computing. He is a student member of the IEEE, IEEE Computer Society, and ACM.
Debra Hensgen is a member of the Research and Evaluation Team at OpenTV in Mountain View, California. OpenTV produces middleware for set-top
boxes in support of interactive television. She received her Ph.D. in the area of Distributed Operating Systems from the University
of Kentucky. Prior to moving to private industry, as an Associate Professor in the systems area, she worked with students
and colleagues to design and develop tools and systems for resource management, network re-routing algorithms and systems
that preserve quality of service guarantees, and visualization tools for performance debugging of parallel and distributed
systems. She has published numerous papers concerning her contributions to the Concurra toolkit for automatically generating
safe, efficient concurrent code, the Graze parallel processing performance debugger, the SAAM path information base, and the
SmartNet and MSHN Resource Management Systems.
Taylor Kidd is currently a Software Architect for Vidiom Systems in Portland Oregon. His current work involves the writing of multi-company
industrial specifications and the architecting of software systems for the digital cable television industry. He has been
involved in the establishment of international specifications for digital interactive television in both Europe and in the
US. Prior to his current position, Dr. Kidd has been a researcher for the US Navy as well as an Associate Professor at the
Naval Postgraduate School. Dr Kidd received his Ph.D. in Electrical Engineering in 1991 from the University of California,
San Diego.
H. J. Siegel was appointed the George T. Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering at Colorado
State University (CSU) in August 2001, where he is also a Professor of Computer Science. In December 2002, he became the first
Director of the CSU Information Science and Technology Center (ISTeC). ISTeC is a university-wide organization for promoting,
facilitating, and enhancing CSU’s research, education, and outreach activities pertaining to the design and innovative application
of computer, communication, and information systems. From 1976 to 2001, he was a professor at Purdue University. He received
two BS degrees from MIT, and the MA, MSE, and PhD degrees from Princeton University. His research interests include parallel
and distributed computing, heterogeneous computing, robust computing systems, parallel algorithms, parallel machine interconnection
networks, and reconfigurable parallel computer systems. He has co-authored over 300 published papers on parallel and distributed
computing and communication, is an IEEE Fellow, is an ACM Fellow, was a Coeditor-in-Chief of the Journal of Parallel and Distributed
Computing, and was on the Editorial Boards of both the IEEE Transactions on Parallel and Distributed Systems and the IEEE
Transactions on Computers. He was Program Chair/Co-Chair of three major international conferences, General Chair/Co-Chair
of four international conferences, and Chair/Co-Chair of five workshops. He has been an international keynote speaker and
tutorial lecturer, and has consulted for industry and government.
David St. John is Chief Information Officer for WeatherFlow, Inc., a weather services company specializing in coastal weather observations
and forecasts. He received a master’s degree in Engineering from the University of California, Irvine. He spent several years
as the head of staff on the Management System for Heterogeneous Networks project in the Computer Science Department of the
Naval Postgraduate School. His current relationship with cluster computing is as a user of the Regional Atmospheric Modeling
System (RAMS), a numerical weather model developed at Colorado State University. WeatherFlow runs RAMS operationally on a
Linux-based cluster.
Cynthia Irvine is a Professor of Computer Science at the Naval Postgraduate School in Monterey, California. She received her Ph.D. from
Case Western Reserve University and her B.A. in Physics from Rice University. She joined the faculty of the Naval Postgraduate
School in 1994. Previously she worked in industry on the development of high assurance secure systems. In 2001, Dr. Irvine
received the Naval Information Assurance Award. Dr. Irvine is the Director of the Center for Information Systems Security
Studies and Research at the Naval Postgraduate School. She has served on special panels for NSF, DARPA, and OSD. In the area
of computer security education Dr. Irvine has most recently served as the general chair of the Third World Conference on Information
Security Education and the Fifth Workshop on Education in Computer Security. She co-chaired the NSF workshop on Cyber-security
Workforce Needs Assessment and Educational Innovation and was a participant in the Computing Research Association/NSF sponsored
Grand Challenges in Information Assurance meeting. She is a member of the editorial board of the Journal of Information Warfare
and has served as a reviewer and/or program committee member of a variety of security related conferences. She has written
over 100 papers and articles and has supervised the work of over 80 students. Professor Irvine is a member of the ACM, the
AAS, a life member of the ASP, and a Senior Member of the IEEE.
Timothy E. Levin is a Research Associate Professor at the Naval Postgraduate School. He has spent over 18 years working in the design, development,
evaluation, and verification of secure computer systems, including operating systems, databases and networks. His current
research interests include high assurance system design and analysis, development of models and methods for the dynamic selection
of QoS security attributes, and the application of formal methods to the development of secure computer systems.
Viktor K. Prasanna received his BS in Electronics Engineering from the Bangalore University and his MS from the School of Automation, Indian
Institute of Science. He obtained his Ph.D. in Computer Science from the Pennsylvania State University in 1983. Currently,
he is a Professor in the Department of Electrical Engineering as well as in the Department of Computer Science at the University
of Southern California, Los Angeles. He is also an associate member of the Center for Applied Mathematical Sciences (CAMS)
at USC. He served as the Division Director for the Computer Engineering Division during 1994–98. His research interests include
parallel and distributed systems, embedded systems, configurable architectures and high performance computing. Dr. Prasanna
has published extensively and consulted for industries in the above areas. He has served on the organizing committees of several
international meetings in VLSI computations, parallel computation, and high performance computing. He is the Steering Co-chair
of the International Parallel and Distributed Processing Symposium [merged IEEE International Parallel Processing Symposium
(IPPS) and the Symposium on Parallel and Distributed Processing (SPDP)] and is the Steering Chair of the International Conference
on High Performance Computing (HiPC). He serves on the editorial boards of the Journal of Parallel and Distributed Computing
and the Proceedings of the IEEE. He is the Editor-in-Chief of the IEEE Transactions on Computers. He was the founding Chair
of the IEEE Computer Society Technical Committee on Parallel Processing. He is a Fellow of the IEEE.
Richard F. Freund is the originator of GridIQ’s network scheduling concepts that arose from mathematical and computing approaches he developed
for the Department of Defense in the early 1980’s. Dr. Freund has over twenty-five years experience in computational mathematics,
algorithm design, high performance computing, distributed computing, network planning, and heterogeneous scheduling. Since
1989, Dr. Freund has published over 45 journal articles in these fields. He has also been an editor of special editions of
IEEE Computer and the Journal of Parallel and Distributed Computing. In addition, he is a founder of the Heterogeneous Computing
Workshop, held annually in conjunction with the International Parallel Processing Symposium. Dr. Freund is the recipient of
many awards, which includes the prestigious Department of Defense Meritorious Civilian Service Award in 1984 and the Lauritsen-Bennet
Award from the Space and Naval Warfare Systems Command in San Diego, California.
12.
Distributed Shared Arrays (DSA) is a distributed virtual machine that supports Java-compliant multithreaded programming with
mobility support for system reconfiguration in distributed environments. The DSA programming model allows programmers to explicitly
control data distribution so as to take advantage of the deep memory hierarchy, while relieving them from error-prone orchestration
of communication and synchronization at run-time. The DSA system is developed as an integral component of mobility support
middleware for Grid computing so that DSA-based virtual machines can be reconfigured to adapt to the varying resource supplies
or demand over the course of a computation. The DSA runtime system also features a directory-based cache coherence protocol
in support of replication of user-defined sharing granularity and a communication proxy mechanism for reducing network contention.
System reconfiguration is achieved by a DSA service migration mechanism, which moves the DSA service and residing computational
agents between physical servers for load balancing and fault resilience. We demonstrate the programmability of the model in
a number of parallel applications and evaluate its performance by application benchmark programs, in particular, the impact
of the coherence granularity and service migration overhead.
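A minimal sketch of the directory-based coherence idea mentioned above: the directory tracks which nodes hold a replica of each shared block, and a write invalidates the other replicas. The class and method names are illustrative, not DSA's implementation, and the sharing granularity here is simply "one block".

```python
# Directory-based invalidation protocol, reduced to its core bookkeeping.

class Directory:
    def __init__(self):
        self.sharers = {}  # block id -> set of node ids holding a replica

    def read(self, node, block):
        """Record that `node` now caches a replica of `block`."""
        self.sharers.setdefault(block, set()).add(node)

    def write(self, node, block):
        """Grant `node` exclusive access; return replicas to invalidate."""
        invalidated = self.sharers.get(block, set()) - {node}
        self.sharers[block] = {node}
        return invalidated

d = Directory()
d.read("A", "blk0")
d.read("B", "blk0")
print(d.write("C", "blk0"))  # replicas at A and B must be invalidated
```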
Song Fu received the BS degree in computer science from Nanjing University of Aeronautics and Astronautics, China, in 1999, and
the MS degree in computer science from Nanjing University, China, in 2002. He is currently a PhD candidate in computer engineering
at Wayne State University. His research interests include resource management, security, and mobility issues in wide-area
distributed systems.
Cheng-Zhong Xu received the BS and MS degrees in computer science from Nanjing University in 1986 and 1989, respectively, and the Ph.D.
degree in computer science from the University of Hong Kong in 1993. He is an Associate Professor in the Department of Electrical
and Computer Engineering at Wayne State University. His research interests are in distributed and parallel
systems, particularly in resource management for high-performance cluster and grid computing and scalable and secure Internet
services. He has published more than 100 peer-reviewed articles in journals and conference proceedings in these areas. He is
the author of the book Scalable and Secure Internet Services and Architecture (CRC Press, 2005) and a co-author of the book Load Balancing in Parallel Computers: Theory and Practice (Kluwer Academic, 1997). He serves on the editorial boards of J. of Parallel and Distributed Computing, J. of Parallel, Emergent,
and Distributed Systems, J. of High Performance Computing and Networking, and J. of Computers and Applications. He was the
founding program co-chair of International Workshop on Security in Systems and Networks (SSN), the general co-chair of the
IFIP 2006 International Conference on Embedded and Ubiquitous Computing (EUC06), and a member of the program committees of
numerous conferences. His research was supported in part by the US National Science Foundation, NASA, and Cray Research. He
is a recipient of the Faculty Research Award of Wayne State University in 2000, the President’s Award for Excellence in Teaching
in 2002, and the Career Development Chair Award in 2003. He is a senior member of the IEEE.
Brian A. Wims was born in Washington, DC in 1967. He received the Bachelor of Science in Electrical Engineering from GMI-EMI (now called
Kettering University) in 1990; and Master of Science in Computer Engineering from Wayne State University in 1999. His research
interests are primarily in the fields of parallel and distributed systems with applications in Mobile Agent technologies.
From 1990–2001 he worked in various Engineering positions in General Motors, including Electrical Analysis, Software Design,
and Test and Development. In 2001, he joined the General Motors IS&S department where he is currently a Project Manager in
the Computer Aided Test group. Responsibilities include managing the development of test automation applications in the Electrical,
EMC, and Safety Labs.
Ramzi Basharahil was born in Aden, Yemen in 1972. He received the Bachelor of Science degree in Electrical Engineering from the United Arab
Emirates University. He graduated at the top of his engineering class of 1997. He obtained a Master of Science degree in
2001 from Wayne State University in the Department of Electrical and Computer Engineering. His main research interests are
primarily in the fields of parallel and distributed systems with applications to distributed processing across cluster of
servers.
From 1997 to 1998, he worked as a Teaching Assistant in the Department of Electrical Engineering at the UAE University. In
2000, he joined Internet Security Systems as a security software engineer. He later joined NetIQ Corporation in 2002, where
he has worked since then. He leads the security event trending and event management software development, where he is involved
in designing and implementing event/log management products.
13.
Bluetooth scatternets may be operated in a loosely coupled mode, called Walk-In Bridge Scheduling, in which the master polls all of its slaves and bridges using E-limited service. Using the theory of queues with vacations, we derive the stability criteria for packet queues in piconet masters, slaves, and bridges. We show that the stability of the slave queues is more critical under high traffic locality, whereas the stability of the bridge queues becomes progressively more important as the non-local traffic increases. Our analysis shows that the limited exchange mode, in which the bridge residence time in a piconet is limited, performs better and has a wider stability region than the complete exchange mode, in which the bridge stays in the piconet until all queued packets are exchanged. Simulations show that this scheduling approach offers good performance and excellent scalability, while maintaining scatternet stability.
Vojislav B. Mišić received his PhD in Computer Science from the University of Belgrade, Yugoslavia, in 1993. He is currently Assistant Professor of Computer Science at the University of Manitoba in Winnipeg, Manitoba, Canada. Previously, he held posts at the University of Belgrade, Yugoslavia, and the Hong Kong University of Science and Technology. His research interests include systems and software engineering and modeling and performance evaluation of wireless networks. He is a member of ACM, AIS, and IEEE.
Jelena Mišić received her PhD degree in Computer Engineering from the University of Belgrade, Yugoslavia, in 1993. She is currently Associate Professor of Computer Science at the University of Manitoba in Winnipeg, Manitoba, Canada. Previously, she was with the Hong Kong University of Science and Technology. Her current research interests include wireless networks and mobile computing. She is a member of the IEEE Computer Society.
Ka Lok Chan received his MPhil degree in performance of Bluetooth networks at the Hong Kong University of Science and Technology.
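The flavor of the stability criteria above can be shown with a deliberately simplified check: a queue served by a vacationing server is stable only if the offered load, plus the fraction of time the server is away (e.g., a bridge residing in another piconet), stays below one. This toy criterion is an illustrative assumption; the paper derives the exact conditions for E-limited service.

```python
# Toy stability check for a queue whose server takes vacations.
# offered load = arrival_rate * mean_service; the vacation share is the
# long-run fraction of time the server is absent from this queue.

def stable(arrival_rate, mean_service, vacation_share):
    return arrival_rate * mean_service + vacation_share < 1.0

# A bridge that spends 40% of its time in the other piconet can carry
# less local load than a pure slave with no vacations:
print(stable(arrival_rate=0.5, mean_service=1.0, vacation_share=0.4))  # True
print(stable(arrival_rate=0.7, mean_service=1.0, vacation_share=0.4))  # False
print(stable(arrival_rate=0.7, mean_service=1.0, vacation_share=0.0))  # True
```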
14.
Victoria Ungureanu Benjamin Melamed Michael Katehakis Phillip G. Bradford 《Cluster computing》2006,9(1):57-65
This paper proposes a new scheduling policy for cluster-based servers called DAS (Deferred Assignment Scheduling). The main
idea in DAS is to defer scheduling as much as possible in order to make better use of the accumulated information on job sizes.
In broad outline, DAS operates as follows: (1) incoming jobs are held by the dispatcher in a buffer; (2) the dispatcher monitors
the number of jobs being processed by each server; (3) when the number of jobs at a server queue drops below a prescribed
threshold, the dispatcher sends to it the shortest job in its buffer.
To gauge the efficacy of DAS, the paper presents simulation studies, using various data traces. The studies collected response
times and slowdowns for two cluster configurations under multi-threaded and multi-process back-end server architectures. The
experimental results show that in both architectures, DAS outperforms the Round-Robin policy in all traffic regimes, and the
JSQ (Join Shortest Queue) policy in medium and heavy traffic regimes.
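The three-step dispatch rule above can be sketched directly. This is a simplified single-dispatcher illustration, not the paper's simulator: jobs are held in a buffer ordered by size, and a job is released only when a server reports a queue length below the threshold.

```python
# DAS dispatch rule: defer assignment, then send the shortest buffered
# job to a server whose queue has dropped below the threshold.
import heapq

class Dispatcher:
    def __init__(self, threshold):
        self.buffer = []            # min-heap keyed on job size
        self.threshold = threshold

    def arrive(self, job_id, size):
        heapq.heappush(self.buffer, (size, job_id))   # step (1): hold the job

    def server_update(self, queue_len):
        """Step (2)/(3): on a server's queue-length report, return the
        shortest buffered job to send, or None if the server is still
        busy enough or the buffer is empty."""
        if queue_len < self.threshold and self.buffer:
            return heapq.heappop(self.buffer)[1]
        return None

d = Dispatcher(threshold=2)
d.arrive("long", 90)
d.arrive("short", 10)
print(d.server_update(queue_len=1))  # 'short' is dispatched first
print(d.server_update(queue_len=3))  # None: server already busy
```

Deferring the decision is what lets the dispatcher exploit accumulated size information, which Round-Robin and JSQ ignore.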
Victoria Ungureanu (ACM) is a visiting researcher at DIMACS. She has a Ph.D. in Computer Science from Rutgers University.
Benjamin Melamed is a Professor II at the Rutgers Business School (Newark and New Brunswick), Department of MSIS. Melamed received a B.Sc.
degree in Mathematics and Statistics from Tel Aviv University in 1972, and M.S. and Ph.D. degrees in Computer Science from
the University of Michigan in 1973 and 1976, respectively. He was named an AT&T Fellow in 1988 and an IEEE Fellow in 1994.
He became an IFIP WG7.3 member in 1997, and was elected to Beta Gamma Sigma in 1998.
Michael N. Katehakis is Professor of Management Science in the Department of Management Science and Information Systems, at Rutgers. He studied
at the University of Athens, Diploma (1974) in Mathematics, at the University of South Florida, M.A. (1978) in Statistics,
and at Columbia University, Ph.D. (1980) in Operations Research. He won the 1992 Wolfowitz Prize (with Z. Govindarajulu).
Phillip G. Bradford (ACM) is on the faculty in Computer Science Department at the University of Alabama. He earned his Ph.D. at Indiana University
in Bloomington, his MS at The University of Kansas and his BS at Rutgers University.
15.
This paper presents a data management solution which allows fast Virtual Machine (VM) instantiation and efficient run-time
execution to support VMs as execution environments in Grid computing. It is based on novel distributed file system virtualization
techniques and is unique in that: (1) it provides on-demand cross-domain access to VM state for unmodified VM monitors; (2)
it enables private file system channels for VM instantiation by secure tunneling and session-key based authentication; (3)
it supports user-level and write-back disk caches, per-application caching policies and middleware-driven consistency models;
and (4) it leverages application-specific meta-data associated with files to expedite data transfers. The paper reports on
its performance in wide-area setups using VMware-based VMs. Results show that the solution delivers performance over 30% better
than native NFS and with warm caches it can bring the application-perceived overheads below 10% compared to a local-disk setup.
The solution also allows a VM with 1.6 GB virtual disk and 320 MB virtual memory to be cloned within 160 seconds for the first
clone and within 25 seconds for subsequent clones.
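The user-level write-back cache mentioned above can be sketched as follows: reads are served locally after the first remote fetch, and writes reach the remote server only on an explicit flush. The class and a dict standing in for the remote server are hypothetical illustrations, not the actual middleware.

```python
# Write-back caching sketch: dirty blocks are absorbed locally and
# propagated to the remote file server only when flushed.

class WriteBackCache:
    def __init__(self, remote):
        self.remote = remote        # dict standing in for the remote server
        self.cache = {}
        self.dirty = set()

    def read(self, block):
        if block not in self.cache:         # cold miss: wide-area fetch
            self.cache[block] = self.remote[block]
        return self.cache[block]            # warm hit: local only

    def write(self, block, data):
        self.cache[block] = data            # absorbed locally
        self.dirty.add(block)

    def flush(self):
        for block in self.dirty:            # write-back on demand
            self.remote[block] = self.cache[block]
        self.dirty.clear()

remote = {"blk": b"old"}
c = WriteBackCache(remote)
c.write("blk", b"new")
print(remote["blk"])  # b'old': the write has not been propagated yet
c.flush()
print(remote["blk"])  # b'new'
```

Keeping writes local until a middleware-driven flush is what lets warm-cache runs approach local-disk performance.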
Ming Zhao is a PhD candidate in the Department of Electrical and Computer Engineering and a member of the Advanced Computing and Information
Systems Laboratory at the University of Florida. He received BE and ME degrees from Tsinghua University. His research
interests are in the areas of computer architecture, operating systems and distributed computing.
Jian Zhang is a PhD student in the Department of Electrical and Computer Engineering at the University of Florida and a member of the Advanced
Computing and Information Systems Laboratory (ACIS). Her research interests are in virtual machines and Grid computing. She
is a member of the IEEE and the ACM.
Renato J. Figueiredo received the B.S. and M.S. degrees in Electrical Engineering from the Universidade de Campinas in 1994 and 1995, respectively,
and the Ph.D. degree in Electrical and Computer Engineering from Purdue University in 2001. From 2001 until 2002 he was on
the faculty of the School of Electrical and Computer Engineering of Northwestern University at Evanston, Illinois. In 2002
he joined the Department of Electrical and Computer Engineering of the University of Florida as an Assistant Professor. His
research interests are in the areas of computer architecture, operating systems, and distributed systems.
16.
Nodes in ad hoc networks generally transmit data at regular intervals over long periods of time. Recently, ad hoc network nodes have been built that run on little power and have very limited memory. Authentication is a significant challenge in ad hoc networks, even without considering size and power constraints. Expounding on idealized hashing, this paper examines lower bounds for ad hoc broadcast authentication for TESLA-like protocols. In particular, this paper explores idealized hashing for generating preimages of hash chains. Building on Bellare and Rogaway's classical definition, a similar definition for families of hash chains is given. Using these idealized families of hash chain functions, this paper gives a time-space product Ω(k² log⁴ n) bit-operation lower bound for optimal preimage hash chain generation for constant k. This bound holds where n is the total length of the hash chain and the hash function family is k-wise independent. These last results follow as corollaries to a lower bound of Coppersmith and Jakobsson.
A preliminary version of this paper appeared at MWN 2003: Workshop on Mobile and Wireless Networks (a workshop of the 23rd ICDCS), pp. 743–748, Ivan Stojmenovic and Jingyuan Zhang, Editors, IEEE Press.
Phillip G. Bradford (ACM) is on the faculty in Computer Science at the University of Alabama. He was visiting faculty at Rutgers Business School and was a postdoc at the Max-Planck-Institut für Informatik. He earned his Ph.D. at Indiana University in Bloomington, his MS at The University of Kansas, and his BS at Rutgers University. He also has more than 4 years of experience in industry.
Olga V. Gavrylyako (ACM) is a Ph.D. student in the Computer Science Department of the University of Alabama. Her research interests include theoretical aspects of security for constrained devices, in particular security for ad hoc networks. Olga Gavrylyako received her Masters and Ph.D. degrees in Applied Mathematics from Kharkov State University, Ukraine.
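For background, the hash-chain mechanism these bounds concern works as follows in a TESLA-like protocol: the sender commits to the end of a chain of hash applications and later reveals preimages in reverse order, each verifiable with a single hash. The sketch below stores the whole chain in memory, which is exactly the time-space trade-off the lower bounds address; schemes with fewer stored values must recompute preimages.

```python
# Hash chain: chain[i+1] = h(chain[i]); chain[n] is the public commitment.
# Revealing chain[n-1], chain[n-2], ... lets receivers verify each value
# against the previously accepted one with a single hash.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

chain = make_chain(b"secret-seed", 4)
commitment = chain[-1]

# Receiver side: verify each revealed preimage against the last accepted value.
assert h(chain[3]) == commitment
assert h(chain[2]) == chain[3]
print("chain verified")
```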
17.
Yi-Chang Zhuang Jyh-Biau Chang Tyng-Yeu Liang Ce-Kuen Shieh Laurence Tianruo Yang 《Cluster computing》2006,9(3):223-236
Recently, software distributed shared memory (DSM) systems have successfully provided an easy user interface to parallel applications
on distributed systems. To improve program performance, most DSM systems greedily utilize all available processors in a computer
network to execute user programs. However, using more processors does not necessarily guarantee better program performance:
the overhead of parallelizing a program grows with the number of processors used for its execution, and if the performance
gain from parallel execution cannot compensate for this overhead, increasing the number of execution processors results in
performance degradation and wasted resources. In this paper, we propose a mechanism that dynamically finds a suitable system
scale to optimize the performance of DSM applications according to run-time information. The experimental results show that
the proposed mechanism precisely predicts the processor count that yields the best performance, and then effectively optimizes
the performance of the test applications by adapting the system scale to the predicted result.
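The core idea of picking a system scale from predicted cost can be illustrated with a toy model; the cost function and its constants below are hypothetical stand-ins, not the paper's run-time-derived prediction:

```python
def predicted_time(p, work, per_proc_overhead):
    # Illustrative cost model: perfectly divisible computation (work / p)
    # plus an overhead term that grows with the number of DSM nodes
    # (page faults, coherence traffic). The paper instead derives its
    # prediction from information gathered at run time.
    return work / p + per_proc_overhead * (p - 1)

def best_scale(max_procs, work, per_proc_overhead):
    """Return the processor count minimizing the predicted execution time."""
    return min(range(1, max_procs + 1),
               key=lambda p: predicted_time(p, work, per_proc_overhead))

# With work = 100 and overhead 1 per extra node, t(p) = 100/p + (p - 1)
# is minimized at p = 10 (t = 19), not at the full 32 nodes (t ≈ 34.1).
print(best_scale(32, 100.0, 1.0))  # → 10
```

The point the abstract makes falls out directly: past the optimum, each extra processor adds more overhead than it removes computation, so "all available processors" is the wrong scale.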
Yi-Chang Zhuang received his B.S., M.S. and Ph.D. degrees in electrical engineering from National Cheng Kung University in 1995, 1997, and
2004. He is currently working as an engineer at the Industrial Technology Research Institute in Taiwan. His research interests
include object-based storage, file systems, distributed systems, and grid computing.
Jyh-Biau Chang is currently an assistant professor in the Information Management Department of Leader University in Taiwan. He received
his B.S., M.S. and Ph.D. degrees from the Electrical Engineering Department of National Cheng Kung University in 1994, 1996, and
2005. His research interests are focused on cluster and grid computing, parallel and distributed systems, and operating systems.
Tyng-Yeu Liang is currently an assistant professor in the Department of Electrical Engineering, National Kaohsiung University
of Applied Sciences in Taiwan. He received his B.S., M.S. and Ph.D. degrees from National Cheng Kung University in 1992, 1994,
and 2000. His research interests include cluster and grid computing, image processing, and multimedia.
Ce-Kuen Shieh is currently a professor in the Electrical Engineering Department of National Cheng Kung University in Taiwan,
where he is also the director of the computation center. He received his Ph.D. degree from the Department of Electrical
Engineering of National Cheng Kung University in 1988, and was the chairman of the Electrical Engineering Department
from 2002 to 2005. His research interests are focused on computer networks and parallel and distributed systems.
Laurence T. Yang is a professor at the Department of Computer Science, St. Francis Xavier University, Canada. His research includes high performance
computing and networking, embedded systems, ubiquitous/pervasive computing and intelligence, and autonomic and trusted computing.
18.
One of the principal characteristics of large scale wireless sensor networks is their distributed, multi-hop nature. Due to this characteristic, applications such as query propagation rely regularly on network-wide flooding for information dissemination. If the transmission radius is not set optimally, the flooded packet may be holding the transmission medium for longer periods than are necessary, reducing overall network throughput. We analyze the impact of the transmission radius on the average settling time: the time at which all nodes in the network finish transmitting the flooded packet. Our analytical model takes into account the behavior of the underlying contention-based MAC protocol, as well as edge effects and the size of the network. We show that for large wireless networks there exists an intermediate transmission radius which minimizes the settling time, corresponding to an optimal tradeoff between reception and contention times. We also explain how physical propagation models affect small wireless networks and why there is no intermediate optimal transmission radius observed in these cases. The mathematical analysis is supported and validated through extensive simulations. Marco Zuniga is currently a PhD student in the Department of Electrical Engineering at the University of Southern California. He received his Bachelor's degree in Electrical Engineering from the Pontificia Universidad Catolica del Peru in 1998, and his Master's degree in Electrical Engineering from the University of Southern California in 2002. His interests are in the area of Wireless Sensor Networks in general, and more specifically in studying the interaction amongst different layers to improve the performance of these networks.
He is a member of IEEE and the Phi Kappa Phi Honor Society. Bhaskar Krishnamachari is an Assistant Professor in the Department of Electrical Engineering at the University of Southern California (USC), where he also holds a joint appointment in the Department of Computer Science. He received his Bachelor's degree in Electrical Engineering with a four-year full-tuition scholarship from The Cooper Union for the Advancement of Science and Art in 1998. He received his Master's degree and his Ph.D. in Electrical Engineering from Cornell University in 1999 and 2002, under a four-year university graduate fellowship. Dr. Krishnamachari's previous research has included work on critical density thresholds in wireless networks, data centric routing in sensor networks, mobility management in cellular telephone systems, multicast flow control, heuristic global optimization, and constraint satisfaction. His current research is focused on the discovery of fundamental principles and the analysis and design of protocols for next generation wireless sensor networks. He is a member of IEEE, ACM and the Tau Beta Pi and Eta Kappa Nu Engineering Honor Societies.
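The reception/contention tradeoff behind the intermediate optimal radius in entry 18 can be sketched with a toy settling-time model; the functional form and constants below are illustrative, not the paper's analytical model:

```python
def settling_time(r, a=100.0, b=0.05):
    # Illustrative tradeoff: a larger radius r means fewer hops to cover
    # the network (reception term ~ a / r) but more neighbors contending
    # for the medium under the MAC protocol (contention term ~ b * r^2).
    return a / r + b * r * r

# Sweep candidate radii; the minimum falls at an intermediate value
# (here r^3 = a / (2b) = 1000, i.e. r = 10), not at either extreme.
radii = [i / 10 for i in range(10, 501)]   # 1.0 .. 50.0
best = min(radii, key=settling_time)
print(round(best, 1))  # → 10.0
```

With these toy constants the optimum sits well inside the sweep range, mirroring the paper's claim that neither a minimal nor a maximal transmission radius minimizes the settling time in large networks.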
19.
IEEE 802.11b Ad Hoc Networks: Performance Measurements
In this paper we investigate the performance of IEEE 802.11b ad hoc networks by means of an experimental study. An extensive literature, based on simulation studies, exists on the performance of IEEE 802.11 ad hoc networks; our analysis reveals several aspects that are usually neglected in those studies. First, since different transmission rates are used for control and data frames, different transmission ranges and carrier-sensing ranges may exist at the same time in the network. In addition, the transmission ranges are in practice much shorter than usually assumed in simulation analyses, are not constant but highly variable (even within the same session), and depend on several factors. Finally, the results presented in this paper indicate that, to correctly understand the behavior of an 802.11b network operating in ad hoc mode, several different ranges must be considered. In addition to the transmission range, the physical carrier sensing range is very important. The transmission range is highly dependent on the data rate and is up to 100 m, while the physical carrier sensing range is almost independent of the data rate and is approximately 200 m. Furthermore, even when stations are outside each other's physical carrier sensing range, they may still interfere if their distance is less than 350 m. Giuseppe Anastasi received the Laurea degree in Electronics Engineering and the Ph.D. degree in Computer Engineering, both from the University of Pisa, Italy, in 1990 and 1995, respectively. He is currently an associate professor of Computer Engineering at the Department of Information Engineering of the University of Pisa. His research interests include architectures and protocols for mobile computing, energy management, QoS in mobile networks, and ad hoc networks.
He was a co-editor of the book Advanced Lectures in Networking (LNCS 2497, Springer, 2002), and has published more than 50 papers, in both international journals and conference proceedings, in the area of computer networking. He has served on the TPC of several international conferences, including IFIP Networking 2002 and IEEE PerCom 2003. He is a member of the IEEE Computer Society. Eleonora Borgia received the Laurea degree in Computer Engineering from the University of Pisa, Italy, in 2002. She is currently working toward her Ph.D. degree at the IIT Institute of the Italian National Research Council (CNR). Her research interests are in the area of wireless and mobile networks, with particular attention to MAC protocols and routing algorithms for ad hoc networks. Marco Conti received the Laurea degree in Computer Science from the University of Pisa, Italy, in 1987. In 1987 he joined the Italian National Research Council (CNR), where he is currently a senior researcher at CNR-IIT. His research interests include Internet architecture and protocols, wireless networks and ad hoc networking, mobile computing, and QoS in packet switching networks. He co-authored the book Metropolitan Area Networks (Springer, London, 1997), and has published more than 100 research papers in journals and conference proceedings related to the design, modeling, and performance evaluation of computer-network architectures and protocols. He served as the technical program committee chair of the IFIP-TC6 conferences Networking 2002 and PWC 2003, and as technical program committee co-chair of ACM WoWMoM 2002. He is serving as technical program committee co-chair of the IEEE Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM 2005).
He served as guest editor for the Cluster Computing journal (special issue on Mobile Ad Hoc Networking), IEEE Transactions on Computers (special issue on Quality of Service issues in Internet Web Services), and the ACM/Kluwer Mobile Networks & Applications journal (special issue on Mobile Ad hoc Networks). He is a member of IFIP WGs 6.2, 6.3 and 6.8. Enrico Gregori received the Laurea in electronic engineering from the University of Pisa in 1980 and joined CNUCE, an institute of the Italian National Research Council (CNR), in 1981. He is currently a CNR research director. In 1986 he held a visiting position at the IBM research center in Zurich, working on network software engineering and on heterogeneous networking. He has contributed to several national and international projects on computer networking, has authored more than 100 papers in the area of computer networks published in international journals and conference proceedings, and is co-author of the book Metropolitan Area Networks (Springer, London, 1997). He was the General Chair of the IFIP TC6 conferences Networking 2002 and PWC 2003 (Personal Wireless Communications). He served as guest editor for the Networking 2002 special issues of the Performance Evaluation, Cluster Computing and ACM/Kluwer Wireless Networks journals. He is a member of the board of directors of the Create-Net association, an association with several universities and research centres that fosters research on networking at the European level. He is on the editorial boards of the Cluster Computing, Computer Networks and Wireless Networks journals. His current research interests include wireless access to the Internet, wireless LANs, quality of service in packet-switching networks, energy saving protocols, and the evolution of TCP/IP protocols.
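The three ranges reported in the measurements of entry 19 suggest a simple classification of station pairs by distance. The thresholds below are the approximate figures from the abstract; the function itself is only an illustration:

```python
def link_state(distance_m, tx_range=100.0, cs_range=200.0, interf_range=350.0):
    """Classify a pair of 802.11b stations by distance, using the
    approximate measured ranges: transmission up to ~100 m (data-rate
    dependent), physical carrier sensing ~200 m (data-rate independent),
    and observable interference up to ~350 m."""
    if distance_m <= tx_range:
        return "in transmission range"
    if distance_m <= cs_range:
        return "carrier-sensed (defers, cannot decode)"
    if distance_m <= interf_range:
        return "hidden interferer (may collide)"
    return "independent"

for d in (50, 150, 300, 400):
    print(d, link_state(d))
```

The middle two bands are the ones simulation studies tend to conflate: stations that defer without decoding, and stations far enough apart to transmit simultaneously yet still corrupt each other's frames.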
20.
Baker Abdalhaq Ana Cortés Tomàs Margalef Germán Bianchini Emilio Luque 《Cluster computing》2006,9(3):329-343
One of the challenges still open to wildland fire simulators is the capacity to work under real-time constraints with the
aim of providing fire spread predictions that could be useful in fire mitigation interventions. We propose going one step
beyond classical wildland fire prediction by linking evolutionary optimization strategies to the traditional scheme, with
the aim of emulating an "ideal" fire propagation model as closely as possible. In order to accelerate the fire prediction,
this enhanced prediction scheme has been implemented on a Linux cluster using MPI. Furthermore, a sensitivity analysis
has been carried out to determine the input parameters that can be fixed to their typical values in order to reduce the
search space involved in the optimization process and, therefore, accelerate the whole prediction strategy.
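The evolutionary calibration step described above can be sketched as a small elitist search over fire-model inputs. The operators, parameters, and toy fitness below are all illustrative; the actual scheme evaluates candidates with a fire simulator, in parallel under MPI:

```python
import random

def calibrate(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Toy evolutionary search over fire-model input parameters.
    `fitness` scores a candidate (lower = simulated spread closer to the
    observed one); `bounds` lists (lo, hi) per parameter."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]              # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # averaging crossover
            j = rng.randrange(len(child))                 # mutate one gene
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Toy target: recover a wind speed of 8.0 and fuel moisture of 0.2.
target = [8.0, 0.2]
err = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))
best = calibrate(err, [(0.0, 20.0), (0.0, 1.0)])
```

The sensitivity analysis in the abstract maps onto this sketch directly: every parameter fixed to its typical value is one fewer (lo, hi) entry in `bounds`, shrinking the search space the optimizer must cover within the real-time budget.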
Baker Abdalhaq received the BSc in Computer Science from Princess Sumaya University College, Royal Jordanian Society, Amman, Jordan, in 1993.
In 2001 and 2004, he received the MSc and PhD in Computer Science from the Universitat Autónoma de Barcelona (UAB), respectively. His
main research interest is focused on parallel fire simulation and, in particular, how to take advantage of the computational
power provided by massively distributed systems to enhance wildland fire prediction.
Ana Cortés received both her first degree and her PhD in Computer Science from the Universitat Autonoma de Barcelona (UAB), Spain, in
1990 and 2000, respectively. She is currently an assistant professor of Computer Science at the UAB, where she is a member of
the Computer Architecture and Operating Systems Group at the Computer Science Department. Her current research interests concern
software support for parallel and distributed computing including algorithms and software tools for the load-balancing of
parallel programs. She has also been working on enhancing wildland fire prediction by exploiting parallel/distributed systems.
Tomàs Margalef received a BS degree in physics in 1988 from the Universitat Autónoma de Barcelona (UAB). In 1990 he obtained the MSc in Computer
Science and in 1993 the PhD in Computer Science from UAB. Since 1988 he has been working on several aspects of parallel
and distributed computing. Currently, his research interests focus on the development of high performance applications, automatic
performance analysis, and dynamic performance tuning. Since 1997 he has been working on exploiting parallel/distributed processing
to accelerate and improve the prediction of forest fire propagation. He is an ACM member.
Germán Bianchini received the BSc in Computer Science from the Universidad Nacional del Comahue, Argentina, in 2002. In 2004 and 2006, he received the
MSc and PhD in Computer Science from the Universitat Autónoma de Barcelona (UAB), respectively. His main research interest is
focused on parallel fire simulation and, in particular, how to take advantage of the computational power provided by massively
distributed systems to enhance wildland fire prediction.
Emilio Luque received the Licentiate in Physics and PhD degrees from the University Complutense of Madrid (UCM) in 1968 and 1973, respectively.
Between 1973 and 1976 he was an associate professor at the UCM. Since 1976 he has been a professor of "Computer Architecture and
Technology" at the University Autonoma of Barcelona (UAB), where he leads the Computer Architecture and Operating Systems
(CAOS) Group at the Computer Science Department. Professor Luque was the Computer Science Department chairman for more
than 10 years. He has been an invited lecturer/researcher at universities in the USA, Argentina, Brazil, Poland, Ireland, Cuba, Italy,
Germany and the PR of China. He has published more than 35 papers in technical journals and more than 100 papers at international
conferences. His current major research areas are: computer architecture, interconnection networks, task scheduling in
parallel systems, parallel and distributed simulation environments, environments and programming tools for automatic performance
tuning in parallel systems, cluster and Grid computing, parallel computing for environmental applications (forest fire simulation,
forest monitoring), and distributed video on demand (VoD) systems.