Similar Documents
 20 similar documents found (search time: 31 ms)
1.
While aggregating the throughput of existing disks on cluster nodes is a cost-effective approach to alleviate the I/O bottleneck in cluster computing, this approach suffers from potential performance degradation due to contention for shared resources on the same node between storage data processing and user task computation. This paper proposes to judiciously utilize the storage redundancy in the form of mirroring that exists in a RAID-10 style file system to alleviate this performance degradation. More specifically, a heuristic scheduling algorithm is developed, motivated by observations of a simple cluster configuration, to spatially schedule write operations on the less loaded node of each mirroring pair (see the sketch after this entry). The duplication of modified data to the mirroring nodes is performed asynchronously in the background. The read performance is improved by two techniques: doubling the degree of parallelism and hot-spot skipping. A synthetic benchmark is used to evaluate these algorithms in a real cluster environment and the proposed algorithms are shown to be very effective in performance enhancement. Yifeng Zhu received his B.Sc. degree in Electrical Engineering in 1998 from Huazhong University of Science and Technology, Wuhan, China, and the M.S. and Ph.D. degrees in Computer Science from the University of Nebraska–Lincoln in 2002 and 2005, respectively. He is an assistant professor in the Electrical and Computer Engineering department at the University of Maine. His main research interests are cluster computing, grid computing, computer architecture and systems, and parallel I/O storage systems. Dr. Zhu is a Member of ACM, IEEE, the IEEE Computer Society, and the Francis Crowe Society. Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China; the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in Computer Science in 1991 from Texas A&M University, College Station, Texas, USA. Since August 1991 he has been at the University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Professor and Vice Chair in the Department of Computer Science and Engineering. His present research interests are computer architecture, parallel/distributed computing, cluster and Grid computing, computer storage systems and parallel I/O, performance evaluation, real-time systems, middleware, and distributed systems for distance education. He has over 100 publications in major journals and international conferences in these areas and his research has been supported by NSF, DOD and the State of Nebraska. Dr. Jiang is a Member of ACM, the IEEE Computer Society, and the ACM SIGARCH. Xiao Qin received the BS and MS degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999, respectively. He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. Currently, he is an assistant professor in the Department of Computer Science at the New Mexico Institute of Mining and Technology. He served as a subject area editor of IEEE Distributed System Online (2000–2001). His research interests are in parallel and distributed systems, storage systems, real-time computing, performance evaluation, and fault-tolerance. He is a member of the IEEE. Dan Feng received the Ph.D. degree from Huazhong University of Science and Technology, Wuhan, China, in 1997. 
She is currently a professor in the School of Computer Science, Huazhong University of Science and Technology, Wuhan, China. She is the principal scientist of the National Grand Fundamental Research 973 Program of China “Research on the organization and key technologies of the Storage System on the next generation Internet.” Her research interests include computer architecture, storage systems, parallel I/O, massive storage and performance evaluation. David Swanson received a Ph.D. in physical (computational) chemistry at the University of Nebraska-Lincoln (UNL) in 1995, after which he worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997–1998. In 1999 he returned to UNL, where he directs the Research Computing Facility and currently serves as an Assistant Research Professor in the Department of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the State of Nebraska have supported his research in areas such as large-scale scientific simulation and distributed systems.
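The write-placement heuristic summarized in the abstract above can be illustrated with a small sketch. This is a hypothetical illustration only, assuming a per-node load metric and a background replication queue; it is not the authors' implementation.

    import queue
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        load: float                      # current utilization estimate in [0, 1] (assumed metric)

    # background queue feeding an asynchronous mirroring thread (assumption)
    replication_queue: "queue.Queue[tuple[Node, bytes]]" = queue.Queue()

    def schedule_write(block: bytes, pair: tuple[Node, Node]) -> Node:
        """Write to the less loaded node of a mirroring pair; defer the mirror copy."""
        primary, secondary = sorted(pair, key=lambda n: n.load)
        # ... the actual disk write would be issued to `primary` here ...
        replication_queue.put((secondary, block))   # duplicated asynchronously in the background
        return primary

    def schedule_read(pair: tuple[Node, Node], hot_threshold: float = 0.8) -> list[Node]:
        """Read from both replicas (doubled parallelism) unless one is a hot spot."""
        cool = [n for n in pair if n.load < hot_threshold]
        return cool if cool else list(pair)         # hot-spot skipping, with a fallback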

2.
One of the challenges still open to wildland fire simulators is the capacity of working under real-time constraints with the aim of providing fire spread predictions that could be useful in fire mitigation interventions. We propose going one step beyond the classical wildland fire prediction by linking evolutionary optimization strategies to the traditional scheme with the aim of emulating an “ideal” fire propagation model as much as possible. In order to accelerate the fire prediction, this enhanced prediction scheme has been developed to run in parallel on a Linux cluster using MPI. Furthermore, a sensitivity analysis has been carried out to determine the input parameters that we can fix to their typical values in order to reduce the search space involved in the optimization process and, therefore, accelerate the whole prediction strategy (a sketch of such a calibration loop follows this entry). Baker Abdalhaq received the B.Sc. in Computer Science from Princess Sumaya University College, Royal Jordanian Society, Amman, Jordan, in 1993. In 2001 and 2004, he received the M.Sc. and Ph.D. in Computer Science from Universitat Autónoma de Barcelona (UAB), respectively. His main research interest is focused on parallel fire simulation and, in particular, how to take advantage of the computational power provided by massively distributed systems to enhance wildland fire prediction. Ana Cortés received both her first degree and her PhD in Computer Science from the Universitat Autonoma de Barcelona (UAB), Spain, in 1990 and 2000, respectively. She is currently an assistant professor of Computer Science at the UAB, where she is a member of the Computer Architecture and Operating Systems Group at the Computer Science Department. Her current research interests concern software support for parallel and distributed computing, including algorithms and software tools for the load-balancing of parallel programs. She has also been working on enhancing wildland fire prediction by exploiting parallel/distributed systems. Tomàs Margalef received a BS degree in physics in 1988 from Universitat Autónoma de Barcelona (UAB). In 1990 he obtained the MSc in Computer Science and in 1993 the PhD in Computer Science from UAB. Since 1988 he has been working on several aspects related to parallel and distributed computing. Currently, his research interests focus on the development of high performance applications, automatic performance analysis and dynamic performance tuning. Since 1997 he has been working on exploiting parallel/distributed processing to accelerate and improve the prediction of forest fire propagation. He is an ACM member. Germán Bianchini received the B.Sc. in Computer Science from Universidad Nacional del Comahue, Argentina, in 2002. In 2004 and 2006, he received the MSc and PhD in Computer Science from Universitat Autónoma de Barcelona (UAB), respectively. His main research interest is focused on parallel fire simulation and, in particular, how to take advantage of the computational power provided by massively distributed systems to enhance wildland fire prediction. Emilio Luque received the Licentiate in Physics and PhD degrees from the University Complutense of Madrid (UCM) in 1968 and 1973, respectively. Between 1973 and 1976 he was an associate professor at the UCM. Since 1976 he has been a professor of “Computer Architecture and Technology” at the University Autonoma of Barcelona (UAB), where he is leading the Computer Architecture and Operating System (CAOS) Group at the Computer Science Department. Professor Luque has been the Computer Science Department chairman for more than 10 years. 
He has been an invited lecturer/researcher at universities in the USA, Argentina, Brazil, Poland, Ireland, Cuba, Italy, Germany and the PR of China. He has published more than 35 papers in technical journals and more than 100 papers at international conferences, and his current/major research areas are: computer architecture, interconnection networks, task scheduling in parallel systems, parallel and distributed simulation environments, environment and programming tools for automatic performance tuning in parallel systems, cluster and Grid computing, parallel computing for environmental applications (forest fire simulation, forest monitoring) and distributed video on demand (VoD) systems.
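A minimal sketch of the kind of evolutionary calibration loop the abstract above describes, under stated assumptions: the fire-spread simulator is replaced by a stub, the parameter names, ranges, fixed values and error measure are invented for illustration, and fitness evaluations would in practice be farmed out to cluster workers via MPI.

    import random

    PARAM_RANGES = {"wind_speed": (0.0, 20.0), "moisture": (0.0, 0.4)}  # free parameters (assumed)
    FIXED = {"slope": 10.0}   # parameters pinned to typical values after sensitivity analysis

    def simulate(params):     # stand-in for the real fire-spread simulator
        return params["wind_speed"] * (1.0 - params["moisture"]) + FIXED["slope"]

    def error(params, observed_spread=15.0):
        return abs(simulate(params) - observed_spread)

    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    def evolve(pop_size=20, generations=30):
        pop = [{k: random.uniform(*r) for k, r in PARAM_RANGES.items()}
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=error)                        # fitness evaluation (parallel in practice)
            parents = pop[: pop_size // 2]             # selection
            children = []
            for _ in range(pop_size - len(parents)):   # crossover + mutation
                a, b = random.sample(parents, 2)
                children.append({k: clamp(random.choice((a[k], b[k])) + random.gauss(0.0, 0.05),
                                          *PARAM_RANGES[k])
                                 for k in PARAM_RANGES})
            pop = parents + children
        return min(pop, key=error)                     # best-calibrated parameter set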

3.
Clusters of workstations are a practical approach to parallel computing that provide high performance at a low cost for many scientific and engineering applications. In order to handle problems with increasing data sets, methods supporting parallel out-of-core computations must be investigated. Since writing an out-of-core version of a program is a difficult task and virtual memory systems do not perform well in some cases, we have developed a parallel programming interface and the support library to provide efficient and convenient access to the out-of-core data. This paper focuses on how these components extend the range of problem sizes that can be solved on the cluster of workstations. Execution times of Jacobi iteration using our interface, virtual memory and PVFS are compared to characterize the performance for various problem sizes, and it is concluded that our new interface significantly increases the sizes of problems that can be efficiently solved. Jianqi Tang received her B.Sc. and M.Sc. degrees from Harbin Institute of Technology in 1997 and 1999 respectively, both in computer application. Currently, she is a Ph.D. candidate at the Department of Computer Science and Engineering, Harbin Institute of Technology. She has participated in several national research projects. Her research interests include parallel computing, parallel I/O and grid computing. Binxing Fang received the M.Sc. in 1984 from Tsinghua University and the Ph.D. from Harbin Institute of Technology in 1989, both in computer science. From 1990 to 1993 he was with the National University of Defense Technology as a postdoctoral fellow. Since 1984, he has been a faculty member at the Department of Computer Science and Engineering of Harbin Institute of Technology, where he is presently a Professor. He is a Member of the National Information Expert Consultant Group and a Standing Member of the Council of the Chinese Society of Communications. His research efforts focus on parallel computing, computer networks and information security. Professor Fang has carried out over 30 state- and ministry/province-level projects. Mingzeng Hu was born in 1935. He has been with the Department of Computer Science and Engineering at Harbin Institute of Technology since 1958, where he is currently a Professor. He was a visiting scholar at the Siemens Company, Germany from 1978 to 1979, a visiting associate professor at Chiba University, Japan from 1984 to 1985, and a visiting professor at York University, Canada from 1989 to 1995. He is the Director of the National Key Laboratory of Computer Information Content Security. He is also a Member of the 3rd Academic Degree Committee under the State Council of China. Professor Hu’s research interests include high performance computer architecture and parallel processing technology, fault tolerant computing, network systems, VLSI design, and computer system security technology. He has carried out many state- and ministry/province-level projects and has won several Ministry Science and Technology Progress Awards. He has published over 100 papers in core journals at home and abroad, as well as one book. Professor Hu has supervised over 20 doctoral students. Hongli Zhang received the M.Sc. in computer system software in 1996 and the Ph.D. in computer architecture in 1999 from Harbin Institute of Technology. Currently, she is an Associate Professor at the Department of Computer Science and Engineering, Harbin Institute of Technology. Her research interests include computer network security and parallel computing.
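A compact illustration of the out-of-core access pattern the abstract above is concerned with. The sketch below uses numpy.memmap as a generic stand-in for the paper's out-of-core interface, with an invented block size; it streams a large vector through memory in explicit blocks for one 1-D Jacobi sweep rather than relying on virtual memory.

    import numpy as np

    N, BLOCK = 1_000_000, 100_000          # illustrative problem and block sizes (assumptions)

    def jacobi_sweep(in_path="x_in.dat", out_path="x_out.dat"):
        """One out-of-core Jacobi sweep over a vector stored on disk."""
        x = np.memmap(in_path, dtype=np.float64, mode="r", shape=(N,))
        y = np.memmap(out_path, dtype=np.float64, mode="w+", shape=(N,))
        y[0], y[N - 1] = x[0], x[N - 1]                        # boundary values
        for start in range(0, N, BLOCK):
            lo, hi = max(start - 1, 0), min(start + BLOCK + 1, N)
            block = np.asarray(x[lo:hi])                       # explicit block read (+ halo)
            i0, i1 = max(start, 1), min(start + BLOCK, N - 1)  # interior rows owned by this block
            y[i0:i1] = 0.5 * (block[i0 - lo - 1:i1 - lo - 1] +
                              block[i0 - lo + 1:i1 - lo + 1])  # explicit block write
        y.flush()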

4.
Software Component Frameworks are well known in the commercial business application world and now this technology is being explored with great interest as a way to build large-scale scientific applications on parallel computers. In the case of Grid systems, the current architectural model is based on the emerging web services framework. In this paper we describe progress that has been made on the Common Component Architecture model (CCA) and discuss its success and limitations when applied to problems in Grid computing. Our primary conclusion is that a component model fits very well with a services-oriented Grid, but the model of composition must allow for a very dynamic (both in space and in time) control of composition. We note that this adds a new dimension to conventional service workflow and it extends the “Inversion of Control” aspects of most component systems. Dennis Gannon is a professor of Computer Science at Indiana University. He received his Ph.D. in Computer Science from the University of Illinois in 1980 and his Ph.D. in Mathematics from the University of California in 1974. From 1980 to 1985, he was on the faculty at Purdue University. His research interests include software tools for high performance distributed systems and problem solving environments for scientific computation. Sriram Krishnan received his Ph.D. in Computer Science from Indiana University in 2004. He is currently in the Grid Development Group at the San Diego Supercomputer Center where he is working on designing a Web services based architecture for biomedical applications that is secure and scalable, and is conducive to the creation of complex workflows. He received his undergraduate degree in Computer Engineering from the University of Mumbai, India. Liang Fang is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid computing, Web services, portals, and their security and scalability issues. He is a Research Assistant in Computer Science at Indiana University, currently responsible for investigating authorization and other security solutions for the Linked Environments for Atmospheric Discovery (LEAD) project. Gopi Kandaswamy is a Ph.D. student in the Computer Science Department at Indiana University, where he is currently a Research Assistant. His research interests include Web services and workflow systems for the Grid. Yogesh Simmhan received his B.E. degree in Computer Science from Madras University, India in 2000, and is a doctoral candidate in Computer Science at Indiana University. He is currently working as a Research Assistant at Indiana University, investigating data management issues in the LEAD project. His interests lie in data provenance for workflow systems and its use in data quality estimation. Aleksander Slominski is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid and Web Services, streaming XML Pull Parsing and performance, Grid security, asynchronous messaging, events, and notification brokers, component technologies, and workflow composition. He is currently working as a Research Assistant investigating the creation and execution of dynamic workflows using the Grid Process Execution Language (GPEL) based on WS-BPEL.
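To make the composition idea concrete, here is a toy sketch of framework-driven wiring in the spirit of the uses/provides port pattern the CCA discussion above refers to. All class and method names are invented for illustration; this is not the CCA API.

    class Component:
        """A component exposes 'provides' ports and declares 'uses' ports."""
        def __init__(self, name):
            self.name, self.provides, self.uses = name, {}, {}

        def provide(self, port_name, implementation):
            self.provides[port_name] = implementation

    class Framework:
        """The framework, not the components, decides what is wired to what
        (inversion of control) and may rewire ports while the application runs."""
        def __init__(self):
            self.components = {}

        def add(self, component):
            self.components[component.name] = component

        def connect(self, user, port, provider):
            self.components[user].uses[port] = self.components[provider].provides[port]

        def reconnect(self, user, port, new_provider):   # dynamic (re)composition at run time
            self.connect(user, port, new_provider)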

5.
I/O bottlenecks are already a problem in many large-scale applications that manipulate huge datasets. This problem is expected to get worse as applications get larger, and the I/O subsystem performance lags behind processor and memory speed improvements. At the same time, off-the-shelf clusters of workstations are becoming a popular platform for demanding applications due to their cost-effectiveness and widespread deployment. Caching I/O blocks is one effective way of alleviating disk latencies, and there can be multiple levels of caching on a cluster of workstations. Previous studies have shown the benefits of caching—whether it be local to a particular node, or a shared global cache across the cluster—for certain applications. However, we show that while caching is useful in some situations, it can hurt performance if we are not careful about what to cache and when to bypass the cache. This paper presents compilation techniques and runtime support to address this problem. These techniques are implemented and evaluated on an experimental Linux/Pentium cluster running a parallel file system. Our results using a diverse set of applications (scientific and commercial) demonstrate the benefits of a discretionary approach to caching for I/O subsystems on clusters, providing as much as 48% savings in overall execution time over indiscriminately caching everything in some applications. Parts of this paper have appeared in the Proceedings of the 3rd IEEE/ACM Symposium on Cluster Computing and the Grid (CCGrid'03). This paper is an extension of these prior results, and includes a more extensive performance evaluation. Murali Vilayannur is a Ph.D. student in the Department of Computer Science and Engineering at The Pennsylvania State University. His research interests are in High-Performance Parallel I/O, File Systems, Virtual Memory Algorithms and Operating Systems. Anand Sivasubramaniam received his B.Tech. in Computer Science from the Indian Institute of Technology, Madras, in 1989, and the M.S. and Ph.D. degrees in Computer Science from the Georgia Institute of Technology in 1991 and 1995 respectively. He has been on the faculty at The Pennsylvania State University since Fall 1995 where he is currently an Associate Professor. Anand's research interests are in computer architecture, operating systems, performance evaluation, and applications for both high performance computer systems and embedded systems. Anand's research has been funded by NSF through several grants, including the CAREER award, and from industries including IBM, Microsoft and Unisys Corp. He has several publications in leading journals and conferences, and is on the editorial board of IEEE Transactions on Computers and IEEE Transactions on Parallel and Distributed Systems. He is a recipient of the 2002 IBM Faculty Award. Anand is a member of the IEEE, IEEE Computer Society, and ACM. Mahmut Kandemir received the B.Sc. and M.Sc. degrees in control and computer engineering from Istanbul Technical University, Istanbul, Turkey, in 1988 and 1992, respectively. He received the Ph.D. from Syracuse University, Syracuse, New York in electrical engineering and computer science, in 1999. He has been an assistant professor in the Computer Science and Engineering Department at the Pennsylvania State University since August 1999. His main research interests are optimizing compilers, I/O intensive applications, and power-aware computing. He is a member of the IEEE and the ACM. 
Rajeev Thakur is a Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. He received a B.E. from the University of Bombay, India, in 1990, M.S. from Syracuse University in 1992, and Ph.D. from Syracuse University in 1995, all in computer engineering. His research interests are in the area of high-performance computing in general and high-performance networking and I/O in particular. He was a member of the MPI Forum and participated actively in the definition of the I/O part of the MPI-2 standard. He is the author of a widely used, portable implementation of MPI-IO, called ROMIO. He is also a co-author of the book “Using MPI-2: Advanced Features of the Message Passing Interface” published by MIT Press. Robert Ross received his Ph.D. in Computer Engineering from Clemson University in 2000. He is now an Assistant Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. His research interests are in message passing and storage systems for high performance computing environments. He is the primary author and lead developer for the Parallel Virtual File System (PVFS), a parallel file system for Linux clusters. Current projects include the ROMIO MPI-IO implementation, PVFS, PVFS2, and the MPICH2 implementation of the MPI message passing interface.  相似文献   
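A minimal sketch of the discretionary-caching decision evaluated in the paper above, under stated assumptions: the reuse hint (which the paper derives from compiler analysis and runtime support) is taken here as a plain number, and the LRU structure is illustrative.

    from collections import OrderedDict

    class DiscretionaryCache:
        """Cache an I/O block only when reuse is expected; otherwise bypass the
        cache so one-touch (streaming) data cannot evict useful blocks."""
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()               # block_id -> data, in LRU order

        def access(self, block_id, read_from_disk, reuse_hint):
            if block_id in self.blocks:               # hit
                self.blocks.move_to_end(block_id)
                return self.blocks[block_id]
            data = read_from_disk(block_id)           # miss: fetch the block
            if reuse_hint < 1:                        # no expected reuse: bypass the cache
                return data
            if len(self.blocks) >= self.capacity:     # make room (evict the LRU block)
                self.blocks.popitem(last=False)
            self.blocks[block_id] = data
            return data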

6.
Application scheduling plays an important role in high-performance cluster computing. Application scheduling can be classified as job scheduling and task scheduling. This paper presents a survey on the software tools for the graph-based scheduling on cluster systems with the focus on task scheduling. The tasks of a parallel or distributed application can be properly scheduled onto multi-processors in order to optimize the performance of the program (e.g., execution time or resource utilization). In general, scheduling algorithms are designed based on the notion of task graph that represents the relationship of parallel tasks. The scheduling algorithms map the nodes of a graph to the processors in order to minimize overall execution time. Although many scheduling algorithms have been proposed in the literature, surprisingly not many practical tools can be found in practical use. After discussing the fundamental scheduling techniques, we propose a framework and taxonomy for the scheduling tools on clusters. Using this framework, the features of existing scheduling tools are analyzed and compared. We also discuss the important issues in improving the usability of the scheduling tools. This work is supported by the Hong Kong Polytechnic University under grant H-ZJ80 and by NASA Ames Research Center by a cooperative grant agreement with the University of Texas at Arlington. Jiannong Cao received the BSc degree in computer science from Nanjing University, Nanjing, China in 1982, and the MSc and the Ph.D degrees in computer science from Washington State University, Pullman, WA, USA, in 1986 and 1990 respectively. He is currently an associate professor in Department of Computing at the Hong Kong Polytechnic University, Hong Kong. He is also the director of the Internet and Mobile Computing Lab in the department. He was on the faculty of computer science at James Cook University and University of Adelaide in Australia, and City University of Hong Kong. His research interests include parallel and distributed computing, networking, mobile computing, fault tolerance, and distributed software architecture and tools. He has published over 120 technical papers in the above areas. He has served as a member of editorial boards of several international journals, a reviewer for international journals/conference proceedings, and also as an organizing/programme committee member for many international conferences. Dr. Cao is a member of the IEEE Computer Society, the IEEE Communication Society, IEEE, and ACM. He is also a member of the IEEE Technical Committee on Distributed Processing, IEEE Technical Committee on Parallel Processing, IEEE Technical Committee on Fault Tolerant Computing, and Computer Architecture Professional Committee of the China Computer Federation. Alvin Chan is currently an assistant professor at the Hong Kong Polytechnic University. He graduated from the University of New South Wales with a Ph.D. degree in 1995 and was subsequently employed as a Research Scientist by the CSIRO, Australia. From 1997 to 1998, he was employed by the Centre for Wireless Communications, National University of Singapore as a Program Manager. Dr. Chan is one of the founding members and director of a university spin-off company, Information Access Technology Limited. He is an active consultant and has been providing consultancy services to both local and overseas companies. His research interests include mobile computing, context-aware computing and smart card applications. Yudong Sun received the B.S. and M.S. 
degrees from Shanghai Jiao Tong University, China. He received Ph.D. degree from the University of Hong Kong in 2002, all in computer science. From 1988 to 1996, he was among the teaching staff in Department of Computer Science and Engineering at Shanghai Jiao Tong University. From 2002 to 2003, he held a research position at the Hong Kong Polytechnic University. At present, he is a Research Associate in School of Computing Science at University of Newcastle upon Tyne, UK. His research interests include parallel and distributed computing, Web services, Grid computing, and bioinformatics. Sajal K. Das is currently a Professor of Computer Science and Engineering and the Founding Director of the Center for Research in Wireless Mobility and Networking (CReWMaN) at the University of Texas at Arlington. His current research interests include resource and mobility management in wireless networks, mobile and pervasive computing, sensor networks, mobile internet, parallel processing, and grid computing. He has published over 250 research papers, and holds four US patents in wireless mobile networks. He received the Best Paper Awards in ACM MobiCom’99, ICOIN-16, ACM, MSWiM’00 and ACM/IEEE PADS’97. Dr. Das serves on the Editorial Boards of IEEE Transactions on Mobile Computing, ACM/Kluwer Wireless Networks, Parallel Processing Letters, Journal of Parallel Algorithms and Applications. He served as General Chair of IEEE PerCom’04, IWDC’04, MASCOTS’02 ACM WoWMoM’00-02; General Vice Chair of IEEE PerCom’03, ACM MobiCom’00 and IEEE HiPC’00-01; Program Chair of IWDC’02, WoWMoM’98-99; TPC Vice Chair of ICPADS’02; and as TPC member of numerous IEEE and ACM conferences. Minyi Guo received his Ph.D. degree in information science from University of Tsukuba, Japan in 1998. From 1998 to 2000, Dr. Guo had been a research scientist of NEC Soft, Ltd. Japan. He is currently a professor at the Department of Computer Software, The University of Aizu, Japan. From 2001 to 2003, he was a visiting professor of Georgia State University, USA, Hong Kong Polytechnic University, Hong Kong. Dr. Guo has served as general chair, program committee or organizing committee chair for many international conferences, and delivered more than 20 invited talks in USA, Australia, China, and Japan. He is the editor-in-chief of the Journal of Embedded Systems. He is also in editorial board of International Journal of High Performance Computing and Networking, Journal of Embedded Computing, Journal of Parallel and Distributed Scientific and Engineering Computing, and International Journal of Computer and Applications. Dr. Guo’s research interests include parallel and distributed processing, parallelizing compilers, data parallel languages, data mining, molecular computing and software engineering. He is a member of the ACM, IEEE, IEEE Computer Society, and IEICE. He is listed in Marquis Who’s Who in Science and Engineering.  相似文献   
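As a concrete example of the graph-based scheduling the survey above covers, here is a bare-bones list-scheduling heuristic: tasks become ready when their predecessors finish and are mapped to the processor that can start them earliest. The task-graph encoding is an assumption, and communication costs are ignored for brevity.

    def list_schedule(tasks, deps, cost, n_procs):
        """tasks: task ids; deps: {task: [predecessor ids]}; cost: {task: run time}."""
        finish, proc_free, placement = {}, [0.0] * n_procs, {}
        done, ready = set(), [t for t in tasks if not deps.get(t)]
        while ready:
            task = max(ready, key=lambda t: cost[t])           # simple priority: longest task first
            earliest = max([finish[p] for p in deps.get(task, [])], default=0.0)
            proc = min(range(n_procs), key=lambda i: max(proc_free[i], earliest))
            start = max(proc_free[proc], earliest)
            finish[task] = proc_free[proc] = start + cost[task]
            placement[task] = proc
            done.add(task)
            ready.remove(task)
            ready += [t for t in tasks if t not in done and t not in ready
                      and all(p in done for p in deps.get(t, []))]
        return placement, max(finish.values())                 # mapping and makespan

    # e.g. a diamond-shaped task graph scheduled onto two processors:
    placement, makespan = list_schedule(
        ["a", "b", "c", "d"], {"b": ["a"], "c": ["a"], "d": ["b", "c"]},
        {"a": 2, "b": 3, "c": 1, "d": 2}, n_procs=2)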

7.
The adequate location of wells in oil and environmental applications has a significant economic impact on reservoir management. However, the determination of optimal well locations is both challenging and computationally expensive. The overall goal of this research is to use the emerging Grid infrastructure to realize an autonomic self-optimizing reservoir framework. In this paper, we present a policy-driven peer-to-peer Grid middleware substrate to enable the use of the Simultaneous Perturbation Stochastic Approximation (SPSA) optimization algorithm, coupled with the Integrated Parallel Accurate Reservoir Simulator (IPARS) and an economic model to find the optimal solution for the well placement problem. Wolfgang Bangerth is a postdoctoral research fellow at both the Institute for Computational Engineering and Sciences, and the Institute for Geophyics, at the University of Texas at Austin. He obtained his Ph.D. in applied mathematics from the University of Heidelberg, Germany in 2002. He is the project leader for the deal.II finite element library (http://www.dealii.org). Wolfgang is a member of SIAM, AAAS, and ACM. Hector Klie obtained his Ph.D. degree in Computational Science and Engineering at Rice University, 1996, he completed his Master and undergraduate degrees in Computer Science at the Simon Bolivar University, Venezuela in 1991 and 1989, respectively. Hector Klie's main research interests are in the development of efficient parallel linear and nonlinear solvers and optimization algorithms for large-scale transport and flow of porous media problems. He currently holds the position of Associate Director and Senior Research Associate in the Center for Subsurface Modeling at the Institute of Computational Science and Engineering at The University of Texas at Austin. Dr. Klie is current member of SIAM, SPE and SEG. Vincent Matossian obtained a Masters in applied physics from the French Université Pierre et Marie Curie. Vincent is currently pursuing a Ph.D. degree in distributed systems at the Department of Electrical and Computer Engineering at Rutgers University under the guidance of Manish Parashar. His research interests include information discovery and ad-hoc communication paradigms in decentralized systems. Manish Parashar is Professor of Electrical and Computer Engineering at Rutgers University, where he also is director of the Applied Software Systems Laboratory. He received a BE degree in Electronics and Telecommunications from Bombay University, India and MS and Ph.D. degrees in Computer Engineering from Syracuse University. He has received the Rutgers Board of Trustees Award for Excellence in Research (2004–2005), NSF CAREER Award (1999) and the Enrico Fermi Scholarship from Argonne National Laboratory (1996). His research interests include autonomic computing, parallel & distributed computing (including peer-to-peer and Grid computing), scientific computing, software engineering. He is a senior member of IEEE, a member of the IEEE Computer Society Distinguished Visitor Program (2004–2007), and a member of ACM. Mary Fanett Wheeler obtained her Ph.D. at Rice University in 1971. Her primary research interest is in the numerical solutions of partial differential systems with applications to flow in porous media, geomechanics, surface flow, and parallel computation. 
Her numerical work includes formulation, analysis and implementation of finite-difference/finite-element discretization schemes for nonlinear, coupled PDEs as well as domain decomposition iterative solution methods. She has directed the Center for Subsurface Modeling, The University of Texas at Austin, since its creation in 1990. Dr. Wheeler is the recipient of the Ernest and Virginia Cockrell Chair in Engineering and is Professor in the Department of Aerospace Engineering & Engineering Mechanics and in the Department of Petroleum & Geosystems Engineering of The University of Texas at Austin.
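The SPSA update at the core of the framework above can be sketched in a few lines. The gradient is estimated from just two objective evaluations per iteration, which is what makes it attractive when each evaluation is an expensive reservoir simulation; the toy objective and the gain-sequence constants below are placeholders, not the paper's settings.

    import random

    def spsa(objective, theta, iterations=100, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
        """Minimize `objective` with Simultaneous Perturbation Stochastic Approximation."""
        theta = list(theta)
        for k in range(1, iterations + 1):
            ak, ck = a / k ** alpha, c / k ** gamma              # decaying gain sequences
            delta = [random.choice((-1.0, 1.0)) for _ in theta]  # Bernoulli +/-1 perturbation
            plus  = [t + ck * d for t, d in zip(theta, delta)]
            minus = [t - ck * d for t, d in zip(theta, delta)]
            diff = objective(plus) - objective(minus)            # two (expensive) evaluations
            grad = [diff / (2.0 * ck * d) for d in delta]        # simultaneous gradient estimate
            theta = [t - ak * g for t, g in zip(theta, grad)]    # descend the estimated gradient
        return theta

    # toy stand-in for "simulate a well placement, return its cost":
    best = spsa(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2, [0.0, 0.0])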

8.
Load balancing in a workstation-based cluster system has been investigated extensively, mainly focusing on the effective usage of global CPU and memory resources. However, if a significant portion of applications running in the system is I/O-intensive, traditional load balancing policies can cause system performance to decrease substantially. In this paper, two I/O-aware load-balancing schemes, referred to as IOCM and WAL-PM, are presented to improve the overall performance of a cluster system with a general and practical workload including I/O activities. The proposed schemes dynamically detect I/O load imbalance of nodes in a cluster, and determine whether to migrate some I/O load from overloaded nodes to other less- or under-loaded nodes. The current running jobs are eligible to be migrated in WAL-PM only if overall performance improves. Besides balancing I/O load, the scheme judiciously takes into account both CPU and memory load sharing in the system, thereby maintaining the same level of performance as existing schemes when I/O load is low or well balanced. Extensive trace-driven simulations for both synthetic and real I/O-intensive applications show that: (1) Compared with existing schemes that only consider CPU and memory, the proposed schemes improve the performance with respect to mean slowdown by up to a factor of 20; (2) When compared to the existing approaches that only consider I/O with non-preemptive job migrations, the proposed schemes achieve improvements in mean slowdown by up to a factor of 10; (3) Under CPU-memory intensive workloads, our scheme improves the performance over the existing approaches that only consider I/O by up to 47.5%. Xiao Qin received the BSc and MSc degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999, respectively. He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. Currently, he is an assistant professor in the department of computer science at the New Mexico Institute of Mining and Technology. His research interests include parallel and distributed systems, storage systems, real-time computing, performance evaluation, and fault-tolerance. He served on program committees of international conferences like CLUSTER, ICPP, and IPCCC. During 2000–2001, he was on the editorial board of The IEEE Distributed System Online. He is a member of the IEEE. Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China; the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in Computer Science in 1991 from the Texas A&M University, College Station, Texas, USA. Since August 1991 he has been at the University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Associate Professor and Vice Chair in the Department of Computer Science and Engineering. His present research interests are computer architecture, parallel/distributed computing, computer storage systems and parallel I/O, performance evaluation, middleware, networking, and computational engineering. He has over 70 publications in major journals and international Conferences in these areas and his research has been supported by NSF, DOD and the State of Nebraska. Dr. Jiang is a Member of ACM, the IEEE Computer Society, and the ACM SIGARCH and ACM SIGCOMM. Yifeng Zhu received the B.E. 
degree in Electrical Engineering from Huazhong University of Science and Technology in 1998 and the M.S. degree in computer science from the University of Nebraska–Lincoln (UNL) in 2002. Currently he is working towards his Ph.D. degree in the Department of Computer Science and Engineering at UNL. His main research interests are parallel I/O, networked storage, parallel scheduling, and cluster computing. He is a student member of the IEEE. David Swanson received a Ph.D. in physical (computational) chemistry at the University of Nebraska-Lincoln (UNL) in 1995, after which he worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997–1998. In early 1999 he returned to UNL, where he has coordinated the Research Computing Facility and currently serves as an Assistant Research Professor in the Department of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the State of Nebraska have supported his research in areas such as large-scale parallel simulation and distributed systems.
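A hypothetical sketch of an I/O-aware migration decision in the spirit of the schemes above: a node is considered I/O-overloaded relative to the cluster average, a destination with spare CPU and memory is chosen, and the move is taken only if the expected gain outweighs a migration cost. The load indices, thresholds, and cost model are invented for illustration.

    def pick_migration(nodes, io_threshold=1.25, migration_cost=0.5):
        """nodes: {name: {"cpu": load, "mem": load, "io": load}} (normalized indices)."""
        avg_io = sum(n["io"] for n in nodes.values()) / len(nodes)
        overloaded = [k for k, n in nodes.items() if n["io"] > io_threshold * avg_io]
        if not overloaded:
            return None                               # I/O load is already balanced
        source = max(overloaded, key=lambda k: nodes[k]["io"])
        # candidate destinations must be I/O-light and have spare CPU and memory
        candidates = [k for k, n in nodes.items()
                      if k != source and n["io"] < avg_io
                      and n["cpu"] < 1.0 and n["mem"] < 1.0]
        if not candidates:
            return None
        target = min(candidates, key=lambda k: nodes[k]["io"])
        expected_gain = nodes[source]["io"] - nodes[target]["io"]
        return (source, target) if expected_gain > migration_cost else None

    # e.g. node "n1" is I/O-hot while "n3" has headroom:
    print(pick_migration({"n1": {"cpu": 0.6, "mem": 0.5, "io": 2.0},
                          "n2": {"cpu": 0.9, "mem": 0.8, "io": 1.0},
                          "n3": {"cpu": 0.3, "mem": 0.4, "io": 0.2}}))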

9.
Caching techniques have been used widely to improve the performance gaps of storage hierarchies in computing systems. Little is known about the impact of policies on the response times of jobs that access and process very large files in data grids, particularly when data and computations on the data have to be co-located on the same host. In data intensive applications that access large data files over wide area network environment, such as data-grids, the combination of policies for job servicing (or scheduling), caching and cache replacement can significantly impact the performance of grid jobs. We present preliminary results of a simulation study that combines an admission policy with a cache replacement policy when servicing jobs submitted to a storage resource manager.The results show that, in comparison to a first come first serve policy, the response times of jobs are significantly improved, for practical limits of disk cache sizes, when the jobs that are back-logged to access the same files are taken into consideration in scheduling the next file to be retrieved into the disk cache. Not only are the response times of jobs improved, but also the metric measures for caching policies, such as the hit ratio and the average cost per retrieval, are improved irrespective of the cache replacement policy used. Ekow Otoo is research staff scientist with the scientific data management group at Lawrence Berkeley National Laboratory, University of California, Berkeley. He received his B.Sc. degree in Electrical Engineering from the University of Science and Technology, Kumasi, Ghana and a post graduate diploma in Computer Science from the University of Ghana, Legon. In 1977, he received his M.Sc. degree in Computer Science from the University of Newcastle Upon Tyne in Britain and his Ph.D. degree in Computer Science from McGill University, Montreal, Canada in 1983. He joined the faculty of the School of Computer Science, Carleton University, in 1983 and from 1987 to 1999, he was a tenured faculty member of the School of Computer Science, Carleton University, Ottawa, Canada. He has served as research consultant to Bell Northern Research, Ottawa, Canada, and as a research project consultant to the GIS Division, Geomatics Canada, Natural Resources Canada, from 1990 to 1998. Ekow Otoo is a member of the ACM and IEEE. His research interests include database management systems, data structures and algorithms, parallel I/O for high performance computing, parallel and distributed computing. Doron Rotem is currently a senior staff scientist and a member of the Data Management group at the Lawrence Berkeley National Lab. His research interests include Grid Computing, Workflow, Scientific Data Management and Paralled and Distributed Computing and Algorithms. He has published over 80 papers in international journals and conferences in these areas. Prior to that, Dr Rotem co-founded and served as a CTO of a startup company, called CommerceRoute, that made software products in the area of workflow and data integration and before that, he was an Associate Professor in the Department of Computer Science, University of Waterloo, Canada. Dr. Rotem holds a B.Sc degree in Mathematics and Statistics from the Hebrew University, Jerusalem, Israel and a Ph.D. in Computer Science from the University of the Witwatersrand, Johannesburg, South Africa. Arie Shoshani is a senior staff scientist at Lawrence Berkeley National Laboratory. He joined LBNL in 1976. He heads the Scientific Data Management Group. 
He received his Ph.D. from Princeton University in 1969. From 1969 to 1976, he was a researcher at System Development Corporation, where he worked on the Network Control Program for the ARPAnet, distributed databases, database conversion, and natural language interfaces to data management systems. His current areas of work include data models, query languages, temporal data, statistical and scientific database management, storage management on tertiary storage, and grid storage middleware. Arie is also the director of a Scientific Data Management (SDM) Integrated Software Infrastructure Center (ISIC), one of seven centers selected by the SciDAC program at DOE in 2001. In this capacity, he is coordinating the work of collaborators from 4 DOE laboratories and 4 universities (see: http://sdmcenter.lbl.gov). Dr. Shoshani has published over 65 technical papers in refereed journals and conferences, chaired several workshops, conferences, and panels in database management; and served on numerous program committees for various database conferences. He also served as an associate editor for the ACM Transactions on Database Systems. He was elected a member of the VLDB Endowment Board, served as the Publication Board Chairperson for the VLDB Journal, and as the Vice-President of the VLDB Endowment. His home page is http://www.lbl.gov/arie.  相似文献   
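The scheduling idea in the abstract above (entry 9) can be sketched with a small structure: rather than serving file requests first-come-first-served, the storage resource manager stages next the file with the most back-logged jobs, and the replacement policy prefers to evict files nobody is currently waiting for. The data structures and method names are illustrative assumptions, not the simulated policies themselves.

    from collections import defaultdict

    class StorageResourceManager:
        def __init__(self, cache_capacity_files):
            self.capacity = cache_capacity_files
            self.cached = set()                       # files currently in the disk cache
            self.waiting = defaultdict(list)          # file -> jobs back-logged for it

        def submit(self, job, file_id):
            self.waiting[file_id].append(job)

        def next_file_to_stage(self):
            """Admission: pick the uncached file that unblocks the most waiting jobs."""
            pending = {f: jobs for f, jobs in self.waiting.items()
                       if f not in self.cached and jobs}
            return max(pending, key=lambda f: len(pending[f])) if pending else None

        def eviction_victim(self):
            """Replacement: prefer evicting a cached file with no back-logged jobs."""
            if len(self.cached) < self.capacity:
                return None
            return min(self.cached, key=lambda f: len(self.waiting.get(f, [])))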

10.
Large amounts of monitoring data can be collected from distributed systems as observables for analyzing system behaviors. However, without reasonable models to characterize systems, we can hardly interpret such monitoring data effectively for system management. In this paper, a new concept named flow intensity is introduced to measure the intensity with which internal monitoring data reacts to the volume of user requests in distributed transaction systems. We propose a novel approach to automatically model and search relationships between the flow intensities measured at various points across the system. If the modeled relationships hold all the time, they are regarded as invariants of the underlying system. Experimental results from a real system demonstrate that such invariants widely exist in distributed transaction systems. Further, we discuss how such invariants can be used to characterize complex systems and support autonomic system management. Guofei Jiang received the B.S. and Ph.D. degrees in electrical and computer engineering from Beijing Institute of Technology, China, in 1993 and 1998, respectively. During 1998–2000, he was a postdoctoral fellow in computer engineering at Dartmouth College, NH. He is currently a research staff member with the Robust and Secure Systems Group at NEC Laboratories America in Princeton, NJ. During 2000–2004, he was a research scientist in the Institute for Security Technology Studies at Dartmouth College. His current research focus is on distributed systems, dependable and secure computing, and system and information theory. He has published over 50 technical papers in these areas. He is an associate editor of IEEE Security and Privacy magazine and has served on the program committees of many conferences. Haifeng Chen received the BEng and MEng degrees, both in automation, from Southeast University, China, in 1994 and 1997 respectively, and the PhD degree in computer engineering from Rutgers University, New Jersey, in 2004. He has worked as a researcher at the Chinese national research institute of power automation. He is currently a research staff member at NEC Laboratories America, Princeton, NJ. His research interests include data mining, autonomic computing, pattern recognition and robust statistics. Kenji Yoshihira received the B.E. in EE at the University of Tokyo in 1996 and designed processor chips for enterprise computers at Hitachi Ltd. for five years. He served as CTO at Investoria Inc. in Japan, developing an Internet service system for financial information distribution, through 2002, and received the M.S. in CS at New York University in 2004. He is currently a research staff member with the Robust and Secure Systems Group at NEC Laboratories America, Inc. in NJ. His current research focus is on distributed systems and autonomic computing.
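A deliberately simplified sketch of the invariant idea above: learn a relationship between two flow-intensity series measured under normal operation, then flag new samples that break it. The paper models relationships with ARX models and tracks how well they fit over time; plain least squares and a fixed tolerance are used here only for illustration.

    def fit_relationship(x, y):
        """Least-squares fit y ~ a*x + b between two flow-intensity series."""
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        a = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
            sum((xi - mean_x) ** 2 for xi in x)
        return a, mean_y - a * mean_x

    def invariant_holds(a, b, x_t, y_t, tolerance=0.1):
        """Does the learned relationship still explain the new measurement?"""
        return abs(y_t - (a * x_t + b)) <= tolerance * max(abs(y_t), 1.0)

    # train on measurements from normal operation, then monitor new samples:
    a, b = fit_relationship([10, 20, 30, 40], [21, 41, 61, 81])   # roughly y = 2x + 1
    assert invariant_holds(a, b, 50, 101)        # consistent with the invariant
    assert not invariant_holds(a, b, 50, 140)    # deviation -> possible fault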

11.
When users’ tasks in a distributed heterogeneous computing environment (e.g., a cluster of heterogeneous computers) are allocated resources, the total demand placed on some system resources by the tasks, for a given interval of time, may exceed the availability of those resources. In such a case, some tasks may receive degraded service or be dropped from the system. One part of a measure to quantify the success of a resource management system (RMS) in such a distributed environment is the collective value of the tasks completed during an interval of time, as perceived by the user, application, or policy maker. The Flexible Integrated System Capability (FISC) measure presented here is a measure for quantifying this collective value. The FISC measure is a flexible multi-dimensional measure such that any task attribute can be inserted and may include priorities, versions of a task or data, deadlines, situational mode, security, application- and domain-specific QoS, and task dependencies. For an environment where it is important to investigate how well data communication requests are satisfied, the data communication requests satisfied can be the basis of the FISC measure instead of the tasks completed. The motivation behind the FISC measure is to determine the performance of resource management schemes if tasks have multiple attributes that need to be satisfied. The goal of this measure is to compare the results of different resource management heuristics that are trying to achieve the same performance objective but with different approaches (a hypothetical worked example of such a multi-attribute value follows this entry). This research was supported by the DARPA/ITO Quorum Program, by the DARPA/ISO BADD Program and the Office of Naval Research under ONR grant number N00014-97-1-0804, by the DARPA/ITO AICE program under contract numbers DABT63-99-C-0010 and DABT63-99-C-0012, and by the Colorado State University George T. Abell Endowment. Intel and Microsoft donated some of the equipment used in this research. Jong-Kook Kim is pursuing a Ph.D. degree from the School of Electrical and Computer Engineering at Purdue University (expected in August 2004). Jong-Kook received his M.S. degree in electrical and computer engineering from Purdue University in May 2000. He received his B.S. degree in electronic engineering from Korea University, Seoul, Korea in 1998. He has presented his work at several international conferences and has been a reviewer for numerous conferences and journals. His research interests include heterogeneous distributed computing, computer architecture, performance measures, resource management, evolutionary heuristics, and power-aware computing. He is a student member of the IEEE, IEEE Computer Society, and ACM. Debra Hensgen is a member of the Research and Evaluation Team at OpenTV in Mountain View, California. OpenTV produces middleware for set-top boxes in support of interactive television. She received her Ph.D. in the area of Distributed Operating Systems from the University of Kentucky. Prior to moving to private industry, as an Associate Professor in the systems area, she worked with students and colleagues to design and develop tools and systems for resource management, network re-routing algorithms and systems that preserve quality of service guarantees, and visualization tools for performance debugging of parallel and distributed systems. 
She has published numerous papers concerning her contributions to the Concurra toolkit for automatically generating safe, efficient concurrent code, the Graze parallel processing performance debugger, the SAAM path information base, and the SmartNet and MSHN Resource Management Systems. Taylor Kidd is currently a Software Architect for Vidiom Systems in Portland Oregon. His current work involves the writing of multi-company industrial specifications and the architecting of software systems for the digital cable television industry. He has been involved in the establishment of international specifications for digital interactive television in both Europe and in the US. Prior to his current position, Dr. Kidd has been a researcher for the US Navy as well as an Associate Professor at the Naval Postgraduate School. Dr Kidd received his Ph.D. in Electrical Engineering in 1991 from the University of California, San Diego. H. J. Siegel was appointed the George T. Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering at Colorado State University (CSU) in August 2001, where he is also a Professor of Computer Science. In December 2002, he became the first Director of the CSU Information Science and Technology Center (ISTeC). ISTeC is a university-wide organization for promoting, facilitating, and enhancing CSU’s research, education, and outreach activities pertaining to the design and innovative application of computer, communication, and information systems. From 1976 to 2001, he was a professor at Purdue University. He received two BS degrees from MIT, and the MA, MSE, and PhD degrees from Princeton University. His research interests include parallel and distributed computing, heterogeneous computing, robust computing systems, parallel algorithms, parallel machine interconnection networks, and reconfigurable parallel computer systems. He has co-authored over 300 published papers on parallel and distributed computing and communication, is an IEEE Fellow, is an ACM Fellow, was a Coeditor-in-Chief of the Journal of Parallel and Distributed Computing, and was on the Editorial Boards of both the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. He was Program Chair/Co-Chair of three major international conferences, General Chair/Co-Chair of four international conferences, and Chair/Co-Chair of five workshops. He has been an international keynote speaker and tutorial lecturer, and has consulted for industry and government. David St. John is Chief Information Officer for WeatherFlow, Inc., a weather services company specializing in coastal weather observations and forecasts. He received a master’s degree in Engineering from the University of California, Irvine. He spent several years as the head of staff on the Management System for Heterogeneous Networks project in the Computer Science Department of the Naval Postgraduate School. His current relationship with cluster computing is as a user of the Regional Atmospheric Modeling System (RAMS), a numerical weather model developed at Colorado State University. WeatherFlow runs RAMS operationally on a Linux-based cluster. Cynthia Irvine is a Professor of Computer Science at the Naval Postgraduate School in Monterey, California. She received her Ph.D. from Case Western Reserve University and her B.A. in Physics from Rice University. She joined the faculty of the Naval Postgraduate School in 1994. Previously she worked in industry on the development of high assurance secure systems. 
In 2001, Dr. Irvine received the Naval Information Assurance Award. Dr. Irvine is the Director of the Center for Information Systems Security Studies and Research at the Naval Postgraduate School. She has served on special panels for NSF, DARPA, and OSD. In the area of computer security education Dr. Irvine has most recently served as the general chair of the Third World Conference on Information Security Education and the Fifth Workshop on Education in Computer Security. She co-chaired the NSF workshop on Cyber-security Workforce Needs Assessment and Educational Innovation and was a participant in the Computing Research Association/NSF sponsored Grand Challenges in Information Assurance meeting. She is a member of the editorial board of the Journal of Information Warfare and has served as a reviewer and/or program committee member of a variety of security related conferences. She has written over 100 papers and articles and has supervised the work of over 80 students. Professor Irvine is a member of the ACM, the AAS, a life member of the ASP, and a Senior Member of the IEEE. Timothy E. Levin is a Research Associate Professor at the Naval Postgraduate School. He has spent over 18 years working in the design, development, evaluation, and verification of secure computer systems, including operating systems, databases and networks. His current research interests include high assurance system design and analysis, development of models and methods for the dynamic selection of QoS security attributes, and the application of formal methods to the development of secure computer systems. Viktor K. Prasanna received his BS in Electronics Engineering from the Bangalore University and his MS from the School of Automation, Indian Institute of Science. He obtained his Ph.D. in Computer Science from the Pennsylvania State University in 1983. Currently, he is a Professor in the Department of Electrical Engineering as well as in the Department of Computer Science at the University of Southern California, Los Angeles. He is also an associate member of the Center for Applied Mathematical Sciences (CAMS) at USC. He served as the Division Director for the Computer Engineering Division during 1994–98. His research interests include parallel and distributed systems, embedded systems, configurable architectures and high performance computing. Dr. Prasanna has published extensively and consulted for industries in the above areas. He has served on the organizing committees of several international meetings in VLSI computations, parallel computation, and high performance computing. He is the Steering Co-chair of the International Parallel and Distributed Processing Symposium [merged IEEE International Parallel Processing Symposium (IPPS) and the Symposium on Parallel and Distributed Processing (SPDP)] and is the Steering Chair of the International Conference on High Performance Computing(HiPC). He serves on the editorial boards of the Journal of Parallel and Distributed Computing and the Proceedings of the IEEE. He is the Editor-in-Chief of the IEEE Transactions on Computers. He was the founding Chair of the IEEE Computer Society Technical Committee on Parallel Processing. He is a Fellow of the IEEE. Richard F. Freund is the originator of GridIQ’s network scheduling concepts that arose from mathematical and computing approaches he developed for the Department of Defense in the early 1980’s. Dr. 
Freund has over twenty-five years of experience in computational mathematics, algorithm design, high performance computing, distributed computing, network planning, and heterogeneous scheduling. Since 1989, Dr. Freund has published over 45 journal articles in these fields. He has also been an editor of special editions of IEEE Computer and the Journal of Parallel and Distributed Computing. In addition, he is a founder of the Heterogeneous Computing Workshop, held annually in conjunction with the International Parallel Processing Symposium. Dr. Freund is the recipient of many awards, including the prestigious Department of Defense Meritorious Civilian Service Award in 1984 and the Lauritsen-Bennet Award from the Space and Naval Warfare Systems Command in San Diego, California.
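To make the idea of a multi-attribute collective value concrete, here is a purely hypothetical example in the spirit of the FISC measure described in the abstract of this entry. The attributes, weights, and combining rule below are invented for illustration and are not the published FISC definition.

    def task_value(task, weights):
        """Value contributed by one completed task, scaled by its attributes (assumed rule)."""
        value = task["priority"]
        value *= weights["on_time"] if task["met_deadline"] else weights["late"]
        value *= weights["version"] ** task["versions_degraded"]   # degraded task/data version
        return value

    def collective_value(completed_tasks, weights):
        """Collective value accrued by the RMS over an interval of time."""
        return sum(task_value(t, weights) for t in completed_tasks)

    weights = {"on_time": 1.0, "late": 0.25, "version": 0.8}
    completed = [{"priority": 3, "met_deadline": True,  "versions_degraded": 0},
                 {"priority": 1, "met_deadline": False, "versions_degraded": 2}]
    print(collective_value(completed, weights))    # 3.0 + 1 * 0.25 * 0.64 = 3.16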

12.
Recently, software distributed shared memory (DSM) systems have successfully provided an easy user interface to parallel user applications on distributed systems. In order to improve program performance, most DSM systems greedily utilize all of the available processors in a computer network to execute user programs. However, using more processors to execute a program does not necessarily guarantee better performance. The overhead of parallelizing a program increases with the number of processors used for its execution. If the performance gain from parallelization cannot compensate for the overhead, increasing the number of execution processors will result in performance degradation and resource waste. In this paper, we propose a mechanism to dynamically find a suitable system scale to optimize the performance of DSM applications according to run-time information. The experimental results show that the proposed mechanism can precisely predict the number of processors that will result in the best performance and then effectively optimize the performance of the test applications by adapting the system scale according to the predicted result. Yi-Chang Zhuang received his B.S., M.S. and Ph.D. degrees in electrical engineering from National Cheng Kung University in 1995, 1997, and 2004. He is currently working as an engineer at the Industrial Technology Research Institute in Taiwan. His research interests include object-based storage, file systems, distributed systems, and grid computing. Jyh-Biau Chang is currently an assistant professor at the Information Management Department of Leader University in Taiwan. He received his B.S., M.S. and Ph.D. degrees from the Electrical Engineering Department of National Cheng Kung University in 1994, 1996, and 2005. His research interest is focused on cluster and grid computing, parallel and distributed systems, and operating systems. Tyng-Yeu Liang is currently an assistant professor who teaches and studies at the Department of Electrical Engineering, National Kaohsiung University of Applied Sciences in Taiwan. He received his B.S., M.S. and Ph.D. degrees from National Cheng Kung University in 1992, 1994, and 2000. His research interests include cluster and grid computing, image processing and multimedia. Ce-Kuen Shieh is currently a professor at the Electrical Engineering Department of National Cheng Kung University in Taiwan. He is also the chief of the computation center at National Cheng Kung University. He received his Ph.D. degree from the Department of Electrical Engineering of National Cheng Kung University in 1988. He was the chairman of the Electrical Engineering Department of National Cheng Kung University from 2002 to 2005. His research interest is focused on computer networks and parallel and distributed systems. Laurence T. Yang is a professor at the Department of Computer Science, St. Francis Xavier University, Canada. His research includes high performance computing and networking, embedded systems, ubiquitous/pervasive computing and intelligence, and autonomic and trusted computing.
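An illustrative sketch of choosing a system scale from run-time information, in the spirit of the mechanism above: model execution time as parallelizable work divided by the processor count plus an overhead term that grows with the processor count, then pick the count with the smallest predicted time. The simple cost model below is an assumption for illustration, not the paper's predictor.

    def predict_time(p, parallel_work, per_proc_overhead):
        """Predicted execution time on p processors under a simple cost model."""
        return parallel_work / p + per_proc_overhead * p

    def best_scale(parallel_work, per_proc_overhead, max_procs):
        """Processor count with the smallest predicted execution time."""
        return min(range(1, max_procs + 1),
                   key=lambda p: predict_time(p, parallel_work, per_proc_overhead))

    # e.g. 120 s of parallelizable work, 0.5 s of extra overhead per added processor:
    print(best_scale(120.0, 0.5, 32))   # about 15; adding more processors would hurt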

13.
Distributed Shared Arrays (DSA) is a distributed virtual machine that supports Java-compliant multithreaded programming with mobility support for system reconfiguration in distributed environments. The DSA programming model allows programmers to explicitly control data distribution so as to take advantage of the deep memory hierarchy, while relieving them from error-prone orchestration of communication and synchronization at run time. The DSA system is developed as an integral component of mobility support middleware for Grid computing, so that DSA-based virtual machines can be reconfigured to adapt to varying resource supply and demand over the course of a computation. The DSA runtime system also features a directory-based cache coherence protocol supporting replication at user-defined sharing granularity and a communication proxy mechanism for reducing network contention. System reconfiguration is achieved by a DSA service migration mechanism, which moves the DSA service and the resident computational agents between physical servers for load balancing and fault resilience. We demonstrate the programmability of the model in a number of parallel applications and evaluate its performance with application benchmark programs, examining in particular the impact of coherence granularity and service migration overhead. Song Fu received the BS degree in computer science from Nanjing University of Aeronautics and Astronautics, China, in 1999, and the MS degree in computer science from Nanjing University, China, in 2002. He is currently a PhD candidate in computer engineering at Wayne State University. His research interests include resource management, security, and mobility issues in wide-area distributed systems. Cheng-Zhong Xu received the BS and MS degrees in computer science from Nanjing University in 1986 and 1989, respectively, and the Ph.D. degree in computer science from the University of Hong Kong in 1993. He is an Associate Professor in the Department of Electrical and Computer Engineering of Wayne State University. His research interests are in distributed and parallel systems, particularly in resource management for high performance cluster and grid computing and scalable and secure Internet services. He has published more than 100 peer-reviewed articles in journals and conference proceedings in these areas. He is the author of the book Scalable and Secure Internet Services and Architecture (CRC Press, 2005) and a co-author of the book Load Balancing in Parallel Computers: Theory and Practice (Kluwer Academic, 1997). He serves on the editorial boards of the Journal of Parallel and Distributed Computing, the Journal of Parallel, Emergent, and Distributed Systems, the Journal of High Performance Computing and Networking, and the Journal of Computers and Applications. He was the founding program co-chair of the International Workshop on Security in Systems and Networks (SSN), the general co-chair of the IFIP 2006 International Conference on Embedded and Ubiquitous Computing (EUC06), and a member of the program committees of numerous conferences. His research has been supported in part by the US National Science Foundation, NASA, and Cray Research. He is a recipient of the Faculty Research Award of Wayne State University in 2000, the President's Award for Excellence in Teaching in 2002, and the Career Development Chair Award in 2003. He is a senior member of the IEEE. Brian A. Wims was born in Washington, DC in 1967.
He received the Bachelor of Science in Electrical Engineering from GMI-EMI (now Kettering University) in 1990 and the Master of Science in Computer Engineering from Wayne State University in 1999. His research interests are primarily in the field of parallel and distributed systems, with applications in mobile agent technologies. From 1990 to 2001 he worked in various engineering positions at General Motors, including electrical analysis, software design, and test and development. In 2001, he joined the General Motors IS&S department, where he is currently a Project Manager in the Computer Aided Test group; his responsibilities include managing the development of test automation applications in the Electrical, EMC, and Safety Labs. Ramzi Basharahil was born in Aden, Yemen in 1972. He received the Bachelor of Science degree in Electrical Engineering from the United Arab Emirates University, graduating at the top of his engineering class of 1997. He obtained the Master of Science degree in 2001 from Wayne State University in the Department of Electrical and Computer Engineering. His main research interests are in the field of parallel and distributed systems, with applications to distributed processing across clusters of servers. From 1997 to 1998, he worked as a Teaching Assistant in the Department of Electrical Engineering at the UAE University. In 2000, he joined Internet Security Systems as a security software engineer. He joined NetIQ Corporation in 2002 and has been working there since; he leads the development of security event trending and event management software, where he is involved in designing and implementing event/log management products.
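As background for the directory-based coherence mentioned in the abstract above, the following minimal sketch keeps one directory entry per user-defined sharing unit, granting shared copies on reads and invalidating other holders on writes. It is a generic, textbook-style illustration under simplified assumptions, not the actual DSA protocol; the class and method names are hypothetical.

```python
class Directory:
    """Minimal single-writer/multiple-reader directory sketch,
    one entry per user-defined sharing unit (e.g., an array partition)."""

    def __init__(self):
        self.sharers = {}   # unit id -> set of node ids holding a copy
        self.owner = {}     # unit id -> node id holding the exclusive copy

    def read_request(self, unit, node):
        # Record the new sharer and tell it where to fetch the data from.
        # (A real protocol would also downgrade the previous owner; omitted here.)
        self.sharers.setdefault(unit, set()).add(node)
        return self.owner.get(unit)

    def write_request(self, unit, node):
        # Invalidate all other copies before granting exclusive access.
        to_invalidate = self.sharers.get(unit, set()) - {node}
        self.sharers[unit] = {node}
        self.owner[unit] = node
        return to_invalidate   # caller sends invalidation messages to these nodes
```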

14.
The main contribution of this paper is to propose a new cluster maintenance algorithm and a companion cluster initialization algorithm based on a number of interesting and novel properties of diameter-2 graphs. The initialization algorithm naturally blends into cluster maintenance, showing the unity between these two operations. We refer to our algorithms as tree-based since they depend on a spanning tree maintained at various nodes. Unlike the vast majority of published clustering algorithms, our algorithms are cluster-centric, as opposed to node-centric, and work in the presence of node mobility. Extensive simulation results have shown the effectiveness of our algorithms when compared to other clustering schemes proposed in the literature. Lan Wang received his B.S. and M.S. degrees in computer science from Harbin Engineering University, China in 1992 and 1995, respectively. From 1995 to 1999, he worked as a software engineer at the System Engineering Research Institute of CSSC, Beijing, China. He is currently a PhD student at the Computer Science Department of Old Dominion University. Stephan Olariu received the M.Sc. and Ph.D. degrees in computer science from McGill University, Montreal, Canada in 1983 and 1986, respectively. In 1986 he joined the Computer Science Department at Old Dominion University, where he is now a full professor. He has published extensively in various journals, book chapters, and conference proceedings. His research interests include wireless networks and mobile computing, parallel and distributed systems, performance evaluation, and medical image processing. Prof. Olariu serves on the editorial board of several archival journals including IEEE Transactions on Parallel and Distributed Systems, Journal of Parallel and Distributed Computing, International Journal of Foundations of Computer Science, Journal of Supercomputing, International Journal of Computer Mathematics, VLSI Design, and Parallel Algorithms and Applications.
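The invariant behind such clustering schemes is that every cluster induces a graph of diameter at most 2. The check below is a minimal sketch (with a hypothetical function name) that verifies this property for a cluster given as an adjacency map; the algorithms described above maintain the property incrementally under mobility rather than re-checking it from scratch.

```python
def has_diameter_at_most_2(adj):
    """adj: dict node -> set of neighbour nodes for one cluster.

    Returns True if every pair of nodes is at distance <= 2, i.e. each
    node reaches all others directly or through one intermediate node.
    """
    nodes = set(adj)
    for src in nodes:
        # Nodes reachable from src in at most two hops.
        within_two = {src} | adj[src]
        for nbr in adj[src]:
            within_two |= adj[nbr]
        if within_two != nodes:
            return False
    return True

# Example: a star {0: {1, 2}, 1: {0}, 2: {0}} has diameter 2 -> True
```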

15.
This paper presents a general methodology for the communication-efficient parallelization of graph algorithms using the divide-and-conquer approach and shows that this class of problems can be solved in cluster environments with good communication efficiency. Specifically, the first practical parallel algorithm, based on a general coarse-grained model, for finding Hamiltonian paths in tournaments is presented. On any such parallel machine, this algorithm uses only 3 log p + 1 communication rounds, where p is the number of processors, independent of the tournament size, and it can reuse the existing linear-time algorithm in the sequential setting. For theoretical completeness, the algorithm is revised for fine-grained models, where the ratio of computation to communication throughput is low or the local memory size of each individual processor is extremely limited, solving the problem with O(log p) communication rounds for any ∊ > 0, while the hidden constant grows with the scalability factor 1/∊. Experiments have been carried out on a Linux cluster of 32 Sun Ultra5 computers and an SGI Origin 2000 with 32 R10000 processors. The algorithm's performance on the Linux cluster reaches 75% of the performance on the SGI Origin 2000 when the tournament size is about one million. Computational resources and technical support were provided by the Center for Computational Research (CCR) at the State University of New York at Buffalo. Chun-Hsi Huang received his Ph.D. degree in Computer Science from the State University of New York at Buffalo in 2001. He is currently an Assistant Professor of Computer Science and Engineering at the University of Connecticut. His interests include High Performance Parallel Computing, Cluster and Grid Computing, Biomedical and Health Informatics, Algorithm Design and Analysis, Experimental Algorithms, and Computational Biology. Sanguthevar Rajasekaran received his Ph.D. degree in Computer Science from Harvard University in 1988. Currently he is the UTC Chair Professor of Computer Science and Engineering at the University of Connecticut and the Director of the Booth Engineering Center for Advanced Technologies (BECAT). His research interests include Parallel Algorithms, Bioinformatics, Data Mining, Randomized Computing, Computer Simulations, and Combinatorial Optimization. Laurence Tianruo Yang received his Ph.D. degree in Computer Science from Oxford University. He is currently a professor of Computer Science at St. Francis Xavier University in Canada. His research interests include high-performance computing, embedded systems, computer architecture, and high-speed networking. Xin He received his Ph.D. degree in Computer Science from the Ohio State University in 1987. He is currently Professor of Computer Science and Engineering at the State University of New York at Buffalo. His research interests include Algorithms, Data Structures, Combinatorics, and Computational Geometry.
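For context, a Hamiltonian path always exists in a tournament and can be built by inserting vertices one at a time into a growing path, since every pair of vertices is joined by an arc in one direction. The sketch below shows this classic quadratic-time insertion construction; it only illustrates the sequential building block, not the linear-time routine or the coarse-grained parallel algorithm described above, and the function and parameter names are assumptions of this sketch.

```python
def tournament_hamiltonian_path(beats, n):
    """Return a Hamiltonian path (list of vertices) in a tournament on
    vertices 0..n-1, where beats(a, b) is True iff arc a -> b exists.

    Insert each new vertex u just before the first vertex it beats;
    if u beats no vertex on the path, it must lose to all of them,
    so it is appended at the end.
    """
    path = [0]
    for u in range(1, n):
        for i, v in enumerate(path):
            if beats(u, v):
                path.insert(i, u)
                break
        else:
            path.append(u)
    return path

# Example: beats = lambda a, b: results[a][b], where results[a][b] is True
# if player a defeated player b in a round-robin (hypothetical data).
```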

16.
The Collaboratory for Multi-scale Chemical Science (CMCS) is developing a powerful informatics-based approach to synthesizing multi-scale information in support of systems-based research and is applying it within combustion science. An open source multi-scale informatics toolkit is being developed that addresses a number of issues core to the emerging concept of knowledge grids, including provenance tracking and lightweight federation of data and application resources into cross-scale information flows. The CMCS portal is currently in use by a number of high-profile pilot groups and is playing a significant role in enabling their efforts to improve and extend community-maintained chemical reference information. James D. Myers received his B.A. in Physics from Cornell University in 1985 and his Ph.D. in Chemistry from the University of California at Berkeley in 1993. He is currently the Associate Director for Collaborative Technologies at the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign. Dr. Myers is the lead investigator on the U.S. Department of Energy (DOE) sponsored Scientific Annotation Middleware project (http://www.scidac.org/SAM/) (scientific content management, semantic annotation, and records functionality) and is serving as the Chief Technical Officer for the DOE-sponsored Collaboratory for Multiscale Chemical Science (CMCS) project. He is also the lead architect for the Mid-America Earthquake Center's MAEViz hazard risk management collaboratory and co-lead of NCSA's cybercollaboratory effort related to the Collaborative Large-scale Engineering Analysis Network for Environmental Research (CLEANER). Open source software developed by Dr. Myers and his colleagues, including the electronic laboratory notebook (ELN) and the Collaborative Research Environment (CORE) real-time collaboration environment, has been downloaded from the Pacific Northwest National Laboratory (PNNL) Collaboratory website (http://collaboratory.pnl.gov) by thousands of researchers and educators. Due to space limitations, individual bios for all 28 authors are not shown. The CMCS project is led by Dr. Larry Rahn (rahn@sandia.gov) at Sandia National Laboratories. The team includes combustion researchers and computer science researchers and developers at five DOE National Laboratories (Argonne, Lawrence Livermore, Los Alamos, Pacific Northwest, and Sandia National Laboratories), the National Institute of Standards and Technology, the Massachusetts Institute of Technology, and the University of California, Berkeley. Current contact information and biographic information for team members is available at http://cmcs.org/team.php.

17.
IEEE 802.11b Ad Hoc Networks: Performance Measurements
In this paper we investigate the performance of IEEE 802.11b ad hoc networks by means of an experimental study. An extensive literature, based on simulation studies, exists on the performance of IEEE 802.11 ad hoc networks. Our analysis reveals several aspects that are usually neglected in such simulation studies. First, since different transmission rates are used for control and data frames, different transmission ranges and carrier-sensing ranges may exist in the network at the same time. In addition, the transmission ranges are in practice much shorter than usually assumed in simulation analyses; they are not constant but highly variable (even within the same session) and depend on several factors. Finally, the results presented in this paper indicate that, to correctly understand the behavior of an 802.11b network operating in ad hoc mode, several different ranges must be considered. In addition to the transmission range, the physical carrier-sensing range is very important. The transmission range is highly dependent on the data rate and extends up to 100 m, while the physical carrier-sensing range is almost independent of the data rate and is approximately 200 m. Furthermore, even when stations are outside their respective physical carrier-sensing ranges, they may still interfere if their distance is less than 350 m. Giuseppe Anastasi received the Laurea degree in Electronics Engineering and the Ph.D. degree in Computer Engineering, both from the University of Pisa, Italy, in 1990 and 1995, respectively. He is currently an associate professor of Computer Engineering at the Department of Information Engineering of the University of Pisa. His research interests include architectures and protocols for mobile computing, energy management, QoS in mobile networks, and ad hoc networks. He was a co-editor of the book Advanced Lectures in Networking (LNCS 2497, Springer, 2002), and has published more than 50 papers, in both international journals and conference proceedings, in the area of computer networking. He served on the TPCs of several international conferences, including IFIP Networking 2002 and IEEE PerCom 2003. He is a member of the IEEE Computer Society. Eleonora Borgia received the Laurea degree in Computer Engineering from the University of Pisa, Italy, in 2002. She is currently working toward her Ph.D. degree at the IIT Institute of the Italian National Research Council (CNR). Her research interests are in the area of wireless and mobile networks, with particular attention to MAC protocols and routing algorithms for ad hoc networks. Marco Conti received the Laurea degree in Computer Science from the University of Pisa, Italy, in 1987. In 1987 he joined the Italian National Research Council (CNR). He is currently a senior researcher at CNR-IIT. His research interests include Internet architecture and protocols, wireless networks and ad hoc networking, mobile computing, and QoS in packet-switching networks. He co-authored the book Metropolitan Area Networks (Springer, London, 1997), and has published more than 100 research papers in journals and conference proceedings on the design, modeling, and performance evaluation of computer-network architectures and protocols. He served as the technical program committee chair of the IFIP-TC6 conferences Networking 2002 and PWC 2003, and technical program committee co-chair of ACM WoWMoM 2002. He is serving as technical program committee co-chair of the IEEE Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM 2005).
He served as guest editor for the Cluster Computing Journal (special issue on Mobile Ad Hoc Networking), IEEE Transactions on Computers (special issue on Quality of Service issues in Internet Web Services), and the ACM/Kluwer Mobile Networks & Applications Journal (special issue on Mobile Ad hoc Networks). He is a member of IFIP WGs 6.2, 6.3, and 6.8. Enrico Gregori received the Laurea in electronic engineering from the University of Pisa in 1980. He joined CNUCE, an institute of the Italian National Research Council (CNR), in 1981. He is currently a CNR research director. In 1986 he held a visiting position at the IBM research center in Zurich, working on network software engineering and on heterogeneous networking. He has contributed to several national and international projects on computer networking. He has authored more than 100 papers in the area of computer networks, published in international journals and conference proceedings, and is co-author of the book Metropolitan Area Networks (Springer, London, 1997). He was the General Chair of the IFIP TC6 conferences Networking 2002 and PWC 2003 (Personal Wireless Communications). He served as guest editor for the Networking 2002 special issues of the Performance Evaluation, Cluster Computing, and ACM/Kluwer Wireless Networks journals. He is a member of the board of directors of the Create-Net association, an association with several universities and research centres that fosters research on networking at the European level. He is on the editorial boards of the Cluster Computing, Computer Networks, and Wireless Networks journals. His current research interests include wireless access to the Internet, wireless LANs, quality of service in packet-switching networks, energy-saving protocols, and the evolution of TCP/IP protocols.
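The measured ranges reported above suggest a simple way to reason about a pair of stations from their distance alone, sketched below. The thresholds are the indicative values summarized in the abstract (roughly 100 m transmission, 200 m carrier sensing, 350 m interference); in practice they vary with data rate and environment, and the function name and default parameters are assumptions of this sketch.

```python
def classify_pair(distance_m, tx_range_m=100.0, cs_range_m=200.0, interf_range_m=350.0):
    """Classify the interaction between two 802.11b stations by distance,
    using the indicative ranges from the measurements summarized above."""
    if distance_m <= tx_range_m:
        return "can communicate (within transmission range)"
    if distance_m <= cs_range_m:
        return "defer to each other (within carrier-sensing range)"
    if distance_m <= interf_range_m:
        return "hidden from each other but may still interfere"
    return "independent (no interaction)"

# Example: classify_pair(250) -> "hidden from each other but may still interfere"
```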

18.
When an individual grows up in a society, he learns certain behavior patterns which are “accepted” by that society. He may in general have a tendency toward behavior patterns other than those which are “accepted” by the society. This tendency toward such unaccepted behavior may be due to a process of cerebration which results in doubt as to the “correctness” of the accepted behavior. Thus, on the one hand, the individual learns to follow the accepted rules almost automatically; on the other hand, he may tend to consciously break those rules. Using a neural circuit, suggested by H. D. Landahl in his theory of learning, a neurobiophysical interpretation of the above situation is outlined. Mathematical expressions are derived which describe the social behavior of an individual as a function of his age, social status, and some neurobiophysical parameters.

19.
The complexity of human societies of the past few thousand years rivals that of social insect societies. We hypothesize that two sets of social “instincts” underpin and constrain the evolution of complex societies. One set is ancient and shared with other social primate species, and one is derived and unique to our lineage. The latter evolved by the late Pleistocene, and led to the evolution of institutions of intermediate complexity in acephalous societies. The institutions of complex societies often conflict with our social instincts. The complex societies of the past few thousand years can function only because cultural evolution has created effective “work-arounds” to manage such instincts. We describe a series of work-arounds and use the data on the relative effectiveness of WWII armies to test the work-around hypothesis. Richerson received his Ph.D. degree in zoology from UC Davis in 1969. He is currently a professor in the Department of Environmental Science and Policy. In addition to his work in cultural evolution, he has worked on the limnology of Lake Tahoe and Clear Lake in California, and on Lake Titicaca in Peru and Bolivia. Boyd received his Ph.D. degree in ecology from UC Davis in 1975, though his thesis work was a resource economics problem. He is currently a professor of anthropology at UCLA. His research interests besides cultural evolution are game theory and a small bit of primatology from time to time.

20.
An attacker’s connection can propagate quickly to different parts of a transparent all-optical network. Such attacks affect normal traffic and cause quality-of-service degradation or outright denial of service. Attack monitors can collect information from each link and each node to help diagnose the attacker’s exact location. Quick detection and localization of an attack source helps avoid losing large amounts of data in an all-optical network. However, detecting attack sources does not require placing monitors on all nodes. Since not every wavelength on every link is in use at all times, we propose to use the idle wavelengths to set up diagnostic connections and obtain the information needed for diagnosis. We show that placing a relatively small number of monitors at some key nodes in a network is sufficient to achieve the desired level of performance. However, the monitor placement policy, routing policy, and diagnosis method are challenging problems. In this paper, we first develop a monitor placement policy, a test connection policy, and a routing policy based on our definitions of the crosstalk attack and monitor node models. With these policies, we show that we can always detect and localize the malicious connections as long as there is no more than one malicious connection on each wavelength in the whole network. We then develop a scalable diagnosis method that can quickly localize the sources of such malicious attacks. Arun K. Somani is currently Jerry R. Junkins Chair Professor of Electrical and Computer Engineering at Iowa State University. He earned his MSEE and Ph.D. degrees in electrical engineering from McGill University, Montreal, Canada, in 1983 and 1985, respectively. He worked as a Scientific Officer for the Government of India, New Delhi, from 1974 to 1982. From 1985 to 1997, he was a faculty member at the University of Washington, Seattle, WA, where he was a Professor of EE and CSE from 1995 onwards. From 1997 to 2002, he served as David C. Nicholas Professor of Electrical and Computer Engineering at Iowa State University. Professor Somani’s research interests are in the areas of fault-tolerant computing, computer communication and networks, wireless and optical networking, computer architecture, and parallel computer systems. Tao Wu received the B.S. and M.S.E.E. degrees in telecommunication engineering from the University of Electronic Science and Technology of China, Sichuan, China, in 1993 and 1996, respectively, and the Ph.D. degree in computer and electrical engineering from Iowa State University, Ames, in 2003. He is currently a Software Engineer with Microsoft Corporation. His research interests are in the areas of WDM-based optical networking, network security, and image processing.
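Under a deliberately simplified model, single-attacker localization on one wavelength can be illustrated as set intersection over monitor observations, as sketched below. The toy assumption is that a monitor alarms exactly when the malicious connection traverses its node; the function name and data layout are hypothetical and do not reproduce the paper’s actual placement, test connection, or diagnosis policies.

```python
def localize_attacker(connections, monitors, alarms):
    """Return the connection ids consistent with monitor observations.

    connections: dict conn_id -> set of node ids the connection traverses
                 (all on the same wavelength).
    monitors:    set of node ids equipped with monitors.
    alarms:      subset of monitors that raised an alarm.
    Toy assumption: a monitor alarms iff the single malicious connection
    on this wavelength passes through its node.
    """
    quiet = monitors - alarms
    return [cid for cid, nodes in connections.items()
            if alarms <= nodes and not (nodes & quiet)]

# Example (hypothetical topology): with connections
# {"c1": {1, 2, 3}, "c2": {3, 4, 5}}, monitors {2, 4}, alarms {2},
# only "c1" is consistent with the observations.
```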
