Similar Documents (20 results)
1.
Distributed Shared Arrays (DSA) is a distributed virtual machine that supports Java-compliant multithreaded programming with mobility support for system reconfiguration in distributed environments. The DSA programming model allows programmers to explicitly control data distribution so as to take advantage of the deep memory hierarchy, while relieving them of the error-prone orchestration of communication and synchronization at run-time. The DSA system is developed as an integral component of mobility support middleware for Grid computing, so that DSA-based virtual machines can be reconfigured to adapt to varying resource supply and demand over the course of a computation. The DSA runtime system also features a directory-based cache coherence protocol in support of replication at user-defined sharing granularity, and a communication proxy mechanism for reducing network contention. System reconfiguration is achieved by a DSA service migration mechanism, which moves the DSA service and resident computational agents between physical servers for load balancing and fault resilience. We demonstrate the programmability of the model in a number of parallel applications and evaluate its performance with application benchmark programs, examining in particular the impact of coherence granularity and service migration overhead.

Song Fu received the BS degree in computer science from Nanjing University of Aeronautics and Astronautics, China, in 1999, and the MS degree in computer science from Nanjing University, China, in 2002. He is currently a PhD candidate in computer engineering at Wayne State University. His research interests include resource management, security, and mobility issues in wide-area distributed systems. Cheng-Zhong Xu received the BS and MS degrees in computer science from Nanjing University in 1986 and 1989, respectively, and the Ph.D. degree in computer science from the University of Hong Kong in 1993. He is an Associate Professor in the Department of Electrical and Computer Engineering at Wayne State University. His research interests are in distributed and parallel systems, particularly resource management for high-performance cluster and grid computing and scalable and secure Internet services. He has published more than 100 peer-reviewed articles in journals and conference proceedings in these areas. He is the author of the book Scalable and Secure Internet Services and Architecture (CRC Press, 2005) and a co-author of the book Load Balancing in Parallel Computers: Theory and Practice (Kluwer Academic, 1997). He serves on the editorial boards of the J. of Parallel and Distributed Computing, J. of Parallel, Emergent, and Distributed Systems, J. of High Performance Computing and Networking, and J. of Computers and Applications. He was the founding program co-chair of the International Workshop on Security in Systems and Networks (SSN), the general co-chair of the IFIP 2006 International Conference on Embedded and Ubiquitous Computing (EUC06), and a member of the program committees of numerous conferences. His research was supported in part by the US National Science Foundation, NASA, and Cray Research. He is a recipient of the Faculty Research Award of Wayne State University in 2000, the President's Award for Excellence in Teaching in 2002, and the Career Development Chair Award in 2003. He is a senior member of the IEEE. Brian A. Wims was born in Washington, DC in 1967. He received the Bachelor of Science in Electrical Engineering from GMI-EMI (now called Kettering University) in 1990, and the Master of Science in Computer Engineering from Wayne State University in 1999. His research interests are primarily in the fields of parallel and distributed systems with applications in Mobile Agent technologies. From 1990 to 2001 he worked in various engineering positions at General Motors, including Electrical Analysis, Software Design, and Test and Development. In 2001, he joined the General Motors IS&S department, where he is currently a Project Manager in the Computer Aided Test group. His responsibilities include managing the development of test automation applications in the Electrical, EMC, and Safety Labs. Ramzi Basharahil was born in Aden, Yemen in 1972. He received the Bachelor of Science degree in Electrical Engineering from the United Arab Emirates University, graduating at the top of his engineering class of 1997. He obtained his Master of Science degree in 2001 from Wayne State University in the Department of Electrical and Computer Engineering. His main research interests are primarily in the fields of parallel and distributed systems with applications to distributed processing across clusters of servers. From 1997 to 1998, he worked as a Teaching Assistant in the Department of Electrical Engineering at the UAE University. In 2000, he joined Internet Security Systems as a security software engineer. He joined NetIQ Corporation in 2002, where he has worked since. He leads security event trending and event management software development, where he is involved in designing and implementing event/log management products.
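To make the coherence mechanism concrete, here is a minimal sketch of a directory-based invalidation protocol operating at user-defined block granularity, in the spirit of the DSA runtime described above. The class and method names are illustrative only, not the actual DSA API.

```python
# Minimal sketch of directory-based coherence over user-defined blocks.
# Illustrative names; not the DSA implementation.

class Directory:
    """Tracks, per block, which nodes hold a valid copy (invalidation-based)."""
    def __init__(self):
        self.sharers = {}   # block_id -> set of node ids holding a valid copy

    def on_read(self, block_id, node):
        # A read miss adds the reader to the sharer set; the data itself
        # would be fetched from any current holder (not modeled here).
        self.sharers.setdefault(block_id, set()).add(node)

    def on_write(self, block_id, node):
        # A write invalidates all other replicas before granting access,
        # keeping copies coherent at the granularity the user chose.
        invalidated = self.sharers.get(block_id, set()) - {node}
        self.sharers[block_id] = {node}
        return invalidated  # the invalidation messages the runtime would send

d = Directory()
d.on_read("A[0:1024]", node=1)
d.on_read("A[0:1024]", node=2)
print(d.on_write("A[0:1024]", node=1))  # -> {2}: node 2's copy is invalidated
```

The coarser the user-defined block, the fewer directory entries but the more false sharing, which is exactly the coherence-granularity trade-off the benchmarks above examine.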

2.
The proper placement of wells in oil and environmental applications has a significant economic impact on reservoir management. However, determining optimal well locations is both challenging and computationally expensive. The overall goal of this research is to use the emerging Grid infrastructure to realize an autonomic self-optimizing reservoir framework. In this paper, we present a policy-driven peer-to-peer Grid middleware substrate that enables the use of the Simultaneous Perturbation Stochastic Approximation (SPSA) optimization algorithm, coupled with the Integrated Parallel Accurate Reservoir Simulator (IPARS) and an economic model, to find the optimal solution for the well placement problem.

Wolfgang Bangerth is a postdoctoral research fellow at both the Institute for Computational Engineering and Sciences and the Institute for Geophysics at the University of Texas at Austin. He obtained his Ph.D. in applied mathematics from the University of Heidelberg, Germany, in 2002. He is the project leader for the deal.II finite element library (http://www.dealii.org). Wolfgang is a member of SIAM, AAAS, and ACM. Hector Klie obtained his Ph.D. degree in Computational Science and Engineering from Rice University in 1996; he completed his Master's and undergraduate degrees in Computer Science at Simon Bolivar University, Venezuela, in 1991 and 1989, respectively. Hector Klie's main research interests are in the development of efficient parallel linear and nonlinear solvers and optimization algorithms for large-scale transport and flow problems in porous media. He currently holds the position of Associate Director and Senior Research Associate in the Center for Subsurface Modeling at the Institute of Computational Science and Engineering at The University of Texas at Austin. Dr. Klie is a current member of SIAM, SPE, and SEG. Vincent Matossian obtained a Master's in applied physics from the French Université Pierre et Marie Curie. Vincent is currently pursuing a Ph.D. degree in distributed systems at the Department of Electrical and Computer Engineering at Rutgers University under the guidance of Manish Parashar. His research interests include information discovery and ad-hoc communication paradigms in decentralized systems. Manish Parashar is Professor of Electrical and Computer Engineering at Rutgers University, where he is also director of the Applied Software Systems Laboratory. He received a BE degree in Electronics and Telecommunications from Bombay University, India, and MS and Ph.D. degrees in Computer Engineering from Syracuse University. He has received the Rutgers Board of Trustees Award for Excellence in Research (2004–2005), the NSF CAREER Award (1999), and the Enrico Fermi Scholarship from Argonne National Laboratory (1996). His research interests include autonomic computing, parallel & distributed computing (including peer-to-peer and Grid computing), scientific computing, and software engineering. He is a senior member of IEEE, a member of the IEEE Computer Society Distinguished Visitor Program (2004–2007), and a member of ACM. Mary Fanett Wheeler obtained her Ph.D. at Rice University in 1971. Her primary research interest is in the numerical solution of partial differential systems with applications to flow in porous media, geomechanics, surface flow, and parallel computation. Her numerical work includes the formulation, analysis, and implementation of finite-difference/finite-element discretization schemes for nonlinear, coupled PDEs, as well as domain decomposition iterative solution methods. She has directed the Center for Subsurface Modeling at The University of Texas at Austin since its creation in 1990. Dr. Wheeler is the recipient of the Ernest and Virginia Cockrell Chair in Engineering and is a Professor in the Department of Aerospace Engineering & Engineering Mechanics and in the Department of Petroleum & Geosystems Engineering of The University of Texas at Austin.
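SPSA itself is a standard published algorithm: it estimates a gradient from only two objective evaluations per step, regardless of the number of parameters, which is what makes it attractive when each evaluation is an expensive reservoir simulation. The sketch below uses a toy objective as a stand-in for the simulator-plus-economic-model; the gain constants are textbook defaults, not values from the paper.

```python
import random

def spsa_minimize(loss, theta, iters=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """Textbook SPSA: two loss evaluations per iteration, independent of
    the dimension of theta."""
    for k in range(1, iters + 1):
        ak, ck = a / k**alpha, c / k**gamma
        delta = [random.choice([-1.0, 1.0]) for _ in theta]   # Rademacher perturbation
        plus  = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2.0 * ck)           # common gradient factor
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# Toy stand-in for "negative economic value of a well placed at (x, y)".
well_value = lambda p: (p[0] - 3.0)**2 + (p[1] + 1.0)**2
print(spsa_minimize(well_value, [0.0, 0.0]))  # should end near (3, -1)
```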

3.
While aggregating the throughput of existing disks on cluster nodes is a cost-effective approach to alleviating the I/O bottleneck in cluster computing, this approach suffers from potential performance degradation due to contention for shared resources on the same node between storage data processing and user task computation. This paper proposes to judiciously utilize the storage redundancy, in the form of mirroring, that exists in a RAID-10-style file system to alleviate this performance degradation. More specifically, a heuristic scheduling algorithm is developed, motivated by observations of a simple cluster configuration, to spatially schedule write operations on the less-loaded node of each mirroring pair. The duplication of modified data to the mirroring nodes is performed asynchronously in the background. Read performance is improved by two techniques: doubling the degree of parallelism and hot-spot skipping. A synthetic benchmark is used to evaluate these algorithms in a real cluster environment, and the proposed algorithms are shown to be very effective in performance enhancement.

Yifeng Zhu received his B.Sc. degree in Electrical Engineering in 1998 from Huazhong University of Science and Technology, Wuhan, China, and the M.S. and Ph.D. degrees in Computer Science from the University of Nebraska–Lincoln in 2002 and 2005, respectively. He is an assistant professor in the Electrical and Computer Engineering department at the University of Maine. His main research interests are cluster computing, grid computing, computer architecture and systems, and parallel I/O storage systems. Dr. Zhu is a Member of ACM, IEEE, the IEEE Computer Society, and the Francis Crowe Society. Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China; the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in Computer Science in 1991 from Texas A&M University, College Station, Texas, USA. Since August 1991 he has been at the University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Professor and Vice Chair in the Department of Computer Science and Engineering. His present research interests are computer architecture, parallel/distributed computing, cluster and Grid computing, computer storage systems and parallel I/O, performance evaluation, real-time systems, middleware, and distributed systems for distance education. He has over 100 publications in major journals and international conferences in these areas, and his research has been supported by NSF, DOD, and the State of Nebraska. Dr. Jiang is a Member of ACM, the IEEE Computer Society, and the ACM SIGARCH. Xiao Qin received the BS and MS degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999, respectively. He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. Currently, he is an assistant professor in the Department of Computer Science at the New Mexico Institute of Mining and Technology. He served as a subject area editor of IEEE Distributed System Online (2000–2001). His research interests are in parallel and distributed systems, storage systems, real-time computing, performance evaluation, and fault-tolerance. He is a member of the IEEE. Dan Feng received the Ph.D. degree from Huazhong University of Science and Technology, Wuhan, China, in 1997. She is currently a professor in the School of Computer Science, Huazhong University of Science and Technology, Wuhan, China. She is the principal scientist of the National Grand Fundamental Research 973 Program of China “Research on the organization and key technologies of the Storage System on the next generation Internet.” Her research interests include computer architecture, storage systems, parallel I/O, massive storage, and performance evaluation. David Swanson received a Ph.D. in physical (computational) chemistry at the University of Nebraska-Lincoln (UNL) in 1995, after which he worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997 to 1998. In 1999 he returned to UNL, where he directs the Research Computing Facility and currently serves as an Assistant Research Professor in the Department of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the State of Nebraska have supported his research in areas such as large-scale scientific simulation and distributed systems.
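The core scheduling idea is easy to sketch: within each mirroring pair, the synchronous write goes to whichever node currently carries less load, and the duplicate is queued for background propagation. The following is an illustrative toy, with invented load units, not the paper's code.

```python
# Illustrative sketch of spatial write scheduling over a mirroring pair.
from collections import deque

class MirrorPair:
    def __init__(self, a, b):
        self.nodes = {a: 0.0, b: 0.0}   # node -> current load estimate
        self.pending = deque()          # deferred duplications

    def write(self, block, cost):
        primary = min(self.nodes, key=self.nodes.get)  # pick less-loaded node
        self.nodes[primary] += cost
        mirror = next(n for n in self.nodes if n != primary)
        self.pending.append((mirror, block))           # duplicate asynchronously
        return primary

    def flush_one(self):
        # Runs in the background when the mirror node is idle, so the
        # duplication cost stays off the write's critical path.
        if self.pending:
            node, _block = self.pending.popleft()
            self.nodes[node] += 0.1

pair = MirrorPair("node0", "node1")
print(pair.write("blk42", cost=1.0))   # -> node0
print(pair.write("blk43", cost=1.0))   # -> node1 (load now balanced)
```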

4.
Application scheduling plays an important role in high-performance cluster computing. Application scheduling can be classified into job scheduling and task scheduling. This paper presents a survey of software tools for graph-based scheduling on cluster systems, with a focus on task scheduling. The tasks of a parallel or distributed application can be properly scheduled onto multiple processors in order to optimize the performance of the program (e.g., execution time or resource utilization). In general, scheduling algorithms are designed based on the notion of a task graph that represents the relationships among parallel tasks. The scheduling algorithms map the nodes of a graph to the processors in order to minimize overall execution time. Although many scheduling algorithms have been proposed in the literature, surprisingly few practical tools can be found in actual use. After discussing the fundamental scheduling techniques, we propose a framework and taxonomy for scheduling tools on clusters. Using this framework, the features of existing scheduling tools are analyzed and compared. We also discuss the important issues in improving the usability of the scheduling tools. This work is supported by the Hong Kong Polytechnic University under grant H-ZJ80 and by NASA Ames Research Center under a cooperative grant agreement with the University of Texas at Arlington.

Jiannong Cao received the BSc degree in computer science from Nanjing University, Nanjing, China in 1982, and the MSc and Ph.D degrees in computer science from Washington State University, Pullman, WA, USA, in 1986 and 1990 respectively. He is currently an associate professor in the Department of Computing at the Hong Kong Polytechnic University, Hong Kong. He is also the director of the Internet and Mobile Computing Lab in the department. He was on the faculty of computer science at James Cook University and the University of Adelaide in Australia, and at City University of Hong Kong. His research interests include parallel and distributed computing, networking, mobile computing, fault tolerance, and distributed software architecture and tools. He has published over 120 technical papers in the above areas. He has served as a member of the editorial boards of several international journals, a reviewer for international journals/conference proceedings, and an organizing/programme committee member for many international conferences. Dr. Cao is a member of the IEEE Computer Society, the IEEE Communication Society, IEEE, and ACM. He is also a member of the IEEE Technical Committee on Distributed Processing, the IEEE Technical Committee on Parallel Processing, the IEEE Technical Committee on Fault Tolerant Computing, and the Computer Architecture Professional Committee of the China Computer Federation. Alvin Chan is currently an assistant professor at the Hong Kong Polytechnic University. He graduated from the University of New South Wales with a Ph.D. degree in 1995 and was subsequently employed as a Research Scientist by CSIRO, Australia. From 1997 to 1998, he was employed by the Centre for Wireless Communications, National University of Singapore, as a Program Manager. Dr. Chan is one of the founding members and a director of a university spin-off company, Information Access Technology Limited. He is an active consultant and has been providing consultancy services to both local and overseas companies. His research interests include mobile computing, context-aware computing, and smart card applications. Yudong Sun received the B.S. and M.S. degrees from Shanghai Jiao Tong University, China, and the Ph.D. degree from the University of Hong Kong in 2002, all in computer science. From 1988 to 1996, he was among the teaching staff in the Department of Computer Science and Engineering at Shanghai Jiao Tong University. From 2002 to 2003, he held a research position at the Hong Kong Polytechnic University. At present, he is a Research Associate in the School of Computing Science at the University of Newcastle upon Tyne, UK. His research interests include parallel and distributed computing, Web services, Grid computing, and bioinformatics. Sajal K. Das is currently a Professor of Computer Science and Engineering and the Founding Director of the Center for Research in Wireless Mobility and Networking (CReWMaN) at the University of Texas at Arlington. His current research interests include resource and mobility management in wireless networks, mobile and pervasive computing, sensor networks, mobile internet, parallel processing, and grid computing. He has published over 250 research papers and holds four US patents in wireless mobile networks. He received Best Paper Awards at ACM MobiCom’99, ICOIN-16, ACM MSWiM’00, and ACM/IEEE PADS’97. Dr. Das serves on the Editorial Boards of IEEE Transactions on Mobile Computing, ACM/Kluwer Wireless Networks, Parallel Processing Letters, and the Journal of Parallel Algorithms and Applications. He served as General Chair of IEEE PerCom’04, IWDC’04, MASCOTS’02, and ACM WoWMoM’00-02; General Vice Chair of IEEE PerCom’03, ACM MobiCom’00, and IEEE HiPC’00-01; Program Chair of IWDC’02 and WoWMoM’98-99; TPC Vice Chair of ICPADS’02; and as a TPC member of numerous IEEE and ACM conferences. Minyi Guo received his Ph.D. degree in information science from the University of Tsukuba, Japan, in 1998. From 1998 to 2000, Dr. Guo was a research scientist at NEC Soft, Ltd., Japan. He is currently a professor in the Department of Computer Software, The University of Aizu, Japan. From 2001 to 2003, he was a visiting professor at Georgia State University, USA, and the Hong Kong Polytechnic University, Hong Kong. Dr. Guo has served as general chair, program committee chair, or organizing committee chair for many international conferences, and has delivered more than 20 invited talks in the USA, Australia, China, and Japan. He is the editor-in-chief of the Journal of Embedded Systems. He also serves on the editorial boards of the International Journal of High Performance Computing and Networking, the Journal of Embedded Computing, the Journal of Parallel and Distributed Scientific and Engineering Computing, and the International Journal of Computer and Applications. Dr. Guo's research interests include parallel and distributed processing, parallelizing compilers, data parallel languages, data mining, molecular computing, and software engineering. He is a member of the ACM, IEEE, IEEE Computer Society, and IEICE. He is listed in Marquis Who's Who in Science and Engineering.
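Most of the surveyed tools build on some variant of list scheduling over the task graph: order tasks by precedence, then greedily map each ready task to the processor that can start it earliest. A minimal sketch of that idea follows, with an invented cost model; actual tools add communication costs, priorities, and clustering.

```python
# Minimal list-scheduling sketch over a task DAG (illustrative only).

def list_schedule(tasks, deps, cost, nprocs):
    """tasks: ids in any topological-friendly order; deps: task -> set of
    predecessors; cost: task -> execution time. Returns task -> (proc, start)."""
    finish, placed, free = {}, {}, [0.0] * nprocs
    ready = [t for t in tasks if not deps[t]]
    while ready:
        t = ready.pop(0)
        earliest = max([finish[p] for p in deps[t]], default=0.0)
        proc = min(range(nprocs), key=lambda q: max(free[q], earliest))
        start = max(free[proc], earliest)
        placed[t], finish[t] = (proc, start), start + cost[t]
        free[proc] = finish[t]
        # Newly ready tasks: all predecessors finished, not yet queued.
        ready += [u for u in tasks if u not in placed and u not in ready
                  and all(p in finish for p in deps[u])]
    return placed

deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(list_schedule(["a", "b", "c", "d"], deps, dict.fromkeys("abcd", 1.0), 2))
```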

5.
When users’ tasks in a distributed heterogeneous computing environment (e.g., a cluster of heterogeneous computers) are allocated resources, the total demand placed on some system resources by the tasks, for a given interval of time, may exceed the availability of those resources. In such a case, some tasks may receive degraded service or be dropped from the system. One part of a measure to quantify the success of a resource management system (RMS) in such a distributed environment is the collective value of the tasks completed during an interval of time, as perceived by the user, application, or policy maker. The Flexible Integrated System Capability (FISC) measure presented here quantifies this collective value. The FISC measure is a flexible multi-dimensional measure into which any task attribute can be incorporated; attributes may include priorities, versions of a task or data, deadlines, situational mode, security, application- and domain-specific QoS, and task dependencies. For an environment where it is important to investigate how well data communication requests are satisfied, the data communication requests satisfied can be the basis of the FISC measure instead of the tasks completed. The motivation behind the FISC measure is to determine the performance of resource management schemes when tasks have multiple attributes that need to be satisfied. The goal of this measure is to compare the results of different resource management heuristics that are trying to achieve the same performance objective but with different approaches. This research was supported by the DARPA/ITO Quorum Program, by the DARPA/ISO BADD Program and the Office of Naval Research under ONR grant number N00014-97-1-0804, by the DARPA/ITO AICE program under contract numbers DABT63-99-C-0010 and DABT63-99-C-0012, and by the Colorado State University George T. Abell Endowment. Intel and Microsoft donated some of the equipment used in this research.

Jong-Kook Kim is pursuing a Ph.D. degree in the School of Electrical and Computer Engineering at Purdue University (expected in August 2004). Jong-Kook received his M.S. degree in electrical and computer engineering from Purdue University in May 2000. He received his B.S. degree in electronic engineering from Korea University, Seoul, Korea, in 1998. He has presented his work at several international conferences and has been a reviewer for numerous conferences and journals. His research interests include heterogeneous distributed computing, computer architecture, performance measures, resource management, evolutionary heuristics, and power-aware computing. He is a student member of the IEEE, IEEE Computer Society, and ACM. Debra Hensgen is a member of the Research and Evaluation Team at OpenTV in Mountain View, California. OpenTV produces middleware for set-top boxes in support of interactive television. She received her Ph.D. in the area of Distributed Operating Systems from the University of Kentucky. Prior to moving to private industry, as an Associate Professor in the systems area, she worked with students and colleagues to design and develop tools and systems for resource management, network re-routing algorithms and systems that preserve quality of service guarantees, and visualization tools for performance debugging of parallel and distributed systems. She has published numerous papers concerning her contributions to the Concurra toolkit for automatically generating safe, efficient concurrent code, the Graze parallel processing performance debugger, the SAAM path information base, and the SmartNet and MSHN Resource Management Systems. Taylor Kidd is currently a Software Architect for Vidiom Systems in Portland, Oregon. His current work involves the writing of multi-company industrial specifications and the architecting of software systems for the digital cable television industry. He has been involved in the establishment of international specifications for digital interactive television in both Europe and the US. Prior to his current position, Dr. Kidd was a researcher for the US Navy as well as an Associate Professor at the Naval Postgraduate School. Dr. Kidd received his Ph.D. in Electrical Engineering in 1991 from the University of California, San Diego. H. J. Siegel was appointed the George T. Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering at Colorado State University (CSU) in August 2001, where he is also a Professor of Computer Science. In December 2002, he became the first Director of the CSU Information Science and Technology Center (ISTeC). ISTeC is a university-wide organization for promoting, facilitating, and enhancing CSU's research, education, and outreach activities pertaining to the design and innovative application of computer, communication, and information systems. From 1976 to 2001, he was a professor at Purdue University. He received two BS degrees from MIT, and the MA, MSE, and PhD degrees from Princeton University. His research interests include parallel and distributed computing, heterogeneous computing, robust computing systems, parallel algorithms, parallel machine interconnection networks, and reconfigurable parallel computer systems. He has co-authored over 300 published papers on parallel and distributed computing and communication, is an IEEE Fellow, is an ACM Fellow, was a Coeditor-in-Chief of the Journal of Parallel and Distributed Computing, and was on the Editorial Boards of both the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. He was Program Chair/Co-Chair of three major international conferences, General Chair/Co-Chair of four international conferences, and Chair/Co-Chair of five workshops. He has been an international keynote speaker and tutorial lecturer, and has consulted for industry and government. David St. John is Chief Information Officer for WeatherFlow, Inc., a weather services company specializing in coastal weather observations and forecasts. He received a master's degree in Engineering from the University of California, Irvine. He spent several years as the head of staff on the Management System for Heterogeneous Networks project in the Computer Science Department of the Naval Postgraduate School. His current relationship with cluster computing is as a user of the Regional Atmospheric Modeling System (RAMS), a numerical weather model developed at Colorado State University. WeatherFlow runs RAMS operationally on a Linux-based cluster. Cynthia Irvine is a Professor of Computer Science at the Naval Postgraduate School in Monterey, California. She received her Ph.D. from Case Western Reserve University and her B.A. in Physics from Rice University. She joined the faculty of the Naval Postgraduate School in 1994. Previously she worked in industry on the development of high assurance secure systems. In 2001, Dr. Irvine received the Naval Information Assurance Award. Dr. Irvine is the Director of the Center for Information Systems Security Studies and Research at the Naval Postgraduate School. She has served on special panels for NSF, DARPA, and OSD. In the area of computer security education, Dr. Irvine has most recently served as the general chair of the Third World Conference on Information Security Education and the Fifth Workshop on Education in Computer Security. She co-chaired the NSF workshop on Cyber-security Workforce Needs Assessment and Educational Innovation and was a participant in the Computing Research Association/NSF-sponsored Grand Challenges in Information Assurance meeting. She is a member of the editorial board of the Journal of Information Warfare and has served as a reviewer and/or program committee member for a variety of security-related conferences. She has written over 100 papers and articles and has supervised the work of over 80 students. Professor Irvine is a member of the ACM, the AAS, a life member of the ASP, and a Senior Member of the IEEE. Timothy E. Levin is a Research Associate Professor at the Naval Postgraduate School. He has spent over 18 years working in the design, development, evaluation, and verification of secure computer systems, including operating systems, databases, and networks. His current research interests include high assurance system design and analysis, development of models and methods for the dynamic selection of QoS security attributes, and the application of formal methods to the development of secure computer systems. Viktor K. Prasanna received his BS in Electronics Engineering from Bangalore University and his MS from the School of Automation, Indian Institute of Science. He obtained his Ph.D. in Computer Science from the Pennsylvania State University in 1983. Currently, he is a Professor in the Department of Electrical Engineering as well as in the Department of Computer Science at the University of Southern California, Los Angeles. He is also an associate member of the Center for Applied Mathematical Sciences (CAMS) at USC. He served as the Division Director for the Computer Engineering Division during 1994–98. His research interests include parallel and distributed systems, embedded systems, configurable architectures, and high performance computing. Dr. Prasanna has published extensively and consulted for industries in the above areas. He has served on the organizing committees of several international meetings in VLSI computations, parallel computation, and high performance computing. He is the Steering Co-chair of the International Parallel and Distributed Processing Symposium [the merger of the IEEE International Parallel Processing Symposium (IPPS) and the Symposium on Parallel and Distributed Processing (SPDP)] and is the Steering Chair of the International Conference on High Performance Computing (HiPC). He serves on the editorial boards of the Journal of Parallel and Distributed Computing and the Proceedings of the IEEE. He is the Editor-in-Chief of the IEEE Transactions on Computers. He was the founding Chair of the IEEE Computer Society Technical Committee on Parallel Processing. He is a Fellow of the IEEE. Richard F. Freund is the originator of GridIQ's network scheduling concepts, which arose from mathematical and computing approaches he developed for the Department of Defense in the early 1980s. Dr. Freund has over twenty-five years of experience in computational mathematics, algorithm design, high performance computing, distributed computing, network planning, and heterogeneous scheduling. Since 1989, Dr. Freund has published over 45 journal articles in these fields. He has also been an editor of special issues of IEEE Computer and the Journal of Parallel and Distributed Computing. In addition, he is a founder of the Heterogeneous Computing Workshop, held annually in conjunction with the International Parallel Processing Symposium. Dr. Freund is the recipient of many awards, including the prestigious Department of Defense Meritorious Civilian Service Award in 1984 and the Lauritsen-Bennet Award from the Space and Naval Warfare Systems Command in San Diego, California.
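To give a feel for how a multi-attribute collective value can be computed, here is a hedged sketch in which each completed task contributes its priority weight scaled by per-attribute factors in [0, 1]. The weights and factor functions below are invented for illustration; the paper defines the actual FISC formulation.

```python
# Hedged sketch of a FISC-style collective value (illustrative formula).

def collective_value(completed):
    total = 0.0
    for task in completed:
        v = task["priority"]             # base worth to the user or policy maker
        v *= 1.0 if task["met_deadline"] else task["late_credit"]  # deadline dimension
        v *= task["version_quality"]     # e.g., full-precision vs degraded version
        v *= task["security_level"]      # fraction of requested security achieved
        total += v
    return total

done = [
    {"priority": 10, "met_deadline": True,  "late_credit": 0.0,
     "version_quality": 1.0, "security_level": 1.0},
    {"priority": 4,  "met_deadline": False, "late_credit": 0.5,
     "version_quality": 0.8, "security_level": 1.0},
]
print(collective_value(done))  # 10 + 4*0.5*0.8 = 11.6
```

Two heuristics pursuing the same objective can then be compared by the collective value each achieves over the same interval, which is exactly the comparison the measure is meant to enable.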

6.
As computing technology becomes more pervasive and mobile services are deployed, applications will need flexible access control mechanisms. Although a great deal of research has been done on access control, these efforts focus on relatively static scenarios where access depends on the identity of the subject. They do not address access control issues for pervasive applications, where the access privileges of a subject depend not only on its identity but also on its current context and state. In this paper, we present the SESAME dynamic context-aware access control mechanism for pervasive applications. SESAME complements current authorization mechanisms to dynamically grant and adapt permissions to users based on their current context. The underlying dynamic role-based access control (DRBAC) model extends the classic role-based access control (RBAC) model. We also present a prototype implementation of SESAME and DRBAC with the Discover computational collaboratory and an experimental evaluation of its overheads.

Guangsen Zhang is a Ph.D. student in the Department of Electrical and Computer Engineering at Rutgers University. He received his MS from Rutgers University. His research interests include parallel & distributed computing and distributed system security. Manish Parashar is an Associate Professor in the Department of Electrical and Computer Engineering at Rutgers University. His research interests include autonomic computing, parallel & distributed computing, scientific computing, and software engineering.
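The DRBAC idea reduces to re-evaluating a user's active role, and hence permissions, whenever context changes. The sketch below illustrates this with a context-driven role-assignment function; the role names, rule format, and permission sets are invented, not SESAME's actual interfaces.

```python
# Toy sketch of dynamic role-based access control (DRBAC).

PERMS = {"doctor_on_site": {"read_chart", "write_chart"},
         "doctor_remote":  {"read_chart"},
         "guest":          set()}

def active_role(identity, ctx):
    # Role assignment driven by context variables, not identity alone.
    if identity != "doctor":
        return "guest"
    return "doctor_on_site" if ctx["location"] == "hospital" else "doctor_remote"

def check(identity, ctx, perm):
    return perm in PERMS[active_role(identity, ctx)]

print(check("doctor", {"location": "hospital"}, "write_chart"))  # True
print(check("doctor", {"location": "cafe"}, "write_chart"))      # False: context downgraded the role
```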

7.
Load balancing in a workstation-based cluster system has been investigated extensively, mainly focusing on the effective usage of global CPU and memory resources. However, if a significant portion of the applications running in the system is I/O-intensive, traditional load balancing policies can cause system performance to decrease substantially. In this paper, two I/O-aware load-balancing schemes, referred to as IOCM and WAL-PM, are presented to improve the overall performance of a cluster system with a general and practical workload including I/O activities. The proposed schemes dynamically detect I/O load imbalance among the nodes of a cluster, and determine whether to migrate some I/O load from overloaded nodes to other less- or under-loaded nodes. Currently running jobs are eligible for migration in WAL-PM only if overall performance improves. Besides balancing I/O load, the scheme judiciously takes into account both CPU and memory load sharing in the system, thereby maintaining the same level of performance as existing schemes when I/O load is low or well balanced. Extensive trace-driven simulations for both synthetic and real I/O-intensive applications show that: (1) compared with existing schemes that only consider CPU and memory, the proposed schemes improve the performance with respect to mean slowdown by up to a factor of 20; (2) compared to existing approaches that only consider I/O with non-preemptive job migrations, the proposed schemes achieve improvements in mean slowdown by up to a factor of 10; (3) under CPU-memory-intensive workloads, our scheme improves the performance over the existing approaches that only consider I/O by up to 47.5%.

Xiao Qin received the BSc and MSc degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999, respectively. He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. Currently, he is an assistant professor in the Department of Computer Science at the New Mexico Institute of Mining and Technology. His research interests include parallel and distributed systems, storage systems, real-time computing, performance evaluation, and fault-tolerance. He served on the program committees of international conferences such as CLUSTER, ICPP, and IPCCC. During 2000–2001, he was on the editorial board of IEEE Distributed System Online. He is a member of the IEEE. Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China; the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in Computer Science in 1991 from Texas A&M University, College Station, Texas, USA. Since August 1991 he has been at the University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Associate Professor and Vice Chair in the Department of Computer Science and Engineering. His present research interests are computer architecture, parallel/distributed computing, computer storage systems and parallel I/O, performance evaluation, middleware, networking, and computational engineering. He has over 70 publications in major journals and international conferences in these areas, and his research has been supported by NSF, DOD, and the State of Nebraska. Dr. Jiang is a Member of ACM, the IEEE Computer Society, the ACM SIGARCH, and ACM SIGCOMM. Yifeng Zhu received the B.E. degree in Electrical Engineering from Huazhong University of Science and Technology in 1998 and the M.S. degree in computer science from the University of Nebraska–Lincoln (UNL) in 2002. Currently he is working towards his Ph.D. degree in the Department of Computer Science and Engineering at UNL. His main research interests are parallel I/O, networked storage, parallel scheduling, and cluster computing. He is a student member of IEEE. David Swanson received a Ph.D. in physical (computational) chemistry at the University of Nebraska-Lincoln (UNL) in 1995, after which he worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997 to 1998. In early 1999 he returned to UNL, where he has coordinated the Research Computing Facility and currently serves as an Assistant Research Professor in the Department of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the State of Nebraska have supported his research in areas such as large-scale parallel simulation and distributed systems.
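The migration decision at the heart of such schemes can be sketched compactly: move I/O load from the most I/O-loaded node to the least loaded one, but only when the expected benefit exceeds the migration cost and the destination has CPU/memory headroom. The thresholds and cost model below are invented for illustration; they are not the IOCM/WAL-PM formulas.

```python
# Illustrative I/O-aware balancing decision (invented cost model).

def pick_migration(nodes, migration_cost):
    """nodes: name -> {'io': ..., 'cpu': ..., 'mem': ...} load levels in [0, 1]."""
    src = max(nodes, key=lambda n: nodes[n]["io"])
    dst = min(nodes, key=lambda n: nodes[n]["io"])
    imbalance = nodes[src]["io"] - nodes[dst]["io"]
    # Require CPU/memory headroom at the destination so that fixing the
    # I/O imbalance does not create a CPU or memory hot spot instead.
    headroom = nodes[dst]["cpu"] < 0.8 and nodes[dst]["mem"] < 0.8
    if imbalance > 2 * migration_cost and headroom:
        return src, dst
    return None   # leave things alone when I/O load is low or balanced

nodes = {"n1": {"io": 0.9, "cpu": 0.3, "mem": 0.4},
         "n2": {"io": 0.1, "cpu": 0.5, "mem": 0.5}}
print(pick_migration(nodes, migration_cost=0.1))  # ('n1', 'n2')
```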

8.
One of the challenges still open to wildland fire simulators is the capacity to work under real-time constraints with the aim of providing fire-spread predictions that can be useful in fire mitigation interventions. We propose going one step beyond classical wildland fire prediction by linking evolutionary optimization strategies to the traditional scheme, with the aim of emulating an “ideal” fire propagation model as much as possible. In order to accelerate the fire prediction, this enhanced prediction scheme has been developed in a parallel fashion on a Linux cluster using MPI. Furthermore, a sensitivity analysis has been carried out to determine the input parameters that we can fix to their typical values in order to reduce the search space involved in the optimization process and, therefore, accelerate the whole prediction strategy.

Baker Abdalhaq received his BSc in Computer Science from Princess Sumaya University College, Royal Jordanian Society, Amman, Jordan, in 1993. He received his MSc and PhD in Computer Science from the Universitat Autónoma de Barcelona (UAB) in 2001 and 2004, respectively. His main research interest is focused on parallel fire simulation and, in particular, how to take advantage of the computational power provided by massively distributed systems to enhance wildland fire prediction. Ana Cortés received both her first degree and her PhD in Computer Science from the Universitat Autonoma de Barcelona (UAB), Spain, in 1990 and 2000, respectively. She is currently assistant professor of Computer Science at the UAB, where she is a member of the Computer Architecture and Operating Systems Group at the Computer Science Department. Her current research interests concern software support for parallel and distributed computing, including algorithms and software tools for load balancing of parallel programs. She has also been working on enhancing wildland fire prediction by exploiting parallel/distributed systems. Tomàs Margalef received a BS degree in physics in 1988 from the Universitat Autónoma de Barcelona (UAB). In 1990 he obtained the MSc in Computer Science and in 1993 the PhD in Computer Science from UAB. Since 1988 he has been working on several aspects related to parallel and distributed computing. Currently, his research interests focus on the development of high-performance applications, automatic performance analysis, and dynamic performance tuning. Since 1997 he has been working on exploiting parallel/distributed processing to accelerate and improve the prediction of forest fire propagation. He is an ACM member. Germán Bianchini received his BSc in Computer Science from the Universidad Nacional del Comahue, Argentina, in 2002. He received his MSc and PhD in Computer Science from the Universitat Autónoma de Barcelona (UAB) in 2004 and 2006, respectively. His main research interest is focused on parallel fire simulation and, in particular, how to take advantage of the computational power provided by massively distributed systems to enhance wildland fire prediction. Emilio Luque received his Licentiate in Physics and PhD degrees from the Complutense University of Madrid (UCM) in 1968 and 1973, respectively. Between 1973 and 1976 he was an associate professor at the UCM. Since 1976 he has been a professor of Computer Architecture and Technology at the Universitat Autónoma de Barcelona (UAB), where he leads the Computer Architecture and Operating Systems (CAOS) Group at the Computer Science Department. Professor Luque was the Computer Science Department chairman for more than 10 years. He has been an invited lecturer/researcher at universities in the USA, Argentina, Brazil, Poland, Ireland, Cuba, Italy, Germany, and the PR of China. He has published more than 35 papers in technical journals and more than 100 papers at international conferences. His current/major research areas are: computer architecture, interconnection networks, task scheduling in parallel systems, parallel and distributed simulation environments, environments and programming tools for automatic performance tuning in parallel systems, cluster and Grid computing, parallel computing for environmental applications (forest fire simulation, forest monitoring), and distributed video on demand (VoD) systems.
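The enhanced prediction loop can be summarized as: evolve the simulator's uncertain input parameters until the simulated fire front matches the observed one, then predict forward with the best parameters. The sketch below uses a toy one-dimensional "simulator" and a simple evolutionary strategy as stand-ins; in the paper, each candidate evaluation is a real fire simulation run in parallel under MPI.

```python
import random

def toy_simulator(params, t):                 # stand-in for the fire model
    wind, moisture = params
    return wind * t * (1.0 - moisture)        # "spread distance" at time t

def error(params, observed, t):
    return abs(toy_simulator(params, t) - observed)

def calibrate(observed, t, popsize=20, gens=30):
    pop = [(random.uniform(0, 10), random.uniform(0, 1)) for _ in range(popsize)]
    for _ in range(gens):
        pop.sort(key=lambda p: error(p, observed, t))  # rank by fit to reality
        elite = pop[: popsize // 4]
        pop = elite + [(max(0.0, random.choice(elite)[0] + random.gauss(0, 0.3)),
                        min(1.0, max(0.0, random.choice(elite)[1] + random.gauss(0, 0.05))))
                       for _ in range(popsize - len(elite))]
    return min(pop, key=lambda p: error(p, observed, t))

best = calibrate(observed=12.0, t=4.0)        # fit to the observed front at t=4
print(best, toy_simulator(best, t=8.0))       # then predict the front at t=8
```

The sensitivity analysis mentioned above shrinks the tuple of evolved parameters by fixing insensitive ones to typical values, which directly shrinks the search space this loop must explore.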

9.
Recently, software distributed shared memory (DSM) systems have successfully provided an easy user interface for parallel applications on distributed systems. To improve program performance, most DSM systems greedily utilize all available processors in a computer network to execute user programs. However, using more processors to execute a program does not necessarily guarantee better performance. The overhead of parallelizing a program increases with the number of processors used for its execution. If the performance gain from parallelization cannot compensate for this overhead, increasing the number of execution processors results in performance degradation and wasted resources. In this paper, we propose a mechanism to dynamically find a suitable system scale that optimizes performance for DSM applications according to run-time information. The experimental results show that the proposed mechanism can precisely predict the number of processors that will yield the best performance, and then effectively optimize the performance of the test applications by adapting the system scale accordingly.

Yi-Chang Zhuang received his B.S., M.S. and Ph.D. degrees in electrical engineering from National Cheng Kung University in 1995, 1997, and 2004. He is currently working as an engineer at the Industrial Technology Research Institute in Taiwan. His research interests include object-based storage, file systems, distributed systems, and grid computing. Jyh-Biau Chang is currently an assistant professor in the Information Management Department of Leader University in Taiwan. He received his B.S., M.S. and Ph.D. degrees from the Electrical Engineering Department of National Cheng Kung University in 1994, 1996, and 2005. His research interests focus on cluster and grid computing, parallel and distributed systems, and operating systems. Tyng-Yeu Liang is currently an assistant professor who teaches and studies in the Department of Electrical Engineering, National Kaohsiung University of Applied Sciences, Taiwan. He received his B.S., M.S. and Ph.D. degrees from National Cheng Kung University in 1992, 1994, and 2000. His research interests include cluster and grid computing, image processing, and multimedia. Ce-Kuen Shieh is currently a professor in the Electrical Engineering Department of National Cheng Kung University in Taiwan. He is also the chief of the computation center at National Cheng Kung University. He received his Ph.D. degree from the Department of Electrical Engineering of National Cheng Kung University in 1988. He was the chairman of the Electrical Engineering Department of National Cheng Kung University from 2002 to 2005. His research interests focus on computer networks and parallel and distributed systems. Laurence T. Yang is a professor in the Department of Computer Science, St. Francis Xavier University, Canada. His research includes high performance computing and networking, embedded systems, ubiquitous/pervasive computing and intelligence, and autonomic and trusted computing.
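The underlying trade-off is easy to capture with a worked model: computation time shrinks with the processor count p while parallelization overhead grows with it, so the predicted execution time has a minimum at an intermediate scale. The model form below (T(p) = W/p + c·p) is an illustration of the idea, not the paper's actual predictor.

```python
# Choosing a system scale from a simple run-time-fitted model (illustrative).

def best_scale(work, overhead_per_proc, max_procs):
    predict = lambda p: work / p + overhead_per_proc * p   # T(p) = W/p + c*p
    return min(range(1, max_procs + 1), key=predict)

# With 100 s of work and 1 s of added overhead per extra processor, the
# predicted optimum is p = 10, not the full 32 processors available.
print(best_scale(work=100.0, overhead_per_proc=1.0, max_procs=32))
```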

10.
Software component frameworks are well known in the commercial business application world, and this technology is now being explored with great interest as a way to build large-scale scientific applications on parallel computers. In the case of Grid systems, the current architectural model is based on the emerging web services framework. In this paper we describe progress that has been made on the Common Component Architecture (CCA) model and discuss its success and limitations when applied to problems in Grid computing. Our primary conclusion is that a component model fits very well with a services-oriented Grid, but the model of composition must allow for very dynamic (both in space and in time) control of composition. We note that this adds a new dimension to conventional service workflow, and it extends the “Inversion of Control” aspects of most component systems.

Dennis Gannon is a professor of Computer Science at Indiana University. He received his Ph.D. in Computer Science from the University of Illinois in 1980 and his Ph.D. in Mathematics from the University of California in 1974. From 1980 to 1985, he was on the faculty at Purdue University. His research interests include software tools for high performance distributed systems and problem solving environments for scientific computation. Sriram Krishnan received his Ph.D. in Computer Science from Indiana University in 2004. He is currently in the Grid Development Group at the San Diego Supercomputer Center, where he is working on designing a Web services based architecture for biomedical applications that is secure and scalable, and is conducive to the creation of complex workflows. He received his undergraduate degree in Computer Engineering from the University of Mumbai, India. Liang Fang is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid computing, Web services, portals, and their security and scalability issues. He is a Research Assistant in Computer Science at Indiana University, currently responsible for investigating authorization and other security solutions for the Linked Environments for Atmospheric Discovery (LEAD) project. Gopi Kandaswamy is a Ph.D. student in the Computer Science Department at Indiana University, where he is currently a Research Assistant. His research interests include Web services and workflow systems for the Grid. Yogesh Simmhan received his B.E. degree in Computer Science from Madras University, India, in 2000, and is a doctoral candidate in Computer Science at Indiana University. He is currently working as a Research Assistant at Indiana University, investigating data management issues in the LEAD project. His interests lie in data provenance for workflow systems and its use in data quality estimation. Aleksander Slominski is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid and Web Services, streaming XML pull parsing and performance, Grid security, asynchronous messaging, events and notification brokers, component technologies, and workflow composition. He is currently working as a Research Assistant investigating the creation and execution of dynamic workflows using the Grid Process Execution Language (GPEL), based on WS-BPEL.
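The "dynamic control of composition" point is the crux: in CCA-style models, components expose provides ports and declare uses ports, and a framework wires them together. The toy below shows that wiring being changed while the application runs; the interfaces are invented for illustration and are much simpler than the real CCA specification.

```python
# Toy sketch of provides/uses port composition, re-wired at run time.

class Component:
    def __init__(self, name):
        self.name, self.provides, self.uses = name, {}, {}

    def add_provides(self, port, impl):
        self.provides[port] = impl

    def call(self, port, *args):
        return self.uses[port](*args)        # invoke whatever is currently wired in

def connect(user, port, provider):
    user.uses[port] = provider.provides[port]

solver, viz_a, viz_b = Component("solver"), Component("vizA"), Component("vizB")
viz_a.add_provides("render", lambda x: f"vizA drew {x}")
viz_b.add_provides("render", lambda x: f"vizB drew {x}")

connect(solver, "render", viz_a)
print(solver.call("render", 42))     # vizA drew 42
connect(solver, "render", viz_b)     # composition changed mid-run
print(solver.call("render", 42))     # vizB drew 42
```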

11.
Caching techniques have been used widely to bridge the performance gaps of storage hierarchies in computing systems. Little is known about the impact of policies on the response times of jobs that access and process very large files in data grids, particularly when data and computations on the data have to be co-located on the same host. In data-intensive applications that access large data files over a wide-area network environment, such as data grids, the combination of policies for job servicing (or scheduling), caching, and cache replacement can significantly impact the performance of grid jobs. We present preliminary results of a simulation study that combines an admission policy with a cache replacement policy when servicing jobs submitted to a storage resource manager. The results show that, in comparison to a first-come-first-served policy, the response times of jobs are significantly improved, for practical limits of disk cache sizes, when the jobs that are back-logged to access the same files are taken into consideration in scheduling the next file to be retrieved into the disk cache. Not only are the response times of jobs improved, but the metric measures for caching policies, such as the hit ratio and the average cost per retrieval, are also improved irrespective of the cache replacement policy used.

Ekow Otoo is a research staff scientist with the scientific data management group at Lawrence Berkeley National Laboratory, University of California, Berkeley. He received his B.Sc. degree in Electrical Engineering from the University of Science and Technology, Kumasi, Ghana, and a postgraduate diploma in Computer Science from the University of Ghana, Legon. In 1977, he received his M.Sc. degree in Computer Science from the University of Newcastle Upon Tyne in Britain, and he received his Ph.D. degree in Computer Science from McGill University, Montreal, Canada, in 1983. He joined the faculty of the School of Computer Science, Carleton University, Ottawa, Canada, in 1983, and was a tenured faculty member there from 1987 to 1999. He has served as a research consultant to Bell Northern Research, Ottawa, Canada, and as a research project consultant to the GIS Division, Geomatics Canada, Natural Resources Canada, from 1990 to 1998. Ekow Otoo is a member of the ACM and IEEE. His research interests include database management systems, data structures and algorithms, parallel I/O for high performance computing, and parallel and distributed computing. Doron Rotem is currently a senior staff scientist and a member of the Data Management group at the Lawrence Berkeley National Lab. His research interests include Grid Computing, Workflow, Scientific Data Management, and Parallel and Distributed Computing and Algorithms. He has published over 80 papers in international journals and conferences in these areas. Prior to that, Dr. Rotem co-founded and served as CTO of a startup company, CommerceRoute, that made software products in the area of workflow and data integration; before that, he was an Associate Professor in the Department of Computer Science, University of Waterloo, Canada. Dr. Rotem holds a B.Sc. degree in Mathematics and Statistics from the Hebrew University, Jerusalem, Israel, and a Ph.D. in Computer Science from the University of the Witwatersrand, Johannesburg, South Africa. Arie Shoshani is a senior staff scientist at Lawrence Berkeley National Laboratory. He joined LBNL in 1976. He heads the Scientific Data Management Group. He received his Ph.D. from Princeton University in 1969. From 1969 to 1976, he was a researcher at System Development Corporation, where he worked on the Network Control Program for the ARPAnet, distributed databases, database conversion, and natural language interfaces to data management systems. His current areas of work include data models, query languages, temporal data, statistical and scientific database management, storage management on tertiary storage, and grid storage middleware. Arie is also the director of a Scientific Data Management (SDM) Integrated Software Infrastructure Center (ISIC), one of seven centers selected by the SciDAC program at DOE in 2001. In this capacity, he is coordinating the work of collaborators from 4 DOE laboratories and 4 universities (see: http://sdmcenter.lbl.gov). Dr. Shoshani has published over 65 technical papers in refereed journals and conferences; chaired several workshops, conferences, and panels in database management; and served on numerous program committees for various database conferences. He also served as an associate editor for the ACM Transactions on Database Systems. He was elected a member of the VLDB Endowment Board, served as the Publication Board Chairperson for the VLDB Journal, and as the Vice-President of the VLDB Endowment. His home page is http://www.lbl.gov/arie.
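The admission idea the results point to can be shown in a few lines: instead of retrieving files in arrival order, retrieve next the file with the most back-logged jobs, so one disk-cache retrieval satisfies many waiting jobs. This is an illustrative sketch of that comparison only; the study couples it with specific cache replacement policies.

```python
# Back-logged-demand-aware file admission vs FCFS (illustrative).
from collections import Counter

def next_file_fcfs(queue):
    return queue[0]["file"]

def next_file_by_backlog(queue):
    demand = Counter(job["file"] for job in queue)
    return demand.most_common(1)[0][0]   # file with the most waiting jobs

queue = [{"job": 1, "file": "F1"}, {"job": 2, "file": "F2"},
         {"job": 3, "file": "F2"}, {"job": 4, "file": "F2"}]
print(next_file_fcfs(queue))        # F1: one job served by the retrieval
print(next_file_by_backlog(queue))  # F2: three jobs served by one retrieval
```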

12.
One of the principal characteristics of large-scale wireless sensor networks is their distributed, multi-hop nature. Due to this characteristic, applications such as query propagation rely regularly on network-wide flooding for information dissemination. If the transmission radius is not set optimally, the flooded packet may hold the transmission medium longer than necessary, reducing overall network throughput. We analyze the impact of the transmission radius on the average settling time—the time at which all nodes in the network finish transmitting the flooded packet. Our analytical model takes into account the behavior of the underlying contention-based MAC protocol, as well as edge effects and the size of the network. We show that for large wireless networks there exists an intermediate transmission radius which minimizes the settling time, corresponding to an optimal tradeoff between reception and contention times. We also explain how physical propagation models affect small wireless networks and why no intermediate optimal transmission radius is observed in these cases. The mathematical analysis is supported and validated through extensive simulations.

Marco Zuniga is currently a PhD student in the Department of Electrical Engineering at the University of Southern California. He received his Bachelor's degree in Electrical Engineering from the Pontificia Universidad Catolica del Peru in 1998, and his Master's degree in Electrical Engineering from the University of Southern California in 2002. His interests are in the area of Wireless Sensor Networks in general, and more specifically in studying the interaction among different layers to improve the performance of these networks. He is a member of IEEE and the Phi Kappa Phi Honor Society. Bhaskar Krishnamachari is an Assistant Professor in the Department of Electrical Engineering at the University of Southern California (USC), where he also holds a joint appointment in the Department of Computer Science. He received his Bachelor's degree in Electrical Engineering with a four-year full-tuition scholarship from The Cooper Union for the Advancement of Science and Art in 1998. He received his Master's degree and his Ph.D. in Electrical Engineering from Cornell University in 1999 and 2002, under a four-year university graduate fellowship. Dr. Krishnamachari's previous research has included work on critical density thresholds in wireless networks, data centric routing in sensor networks, mobility management in cellular telephone systems, multicast flow control, heuristic global optimization, and constraint satisfaction. His current research is focused on the discovery of fundamental principles and the analysis and design of protocols for next generation wireless sensor networks. He is a member of IEEE, ACM, and the Tau Beta Pi and Eta Kappa Nu Engineering Honor Societies.
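The reception/contention tradeoff can be worked through with a toy model: a larger radius r means fewer hops for the flood to cross the network, but more neighbors contending for the channel at each hop. The constants below are invented; the paper derives the real model from the MAC protocol's behavior.

```python
# Toy settling-time model: hops shrink with r, per-hop contention grows.

def settling_time(r, diameter=100.0, t_rx=1.0, contention_coeff=0.002):
    hops = diameter / r                     # forwarding steps across the network
    contention = contention_coeff * r**2    # backoff grows with neighborhood size
    return hops * (t_rx + contention)

radii = range(5, 60, 5)
best = min(radii, key=settling_time)
print(best, settling_time(best))   # an intermediate radius minimizes settling time
```

In this toy, the minimum lies around r of 22 (between the smallest and largest radii tried), mirroring the paper's finding of an intermediate optimum for large networks.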

13.
This paper presents a general methodology for the communication-efficient parallelization of graph algorithms using the divide-and-conquer approach, and shows that this class of problems can be solved in cluster environments with good communication efficiency. Specifically, the first practical parallel algorithm, based on a general coarse-grained model, for finding Hamiltonian paths in tournaments is presented. On any such parallel machine, this algorithm uses only 3 log p + 1 communication rounds, where p is the number of processors; this count is independent of the tournament size, and the algorithm can reuse the existing linear-time algorithm in the sequential setting. For theoretical completeness, the algorithm is revised for fine-grained models, in which the ratio of computation to communication throughput is low or the local memory size of each individual processor is extremely limited; for any fixed ∊ > 0, the revised algorithm solves the problem with O(log p) communication rounds, with a hidden constant that grows with the scalability factor 1/∊. Experiments have been carried out on a Linux cluster of 32 Sun Ultra5 computers and an SGI Origin 2000 with 32 R10000 processors. The algorithm's performance on the Linux cluster reaches 75% of the performance on the SGI Origin 2000 when the tournament size is about one million. Computational resources and technical support were provided by the Center for Computational Research (CCR) at the State University of New York at Buffalo.

Chun-Hsi Huang received his Ph.D. degree in Computer Science from the State University of New York at Buffalo in 2001. He is currently an Assistant Professor of Computer Science and Engineering at the University of Connecticut. His interests include High Performance Parallel Computing, Cluster and Grid Computing, Biomedical and Health Informatics, Algorithm Design and Analysis, Experimental Algorithms, and Computational Biology. Sanguthevar Rajasekaran received his Ph.D. degree in Computer Science from Harvard University in 1988. Currently he is the UTC Chair Professor of Computer Science and Engineering at the University of Connecticut and the Director of the Booth Engineering Center for Advanced Technologies (BECAT). His research interests include Parallel Algorithms, Bioinformatics, Data Mining, Randomized Computing, Computer Simulations, and Combinatorial Optimization. Laurence Tianruo Yang received his Ph.D. degree in Computer Science from Oxford University. He is currently a professor of Computer Science at St. Francis Xavier University in Canada. His research interests include high-performance computing, embedded systems, computer architecture, and high-speed networking. Xin He received his Ph.D. degree in Computer Science from the Ohio State University in 1987. He is currently Professor of Computer Science and Engineering at the State University of New York at Buffalo. His research interests include Algorithms, Data Structures, Combinatorics, and Computational Geometry.
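The sequential building block that the parallel algorithm reuses rests on a classical fact: every tournament (a complete directed graph) contains a Hamiltonian path, constructible by inserting vertices one at a time. The sketch below favors clarity over the best asymptotic bound (binary search over the path would reduce the number of edge probes).

```python
# Classical insertion construction of a Hamiltonian path in a tournament.

def hamiltonian_path(n, beats):
    """beats(u, v) -> True iff edge u -> v exists in the tournament."""
    path = [0]
    for v in range(1, n):
        # Insert v before the first vertex it beats. Since v does not beat
        # the vertex just before that position, both new consecutive pairs
        # remain forward edges; if v beats nobody, it goes at the end.
        for i, u in enumerate(path):
            if beats(v, u):
                path.insert(i, v)
                break
        else:
            path.append(v)
    return path

# Example tournament: u -> v iff u < v, so the path is 0, 1, 2, ...
p = hamiltonian_path(6, lambda u, v: u < v)
print(p)                                      # [0, 1, 2, 3, 4, 5]
assert all(u < v for u, v in zip(p, p[1:]))   # every consecutive pair is an edge
```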

14.
High-performance computing increasingly occurs on “computational grids” composed of heterogeneous and geographically distributed systems of computers, networks, and storage devices that collectively act as a single “virtual” computer. A key challenge in this environment is to provide efficient access to data distributed across remote data servers. Our parallel I/O framework, called Armada, allows application and data-set providers to flexibly compose graphs of processing modules that describe the distribution, application interfaces, and processing required of the dataset before computation. Although the framework provides a simple programming model for the application programmer and the data-set provider, the resulting graph may contain bottlenecks that prevent efficient data access. In this paper, we present an algorithm for restructuring Armada graphs that distributes computation and data flow to improve performance in the context of a wide-area computational grid. This work was supported by Sandia National Laboratories under DOE contract DOE-AV6184. Ron A. Oldfield is a senior member of the technical staff at Sandia National Laboratories in Albuquerque, NM. He received the B.Sc. in computer science from the University of New Mexico in 1993. From 1993 to 1997, he worked in the computational sciences department of Sandia National Laboratories, where he specialized in seismic research and parallel I/O. He was the primary developer for the GONII-SSD (Gas and Oil National Information Infrastructure–Synthetic Seismic Dataset) project and a co-developer for the R&D 100 award-winning project “Salvo”, a project to develop a 3D finite-difference prestack-depth migration algorithm for massively parallel architectures. From 1997 to 2003 he attended graduate school at Dartmouth College and received his Ph.D. in June 2003. In September 2003, he returned to Sandia to work in the Scalable Computing Systems department. His research interests include parallel and distributed computing, parallel I/O, and mobile computing. David Kotz is a Professor of Computer Science at Dartmouth College in Hanover, NH. After receiving his A.B. in Computer Science and Physics from Dartmouth in 1986, he completed his Ph.D. in Computer Science from Duke University in 1991. He returned to Dartmouth to join the faculty in 1991, where he is now Professor of Computer Science, Director of the Center for Mobile Computing, and Executive Director of the Institute for Security Technology Studies. His research interests include context-aware mobile computing, pervasive computing, wireless networks, and intrusion detection. He is a member of the ACM, IEEE Computer Society, and USENIX associations, and of Computer Professionals for Social Responsibility. For more information see http://www.cs.dartmouth.edu/dfk/.  相似文献
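Armada's graph-of-modules idea can be illustrated with a toy dataflow graph and a single restructuring rule (hypothetical names and a single rewrite; Armada's real interfaces and restructuring algorithm are more general): pushing a filter below a merge moves the filtering next to the remote data servers, so less data crosses the wide-area network.

    class Node:
        # Toy module-graph node, not Armada's actual API.
        def __init__(self, kind, children=(), label=""):
            self.kind, self.children, self.label = kind, list(children), label

        def __repr__(self):
            inner = ", ".join(map(repr, self.children))
            return f"{self.kind}({self.label or inner})"

    def push_filters_down(node):
        """Rewrite filter(merge(a, b, ...)) as merge(filter(a), filter(b), ...)."""
        node.children = [push_filters_down(c) for c in node.children]
        if node.kind == "filter" and node.children and node.children[0].kind == "merge":
            merge = node.children[0]
            merge.children = [Node("filter", [c]) for c in merge.children]
            return merge
        return node

    # Two remote segments are merged and then filtered; after restructuring,
    # each segment is filtered at its source and the merge ships less data.
    g = Node("filter", [Node("merge", [Node("src", label="serverA"),
                                       Node("src", label="serverB")])])
    print(push_filters_down(g))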

15.
I/O bottlenecks are already a problem in many large-scale applications that manipulate huge datasets. This problem is expected to get worse as applications get larger and I/O subsystem performance lags further behind processor and memory speed improvements. At the same time, off-the-shelf clusters of workstations are becoming a popular platform for demanding applications due to their cost-effectiveness and widespread deployment. Caching I/O blocks is one effective way of alleviating disk latencies, and there can be multiple levels of caching on a cluster of workstations. Previous studies have shown the benefits of caching—whether it be local to a particular node, or a shared global cache across the cluster—for certain applications. However, we show that while caching is useful in some situations, it can hurt performance if we are not careful about what to cache and when to bypass the cache (a simplified sketch of such a decision follows this entry). This paper presents compilation techniques and runtime support to address this problem. These techniques are implemented and evaluated on an experimental Linux/Pentium cluster running a parallel file system. Our results using a diverse set of applications (scientific and commercial) demonstrate the benefits of a discretionary approach to caching for I/O subsystems on clusters, providing savings of as much as 48% in overall execution time in some applications compared with indiscriminately caching everything. Parts of this paper have appeared in the Proceedings of the 3rd IEEE/ACM Symposium on Cluster Computing and the Grid (CCGrid'03). This paper extends those prior results and includes a more extensive performance evaluation. Murali Vilayannur is a Ph.D. student in the Department of Computer Science and Engineering at The Pennsylvania State University. His research interests are in High-Performance Parallel I/O, File Systems, Virtual Memory Algorithms and Operating Systems. Anand Sivasubramaniam received his B.Tech. in Computer Science from the Indian Institute of Technology, Madras, in 1989, and the M.S. and Ph.D. degrees in Computer Science from the Georgia Institute of Technology in 1991 and 1995 respectively. He has been on the faculty at The Pennsylvania State University since Fall 1995, where he is currently an Associate Professor. Anand's research interests are in computer architecture, operating systems, performance evaluation, and applications for both high performance computer systems and embedded systems. Anand's research has been funded by NSF through several grants, including the CAREER award, and by industry, including IBM, Microsoft and Unisys Corp. He has several publications in leading journals and conferences, and is on the editorial board of IEEE Transactions on Computers and IEEE Transactions on Parallel and Distributed Systems. He is a recipient of the 2002 IBM Faculty Award. Anand is a member of the IEEE, IEEE Computer Society, and ACM. Mahmut Kandemir received the B.Sc. and M.Sc. degrees in control and computer engineering from Istanbul Technical University, Istanbul, Turkey, in 1988 and 1992, respectively. He received his Ph.D. in electrical engineering and computer science from Syracuse University, Syracuse, New York, in 1999. He has been an assistant professor in the Computer Science and Engineering Department at the Pennsylvania State University since August 1999. His main research interests are optimizing compilers, I/O intensive applications, and power-aware computing. He is a member of the IEEE and the ACM. 
Rajeev Thakur is a Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. He received a B.E. from the University of Bombay, India, in 1990, M.S. from Syracuse University in 1992, and Ph.D. from Syracuse University in 1995, all in computer engineering. His research interests are in the area of high-performance computing in general and high-performance networking and I/O in particular. He was a member of the MPI Forum and participated actively in the definition of the I/O part of the MPI-2 standard. He is the author of a widely used, portable implementation of MPI-IO, called ROMIO. He is also a co-author of the book “Using MPI-2: Advanced Features of the Message Passing Interface” published by MIT Press. Robert Ross received his Ph.D. in Computer Engineering from Clemson University in 2000. He is now an Assistant Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. His research interests are in message passing and storage systems for high performance computing environments. He is the primary author and lead developer for the Parallel Virtual File System (PVFS), a parallel file system for Linux clusters. Current projects include the ROMIO MPI-IO implementation, PVFS, PVFS2, and the MPICH2 implementation of the MPI message passing interface.  相似文献   
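As promised above, a minimal sketch of a discretionary cache-or-bypass decision (the policy and thresholds are illustrative assumptions, not the paper's compiler analysis or runtime system):

    def should_cache(reuse_expected, request_bytes, sequential,
                     streaming_threshold=4 << 20):
        """Illustrative heuristic: cache only when it is likely to pay off,
        bypassing large one-pass sequential scans that would evict blocks
        other requests still need."""
        if sequential and request_bytes >= streaming_threshold and not reuse_expected:
            return False   # bypass: streaming data would pollute the cache
        return reuse_expected

    # A one-pass 64 MB sequential scan bypasses; a reused 4 KB block caches.
    print(should_cache(False, 64 << 20, True))   # False
    print(should_cache(True, 4096, False))       # True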

16.
Bluetooth scatternets may be operated in a loosely coupled mode, called Walk-In Bridge Scheduling, in which the master polls all of its slaves and bridges using E-limited service. Using the theory of queues with vacations, we derive stability criteria for the packet queues in piconet masters, slaves, and bridges. We show that the stability of the slave queues is more critical under high traffic locality, whereas the stability of the bridge queues becomes progressively more important as the non-local traffic increases. Our analysis shows that the limited exchange mode, in which the bridge residence time in a piconet is limited, performs better and has a wider stability region than the complete exchange mode, in which the bridge stays in the piconet until all queued packets are exchanged. Simulations show that this scheduling approach offers good performance and excellent scalability, while maintaining scatternet stability. Vojislav B. Mišić received his PhD in Computer Science from the University of Belgrade, Yugoslavia, in 1993. He is currently Assistant Professor of Computer Science at the University of Manitoba in Winnipeg, Manitoba, Canada. Previously, he has held posts at the University of Belgrade, Yugoslavia, and the Hong Kong University of Science and Technology. His research interests include systems and software engineering and the modeling and performance evaluation of wireless networks. He is a member of ACM, AIS, and IEEE. Jelena Mišić received her PhD degree in Computer Engineering from the University of Belgrade, Yugoslavia, in 1993. She is currently Associate Professor of Computer Science at the University of Manitoba in Winnipeg, Manitoba, Canada. Previously, she has been with the Hong Kong University of Science and Technology. Her current research interests include wireless networks and mobile computing. She is a member of the IEEE Computer Society. Ka Lok Chan received his MPhil degree from the Hong Kong University of Science and Technology for research on the performance of Bluetooth networks.  相似文献
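For intuition, a single E-limited queue with vacations admits a sufficient stability condition of the following shape (a commonly cited textbook form for limited-service polling, stated here only as an illustration; the paper derives its own criteria for masters, slaves, and bridges):

    \[
      \rho + \frac{\lambda_i \, \bar{V}}{M_i} < 1,
      \qquad \rho = \sum_j \lambda_j \bar{b}_j ,
    \]

where \lambda_i is the packet arrival rate at queue i, \bar{b}_j the mean packet service time at queue j, M_i the service limit per visit, and \bar{V} the mean total vacation (switchover) time per polling cycle. Bounding the bridge's residence time shortens the vacations the other queues experience, which is consistent with the limited exchange mode having the wider stability region.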

17.
In this paper we are concerned with the live verification of the consistency of a replicated system, an issue that has not been addressed by the research community so far. We consider the problem of how to enable the system to detect, automatically and in production, whether the invariants defining the correctness of object replication are violated. This feature could greatly improve the dependability of distributed applications and is necessary for constructing self-managing and self-healing replicated systems. We focus on systems that enforce strongly consistent replication: all replicas of each object must be kept “continuously” in sync. This replication strategy is appropriate for application domains where correctness guarantees in spite of failures are more important than performance and scalability. We present the design and implementation of a replicated web service capable of self-checking whether all replicas are indeed kept in sync. This check occurs on-line, transparently to clients. We also discuss the performance cost of self-checking in our prototype. Alberto Bartoli is Associate Professor of Computer Engineering at the University of Trieste, Italy. He took a degree in Electrical Engineering in 1989 and a doctorate in Computer Engineering in 1994, both at the University of Pisa, Italy. His research interests are in the area of reliability and fault tolerance in distributed systems. Giovanni Masarin took a degree in Electronic Engineering in 2004 at the University of Trieste, Italy. He is currently involved in product development at RadioTrevisan, a company specialized in the production of lawful interception equipment.  相似文献
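One simple way to realize such an on-line check is to compare deterministic digests of replica state (a minimal sketch under the assumption that state can be serialized and that digests are sampled at the same logical point in the update sequence; the paper's invariants and protocol are richer than this):

    import hashlib
    import json

    def state_digest(state):
        """Deterministic digest of a replica's object state."""
        blob = json.dumps(state, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def replicas_in_sync(replica_states):
        """Illustrative check: strong consistency requires every replica to
        expose the same digest when sampled at the same logical point."""
        digests = {rid: state_digest(s) for rid, s in replica_states.items()}
        return len(set(digests.values())) == 1, digests

    # Replica "C" has silently diverged, so the check flags the violation.
    ok, digests = replicas_in_sync({"A": {"x": 1}, "B": {"x": 1}, "C": {"x": 2}})
    print(ok)  # False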

18.
Qing Dai  Jie Wu 《Cluster computing》2005,8(2-3):127-133
Power conservation is a critical issue for ad hoc wireless networks. The main objective of the paper is to find the minimum uniform transmission range of an ad hoc wireless network, where each node uses the same transmission power, while maintaining network connectivity. Three different algorithms are developed to solve the problem: Prim's Minimum Spanning Tree (MST), its extension with a Fibonacci-heap implementation, and an area-based binary search. Their performance is compared in a simulation study together with Kruskal's MST, a known solution proposed by Ramanathan and Rosales-Hain for topology control by transmission-power adjustment, and an edge-based binary search used by the same study to find per-node minimality after Kruskal's algorithm is applied. Our results show that Prim's MST outperforms both Kruskal's MST and the two binary searches. The performance of Prim's MST implemented with a binary heap and with a Fibonacci heap is fairly close, with the Fibonacci implementation slightly outperforming the other. Qing Dai received her M.S. degree in Computer Science from Florida Atlantic University in August 2003, and her M.S. degree in Microbiology from Upstate University in July 2000. She is currently a software engineer at Motorola, Plantation, FL. Jie Wu is a Professor in the Department of Computer Science and Engineering, Florida Atlantic University. He has published over 200 papers in various journals and conference proceedings. His research interests are in the areas of wireless networks and mobile computing, routing protocols, fault-tolerant computing, and interconnection networks. He has served on many conference organization committees. Dr. Wu is on the editorial board of IEEE Transactions on Parallel and Distributed Systems and was a co-guest editor of IEEE Computer and the Journal of Parallel and Distributed Computing. He is the author of the text Distributed System Design published by CRC Press. He was also the recipient of the 1996–97 and 2001–2002 Researcher of the Year Awards at Florida Atlantic University. Dr. Wu has served as an IEEE Computer Society Distinguished Visitor. He is a Member of ACM and a Senior Member of IEEE.  相似文献
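The link between the problem and MSTs is direct: the minimum uniform transmission range that keeps the network connected equals the longest edge of a minimum spanning tree over the node positions, which is why MST algorithms apply. A minimal Prim-based sketch (illustrative; the paper additionally compares heap variants and the binary searches):

    import heapq
    import math

    def min_uniform_range(points):
        """Lazy-Prim sketch: the largest edge on an MST of the complete graph
        over the points; any smaller common radius disconnects the network."""
        n = len(points)
        in_tree = [False] * n
        heap = [(0.0, 0)]          # (attachment distance, node)
        max_edge = 0.0
        while heap:
            d, u = heapq.heappop(heap)
            if in_tree[u]:
                continue
            in_tree[u] = True
            max_edge = max(max_edge, d)
            for v in range(n):
                if not in_tree[v]:
                    heapq.heappush(heap, (math.dist(points[u], points[v]), v))
        return max_edge

    # Four nodes whose nearest-neighbor chain has gaps 3, 4, and 6:
    # a common radius of 6 is necessary and sufficient for connectivity.
    print(min_uniform_range([(0, 0), (0, 3), (4, 3), (10, 3)]))  # 6.0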

19.
Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because an overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability-enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs): multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs used in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a VM service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of data distribution within the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (HMTP and HCcast). The results show that it clearly enhances the stability of the data distribution.  相似文献
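The departure-handling idea can be sketched as follows (hypothetical class and names; VMCast's actual protocol involves on-demand VM provisioning and tree maintenance beyond this toy): when a parent leaves, its orphaned children stream from a compensation VM while being re-grafted under the domain's multicast VM.

    class DomainTree:
        # Toy structure with hypothetical names, not VMCast's implementation.
        def __init__(self, mvm):
            self.mvm = mvm          # multicast VM: the stable in-domain root
            self.parent = {}
            self.children = {}

        def join(self, member, parent):
            self.parent[member] = parent
            self.children.setdefault(parent, set()).add(member)
            self.children.setdefault(member, set())

        def leave(self, member, cvm):
            orphans = self.children.pop(member, set())
            self.children[self.parent.pop(member)].discard(member)
            for o in orphans:
                print(f"{o}: compensated by {cvm} during repair")
                self.join(o, self.mvm)   # re-graft under the stable MVM

    tree = DomainTree("MVM-1")
    tree.join("a", "MVM-1"); tree.join("b", "a"); tree.join("c", "a")
    tree.leave("a", cvm="CVM-1")         # b and c keep streaming via CVM-1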

20.
An attacker's connection can propagate quickly to different parts of a transparent all-optical network. Such attacks affect normal traffic and cause quality-of-service degradation or outright denial of service. Attack monitors can collect information on each link and at each node to help diagnose the attacker's exact location. Quick detection and localization of an attack source helps avoid losing large amounts of data in an all-optical network. However, to detect attack sources, it is not necessary to put monitors on all nodes. Since not every wavelength on every link is in use all the time, we propose to use the idle wavelengths to set up diagnostic connections and obtain the information needed for diagnosis. We show that placing a relatively small number of monitors at some key nodes in a network is sufficient to achieve an acceptable level of performance. However, the monitor placement policy, routing policy, and diagnosis method are challenging problems. In this paper, we first develop a monitor placement policy, a test connection policy, and a routing policy based on our definitions of the crosstalk attack and monitor node models. With these policies, we show that we can always detect and localize the malicious connections as long as there is no more than one malicious connection on each wavelength in the whole network. We then develop a scalable diagnosis method that can quickly localize the sources of such malicious attacks. Arun K. Somani is currently Jerry R. Junkins Chair Professor of Electrical and Computer Engineering at Iowa State University. He earned his MSEE and Ph.D. degrees in electrical engineering from McGill University, Montreal, Canada, in 1983 and 1985, respectively. He worked as a Scientific Officer for the Government of India, New Delhi, from 1974 to 1982. From 1985 to 1997, he was a faculty member at the University of Washington, Seattle, WA, where he was a Professor of EE and CSE from 1995 onwards. From 1997 to 2002, he served as David C. Nicholas Professor of Electrical and Computer Engineering at Iowa State University. Professor Somani's research interests are in the areas of fault-tolerant computing, computer communication and networks, wireless and optical networking, computer architecture, and parallel computer systems. Tao Wu received the B.S. and M.S.E.E. degrees in telecommunication engineering from the University of Electronic Science and Technology of China, Sichuan, China, in 1993 and 1996, respectively, and the Ph.D. degree in computer and electrical engineering from Iowa State University, Ames, in 2003. He is currently a Software Engineer with Microsoft Corporation. His research interests are in the area of WDM-based optical networking, network security, and image processing.  相似文献
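The localization principle can be illustrated on a single wavelength (a toy under the stated one-malicious-connection-per-wavelength condition; the paper's attack and monitor models, test connections, and policies are considerably more involved): with monitors at only some nodes, the attack source must lie between the last clean monitor and the first alarmed monitor along the suspect connection's path.

    def localize_attack(path, monitor_alarm):
        """Toy diagnosis over one wavelength.
        path: nodes traversed by the suspect connection, upstream first.
        monitor_alarm: node -> alarm flag, for monitored nodes only.
        Returns the path segment that must contain the attack source."""
        last_clean = None
        for i, node in enumerate(path):
            if node not in monitor_alarm:
                continue               # unmonitored node: no information
            if monitor_alarm[node]:
                start = 0 if last_clean is None else last_clean
                return path[start:i + 1]
            last_clean = i             # clean here: the source is further on
        return None                    # no alarm on this wavelength

    # Monitors at n1 (clean) and n4 (alarmed): the source lies on n1..n4.
    print(localize_attack(["n0", "n1", "n2", "n3", "n4"],
                          {"n1": False, "n4": True}))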
