Similar Articles
20 similar articles found (search time: 31 ms).
1.
With the proliferation of quad- and multi-core processors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, can deploy image-based virtualization platforms that enable agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consuming high-performance grid services. Rapid, exponential increases in dataset size are a constant concern in the medical and pharmaceutical industries, driven by the continual discovery and publication of large sequence databases. Traditional algorithms are not designed to handle such large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results against runs of the same test dataset on a virtual Grid platform built from virtual machines (virtualization). The implemented architecture, A3pviGrid, uses game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Because of the dynamic nature of distributed systems (discussed in our previous work), all interactions were kept local within a team, transparently. This paper is a proof of concept comparing an experimental mini-Grid test-bed with running the platform on virtual machines hosted on a local test cluster, which gives every agent its own execution platform and enables anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. This paper extends our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.
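To make the coalition idea concrete, here is a minimal Python sketch of greedy, utility-driven team formation over agents that advertise spare CPU and latency. The agent attributes, team size, and utility function are illustrative assumptions for this sketch, not the actual A3pviGrid algorithm.

```python
# Hypothetical sketch of greedy, utility-driven team (coalition) formation.
# The utility function and agent attributes are illustrative assumptions,
# not the A3pviGrid algorithm itself.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    cpu: float        # spare CPU capacity offered by the virtual node
    latency: float    # network latency to the team coordinator (ms)

@dataclass
class Team:
    members: list = field(default_factory=list)

    def utility(self) -> float:
        # More aggregate CPU is good; high intra-team latency is penalized.
        if not self.members:
            return 0.0
        cpu = sum(a.cpu for a in self.members)
        lat = max(a.latency for a in self.members)
        return cpu / (1.0 + lat / 100.0)

def form_teams(agents, team_size=3):
    """Each agent greedily joins the team whose utility it improves most."""
    teams = [Team() for _ in range((len(agents) + team_size - 1) // team_size)]
    for agent in sorted(agents, key=lambda a: -a.cpu):
        best, best_gain = None, float("-inf")
        for team in teams:
            if len(team.members) >= team_size:
                continue
            gain = Team(team.members + [agent]).utility() - team.utility()
            if gain > best_gain:
                best, best_gain = team, gain
        best.members.append(agent)
    return teams

if __name__ == "__main__":
    agents = [Agent(f"vm{i}", cpu=1.0 + i % 4, latency=10 * (i % 5)) for i in range(8)]
    for t in form_teams(agents):
        print([a.name for a in t.members], round(t.utility(), 2))
```

The greedy marginal-gain rule keeps every placement decision local to the candidate team, which is the locality property the abstract relies on for scalability.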

2.
VPM tokens: virtual machine-aware power budgeting in datacenters
Power consumption and cooling overheads are becoming increasingly significant for enterprise datacenters, affecting overall costs and the ability to extend resource capacities. To help mitigate these issues, active power management technologies are being deployed aggressively, including power budgeting, which enables improved power provisioning and can address critical periods when power delivery or cooling capabilities are temporarily reduced. Given the use of virtualization to encapsulate application components into virtual machines (VMs), however, such power management capabilities must address the interplay between budgeting physical resources and the performance of the virtual machines used to run these applications. This paper proposes a set of management components and abstractions for use by software power budgeting policies. The key idea is to manage power from a VM-centric point of view, where the goal is to be aware of global utility tradeoffs between different virtual machines (and their applications) when maintaining power constraints for the physical hardware on which they run. Our approach to VM-aware power budgeting uses multiple distributed managers integrated into the VirtualPower Management (VPM) framework whose actions are coordinated via a new abstraction, termed VPM tokens. An implementation with the Xen hypervisor illustrates technical benefits of VPM tokens that include up to 43% improvements in global utility, highlighting the ability to dynamically improve cluster performance while still meeting power budgets. We also demonstrate how VirtualPower based budgeting technologies can be leveraged to improve datacenter efficiency in the context of cooling infrastructure management.
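As a rough illustration of budgeting power from a VM-centric point of view, the sketch below hands out a cluster power budget in small token-sized increments, always to the VM whose utility gains most from the next token. The log-shaped utility curves, per-VM weights, and token size are invented placeholders; the real VPM token mechanism and its coordination across distributed managers are more elaborate.

```python
# Toy greedy "budget as tokens" allocation: give each power token to the VM
# with the highest marginal utility. Utility curves and weights are made up.
import heapq, math

def utility(weight, watts):
    return weight * math.log1p(watts)   # diminishing returns per VM

def allocate_tokens(vms, budget_watts, token_watts=5.0):
    """Greedy marginal-utility allocation of the power budget in token units."""
    alloc = {name: 0.0 for name, _ in vms}
    # Max-heap keyed on the marginal utility of the *next* token for each VM.
    heap = [(-(utility(w, token_watts) - utility(w, 0.0)), name, w) for name, w in vms]
    heapq.heapify(heap)
    for _ in range(int(budget_watts // token_watts)):
        neg_gain, name, w = heapq.heappop(heap)
        alloc[name] += token_watts
        cur = alloc[name]
        gain = utility(w, cur + token_watts) - utility(w, cur)
        heapq.heappush(heap, (-gain, name, w))
    return alloc

if __name__ == "__main__":
    vms = [("web", 3.0), ("batch", 1.0), ("db", 2.0)]   # (name, priority weight)
    print(allocate_tokens(vms, budget_watts=200))
```

Because the per-VM utilities are concave, the greedy token-by-token rule maximizes total utility for a given budget, which mirrors the global-utility objective stated in the abstract.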

3.
Taking advantage of distributed storage and virtualization technology, cloud storage systems provide virtual machine clients with customizable storage services. They can be divided into two types: distributed file systems and block-level storage systems. Existing block-level storage systems have two disadvantages: first, some of them are tightly coupled with their cloud computing environments, making them hard to extend to other cloud computing platforms; second, the volume server is a bottleneck that seriously affects the performance and reliability of the whole system. In this paper we present ORTHRUS, a lightweight block-level storage system for clouds based on virtualization technology. We first design an architecture with multiple volume servers and its workflows, which improves system performance and avoids the single volume server bottleneck. Second, we propose a Listen-Detect-Switch mechanism for ORTHRUS to deal with volume server failures. Finally, we design a strategy that dynamically balances load between the volume servers: we characterize machine capability and load with a black-box model and implement the dynamic load balancing strategy with a genetic algorithm. Extensive experimental results show that the aggregated I/O throughput of ORTHRUS is significantly improved (approximately twice that of a single-volume-server configuration), and that both I/O throughput and IOPS are further improved (by about 1.8 and 1.2 times, respectively) by our dynamic load balancing strategy.
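The Listen-Detect-Switch idea can be pictured as a small heartbeat-driven failover loop on the client side, sketched below. The server names, timeout value, and heartbeat source are assumptions for illustration, not ORTHRUS internals.

```python
# Illustrative Listen-Detect-Switch style failover: the client listens for
# volume-server heartbeats, detects a missed-heartbeat timeout, and switches
# its I/O path to the next healthy server. Timings and names are assumptions.
import time

HEARTBEAT_TIMEOUT = 3.0   # seconds without a heartbeat before a server is suspect

class VolumePath:
    def __init__(self, servers):
        self.servers = servers                 # ordered list of volume servers
        self.active = servers[0]
        self.last_seen = {s: time.time() for s in servers}

    def on_heartbeat(self, server):            # "listen"
        self.last_seen[server] = time.time()

    def detect_failure(self):                  # "detect"
        return time.time() - self.last_seen[self.active] > HEARTBEAT_TIMEOUT

    def switch(self):                          # "switch"
        healthy = [s for s in self.servers
                   if time.time() - self.last_seen[s] <= HEARTBEAT_TIMEOUT]
        if healthy:
            self.active = healthy[0]
        return self.active

if __name__ == "__main__":
    path = VolumePath(["vs-1", "vs-2", "vs-3"])
    path.on_heartbeat("vs-2")                  # vs-1 stays silent
    path.last_seen["vs-1"] -= 10               # simulate a stale heartbeat
    if path.detect_failure():
        print("switching to", path.switch())
```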

4.
With the advent of cloud and virtualization technologies and the integration of various computer communication technologies, today's computing environments can provide virtualized, high-quality services, and network traffic has continued to grow remarkably. Software-defined networking and network function virtualization (SDN/NFV) enhance infrastructure agility, so network operators and service providers can program their own network functions on vendor-independent hardware substrates. For SDN/NFV to be profitable, however, it must provide new resource sharing and monitoring procedures among regionally distributed, virtualized computers. In this paper, we propose a practical measurement framework for network performance based on an NFV monitoring architecture. We also propose an end-to-end connectivity support platform spanning whole SDN/NFV networks, an issue that has not yet been fully addressed.

5.
Energy efficiency and high computing power are basic design considerations across modern computing solutions because of concerns such as system performance, operational cost, and environmental impact. Desktop Grid and Volunteer Computing Systems (DGVCS), so-called opportunistic infrastructures, offer computational power at low cost by harvesting idle computing cycles of existing commodity computing resources. Besides allowing the end-user offering to be customized, virtualization is considered one of the key techniques for reducing energy consumption in large-scale systems, and it contributes to system scalability. This paper presents an energy-efficient approach for opportunistic infrastructures based on task consolidation and customization of virtual machines. Experimental results with single desktops and complete computer rooms show that virtualization significantly improves the energy efficiency of opportunistic grids compared with dedicated computing systems, without disturbing the end user.
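A simple way to picture the task-consolidation part is first-fit-decreasing packing of VM workloads onto as few desktops as possible, so that idle machines can be left asleep. The capacities and loads below are illustrative; the paper's actual consolidation policy may differ.

```python
# Minimal first-fit-decreasing consolidation sketch: pack VM workloads onto as
# few desktops as possible so unused machines can sleep. Loads are illustrative.
def consolidate(loads, capacity=1.0):
    """Return a list of desktops, each a list of task loads (CPU fractions)."""
    desktops = []
    for load in sorted(loads, reverse=True):         # largest tasks first
        for d in desktops:
            if sum(d) + load <= capacity:             # first desktop with room
                d.append(load)
                break
        else:
            desktops.append([load])                   # wake up one more desktop
    return desktops

if __name__ == "__main__":
    tasks = [0.6, 0.3, 0.3, 0.2, 0.5, 0.1]
    placement = consolidate(tasks)
    print(f"{len(placement)} desktops active:", placement)
```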

6.
Cloud computing is an emerging technology that is being widely considered for resource utilization in various research areas. One of its main advantages is flexibility in computing resource allocation: many computing cycles can be made ready in a very short time and can be smoothly reallocated between tasks. Because of this, many private companies are entering the new business of reselling their idle computing cycles, and research institutes have started building their own cloud systems for their research purposes. In this paper, we introduce vcluster, a framework for a virtual cluster system that is capable of utilizing computing resources from heterogeneous clouds and provides a uniform view of computing resource management. vcluster is an IaaS (Infrastructure as a Service) based cloud resource management system that distributes batch jobs to multiple clouds depending on the status of the job queue and the system pool. The main design philosophy behind vcluster is to be cloud and batch-system agnostic, which is achieved through plugins; this mitigates the complexity of integrating heterogeneous clouds. In the pilot system development, we use FermiCloud and Amazon EC2, a private and a public cloud system, respectively. We also discuss the features and functionality that must be considered in virtual cluster systems.
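The plugin-based, cloud-agnostic design can be sketched as a tiny dispatcher that talks to every back-end through one small interface and routes each batch job by queue depth and remaining slots. The class and method names here are invented for illustration and are not vcluster's actual API.

```python
# Hedged sketch of a plugin-based dispatcher in the spirit of vcluster: cloud
# back-ends implement one small interface, and the dispatcher routes each job
# by queue depth and remaining slots. Names are invented, not vcluster's API.
from abc import ABC, abstractmethod

class CloudPlugin(ABC):
    @abstractmethod
    def queue_depth(self) -> int: ...
    @abstractmethod
    def free_slots(self) -> int: ...
    @abstractmethod
    def submit(self, job: str) -> None: ...

class FakeCloud(CloudPlugin):
    def __init__(self, name, slots):
        self.name, self.slots, self.queue = name, slots, []
    def queue_depth(self): return len(self.queue)
    def free_slots(self): return max(self.slots - len(self.queue), 0)
    def submit(self, job): self.queue.append(job)

def dispatch(job, clouds):
    """Pick the shortest queue among clouds with free slots; overflow otherwise."""
    with_room = [c for c in clouds if c.free_slots() > 0]
    pool = with_room if with_room else clouds
    target = min(pool, key=lambda c: c.queue_depth())
    target.submit(job)
    return target.name

if __name__ == "__main__":
    clouds = [FakeCloud("private", slots=2), FakeCloud("public", slots=100)]
    for i in range(5):
        print(f"job{i} ->", dispatch(f"job{i}", clouds))
```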

7.
Large-scale clusters based on virtualization technologies are widely used in many areas, including data centers and cloud computing environments, but saving energy is a major challenge in building a "green cluster". Previous work does not fully address this challenge: local approaches focus on saving energy in the components of a single workstation without a global view of the whole cluster, while existing cluster-wide energy-saving techniques can only be applied to homogeneous workstations and specific applications. This paper describes the design and implementation of a novel scheme, called Magnet, that uses live migration of virtual machines to transfer load among the nodes of a multi-layer ring-based overlay. By treating all cluster nodes as a whole on top of virtualization technologies, the scheme can greatly reduce power consumption, and it can be applied to both homogeneous and heterogeneous servers. Experimental measurements show that the new method can reduce power consumption by up to 74.8% over the baseline with an adjustable, acceptable overhead. The effectiveness and performance insights are also verified analytically.
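A stripped-down version of consolidation by migration looks like the sketch below: drain lightly loaded nodes onto nodes with headroom and power off whatever becomes empty. The thresholds and the flat node list are assumptions; Magnet itself organizes nodes on a multi-layer ring-based overlay and uses live VM migration rather than this toy bookkeeping.

```python
# Simplified consolidation-by-migration sketch: drain lightly loaded nodes onto
# nodes with headroom, then power off the nodes that become empty.
LOW, HIGH = 0.25, 0.85          # utilization thresholds (fractions of capacity)

def consolidate(nodes):
    """nodes: {name: [vm_load, ...]}. Returns (migrations, nodes_to_power_off)."""
    util = lambda n: sum(nodes[n])
    drain = {n for n in nodes if 0 < util(n) <= LOW}     # candidates to empty
    targets = [n for n in nodes if n not in drain]
    migrations = []
    for src in sorted(drain, key=util):
        for vm in list(nodes[src]):
            dst = next((t for t in sorted(targets, key=util, reverse=True)
                        if util(t) + vm <= HIGH), None)
            if dst is None:
                break
            nodes[src].remove(vm)
            nodes[dst].append(vm)
            migrations.append((vm, src, dst))
    off = [n for n, vms in nodes.items() if not vms]
    return migrations, off

if __name__ == "__main__":
    cluster = {"n1": [0.1, 0.1], "n2": [0.5, 0.2], "n3": [0.05]}
    moves, off = consolidate(cluster)
    print("migrations:", moves)
    print("power off:", off)
```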

8.
Management is an important challenge for future enterprises. Previous work has addressed platform management (e.g., power and thermal management) separately from virtualization management (e.g., virtual machine (VM) provisioning and application performance). Coordinating the actions taken by these different management layers is important and beneficial for reasons of performance, stability, and efficiency. Such coordination, in addition to working well with existing multi-vendor solutions, also needs to be extensible to support future management solutions that may operate on different sensors and actuators. In response to these requirements, this paper proposes vManage, a solution that loosely couples platform and virtualization management and facilitates coordination between them in data centers. Our solution comprises registry and proxy mechanisms that provide unified monitoring and actuation across the platform and virtualization domains, and coordinators that provide policy execution for better VM placement and runtime management, including a formal approach to protecting system stability from inefficient management actions. The solution is instantiated in a Xen environment through a platform-aware virtualization manager at a cluster management node and a virtualization-aware platform manager on each server. Experimental evaluations using enterprise benchmarks show that, compared to traditional solutions, vManage achieves additional power savings (10% lower power) with significantly improved service-level guarantees (71% fewer violations) and stability (54% fewer VM migrations), at low overhead.

9.
The increases in multi-core processor parallelism and in the flexibility of many-core accelerator processors, such as GPUs, have turned traditional SMP systems into hierarchical, heterogeneous computing environments. Fully exploiting these improvements in parallel system design remains an open problem. Moreover, most current tools for developing parallel applications for hierarchical systems concentrate on the use of only a single processor type (e.g., accelerators) and do not coordinate several heterogeneous processors. Here, we show that making use of all of the heterogeneous computing resources can significantly improve application performance. Our approach, which consists of optimizing applications at run time by efficiently coordinating application task execution on all available processing units, is evaluated in the context of replicated dataflow applications. The proposed techniques were developed and implemented in an integrated run-time system targeting both intra- and inter-node parallelism. Experimental results with a real-world, complex biomedical application show that our approach nearly doubles the performance of the GPU-only implementation on a distributed heterogeneous accelerator cluster.
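The coordination idea, letting every processor type pull work so that faster devices naturally take more of it, can be sketched with a shared task queue and two workers of different speeds. The speeds and task costs below are invented; the paper's run-time system handles real intra- and inter-node dataflow stages rather than simulated sleeps.

```python
# Toy demand-driven scheduler for heterogeneous processors: CPU and GPU
# "workers" pull tasks from one shared queue, so the faster device ends up
# processing more of them. Speeds and task costs are invented for illustration.
import queue, threading, time

def worker(name, speed, tasks, done):
    while True:
        try:
            cost = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(cost / speed)          # simulate processing at device speed
        done.append((name, cost))

if __name__ == "__main__":
    tasks, done = queue.Queue(), []
    for cost in [0.02] * 20:
        tasks.put(cost)
    threads = [threading.Thread(target=worker, args=("gpu", 8.0, tasks, done)),
               threading.Thread(target=worker, args=("cpu", 1.0, tasks, done))]
    for t in threads: t.start()
    for t in threads: t.join()
    print("gpu handled", sum(1 for n, _ in done if n == "gpu"), "of", len(done), "tasks")
```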

10.
This paper presents a data management solution that allows fast Virtual Machine (VM) instantiation and efficient run-time execution to support VMs as execution environments in Grid computing. It is based on novel distributed file system virtualization techniques and is unique in that: (1) it provides on-demand cross-domain access to VM state for unmodified VM monitors; (2) it enables private file system channels for VM instantiation through secure tunneling and session-key based authentication; (3) it supports user-level and write-back disk caches, per-application caching policies, and middleware-driven consistency models; and (4) it leverages application-specific meta-data associated with files to expedite data transfers. The paper reports on its performance in wide-area setups using VMware-based VMs. Results show that the solution delivers performance over 30% better than native NFS and, with warm caches, brings application-perceived overheads below 10% compared to a local-disk setup. The solution also allows a VM with a 1.6 GB virtual disk and 320 MB of virtual memory to be cloned within 160 seconds for the first clone and within 25 seconds for subsequent clones. Ming Zhao is a PhD candidate in the Department of Electrical and Computer Engineering and a member of the Advanced Computing and Information Systems (ACIS) Laboratory at the University of Florida. He received his BE and ME degrees from Tsinghua University. His research interests are in the areas of computer architecture, operating systems, and distributed computing. Jian Zhang is a PhD student in the Department of Electrical and Computer Engineering at the University of Florida and a member of the Advanced Computing and Information Systems (ACIS) Laboratory. Her research interest is in virtual machines and Grid computing. She is a member of the IEEE and the ACM. Renato J. Figueiredo received the B.S. and M.S. degrees in Electrical Engineering from the Universidade de Campinas in 1994 and 1995, respectively, and the Ph.D. degree in Electrical and Computer Engineering from Purdue University in 2001. From 2001 until 2002 he was on the faculty of the School of Electrical and Computer Engineering of Northwestern University at Evanston, Illinois. In 2002 he joined the Department of Electrical and Computer Engineering of the University of Florida as an Assistant Professor. His research interests are in the areas of computer architecture, operating systems, and distributed systems.

11.
Crop nutrient management strategies for paddy-upland rotation systems
Paddy-upland rotation is one of the major crop production systems in China and is distributed mainly in the Yangtze River basin. Alternation between wet and dry conditions of crops and soil across seasons is the distinctive feature of this system; it also drives alternating changes in soil physical, chemical, and biological properties between crop seasons, forming a unique farmland ecosystem. The main problems facing the system include declining or stagnant productivity, increasingly scarce irrigation water, unsound nutrient management, low resource-use efficiency, and environmental pollution. Based on a review of the characteristics of and problems in paddy-upland rotation systems, this paper proposes an integrated nutrient resource management strategy to resolve the conflicts among nutrient inputs, crop production, and environmental risk in the system. The core of the strategy is to regulate nutrients from the perspective of the whole rotation system; to use all nutrient resources (chemical fertilizers, organic manures, and environmental nutrients) in an integrated way so that nutrient supply matches crop demand; to adopt management techniques suited to the characteristics of each nutrient resource; and to combine nutrient management with agronomic practices such as water-saving and high-yield cultivation.

12.
13.
Digital, information-based, and intelligent management of ecosystems, together with an overall improvement of the eco-environmental quality of the Guangdong-Hong Kong-Macao Greater Bay Area, is an inevitable trend in building a world-class bay area. Aiming at intelligent management of urban-agglomeration ecosystems, this study systematically integrates all kinds of eco-environment-related data resources into a data and decision-support system for ecosystem management and, on that basis, builds an intelligent ecological management platform. Centered on the management logic of ecosystem elements and functions, an ecosystem management workflow was constructed: (1) precisely analyze eco-environmental problems and determine the scale and scope at which they occur, classifying and characterizing them; (2) establish ecological management objectives and formulate appropriate management strategies; (3) perform ecosystem-service trade-offs against the current state and the baseline, and improve ecosystem quality through ecological restoration projects; (4) monitor ecosystem changes through an environmental Internet of Things, and adjust and improve the ecosystem management plan in time. Given the multi-scale, multi-level, and complex nature of urban-agglomeration ecosystems, management decisions should fully balance management objectives and ecosystem services, taking the benefits of all types of ecosystem services into account; demonstration ecological projects are needed to verify the feasibility, applicability, and synergy of management schemes; ecological management objectives should be continuously optimized and adjusted under a continuous-improvement philosophy; and the feedback and experience gained during implementation should be continuously accumulated, distilled, and summarized. To meet the need to modernize ecological management institutions and capabilities, big data, geographic information systems (GIS), the Web, and other information technologies are integrated to build an intelligent ecological management platform for the Greater Bay Area, enabling information sharing among multiple stakeholders, opening the "black box" of management decision-making, and providing a reliable and feasible solution for modernizing eco-environmental management. The constructed ecosystem management workflow and strategies embed knowledge fully in the decision-making process, can serve the ecological civilization construction of the Greater Bay Area, and promote sustainable, high-quality development.

14.
Recent advances in processor, networking, and software technologies have made distributed computing a reality in today's world. Distributed systems offer many advantages, ranging from higher performance to the effective utilization of physically dispersed resources, and many diverse application domains can benefit from exploiting the principles of distributed computing. Information filtering is one such domain. In this article, we present the design of a homogeneous, distributed, multi-agent information filtering system called D-SIFTER. D-SIFTER is based on the language-dependent model of Java RMI. The detailed design process and various experiments carried out using D-SIFTER are also described. The results indicate that distributed inter-agent collaboration improves overall filtering performance.

15.
Recent advances in hardware and software virtualization offer unprecedented management capabilities for the mapping of virtual resources to physical resources. It is highly desirable to further create a “service hosting abstraction” that allows application owners to focus on service level objectives (SLOs) for their applications. This calls for a resource management solution that achieves the SLOs for many applications in response to changing data center conditions and hides the complexity from both application owners and data center operators. In this paper, we describe an automated capacity and workload management system that integrates multiple resource controllers at three different scopes and time scales. Simulation and experimental results confirm that such an integrated solution ensures efficient and effective use of data center resources while reducing service level violations for high priority applications.
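One way to picture controllers acting at different scopes and time scales is the sketch below: a fast per-application loop nudges its share toward a utilization target, a slower node-level arbiter rescales shares to fit capacity, and the slowest pool-level loop flags overloaded hosts. All gains, periods, and thresholds are assumptions and do not reproduce the paper's controller designs.

```python
# Illustrative controllers at three scopes and time scales; parameters invented.
def app_controller(share, utilization, target=0.7, gain=0.5):
    """Fast loop (seconds): nudge the CPU share toward the utilization target."""
    return max(0.05, share + gain * (utilization - target) * share)

def node_arbiter(shares, capacity=1.0):
    """Medium loop (tens of seconds): scale shares down if they oversubscribe."""
    total = sum(shares.values())
    if total <= capacity:
        return shares
    return {app: s * capacity / total for app, s in shares.items()}

def pool_planner(node_totals, high=0.9):
    """Slow loop (minutes): pick overloaded hosts as migration sources."""
    return [n for n, total in node_totals.items() if total > high]

if __name__ == "__main__":
    shares = {"web": 0.5, "db": 0.4, "batch": 0.3}
    measured = {"web": 0.9, "db": 0.6, "batch": 0.4}
    shares = {a: app_controller(shares[a], measured[a]) for a in shares}
    shares = node_arbiter(shares)
    print(shares, pool_planner({"host1": sum(shares.values()), "host2": 0.95}))
```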

16.

Vehicular Cloud Computing (VCC) is one of the technologies for increasing the safety and welfare of people on the road. It brings the advantages of cloud computing to Vehicular Ad Hoc Networks (VANETs), and by exploiting modern equipment alongside current vehicles it can play a significant role in smart transportation systems. Despite the potential of this technology, effective methods for managing the available resources and providing the expected quality of service, which are essential for such an environment, are not yet available as they should be. One of the most important barriers appears to be the resource constraints and the very high dynamics of vehicles in VCC. In this article, based on virtualization and taking these characteristics of the environment into account, we propose simple ways to manage resources better and improve the quality of service. By providing a flexible data structure to control the important data in the environment effectively, we achieve better simulation results than previous methods. To illustrate the impact of the proposed methods, we compared them with some of the most important methods in this context, using SUMO 1.2.0 and MATLAB R2019a for the simulations. The simulation results indicate that the proposed methods outperform previous methods in terms of resource efficiency, Quality of Service (QoS), and load balancing.


17.
Data centers, as resource providers, are expected to deliver on performance guarantees while optimizing resource utilization to reduce cost. Virtualization techniques provide the opportunity of consolidating multiple separately managed containers of virtual resources on underutilized physical servers. A key challenge that comes with virtualization is the simultaneous on-demand provisioning of shared physical resources to virtual containers and the management of their capacities to meet service-quality targets at the least cost. This paper proposes a two-level resource management system to dynamically allocate resources to individual virtual containers. It uses local controllers at the virtual-container level and a global controller at the resource-pool level. An important advantage of this two-level control architecture is that it allows independent controller designs for separately optimizing the performance of applications and the use of resources. Autonomic resource allocation is realized through the interaction of the local and global controllers. A novelty of the local controller designs is their use of fuzzy logic-based approaches to efficiently and robustly deal with the complexity and uncertainties of dynamically changing workloads and resource usage. The global controller determines the resource allocation based on a proposed profit model, with the goal of maximizing the total profit of the data center. Experimental results obtained through a prototype implementation demonstrate that, for the scenarios under consideration, the proposed resource management system can significantly reduce resource consumption while still achieving application performance targets.
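A minimal flavor of the fuzzy-logic local controller: fuzzify the observed CPU utilization into low / ok / high, apply three rules, and defuzzify into a relative change of the container's allocation. The membership breakpoints and rule outputs below are illustrative assumptions, not the controller described in the paper.

```python
# Tiny fuzzy-logic resource controller sketch: fuzzify utilization, apply three
# rules, defuzzify to an allocation adjustment. Breakpoints and rules are made up.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_adjust(utilization):
    low  = tri(utilization, -0.1, 0.2, 0.5)
    ok   = tri(utilization,  0.3, 0.6, 0.8)
    high = tri(utilization,  0.7, 1.0, 1.3)
    # Rules: low -> shrink 20%, ok -> keep, high -> grow 30% (weighted average).
    weights, actions = [low, ok, high], [-0.20, 0.0, 0.30]
    total = sum(weights) or 1.0
    return sum(w * a for w, a in zip(weights, actions)) / total

if __name__ == "__main__":
    for u in (0.25, 0.6, 0.92):
        print(f"utilization {u:.2f} -> adjust allocation by {fuzzy_adjust(u):+.1%}")
```

A global controller, as the abstract describes, would then take these per-container requests and arbitrate them against a pool-wide profit or cost model; that arbitration step is not sketched here.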

18.
Virtualization technology promises better isolation and consolidation in traditional servers. However, with the VMM (virtual machine monitor) layer involved, a virtualization system changes the architecture of the traditional software stack, which introduces limitations in resource allocation. Non-uniform VCPU (virtual CPU) to PCPU (physical CPU) mapping, arising both from the configuration and deployment of virtual machines and from the dynamic runtime behavior of applications, causes different percentages of processor allocation within the same physical machine, so the VCPUs mapped to these PCPUs receive asymmetric performance. The guest OS, however, is agnostic to this non-uniformity: assuming that all VCPUs perform equally, it can carry out sub-optimal policies when allocating virtual resources to applications, and application runtime systems can make the same mistakes. Our focus in this paper is to understand the performance implications of non-uniform VCPU-PCPU mapping in a virtualization system. Based on real measurements of a virtualization system with state-of-the-art multi-core processors running different commercial and emerging applications, we demonstrate that the presence of the non-uniform mapping has negative impacts on the predictability of application performance. This study aims to provide timely and practical insights into the problem of non-uniform VCPU mapping when virtual machines are deployed and configured in emerging clouds.
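From inside a guest, the asymmetry can be exposed with a crude probe: run the same fixed amount of work on several worker processes and compare their wall-clock times. The sketch below does not pin workers to particular VCPUs and ignores hypervisor caps, so the numbers only hint at relative differences; it is not the measurement methodology of the paper.

```python
# Rough probe: time the same fixed amount of work on several processes to hint
# at asymmetric effective VCPU speeds. No pinning or cap control is done here.
import multiprocessing as mp
import time

def spin(iterations):
    start = time.perf_counter()
    x = 0
    for _ in range(iterations):
        x += 1
    return time.perf_counter() - start

if __name__ == "__main__":
    n, iters = mp.cpu_count(), 5_000_000
    with mp.Pool(processes=n) as pool:
        times = pool.map(spin, [iters] * n)
    fastest = min(times)
    for i, t in enumerate(times):
        print(f"worker {i}: {t:.3f}s (x{t / fastest:.2f} of fastest)")
```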

19.
黄初龙, 于昌平, 高兵, 黄云凤. 生态学报 (Acta Ecologica Sinica), 2016, 36(22): 7267-7278
Studying the linkage between the natural water cycle and the social water cycle is a prerequisite for formulating systematic water-metabolism countermeasures. From the perspective of water-resource metabolic processes and patterns, material flow analysis (MFA) was used to dissect the paths and quantities of the coupled metabolism of physical water and virtual water in dry years in a water-scarce city located in the rain-shadow area of the subtropical monsoon climate zone. Process and structure indicators required for metabolic-efficiency evaluation were extracted, and an indicator system for evaluating urban water-resource metabolic efficiency was constructed on the principle of optimizing social, economic, and eco-environmental benefits. Weights were assigned with the analytic hierarchy process (AHP), and the coupled metabolic efficiency of physical and virtual water in Xiamen in dry years over the past 10 years was evaluated. The results show that the water-resource metabolic efficiency of Xiamen has been improving at an accelerating rate, driven mainly by water-use-efficiency indicators and by water-metabolism process and structure indicators. This indicates that the fundamental countermeasure for improving water-resource metabolic efficiency lies in improving water-use behavior and the processes and structure of water use. Scenario analysis was used to design water-resource metabolic-efficiency scenarios under different combinations of the leading driving indicators, which improves the operability of the schemes, and water-resource management countermeasures can be formulated on the basis of these indicators. Extracting indicators from the MFA results also enriches the theory of indicator-system construction.
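For readers unfamiliar with the AHP weighting step mentioned above, the sketch below derives indicator weights from a pairwise comparison matrix with the geometric-mean approximation and checks the consistency ratio. The 3x3 matrix is a made-up example, not the paper's actual judgments.

```python
# Worked AHP example: weights from a pairwise comparison matrix via the
# geometric-mean approximation, plus a consistency-ratio check.
import math

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # random consistency index

def ahp_weights(matrix):
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]   # row geometric means
    total = sum(gm)
    weights = [g / total for g in gm]
    # Approximate the principal eigenvalue to judge consistency.
    lam = sum(sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1) if n > 2 else 0.0
    cr = ci / RI[n] if RI.get(n, 0) else 0.0
    return weights, cr

if __name__ == "__main__":
    # Pairwise preferences among three indicator groups (Saaty's 1-9 scale).
    m = [[1.0, 3.0, 5.0],
         [1 / 3, 1.0, 2.0],
         [1 / 5, 1 / 2, 1.0]]
    w, cr = ahp_weights(m)
    print("weights:", [round(x, 3) for x in w], "CR:", round(cr, 3))
```

A consistency ratio below about 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.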

20.
The evolution of advanced manufacturing technologies and new manufacturing paradigms has enriched the computer integrated manufacturing (CIM) methodology. These advances place greater demands on CIM integration technology and its supporting tools; one such demand is to provide CIM systems with better software architectures, more flexible integration mechanisms, and powerful support platforms. In this paper, we present an integrating infrastructure for CIM implementation in manufacturing enterprises that forms an integrated automation system. A research prototype of the integrating infrastructure has been developed for the development, integration, and operation of an integrated CIM system. It is based on a client/server structure and employs object-oriented and agent technologies. System openness, scalability, and maintainability are ensured by conforming to international standards and by using effective system design software and management tools.

