Similar Documents
20 similar documents found
1.
Power management is becoming very important in data centers, and Green Computing has been proposed as a way to bring power management to cloud computing. Cloud computing is a promising new technique that appeals to many large companies, but due to its dynamic structure and its on-line service properties it differs from current data centers in terms of power management. To better manage the power consumption of web services in cloud computing under dynamic user locations and behaviors, we propose a power budgeting design at the logical level based on distribution trees. By setting up multiple trees, or a forest, we can differentiate and analyze the effect of workload types and Service Level Agreements (SLAs, e.g. response time) in terms of power characteristics. On this basis, we introduce classified power capping for different services as the control reference to maximize power saving under mixed workloads.
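To make the distribution-tree idea concrete, here is a minimal sketch of how a data-center budget might be split recursively among service groups in proportion to an SLA-derived weight. The node names, weights, and proportional rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: propagate a power budget down a distribution tree,
# weighting each child by an SLA-derived priority (illustrative only).

class Node:
    def __init__(self, name, sla_weight=1.0, children=None):
        self.name = name
        self.sla_weight = sla_weight   # stricter SLA -> larger weight
        self.children = children or []
        self.cap = 0.0                 # assigned power cap in watts

def distribute(node, budget):
    """Recursively split `budget` among children in proportion to weight."""
    node.cap = budget
    total = sum(c.sla_weight for c in node.children)
    for child in node.children:
        distribute(child, budget * child.sla_weight / total)

web = Node("web", sla_weight=2.0)      # tight response-time SLA
batch = Node("batch", sla_weight=1.0)  # tolerant background work
distribute(Node("root", children=[web, batch]), budget=9000.0)
print(web.cap, batch.cap)              # 6000.0 3000.0
```

Classified power capping would then use each leaf's cap as the control reference for its service class.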

2.
Cloud computing should inherently support various types of data-intensive workloads with different storage access patterns, which makes a high-performance storage system an important component of the Cloud. Emerging flash devices such as solid state drives (SSDs) are a viable choice for building high performance computing (HPC) cloud storage systems that can serve fine-grained data access patterns. However, the price per bit of SSDs is still higher than that of HDDs. This study proposes an optimized progressive file layout (PFL) method that leverages the advantages of SSDs in a parallel file system such as Lustre, so that small-file I/O performance is significantly improved. A PFL can dynamically adjust chunk sizes and stripe patterns according to the observed I/O traffic. Extensive experimental results show that this approach (building a hybrid storage system from a combination of SSDs and HDDs) achieves balanced throughput over mixed I/O workloads consisting of large and small file access patterns.
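The following sketch illustrates the kind of layout decision a PFL-style policy makes: small or randomly accessed files go to SSD stripes with small chunks, large streaming files to HDD stripes with big chunks. The thresholds, chunk sizes, and stripe widths here are assumptions, not Lustre/PFL defaults.

```python
# Illustrative PFL-style layout policy (parameters are assumptions).
SSD_CHUNK = 64 * 1024           # 64 KiB chunks suit small random I/O
HDD_CHUNK = 4 * 1024 * 1024     # 4 MiB chunks suit large sequential I/O

def choose_layout(file_size, small_io_ratio):
    """Return (device class, chunk size, stripe width) for observed traffic."""
    if file_size < (1 << 20) or small_io_ratio > 0.5:
        return ("ssd", SSD_CHUNK, 4)    # stripe across a few flash targets
    return ("hdd", HDD_CHUNK, 8)        # stripe wide across spindles
```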

3.
We present a technique that controls the peak power consumption of a high-density server by implementing a feedback controller that uses precise, system-level power measurement to periodically select the highest performance state while keeping the system within a fixed power constraint. A control theoretic methodology is applied to systematically design this control loop with analytic assurances of system stability and controller performance, despite unpredictable workloads and running environments. In a real server we are able to control power over a 1 second period to within 1 W and over an 8 second period to within 0.1 W. Conventional servers respond to power supply constraint situations by using simple open-loop policies to set a safe performance level in order to limit peak power consumption. We show that closed-loop control can provide higher performance under these conditions and implement this technique on an IBM BladeCenter HS20 server. Experimental results demonstrate that closed-loop control provides up to 82% higher application performance compared to open-loop control and up to 17% higher performance compared to a widely used ad-hoc technique.
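A minimal sketch of such a closed loop is shown below, assuming discrete performance states (P-states, with 0 the fastest) and a proportional control law; the gain and the watts-per-state figure are illustrative, not the paper's design.

```python
# One period of a proportional power-capping loop (illustrative sketch).
def control_step(p_state, measured_power, budget_w, k_p=0.5,
                 watts_per_state=10.0, max_state=7):
    """Move toward the highest performance state (lowest p_state) that
    keeps measured power at or below the budget."""
    error_w = measured_power - budget_w            # > 0 means over budget
    step = round(k_p * error_w / watts_per_state)  # assumed ~10 W per state
    return min(max_state, max(0, p_state + step))  # clamp to valid states
```

The paper's controller additionally carries analytic stability and performance guarantees that a hand-tuned proportional loop like this would not.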

4.
There are typically multiple heterogeneous servers providing various services in cloud computing, and their high power consumption increases the cost of running a data center. This raises the problem of reducing the power cost with tolerable performance degradation. In this paper, we optimize the tradeoff between performance and power consumption for multiple heterogeneous servers. We consider two problems: (1) optimal job scheduling with fixed service rates, and (2) joint optimal service speed scaling and job scheduling. For problem (1), we present the Karush-Kuhn-Tucker (KKT) conditions and provide a closed-form solution. For problem (2), both continuous and discrete speed scaling are considered; in the discrete case the feasible service rates are discrete and bounded. We formulate the problem as a mixed-integer nonlinear program (MINLP) and propose a distributed algorithm based on online value iteration, which has lower complexity than a centralized algorithm. Our approach provides an analytical way to manage the tradeoff between performance and power consumption. Simulation results show the gain from speed scaling and demonstrate the effectiveness and efficiency of the proposed algorithms.
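As a point of reference for problem (1), the sketch below implements the standard closed-form KKT split for routing a job stream across heterogeneous M/M/1 servers so as to minimize mean response time (a water-filling solution). The paper's exact queueing model may differ, so treat this as an illustrative analogue rather than the authors' result.

```python
import math

def kkt_split(total_rate, mus):
    """KKT water-filling: lambda_i = mu_i - s*sqrt(mu_i) over the active
    set, with s chosen so the assigned rates sum to total_rate."""
    if total_rate >= sum(mus):
        raise ValueError("offered load exceeds total capacity")
    active = list(range(len(mus)))
    while active:
        s = (sum(mus[i] for i in active) - total_rate) / \
            sum(math.sqrt(mus[i]) for i in active)
        lam = {i: mus[i] - s * math.sqrt(mus[i]) for i in active}
        if all(v >= 0.0 for v in lam.values()):
            return [lam.get(i, 0.0) for i in range(len(mus))]
        active.remove(min(active, key=lambda i: mus[i]))  # drop slowest server
    return [0.0] * len(mus)
```

Servers too slow to receive any load are successively dropped from the active set, mirroring how KKT complementary-slackness conditions zero out some variables.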

5.
The science cloud paradigm has been actively developed and investigated, but it still lacks a suitable system model to support growing scientific computation needs with high performance. This paper presents an effective provisioning model for science clouds, particularly for large-scale high throughput computing applications. In this model, we utilize job traces, applying a statistical method to pick the features most influential on application performance. With these features, the system determines where a VM is deployed (allocation) and which instance type is appropriate (provisioning). An adaptive evaluation step following each job execution enables our model to adapt to dynamic computing environments. We demonstrate performance gains by comparing the proposed model with other policies through experiments, and expect noticeable performance improvements as well as reduced resource-consumption costs.

6.
Cloud data centers often schedule heterogeneous workloads without considering energy consumption and carbon emissions. The tremendous energy consumption leads to high operational costs, reduces return on investment, and adds to the environmental carbon footprint. There is therefore a need for an energy-aware cloud system that schedules computing resources automatically, treating energy consumption as a first-class parameter. In this paper, an energy-efficient autonomic cloud system, SOCCER (Self-Optimization of Cloud Computing Energy-efficient Resources), is proposed for energy-efficient scheduling of cloud resources in data centers. The proposed work treats energy as a Quality of Service (QoS) parameter and automatically optimizes the efficiency of cloud resources by reducing energy consumption. The system has been evaluated in a real cloud environment, and the experimental results show that it reduces the energy consumption of cloud resources and utilizes them optimally.

7.
This paper studies a self-organized criticality model, the sandpile, for dynamically load-balancing tasks that arrive as Bags-of-Tasks in large-scale decentralized systems. The sandpile is designed as a decentralized agent system characterizing a cellular automaton, which works in a critical state at the edge of chaos. Depending on the state of the cellular automaton, assigning a new task to a resource may change nothing or may generate avalanches that reconfigure the state of the system. The abundance of such avalanches follows a power law in their size, a scale-invariant behavior that emerges without any tuning or control parameters: large (catastrophic) avalanches are very rare, while small ones occur very often. This emergent pattern can be adapted efficiently for non-clairvoyant scheduling, where tasks are load balanced across computing resources to maximize performance without assuming any knowledge of task features. The algorithm design is validated experimentally, showing that the sandpile finds near-optimal schedules by reacting differently to different workload and architecture conditions.
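A toy sandpile balancer in this spirit is sketched below: dropping a task on a cell may trigger an avalanche that diffuses load to neighboring resources. The 1-D ring topology, threshold, and toppling rule are assumptions for illustration, not the paper's automaton.

```python
# Toy sandpile load balancer (illustrative assumptions throughout).
def assign(loads, cell, task_cost, threshold=2.0):
    """Add a task's cost to `cell`, then topple any cell whose load exceeds
    its neighbors' average by `threshold`, letting avalanches cascade."""
    loads[cell] += task_cost
    unstable = [cell]
    while unstable:
        c = unstable.pop()
        left, right = (c - 1) % len(loads), (c + 1) % len(loads)
        avg = (loads[left] + loads[right]) / 2.0
        if loads[c] - avg > threshold:       # critical cell: topple
            excess = (loads[c] - avg) / 2.0
            loads[c] -= excess
            loads[left] += excess / 2.0      # grains spread to neighbors
            loads[right] += excess / 2.0
            unstable += [left, right, c]     # the avalanche may cascade
```

Most drops change nothing; occasionally a drop triggers a long cascade, which is exactly the power-law avalanche behavior the abstract describes.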

8.
Ramlan EI, Zauner KP. BioSystems, 2011, 105(1): 14-24
Despite an exponential increase in computing power over the past decades, present information technology falls far short of expectations in areas such as cognitive systems and micro robotics. Organisms demonstrate that information processing can be implemented in a radically different way from present technology, with clear advantages in power consumption, integration density, and real-time processing of ambiguous data. Accordingly, the question of whether the current silicon substrate and its associated computing paradigm are the most suitable approach to all types of computation has come to the fore. Macromolecular materials, so successfully employed by nature, possess uniquely promising properties as an alternative substrate for information processing. The two key features of macromolecules are their conformational dynamics and their self-assembly capabilities. The purposeful design of macromolecules capable of exploiting these features has proven challenging; however, for some groups of molecules it is increasingly practicable. We here introduce an algorithm capable of designing groups of self-assembling nucleic acid molecules with multiple conformational states. Evaluation using natural and artificially designed nucleic acid molecules favours this algorithm significantly over the probabilistic approach. Furthermore, the thermodynamic properties of the generated candidates are comparable to those of the customised trans-acting switching molecules reported in the laboratory.

9.
Workstation clusters are emerging as a general-purpose computing platform for workloads comprising both parallel and sequential applications. The scalability and flexibility typical of implicit coscheduling strategies make them a very promising solution to the scheduling needs of workstation clusters. In this paper we present a simulation study that compares 12 implicit coscheduling strategies, for a variety of workloads (including both parallel and sequential applications) and operating system schedulers, in terms of the performance they deliver to applications. Using a detailed simulator, we evaluate the different coscheduling alternatives across a variety of simulation scenarios and identify the set of strategies that deliver the best performance to all the applications composing typical cluster workloads. Moreover, we show that for schedulers providing immediate preemption, the best strategies are also the simplest to implement.

10.
Cloud servers consume different amounts of energy in different running states, and energy is wasted when cloud tasks are mismatched with cloud servers. Motivated by these facts, this paper investigates an energy-optimal method for the cloud computing center based on a priced timed automaton, which is used to model the running behavior of the cloud computing system. After introducing a matching matrix between cloud tasks and cloud resources, as well as a power matrix over the running states of cloud servers, we design a generation algorithm for the cloud system automaton based on given generation and reduction rules. We then propose another algorithm that solves the minimum-energy-path problem in the cloud system automaton, yielding an energy-optimal solution and an energy-optimal value for the cloud system. A case study and repeated experimental analyses show that our method is effective and feasible.
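Once the priced automaton is discretized into a state graph whose edges carry energy costs, the minimum-energy-path problem reduces to a shortest-path search. The sketch below uses plain Dijkstra over such a graph; the graph encoding is an assumption, standing in for the paper's automaton-specific algorithm.

```python
import heapq

def min_energy_path(graph, start, goal):
    """Dijkstra over a discretized priced automaton.
    graph: {state: [(next_state, edge_energy), ...]}.
    Returns (total energy, state path)."""
    pq = [(0.0, start, [start])]
    best = {}
    while pq:
        cost, state, path = heapq.heappop(pq)
        if state == goal:
            return cost, path          # first pop of goal is optimal
        if best.get(state, float("inf")) <= cost:
            continue                   # already reached more cheaply
        best[state] = cost
        for nxt, energy in graph.get(state, []):
            heapq.heappush(pq, (cost + energy, nxt, path + [nxt]))
    return float("inf"), []
```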

11.
A new cascade control system is presented that reproducibly keeps the cultivation part of recombinant protein production processes on its predetermined track. While the system directly controls the carbon dioxide production mass and rate along their setpoint profiles in fed-batch cultivation, it simultaneously keeps the specific biomass growth rates and the biomass profiles on their desired paths. The control scheme was designed and tuned in a virtual plant environment based on the industrial process control system SIMATIC PCS 7 (Siemens AG). Validation experiments show that simulations in this straightforward approach directly reflect the experimentally observed controller behaviour. Within the virtual plant environment, the cascade control proved considerably better than previously used control approaches, and it significantly improved the batch-to-batch reproducibility of the fermentations. Experimental tests confirmed that it is particularly suited to cultivation processes suffering from long response times and delays. The performance of the new controller is demonstrated in Escherichia coli fed-batch cultivations as well as in animal cell cultures with CHO cells. The technique is a simple and reliable alternative to more sophisticated model-supported controllers.
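The cascade structure can be pictured as two nested loops: an outer loop tracking the carbon dioxide production rate (CPR) profile, whose output trims the setpoint of an inner loop driving the feed pump. The sketch below uses generic PI controllers with made-up gains; it illustrates the architecture only, not the tuned scheme from the paper.

```python
class PI:
    """Textbook proportional-integral controller."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0
    def step(self, error, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

outer = PI(kp=0.8, ki=0.05)   # tracks the CPR setpoint profile
inner = PI(kp=2.0, ki=0.20)   # drives the feed pump (gains illustrative)

def cascade_step(cpr_setpoint, cpr_measured, feed_measured, dt):
    """Outer loop output becomes the inner loop's feed-rate setpoint."""
    feed_setpoint = outer.step(cpr_setpoint - cpr_measured, dt)
    return inner.step(feed_setpoint - feed_measured, dt)  # pump command
```

The nesting lets the fast inner loop reject pump disturbances before they reach the slow, delay-prone cultivation dynamics, which is why cascades suit processes with long response times.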

12.
Energy efficiency and high computing power are basic design considerations across modern computing solutions, driven by concerns such as system performance, operational cost, and environmental impact. Desktop Grid and Volunteer Computing Systems (DGVCS), so-called opportunistic infrastructures, offer computational power at low cost by harvesting idle computing cycles of existing commodity resources. Besides allowing the end-user offering to be customized, virtualization is considered one of the key techniques for reducing energy consumption in large-scale systems, and it contributes to system scalability. This paper presents an energy-efficient approach for opportunistic infrastructures based on task consolidation and customization of virtual machines. Experimental results with single desktops and complete computer rooms show that virtualization significantly improves the energy efficiency of opportunistic grids compared with dedicated computing systems, without disturbing the end user.

13.
With the continuous development of hardware and software, Graphics Processing Units (GPUs) have come into use for general-purpose computation, emerging as computational accelerators that dramatically reduce application execution time compared with CPUs. To achieve high computing performance, a GPU typically includes hundreds of computing units, and this high density of computing resources on a chip brings high power consumption. Power consumption has therefore become one of the most important problems in GPU development. This paper analyzes the energy consumption of parallel algorithms executed on GPUs and provides a method to evaluate their energy scalability. The parallel prefix sum is analyzed to illustrate the method for energy conservation, and energy scalability is evaluated experimentally using Sparse Matrix-Vector Multiply (SpMV). The results show that the optimal number of blocks, the memory choice, and task scheduling are the keys to balancing the performance and energy consumption of GPUs.
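For readers unfamiliar with the parallel prefix sum used as the case study, the sketch below shows the work-efficient up-sweep/down-sweep pattern that is typically mapped onto GPU thread blocks. This is a pure-Python rendering of the access pattern, not the paper's GPU kernel.

```python
def blelloch_scan(a):
    """Work-efficient exclusive prefix sum (Blelloch up-sweep/down-sweep).
    len(a) must be a power of two."""
    x, n, d = list(a), len(a), 1
    while d < n:                          # up-sweep: build reduction tree
        for i in range(d * 2 - 1, n, d * 2):
            x[i] += x[i - d]
        d *= 2
    x[n - 1] = 0                          # clear the root
    d //= 2
    while d >= 1:                         # down-sweep: distribute partials
        for i in range(d * 2 - 1, n, d * 2):
            x[i - d], x[i] = x[i], x[i] + x[i - d]
        d //= 2
    return x

assert blelloch_scan([1, 2, 3, 4]) == [0, 1, 3, 6]
```

On a GPU, each inner `for` loop becomes one parallel step over threads, so the block count and memory placement directly determine both runtime and energy, which is the trade-off the paper evaluates.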

14.
In this paper, an approach to estimating multiple biomass growth rates and biomass concentration is proposed for a class of aerobic bioprocesses characterized by on-line measurements of dissolved oxygen and carbon dioxide concentrations, together with off-line measurements of biomass concentration. The approach is based on adaptive observer theory and proceeds in two steps. In the first step, an adaptive estimator of two of the three biomass growth rates is designed. In the second step, the third biomass growth rate and the biomass concentration are estimated using two different adaptive estimators: one based on on-line measurements of dissolved oxygen concentration and off-line measurements of biomass concentration, the other needing only on-line measurements of the carbon dioxide concentration. Simulations demonstrated good performance of the proposed estimators under continuous and fed-batch conditions.
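A generic adaptive-observer update in this spirit is sketched below: the growth-rate estimate is corrected from the output (dissolved-oxygen) prediction error while the biomass estimate is propagated with error injection. The simplified one-state model and gains are assumptions for illustration, not the paper's estimator equations.

```python
# Generic adaptive observer step (illustrative model and gains).
def observer_step(mu_hat, x_hat, o2_err, dt, gamma=0.5, k=2.0):
    """One Euler step: adapt the growth-rate estimate mu_hat from the
    output error, then propagate the biomass estimate x_hat."""
    mu_hat += gamma * o2_err * dt                 # adaptation law
    x_hat += (mu_hat * x_hat + k * o2_err) * dt   # observer with injection
    return mu_hat, x_hat
```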

15.
Power management in large-scale computational environments can benefit significantly from predictive models, which provide information about the power consumption behavior of workloads before running them. Power consumption depends on the characteristics of both the machine and the workload, but combinational features such as the cache miss rate cannot be used because they are unavailable before the workload runs. Pre-execution power modeling therefore requires both machine-independent workload characteristics and workload-independent machine characteristics. In this paper the predictive modeling problem is tackled with a two-stage framework. In the first stage, a machine learning approach predicts single-threaded workload power consumption at a specific frequency; the second stage analytically scales this output to any intended thread/frequency configuration. Experimental results show that the proposed approach yields highly accurate predictions of workload power consumption, with an average error of 3.7 % on six different test platforms.
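The two-stage shape of the framework can be sketched as follows: stage 1 is any scikit-learn-style regressor trained on machine-independent workload features, and stage 2 applies an analytic scaling law. The cubic frequency term, the static-power figure, and the thread-scaling factor `alpha` are assumptions, not the paper's fitted model.

```python
# Two-stage power prediction sketch (stage-2 constants are assumptions).
F_REF = 2.0e9   # reference frequency the stage-1 model was trained at

def predict_power(model, features, threads, freq,
                  p_static=20.0, alpha=0.7):
    """Stage 1: learned single-thread power at F_REF.
    Stage 2: analytic scaling to a thread/frequency configuration."""
    p_ref = model.predict([features])[0]             # stage 1 (learned)
    dyn = (p_ref - p_static) * (freq / F_REF) ** 3   # dynamic power ~ f^3
    return p_static + dyn * (1 + alpha * (threads - 1))  # stage 2
```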

16.
There is growing incentive to reduce the power consumed by large-scale data centers that host online services such as banking, retail commerce, and gaming. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. By dynamically provisioning virtual machines, consolidating the workload, and turning servers on and off as needed, data center operators can maintain the desired quality-of-service (QoS) while achieving higher server utilization and energy efficiency. We implement and validate a dynamic resource provisioning framework for virtualized server environments wherein the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a lookahead control scheme. The proposed approach accounts for the switching costs incurred while provisioning virtual machines and explicitly encodes the corresponding risk in the optimization problem. Experiments using the Trade6 enterprise application show that a server cluster managed by the controller conserves, on average, 22% of the power required by a system without dynamic control while still maintaining QoS goals. Finally, we use trace-based simulations to analyze controller performance on server clusters larger than our testbed, and show how concepts from approximation theory can be used to further reduce the computational burden of controlling large systems.
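The flavor of the lookahead (receding-horizon) controller can be shown with a deliberately small sketch: enumerate VM-count sequences over a short horizon, charge each for power, switching churn, and SLA risk, apply only the first action, then re-plan. All cost figures are assumed numbers; the paper's controller uses a proper optimization formulation rather than enumeration.

```python
from itertools import product

def lookahead_plan(current_vms, demand_forecast, p_vm=50.0,
                   switch_cost=30.0, sla_penalty=200.0, max_vms=10):
    """Pick the first action of the cheapest VM-count plan over the horizon."""
    best_cost, best_first = float("inf"), current_vms
    for plan in product(range(1, max_vms + 1), repeat=len(demand_forecast)):
        cost, prev = 0.0, current_vms
        for n, demand in zip(plan, demand_forecast):
            cost += n * p_vm                     # energy to run n VMs
            cost += abs(n - prev) * switch_cost  # provisioning churn
            if demand > n:                       # capacity shortfall
                cost += (demand - n) * sla_penalty
            prev = n
        if cost < best_cost:
            best_cost, best_first = cost, plan[0]
    return best_first   # receding horizon: apply one step, then re-plan
```

Explicitly pricing the switching cost is what keeps the controller from thrashing VMs on and off, which is the risk the abstract says the formulation encodes.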

17.
Data centers, as resource providers, are expected to deliver on performance guarantees while optimizing resource utilization to reduce cost. Virtualization techniques provide the opportunity of consolidating multiple separately managed containers of virtual resources on underutilized physical servers. A key challenge that comes with virtualization is the simultaneous on-demand provisioning of shared physical resources to virtual containers and the management of their capacities to meet service-quality targets at the least cost. This paper proposes a two-level resource management system to dynamically allocate resources to individual virtual containers. It uses local controllers at the virtual-container level and a global controller at the resource-pool level. An important advantage of this two-level control architecture is that it allows independent controller designs for separately optimizing the performance of applications and the use of resources. Autonomic resource allocation is realized through the interaction of the local and global controllers. A novelty of the local controller designs is their use of fuzzy logic-based approaches to efficiently and robustly deal with the complexity and uncertainties of dynamically changing workloads and resource usage. The global controller determines the resource allocation based on a proposed profit model, with the goal of maximizing the total profit of the data center. Experimental results obtained through a prototype implementation demonstrate that, for the scenarios under consideration, the proposed resource management system can significantly reduce resource consumption while still achieving application performance targets.
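A toy fuzzy rule base of the kind a local controller might use is sketched below: the utilization of a container's current CPU entitlement is mapped to a relative adjustment. The membership breakpoints and rule outputs are illustrative assumptions, not the paper's controller design.

```python
def fuzzy_adjust(utilization):
    """Mamdani-style inference: three rules, centroid defuzzification.
    Returns a relative change to the container's CPU entitlement."""
    # triangular membership degrees (breakpoints are assumptions)
    low  = max(0.0, min(1.0, (0.5 - utilization) / 0.3))   # under-used
    ok   = max(0.0, 1.0 - abs(utilization - 0.6) / 0.2)    # well matched
    high = max(0.0, min(1.0, (utilization - 0.7) / 0.25))  # saturating
    w = low + ok + high
    # rule outputs: shrink, hold, or grow the entitlement
    return (low * -0.15 + ok * 0.0 + high * 0.20) / w if w else 0.0
```

Because the rules degrade gracefully between regions, such a controller stays robust to the noisy, drifting utilization signals the abstract highlights.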

18.
Energy consumption in high performance computing data centers has become a long-standing issue. With the rising costs of operating a data center, various techniques must be employed to reduce overall energy consumption. Current techniques reduce energy consumption by powering idle nodes on and off, but most do not consider the energy consumed by the other components in a rack. Our study addresses this aspect of the data center: we show that considerable energy savings can be gained by reducing the energy consumed by these rack components. To this end, we propose a scheduling technique that schedules jobs with this goal, and we claim that it reduces energy consumption considerably without affecting other performance metrics of a job. We implement the technique as an enhancement to the well-known Maui scheduler and present our results. Three different algorithms are proposed as part of this technique; they evaluate the trade-offs that can be made with respect to overall cluster performance. We compare our technique with various currently available Maui scheduler configurations, simulating a wide variety of workloads from real cluster deployments using Maui's simulation mode; a sketch of the rack-aware placement idea follows this abstract. Our results consistently show about 7 to 14 % savings over the currently available Maui configurations, and the technique can be applied in tandem with most existing energy-aware scheduling techniques to achieve further savings. We also consider the power losses due to network switches that result from deploying our technique, compare it with existing techniques in terms of these losses based on the results of Sharma and Ranganathan (Lecture Notes in Computer Science, vol. 5550, 2009), and account for them. We then provide a best-fit scheme with rack considerations and propose an enhanced technique that merges the two extremes of node allocation based on rack information. This yields a way to configure the scheduler according to the kind of workload it schedules and reduces the effect of job splitting across multiple racks. We further discuss how the enhancement can be used to build a learning model that adaptively adjusts the scheduling parameters based on the observed workload.
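The rack-aware placement trade-off can be sketched as follows: prefer racks that are already powered and can host the whole job, so that idle racks (and their switches, fans, and power distribution units) can stay off. The data layout and tie-breaking here are assumptions for illustration, not the Maui enhancement itself.

```python
def pick_rack(racks, nodes_needed):
    """racks: list of dicts {'id', 'powered', 'free_nodes'}.
    Returns the rack to place the job on, or None if it must split."""
    fits = [r for r in racks
            if r["powered"] and r["free_nodes"] >= nodes_needed]
    if fits:                              # best fit: tightest powered rack
        return min(fits, key=lambda r: r["free_nodes"])
    cold = [r for r in racks if not r["powered"]]
    if cold:                              # wake the roomiest cold rack
        return max(cold, key=lambda r: r["free_nodes"])
    return None                           # job must split across racks
```

Best-fit packing onto powered racks and reluctance to split jobs are the two extremes the enhanced technique in the abstract merges.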

19.
Various mechanisms have recently been developed that combine linkage mechanisms and wheels. In particular, combining passive linkage mechanisms with small wheels is a main research trend, because standard wheeled mobile mechanisms have difficulty moving on rough terrain. In our previous research, a six-wheel mobile robot employing a passive linkage mechanism was developed to enhance maneuverability; it was able to climb over a 0.20 m bump and stairs. We designed a hybrid velocity and torque controller using a neural network, since simple velocity controllers fail to climb. In this paper, we propose an environment recognition system for a wheeled mobile robot that consists of multiple classification analyses, making the robot more adaptive to various environments by selecting a suitable subsystem (decision making, navigation, or control) based on the recognition result. We evaluate the recognition performance in the operating environments (slopes, bumps, and stairs) by comparing principal component, k-means, and self-organizing map analyses.

20.
A collection of virtual machines (VMs) interconnected with an overlay network with a layer 2 abstraction has proven to be a powerful, unifying abstraction for adaptive distributed and parallel computing on loosely-coupled environments. It is now feasible to allow VMs hosting high performance computing (HPC) applications to seamlessly bridge distributed cloud resources and tightly-coupled supercomputing and cluster resources. However, to achieve the application performance that the tightly-coupled resources are capable of, it is important that the overlay network not introduce significant overhead relative to the native hardware, which is not the case for current user-level tools, including our own existing VNET/U system. In response, we describe the design, implementation, and evaluation of a virtual networking system that has negligible latency and bandwidth overheads in 1–10 Gbps networks. Our system, VNET/P, is directly embedded into our publicly available Palacios virtual machine monitor (VMM). VNET/P achieves native performance on 1 Gbps Ethernet networks and very high performance on 10 Gbps Ethernet networks. The NAS benchmarks generally achieve over 95 % of their native performance on both 1 and 10 Gbps. We have further demonstrated that VNET/P can operate successfully over more specialized tightly-coupled networks, such as Infiniband and Cray Gemini. Our results suggest it is feasible to extend a software-based overlay network designed for computing at wide-area scales into tightly-coupled environments.
