Similar Articles
 20 similar articles found (search time: 31 ms)
1.
DENS: data center energy-efficient network-aware scheduling   (cited 1 time: 0 self-citations, 1 by others)
In modern data centers, energy consumption accounts for a considerable share of operational expenses. Existing work on data center energy optimization focuses only on job distribution between computing servers based on workload or thermal profiles. This paper underlines the role of the communication fabric in data center energy consumption and presents DENS, a scheduling approach that combines energy efficiency and network awareness. The DENS methodology balances the energy consumption of a data center, individual job performance, and traffic demands. The proposed approach optimizes the tradeoff between job consolidation (to minimize the number of active computing servers) and distribution of traffic patterns (to avoid hotspots in the data center network).
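To make the consolidation-versus-congestion tradeoff concrete, here is a minimal sketch of how a DENS-style scheduler might score candidate servers; the weights, server fields, and quadratic queue penalty are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical illustration of a DENS-style combined metric: favor loaded
# servers (consolidation) but penalize congested top-of-rack uplinks.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cpu_load: float      # fraction of CPU in use, 0.0-1.0
    queue_fill: float    # fraction of the rack uplink queue occupied, 0.0-1.0

def dens_style_score(s: Server, alpha: float = 0.7, beta: float = 0.3) -> float:
    """Higher is better: consolidate onto busy servers, avoid hot queues."""
    consolidation = s.cpu_load               # prefer already-busy servers
    congestion_penalty = s.queue_fill ** 2   # grows sharply near saturation
    return alpha * consolidation - beta * congestion_penalty

def pick_server(servers, job_cpu: float):
    feasible = [s for s in servers if s.cpu_load + job_cpu <= 1.0]
    return max(feasible, key=dens_style_score, default=None)

if __name__ == "__main__":
    pool = [Server("s1", 0.60, 0.10), Server("s2", 0.65, 0.95),
            Server("s3", 0.10, 0.05)]
    print(pick_server(pool, 0.2).name)  # s1: busy but uncongested
```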

2.
Data centers are the backbone of cloud infrastructure platforms supporting large-scale data processing and storage. More and more business-to-consumer and enterprise applications are based on cloud data centers. However, data center energy consumption inevitably leads to high operating costs. The aim of this paper is to comprehensively reduce the energy consumption of cloud data center servers, networks, and cooling systems. We first build an energy-efficient cloud data center system, including its architecture and its job and power-consumption models. Then, we combine linear regression and wavelet neural network techniques into a prediction method, which we call MLWNN, to forecast the short-term workload of the cloud data center. Third, we propose a heuristic energy-efficient job scheduling solution with workload prediction, which is divided into a resource management strategy and an online energy-efficient job scheduling algorithm. Our extensive simulation results clearly demonstrate that the proposed solution performs well and is particularly suitable for lightly loaded cloud data centers.
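As a rough illustration of the hybrid-prediction idea, the sketch below fits a linear trend and corrects it with a residual term; the paper's MLWNN uses a wavelet neural network for that correction, for which a simple moving-average corrector stands in here.

```python
# Simplified stand-in for the MLWNN idea: a linear-regression trend plus a
# learned correction on the residuals. The moving-average corrector below is
# purely illustrative; the paper uses a wavelet neural network instead.
import numpy as np

def fit_trend(y):
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return slope, intercept

def forecast(y, horizon=1, window=4):
    slope, intercept = fit_trend(y)
    t_next = len(y) + horizon - 1
    trend = slope * t_next + intercept
    residuals = y - (slope * np.arange(len(y)) + intercept)
    correction = residuals[-window:].mean()   # wavelet-NN stand-in
    return trend + correction

if __name__ == "__main__":
    load = np.array([40, 42, 45, 44, 48, 50, 53, 52], dtype=float)
    print(f"next-step workload estimate: {forecast(load):.1f}")
```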

3.
Efficient application scheduling is critical for achieving high performance in heterogeneous computing (HC) environments. Given its importance, this problem has been studied extensively and various algorithms have been proposed. Duplication-based algorithms are a well-known class of solutions that achieve high performance in minimizing the overall completion time (makespan) of applications. However, they pursue the shortest makespan too aggressively by duplicating some tasks redundantly, which leads to substantial energy consumption and resource waste. With the growing advocacy for green computing, energy conservation has become an important issue and gained particular interest. An existing technique to reduce the energy consumption of an application is dynamic voltage/frequency scaling (DVFS), whose efficiency is limited by the time and energy overhead of voltage scaling. In this paper, we propose a new energy-aware scheduling algorithm with reduced task duplication called Energy-Aware Scheduling by Minimizing Duplication (EAMD), which takes both the energy consumption and the makespan of an application into consideration. It adopts a subtle energy-aware method to find and delete redundant task copies in the schedules generated by duplication-based algorithms; it is easier to apply than DVFS and introduces no extra time or energy overhead. The algorithm not only consumes less energy but also maintains makespans comparable to those of duplication-based algorithms. Two kinds of DAGs, i.e., randomly generated graphs and two real-world application graphs, are tested in our experiments. Experimental results show that EAMD can save up to 15.59% of the energy consumed by HLD and HCPFD, two classic duplication-based algorithms. Several factors affecting performance are also analyzed in the paper.
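The redundant-copy test can be illustrated in a heavily simplified form: a duplicated copy may be deleted if every successor can still start on time using some surviving copy. The uniform communication delay and data layout below are assumptions of this sketch, not EAMD's actual model, and a real pass would re-check candidates after each deletion.

```python
# Illustrative simplification of EAMD's core idea: a duplicated task copy is
# a deletion candidate when every successor can still receive its data on
# time from some other copy (finish time plus a uniform communication delay).
COMM = 2  # assumed uniform communication delay between processors

def redundant_copies(copies, successors, succ_start, comm=COMM):
    """copies: {task: [(proc, finish_time), ...]}
    successors: {task: [succ_task, ...]}
    succ_start: {succ_task: (proc, start_time)}
    Candidates are evaluated one at a time; a real algorithm would delete
    greedily and re-check after each removal."""
    removable = []
    for task, placed in copies.items():
        if len(placed) < 2:
            continue
        for victim in placed:
            others = [c for c in placed if c is not victim]
            ok = True
            for s in successors.get(task, []):
                sp, st = succ_start[s]
                # data must arrive from a surviving copy by the successor's start
                arrival = min(f + (0 if p == sp else comm) for p, f in others)
                if arrival > st:
                    ok = False
                    break
            if ok:
                removable.append((task, victim))
    return removable

if __name__ == "__main__":
    copies = {"A": [("p0", 5), ("p1", 6)]}
    successors = {"A": ["B"]}
    succ_start = {"B": ("p1", 8)}
    print(redundant_copies(copies, successors, succ_start))
```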

4.
Increasing power consumption of IT infrastructures and growing electricity prices have led to the development of several energy-saving techniques in recent years. Virtualization and consolidation of services is one of the key technologies in data centers for reducing overprovisioning and thereby increasing energy savings. This paper shows that the energy-optimal allocation of virtualized services in a heterogeneous server infrastructure is NP-hard and can be modeled as a variant of the multidimensional vector packing problem. Furthermore, it proposes a model to predict the performance degradation of a service when it is consolidated with other services. The model makes it possible to weigh power consumption against service performance during service allocation. Finally, the paper presents two heuristics that approximate the energy-optimal and performance-aware resource allocation problem and shows that the allocations they determine are more energy-efficient than the widely applied maximum-density consolidation.
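A first-fit-decreasing heuristic over two resource dimensions gives a feel for the vector-packing view; the demands, capacities, and sort key below are illustrative choices, not the paper's heuristics.

```python
# A minimal first-fit-decreasing sketch for the multidimensional vector
# packing view of service consolidation described above.
def ffd_pack(items, capacity):
    """items: list of (cpu, mem) demands; capacity: (cpu, mem) per server."""
    items = sorted(items, key=max, reverse=True)  # decreasing by largest dim
    servers = []     # each entry holds a server's remaining (cpu, mem)
    placement = []
    for cpu, mem in items:
        for i, (rc, rm) in enumerate(servers):
            if cpu <= rc and mem <= rm:            # first fit
                servers[i] = (rc - cpu, rm - mem)
                placement.append((i, (cpu, mem)))
                break
        else:                                       # open a new server
            servers.append((capacity[0] - cpu, capacity[1] - mem))
            placement.append((len(servers) - 1, (cpu, mem)))
    return len(servers), placement

if __name__ == "__main__":
    demands = [(0.5, 0.3), (0.4, 0.6), (0.2, 0.2), (0.6, 0.5)]
    n, plan = ffd_pack(demands, capacity=(1.0, 1.0))
    print(f"servers used: {n}")  # 2 for these demands
```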

5.

High energy consumption (EC) is one of the leading issues in cloud environments. Optimizing EC is generally framed as a scheduling problem: an optimal scheduling strategy selects resources or tasks so that system performance is preserved while EC is minimized and resource utilization (RU) is maximized. This paper presents a task scheduling model for scheduling tasks on virtual machines (VMs). The objective of the proposed model is to minimize EC, maximize RU, and minimize workflow makespan while preserving each task's deadline and dependency constraints. An energy- and resource-efficient workflow scheduling algorithm (ERES) is proposed to schedule workflow tasks onto VMs and to dynamically deploy/un-deploy VMs based on the workflow tasks' requirements. An energy model is presented to compute the EC of the servers. A double-threshold policy is used to detect each server's status, i.e., overloaded, underloaded, or normal. To balance the workload on overloaded and underloaded servers, a live VM migration strategy is used. To check the effectiveness of the proposed algorithm, exhaustive simulation experiments are conducted. The proposed algorithm is compared with the power-efficient scheduling and VM consolidation (PESVMC) algorithm in terms of RU, energy efficiency, and task makespan. The results are further verified in a real cloud environment and demonstrate the effectiveness of the proposed ERES algorithm.
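The double-threshold status check can be sketched in a few lines; the threshold values below are invented for illustration, and in ERES the overloaded/underloaded outcomes would trigger live VM migration.

```python
# A small sketch of a double-threshold server status check.
T_LOW, T_HIGH = 0.2, 0.8   # assumed utilization thresholds

def server_status(cpu_util: float) -> str:
    if cpu_util > T_HIGH:
        return "overloaded"    # migrate some VMs away
    if cpu_util < T_LOW:
        return "underloaded"   # migrate VMs off, then power down
    return "normal"

if __name__ == "__main__":
    for u in (0.05, 0.55, 0.93):
        print(f"utilization {u:.2f}: {server_status(u)}")
```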


6.
Nowadays, scientists and companies are confronted with multiple competing goals, such as makespan in high-performance computing and economic cost in Clouds, that have to be optimised simultaneously. Multi-objective scheduling of scientific applications in these systems is therefore receiving increasing research attention. Most existing approaches aggregate all objectives in a single function defined a priori, without any knowledge of the problem being solved, which negatively impacts the quality of the solutions. In contrast, Pareto-based approaches, whose outcome is a set of (nearly) optimal solutions representing tradeoffs among the different objectives, have scarcely been studied. In this paper, we analyse MOHEFT, a Pareto-based list scheduling heuristic that provides the user with a set of tradeoff solutions from which the one that best suits the user's requirements can be selected manually. We demonstrate the potential of our method for multi-objective workflow scheduling on the commercial Amazon EC2 Cloud. We compare the quality of the MOHEFT tradeoff solutions with two state-of-the-art approaches using different synthetic and real-world workflows: the classical HEFT algorithm for single-objective scheduling and the SPEA2* genetic algorithm used in multi-objective optimisation. The results demonstrate that our approach computes solutions of higher quality than SPEA2*. In addition, we show that MOHEFT is more suitable than SPEA2* for workflow scheduling in commercial Clouds, since the genetic-based approach is unable to deal with some of the constraints imposed by these systems.
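The Pareto bookkeeping underlying a MOHEFT-style tradeoff set can be sketched as a dominance filter over (makespan, cost) pairs; the candidate values below are made up for illustration.

```python
# Keep only schedules that are not dominated in (makespan, cost).
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]

if __name__ == "__main__":
    # (makespan in s, cost in $) per candidate schedule
    cands = [(100, 9.0), (120, 5.0), (110, 9.5), (150, 4.0), (100, 10.0)]
    print(sorted(pareto_front(cands)))  # [(100, 9.0), (120, 5.0), (150, 4.0)]
```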

7.
Motivated by the fact that cloud servers consume different amounts of energy in different running states, and by the energy waste caused by mismatches between cloud servers and cloud tasks, we study an energy-optimization method for the cloud computing center based on a priced timed automaton, which models the running behaviors of the cloud computing system. After introducing a matching matrix between cloud tasks and cloud resources, as well as a power matrix over the running states of cloud servers, we design a generation algorithm for the cloud system automaton based on the generation and reduction rules given earlier. We then propose another algorithm that solves the minimum-energy-path problem in the cloud system automaton, thereby obtaining an energy-optimal solution and an energy-optimal value for the cloud system. A case study and repeated experimental analyses show that our method is effective and feasible.
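Once the automaton's states and priced transitions are in hand, the minimum-energy-path problem reduces to a shortest-path search; the sketch below uses plain Dijkstra over an invented state graph, not the paper's generation/reduction machinery.

```python
# Dijkstra over an energy-weighted state graph as a stand-in for the paper's
# minimum-energy-path computation on the cloud system automaton.
import heapq

def min_energy_path(graph, start, goal):
    """graph: {state: [(next_state, energy_cost), ...]}"""
    pq = [(0.0, start, [start])]
    best = {}
    while pq:
        cost, state, path = heapq.heappop(pq)
        if state == goal:
            return cost, path
        if best.get(state, float("inf")) <= cost:
            continue
        best[state] = cost
        for nxt, w in graph.get(state, []):
            heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    # invented states: idle -> run the task on a fast or slow server -> done
    g = {"idle": [("fast", 8.0), ("slow", 3.0)],
         "fast": [("done", 2.0)],
         "slow": [("done", 6.0)]}
    print(min_energy_path(g, "idle", "done"))  # (9.0, ['idle', 'slow', 'done'])
```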

8.
Energy conservation is very important nowadays. A large number of applications in science, engineering, astronomy, and business analytics are classified as Bag-of-Tasks (BoT) applications. A BoT is a collection of independent tasks that do not communicate with each other during execution. BoT scheduling has been studied extensively from a performance point of view. In this paper, we address the problem of energy-efficient BoT scheduling in a heterogeneous environment with the twin objectives of minimizing finish time and energy consumption. Specifically, we extend two performance-oriented scheduling policies, Min–Min and Max–Min, into power-aware centralized scheduling policies that incorporate a dynamic voltage/frequency scaling mechanism and can power unneeded computing nodes of a heterogeneous cluster on and off using dynamic power management. Additionally, to evaluate the system under a more realistic workload, high-priority tasks with and without time constraints are also submitted. A series of simulation experiments shows that we can achieve significant energy savings without significantly affecting the execution of BoTs and high-priority tasks. Additional experiments on a real system also confirm the effectiveness of our policies.
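For reference, a plain Min–Min baseline (before power-aware extensions such as DVFS and node on/off) looks like the sketch below; the expected-time-to-compute (ETC) values are illustrative.

```python
# Baseline Min-Min: repeatedly schedule the task whose earliest possible
# completion time is smallest, on the machine that achieves it.
def min_min(etc):
    """etc[i][j] = execution time of task i on machine j."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines           # machine-available times
    unscheduled = set(range(n_tasks))
    order = []
    while unscheduled:
        # each task's best (earliest-completion) machine right now
        best = {t: min(range(n_machines), key=lambda m: ready[m] + etc[t][m])
                for t in unscheduled}
        # Min-Min: pick the task whose best completion time is smallest
        task = min(unscheduled, key=lambda t: ready[best[t]] + etc[t][best[t]])
        m = best[task]
        ready[m] += etc[task][m]
        order.append((task, m))
        unscheduled.remove(task)
    return order, max(ready)

if __name__ == "__main__":
    etc = [[4, 6], [3, 5], [8, 2]]       # 3 tasks x 2 machines
    schedule, makespan = min_min(etc)
    print(schedule, "makespan:", makespan)  # makespan 7
```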

9.
In this study, we address the meta-task scheduling problem in heterogeneous computing (HC) systems: finding a task assignment that minimizes the schedule length of a meta-task composed of several independent tasks with no data dependencies. The fact that this problem is NP-hard has motivated the development of many heuristic scheduling algorithms. These heuristics, however, neglect the stochastic nature of task execution times and minimize a deterministic objective function, namely the maximum of the expected machine loads. In contrast to existing heuristics, we account for this stochastic nature by modeling task execution times as random variables. We then formulate a stochastic scheduling problem whose objective is to minimize the expected value of the maximum machine load. We prove that this new objective is underestimated by the deterministic objective function, and that an optimal task assignment obtained under the deterministic objective can be inefficient on a real computing platform. To solve the stochastic scheduling problem, we develop a genetic-algorithm-based scheduling heuristic. Our extensive simulation studies show that the proposed genetic algorithm produces better task assignments than existing heuristics. Specifically, we observe an improvement over the relative cost heuristic (M.-Y. Wu and W. Shu, A high-performance mapping algorithm for heterogeneous computing systems, in: Int. Parallel and Distributed Processing Symposium, San Francisco, CA, April 2001) of up to 61%.
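A quick Monte Carlo experiment makes the underestimation claim tangible: with random machine loads, E[max(loads)] exceeds max(E[loads]). The distributions below are arbitrary choices for the demonstration.

```python
# Monte Carlo check that the deterministic objective max(E[load]) under-
# estimates the true expected makespan E[max(load)].
import random

def simulate(n_trials=100_000, seed=1):
    random.seed(seed)
    exp_max = 0.0
    sum_m1 = sum_m2 = 0.0
    for _ in range(n_trials):
        m1 = random.gauss(10.0, 3.0)   # load of machine 1
        m2 = random.gauss(10.0, 3.0)   # load of machine 2
        exp_max += max(m1, m2)
        sum_m1 += m1
        sum_m2 += m2
    exp_max /= n_trials
    det_obj = max(sum_m1 / n_trials, sum_m2 / n_trials)
    return exp_max, det_obj

if __name__ == "__main__":
    e_max, det = simulate()
    print(f"E[max loads] ~ {e_max:.2f}  vs  max of expected loads ~ {det:.2f}")
    # E[max] is noticeably larger: ~11.7 vs ~10.0 for these parameters
```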

10.
Li Chunlin, Cai Qianqian, Luo Youlong. Cluster Computing, 2022, 25(2): 1421–1439.

Improper data replacement and inappropriate selection of the job scheduling policy are important causes of slowdown in Spark, directly degrading the performance of Spark parallel computing. In this paper, we analyze Spark's existing caching mechanism and find that the existing caching policy still leaves room for optimization. Through task structure analysis, key information about Spark tasks is extracted to obtain data and memory usage at runtime; based on this, an RDD weight calculation method is proposed that integrates the various factors affecting RDD usage into an RDD weight model. On top of this model, a minimum-weight replacement algorithm based on RDD structure analysis is proposed. The algorithm ensures that the relatively more valuable data is kept cached in memory during data replacement. In addition, the default job scheduling algorithm of the Spark framework considers only a single factor, cannot schedule jobs effectively, and wastes cluster resources. This paper proposes an adaptive job scheduling policy based on job classification to solve this problem: the policy classifies job types and schedules resources more effectively for different types of jobs. The experimental results show that the proposed dynamic data replacement algorithm effectively improves Spark's memory utilization, and the proposed classification-based adaptive job scheduling algorithm effectively improves system resource utilization and shortens job completion time.
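A minimum-weight eviction loop in the spirit of the RDD weight model might look like the sketch below; the weight formula (frequency times recompute cost over size) and the field names are assumptions for illustration, not the paper's exact model.

```python
# Minimum-weight cache replacement: evict the least valuable RDDs until the
# incoming partition fits.
def rdd_weight(meta):
    # heavier = more valuable to keep cached (assumed formula)
    return meta["use_freq"] * meta["recompute_cost"] / meta["size_mb"]

def evict_until_fits(cache, need_mb, capacity_mb):
    """cache: {rdd_id: meta dict with size_mb/use_freq/recompute_cost}."""
    used = sum(m["size_mb"] for m in cache.values())
    victims = []
    for rid in sorted(cache, key=lambda r: rdd_weight(cache[r])):
        if used + need_mb <= capacity_mb:
            break
        used -= cache[rid]["size_mb"]
        victims.append(rid)
    for rid in victims:
        del cache[rid]
    return victims

if __name__ == "__main__":
    cache = {"rdd1": dict(size_mb=400, use_freq=1, recompute_cost=2),
             "rdd2": dict(size_mb=300, use_freq=5, recompute_cost=8),
             "rdd3": dict(size_mb=200, use_freq=2, recompute_cost=1)}
    print(evict_until_fits(cache, need_mb=350, capacity_mb=1000))  # ['rdd1']
```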


11.
Energy consumption in high performance computing data centers has become a long-standing issue. With rising operating costs, various techniques are needed to reduce a data center's overall energy consumption. Current techniques include those that reduce energy consumption by powering idle nodes on and off; however, most do not consider the energy consumed by the other components in a rack. Our study addresses this aspect of the data center: we show that considerable energy savings can be gained by reducing the energy consumed by these rack components, and we propose a scheduling technique that schedules jobs with this goal. We claim that our technique can reduce energy consumption considerably without affecting other performance metrics of a job. We implement the technique as an enhancement to the well-known Maui scheduler and present our results. We propose three different algorithms as part of this technique; the algorithms evaluate the various trade-offs that can be made with respect to overall cluster performance. We compare our technique with various currently available Maui scheduler configurations, simulating a wide variety of workloads from real cluster deployments using Maui's simulation mode. Our results consistently show about 7 to 14% savings over the currently available Maui scheduler configurations. Our technique can also be applied in tandem with most existing energy-aware scheduling techniques to achieve further savings. We additionally consider the side effects of power losses in the network switches resulting from our technique: we compare it with existing techniques in terms of these switch power losses, based on the results in Sharma and Ranganathan, Lecture Notes in Computer Science, vol. 5550, 2009, and account for them. We then provide a best-fit scheme that takes rack considerations into account, and propose an enhanced technique that merges the two extremes of rack-based node allocation. The scheduler can thus be configured for the kind of workload it handles, reducing the effect of splitting jobs across multiple racks. We further discuss how this enhancement can be used to build a learning model that adaptively adjusts the scheduling parameters based on the observed workload.
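One extreme of rack-aware placement, packing the tightest active racks first so idle racks can stay powered down, can be sketched as follows; the rack sizes and greedy rule are illustrative assumptions, and the paper's enhancement also weighs the cost of splitting a job across racks.

```python
# Greedy rack-aware placement: fill the tightest racks first so that empty
# racks (and their shared components) can remain powered off.
def place_job(racks, nodes_needed):
    """racks: {rack_id: free_node_count}; returns {rack_id: nodes_taken}."""
    plan = {}
    remaining = nodes_needed
    for rid in sorted(racks, key=lambda r: racks[r]):   # tightest fit first
        if remaining == 0:
            break
        take = min(racks[rid], remaining)
        if take > 0:
            plan[rid] = take
            remaining -= take
    if remaining > 0:
        return None   # not enough capacity anywhere
    for rid, take in plan.items():
        racks[rid] -= take
    return plan

if __name__ == "__main__":
    free = {"rackA": 2, "rackB": 16, "rackC": 5}
    print(place_job(free, nodes_needed=6))   # {'rackA': 2, 'rackC': 4}
```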

12.
In this paper, we consider the problem of scheduling divisible loads on arbitrary graphs with the objective of minimizing the total processing time of the entire load submitted for processing. We consider an arbitrary graph network comprising heterogeneous processors interconnected by heterogeneous links in an arbitrary fashion, with the divisible load originating at any processor in the network. We transform the problem into a multi-level unbalanced tree network and schedule the divisible load on it. We design systematic procedures to identify and eliminate redundant processor–link pairs (pairs whose inclusion in scheduling would hurt performance) and derive an optimal tree structure yielding an optimal processing time for a fixed sequence of load distribution. Since the algorithm strives to determine an equivalent set of processors (resources) that can process the entire load, we refer to this approach as the resource-aware optimal load distribution (RAOLD) algorithm. We extend our study by applying the optimal sequencing theorem proposed in the literature for single-level tree networks to multi-level trees to obtain an optimal solution. We evaluate the performance on a wide range of arbitrary graphs with varying connectivity probabilities and processor densities, and study the effects of network scalability and connectivity. We demonstrate the time performance when the point of load origination differs in the network and highlight certain key features that may be useful for algorithm and/or network system designers. We evaluate the time performance with rigorous simulation experiments under different system parameters to give a complete picture.

13.
In today’s highly competitive and uncertain project environments, it is of crucial importance to develop analytical models and algorithms to schedule and control project activities so that deviations from the project objectives are minimized. This paper addresses integrated scheduling and control in multi-mode project environments. We propose an optimization model that captures the dynamic behavior of projects and integrates optimal control into a practically relevant project scheduling problem. From the scheduling perspective, we address the discrete time/cost trade-off problem, while an optimal control formulation captures the effect of project control. Moreover, we develop a solution algorithm for two particular instances of the optimal project control problem. This algorithm combines a tabu search strategy with nonlinear programming; it is applied to a large-scale test bed and its efficiency is assessed by means of computational experiments. To the best of our knowledge, this research is the first application of optimal control theory to multi-mode project networks. The models and algorithms developed here are intended as a support tool for project managers, both in scheduling and in deciding on the timing and quantity of control activities.

14.
Gai Ling. Journal of Biomathematics (生物数学学报), 2009, 24(1): 166–170.
This paper considers a scheduling model in which the processing speed of a job increases as its due date approaches. An optimal algorithm minimizing the processing time is given for the case where the initial speed is nonzero, and the maximum lateness of jobs under incomplete information is also discussed. The results of this model can be applied to the problem of optimal food foraging by populations.

15.
Taking advantage of distributed storage and virtualization technologies, cloud storage systems provide virtual machine clients with customizable storage services. They can be divided into two types: distributed file systems and block-level storage systems. Existing block-level storage systems have two disadvantages: first, some are tightly coupled with their cloud computing environments, making them hard to extend to other cloud platforms; second, the volume server is a bottleneck that seriously affects the performance and reliability of the whole system. In this paper we present ORTHRUS, a lightweight block-level storage system for clouds based on virtualization technology. We first design an architecture with multiple volume servers, together with its workflows, which improves system performance and removes the single-volume-server bottleneck. Second, we propose a Listen-Detect-Switch mechanism for ORTHRUS to deal with contingent volume server failures. Finally, we design a strategy that dynamically balances load between the multiple volume servers: we characterize machine capability and load with a black-box model and implement the dynamic load balance strategy using a genetic algorithm. Extensive experimental results show that the aggregated I/O throughput of ORTHRUS is significantly improved (approximately doubled), and that both I/O throughput and IOPS are further improved (by about 1.8 and 1.2 times, respectively) by our dynamic load balance strategy.

16.
This paper is the first to tackle the problem of designing routes for service companies responsible for the metrological control of measuring equipment at customer sites. This real-world problem belongs to the well-known class of Rich Vehicle Routing Problems, which combine multiple attributes that distinguish them from traditional vehicle routing problems: a fixed heterogeneous fleet of vehicles, time windows for customers and the depot, resource synchronization between tours, driver–customer and vehicle–customer constraints, customer priorities, and unserved customers. This routing and scheduling problem is modeled with linear programming techniques and solved by a variable neighborhood descent metaheuristic based on a tabu search algorithm with a holding list. A real-life case study faced by a company in the region of Andalusia (Spain) is also presented. The performance of the metaheuristic is compared with the literature for the standard fixed heterogeneous vehicle routing problem, and the results obtained on a real case instance improve on the solutions implemented by the company.

17.
Cheng Feng, Huang Yifeng, Tanpure Bhavana, Sawalani Pawan, Cheng Long, Liu Cong. Cluster Computing, 2022, 25(1): 619–631.

As cloud vendors' services provide better performance, auto-scaling, load balancing, and optimized performance along with low infrastructure maintenance, more and more companies migrate their services to the cloud. Since cloud workloads are dynamic and complex, scheduling the jobs submitted by users in an effective way is a challenging task. Although many advanced job scheduling approaches have been proposed in recent years, almost all of them are designed to handle batch jobs rather than real-time workloads, in which user requests may be submitted at any time and in any quantity. In this work, we propose a Deep Reinforcement Learning (DRL) based job scheduler that dispatches jobs in real time to tackle this problem. Specifically, we focus on scheduling user requests so as to provide quality of service (QoS) to the end user along with a significant reduction in the cost of executing jobs on virtual instances. We implement our method with a Deep Q-learning Network (DQN) model, and our experimental results demonstrate that our approach can significantly outperform commonly used real-time scheduling algorithms.
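The paper's scheduler is a DQN; as a self-contained stand-in, the sketch below trains a tabular Q-function on a toy load-level/VM-type problem with a one-step (bandit-style) update. All states, rewards, and rates are invented for illustration.

```python
# Tabular Q-learning stand-in for a DQN-based job scheduler:
# state = coarse cluster load level, action = VM type for the next job.
import random

ACTIONS = ["small_vm", "large_vm"]
STATES = ["low_load", "high_load"]

def reward(state, action):
    # toy QoS-vs-cost reward: large VMs pay off only under high load
    if state == "high_load":
        return 1.0 if action == "large_vm" else -0.5
    return 0.8 if action == "small_vm" else -0.2

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)                        # jobs arrive randomly
        if random.random() < epsilon:                    # explore
            a = random.choice(ACTIONS)
        else:                                            # exploit
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])  # one-step bandit update
    return q

if __name__ == "__main__":
    q = train()
    for s in STATES:
        print(s, "->", max(ACTIONS, key=lambda act: q[(s, act)]))
```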


18.
Task scheduling for large-scale computing systems is a challenging problem. From the users' perspective, the main concern is the performance of the submitted tasks, whereas for cloud service providers, reducing operating cost while providing the required service is critical. It is therefore important for task scheduling mechanisms to balance users' performance requirements and energy efficiency, because energy consumption is one of the major operational costs. We present a time-dependent value-of-service (VoS) metric, to be maximized by a scheduling algorithm that takes into consideration the arrival time of a task while evaluating both the value of completing the task at a given time and the task's energy consumption. We consider how the value of completing a task varies over time, such that the value of energy reduction can change significantly between peak and non-peak periods. To determine the value of a task completion, we use completion time and energy consumption with soft and hard thresholds. The VoS for a given workload is the sum of the values of all tasks executed during a given period of time. Our system model is based on virtual machines, where each task is assigned a resource configuration characterized by a number of homogeneous cores and an amount of memory. To schedule each task submitted to our system, we use an estimated-time-to-compute matrix and an estimated-energy-consumption matrix, both built from historical data. We design, evaluate, and compare our task scheduling methods, showing that a significant improvement in energy consumption can be achieved with time-of-use-dependent scheduling algorithms. The simulation results show that we improve the performance and energy values by up to 49% compared with schedulers that do not consider the value functions. Similarly, our experimental results from running our value-based scheduling on an IBM blade server show up to 82% improvement in performance value, 110% improvement in energy value, and up to 77% improvement in VoS compared with schedulers that do not consider the value functions.
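A value function with soft and hard thresholds, in the spirit of the VoS metric, can be sketched as a plateau followed by linear decay; the decay shape, weights, and all constants below are assumptions for illustration.

```python
# Time- and energy-dependent task value with soft/hard thresholds: full value
# up to the soft threshold, linear decay to zero at the hard threshold.
def completion_value(measured, soft, hard, v_max=1.0):
    if measured <= soft:
        return v_max
    if measured >= hard:
        return 0.0
    return v_max * (hard - measured) / (hard - soft)

def task_value(finish, energy_used, t_soft, t_hard, e_soft, e_hard,
               w_perf=0.6, w_energy=0.4):
    perf = completion_value(finish, t_soft, t_hard)
    energy = completion_value(energy_used, e_soft, e_hard)  # same shape
    return w_perf * perf + w_energy * energy

if __name__ == "__main__":
    # finish at t=12 (soft=10, hard=20); 80 J used (soft=50, hard=100)
    print(f"{task_value(12, 80, 10, 20, 50, 100):.3f}")  # 0.640
```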

19.
The emergence of cloud computing has made it an attractive solution for large-scale data processing and storage applications. Cloud infrastructures provide users remote access to powerful computing capacity, large storage space, and high network bandwidth for deploying various applications. With the support of cloud computing, many large-scale applications have migrated to cloud infrastructures instead of running on in-house local servers. Among these, continuous write applications (CWAs), such as online surveillance systems, can benefit significantly from the flexibility and advantages of cloud computing. However, given their specific characteristics, continuous data writing and processing and a high demand for data availability, cloud service providers prefer sophisticated models for provisioning resources that meet CWAs' demands while minimizing the operational cost of the infrastructure. In this paper, we present a novel architecture of multiple cloud service providers (CSPs), commonly referred to as a Cloud-of-Clouds. Based on this architecture, we propose two operational-cost-aware algorithms for provisioning cloud resources for CWAs, namely the neighboring optimal resource provisioning algorithm (NORPA) and the global optimal resource provisioning algorithm (GORPA), to minimize the operational cost and thereby maximize the revenue of CSPs. We validate the proposed algorithms through comprehensive simulations, comparing the two against each other and against a commonly used, practically viable round-robin approach. The results demonstrate that NORPA and GORPA outperform the conventional round-robin algorithm, reducing the operational cost by up to 28 and 57%, respectively. The low complexity of the proposed cost-aware algorithms allows them to be applied to realistic Cloud-of-Clouds environments in industry as well as academia.

20.
In hybrid clouds, a technique named cloud bursting allows companies to expand their capacity to meet the demands of peak workloads at low cost. In this work, a cost-aware job scheduling approach based on queueing theory in hybrid clouds is proposed. The job scheduling problem in the private cloud is modeled as a queueing model, and a genetic algorithm is applied to find optimal queues for jobs to improve the utilization rate of the private cloud. Task execution times are then predicted by a back-propagation neural network, and the max–min strategy schedules tasks in the hybrid cloud according to the prediction results. Experiments show that our cost-aware job scheduling algorithm reduces the average job waiting time and average job response time in the private cloud and improves its system throughput. It also reduces the average task waiting time, average task response time, and total cost in hybrid clouds.
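The max–min step can be sketched as follows: compute each pending task's best completion time, then schedule the task whose best completion time is largest, so that long tasks are not stranded at the end. The predicted times below stand in for the BP network's output.

```python
# Max-min scheduling over predicted run times: schedule the task with the
# LARGEST best-case completion time first.
def max_min(predicted):
    """predicted[i][j] = predicted run time of task i on VM j."""
    n_tasks, n_vms = len(predicted), len(predicted[0])
    ready = [0.0] * n_vms                 # VM-available times
    pending = set(range(n_tasks))
    order = []
    while pending:
        # each task's best (earliest-completion) VM right now
        best = {t: min(range(n_vms), key=lambda v: ready[v] + predicted[t][v])
                for t in pending}
        task = max(pending,
                   key=lambda t: ready[best[t]] + predicted[t][best[t]])
        v = best[task]
        ready[v] += predicted[task][v]
        order.append((task, v))
        pending.remove(task)
    return order, max(ready)

if __name__ == "__main__":
    times = [[5, 7], [9, 4], [2, 3]]      # 3 tasks x 2 VMs
    print(max_min(times))                 # makespan 7 for these times
```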
