Similar Literature
20 similar documents found.
1.
Cloud computing can leverage the over-provisioned resources that are wasted in traditional data centers hosting production applications by consolidating tasks with lower QoS and SLA requirements. However, the dramatic fluctuation of workloads with lower QoS and SLA requirements may impact the performance of production applications, and frequent task eviction, killing, and rescheduling operations waste CPU cycles and create overhead. This paper aims to schedule hybrid workloads in the cloud data center so as to reduce task failures and increase resource utilization. A multi-prediction model, comprising an ARMA model and a feedback-based online AR model, is used to predict current and future resource availability. The decision to accept or reject a new task is based on the available resources and the task's properties. Evaluations show that the scheduler can reduce host overload and failed tasks by nearly 70% and increase effective resource utilization by more than 65%, while the task delay performance degradation remains acceptable.
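A minimal Python sketch of the admission-control idea described in this abstract. The AR(1) least-squares predictor stands in for the paper's ARMA/online-AR models, and the `horizon` and `headroom` parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ar1_forecast(history, steps=1):
    """Fit a simple AR(1) model y_t = a*y_{t-1} + b by least squares and
    forecast the next `steps` values (stand-in for the ARMA / online-AR
    predictors described in the abstract)."""
    y = np.asarray(history, dtype=float)
    X = np.vstack([y[:-1], np.ones(len(y) - 1)]).T
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    preds, last = [], y[-1]
    for _ in range(steps):
        last = a * last + b
        preds.append(last)
    return preds

def admit_task(cpu_history, capacity, task_demand, horizon=3, headroom=0.1):
    """Accept the task only if the predicted free capacity over the horizon
    covers the demand plus a safety headroom (hypothetical policy)."""
    predicted_usage = ar1_forecast(cpu_history, steps=horizon)
    free = capacity - max(predicted_usage)
    return task_demand <= free * (1.0 - headroom)

# Example: rising utilisation on a 32-core host.
history = [10, 12, 13, 15, 16, 18, 19, 21]
print(admit_task(history, capacity=32, task_demand=4))
```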

2.
Data centers are the backbone of the cloud infrastructure platform that supports large-scale data processing and storage, and more and more business-to-consumer and enterprise applications are hosted in cloud data centers. However, data center energy consumption inevitably leads to high operating costs. The aim of this paper is to comprehensively reduce the energy consumption of cloud data center servers, network, and cooling systems. We first build an energy-efficient cloud data center system, including its architecture and its job and power-consumption models. Then, we combine linear regression and wavelet neural network techniques into a prediction method, which we call MLWNN, to forecast the cloud data center's short-term workload. Third, we propose a heuristic energy-efficient job scheduling solution with workload prediction, which is divided into a resource management strategy and an online energy-efficient job scheduling algorithm. Our extensive simulation results clearly demonstrate that the proposed solution performs well and is well suited to low-workload cloud data centers.  相似文献   
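A simplified sketch of the hybrid-forecast idea behind MLWNN: a linear-regression trend plus a Haar-wavelet volatility term. The one-level Haar transform and the additive combination are assumptions made for illustration; they are not the paper's wavelet-neural-network formulation:

```python
import numpy as np

def haar_decompose(signal):
    """One-level Haar transform: pairwise approximation and detail
    coefficients (crude stand-in for the wavelet part of MLWNN)."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:            # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / 2.0
    detail = (s[0::2] - s[1::2]) / 2.0
    return approx, detail

def forecast_workload(history):
    """Hypothetical hybrid forecast: linear regression captures the trend;
    the mean absolute Haar detail captures recent volatility and widens
    the estimate so the scheduler provisions conservatively."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    trend_next = slope * len(history) + intercept
    _, detail = haar_decompose(history[-8:])
    volatility = np.mean(np.abs(detail))
    return trend_next + volatility   # pessimistic short-term estimate

print(forecast_workload([40, 42, 41, 45, 47, 46, 50, 52]))
```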

3.
DENS: data center energy-efficient network-aware scheduling
In modern data centers, energy consumption accounts for a considerably large slice of operational expenses. Existing work on data center energy optimization focuses only on job distribution between computing servers based on workload or thermal profiles. This paper underlines the role of the communication fabric in data center energy consumption and presents a scheduling approach, named DENS, that combines energy efficiency and network awareness. The DENS methodology balances the energy consumption of a data center, individual job performance, and traffic demands. The proposed approach optimizes the tradeoff between job consolidation (to minimize the number of active computing servers) and distribution of traffic patterns (to avoid hotspots in the data center network).  相似文献   
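The consolidation-versus-hotspot trade-off can be illustrated with a toy scoring function. This is not the DENS metric itself; the scoring formula, the 0.9 target utilisation, and the quadratic congestion penalty are assumptions made only to show the kind of balance the abstract describes:

```python
def dens_like_score(server_load, queue_fill, favor=0.9):
    """Reward consolidation (higher load is better, up to a target
    utilisation) but penalise placing work behind congested uplinks
    (queue_fill in [0, 1]). Illustrative only."""
    consolidation = server_load if server_load <= favor else 2 * favor - server_load
    congestion_penalty = queue_fill ** 2
    return consolidation - congestion_penalty

def pick_server(servers):
    """servers: list of (name, cpu_load, uplink_queue_fill)."""
    return max(servers, key=lambda s: dens_like_score(s[1], s[2]))[0]

servers = [("s1", 0.85, 0.70),   # well loaded but congested uplink
           ("s2", 0.60, 0.10),   # moderately loaded, free uplink
           ("s3", 0.10, 0.05)]   # nearly idle
print(pick_server(servers))      # expected: "s2"
```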

4.
Cluster Computing - The internet is expanding its viewpoint into each conceivable part of the cutting-edge economy. Unshackled from our web programs today, the internet is characterizing our way of...

5.
Energy consumption in high performance computing data centers has become a long-standing issue. With the rising cost of operating a data center, various techniques must be employed to reduce overall energy consumption. Existing techniques reduce energy consumption by powering idle nodes on and off, but most of them do not consider the energy consumed by the other components in a rack. Our study addresses this aspect of the data center: we show that considerable energy savings can be gained by reducing the energy consumed by these rack components. We propose a scheduling technique that schedules jobs with this goal, and we claim that it reduces energy consumption considerably without affecting other performance metrics of a job. We implement the technique as an enhancement to the well-known Maui scheduler and present our results. Three different algorithms are proposed as part of this technique; they evaluate the various trade-offs that can be made with respect to overall cluster performance. We compare our technique with the currently available Maui scheduler configurations, simulating a wide variety of workloads from real cluster deployments using Maui's simulation mode. Our results consistently show about 7 to 14% savings over the currently available Maui scheduler configurations, and the technique can be applied in tandem with most existing energy-aware scheduling techniques to achieve further savings. We also consider the side effect of power losses in the network switches caused by deploying our technique; we compare our technique with existing techniques in terms of these switch power losses, based on the results in Sharma and Ranganathan, Lecture Notes in Computer Science, vol. 5550, 2009, and account for the losses. We then provide a best-fit scheme that takes rack considerations into account and propose an enhanced technique that merges the two extremes of node allocation based on rack information. This provides a way to configure the scheduler according to the kind of workload it schedules and reduces the effect of splitting jobs across multiple racks. We further discuss how the enhancement can be used to build a learning model that adaptively adjusts the scheduling parameters based on the observed workload.  相似文献   
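A minimal sketch of a rack-aware allocation heuristic in the spirit of the Maui enhancement described above: prefer racks that are already partially used and can host the whole job, so idle racks and their shared components can stay powered off. The node/rack names and the tie-breaking policy are illustrative assumptions:

```python
from collections import defaultdict

def rack_aware_allocate(job_nodes, free_nodes):
    """Greedy rack-aware allocation sketch. `free_nodes` maps node -> rack."""
    by_rack = defaultdict(list)
    for node, rack in free_nodes.items():
        by_rack[rack].append(node)
    # Racks with the fewest free nodes first -> fills partially used racks.
    for rack in sorted(by_rack, key=lambda r: len(by_rack[r])):
        if len(by_rack[rack]) >= job_nodes:
            return by_rack[rack][:job_nodes]
    # Fall back: split across racks (the trade-off the paper evaluates).
    flat = [n for rack in sorted(by_rack) for n in by_rack[rack]]
    return flat[:job_nodes] if len(flat) >= job_nodes else None

free = {"n1": "rackA", "n2": "rackA", "n3": "rackB",
        "n4": "rackB", "n5": "rackB", "n6": "rackC"}
print(rack_aware_allocate(3, free))   # fits entirely in rackB
```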

6.
The demand for cloud computing is increasing dramatically due to the high computational requirements of business, social, web, and scientific applications. Nowadays, applications and services are hosted on the cloud in order to reduce the costs of hardware, software, and maintenance. To satisfy this high demand, the number of large-scale data centers has increased, which consumes a high volume of electrical power, has a negative impact on the environment, and comes with high operational costs. In this paper, we discuss many ongoing or implemented energy-aware resource allocation techniques for cloud environments. We also present a comprehensive review of the different energy-aware resource allocation and selection algorithms for virtual machines in the cloud. Finally, we identify further research issues and challenges for future cloud environments.  相似文献   

7.
In heterogeneous distributed computing systems such as cloud computing, mapping tasks to resources is a major issue that can have a large impact on system performance. Because of heterogeneous and dynamic features and the dependencies among requests, task scheduling is known to be an NP-complete problem. In this paper, we propose a hybrid heuristic method (HSGA), based on a genetic algorithm, to find a suitable schedule for a workflow graph quickly while optimizing makespan, load balancing on resources, and speedup ratio. First, HSGA prioritizes the tasks of a complex graph according to their impact on other tasks, based on the graph topology; this technique effectively reduces the application completion time. Then, it merges the Best-Fit and Round Robin methods to build a good initial population so that a good solution is obtained quickly, and it applies suitable operations such as mutation to steer the algorithm toward an optimized solution. The algorithm evaluates solutions using parameters relevant to the cloud environment. Finally, the proposed algorithm yields better results than the other studied algorithms as the number of tasks in the application graph increases.  相似文献   
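A short sketch of the topology-based prioritization step that precedes the GA in HSGA. Ranking by the number of transitive successors is one simple reading of "impact on other tasks"; the actual HSGA priority rule may differ:

```python
def task_priorities(dag):
    """Rank tasks by their influence on the rest of the workflow, here
    approximated by the number of (transitive) successors."""
    memo = {}
    def descendants(t):
        if t not in memo:
            memo[t] = set()
            for child in dag.get(t, []):
                memo[t] |= {child} | descendants(child)
        return memo[t]
    return sorted(dag, key=lambda t: len(descendants(t)), reverse=True)

# Toy workflow: t1 feeds t2 and t3, which both feed t4.
dag = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"], "t4": []}
print(task_priorities(dag))   # t1 first, t4 last
```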

8.
Ghafari, R., Kabutarkhani, F. Hassani, Mansouri, N. Cluster Computing, 2022, 25(2): 1035-1093
Cluster Computing - Cloud computing is very popular because of its unique features such as scalability, elasticity, on-demand service, and security. A large number of tasks are performed...

9.

In recent years, cloud computing has emerged as a technology for sharing resources with users. Because cloud computing is on-demand, the efficient use of resources such as memory, processors, and bandwidth is a big challenge. Despite its advantages, cloud computing is sometimes not the proper choice because of delays in responding to requests, which led to the need for another technology called fog computing. Fog computing reduces traffic and time lags by extending cloud services to the edge of the network, closer to users; it can schedule resources more efficiently and use them in ways that strongly affect the user's experience. This paper surveys studies on scheduling in fog/cloud computing environments, focusing on work published in journals or conferences between 2015 and 2021. Following a systematic literature review (SLR), we selected 71 studies from four major scientific databases based on their relevance. We classified these studies into five categories according to the parameters they trace and their focus area: (1) performance, (2) energy efficiency, (3) resource utilization, (4) performance and energy efficiency, and (5) performance and resource utilization simultaneously. Of the studies, 42.3% focused on performance, 9.9% on energy efficiency, 7.0% on resource utilization, 21.1% on both performance and energy efficiency, and 19.7% on both performance and resource utilization. Finally, we present challenges and open issues in resource scheduling methods for fog/cloud computing environments.


10.

High energy consumption (EC) is one of the leading issues in the cloud environment, and its optimization is generally treated as a scheduling problem: an optimal scheduling strategy selects resources or tasks such that system performance is not violated while EC is minimized and resource utilization (RU) is maximized. This paper presents a task scheduling model for scheduling tasks on virtual machines (VMs). The objective of the proposed model is to minimize EC, maximize RU, and minimize workflow makespan while preserving the tasks' deadline and dependency constraints. An energy- and resource-efficient workflow scheduling algorithm (ERES) is proposed to schedule workflow tasks on the VMs and to dynamically deploy/un-deploy VMs based on the workflow tasks' requirements. An energy model is presented to compute the EC of the servers, and a double-threshold policy is used to detect each server's status, i.e., overloaded, underloaded, or normal. To balance the workload on overloaded/underloaded servers, a live VM migration strategy is used. To check the effectiveness of the proposed algorithm, exhaustive simulation experiments are conducted. The proposed algorithm is compared with the power-efficient scheduling and VM consolidation (PESVMC) algorithm in terms of RU, energy efficiency, and task makespan, and the results are also verified in a real cloud environment. The results demonstrate the effectiveness of the proposed ERES algorithm.
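A minimal sketch of the double-threshold status check and the migration decision it drives, as described above. The threshold values (0.2 and 0.8) and the dictionary-based server model are illustrative assumptions, not the ERES paper's settings:

```python
def server_status(cpu_util, low=0.2, high=0.8):
    """Double-threshold policy: classify a server by CPU utilisation."""
    if cpu_util > high:
        return "overloaded"
    if cpu_util < low:
        return "underloaded"
    return "normal"

def migration_plan(servers, low=0.2, high=0.8):
    """Pick migration sources: overloaded servers shed load to avoid SLA
    violations; underloaded servers are drained so they can be switched off."""
    plan = {"shed": [], "drain": []}
    for name, util in servers.items():
        status = server_status(util, low, high)
        if status == "overloaded":
            plan["shed"].append(name)
        elif status == "underloaded":
            plan["drain"].append(name)
    return plan

print(migration_plan({"srv1": 0.92, "srv2": 0.55, "srv3": 0.08}))
```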


11.
Mobile Cloud Computing (MCC) is broadening the ubiquitous market for mobile devices. Because of the hardware limitations of mobile devices, heavy computing tasks should be processed by service images (SIs) on the cloud. Due to the scalability and mobility of users and services, dynamic resource demands, and time-varying network conditions, SIs must be relocated to adapt to the new circumstances. In this paper, we formulate SI placement as an optimization problem that minimizes the communication cost subject to resource demand constraints. We then propose a real-time SI placement scheme with two sequential stages, clustering/filtering and condensed placement, to solve the formulated problem. The former discards infeasible slots prior to placement in order to reduce computational complexity; the latter performs the SI placement through a novel condensed solution. The numerical results show that our solution converges to the global optimum with a negligible gap while running much faster than the exhaustive search method. This improvement supports real-time services, especially in the MCC environment.  相似文献   
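A toy sketch of the two-stage idea described above: filter out slots that cannot satisfy the SI's resource demand, then place the SI on the feasible slot with the lowest communication cost to its users. The slot fields, latency map, and cost function are illustrative assumptions, not the paper's formulation:

```python
def filter_slots(slots, demand):
    """Filtering stage: discard slots that cannot satisfy the demand."""
    return [s for s in slots
            if s["cpu"] >= demand["cpu"] and s["mem"] >= demand["mem"]]

def place_si(slots, demand, users):
    """Placement stand-in: among feasible slots, pick the one with the
    lowest total communication cost to the SI's users."""
    feasible = filter_slots(slots, demand)
    if not feasible:
        return None
    cost = lambda s: sum(s["latency"][u] for u in users)
    return min(feasible, key=cost)["id"]

slots = [{"id": "edge1", "cpu": 4,  "mem": 8,  "latency": {"u1": 5,  "u2": 30}},
         {"id": "edge2", "cpu": 2,  "mem": 4,  "latency": {"u1": 10, "u2": 10}},
         {"id": "core",  "cpu": 16, "mem": 64, "latency": {"u1": 40, "u2": 40}}]
print(place_si(slots, {"cpu": 4, "mem": 8}, ["u1", "u2"]))   # "edge1"
```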

12.
Cluster Computing - Recently, modern businesses have started to transform into cloud computing platforms to deploy their workflow applications. However, scheduling workflow under resource...

13.
Cluster Computing - Cloud infrastructures are suitable environments for processing large scientific workflows. Nowadays, new challenges are emerging in the field of optimizing workflows such that...

14.
Energy efficiency is the predominant issue troubling the modern ICT industry. Ever-increasing ICT innovations and services have added exponentially to energy demands, which makes the development of energy efficiency mechanisms urgent; for such mechanisms to succeed, the support of the underlying ICT platform is essential. Cloud computing has gained attention and emerged as a promising way to address these energy consumption issues. This paper scrutinizes the importance of multicore processors, virtualization, and consolidation techniques for achieving energy efficiency in cloud computing. It proposes the Green Cloud Scheduling Model (GCSM), which exploits the heterogeneity of tasks and resources with the help of a scheduler unit that allocates and schedules deadline-constrained tasks only to energy-conscious nodes. GCSM makes energy-aware task allocation decisions dynamically, aims to prevent performance degradation, and achieves the desired QoS. The proposed model is evaluated and compared with two other techniques in a cloud environment set up for the purpose. The results indicate that GCSM achieves 71% energy savings and high performance in terms of deadline fulfillment.  相似文献   
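A minimal sketch of the allocation rule described for GCSM: consider only energy-conscious nodes, keep those that can meet the task deadline, and pick the cheapest in power terms. The node fields, the length/MIPS runtime estimate, and the rejection policy are illustrative assumptions:

```python
def gcsm_like_assign(task, nodes):
    """Deadline-constrained, energy-aware node selection sketch."""
    candidates = [n for n in nodes
                  if n["energy_conscious"]
                  and task["length"] / n["mips"] <= task["deadline"]]
    if not candidates:
        return None          # reject rather than violate QoS
    return min(candidates, key=lambda n: n["power_watts"])["name"]

nodes = [{"name": "n1", "mips": 1000, "power_watts": 90,  "energy_conscious": True},
         {"name": "n2", "mips": 2000, "power_watts": 200, "energy_conscious": True},
         {"name": "n3", "mips": 4000, "power_watts": 350, "energy_conscious": False}]
print(gcsm_like_assign({"length": 5000, "deadline": 4.0}, nodes))   # "n2"
```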

15.
This paper proposes solutions to monitor and balance the load of a cloud data center. The proposed solutions work in two phases, and graph-theoretical concepts are applied in both. In the first phase, the cloud data center is modeled as a network graph, which is augmented with the minimum dominating set concept from graph theory for monitoring its load. For constructing the minimum dominating set, this paper proposes a new variant of the minimum dominating set algorithm (V-MDS), which is compared with the existing construction algorithms of Rooji and Fomin; the V-MDS approach to querying cloud data center load information is also compared with a central-monitor approach. The second phase focuses on system- and network-aware live virtual machine migration for load balancing the cloud data center. For this, a new system- and traffic-aware live VM migration for load balancing (ST-LVM-LB) algorithm is proposed and compared with the existing benchmarked algorithms, the dynamic management algorithm (DMA) and Sandpiper. The CloudSim3.0.3 simulator is used to study the performance of the proposed algorithms. The experimental results show that the V-MDS algorithm has quadratic time complexity, whereas the Rooji and Fomin algorithms have exponential time complexity, and that the proposed querying approach halves the number of message updates compared with the central-monitor approach. For load balancing, the results show that the developed ST-LVM-LB algorithm triggers fewer virtual machine migrations and incurs less migration time and cost, with minimal network overhead. Thus, the proposed algorithms improve the service delivery performance of the cloud data center by incorporating graph-theoretical solutions into load monitoring and balancing.  相似文献   
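For the monitoring phase, a standard greedy dominating-set heuristic (not the paper's V-MDS variant) illustrates how a small set of monitor nodes can cover the whole topology; the toy topology below is an assumption for the example:

```python
def greedy_dominating_set(graph):
    """Repeatedly pick the node that covers the most uncovered nodes.
    The chosen monitors then report load for themselves and their
    neighbours, as in the monitoring phase described above."""
    uncovered = set(graph)
    monitors = []
    while uncovered:
        best = max(graph, key=lambda v: len(({v} | set(graph[v])) & uncovered))
        monitors.append(best)
        uncovered -= {best} | set(graph[best])
    return monitors

# Small data-centre topology: a switch node connected to four hosts.
topology = {"sw": ["h1", "h2", "h3", "h4"],
            "h1": ["sw"], "h2": ["sw"], "h3": ["sw"], "h4": ["sw"]}
print(greedy_dominating_set(topology))   # ["sw"] dominates everything
```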

16.
17.
This paper studies task scheduling from the perspective of resource allocation and management in order to improve the performance of the Hadoop system. To save network bandwidth in a Hadoop cluster environment and improve system performance, a ReduceTask scheduling strategy based on data locality is improved. In the MapReduce stage there are two main data streams in the cluster network, slow-task migration and remote copies of data, and these two overlapping bursts of data transfer can easily become bottlenecks in the cluster network. To reduce the amount of remotely copied data, we combine data locality with a minimum network resource consumption model (MNRC), which is used to calculate the network resource consumption of ReduceTasks. Based on this model, we design a delay-priority scheduling policy for ReduceTasks driven by the cost of network resource consumption. Finally, MNRC is verified by simulation experiments. Evaluation results show that MNRC saves cluster network resources by an average of 7.5% in heterogeneous environments.  相似文献   
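A toy sketch of a locality-driven delay policy for ReduceTask placement: estimate the bytes that would have to be copied remotely for each free node and wait if even the best placement is too expensive. The byte-count cost model and the delay threshold are illustrative assumptions, not the MNRC model itself:

```python
def remote_copy_cost(map_output_bytes, node):
    """Bytes that must be copied from other nodes if the ReduceTask runs
    on `node` (rough stand-in for the MNRC network-cost estimate)."""
    return sum(size for host, size in map_output_bytes.items() if host != node)

def schedule_reduce(map_output_bytes, free_nodes, delay_threshold):
    """Launch on the cheapest free node, but return None (i.e. wait) if even
    the best placement would copy more than `delay_threshold` bytes remotely."""
    best = min(free_nodes, key=lambda n: remote_copy_cost(map_output_bytes, n))
    cost = remote_copy_cost(map_output_bytes, best)
    return best if cost <= delay_threshold else None

# Map output (bytes) produced on each host for this reduce partition.
outputs = {"nodeA": 600_000_000, "nodeB": 150_000_000, "nodeC": 50_000_000}
print(schedule_reduce(outputs, ["nodeB", "nodeC"], delay_threshold=700_000_000))
```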

18.
Cluster Computing - Cloud computing refers to a set of hardware and software combined to deliver various computing services. The cloud keeps the services for delivery of...

19.
A gene with yet unknown physiological function can be studied by changing its expression level followed by analysis of the resulting phenotype. This type of functional genomics study can be complicated by the occurrence of 'silent mutations', the phenotypes of which are not easily observable in terms of metabolic fluxes (e.g., the growth rate). Nevertheless, genetic alteration may give rise to significant yet complicated changes in the metabolome. We propose here a conceptual functional genomics strategy based on microbial metabolome data, which identifies changes in in vivo enzyme activities in the mutants. These predicted changes are used to formulate hypotheses to infer unknown gene functions. The required metabolome data can be obtained solely from high-throughput mass spectrometry analysis, which provides the following in vivo information: (1) the metabolite concentrations in the reference and the mutant strain; (2) the metabolic fluxes in both strains; and (3) the enzyme kinetic parameters of the reference strain. We demonstrate in silico that changes in enzyme activities can be accurately predicted by this approach, even in 'silent mutants'.

20.
Kaur, Gurleen, Bala, Anju. Cluster Computing, 2021, 24(3): 1955-1974
Cluster Computing - Cloud computing has attracted scientists to deploy scientific applications by offering services such as Infrastructure-as-a-service (IaaS), Software-as-a-service (SaaS), and...
