A total of 20 similar documents were retrieved (search time: 14 ms).
1.
Concentrating on a single resource cannot efficiently cope with the overall high utilization of resources in cloud data centers. The multiple-resource scheduling problem is therefore attracting more attention from researchers, and some studies have made progress in multi-resource scenarios. However, these earlier heuristics have obvious limitations in complex software-defined cloud environments. Focusing on energy conservation and load balancing, we propose a preciousness model for multiple-resource scheduling in this paper. We formulate the problem and propose an innovative strategy (P-Aware). In P-Aware, a special algorithm, PMDBP (Proportional Multi-dimensional Bin Packing), is applied in the multi-dimensional bin-packing approach; in this algorithm, multiple resources are consumed in a proportional way. The structure and details of PMDBP are discussed in this paper. Extensive experiments demonstrate that our strategy outperforms others in both efficiency and load balancing. P-Aware has now been implemented in the resource management system of our partner company to cut energy consumption and reduce resource contention.
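The abstract does not spell out PMDBP's mechanics, so the following is only a minimal illustrative sketch (not the authors' implementation) of the core idea it names: packing tasks so that a host's multiple resource dimensions are consumed in roughly equal proportion. The names (`Host`, `Task`, `place_proportional`) and the scoring rule are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    demand: List[float]          # demand per resource dimension, e.g. [cpu, mem, net]

@dataclass
class Host:
    capacity: List[float]        # total capacity per dimension
    used: List[float] = field(default_factory=list)

    def __post_init__(self):
        if not self.used:
            self.used = [0.0] * len(self.capacity)

    def fits(self, task: Task) -> bool:
        return all(u + d <= c for u, d, c in zip(self.used, task.demand, self.capacity))

    def imbalance_after(self, task: Task) -> float:
        # Spread between the most- and least-utilized dimensions if the task were placed here.
        utils = [(u + d) / c for u, d, c in zip(self.used, task.demand, self.capacity)]
        return max(utils) - min(utils)

def place_proportional(task: Task, hosts: List[Host]) -> Optional[Host]:
    """Pick the feasible host whose dimensions stay most evenly (proportionally) used."""
    feasible = [h for h in hosts if h.fits(task)]
    if not feasible:
        return None                          # e.g. trigger powering on another host
    best = min(feasible, key=lambda h: h.imbalance_after(task))
    for i, d in enumerate(task.demand):
        best.used[i] += d
    return best

# Toy usage: two hosts, one balanced task; the emptier, more balanced host wins.
hosts = [Host(capacity=[32, 64, 10]), Host(capacity=[32, 64, 10], used=[8, 40, 2])]
print(place_proportional(Task(demand=[8, 8, 1]), hosts) is hosts[0])
```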
2.
Cluster Computing - Multi-core systems have evolved enormously during the last decade with improvements in integration technology, which make it possible to house a large number of transistors...
3.
Cloud computing can leverage the over-provisioned resources that are wasted in traditional data centers hosting production applications by consolidating tasks with lower QoS and SLA requirements. However, the dramatic fluctuation of workloads with lower QoS and SLA requirements may impact the performance of production applications, and frequent task eviction, killing, and rescheduling operations waste CPU cycles and create overhead. This paper aims to schedule hybrid workloads in the cloud data center to reduce task failures and increase resource utilization. A multi-prediction model, comprising an ARMA model and a feedback-based online AR model, is used to predict current and future resource availability. The decision to accept or reject a new task is based on the available resources and the task's properties. Evaluations show that the scheduler can reduce host overload and failed tasks by nearly 70% and increase effective resource utilization by more than 65%, while the degradation in task delay performance remains acceptable.
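As an illustration only (the paper's ARMA/AR models and thresholds are not given here), the sketch below fits a simple autoregressive predictor to a host's recent utilization trace with NumPy and admits a task only if the predicted free capacity covers its demand. The order `p`, the safety margin, and all function names are assumptions.

```python
import numpy as np

def fit_ar(series: np.ndarray, p: int = 3) -> np.ndarray:
    """Least-squares fit of an AR(p) model: x_t ~ c + a1*x_{t-1} + ... + ap*x_{t-p}."""
    X = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, coef, p=3):
    lags = series[-1:-p - 1:-1]              # x_{t-1}, ..., x_{t-p}
    return float(coef[0] + np.dot(coef[1:], lags))

def admit(task_demand, capacity, util_history, margin=0.1, p=3):
    """Accept the task only if the predicted utilization leaves room for it plus a safety margin."""
    history = np.asarray(util_history, dtype=float)
    coef = fit_ar(history, p)
    predicted_util = predict_next(history, coef, p)
    predicted_free = capacity * (1.0 - min(max(predicted_util, 0.0), 1.0))
    return task_demand <= predicted_free * (1.0 - margin)

# Toy usage: utilization slowly rising toward 0.6 on a 64-core host.
history = [0.40, 0.42, 0.45, 0.47, 0.50, 0.52, 0.55, 0.57, 0.60]
print(admit(task_demand=16, capacity=64, util_history=history))
```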
4.
Data centers are the backbone of the cloud infrastructure platform, supporting large-scale data processing and storage, and more and more business-to-consumer and enterprise applications are based on cloud data centers. However, data center energy consumption inevitably leads to high operating costs. The aim of this paper is to comprehensively reduce the energy consumption of cloud data center servers, networks, and cooling systems. We first build an energy-efficient cloud data center system, including its architecture and its job and power consumption models. Then, we combine linear regression and wavelet neural network techniques into a prediction method, which we call MLWNN, to forecast the cloud data center's short-term workload. Third, we propose a heuristic energy-efficient job scheduling solution with workload prediction, which is divided into a resource management strategy and an online energy-efficient job scheduling algorithm. Our extensive simulation results clearly demonstrate that the proposed solution performs well and is very suitable for low-workload cloud data centers.
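The resource management strategy is only outlined in the abstract, so here is a small, hypothetical sketch of the general pattern it describes: use a short-term workload forecast to decide how many servers to keep powered on, with the rest put into a low-power state. The headroom factor and server capacity are illustrative assumptions, not values from the paper.

```python
import math

def servers_needed(predicted_load: float, server_capacity: float, headroom: float = 0.2) -> int:
    """Number of active servers required to host the predicted load plus a safety headroom."""
    return max(1, math.ceil(predicted_load * (1.0 + headroom) / server_capacity))

def plan_active_servers(forecast, total_servers, server_capacity, headroom=0.2):
    """For each forecast interval, return (active, sleeping) server counts."""
    plan = []
    for load in forecast:
        active = min(total_servers, servers_needed(load, server_capacity, headroom))
        plan.append((active, total_servers - active))
    return plan

# Toy usage: a night-to-morning forecast (requests/s) on 100 servers of 500 req/s each.
forecast = [4000, 3000, 2500, 6000, 12000, 20000]
for (active, sleeping), load in zip(plan_active_servers(forecast, 100, 500), forecast):
    print(f"load={load:>6} req/s -> keep {active} active, {sleeping} sleeping")
```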
5.
MapReduce is an effective tool for processing large amounts of data in parallel using a cluster of processors or computers. One common data processing task is the join operation, which combines two or more datasets based on values common to each. In this paper, we present a network-aware multi-way join for MapReduce (SmartJoin) that improves performance and accounts for network traffic when redistributing workload among reducers. SmartJoin achieves this by dynamically redistributing tuples directly between reducers with an intelligent network-aware algorithm. We show that the presented technique has significant potential to minimize the time required to join multiple datasets: in our evaluation, SmartJoin shows up to a 39 % improvement over the non-redistribution method, a 26.8 % improvement over random redistribution, and a 27.6 % improvement over the worst join redistribution.
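SmartJoin's redistribution algorithm is not detailed in the abstract; the following is a rough, assumed illustration of the general idea of moving work from overloaded reducers to under-loaded ones while preferring cheap network paths. The cost matrix, unit of work, and function names are all hypothetical.

```python
def rebalance(loads, cost, max_moves=100):
    """Greedy network-aware rebalancing.

    loads[i]   -- number of tuples currently assigned to reducer i
    cost[i][j] -- relative network cost of moving one tuple from reducer i to reducer j
    Repeatedly moves one unit of work from the most loaded reducer to the
    cheapest-to-reach reducer that is below average load.
    """
    n = len(loads)
    loads = list(loads)
    avg = sum(loads) / n
    moves = []
    for _ in range(max_moves):
        src = max(range(n), key=lambda i: loads[i])
        candidates = [j for j in range(n) if j != src and loads[j] < avg]
        if not candidates or loads[src] <= avg:
            break
        dst = min(candidates, key=lambda j: cost[src][j])   # prefer cheap network paths
        loads[src] -= 1
        loads[dst] += 1
        moves.append((src, dst))
    return loads, moves

# Toy usage: reducer 0 is hot; reducer 2 is cheap to reach, reducer 1 is expensive.
cost = [[0, 5, 1], [5, 0, 4], [1, 4, 0]]
print(rebalance([10, 4, 1], cost))
```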
6.
The information and communication technology (ICT) sector has grown exponentially in recent years. An essential component of ICT organizations is their data centers, which are densely populated with redundant servers and communication links to ensure 99.99 % availability of services, a fact responsible for the heavy energy consumption of data centers. For energy economy, the redundant elements can be powered off based on the current workload within the data center. We propose a Data Center-wide Energy-Efficient Resource Scheduling framework (DCEERS) that schedules data center resources according to the data center's current workload. The minimum subset of resources needed to service the current workload is calculated by solving the Minimum Cost Multi-Commodity Flow (MCMCF) problem using the Benders decomposition algorithm. The Benders decomposition algorithm is scalable: it solves the MCMCF problem in linear time for large data center environments. Our simulation results show that DCEERS achieves better energy efficiency than previous data center resource scheduling methodologies.
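The Benders-decomposition MCMCF formulation is far more involved than can be shown here; purely to illustrate the objective DCEERS optimizes (keep the smallest resource subset that can still carry the current workload and power the rest off), below is a toy greedy sketch. It is not the paper's algorithm, and all names and parameters are assumed.

```python
def minimum_active_servers(demand: float, servers: dict) -> list:
    """Toy stand-in for the scheduling objective: pick the cheapest set of servers
    (by power per unit of capacity) whose combined capacity covers the demand.

    servers maps name -> (capacity, power_watts).
    """
    # Prefer servers that deliver the most capacity per watt.
    ranked = sorted(servers.items(), key=lambda kv: kv[1][1] / kv[1][0])
    active, covered = [], 0.0
    for name, (capacity, power) in ranked:
        if covered >= demand:
            break
        active.append(name)
        covered += capacity
    if covered < demand:
        raise ValueError("workload exceeds total data center capacity")
    return active   # every server not in this list is a candidate for power-off

# Toy usage: three servers, current workload of 180 units.
servers = {"s1": (100, 300), "s2": (100, 250), "s3": (50, 200)}
print(minimum_active_servers(180, servers))      # ['s2', 's1']
```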
7.
Cluster Computing - MapReduce (MR) scheduling is a prominent research area for minimizing energy consumption in the Hadoop framework in the era of green computing. Very few scheduling...
8.
Frédéric Pinel, Bernabé Dorronsoro, Johnatan E. Pecero, Pascal Bouvry, Samee U. Khan 《Cluster Computing》 2013, 16(3): 421-433
The sensitivity analysis of a Cellular Genetic Algorithm (CGA) with local search is used to design a new and faster heuristic for the problem of mapping independent tasks to a distributed system (such as a computer cluster or grid) in order to minimize the makespan (the time when the last task finishes). The proposed heuristic improves on the previously known Min-Min heuristic. Moreover, it finds mappings of similar quality to the original CGA but in a significantly reduced runtime (about 1,000 times faster). The proposed heuristic is evaluated across twelve different classes of scheduling instances. In addition, a proof of the energy efficiency of the algorithm is provided. This convergence study suggests how additional energy reduction can be achieved by inserting low-power computing nodes into the distributed computer system. Simulation results show that this approach reduces both energy consumption and makespan.
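For context on the baseline being improved, below is a compact sketch of the classic Min-Min mapping heuristic (not the authors' CGA-derived heuristic): repeatedly pick the unassigned task whose minimum completion time over all machines is smallest and assign it to that machine. The ETC (expected time to compute) matrix used here is illustrative.

```python
def min_min(etc):
    """Classic Min-Min heuristic.

    etc[t][m] -- expected execution time of task t on machine m.
    Returns (assignment list task -> machine, makespan).
    """
    num_tasks, num_machines = len(etc), len(etc[0])
    ready = [0.0] * num_machines                 # when each machine becomes free
    unassigned = set(range(num_tasks))
    assignment = [None] * num_tasks
    while unassigned:
        # Among all unassigned tasks, find the (task, machine) pair with the
        # smallest completion time.
        best_task, best_machine, best_ct = None, None, float("inf")
        for t in unassigned:
            for m in range(num_machines):
                ct = ready[m] + etc[t][m]
                if ct < best_ct:
                    best_task, best_machine, best_ct = t, m, ct
        assignment[best_task] = best_machine
        ready[best_machine] = best_ct
        unassigned.remove(best_task)
    return assignment, max(ready)

# Toy usage: 4 tasks on 2 heterogeneous machines.
etc = [[3, 5], [2, 4], [6, 3], [4, 8]]
print(min_min(etc))
```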
9.
Russon AE 《Current Biology: CB》 2010, 20(22): R981-R983
A study of orangutans' daily energy expenditure confirmed exceptionally slow metabolism. It suggests they evolved a lifestyle designed to minimize energy use. If so, shifting to a higher energy-use strategy may help explain how humans evolved.
10.
Hou Aiqin, Wu Chase Q., Duan Qiang, Quan Dawei, Zuo Liudong, Li Yangyang, Zhu Michelle M., Fang Dingyi 《Cluster Computing》 2022, 25(4): 3019-3034
Cluster Computing - The widespread deployment of scientific applications and business services of various types on clouds requires the transfer of big data with different priorities between...
11.
Cluster Computing - Data centers are growing densely, providing various services to millions of users through a limited collection of servers. That's why large-scale data center servers...
12.
Cluster Computing - The unprecedented growth in data volume results in an urgent need for a dramatic increase in the size of data center networks. Accommodating millions to even billions of servers...
13.
Energy consumption in high-performance computing data centers has become a long-standing issue. With the rising cost of operating a data center, various techniques need to be employed to reduce overall energy consumption. Current approaches typically save energy by powering idle nodes on and off; however, most of them do not consider the energy consumed by the other components in a rack. Our study addresses this aspect of the data center: we show that considerable energy savings can be gained by reducing the energy consumed by these rack components. To this end, we propose a scheduling technique that schedules jobs with this goal, and we claim that it can reduce energy consumption considerably without affecting other performance metrics of a job. We implement this technique as an enhancement to the well-known Maui scheduler and present our results. We propose three different algorithms as part of this technique, which evaluate the various trade-offs that could be made with respect to overall cluster performance. We compare our technique with the currently available Maui scheduler configurations, simulating a wide variety of workloads from real cluster deployments using Maui's simulation mode. Our results consistently show about 7 to 14 % savings over the currently available Maui scheduler configurations, and our technique can be applied in tandem with most existing energy-aware scheduling techniques to achieve further savings. We also consider the power losses introduced by network switches as a side effect of deploying our technique, compare our technique with existing techniques in terms of these switch power losses based on the results of Sharma and Ranganathan (Lecture Notes in Computer Science, vol. 5550, 2009), and account for them. We then provide a best-fit scheme that takes rack considerations into account, and propose an enhanced technique that merges the two extremes of node allocation based on rack information. This allows the scheduler to be configured according to the kind of workload it schedules and reduces the effect of splitting jobs across multiple racks. We further discuss how the enhancement can be used to build a learning model that adaptively adjusts the scheduling parameters based on the workload experienced.
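The three algorithms are not specified in the abstract; as a purely illustrative sketch of the rack-aware idea (assumed names and policy, not the authors' Maui enhancement), the snippet below allocates a job's nodes so that as few additional racks as possible have to be powered on, preferring racks that are already active.

```python
def allocate_rack_aware(nodes_needed, racks):
    """racks: dict rack_id -> {'free': int, 'active': bool}.
    Returns {rack_id: nodes_taken} or None if the job cannot be placed."""
    if nodes_needed > sum(r["free"] for r in racks.values()):
        return None
    # Prefer racks that are already powered on, then racks with the most free nodes
    # (so the job spans as few racks as possible).
    order = sorted(racks.items(), key=lambda kv: (not kv[1]["active"], -kv[1]["free"]))
    allocation = {}
    for rack_id, info in order:
        if nodes_needed == 0:
            break
        take = min(info["free"], nodes_needed)
        if take > 0:
            allocation[rack_id] = take
            nodes_needed -= take
    return allocation

# Toy usage: rack B is idle and is only woken up if A and C cannot hold the job.
racks = {"A": {"free": 6, "active": True},
         "B": {"free": 10, "active": False},
         "C": {"free": 4, "active": True}}
print(allocate_rack_aware(8, racks))    # {'A': 6, 'C': 2} -- rack B stays powered off
```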
14.
15.
Cloud computing and emerging web applications have created the need for more powerful data centers. These data centers need high-bandwidth interconnects that can sustain the intense interaction between web, application, and database servers. Data center networks based on electronic packet switches will have to consume excessive power in order to satisfy the required communication bandwidth of future data centers. Optical interconnects have recently gained attention as a promising energy-efficient solution offering high throughput, low latency, and reduced energy consumption compared to current networks based on commodity switches. This paper presents a comparison of the power consumption of several optical interconnection schemes based on AWGRs, wavelength selective switches (WSS), or semiconductor optical amplifiers (SOAs). Based on a thorough analysis of each architecture, it is shown that optical interconnects can achieve at least an order of magnitude higher energy efficiency than current data center networks based on electrical packet switches, and could thus contribute to greener IT network infrastructures.
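To make the kind of comparison described above concrete, here is a small, generic calculation of energy efficiency (Gb/s per watt) for a switching fabric given per-port bandwidth and per-port power. The figures used below are placeholders chosen only to show the arithmetic; they are not values reported for the AWGR, WSS, or SOA designs in the paper.

```python
def efficiency_gbps_per_watt(ports: int, gbps_per_port: float, watts_per_port: float) -> float:
    """Aggregate throughput divided by total power for a switching fabric."""
    total_bandwidth = ports * gbps_per_port
    total_power = ports * watts_per_port
    return total_bandwidth / total_power

# Placeholder figures (NOT from the paper) just to illustrate the comparison:
electrical = efficiency_gbps_per_watt(ports=64, gbps_per_port=10, watts_per_port=12.0)
optical = efficiency_gbps_per_watt(ports=64, gbps_per_port=10, watts_per_port=1.2)

print(f"electrical fabric : {electrical:.2f} Gb/s per W")
print(f"optical fabric    : {optical:.2f} Gb/s per W")
print(f"ratio             : {optical / electrical:.1f}x")
```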
16.
Alharbi Fares, Tian Yu-Chu, Tang Maolin, Ferdaus Md Hasanul, Zhang Wei-Zhe, Yu Zu-Guo 《Cluster Computing》 2021, 24(2): 1255-1275
Cluster Computing - Enterprise cloud data centers consume a tremendous amount of energy due to the large number of physical machines (PMs). These PMs host a huge number of virtual machines (VMs),...
17.
Cluster Computing - The internet is expanding its reach into every conceivable part of the modern economy. No longer confined to our web browsers, the internet is shaping our way of...
18.
The demand for cloud computing is increasing dramatically due to the high computational requirements of business, social, web, and scientific applications. Nowadays, applications and services are hosted on the cloud in order to reduce the costs of hardware, software, and maintenance. To satisfy this high demand, the number of large-scale data centers has increased; these consume a high volume of electrical power, have a negative impact on the environment, and come with high operational costs. In this paper, we discuss many ongoing or implemented energy-aware resource allocation techniques for cloud environments and present a comprehensive review of the different energy-aware resource allocation and selection algorithms for virtual machines in the cloud. Finally, we identify further research issues and challenges for future cloud environments.
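As one concrete example of the class of algorithms such surveys cover (not an algorithm from this paper), the sketch below implements a simple power-aware placement rule: put a VM on the host whose estimated power draw increases the least, using a linear idle-to-peak power model. The power model parameters and names are assumptions.

```python
def host_power(util, idle_w=100.0, peak_w=250.0):
    """Linear power model: idle power plus a utilization-proportional share of dynamic power."""
    return idle_w + (peak_w - idle_w) * util

def least_power_increase_host(vm_cpu, hosts):
    """hosts: list of dicts with 'used' and 'capacity' CPU fields.
    Returns the index of the feasible host whose power draw grows the least, or None."""
    best_idx, best_delta = None, float("inf")
    for i, h in enumerate(hosts):
        if h["used"] + vm_cpu > h["capacity"]:
            continue                                  # VM does not fit on this host
        before = host_power(h["used"] / h["capacity"])
        after = host_power((h["used"] + vm_cpu) / h["capacity"])
        if after - before < best_delta:
            best_idx, best_delta = i, after - before
        # ties could be broken toward already-busy hosts so idle hosts can stay powered off
    return best_idx

# Toy usage: place a 4-core VM across three hosts of different sizes and loads.
hosts = [{"used": 10, "capacity": 16},
         {"used": 2, "capacity": 32},
         {"used": 14, "capacity": 16}]
print(least_power_increase_host(4, hosts))
```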
19.
As a key element in solving complex scientific problems in the life sciences and a foundational resource driving scientific discovery and decision-making, microbial science data have become an important national strategic resource. The establishment of the National Microbiology Data Center (https://nmdc.cn/) enables massive microbial data resources to be effectively curated, integrated, and openly shared, which is crucial for the research, utilization, and sustainable development of microbial resources. This paper summarizes the progress of the National Microbiology Data Center platform in terms of its core resources, services, and functional features, and presents application practices for research and industrial users in the microbiology field.
20.
Vijay Shankar Rajanna, Anand Jahagirdar, Smit Shah, Kartik Gopalan 《Cluster Computing》 2012, 15(2): 183-200
Large cluster-based cloud computing platforms increasingly use commodity Ethernet technologies, such as Gigabit Ethernet, 10GigE, and Fibre Channel over Ethernet (FCoE), for intra-cluster communication. Traffic congestion can become a performance concern in the Ethernet due to the consolidation of data, storage, and control traffic over a common layer-2 fabric, as well as the consolidation of multiple virtual machines (VMs) on less physical hardware. Even as networking vendors race to develop switch-level hardware support for congestion management, we make the case that virtualization has opened up a complementary set of opportunities to reduce or even eliminate network congestion in cloud computing clusters. We present the design, implementation, and evaluation of a system called XCo that performs explicit coordination of network transmissions over a shared Ethernet fabric to proactively prevent network congestion. XCo is a software-only distributed solution executing only in the end-nodes: a central controller uses explicit permissions to temporally separate (at millisecond granularity) the transmissions from competing senders through congested links. XCo is fully transparent to applications, presently deployable, and independent of any switch-level hardware support. We present a detailed evaluation of our XCo prototype across a number of network congestion scenarios and demonstrate that XCo significantly improves network performance during periods of congestion. We also evaluate the behavior of XCo for large topologies using NS3 simulations.
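XCo's controller protocol is more involved than the abstract can convey; below is only a toy, assumed illustration of the core idea of a central controller temporally separating transmissions: it hands out short, non-overlapping transmission slots to the senders sharing a congested link, round-robin. The slot length and all names are hypothetical.

```python
import itertools

class CentralController:
    """Grants non-overlapping transmission permissions (time slots) per congested link."""

    def __init__(self, slot_ms: float = 5.0):
        self.slot_ms = slot_ms
        self.links = {}        # link_id -> round-robin iterator over that link's senders

    def register_link(self, link_id: str, senders: list):
        self.links[link_id] = itertools.cycle(senders)

    def next_grant(self, link_id: str, now_ms: float):
        """Return (sender, start_ms, end_ms): only this sender may transmit on the link."""
        sender = next(self.links[link_id])
        return sender, now_ms, now_ms + self.slot_ms

# Toy usage: three VMs share a congested uplink; permissions alternate every 5 ms.
ctrl = CentralController(slot_ms=5.0)
ctrl.register_link("uplink-1", ["vm-a", "vm-b", "vm-c"])
t = 0.0
for _ in range(6):
    sender, start, end = ctrl.next_grant("uplink-1", t)
    print(f"{start:5.1f}-{end:5.1f} ms : {sender} may transmit")
    t = end
```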