Similar Documents
20 similar documents found
1.
Run time variability of parallel applications continues to present significant challenges to their performance and energy efficiency in high-performance computing (HPC) systems. When run times are extended and unpredictable, application developers perceive this as a degradation of system (or subsystem) performance. Extended run times directly contribute to proportionally higher energy consumption, potentially negating efforts by applications, or the HPC system, to optimize energy consumption using low-level control techniques, such as dynamic voltage and frequency scaling (DVFS). Therefore, successful systemic management of application run time performance can result in less wasted energy, or even energy savings. We have been studying run time variability in terms of communication time, from the perspective of the application, focusing on the interconnection network. More recently, our focus has shifted to developing a more complete understanding of the effects of HPC subsystem interactions on parallel applications. In this context, the set of executing applications on the HPC system is treated as a subsystem, along with more traditional subsystems like the communication subsystem, storage subsystem, etc. To gain insight into the run time variability problem, our earlier work developed a framework to emulate parallel applications (PACE) that stresses the communication subsystem. Evaluation of the run time sensitivity of real applications to network performance is performed with a tool called PARSE, which uses PACE. In this paper, we propose a model of application-level behavioral attributes that collectively describe how applications behave in terms of their run time performance, as functions of their process distribution on the system (spatial locality) and subsystem interactions (communication subsystem degradation). These subsystem interactions are produced when multiple applications execute concurrently on the same HPC system. We also revisit our evaluation framework and tools to demonstrate the flexibility of our application characterization techniques and the ease with which attributes can be quantified. The validity of the model is demonstrated using our tools with several parallel benchmarks and application fragments. Results suggest that it is possible to articulate application-level behavioral attributes as a tuple of numeric values that describe coarse-grained performance behavior.

2.
Reducing energy consumption is an increasingly important issue in cloud computing, more specifically when dealing with High Performance Computing (HPC). Minimizing energy consumption can significantly reduce energy bills and thereby increase the provider’s profit. In addition, reducing energy use decreases greenhouse gas emissions. Therefore, much research has been carried out to develop new methods that make HPC applications consume less energy. In this paper, we present a multi-objective genetic algorithm (MO-GA) that optimizes the energy consumption, CO2 emissions and the generated profit of a geographically distributed cloud computing infrastructure. We also propose a greedy heuristic that aims to maximize the number of scheduled applications in order to compare it with the MO-GA. The two approaches have been evaluated using realistic workload traces from Feitelson’s Parallel Workloads Archive (PWA). The results show that MO-GA outperforms the greedy heuristic by a significant margin in terms of energy consumption and CO2 emissions. In addition, MO-GA also proves to be slightly better in terms of profit while scheduling more applications.
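To make the optimization target concrete, the sketch below shows how a candidate schedule could be scored on the three objectives named above (energy, CO2, profit). It is a hedged illustration only: the linear power model, carbon intensities, and prices are assumptions, not the paper's MO-GA.

```python
# Hedged sketch of a three-objective evaluation, as an MO-GA fitness function might use.
# All numeric parameters below are illustrative assumptions.

def evaluate(schedule, apps, sites):
    """schedule: one site index per application, or None if the application is rejected."""
    energy_kwh = co2_kg = profit = 0.0
    for app, site_idx in zip(apps, schedule):
        if site_idx is None:
            continue                                 # application not scheduled
        site = sites[site_idx]
        kw = site["idle_kw"] + site["kw_per_core"] * app["cores"]   # simple linear power model
        e = kw * app["runtime_h"]                    # energy in kWh
        energy_kwh += e
        co2_kg += e * site["co2_kg_per_kwh"]         # grid carbon intensity of the site
        profit += app["price"] - e * site["price_per_kwh"]
    return energy_kwh, co2_kg, profit                # objectives for Pareto ranking

apps = [{"cores": 64, "runtime_h": 2.0, "price": 30.0},
        {"cores": 16, "runtime_h": 8.0, "price": 25.0}]
sites = [{"idle_kw": 0.2, "kw_per_core": 0.010, "co2_kg_per_kwh": 0.4, "price_per_kwh": 0.12},
         {"idle_kw": 0.3, "kw_per_core": 0.008, "co2_kg_per_kwh": 0.1, "price_per_kwh": 0.18}]
print(evaluate([1, 0], apps, sites))
```

A genetic algorithm would evolve the `schedule` vector and keep the non-dominated trade-offs, while the greedy heuristic mentioned above would instead place one application at a time.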

3.
Inhibition of myocardial fatty acid oxidation can improve left ventricular (LV) mechanical efficiency by increasing LV power for a given rate of myocardial energy expenditure. This phenomenon has not been assessed at high workloads in nonischemic myocardium; therefore, we subjected in vivo pig hearts to a high workload for 5 min and assessed whether blocking mitochondrial fatty acid oxidation with the carnitine palmitoyltransferase-I inhibitor oxfenicine would improve LV mechanical efficiency. In addition, the cardiac content of malonyl-CoA (an endogenous inhibitor of carnitine palmitoyltransferase-I) and activity of acetyl-CoA carboxylase (which synthesizes malonyl-CoA) were assessed. Increased workload was induced by aortic constriction and dobutamine infusion, and LV efficiency was calculated from the LV pressure-volume loop and LV energy expenditure. In untreated pigs, the increase in LV power resulted in a 2.5-fold increase in fatty acid oxidation and cardiac malonyl-CoA content but did not affect the activation state of acetyl-CoA carboxylase. The activation state of the acetyl-CoA carboxylase inhibitory kinase AMP-activated protein kinase decreased by 40% with increased cardiac workload. Pretreatment with oxfenicine inhibited fatty acid oxidation by 75% and had no effect on cardiac energy expenditure but significantly increased LV power and LV efficiency (37 +/- 5% vs. 26 +/- 5%, P < 0.05) at high workload. In conclusion, 1) myocardial fatty acid oxidation increases with a short-term increase in cardiac workload, despite an increase in malonyl-CoA concentration, and 2) inhibition of fatty acid oxidation improves LV mechanical efficiency by increasing LV power without affecting cardiac energy expenditure.  相似文献   
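The efficiency figure quoted above follows the usual definition of mechanical efficiency, external power output over myocardial energy expenditure; a simplified paraphrase (not the paper's exact formulation) is:

```latex
\eta_{\mathrm{LV}} \;=\; \frac{\text{LV power (from the pressure-volume loop)}}{\dot{E}_{\text{myocardial}}}\times 100\%
```

so efficiency rises when oxfenicine increases LV power while leaving energy expenditure unchanged (37% vs. 26% at high workload).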

4.
5.
We analyze how the foraging currencies "rate" (net energy gain per unit time) and "efficiency" (net energy gain per unit energy expenditure) relate to the workload adopted by a forager. We consider feeding (gathering food for immediate consumption) as opposed to provisioning and investigate the influence of time and energy constraints. In our model the forager may vary the level of energy expenditure while foraging; increased expenditure increases the rate of gain, but with diminishing returns. We show that rate maximizing requires a higher rate of energy expenditure than efficiency maximizing, and we compare the performance of rate- and efficiency-maximizing tactics when the feeding strategy is (1) to maximize the total net gain while foraging; (2) to maximize the total net daily gain; or (3) to meet a requirement. Generally, the rate-maximizing tactic only performs best when time is limiting; otherwise, a lighter workload and slower feeding rate perform better. Under the restricted conditions analyzed here, no general statement can be made about the best tactic when the strategy is to meet a requirement. These results may help explain several instances of "submaximal" foraging described in the literature.
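The two currencies can be written compactly; the notation below is our hedged paraphrase, with x the forager's rate of energy expenditure and g(x) the gross gain rate, assumed concave (diminishing returns):

```latex
\text{rate:}\quad R(x) = g(x) - x
\qquad\qquad
\text{efficiency:}\quad E(x) = \frac{g(x) - x}{x}
```

Maximizing R pushes expenditure up until the marginal return g'(x) falls to 1, whereas the efficiency currency charges for every extra unit of expenditure directly, so under diminishing returns it favors a lighter workload, which is the ordering stated above.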

6.

High energy consumption (EC) is one of the leading and most interesting issues in the cloud environment. The optimization of EC is generally related to the scheduling problem. An optimal scheduling strategy selects resources or tasks in such a way that system performance is not violated while minimizing EC and maximizing resource utilization (RU). This paper presents a task scheduling model for scheduling tasks on virtual machines (VMs). The objective of the proposed model is to minimize EC, maximize RU, and minimize workflow makespan while preserving the tasks’ deadline and dependency constraints. An energy and resource efficient workflow scheduling algorithm (ERES) is proposed to schedule the workflow tasks to the VMs and dynamically deploy/un-deploy the VMs based on the workflow tasks’ requirements. An energy model is presented to compute the EC of the servers. A double threshold policy is used to perceive the servers’ status, i.e., overloaded, underloaded, or normal. To balance the workload on the overloaded/underloaded servers, a live VM migration strategy is used. To check the effectiveness of the proposed algorithm, exhaustive simulation experiments are conducted. The proposed algorithm is compared with the power efficient scheduling and VM consolidation (PESVMC) algorithm in terms of RU, energy efficiency, and task makespan. Further, the results are also verified in a real cloud environment. The results demonstrate the effectiveness of the proposed ERES algorithm.
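The double-threshold idea can be sketched in a few lines; the thresholds and the CPU-utilization-only view below are illustrative assumptions, not the exact ERES policy:

```python
# Hedged sketch of a double-threshold host classifier with a simple migration
# trigger; the 0.2/0.8 thresholds and the CPU-only view are illustrative only.
T_LOW, T_HIGH = 0.2, 0.8

def classify(util):
    if util > T_HIGH:
        return "overloaded"
    if util < T_LOW:
        return "underloaded"
    return "normal"

def migration_candidates(hosts):
    """hosts: dict host_id -> list of VM CPU demands (fractions of host capacity)."""
    moves = []
    for host_id, vms in hosts.items():
        util = sum(vms)
        status = classify(util)
        if status == "overloaded":
            # move the smallest VMs until the host drops back below the upper threshold
            for vm in sorted(vms):
                if util <= T_HIGH:
                    break
                moves.append((host_id, vm))
                util -= vm
        elif status == "underloaded":
            # move everything so the host can be un-deployed (switched off)
            moves.extend((host_id, vm) for vm in vms)
    return moves

print(migration_candidates({"h1": [0.5, 0.3, 0.15], "h2": [0.1], "h3": [0.4, 0.3]}))
# -> [('h1', 0.15), ('h2', 0.1)]
```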


7.
Data centers are the backbone of the cloud infrastructure platform, supporting large-scale data processing and storage. More and more business-to-consumer and enterprise applications are based on cloud data centers. However, the energy consumption of data centers inevitably leads to high operating costs. The aim of this paper is to comprehensively reduce the energy consumption of cloud data center servers, networks, and cooling systems. We first build an energy efficient cloud data center system, including its architecture and its job and power consumption models. Then, we combine linear regression and wavelet neural network techniques into a prediction method, which we call MLWNN, to forecast the short-term workload of the cloud data center. Third, we propose a heuristic energy efficient job scheduling with workload prediction solution, which is divided into a resource management strategy and an online energy efficient job scheduling algorithm. Our extensive simulation performance evaluation results clearly demonstrate that our proposed solution performs well and is very suitable for low-workload cloud data centers.
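As a rough illustration of the two-stage prediction idea, the sketch below fits a linear-regression trend and then models the residual with a small neural network. It is a stand-in, not the authors' MLWNN: an ordinary scikit-learn MLP is used here in place of the wavelet neural network, and the synthetic workload is made up.

```python
# Hedged sketch of a two-stage short-term workload forecaster:
# linear regression for the trend, plus a neural network on the residuals.
# An MLPRegressor stands in for the wavelet neural network used in MLWNN.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(400)
load = 50 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

# Stage 1: linear trend over time.
trend_model = LinearRegression().fit(t.reshape(-1, 1), load)
residual = load - trend_model.predict(t.reshape(-1, 1))

# Stage 2: model the residual from its recent history (lag features).
LAGS = 24
X = np.stack([residual[i - LAGS:i] for i in range(LAGS, t.size)])
y = residual[LAGS:]
resid_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                           random_state=0).fit(X, y)

# One-step-ahead forecast = trend + predicted residual.
next_t = np.array([[t.size]])
forecast = trend_model.predict(next_t)[0] + \
           resid_model.predict(residual[-LAGS:].reshape(1, -1))[0]
print(f"forecast for step {t.size}: {forecast:.1f}")
```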

8.
Autonomous wireless sensor networks are subject to power, bandwidth, and resource limitations that can be represented as capacity constraints imposed on their equivalent flow networks. The maximum sustainable workload (i.e., the maximum data flow from the sensor nodes to the collection point that is compatible with the capacity constraints) is the maxflow of the flow network. Although a large number of energy-aware routing algorithms for ad-hoc networks have been proposed, they usually aim at maximizing the lifetime of the network rather than the steady-state sustainability of the workload. Energy harvesting techniques, providing renewable supply to sensor nodes, prompt a paradigm shift from energy-constrained lifetime optimization to power-constrained workload optimization.
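The maximum sustainable workload can be obtained with any standard max-flow routine; below is a small self-contained Edmonds-Karp sketch on a toy sensor topology of our own invention (node names and capacities are illustrative, not from the paper).

```python
# Hedged sketch: maximum sustainable workload as the max-flow from a virtual
# source (feeding all sensor nodes) to the collection point (sink).
from collections import deque

def max_flow(cap, s, t):
    """cap: dict u -> dict v -> capacity. Edmonds-Karp (BFS augmenting paths)."""
    flow = 0
    # make sure reverse edges exist in the residual graph
    for u in list(cap):
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # bottleneck along the augmenting path, then update residual capacities
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

# virtual source S feeds each sensor with its sensing rate; "sink" is the collector
graph = {
    "S":  {"n1": 3, "n2": 3, "n3": 3},        # per-node data generation limits
    "n1": {"n2": 2, "sink": 2},
    "n2": {"sink": 4},
    "n3": {"n2": 1, "sink": 1},
}
print("max sustainable workload:", max_flow(graph, "S", "sink"))   # -> 7
```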

9.
Two kinds of free‐standing electrodes, a reduced graphene oxide (rGO)‐wrapped Fe‐doped MnO2 composite (G‐MFO) and an rGO‐wrapped hierarchical porous carbon microsphere composite (G‐HPC), are fabricated using a frozen lake‐inspired, bubble‐assisted method. This configuration fully enables utilization of the synergistic effects of both components, making the materials excellent electrodes for flexible and lightweight electrochemical capacitors. Moreover, a nonaqueous HPC‐doped gel polymer electrolyte (GPE‐HPC) is employed to broaden the voltage window and improve heat resistance. A fabricated asymmetric supercapacitor based on a G‐MFO cathode and a G‐HPC anode with the GPE‐HPC electrolyte achieves superior flexibility and reliability, enhanced energy/power density, and outstanding cycling stability. The ability to power light‐emitting diodes also indicates its feasibility for practical use. Therefore, it is believed that this novel design may hold great promise for future flexible electronic devices.

10.

Data transmission and retrieval in a cloud computing environment are usually handled by storage device providers or physical storage units leased by third parties. Improving network performance with respect to power connectivity and resource stability, while ensuring workload balance, is a hot topic in cloud computing. In this research, we address the data duplication problem by providing two dynamic models with two variant architectures, in order to investigate the strengths and shortcomings of these architectures in Big Data cloud computing networks. The problems of the data duplication process are discussed in detail for each model. Attempts have been made to improve the performance of the cloud network by taking into account and correcting the flaws of previously proposed algorithms. The accuracy of the proposed models has been investigated by simulation. The results indicate an increase in the workload balance of the network and a decrease in response time to user requests for the model with a grouped architecture, across all the architectures. Also, the proposed duplicate-data model with a peer-to-peer network architecture is able to increase cloud network optimality compared to models presented with the same architecture.


11.
Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems can reduce energy consumption and excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, this survey focuses on the characteristics of two main power management technologies: (a) static power management (SPM) systems that utilize low-power components to save energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize energy consumption. We present the current state of the art in both SPM and DPM techniques, citing representative examples. The survey concludes with a brief discussion of possible future directions that could be explored to improve energy efficiency in cluster computing.
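The leverage behind DPM techniques such as DVFS comes from how dynamic CMOS power scales with supply voltage and clock frequency (a textbook relation, not specific to this survey):

```latex
P_{\mathrm{dyn}} \approx \alpha\, C\, V^{2} f , \qquad E = \int P \, dt
```

Because the supply voltage V can typically be lowered together with the frequency f, running at a reduced frequency cuts dynamic power roughly with the cube of the frequency even though execution time grows, which is the trade-off DPM systems exploit at run time; SPM instead locks in savings at design time by choosing low-power components.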

12.
A simple and scalable direct laser machining process to fabricate MXene‐on‐paper coplanar microsupercapacitors is reported. Commercially available printing paper is employed as a platform in order to coat either hydrofluoric acid‐etched or clay‐like 2D Ti3C2 MXene sheets, followed by laser machining to fabricate thick‐film MXene coplanar electrodes over a large area. The size, morphology, and conductivity of the 2D MXene sheets are found to strongly affect the electrochemical performance due to the efficiency of the ion‐electron kinetics within the layered MXene sheets. The areal performance metrics of Ti3C2 MXene‐on‐paper microsupercapacitors show very competitive power‐energy densities, comparable to the reported state‐of‐the‐art paper‐based microsupercapacitors. Various device architectures are fabricated using the MXene‐on‐paper electrodes and successfully demonstrated as a micropower source for light emitting diodes. The MXene‐on‐paper electrodes show promise for flexible on‐paper energy storage devices.  相似文献   

13.
In a large-scale parallel computing environment, resource allocation and energy efficient techniques are required to deliver quality of service (QoS) and to reduce the operational cost of the system, because the cost of energy consumption is a dominant part of the owner’s and user’s budget. However, when energy efficiency is considered, resource allocation strategies become more difficult, and QoS (i.e., queue time and response time) may be violated. This paper therefore presents a comparative study of job scheduling in large-scale parallel systems to: (a) minimize the queue time, response time, and energy consumption and (b) maximize the overall system utilization. We compare thirteen job scheduling policies to analyze their behavior. The set of job scheduling policies includes (a) priority-based, (b) first fit, (c) backfilling, and (d) window-based policies. All of the policies are extensively simulated and compared. For the simulation, a real data center workload comprising 22,385 jobs is used. Based on their performance, we incorporate energy efficiency into three policies, i.e., (1) the best result producer, (2) the average result producer, and (3) the worst result producer. We analyze the (a) queue time, (b) response time, (c) slowdown ratio, and (d) energy consumption to evaluate the policies. Moreover, we present a comprehensive workload characterization for optimizing system performance and for scheduler design. Major workload characteristics, including (a) Narrow, (b) Wide, (c) Short, and (d) Long jobs, are characterized for detailed analysis of the schedulers’ performance. This study highlights the strengths and weaknesses of various job scheduling policies and helps to choose an appropriate job scheduling policy in a given scenario.
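To illustrate one of the policy families compared above, here is a compact backfilling sketch (a simplified EASY-style variant of our own, on a single homogeneous cluster; it is not one of the thirteen policies evaluated in the paper):

```python
# Hedged sketch of EASY-style backfilling on a homogeneous cluster: the queue
# head is served FCFS, and later jobs are backfilled only when they cannot
# delay the head job's reservation. Simplified for illustration.
import heapq

def easy_backfill(jobs, total_nodes):
    """jobs: list of (submit_time, nodes, runtime). Returns [(job_index, start_time)]."""
    order = sorted(range(len(jobs)), key=lambda k: jobs[k][0])
    queue, running, out = [], [], []            # running: heap of (end_time, nodes)
    now, free, i = 0.0, total_nodes, 0
    while i < len(order) or queue or running:
        while i < len(order) and jobs[order[i]][0] <= now:
            queue.append(order[i]); i += 1
        # 1) start jobs strictly in FCFS order while they fit
        while queue and jobs[queue[0]][1] <= free:
            k = queue.pop(0)
            free -= jobs[k][1]
            heapq.heappush(running, (now + jobs[k][2], jobs[k][1]))
            out.append((k, now))
        # 2) head blocked: compute its reservation, then backfill safe jobs
        if queue:
            need = jobs[queue[0]][1]
            avail, reservation = free, now
            for end, n in sorted(running):      # nodes released over time
                if avail >= need:
                    break
                avail += n; reservation = end
            for k in queue[1:]:                 # iterate over a copy of the waiters
                nodes, runtime = jobs[k][1], jobs[k][2]
                # safe: fits now and releases its nodes before the reservation
                if nodes <= free and now + runtime <= reservation:
                    queue.remove(k)
                    free -= nodes
                    heapq.heappush(running, (now + runtime, nodes))
                    out.append((k, now))
        # 3) advance time to the next completion or arrival
        events = ([running[0][0]] if running else []) + \
                 ([jobs[order[i]][0]] if i < len(order) else [])
        if not events:
            break                               # head job larger than the machine
        now = min(events)
        while running and running[0][0] <= now:
            free += heapq.heappop(running)[1]
    return out

jobs = [(0, 8, 10), (0, 6, 5), (1, 2, 3), (2, 4, 12)]   # (submit, nodes, runtime)
print(easy_backfill(jobs, 10))   # job 2 backfills ahead of job 1 without delaying it
```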

14.
Reynolds number and thus body size may potentially limit aerodynamic force production in flying insects due to relative changes of viscous forces on the beating wings. By comparing four different species of fruit flies similar in shape but with different body mass, we have investigated how small insects cope with changes in fluid mechanical constraints on power requirements for flight and the efficiency with which chemical energy is turned into aerodynamic flight forces. The animals were flown in a flight arena in which stroke kinematics, aerodynamic force production, and carbon dioxide release were measured within the entire working range of the flight motor. The data suggest that during hovering the mean lift coefficient is higher in smaller animals than in their larger relatives. This result runs counter to predictions based on conventional aerodynamic theory and suggests subtle differences in stroke kinematics between the animals. Estimates of profile power requirements based on a high drag coefficient suggest that among all tested species of fruit flies elastic energy storage might not be required to minimize energetic expenditures during flight. Moreover, muscle efficiency significantly increases with increasing body size whereas aerodynamic efficiency tends to decrease with increasing size or Reynolds number. As a consequence of these two opposite trends, total flight efficiency tends to increase only slightly within the 6-fold range of body sizes. Surprisingly, total flight efficiency in fruit flies is broadly independent of the different profile power estimates and typically yields mean values of 2–4%.

15.
DENS: data center energy-efficient network-aware scheduling
In modern data centers, energy consumption accounts for a considerably large slice of operational expenses. The existing work in data center energy optimization is focusing only on job distribution between computing servers based on workload or thermal profiles. This paper underlines the role of communication fabric in data center energy consumption and presents a scheduling approach that combines energy efficiency and network awareness, named DENS. The DENS methodology balances the energy consumption of a data center, individual job performance, and traffic demands. The proposed approach optimizes the tradeoff between job consolidation (to minimize the amount of computing servers) and distribution of traffic patterns (to avoid hotspots in the data center network).  相似文献   
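The consolidation-versus-hotspot trade-off can be pictured as a weighted server-selection score; the sketch below is a loose illustration of the idea, not the DENS formulation, and the weights and metrics are our assumptions.

```python
# Hedged sketch: pick the server that best balances consolidation
# (prefer already-loaded servers, so idle ones can sleep) against
# network congestion (avoid servers behind loaded uplinks).
# The 0.7/0.3 weights and 0.9 admission limit are illustrative only.

def select_server(servers, w_load=0.7, w_net=0.3):
    """servers: list of dicts with 'cpu_util' and 'uplink_util' in [0, 1]."""
    def score(s):
        consolidation = s["cpu_util"]          # higher -> better for consolidation
        congestion = s["uplink_util"]          # higher -> worse for traffic hotspots
        return w_load * consolidation - w_net * congestion
    eligible = [s for s in servers if s["cpu_util"] < 0.9]   # must have spare capacity
    return max(eligible, key=score)

servers = [
    {"name": "s1", "cpu_util": 0.85, "uplink_util": 0.9},   # loaded, congested uplink
    {"name": "s2", "cpu_util": 0.60, "uplink_util": 0.2},   # moderately loaded, quiet network
    {"name": "s3", "cpu_util": 0.05, "uplink_util": 0.1},   # nearly idle
]
print(select_server(servers)["name"])   # -> s2
```

Here the moderately loaded server with a quiet uplink wins: the nearly idle one is passed over so it can be powered down, and the heavily loaded one is penalized for its congested uplink.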

16.
Energy efficient virtual machine (VM) consolidation in modern data centers is typically optimized using methods such as Mixed Integer Programming, which typically require precise input to the model. Unfortunately, many parameters are uncertain or very difficult to predict precisely in the real world. As a consequence, a once-calculated solution may be highly infeasible in practice. In this paper, we use methods from robust optimization theory in order to quantify the impact of uncertainty in modern data centers. We study the impact of different parameter uncertainties, e.g., VM resource demands, migration-related overhead, or the power consumption model of the servers used, on energy efficiency and overbooking ratios. We also show that setting aside additional resources to cope with workload uncertainty influences the overbooking ratio of the servers and the energy consumption. We show that, by using our model, Cloud operators can calculate a more robust migration schedule, at the cost of higher total energy consumption. A more risky operator may well choose a more opportunistic schedule leading to lower energy consumption but also a higher risk of SLA violation.

17.
A global energy crop productivity model that provides geospatially explicit quantitative details on biomass potential and factors affecting sustainability would be useful, but does not exist now. This study describes a modeling platform capable of meeting many challenges associated with global‐scale agro‐ecosystem modeling. We designed an analytical framework for bioenergy crops consisting of six major components: (i) standardized natural resources datasets, (ii) global field‐trial data and crop management practices, (iii) simulation units and management scenarios, (iv) model calibration and validation, (v) high‐performance computing (HPC) simulation, and (vi) simulation output processing and analysis. The HPC‐Environmental Policy Integrated Climate (HPC‐EPIC) model simulated a perennial bioenergy crop, switchgrass (Panicum virgatum L.), estimating feedstock production potentials and effects across the globe. This modeling platform can assess soil C sequestration, net greenhouse gas (GHG) emissions, nonpoint source pollution (e.g., nutrient and pesticide loss), and energy exchange with the atmosphere. It can be expanded to include additional bioenergy crops (e.g., miscanthus, energy cane, and agave) and food crops under different management scenarios. The platform and switchgrass field‐trial dataset are available to support global analysis of biomass feedstock production potential and corresponding metrics of sustainability.  相似文献   

18.
The complexity and requirements of web applications are increasing in order to meet more sophisticated business models (web services and cloud computing, for instance). For this reason, characteristics such as performance, scalability and security are addressed in web server cluster design. Due to the rising energy costs and also to environmental concerns, energy consumption in this type of system has become a main issue. This paper shows energy consumption reduction techniques that use a load forecasting method, combined with DVFS (Dynamic Voltage and Frequency Scaling) and dynamic configuration techniques (turning servers on and off), in a soft real-time web server clustered environment. Our system promotes energy consumption reduction while maintaining user’s satisfaction with respect to request deadlines being met. The results obtained show that prediction capabilities increase the QoS (Quality of Service) of the system, while maintaining or improving the energy savings over state-of-the-art power management mechanisms. To validate this predictive policy, a web application running a real workload profile was deployed in an Apache server cluster testbed running Linux.  相似文献   
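A stripped-down sketch of such a control loop, combining a short load forecast with server on/off counts and a DVFS-style frequency choice, is shown below; the capacities, frequency steps, and moving-average forecast are illustrative assumptions, not the paper's policy.

```python
# Hedged sketch: provision just enough servers (plus headroom) for the
# forecast load, then pick the lowest CPU frequency that still covers it.
# All numbers are illustrative assumptions.
import math

REQ_PER_SERVER_AT_FMAX = 100        # requests/s one server handles at full frequency
FREQ_LEVELS = [0.6, 0.8, 1.0]       # available relative DVFS frequency steps
HEADROOM = 1.2                      # 20% slack to protect soft request deadlines

def forecast(history, window=3):
    """Very simple forecast: moving average of the last few samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan(history):
    load = forecast(history) * HEADROOM
    servers = max(1, math.ceil(load / REQ_PER_SERVER_AT_FMAX))   # servers kept on
    for f in FREQ_LEVELS:           # lowest frequency that still meets the load
        if servers * REQ_PER_SERVER_AT_FMAX * f >= load:
            return servers, f
    return servers, FREQ_LEVELS[-1]

history = [150, 170, 190]           # recent request rates (requests/s)
print(plan(history))                # -> (3, 0.8): three servers on, run at 80% frequency
```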

19.
To date, the fabrication of piezoelectric nanogenerators (PNGs) that are highly durable and offer high power density and high energy conversion efficiency remains a great concern. Here, a flexible, sensitive, cost-effective hybrid piezoelectric nanogenerator (HPNG), developed by integrating flexible steel woven fabric electrodes into a poly(vinylidene fluoride) (PVDF)/aluminum-oxide-decorated reduced graphene oxide (AlO‐rGO) nanocomposite film, is reported, where AlO‐rGO acts as a nucleating agent for electroactive β‐phase formation. The HPNG exhibits reliable energy harvesting performance with high output, fast charging capability, and high durability compared with previously reported PVDF-based PNGs. The HPNG is capable of harvesting energy from a variety of easily accessible biomechanical and mechanical energy sources, such as body movements (e.g., hand folding, jogging, heel pressing, and foot striking) and machine vibration. The HPNG exhibits high output power density and energy conversion efficiency, directly lighting several commercial light‐emitting diodes of different colors instantly, and powers many portable electronic devices, such as a wrist watch, calculator, speaker, and mobile liquid crystal display (LCD) screen, through capacitor charging. More importantly, the HPNG retains its performance after long compression cycling (≈158,400 cycles), demonstrating great promise as a piezoelectric energy harvester for practical applications in harvesting biomechanical and mechanical energy for self-powered systems.

20.
The large choice of Distributed Computing Infrastructures (DCIs) available allows users to select and combine their preferred architectures from among Clusters, Grids, Clouds, Desktop Grids and more. In these hybrid DCIs, elasticity is emerging as a key property. In elastic infrastructures, the resources available to execute an application continuously vary, either because of application requirements or because of constraints on the infrastructure, such as node volatility. In the former case, there is no guarantee that the computing resources will remain available during the entire execution of an application. In this paper, we show that Bag-of-Tasks (BoT) executions on these “Best-Effort” infrastructures suffer from a drop in the task completion rate at the end of the execution. The SpeQuloS service presented in this paper improves the Quality of Service (QoS) of BoT applications executed on hybrid and elastic infrastructures. SpeQuloS monitors the execution of the BoT and dynamically supplies fast and reliable Cloud resources when the critical part of the BoT is executed. SpeQuloS offers several features to hybrid DCI users, such as estimating completion time and execution speedup. Performance evaluation shows that BoT executions can be accelerated by a factor of 2, while offloading less than 2.5% of the workload to the Cloud. We report on several scenarios where SpeQuloS is deployed on hybrid infrastructures featuring a large variety of infrastructure combinations. In the context of the European Desktop Grid Initiative (EDGI), SpeQuloS is operated to improve the QoS of Desktop Grids using resources from private Clouds. We present a use case where SpeQuloS uses both EC2 regular and spot instances to decrease the cost of computation while preserving a similar QoS level. Finally, in the last scenario, SpeQuloS optimizes the utilization of Grid5000 resources.
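The tail-completion idea can be sketched very simply; the 90% trigger and the worker-sizing rule below are illustrative assumptions of ours, not SpeQuloS internals.

```python
# Hedged sketch: once most of a Bag-of-Tasks has finished, start renting
# reliable cloud workers so the remaining tasks meet the deadline.
def cloud_workers_needed(completed, total, deadline_s, avg_task_s, trigger=0.9):
    if completed < trigger * total:
        return 0                                   # still in the "easy" part of the BoT
    remaining = total - completed
    # enough reliable workers to finish the tail before the deadline (ceil division)
    return max(1, -(-remaining * avg_task_s // deadline_s))

print(cloud_workers_needed(completed=950, total=1000, deadline_s=600, avg_task_s=120))  # -> 10
```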
