Similar documents
20 similar documents found (search time: 46 ms)
1.
Two major constraints demand greater consideration of energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems reduces energy consumption and excess heat, lowers operational costs, and improves system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, this survey focuses on the characteristics of two main power management technologies: (a) static power management (SPM) systems, which use low-power components to save energy, and (b) dynamic power management (DPM) systems, which use software and power-scalable components to optimize energy consumption. We present the current state of the art in both SPM and DPM techniques, citing representative examples. The survey concludes with a brief discussion of possible future directions for improving energy efficiency in cluster computing.
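The DPM techniques surveyed above rest on the standard CMOS dynamic-power model, under which scaling voltage and frequency down together saves energy even though the job runs longer. A minimal sketch (all constants below are hypothetical, not from the survey):

```python
# Illustrative DVFS energy model: P = C_eff * V^2 * f. The capacitance,
# voltages and frequencies are invented operating points for illustration.

def dynamic_power(c_eff, voltage, freq_hz):
    """Dynamic CMOS power: P = C_eff * V^2 * f (watts)."""
    return c_eff * voltage ** 2 * freq_hz

def energy_for_cycles(c_eff, voltage, freq_hz, cycles):
    """Energy to finish a fixed workload of `cycles` at one operating point."""
    runtime = cycles / freq_hz          # seconds
    return dynamic_power(c_eff, voltage, freq_hz) * runtime

# The same 1e9-cycle job at two hypothetical operating points.
high = energy_for_cycles(c_eff=1e-9, voltage=1.2, freq_hz=2.0e9, cycles=1e9)
low = energy_for_cycles(c_eff=1e-9, voltage=0.9, freq_hz=1.0e9, cycles=1e9)
assert low < high  # scaling V and f down saves energy despite the longer runtime
```

Because runtime grows only linearly with 1/f while power falls with V²·f, the lower operating point wins for the fixed workload, which is the lever DPM systems exploit.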

2.
Cloud computing and emerging web applications have created the need for more powerful data centers. These data centers need high-bandwidth interconnects that can sustain the heavy interaction between web, application and database servers. Data center networks based on electronic packet switches would have to consume excessive power to satisfy the communication bandwidth required by future data centers. Optical interconnects have recently gained attention as a promising energy-efficient solution offering high throughput, low latency and reduced energy consumption compared to current networks based on commodity switches. This paper compares the power consumption of several optical interconnection schemes based on arrayed waveguide grating routers (AWGRs), wavelength selective switches (WSSs) or semiconductor optical amplifiers (SOAs). Based on a thorough analysis of each architecture, it is shown that optical interconnects can achieve at least an order of magnitude higher energy efficiency than current data center networks based on electronic packet switches, and could contribute to greener IT network infrastructures.
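The headline claim above (an order of magnitude higher energy efficiency) reduces to an energy-per-bit comparison: total switch power divided by aggregate throughput. A toy sketch with invented power and throughput figures, not the paper's measured values:

```python
# Hypothetical back-of-the-envelope comparison of the kind the paper performs.
# The 10 kW / 1 kW figures and the 10 Tb/s throughput are placeholders.

def energy_per_bit(power_watts, throughput_bps):
    return power_watts / throughput_bps  # joules per bit

electronic = energy_per_bit(power_watts=10_000, throughput_bps=10e12)  # 1 nJ/bit
optical = energy_per_bit(power_watts=1_000, throughput_bps=10e12)      # 0.1 nJ/bit

assert electronic / optical > 9.99  # ~10x better in this toy example
```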

3.
Large-scale clusters based on virtualization technologies are widely used in many areas, including data centers and cloud computing environments. How to save energy when building a "green cluster" has recently become a major challenge, and previous approaches cannot meet it: local techniques save energy in the components of a single workstation without a global view of the whole cluster, while cluster-wide techniques apply only to homogeneous workstations and specific applications. This paper describes the design and implementation of a novel scheme, called Magnet, that uses live migration of virtual machines to transfer load among the nodes of a multi-layer ring-based overlay. By treating all cluster nodes as a whole on top of virtualization technologies, the scheme can greatly reduce power consumption, and it applies to both homogeneous and heterogeneous servers. Experimental measurements show that the new method can reduce power consumption by up to 74.8% over the baseline, with an adjustable and acceptable overhead. The effectiveness and performance insights are also analytically verified.
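The core idea of migration-based schemes like Magnet is consolidation: pack VM loads onto as few nodes as possible so idle nodes can be powered down. A minimal first-fit-decreasing sketch (not Magnet's actual algorithm; the loads and capacity are hypothetical):

```python
# Greedy consolidation sketch: pack normalized VM loads onto nodes of unit
# capacity; every node that ends up empty could be put to sleep.

def consolidate(vm_loads, node_capacity):
    """First-fit-decreasing packing; returns lists of VM loads per active node."""
    nodes = []
    for load in sorted(vm_loads, reverse=True):
        for node in nodes:
            if sum(node) + load <= node_capacity:
                node.append(load)   # fits on an already-active node
                break
        else:
            nodes.append([load])    # must keep one more node awake
    return nodes

active = consolidate([0.3, 0.2, 0.5, 0.4, 0.1], node_capacity=1.0)
# Five VMs fit on two nodes; the rest of the cluster could sleep.
assert len(active) == 2
```

In a live system each `append` would correspond to a VM live migration, and the scheme would also weigh migration overhead against the expected power savings.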

4.
Energy efficiency is a predominant concern for the modern ICT industry. Ever-increasing ICT innovations and services have added exponentially to energy demands, making the development of energy-efficiency mechanisms urgent. The successful and effective deployment of such mechanisms depends on the support of the underlying ICT platform, and cloud computing has gained attention as a leading approach to curbing energy consumption. This paper examines the importance of multicore processors, virtualization and consolidation techniques for achieving energy efficiency in cloud computing. It proposes the Green Cloud Scheduling Model (GCSM), which exploits the heterogeneity of tasks and resources with a scheduler unit that allocates deadline-constrained tasks only to energy-conscious nodes. GCSM makes energy-aware task allocation decisions dynamically, aiming to prevent performance degradation and achieve the desired QoS. The proposed model is evaluated and compared with two other techniques in a cloud environment set up for the purpose. The results indicate that GCSM achieves 71% energy savings and high performance in terms of deadline fulfillment.
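Deadline-constrained, energy-conscious dispatch of the kind GCSM describes can be sketched as a two-step filter: keep only the nodes that can meet the deadline, then pick the one that finishes the task with the least energy. The node model and all numbers below are invented for illustration, not taken from the paper:

```python
# Hypothetical energy-aware dispatch: among nodes that satisfy the deadline,
# choose the one with the lowest energy cost for the task.

def pick_node(task_cycles, deadline_s, nodes):
    """nodes: list of (name, speed_hz, power_w). Returns the name of the
    feasible node with the lowest task energy, or None if none is feasible."""
    best = None
    for name, speed, power in nodes:
        runtime = task_cycles / speed
        if runtime <= deadline_s:              # deadline constraint
            energy = power * runtime           # joules for this task
            if best is None or energy < best[1]:
                best = (name, energy)
    return best[0] if best else None

nodes = [("fast-hot", 3e9, 150.0), ("slow-cool", 1.5e9, 40.0)]
# Loose deadline: the efficient node wins. Tight deadline: only the fast one fits.
assert pick_node(3e9, deadline_s=5.0, nodes=nodes) == "slow-cool"
assert pick_node(3e9, deadline_s=1.5, nodes=nodes) == "fast-hot"
```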

5.
The emergence of cloud computing has made it an attractive solution for large-scale data processing and storage applications. Cloud infrastructures give users remote access to powerful computing capacity, large storage space and high network bandwidth for deploying various applications. With the support of cloud computing, many large-scale applications have been migrated to cloud infrastructures instead of running on in-house local servers. Among these applications, continuous write applications (CWAs), such as online surveillance systems, can benefit significantly from the flexibility and advantages of cloud computing. However, given characteristics such as continuous data writing and processing and a high demand for data availability, cloud service providers need sophisticated models for provisioning resources that meet CWAs' demands while minimizing the operational cost of the infrastructure. In this paper, we present a novel architecture of multiple cloud service providers (CSPs), commonly referred to as a Cloud-of-Clouds. Based on this architecture, we propose two operational-cost-aware algorithms for provisioning cloud resources for CWAs, the neighboring optimal resource provisioning algorithm (NORPA) and the global optimal resource provisioning algorithm (GORPA), in order to minimize the operational cost and thereby maximize the revenue of CSPs. We validate the proposed algorithms through comprehensive simulations, comparing them against each other and against a commonly used, practically viable round-robin approach. The results demonstrate that NORPA and GORPA outperform the conventional round-robin algorithm, reducing the operational cost by up to 28% and 57%, respectively. The low complexity of the proposed cost-aware algorithms allows them to be applied to realistic Cloud-of-Clouds environments in industry as well as academia.

6.
MapReduce offers an easy-to-use programming paradigm for processing large data sets, making it an attractive model for opportunistic compute resources. However, unlike the dedicated resources on which MapReduce has mostly been deployed, opportunistic resources have significantly higher rates of node volatility. As a consequence, the data and task replication scheme adopted by existing MapReduce implementations is woefully inadequate on such volatile resources. In this paper, we propose MOON, short for MapReduce On Opportunistic eNvironments, which is designed to offer a reliable MapReduce service for opportunistic computing. MOON adopts a hybrid resource architecture that supplements opportunistic compute resources with a small set of dedicated resources, and it extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms that take advantage of this hybrid architecture. Our results on an emulated opportunistic computing system running atop a 60-node cluster demonstrate that MOON delivers significant performance improvements over Hadoop on volatile compute resources and can even finish jobs that Hadoop is unable to complete.

7.
High-performance cloud computing is behind the scenes powering "the next big thing" as the mainstream accelerator for innovation in many areas. We describe here how to accelerate inexpensive ARM-based computing nodes with high-end GPGPUs hosted on x86_64 machines using the GVirtuS general-purpose virtualization service. We sketch a vision of next-generation computing clusters characterized by highly heterogeneous parallelism, demanding less electric power, producing less heat and being more environmentally friendly. Preliminary but promising performance data suggest that this solution could form part of the foundations of the next generation of high-performance cloud computing components.

8.
Airoldi L, Bulleri F. PLoS ONE. 2011;6(8):e22985

Background

Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works.

Methodology/Principal Findings

Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures.

Conclusions/Significance

We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic and invasive species, could be mitigated through ecologically-driven planning and management of the long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems.

9.
Goal, Scope and Background  Assessing future energy and transport systems is of major importance for providing timely information to decision makers. In the discussion of technology options, fuel cells are often portrayed as attractive options for power plants and automotive applications. However, when analysing these systems, the LCA analyst is confronted with methodological problems, particularly data gaps and the need to forecast and anticipate future developments. This series of two papers aims at providing a methodological framework for assessing future energy and transport systems (Part 1) and applies it to the two major application areas of fuel cells (Part 2). Methods  To enable the LCA of future energy and transport systems, forecasting tools such as cost estimation methods and process simulation of systems are investigated with respect to their applicability in LCAs of future systems (Part 1). The manufacturing process of an SOFC stack is used as an illustration of the forecasting procedure. In Part 2, detailed LCAs of fuel cell power plants and power trains are carried out, including fuel (hydrogen, methanol, gasoline, diesel and natural gas) and energy converter production. For comparison with competing technologies, internal combustion engines (automotive applications) and reciprocating engines, gas turbines and combined cycle plants (stationary applications) are also analysed. Results and Discussion  In principle, the investigated forecasting methods are suitable for future energy system assessment. The selection of the best method depends on factors such as the required resources, the quality of the results and flexibility. In particular, the time horizon of the investigation determines which forecasting tool may be applied. Environmentally relevant process steps exhibiting a significant time dependency should always be investigated using different independent forecasting tools to ensure the stability of the results.
The results of the LCA underline that, in general, fuel cells offer advantages in the impact categories usually dominated by pollutant emissions, such as acidification and eutrophication, whereas for global warming and primary energy demand the situation depends on a set of parameters such as the driving cycle and fuel economy ratio in mobile applications and the thermal/total efficiencies in stationary applications. For the latter impact categories, the choice of the primary energy carrier for fuel production (renewable or fossil) dominates the impact reduction. With increasing efficiency and improving emission performance of the conventional systems, the competition in both mobile and stationary applications is getting even stronger. The production of the fuel cell system is of low overall significance in stationary applications, whereas in vehicles the shorter lifetime of the vehicle leads to a much higher significance of the power train production. Recommendations and Perspectives  In the future, rapid technological and energy economic development will bring further advances for both fuel cells and conventional energy converters. Therefore, LCAs at such an early stage of market development can only be considered preliminary. It is essential to accompany the ongoing research and development with iterative LCAs, constantly pointing at environmental hot spots and bottlenecks.

10.
Converting low-grade thermal energy with a small temperature gradient into electricity is challenging due to low efficiency and high cost. Here, a new type of thermal-electric nanogenerator is reported that utilizes the electrokinetic effect for effectively harvesting thermal energy. The nanogenerator is based on an evaporation-driven water flow in a porous medium with a small temperature gradient. With a piece of porous carbon film and deionized water, a maximum open-circuit voltage of 0.89 V under a temperature difference of 4.2 °C is obtained, corresponding to a pseudo-Seebeck coefficient of 210 mV/K. The large pseudo-Seebeck coefficient gives the nanogenerator sufficient power output to directly power existing electronics. Furthermore, a wearable bracelet nanogenerator utilizing body heat is also demonstrated. The unique properties of this conversion process offer great potential for ultra-low temperature-gradient thermal energy recovery, wearable electronics, and self-powered sensor systems.
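The reported pseudo-Seebeck coefficient follows directly from the two measured quantities in the abstract: the open-circuit voltage divided by the temperature difference. A quick sanity check:

```python
# Sanity check of the reported figure: S_pseudo = V_oc / delta_T.

v_oc = 0.89       # volts (from the abstract)
delta_t = 4.2     # kelvin (from the abstract)
s_pseudo = v_oc / delta_t              # V/K
assert abs(s_pseudo * 1000 - 212) < 3  # ~210 mV/K, matching the reported value
```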

11.
Goal, Scope and Background  Assessing future energy and transport systems is of major importance for providing timely information to decision makers. In the discussion of technology options, fuel cells are often portrayed as attractive options for power plants and automotive applications. However, when analysing these systems, the LCA analyst is confronted with methodological problems, particularly data gaps and the need to anticipate future developments. This series of two papers aims at providing a methodological framework for assessing future energy and transport systems (Part 1) and applies it to the two major application areas of fuel cells (Part 2). Methods  To enable the LCA of future energy and transport systems, forecasting tools such as cost estimation methods and process simulation of systems are investigated with respect to their applicability in LCAs of future systems (Part 1). The manufacturing process of an SOFC stack is used as an illustration of the forecasting procedure. In Part 2, detailed LCAs of fuel cell power plants and power trains are carried out, including fuel (hydrogen, methanol, gasoline, diesel and natural gas) and energy converter production. For comparison with competing technologies, internal combustion engines (automotive applications) and reciprocating engines, gas turbines and combined cycle plants (stationary applications) are analysed as well. Results and Discussion  In principle, the investigated forecasting methods are suitable for future energy system assessment. The selection of the best method depends on factors such as the required resources, the quality of the results and flexibility. In particular, the time horizon of the investigation determines which forecasting tool may be applied. Environmentally relevant process steps exhibiting a significant time dependency should always be investigated using different independent forecasting tools to ensure the stability of the results.
The results of the LCA (Part 2) underline that, in principle, fuel cells offer advantages in the impact categories typically dominated by pollutant emissions, such as acidification and eutrophication, whereas for global warming and primary energy demand the situation depends on a set of parameters such as the driving cycle and fuel economy ratio in mobile applications and the thermal/total efficiencies in stationary applications. For the latter impact categories, the choice of the primary energy carrier for fuel production (renewable or fossil) dominates the impact reduction. With increasing efficiency and improving emission performance of the conventional systems, the competition regarding all impact categories in both mobile and stationary applications is getting even stronger. The production of the fuel cell system is of low overall significance in stationary applications, whereas in automotive applications the production of the fuel cell power train and the required materials leads to increased impacts compared to internal combustion engines, and thus reduces the achievable environmental impact reduction. Recommendations and Perspectives  The rapid technological and energy economic development will bring further advances for both fuel cells and conventional energy converters. Therefore, LCAs at such an early stage of market development can only be considered preliminary. It is essential to accompany the ongoing research and development with iterative LCAs, constantly pointing at environmental hot spots and bottlenecks.

12.
Taking advantage of distributed storage technology and virtualization technology, cloud storage systems provide virtual machine clients with customizable storage services. They can be divided into two types: distributed file systems and block-level storage systems. Existing block-level storage systems have two disadvantages: first, some are tightly coupled with their cloud computing environments, making them hard to extend to other cloud platforms; second, the volume server is a bottleneck that seriously affects the performance and reliability of the whole system. In this paper we present ORTHRUS, a lightweight block-level storage system for clouds based on virtualization technology. We first design an architecture with multiple volume servers and its workflows, which improves system performance and avoids the single-server bottleneck. Second, we propose a Listen-Detect-Switch mechanism for ORTHRUS to deal with contingent volume server failures. Finally, we design a strategy that dynamically balances load between the volume servers: we characterize machine capability and load with a black-box model, and implement the dynamic load balancing strategy using a genetic algorithm. Extensive experimental results show that the aggregate I/O throughput of ORTHRUS is significantly improved (approximately doubled), and that both I/O throughput and IOPS are further improved (by about 1.8 and 1.2 times, respectively) by our dynamic load balancing strategy.
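A Listen-Detect-Switch style failover can be pictured as three small steps: listen for heartbeats from volume servers, detect a heartbeat that has gone stale, and switch clients to the next live server. This is an illustrative sketch, not ORTHRUS code; the server names, timestamps, and timeout are invented:

```python
# Minimal Listen-Detect-Switch sketch over an ordered preference list of
# volume servers and a map of last-seen heartbeat timestamps.

def choose_server(servers, heartbeats, now, timeout=3.0):
    """Return the first server whose heartbeat is fresh (listen/detect),
    implicitly switching clients away from any stale server."""
    for name in servers:
        if now - heartbeats.get(name, float("-inf")) <= timeout:  # detect
            return name                                           # switch target
    raise RuntimeError("no live volume server")

beats = {"vol-a": 10.0, "vol-b": 13.5}
assert choose_server(["vol-a", "vol-b"], beats, now=12.0) == "vol-a"
# vol-a's heartbeat goes stale -> clients switch to vol-b.
assert choose_server(["vol-a", "vol-b"], beats, now=14.5) == "vol-b"
```

A real implementation would run the listen step continuously and replay in-flight writes after the switch; the sketch only shows the selection logic.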

13.
The single factor limiting the harnessing of the enormous computing power of clusters for parallel computing is the lack of appropriate software. Present cluster operating systems are not built to support parallel computing: they do not provide services to manage parallelism. The cluster operating environments used to assist the execution of parallel applications do not support both the Message Passing (MP) and Distributed Shared Memory (DSM) paradigms; they are offered only as separate components implemented at the user level as libraries and independent servers. Because of these poor operating systems, users must deal with the individual computers of a cluster rather than seeing the cluster as a single powerful computer: a Single System Image of the cluster is not offered to users. There is a need for an operating system for clusters. We claim and demonstrate that it is possible to develop a cluster operating system that efficiently manages parallelism, supports both Message Passing and DSM, and offers a Single System Image. To substantiate this claim, the first version of a cluster operating system, called GENESIS, that manages parallelism and offers a Single System Image has been developed.

14.
Management is an important challenge for future enterprises. Previous work has addressed platform management (e.g., power and thermal management) separately from virtualization management (e.g., virtual machine (VM) provisioning and application performance). Coordinating the actions taken by these different management layers is important and beneficial for reasons of performance, stability, and efficiency. Such coordination, in addition to working well with existing multi-vendor solutions, also needs to be extensible to support future management solutions potentially operating on different sensors and actuators. In response to these requirements, this paper proposes vManage, a solution that loosely couples platform and virtualization management and facilitates coordination between them in data centers. Our solution comprises registry and proxy mechanisms that provide unified monitoring and actuation across the platform and virtualization domains, and coordinators that provide policy execution for better VM placement and runtime management, including a formal approach to ensuring system stability against inefficient management actions. The solution is instantiated in a Xen environment through a platform-aware virtualization manager at a cluster management node and a virtualization-aware platform manager on each server. Experimental evaluations using enterprise benchmarks show that, compared to traditional solutions, vManage achieves additional power savings (10% lower power) with significantly improved service-level guarantees (71% fewer violations) and stability (54% fewer VM migrations), at low overhead.

15.
Virtualization technology reduces the costs of server installation, operation, and maintenance, and it can simplify the development of distributed systems. Currently there are various virtualization technologies, such as Xen, KVM, and VMware, each supporting different virtualization functions on heterogeneous platforms. It is therefore important to be able to integrate and manage these heterogeneous virtualized resources in order to develop distributed systems on top of current virtualization techniques. This paper presents an integrated management system that provides information on the usage of heterogeneous virtual resources and also controls them. The main focus of the system is to abstract various virtual resources and to reconfigure them flexibly. To this end, an integrated management system has been developed and implemented based on a libvirt-based virtualization API and the Data Distribution Service (DDS).

16.
With the advances of network function virtualization and cloud computing technologies, a number of network services are implemented across data centers by creating a service chain of different virtual network functions (VNFs) running on virtual machines. Due to the complexity of the network infrastructure, creating a service chain incurs high operational cost, especially for carrier-grade network service providers, and supporting stringent QoS requirements from users is also a complicated task. Various research efforts have addressed these problems, but each focuses on only one side of the optimization goal: either the users' (e.g., latency minimization and QoS-based optimization) or the service providers' (e.g., resource optimization and cost minimization). Efficiently meeting the requirements of both users and service providers remains a challenging issue. This paper proposes a VNF placement algorithm called VNF-EQ that allows users to meet their service latency requirements while minimizing energy consumption at the same time. The proposed algorithm is dynamic in the sense that the locations or the service chains of VNFs are reconfigured to minimize energy consumption when the traffic passing through the chain falls below a pre-defined threshold. We use a genetic algorithm to formulate this problem because it is a variation of the multi-constrained path selection problem, which is known to be NP-complete. The benchmarking results show that the proposed approach outperforms other heuristic algorithms by as much as 49% and reduces energy consumption by rearranging VNFs.
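The genetic-algorithm formulation can be pictured with a toy placement problem: a chromosome assigns each VNF in a chain to a server, and fitness penalizes both energy use and violations of the latency budget. This is not VNF-EQ itself; the fitness model, server figures, and GA parameters are all invented for illustration:

```python
# Toy GA placement sketch: 4 VNFs, 2 server types with hypothetical
# (per-hop latency ms, per-hop energy) figures, and a 50 ms latency budget.
import random

random.seed(0)
N_VNF = 4
SERVERS = [(10.0, 1.0), (20.0, 3.0)]  # (latency ms, energy units) per hop

def fitness(chrom, budget_ms=50.0):
    latency = sum(SERVERS[s][0] for s in chrom)
    energy = sum(SERVERS[s][1] for s in chrom)
    penalty = 1000.0 if latency > budget_ms else 0.0
    return energy + penalty  # lower is better

def evolve(pop_size=20, gens=30):
    pop = [[random.randrange(len(SERVERS)) for _ in range(N_VNF)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_VNF)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # mutation
                child[random.randrange(N_VNF)] = random.randrange(len(SERVERS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
assert fitness(best) < 1000.0  # a latency-feasible placement was found
```

A production formulation would add per-link latencies, server capacities, and reconfiguration cost to the fitness function; the skeleton of selection, crossover, and mutation stays the same.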

17.
FPGA-based distributed self-healing architecture for reusable systems
Creating an environment of "no doubt" for computing systems is critical for supporting next-generation science, engineering, and commercial applications. Reconfigurable devices such as Field Programmable Gate Arrays (FPGAs) give designers a seductive tool on which to base sophisticated but highly reliable platforms. Reconfigurable computing platforms can enhance reliability and recovery from catastrophic failures through partial and dynamic reconfiguration, eliminating the need for the redundant hardware resources typically used by existing fault-tolerant systems. We propose a two-level self-healing methodology that offers 100% availability for mission-critical systems with comparatively low hardware overhead and performance degradation. Our system first attempts healing at the node level; if the system cannot be rectified at the node level, network-level healing is undertaken. We have designed a system based on Xilinx Virtex-5 FPGAs and Cirronet wireless mesh nodes to demonstrate autonomous wireless healing among networked node devices. Our prototype is a proof-of-concept that demonstrates the feasibility of using FPGAs to provide maximum computational availability in a critical self-healing distributed architecture.

18.
Advanced thermoelectric technologies can drastically improve the energy efficiency of industrial infrastructures, solar cells, automobiles, aircraft, and more. When a thermoelectric device is used as a solid-state heat pump and/or as a power generator, its efficiency depends pivotally on three fundamental transport properties of materials: the thermal conductivity, electrical conductivity, and thermopower. Developing advanced thermoelectric materials is very challenging because these transport properties are interrelated. This paper reviews the physical mechanisms that have led to recent material advances; progress in both inorganic and organic materials is summarized. While the majority of contemporary effort has focused on lowering the lattice thermal conductivity, the latest development in nanocomposites suggests that properly engineered interfaces are crucial for realizing the energy filtering effect and improving the power factor. We expect that the nanocomposite approach could be the focus of future materials breakthroughs.
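The three interrelated transport properties named above combine into the standard thermoelectric figure of merit zT = S²σT/κ, which is why improving one property at the expense of the others gains nothing. A quick illustration with round numbers roughly in the range of good materials (the specific values are illustrative, not from the review):

```python
# Thermoelectric figure of merit: zT = S^2 * sigma * T / kappa, where S is
# the thermopower (Seebeck coefficient), sigma the electrical conductivity,
# kappa the thermal conductivity, and T the absolute temperature.

def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    return seebeck_v_per_k ** 2 * sigma_s_per_m * temp_k / kappa_w_per_mk

# e.g. S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.2 W/(m K), T = 300 K
zt = figure_of_merit(200e-6, 1e5, 1.2, 300.0)
assert abs(zt - 1.0) < 1e-9  # zT ~ 1, typical of a good bulk thermoelectric
```

Lowering the lattice thermal conductivity raises zT through the denominator; the energy-filtering work cited in the review instead targets the S²σ power factor in the numerator.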

19.
Moon JH, Lee JW, Lee UD. Bioresource Technology. 2011;102(20):9550-9557
An economic analysis of biomass power generation was conducted. Two key technologies, direct combustion with a steam turbine and gasification with a syngas engine, were examined. In view of the present domestic biomass infrastructure of Korea, small, distributed power generation systems ranging from 0.5 to 5 MW(e) were considered. It was found that gasification with a syngas engine becomes more economically feasible as the plant size decreases. Changes in economic feasibility with and without RPS or heat sales were also investigated, and a sensitivity analysis of each system was conducted for representative parameters. Regarding the cost of electricity generation, electrical efficiency and fuel cost significantly affect both the direct combustion and gasification systems. Regarding the internal rate of return (IRR), the heat sales price is the most important factor for obtaining a higher IRR, followed by power generation capacity and electrical efficiency.
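The IRR criterion used in the analysis is the discount rate at which a project's net present value reaches zero, and it can be found by simple bisection. A minimal sketch with invented cash flows (a hypothetical plant: 100 units invested up front, five annual net revenues of 30):

```python
# Minimal IRR solver: bisection on the net present value (NPV), which is
# monotone in the discount rate for a single up-front investment.

def npv(rate, cashflows):
    """NPV of cashflows indexed by year (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Find the rate where NPV crosses zero (NPV > 0 below the root)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rate = irr([-100, 30, 30, 30, 30, 30])
assert abs(rate - 0.1524) < 0.001  # ~15.2% internal rate of return
```

In the paper's setting, higher heat sales raise the intermediate cash flows, which directly raises the rate at which the NPV crosses zero; that is why heat sales price dominates the IRR sensitivity.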

20.
Computational simulation, and thus scientific computing, is the third pillar alongside theory and experiment in today's science. The term e-science describes a research field focused on collaboration in key areas of science using next-generation computing infrastructures (so-called e-science infrastructures) to extend the potential of scientific computing. In recent years, significant international and broadly interdisciplinary research has increasingly been carried out by global collaborations that often share a single e-science infrastructure. More recently, the increasing complexity of e-science applications that embrace multiple physical models (i.e. multi-physics) and consider a larger range of scales (i.e. multi-scale) is creating a steadily growing demand for world-wide interoperable infrastructures that allow for new, innovative types of e-science through the joint use of different kinds of e-science infrastructures. But interoperable infrastructures are still not provided seamlessly today, and we argue that this is due to the absence of a realistically implementable infrastructure reference model. The fundamental goal of this paper is therefore to provide insights into our proposed infrastructure reference model, which represents a trimmed-down version of OGSA in terms of functionality and complexity while being more specific and thus easier to implement. The proposed reference model is underpinned by experiences gained from e-science applications that achieve research advances by using interoperable e-science infrastructures.
