Similar Literature
20 similar documents retrieved.
1.
Live migration of virtual machines (VMs) enables virtual server mobility without disrupting service and is widely used for system management in virtualized data centers. However, migration costs may vary significantly across workloads due to the variety of VM configurations and workload characteristics. To account for migration overhead in migration decision-making, we investigate design methodologies for quantitatively predicting migration performance and energy consumption. We thoroughly analyze the key parameters that affect migration cost, from theory to practice. We construct application-oblivious models for cost prediction using knowledge about the workloads learned at the hypervisor (VMM) level. To the best of our knowledge, this is the first work to quantitatively estimate VM live migration cost in terms of both performance and energy. We evaluate the models using five representative workloads in a Xen virtualized environment. Experimental results show that the refined model achieves higher than 90% prediction accuracy against measured cost, and that model-guided decisions can reduce migration cost by more than 72.9% with an energy saving of 73.6%.
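As a rough illustration of the kind of application-oblivious estimate such a model produces, the sketch below implements the standard iterative pre-copy analysis, in which each round retransmits the memory dirtied during the previous round. The parameter names, the stop condition, and the linear energy coefficient are illustrative assumptions, not the paper's fitted model.

```python
def predict_precopy_migration(mem_mb, dirty_rate_mb_s, bandwidth_mb_s,
                              max_rounds=30, stop_threshold_mb=8,
                              joules_per_mb=0.5):
    """Estimate live-migration time (s), downtime (s) and energy (J) with the
    standard iterative pre-copy model: every round resends the memory that was
    dirtied while the previous round was transferring.  The energy model
    (joules_per_mb) is a placeholder linear coefficient."""
    assert bandwidth_mb_s > dirty_rate_mb_s, "migration would never converge"
    total_sent_mb, to_send_mb, elapsed_s = 0.0, float(mem_mb), 0.0
    for _ in range(max_rounds):
        round_time = to_send_mb / bandwidth_mb_s
        elapsed_s += round_time
        total_sent_mb += to_send_mb
        dirtied_mb = dirty_rate_mb_s * round_time   # pages dirtied during this round
        to_send_mb = dirtied_mb
        if dirtied_mb <= stop_threshold_mb:
            break
    downtime_s = to_send_mb / bandwidth_mb_s        # final stop-and-copy round
    energy_j = joules_per_mb * (total_sent_mb + to_send_mb)
    return elapsed_s + downtime_s, downtime_s, energy_j

# Example: 4 GB VM, 80 MB/s page dirty rate, ~120 MB/s usable migration bandwidth.
print(predict_precopy_migration(4096, 80, 120))
```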

2.
This paper proposes solutions for monitoring and balancing the load of a cloud data center. The proposed solutions work in two phases, and graph-theoretic concepts are applied in both. In the first phase, the cloud data center is modeled as a network graph, which is augmented with the minimum dominating set concept of graph theory for monitoring its load. For constructing the minimum dominating set, the paper proposes a new variant of the minimum dominating set algorithm (V-MDS) and compares it with the existing construction algorithms of Rooji and Fomin. The V-MDS approach to querying cloud data center load information is compared with a central-monitor approach. The second phase focuses on system- and network-aware live virtual machine migration for load balancing the cloud data center. For this, a new system- and traffic-aware live VM migration for load balancing (ST-LVM-LB) algorithm is proposed and compared with the existing benchmark algorithms, the dynamic management algorithm (DMA) and Sandpiper. The CloudSim 3.0.3 simulator is used to study the performance of the proposed algorithms. Experimental results show that the V-MDS algorithm has quadratic time complexity, whereas the Rooji and Fomin algorithms have exponential time complexity, and that the V-MDS querying approach halves the number of message updates compared with the central-monitor approach. For load balancing, the developed ST-LVM-LB algorithm triggers fewer virtual machine migrations and incurs lower migration time and cost with minimal network overhead. The proposed algorithms thus improve the service delivery performance of the cloud data center by incorporating graph-theoretic solutions for monitoring and balancing its load.
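The V-MDS construction itself is not reproduced in the abstract; the sketch below shows only the generic greedy dominating-set heuristic on an adjacency-list graph, as a baseline for the idea of polling load information from a small set of monitor nodes. The toy topology and node names are invented for illustration.

```python
def greedy_dominating_set(adj):
    """Greedy heuristic for a (small, not necessarily minimum) dominating set.
    adj: dict mapping each node to the set of its neighbours.
    Repeatedly picks the node covering the most still-uncovered nodes."""
    uncovered = set(adj)
    dominators = set()
    while uncovered:
        # a node covers itself plus its neighbours; count only uncovered ones
        best = max(adj, key=lambda v: len((adj[v] | {v}) & uncovered))
        dominators.add(best)
        uncovered -= adj[best] | {best}
    return dominators

# Toy data-center topology: racks connected through two aggregation switches.
topology = {
    "agg1": {"r1", "r2", "r3"}, "agg2": {"r3", "r4", "r5"},
    "r1": {"agg1"}, "r2": {"agg1"}, "r3": {"agg1", "agg2"},
    "r4": {"agg2"}, "r5": {"agg2"},
}
print(greedy_dominating_set(topology))  # only these nodes need to be queried for load
```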

3.
Live VM (virtual machine) migration is a hot topic in green cloud computing. The problem divides into two research aspects: the live migration mechanism and the live migration policy. With the development of energy-aware computing, we focus on the VM placement selection for live migration, i.e., an energy-saving live VM migration policy. This paper presents a novel heuristic approach, PS-ES, whose main idea has two parts. First, it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to obtain an improved PSO-based approach with better global search ability. Second, it applies probability theory and mathematical statistics, again together with the SA idea, to the data produced by the improved PSO-based process to derive the final solution. The whole approach thus achieves long-term optimization for energy saving, since it considers not only the current problem scenario but also future ones. Experimental results demonstrate that PS-ES clearly reduces the total incremental energy consumption and better preserves the performance of running and migrating VMs compared with random migration and optimal migration. The proposed PS-ES approach therefore makes the outcome of live VM migration events more effective and valuable.
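The following toy sketch conveys only the general flavour of a PSO/SA hybrid for VM placement: particles explore discrete placements by copying entries from personal and global bests, and worse moves are occasionally accepted with a simulated-annealing probability. The energy model, move probabilities, and cooling schedule are all invented and are not the PS-ES algorithm itself.

```python
import math, random

def energy(placement, vm_load, host_cap):
    """Toy energy model: each active host costs 100 W plus 50 W scaled by its
    utilisation; overloaded hosts incur a large penalty (assumption)."""
    hosts = {}
    for vm, h in enumerate(placement):
        hosts[h] = hosts.get(h, 0.0) + vm_load[vm]
    e = 0.0
    for h, load in hosts.items():
        util = load / host_cap[h]
        e += 100 + 50 * util + (1e4 if util > 1.0 else 0)
    return e

def pso_sa_toy(vm_load, host_cap, particles=20, iters=200, t0=50.0, cool=0.97):
    n_vm, n_host = len(vm_load), len(host_cap)
    swarm = [[random.randrange(n_host) for _ in range(n_vm)] for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=lambda p: energy(p, vm_load, host_cap))[:]
    temp = t0
    for _ in range(iters):
        for i, pos in enumerate(swarm):
            cand = pos[:]
            for v in range(n_vm):              # PSO-like attraction toward the bests
                r = random.random()
                if r < 0.4:   cand[v] = pbest[i][v]
                elif r < 0.7: cand[v] = gbest[v]
                elif r < 0.8: cand[v] = random.randrange(n_host)
            d = energy(cand, vm_load, host_cap) - energy(pos, vm_load, host_cap)
            if d < 0 or random.random() < math.exp(-d / temp):   # SA acceptance
                swarm[i] = cand
            if energy(swarm[i], vm_load, host_cap) < energy(pbest[i], vm_load, host_cap):
                pbest[i] = swarm[i][:]
        gbest = min(pbest + [gbest], key=lambda p: energy(p, vm_load, host_cap))[:]
        temp *= cool
    return gbest, energy(gbest, vm_load, host_cap)

random.seed(1)
print(pso_sa_toy(vm_load=[0.3, 0.5, 0.2, 0.6, 0.4], host_cap=[1.0, 1.0, 1.0]))
```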

4.
Virtual machine (VM) migration can improve the availability, manageability, performance, and fault tolerance of systems. Current migration research mainly focuses on improving efficiency through shared storage, priority-based policies, and similar techniques, but the side effects of migration are not well studied. In fact, once physical servers are overloaded by a distributed denial-of-service (DDoS) attack, a hasty migration operation not only fails to alleviate the harm of the attack but actually amplifies it. In this paper, a novel DDoS attack, the Cloud-Droplet-Freezing (CDF) attack, is described based on the characteristics of cloud computing clusters. Our experiments show that such an attack can congest the internal network communication of a cloud server cluster while consuming the resources of physical servers. Based on the analysis of the CDF attack, we highlight a method for evaluating potential threats hidden behind normal VM migration and analyze the flaws of existing intrusion detection/prevention systems in defending against the CDF attack.

5.
Live virtual machine migration can have a major impact on how a cloud system performs, as it consumes significant amounts of network resources such as bandwidth. Migration increases the consumption of network resources, which leads to longer migration times and ultimately degrades the performance of the cloud computing system. Most industrial approaches use ad-hoc manual policies to migrate virtual machines. In this paper, we propose an autonomous, network-aware live migration strategy that observes the current network demand level and takes appropriate actions based on what it is experiencing. Reinforcement learning acts as a decision support system, enabling an agent to learn optimal scheduling times for live migration while analysing current network traffic demand. We demonstrate that an autonomous agent can learn to utilise available resources when peak loads saturate the cloud network.
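A minimal tabular Q-learning sketch of this kind of decision support is shown below: states are discretised network-utilisation levels and the actions are "defer" or "migrate now". The state space, reward shape, and hyperparameters are assumptions for illustration and do not reproduce the paper's formulation.

```python
import random

# Tabular Q-learning sketch: states = discretised network-utilisation buckets,
# actions: 0 = defer migration, 1 = migrate now.  The reward penalises migrating
# into a congested network and mildly penalises waiting (all values assumed).
STATES, ACTIONS = 5, 2                  # buckets 0 (idle) .. 4 (saturated)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(STATES)]

def reward(state, action):
    if action == 1:                     # migrate now
        return 10 - 6 * state           # cheap when idle, costly when saturated
    return -1                           # small cost for deferring

def next_state(state):                  # network load drifts randomly
    return max(0, min(STATES - 1, state + random.choice([-1, 0, 1])))

random.seed(0)
state = random.randrange(STATES)
for _ in range(20000):
    action = random.randrange(ACTIONS) if random.random() < EPSILON \
             else max(range(ACTIONS), key=lambda a: Q[state][a])
    r, s2 = reward(state, action), next_state(state)
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[state][action])
    state = s2

for s, row in enumerate(Q):
    print(f"utilisation bucket {s}: best action = {'migrate' if row[1] > row[0] else 'defer'}")
```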

6.
In high-performance computing (HPC), extensive experiments are frequently executed, and HPC resources (e.g., computing machines and switches) should be able to handle several experiments running in parallel. HPC typically exploits parallelism in programs, processing, and data, but the underlying network is the one HPC component that is not parallelized (i.e., there is no dynamic virtual slicing based on HPC jobs). In this paper we present an approach that uses software-defined networking (SDN) to partition the HPC cluster network among the different running experiments. We accomplish this through two major components: a passive module (network mapper/remapper) that selects the least-busy network resources for each experiment as soon as it starts, and an SDN-HPC active load balancer that performs more complex and intelligent operations. The active load balancer can logically divide the network based on the experiments' host files. The goal is to reduce traffic to unnecessary hosts or ports: an HPC experiment should multicast only to the cluster nodes it uses rather than broadcast. We use the virtual tenant network modules of the OpenDaylight controller to create VLANs based on HPC experiments, as sketched below. On each HPC host, virtual interfaces are created to isolate the traffic of the different experiments, and traffic between physical hosts belonging to the same experiment is distinguished by the VLAN ID assigned to that experiment. We evaluate the new approach using several public HPC benchmarks. Results show a significant improvement in experiment performance, especially when the cluster runs several heavy-load experiments simultaneously, and show that this multicasting approach can significantly reduce the casting overhead caused by using a single cast domain for all resources in the HPC cluster. Compared with InfiniBand networks, which offer low-latency, high-bandwidth interconnect services, SDN-based HPC services can provide two distinct benefits that may not be possible with InfiniBand: first, integration of HPC with enterprise Ethernet networks, which expands HPC usage to much wider domains; and second, the ability for users and their applications to customize HPC services with different QoS requirements that fit their needs and optimize the usage of HPC clusters.
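The sketch below only derives a per-experiment VLAN and virtual-interface plan from MPI-style host files, which is the mapping step described above. The host-file contents, the VLAN base ID, and the sub-interface naming are assumptions, and pushing such a plan to the OpenDaylight VTN REST API is deliberately not shown.

```python
# Derive a per-experiment VLAN/virtual-interface plan from MPI-style host files,
# one entry per running HPC experiment.  Experiment names, host-file lines, the
# VLAN base ID (100) and the eth0.<vlan> naming are illustrative assumptions.
HOST_FILES = {                      # experiment name -> host-file lines
    "lammps_run":   ["node01 slots=16", "node02 slots=16"],
    "wrf_forecast": ["node02 slots=8", "node03 slots=8", "node04 slots=8"],
}
VLAN_BASE = 100

def build_vlan_plan(host_files, vlan_base=VLAN_BASE):
    plan = {}
    for idx, (experiment, lines) in enumerate(sorted(host_files.items())):
        vlan_id = vlan_base + idx
        hosts = [line.split()[0] for line in lines if line.strip()]
        plan[experiment] = {
            "vlan_id": vlan_id,
            # one tagged sub-interface per member host isolates the experiment's traffic
            "interfaces": {h: f"eth0.{vlan_id}" for h in hosts},
        }
    return plan

for exp, cfg in build_vlan_plan(HOST_FILES).items():
    print(exp, cfg)
```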

7.
The proliferation of cloud data center applications and network function virtualization (NFV) injects dynamic, QoS-dependent traffic into the data center network. Many current routing protocols are requirement-agnostic, while QoS-aware protocols tend to be computationally complex and inefficient for small flows. In this paper, a computationally efficient congestion avoidance scheme for software-defined cloud data centers, called CECT, is proposed. CECT not only minimizes network congestion but also reallocates resources based on flow requirements. To this end, we use a routing architecture that reconfigures network resources when triggered by one of two events: (1) the elapse of a predefined time interval, or (2) the occurrence of congestion. Moreover, a forwarding-table entry compression technique is used to reduce the computational complexity of CECT. We mathematically formulate an optimization problem and design a genetic algorithm to solve it. We test the proposed algorithm on real-world network traffic; the results show that CECT is computationally fast and its solution is feasible in all cases. To evaluate throughput, CECT is compared with ECMP (using the shortest-path algorithm as the cost function). Simulation results confirm that CECT improves throughput by up to 3× over ECMP while reducing packet loss by up to 2×.

8.
Cloud federation enables cloud service providers (CSPs) to collaborate with other CSPs to serve users' resource requests, which can be prohibitively high for any single CSP during peak time. To entice different CSPs to participate in a federation, it is necessary to maximize the profit of all CSPs involved. Federation also allows overloaded CSPs to distribute their load among underloaded member CSPs by migrating virtual machines (VMs), which increases the reliability and availability of cloud services when faults occur in CSP data centers. It is therefore important for CSPs to form federations in such a way that the VM migration cost between CSPs of the same federation is minimized while the profit of the federated CSPs is maximized. In this paper, we model the problem of forming federations among CSPs as a hedonic coalition game with a utility function that depends on profit and migration cost, with the objective of maximizing the former and minimizing the latter. We propose an algorithm to solve this hedonic game and compare its performance with other existing game-theory-based cloud federation formation mechanisms.
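The sketch below illustrates the hedonic-game intuition only: a CSP's utility in a coalition is its share of the coalition's profit minus its migration cost to the other members, and a CSP switches coalitions if that strictly improves its utility. The equal profit split, the synergy bonus, the cost matrix, and all numbers are assumptions, not the paper's utility function.

```python
# Hedonic-coalition sketch with invented values.
PROFIT = {"A": 10.0, "B": 8.0, "C": 12.0}                  # stand-alone profit of each CSP
MIG_COST = {("A", "B"): 1.0, ("A", "C"): 4.0, ("B", "C"): 1.5}

def pair_cost(x, y):
    return MIG_COST.get((x, y), MIG_COST.get((y, x), 0.0))

def utility(csp, coalition):
    """Equal share of pooled profit (with an assumed 20% federation synergy bonus)
    minus the migration cost to every other coalition member."""
    pooled = sum(PROFIT[m] for m in coalition) * 1.2
    return pooled / len(coalition) - sum(pair_cost(csp, m) for m in coalition if m != csp)

def prefers_to_switch(csp, current, target):
    """Hedonic switch rule: move only if strictly better off in the target coalition."""
    return utility(csp, target | {csp}) > utility(csp, current)

print(utility("A", {"A", "B"}))                   # A's utility when federated with B
print(prefers_to_switch("A", {"A", "B"}, {"C"}))  # would A rather leave B and join C?
```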

9.
Large cluster-based cloud computing platforms increasingly use commodity Ethernet technologies, such as Gigabit Ethernet, 10GigE, and Fibre Channel over Ethernet (FCoE), for intra-cluster communication. Traffic congestion can become a performance concern in the Ethernet due to the consolidation of data, storage, and control traffic over a common layer-2 fabric, as well as the consolidation of multiple virtual machines (VMs) on less physical hardware. Even as networking vendors race to develop switch-level hardware support for congestion management, we make the case that virtualization has opened up a complementary set of opportunities to reduce or even eliminate network congestion in cloud computing clusters. We present the design, implementation, and evaluation of a system called XCo that performs explicit coordination of network transmissions over a shared Ethernet fabric to proactively prevent network congestion. XCo is a software-only distributed solution executing only in the end-nodes: a central controller uses explicit permissions to temporally separate (at millisecond granularity) the transmissions of competing senders through congested links. XCo is fully transparent to applications, presently deployable, and independent of any switch-level hardware support. We present a detailed evaluation of our XCo prototype across a number of network congestion scenarios and demonstrate that XCo significantly improves network performance during periods of congestion. We also evaluate the behavior of XCo for large topologies using NS3 simulations.
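XCo's actual protocol is not detailed in the abstract; the sketch below shows only the basic idea of a central controller handing out time-slotted transmission permissions so that senders sharing the same congested link never transmit simultaneously. The link map, slot length, and round-robin policy are assumptions.

```python
from collections import defaultdict

SLOT_MS = 5                                       # assumed millisecond-scale slot length
SENDER_LINK = {"vm1": "uplink-A", "vm2": "uplink-A", "vm3": "uplink-B", "vm4": "uplink-A"}

def schedule_slots(sender_link, n_slots):
    """Grant one transmission permission per congested link per slot, round-robin."""
    by_link = defaultdict(list)
    for sender, link in sorted(sender_link.items()):
        by_link[link].append(sender)
    schedule = []
    for slot in range(n_slots):
        grants = {link: senders[slot % len(senders)] for link, senders in by_link.items()}
        schedule.append((slot * SLOT_MS, grants))
    return schedule

for start_ms, grants in schedule_slots(SENDER_LINK, 4):
    print(f"t={start_ms:3d}ms grant:", grants)
```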

10.
Alharbi, Fares; Tian, Yu-Chu; Tang, Maolin; Ferdaus, Md Hasanul; Zhang, Wei-Zhe; Yu, Zu-Guo. Cluster Computing, 2021, 24(2): 1255-1275
Enterprise cloud data centers consume a tremendous amount of energy due to the large number of physical machines (PMs). These PMs host a huge number of virtual machines (VMs), ...

11.
Aghasi, Ali; Jamshidi, Kamal; Bohlooli, Ali. Cluster Computing, 2022, 25(2): 1015-1033

The remarkable growth of cloud computing applications has caused many data centers to encounter unprecedented power consumption and heat generation. Cloud providers share their computational infrastructure through virtualization technology, and the scheduler component decides which physical machine hosts each requested virtual machine. This process, virtual machine placement (VMP), affects the power distribution and thereby the energy consumption of the data center. Due to the heterogeneity and multidimensionality of resources, this task is not trivial, and many studies have tried to address the problem using different methods. However, the majority of such studies fail to consider cooling energy, which accounts for almost 30% of the energy consumption of a data center. In this paper, we propose a metaheuristic approach based on the binary version of the gravitational search algorithm to simultaneously minimize the computational and cooling energy in the VMP problem. In addition, we suggest a self-adaptive mechanism based on fuzzy logic to control the algorithm's behavior in terms of exploitation and exploration. Simulation results show that the proposed algorithm reduced energy consumption by 26% on the PlanetLab dataset and 30% on the Google cluster dataset relative to the average of the compared algorithms, and that it provides much more thermally reliable operation.


12.
Energy-efficient virtual machine (VM) consolidation in modern data centers is typically optimized using methods such as mixed integer programming, which require precise model inputs. Unfortunately, many parameters are uncertain or very difficult to predict precisely in the real world, so a once-calculated solution may be highly infeasible in practice. In this paper, we use methods from robust optimization theory to quantify the impact of uncertainty in modern data centers. We study how uncertainty in different parameters, such as VM resource demands, migration-related overhead, and the power consumption model of the servers used, affects energy efficiency and overbooking ratios. We also show that setting aside additional resources to cope with workload uncertainty influences the overbooking ratio of the servers and the energy consumption. Using our model, cloud operators can calculate a more robust migration schedule at the cost of higher total energy consumption, whereas a more risk-tolerant operator may choose a more opportunistic schedule with lower energy consumption but a higher risk of SLA violations.
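The basic trade-off can be illustrated with a much simpler device than the paper's robust formulation: inflating each VM's nominal demand by an uncertainty buffer before packing reduces the achievable overbooking ratio but protects against demand spikes. The demands, buffer sizes, and first-fit packing below are assumptions.

```python
def servers_needed(demands, capacity, buffer_frac):
    """First-fit packing of VM demands inflated by an uncertainty buffer."""
    loads = []
    for d in demands:
        d_robust = d * (1.0 + buffer_frac)          # reserve headroom for uncertainty
        for i, used in enumerate(loads):
            if used + d_robust <= capacity:
                loads[i] += d_robust
                break
        else:
            loads.append(d_robust)                   # open a new server
    return len(loads)

nominal_demands = [0.20, 0.15, 0.30, 0.25, 0.10, 0.35, 0.20, 0.15]  # fraction of one server
for buf in (0.0, 0.2, 0.5):
    n = servers_needed(nominal_demands, capacity=1.0, buffer_frac=buf)
    print(f"buffer={buf:.0%}: {n} servers, "
          f"overbooking ratio={sum(nominal_demands)/n:.2f} nominal-load/server")
```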

13.
Workload hotspot detection is a key component of virtual machine (VM) management in virtualized environments. One of its challenges is how to effectively collect the resource usage of VMs; another is that, since data centers usually have hundreds or even thousands of nodes, hotspot detection must handle a large amount of monitoring data. In this paper, we address these two challenges. We first present a novel approach to VM memory monitoring that collects memory usage data by walking the page tables of VMs and checking the present bit of each page table entry. Second, we present a MapReduce-based approach to efficiently analyze large amounts of resource usage data from VMs and nodes; leveraging the parallelism and robustness of MapReduce significantly accelerates hotspot detection. Extensive simulations show that our approach achieves effective estimation of memory usage with low overhead and can quickly detect workload hotspots.
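The shape of such a MapReduce analysis is sketched below with Python built-ins standing in for a real MapReduce runtime: map each usage sample to its node key, reduce per-node samples to an average, and flag nodes above a threshold. The record format and the 85% threshold are assumptions.

```python
from collections import defaultdict

SAMPLES = [                      # (node, vm, node memory utilisation at sample time) - invented
    ("node1", "vm-a", 0.91), ("node1", "vm-b", 0.88),
    ("node2", "vm-c", 0.42), ("node2", "vm-d", 0.47),
    ("node3", "vm-e", 0.95),
]

def map_phase(record):
    node, _vm, util = record
    yield node, util

def reduce_phase(node, utils, threshold=0.85):
    avg = sum(utils) / len(utils)
    return node, avg, avg >= threshold

grouped = defaultdict(list)
for record in SAMPLES:                       # shuffle/group step done in-process here
    for key, value in map_phase(record):
        grouped[key].append(value)

for node in sorted(grouped):
    name, avg, hot = reduce_phase(node, grouped[node])
    print(f"{name}: avg util={avg:.2f} {'HOTSPOT' if hot else ''}")
```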

14.
With advances in network function virtualization and cloud computing technologies, many network services are implemented across data centers by creating service chains from different virtual network functions (VNFs) running on virtual machines. Due to the complexity of the network infrastructure, creating a service chain incurs high operational cost, especially for carrier-grade network service providers, and supporting stringent QoS requirements from users is also a complicated task. Various research efforts address these problems but focus on only one side of the optimization goal: either the users' side, such as latency minimization and QoS-based optimization, or the service providers' side, such as resource optimization and cost minimization. Meeting the requirements of both users and service providers efficiently remains a challenging issue. This paper proposes a VNF placement algorithm called VNF-EQ that allows users to meet their service latency requirements while minimizing energy consumption at the same time. The proposed algorithm is dynamic in the sense that the locations or service chains of VNFs are reconfigured to minimize energy consumption when the traffic passing through a chain falls below a predefined threshold. We use a genetic algorithm to address this problem because it is a variation of the multi-constrained path selection problem, which is known to be NP-complete. Benchmarking results show that the proposed approach outperforms other heuristic algorithms by as much as 49% and reduces energy consumption by rearranging VNFs.
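The genetic-algorithm details are not given in the abstract; the sketch below shows only one plausible fitness evaluation for a candidate chain placement, combining an energy term with a heavy penalty when the chain's end-to-end latency exceeds the user's requirement. The latency matrix, power figures, and penalty weight are invented for illustration.

```python
LINK_LATENCY_MS = {                      # symmetric host-to-host latency (assumed)
    ("h1", "h2"): 2.0, ("h1", "h3"): 6.0, ("h2", "h3"): 3.0,
}
HOST_ACTIVE_POWER_W = {"h1": 120.0, "h2": 150.0, "h3": 110.0}

def latency(a, b):
    return 0.0 if a == b else LINK_LATENCY_MS.get((a, b), LINK_LATENCY_MS.get((b, a)))

def fitness(placement, latency_req_ms, penalty=1e4):
    """placement: ordered list of hosts, one per VNF in the service chain.
    Lower fitness is better: active-host power plus a latency-violation penalty."""
    chain_latency = sum(latency(a, b) for a, b in zip(placement, placement[1:]))
    energy = sum(HOST_ACTIVE_POWER_W[h] for h in set(placement))  # each active host counted once
    return energy + (penalty if chain_latency > latency_req_ms else 0.0), chain_latency

# Candidate chromosomes for a 3-VNF chain (e.g. firewall -> IDS -> NAT), 5 ms latency budget.
for candidate in (["h1", "h1", "h2"], ["h1", "h3", "h2"]):
    print(candidate, fitness(candidate, latency_req_ms=5.0))
```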

15.
Memory CD8+ T cells are an important component of the adaptive immune response against many infections, and understanding how Ag-specific memory CD8+ T cells are generated and maintained is crucial for the development of vaccines. We recently reported the existence of memory-phenotype, Ag-specific CD8+ T cells in unimmunized mice (virtual memory or VM cells). However, it was not clear when and where these cells are generated during normal development, nor the factors required for their production and maintenance. This issue is especially pertinent given recent data showing that memory-like CD8 T cells can be generated in the thymus, in a bystander response to IL-4. In this study, we show that the size of the VM population is reduced in IL-4R-deficient animals. However, the VM population appears first in the periphery and not the thymus of normal animals, suggesting this role of IL-4 is manifest following thymic egress. We also show that the VM pool is durable, showing basal proliferation and long-term maintenance in normal animals, and also being retained during responses to unrelated infection.

16.
With the rapid development of IT technology, cloud computing has come to be regarded as the next-generation computing infrastructure. One essential part of cloud computing is virtual machine technology, which reduces data center cost through better resource utilization. In particular, virtual desktop infrastructure (VDI) is receiving explosive attention from IT markets because of its advantages of easier software management, greater data protection, and lower cost. However, sharing physical resources in VDI to consolidate multiple guest virtual machines (VMs) on a host involves a tradeoff that can lead to significant I/O degradation. Optimizing I/O virtualization overhead is a challenging task because it requires scrutinizing the multiple software layers between the guest VMs and the host on which they execute. In this paper, we present a hypervisor-level cache, called hyperCache, which provides a shortcut in KVM/QEMU. It intercepts I/O requests in the hypervisor and analyses their access patterns to select data with high access frequency, and it maintains an appropriate cache memory size by utilizing a cache block map. Our experimental results demonstrate that our method improves I/O bandwidth by up to 4.7× over the existing QEMU.
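hyperCache's internals are not given in the abstract; the toy model below captures only the described behaviour at a high level: count accesses per block, admit a block into the cache only after its access frequency crosses a threshold, and bound the cache with a block map of fixed size. The admission threshold and the LRU eviction are assumptions, not hyperCache's actual policy.

```python
from collections import OrderedDict, defaultdict

class FrequencyGatedCache:
    """Toy hypervisor-level block cache: a block is admitted only after it has been
    requested `admit_after` times, and the block map is bounded, evicting the least
    recently used entry.  Thresholds and LRU eviction are assumptions."""
    def __init__(self, capacity_blocks=4, admit_after=2):
        self.capacity = capacity_blocks
        self.admit_after = admit_after
        self.freq = defaultdict(int)          # access counts, including uncached blocks
        self.block_map = OrderedDict()        # block id -> cached data

    def read(self, block_id, backend_read):
        self.freq[block_id] += 1
        if block_id in self.block_map:        # cache hit: shortcut, skip the backend
            self.block_map.move_to_end(block_id)
            return self.block_map[block_id]
        data = backend_read(block_id)         # cache miss: go to the virtual disk
        if self.freq[block_id] >= self.admit_after:
            if len(self.block_map) >= self.capacity:
                self.block_map.popitem(last=False)
            self.block_map[block_id] = data
        return data

cache = FrequencyGatedCache()
disk = lambda b: f"<data of block {b}>"       # stand-in for a disk-image read
for b in [1, 2, 1, 3, 1, 2, 2]:
    cache.read(b, disk)
print("cached blocks:", list(cache.block_map))   # only the hot blocks were admitted
```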

17.
Following activation within secondary lymphoid tissue, CD8 T cells must migrate to targets, such as infected self tissue, allografts, and tumors, to mediate contact-dependent effector functions. To test whether the pattern of migration of activated CD8 T cells was dependent on the site of Ag encounter, we examined the distribution of mouse Ag-specific CD8 T cells following local challenges. Our findings indicated that activated CD8 T cells migrated pervasively to all nonlymphoid organs irrespective of the site of initial Ag engagement. Using an adoptive transfer system, migration of nonlymphoid memory cells was also examined. Although some limited preference for the tissue of origin was noted, transferred CD8 memory T cells from various nonlymphoid tissues migrated promiscuously, except to the intestinal mucosa, supporting the concept that distinct memory pools may exist. However, regardless of the tissue of origin, reactivation of transferred memory cells resulted in widespread dissemination of new effector cells. These data indicated that recently activated primary or memory CD8 T cells were transiently endowed with the ability to traffic to all nonlymphoid organs, while memory cell trafficking was more restricted. These observations will help refine our understanding of effector and memory CD8 T cell migration patterns.

18.
Modelling dispersal is a fundamental step in the design of population viability analyses. Here, we address the question of the generalisation of population viability analysis models across landscapes by comparing dispersal between two metapopulations of the bog fritillary butterfly (Proclossiana eunomia) living in similar, highly fragmented landscapes (<1% of suitable habitat in 9 km²). Differences in dispersal patterns were investigated using the virtual migration (VM) model, which was parameterised with capture–mark–recapture data collected over several years in both landscapes. The VM model allows the estimation of six parameters describing dispersal and mortality, as well as the simulation of dispersal in the landscapes. The model revealed large differences in the VM parameter estimates between the two landscapes, and consequently simulations indicated differential rates of emigration and dispersal mortality. Furthermore, results from crossed simulations, i.e. simulations performed in one landscape using parameter estimates from the other, emphasize that dispersal parameters are very specific to each metapopulation and its landscape. Hence, we urge conservation biologists to be cautious with such parameter generalisations, even for the same species in comparable landscapes.

19.
Data centers are the backbone of the cloud infrastructure that supports large-scale data processing and storage, and more and more business-to-consumer and enterprise applications are based on cloud data centers. However, data center energy consumption inevitably leads to high operating costs. The aim of this paper is to comprehensively reduce the energy consumption of cloud data center servers, network, and cooling systems. We first build an energy-efficient cloud data center system, including its architecture and its job and power consumption models. Then, we combine linear regression and wavelet neural network techniques into a prediction method, which we call MLWNN, to forecast the short-term workload of the cloud data center. Third, we propose a heuristic energy-efficient job scheduling solution with workload prediction, which is divided into a resource management strategy and an online energy-efficient job scheduling algorithm. Extensive simulation results clearly demonstrate that the proposed solution performs well and is well suited to low-workload cloud data centers.
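The full MLWNN combination cannot be reproduced from the abstract; the sketch below shows only the linear-regression half of such a short-term forecaster, fitting a least-squares trend over a sliding window of recent samples and extrapolating one step ahead. The synthetic trace and window length are assumptions, and the wavelet-neural-network component is omitted.

```python
def ols_forecast(history, window=6):
    """Fit y = a*t + b on the last `window` samples and predict the next value."""
    y = history[-window:]
    n = len(y)
    t = list(range(n))
    t_mean, y_mean = sum(t) / n, sum(y) / n
    slope = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y)) / \
            sum((ti - t_mean) ** 2 for ti in t)
    intercept = y_mean - slope * t_mean
    return slope * n + intercept            # prediction for the next time step

# Synthetic workload trace (requests/s, scaled) with a rising trend.
trace = [40, 42, 41, 45, 47, 50, 52, 55]
predicted = ols_forecast(trace)
print(f"predicted next-interval workload: {predicted:.1f}")
# A scheduler could power on extra servers only if `predicted` exceeds the
# capacity of the currently active ones.
```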

20.
Cloud services are on-demand services provided to end-users over the Internet and hosted by cloud service providers. A cloud service consists of a set of interacting applications/processes running on one or more interconnected VMs. Organizations are increasingly using cloud services as a cost-effective means of outsourcing their IT departments. However, cloud service availability is not guaranteed by cloud service providers, especially in the event of anomalous circumstances that spontaneously disrupt availability, including natural disasters, power failures, and cybersecurity attacks. In this paper, we propose a framework for developing intelligent systems that can monitor and migrate cloud services to maximize their availability in case of cloud disruption. The framework connects an autonomic computing agent to the cloud to automatically migrate cloud services based on anticipated cloud disruption. The autonomic agent employs a modular design to facilitate the incorporation of different techniques for deciding when to migrate cloud services, which cloud services to migrate, and where to migrate them. We incorporate a virtual machine selection algorithm for deciding which cloud services to migrate that maximizes the availability of high-priority services during migration under time and network bandwidth constraints. We implemented the framework and conducted experiments to evaluate the performance of the underlying techniques. Based on the experiments, the use of this framework results in less downtime due to migration, thereby reducing cloud service disruption.
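The paper's exact selection algorithm is not given in the abstract; the sketch below is a greedy, knapsack-style stand-in: order VMs by priority value per gigabyte to transfer and select while the total data fits within the time-and-bandwidth budget. VM sizes, priorities, and the budget are invented.

```python
VMS = [                                   # (name, priority value, memory+disk to copy in GB) - invented
    ("billing-db", 10, 40), ("web-frontend", 6, 8),
    ("batch-report", 2, 30), ("auth-service", 9, 12),
]

def select_vms(vms, window_s, bandwidth_gb_s):
    """Greedily pick the services giving the most priority value per GB transferred,
    subject to the total data fitting in window_s * bandwidth_gb_s."""
    budget_gb = window_s * bandwidth_gb_s
    chosen, used = [], 0.0
    for name, priority, size in sorted(vms, key=lambda v: v[1] / v[2], reverse=True):
        if used + size <= budget_gb:
            chosen.append(name)
            used += size
    return chosen, used, budget_gb

# Example: 300 s of warning before the disruption, 0.2 GB/s of usable migration bandwidth.
print(select_vms(VMS, window_s=300, bandwidth_gb_s=0.2))
```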
