Similar Articles
20 similar articles found (search time: 15 ms)
1.
Alharbi, Fares; Tian, Yu-Chu; Tang, Maolin; Ferdaus, Md Hasanul; Zhang, Wei-Zhe; Yu, Zu-Guo. Cluster Computing, 2021, 24(2): 1255–1275.
Cluster Computing - Enterprise cloud data centers consume a tremendous amount of energy due to the large number of physical machines (PMs). These PMs host a huge number of virtual machines (VMs),...

2.
Increasing power consumption of IT infrastructures and growing electricity prices have led to the development of several energy-saving techniques in the last couple of years. Virtualization and consolidation of services is one of the key technologies in data centers to reduce overprovisioning and therefore increase energy savings. This paper shows that the energy-optimal allocation of virtualized services in a heterogeneous server infrastructure is NP-hard and can be modeled as a variant of the multidimensional vector packing problem. Furthermore, it proposes a model to predict the performance degradation of a service when it is consolidated with other services. The model allows considering the tradeoff between power consumption and service performance during service allocation. Finally, the paper presents two heuristics that approximate solutions to the energy-optimal, performance-aware resource allocation problem and shows that the allocations determined by the proposed heuristics are more energy-efficient than the widely applied maximum-density consolidation.
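Since the abstract frames allocation as a multidimensional vector packing variant solved heuristically, a minimal first-fit-decreasing packing sketch may help fix the idea. The service and server names and the two-dimensional (CPU, RAM) demand vectors below are invented for illustration; this is not one of the paper's heuristics.

```python
# Minimal sketch: first-fit-decreasing heuristic for multidimensional vector
# packing of services onto heterogeneous servers. All names and data are
# hypothetical illustrations, not the paper's API.

def fits(demand, used, capacity):
    """A service fits if every resource dimension stays within capacity."""
    return all(u + d <= c for d, u, c in zip(demand, used, capacity))

def first_fit_decreasing(services, servers):
    """services: list of (name, demand_vector); servers: list of capacity vectors.
    Returns {server_index: [service names]} or raises if a service cannot be placed."""
    used = [[0.0] * len(cap) for cap in servers]
    placement = {i: [] for i in range(len(servers))}
    # Sort by total demand, largest first (the "decreasing" step).
    for name, demand in sorted(services, key=lambda s: -sum(s[1])):
        for i, cap in enumerate(servers):
            if fits(demand, used[i], cap):
                used[i] = [u + d for u, d in zip(used[i], demand)]
                placement[i].append(name)
                break
        else:
            raise RuntimeError(f"no server can host {name}")
    return placement

# Demand/capacity vectors are (CPU, RAM) fractions of a reference machine.
services = [("web", (0.5, 0.3)), ("db", (0.4, 0.6)), ("cache", (0.2, 0.4))]
servers = [(1.0, 1.0), (0.6, 0.8)]
print(first_fit_decreasing(services, servers))  # {0: ['db', 'web'], 1: ['cache']}
```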

3.
Aghasi, Ali; Jamshidi, Kamal; Bohlooli, Ali. Cluster Computing, 2022, 25(2): 1015–1033.

The remarkable growth of cloud computing applications has caused many data centers to encounter unprecedented power consumption and heat generation. Cloud providers share their computational infrastructure through virtualization technology. The scheduler component decides which physical machine hosts the requested virtual machine. This process is virtual machine placement (VMP), which affects the power distribution and thereby the energy consumption of the data center. Due to the heterogeneity and multidimensionality of resources, this task is not trivial, and many studies have tried to address this problem using different methods. However, the majority of such studies fail to consider the cooling energy, which accounts for almost 30% of the energy consumption in a data center. In this paper, we propose a metaheuristic approach based on the binary version of the gravitational search algorithm to simultaneously minimize the computational and cooling energy in the VMP problem. In addition, we suggest a self-adaptive mechanism based on fuzzy logic to control the behavior of the algorithm in terms of exploitation and exploration. The simulation results illustrate that the proposed algorithm reduced energy consumption by 26% on the PlanetLab dataset and 30% on the Google cluster dataset relative to the average of the compared algorithms. The results also indicate that the proposed algorithm provides much more thermally reliable operation.

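To make the optimizer concrete, here is a toy sketch of a single binary gravitational search iteration, assuming the usual mass-from-fitness and tanh transfer-function formulation of BGSA; the paper's fuzzy self-adaptive control and energy model are omitted, and the toy fitness and all names are invented.

```python
import math, random

# One binary gravitational search (BGSA) step on 0/1 placement vectors.
# Illustrative only: the real algorithm couples this with a VMP energy model.

def bgsa_step(population, fitness, velocities, G=1.0):
    """population: list of 0/1 lists; returns updated population and velocities."""
    scores = [fitness(x) for x in population]
    best, worst = min(scores), max(scores)            # minimization problem
    masses = [(worst - s) / (worst - best + 1e-12) for s in scores]
    total = sum(masses) + 1e-12
    masses = [m / total for m in masses]
    dims = len(population[0])
    for i, xi in enumerate(population):
        for d in range(dims):
            # Gravitational pull toward every other agent, weighted by its mass.
            accel = sum(G * masses[j] * random.random() * (population[j][d] - xi[d])
                        for j in range(len(population)) if j != i)
            velocities[i][d] = random.random() * velocities[i][d] + accel
            # Transfer function: flip the bit with probability |tanh(v)|.
            if random.random() < abs(math.tanh(velocities[i][d])):
                xi[d] = 1 - xi[d]
    return population, velocities

# Toy fitness: number of ones (pretend "active machines" to minimize).
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(4)]
vel = [[0.0] * 8 for _ in range(4)]
pop, vel = bgsa_step(pop, sum, vel)
print(pop)
```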

4.
Cluster Computing - Cloud computing is a new computation technology that provides services to consumers and businesses. The main idea of Cloud computing is to present software and hardware services...

5.

Background  

Recently, extensive studies have been carried out on arrhythmia classification algorithms using artificial intelligence pattern recognition methods such as neural networks. To improve practicality, many studies have focused on the learning speed and accuracy of neural networks. However, algorithms based on neural networks still have some problems concerning practical application, such as slow learning speeds and unstable performance caused by local minima.

6.
Liu, Xi; Liu, Jun. Cluster Computing, 2022, 25(2): 1095–1109.

We address the problem of online virtual machine (VM) provisioning and allocation with multiple types of resources. Formulating this problem in an auction-based setting, we propose an accurate mathematical model incorporating the ability to preempt and resume a given task for the sake of best overall use of resources. Our objective is to efficiently provision and allocate multiple VMs so as to maximize social welfare and encourage users to declare truthful requests. We first design an offline optimal mechanism based on the VCG mechanism; this mechanism has full knowledge of all users and offers ideal solutions. We also design an online greedy mechanism that considers only current knowledge while offering near-optimal solutions instead. Our proposed greedy mechanism consists of winner determination and payment algorithms. Furthermore, we show that the winner determination algorithm is monotonic and that the payment algorithm implements the critical payment. Both properties give users an incentive to report their true values in order to obtain the best utility. We performed extensive experiments to investigate the performance of our proposed greedy mechanism compared to the optimal mechanism. Experimental results demonstrate that our proposed greedy mechanism obtains near-optimal solutions in a reasonable time.

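The winner-determination/critical-payment split described above can be illustrated with a toy monotone greedy auction. This is a generic single-resource sketch (value-density greedy, with the best displaced competitor setting the price), not the paper's multi-resource mechanism; all bids and names are invented.

```python
# Toy monotone greedy auction with critical payments (illustrative only).

def run_greedy(bids, capacity, skip=None):
    """bids: {user: (declared_value, demand)}. Greedy by value density."""
    remaining, winners = capacity, set()
    for u in sorted((u for u in bids if u != skip),
                    key=lambda u: -bids[u][0] / bids[u][1]):
        if bids[u][1] <= remaining:
            winners.add(u)
            remaining -= bids[u][1]
    return winners

def critical_payment(bids, capacity, winner):
    """Price = density of the best bidder displaced by `winner`, scaled by the
    winner's demand; 0 if winning displaced nobody. With a monotone greedy rule
    this is the smallest declared value with which `winner` still wins."""
    displaced = run_greedy(bids, capacity, skip=winner) - run_greedy(bids, capacity)
    if not displaced:
        return 0.0
    v, d = max((bids[u] for u in displaced), key=lambda b: b[0] / b[1])
    return v / d * bids[winner][1]

bids = {"u1": (10.0, 2.0), "u2": (9.0, 3.0), "u3": (4.0, 2.0)}
winners = run_greedy(bids, 4.0)
print(winners, {w: critical_payment(bids, 4.0, w) for w in winners})
# u1 pays u2's density (3.0) times its own demand (2.0) = 6.0;
# u3 pays nothing because it fit into leftover capacity.
```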

7.
Cluster Computing - Cloud computing provides effective ways to rapidly provision computing resources over the Internet. For a better management of resource provisioning, the system requires to...

8.
Brain-Computer Interface (BCI) is a technology that translates brain electrical activity into a command for a device such as a robotic arm, a wheelchair or a spelling device. BCIs have long been described as an assistive technology for severely disabled patients because they completely bypass the need for muscular activity. The clinical reality is, however, dramatically different, and most patients who use BCIs today do so as part of constraining clinical trials. To achieve the technological transfer from bench to bedside, BCI must gain ease of use and robustness of both the measure (electroencephalography [EEG]) and the interface (signal processing and applications). The Robust Brain-computer Interface for virtual Keyboard (RoBIK) project aimed at the development of a BCI system for communication that could be used on a daily basis by patients without the help of a trained team of researchers. To guide further developments, clinicians first assessed patients' needs. The prototype subsequently developed consisted of a 14-felt-pad-electrode EEG headset sampled at 256 Hz by an electronic component capable of transmitting signals wirelessly. The application was a virtual keyboard generating a novel stimulation paradigm to elicit P300 event-related potentials (ERPs) for communication. Raw EEG signals were processed with the OpenViBE open-source software, including novel signal processing and stimulation techniques.

9.
10.
The purpose of this paper is to propose models for project scheduling when there is considerable uncertainty in the activity durations, to the extent that the decision maker cannot with confidence associate probabilities with the possible outcomes of a decision. Our modeling techniques stem from robust discrete optimization, which is a theoretical framework that enables the decision maker to produce solutions that will have a reasonably good objective value under any likely input data scenario. We develop and implement a scenario-relaxation algorithm and a scenario-relaxation-based heuristic. The first algorithm produces optimal solutions but requires excessive running times even for medium-sized instances; the second algorithm produces high-quality solutions for medium-sized instances and outperforms two benchmark heuristics.
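As a rough illustration of the scenario-relaxation idea (here for the simpler absolute min-max criterion rather than the paper's exact robust scheduling models), the toy sketch below solves exactly against a growing scenario subset, finds the incumbent's worst scenario over the full set, and relaxes it in until nothing worse remains. The two-machine makespan cost and the duration scenarios are invented.

```python
from itertools import product

def makespan(assign, durations):
    """assign[j] in {0, 1} is the machine job j runs on; cost = max load."""
    loads = [0.0, 0.0]
    for job, machine in enumerate(assign):
        loads[machine] += durations[job]
    return max(loads)

def solve_subset(decisions, scenarios):
    """Exact min-max over an enumerable decision set and a scenario subset."""
    return min(decisions, key=lambda d: max(makespan(d, s) for s in scenarios))

def scenario_relaxation(decisions, all_scenarios):
    subset = [all_scenarios[0]]
    while True:
        incumbent = solve_subset(decisions, subset)
        worst = max(all_scenarios, key=lambda s: makespan(incumbent, s))
        if makespan(incumbent, worst) <= max(makespan(incumbent, s) for s in subset):
            return incumbent  # the relaxed subset already covers the worst case
        subset.append(worst)  # otherwise add the violated scenario and re-solve

scenarios = [(2, 3, 4, 5), (4, 3, 8, 5), (2, 6, 4, 9)]   # job-duration vectors
decisions = list(product((0, 1), repeat=4))              # all 2^4 assignments
print(scenario_relaxation(decisions, scenarios))
```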

11.
By employing the virtual machine (VM) consolidation technique at a virtualized data center, optimal mapping of VMs to physical machines (PMs) can be performed. The type of optimization approach and the policy for detecting the appropriate time to run the consolidation process both influence the performance of the consolidation technique. In the majority of studies, the consolidation approach merely focuses on the management of underloaded or overloaded PMs, while a number of VMs could also be in an underload or overload state. Managing a VM's abnormal state postpones the PM entering an abnormal state as well, and affects when the consolidation process is run. For the aim of optimal VM consolidation, this research presents a self-adaptive architecture to detect and manage underloaded and overloaded VMs/PMs in reaction to workload changes in the data center. The goal of the consolidation process is to employ the minimum number of active VMs and PMs while guaranteeing quality of service (QoS). QoS is assessed via two parameters: the average number of requests in the PM buffer and the average waiting time in the VM. To evaluate these two parameters, a probabilistic model of the data center is proposed by applying queuing theory. The assessment results of the probabilistic model form a basis for decision-making in the modules of the proposed architecture. Numerical results obtained from the assessment of the probabilistic model via a discrete-event simulator under various parameter settings confirm the efficiency of the proposed architecture in achieving the aims of the consolidation process.
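The two QoS parameters named above invite a simple queueing calculation. The abstract does not specify the exact queueing model, so the sketch below assumes an M/M/1 queue purely for illustration:

```python
# Back-of-the-envelope QoS metrics under an M/M/1 assumption (illustrative):
# lam = arrival rate, mu = service rate.

def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable (utilization >= 1)")
    rho = lam / mu                 # utilization
    lq = rho ** 2 / (1 - rho)      # average number waiting in the buffer
    wq = lq / lam                  # average waiting time (Little's law)
    return lq, wq

# e.g. 8 requests/s arriving at a VM that serves 10 requests/s:
lq, wq = mm1_metrics(8.0, 10.0)
print(f"avg requests buffered: {lq:.2f}, avg wait: {wq:.3f} s")  # 3.20, 0.400 s
```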

12.
As one of the most important features of virtualization, virtual machine (VM) migration provides great benefits for load balancing, resource saving, and fault tolerance in modern cloud data centers. The network traffic caused by transferring data during VM migration puts heavy pressure on the network bandwidth of cloud data centers. By analyzing the characteristics of the transferred data, we found that the redundant data produced between two physical hosts that host virtual machines cloned from the same VM template can be eliminated to relieve this pressure. This paper presents a metadata-based VM migration approach (Mvmotion) that reduces the amount of transferred data during migration by applying a memory de-redundancy technique between the two physical hosts. Mvmotion uses hash-based fingerprints to generate metadata for memory, which is used to identify redundant memory of VMs between two hosts. Based on this metadata, the transfer of redundant memory data during migration can be eliminated. Experiments demonstrate that, compared to Xen's default migration approach, Mvmotion can reduce the total transferred data by 29–97% and decrease the migration time by 16–53%.
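A minimal sketch of the metadata idea described above: fingerprint each memory page with a hash, exchange only the fingerprints, and transfer just the pages the destination does not already hold (for example, because it hosts a clone of the same template). The page size, helper names, and toy "memory" are illustrative; the real system operates inside the hypervisor on live VM memory.

```python
import hashlib

PAGE = 4096

def fingerprints(memory: bytes) -> dict:
    """Map page offset -> hash of that page's contents (the metadata)."""
    return {off: hashlib.sha1(memory[off:off + PAGE]).hexdigest()
            for off in range(0, len(memory), PAGE)}

def pages_to_transfer(src_meta: dict, dst_hashes: set) -> list:
    """Only pages whose contents are absent on the destination are sent;
    the rest can be reconstructed locally from data already present."""
    return [off for off, h in src_meta.items() if h not in dst_hashes]

vm_mem = b"A" * PAGE + b"B" * PAGE + b"C" * PAGE     # three toy pages
template = b"A" * PAGE + b"X" * PAGE + b"C" * PAGE   # template clone on destination
dst_hashes = set(fingerprints(template).values())
print(pages_to_transfer(fingerprints(vm_mem), dst_hashes))  # -> [4096]
```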

13.
14.
Large-scale clusters based on virtualization technologies have been widely used in many areas, including data centers and cloud computing environments. Saving energy, however, is a major challenge in building a "green cluster". Previous work does not meet this challenge: local approaches save energy in the components of a single workstation without a global view of the whole cluster, while cluster-wide energy-saving techniques can only be applied to homogeneous workstations and specific applications. This paper describes the design and implementation of a novel scheme, called Magnet, that uses live migration of virtual machines to transfer load among the nodes on a multi-layer ring-based overlay. By treating all the cluster nodes as a whole on top of virtualization technologies, this scheme can reduce power consumption greatly, and it can be applied to both homogeneous and heterogeneous servers. Experimental measurements show that the new method can reduce power consumption by up to 74.8% over the baseline, with adjustable and acceptable overhead. The effectiveness and performance insights are also analytically verified.

15.
Distributed Shared Arrays (DSA) is a distributed virtual machine that supports Java-compliant multithreaded programming with mobility support for system reconfiguration in distributed environments. The DSA programming model allows programmers to explicitly control data distribution so as to take advantage of the deep memory hierarchy, while relieving them from error-prone orchestration of communication and synchronization at run-time. The DSA system is developed as an integral component of mobility support middleware for Grid computing so that DSA-based virtual machines can be reconfigured to adapt to the varying resource supplies or demand over the course of a computation. The DSA runtime system also features a directory-based cache coherence protocol in support of replication at user-defined sharing granularity, and a communication proxy mechanism for reducing network contention. System reconfiguration is achieved by a DSA service migration mechanism, which moves the DSA service and residing computational agents between physical servers for load balancing and fault resilience. We demonstrate the programmability of the model in a number of parallel applications and evaluate its performance with application benchmark programs, examining in particular the impact of coherence granularity and service migration overhead.

Song Fu received the BS degree in computer science from Nanjing University of Aeronautics and Astronautics, China, in 1999, and the MS degree in computer science from Nanjing University, China, in 2002. He is currently a PhD candidate in computer engineering at Wayne State University. His research interests include resource management, security, and mobility issues in wide-area distributed systems.

Cheng-Zhong Xu received the BS and MS degrees in computer science from Nanjing University in 1986 and 1989, respectively, and the PhD degree in computer science from the University of Hong Kong in 1993. He is an Associate Professor in the Department of Electrical and Computer Engineering of Wayne State University. His research interests lie in distributed and parallel systems, particularly in resource management for high-performance cluster and grid computing and scalable and secure Internet services. He has published more than 100 peer-reviewed articles in journals and conference proceedings in these areas. He is the author of the book Scalable and Secure Internet Services and Architecture (CRC Press, 2005) and a co-author of the book Load Balancing in Parallel Computers: Theory and Practice (Kluwer Academic, 1997). He serves on the editorial boards of the J. of Parallel and Distributed Computing, J. of Parallel, Emergent, and Distributed Systems, J. of High Performance Computing and Networking, and J. of Computers and Applications. He was the founding program co-chair of the International Workshop on Security in Systems and Networks (SSN), the general co-chair of the IFIP 2006 International Conference on Embedded and Ubiquitous Computing (EUC06), and a member of the program committees of numerous conferences. His research was supported in part by the US National Science Foundation, NASA, and Cray Research. He is a recipient of the Faculty Research Award of Wayne State University in 2000, the President's Award for Excellence in Teaching in 2002, and the Career Development Chair Award in 2003. He is a senior member of the IEEE.

Brian A. Wims was born in Washington, DC in 1967. He received the Bachelor of Science in Electrical Engineering from GMI-EMI (now called Kettering University) in 1990, and the Master of Science in Computer Engineering from Wayne State University in 1999. His research interests are primarily in the fields of parallel and distributed systems with applications in mobile agent technologies. From 1990 to 2001 he worked in various engineering positions at General Motors, including electrical analysis, software design, and test and development. In 2001, he joined the General Motors IS&S department, where he is currently a Project Manager in the Computer Aided Test group; his responsibilities include managing the development of test-automation applications in the Electrical, EMC, and Safety Labs.

Ramzi Basharahil was born in Aden, Yemen in 1972. He received the Bachelor of Science degree in Electrical Engineering from the United Arab Emirates University, graduating at the top of his engineering class of 1997. He obtained the Master of Science degree in 2001 from Wayne State University in the Department of Electrical and Computer Engineering. His research interests are primarily in the fields of parallel and distributed systems with applications to distributed processing across clusters of servers. From 1997 to 1998, he worked as a Teaching Assistant in the Department of Electrical Engineering at the UAE University. In 2000, he joined Internet Security Systems as a security software engineer, and in 2002 he joined NetIQ Corporation, where he has worked since. He leads the development of security-event trending and event-management software, designing and implementing event/log management products.

16.

Background  

Development of a fast and accurate scoring function for virtual screening remains a hot issue in current computer-aided drug research. Different scoring functions focus on diverse aspects of ligand binding, and no single scoring function can satisfy the peculiarities of each target system. Therefore, the idea of a consensus-score strategy was put forward. Integrating several scoring functions, a consensus score re-assesses the docked conformations produced using a primary scoring function. However, it is not really robust and efficient from the perspective of optimization. Furthermore, to date, the majority of available methods are still based on single-objective optimization design.
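For orientation, a common concrete form of the consensus-score idea mentioned above is rank-based consensus: each scoring function ranks the docked poses, and the consensus is the mean rank. This generic sketch is not the paper's multi-objective method; the scores below are invented.

```python
# Rank-by-rank consensus scoring over hypothetical docking scores.

def rank(values, reverse=True):
    """Rank positions (1 = best); reverse=True means higher score is better."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=reverse)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

# Rows: poses; columns: three hypothetical scoring functions (higher = better).
scores = [(7.2, 55.0, 0.81),
          (6.9, 61.0, 0.77),
          (7.5, 48.0, 0.90)]
per_function = [rank(col) for col in zip(*scores)]
consensus = [sum(r) / len(per_function) for r in zip(*per_function)]
print(consensus)   # lowest mean rank = consensus-best pose
```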

17.
Live virtual machine migration can have a major impact on how a cloud system performs, as it consumes significant amounts of network resources such as bandwidth. Migration contributes to an increase in consumption of network resources which leads to longer migration times and ultimately has a detrimental effect on the performance of a cloud computing system. Most industrial approaches use ad-hoc manual policies to migrate virtual machines. In this paper, we propose an autonomous network aware live migration strategy that observes the current demand level of a network and performs appropriate actions based on what it is experiencing. The Artificial Intelligence technique known as Reinforcement Learning acts as a decision support system, enabling an agent to learn optimal scheduling times for live migration while analysing current network traffic demand. We demonstrate that an autonomous agent can learn to utilise available resources when peak loads saturate the cloud network.
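A toy sketch of the reinforcement-learning idea: a tabular Q-learning agent decides whether to migrate now or defer, given a discretized network-load state. The states, actions, reward shape, and load model are all invented for illustration; the paper's agent, environment, and reward design differ.

```python
import random

STATES = ("low", "medium", "high")      # discretized network demand
ACTIONS = ("migrate", "defer")
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    """Migrating under load congests the network; deferring has a small cost."""
    if action == "migrate":
        return {"low": 1.0, "medium": -0.5, "high": -2.0}[state]
    return -0.1

for _ in range(5000):
    s = random.choice(STATES)
    a = (random.choice(ACTIONS) if random.random() < EPS      # epsilon-greedy
         else max(ACTIONS, key=lambda x: q[(s, x)]))
    r = reward(s, a)
    s2 = random.choice(STATES)          # toy transition: load is i.i.d.
    best_next = max(q[(s2, x)] for x in ACTIONS)
    q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])

print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES})
# expected policy: migrate when load is low, defer otherwise
```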

18.
19.
The evolution of omics and computational competency has accelerated discoveries of the underlying biological processes in an unprecedented way. High-throughput methodologies, such as flow cytometry, can reveal deeper insights into cell processes, thereby allowing opportunities for scientific discoveries related to health and diseases. However, working with cytometry data often imposes complex computational challenges due to the high dimensionality, large size, and nonlinearity of the data structure. In addition, cytometry data frequently exhibit diverse patterns across biomarkers and suffer from substantial class imbalances which can further complicate the problem. The existing methods of cytometry data analysis either predict cell population or perform feature selection. Through this study, we propose a "wisdom of the crowd" approach to simultaneously predict rare cell populations and perform feature selection by integrating a pool of modern machine learning (ML) algorithms. Because our approach integrates the best-performing ML models across different entropy- and rank-based normalization techniques, it can detect diverse patterns existing across the model features. Furthermore, the method identifies a dynamic biomarker structure that divides the features into persistently selected, unselected, and fluctuating assemblies, indicating the role of each biomarker in rare cell prediction, which can subsequently aid in studies of disease progression.
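A rough sketch of the "wisdom of the crowd" idea: pool several classifiers and vote, with class weighting for the rare-cell imbalance. The synthetic data and the two base models below are stand-ins; the paper integrates a larger pool with entropy- and rank-based model selection, which is omitted here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic imbalanced data: class 1 plays the "rare cell" population (~3%).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97, 0.03],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

pool = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(class_weight="balanced", random_state=0)),
        ("lr", LogisticRegression(class_weight="balanced", max_iter=1000)),
    ],
    voting="soft",                      # average predicted probabilities
)
pool.fit(Xtr, ytr)
print("rare-class F1:", f1_score(yte, pool.predict(Xte)))

# Forest feature importances hint at which "biomarkers" are persistently useful.
rf = pool.named_estimators_["rf"]
print("top features:", sorted(range(20), key=lambda i: -rf.feature_importances_[i])[:5])
```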

20.
Sugar beet (Beta vulgaris L. subsp. vulgaris) is deemed to be one of the most promising bioethanol feedstock crops in northern Japan. To establish viable sugar beet‐based bioethanol production systems, energy‐efficient protocols in sugar beet cultivation are being intensively sought. On this basis, the effects of alternative agronomic practices for sugar beet production on total energy inputs (from fuels and agricultural materials during cultivation and transportation) and ethanol yields (estimated from sugar yields) were assessed in terms of (i) direct drilling, (ii) reduced tillage (no moldboard plowing), (iii) no‐fungicide application, (iv) using a high‐yielding beet genotype, (v) delayed harvesting and (vi) root+crown harvesting. Compared with the conventional sugar beet production system used in the Tokachi region of Hokkaido, northern Japan, which makes use of transplants, direct drilling and no‐fungicide application contributed to reduced energy inputs from raising seedlings and fungicides, respectively, but sugar (or ethanol) yields were also reduced by these practices, to a greater equivalent extent than the reductions in energy inputs. Consequently, direct drilling (6.84 MJ L⁻¹) and no‐fungicide application (7.78 MJ L⁻¹) worsened the energy efficiency (total energy inputs to produce 1 L of ethanol), compared with conventional sugar beet production practices (5.82 MJ L⁻¹). Sugar yields under conventional plow‐based tillage and reduced tillage practices were similar, but total energy inputs were reduced as a result of reduced fuel consumption from not plowing. Hence, reduced tillage showed improved energy efficiency (5.36 MJ L⁻¹). The energy efficiency was also improved by using a high‐yielding genotype (5.23 MJ L⁻¹) and root+crown harvesting (5.21 MJ L⁻¹). For these practices, no major changes in total energy inputs were noted, but sugar yields were consistently increased. Neither total energy inputs nor ethanol yields were affected by extending the vegetative growing period by delaying harvesting.
