Similar Literature
20 similar documents found.
1.
Cloud computing environments (CCEs) are expected to deliver services at the quality levels specified in service level agreements. At the same time, they typically employ virtualization to consolidate multiple workloads on the same physical machine and thereby improve overall utilization of physical resources. Most existing virtualization technologies, however, are unaware of the quality of service (QoS) they deliver; the Xen hypervisor, for example, focuses merely on fair sharing of processor resources. CCEs have, in effect, been coupled to traditional virtualization technologies despite having few traits in common. To bridge the gap between the two, we have designed and implemented Kani, a QoS-aware hypervisor-level scheduler. Kani dynamically monitors the quality of delivered services to quantify the deviation between the desired and delivered levels of QoS, and uses this information to decide how to allocate processor resources among running VMs so as to meet the expected QoS. Our evaluation of a Kani prototype in Xen shows that Kani outperforms Xen's default scheduler, the Credit scheduler: for example, Kani reduces the average response time of an Apache web server by up to 93.6%, improves its throughput by up to 97.9%, and reduces the call setup time of an Asterisk media server by up to 96.6%.
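A minimal sketch (our illustration, not Kani's actual implementation) of the feedback loop the abstract describes: measure the gap between each VM's target and delivered QoS, then shift hypervisor CPU shares toward the VMs that miss their targets. The VM names, share units, and proportional gain are assumptions.

```python
# Sketch of QoS-deviation-driven CPU rebalancing; all parameters are assumed.

def rebalance_cpu_shares(vms, total_shares=1000, gain=0.5):
    """vms: list of dicts with 'name', 'target_ms', 'observed_ms', 'shares'."""
    # Deviation > 0 means the VM is slower than its SLA target.
    for vm in vms:
        vm["deviation"] = (vm["observed_ms"] - vm["target_ms"]) / vm["target_ms"]

    # Nudge shares proportionally toward VMs with positive deviation.
    for vm in vms:
        vm["shares"] *= 1 + gain * vm["deviation"]

    # Renormalize so the hypervisor still hands out exactly total_shares.
    scale = total_shares / sum(vm["shares"] for vm in vms)
    for vm in vms:
        vm["shares"] = max(1, round(vm["shares"] * scale))
    return vms

if __name__ == "__main__":
    vms = [
        {"name": "web",   "target_ms": 50,  "observed_ms": 80,  "shares": 500},
        {"name": "batch", "target_ms": 500, "observed_ms": 300, "shares": 500},
    ]
    for vm in rebalance_cpu_shares(vms):
        print(vm["name"], vm["shares"])   # web gains shares, batch cedes them
```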

2.

In recent years, cloud computing has emerged as a technology for sharing resources among users. Because cloud computing is on-demand, efficient use of resources such as memory, processors, and bandwidth is a major challenge. Despite its advantages, cloud computing is sometimes a poor fit because of the delay in responding to requests, which motivated a complementary technology called fog computing. Fog computing reduces traffic and latency by extending cloud services to the network edge, closer to users; it can schedule resources more efficiently and thereby markedly improve the user's experience. This paper surveys studies of scheduling in fog/cloud computing environments, focusing on work published in journals or conferences between 2015 and 2021. Following a systematic literature review (SLR), we selected 71 relevant studies from four major scientific databases and classified them into five categories according to the parameters they trace and their focus area: (1) performance, (2) energy efficiency, (3) resource utilization, (4) performance and energy efficiency, and (5) performance and resource utilization simultaneously. Of these studies, 42.3% focus on performance, 9.9% on energy efficiency, 7.0% on resource utilization, 21.1% on both performance and energy efficiency, and 19.7% on both performance and resource utilization. Finally, we present challenges and open issues in resource scheduling methods for fog/cloud computing environments.


3.
As DNA sequencing outpaces improvements in computer speed, there is a critical need to accelerate tasks such as alignment and SNP calling. Crossbow is a cloud-computing software tool that combines the aligner Bowtie with the SNP caller SOAPsnp. Executing in parallel using Hadoop, Crossbow analyzes data comprising 38-fold coverage of the human genome in three hours using a 320-CPU cluster rented from a cloud computing service for about $85. Crossbow is available from .
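Crossbow's actual pipeline runs Bowtie and SOAPsnp under Hadoop; the toy below only illustrates the same partition/align/aggregate pattern. The `align` and `call_snps` functions are trivial stand-ins we invented, not the real tools.

```python
# Toy of the split/align/aggregate pattern (not Crossbow's code).
from concurrent.futures import ProcessPoolExecutor
from collections import Counter

REFERENCE = "ACGTACGTAA"

def align(read):
    """Stand-in aligner: report (position, base) pairs where read differs."""
    pos = REFERENCE.find(read[:4])          # naive seed placement
    if pos < 0:
        return []
    return [(pos + i, b) for i, b in enumerate(read)
            if pos + i < len(REFERENCE) and REFERENCE[pos + i] != b]

def call_snps(mismatches, min_support=2):
    """Stand-in SNP caller: keep sites seen in >= min_support reads."""
    counts = Counter(mismatches)
    return {site: n for site, n in counts.items() if n >= min_support}

if __name__ == "__main__":
    reads = ["ACGTTCGT", "ACGTTCGT", "TACGTAA"]
    with ProcessPoolExecutor() as pool:        # parallel "map" stage
        hits = [m for ms in pool.map(align, reads) for m in ms]
    print(call_snps(hits))                     # {(4, 'T'): 2}, the shared mismatch
```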

4.
Cloud computing must inherently support various types of data-intensive workloads with different storage access patterns, which makes a high-performance storage system an important component of the Cloud. Emerging flash devices such as solid state drives (SSDs) are a viable choice for building high performance computing (HPC) cloud storage systems that serve fine-grained data access patterns. However, the price per bit of SSDs is still higher than that of HDDs. This study proposes an optimized progressive file layout (PFL) method that leverages the advantages of SSDs in a parallel file system such as Lustre, significantly improving small-file I/O performance. A PFL can dynamically adjust chunk sizes and stripe patterns according to the observed I/O traffic. Extensive experimental results show that this approach (building a hybrid storage system from a combination of SSDs and HDDs) achieves balanced throughput over mixed I/O workloads consisting of large- and small-file access patterns.
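A rough sketch of the progressive-layout idea under our own assumed thresholds (the paper's actual chunk sizes and policies may differ): the first bytes of every file go to SSD with small chunks, so small files live entirely on flash, while later regions fall back to HDD with progressively wider stripes.

```python
# Illustrative thresholds only; a real Lustre PFL is configured per component.
LAYOUTS = [
    # (region end offset, chunk size, assumed storage pool)
    (1 << 20,      64 * 1024, "ssd_pool"),   # first 1 MiB: small chunks, SSD
    (1 << 30,      1 << 20,   "hdd_pool"),   # up to 1 GiB: 1 MiB chunks, HDD
    (float("inf"), 4 << 20,   "hdd_pool"),   # beyond: wide 4 MiB stripes
]

def layout_for_offset(offset):
    """Return the (chunk_size, pool) governing the byte at `offset`."""
    for end, chunk, pool in LAYOUTS:
        if offset < end:
            return chunk, pool

print(layout_for_offset(4096))        # (65536, 'ssd_pool'): small-file I/O
print(layout_for_offset(200 << 20))   # (1048576, 'hdd_pool'): bulk streaming
```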

5.
The science cloud paradigm has been actively developed and investigated, but still lacks a suitable system model that can support growing scientific computation needs at high performance. This paper presents an effective provisioning model for science clouds, particularly for large-scale high-throughput computing applications. The model draws on job traces, applying a statistical method to select the features most influential on application performance. Using these features, the system determines where a VM should be deployed (allocation) and which instance type is appropriate (provisioning). An adaptive evaluation step following each job execution lets the model adapt to dynamic computing environments. Experiments comparing the proposed model with other policies show clear performance gains, and we expect noticeable improvements in performance as well as reduced resource-consumption cost.
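The paper's statistical feature selection is beyond a short sketch, so the toy below assumes a fixed per-instance-type profile (made-up numbers standing in for what job traces would yield) and picks the cheapest type that still meets a deadline:

```python
# Hypothetical trace-derived profile: predicted seconds per task and prices.
PROFILE = {
    "small":  {"cost_per_hour": 0.05, "sec_per_task": 40.0},
    "medium": {"cost_per_hour": 0.10, "sec_per_task": 18.0},
    "large":  {"cost_per_hour": 0.25, "sec_per_task": 8.0},
}

def provision(num_tasks, deadline_sec):
    """Return (instance_type, cost) of the cheapest type meeting the deadline."""
    best = None
    for name, p in PROFILE.items():
        runtime = num_tasks * p["sec_per_task"]
        if runtime > deadline_sec:
            continue                          # too slow: misses the deadline
        cost = runtime / 3600 * p["cost_per_hour"]
        if best is None or cost < best[1]:
            best = (name, cost)
    return best                               # None: no type meets the deadline

print(provision(100, 3600))   # -> ('medium', 0.05)
```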

6.
7.
Cloud computing serves as a platform for remote users to run High-Performance Computing jobs on the heterogeneous resources of data centers. Physical resources in the Cloud are virtualized to serve users through Virtual Machines (VMs). Job scheduling is a quintessential part of the Cloud, and efficient utilization of VMs by Cloud Service Providers demands an optimal job-scheduling heuristic. An ideal scheduling heuristic should be efficient, fair, and starvation-free, producing a reduced makespan with improved resource utilization. Static heuristics, however, often lead to inefficient and poor resource utilization in the Cloud. An idle or underutilized host machine still consumes up to 70% of the energy required by an active machine (Ray, Indian J Comput Sci Eng 1(4):333–339, 2012), which demands a load-balanced distribution of workload to achieve optimal resource utilization. Existing Cloud scheduling heuristics such as Min–Min, Max–Min, and Sufferage distribute workloads among VMs based on minimum job completion time, which ultimately causes load imbalance. This paper presents a novel Resource-Aware Load Balancing Algorithm (RALBA) that ensures a balanced distribution of workload based on the computation capabilities of VMs. The RALBA framework comprises two phases: (1) scheduling based on the computing capabilities of VMs, and (2) mapping each job to the VM with the earliest finish time. Results show that RALBA provides substantial improvement over traditional heuristics in makespan, resource utilization, and throughput.
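A simplified sketch of the two-phase idea (not the authors' exact RALBA): phase 1 sizes each VM's fair share of the total work from its compute capability; phase 2 maps jobs, largest first, preferring VMs with unused share and breaking ties by earliest finish time. Job lengths in MI and VM speeds in MIPS are assumed units.

```python
# Capability-proportional share plus earliest-finish-time mapping (sketch).

def ralba_like_schedule(jobs, vms):
    """jobs: list of lengths (MI); vms: dict name -> MIPS. Returns (plan, makespan)."""
    total_mips = sum(vms.values())
    total_work = sum(jobs)
    # Phase 1: proportional share of total work each VM should receive.
    share = {v: total_work * mips / total_mips for v, mips in vms.items()}
    assigned = {v: 0.0 for v in vms}
    finish = {v: 0.0 for v in vms}
    plan = {}
    # Phase 2: largest jobs first; prefer VMs with unused share,
    # break ties by the earliest candidate finish time.
    for i, job in sorted(enumerate(jobs), key=lambda x: -x[1]):
        v = min(vms, key=lambda v: (assigned[v] >= share[v],
                                    finish[v] + job / vms[v]))
        plan[i] = v
        assigned[v] += job
        finish[v] += job / vms[v]
    return plan, max(finish.values())

jobs = [400, 300, 200, 100]            # job lengths in MI (assumed)
vms = {"fast": 100, "slow": 50}        # VM speeds in MIPS (assumed)
plan, makespan = ralba_like_schedule(jobs, vms)
print(plan, makespan)                  # balanced mapping, makespan 7.0
```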

8.
MapReduce is a programming model for processing massive amounts of data in cloud computing. It processes data in two phases and must transfer intermediate data between computers in between. MapReduce lets programmers aggregate intermediate data with a function named a combiner before transferring it; by leaving this choice to programmers, it risks performance degradation, because aggregating intermediate data benefits some applications but harms others. Our proposal, the Adaptive Combiner for MapReduce (ACMR), makes this decision automatically and adaptively, with no programmer intervention. In experiments on seven applications, MapReduce with ACMR achieved performance comparable to the configuration that is optimal for each application.
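A hedged sketch of the adaptive decision (our own heuristic, not the ACMR algorithm): sample a mapper's output and enable the combiner only when the ratio of distinct keys to emitted pairs indicates that local aggregation would actually shrink the shuffle.

```python
# Decide whether a combiner pays off, from a sample of mapper output.
from collections import Counter

def combiner_helps(sample_pairs, threshold=0.8):
    """sample_pairs: list of (key, value) a mapper emitted on sample input.
    True when combining would cut shuffle volume below `threshold`."""
    distinct = len(Counter(k for k, _ in sample_pairs))
    ratio = distinct / len(sample_pairs)   # 1.0 => combining saves nothing
    return ratio < threshold

# Word count emits many repeated keys, so a combiner pays off:
wc = [("the", 1), ("the", 1), ("a", 1), ("the", 1)]
print(combiner_helps(wc))          # True  (ratio 0.5)

# A job with unique keys gains nothing from combining:
uniq = [(i, 1) for i in range(100)]
print(combiner_helps(uniq))        # False (ratio 1.0)
```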

9.
With the development of ubiquitous computing technology, users increasingly produce and access information through mobile devices. Because of these devices' limited computing capability and storage, however, the architecture, design, and implementation of mobile cloud computing remain open research issues. This paper proposes a trust management approach for reliable mobile cloud computing based on the analysis of user behavioral patterns. We present a method to quantify a one-dimensional trust relation from the analysis of telephone call data collected on mobile devices, and then integrate this inter-user trust relationship into the mobile cloud environment. As a result, the trustworthiness of data is enhanced throughout production, management, and overall application.
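A minimal sketch with an invented scoring formula (the paper's quantification method is not reproduced here): derive a pairwise trust value from call count and duration, normalized per user so each user's most-called contact scores 1.0.

```python
# Pairwise trust from call logs; the weighting is an assumption.
from collections import defaultdict

def trust_scores(call_log):
    """call_log: list of (caller, callee, duration_sec) tuples."""
    raw = defaultdict(float)
    for caller, callee, dur in call_log:
        raw[(caller, callee)] += 1 + dur / 60.0   # weight calls plus minutes
    peak = defaultdict(float)                      # each caller's strongest tie
    for (caller, _), w in raw.items():
        peak[caller] = max(peak[caller], w)
    return {pair: w / peak[pair[0]] for pair, w in raw.items()}

log = [("alice", "bob", 300), ("alice", "bob", 60), ("alice", "carol", 30)]
print(trust_scores(log))   # bob scores 1.0 for alice; carol far lower
```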

10.
11.
With DNA sequencing now getting cheaper more quickly than data storage or computation, the time may have come for genome informatics to migrate to the cloud.

12.
Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the sheer number of such molecules makes the potential size of the task enormous. This paper proposes a method for screening small-molecule databases using cloud computing, called the hierarchical method for molecular docking, which can be completed in a relatively short time. The method divides the optimization of molecular docking into two subproblems according to their different effects on the protein–ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem, and a new docking program (FlexGAsDock) based on the hierarchical docking method has been implemented. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for efficient molecular drug design.
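A generic sketch of the adaptive-GA ingredient (not FlexGAsDock, and with a stand-in quadratic in place of a real protein–ligand energy function): keep the fitter half of a population of pose vectors and shrink the mutation step as the search converges.

```python
# Adaptive genetic algorithm skeleton; the energy function is a placeholder.
import random

def energy(pose):                      # stand-in for a docking score
    return sum((x - 0.5) ** 2 for x in pose)

def adaptive_ga(dim=6, pop_size=20, generations=50):
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    step = 0.5
    for _ in range(generations):
        pop.sort(key=energy)
        survivors = pop[: pop_size // 2]             # elitist selection
        children = [[x + random.gauss(0, step) for x in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
        step *= 0.95                   # adapt: tighten mutation as we converge
    return min(pop, key=energy)

best = adaptive_ga()
print(round(energy(best), 4))          # close to 0 => near the optimum
```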

13.
Biomedicine today is characterised by a growing need to process large amounts of data in real time, which places new demands on information and communication technologies (ICT). Cloud computing answers these demands and offers many advantages, such as cost savings, elasticity, and scalability. The aim of this paper is to explore the concept of cloud computing and its use in biomedicine. The authors offer a comprehensive analysis of the implementation of cloud computing in biomedical research, decomposed into the infrastructure, platform, and service layers, together with a recommendation for processing large amounts of data in biomedicine. The paper first describes the appropriate forms and technological solutions of cloud computing, then analyses the high-end computing aspects of the cloud paradigm, and finally discusses the technology's potential and current applications in biomedical scientific research.

14.
15.
Delivering scalable, rich multimedia applications and services on the Internet requires sophisticated technologies for transcoding, distributing, and streaming content. Cloud computing provides an infrastructure for such technologies, but specific challenges remain in task management, load balancing, and fault tolerance. To address these issues, we propose a cloud-based distributed multimedia streaming service (CloudDMSS), designed to run on all major cloud computing services. CloudDMSS is closely adapted to the structure and policies of Hadoop, giving it additional capabilities for transcoding, task distribution, load balancing, and content replication and distribution. To satisfy the design requirements of our service architecture, we propose four algorithms: content replication, system recovery for Hadoop distributed multimedia streaming, management for cloud multimedia, and streaming resource-based connection (SRC) for streaming job distribution. To evaluate the proposed system, we conducted several performance tests on a local testbed: transcoding, streaming job distribution using SRC, streaming service deployment, and robustness to data-node and task failures. In addition, we performed three tests in an actual cloud computing environment, Cloudit 2.0: transcoding, streaming job distribution using SRC, and streaming service deployment.
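A sketch under assumed metrics (not the paper's exact SRC formula): route each new streaming job to the node with the most remaining streaming headroom, scored here from spare bandwidth and spare CPU.

```python
# Pick a streaming node by combined resource headroom; the score is assumed.

def pick_node(nodes):
    """nodes: dict name -> {'bw_free_mbps': .., 'cpu_free': ..} (cpu in 0..1)."""
    def score(name):
        m = nodes[name]
        return m["bw_free_mbps"] * m["cpu_free"]   # crude combined headroom
    return max(nodes, key=score)

nodes = {
    "node-a": {"bw_free_mbps": 400, "cpu_free": 0.2},
    "node-b": {"bw_free_mbps": 250, "cpu_free": 0.7},
}
print(pick_node(nodes))   # node-b: headroom 175 vs node-a's 80
```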

16.
17.
Shao, Bilin; Ji, Yanyan. Cluster Computing, 2021, 24(3): 1989–2000.

In recent years, designing efficient auditing protocols to verify the integrity of users' data stored at a cloud service provider (CSP) has become a research focus. Homomorphic message authentication codes (MACs) and homomorphic signatures are two popular techniques for designing private and public auditing protocols, respectively. On the one hand, homomorphic-MAC-based auditing protocols are highly efficient but unsuited to outsourcing to a third-party auditor (TPA), who has more professional knowledge and computational ability. On the other hand, homomorphic-signature-based protocols are well suited to employing a TPA without compromising the user's signing key, but have much lower efficiency. In this paper, we propose a new auditing protocol that combines the advantages of both approaches. In particular, it is almost as efficient as the homomorphic-MAC-based protocol recently proposed by Zhang et al. Moreover, it is suitable for outsourcing to a TPA because it does not compromise the privacy of the user's signing key, as our security analysis shows. Finally, numerical analysis and experimental results demonstrate the high efficiency of our protocol.
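A toy of the homomorphic-MAC style of private auditing that the abstract contrasts with signatures (illustrative only; the authors' protocol differs and additionally supports a TPA). Tags take the classic form t_i = PRF_k(i) + α·m_i over a prime field, so a challenged aggregate of blocks can be checked against the matching aggregate of tags.

```python
# Toy homomorphic-MAC audit; not the paper's protocol.
import hashlib, random

P = (1 << 127) - 1                       # a Mersenne prime as the field

def prf(key, i):
    return int.from_bytes(hashlib.sha256(f"{key}:{i}".encode()).digest(), "big") % P

def tag(alpha, key, i, block):           # t_i = PRF_k(i) + alpha * m_i (mod P)
    return (prf(key, i) + alpha * block) % P

def prove(blocks, tags, challenge):      # CSP side: aggregate over the challenge
    mu = sum(v * blocks[i] for i, v in challenge) % P
    sigma = sum(v * tags[i] for i, v in challenge) % P
    return mu, sigma

def verify(alpha, key, challenge, mu, sigma):
    expected = (sum(v * prf(key, i) for i, v in challenge) + alpha * mu) % P
    return expected == sigma

blocks = [random.randrange(P) for _ in range(8)]      # the outsourced file
alpha, key = random.randrange(1, P), "secret"         # verifier's secret key
tags = [tag(alpha, key, i, m) for i, m in enumerate(blocks)]
chal = [(i, random.randrange(1, P)) for i in random.sample(range(8), 4)]
print(verify(alpha, key, chal, *prove(blocks, tags, chal)))   # True
```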


18.
19.
In this overview of biomedical computing in the cloud, we discussed two primary ways to use the cloud (a single instance or a cluster), provided a detailed example using NGS mapping, and highlighted the associated costs. While many users new to the cloud may assume that entry is as straightforward as uploading an application and selecting an instance type and storage options, we illustrated that substantial up-front effort is required before an application can make full use of the cloud's vast resources. Our intention was to provide a set of best practices and to show how they apply to a typical biomedical-informatics application pipeline, while remaining general enough to extrapolate to other types of computational problems. Our mapping example was intended to illustrate how to develop a scalable project, not to compare alignment algorithms for read mapping and genome assembly. Indeed, with a newer aligner such as Bowtie, it is possible to map the entire African genome using one m2.2xlarge instance in 48 hours for a total cost of approximately $48 in computation time. In our example we did not consider data transfer rates, which are heavily influenced by the amount of available bandwidth, connection latency, and network availability. When transferring large amounts of data to the cloud, bandwidth limitations can be a major bottleneck, and in some cases it is more efficient simply to mail a storage device containing the data to AWS (http://aws.amazon.com/importexport/). More information about cloud computing, detailed cost analysis, and security can be found in the references.
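As a back-of-the-envelope check of the $48 figure quoted above: 48 hours at roughly $1.00 per hour (a rate implied by the abstract's own numbers, not taken from a current price list) is $48. A small helper makes the arithmetic explicit:

```python
# Rough cloud-bill estimate: instance time plus optional monthly storage.
# The $0.10/GB-month storage rate is an assumption for illustration.

def compute_cost(hours, rate_per_hour, storage_gb=0, gb_month_rate=0.10):
    return hours * rate_per_hour + storage_gb * gb_month_rate

print(f"${compute_cost(48, 1.00):.2f}")   # $48.00, matching the abstract
```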

20.
Singh, Parminder; Kaur, Avinash; Gupta, Pooja; Gill, Sukhpal Singh; Jyoti, Kiran. Cluster Computing, 2021, 24(2): 717–737.
The elasticity characteristic of cloud services attracts application providers to deploy applications in a cloud environment. The scalability feature of cloud computing gives...
