Similar Documents
20 similar documents found; search time: 389 ms
1.
The emergence of cloud computing has made it an attractive solution for large-scale data processing and storage applications. Cloud infrastructures provide users remote access to powerful computing capacity, large storage space and high network bandwidth to deploy various applications. With the support of cloud computing, many large-scale applications have been migrated to cloud infrastructures instead of running on in-house local servers. Among these applications, continuous write applications (CWAs) such as online surveillance systems can benefit significantly from the flexibility and advantages of cloud computing. However, because of specific characteristics such as continuous data writing and processing and a high demand for data availability, cloud service providers prefer to use sophisticated models for provisioning resources to meet CWAs’ demands while minimizing the operational cost of the infrastructure. In this paper, we present a novel architecture of multiple cloud service providers (CSPs), commonly referred to as a Cloud-of-Clouds. Based on this architecture, we propose two operational cost-aware algorithms for provisioning cloud resources for CWAs, namely the neighboring optimal resource provisioning algorithm (NORPA) and the global optimal resource provisioning algorithm (GORPA), in order to minimize the operational cost and thereby maximize the revenue of CSPs. We validate the proposed algorithms through comprehensive simulations. The two proposed algorithms are compared against each other to assess their effectiveness, as well as against a commonly used and practically viable round-robin approach. The results demonstrate that NORPA and GORPA outperform the conventional round-robin algorithm by reducing the operational cost by up to 28 % and 57 %, respectively. The low complexity of the proposed cost-aware algorithms allows them to be applied to realistic Cloud-of-Clouds environments in industry as well as academia.
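
To make the cost-aware idea concrete, the following minimal Python sketch contrasts a greedy cheapest-provider heuristic with a round-robin baseline across a hypothetical Cloud-of-Clouds. The provider names, prices, capacities and workloads are illustrative assumptions; this is not the paper's NORPA or GORPA.

```python
# Illustrative comparison of cost-aware vs. round-robin placement across
# multiple cloud providers. Prices, capacities and workloads are hypothetical.
from copy import deepcopy
from itertools import cycle

PROVIDERS = [
    {"name": "csp-a", "price_per_gb_hour": 0.020, "free_capacity_gb": 800},
    {"name": "csp-b", "price_per_gb_hour": 0.015, "free_capacity_gb": 500},
    {"name": "csp-c", "price_per_gb_hour": 0.030, "free_capacity_gb": 1200},
]

def cost_aware_place(workloads_gb, providers):
    """Greedily place each workload on the cheapest provider with spare capacity."""
    total_cost = 0.0
    for size in workloads_gb:
        candidates = [p for p in providers if p["free_capacity_gb"] >= size]
        best = min(candidates, key=lambda p: p["price_per_gb_hour"])
        best["free_capacity_gb"] -= size
        total_cost += size * best["price_per_gb_hour"]
    return total_cost

def round_robin_place(workloads_gb, providers):
    """Baseline: assign workloads to providers in turn, ignoring price."""
    total_cost = 0.0
    for size, p in zip(workloads_gb, cycle(providers)):
        total_cost += size * p["price_per_gb_hour"]
    return total_cost

workloads = [120, 80, 300, 50, 200]
print("cost-aware :", cost_aware_place(workloads, deepcopy(PROVIDERS)))
print("round-robin:", round_robin_place(workloads, deepcopy(PROVIDERS)))
```

Even this toy heuristic yields a lower total cost than round-robin on the same workloads, which is the kind of gap the paper quantifies with its two algorithms.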

2.
Cloud Federation is an emerging computing model in which resources from multiple independent Cloud providers are leveraged to create large-scale distributed virtual computing clusters that operate as a single Cloud organization. This model enables the implementation of environmental diversity for Cloud applications and overcomes the provisioning and scalability limits of a single Cloud, while introducing minimal additional cost for the Cloud consumer. In such a scenario, it is necessary to leverage specific networking technologies that effectively support inter-Cloud communication services between Cloud providers. This paper proposes an interconnection solution for Cloud federations based on publish/subscribe services. Moreover, we discuss some fundamental concerns that must be addressed to satisfy the inter-Cloud communication requirements in terms of reliability and availability. Finally, we present experimental results that highlight key reliability and denial-of-service vulnerability concerns in this domain.
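
As a rough illustration of the publish/subscribe interconnection pattern, here is a minimal in-process broker sketch; a real inter-Cloud federation would rely on a distributed messaging fabric, and the Broker class, topic names and message shapes below are hypothetical.

```python
# Toy in-process publish/subscribe dispatcher illustrating the pattern only;
# an inter-Cloud federation would use a distributed broker, not this class.
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Deliver to every subscriber of the topic; one failing handler should
        # not block the others (a basic availability concern).
        for handler in self._subs[topic]:
            try:
                handler(message)
            except Exception as exc:
                print(f"handler failed on {topic}: {exc}")

broker = Broker()
broker.subscribe("cloud-b/events", lambda m: print("cloud B received", m))
broker.publish("cloud-b/events", {"src": "cloud-a", "type": "vm-migrated"})
```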

3.
Cloud computing is becoming the new-generation computing infrastructure, and many cloud vendors provide different types of cloud services. Choosing the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences, in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records cloud information from multiple mainstream public cloud services in real time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA), offers rational recommendations based on user preferences and practical cloud provisioning, and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
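
The preference-aware evaluation step can be pictured as a weighted scoring of candidate configurations. The sketch below is a simplified stand-in for such a model; the criteria, weights and candidate data are invented for illustration and are not the paper's actual evaluation model.

```python
# Hypothetical preference-aware ranking of candidate cloud configurations.
candidates = [
    {"name": "iaas-small", "compute_score": 0.6, "monthly_cost": 120.0, "sla_uptime": 0.9990},
    {"name": "iaas-large", "compute_score": 0.9, "monthly_cost": 480.0, "sla_uptime": 0.9995},
    {"name": "paas-managed", "compute_score": 0.7, "monthly_cost": 260.0, "sla_uptime": 0.9999},
]

def normalize(values, higher_is_better=True):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def rank(candidates, weights):
    """Score each candidate as a weighted sum of normalized criteria."""
    compute = normalize([c["compute_score"] for c in candidates])
    cost = normalize([c["monthly_cost"] for c in candidates], higher_is_better=False)
    sla = normalize([c["sla_uptime"] for c in candidates])
    scored = []
    for c, comp, co, s in zip(candidates, compute, cost, sla):
        score = weights["compute"] * comp + weights["cost"] * co + weights["sla"] * s
        scored.append((score, c["name"]))
    return sorted(scored, reverse=True)

# A cost-sensitive application provider weights cost most heavily.
print(rank(candidates, {"compute": 0.3, "cost": 0.5, "sla": 0.2}))
```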

4.
Cloud services are on-demand services provided to end-users over the Internet and hosted by cloud service providers. A cloud service consists of a set of interacting applications/processes running on one or more interconnected VMs. Organizations are increasingly using cloud services as a cost-effective means of outsourcing their IT departments. However, cloud service availability is not guaranteed by cloud service providers, especially in the event of anomalous circumstances that spontaneously disrupt availability, including natural disasters, power failures, and cybersecurity attacks. In this paper, we propose a framework for developing intelligent systems that can monitor and migrate cloud services to maximize their availability in case of cloud disruption. The framework connects an autonomic computing agent to the cloud to automatically migrate cloud services based on anticipated cloud disruption. The autonomic agent employs a modular design to facilitate the incorporation of different techniques for deciding when to migrate cloud services, which cloud services to migrate, and where to migrate the selected cloud services. We incorporated a virtual machine selection algorithm that decides which cloud services to migrate so as to maximize the availability of high-priority services during migration under time and network bandwidth constraints. We implemented the framework and conducted experiments to evaluate the performance of the underlying techniques. Based on the experiments, the use of this framework results in less downtime due to migration, thereby reducing cloud service disruption.
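
One way to read the selection step is as a priority-constrained packing problem: choose the VMs to move within a migration window and bandwidth budget, favoring high-priority services. The greedy sketch below is a hypothetical stand-in, not the paper's actual selection algorithm.

```python
# Hypothetical greedy selection of VMs to migrate within a time/bandwidth budget,
# preferring high-priority services; all parameters and data are illustrative.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    priority: int     # higher value = more important service
    image_gb: float   # amount of data to transfer

def select_vms(vms, window_s, bandwidth_gbps):
    budget_gb = window_s * bandwidth_gbps / 8.0   # Gbit/s -> GB transferable in the window
    selected, used = [], 0.0
    # Consider high-priority VMs first; break ties by smaller transfer size.
    for vm in sorted(vms, key=lambda v: (-v.priority, v.image_gb)):
        if used + vm.image_gb <= budget_gb:
            selected.append(vm.name)
            used += vm.image_gb
    return selected

vms = [VM("db-primary", 3, 40.0), VM("web-1", 2, 12.0),
       VM("web-2", 2, 12.0), VM("batch-job", 1, 80.0)]
# A 5-minute window over a 2 Gbps link gives roughly a 75 GB budget.
print(select_vms(vms, window_s=300, bandwidth_gbps=2.0))
```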

5.
Cloud computing, an on-demand computation model consisting of large data centers (Clouds) managed by cloud providers, offers storage and computation to cloud users on the basis of service level agreements (SLAs). Services in cloud computing are offered at relatively low cost, so the model is a great target for many applications, such as startup businesses and e-commerce applications. The area of cloud computing has grown rapidly in the last few years; yet it still faces some obstacles. For example, there is a lack of mechanisms that assure cloud users that the quality they actually receive matches the quality of service specified in their SLAs. Another example is the concern over security, privacy and trust, since users lose control over their data and programs once these are sent to cloud providers. In this paper, we introduce a new architecture that aids the design and implementation of attestation services. These services monitor cloud-based applications to ensure software qualities such as security, privacy, trust and usability. Our approach is user-centric, giving users more control over their own data/applications. Further, it is a cloud-based approach that utilizes the power of the clouds. Simulation results show that many services can be designed based on our architecture, with limited performance overhead.

6.
7.
Recently, there has been a significant increase in the use of cloud-based services offered in software as a service (SaaS) models by SaaS providers, and irregular access by different users to these cloud services leads to fluctuations in workload demand. It is difficult to determine the suitable amount of resources required to run cloud services in response to the varying workloads, and this may lead to the undesirable states of over-provisioning and under-provisioning. In this paper, we address improvements to resource provisioning for cloud services by proposing an autonomic resource provisioning approach based on the concept of the monitor-analyze-plan-execute (MAPE) control loop, and we design a resource provisioning framework for cloud environments. The experimental results show that the proposed approach reduces the total cost by up to 35 % and the number of service level agreement (SLA) violations by up to 40 %, and increases resource utilization by up to 25 % compared with other approaches.
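
For intuition, a MAPE-style autoscaler can be reduced to a loop that observes a load metric, decides whether to scale, plans a target pool size and applies it. The sketch below uses a random metric source and made-up thresholds; it only illustrates the loop structure, not the paper's controller.

```python
# Minimal MAPE-style autoscaling loop; thresholds, metric source and scaling
# actions are illustrative assumptions.
import random
import time

def monitor():
    """Stand-in for a real metrics source (e.g. average CPU utilization)."""
    return random.uniform(0.2, 0.95)

def analyze(utilization, high=0.80, low=0.30):
    if utilization > high:
        return "scale_out"
    if utilization < low:
        return "scale_in"
    return "steady"

def plan(decision, current_vms, min_vms=1, max_vms=10):
    if decision == "scale_out":
        return min(current_vms + 1, max_vms)
    if decision == "scale_in":
        return max(current_vms - 1, min_vms)
    return current_vms

def execute(current_vms, target_vms):
    if target_vms != current_vms:
        print(f"resizing pool: {current_vms} -> {target_vms} VMs")
    return target_vms

vms = 2
for _ in range(5):                 # a few control iterations for illustration
    util = monitor()
    vms = execute(vms, plan(analyze(util), vms))
    time.sleep(0.1)                # real controllers use a much longer interval
```

In practice the analyze and plan phases would also weigh SLA penalties against resource prices, which is where the reported cost and SLA-violation reductions come from.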

8.
Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, and the deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that handles the available resources during execution time and ensures the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to determine which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resources can be changed on the fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client, by ensuring quality of service, and the provider, by ensuring the best use of resources at a fair price.
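
The idea of changing an allocation on the fly with a matching price change can be sketched as below; the resource tiers, prices and SLA threshold are invented for illustration and are not taken from the paper.

```python
# Hypothetical on-the-fly vertical scaling with a matching price adjustment.
TIERS = {          # vCPUs -> price per hour (illustrative values)
    1: 0.05,
    2: 0.09,
    4: 0.17,
    8: 0.32,
}

class Allocation:
    def __init__(self, vcpus=1):
        self.vcpus = vcpus

    @property
    def price_per_hour(self):
        return TIERS[self.vcpus]

    def rescale(self, response_time_ms, sla_ms=200):
        """Grow when the SLA is violated, shrink when there is ample headroom."""
        sizes = sorted(TIERS)
        idx = sizes.index(self.vcpus)
        if response_time_ms > sla_ms and idx < len(sizes) - 1:
            self.vcpus = sizes[idx + 1]
        elif response_time_ms < 0.5 * sla_ms and idx > 0:
            self.vcpus = sizes[idx - 1]
        return self.vcpus, self.price_per_hour

alloc = Allocation(vcpus=2)
print(alloc.rescale(response_time_ms=350))   # SLA violated -> scale up, new price
print(alloc.rescale(response_time_ms=80))    # plenty of headroom -> scale down
```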

9.
Cluster Computing - Cloud computing provides effective ways to rapidly provision computing resources over the Internet. For a better management of resource provisioning, the system requires to...

10.
The pay-as-you-go pricing model and the illusion of unlimited resources in the Cloud motivate the idea of provisioning services elastically. Elastic provisioning of services allocates/de-allocates resources dynamically in response to changes in the workload. It minimizes the service provisioning cost while maintaining the desired service level objectives (SLOs). Model-predictive control is often used in building such elasticity controllers that dynamically provision resources. However, they need to be trained, either online or offline, before they can make accurate scaling decisions. The training process involves a tedious and significant amount of work as well as some expertise, especially when the model has many dimensions and the training granularity is fine, which has proved essential for building an accurate elasticity controller. In this paper, we present OnlineElastMan, a self-trained proactive elasticity manager for cloud-based storage services. It automatically evolves while serving the workload. Experiments using OnlineElastMan with Cassandra indicate that OnlineElastMan continuously improves its provisioning accuracy, i.e., minimizes provisioning cost and SLO violations, under various workload patterns.
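
A toy version of a self-training proactive scaler: an online per-node capacity estimate is refined from observed SLO compliance and then used to size the cluster for a predicted workload. The capacity model, learning rate and SLO value are illustrative assumptions, not OnlineElastMan's actual multidimensional model.

```python
# Hypothetical self-training proactive scaler: refine a per-node capacity
# estimate online, then size the storage cluster for the predicted load.
import math

class OnlineScaler:
    def __init__(self, per_node_capacity_guess=1500.0, lr=0.05):
        self.capacity = per_node_capacity_guess   # requests/s one node can serve
        self.lr = lr

    def update(self, observed_rps, nodes, latency_ms, slo_ms=10.0):
        """Refine the capacity estimate from observed SLO compliance."""
        served_per_node = observed_rps / max(nodes, 1)
        if latency_ms > slo_ms and 0.9 * served_per_node < self.capacity:
            # Overloaded: true per-node capacity is below what was just served.
            self.capacity += self.lr * (0.9 * served_per_node - self.capacity)
        elif latency_ms <= slo_ms and served_per_node > self.capacity:
            # Comfortably served more than expected: raise the estimate.
            self.capacity += self.lr * (served_per_node - self.capacity)

    def plan(self, predicted_rps, headroom=1.2):
        """Proactively choose a node count for the predicted request rate."""
        return max(1, math.ceil(predicted_rps * headroom / self.capacity))

scaler = OnlineScaler()
scaler.update(observed_rps=2400, nodes=2, latency_ms=14.0)  # SLO violated at 1200 rps/node
print(scaler.plan(predicted_rps=3000))                      # nodes needed for the forecast
```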

11.
In cloud computing, service providers offer cost-effective and on-demand IT services to service users on the basis of Service Level Agreements (SLAs). However, the effective management of SLAs in cloud computing is essential for service users to ensure that they achieve the desired outcomes from the formed service. In this paper, we introduce an SLA management framework that enables service users to select the best available service provider on the basis of its reputation and then monitor the runtime performance of the service provider to determine whether or not it will fulfill the promises defined in the SLA. Such analysis assists the service user in making an informed decision about the continuation of service with the service provider.

12.
The performance of mobile devices, including smartphones and laptops, is steadily rising as prices fall sharply. As a result, mobile devices are changing from being a mere interface for requesting services to becoming computing resources that provide and share services, thanks to their greatly improved performance. With the increasing number of mobile device users, the utilization rate of SNS (Social Networking Service) is also soaring. Applying SNS to the existing computing environment enables members of a social network to share computing services without further authentication. To use a mobile device as a computing resource, temporary network disconnections caused by user mobility and various HW/SW faults that disrupt services must be considered, and these issues must be resolved to support mobile users and to meet user requirements for services. Accordingly, we propose fault tolerance and QoS (Quality of Service) scheduling using a CAN (Content Addressable Network) in Mobile Social Cloud Computing (MSCC). MSCC is a computing environment that integrates social-network-based cloud computing and mobile devices. In this environment, a mobile user can, through mobile devices, become a member of a social network based on real-world relationships. Members of a social network share cloud services or data with other members without further authentication by using their mobile devices. We use a CAN as the underlying structure of MSCC to logically manage the locations of mobile devices. The fault tolerance and QoS scheduling consists of four sub-scheduling algorithms: malicious-user filtering, cloud service delivery, QoS provisioning, and replication and load-balancing. Under the proposed scheduling, a mobile device is used as a resource for providing cloud services, faults caused by user mobility or other reasons are tolerated, and user requirements for QoS are considered. We simulate the scheduling both with and without a CAN. The simulation results show that the proposed scheduling algorithm improves cloud service execution time, finish time and reliability, and reduces the cloud service error rate.
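
To illustrate the role of the CAN overlay, the toy sketch below hashes device identifiers onto a 2-D coordinate space divided into fixed zones; the hashing scheme and fixed grid are simplifying assumptions, not the paper's actual overlay management.

```python
# Toy 2-D CAN-style mapping of device identifiers to zones owned by peers.
import hashlib

GRID = 4  # the unit square is split into a GRID x GRID set of zones

def to_point(key: str):
    """Hash a key to a point in [0,1) x [0,1)."""
    digest = hashlib.sha256(key.encode()).digest()
    x = int.from_bytes(digest[:8], "big") / 2**64
    y = int.from_bytes(digest[8:16], "big") / 2**64
    return x, y

def owner_zone(key: str):
    """Return the (row, col) zone responsible for this key."""
    x, y = to_point(key)
    return int(y * GRID), int(x * GRID)

# Each mobile device (or the service data it hosts) is registered under a key;
# lookups route to the peer that owns the corresponding zone.
for device in ["alice-phone", "bob-laptop", "carol-tablet"]:
    print(device, "->", owner_zone(device))
```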

13.
The delivery of scalable, rich multimedia applications and services on the Internet requires sophisticated technologies for transcoding, distributing, and streaming content. Cloud computing provides an infrastructure for such technologies, but specific challenges still remain in the areas of task management, load balancing, and fault tolerance. To address these issues, we propose a cloud-based distributed multimedia streaming service (CloudDMSS), which is designed to run on all major cloud computing services. CloudDMSS is highly adapted to the structure and policies of Hadoop, and thus has additional capacities for transcoding, task distribution, load balancing, and content replication and distribution. To satisfy the design requirements of our service architecture, we propose four important algorithms: content replication, system recovery for Hadoop distributed multimedia streaming, cloud multimedia management, and streaming resource-based connection (SRC) for streaming job distribution. To evaluate the proposed system, we conducted several performance tests on a local testbed: transcoding, streaming job distribution using SRC, streaming service deployment, and robustness to data node and task failures. In addition, we performed three tests in an actual cloud computing environment, Cloudit 2.0: transcoding, streaming job distribution using SRC, and streaming service deployment.

14.
Cluster Computing - Cloud computing is a new computation technology that provides services to consumers and businesses. The main idea of Cloud computing is to present software and hardware services...

15.
Cloud computing took a step forward in the efficient use of hardware through virtualization technology, and as a result the cloud brings evident benefits for both users and providers. While users can acquire computational resources on demand elastically, cloud vendors can maximally utilize the investment costs of their data center infrastructure. In the Internet era, the number of appliances and services migrated to the cloud environment increases exponentially. This leads to the expansion of data centers, which become bigger and bigger. Moreover, these data centers must have a highly elastic architecture in order to serve the huge upsurge of tasks and balance energy consumption. Although many recent research works have dealt with a finite-capacity single job queue in data centers, the multiple finite-capacity queues architecture has received less attention. In reality, the multiple queues architecture is widely used in large data centers. In this paper, we propose a novel three-state model for cloud servers. The model is deployed in both single and multiple finite-capacity queues. We also bring forward several strategies to control multiple queues at the same time. This approach allows the service waiting time for jobs to be reduced and the service capability of the whole system to be managed elastically. We use CloudSim to simulate the cloud environment and carry out experiments to demonstrate the operability and effectiveness of the proposed method and strategies. The power consumption is also evaluated to provide insights into system performance with respect to the performance-energy trade-off.
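
A rough feel for the setup can be obtained from the discrete-time toy simulation below, which feeds bursty arrivals into several finite-capacity queues in front of servers that switch between sleep, idle and busy states. The arrival pattern, capacities and state-transition rules are simplifying assumptions, not the paper's analytical model (which the authors evaluate with CloudSim).

```python
# Toy simulation of multiple finite-capacity queues with three-state servers.
import random

random.seed(7)

class Server:
    def __init__(self, queue_capacity=5):
        self.queue = []
        self.capacity = queue_capacity
        self.state = "sleep"          # sleep -> idle -> busy

    def offer(self, job):
        """Accept a job if the finite queue has room; otherwise reject it."""
        if len(self.queue) >= self.capacity:
            return False
        self.queue.append(job)
        if self.state == "sleep":
            self.state = "idle"       # waking up would cost time in a real model
        return True

    def tick(self):
        if self.queue:
            self.state = "busy"
            self.queue.pop(0)         # serve one job per time slot
        elif self.state == "busy":
            self.state = "idle"
        else:
            self.state = "sleep"      # power down when there is no work

servers = [Server(queue_capacity=5) for _ in range(3)]
rejected = 0
for t in range(200):
    for job in range(random.randint(0, 6)):                  # bursty arrivals
        target = min(servers, key=lambda s: len(s.queue))     # join shortest queue
        if not target.offer((t, job)):
            rejected += 1
    for s in servers:
        s.tick()

print("rejected jobs:", rejected)
```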

16.
Because ecosystems are complex, tradeoffs exist among the supplies of multiple ecosystem services, especially between provisioning and regulating services. In ecosystem processes, net primary production (NPP) is connected with many other processes, such as respiration and evapotranspiration. As one key supporting service, NPP is also related to other provisioning and regulating services. This study introduces an analysis framework for ecosystem service tradeoffs from the perspective of the varying share of NPP, applied to the alpine grassland ecosystem of Damxung County on the Tibetan plateau, China. Total NPP was divided into the share spent on supplying provisioning services and the share used in supporting regulating services. Tradeoffs between provisioning and regulating services were analyzed by quantifying the change in the meat provisioning service and the remaining share of NPP used in other ways; the corresponding change in the share of NPP used to support regulating services was also analyzed and compared with other changes in regulating services, such as carbon sequestration and water conservation services. The results show that, from 2000 to 2010, the meat provisioning service increased by 199%, but this came at the cost of additional livestock feeding, which consumed more of the alpine grassland ecosystem's NPP. As a result, by 2010 the remaining NPP used for supporting regulating services had shrunk to 77% of the 2000 level, accompanied by decreases in carbon sequestration and water conservation services of 90% and 67%, respectively. Analyzing tradeoffs from the perspective of variations in the share of NPP used for various services will contribute to the study of the mechanisms involved in providing ecosystem services and the interactions between the provisioning of various services, and will also help land managers improve the management of ecosystems.

17.
Mobile Cloud Computing (MCC) is broadening the ubiquitous market for mobile devices. Because of the hardware limitations of mobile devices, heavy computing tasks should be processed by service images (SIs) on the cloud. Due to the scalability and mobility of users and services, dynamic resource demands and time-varying network conditions, SIs must be relocated to adapt to the new circumstances. In this paper, we formulate the SI placement as an optimization problem that minimizes the communication cost subject to resource demand constraints. We then propose a real-time SI placement scheme that includes two sequential stages, clustering/filtering and condensed placement, to solve the formulated problem. The former omits infeasible slots prior to placement in order to reduce computational complexity. The latter performs the SI placement through a novel condensed solution. The numerical results show that our solution converges to the global optimum with a negligible gap while executing much faster than the exhaustive search method. This improvement enables real-time services, especially in an MCC environment.
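
The two-stage idea, filter out infeasible slots and then place where the communication cost is lowest, can be sketched as follows; the slot data and the latency-times-traffic cost are illustrative assumptions rather than the paper's formulation.

```python
# Hypothetical two-stage SI placement: filter infeasible slots, then pick the
# feasible slot with the lowest communication cost. Data and cost model are
# illustrative only.
slots = [
    {"host": "edge-1", "free_cpu": 4,  "free_mem_gb": 8,  "latency_ms_to_users": 12},
    {"host": "edge-2", "free_cpu": 2,  "free_mem_gb": 4,  "latency_ms_to_users": 7},
    {"host": "core-1", "free_cpu": 16, "free_mem_gb": 64, "latency_ms_to_users": 48},
]

def place_service_image(demand_cpu, demand_mem_gb, traffic_mb_s):
    # Stage 1: clustering/filtering stand-in -- drop infeasible slots early.
    feasible = [s for s in slots
                if s["free_cpu"] >= demand_cpu and s["free_mem_gb"] >= demand_mem_gb]
    if not feasible:
        return None
    # Stage 2: condensed placement stand-in -- minimize a simple communication
    # cost (user-facing latency weighted by expected traffic).
    return min(feasible, key=lambda s: s["latency_ms_to_users"] * traffic_mb_s)

print(place_service_image(demand_cpu=3, demand_mem_gb=6, traffic_mb_s=20))
```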

18.
The cloud computing environment came about in order to effectively manage and use the enormous amounts of data that have become available with the development of the Internet. Cloud computing services are widely used not only to manage users’ IT resources, but also to use enterprise IT resources in an effective manner. Various security threats arise while using cloud computing, and response plans are much needed, since these threats eventually escalate into security threats to enterprise information. This research proposes plans to strengthen the security of enterprise information by using cloud security. These cloud computing security measures must be supported by governmental policies. Published guidelines on information protection will raise awareness among users and service providers. A response system must be created in order to constantly monitor and promptly respond to any security incident. Therefore, both technical countermeasures and governmental policy must be supported at the same time. Cloud computing services are expanding more than ever, so active research on cloud computing security is expected.

19.
The ecosystem service concept is becoming more and more acknowledged in science and decision-making, resulting in several applications in different case studies and in environmental management, but it is still developing in terms of definitions, typologies and an understanding of its complexity. By qualitatively examining the interrelations between ecosystem properties, ecosystem integrity, biodiversity, ecosystem services and human well-being, the mutual influences on each constituent of the ‘ecosystem service cascade’ are illuminated, providing an impulse for further discussion and improvements toward a better understanding of the complexity of human–environmental systems. Among the results of the theoretical analysis is the assumption that provisioning services exclude or compete with each other, while biodiversity was found to play a supporting role for regulating services and cultural services. Ecosystem services meet the criteria for adequate human–environmental system indicators and are therefore an appropriate instrument for decision-making and management.

20.
Ecosystem services provide an intuitive way to understand the trade-offs associated with natural resource management. However, despite their apparent usefulness, several hurdles have prevented ecosystem services from becoming deeply embedded in environmental decision-making. Ecosystem service studies vary widely in focal services, geographic extent, and in methods for defining and measuring services. Dissent among scientists on basic terminology and approaches to evaluating ecosystem services creates difficulties for those trying to incorporate ecosystem services into decision-making. To facilitate clearer comparison among recent studies, we provide a synthesis of common terminology and explain a rationale and framework for distinguishing among the components of ecosystem service delivery, including: an ecosystem's capacity to produce services; ecological pressures that interfere with an ecosystem's ability to provide the service; societal demand for the service; and the flow of the service to people. We discuss how the interpretation and measurement of these four components can differ among provisioning, regulating, and cultural services. Our flexible framework treats service capacity, ecological pressure, demand, and flow as separate but interactive entities to improve our ability to evaluate the sustainability of service provision and to help guide management decisions. We consider ecosystem service provision to be sustainable when demand is met without decreasing the capacity for future provision of that service or causing undesirable declines in other services. When ecosystem service demand exceeds the ecosystem's capacity to provide services, society can choose to enhance natural capacity, decrease demand and/or ecological pressure, or invest in a technological substitute. Because regulating services are frequently overlooked in environmental assessments, we provide a more detailed examination of regulating services and propose a novel method for quantifying the flow of regulating services based on estimates of ecological work. We anticipate that our synthesis and framework will reduce inconsistency and facilitate coherence across analyses of ecosystem services, thereby increasing their utility in environmental decision-making.
