Similar Literature
20 similar documents found.
1.
The performance of mobile devices, including smartphones and laptops, is steadily rising as prices plummet. Mobile devices are therefore changing from mere interfaces for requesting services into computing resources that provide and share services, owing to their greatly improved performance. With the increasing number of mobile device users, the utilization rate of SNS (Social Networking Service) is also soaring. Applying SNS to the existing computing environment enables members of a social network to share computing services without further authentication. To use a mobile device as a computing resource, temporary network disconnection caused by user mobility and the various HW/SW faults that cause service disruption must be considered; these issues must be resolved to support mobile users and to meet user requirements for services. Accordingly, we propose fault tolerance and QoS (Quality of Service) scheduling using CAN (Content Addressable Network) in Mobile Social Cloud Computing (MSCC). MSCC is a computing environment that integrates social network-based cloud computing and mobile devices. In this environment, a mobile user can, through mobile devices, become a member of a social network based on real-world relationships, and members of a social network share cloud services or data with other members without further authentication by using their mobile devices. We use CAN as the underlying structure of MSCC to logically manage the locations of mobile devices. The fault tolerance and QoS scheduling consists of four sub-scheduling algorithms: malicious-user filtering, cloud service delivery, QoS provisioning, and replication and load balancing. Under the proposed scheduling, a mobile device is used as a resource for providing cloud services, faults caused by user mobility or other reasons are tolerated, and user requirements for QoS are considered. We simulate scheduling both with and without CAN. The simulation results show that the proposed scheduling algorithm improves cloud service execution time, finish time, and reliability, and reduces the cloud service error rate.
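As an illustration only, the sketch below shows one way a CAN-style overlay could hash device identifiers to points in a 2-D coordinate space and how a malicious-user filter might drop low-trust devices before scheduling; the dimension size, trust scores, and threshold are all assumptions, not the paper's algorithms.

```python
import hashlib

# Hypothetical sketch: map mobile device IDs onto a 2-D CAN coordinate space
# and keep only devices whose trust score meets a threshold. The paper's actual
# CAN zone management and malicious-user filtering are not reproduced here.

def can_coordinates(device_id: str, dim_size: int = 100) -> tuple[int, int]:
    """Hash a device ID to a point in a dim_size x dim_size CAN space."""
    digest = hashlib.sha256(device_id.encode()).digest()
    x = int.from_bytes(digest[:4], "big") % dim_size
    y = int.from_bytes(digest[4:8], "big") % dim_size
    return x, y

def filter_malicious(devices: dict[str, float], min_trust: float = 0.5) -> list[str]:
    """Keep only devices whose accumulated trust score meets the threshold."""
    return [d for d, trust in devices.items() if trust >= min_trust]

if __name__ == "__main__":
    devices = {"phone-A": 0.9, "laptop-B": 0.3, "tablet-C": 0.7}  # assumed scores
    for d in filter_malicious(devices):
        print(d, "->", can_coordinates(d))
```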

2.
Cloud computing should inherently support various types of data-intensive workloads with different storage access patterns, which makes a high-performance storage system an important component of the Cloud. Emerging flash devices such as solid state drives (SSDs) are a viable choice for building high performance computing (HPC) cloud storage systems that address fine-grained data access patterns. However, the price per bit of SSDs is still higher than that of HDDs. This study proposes an optimized progressive file layout (PFL) method that leverages the advantages of SSDs in a parallel file system such as Lustre, so that small-file I/O performance can be significantly improved. A PFL can dynamically adjust chunk sizes and stripe patterns according to varying I/O traffic. Extensive experimental results show that this approach (i.e., building a hybrid storage system based on a combination of SSDs and HDDs) achieves balanced throughput over mixed I/O workloads consisting of large and small file access patterns.
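To make the progressive-layout idea concrete, here is a minimal sketch of a policy that keeps small files (or the first megabyte of every file) on an SSD pool with small chunks and spills larger files onto an HDD pool with wider stripes. The thresholds, pool names, and chunk sizes are illustrative assumptions, not the paper's tuning.

```python
# Hypothetical PFL-style layout policy: SSDs absorb small-file I/O, HDDs take
# the bulk of large files. All constants are assumptions for illustration.

SMALL_FILE_LIMIT = 1 << 20                  # 1 MiB served entirely from SSDs
SSD_CHUNK, HDD_CHUNK = 64 << 10, 4 << 20    # 64 KiB vs 4 MiB chunk sizes

def pfl_layout(file_size: int) -> list[dict]:
    """Return layout components (pool, chunk size, byte range) for a file."""
    if file_size <= SMALL_FILE_LIMIT:
        return [{"pool": "ssd", "chunk": SSD_CHUNK, "range": (0, file_size)}]
    return [
        {"pool": "ssd", "chunk": SSD_CHUNK, "range": (0, SMALL_FILE_LIMIT)},
        {"pool": "hdd", "chunk": HDD_CHUNK, "range": (SMALL_FILE_LIMIT, file_size)},
    ]

print(pfl_layout(256 << 10))   # small file: SSD only
print(pfl_layout(100 << 20))   # large file: SSD head + HDD tail
```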

3.
With the development of ubiquitous computing technology, users increasingly rely on mobile devices to produce and access information. Because of the limited computing capability and storage of these devices, however, mobile cloud computing raises emerging research issues in architecture, design, and implementation. This paper proposes a trust management approach that analyzes user behavioral patterns for reliable mobile cloud computing. To this end, we suggest a method for quantifying a one-dimensional trust relation based on the analysis of telephone call data from mobile devices, and then integrate the inter-user trust relationships into the mobile cloud environment. As a result, the trustworthiness of data in production, management, and overall application is enhanced.
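A minimal sketch of how call-log features could be turned into a single trust value is given below, assuming a weighted combination of call frequency and total duration normalised to [0, 1]; the weights and normalisation caps are assumptions, not the quantification used in the paper.

```python
# Hypothetical one-dimensional trust score derived from a call log between two
# users. All weights and caps below are illustrative assumptions.

def trust_score(calls: list[float], w_freq: float = 0.5, w_dur: float = 0.5,
                max_calls: int = 50, max_minutes: float = 300.0) -> float:
    """calls: durations (minutes) of calls between user A and user B."""
    freq_part = min(len(calls) / max_calls, 1.0)    # how often they talk
    dur_part = min(sum(calls) / max_minutes, 1.0)   # how long they talk
    return w_freq * freq_part + w_dur * dur_part

print(trust_score([3.0, 10.5, 7.2, 1.0]))   # occasional contact -> lower trust
print(trust_score([5.0] * 60))              # frequent contact  -> trust near 1
```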

4.
Taking advantage of distributed storage technology and virtualization technology, cloud storage systems provide virtual machine clients with customizable storage services. They can be divided into two types: distributed file systems and block-level storage systems. Existing block-level storage systems have two disadvantages. First, some of them are tightly coupled with their cloud computing environments, so it is hard to extend them to support other cloud computing platforms. Second, the bottleneck of a single volume server seriously affects the performance and reliability of the whole system. In this paper we present ORTHRUS, a lightweight block-level storage system for clouds based on virtualization technology. We first design an architecture with multiple volume servers and its workflows, which improves system performance and avoids the single-server bottleneck. Second, we propose a Listen-Detect-Switch mechanism for ORTHRUS to deal with contingent volume server failures. Finally, we design a strategy that dynamically balances load between multiple volume servers: we characterize machine capability and load quantity with a black-box model and implement a dynamic load balancing strategy based on a genetic algorithm. Extensive experimental results show that the aggregated I/O throughput of ORTHRUS is significantly improved (by approximately a factor of two), and that both I/O throughput and IOPS are further improved (by about 1.8 and 1.2 times, respectively) by the dynamic load balancing strategy.
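The sketch below shows one way a Listen-Detect-Switch style failover could be structured: listen to volume-server heartbeats, detect servers whose heartbeat has gone stale, and switch their volumes to the least-loaded survivor. The timeout, load metric, and class layout are assumptions for illustration, not the ORTHRUS implementation.

```python
import time

# Hypothetical Listen-Detect-Switch failover among multiple volume servers.

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat => considered failed (assumed)

class VolumeServerMonitor:
    def __init__(self):
        self.last_beat: dict[str, float] = {}   # server -> last heartbeat time
        self.load: dict[str, int] = {}          # server -> number of hosted volumes

    def listen(self, server: str, volumes: int) -> None:
        """Record a heartbeat and the server's current load."""
        self.last_beat[server] = time.time()
        self.load[server] = volumes

    def detect_failed(self) -> list[str]:
        """Return servers whose heartbeat is older than the timeout."""
        now = time.time()
        return [s for s, t in self.last_beat.items() if now - t > HEARTBEAT_TIMEOUT]

    def switch(self, failed: str) -> str:
        """Move the failed server's volumes to the least-loaded survivor."""
        survivors = {s: l for s, l in self.load.items() if s != failed}
        target = min(survivors, key=survivors.get)
        survivors[target] += self.load.pop(failed, 0)
        self.last_beat.pop(failed, None)
        self.load = survivors
        return target
```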

5.
Cloud computing, an on-demand computation model that consists of large data centers (Clouds) managed by cloud providers, offers storage and computation to cloud users based on service level agreements (SLAs). Services in cloud computing are offered at relatively low cost, so the model is an attractive target for many applications, such as startup businesses and e-commerce applications. The area of cloud computing has grown rapidly in the last few years, yet it still faces some obstacles. For example, there is a lack of mechanisms that guarantee to cloud users that the quality they are actually getting matches the quality of service specified in their SLAs. Another example is the concern over security, privacy, and trust, since users lose control over their data and programs once they are sent to cloud providers. In this paper, we introduce a new architecture that aids the design and implementation of attestation services. The services monitor cloud-based applications to ensure software quality, such as security, privacy, trust, and usability. Our approach is user-centric, giving users more control over their own data and applications, and it is cloud-based, utilizing the power of the clouds themselves. Simulation results show that many services can be designed on top of our architecture with limited performance overhead.

6.
The demand for cloud computing is increasing dramatically due to the high computational requirements of business, social, web, and scientific applications. Nowadays, applications and services are hosted in the cloud in order to reduce the costs of hardware, software, and maintenance. To satisfy this high demand, the number of large-scale data centers has increased, which consumes a high volume of electrical power, has a negative impact on the environment, and incurs high operational costs. In this paper, we discuss many ongoing or implemented energy-aware resource allocation techniques for cloud environments. We also present a comprehensive review of the different energy-aware resource allocation and selection algorithms for virtual machines in the cloud. Finally, we identify further research issues and challenges for future cloud environments.

7.
In this paper we present SNUAGE, a platform-as-a-service security framework for building secure and scalable multi-layered services based on the cloud computing model. SNUAGE ensures the authenticity, integrity, and confidentiality of data communication over the network links by creating a set of security associations between the data-bound components on the presentation layer and their respective data sources on the data persistence layer. SNUAGE encapsulates the security procedures, policies, and mechanisms in these security associations at the service development stage to form a collection of isolated and protected security domains. The secure communication among the entities in one security domain is governed and controlled by a standalone security processor and policy attached to this domain. This results in: (1) a safer data delivery mechanism that prevents security vulnerabilities in one domain from spreading to the other domains and controls the inter-domain information flow to protect the privacy of network data; (2) a reusable security framework that can be employed in existing platform-as-a-service environments and across diverse cloud computing service models; and (3) an increase in productivity and in the delivery of reliable and secure cloud computing services, supported by a transparent programming model that relieves application developers from the intricate details of security programming. Last but not least, SNUAGE contributes a major enhancement in the energy consumption and performance of supported cloud services by providing a suitable execution container in its protected security domains for a wide suite of energy- and performance-efficient cryptographic constructs, such as those adopted by policy-driven and content-based security protocols. An energy analysis of the system shows, via real energy measurements, major savings in energy consumption on consumer devices as well as on cloud servers. Moreover, a sample implementation of the presented security framework was developed using Java and deployed and tested in a real cloud computing infrastructure using the Google App Engine service platform. Performance benchmarks show that the proposed framework provides a significant throughput enhancement compared to traditional network security protocols such as the Secure Sockets Layer and the Transport Layer Security protocols.

8.
With the continued growth in software environments on cloud application platforms, self-management at the Platform-as-a-Service (PaaS) level has become a pressing concern, and the run-time monitoring, analysis and detection of critical situations are all fundamental requirements if we are to achieve autonomic behaviour in complex PaaS environments. In this paper we focus on cloud application platforms offering their customers a range of generic built-in reusable services. By identifying key characteristics of these complex dynamic systems, we compare cloud application platforms to distributed sensor networks, and investigate the viability of exploiting these similarities with a case study. We treat cloud data storage services as "virtual" sensors constantly emitting monitoring data, such as numbers of connections and storage space availability, which are then analysed by the central component of a monitoring framework so as to detect and react to SLA violations. We discuss the potential benefits, as well as some shortcomings, of adopting this approach.
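A minimal sketch of the "virtual sensor" view is given below: each storage service emits readings (open connections, free space) and a central analyser flags SLA violations. The metric names and thresholds are assumptions chosen for illustration, not the framework's actual configuration.

```python
# Hypothetical central analyser over "virtual sensor" readings from cloud
# storage services. Thresholds and metric names are illustrative assumptions.

SLA_LIMITS = {"connections": 1000, "free_space_gb": 50}   # assumed violation limits

def check_sla(readings: dict[str, dict[str, float]]) -> list[str]:
    """Return human-readable SLA violation messages for all monitored services."""
    violations = []
    for service, metrics in readings.items():
        if metrics["connections"] > SLA_LIMITS["connections"]:
            violations.append(f"{service}: too many connections ({metrics['connections']})")
        if metrics["free_space_gb"] < SLA_LIMITS["free_space_gb"]:
            violations.append(f"{service}: low free space ({metrics['free_space_gb']} GB)")
    return violations

readings = {
    "blobstore-eu": {"connections": 1200, "free_space_gb": 300},
    "blobstore-us": {"connections": 400, "free_space_gb": 20},
}
for msg in check_sla(readings):
    print(msg)
```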

9.

In recent years, cloud computing has emerged as a technology for sharing resources with users. Because cloud computing is on-demand, efficient use of resources such as memory, processors, and bandwidth is a big challenge. Despite its advantages, cloud computing is sometimes not the proper choice because of its delay in responding to requests, which led to the need for another technology called fog computing. Fog computing reduces traffic and time lags by extending cloud services into the network, closer to users; it can schedule resources with higher efficiency and utilize them in ways that dramatically improve the user's experience. This paper surveys studies on scheduling in fog/cloud computing environments, focusing on work published in journals or conferences between 2015 and 2021. We selected 71 studies from four major scientific databases in a systematic literature review (SLR) based on their relevance to this paper. We classified these studies into five categories according to their traced parameters and focus area: (1) performance, (2) energy efficiency, (3) resource utilization, (4) performance and energy efficiency, and (5) performance and resource utilization simultaneously. Of the studies, 42.3% focused on performance, 9.9% on energy efficiency, 7.0% on resource utilization, 21.1% on both performance and energy efficiency, and 19.7% on both performance and resource utilization. Finally, we present challenges and open issues in resource scheduling methods for fog/cloud computing environments.


10.
To deal with environmental heterogeneity, information providers in the domain of pervasive computing usually offer access to their data by publishing Web services. Therefore, to support applications that need to combine data from a diverse range of sources, pervasive computing requires a middleware that can query multiple Web services. Existing work has investigated the generation of optimal query plans. In this paper, however, we propose a query execution model, called PQModel, to optimize the process of query execution over Web services; in other words, we attempt to improve query efficiency by optimizing the execution of query plans.

11.
Public cloud storage auditing with deduplication has been studied in recent years to assure data integrity and improve storage efficiency for cloud storage. In previous schemes, however, the cloud has to store the link between a file and its data owners to support valid data downloading. From this file-owner link, the cloud server can identify which users own the same file, which may expose the sensitive relationships among the data owners of a multi-owner file and seriously harm their privacy. To address this problem, we propose an identity-protected secure auditing and deduplication scheme. In the proposed scheme, the cloud cannot learn any useful information about the relationships of data owners. Unlike existing schemes, the cloud does not need to store the file-owner link to support valid data downloading. Instead, when a user downloads a file, he only needs to anonymously submit a credential to the cloud, and he can download the file only if this credential is valid. Beyond this main contribution, our scheme has the following advantages over existing schemes. First, it achieves constant storage: the storage space is fully independent of the number of data owners possessing the same file. Second, it achieves constant computation: only the first uploader needs to generate the authenticator for each file block, while subsequent owners do not need to generate it again. As a result, our scheme greatly reduces the storage overhead of the cloud and the computation overhead of data owners. The security analysis and experimental results show that our scheme is secure and efficient.

12.
Cloud computing is becoming the new generation of computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging; addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies, and preferences, in addition to the computing requirements. This paper proposes a mechanism for selecting the best public cloud service at the Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) levels. A systematic framework and associated workflow cover cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose: a hierarchical information model for integrating heterogeneous cloud information from different providers, together with a corresponding cloud information collection mechanism; a cloud service classification model for categorizing and filtering cloud services, and an application requirement schema that provides rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed and relevant experiments were conducted. The results show that the proposed system collects, updates, and records cloud information from multiple mainstream public cloud services in real time; generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction; assesses solutions from multiple aspects (e.g., computing capability, potential cost, and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
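To illustrate preference-aware evaluation in its simplest form, the sketch below scores candidate configurations as a weighted sum of normalised criteria and recommends the highest-scoring one. The criteria (capability, cost, SLA), the weights, and the candidate data are assumptions for illustration, not the paper's evaluation model.

```python
# Hypothetical preference-aware scoring of candidate cloud configurations.
# All numbers and criterion names below are illustrative assumptions.

def score(solution: dict[str, float], weights: dict[str, float]) -> float:
    # Cost is a "lower is better" criterion, so invert it before weighting.
    return (weights["capability"] * solution["capability"]
            + weights["sla"] * solution["sla"]
            + weights["cost"] * (1.0 - solution["cost"]))

candidates = {
    "provider-A-medium": {"capability": 0.6, "cost": 0.3, "sla": 0.9},
    "provider-B-large":  {"capability": 0.9, "cost": 0.8, "sla": 0.8},
}
prefs = {"capability": 0.3, "cost": 0.5, "sla": 0.2}   # a cost-sensitive user
best = max(candidates, key=lambda name: score(candidates[name], prefs))
print("recommended:", best)
```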

13.
Cloud computing is founded on the concept of service computing, where everything is a service and computing services are utilities. There are various known services in cloud computing: at the moment, Software as a Service (SaaS), Platform as a Service (PaaS), Hardware/Infrastructure as a Service (HaaS/IaaS), and Database as a Service (DaaS). In this paper, we propose Ontology as a Service (OaaS), an ontology tailoring process offered as a service in the cloud. In particular, we focus on sub-ontology extraction and replacement in the cloud, using the Maximum Extraction method to facilitate this. The UMLS Metathesaurus ontology is used as a walk-through case study to illustrate the proposed method.

14.

Many consumers participate in the smart city via smart portable gadgets such as wearables, personal gadgets, mobile devices, or sensor systems. In the edge computing systems of the IoT in the smart city, a fundamental difficulty is selecting reliable participants: since not all smart IoT gadgets are dedicated, certain devices might deliberately disrupt networks or services and degrade the customer experience. A trust-based Internet of Things (TM-IoT) cloud computing method is proposed in this research. The problem is solved by choosing trustworthy partners to enhance the quality of services of the IoT edge network in smart architectures. A smart device selection recommendation method based on changing networks was developed; the evolutionary concept of games was applied to examine the reliability and durability of the trust management technique presented in this article, and the Lyapunov concept was applied to analyze the reliability and durability of the trust-management system. A real scenario of personal health control systems and air-quality monitoring and assessment in a smart city setting confirmed the efficiency of the trust-management mechanism. Experiments demonstrated that the trust management methodology suggested in this research plays a major part in promoting collaboration among intelligent gadgets in the IoT edge computing system, with an efficiency of 97%. It resists harmful threats against service suppliers more consistently and is well suited to the smart world's massive IoT edge computing systems.
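As a rough illustration of trust-based partner selection (not the paper's game-theoretic or Lyapunov analysis), the sketch below smooths each device's trust score over interaction outcomes and only recommends devices above a threshold; the smoothing factor and threshold are assumptions.

```python
# Hypothetical trust-update rule for IoT edge devices: exponentially weighted
# smoothing over interaction outcomes (1 = good service, 0 = bad). The constants
# are illustrative assumptions, not the paper's trust-management model.

ALPHA = 0.2          # weight of the newest observation (assumed)
MIN_TRUST = 0.6      # devices below this are not selected as partners (assumed)

def update_trust(current: float, outcome: int) -> float:
    """Update a trust score in [0, 1] with the latest interaction outcome."""
    return (1 - ALPHA) * current + ALPHA * outcome

trust = {"sensor-1": 0.8, "wearable-2": 0.8}
trust["sensor-1"] = update_trust(trust["sensor-1"], 1)      # good interaction
trust["wearable-2"] = update_trust(trust["wearable-2"], 0)  # failed interaction
partners = [d for d, t in trust.items() if t >= MIN_TRUST]
print("trusted partners:", partners)
```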


15.
Advances in smart technologies, wireless networking, and the increased interest in services have led to the emergence of ubiquitous and pervasive computing as one of the most promising areas of computing in recent years. Researchers have become especially interested in smart spaces and the significant improvements they can introduce to our lives. Most smart spaces rely on physical components such as sensors to sense and acquire information about the real-world environment and surroundings. Although sensor networks can provide useful contextual information, they are known for their high degree of unreliability and limited resources. We believe it is necessary to augment physical sensors with other kinds of data to create more reliable and truly context-aware smart spaces. In this paper we therefore utilize mobile devices and social networks to acquire more detailed and useful contextual information that can help create smarter spaces. We then propose a smart spaces architecture that utilizes these new contexts, in particular the social context.

16.
With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of limited resource bandwidth. The concept of fog computing has been proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadically distributed resources that are more flexible and movable than those of a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to ensure that resource supporters actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that the proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
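A minimal sketch of such an incentive mechanism is shown below: resource owners earn credits for completed task time on their donated resources and lose credits for SLA violations. The credit and penalty rates are assumptions chosen for illustration, not the paper's incentive formula.

```python
# Hypothetical incentive ledger for a crowd-funded fog resource pool.
# Credit and penalty rates are illustrative assumptions.

CREDIT_PER_CPU_HOUR = 2.0
PENALTY_PER_VIOLATION = 5.0

def settle(owner_ledger: dict[str, float], owner: str,
           cpu_hours: float, sla_violations: int) -> float:
    """Add this round's reward (or penalty) to the owner's credit balance."""
    delta = CREDIT_PER_CPU_HOUR * cpu_hours - PENALTY_PER_VIOLATION * sla_violations
    owner_ledger[owner] = owner_ledger.get(owner, 0.0) + delta
    return owner_ledger[owner]

ledger: dict[str, float] = {}
print(settle(ledger, "edge-node-7", cpu_hours=12.0, sla_violations=0))  # 24.0
print(settle(ledger, "edge-node-9", cpu_hours=3.0, sla_violations=2))   # -4.0
```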

17.
Recently, there has been a significant increase in the use of cloud-based services offered under software as a service (SaaS) models by SaaS providers, and the irregular access of different users to these cloud services leads to fluctuations in workload demand. It is difficult to determine the suitable amount of resources required to run cloud services in response to varying workloads, and this may lead to undesirable states of over-provisioning and under-provisioning. In this paper, we address improvements to resource provisioning for cloud services by proposing an autonomic resource provisioning approach based on the concept of the monitor-analyze-plan-execute (MAPE) control loop, and we design a resource provisioning framework for cloud environments. The experimental results show that, compared with the other approaches, the proposed approach reduces the total cost by up to 35%, reduces the number of service level agreement (SLA) violations by up to 40%, and increases resource utilization by up to 25%.
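To make the MAPE loop concrete, here is a minimal autoscaling sketch: monitor a utilisation sample, analyse it against thresholds, plan a scale-up or scale-down step, and execute it. The thresholds, step size, and print-based "execute" phase are assumptions for illustration, not the paper's provisioning policy.

```python
# Hypothetical MAPE-style autoscaler. Thresholds and step sizes are assumed.

UPPER, LOWER = 0.8, 0.3     # utilisation thresholds for scaling decisions

def analyze(utilisation: float) -> str:
    """Analyze: decide whether to scale up, scale down, or hold steady."""
    if utilisation > UPPER:
        return "scale_up"
    if utilisation < LOWER:
        return "scale_down"
    return "steady"

def plan(action: str, vms: int) -> int:
    """Plan: compute the target number of VMs for the chosen action."""
    if action == "scale_up":
        return vms + 1
    if action == "scale_down":
        return max(1, vms - 1)
    return vms

def mape_iteration(utilisation: float, vms: int) -> int:
    action = analyze(utilisation)                      # Analyze
    target = plan(action, vms)                         # Plan
    print(f"monitor={utilisation:.2f} action={action} vms {vms} -> {target}")
    return target                                      # Execute (apply the target)

vms = 2
for u in [0.85, 0.9, 0.5, 0.2]:                        # Monitor: utilisation samples
    vms = mape_iteration(u, vms)
```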

18.
The emergence of cloud computing has made it an attractive solution for large-scale data processing and storage applications. Cloud infrastructures provide users with remote access to powerful computing capacity, large storage space, and high network bandwidth to deploy various applications. With the support of cloud computing, many large-scale applications have been migrated to cloud infrastructures instead of running on in-house local servers. Among these applications, continuous write applications (CWAs), such as online surveillance systems, can benefit significantly from the flexibility and advantages of cloud computing. However, because of specific characteristics such as continuous data writing and processing and a high demand for data availability, cloud service providers prefer to use sophisticated models for provisioning resources to meet CWAs' demands while minimizing the operational cost of the infrastructure. In this paper, we present a novel architecture of multiple cloud service providers (CSPs), commonly referred to as a Cloud-of-Clouds. Based on this architecture, we propose two operational cost-aware algorithms for provisioning cloud resources for CWAs, namely the neighboring optimal resource provisioning algorithm (NORPA) and the global optimal resource provisioning algorithm (GORPA), in order to minimize the operational cost and thereby maximize the revenue of CSPs. We validate the proposed algorithms through comprehensive simulations, comparing the two algorithms against each other and against a commonly used and practically viable round-robin approach. The results demonstrate that NORPA and GORPA outperform the conventional round-robin algorithm by reducing the operational cost by up to 28% and 57%, respectively. The low complexity of the proposed cost-aware algorithms allows them to be applied to realistic Cloud-of-Clouds environments in industry as well as academia.
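The contrast between round-robin and cost-aware placement can be illustrated with a toy comparison: round-robin spreads VMs evenly across providers, while a greedy cost-aware policy fills the cheapest providers first subject to a capacity limit. The prices, capacities, and greedy rule are assumptions; this does not reproduce the NORPA/GORPA optimisation.

```python
import itertools

# Hypothetical comparison of round-robin vs greedy cost-aware VM placement
# across several cloud providers. All cost figures are illustrative assumptions.

PROVIDERS = {"csp-A": 0.12, "csp-B": 0.09, "csp-C": 0.15}   # $ per VM-hour (assumed)

def round_robin(n_vms: int) -> float:
    """Place VMs cyclically across providers and return the total hourly cost."""
    cycle = itertools.cycle(PROVIDERS)
    return sum(PROVIDERS[next(cycle)] for _ in range(n_vms))

def cost_aware_greedy(n_vms: int, capacity_per_csp: int = 4) -> float:
    """Fill the cheapest providers first, respecting a per-provider capacity."""
    total, remaining = 0.0, n_vms
    for csp, price in sorted(PROVIDERS.items(), key=lambda kv: kv[1]):
        placed = min(capacity_per_csp, remaining)
        total += placed * price
        remaining -= placed
        if remaining == 0:
            break
    return total

print("round-robin cost:", round_robin(8))        # 0.93
print("cost-aware cost: ", cost_aware_greedy(8))  # 0.84
```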

19.

Data transmission and retrieval in a cloud computing environment are usually handled by storage device providers or physical storage units leased from third parties. Improving network performance with respect to power connectivity and resource stability, while ensuring workload balance, is a hot topic in cloud computing. In this research, we address the data duplication problem by providing two dynamic models with two variant architectures, in order to investigate the strengths and shortcomings of these architectures in big data cloud computing networks. The problems of the data duplication process are discussed in detail for each model. Attempts have been made to improve the performance of the cloud network by taking into account and correcting the flaws of previously proposed algorithms. The accuracy of the proposed models has been investigated by simulation. The results indicate that, among all the architectures, the model with a grouped architecture increases the workload balance of the network and decreases the response time to user requests. In addition, the proposed data duplication model with a peer-to-peer network architecture increases cloud network optimality compared with previous models based on the same architecture.


20.
Nowadays, complex smartphone applications are being developed that support gaming, navigation, video editing, augmented reality, and speech recognition, all of which require considerable computational power and battery lifetime. Cloud computing provides a brand-new opportunity for the development of mobile applications: Mobile Hosts (MHs) are provided with data storage and processing services on a cloud computing platform rather than on the MHs themselves. To provide seamless connection and reliable cloud service, we focus on communication. When connections to the cloud server increase explosively, the connection quality of each MH declines, causing several problems such as network delay and retransmission. In this paper, we propose a proxy-based architecture to improve link performance for each MH in mobile cloud computing. With the proposed proxy, an MH does not need to maintain a connection to the cloud server, because it connects only to a proxy in the same subnet. We also propose an optimal access network discovery algorithm to optimize bandwidth usage: when the MH changes its point of attachment, the discovery algorithm helps it connect to the optimal access network for cloud service. Experimental results and analysis show that the proposed connection management method performs better than the 802.11 access method.
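As a simple illustration of the access-network discovery step, the sketch below scores each candidate network visible to a mobile host by a bandwidth-to-latency ratio and attaches to the best one. The candidate list, metrics, and scoring rule are assumptions, not the paper's discovery algorithm.

```python
# Hypothetical optimal access-network selection for a mobile host.
# Candidates, metrics and the scoring rule are illustrative assumptions.

def pick_access_network(candidates: list[dict]) -> dict:
    """candidates: [{'ssid': str, 'bandwidth_mbps': float, 'latency_ms': float}, ...]"""
    return max(candidates, key=lambda c: c["bandwidth_mbps"] / (1.0 + c["latency_ms"]))

candidates = [
    {"ssid": "wifi-office", "bandwidth_mbps": 54.0, "latency_ms": 8.0},
    {"ssid": "wifi-cafe",   "bandwidth_mbps": 20.0, "latency_ms": 3.0},
    {"ssid": "lte-cell",    "bandwidth_mbps": 30.0, "latency_ms": 40.0},
]
print("attach to:", pick_access_network(candidates)["ssid"])
```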
