Similar Documents
20 similar documents found.
1.
Cloud computing provides many kinds of application services for cloud users, but security problems have had a great impact on Software as a Service (SaaS). As a commercial model, SaaS involves different participants who could be malicious or dishonest. This paper presents a Software Service Signature (S3) to deal with several security issues in SaaS and safeguard the interests and rights of all participants. Our design is based on ID-based proxy signatures from pairings. The analysis shows that the proposed scheme can effectively strengthen security through authentication in cloud computing.
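
To make the delegation structure concrete, here is a minimal Python sketch of a proxy-signature workflow (delegate, proxy-sign, verify). The paper's S3 scheme uses ID-based proxy signatures from bilinear pairings; the HMAC construction below is only an illustrative stand-in to show the message flow, not a secure proxy signature scheme, and every key and warrant string is invented.

```python
# Structural sketch of a proxy-signature workflow: delegate -> proxy-sign ->
# verify. HMAC is a stand-in; real S3 uses ID-based pairings and is publicly
# verifiable, whereas here only the master-key holder can verify.
import hmac, hashlib

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

master_key = b"cloud-provider-master-secret"   # hypothetical secret
warrant = b"proxy=SaaS-gateway;rights=invoice-signing;expires=2025-12-31"

# Delegation: the original signer binds the warrant to a delegated key.
delegated_key = sign(master_key, warrant)

# Proxy signing: the proxy signs a service message under the delegated key.
message = b"tenant=42;op=provision;plan=standard"
proxy_sig = sign(delegated_key, warrant + message)

# Verification: recompute the delegated key from the warrant and re-sign.
expected = sign(sign(master_key, warrant), warrant + message)
print(hmac.compare_digest(proxy_sig, expected))  # True
```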

2.
As DNA sequencing outpaces improvements in computer speed, there is a critical need to accelerate tasks like alignment and SNP calling. Crossbow is a cloud-computing software tool that combines the aligner Bowtie and the SNP caller SOAPsnp. Executing in parallel using Hadoop, Crossbow analyzes data comprising 38-fold coverage of the human genome in three hours using a 320-CPU cluster rented from a cloud computing service for about $85. Crossbow is available from .
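
As a rough illustration of the decomposition Crossbow uses, the sketch below maps reads to genome partitions and reduces each partition independently. The align() and call_snps() stubs merely stand in for Bowtie and SOAPsnp; in the real tool these phases run as Hadoop map and reduce tasks, not in-process Python.

```python
# Conceptual skeleton of a Crossbow-style pipeline:
# map = align reads, shuffle = group by genome partition, reduce = call SNPs.
from collections import defaultdict

def align(read):
    """Stub for Bowtie: return a (chromosome, position) for a read."""
    return ("chr1", hash(read) % 1_000_000)

def call_snps(positions):
    """Stub for SOAPsnp: summarize the aligned reads in one partition."""
    return f"{len(positions)} reads piled up"

def crossbow_like(reads, partition_size=100_000):
    # Map phase: align each read and key it by genome partition.
    partitions = defaultdict(list)
    for read in reads:
        chrom, pos = align(read)
        partitions[(chrom, pos // partition_size)].append(pos)
    # Reduce phase: call variants independently per partition.
    return {part: call_snps(p) for part, p in partitions.items()}

print(crossbow_like(["ACGT", "TTAG", "GGCA"]))
```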

3.
MapReduce is a programming model for processing massive amounts of data on cloud computing platforms. MapReduce processes data in two phases and needs to transfer intermediate data among computers between the phases. MapReduce allows programmers to aggregate intermediate data with a function named a combiner before transferring it. By leaving programmers the choice of whether to use a combiner, MapReduce risks performance degradation, because aggregating intermediate data benefits some applications but harms others. Our proposal, the Adaptive Combiner for MapReduce (ACMR), lets MapReduce make this choice automatically and intelligently, obtaining better performance without any intervention from programmers. In experiments on seven applications, MapReduce with ACMR achieved performance comparable to a system configured optimally for each application.
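
The trade-off the abstract describes can be sketched with a small decision rule: sample some map output, measure how much a combiner would shrink it, and enable the combiner only above a threshold. The function names and the 25% threshold below are illustrative assumptions, not ACMR's actual criterion.

```python
# Hypothetical adaptive-combiner decision in the spirit of ACMR.
from collections import defaultdict

def combine(pairs):
    """Pre-aggregate map output locally, as a MapReduce combiner would."""
    acc = defaultdict(int)
    for key, value in pairs:
        acc[key] += value
    return list(acc.items())

def should_use_combiner(sample_pairs, min_reduction=0.25):
    """Enable the combiner only if a sample of map output shrinks enough
    to justify the extra CPU cost of local aggregation."""
    if not sample_pairs:
        return False
    reduction = 1 - len(combine(sample_pairs)) / len(sample_pairs)
    return reduction >= min_reduction

sample = [("the", 1), ("a", 1), ("the", 1), ("the", 1)]
print(should_use_combiner(sample))  # True: 4 pairs collapse to 2
```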

4.
With DNA sequencing now getting cheaper more quickly than data storage or computation, the time may have come for genome informatics to migrate to the cloud.

5.
6.
Nowadays, biomedicine is characterised by a growing need to process large amounts of data in real time. This leads to new requirements for information and communication technologies (ICT). Cloud computing offers a solution to these requirements and provides many advantages, such as cost savings and the elasticity and scalability of ICT use. The aim of this paper is to explore the concept of cloud computing and its use in biomedicine. The authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into infrastructure, platform and service layers, and a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, aspects of cloud computing as a high-end computing paradigm are analysed. Finally, the potential and current use of this technology in biomedical scientific research is discussed.

7.
An increasing number of personal handheld electronic devices (e.g., smartphones, netbooks, MIDs, etc.), which make up personal pervasive computing environments, are playing an important role in our daily lives. Data storage and sharing are difficult for these devices because of data inflation and the natural limitations of mobile devices, such as limited storage space and limited computing capability. Since emerging cloud storage solutions can provide reliable and virtually unlimited storage, they satisfy the requirements of pervasive computing very well. We therefore designed a new cloud storage platform, called “SmartBox”, which includes a series of shadow storage services to address these new data management challenges in pervasive computing environments. In SmartBox, each device is associated with its own shadow storage under a unique account, and the shadow storage acts as a backup center as well as a personal repository when the device is connected. To facilitate file navigation, all datasets in shadow storage are organized by file attributes, which lets users locate files through semantic queries. We implemented a prototype of SmartBox focusing on pervasive environments made up of Internet-accessible devices. Experimental results from the deployments confirm the efficacy of the shadow storage services in SmartBox.
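
A minimal sketch of the attribute-based navigation idea, assuming a SmartBox-like shadow store indexes files by metadata attributes rather than by path; the class and attribute names are invented for illustration.

```python
# Toy attribute-indexed file store supporting simple semantic queries.
class ShadowStore:
    def __init__(self):
        self.files = []  # each entry: (file name, attribute dict)

    def put(self, name, **attrs):
        self.files.append((name, attrs))

    def query(self, **constraints):
        """Return file names whose attributes match every constraint."""
        return [name for name, attrs in self.files
                if all(attrs.get(k) == v for k, v in constraints.items())]

store = ShadowStore()
store.put("trip.jpg", type="photo", year=2010, device="phone")
store.put("notes.txt", type="text", year=2010, device="netbook")
print(store.query(type="photo", year=2010))  # ['trip.jpg']
```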

8.
Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small-molecule databases using cloud computing is proposed. This method, called the hierarchical method for molecular docking, can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on their different effects on the protein–ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem, and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs.
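
For readers unfamiliar with the machinery, here is a toy genetic-algorithm loop of the kind used for docking-style energy minimization. The quadratic energy function and all GA parameters are placeholders; FlexGAsDock's adaptive operators and hierarchical decomposition are not reproduced here.

```python
# Toy GA minimizing a stand-in "interaction energy" over a pose vector.
import random

def energy(pose):
    # Placeholder for a protein-ligand interaction energy term.
    return sum((x - 0.5) ** 2 for x in pose)

def evolve(pop_size=30, genes=6, generations=50, mutation=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mutation:       # point mutation
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = parents + children
    return min(pop, key=energy)

best = evolve()
print(round(energy(best), 4))  # near 0 after convergence
```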

9.
10.
This special issue of the Cluster Computing journal will feature articles that discuss tools and applications for cloud computing. Specifically, it aims at delivering state-of-the-art research on current topics in cloud computing tools, and at promoting the cloud applications discipline by bringing to the community's attention novel problems that must be investigated.

11.
Cloud computing environments (CCEs) are expected to deliver their services with the qualities specified in service level agreements. At the same time, they typically employ virtualization technology to consolidate multiple workloads on the same physical machine, thereby enhancing the overall utilization of physical resources. Most existing virtualization technologies are, however, unaware of the quality of service (QoS) they deliver. For example, the Xen hypervisor merely focuses on fair sharing of processor resources. We believe that CCEs have been wedded to traditional virtualization technologies despite having few traits in common. To bridge the gap between these two technologies, we have designed and implemented Kani, a QoS-aware hypervisor-level scheduler. Kani dynamically monitors the quality of delivered services to quantify the deviation between desired and delivered levels of QoS. Using this information, Kani determines how to allocate processor resources among running VMs so as to meet the expected QoS. Our evaluations of the Kani scheduler prototype in Xen show that Kani outperforms the default Xen scheduler, namely the Credit scheduler. For example, Kani reduces the average response time to requests to an Apache web server by up to 93.6%; improves its throughput by up to 97.9%; and mitigates the call setup time of an Asterisk media server by up to 96.6%.
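
The feedback idea (measure QoS deviation, then shift processor shares toward VMs missing their targets) can be sketched as a simple proportional controller. The control rule below is an illustrative assumption, not Kani's published algorithm.

```python
# Toy QoS-aware share rebalancing: grow shares of VMs missing their
# response-time targets, shrink the rest, then renormalize.
def rebalance(vms, gain=0.5):
    """vms: name -> {'share', 'target_ms', 'observed_ms'}."""
    for vm in vms.values():
        deviation = (vm["observed_ms"] - vm["target_ms"]) / vm["target_ms"]
        vm["share"] *= 1 + gain * max(deviation, -0.9)  # keep shares positive
    total = sum(vm["share"] for vm in vms.values())
    for vm in vms.values():
        vm["share"] /= total                             # shares sum to 1

vms = {"web":   {"share": 0.5, "target_ms": 100, "observed_ms": 180},
       "batch": {"share": 0.5, "target_ms": 500, "observed_ms": 300}}
rebalance(vms)
print({k: round(v["share"], 2) for k, v in vms.items()})  # web gains share
```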

12.
The science cloud paradigm has been actively developed and investigated, but it still requires a suitable system model to support increasing scientific computation needs with high performance. This paper presents an effective provisioning model for science clouds, particularly for large-scale high-throughput computing applications. In this model, we utilize job traces to which a statistical method is applied to pick the most influential features for improving application performance. With these features, the system determines where a VM is deployed (allocation) and which instance type is appropriate (provisioning). An adaptive evaluation step following each job execution enables our model to adapt to dynamic computing environments. We show the performance achieved by comparing the proposed model with other policies through experiments, and we expect noticeable improvements in performance as well as reduced resource-consumption costs from our model.
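
As a sketch of the trace-driven feature selection step, the snippet below ranks candidate job-trace features by absolute Pearson correlation with observed runtimes. The feature names and numbers are fabricated for illustration, and the paper's actual statistical method may differ.

```python
# Rank job-trace features by correlation with runtime; the top feature
# would then drive VM allocation and instance-type provisioning.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

traces = {"input_gb": [1, 2, 4, 8], "cores_used": [2, 2, 4, 4]}
runtimes = [10, 19, 22, 41]  # minutes, from past job executions
influence = {f: abs(pearson(v, runtimes)) for f, v in traces.items()}
print(max(influence, key=influence.get))  # 'input_gb' dominates here
```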

13.
14.
Cloud computing serves as a platform for remote users to utilize the heterogeneous resources in data centers to run High-Performance Computing jobs. The physical resources in the Cloud are virtualized to serve users through Virtual Machines (VMs). Job scheduling is a quintessential part of the Cloud, and efficient utilization of VMs by Cloud Service Providers demands an optimal job scheduling heuristic. An ideal scheduling heuristic should be efficient, fair, and starvation-free to produce a reduced makespan with improved resource utilization. However, static heuristics often lead to inefficient and poor resource utilization in the Cloud. An idle and underutilized host machine in the Cloud still consumes up to 70% of the energy required by an active machine (Ray, in Indian J Comput Sci Eng 1(4):333–339, 2012). Consequently, a load-balanced distribution of workload is needed to achieve optimal resource utilization in the Cloud. Existing Cloud scheduling heuristics such as Min–Min, Max–Min, and Sufferage distribute workloads among VMs based on minimum job completion time, which ultimately causes load imbalance. In this paper, a novel Resource-Aware Load Balancing Algorithm (RALBA) is presented to ensure a balanced distribution of workload based on the computation capabilities of the VMs. The RALBA framework comprises two phases: (1) scheduling based on the computing capabilities of the VMs, and (2) selecting the VM with the earliest finish time for job mapping. The outcomes of RALBA reveal that it provides substantial improvement over traditional heuristics in makespan, resource utilization, and throughput.
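
A condensed sketch of the two phases described above: a fill phase that assigns jobs against VM shares proportional to computing capability, and a spill phase that falls back to the earliest-finish-time VM. The per-job interleaving below simplifies the paper's exact Fill/Spill split.

```python
# RALBA-style scheduling sketch: capability-proportional shares, then
# earliest-finish-time spill for jobs that no longer fit a share.
def ralba(jobs, vms):
    """jobs: list of job sizes (MI); vms: name -> speed (MIPS)."""
    total_speed = sum(vms.values())
    share = {v: s / total_speed * sum(jobs) for v, s in vms.items()}
    load = {v: 0.0 for v in vms}
    plan = {v: [] for v in vms}
    for job in sorted(jobs, reverse=True):
        # Phase 1 (fill): target the VM with the most unused share.
        v = max(vms, key=lambda v: share[v] - load[v])
        if load[v] + job <= share[v]:
            plan[v].append(job); load[v] += job
            continue
        # Phase 2 (spill): pick the VM that finishes this job earliest.
        v = min(vms, key=lambda v: (load[v] + job) / vms[v])
        plan[v].append(job); load[v] += job
    return plan

print(ralba([30, 20, 10, 5], {"fast": 3.0, "slow": 1.0}))
```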

15.
In this overview of biomedical computing in the cloud, we discussed two primary ways to use the cloud (a single instance or a cluster), provided a detailed example using NGS mapping, and highlighted the associated costs. While many users new to the cloud may assume that entry is as straightforward as uploading an application and selecting an instance type and storage options, we illustrated that there is substantial up-front effort required before an application can make full use of the cloud's vast resources. Our intention was to provide a set of best practices and to illustrate how they apply to a typical application pipeline for biomedical informatics, while remaining general enough to extrapolate to other types of computational problems. Our mapping example was intended to illustrate how to develop a scalable project, not to compare and contrast alignment algorithms for read mapping and genome assembly. Indeed, with a newer aligner such as Bowtie, it is possible to map the entire African genome using one m2.2xlarge instance in 48 hours for a total cost of approximately $48 in computation time. In our example, we were not concerned with data transfer rates, which are heavily influenced by the amount of available bandwidth, connection latency, and network availability. When transferring large amounts of data to the cloud, bandwidth limitations can be a major bottleneck, and in some cases it is more efficient to simply mail a storage device containing the data to AWS (http://aws.amazon.com/importexport/). More information about cloud computing, detailed cost analysis, and security can be found in the references.
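
The quoted figure is easy to sanity-check: 48 hours on one instance at roughly $1 per hour comes to about $48. The hourly rate below is an assumption consistent with the $48/48-hour figure in the text, not a quoted price.

```python
# Back-of-the-envelope check of the mapping cost quoted above.
hours = 48
hourly_rate_usd = 1.00          # assumed on-demand price for m2.2xlarge
compute_cost = hours * hourly_rate_usd
print(f"${compute_cost:.2f}")   # $48.00, matching the figure in the text
```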

16.
17.
A collection of virtual machines (VMs) interconnected with an overlay network providing a layer 2 abstraction has proven to be a powerful, unifying abstraction for adaptive distributed and parallel computing in loosely coupled environments. It is now feasible to allow VMs hosting high performance computing (HPC) applications to seamlessly bridge distributed cloud resources and tightly coupled supercomputing and cluster resources. However, to achieve the application performance that tightly coupled resources are capable of, it is important that the overlay network not introduce significant overhead relative to the native hardware, which is not the case for current user-level tools, including our own existing VNET/U system. In response, we describe the design, implementation, and evaluation of a virtual networking system that has negligible latency and bandwidth overheads in 1–10 Gbps networks. Our system, VNET/P, is directly embedded into our publicly available Palacios virtual machine monitor (VMM). VNET/P achieves native performance on 1 Gbps Ethernet networks and very high performance on 10 Gbps Ethernet networks. The NAS benchmarks generally achieve over 95% of their native performance at both 1 and 10 Gbps. We have further demonstrated that VNET/P can operate successfully over more specialized tightly coupled networks, such as InfiniBand and Cray Gemini. Our results suggest that it is feasible to extend a software-based overlay network designed for wide-area computing into tightly coupled environments.
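
The core overlay mechanism (carrying layer-2 frames inside an outer transport) can be sketched in a few lines. The header layout and magic value below are invented for illustration, and VNET/P itself implements this inside the Palacios VMM rather than at user level.

```python
# Toy layer-2-over-UDP-style encapsulation: wrap and unwrap a raw
# Ethernet frame with a small overlay header.
import struct

MAGIC = 0x564E  # arbitrary 2-byte tag for this toy overlay header

def encapsulate(eth_frame: bytes, overlay_id: int) -> bytes:
    """Prefix a raw Ethernet frame with the overlay header."""
    return struct.pack("!HH", MAGIC, overlay_id) + eth_frame

def decapsulate(packet: bytes):
    """Strip the overlay header, returning (overlay_id, inner frame)."""
    magic, overlay_id = struct.unpack("!HH", packet[:4])
    assert magic == MAGIC, "not an overlay packet"
    return overlay_id, packet[4:]

# dst MAC + src MAC + EtherType + payload, as a raw layer-2 frame
frame = b"\xff" * 6 + b"\x02" * 6 + b"\x08\x00" + b"payload"
oid, inner = decapsulate(encapsulate(frame, overlay_id=7))
print(oid, inner == frame)  # 7 True
```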

18.
19.
Friel, Nial; Rue, Havard. Biometrika, 2007, 94(3): 661–672.
We illustrate how the recursive algorithm of Reeves & Pettitt (2004) for general factorizable models can be extended to allow exact sampling, maximization of distributions and computation of marginal distributions. All of the methods we describe apply to discrete-valued Markov random fields with nearest neighbour interactions defined on regular lattices; in particular we illustrate that exact inference can be performed for hidden autologistic models defined on moderately sized lattices. In this context we offer an extension of this methodology which allows approximate inference to be carried out for larger lattices without resorting to simulation techniques such as Markov chain Monte Carlo. In particular our work offers the basis for an automatic inference machine for such models.
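
To give a flavour of the recursion, the snippet below computes the log normalizing constant of a binary autologistic chain exactly in O(n) by carrying forward partial sums, instead of enumerating all 2^n configurations. The chain is a one-dimensional stand-in for the regular lattices treated in the paper, and alpha, beta and n are arbitrary.

```python
# Forward recursion for exact inference in a factorizable model:
# p(x) ∝ exp(alpha * sum(x_i) + beta * sum(1[x_i == x_{i+1}])).
from math import exp, log

def log_normalizing_constant(n=10, alpha=0.2, beta=0.8):
    """Sum over all 2^n configurations in O(n) by carrying forward a
    2-state partial sum over the value of the latest variable."""
    forward = [exp(alpha * x) for x in (0, 1)]  # first variable only
    for _ in range(n - 1):
        forward = [sum(forward[prev] * exp(alpha * x + beta * (prev == x))
                       for prev in (0, 1))
                   for x in (0, 1)]
    return log(sum(forward))

print(round(log_normalizing_constant(), 4))
```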

20.