Similar Articles
20 similar articles found (search time: 203 ms)
1.
Sabitha  S.  Rajasree  M. S. 《Cluster computing》2021,24(2):1455-1478

The exponential growth of data storage and sharing in the cloud demands an efficient access control mechanism for flexible data sharing. Attribute-Based Encryption (ABE) is a promising cryptographic solution for sharing data among users in the cloud, but it suffers from user revocation, attribute revocation, forward secrecy, and backward secrecy issues, and its communication and computation overheads are high because ciphertext and secret key sizes grow linearly with the number of attributes. In this paper, we investigate an on-demand, tunable access control mechanism for the flexible sharing of ciphertext classes in the cloud among randomly selected users. It delegates the decryption rights for any set of ciphertext classes to users only if their attributes satisfy the access policy associated with the ciphertext and they possess a compact key corresponding to the intended set of ciphertext classes. It produces a constant-size ciphertext and a compact secret key, which efficiently utilizes storage space and reduces communication cost. The compact key aggregates the power of the secret keys used to encrypt the outsourced data. The method flexibly shares ciphertext classes among randomly selected users holding a specific set of attributes; all ciphertext classes outside the set remain confidential. It allows dynamic data updates by verifying a user's data manipulation privilege with the help of a claim policy. The proposed scheme provides access control of varying granularity, at user level, file level, and attribute level; the granularity can be chosen based on the application and user demands. Hence, it is a multi-level, tunable access control over the shared data, well suited to secure data storage. The scheme tackles the user revocation and attribute revocation problems, allowing the data owner to revoke a specific user or a group of users, and it prevents forward and backward secrecy issues.


2.
Nowadays, complex smartphone applications are being developed that support gaming, navigation, video editing, augmented reality, and speech recognition, all of which require considerable computational power and battery lifetime. Cloud computing provides a brand-new opportunity for the development of mobile applications: Mobile Hosts (MHs) are provided with data storage and processing services on a cloud computing platform rather than on the MHs themselves. To provide seamless connections and reliable cloud service, we focus on communication. When connections to the cloud server increase explosively, the connection quality of each MH declines, causing problems such as network delay and retransmission. In this paper, we propose a proxy-based architecture to improve link performance for each MH in mobile cloud computing. With the proposed proxy, an MH need not maintain a connection to the cloud server; it simply connects to a proxy in the same subnet. We also propose an optimal access network discovery algorithm to optimize bandwidth usage: when an MH changes its point of attachment, the discovery algorithm helps it connect to the optimal access network for cloud service. Experimental results and analysis show that the proposed connection management method outperforms the standard 802.11 access method.

3.
Saidi  Ahmed  Nouali  Omar  Amira  Abdelouahab 《Cluster computing》2022,25(1):167-185

Attribute-based encryption (ABE) is an access control mechanism that ensures efficient data sharing among dynamic groups of users by setting up access structures indicating who can access what. However, ABE suffers from expensive computation and privacy issues in resource-constrained environments such as IoT devices. In this paper, we present SHARE-ABE, a novel collaborative approach for preserving privacy that is built on top of Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Our approach uses Fog computing to outsource the most laborious decryption operations to Fog nodes, which collaborate to partially decrypt the data using an original and efficient chained architecture. Additionally, our approach preserves the privacy of the access policy by introducing false attributes. Furthermore, we introduce a new construction of a collaboration attribute that allows users within the same group to combine their attributes while satisfying the access policy. Experiments and analyses of the security properties demonstrate that the proposed scheme is secure and efficient, especially for resource-constrained IoT devices.


4.
SRS (Sequence Retrieval System) is a widely used keyword search engine for querying biological databases. BLAST2 is the most widely used tool for querying databases by sequence similarity search. These tools allow users to retrieve sequences by shared keyword or by shared similarity, with many public web servers available. However, with the increasingly large datasets now available, it is quite common that a user is interested in some subset of homologous sequences but has no efficient way to restrict retrieval to that set. By allowing the user to control SRS from the BLAST output, BLAST2SRS (http://blast2srs.embl.de/) aims to meet this need. This server therefore combines the two ways to search sequence databases: similarity and keyword.

5.
MapReduce is a programming model for processing massive amounts of data on cloud computing platforms. MapReduce processes data in two phases and must transfer intermediate data among computers between the phases. It allows programmers to aggregate intermediate data with a function named a combiner before transferring it. By leaving the choice of using a combiner to programmers, MapReduce risks performance degradation, because aggregating intermediate data benefits some applications but harms others. Our proposal, the Adaptive Combiner for MapReduce (ACMR), automatically and intelligently decides whether to combine, so that MapReduce obtains better performance without any programmer intervention. In experiments on seven applications, MapReduce with ACMR achieved performance comparable to a system configured optimally for each application.
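The combiner trade-off that ACMR automates can be illustrated with a minimal word-count sketch in plain Python; the names and structure here are illustrative only, not ACMR's actual implementation:

```python
from collections import defaultdict

def map_phase(doc: str):
    # Emit one (word, 1) pair per token, as in classic word count.
    for word in doc.split():
        yield word, 1

def combine(pairs):
    # Local aggregation before the shuffle: for word count this shrinks the
    # intermediate data, but for other applications it can be pure overhead,
    # which is exactly the trade-off ACMR decides automatically.
    acc = defaultdict(int)
    for k, v in pairs:
        acc[k] += v
    return acc.items()

def reduce_phase(shuffled):
    out = defaultdict(int)
    for k, v in shuffled:
        out[k] += v
    return dict(out)

docs = ["a b a", "b b c"]
# With the combiner, each document ships at most one pair per distinct word.
intermediate = [kv for d in docs for kv in combine(map_phase(d))]
result = reduce_phase(intermediate)  # {'a': 2, 'b': 3, 'c': 1}
```

Dropping `combine` from the pipeline gives the same final counts but transfers one pair per token, which is the kind of difference an adaptive combiner weighs per application.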

6.
T. Janani  Y. Darak  M. Brindha 《IRBM》2021,42(2):83-93
Recent advances in digital medical imaging and cloud storage are creating greater demand for efficient and secure image retrieval and management. Medical images are highly sensitive to change: any modification of their content may lead to an erroneous medical diagnosis. Securing medical images is therefore essential, and the major requirement is that a medical image retain its sensitive content at the time of reconstruction. The proposed methodology performs secure image encryption and efficient search of medical images over an encrypted image database without leaking any sensitive data. It also ensures medical data integrity by introducing an efficient recovery mechanism for the ROI of the image. The proposed scheme derives recovery information from the ROI of the medical data and embeds it in the RONI region using the IWT transform, which acts as a reversible watermark. If the ROI is altered or tampered with at the third-party end, the tampering can be identified and the ROI recovered from the embedded recovery data. The model also includes a copyright protection scheme to trace authorized users who illegally duplicate and distribute the retrieved image to unauthorized entities.

7.
Joly Y  Zeps N  Knoppers BM 《Human genetics》2011,130(3):441-449
Large-scale, public genomic databases have greatly improved the capacity of researchers to do genomic research. To ensure that the scientific community uses data from these public resources properly, data access agreements have been developed to complement existing legal and ethical norms. Sanctions to address cases of data misuse constitute an essential part of this compliance framework, which is meant to protect stakeholders in genomic research, yet very little research and community debate has addressed this important topic. This paper presents a review of the different sanctions that could be invoked in cases of non-compliance by data users, identified through comprehensive research and analysis of over 450 documents (journal articles, policies, guidelines, access policies, etc.) related to this topic. Given the considerable impact on users of even the milder sanctions considered in our paper, it is essential that stakeholders strive for the highest degree of standardization and transparency when designing controlled-access agreements. It is only fair, after all, that users be able to expect the border between acceptable and unacceptable conduct to be clearly delineated and predictable in controlled-access policies. This suggests that researchers should undertake additional empirical studies on the clarity and accessibility of existing database access agreements and related policies in the near future.

8.
With the advances in cloud computing and virtualization technologies, running MapReduce applications over clouds has attracted more and more attention in recent years. However, as a fundamental problem, the performance of MapReduce applications can sometimes be severely degraded by the overheads of I/O virtualization and resource competition among virtual machines. In this paper, we propose a dynamic block device reconfiguration algorithm for virtual MapReduce clusters, which reduces the data transfer time between virtual machines and thereby improves the performance of MapReduce applications on top of clouds. The proposed algorithm utilizes a block device reconfiguration scheme in which a block device attached to one virtual machine can be dynamically detached and reattached to another at runtime. This scheme lets us move files easily across virtual machines without any network transfers between them. The algorithm is also dynamic in the sense that it estimates the total data transfer times between virtual machines using multiple regression analysis based on CPU utilization and data size, and adaptively determines a least-cost data transfer path between a mapper virtual machine and a reducer virtual machine. We have implemented our algorithm in Hadoop MapReduce. Benchmarking results show that the overhead of transferring data from mapper virtual machines to reducer virtual machines is minimized and the execution times of MapReduce applications are shortened by up to 14%.
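The cost-estimation step, fitting transfer time against CPU utilization and data size and then picking the cheapest path, can be sketched as follows. The sample measurements and candidate paths are hypothetical, and the paper's actual regression model may differ:

```python
import numpy as np

# Hypothetical measurements: (CPU utilization %, data size MB) -> transfer time (s).
# Multiple linear regression fits t ~ b0 + b1*cpu + b2*size.
X = np.array([[20, 100], [50, 100], [80, 100],
              [20, 500], [50, 500], [80, 500]], dtype=float)
t = np.array([1.2, 1.8, 2.9, 4.1, 5.0, 6.8])

A = np.hstack([np.ones((len(X), 1)), X])        # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, t, rcond=None)    # least-squares fit of (b0, b1, b2)

def predict(cpu: float, size_mb: float) -> float:
    # Estimated transfer time for a path under the fitted model.
    return float(coef @ np.array([1.0, cpu, size_mb]))

# Adaptively pick the least-cost mapper->reducer path among candidates,
# each described by its current (cpu, size) conditions (illustrative values).
candidates = {"direct": (70, 300), "via_host_b": (30, 300)}
best = min(candidates, key=lambda k: predict(*candidates[k]))
```

Here the less CPU-loaded path wins because the fitted CPU coefficient is positive; in the paper's setting the same estimate would drive the block device detach/reattach decision.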

9.
Cloud computing took a step forward in the efficient use of hardware through virtualization technology, and as a result the cloud brings evident benefits to both users and providers: users can acquire computational resources on demand and elastically, while cloud vendors can maximally utilize the investment in their data center infrastructure. In the Internet era, the number of appliances and services migrated to cloud environments is increasing exponentially, leading to ever-larger data centers. These data centers must have a highly elastic architecture in order to serve huge surges of tasks while balancing energy consumption. Although many recent research works have dealt with finite-capacity single job queues in data centers, the multiple finite-capacity queue architecture has received less attention, even though it is widely used in large data centers. In this paper, we propose a novel three-state model for cloud servers, deployed with both single and multiple finite-capacity queues, together with several strategies for controlling multiple queues at the same time. This approach reduces service waiting times for jobs and elastically manages the service capability of the whole system. We use CloudSim to simulate the cloud environment and carry out experiments demonstrating the operability and effectiveness of the proposed method and strategies. Power consumption is also evaluated to provide insight into system performance with respect to the performance-energy trade-off.

10.
Cloud storage is an important application service in cloud computing; it allows data users to store and access their files anytime, anywhere, from any device. To ensure the security of outsourced data, a data user needs to check data integrity periodically, and in some cases the user's identity privacy must be protected. However, in existing identity-privacy-preserving protocols, data tag generation is mainly based on complex ring signatures or group signatures, which places a heavy burden on the data user. To protect the identity privacy of data users, in this paper we propose a novel identity-privacy-preserving public auditing protocol that utilizes a chameleon hash function. It achieves the following properties: (1) the identity privacy of the data user is preserved against the cloud server; (2) the validity of the outsourced data is verified; (3) data privacy is preserved against the auditor during the auditing process; (4) the computation cost of producing a data tag is very low. Finally, we show that our scheme is provably secure in the random oracle model; its security rests on the computational Diffie-Hellman problem and the hash function problem.
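The key property a chameleon hash brings to this setting is that the trapdoor holder can open one hash value to different messages, while anyone without the trapdoor cannot. The toy Krawczyk-Rabin-style sketch below uses deliberately tiny, insecure parameters for illustration only; it is not the construction used in the paper:

```python
# Toy chameleon hash over the order-q subgroup of Z_p*, with p = 2q + 1.
# Parameters are far too small to be secure; they only show the mechanics.
p, q, g = 23, 11, 4            # g generates the order-q subgroup of Z_23*
x = 7                          # trapdoor (secret key)
y = pow(g, x, p)               # public key

def ch(m: int, r: int) -> int:
    # CH(m, r) = g^m * y^r mod p; anyone can compute this from the public key.
    return (pow(g, m, p) * pow(y, r, p)) % p

m, r = 9, 5
digest = ch(m, r)

# With the trapdoor, open the same digest to a different message m2 by solving
# m + x*r = m2 + x*r2 (mod q) for r2.
m2 = 3
r2 = (r + (m - m2) * pow(x, -1, q)) % q
```

Without `x`, finding such a collision is as hard as the underlying discrete-log-type problem, which is what lets tags built this way stay cheap while hiding the tag creator's identity.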

11.
Most existing work on securing the cloud is devoted to remote integrity checking, search, and computation on encrypted data. In this paper, we address simultaneous authentication and secrecy when data are uploaded to the cloud. Observing that the cloud is most attractive to companies in which multiple authorized employees are allowed to upload data, we propose a general framework for secure data upload in an identity-based setting, and we present and employ identity-based signcryption (IBSC) to meet this goal. Since it is challenging to construct an IBSC scheme in the standard model, and most IBSC schemes are realized in the random oracle model, which is regarded as too weak to capture realistic adversaries, we propose a new IBSC scheme that performs encryption and signature simultaneously at a lower cost than the sign-then-encrypt approach. The identity-based feature eliminates the complicated certificate management of signcryption schemes in the traditional public-key infrastructure (PKI) setting. Our IBSC scheme exploits Boneh et al.'s strongly unforgeable signature and Paterson et al.'s identity-based signature, and is shown to satisfy semantic security and strong unforgeability. Security relies on the well-defined bilinear decision Diffie-Hellman (BDDH) assumption, and the proof is given in the standard model. With our IBSC proposal, a secure data upload scheme is instantiated with simultaneous authentication and secrecy in a multi-user setting.

12.
The emergence of cloud computing has made it an attractive solution for large-scale data processing and storage applications. Cloud infrastructures give users remote access to powerful computing capacity, large storage space, and high network bandwidth for deploying various applications, and many large-scale applications have been migrated to cloud infrastructures instead of running on in-house local servers. Among these applications, continuous write applications (CWAs), such as online surveillance systems, benefit significantly from the flexibility and advantages of cloud computing. However, given CWAs' specific characteristics, continuous data writing and processing and a high demand for data availability, cloud service providers need sophisticated models for provisioning resources that meet CWAs' demands while minimizing the operational cost of the infrastructure. In this paper, we present a novel architecture of multiple cloud service providers (CSPs), commonly referred to as a Cloud-of-Clouds. Based on this architecture, we propose two operational-cost-aware algorithms for provisioning cloud resources for CWAs, the neighboring optimal resource provisioning algorithm (NORPA) and the global optimal resource provisioning algorithm (GORPA), which minimize the operational cost and thereby maximize the revenue of CSPs. We validate the proposed algorithms through comprehensive simulations, comparing them against each other and against a commonly used, practically viable round-robin approach. The results demonstrate that NORPA and GORPA outperform the conventional round-robin algorithm, reducing the operational cost by up to 28% and 57%, respectively. The low complexity of the proposed cost-aware algorithms allows them to be applied in realistic Cloud-of-Clouds environments in both industry and academia.

13.

Background

One of the tasks in the 2017 iDASH secure genome analysis competition was to enable training of logistic regression models over encrypted genomic data. More precisely, given a list of approximately 1500 patient records, each with 18 binary features containing information on specific mutations, the idea was for the data holder to encrypt the records using homomorphic encryption and send them to an untrusted cloud for storage. The cloud could then homomorphically apply a training algorithm to the encrypted data to obtain an encrypted logistic regression model, which can be sent to the data holder for decryption. In this way, the data holder could successfully outsource the training process without revealing either her sensitive data or the trained model to the cloud.

Methods

Our solution to this problem has several novelties: we use a multi-bit plaintext space in fully homomorphic encryption together with fixed point number encoding; we combine bootstrapping in fully homomorphic encryption with a scaling operation in fixed point arithmetic; we use a minimax polynomial approximation to the sigmoid function and the 1-bit gradient descent method to reduce the plaintext growth in the training process.
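The polynomial sigmoid and the 1-bit gradient descent can be sketched in the clear (no encryption) to show what the cloud would evaluate homomorphically. The sketch below is an illustrative assumption: it uses a least-squares cubic fit as a cheap stand-in for the paper's minimax approximation, and synthetic data in place of the iDASH records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Degree-3 polynomial approximation of the sigmoid on [-5, 5]
# (least-squares fit, standing in for a true minimax approximation).
xs = np.linspace(-5, 5, 1001)
coeffs = np.polyfit(xs, 1.0 / (1.0 + np.exp(-xs)), 3)
poly_sigmoid = np.poly1d(coeffs)

# Synthetic stand-in for the iDASH data: ~1500 records, 18 binary features.
n, d = 1500, 18
X = rng.integers(0, 2, size=(n, d)).astype(float) - 0.5   # centered features
w_true = rng.normal(size=d)                               # hypothetical ground truth
y = (X @ w_true > 0).astype(float)

# 1-bit gradient descent: only the sign of each gradient coordinate is used,
# which limits plaintext growth when each step is evaluated under encryption.
w = np.zeros(d)
lr = 0.02
for _ in range(200):
    z = np.clip(X @ w, -5, 5)                 # stay inside the fitted interval
    preds = np.clip(poly_sigmoid(z), 0.0, 1.0)
    grad = X.T @ (preds - y) / n
    w -= lr * np.sign(grad)

accuracy = float(np.mean((X @ w > 0) == (y > 0.5)))
```

Replacing the exact sigmoid with a low-degree polynomial is what makes each iteration expressible as homomorphic additions and multiplications; quantizing the update to one bit per coordinate keeps the encrypted values from growing across iterations.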

Results

Our algorithm for training over encrypted data takes 0.4–3.2 hours per iteration of gradient descent.

Conclusions

We demonstrate the feasibility, but also the high computational cost, of training over encrypted data. On the other hand, our method can guarantee the highest level of data privacy in critical applications.

14.
secureBLAST     
secureBLAST supplements NCBI wwwblast with the features necessary to control, in an easily manageable way, the usage of BLAST data sets and their updates. The concept we implemented allows a single BLAST server to offer several data sets with individually configurable access rights. Security is provided by user authentication and encryption of the HTTP traffic via SSL. With secureBLAST, the administration of users and databases can be done via a web interface. secureBLAST is therefore valuable for institutions that must restrict access to their data sets or simply want to administer BLAST servers via a web interface.

15.

Background

Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts.

Results

In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure.

Conclusions

Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org.

16.
An interface program has been developed for users of MS-DOS computers and the GenBank(R) gene sequence files in their diskette format. With the program a user is able to produce keyword, author, and entry name listings of GenBank items, or to select GenBank sequences for viewing, printing, or decoding. The decode option uncompresses sequence data and yields a character file in the format used on GenBank magnetic tapes. Program options are chosen by selecting items from command menus. While the program is designed primarily for hard disk operation, it also allows users of diskette-based computers to work with GenBank files. Received on July 15, 1987; accepted on July 15, 1987

17.
Cloud computing, an on-demand computation model in which large data centers (clouds) are managed by cloud providers, offers storage and computation to cloud users under service level agreements (SLAs). Services in cloud computing are offered at relatively low cost, so the model is a prime target for many applications, such as startup businesses and e-commerce applications. The area of cloud computing has grown rapidly in the last few years, yet it still faces obstacles. For example, there is a lack of mechanisms that guarantee to cloud users that the quality they actually get matches the quality of service specified in their SLAs. Another example is the concern over security, privacy, and trust, since users lose control over their data and programs once these are sent to cloud providers. In this paper, we introduce a new architecture that aids the design and implementation of attestation services. These services monitor cloud-based applications to ensure software qualities such as security, privacy, trust, and usability. Our approach is user-centric, giving users more control over their own data and applications, and it is cloud-based, utilizing the power of the clouds themselves. Simulation results show that many services can be designed on top of our architecture with limited performance overhead.

18.
ESTAP--an automated system for the analysis of EST data (cited by 2: 0 self-citations, 2 by others)
The EST Analysis Pipeline (ESTAP) is a set of analytical procedures that automatically verify, cleanse, store, and analyze ESTs generated on high-throughput platforms. It uses a relational database to store sequence data and analysis results, which facilitates both searches for specific information and statistical analysis. ESTAP provides easy viewing of the original and cleansed data, as well as the analysis results, via a web browser. It also allows the data owner to submit selected sequences to dbEST in a semi-automated fashion.

19.
Cloud computing environments came about as a way to effectively manage and use the enormous amounts of data that have become available with the development of the Internet. Cloud computing services are widely used not only to manage users' IT resources but also to use enterprise IT resources effectively. Various security threats have arisen in the use of cloud computing, and countermeasures are much needed, since these threats eventually escalate into threats to enterprise information. This research proposes plans to strengthen the security of enterprise information through cloud security. Such cloud computing security measures must be supported by governmental policy: publishing guidelines on information protection will raise awareness among users and service providers, and a response system must be created to constantly monitor and promptly respond to security incidents. Both technical countermeasures and governmental policy must therefore be supported at the same time. Cloud computing services are expanding more than ever, so active research on cloud computing security is expected.

20.
Public key encryption with keyword search plays a very important role in outsourced data management. In most public key encryption schemes with keyword search, the server can execute keyword searches without limit once it obtains the trapdoor for a keyword. To restrict the server's unlimited search ability, we propose a novel public key encryption scheme with revocable keyword search that combines a hash chain with an anonymous multi-receiver encryption scheme. The scheme not only achieves indistinguishability of ciphertexts against adaptive chosen-keyword attacks, but also resists off-line keyword-guessing attacks. Compared with Yu et al.'s scheme, our scheme is more efficient in terms of computational cost and communication overhead for the whole system.
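The role of the hash chain in bounding the server's search ability can be sketched as follows. The per-period token design below is an illustrative assumption, not the paper's exact construction:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list:
    # h_0 = seed, h_i = H(h_{i-1}); the owner keeps the chain private.
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

N = 10
chain = make_chain(b"owner-secret", N)
anchor = chain[-1]                     # only the tail (anchor) is published

def valid_for_period(token: bytes, period: int, anchor: bytes) -> bool:
    # A token for period i is chain[N - i]; hashing it i times must reach
    # the anchor. One-wayness means the server cannot derive future tokens.
    x = token
    for _ in range(period):
        x = h(x)
    return x == anchor
```

Releasing `chain[N - i]` authorizes searches for period i; once the owner stops releasing tokens, the server's trapdoors stop validating, which is the revocation effect the scheme builds on.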


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号