Similar documents
Found 20 similar documents (search time: 15 ms)
1.
Cluster Computing - In this study, a secure and coordinated blockchain-based energy trading system for electric vehicles (EVs) is presented. The major goals of this study are to provide secure and...

2.
Construction of a plant DNA barcode and biodiversity data sharing platform   (Cited by: 1; self-citations: 0; citations by others: 1)
DNA barcoding enables rapid and accurate species identification from short DNA sequences. It has not only accelerated the identification and classification of species worldwide, but also provided new ideas and methods for the management, conservation, and sustainable use of biodiversity. The continuing improvement of standard plant DNA barcode reference databases will make rapid access to plant diversity information possible; integrating, sharing, and utilizing different types of data resources to build a plant DNA barcode data sharing platform is an essential foundation for meeting the public demand for accurate species identification and rapid knowledge access. This paper reviews recent progress in plant DNA barcoding, along with the current state of plant DNA barcode reference databases and their outstanding problems. Against the background of the "big data" era, we offer some proposals on how to manage and use massive volumes of plant information and how to build a data sharing platform: (1) the metadata of the sharing platform should be as detailed, rich, accurate, and richly cross-linked as possible; (2) data standards should be unified and consistent; (3) query entry points should be convenient, fast, and diverse, and easy to manage, so as to enable broader data sharing and global cooperation and exchange.
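As a rough illustration of proposal (1) — detailed, standardized, cross-linked metadata — a minimal sketch of what one standardized barcode record might look like is given below. The field names are illustrative assumptions, not a published standard:

```python
from dataclasses import dataclass, field

@dataclass
class BarcodeRecord:
    """Illustrative plant DNA barcode metadata record (field names assumed)."""
    species: str                 # accepted Latin binomial
    marker: str                  # barcode locus, e.g. "rbcL", "matK", "ITS2"
    sequence: str                # the barcode sequence itself
    voucher_id: str              # link back to the herbarium specimen
    latitude: float              # collection site coordinates
    longitude: float
    cross_refs: dict = field(default_factory=dict)  # links to external databases

record = BarcodeRecord(
    species="Ginkgo biloba", marker="rbcL",
    sequence="ATGTCACCACAAACAGAGACTAAAGC",
    voucher_id="PE-0123456", latitude=30.25, longitude=120.17,
    cross_refs={"genbank": "AB012345"},
)
```

The cross_refs field is the "multi-association" aspect: each record carries machine-readable links to the same specimen's entries in other repositories, which is what makes federated querying possible.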

3.

Background

Metagenomics directly sequences and analyses genome information from microbial communities. A single community usually contains hundreds of genomes from different microbial species, and the main computational tasks in metagenomic data analysis are the taxonomical and functional characterization of all genomes in the community. Metagenomic data analysis is therefore both data- and computation-intensive and requires extensive computational power. Most current metagenomic analysis software was designed to run on a single computer or a single computer cluster, which cannot keep pace with the computational requirements of the fast-growing number of large metagenomic projects. Advanced computational methods and pipelines therefore have to be developed for efficient analyses.

Result

In this paper, we propose Parallel-META, a GPU- and multi-core-CPU-based open-source pipeline for metagenomic data analysis that enables the efficient, parallel analysis of multiple metagenomic datasets and the visualization of results across samples. In Parallel-META, the similarity-based database search is parallelized using GPU computing and multi-core CPU optimization. Experiments show that Parallel-META achieves at least a 15-fold speed-up over the traditional metagenomic analysis method, with the same accuracy (http://www.computationalbioenergy.org/parallel-meta.html).

Conclusion

Parallel processing of current metagenomic data is very promising: with speed-ups of 15-fold and above, binning is no longer a very time-consuming step. Deeper analyses of metagenomic data, such as comparisons between samples, therefore become feasible within the pipeline, and some of these functionalities have been included in the Parallel-META pipeline.
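As a rough illustration of the data parallelism described above (a sketch only, not Parallel-META's actual implementation, which relies on GPU kernels), a multi-core similarity search can partition query reads across worker processes:

```python
from multiprocessing import Pool

REFERENCE_DB = ["ACGTACGTAA", "TTGACCGTAA", "ACGTTTGCCA"]  # toy references (assumed)

def align_score(read: str, ref: str) -> int:
    """Toy similarity: longest common prefix length, a stand-in for real alignment."""
    n = 0
    for a, b in zip(read, ref):
        if a != b:
            break
        n += 1
    return n

def best_hit(read: str):
    """Search one read against the whole reference set."""
    return read, max(REFERENCE_DB, key=lambda ref: align_score(read, ref))

if __name__ == "__main__":
    reads = ["ACGTACCTAA", "TTGACCGTTT", "ACGTTTGCAA"]
    with Pool() as pool:                 # reads are partitioned across CPU cores
        for read, hit in pool.map(best_hit, reads):
            print(read, "->", hit)
```

Because each read is searched independently, the problem is embarrassingly parallel, which is why GPU and multi-core implementations can deliver order-of-magnitude speed-ups without changing the results.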

4.
In this paper we present SNUAGE, a platform-as-a-service security framework for building secure and scalable multi-layered services based on the cloud computing model. SNUAGE ensures the authenticity, integrity, and confidentiality of data communication over the network links by creating a set of security associations between the data-bound components on the presentation layer and their respective data sources on the data persistence layer. SNUAGE encapsulates the security procedures, policies, and mechanisms in these security associations at the service development stage to form a collection of isolated and protected security domains. Secure communication among the entities in one security domain is governed and controlled by a standalone security processor and policy attached to that domain. This results in: (1) a safer data delivery mechanism that prevents security vulnerabilities in one domain from spreading to the others and controls the inter-domain information flow to protect the privacy of network data; (2) a reusable security framework that can be employed in existing platform-as-a-service environments and across diverse cloud computing service models; and (3) increased productivity and delivery of reliable and secure cloud computing services, supported by a transparent programming model that relieves application developers of the intricate details of security programming. Last but not least, SNUAGE contributes a major improvement in the energy consumption and performance of supported cloud services by providing a suitable execution container in its protected security domains for a wide suite of energy- and performance-efficient cryptographic constructs, such as those adopted by policy-driven and content-based security protocols. An energy analysis of the system shows, via real energy measurements, major savings in energy consumption on consumer devices as well as on cloud servers. Moreover, a sample implementation of the presented security framework was developed in Java and deployed and tested on a real cloud computing infrastructure using the Google App Engine service platform. Performance benchmarks show that the proposed framework provides a significant throughput enhancement compared to traditional network security protocols such as the Secure Sockets Layer and Transport Layer Security protocols.
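The per-domain security association idea can be sketched as follows — a hypothetical, minimal Python analogue (class and method names are assumed; the actual framework also covers confidentiality, policies, and inter-domain flow control, omitted here). The key point is that each domain holds its own keys, so a compromise in one domain cannot be replayed against another:

```python
import hmac, hashlib, os

class SecurityDomain:
    """Illustrative sketch of a per-domain security association (names assumed)."""

    def __init__(self, name: str):
        self.name = name
        self._mac_key = os.urandom(32)   # per-domain integrity key, never shared

    def protect(self, payload: bytes) -> tuple[bytes, bytes]:
        """Attach an integrity tag binding the payload to this domain."""
        tag = hmac.new(self._mac_key, payload, hashlib.sha256).digest()
        return payload, tag

    def verify(self, payload: bytes, tag: bytes) -> bool:
        """Accept only data protected under this domain's own key."""
        expected = hmac.new(self._mac_key, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

billing = SecurityDomain("billing")
msg, tag = billing.protect(b"order #42: 120 EUR")
assert billing.verify(msg, tag)
assert not SecurityDomain("analytics").verify(msg, tag)  # isolated domains
```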

5.
6.
The emergent needs of the bioinformatics community challenge current information systems. The pace of biological data generation far outstrips Moore's Law, so a gap continues to widen between the capacity to produce biological (molecular and cell) data sets and the capacity to manage and analyze them. As a result, Federal investments in large data set generation produce diminishing returns in terms of the community's ability to understand biology and to leverage that understanding for scientific and technological advances that improve society. We are building an open framework to address various data management issues, including data and tool interoperability, nomenclature and data communication standardization, and database integration. PathPort, short for Pathogen Portal, employs a generic, web-services based framework to deal with some of the problems identified by the bioinformatics community. The motivating research goal, a scalable system providing data management and analysis for key pathosystems, especially at the molecular level, has resulted in a generic framework with two major components. On the server side, we employ web services. On the client side, a Java application called ToolBus acts as a client-side "bus" for contacting data and tools and viewing results through a single, consistent user interface.
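The "bus" pattern can be sketched in a few lines — an illustrative Python analogue only; the real ToolBus is a Java application whose API is not given in the abstract, so every name below is an assumption:

```python
import urllib.parse
import urllib.request

class ToolBus:
    """Hypothetical client-side 'bus': one uniform entry point to many services."""

    def __init__(self):
        self.services = {}                       # service name -> endpoint URL

    def register(self, name: str, url: str):
        self.services[name] = url

    def call(self, name: str, **params) -> bytes:
        """Contact a registered data/tool web service through the same interface."""
        query = urllib.parse.urlencode(params)
        with urllib.request.urlopen(f"{self.services[name]}?{query}") as resp:
            return resp.read()

bus = ToolBus()
bus.register("blast", "https://example.org/blast")   # assumed endpoint
# hits = bus.call("blast", seq="ATGGCC")             # same call shape for any tool
```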

7.
8.
9.
Nasirian  Sara  Faghani  Farhad 《Cluster computing》2021,24(2):997-1032
Cluster Computing - The unprecedented growth in data volume results in an urgent need for a dramatic increase in the size of data center networks. Accommodating millions to even billions of servers...

10.
Gao  Hang  Gao  Tiegang 《Cluster computing》2022,25(1):707-725

To protect the security of data outsourced to the cloud, tamper detection and recovery for outsourced images have attracted growing concern. A secure tampering detection and lossless recovery scheme for medical images (MI) using the permutation ordered binary (POB) number system is proposed. In the proposed scheme, the region of interest (ROI) of the MI is first extracted; the ROI is then divided into non-overlapping blocks, and image encoding is conducted on these blocks, exploiting the strong compression performance of JPEG-LS on medical images. After that, the compressed data generated from all blocks are divided into high 4-bit and low 4-bit planes, and shuffling and combination are used to generate two plane images. Owing to the substantial redundancy in the compressed data, the data of each plane are spread to the size of the original image. Lastly, two bits of authentication data are computed for every pixel and embedded into the pixel itself within each plane, and the resulting 10-bit value is transformed into an 8-bit POB value. Finally, encryption is applied to the above images to produce two shares, which can be outsourced to the cloud server. Users can detect tampered regions and recover the original image when they download the shares from the cloud. Extensive experiments on ordinary medical image and COVID-19 image datasets show that the proposed approach can locate the tampered parts within the MI, and that the original MI can be recovered without any loss even if one of the shares is totally destroyed, or if the two shares are tampered with at a ratio of no more than 50%. Comparisons and analysis are given to show the better performance of the scheme.
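The 10-bit-to-8-bit step relies on the POB(n, r) number system, which ranks binary strings of length n containing exactly r ones; assuming r = 5 (an assumption here, since the abstract does not state the parameters), C(10, 5) = 252 distinct strings fit in one byte. A minimal sketch of the standard POB rank computation:

```python
from math import comb

def pob_value(bits) -> int:
    """Rank of a fixed-weight binary string in the POB(n, r) number system.

    bits: sequence of 0/1 given most-significant-first.
    For n = 10, r = 5 the rank is < C(10, 5) = 252, so it fits in 8 bits --
    the assumed basis of the scheme's 10-bit -> 8-bit transformation.
    """
    value, ones_seen = 0, 0
    # Walk from the least significant position upward, counting ones:
    # each 1-bit at position j with q ones so far contributes C(j, q).
    for position, bit in enumerate(reversed(bits)):
        if bit:
            ones_seen += 1
            value += comb(position, ones_seen)
    return value

print(pob_value([0, 1, 1]))  # 0: lowest-ranked weight-2 string of length 3
print(pob_value([1, 1, 0]))  # 2: highest-ranked weight-2 string of length 3
```

Because only fixed-weight strings are valid POB values, any bit flip that changes the weight is detectable, which is what makes the representation useful for tamper detection.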


11.
Cluster Computing - The cloud computing model offers various platform services and provides scalable, on-demand service in an anytime-anywhere manner. However, in the outsourcing strategy, users no...

12.
A cluster is a group of computers that acts as a single system to provide users with computing resources; each computer is a node of the cluster. With the rapid development of computer technology, cluster computing, with its high performance–cost ratio, has been widely applied in distributed parallel computing. For the large-scale, closed data of group enterprises, a heterogeneous data integration model was built in a cluster environment based on cluster computing, XML technology, and ontology theory. The model provides users with unified and transparent access interfaces. Based on cluster computing, this work solves the heterogeneous data integration problem by means of ontology and XML technology, and achieves better application results than the traditional data integration model. It was further shown that the model improves the computing capacity of the system at a high performance–cost ratio, and it is hoped to support the decision-making of enterprise managers.
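The XML-plus-ontology integration idea can be illustrated with a small sketch (source names, field mappings, and schema below are assumptions for illustration, not the paper's model): each heterogeneous source's fields are mapped through a shared ontology into one unified XML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical source-to-ontology field mappings (illustrative only).
MAPPINGS = {
    "erp": {"cust_name": "customerName", "amt": "orderAmount"},
    "crm": {"client": "customerName", "total": "orderAmount"},
}

def to_unified_xml(source: str, record: dict) -> ET.Element:
    """Translate one source record into the shared XML schema via the ontology map."""
    node = ET.Element("record", source=source)
    for local_field, value in record.items():
        unified = MAPPINGS[source].get(local_field)
        if unified:                      # drop fields outside the shared ontology
            ET.SubElement(node, unified).text = str(value)
    return node

row = to_unified_xml("erp", {"cust_name": "ACME", "amt": 120})
print(ET.tostring(row, encoding="unicode"))
# <record source="erp"><customerName>ACME</customerName><orderAmount>120</orderAmount></record>
```

Records from both sources end up with identical element names, which is what lets a cluster of nodes query them through one transparent interface.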

13.
14.
A film-handling machine (robot) has been built which can, in conjunction with a commercially available film densitometer, exchange and digitize over 300 electron micrographs per day. Implementation of robotic film handling effectively eliminates the delay and tedium associated with digitizing images when data are initially recorded on photographic film. The modulation transfer function (MTF) of the commercially available densitometer is significantly worse than that of a high-end, scientific microdensitometer. Nevertheless, its signal-to-noise ratio (S/N) is quite excellent, allowing substantial restoration of the output to "near-to-perfect" performance. Due to the large area of the standard electron microscope film that can be digitized by the commercial densitometer (up to 10,000 x 13,680 pixels with an appropriately coded holder), automated film digitization offers a fast and inexpensive alternative to high-end CCD cameras as a means of acquiring large amounts of image data in electron microscopy.
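The restoration mentioned above — using the densitometer's high S/N to compensate for its poor MTF — is essentially frequency-domain deconvolution. A minimal Wiener-style sketch (the abstract does not detail the actual restoration procedure, so this is an assumed illustration):

```python
import numpy as np

def restore_mtf(image: np.ndarray, mtf: np.ndarray, snr: float = 100.0) -> np.ndarray:
    """Boost the frequencies the densitometer's MTF attenuated.

    image : 2-D scanned micrograph.
    mtf   : the densitometer's 2-D MTF sampled on the same FFT grid (assumed known).
    snr   : assumed signal-to-noise ratio; a high S/N permits aggressive restoration.
    """
    F = np.fft.fft2(image)
    H = mtf
    # Wiener filter: inverse of H, regularized so noise is not amplified
    # at frequencies where the MTF (and hence the signal) is weak.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(F * W))
```

The higher the scanner's S/N, the larger the usable snr value, and the closer the filter comes to a true inverse of the MTF — which is why a cheap densitometer with excellent S/N can be restored to "near-to-perfect" performance.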

15.
Liu  Yanhui  Zhang  Jianbiao  Zhan  Jing 《Cluster computing》2021,24(2):1331-1345
Cluster Computing - With the development of the Internet of Things (IoT) field, more and more data are generated by IoT devices and transferred over the network. However, a large amount of IoT data...

16.
Gan X  Liew AW  Yan H 《Nucleic acids research》2006,34(5):1608-1619
Gene expression levels measured using microarrays usually suffer from the missing value problem, yet many data analysis methods require a complete data matrix. Although existing missing value imputation algorithms show good performance, they also have limitations: some perform well only when strong local correlation exists in the data, while others provide the best estimates when the data are dominated by global structure. In addition, these algorithms do not take any biological constraints into account. In this paper, we propose a set-theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge as a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets that reflect the biological characteristics of the data: the first set mainly exploits the local correlation structure among genes in microarray data, while the second captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments, a common phenomenon in cyclic systems, and we construct a series of sets based on it for our POCS imputation algorithm. Experiments show that our algorithm achieves a significant reduction in error compared to the KNNimpute, SVDimpute and LSimpute methods.
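The POCS iteration itself is simple: project the current estimate onto each convex set in turn until the estimates stop changing. A minimal sketch with two illustrative sets — data consistency (an affine set) and a fixed low-dimensional subspace standing in, very loosely, for the paper's correlation-based sets:

```python
import numpy as np

def pocs_impute(X: np.ndarray, mask: np.ndarray, rank: int = 5,
                iters: int = 100) -> np.ndarray:
    """Minimal POCS-style imputation sketch (not the paper's exact convex sets).

    X    : data matrix with np.nan at missing positions.
    mask : boolean array, True where X is observed.
    Alternates projections onto two convex sets:
      C1 = {M : M agrees with X on observed entries}    (affine set)
      C2 = a fixed low-dimensional column subspace,      (linear subspace)
           estimated once from a mean-filled initial matrix.
    """
    M = np.where(mask, X, np.nanmean(X))        # initial fill with the global mean
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    U = U[:, :rank]                             # fixed subspace, hence a convex set
    for _ in range(iters):
        M = U @ (U.T @ M)                       # project onto the subspace C2
        M[mask] = X[mask]                       # project onto data consistency C1
    return M
```

Each projection is non-expansive and both sets are convex, so the alternating iteration converges to a point consistent with every piece of prior knowledge — the property that lets the full method stack local, global, and synchronization-loss constraints in one procedure.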

17.
The Cu(I)-catalyzed azide-alkyne cycloaddition (CuAAC) allows the efficient and complete functionalization of dendrimers with preformed Gd chelates (prelabeling) to give monodisperse macromolecular contrast agents (CAs) for magnetic resonance imaging (MRI). This monodispersity contrasts with the typical distribution of materials obtained by classical routes and facilitates the characterization and quality control demanded for clinical applications. The potential of a new family of PEG-dendritic CA based on a gallic acid-triethylene glycol (GATG) core functionalized with up to 27 Gd complexes has been explored in vitro and in vivo, showing contrast enhancements similar to those of Gadomer-17, which reveals them to be a promising platform for the development of CA for MRI.

18.
Increasingly, high-dimensional genomics data are becoming available for many organisms. Here, we develop OrthoClust for simultaneously clustering data across multiple species. OrthoClust is a computational framework that integrates the co-association networks of individual species by utilizing the orthology relationships of genes between species. It outputs optimized modules that are fundamentally cross-species, which can be either conserved or species-specific. We demonstrate the application of OrthoClust using the RNA-Seq expression profiles of Caenorhabditis elegans and Drosophila melanogaster from the modENCODE consortium. A potential application of cross-species modules is to infer putative analogous functions of uncharacterized elements, such as non-coding RNAs, based on guilt-by-association.

Electronic supplementary material

The online version of this article (doi:10.1186/gb-2014-15-8-r100) contains supplementary material, which is available to authorized users.
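A toy rendering of the cross-species coupling idea (not OrthoClust's actual algorithm, which optimizes a coupled modularity objective; the species labels, weights, and community method below are assumptions): fuse the two co-association networks into one graph, connect orthologs with coupling edges, and detect modules on the fused graph.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cross_species_modules(net_a, net_b, orthologs, kappa=1.0):
    """Fuse two species' co-association networks and find cross-species modules.

    net_a, net_b : lists of (gene, gene) co-association edges per species.
    orthologs    : list of (gene_a, gene_b) orthology pairs.
    kappa        : coupling weight; larger values favor conserved modules.
    """
    G = nx.Graph()
    G.add_edges_from(((("worm", u), ("worm", v), {"weight": 1.0}) for u, v in net_a))
    G.add_edges_from(((("fly", u), ("fly", v), {"weight": 1.0}) for u, v in net_b))
    for a, b in orthologs:                       # orthology edges couple the layers
        G.add_edge(("worm", a), ("fly", b), weight=kappa)
    return list(greedy_modularity_communities(G, weight="weight"))

modules = cross_species_modules(
    net_a=[("g1", "g2"), ("g2", "g3")],
    net_b=[("h1", "h2")],
    orthologs=[("g1", "h1")],
)
print(modules)   # modules spanning both species are conserved; others are specific
```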

19.
Despite the centrality of epistemic issues in biobank knowledge generation, there is currently a lacuna in research addressing the epistemic assumptions of biobank science and data sharing. This article addresses that lacuna. Using the insights of philosophical and sociological analysis, we examine standardization and harmonization and the central challenges biobank data sharing faces. We defend two central epistemic values, namely "spatial and temporal translatability" and "epistemic adequacy", which foster effective biobank knowledge generation. The first refers to the way in which biobank data need to be re-usable and re-purposable by bioscience researchers who did not create them. Given the perennial issues of data mutability and incommensurability, we also propose "epistemic adequacy." In uncovering the social and epistemic foundations of biobank science, we emphasize issues essential to achieving effective and transparent biobank practice and productive communication and engagement with the public about the nature, potential, and limits of biobanks.

20.