Found 20 similar articles; search took 15 ms
1.
Cluster Computing - In this study, a secure and coordinated blockchain-based energy trading system for Electric Vehicles (EVs) is presented. The major goals of this study are to provide secure and...
3.
The emergent needs of the bioinformatics community challenge current information systems. The pace of biological data generation far outstrips Moore's Law, so a gap continues to widen between the capability to produce biological (molecular and cell) data sets and the capability to manage and analyze them. As a result, Federal investments in large data set generation produce diminishing returns in terms of the community's capability to understand biology and to leverage that understanding for scientific and technological advances that improve society. We are building an open framework to address various data management issues, including data and tool interoperability, nomenclature and data communication standardization, and database integration. PathPort, short for Pathogen Portal, employs a generic, web-services-based framework to deal with some of the problems identified by the bioinformatics community. The motivating research goal, a scalable system providing data management and analysis for key pathosystems, especially relating to molecular data, has resulted in a generic framework with two major components. On the server side, we employ web services. On the client side, a Java application called ToolBus acts as a client-side "bus" for contacting data and tools and viewing results through a single, consistent user interface.
5.
To protect the security of data outsourced to the cloud, tamper detection and recovery for outsourced images have become a growing concern. A secure tamper detection and lossless recovery scheme for medical images (MI) using the permutation ordered binary (POB) number system is proposed. In the proposed scheme, the region of interest (ROI) of the MI is first extracted; the ROI is then divided into non-overlapping blocks, and these blocks are encoded with JPEG-LS, which offers strong compression performance on medical images. The compressed data generated from all blocks are then split into high 4-bit and low 4-bit planes, and shuffling and combination are used to generate two plane images. Owing to the substantial redundant space left by compression, the data of each plane are spread to the size of the original image. Lastly, two bits of authentication data are computed for every pixel and embedded into that pixel within each plane, and the resulting 10-bit value is transformed into an 8-bit POB value. Furthermore, encryption is applied to the above images to produce two shares that can be outsourced to the cloud server. Users can detect tampered parts and recover the original image when they download the shares from the cloud. Extensive experiments on ordinary medical image and COVID-19 image datasets show that the proposed approach can locate tampered parts within the MI, and that the original MI can be recovered without any loss even if one of the shares is totally destroyed, or both shares are tampered with at a rate of no more than 50%. Comparisons and analysis are given to show the superior performance of the scheme.
6.
Cluster Computing - The unprecedented growth in data volume results in an urgent need for a dramatic increase in the size of data center networks. Accommodating millions to even billions of servers...
8.
Cluster Computing - The cloud computing model offers various platform services and provides scalable, on-demand service in an anytime-anywhere manner. However, in the outsourcing strategy, users no...
9.
A cluster, consisting of a group of computers connected to each other, acts as a single system that provides users with computing resources; each computer is a node of the cluster. With the rapid development of computer technology, cluster computing, with its high performance-cost ratio, has been widely applied in distributed parallel computing. For the large-scale closed data of group enterprises, a heterogeneous data integration model was built in a cluster environment based on cluster computing, XML technology, and ontology theory. The model provides users with unified and transparent access interfaces. Built on cluster computing, this work solves the heterogeneous data integration problem by means of ontology and XML technology, and achieves good application results compared with the traditional data integration model. It was also shown that the model improves the computing capacity of the system at a high performance-cost ratio, and is thus expected to support decision-making by enterprise managers.
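The ontology-plus-XML mediation such a model relies on can be pictured as tag canonicalization: each source's element names are mapped through a shared ontology vocabulary before the documents are merged into one view. A toy sketch under assumed names (`ONTOLOGY`, `canonicalize`, and `integrate` are illustrative, not the paper's API):

```python
import xml.etree.ElementTree as ET

# Hypothetical ontology: maps source-specific tag names to canonical terms.
ONTOLOGY = {"emp_name": "employee", "staff": "employee", "dept": "department"}

def canonicalize(elem):
    """Rewrite an element tree, replacing each tag with its ontology term."""
    out = ET.Element(ONTOLOGY.get(elem.tag, elem.tag))
    out.text = elem.text
    for child in elem:
        out.append(canonicalize(child))
    return out

def integrate(xml_docs):
    """Merge heterogeneous XML fragments under one root with unified tags."""
    root = ET.Element("integrated")
    for doc in xml_docs:
        root.append(canonicalize(ET.fromstring(doc)))
    return ET.tostring(root, encoding="unicode")
```

Two sources that call the same concept `emp_name` and `staff` thus become queryable under the single canonical tag `employee`.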
10.
Cluster Computing - With the development of the Internet of Things (IoT) field, more and more data are generated by IoT devices and transferred over the network. However, a large amount of IoT data...
11.
Gene expressions measured using microarrays usually suffer from the missing value problem. However, many data analysis methods require a complete data matrix. Although existing missing value imputation algorithms have shown good performance in dealing with missing values, they also have limitations: some perform well only when strong local correlation exists in the data, while others provide the best estimates when the data are dominated by global structure. In addition, these algorithms do not take any biological constraints into account in their imputation. In this paper, we propose a set theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge as a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets that take the biological characteristics of the data into consideration: the first set mainly exploits the local correlation structure among genes in microarray data, while the second captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments, a common phenomenon in cyclic systems, on which we construct a series of sets for our POCS imputation algorithm. Experiments show that our algorithm achieves a significant reduction of error compared to the KNNimpute, SVDimpute and LSimpute methods.
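The core POCS iteration cycles through the projection onto each constraint set in turn; for convex sets with non-empty intersection, the iterates converge to a point satisfying all constraints at once. A toy sketch with two generic convex sets, a box and a hyperplane, standing in for the paper's correlation-based sets; all names are ours:

```python
def project_box(x, lo, hi):
    """Projection onto the box lo <= x_i <= hi (componentwise clipping)."""
    return [min(max(v, lo), hi) for v in x]

def project_hyperplane(x, a, b):
    """Projection onto the hyperplane {x : a . x = b}."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    t = (dot - b) / norm2
    return [xi - t * ai for ai, xi in zip(a, x)]

def pocs(x, projections, iters=100):
    """Cyclically apply each projection; converges into the intersection."""
    for _ in range(iters):
        for project in projections:
            x = project(x)
    return x
```

In the paper's setting each prior (local gene correlation, global array correlation, synchronization loss) contributes one such projection, and the imputed values are read off the converged point.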
12.
A film-handling machine (robot) has been built which can, in conjunction with a commercially available film densitometer, exchange and digitize over 300 electron micrographs per day. Implementation of robotic film handling effectively eliminates the delay and tedium associated with digitizing images when data are initially recorded on photographic film. The modulation transfer function (MTF) of the commercially available densitometer is significantly worse than that of a high-end scientific microdensitometer. Nevertheless, its signal-to-noise ratio (S/N) is excellent, allowing substantial restoration of the output to near-perfect performance. Owing to the large area of standard electron microscope film that the commercial densitometer can digitize (up to 10,000 x 13,680 pixels with an appropriately coded holder), automated film digitization offers a fast and inexpensive alternative to high-end CCD cameras as a means of acquiring large amounts of image data in electron microscopy.
13.
The Cu(I)-catalyzed azide-alkyne cycloaddition (CuAAC) allows the efficient and complete functionalization of dendrimers with preformed Gd chelates (prelabeling) to give monodisperse macromolecular contrast agents (CAs) for magnetic resonance imaging (MRI). This monodispersity contrasts with the typical distribution of materials obtained by classical routes and facilitates the characterization and quality control demanded for clinical applications. The potential of a new family of PEG-dendritic CA based on a gallic acid-triethylene glycol (GATG) core functionalized with up to 27 Gd complexes has been explored in vitro and in vivo, showing contrast enhancements similar to those of Gadomer-17, which reveals them to be a promising platform for the development of CA for MRI.
14.
As outsourcing data centers emerge to host applications and services from many different organizations, it is critical for data center owners to isolate different applications while dynamically and optimally allocating sharable resources among them. To address this issue, we propose a virtual-appliance-based autonomic resource provisioning framework for large virtualized data centers. We present the architecture of the data center with enriched autonomic features. We define a non-linear constrained optimization model for dynamic resource provisioning and present a novel analytic solution. Key factors, including virtualization overhead and reconfiguration delay, are incorporated into the model. Experimental results based on a prototype demonstrate that system-level performance is greatly improved by taking advantage of fine-grained server consolidation, and that the whole system adapts flexibly in failure scenarios. Experiments on the impact of switching delay also show the efficiency of the framework, owing to significantly reduced provisioning time.
15.
Increasingly, high-dimensional genomics data are becoming available for many organisms. Here, we develop OrthoClust for simultaneously clustering data across multiple species. OrthoClust is a computational framework that integrates the co-association networks of individual species by utilizing the orthology relationships of genes between species. It outputs optimized modules that are fundamentally cross-species, which can be either conserved or species-specific. We demonstrate the application of OrthoClust using the RNA-Seq expression profiles of Caenorhabditis elegans and Drosophila melanogaster from the modENCODE consortium. A potential application of cross-species modules is to infer putative analogous functions of uncharacterized elements like non-coding RNAs based on guilt-by-association. Electronic supplementary material: The online version of this article (doi:10.1186/gb-2014-15-8-r100) contains supplementary material, which is available to authorized users.
16.
Despite the centrality of epistemic issues in biobank knowledge generation, there is currently a lacuna in research addressing the epistemic assumptions of biobank science and data sharing; this article addresses that lacuna. Using the insights of philosophical and sociological analysis, we examine standardization and harmonization and the central challenges biobank data sharing faces. We defend two central epistemic values, namely "spatial and temporal translatability" and "epistemic adequacy", which foster effective biobank knowledge generation. The first refers to the way in which biobank data need to be re-usable and re-purposable by bioscience researchers who did not create them; given the perennial issues of data mutability and incommensurability, we also propose "epistemic adequacy". In uncovering the social and epistemic foundations of biobank science, we emphasize issues essential for achieving effective and transparent biobank practice and for productive communication and engagement with the public about the nature, potential, and limits of biobanks.
18.
Many animal species, from arthropods to apes, share food. This paper presents a new framework that categorizes nonkin food sharing along two axes: (1) the interval between sharing and receiving the benefits of sharing, and (2) the currency in which benefits accrue to the sharer (especially food versus nonfood). Sharers can obtain immediate benefits from increased foraging efficiency, predation avoidance, mate provisioning, or manipulative mutualism; reciprocity, trade, status enhancement, and group augmentation yield delayed benefits. When benefits are delayed or when food is exchanged for nonfood benefits, maintaining sharing can become more difficult because animals face discounting and currency conversion problems. Explanations that involve delayed or nonfood benefits may therefore require specialized adaptations to account for timing and currency-exchange problems. The immediate, selfish fitness benefits that a sharer may gain through by-product or manipulative mutualism, however, apply to various food-sharing situations across many species and may provide a simpler, more general explanation of sharing.
19.
Direct volume rendering of large and unstructured datasets demands high computational power and memory bandwidth. Developing an efficient parallel algorithm requires a deep understanding of the bottlenecks involved in the solutions for this problem. In this work, we make a thorough analysis of the overhead components involved in parallel volume raycasting of unstructured grids for high-resolution images on distributed environments. This evaluation has revealed potential opportunities for performance improvements. The result is a novel approach to distributed memory raycasting that includes different acceleration techniques to enhance ray distribution, face projection, memory locality, and message exchanging, while maintaining load balance. We report the gains achieved in each phase and in the complete parallel algorithm when compared with a conventional approach.
20.
One of the most important goals of biological investigation is to uncover gene functional relations. In this study we propose a framework for extraction and integration of gene functional relations from diverse biological data sources, including gene expression data, biological literature and genomic sequence information. We introduce a two-layered Bayesian network approach to integrate relations from multiple sources into a genome-wide functional network. An experimental study was conducted on a test-bed of Arabidopsis thaliana. Evaluation of the integrated network demonstrated that relation integration could improve the reliability of relations by combining evidence from different data sources. Domain expert judgments on the gene functional clusters in the network confirmed the validity of our approach for relation integration and network inference.
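In its simplest form, combining evidence from multiple sources as described above can be viewed as Bayesian updating: each source contributes a likelihood ratio for "these two genes are functionally related", and the posterior multiplies those ratios into the prior odds. A minimal naive-Bayes sketch under that independence assumption (not the paper's exact two-layered network; `combine_evidence` is our name):

```python
def combine_evidence(prior, likelihood_ratios):
    """Posterior probability of a functional relation.

    prior: prior probability that a gene pair is related.
    likelihood_ratios: one P(evidence | related) / P(evidence | unrelated)
    per data source (expression, literature, sequence, ...), assumed
    conditionally independent.
    """
    odds = prior / (1.0 - prior)          # prior odds
    for lr in likelihood_ratios:
        odds *= lr                        # each source scales the odds
    return odds / (1.0 + odds)            # back to a probability
```

For example, a pair with a 0.1 prior that is supported by two sources with likelihood ratios 3 and 2 ends up with posterior 0.4, illustrating how weak individual evidence accumulates into a more reliable relation.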