Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
MOSIX is a cluster management system that supports preemptive process migration. This paper presents the MOSIX Direct File System Access (DFSA), a provision that can improve the performance of cluster file systems by allowing a migrated process to directly access files in its current location. This capability, when combined with an appropriate file system, could substantially increase the I/O performance and reduce the network congestion by migrating an I/O intensive process to a file server rather than the traditional way of bringing the file's data to the process. DFSA is suitable for clusters that manage a pool of shared disks among multiple machines. With DFSA, it is possible to migrate parallel processes from a client node to file servers for parallel access to different files. Any consistent file system can be adjusted to work with DFSA. To test its performance, we developed the MOSIX File-System (MFS) which allows consistent parallel operations on different files. The paper describes DFSA and presents the performance of MFS with and without DFSA.
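The core idea, shipping the computation to the data when the process image is the smaller payload, can be sketched as a toy decision rule. This is an illustrative sketch under an assumed transfer-dominated cost model; the function name and parameters are hypothetical and not part of MOSIX.

```python
def should_migrate_process(file_bytes: int, process_image_bytes: int) -> bool:
    """Hypothetical DFSA-style policy: migrate an I/O-intensive process
    to the file server when its memory image is cheaper to ship over
    the network than the file data it wants to read.

    Real MOSIX policies also weigh node load, latency, and migration
    cost; this captures only the payload comparison."""
    return process_image_bytes < file_bytes

# Shipping a 10 MB process to the server beats pulling a 2 GB file
# across the network to the process.
```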

2.
The use of mobile computers is gaining popularity. There is an increasing trend in the number of users with laptops, PDAs, and smart phones. Access to information repositories in the future will be dominated by mobile clients rather than traditional “fixed” clients. These mobile clients download information by periodically connecting to repositories of data stored in either databases or file systems. Such mobile clients constitute a new and different kind of workload and exhibit a different access pattern than seen in traditional client server systems. Though file systems have been modified to handle clients that can download information, disconnect, and later reintegrate, databases have not been redesigned to accommodate mobile clients. There is a need to support mobile clients in the context of client server databases. This paper is about organizing the database server to take into consideration the access patterns of mobile clients. We propose the concept of hoard attributes which capture these access patterns. Three different techniques for organizing data on the server based on the hoard attribute are presented. We argue that each technique is suited for a particular workload. The workload is a combination of requests from mobile clients and traditional clients. This reorganization also allows us to address issues of concurrency control, disconnection and replica control in mobile databases. We present simulation results that show the performance of server reorganization using hoard attributes. We also provide an elaborate discussion of issues resulting from this reorganization in this new paradigm taking into account both mobile and traditional clients. This revised version was published online in August 2006 with corrections to the Cover Date.

3.
This paper describes a novel technique for establishing a virtual file system that allows data to be transferred user-transparently and on-demand across computing and storage servers of a computational grid. Its implementation is based on extensions to the Network File System (NFS) that are encapsulated in software proxies. A key differentiator between this approach and previous work is the way in which file servers are partitioned: while conventional file systems share a single (logical) server across multiple users, the virtual file system employs multiple proxy servers that are created, customized and terminated dynamically, for the duration of a computing session, on a per-user basis. Furthermore, the solution does not require modifications to standard NFS clients and servers. The described approach has been deployed in the context of the PUNCH network-computing infrastructure, and is unique in its ability to integrate unmodified, interactive applications (even commercial ones) and existing computing infrastructure into a network computing environment. Experimental results show that: (1) the virtual file system performs well in comparison to native NFS in a local-area setup, with mean overheads of 1% and 18% for the single-client execution of the Andrew benchmark in two representative computing environments, (2) the average overhead for eight clients can be reduced to within 1% of native NFS with the use of concurrent proxies, (3) the wide-area performance is within 1% of the local-area performance for a typical compute-intensive PUNCH application (SimpleScalar), while for the I/O-intensive application Andrew the wide-area performance is 5.5 times worse than the local-area performance.

4.
This paper presents a recovery protocol for block I/O operations in Slice, a storage system architecture for high-speed LANs incorporating network-attached block storage. The goal of the Slice architecture is to provide a network file service with scalable bandwidth and capacity while preserving compatibility with off-the-shelf clients and file server appliances. The Slice prototype virtualizes the Network File System (NFS) protocol by interposing a request switching filter at the client's interface to the network storage system. The distributed Slice architecture separates functions typically combined in central file servers, introducing new challenges for failure atomicity. This paper presents a protocol for atomic file operations and recovery in the Slice architecture, and related support for reliable file storage using mirrored striping. Experimental results from the Slice prototype show that the protocol has low cost in the common case, allowing the system to deliver client file access bandwidths approaching gigabit-per-second network speeds.

5.
Several scientific instrument suppliers offer complete networking and automation packages for analytical laboratories. Nevertheless, there is still considerable work to be done on standardizing the file formats generated by the different data acquisition systems supplied by scientific instrument manufacturers. Recent work on the netCDF transfer protocol for mass spectrometry data suggests that good progress is being made in the area of data formats. Our laboratory operates a number of diverse instruments, including two high-resolution systems (ZAB 2F, 70 SEQ) and one quadrupole (QMD 1000) from Fisons Instruments, one ion trap system from Finnigan (ITS 40), and one pyrolysis mass spectrometer from Horizon Instruments (RAPyD-400), all equipped with autosamplers. These instruments are physically located in two distinct laboratories. The data systems are based on very different computers, including a DEC PDP-11/24, a VAX 4000/90 and several PCs. The large amount of data produced by the MS laboratory and the implementation of GLPs (Good Laboratory Practices) and GALPs (Good Automated Laboratory Practices) prompted us to examine the possibility of networking the instrumentation in a client/server computing environment. All instrument data systems have been connected to the institute network via Ethernet, using either DECnet or TCP/IP. A VAXcluster consisting of a VAXstation 4000/90 host and a VAXstation 3100 satellite has been configured as a server using DEC PATHWORKS V4.1 server software. This provides file, disk, application and print services to all PC clients connected network-wide. Unattended distributed backup and restore services for PC hard disks are implemented. Mass spectrometry data files are permanently archived in their original format on 4 GB tape cartridges and stored for later retrieval. Files can be transferred to any office PC running the appropriate mass spectrometry software.
A centralized spectra and structure information management system based on the MassLib (Chemical Concepts) software allows library searches using the SISCOM algorithm, either after specific file conversion or using JCAMP-DX files. Furthermore, the mass spectrometer data systems are ready for eventual incorporation into a LIMS.

6.
7.
This paper presents a data management solution which allows fast Virtual Machine (VM) instantiation and efficient run-time execution to support VMs as execution environments in Grid computing. It is based on novel distributed file system virtualization techniques and is unique in that: (1) it provides on-demand cross-domain access to VM state for unmodified VM monitors; (2) it enables private file system channels for VM instantiation by secure tunneling and session-key based authentication; (3) it supports user-level and write-back disk caches, per-application caching policies and middleware-driven consistency models; and (4) it leverages application-specific meta-data associated with files to expedite data transfers. The paper reports on its performance in wide-area setups using VMware-based VMs. Results show that the solution delivers performance over 30% better than native NFS, and with warm caches it can bring the application-perceived overheads below 10% compared to a local-disk setup. The solution also allows a VM with a 1.6 GB virtual disk and 320 MB virtual memory to be cloned within 160 seconds for the first clone and within 25 seconds for subsequent clones.
Ming Zhao is a PhD candidate in the Department of Electrical and Computer Engineering and a member of the Advanced Computing and Information Systems Laboratory at the University of Florida. He received his BE and ME degrees from Tsinghua University. His research interests are in the areas of computer architecture, operating systems and distributed computing. Jian Zhang is a PhD student in the Department of Electrical and Computer Engineering at the University of Florida and a member of the Advanced Computing and Information Systems Laboratory (ACIS). Her research interest is in virtual machines and Grid computing. She is a member of the IEEE and the ACM. Renato J. Figueiredo received the B.S. and M.S. degrees in Electrical Engineering from the Universidade de Campinas in 1994 and 1995, respectively, and the Ph.D. degree in Electrical and Computer Engineering from Purdue University in 2001. From 2001 until 2002 he was on the faculty of the School of Electrical and Computer Engineering of Northwestern University at Evanston, Illinois. In 2002 he joined the Department of Electrical and Computer Engineering of the University of Florida as an Assistant Professor. His research interests are in the areas of computer architecture, operating systems, and distributed systems.

8.
Parallel file systems have been developed in recent years to ease the I/O bottleneck of high-end computing systems. These advanced file systems offer several data layout strategies in order to meet the performance goals of specific I/O workloads. However, a layout policy that performs well on one I/O workload may not perform as well on another; peak I/O performance is rarely achieved because data access patterns are complex and application dependent. In this study, a cost-intelligent data access strategy based on the principle of application-specific optimization is proposed to improve the I/O performance of parallel file systems. We first present examples to illustrate how performance differs under different data layouts. By developing a cost model that estimates the completion time of data accesses under various data layouts, the layout can be matched to the application: static layout optimization can be used for applications with dominant data access patterns, and dynamic layout selection with hybrid replication for applications with complex I/O patterns. Theoretical analysis and experimental testing have been conducted to verify the proposed cost-intelligent layout approach. Analytical and experimental results show that the proposed cost model is effective and that the application-specific data layout approach can provide up to a 74% performance improvement for data-intensive applications.
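A completion-time cost model of the kind described might, in much simplified form, look like the sketch below. The cost terms (serialized per-server dispatch, one parallel seek, transfer split across servers) and all parameter values are illustrative assumptions, not the paper's actual model.

```python
def access_cost(request_bytes, servers, seek_ms=5.0, dispatch_ms=0.5,
                bytes_per_ms=100_000.0):
    """Toy estimate (ms) of one request striped over `servers` servers:
    per-server dispatch is serialized, seeks happen in parallel, and
    the transfer is split evenly across the servers used."""
    return servers * dispatch_ms + seek_ms + (request_bytes / servers) / bytes_per_ms

def pick_stripe_width(request_bytes, widths=(1, 2, 4, 8)):
    """Static layout selection: choose the stripe width with the
    lowest estimated completion time for the dominant request size."""
    return min(widths, key=lambda n: access_cost(request_bytes, n))

# Small requests favor a single server (no fan-out overhead); large
# requests favor wide striping (parallel transfer dominates).
```

Even this toy version exhibits the behavior the paper exploits: the best layout flips as the dominant request size changes, which is why dynamic selection helps for mixed workloads.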

9.
A tendon locking mechanism (TLM) in the digits of the feet has been described previously only in bats and birds. In bats, this mechanism typically consists of a patch of tuberculated fibrocartilage cells on the plantar surface of the proximal flexor tendons, and a corresponding plicated portion of the adjacent flexor tendon sheath. The two components mesh together like parts of a ratchet, locking the digit in a flexed position until the mechanism is disengaged. This system apparently allows bats to hang for long periods of time with reduced muscular activity. In this study, we document for the first time the presence of a similar tendon lock in dermopterans, an occurrence that provides additional support for the hypothesis that dermopterans and bats are sister taxa. The present work also includes observations on the morphology of the digital tendon system in chiropteran species not previously examined, including members of the Craseonycteridae, Mystacinidae and Kerivoulinae. Unlike other bats that have a TLM, Craseonycteris and Kerivoula have a plicated proximal tendon sheath but lack distinct tubercles on the flexor tendon. This condition may be related to small body size or may represent an evolutionary intermediate between the presence of a well-developed TLM and the complete absence of this structure. Phyllostomids apparently lack the ratchet-like TLM typical of other bats, instead exhibiting modifications of the tendon sheath that may contribute to its function as a friction lock. Consideration of the distribution of TLM structures in the context of previous phylogenetic hypotheses suggests that a ratchet-type tendon lock was lost and reexpressed at least once and perhaps several times within Microchiroptera. The friction lock is an autapomorphy of Phyllostomidae.

10.
Cloud computing should inherently support various types of data-intensive workloads with different storage access patterns. This makes a high-performance storage system an important component of the Cloud. Emerging flash device technologies such as solid state drives (SSDs) are a viable choice for building high performance computing (HPC) cloud storage systems that can address more fine-grained data access patterns. However, the price per bit of SSDs is still higher than that of HDDs. This study proposes an optimized progressive file layout (PFL) method to leverage the advantages of SSDs in a parallel file system such as Lustre, so that small-file I/O performance can be significantly improved. A PFL can dynamically adjust chunk sizes and stripe patterns according to varying I/O traffic. Extensive experimental results show that this approach (i.e. building a hybrid storage system based on a combination of SSDs and HDDs) can achieve balanced throughput over mixed I/O workloads consisting of large- and small-file access patterns.

11.
Operant and maze tasks in mice are limited by the small number of trials possible in a session before mice lose motivation. We hypothesized that by manipulating reward size and session length, motivation, and hence performance, would be maintained in an automated T-maze. We predicted that larger rewards and shorter sessions would improve acquisition, and that smaller rewards and shorter sessions would maintain higher and less variable performance. Eighteen C57BL/6J mice (9 per sex) acquired (criterion 8/10 correct) and performed a spatial discrimination, with one of three reward sizes (0.02, 0.04, or 0.08 g) and one of three session schedules (15, 30, or 45 min sessions). Each mouse had a total of 360 min of access to the maze per night, for two nights, and averaged 190 trials. Analysis used a split-plot GLM with contrasts testing for linear effects. Acquisition of the discrimination was unaffected by reward size or session length/interval. After-criterion average performance improved as reward size decreased. After-criterion variability in performance was also affected: variability increased as reward size increased. Session length/interval did not affect any outcome. We conclude that an automated maze, with suitable reward sizes, can sustain performance with low variability, 5-10 times faster than traditional methods.

12.
Taking advantage of distributed storage technology and virtualization technology, cloud storage systems provide virtual machine clients with customizable storage services. They can be divided into two types: distributed file systems and block-level storage systems. Existing block-level storage systems have two disadvantages: first, some are tightly coupled with their cloud computing environments, making them hard to extend to other cloud computing platforms; second, the volume server is a bottleneck that seriously affects the performance and reliability of the whole system. In this paper we present ORTHRUS, a lightweight block-level storage system for clouds based on virtualization technology. We first design an architecture with multiple volume servers, together with its workflows, which improves system performance and avoids the single-server bottleneck. Second, we propose a Listen-Detect-Switch mechanism for ORTHRUS to handle volume server failures. Finally, we design a strategy that dynamically balances load across the volume servers: we characterize machine capability and load with a black-box model, and implement the dynamic load balancing strategy using a genetic algorithm. Extensive experimental results show that the aggregated I/O throughput of ORTHRUS is significantly improved (approximately twofold), and that both I/O throughput and IOPS are further improved by our dynamic load balancing strategy (by about 1.8 and 1.2 times, respectively).

13.
A lease is a token that grants its owner exclusive access to a resource for a defined span of time. In order to tolerate failures, leases need to be coordinated by distributed processes. We present FaTLease, an algorithm for fault-tolerant lease negotiation in distributed systems. It is built on the Paxos algorithm for distributed consensus, but avoids Paxos' main performance bottleneck: the need for persistent state. This property makes our algorithm particularly useful for applications that cannot spare any disk bandwidth. Our experiments show that FaTLease scales up to tens of thousands of concurrent leases and can negotiate thousands of leases per second in both LAN and WAN environments.
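For readers unfamiliar with leases, the time-bounded exclusivity that lets a crashed owner's lock expire on its own can be sketched as follows. This toy class only illustrates the lease concept; FaTLease's actual contribution, diskless Paxos-based negotiation among distributed processes, is not shown, and the class and method names are hypothetical.

```python
class Lease:
    """Toy lease: exclusive ownership of a resource until a deadline.
    If the owner crashes, the lease simply expires and another process
    can acquire it; no explicit release is needed.

    Time is passed in explicitly (a simulated clock) so the behavior
    is deterministic; a real system would use a coordinated clock."""

    def __init__(self, owner, duration_s, now):
        self.owner = owner
        self.expires_at = now + duration_s

    def is_valid(self, now):
        return now < self.expires_at

    def try_acquire(self, new_owner, duration_s, now):
        """Grant to new_owner only once the current lease has expired."""
        if now < self.expires_at:
            return False  # still exclusively held
        self.owner = new_owner
        self.expires_at = now + duration_s
        return True
```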

14.
In a health control service environment, that is, a periodic, membership-based AMHTS-type comprehensive health check-up system, where the evaluation of clinical data, especially in terms of subject-specific normal ranges, is most important, the medical information system is required to provide: (1) support for various network-type files; (2) real-time immediacy; and (3) assured reliability, to meet personal health control purposes. Our solution, as in other computer applications already in successful use, is an indexed direct access method (IDAM) that we developed. It provides multiple indices for the file network instead of inverted files, with a unique index-to-record relationship that prevents unrecoverable chaining destruction and thereby gives any network-type access a stable access time. Furthermore, for research purposes, data integrity for on-line and batch access was attained, as well as a retrieval language system with a multiple-key retrieval function.

15.
Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.

16.
Adaptive Sector Grouping to Reduce False Sharing in Distributed RAID
Distributed redundant arrays of inexpensive disks (RAID) are often embedded in cluster architectures. In a centralized RAID subsystem, the false sharing problem does not exist, because the disk array allows only mutually exclusive access by one user at a time. The problem does exist in a distributed RAID architecture, however, because multiple accesses may occur simultaneously in a distributed environment, and it seriously limits the effectiveness of collective I/O operations in network-based cluster computing. Traditional accesses to disks in a RAID are done at block level. The block granularity is large, say 32 KB, often resulting in false sharing among fragments within a block, and the problem becomes worse as the block size or stripe unit grows. To solve this problem, we propose an adaptive sector grouping approach to accessing a distributed RAID. Each sector is a fine-grained unit of 512 bytes; multiple sectors are grouped together to match the data block size, and the grouped sector has a variable size that can be adaptively adjusted by software. Benchmark experiments reveal the positive effects of this adaptive access scheme on RAID performance. Our scheme can reduce collective I/O access time without increasing the buffer size. Both theoretical analysis and experimental results demonstrate the performance gain from using grouped sectors for fast access to a distributed RAID.
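The false-sharing argument can be made concrete: two small writes that touch disjoint bytes still conflict when they fall into the same coarse block, while finer software-chosen groups keep them apart. A sketch with hypothetical names, assuming groups are software-aligned multiples of the 512-byte sector:

```python
SECTOR = 512  # bytes: the fine-grained unit

def groups_touched(offset, length, group_bytes):
    """Return the set of group indices that the byte range
    [offset, offset + length) touches, where group_bytes is a
    software-chosen multiple of the sector size."""
    assert group_bytes % SECTOR == 0
    first = offset // group_bytes
    last = (offset + length - 1) // group_bytes
    return set(range(first, last + 1))

# Two 1 KB writes 16 KB apart: a single 32 KB block serializes them
# (false sharing), while 4 KB groups let them proceed independently.
```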

17.
105 volunteer clients completed single sessions of dream interpretation using the Hill (1996) model, with half randomly assigned to waking life interpretation and the other half to parts of self interpretation in the insight stage of the Hill model. No differences were found between waking life and parts of self interpretations, suggesting that therapists can use either type of dream interpretation. Volunteer clients who had positive attitudes toward dreams and presented pleasant dreams had better session outcome; in addition, volunteer clients who had pleasant dreams gained more insight into their dreams. Results suggest that therapists doing single sessions of dream interpretation need to be cautious about working with dreams when volunteer clients have negative attitudes toward dreams and present unpleasant dreams.

18.
19.
QoS and Contention-Aware Multi-Resource Reservation
To provide Quality of Service (QoS) guarantees in distributed services, it is necessary to reserve multiple computing and communication resources for each service session. Meanwhile, techniques have become available for the reservation and enforcement of various types of resources. There is therefore a need for an integrated framework for coordinated multi-resource reservation. One challenge in creating such a framework is the complex relation between the end-to-end application-level QoS and the corresponding end-to-end resource requirement. Furthermore, the goals of (1) providing the best end-to-end QoS for each distributed service session and (2) increasing the overall reservation success rate of all service sessions conflict with each other. In this paper, we present a QoS and contention-aware framework for end-to-end multi-resource reservation for distributed services. The framework assumes a reservation-enabled environment, where each type of resource can be reserved. It consists of (1) a component-based QoS-Resource Model, (2) a runtime system architecture for coordinated reservation, and (3) a runtime algorithm for the computation of end-to-end multi-resource reservation plans. The algorithm alleviates the conflict between the QoS of an individual service session and the success rate of all service sessions: for each service session, it computes an end-to-end reservation plan that guarantees the highest possible end-to-end QoS level under the current end-to-end resource availability, while requiring the lowest percentage of bottleneck resource(s) among all feasible reservation plans. Our simulation results show excellent performance of this algorithm.
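The plan-selection rule stated above, highest feasible QoS level first and smallest bottleneck-resource share as the tie-breaker, can be sketched as follows. The data shapes (a plan as a QoS level plus per-resource required fractions) are an illustrative assumption, not the paper's actual interfaces.

```python
def choose_plan(plans, availability):
    """plans: iterable of (qos_level, {resource: required_fraction}).
    availability: {resource: available_fraction}.
    Returns the (qos_level, requirements) pair that is feasible, has
    the highest QoS level, and among those uses the smallest share of
    its bottleneck resource; None if no plan is feasible."""
    feasible = [(q, req) for q, req in plans
                if all(req[r] <= availability.get(r, 0.0) for r in req)]
    if not feasible:
        return None
    best_qos = max(q for q, _ in feasible)
    candidates = [req for q, req in feasible if q == best_qos]
    # Bottleneck resource = the resource with the largest required share.
    return best_qos, min(candidates, key=lambda req: max(req.values()))
```

Minimizing the bottleneck share is what leaves the most headroom for later sessions, which is how the algorithm raises the overall reservation success rate.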

20.
The transfer of scientific data has emerged as a significant challenge, as datasets continue to grow in size and demand for open access sharing increases. Current methods for file transfer do not scale well for large files and can cause long transfer times. In this study we present BioTorrents, a website that allows open access sharing of scientific data and uses the popular BitTorrent peer-to-peer file sharing technology. BioTorrents allows files to be transferred rapidly due to the sharing of bandwidth across multiple institutions and provides more reliable file transfers due to the built-in error checking of the file sharing technology. BioTorrents contains multiple features, including keyword searching, category browsing, RSS feeds, torrent comments, and a discussion forum. BioTorrents is available at http://www.biotorrents.net.
