Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
MOSIX is a cluster management system that supports preemptive process migration. This paper presents the MOSIX Direct File System Access (DFSA) provision, which can improve the performance of cluster file systems by allowing a migrated process to directly access files in its current location. Combined with an appropriate file system, this capability can substantially increase I/O performance and reduce network congestion by migrating an I/O-intensive process to the file server, rather than bringing the file's data to the process in the traditional way. DFSA is suitable for clusters that manage a pool of shared disks among multiple machines. With DFSA, it is possible to migrate parallel processes from a client node to file servers for parallel access to different files. Any consistent file system can be adapted to work with DFSA. To test its performance, we developed the MOSIX File System (MFS), which allows consistent parallel operations on different files. The paper describes DFSA and presents the performance of MFS with and without DFSA.
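The DFSA rationale, moving an I/O-intensive process to the file server instead of shipping the file's data to the process, can be sketched as a simple cost comparison. The cost model, function name and parameters below are illustrative assumptions, not taken from the paper:

```python
def should_migrate(process_image_mb, remaining_io_mb, link_mbps=100.0):
    """True when shipping the process to the file server once is cheaper
    than shipping every byte of its remaining file I/O over the network.
    A rough sketch: both costs use the same link, so units cancel."""
    migration_cost = process_image_mb / link_mbps   # pay once for the move
    remote_io_cost = remaining_io_mb / link_mbps    # pay for every byte read
    return migration_cost < remote_io_cost
```

An I/O-intensive process (small image, large remaining I/O) migrates; a compute-heavy process with little I/O stays put.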

2.
This paper presents a framework for building and deploying protocols for migrating mobile agents over the Internet. The framework enables network protocols for agent migration to be implemented naturally within mobile agents themselves and then deployed dynamically at remote hosts by migrating the agents that perform the protocols. It is built on a hierarchical mobile agent system called MobileSpaces, and several agent migration protocols for managing cluster computing systems have been designed and implemented on top of the framework. This paper describes the framework and its prototype implementation, which uses Java as both the implementation language and the protocol development language.

3.
Taking advantage of distributed storage and virtualization technology, cloud storage systems provide virtual machine clients with customizable storage services. They can be divided into two types: distributed file systems and block-level storage systems. Existing block-level storage systems have two disadvantages. First, some are tightly coupled with their cloud computing environments, making them hard to extend to other cloud computing platforms. Second, the volume server is a bottleneck that seriously affects the performance and reliability of the whole system. In this paper we present ORTHRUS, a lightweight block-level storage system for clouds based on virtualization technology. We first design an architecture with multiple volume servers, together with its workflows, which improves system performance and avoids the single-server bottleneck. Second, we propose a Listen-Detect-Switch mechanism for ORTHRUS to deal with volume server failures. Finally, we design a strategy that dynamically balances load across the volume servers: we characterize machine capability and load with a black-box model, and implement the dynamic load balancing strategy using a genetic algorithm. Extensive experimental results show that the aggregated I/O throughput of ORTHRUS is significantly improved (approximately twice that of the single-volume-server design), and that both I/O throughput and IOPS are further improved by our dynamic load balancing strategy (by about 1.8 and 1.2 times, respectively).
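The dynamic load balancing described above is based on a genetic algorithm over volume-to-server assignments. A minimal sketch, assuming loads are scalars and fitness is the gap between the most and least loaded servers; all names and GA parameters are illustrative, not the paper's:

```python
import random

def imbalance(assignment, loads, n_servers):
    """Load gap between the most and least loaded volume servers."""
    totals = [0.0] * n_servers
    for volume, server in enumerate(assignment):
        totals[server] += loads[volume]
    return max(totals) - min(totals)

def ga_balance(loads, n_servers, pop_size=30, generations=60, seed=1):
    """Evolve volume -> server assignments that minimise the load gap."""
    rng = random.Random(seed)
    n = len(loads)
    population = [[rng.randrange(n_servers) for _ in range(n)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda a: imbalance(a, loads, n_servers))
        survivors = population[:pop_size // 2]       # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:                   # mutation
                child[rng.randrange(n)] = rng.randrange(n_servers)
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda a: imbalance(a, loads, n_servers))
    return best, imbalance(best, loads, n_servers)
```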

4.
Performance management of communication networks is critical for the speed, reliability, and flexibility of information exchange between different components, subsystems, and sectors (e.g., factory, engineering design, and administration) of production process organizations in the environment of computer-integrated manufacturing (CIM). Essential to this distributed total manufacturing system is the integrated communications network over which the information leading to process interactions and plant management and control is exchanged. Such a network must be capable of handling heterogeneous traffic resulting from intermachine communications at the factory floor, CAD drawings, design specifications, and administrative information. The objective is to improve the efficiency in handling various types of messages, e.g., control signals, sensor data, and production orders, by on-line adjustment of the parameters of the network protocol. This paper presents the conceptual design, development, and implementation of a network performance management scheme for CIM applications, including flexible manufacturing. The performance management algorithm is formulated using the concepts of: (1) perturbation analysis of discrete event dynamic systems; (2) stochastic approximation; and (3) learning automata. The proposed concept for performance management can also serve as a general framework to assist the design, operation, and management of flexible manufacturing systems. The performance management procedure has been tested via emulation on a network test bed based on the Manufacturing Automation Protocol (MAP), which has been widely used for CIM networking. The conceptual design presented in this paper offers a step toward bridging the gap between management standards and users' demands for efficient network operations, since most standards, such as ISO and IEEE, address only the architecture, services, and interfaces for network management.
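Stochastic approximation, one of the three concepts the algorithm is formulated with, adjusts a protocol parameter on line from observed residuals with decreasing step sizes. A minimal Robbins-Monro sketch; the residual function and gain schedule are illustrative assumptions, not the paper's formulation:

```python
def robbins_monro(residual, theta0=0.5, steps=200, gain=1.0):
    """Drive a protocol parameter toward the root of an observable
    residual function, using decreasing step sizes a_k = gain / k
    (the classic Robbins-Monro schedule)."""
    theta = theta0
    for k in range(1, steps + 1):
        theta -= (gain / k) * residual(theta)
    return theta
```

With a residual like `lambda t: t - 0.8` (measured deviation from an unknown target), the iterate settles at the target.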

5.
File and Object Replication in Data Grids
Data replication is a key issue in a Data Grid and can be managed in different ways and at different levels of granularity: for example, at the file level or object level. In the High Energy Physics community, Data Grids are being developed to support the distributed analysis of experimental data. We have produced a prototype data replication tool, the Grid Data Mirroring Package (GDMP) that is in production use in one physics experiment, with middleware provided by the Globus Toolkit used for authentication, data movement, and other purposes. We present here a new, enhanced GDMP architecture and prototype implementation that uses Globus Data Grid tools for efficient file replication. We also explain how this architecture can address object replication issues in an object-oriented database management system. File transfer over wide-area networks requires specific performance tuning in order to gain optimal data transfer rates. We present performance results obtained with GridFTP, an enhanced version of FTP, and discuss tuning parameters.

6.
This paper presents the syntax and semantics of a component-oriented rule-based language for specifying formal models of manufacturing systems. A model captures the state of a component of the system in a set of first-order logic predicates, and it captures the semantics of the operations performed by this component in a set of rules that determine the preconditions and postconditions of each operation. The models are then used to plan the sequence of operations for each class of jobs to be manufactured by these systems. A plan-oriented fault detection and correction strategy is proposed. This strategy can automatically handle any combination of faults that may occur when monitoring the operations of manufacturing systems. A fault tree is consulted prior to executing the scheduled operations of a plan, and the faults that affect the execution of these operations are handled subsequently. Resuming the original cyclic schedule is attempted whenever feasible. As a proof of concept, a prototype implementation of both the main constructs of the component-oriented rule-based language and the planning and fault-recovery algorithms presented in this paper has been completed. This prototype is implemented on a Unix-based system in the Ada programming language. The specification of a manufacturing system is first expressed in the proposed language. These statements are then translated into Ada code. This code is next compiled by a Verdix Ada compiler and executed in order to create and populate the model data structure of the system. A detailed plan of execution and a set of fault-recovery plans may then be derived for a job to be manufactured on this system.
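The precondition/postcondition rules described above can be sketched in STRIPS-like form: an operation is applicable when its preconditions hold in the current state, and applying it retracts its delete-list and asserts its add-list. The operation encoding below is an illustrative assumption, not the paper's language:

```python
def applicable(operation, state):
    """An operation fires only when all of its preconditions hold."""
    preconditions, _, _ = operation
    return preconditions <= state          # subset test on predicate sets

def apply_op(operation, state):
    """Postconditions: delete-list predicates retract, add-list assert."""
    preconditions, add_list, delete_list = operation
    assert preconditions <= state, "preconditions not satisfied"
    return (state - delete_list) | add_list
```

A planner chains `applicable`/`apply_op` to derive the operation sequence for a job.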

7.
The communication architecture of the DIMMnet-1 network interface, based on MEMOnet, is described. MEMOnet is a class of network interfaces plugged into a memory slot. This paper proposes three message transfer mechanisms: atomic on-the-fly sending (AOTF), block on-the-fly sending (BOTF), and OTF receiving with selective address translation. The DIMMnet-1 prototype will have an ASIC named Martini, two banks of PC133-based SO-DIMM slots, and an 8 Gbps full-duplex optical link. The software overhead incurred to generate a message is only 1 CPU cycle, and the estimated hardware delay is 105 ns using AOTF. The estimated hardware delay for receiving to on-chip memory using the OTF receiver is 90 ns. The estimated achievable sending bandwidth of DIMMnet-1 using BOTF is 984 MB/s, which was observed in our experiments. This bandwidth is 7.4 times higher than the maximum bandwidth of PCI. This high performance is available even when simultaneous sending and receiving are executed on a cheap personal computer with DIMM slots. This paper also describes the effects of BOTF for a PCI-based NIC.

8.

A continuing trend in many scientific disciplines is the growth in the volume of data collected by scientific instruments and the desire to rapidly and efficiently distribute this data to the scientific community. As both the data volume and the number of subscribers grow, a reliable network multicast is a promising approach to alleviate the demand for the bandwidth needed to support efficient data distribution to multiple, geographically-distributed research communities. In prior work, we identified the need for a reliable network multicast: scientists engaged in atmospheric research subscribing to meteorological file-streams. An application called Local Data Manager (LDM) is used to disseminate meteorological data to hundreds of subscribers. This paper presents a high-performance, reliable network multicast solution, Dynamic Reliable File-Stream Multicast Service (DRFSM), and describes a trial deployment comprising eight university campuses connected via Research-and-Education Networks (RENs) and Internet2, and a DRFSM-enabled LDM (LDM7). Using this deployment, we evaluated the DRFSM architecture, which uses network multicast with a reliable transport protocol and leverages Layer-2 (L2) multipoint Virtual LAN (VLAN/MPLS). A performance monitoring system was developed to collect the real-time performance of LDM7. The measurements showed that our proof-of-concept prototype worked significantly better than the current production LDM (LDM6) in two ways. First, LDM7 distributes data faster than LDM6. With six subscribers and a 100 Mbps bandwidth limit setting, an almost 22-fold improvement in delivery time was observed with LDM7. Second, LDM7 significantly reduces the bandwidth requirement needed to deliver data to subscribers. LDM7 needed 90% less bandwidth than LDM6 to achieve a 20 Mbps average throughput across four subscribers.
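The bandwidth argument for multicast can be illustrated with a back-of-the-envelope model: unicast delivery scales with the number of subscribers, while a single multicast copy does not. This is a deliberate simplification that ignores distribution-tree overheads; the 90% figure reported above depends on the actual deployment:

```python
def unicast_bandwidth(rate_mbps, subscribers):
    """Sender pushes one full-rate copy per subscriber."""
    return rate_mbps * subscribers

def multicast_bandwidth(rate_mbps, subscribers):
    """One copy on the shared distribution tree, independent of fan-out."""
    return rate_mbps

def bandwidth_saving(rate_mbps, subscribers):
    """Fraction of sender bandwidth saved by multicasting."""
    uni = unicast_bandwidth(rate_mbps, subscribers)
    return 1 - multicast_bandwidth(rate_mbps, subscribers) / uni
```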


9.
In this paper, we present a fault-tolerance and recovery system called FRASystem (Fault Tolerant & Recovery Agent System) that uses multiple agents in distributed computing systems. Previous rollback-recovery protocols depended on the inherent communication layer and the underlying operating system, which caused a decline in computing performance. We propose a rollback-recovery protocol that works independently of the operating system, improving portability and extensibility. We define four types of agents: (1) a recovery agent performs the rollback-recovery protocol after a failure; (2) an information agent constructs domain knowledge as fault-tolerance rules and information during failure-free operation; (3) a facilitator agent controls the communication between agents; (4) a garbage collection agent performs garbage collection of obsolete fault-tolerance information. Since agent failures may lead to inconsistent system states and a domino effect, we propose an agent recovery algorithm. A garbage collection protocol addresses the performance degradation caused by the growth of saved fault-tolerance information in stable storage. We implemented a prototype of FRASystem using Java and CORBA and evaluated the proposed rollback-recovery protocol. The simulation results indicate that the performance of our protocol is better than that of previous rollback-recovery protocols which use independent checkpointing and pessimistic message logging without agents. Our contributions are as follows: (1) this is the first rollback-recovery protocol using agents, (2) FRASystem is not dependent on an operating system, and (3) FRASystem provides portability and extensibility.
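The checkpointing-plus-pessimistic-message-logging style of recovery that the paper compares against can be sketched as follows. This is a toy single-process model with illustrative names; the paper's agent-based protocol is considerably more elaborate:

```python
class RecoverableProcess:
    """Independent checkpointing with pessimistic message logging."""
    def __init__(self):
        self.state = 0
        self.checkpoint = 0
        self.log = []                      # messages logged since checkpoint

    def receive(self, message):
        self.log.append(message)           # log before applying (pessimistic)
        self.state += message

    def take_checkpoint(self):
        self.checkpoint = self.state
        self.log.clear()                   # logged messages become garbage

    def recover(self):
        self.state = self.checkpoint       # roll back to stable state ...
        for message in self.log:           # ... then replay the logged messages
            self.state += message
```

Clearing the log at each checkpoint is exactly the garbage collection the abstract motivates: without it, the stable-storage log grows without bound.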

10.
It is extremely important to minimize network access time in constructing a high-performance PC cluster system. For an SCI-based PC cluster, it is possible to reduce the network access time by maintaining network cache in each cluster node. This paper presents a Network-Cache-Coherent-NUMA (NCC-NUMA) card that utilizes network cache for SCI-based PC clustering. The NCC-NUMA card is directly plugged into the PCI slot of each node, and contains shared memory, network cache, and interconnection modules. The network cache is maintained for the shared memory on the PCI bus of cluster nodes. The coherency mechanism between the network cache and the shared memory is based on the IEEE SCI standard. Both a simulator and an NCC-NUMA prototype card are developed to evaluate the performance of the system. According to the experiments, the cluster system with the NCC-NUMA card showed considerable improvements compared with an SCI-based cluster without network cache.

11.
We present gmblock, a block-level storage sharing system over Myrinet which uses an optimized I/O path to transfer data directly between the storage medium and the network, bypassing the host CPU and main memory bus of the storage server. It is device-driver independent and retains the protection and isolation features of the OS. We evaluate the performance of a prototype gmblock server and find that: (a) the proposed techniques eliminate memory and peripheral bus contention, increasing remote I/O bandwidth significantly, in the order of 20–200% compared to an RDMA-based approach; (b) the impact of remote I/O on local computation becomes negligible; (c) the performance characteristics of RAID storage combined with limited NIC resources reduce performance. We introduce synchronized send operations to improve the degree of disk-to-network I/O overlapping. We deploy the OCFS2 shared-disk filesystem over gmblock and show gains for various application benchmarks, provided I/O scheduling can eliminate the disk bottleneck due to concurrent access.

12.
Public cloud storage auditing with deduplication has been studied in recent years to assure data integrity and improve storage efficiency for cloud storage. In previous schemes, however, the cloud has to store the link between a file and its data owners to support valid data downloading. From this file-owner link, the cloud server can identify which users own the same file, which might expose the sensitive relationships among the data owners of a multi-owner file and seriously harm their privacy. To address this problem, we propose an identity-protected secure auditing and deduplication scheme in this paper. In the proposed scheme, the cloud cannot learn any useful information about the relationships of data owners. Unlike existing schemes, the cloud does not need to store the file-owner link to support valid data downloading. Instead, when a user downloads a file, he only needs to anonymously submit a credential to the cloud, and can download the file only if this credential is valid. Beyond this main contribution, our scheme has the following advantages over existing schemes. First, the proposed scheme achieves constant storage: the storage space is fully independent of the number of data owners possessing the same file. Second, the proposed scheme achieves constant computation: only the first uploader needs to generate the authenticator for each file block, while subsequent owners do not need to generate it. As a result, our scheme greatly reduces the storage overhead of the cloud and the computation overhead of data owners. The security analysis and experimental results show that our scheme is secure and efficient.
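The constant-storage/constant-computation idea, where only the first uploader stores the data and generates the authenticator while later owners merely obtain a handle, can be sketched with content-addressed deduplication. The hash-based "authenticator" below is a stand-in for the scheme's cryptographic authenticators, and all names are illustrative:

```python
import hashlib

class DedupCloud:
    """First uploader stores data + authenticator; later owners reuse both."""
    def __init__(self):
        self.store = {}                    # fingerprint -> (data, authenticator)

    def upload(self, data: bytes) -> str:
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in self.store:  # only the first uploader computes
            authenticator = hashlib.sha256(b"auth|" + data).hexdigest()
            self.store[fingerprint] = (data, authenticator)
        return fingerprint                 # handle the owner keeps

    def download(self, fingerprint: str) -> bytes:
        data, _ = self.store[fingerprint]
        return data
```

Note that `store` holds one entry per distinct file regardless of how many owners uploaded it, which is the constant-storage property.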

13.
Quality of service (QoS) is a central concern in research areas such as distributed systems, real-time multimedia applications and networking. The requirements of these systems are to satisfy reliability, uptime, security and throughput constraints as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various time constraints without disrupting control flows. In particular, video compressors produce variable-bit-rate streams that mismatch the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. It is therefore necessary to adapt the communication bandwidth used to transfer compressed multimedia streams, using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE automates the calculation of the compression level and the adjustment of the stream's bandwidth. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic QoS management, and proposes a framework for dynamic QoS management of multimedia transmission over Ethernet with the FTT-ESE protocol. The paper also presents distinct QoS metrics based on both image quality and network features. Experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution we have designed and implemented a simulator based on Matlab/Simulink, a tool to evaluate different network architectures using Simulink blocks.
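The idea of computing a compression level so that a variable-bit-rate stream fits a constrained channel can be sketched as follows. The discrete levels and the selection rule are illustrative assumptions, not FTT-ESE's actual mechanism:

```python
def pick_compression(stream_kbps, channel_kbps,
                     levels=(1.0, 0.75, 0.5, 0.25)):
    """Return the lightest compression whose output fits the channel.
    Each level is the fraction of the bitrate retained (1.0 = uncompressed),
    so earlier levels preserve more image quality."""
    for retained in levels:
        if stream_kbps * retained <= channel_kbps:
            return retained
    return levels[-1]                      # fall back to the strongest level
```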

14.
The rapid growth of Internet applications has made communication anonymity an increasingly important, or even indispensable, security requirement. Onion routing has been employed as an infrastructure for anonymous communication over a public network, providing anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. However, existing onion routing protocols usually exhibit poor performance due to repeated encryption operations. In this paper, we first present an improved anonymous multi-receiver identity-based encryption (AMRIBE) scheme and an improved identity-based one-way anonymous key agreement (IBOWAKE) protocol. We then propose an efficient onion routing protocol named AIB-OR that provides provable security and strong anonymity. Our main approach is to use the improved AMRIBE scheme and IBOWAKE protocol in onion routing circuit construction. Compared with other onion routing protocols, AIB-OR provides high efficiency, scalability, strong anonymity and fault tolerance. Performance measurements from a prototype implementation show that AIB-OR can achieve high bandwidth and low latency when deployed over the Internet.
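Onion routing's layered wrapping, where each relay peels exactly one layer, can be sketched with a toy cipher. XOR stands in for real encryption purely to show the layering structure; a real circuit would use cryptographic primitives such as the paper's AMRIBE/IBOWAKE constructions:

```python
def xor_bytes(data, key):
    """Toy symmetric 'cipher' (XOR with a repeating key). NOT secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message, hop_keys):
    """Wrap the innermost layer first, so hop 1 peels the outermost layer."""
    for key in reversed(hop_keys):
        message = xor_bytes(message, key)
    return message

def traverse_circuit(onion, hop_keys):
    """Each relay removes exactly one layer with its own key."""
    for key in hop_keys:
        onion = xor_bytes(onion, key)
    return onion
```

The repeated per-hop encryption visible here is exactly the overhead the abstract says dominates onion routing performance.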

15.
Life Sciences, 1995, 57(1): PL7–PL12
The in vitro antiepileptic activity of the synthetic glucocorticoid dexamethasone (DEX) was tested in rat hippocampal slices on CA1 epileptiform activity induced by sodium penicillin (PEN). Slice perfusion with 1 mM PEN produced, within 60 min, the development of CA1 epileptiform bursting consisting of an increase of the primary CA1 population spike followed by the appearance of secondary epileptiform population spikes. Slice perfusion with 100 μM DEX together with PEN (1 mM) partially prevented, but did not block, the expression of the CA1 epileptiform bursting, as evidenced by a significant (P < 0.05) reduction of the duration of the bursting due to the epileptogenic agent. Slice perfusion with 50 μM DEX together with PEN (1 mM) failed to prevent or block the expression of the CA1 penicillin-induced epileptiform bursting. A 60 min slice pretreatment with 50–100 μM DEX, followed by slice perfusion with 50–100 μM DEX together with PEN (1 mM), prevented the expression of the CA1 epileptiform bursting. Cycloheximide (1 μM), a protein synthesis inhibitor, perfused together with DEX reversed the inhibitory effects of dexamethasone on the expression of the penicillin-induced CA1 epileptiform bursting. The results indicate that the synthetic glucocorticoid DEX exhibits concentration- and time-dependent in vitro antiepileptic effects. In addition, the data suggest that this inhibitory effect occurs via a protein synthesis-dependent mechanism.

16.
With the increasing number of scientific applications manipulating huge amounts of data, effective high-level data management is an increasingly important problem. Unfortunately, so far the solutions to the high-level data management problem either require a deep understanding of specific storage architectures and file layouts (as in high-performance file storage systems) or produce unsatisfactory I/O performance in exchange for ease of use and portability (as in relational DBMSs). In this paper we present a novel application development environment which is built around an active meta-data management system (MDMS) to handle high-level data in an effective manner. The key components of our three-tiered architecture are the user application, the MDMS, and a hierarchical storage system (HSS). Our environment overcomes the performance problems of pure database-oriented solutions, while maintaining their advantages in terms of ease of use and portability. The high levels of performance are achieved by the MDMS with the aid of user-specified, performance-oriented directives. Our environment supports a simple, easy-to-use yet powerful user interface, leaving the task of choosing appropriate I/O techniques for the application at hand to the MDMS. We discuss the importance of an active MDMS and show how the three components of our environment, namely the application, the MDMS, and the HSS, fit together. We also report performance numbers from our ongoing implementation and illustrate that significant improvements are made possible without undue programming effort. This revised version was published online in July 2006 with corrections to the Cover Date.

17.
Conflict resolution is one of the key issues in maintaining consistency and in supporting smooth human–human interaction in real-time collaborative systems. This paper presents a novel approach to meta-operation conflict resolution for feature-based collaborative CAD systems. Although the commutative replicated data type (CRDT) is an emerging technique for conflict resolution, it is not capable of resolving conflicts among meta operations in 3D CAD systems. By defining three types of meta operations, this work extends CRDT-based conflict resolution from 1D to 3D applications. The paper defines the dependency, causality, conflict and compatibility relations specific to 3D collaborative CAD systems. Conflicts between feature-based operations are automatically detected by tracking topological entity changes with the assistance of a persistent data structure, the topological entity structure tree (\(TES\_Tree\)). An efficient commutativity-based conflict combination method is proposed that preserves the design intention of each user in a transparent way and maintains the eventual consistency of the system. The proposed methods are tested in a prototype system with case studies, time complexity analysis and a correctness proof.
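The conflict relation between feature operations, where two operations conflict when they touch the same topological entity and at least one of them changes it, can be sketched as follows. This is a deliberate simplification of the paper's relations, and the encoding is illustrative:

```python
WRITES = {"modify", "delete"}

def conflicts(op_a, op_b):
    """Two (action, entity) operations conflict when they target the same
    topological entity and at least one of them changes it; two reads of
    the same entity, or any actions on different entities, are compatible."""
    action_a, entity_a = op_a
    action_b, entity_b = op_b
    return entity_a == entity_b and (action_a in WRITES or action_b in WRITES)
```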

18.
A cost-effective secondary storage architecture for parallel computers is to distribute storage across all processors, which then engage in either computation or I/O, depending on the demands of the moment. A difficulty associated with this architecture is that access to storage on another processor typically requires the cooperation of that processor, which can be hard to arrange if the processor is engaged in other computation. One partial solution to this problem is to require that remote I/O operations occur only via collective calls. In this paper, we describe an alternative approach based on the use of single-sided communication operations such as Active Messages. We present an implementation of this approach called Distant I/O and present experimental results that quantify the low-level performance of DIO mechanisms. This technique is exploited to support a non-collective parallel shared-file model for a large out-of-core scientific application with very high I/O bandwidth requirements. The achieved performance exceeds by a wide margin the performance of a well-equipped PIOFS parallel filesystem on the IBM SP.

19.
This paper describes a novel technique for establishing a virtual file system that allows data to be transferred user-transparently and on-demand across computing and storage servers of a computational grid. Its implementation is based on extensions to the Network File System (NFS) that are encapsulated in software proxies. A key differentiator between this approach and previous work is the way in which file servers are partitioned: while conventional file systems share a single (logical) server across multiple users, the virtual file system employs multiple proxy servers that are created, customized and terminated dynamically, for the duration of a computing session, on a per-user basis. Furthermore, the solution does not require modifications to standard NFS clients and servers. The described approach has been deployed in the context of the PUNCH network-computing infrastructure, and is unique in its ability to integrate unmodified, interactive applications (even commercial ones) and existing computing infrastructure into a network computing environment. Experimental results show that: (1) the virtual file system performs well in comparison to native NFS in a local-area setup, with mean overheads of 1 and 18%, for the single-client execution of the Andrew benchmark in two representative computing environments, (2) the average overhead for eight clients can be reduced to within 1% of native NFS with the use of concurrent proxies, (3) the wide-area performance is within 1% of the local-area performance for a typical compute-intensive PUNCH application (SimpleScalar), while for the I/O-intensive application Andrew the wide-area performance is 5.5 times worse than the local-area performance.
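The per-user, per-session proxy lifecycle described above can be sketched as a small registry. Class and method names are illustrative; the real system proxies NFS RPC traffic rather than tracking mounts in a dictionary:

```python
class ProxyManager:
    """Create, customise and terminate one file-system proxy per user session."""
    def __init__(self):
        self.active = {}                       # user -> proxy state

    def open_session(self, user):
        self.active[user] = {"mounts": []}     # a fresh per-user proxy

    def mount(self, user, path):
        """Customise this user's proxy with a session-specific mount."""
        self.active[user]["mounts"].append(path)

    def close_session(self, user):
        del self.active[user]                  # the proxy dies with the session
```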

20.
Methods for analyzing the amino-acid sequence of a protein for the purposes of predicting its three-dimensional structure were systematically analyzed using knowledge engineering techniques. The resulting entities (data) and relations (processing methods and constraints) have been represented within a generalized dependency network consisting of 29 nodes and over 100 links. It is argued that such a representation meets the requirements of knowledge-based systems in molecular biology. This network is used as the architecture for a prototype knowledge-based system that simulates logically the processes used in protein structure prediction. Although developed specifically for applications in protein structure prediction, the network architecture provides a strategy for tackling the general problem of orchestrating and integrating the diverse sources of knowledge that are characteristic of many areas of science.
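A dependency network of data and processing methods can be represented as a directed graph in which a change at one node invalidates every node downstream of it. A minimal sketch; the node names are illustrative, not the paper's 29-node network:

```python
def downstream(links, start):
    """links: node -> list of nodes that depend on it. Returns every node
    whose result must be recomputed when `start` changes (transitive
    closure over the dependency edges)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for dependent in links.get(node, ()):
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return seen
```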

