Similar Literature
20 similar articles found (search time: 31 ms)
1.
Autonomous recovery in componentized Internet applications (cited: 1; self-citations: 0; other citations: 1)
In this paper we show how to reduce downtime of J2EE applications by rapidly and automatically recovering from transient and intermittent software failures, without requiring application modifications. Our prototype combines three application-agnostic techniques: macroanalysis for fault detection and localization, microrebooting for rapid recovery, and external management of recovery actions. The individual techniques are autonomous and work across a wide range of componentized Internet applications, making them well suited to the rapidly changing software of Internet services. The proposed framework has been integrated with JBoss, an open-source J2EE application server. Our prototype provides an execution platform that can automatically recover J2EE applications within seconds of the manifestation of a fault. It can give a subset of active end users the illusion of continuous uptime, despite failures occurring behind the scenes, even when the system has no functional redundancy.
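The recovery pipeline described above can be pictured as a small supervisor that escalates from component-level microreboots to a full restart when reboots stop helping. A minimal sketch with hypothetical hooks (detect_faulty_components, microreboot, and full_restart are placeholders, not the JBoss prototype's API):

```python
import time

def recovery_loop(app, max_microreboots=3, poll=1.0):
    """Supervise an application: microreboot faulty components first,
    escalate to a full restart if a component keeps failing (sketch)."""
    attempts = {}
    while app.running:
        # Stand-in for the paper's macroanalysis-based fault localization.
        for comp in app.detect_faulty_components():
            attempts[comp] = attempts.get(comp, 0) + 1
            if attempts[comp] <= max_microreboots:
                app.microreboot(comp)   # rapid, component-level recovery
            else:
                app.full_restart()      # escalate when microreboots fail
                attempts.clear()
        time.sleep(poll)                # polling interval (illustrative)
```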

2.
A lease is a token which grants its owner exclusive access to a resource for a defined span of time. To tolerate failures, leases need to be coordinated by distributed processes. We present FaTLease, an algorithm for fault-tolerant lease negotiation in distributed systems. It is built on the Paxos algorithm for distributed consensus, but avoids Paxos' main performance bottleneck: the requirement for persistent state. This property makes our algorithm particularly useful for applications that cannot spare any disk bandwidth. Our experiments show that FaTLease scales up to tens of thousands of concurrent leases and can negotiate thousands of leases per second in both LAN and WAN environments.
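A lease here is just a token pairing an owner with an expiry time; what FaTLease adds is fault-tolerant, Paxos-based agreement on who holds it. A minimal single-process sketch of the lease-token semantics only, with hypothetical names (the distributed negotiation itself is not reproduced):

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Lease:
    owner: str
    expires_at: float  # absolute time; owner has exclusive access until then

def try_acquire(current: Optional[Lease], requester: str, span: float) -> Lease:
    """Grant the lease if it is free or expired; otherwise keep the holder.
    FaTLease runs this decision through consensus rounds among replicas."""
    now = time.monotonic()
    if current is None or current.expires_at <= now:
        return Lease(owner=requester, expires_at=now + span)
    return current  # still held: requester must retry after expiry

lease = try_acquire(None, "node-a", span=10.0)
assert lease.owner == "node-a"
```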

3.
Fault tolerance in parallel systems has traditionally been achieved through a combination of redundancy and checkpointing methods. This notion has also been extended to message-passing systems with user-transparent process checkpointing and message logging. Furthermore, studies of multiple types of rollback and recovery have been reported in the literature, ranging from communication-induced checkpointing to pessimistic and synchronous solutions. However, many of these solutions incur high overhead because of their inability to utilize application-level information. This paper describes the design and implementation of MPI/FT, a high-performance MPI-1.2 implementation enhanced with low-overhead functionality to detect and recover from process failures. The strategy behind MPI/FT is that fault tolerance in message-passing middleware can be optimized based on an application's execution model, derived from its communication topology and parallel programming semantics. MPI/FT exploits the specific characteristics of two parallel application execution models in order to optimize performance. MPI/FT also introduces a self-checking thread that monitors the functioning of the middleware itself. User-aware checkpointing and user-assisted recovery are compatible with MPI/FT and complement the techniques used here. This paper offers a classification of MPI applications for fault-tolerant MPI purposes, and the MPI/FT implementation discussed here provides different middleware versions specifically tailored to each of the two models studied in detail. The interplay of various parameters affecting the cost of fault tolerance is investigated. Experimental results demonstrate that this approach yields a low-overhead, MPI-based fault-tolerant communication middleware implementation.
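The self-checking thread mentioned above is essentially a watchdog inside the middleware. A minimal sketch of such a watchdog, assuming a hypothetical heartbeat_ok() health probe and on_failure() recovery hook (MPI/FT's actual monitor is internal to its MPI-1.2 middleware):

```python
import threading
import time

def start_self_check(heartbeat_ok, on_failure, period=0.5, max_misses=3):
    """Periodically probe the middleware's own health; trigger recovery
    after several consecutive failed probes (illustrative watchdog)."""
    def watch():
        misses = 0
        while True:
            misses = 0 if heartbeat_ok() else misses + 1
            if misses >= max_misses:   # tolerate transient hiccups
                on_failure()
                misses = 0
            time.sleep(period)
    t = threading.Thread(target=watch, daemon=True)
    t.start()
    return t
```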

4.
Regardless of the species, the development of a multicellular organism requires the precise execution of essential developmental processes, including patterning, growth, proliferation, and differentiation. The cell cycle, in addition to its role as coordinator of DNA replication and mitosis, is also a coordinator of developmental processes and a target of developmental signaling pathways. Perhaps because of its central role during development, the cell cycle mechanism, its regulation, and its effects on developing tissues are remarkably complex. It was in this light that the Keystone meeting on the cell cycle and development was held at Snowbird, Utah in January 2004.

5.
The authors assess mobile methods of X-ray computer-aided tomography (CAT) and suggest an organizational and methodological scheme for their application. Their program for the first, and so far only, mobile CAT device in the country is based on new principles of mobile CAT application: the device is deployed at specialized hospitals of large regions, where patients with the best indications for CAT are assembled. Over 15,000 examinations were carried out under the suggested CAT program over 4 years, resulting in the detection of 1,295 brain tumors and 804 cases of neoplastic involvement of the abdominal cavity and retroperitoneal space. The authors claim that wide application of mobile CAT devices according to their program will help solve the problem of unavailability of such examinations, as it removes the principal cause of this unavailability: the economic burden imposed by the high price of the equipment. One mobile device may replace three permanent CAT devices if used according to the suggested program.

6.
Developments in biocatalysis have been largely fuelled by consumer demand for new products, industrial efforts to improve existing processes and minimize waste, governmental measures to regulate consumer safety, and scientific advancement. One of the major hurdles to applying biocatalysis to chemical synthesis is the unavailability of an enzyme that catalyses the desired reaction well enough to allow viable process development. Even when the desired enzyme is available, it often forces process engineers to alter process parameters due to inadequacies of the enzyme, such as instability, inhibition, or low yield or selectivity. Developments in the field of enzyme and reaction engineering have provided means to achieve these ends, such as directed evolution, de novo protein design, use of non-conventional media, new substrates for old enzymes, active-site imprinting, and altered temperature. Utilization of enzyme discovery and improvement tools therefore provides a feasible way to overcome this problem. Judicious employment of these tools has produced significant advancements that have carried research from laboratory to market, thereby driving economic growth; however, further opportunities remain unexplored. The present review highlights some of these achievements and potential opportunities.

7.
Although success criteria for seagrass restoration have been in place for some time, there has been little consistency regarding how much habitat should be restored for every unit area lost (the replacement ratio). Extant success criteria focus on persistence, area, and habitat quality (shoot density). These metrics, while conservative, remain largely accepted for the seagrass ecosystem. Computation of the replacement ratio using economic tools has recently been integrated with seagrass restoration; it is based on the intrinsic recovery rate of the injured seagrass beds compared with the efficacy of the restoration itself. In this application, field surveys of injured seagrass beds in the Florida Keys National Marine Sanctuary (FKNMS) were conducted over several years and provide the basis for computing the intrinsic recovery rate and thus the replacement ratio. The computation is performed using Habitat Equivalency Analysis (HEA), which determines the on-site services lost to the ecological function of an area as the result of an injury and sets this loss against the difference between intrinsic recovery and the recovery afforded by restoration. Joining empirical field data with economic theory has produced a reasonable and typically conservative means of determining the level of restoration, and this has been fully supported in Federal Court rulings. Clearly defined project goals allow the success criteria to be applied in a predictable, consistent, reasonable, and fair manner.
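HEA balances the discounted services lost while the injured bed recovers on its own against the discounted services a unit of restoration provides; the ratio of the two is the replacement ratio. A minimal sketch of that computation with illustrative numbers (not the FKNMS parameters, and a strong simplification of full HEA, which also models the restored habitat's own service trajectory):

```python
def discounted_service_years(service_per_year, rate=0.03):
    """Present value of a stream of annual service levels
    (fractions of full ecological function per unit area)."""
    return sum(s / (1 + rate) ** t
               for t, s in enumerate(service_per_year, start=1))

# Injured bed recovers linearly over 10 years: losses 1.0, 0.9, ..., 0.1.
interim_loss = discounted_service_years([1 - t / 10 for t in range(10)])
# Each restored unit area supplies full service over the same horizon.
gain_per_unit = discounted_service_years([1.0] * 10)
replacement_ratio = interim_loss / gain_per_unit
print(f"replacement ratio ~= {replacement_ratio:.2f}")
```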

8.
We show that coliphage 186 infection is dependent upon the host initiation functions dnaA and dnaC, which differentiates the phage from lambda and P2. The possibility is therefore entertained that the delay in 186 replication seen after infection of UV-irradiated bacterial cells reflects the temporary unavailability of one or both of these functions. Infections with P1 and Mu need host dnaC but not dnaA; they show some sensitivity to pre-irradiation of the host, but are not as sensitive as 186.

9.
Abstract: Genetic manipulation systems are extremely important tools for studying the functions of genes and gene products. Research on genetic manipulation systems in hyperthermophilic archaea has lagged behind that in methanogens and halophilic archaea, mainly because of the lack of selectable markers. Over the past decade, however, great progress has been made on genetic manipulation systems in hyperthermophilic crenarchaea, represented by Sulfolobus, and hyperthermophilic euryarchaea, represented by Thermococcus kodakaraensis. This article reviews the progress and applications of the genetic manipulation systems of these two groups of hyperthermophilic archaea.

10.
The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent of which is Particle Swarm Optimization (PSO). However, classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting tracking accuracy. To overcome these drawbacks, we have developed a method based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem in which the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both silhouette and edge likelihoods are used in the fitness function. Experiments on the Brown and HumanEva-II datasets demonstrate that H-MCPSO outperforms two leading alternative approaches, the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support these claims.
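Plain global-best PSO, whose premature convergence motivates the hierarchical multi-swarm variant above, is compact enough to sketch. Below, the paper's 34-dimensional silhouette-and-edge likelihood is replaced by a toy objective; this is canonical PSO, not H-MCPSO:

```python
import random

def pso(fitness, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `fitness` with canonical global-best PSO (sketch)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, val = pso(lambda x: sum(v * v for v in x), dim=4)  # toy objective
```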

11.
A key problem in executing performance-critical applications in distributed computing environments (e.g. the Grid) is the selection of resources. Research on “automatic resource selection” aims to allocate resources on behalf of users to optimize execution performance. However, most current approaches are static (i.e. resource selection is performed prior to execution) and need detailed application-specific information. In this paper, we introduce a novel on-line automatic resource selection approach based on a simple control principle: the application continuously reports an Execution Satisfaction Degree (ESD) to the middleware Application Agent (AA), which relies on the reported ESD values to learn the execution behavior and tune the computing environment by adding, replacing, or deleting resources during execution, in order to satisfy users' performance requirements. We introduce two policies that enable the AA to learn and tune the computing environment: the Utility Classification policy and the Desired Processing Power Estimation (DPPE) policy. Each policy is validated with an iterative application and a non-iterative application, demonstrating that both policies effectively support most kinds of applications.
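The feedback idea reduces to a small control loop: read the reported ESD, add capacity when satisfaction is too low, release it when satisfaction is comfortably high. A minimal sketch with hypothetical hooks (read_esd, add_resource, and remove_resource are placeholders, not the paper's AA API):

```python
def tune_resources(read_esd, add_resource, remove_resource,
                   low=0.8, high=0.95, steps=50):
    """Feedback loop: below `low` the execution is under-satisfied, so add
    capacity; above `high`, release surplus (illustrative policy only)."""
    for _ in range(steps):
        esd = read_esd()        # application-reported satisfaction in [0, 1]
        if esd < low:
            add_resource()
        elif esd > high:
            remove_resource()
        # inside the band: leave the computing environment unchanged
```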

12.
Long-running applications are often subject to failures, which lead to unacceptable system overheads. Checkpointing is used to reduce the losses in the event of a failure. In the two-level checkpoint recovery scheme used for long-running tasks, the system must periodically transfer a huge memory context to remote stable storage, so the overheads of setting checkpoints and the re-computing time become a critical issue that directly impacts the system's total overheads. Motivated by these concerns, this paper presents a new model that introduces incremental checkpoints (i-checkpoints) into the existing two-level checkpoint recovery scheme to deal with the more probable failures at smaller cost and faster speed. The proposed scheme is independent of the specific failure distribution type and can be applied to different failure distribution types. We analyze the two-level incremental and two-level checkpoint recovery schemes under both the Weibull and exponential distributions, the two distributions that best fit actual failure behavior. The comparison results show that the total overheads of setting checkpoints, the total re-computing time, and the system's total overheads in the two-level incremental checkpoint recovery scheme are all significantly smaller than those in the two-level checkpoint recovery scheme. Finally, limitations of our study are discussed, together with open questions and possible future work.
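The trade-off can be made concrete with a toy accounting model: cheap local incremental checkpoints absorb most failures, while expensive remote checkpoints are taken less often. A simulation sketch under illustrative costs and an exponential failure process (the paper also analyzes Weibull; none of these numbers come from it):

```python
import random

def total_overhead(T=10_000, mttf=500, local_int=20, remote_int=200,
                   c_local=0.1, c_remote=2.0):
    """Estimate checkpointing plus lost-work overhead for a task of
    length T under random failures (toy two-level model)."""
    random.seed(0)
    done, overhead = 0.0, 0.0
    next_fail = random.expovariate(1 / mttf)
    while done < T:
        step = min(local_int, T - done)
        if next_fail < step:            # failure strikes mid-interval
            overhead += next_fail       # work since the last checkpoint is lost
            next_fail = random.expovariate(1 / mttf)
            continue
        next_fail -= step
        done += step
        overhead += c_local             # local incremental checkpoint
        if done % remote_int == 0:
            overhead += c_remote        # periodic remote (level-2) checkpoint
    return overhead

print(total_overhead())
```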

13.
260 structures published in Inorganica Chimica Acta present some problem with their crystal and molecular symmetry; moreover, 41 compounds have already been corrected in this journal or in other journals. The unavailability of structure factor tables has prevented a complete correction; in fact, we have reached a definitive result for only 52 crystalline compounds. These are divided into five categories: (A) incorrect Laue group (19 examples), (B) omission of a centre of symmetry (20 examples), (C) incorrect Laue group and omission of a centre of symmetry (one example), (D) omission of a centre of symmetry coupled with a failure to recognize systematic absences (10 examples), and finally (E) non-space-group translations (two examples). Another 44 examples of non-space-group translations might be present, but we have not reached a definitive result for them because of the unavailability of the structure factor tables.

14.
Failure instances in distributed computing systems (DCSs) have exhibited temporal and spatial correlations, where a single failure instance can trigger a set of failure instances simultaneously or successively within a short time interval. In this work, we propose a correlated failure prediction approach (CFPA) to predict correlated failures of computing elements in DCSs. The approach models correlated-failure patterns using the concept of probabilistic shared risk groups and predicts correlated failures by exploiting an association-rule-mining approach in a parallel way. We conduct extensive experiments to evaluate the feasibility and effectiveness of CFPA using both failure traces from Los Alamos National Lab and simulated datasets. The experimental results show that the proposed approach outperforms other approaches in both failure prediction performance and execution time, and can potentially provide better prediction performance in a larger system.
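The association-rule step can be illustrated with pairwise support and confidence computed over time windows of co-occurring failure events. A sketch of that idea only; the probabilistic shared risk groups and the parallel mining are not reproduced:

```python
from itertools import combinations

def failure_rules(windows, min_support=0.3, min_conf=0.7):
    """Mine pairwise rules 'failure of a => failure of b' from sets of
    failures observed within the same correlation time window."""
    n = len(windows)
    items = sorted({e for w in windows for e in w})
    rules = []
    for a, b in combinations(items, 2):
        support_ab = sum(1 for w in windows if a in w and b in w) / n
        support_a = sum(1 for w in windows if a in w) / n
        if support_ab >= min_support and support_a > 0:
            conf = support_ab / support_a
            if conf >= min_conf:
                rules.append((a, b, support_ab, conf))
    return rules

# Each set holds node failures seen within one time window.
windows = [{"n1", "n2"}, {"n1", "n2", "n7"}, {"n3"}, {"n1", "n2"}]
print(failure_rules(windows))   # n1 and n2 fail together
```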

15.
Signal transduction and the regulation of apoptosis: roles of ceramide (cited: 3; self-citations: 0; other citations: 3)
Knowledge about the molecular regulators of apoptosis is rapidly expanding. Cell death signals emanating from death receptors or internal cell-injury detectors launch a number of signaling pathways that converge on several key families of proteins, including specialized proteases and endonucleases, which play a critical role in executing the death order. In this review, we summarize recent discoveries relating to the signaling pathways involved, the death receptors, the caspase family of apoptotic proteases, Bcl-2 family members, the sphingolipid ceramide, and the tumor suppressor p53. In particular, we focus on the role played by ceramide as a coordinator of the stress response and as a candidate biostat in the detection of cell injury.

16.
Joint prostheses     
Over the past 34 years, many improvements have occurred. Joint prostheses must be anatomic, simple, and thin, and must not constitute a large foreign body. New materials are needed for their solidity. Many types are manufactured; the most used are hip prostheses. The junction between bone and prosthesis is not yet a solved problem. Acrylic cement long served as the interface junction, but to address its failures, new porous or rough-surfaced materials have allowed anchorage by the bone itself. Prosthesis longevity is limited by the bone-prosthesis interface. Prosthesis replacement is possible and will develop further.

17.
Two basic strategies have been proposed for using transgenic Aedes aegypti mosquitoes to decrease dengue virus transmission: population reduction and population replacement. Here we model releases into a wild population of a strain of Ae. aegypti carrying both a gene causing conditional adult female mortality and a gene blocking virus transmission, to assess whether such releases could reduce the number of competent vectors. We find this “reduce and replace” strategy can decrease the frequency of competent vectors below 50% two years after releases end. This combined approach therefore appears preferable to releasing a strain carrying only a female-killing gene, which is likely to result merely in temporary population suppression. However, fixation of the anti-pathogen gene in the population is unlikely: genetic drift at small population sizes and the spatially heterogeneous nature of the population recovery after releases end prevent complete replacement of the competent vector population. Furthermore, releasing more individuals can be counter-productive in the face of immigration by wild-type mosquitoes, as greater population reduction amplifies the impact wild-type migrants have on the long-term frequency of the anti-pathogen gene. We expect these results to give pause to expectations of driving an anti-pathogen construct to fixation by releasing individuals carrying this two-gene construct. Nevertheless, in some dengue-endemic environments, a spatially heterogeneous decrease in competent vectors may still help decrease disease incidence.

18.
Degeneration of the intervertebral disk (IVD) has increased in recent years. Lumbar herniation can be treated using conservative and surgical procedures; surgery is considered after failure of conservative treatment. Partial discectomy, fusion, and total disk replacement (TDR) are common surgical treatments for degenerative disk disease. However, due to the limitations and disadvantages of current treatments, many studies have sought the best design for mimicking the natural disk. Recently, a new TDR approach has been introduced that reproduces the natural deformation of the IVD using reinforcing fibers in the annulus fibrosus. Given the limitations of experimental work on the human body, numerical studies of the IVD can help in understanding load transfer and biomechanical properties within fiber-reinforced disks. In this study, a three-dimensional (3D) finite element model of the L2-L3 disk-vertebrae unit with 12 vertical fibers embedded in the annulus fibrosus was constructed. The IVD was subjected to compressive force, bending moment, and axial torsion, and the most important disk-failure parameters were compared with experimental data. The results showed that the addition of reinforcing fibers to the disk produces a significant decrease of stress in the nucleus and annulus. These findings may have implications not only for developing fiber-reinforced IVDs but also for their application as suitable implants in orthopedic surgery.

19.
The performance skeleton of an application is a short-running program whose performance in any scenario reflects the performance of the application it represents. Specifically, the execution time of the performance skeleton is a small fixed fraction of the execution time of the corresponding application in any execution environment. Such a skeleton can be employed to quickly estimate the performance of a large application under existing network and node sharing. This paper presents a framework for automatic construction of performance skeletons of a specified execution time and evaluates their use in performance prediction with CPU and network sharing. The approach is based on capturing the execution behavior of an application and automatically generating a synthetic skeleton program that reflects that execution behavior. The paper demonstrates that performance skeletons running for a few seconds can predict application execution time fairly accurately. The relationship of skeleton execution time, application characteristics, and the nature of resource sharing to the accuracy of skeleton-based performance prediction is analyzed in detail. The goal of this research is accurate performance estimation in heterogeneous and shared computational grids.
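A skeleton can be thought of as replaying the application's measured compute/communication mix at a fixed fraction of its runtime, so that skeleton time divided by the fraction predicts application time. A sketch of that replay step, assuming a hypothetical per-phase profile (the paper's automatic trace capture and skeleton generation are not shown):

```python
import time

def run_skeleton(profile, scale=0.01):
    """Replay (kind, seconds) phases at `scale` of their original
    duration; the skeleton's runtime then predicts the application's."""
    start = time.perf_counter()
    for kind, seconds in profile:
        budget = seconds * scale
        if kind == "cpu":
            t0 = time.perf_counter()
            while time.perf_counter() - t0 < budget:
                pass                    # synthetic compute burst
        else:
            time.sleep(budget)          # stand-in for communication wait
    return time.perf_counter() - start

# Phases as measured from the real application (illustrative values).
t = run_skeleton([("cpu", 120.0), ("comm", 30.0)], scale=0.01)
print(f"skeleton ran {t:.2f}s; predicted app time ~= {t / 0.01:.0f}s")
```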

20.
Defending against DDoS attacks has become more difficult because the attacks have evolved in many ways. The absence of a specific predetermined pattern, the growing number of attack devices, and the distributed execution of the attack make it hard to recognize the attack sources and thus to apply countermeasures. While a DDoS attack is being executed, in most cases the target cannot provide its services normally; this is not a significant problem for non-critical applications, but for availability-critical services such as financial, stock market, or governmental systems, the attack may cause huge damage. In this paper, we propose a DDoS avoidance strategy that provides service availability to preregistered important users. In the proposed strategy, we divide the attack scenario into different time points and provide alternative access channels to already-authenticated and other valid users.
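At its core, the strategy is a routing decision: during an attack, steer pre-registered, already-authenticated users to an access channel the attack does not saturate. A minimal sketch with hypothetical channel names (the paper's division of the attack into time points is not modeled):

```python
def route_request(user_id, preregistered, under_attack):
    """Pick an access channel: pre-registered (availability-critical)
    users get a reserved alternative path during an attack (sketch)."""
    if under_attack and user_id in preregistered:
        return "alternate-channel"   # reserved for valid important users
    if under_attack:
        return "degraded-channel"    # best-effort for everyone else
    return "primary-channel"

assert route_request("alice", {"alice"}, under_attack=True) == "alternate-channel"
```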
