Similar Documents
20 similar documents found (search time: 187 ms)
1.
Liu Peini, Guitart Jordi. Cluster Computing, 2022, 25(2): 847-868

Containerization technology offers an appealing alternative for encapsulating and running applications (and all their dependencies) without the performance penalties of Virtual Machines and, as a result, has attracted the interest of the High-Performance Computing (HPC) community as a means to obtain fast, customized, portable, flexible, and reproducible deployments of its workloads. Previous work in this area has demonstrated that containerized HPC applications can exploit InfiniBand networks, but has ignored the potential of multi-container deployments, which partition the processes belonging to each application into multiple containers on each host. Partitioning HPC applications has been shown to be useful with virtual machines, by constraining each one to a single NUMA (Non-Uniform Memory Access) domain. This paper conducts a systematic study of the performance of multi-container deployments with different network fabrics and protocols, focusing especially on InfiniBand networks. We analyze the impact of container granularity and its potential to exploit processor and memory affinity to improve application performance. Our results show that default Singularity can achieve near bare-metal performance but does not support fine-grained multi-container deployments. Docker and Singularity-instance behave similarly in terms of the performance of deployment schemes with different container granularity and affinity. This behavior differs across network fabrics and protocols, and also depends on the application's communication patterns and message size. Moreover, deployments on InfiniBand are more affected by computation and memory allocation, and can therefore exploit affinity better.
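As a hedged illustration of the processor and memory affinity discussed above, the Python sketch below pins the current process to the cores of a single NUMA domain, much as each container in a fine-grained deployment might be. The core-to-domain layout is an invented assumption, and os.sched_setaffinity is Linux-specific.

```python
import os

# Hypothetical illustration: restrict the current process to the cores of one
# NUMA domain, as a multi-container deployment might do per container.
# The core lists below are an assumed topology, not values from the paper.
NUMA_DOMAINS = {
    0: set(range(0, 8)),   # cores 0-7  -> NUMA node 0 (assumed topology)
    1: set(range(8, 16)),  # cores 8-15 -> NUMA node 1 (assumed topology)
}

def pin_to_numa_domain(domain: int) -> None:
    """Restrict this process to the CPUs of a single NUMA domain (Linux only)."""
    os.sched_setaffinity(0, NUMA_DOMAINS[domain])

if __name__ == "__main__":
    pin_to_numa_domain(0)
    print("allowed CPUs:", sorted(os.sched_getaffinity(0)))
```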


2.
Smith SL, Timmis J. Biosystems, 2008, 94(1-2): 34-46
This paper presents a novel evolutionary algorithm inspired by the protein/substrate binding exploited in enzyme genetic programming (EGP) and by artificial immune networks. The immune network-inspired evolutionary algorithm has been developed in direct response to an application in clinical neurology: the diagnosis of Parkinson's disease. The inspiration for, and implementation of, the algorithm is described, and its performance in the application area is considered.
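For readers unfamiliar with the method family, the sketch below is a generic evolutionary loop in Python; it is not the authors' immune-network algorithm, whose binding-inspired operators are domain-specific, and all parameters are arbitrary.

```python
import random

# Generic evolutionary loop: truncation selection plus Gaussian mutation.
# Shown only to fix ideas; the paper's operators differ.
def evolve(fitness, genome_len=16, pop_size=30, generations=50):
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                        # keep the best half
        children = [[g + random.gauss(0, 0.1) for g in p] for p in parents]
        pop = parents + children                              # constant population
    return max(pop, key=fitness)

# Toy objective: genomes close to 0.5 in every gene score highest.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))
print(best[:3])
```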

3.
Much has been published on the application of genetically modified (GM) crops in Africa, but agricultural performance has hardly been addressed. This paper discusses the main consequences of GM crops for agricultural performance in Ethiopia. Three main criteria of performance - productivity, equitability, and sustainability - are evaluated in the context of the Ethiopian agricultural sector. We conclude that the application of GM crops can improve agricultural productivity and sustainability, whereas equitability is not improved and the gap between socioeconomic classes might even be exacerbated. Before introducing GM crops to Ethiopian agriculture, regulatory issues should be addressed, public research should be fostered, and more ex ante and socioeconomic studies should be conducted.

4.
The increasing role played by liquid chromatography-mass spectrometry (LC-MS)-based proteomics in biological discovery has led to a growing need for quality control (QC) of LC-MS systems. While numerous QC tools have been developed to track the performance of LC-MS systems based on a pre-defined set of performance factors (e.g., mass error, retention time), the precise influence and contribution of these factors, and how well they generalize to different biological samples, are not as well characterized. Here, a web-based application (QCMAP) is developed for interactive diagnosis and prediction of the performance of LC-MS systems across different biological sample types. Leveraging a standardized HeLa cell sample run as QC within a multi-user facility, predictive models are trained on a panel of commonly used performance factors to pinpoint the precise conditions that lead to (un)satisfactory performance in three LC-MS systems. It is demonstrated that the learned model can be applied to predict LC-MS system performance for brain samples generated from an independent study. By compiling these predictive models into our web application, QCMAP allows users to benchmark the performance of their LC-MS systems on their own samples and identify key factors for instrument optimization. QCMAP is freely available from: http://shiny.maths.usyd.edu.au/QCMAP/.
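The sketch below illustrates the general idea of training a predictive model on QC performance factors. The feature names, toy data, and choice of a random forest are assumptions for illustration, not QCMAP's actual implementation.

```python
# Hypothetical sketch of QCMAP-style prediction: learn a mapping from routine
# QC performance factors to a satisfactory/unsatisfactory run label.
# Feature names and data here are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

X = [
    # [mass_error_ppm, rt_drift_min, peptide_ids_thousands]  (assumed factors)
    [1.2, 0.3, 38.0],
    [4.8, 1.9, 21.0],
    [0.9, 0.2, 41.0],
    [6.1, 2.4, 17.0],
]
y = [1, 0, 1, 0]  # 1 = satisfactory run, 0 = unsatisfactory (assumed labels)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[2.0, 0.5, 35.0]]))   # predicted QC outcome for a new run
print(model.feature_importances_)          # which factors drive the prediction
```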

5.
On-site identification and quantification of chemicals is critical for food safety, human health, homeland security risk assessment, and disease diagnosis. Surface-enhanced Raman spectroscopy (SERS) has been widely considered a promising method for on-site analysis owing to its nondestructive nature, rich molecular information, and outstanding sensitivity. However, the on-site application of SERS has been restricted not only by the cost, performance, and portability of portable Raman instruments, but also by the sampling ability and signal-enhancing performance of SERS substrates. In recent years, the performance of SERS for on-site analysis has been improved through better portable Raman instruments, SERS substrates, and other combined technologies. In this review, popular commercial portable Raman spectrometers and the related technologies for on-site analysis are compared. In addition, different types of SERS substrates for on-site application are summarized. SERS combined with other technologies, such as electrochemistry and microfluidics, is also presented. The future perspective of SERS for on-site analysis is discussed as well.

6.
This paper presents a sequential learning algorithm and evaluates its performance on complex-valued signal processing problems. The algorithm, referred to as the Complex Minimal Resource Allocation Network (CMRAN) algorithm, is an extension of the MRAN algorithm originally developed for online learning in real-valued RBF networks. CMRAN can grow and prune the (complex) RBF network's hidden neurons to ensure a parsimonious network structure. The performance of the learning algorithm is illustrated using two applications from the signal processing of communication systems. The first application considers the identification of a nonlinear complex channel. The second applies CMRAN to QAM digital channel equalization problems. The simulation results presented clearly show that CMRAN is very effective in modeling and equalization, with performance often superior to that of some well-known methods.
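As a minimal sketch of the underlying network (not the CMRAN growing/pruning logic), the code below computes a complex-valued RBF forward pass, assuming real Gaussian activations of the complex-input distance and complex output weights.

```python
import numpy as np

# Minimal complex-valued RBF forward pass. Assumed form: real Gaussian
# activations on the complex distance to each center, complex output weights.
rng = np.random.default_rng(0)
centers = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))
weights = rng.standard_normal(5) + 1j * rng.standard_normal(5)
width = 1.0

def rbf_forward(x: np.ndarray) -> complex:
    dist2 = np.sum(np.abs(x - centers) ** 2, axis=1)  # squared complex distance
    phi = np.exp(-dist2 / width**2)                   # real Gaussian activations
    return np.dot(weights, phi)                       # complex network output

y = rbf_forward(np.array([0.3 + 0.1j, -0.2 + 0.4j]))
print(y)
```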

7.
The microtip nozzle assembly described by Uk (1978) has been modified to provide a reliable and versatile sprayer for the laboratory application of pesticide formulations. The improved performance has been achieved through the introduction of knurled thimbles giving controlled adjustment of the needle and stylus settings, and by the provision of an offset liquid reservoir (0.1–2 ml), either pressurised or non-pressurised. The device, which delivers monosize droplets (in-flight diameter 50–500 μm) at velocities ranging from 0.5 to 20 m s-1, has been used for aqueous and oil solutions, water-dispersible powders, and emulsifiable concentrates of varying concentration (0.1–30 g a.i. litre-1), and is particularly suitable for the application of radiolabelled chemicals.

8.
Basamid micro-granule is used worldwide as a broad-spectrum soil fumigant generator and has replaced methyl bromide in many applications. Much has been known for decades about the factors determining the success of an application, from soil preparation and conditions to the application and soil sealing or tarping, as well as the operations and hygienic measures after the fumigant contact time. This paper reports the last six years of studies on improving application methods, both with respect to homogeneous incorporation of the granule throughout the soil profile to be treated and to avoiding premature loss of the gaseous active methyl isothiocyanate (MITC) by using improved tarping materials. Both result in lower environmental exposure and better biological performance of the application. To that end, product incorporation in soil was studied in France and Italy with recent commercially available Basamid application machinery, and 29 plastic films were compared for their MITC barrier properties using an in-house method. Film testing allowed clear categorization into standard (monolayer) films, V.I.F. (Virtually Impermeable Film), and T.I.F. (Totally Impermeable Film). The paper presents the methodology and results of the granule incorporation trials with two specific Basamid application machines compared with a classic rotovator, the methodology and comparison of plastic film barrier property testing, and directives to minimize exposure and maximize performance.

9.
This paper describes a novel technique for establishing a virtual file system that allows data to be transferred user-transparently and on demand across the computing and storage servers of a computational grid. Its implementation is based on extensions to the Network File System (NFS) that are encapsulated in software proxies. A key differentiator between this approach and previous work is the way in which file servers are partitioned: while conventional file systems share a single (logical) server across multiple users, the virtual file system employs multiple proxy servers that are created, customized, and terminated dynamically, for the duration of a computing session, on a per-user basis. Furthermore, the solution does not require modifications to standard NFS clients and servers. The described approach has been deployed in the context of the PUNCH network-computing infrastructure, and is unique in its ability to integrate unmodified, interactive applications (even commercial ones) and existing computing infrastructure into a network computing environment. Experimental results show that: (1) the virtual file system performs well in comparison to native NFS in a local-area setup, with mean overheads of 1% and 18% for single-client execution of the Andrew benchmark in two representative computing environments; (2) the average overhead for eight clients can be reduced to within 1% of native NFS with the use of concurrent proxies; and (3) the wide-area performance is within 1% of the local-area performance for a typical compute-intensive PUNCH application (SimpleScalar), while for the I/O-intensive Andrew benchmark the wide-area performance is 5.5 times worse than the local-area performance.

10.
Numerous microbes are antagonistic to plant-parasitic nematodes and soilborne plant-pathogenic fungi, but few of these organisms are commercially available for management of these pathogens. Inconsistent performance of applied biocontrol agents has proven to be a primary obstacle to the development of successful commercial products. One strategy for overcoming inconsistent performance is to combine the disease-suppressive activity of two (or more) beneficial microbes in a biocontrol preparation. Such combinations have the potential for more extensive colonization of the rhizosphere, more consistent expression of beneficial traits under a broad range of soil conditions, and antagonism to a larger number of plant pests or pathogens than strains applied individually. Conversely, microbes applied in combination may also interact antagonistically with each other. Increased, decreased, and unaltered suppression of the target pathogen or pest have all been observed when biocontrol microbes are applied in combination. Unfortunately, the ecological basis for increased or decreased suppression has not been determined in many cases and needs further consideration. The complexity of the interactions involved in applying multiple organisms for biological control has slowed progress toward successful formulations. However, this approach has the potential to overcome some of the efficacy problems that occur when individual biocontrol agents are applied.

11.
Efficient application scheduling is critical for achieving high performance in heterogeneous computing (HC) environments. Because of this importance, the problem has been studied extensively and various algorithms have been proposed. Duplication-based algorithms are a well-known class of scheduling algorithms that achieve high performance at minimizing the overall completion time (makespan) of applications. However, they pursue the shortest makespan too aggressively by duplicating some tasks redundantly, which leads to substantial energy consumption and resource waste. With the growing advocacy for green computing systems, energy conservation has become an important issue and gained particular interest. An existing technique for reducing the energy consumption of an application is dynamic voltage/frequency scaling (DVFS), whose efficiency is limited by the time and energy overheads of voltage scaling. In this paper, we propose a new energy-aware scheduling algorithm with reduced task duplication called Energy-Aware Scheduling by Minimizing Duplication (EAMD), which takes both the energy consumption and the makespan of an application into consideration. It adopts a subtle energy-aware method to find and delete redundant task copies in the schedules generated by duplication-based algorithms; it is easier to apply than DVFS and incurs no extra time or energy overhead. The algorithm not only consumes less energy but also maintains good makespan performance compared with duplication-based algorithms. Two kinds of DAGs, i.e., randomly generated graphs and two real-world application graphs, are tested in our experiments. Experimental results show that EAMD can save up to 15.59% of the energy consumed by HLD and HCPFD, two classic duplication-based algorithms. Several factors affecting the performance are also analyzed in the paper.
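A toy sketch of the core idea follows: delete a duplicated task copy when another copy of the same task finishes no later, so downstream tasks are unaffected. Real EAMD must also respect inter-processor communication; this simplification, and the schedule encoding, are assumptions for illustration.

```python
# Toy duplicate pruning: keep only the earliest-finishing copy of each task.
# Real EAMD reasons about DAG dependencies and communication; this does not.
def prune_duplicates(copies):
    """copies: {task: [(processor, finish_time), ...]} -> pruned dict."""
    pruned = {}
    for task, placements in copies.items():
        best = min(placements, key=lambda p: p[1])  # earliest-finishing copy
        pruned[task] = [best]                       # drop the redundant copies
    return pruned

schedule = {"t1": [("p0", 4.0), ("p2", 5.0)], "t2": [("p1", 7.0)]}
print(prune_duplicates(schedule))  # redundant copy of t1 on p2 is removed,
                                   # saving its execution energy
```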

12.
The performance skeleton of an application is a short-running program whose performance in any scenario reflects the performance of the application it represents. Specifically, the execution time of the performance skeleton is a small fixed fraction of the execution time of the corresponding application in any execution environment. Such a skeleton can be employed to quickly estimate the performance of a large application under existing network and node sharing. This paper presents a framework for the automatic construction of performance skeletons of a specified execution time and evaluates their use in performance prediction with CPU and network sharing. The approach is based on capturing the execution behavior of an application and automatically generating a synthetic skeleton program that reflects that behavior. The paper demonstrates that performance skeletons running for a few seconds can predict the application execution time fairly accurately. The relationship of skeleton execution time, application characteristics, and the nature of resource sharing to the accuracy of skeleton-based performance prediction is analyzed in detail. The goal of this research is accurate performance estimation in heterogeneous and shared computational grids.
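A minimal sketch of the prediction step, under the fixed-fraction property stated above: measure the skeleton once in the target environment and scale up. The numbers are invented.

```python
# If a skeleton is built to run for a fixed fraction f of the application's
# time, one skeleton run in a new environment yields an application estimate.
def predict_app_time(skeleton_time_new_env: float, fraction: float) -> float:
    """fraction = skeleton_time / application_time in the reference setup."""
    return skeleton_time_new_env / fraction

f = 5.0 / 600.0                    # 5 s skeleton built for a 600 s application
print(predict_app_time(7.2, f))    # ~864 s predicted under the current sharing
```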

13.
Virtualization technology promises better isolation and consolidation for traditional servers. However, with the VMM (virtual machine monitor) layer involved, a virtualization system changes the architecture of the traditional software stack, introducing limitations in resource allocation. The non-uniform VCPU (virtual CPU) to PCPU (physical CPU) mapping, which derives both from the configuration and deployment of virtual machines and from the dynamic runtime behavior of applications, causes different percentages of processor allocation within the same physical machine, and the VCPUs mapped to these PCPUs obtain asymmetric performance. The guest OS, however, is agnostic to this non-uniformity. Under the assumption that all VCPUs have the same performance, it can apply sub-optimal policies when allocating virtual resources to applications. Likewise, the application runtime system can make the same mistakes. Our focus in this paper is to understand the performance implications of the non-uniform VCPU-PCPU mapping in a virtualization system. Based on real measurements of a virtualization system with state-of-the-art multi-core processors running different commercial and emerging applications, we demonstrate that the presence of the non-uniform mapping negatively impacts applications' performance predictability. This study aims to provide timely and practical insights into the problem of non-uniform VCPU mapping as virtual machines are deployed and configured in emerging clouds.
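The toy Python sketch below illustrates the asymmetry: the share of a physical core each VCPU receives depends on how many VCPUs are packed onto it. The mapping shown is invented, not a measured configuration.

```python
# Invented VCPU -> PCPU packing to illustrate asymmetric per-VCPU performance.
mapping = {                 # PCPU -> VCPUs pinned to it
    "pcpu0": ["vm1.vcpu0"],
    "pcpu1": ["vm1.vcpu1", "vm2.vcpu0", "vm2.vcpu1"],
}

for pcpu, vcpus in mapping.items():
    share = 1.0 / len(vcpus)            # even time-slicing assumed
    for v in vcpus:
        print(f"{v} on {pcpu}: {share:.0%} of a core")
# vm1.vcpu0 gets a full core while vm1.vcpu1 gets a third: exactly the
# asymmetry a guest OS assuming uniform VCPUs will mis-schedule around.
```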

14.
Run time variability of parallel applications continues to present significant challenges to their performance and energy efficiency in high-performance computing (HPC) systems. When run times are extended and unpredictable, application developers perceive this as a degradation of system (or subsystem) performance. Extended run times directly contribute to proportionally higher energy consumption, potentially negating efforts by applications, or the HPC system, to optimize energy consumption using low-level control techniques such as dynamic voltage and frequency scaling (DVFS). Therefore, successful systemic management of application run time performance can result in less wasted energy, or even energy savings. We have been studying run time variability in terms of communication time, from the perspective of the application, focusing on the interconnection network. More recently, our focus has shifted to developing a more complete understanding of the effects of HPC subsystem interactions on parallel applications. In this context, the set of applications executing on the HPC system is treated as a subsystem, along with more traditional subsystems like the communication subsystem, the storage subsystem, etc. To gain insight into the run time variability problem, our earlier work developed a framework to emulate parallel applications (PACE) that stresses the communication subsystem. Evaluation of the run time sensitivity of real applications to network performance is performed with a tool called PARSE, which uses PACE. In this paper, we propose a model defining application-level behavioral attributes that collectively describe how applications behave in terms of their run time performance, as functions of their process distribution on the system (spatial locality) and subsystem interactions (communication subsystem degradation). These subsystem interactions are produced when multiple applications execute concurrently on the same HPC system. We also revisit our evaluation framework and tools to demonstrate the flexibility of our application characterization techniques and the ease with which the attributes can be quantified. The validity of the model is demonstrated using our tools with several parallel benchmarks and application fragments. Results suggest that it is possible to articulate application-level behavioral attributes as a tuple of numeric values that describes coarse-grained performance behavior.
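As a hedged illustration of the final claim, such attributes could be encoded as a fixed tuple of numbers; the attribute names below are invented stand-ins, not the paper's actual set.

```python
from dataclasses import dataclass

# Hypothetical encoding of "behavioral attributes as a tuple of numeric
# values"; field names are illustrative assumptions.
@dataclass(frozen=True)
class BehavioralAttributes:
    comm_sensitivity: float    # slowdown per unit of network degradation
    locality_benefit: float    # speedup from compact process placement
    baseline_runtime_s: float  # run time on an unloaded system

app = BehavioralAttributes(comm_sensitivity=0.35,
                           locality_benefit=0.12,
                           baseline_runtime_s=480.0)
print(app)
```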

15.
This paper presents the application of genetic algorithms to the performance optimization of asynchronous automatic assembly systems (AAS). These stochastic systems are subject to blocking and starvation effects that make complete analytic performance modeling difficult. This paper therefore extends genetic algorithms to stochastic systems. The performance of the genetic algorithm is measured through comparison with the results of stochastic quasi-gradient (SQM) methods applied to the same AAS. The genetic algorithm performs reasonably well in obtaining good solutions (as compared with the results of SQM) in this stochastic optimization example, even though genetic algorithms were designed for application to deterministic systems. However, the genetic algorithm's performance does not appear to be superior to that of SQM.
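The one twist a stochastic system adds to a standard GA is a noisy objective. A common remedy, sketched below with an invented toy objective rather than an AAS model, is to average repeated simulation replications when evaluating fitness.

```python
import random

def noisy_throughput(x: float) -> float:
    """Invented stand-in for one stochastic AAS simulation run."""
    return -(x - 2.0) ** 2 + random.gauss(0, 0.5)

def fitness(x: float, replications: int = 10) -> float:
    """Average several noisy replications to stabilize the GA's ranking."""
    return sum(noisy_throughput(x) for _ in range(replications)) / replications

print(fitness(2.0), fitness(0.0))  # the optimum near x=2 still ranks first
```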

16.
Molecular evolution and genetic engineering of C4 photosynthetic enzymes
The majority of terrestrial plants, including many important crops such as rice, wheat, soybean, and potato, are classified as C3 plants, which assimilate atmospheric CO2 directly through the C3 photosynthetic pathway. C4 plants, such as maize and sugarcane, evolved from C3 plants, acquiring the C4 photosynthetic pathway in addition to the C3 pathway to achieve high photosynthetic performance and high water- and nitrogen-use efficiencies. Consequently, the transfer of C4 traits to C3 plants is one strategy being adopted for improving the photosynthetic performance of C3 plants. The application of recombinant DNA technology has brought considerable progress in the molecular engineering of photosynthetic genes over the past ten years and has deepened understanding of the evolutionary scenario of the C4 photosynthetic genes. A strategy based on this evolutionary scenario has enabled enzymes involved in the C4 pathway to be expressed at high levels and in desired locations in the leaves of C3 plants. Although overproduction of a single C4 enzyme can alter the carbon metabolism of C3 plants, it does not show any positive effects on photosynthesis. Transgenic C3 plants overproducing multiple enzymes are now being produced to improve the photosynthetic performance of C3 plants.

17.
The science cloud paradigm has been actively developed and investigated, but still requires a suitable model for a science cloud system in order to support increasing scientific computation needs with high performance. This paper presents an effective provisioning model for science clouds, particularly for large-scale high-throughput computing applications. In this model, we utilize job traces to which a statistical method is applied to pick the features most influential on application performance. With these features, the system determines where a VM is deployed (allocation) and which instance type is appropriate (provisioning). An adaptive evaluation step following job execution enables our model to adapt to dynamic computing environments. We show performance achievements by comparing the proposed model with other policies through experiments, and expect noticeable improvements in performance as well as reductions in resource-consumption cost through our model.
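A hedged sketch of the trace-driven idea: rank job-trace features by how strongly they correlate with run time, then provision for the most influential one. The feature names, traces, and provisioning rule are invented for illustration, not the paper's statistical method.

```python
import statistics

# Invented job traces: (input_size_gb, parallelism, runtime_s)
traces = [
    (1.0, 4, 120.0), (2.0, 4, 230.0), (1.0, 8, 70.0), (4.0, 8, 260.0),
]

def corr(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

runtime = [t[2] for t in traces]
for name, idx in [("input_size_gb", 0), ("parallelism", 1)]:
    print(name, round(corr([t[idx] for t in traces], runtime), 2))
# A provisioner would then size the VM instance type for the dominant
# feature (here input size, suggesting, e.g., a memory-optimized instance).
```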

18.
Process analytical technology (PAT) has been gaining momentum in the biopharmaceutical community due to its potential for continuous real-time quality assurance, resulting in improved operational control and compliance. Two of the key goals that have been outlined for PAT are that "variability is managed by the process" and that "product quality attributes can be accurately and reliably predicted over the design space established for materials used, process parameters, manufacturing, environmental, and other conditions". Recently, we have been examining the feasibility of applying different analytical tools for designing PAT applications for bioprocessing. We have previously shown that a commercially available online high-performance liquid chromatography (HPLC) system can be used for analysis that facilitates real-time decisions for column pooling based on product quality attributes (Rathore et al., 2008). In this article we test the feasibility of using a commercially available ultra-performance liquid chromatography (UPLC) system for real-time pooling of process chromatography columns. It is demonstrated that the UPLC system offers a feasible approach and meets the requirements of a PAT application. While the application presented here is a reversed-phase assay, the approach and the hardware can easily be applied to other modes of liquid chromatography.
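To fix ideas, the sketch below makes a pooling decision from fraction-level quality data as an online system might. The purity attribute, the threshold, and the stop rule are illustrative assumptions, not the paper's acceptance criteria.

```python
PURITY_LIMIT = 95.0   # percent; assumed acceptance criterion, not the paper's

def pool_fractions(fractions):
    """fractions: iterable of (fraction_id, purity_percent) in elution order."""
    pooled = []
    for frac_id, purity in fractions:
        if purity >= PURITY_LIMIT:
            pooled.append(frac_id)   # quality attribute in range: collect
        elif pooled:
            break                    # quality dropped after pooling began: stop
    return pooled

print(pool_fractions([(1, 93.0), (2, 96.2), (3, 97.0), (4, 94.1)]))  # [2, 3]
```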

19.
Magnetic field sensors are used in various fields of technology. In the past few years, a large variety of magnetic field sensors has been established and the performance of these sensors has improved enormously. This review article presents recent developments in the area of sensitive magnetic field sensing (resolution better than 1 nT) and examines the sensors with regard to their parameters, mainly from the perspective of application fields in biomedical engineering. A comparison of all commercially available sensitive magnetic field sensors shows current and prospective ranges of application.

20.
High-performance and distributed computing systems such as peta-scale, grid, and cloud infrastructures are increasingly used for running scientific models and business services. These systems experience large availability variations due to hardware and software failures. Resource providers need to account for these variations while providing the required QoS at appropriate costs in dynamic resource and application environments. Although the performance and reliability of these systems have been studied separately, there has been little analysis of the Quality of Service (QoS) lost at varying availability levels. In this paper, we present a resource performability model to estimate lost performance and the corresponding cost considerations at varying availability levels. We use the resulting model in a multi-phase planning approach for scheduling a set of deadline-sensitive meteorological workflows atop grid and cloud resources to trade off performance, reliability, and cost. We use simulation results driven by failure data collected over the lifetime of high-performance systems to demonstrate how the proposed scheme better accounts for resource availability.
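A back-of-envelope sketch of the performability idea: expected delivered performance scales with availability, and the remainder is the lost QoS. The numbers below are invented, not from the paper's failure data.

```python
# Expected vs. lost performance under a given availability level.
def expected_performance(peak_gflops: float, availability: float) -> float:
    return peak_gflops * availability

def lost_performance(peak_gflops: float, availability: float) -> float:
    return peak_gflops * (1.0 - availability)

peak, avail = 500.0, 0.97   # invented resource: 500 GFLOPS, 97% available
print(round(expected_performance(peak, avail), 1))  # ~485 GFLOPS delivered
print(round(lost_performance(peak, avail), 1))      # ~15 GFLOPS lost; a
# scheduler can price this gap against deadline risk when placing workflows
```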
