Similar Documents (20 results)
1.
Data centers, as resource providers, are expected to deliver on performance guarantees while optimizing resource utilization to reduce cost. Virtualization techniques provide the opportunity of consolidating multiple separately managed containers of virtual resources on underutilized physical servers. A key challenge that comes with virtualization is the simultaneous on-demand provisioning of shared physical resources to virtual containers and the management of their capacities to meet service-quality targets at the least cost. This paper proposes a two-level resource management system to dynamically allocate resources to individual virtual containers. It uses local controllers at the virtual-container level and a global controller at the resource-pool level. An important advantage of this two-level control architecture is that it allows independent controller designs for separately optimizing the performance of applications and the use of resources. Autonomic resource allocation is realized through the interaction of the local and global controllers. A novelty of the local controller designs is their use of fuzzy logic-based approaches to efficiently and robustly deal with the complexity and uncertainties of dynamically changing workloads and resource usage. The global controller determines the resource allocation based on a proposed profit model, with the goal of maximizing the total profit of the data center. Experimental results obtained through a prototype implementation demonstrate that, for the scenarios under consideration, the proposed resource management system can significantly reduce resource consumption while still achieving application performance targets.
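The abstract describes the control architecture but not its equations. As a rough illustration only (the profit model, demands, and prices below are invented, not the paper's), a global controller that hands out CPU shares by marginal profit might look like this:

```python
# Hypothetical sketch of the two-level idea: a global controller divides a
# shared CPU pool among virtual containers to maximize total profit.
# Profit curves and numbers are illustrative, not the paper's actual model.

def global_allocate(demands, pool, unit_revenue, unit_cost, step=1):
    """Greedily hand out CPU shares wherever marginal profit is highest."""
    alloc = {c: 0 for c in demands}
    remaining = pool
    while remaining >= step:
        # marginal profit of one more share for container c: revenue accrues
        # only while the container's demand is unmet
        best, best_gain = None, 0.0
        for c, d in demands.items():
            gain = (unit_revenue[c] if alloc[c] + step <= d else 0.0) - unit_cost
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:            # no profitable assignment left
            break
        alloc[best] += step
        remaining -= step
    return alloc

if __name__ == "__main__":
    demands = {"web": 40, "db": 30, "batch": 50}     # CPU shares requested
    revenue = {"web": 3.0, "db": 2.0, "batch": 0.5}  # value per share met
    print(global_allocate(demands, pool=100, unit_revenue=revenue, unit_cost=1.0))
    # batch is never funded: its marginal revenue is below the resource cost
```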

2.
Virtualization technology promises better isolation and consolidation in traditional servers. However, with the VMM (virtual machine monitor) layer involved, virtualization changes the architecture of the traditional software stack and introduces limitations in resource allocation. The non-uniform VCPU (virtual CPU)-to-PCPU (physical CPU) mapping, which arises both from the configuration and deployment of virtual machines and from the dynamic runtime behavior of applications, causes PCPUs in the same physical machine to receive different processor-allocation percentages, so the VCPUs mapped to these PCPUs obtain asymmetric performance. The guest OS, however, is agnostic to this non-uniformity: assuming that all VCPUs perform identically, it can carry out sub-optimal policies when allocating virtual resources to applications. Application runtime systems can make the same mistake. Our focus in this paper is to understand the performance implications of the non-uniform VCPU-PCPU mapping in a virtualization system. Based on real measurements of a virtualization system with state-of-the-art multi-core processors running different commercial and emerging applications, we demonstrate that the non-uniform mapping negatively impacts applications' performance predictability. This study aims to provide timely and practical insights into the problem of non-uniform VCPU mapping as virtual machines are deployed and configured in emerging clouds.
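The abstract does not quantify the asymmetry, so the following toy calculation (all numbers invented) merely illustrates why a guest OS that assumes symmetric VCPUs misestimates completion time:

```python
# Illustrative only: one VM with two VCPUs whose PCPUs grant different
# allocation fractions (e.g., because other VMs share one of the PCPUs).
vcpu_share = {"vcpu0": 1.00, "vcpu1": 0.40}   # fraction of a PCPU each VCPU gets

work = 100.0  # CPU work units per VCPU, split evenly by a symmetry-assuming guest

naive_estimate = work / 1.0                        # guest assumes full-speed VCPUs
actual = max(work / s for s in vcpu_share.values())
print(f"symmetric-VCPU estimate: {naive_estimate:.0f} time units")
print(f"actual completion time:  {actual:.0f} time units")   # 250: vcpu1 dominates
```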

3.
A wide range of experimental studies have provided evidence that a night of sleep contributes to memory consolidation. Mental rotation (MR) skill draws on fundamental aspects of both cognitive and motor abilities and can be improved within practice sessions, but little is known about consolidation after MR practice. In the present study, we investigated the effect of MR training and of the subsequent day- or sleep-related consolidation interval, taking into account the well-established gender difference in MR. Forty participants (20 women) practiced a computerized version of the Vandenberg and Kuse MR task. Performance was evaluated before MR training, as well as prior to and after a night of sleep or a similar daytime interval. Data showed that while men outperformed women during the pre-training test, brief MR practice was sufficient for women to achieve equivalent performance. Only participants who slept for a night were found to enhance MR performance during the retest, independently of gender. These results provide the first evidence that a night of sleep facilitates MR performance compared with spending a similar daytime interval, regardless of participants' gender. Since MR is known to involve motor processes, the present data might help in scheduling relevant mental-practice interventions for fruitful applications in rehabilitation and motor learning.

4.
I/O-intensive applications have posed great challenges to computational scientists. A major problem with these applications is that users have to sacrifice performance requirements in order to satisfy storage-capacity requirements in a conventional computing environment. Further performance improvement is impeded by the physical nature of these storage media even when state-of-the-art I/O optimizations are employed. In this paper, we present a distributed multi-storage resource architecture that satisfies both performance and capacity requirements by employing multiple storage resources. Compared to a traditional single-storage-resource architecture, ours provides a more flexible and reliable computing environment. The architecture opens new opportunities for high-performance computing while inheriting state-of-the-art I/O optimization approaches that have already been developed. It provides application users with high-performance storage access even when a single large local storage archive is not at their disposal. We also develop an Application Programming Interface (API) that provides transparent management of, and access to, the various storage resources in our computing environment. Since I/O usually dominates performance in I/O-intensive applications, we establish an I/O performance prediction mechanism, consisting of a performance database and a prediction algorithm, to help users better evaluate and schedule their applications. A tool is also developed to help users automatically generate the performance data stored in the database. The experiments show that our multi-storage resource architecture is a promising platform for high-performance distributed computing.
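The abstract describes the prediction mechanism only at a high level. A minimal sketch, assuming a simple latency-plus-bandwidth cost model and an invented performance database, might look like this:

```python
# Hypothetical performance database, in the spirit of the paper's prediction
# mechanism: per-resource startup latency (s) and bandwidth (MB/s).
PERF_DB = {
    "local_disk":   {"latency": 0.01, "bandwidth": 80.0},
    "parallel_fs":  {"latency": 0.20, "bandwidth": 500.0},
    "tape_archive": {"latency": 30.0, "bandwidth": 10.0},
}

def predict_io_time(resource, size_mb):
    """Linear cost model: predicted time = startup latency + size / bandwidth."""
    e = PERF_DB[resource]
    return e["latency"] + size_mb / e["bandwidth"]

def best_resource(size_mb):
    return min(PERF_DB, key=lambda r: predict_io_time(r, size_mb))

print(best_resource(1))        # small request: local disk wins on latency
print(best_resource(100000))   # bulk transfer: the parallel store wins
```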

5.
Substance Flow Analysis (SFA): Methods and Research Progress
张玲 (Zhang Ling), 袁增伟 (Yuan Zengwei), 毕军 (Bi Jun). 《生态学报》 (Acta Ecologica Sinica), 2009, 29(11): 6189-6198
Substance flow analysis (SFA) traces the inputs, outputs, and stocks of a specific substance through the economy-environment system, quantifying the relationships between substance flows in the economic system, resource use, and environmental effects, and thereby providing a scientific basis for optimized resource and environmental management. This paper systematically describes the concept and historical development of SFA, introduces its methodological framework, and on that basis reviews the current state of SFA research. The analysis shows that SFA is an important industrial-metabolism analysis method within industrial ecology, with significant application value in tracing pollutant transport pathways and analyzing environmental impacts, in life-cycle metabolism analysis of strategic resources, and in analyzing societal material stocks. Future application areas and development trends of SFA are also proposed.
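SFA rests on a per-substance mass balance; a standard formulation, stated here for context rather than quoted from the review, is:

```latex
% Mass balance of substance s at node (process or stock) i over one
% accounting period: inflows equal outflows plus the net stock change.
\sum_{j} F^{s}_{j \to i} \;=\; \sum_{k} F^{s}_{i \to k} \;+\; \Delta S^{s}_{i}
```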

6.
Cloud computing can reclaim the over-provisioned resources wasted in traditional data centers hosting production applications by consolidating tasks with lower QoS and SLA requirements. However, dramatic fluctuation in the workloads with lower QoS and SLA requirements may impact the performance of production applications, and frequent task eviction, killing, and rescheduling operations waste CPU cycles and create overhead. This paper aims to schedule hybrid workloads in the cloud data center so as to reduce task failures and increase resource utilization. A multi-prediction model, comprising an ARMA model and a feedback-based online AR model, is used to predict current and future resource availability. The decision to accept or reject a new task is based on the available resources and the task's properties. Evaluations show that the scheduler can reduce host overload and failed tasks by nearly 70% and increase effective resource utilization by more than 65%, while the resulting task-delay degradation remains acceptable.
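As a rough illustration of the admission logic (not the paper's actual models), the sketch below fits an AR(p) predictor by least squares and admits a task only if the predicted utilization leaves room for it; the order, safety margin, and trace are invented:

```python
import numpy as np

# Toy stand-in for an online AR predictor feeding an admission decision:
# fit AR(p) on recent CPU-utilization history, predict the next step, and
# admit a task only if predicted free capacity covers its demand.
def ar_predict(history, p=3):
    h = np.asarray(history, dtype=float)
    # lagged design matrix: row [u_{t-1}, ..., u_{t-p}, 1] -> target u_t
    X = np.column_stack([h[p - k - 1 : len(h) - k - 1] for k in range(p)])
    A = np.column_stack([X, np.ones(len(h) - p)])
    coef, *_ = np.linalg.lstsq(A, h[p:], rcond=None)
    last = h[-1 : -p - 1 : -1]          # most recent p values, newest first
    return float(last @ coef[:p] + coef[p])

def admit(history, task_demand, capacity=1.0, safety=0.05):
    return ar_predict(history) + task_demand + safety <= capacity

util = [0.50, 0.52, 0.55, 0.53, 0.56, 0.58, 0.57, 0.60, 0.62, 0.61]
print(ar_predict(util), admit(util, task_demand=0.2))
```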

7.
The replication variance of individual stimulus evaluations and the scale utilization across a panelist's stimulus profile are employed simultaneously to develop statistics for assessing panelist performance. The approach makes it possible to compare panelists to each other, to determine the attributes for which panelist confusion is observed, to isolate stimuli presenting unstable properties, and to use influence weights (based on relative precision) in subsequent analyses of the data. Although the methodology was developed for applications involving sensory panelists, the statistical concepts extend to other data-collection scenarios involving replicated determinations on bounded quantitative measurement scales.
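A minimal sketch of precision-based influence weighting follows; the paper's statistics are richer and also model scale utilization, and the replicate data and weighting rule below are illustrative only:

```python
import statistics

# Hypothetical replicated scores (two replicates per stimulus) for three
# panelists on a bounded 0-15 intensity scale.
reps = {
    "panelist_A": [(7.0, 7.5), (3.0, 3.5), (12.0, 11.5)],
    "panelist_B": [(7.0, 10.0), (3.0, 8.0), (12.0, 6.0)],   # erratic replicates
    "panelist_C": [(6.5, 7.0), (4.0, 4.0), (11.0, 12.0)],
}

def replication_variance(pairs):
    """Pooled replication variance: mean of the per-pair variances."""
    return statistics.mean(statistics.variance(p) for p in pairs)

# Influence weight proportional to relative precision (1 / variance).
precision = {name: 1.0 / replication_variance(p) for name, p in reps.items()}
total = sum(precision.values())
weights = {name: prec / total for name, prec in precision.items()}
print(weights)   # panelist_B's noisy replicates earn a very small weight
```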

8.
In today’s scaled-out systems, co-scheduling data analytics work with high-priority user workloads is common because it makes better use of the vast available hardware. User workloads are dominated by periodic patterns, with alternating periods of high and low utilization, creating promising conditions for scheduling data analytics work during low-activity periods. To this end, we show the effectiveness of machine learning models in accurately predicting user workload intensities, essentially suggesting the most opportune time to co-schedule data analytics work. Yet machine learning models cannot predict the effects of performance interference once co-scheduling is employed, as this constitutes a “new” observation. In tiered storage systems in particular, the hierarchical design makes performance interference even more complex, so accurate performance prediction is more challenging. Here, we quantify the unknown performance effects of workload co-scheduling by enhancing machine learning models with queuing-theory models, developing a hybrid approach that can accurately predict performance and guide scheduling decisions in a tiered storage system. Using traces from commercial systems, we illustrate that queuing theory and machine learning models can be used in synergy to overcome their respective weaknesses and deliver robust co-scheduling solutions that achieve high performance.
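The hybrid idea can be caricatured in a few lines: a learned predictor supplies the next window's user load, and a queueing formula turns load into latency. The sketch below uses a moving average and a single M/M/1 queue, far simpler than the paper's models of a tiered storage hierarchy; all constants are invented:

```python
# Sketch of the hybrid approach: predict user load, then use a queueing
# model to estimate latency if analytics work were added on top.

def predict_user_rate(history):
    """Stand-in for the paper's ML predictor: here, a 4-point moving average."""
    return sum(history[-4:]) / 4

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; infinite if the queue is unstable."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

def can_coschedule(history, analytics_rate, service_rate, slo_seconds):
    lam = predict_user_rate(history) + analytics_rate
    return mm1_response_time(lam, service_rate) <= slo_seconds

hist = [120, 95, 60, 40, 35, 30]   # user requests/s, trending downward
print(can_coschedule(hist, analytics_rate=50, service_rate=150, slo_seconds=0.05))
```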

9.
Development of high-performance distributed applications, called metaapplications, is extremely challenging because of their complex runtime environment coupled with their requirements of high performance and Quality of Service (QoS). Such applications typically run on a set of heterogeneous machines with dynamically varying loads, connected by heterogeneous networks possibly supporting a wide variety of communication protocols. In spite of their size and complexity, such applications must provide the high performance and QoS mandated by their users, and to achieve this they need to utilize their computational and communication resources adaptively. Beyond adaptive resource utilization, such applications have a third kind of requirement related to remote-access QoS. Different clients, although accessing a single server resource, may have differing QoS requirements for their remote connections, and a single server resource may need to provide different QoS to different clients, depending on issues such as the amount of trust between the server and a given client. These QoS requirements can be encapsulated under the abstraction of remote-access capabilities. Metaapplications need to address all three requirements in order to achieve high performance and satisfy user expectations of QoS. This paper presents Open HPC++, a programming environment for high-performance applications running in a complex and heterogeneous runtime environment. Open HPC++ provides application-level tools and mechanisms to satisfy application requirements of adaptive resource utilization and remote-access capabilities. Open HPC++ is designed along the lines of CORBA and uses an Object Request Broker (ORB) to support seamless communication between distributed application components. To provide adaptive utilization of communication resources, it uses the principle of open implementation to open up the communication mechanisms of its ORB. By virtue of its open architecture, the ORB supports multiple, possibly custom, communication protocols, along with automatic and user-controlled protocol selection at run time. An extension of the same mechanism supports the concept of remote-access capabilities. To support adaptive utilization of computational resources, Open HPC++ also provides a flexible yet powerful set of load-balancing mechanisms that can be used to implement custom load-balancing strategies. The paper also presents performance evaluations of Open HPC++ adaptivity and load-balancing mechanisms.

10.
Nowadays, improving R&D productivity is the primary commitment in pharmaceutical research, in big pharma and smaller biotech companies alike. Advanced methods of rational drug design can reduce costs, speed up the discovery process, and increase the chance of success, as demonstrated by several successful applications. Among these, computational methods that predict the binding affinity of small molecules to specific biological targets are of special interest because they can accelerate the discovery of new hit compounds. Here we provide an overview of the most widely used methods in the field of binding-affinity prediction, as well as of our own work in developing BEAR, an innovative methodology specifically devised to overcome some limitations of existing approaches. The BEAR method was successfully validated against different biological targets and proved its efficacy in retrieving active compounds from virtual screening campaigns. The results obtained so far indicate that BEAR may become a leading tool in the drug discovery pipeline. We discuss the advantages and drawbacks of each technique and show relevant examples and applications in drug discovery.

11.
We address the problem of predicting the position of a miRNA duplex on a microRNA hairpin via the development and application of a novel SVM-based methodology. Our method combines a unique problem representation and an unbiased optimization protocol to learn an accurate predictive model, termed MiRduplexSVM, from miRBase 19.0. This is the first model that provides precise information about all four ends of the miRNA duplex. We show that (a) our method outperforms four state-of-the-art tools (MaturePred, MiRPara, MatureBayes, and MiRdup) as well as a Simple Geometric Locator when trained on the same datasets employed for each tool and evaluated on a common blind test set; (b) in all comparisons, MiRduplexSVM shows superior performance, achieving up to a 60% increase in prediction accuracy for mammalian hairpins, and generalizes very well to plant hairpins without any special optimization; (c) the tool has a number of important applications, such as accurately predicting the miRNA or the miRNA* given the opposite strand of a duplex. Its performance on this task is superior to the 2-nt overhang rule commonly used in computational studies and similar to that of a comparative genomic approach, without the need for prior knowledge or the complexity of performing multiple alignments. Finally, it is able to evaluate novel, potential miRNAs found either computationally or experimentally. Consistent with the confidence-evaluation methods recently used in miRBase, MiRduplexSVM was successful in identifying high-confidence potential miRNAs.
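For readers unfamiliar with the setup, the sketch below shows the general shape of such a predictor: an SVM scores candidate duplex start positions on a hairpin. The two features and the synthetic labels are invented for illustration; MiRduplexSVM's actual representation and training protocol are far richer:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative stand-in: score candidate duplex 5' start positions with an
# SVM trained on toy features (distance from the hairpin terminus, local GC
# fraction) over synthetic examples.
rng = np.random.default_rng(0)
n = 200
dist_from_end = rng.uniform(0, 40, n)     # nt from the hairpin terminus
gc_fraction   = rng.uniform(0.2, 0.8, n)
# synthetic ground truth: true starts cluster near the terminus with mid GC
y = ((dist_from_end < 15) & (np.abs(gc_fraction - 0.5) < 0.2)).astype(int)

X = np.column_stack([dist_from_end, gc_fraction])
model = SVC(kernel="rbf", C=10.0, probability=True).fit(X, y)

# rank candidate start positions of a new hairpin by predicted probability
candidates = np.column_stack([np.arange(0, 40, 5), np.full(8, 0.5)])
scores = model.predict_proba(candidates)[:, 1]
print("best predicted 5' start:", int(candidates[int(scores.argmax())][0]))
```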

12.
Plant microRNAs (miRNAs) affect only a small number of targets with high sequence complementarity, while animal miRNAs usually have hundreds of targets with limited complementarity. We used artificial miRNAs (amiRNAs) to determine whether the narrow action spectrum of natural plant miRNAs reflects only intrinsic properties of the plant miRNA machinery or whether it is also due to past selection against natural miRNAs with broader specificity. amiRNAs were designed to target individual genes or groups of endogenous genes. Like natural miRNAs, they had varying numbers of target mismatches. Previously determined parameters of target selection for natural miRNAs could accurately predict direct targets of amiRNAs. The specificity of amiRNAs, as deduced from genome-wide expression profiling, was as high as that of natural plant miRNAs, supporting the notion that extensive base pairing with targets is required for plant miRNA function. amiRNAs are an effective tool for specific gene silencing in plants, especially when several related, but not identical, target genes need to be downregulated. We demonstrate that amiRNAs are also active when expressed under tissue-specific or inducible promoters, with limited non-autonomous effects. The design principles for amiRNAs have been generalized and integrated into a Web-based tool (http://wmd.weigelworld.org).

13.
The performance skeleton of an application is a short-running program whose performance in any scenario reflects the performance of the application it represents. Specifically, the execution time of the performance skeleton is a small, fixed fraction of the execution time of the corresponding application in any execution environment. Such a skeleton can be employed to quickly estimate the performance of a large application under existing network and node sharing. This paper presents a framework for automatic construction of performance skeletons of a specified execution time and evaluates their use in performance prediction with CPU and network sharing. The approach is based on capturing the execution behavior of an application and automatically generating a synthetic skeleton program that reflects that behavior. The paper demonstrates that performance skeletons running for a few seconds can predict the application execution time fairly accurately. The relationship of skeleton execution time, application characteristics, and the nature of resource sharing to the accuracy of skeleton-based performance prediction is analyzed in detail. The goal of this research is accurate performance estimation in heterogeneous and shared computational grids.
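The defining property, that skeleton time is a fixed fraction of application time in any environment, makes the prediction step itself a one-line scaling, as this illustrative calculation (invented numbers) shows:

```python
# Skeleton property: in any environment E, T_skeleton(E) ~= f * T_app(E)
# for a fraction f calibrated once on a reference machine.

t_app_ref  = 600.0   # full application on the reference machine (s)
t_skel_ref = 6.0     # skeleton on the same machine (s)
f = t_skel_ref / t_app_ref            # calibrated fraction: 0.01

# On a loaded, shared target node, run only the cheap skeleton...
t_skel_target = 9.3  # measured under current CPU/network sharing (s)

# ...and scale up to estimate the full application's runtime there.
t_app_predicted = t_skel_target / f
print(f"predicted application time: {t_app_predicted:.0f} s")   # 930 s
```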

14.
Neurofeedback (NF) is a tool that has proven helpful in the treatment of disorders such as epilepsy and attention deficit hyperactivity disorder (ADHD). Depending on the application, a high number of training sessions may be necessary before participants can voluntarily modulate their electroencephalographic (EEG) rhythms as instructed, and many individuals never learn to do so despite numerous training sessions. We are therefore interested in determining whether performance during the early training sessions can be used to predict whether a participant will learn to regulate the EEG rhythms. Here, we propose an easy-to-use yet accurate method for predicting the performance of individual participants. We used a sample set of sensorimotor rhythm (SMR, 12–15 Hz) NF training sessions (experiment 1) to predict the performance of the participants of another study (experiment 2), and then used the data obtained in experiment 2 to predict the performance of participants in experiment 1. We correctly predicted the performance of 12 out of 13 participants in the first group and all 14 participants in the second group; however, we were not able to make these predictions before the end of the eleventh training session.

15.
Gossip protocols and services provide a means by which failures can be detected in large distributed systems in an asynchronous manner, without the limits associated with reliable multicasting for group communications. Extending the gossip protocol so that the system reaches consensus on detected faults can be done with a flat structure, or hierarchically across cooperating layers of nodes. In this paper, the performance of gossip services employing flat and hierarchical schemes is analyzed on an experimental testbed in terms of consensus time, resource utilization, and scalability. The hierarchically arranged gossip scheme is analyzed with varying group sizes and is shown to scale well. Resource utilization of the gossip-style failure detection and consensus service is measured in terms of network bandwidth and CPU utilization. Analytical models are developed for resource utilization, and performance projections are made for large system sizes.
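A toy simulation of the flat scheme conveys the mechanics; the constants, merge rule, and consensus criterion below are invented, and the paper's hierarchical variant and analytical models go much further:

```python
import random

# Minimal flat gossip-heartbeat simulation: each live node bumps its own
# counter, then pushes its table to one random peer; a node is suspected when
# no live node has seen its counter advance within CLEANUP rounds.
N, ROUNDS, CLEANUP = 8, 30, 10
FAILED = {3}                                  # node 3 crashes before round 1
random.seed(1)

# each node's local table: node id -> (heartbeat, round last updated locally)
tables = [{j: (0, 0) for j in range(N)} for _ in range(N)]

for rnd in range(1, ROUNDS + 1):
    for i in range(N):
        if i in FAILED:
            continue                          # crashed nodes stop gossiping
        hb, _ = tables[i][i]
        tables[i][i] = (hb + 1, rnd)          # advance own heartbeat
        peer = random.choice([p for p in range(N) if p != i])
        if peer in FAILED:
            continue                          # message to a dead node is lost
        for j, (hb_j, _) in tables[i].items():
            if hb_j > tables[peer][j][0]:     # peer merges newer entries
                tables[peer][j] = (hb_j, rnd)

# consensus view over all live nodes' tables
live = [i for i in range(N) if i not in FAILED]
latest = {j: max(tables[i][j][1] for i in live) for j in range(N)}
print("suspected:", {j for j in range(N) if ROUNDS - latest[j] >= CLEANUP})  # {3}
```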

16.
Several algorithms have been developed that use amino acid sequence to predict whether a protein, or a region of a protein, is disordered. These algorithms make accurate predictions for disordered regions that are 30 amino acids or longer, but it is unclear whether the predictions can be directly related to the backbone dynamics of individual amino acid residues. The nuclear Overhauser effect between the amide nitrogen and hydrogen (NHNOE) provides an unambiguous measure of backbone dynamics at single-residue resolution and is an excellent tool for characterizing the dynamic behavior of disordered proteins. In this report, we show that the NHNOE values for several members of a family of disordered proteins are highly correlated with the output of three popular algorithms used to predict disordered regions from amino acid sequence. This is the first test between an experimental measure of residue-specific backbone dynamics and disorder predictions. The results suggest that some disorder predictors can accurately estimate the backbone dynamics of individual amino acids in a long disordered region.

17.
Quantitative microscopy and digital image analysis are underutilized in microbial ecology, largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished with algorithms that exploit the color and spatial relationships of user-selected foreground object pixels. Evaluated on 26 complex micrographs at single-pixel resolution, the color segmentation algorithm achieved an overall pixel classification accuracy above 99%. Several applications illustrate how this technology can resolve numerous challenges of complex color segmentation and produce images from which quantitative information can be accurately extracted, thereby offering new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file, user manual, and tutorial images for this color segmentation application are freely available. This improved computing technology opens new opportunities for imaging applications where discriminating colors matter most, strengthening quantitative microscopy-based approaches to microbial ecology in situ at single-cell resolution.
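Stripped to its essentials, segmentation from user-selected example pixels can be as simple as nearest-example classification in RGB space. The sketch below illustrates that core idea only; the example colors are invented and the system's spatial-relationship reasoning is omitted:

```python
import numpy as np

# Classify every pixel as foreground or background by its distance to
# user-selected example pixels in RGB color space.
fg_examples = np.array([[200, 40, 40], [180, 60, 50]], dtype=float)  # reddish cells
bg_examples = np.array([[30, 30, 35], [60, 70, 60]], dtype=float)    # dark matrix

def segment(image):
    """image: (H, W, 3) uint8 array -> boolean foreground mask."""
    px = image.reshape(-1, 3).astype(float)
    d_fg = np.min(np.linalg.norm(px[:, None] - fg_examples, axis=2), axis=1)
    d_bg = np.min(np.linalg.norm(px[:, None] - bg_examples, axis=2), axis=1)
    return (d_fg < d_bg).reshape(image.shape[:2])

toy = np.zeros((2, 2, 3), dtype=np.uint8)
toy[0, 0] = [190, 50, 45]      # one "cell" pixel on a dark background
print(segment(toy))            # True only at (0, 0)
```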

18.
AlphaFold2 is a promising new tool for researchers to predict protein structures and generate high-quality models with low backbone and global root-mean-square deviation (RMSD) from experimental structures. However, it is unclear whether the structures predicted by AlphaFold2 are valuable docking targets. To address this question, we redocked ligands in the PDBbind datasets against the experimental co-crystallized receptor structures and against the AlphaFold2 structures using AutoDock-GPU. We find that the quality measure provided during structure prediction is not a good predictor of docking performance, despite accurately reflecting the quality of the alpha-carbon alignment with experimental structures. Removing low-confidence regions of the predicted structure and making side chains flexible improves the docking outcomes. Overall, despite high-quality prediction of backbone conformation, fine structural details limit the naive application of AlphaFold2 models as docking targets.
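Since AlphaFold2 writes its per-residue confidence (pLDDT) into the B-factor column of the PDB files it emits, the "remove low-confidence regions" step reduces to a B-factor filter. A minimal sketch using fixed-column PDB parsing follows; the cutoff is an arbitrary choice, not the paper's:

```python
# Trim low-confidence residues from an AlphaFold2 PDB model, where the
# B-factor field (columns 61-66) holds the per-residue pLDDT score.

def trim_low_plddt(pdb_lines, cutoff=70.0):
    """Keep ATOM records only for residues with pLDDT >= cutoff."""
    kept = []
    for line in pdb_lines:
        if line.startswith("ATOM"):
            plddt = float(line[60:66])        # B-factor / pLDDT field
            if plddt < cutoff:
                continue
        kept.append(line)
    return kept

sample = [
    "ATOM      1  CA  ALA A   1      11.104  13.207   9.100  1.00 92.50           C",
    "ATOM      2  CA  GLY A   2      12.560  14.101   8.021  1.00 41.30           C",
]
for line in trim_low_plddt(sample):
    print(line[:26], "pLDDT", line[60:66].strip())   # residue 2 is dropped
```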

19.
One of the main challenges in biological applications is to accurately predict protein subcellular localization in an automated fashion. To achieve this, a wide variety of machine learning methods have been proposed in recent years. Most of them focus on finding the optimal classification scheme, and fewer take simplifying the complexity of biological systems into account. Traditionally, such bio-data are analyzed by performing feature selection before classification. Motivated by compressed sensing (CS) theory, we propose a methodology that performs compressed learning with a sparseness criterion, so that feature selection and dimensionality reduction are merged into a single analysis. The proposed methodology decreases the complexity of the biological system while increasing protein subcellular localization accuracy. Experimental results are quite encouraging, indicating that such sparse methods are promising for complicated biological problems, such as predicting the subcellular localization of Gram-negative bacterial proteins.
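The compressed-learning step, projecting features through a random measurement matrix before classification, can be sketched as follows. The data are synthetic and the nearest-centroid classifier is a placeholder; the paper's sparseness criterion and classifiers differ:

```python
import numpy as np

# Compressed learning: project high-dimensional feature vectors through a
# random Gaussian matrix (the compressed sensing step) and classify in the
# low-dimensional space.
rng = np.random.default_rng(42)
d, m, n = 2000, 64, 300                  # ambient dim, compressed dim, samples

# two synthetic "localization" classes; the signal lives in 10 of 2000 features
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, n)
X[y == 1, :10] += 4.0

Phi = rng.normal(size=(m, d)) / np.sqrt(m)   # random measurement matrix
Z = X @ Phi.T                                # compressed representation

# nearest-centroid classification in the compressed domain
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
print("training accuracy in compressed space:", (pred == y).mean())
```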

20.
The purpose of this paper is to review the economic and social implications of cloned cattle, their products, and their offspring as related to production agriculture. Cloning technology in cattle has several applications outside traditional production agriculture, including bio-medical uses such as producing pharmaceuticals in the blood or milk of transgenic cattle, and the production of research models, which may or may not include genetic modifications. Within agriculture, applications include making genetic copies of elite seed stock and prize-winning show cattle; other purposes range from "insurance" to copying cattle with sentimental value, similar to the cloning of pets. The increased selection opportunities available with cloning may improve genetic gain. The ultimate goal of cloning has often been envisioned as a system for producing quantity and uniformity of the perfect dairy cow; however, only if heritability were 100% would clone mates be completely uniform. Changes in the environment may significantly affect the productivity and longevity of the resulting clones, and changes in consumer preferences and economic input costs may change the very definition of the perfect cow. The cost of producing such animals via cloning must be economically feasible for the intended applications; present inefficiencies limit cloning to highly valued animals, and improvements are necessary to move the technology toward commercial adoption. Cloning also has other obstacles to overcome: social and regulatory acceptance is paramount to its utilization in production agriculture, and regulatory acceptance will need to address the animal, its products, and its offspring. In summary, cloning is another tool in the animal biotechnology toolbox, alongside artificial insemination, semen sexing, embryo sexing, and in vitro fertilization. While it will not replace any of these, its degree of utilization will depend both on improvements in efficiency and on social and regulatory acceptance.
