Similar Literature
20 similar records found (search time: 531 ms)
1.
Enabling dynamic data centers with a smart bare-metal server platform (total citations: 2; self-citations: 0; citations by others: 2)
Ever-increasing data center complexity poses a significant burden on IT administrators. This burden can become unbearable without the help of self-managed systems that monitor themselves and automatically modify their state in order to carry out business processes according to high-level objectives set by service level agreements (SLAs) and policies. Among the key IT management tasks that must be automated and enhanced to realize the idea of an autonomic and highly dynamic data center are the discovery, configuration, and provisioning of new servers. In this direction, this paper describes pre-boot capabilities endowing the bare-metal server with the ability to be discovered, queried, configured, and provisioned at time zero using industry standards such as the Common Information Model (CIM), CIM-XML, and the Service Location Protocol (SLP). The capabilities are implemented as a payload of an Intel® Extensible Firmware Interface (EFI)-compliant BIOS, the Intel® Rapid Boot Toolkit (IRBT), allowing a resource manager to discover a new server during pre-boot, possibly in a bare-metal state, and then perform an asset inventory, configure the server (including CPU-specific settings), and provision it with the most appropriate image. All these tasks may be carried out based on decisions taken by the resource manager according to server capabilities, application requirements, SLAs, and high-level policies. Additionally, this system uses reliable protocols, thus minimizing the possibility of errors. Future work is proposed, including the integration of a persistent hypervisor for enhanced management capabilities.

2.
Aquatic invasive species (AIS) threaten freshwater ecosystem structure and function worldwide. Such changes trigger a variety of negative impacts on lake recreation and on the economics of individuals, towns, and states. Recent studies suggest environmental education efforts have been an effective tool in raising public awareness of AIS; however, we lack a more general understanding of how public knowledge of invasive species and their threats to freshwater ecosystems and human livelihoods compares with that of managers. To fill this gap, we surveyed New Hampshire lake users and interviewed lake managers to 1) identify the key issues surrounding AIS management; and 2) assess public awareness of the AIS problem and its management at three lakes in New Hampshire. Our interviews with managers suggest that educational outreach is a key mechanism for combatting the AIS problem. The general public surveys further differentiated respondents into two groups that differed in demographics and in AIS knowledge and concern. Together, our results indicate that AIS management efforts depend heavily on funding, regional cooperation, and the commitment of individual managers and lake users. Blanket outreach campaigns have been effective in expanding awareness of AIS, but they remain too general to engage the public in AIS spread prevention and eradication. Instead, integrating practices of rapid response and appeals to responsibility norms among lake users is paramount for combatting the AIS problem in this region.

3.
High work stress has been consistently associated with disturbed autonomic balance, specifically lowered vagal cardiac control and increased sympathetic activity, which may lead to increased cardiovascular risk. Stress management procedures have been proposed to reduce autonomic dysfunctions related to work stress in different categories of workers exposed to heightened work demands, while only a limited number of studies have addressed this issue in managers. The present study aimed to evaluate the effectiveness of a respiratory sinus arrhythmia (RSA) biofeedback (BF) intervention on psychological and physiological outcomes in managers with high-level work responsibilities. Thirty-one managers leading outstanding private or public companies were randomly assigned to either an RSA-BF training group (RSA-BF; N = 16) or a control group (N = 15). The RSA-BF training consisted of five weekly 45-min sessions designed to increase RSA, whereas controls had to provide a daily stress diary once a week. After the training, managers in both groups reported reduced heart rate at rest, lower anxiety levels, and improvement in health-related quality of life. More importantly, managers in the RSA-BF group showed increased vagal control (as indexed by increased RSA), decreased sympathetic arousal (as indexed by reduced skin conductance and systolic blood pressure), and lower emotional interference compared to managers in the control group. Results from this study showed that RSA-BF training was effective in improving cardiac autonomic balance at rest. Moreover, these findings underline the effectiveness of biofeedback in reducing the negative psychophysiological outcomes associated with stress in managers.

4.
As outsourcing data centers emerge to host applications and services from many different organizations, it is critical for data center owners to isolate different applications while dynamically and optimally allocating sharable resources among them. To address this issue, we propose a virtual-appliance-based autonomic resource provisioning framework for large virtualized data centers. We present the architecture of the data center with enriched autonomic features. We define a non-linear constrained optimization model for dynamic resource provisioning and present a novel analytic solution. Key factors, including virtualization overhead and reconfiguration delay, are incorporated into the model. Experimental results based on a prototype demonstrate that the system-level performance has been greatly improved by taking advantage of fine-grained server consolidation, and the whole system exhibits flexible adaptation in failure scenarios. Experiments with the impact of switching delay also show the efficiency of the framework due to significantly reduced provisioning time.
Corresponding author: Zhihui Du

5.
A virtual server is a server whose location in an internet is virtual; it may move from one physical site to another, and it may span a dynamically changing number of physical sites. In particular, during periods of high load, it may grow to new machines, while in other times it may shrink into a single host, and may even allow other virtual servers to run on the same host. This paper describes the design and architecture of Symphony, a management infrastructure for executing virtual servers in internet settings. This design is based on combining CORBA technology with group communication capabilities, for added reliability and fault tolerance.

6.
Qiao LA, Zhu J, Liu Q, Zhu T, Song C, Lin W, Wei G, Mu L, Tao J, Zhao N, Yang G, Liu X. Nucleic Acids Research, 2004, 32(14): 4175-4181.
The integration of bioinformatics resources worldwide is one of the major concerns of the biological community. We herein established the BOD (Bioinformatics On Demand) system, which uses Grid computing technology to set up a virtual workbench via a web-based platform, to assist researchers in performing customized, comprehensive bioinformatics work. Users are able to submit entire search queries and computation requests, e.g. from DNA assembly to gene prediction and finally protein folding, from their own office using the BOD end-user web interface. The BOD web portal parses the user's job request into steps, each of which may contain multiple tasks in parallel. The BOD task scheduler takes an entire task, or splits it into multiple subtasks, and dispatches the task or subtasks proportionally to the computation node(s) associated with the BOD portal server. A node may further split and distribute an assigned task to its sub-nodes using a similar strategy. In the end, the BOD portal server receives and collates all results and returns them to the user. BOD uses a pipeline model to describe the user's submitted data and stores the job requests/status/results in a relational database. In addition, an XML specification is established to capture task computation program details.
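The split-and-dispatch strategy above can be sketched as a capacity-weighted partition of a task's work units across nodes. This is an illustrative assumption of how "dispatches proportionally" might work, with the hypothetical helper `split_task`, not BOD's actual scheduler code:

```python
def split_task(total_units, node_capacities):
    """Split a task of `total_units` work units into subtasks sized
    proportionally to each node's capacity (hypothetical scheduler core)."""
    total_cap = sum(node_capacities)
    # Provisional proportional shares, rounded down.
    shares = [total_units * c // total_cap for c in node_capacities]
    # Hand the leftover units to the highest-capacity nodes first.
    remainder = total_units - sum(shares)
    order = sorted(range(len(node_capacities)),
                   key=lambda i: node_capacities[i], reverse=True)
    for i in order[:remainder]:
        shares[i] += 1
    return shares
```

A node receiving its share could recursively apply the same function to its sub-nodes, matching the hierarchical strategy the abstract describes.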

7.
In this work we focus on reducing response time and bandwidth requirements for a high-performance web server. Much research has been done to improve web server performance by modifying the web server architecture. In contrast to these approaches, we take a different point of view, considering web server performance from the OS perspective rather than the web server architecture itself. To this end we explore two different approaches. The first is running the web server within the OS kernel. We use kHTTPd as our basis for implementation, but it has several drawbacks, such as redundant data copying, synchronous writes, and handling only static data; we propose techniques to remedy these flaws. The second approach is caching dynamic data. Dynamic data can seriously reduce the performance of web servers, and it has been thought difficult to cache because it often changes far more frequently than static pages and because the web server must access a database to serve it. To this end, we propose a solution for higher-performance web service by caching dynamic data using content separation between static and dynamic portions. Benchmark results using WebStone show that our architecture can improve server performance by up to 18 percent and can reduce users' perceived latency significantly.
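The content-separation idea can be illustrated with a minimal sketch: cache the expensive static skeleton of a page once, and regenerate only the dynamic fill-in per request. The class `SeparatingCache`, its render callbacks, and the `{dynamic}` placeholder are hypothetical names, not the paper's implementation:

```python
class SeparatingCache:
    """Cache static page skeletons separately from their dynamic
    fill-ins, so only the dynamic part is regenerated per request
    (hypothetical illustration of the content-separation idea)."""

    def __init__(self, render_static, render_dynamic):
        self._static_cache = {}
        self._render_static = render_static    # expensive, cacheable
        self._render_dynamic = render_dynamic  # volatile, never cached

    def get(self, page_id, user):
        # Render the static skeleton at most once per page.
        if page_id not in self._static_cache:
            self._static_cache[page_id] = self._render_static(page_id)
        template = self._static_cache[page_id]
        # Splice in the per-request dynamic portion.
        return template.replace("{dynamic}", self._render_dynamic(page_id, user))
```

Under this split, a database hit is needed only for the dynamic portion, which is the source of the latency savings the abstract reports.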

8.
We have built a microarray database, StressDB, for management of microarray data from our studies on stress-modulated genes in Arabidopsis. StressDB provides small user groups with a locally installable web-based relational microarray database. It has a simple and intuitive architecture and has been designed for cDNA microarray technology users. StressDB uses Windows™ 2000 as the centralized database server with Oracle™ 8i as the relational database management system. It allows users to manage microarray data and data-related biological information over the Internet using a web browser. The source code is currently available on request from the authors and will soon be made freely available for downloading from our website at http://arastressdb.cac.psu.edu.

9.
While mastery of the scientific literature is a strongly desirable trait for undergraduate students, the sheer volume of the current literature has complicated the challenge of teaching scientific literacy. Part of the response to this ever-increasing volume of resources includes formal instruction in the use of reference manager software while engaging students with the primary literature. This article describes the incorporation of the reference manager program Zotero into a chemical literature course to facilitate students' use of digital resources and to better enable them to apply proper citation skills in their technical writing.

10.
Video-on-demand (VOD) servers need to be efficiently designed in order to support a large number of users viewing the same or different videos at different rates. While considering a disk-array based VOD server, use of a shared buffer at the server end may be more economical than the sole use of dedicated buffers at each user's end. In this paper, we propose a simple buffer sharing architecture that may be used when disk-array based video servers are used. Our aim is to support the maximum number of users for a given number of video server disks while employing a simple scheme requiring less buffer space. The number of video segment retrievals that can occur within a certain time (the service round) is maximum when the scan disk scheduling algorithm is used. Consequently, we shall assume use of the scan algorithm for disk retrieval. The VOD server has a buffer manager that directs retrieved segments to appropriate buffer locations depending on their release and deadlines. The release and deadlines of segments are such that buffer requirement at the user's set-top box is minimized to two video segments while avoiding video starvation and buffer overflow at the user's end. We propose a novel scheme for the operation of the shared buffer that aims at increasing buffer utilization and decreasing cell loss due to buffer overflow. An ATM based broadband network is assumed and all segments are stored in buffers as fixed length ATM cells. We also use a novel scheme for grouping frames into segments and illustrate its advantages over earlier ones. This revised version was published online in July 2006 with corrections to the Cover Date.
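The scan (elevator) disk-scheduling policy the paper assumes can be sketched in a few lines: sweep the disk head in one direction serving requests in position order, then reverse for the remainder. The helper `scan_order` and its block-position request model are illustrative assumptions, not the paper's scheduler:

```python
def scan_order(head, requests, direction="up"):
    """Serve disk-block requests in SCAN (elevator) order from `head`:
    sweep in one direction first, then reverse for the rest."""
    ahead = sorted(r for r in requests if r >= head)                  # ascending
    behind = sorted((r for r in requests if r < head), reverse=True)  # descending
    return ahead + behind if direction == "up" else behind + ahead
```

Minimizing head seeks per sweep is what maximizes the number of segment retrievals per service round, as the abstract notes.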

11.
In mass spectrometry-based protein quantification, peptides that are shared across different protein sequences are often discarded as being uninformative with respect to each of the parent proteins. We investigate the use of shared peptides which are ubiquitous (~50% of peptides) in mass spectrometric data-sets for accurate protein identification and quantification. Different from existing approaches, we show how shared peptides can help compute the relative amounts of the proteins that contain them. Also, proteins with no unique peptide in the sample can still be analyzed for relative abundance. Our article uses shared peptides in protein quantification and makes use of combinatorial optimization to reduce the error in relative abundance measurements. We describe the topological and numerical properties required for robust estimates, and use them to improve our estimates for ill-conditioned systems. Extensive simulations validate our approach even in the presence of experimental error. We apply our method to a model of Arabidopsis thaliana root knot nematode infection, and investigate the differential role of several protein family members in mediating host response to the pathogen.
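The core idea, recovering relative protein abundances from a peptide-protein incidence matrix in which shared peptides contribute to several proteins, can be sketched as a least-squares solve. This toy `estimate_abundances` handles only the two-protein case via the normal equations and Cramer's rule; it is an illustration of the principle, not the authors' combinatorial-optimization method:

```python
def estimate_abundances(incidence, intensities):
    """Least-squares estimate of two protein abundances from peptide
    intensities: solve (A^T A) x = A^T y, where A[i][j] = 1 if peptide i
    occurs in protein j (toy two-protein sketch)."""
    # Build the 2x2 normal-equation system from the incidence rows.
    ata = [[sum(a[i] * a[j] for a in incidence) for j in range(2)]
           for i in range(2)]
    aty = [sum(a[i] * y for a, y in zip(incidence, intensities))
           for i in range(2)]
    # Cramer's rule for the 2x2 solve.
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (aty[0] * ata[1][1] - ata[0][1] * aty[1]) / det
    x1 = (ata[0][0] * aty[1] - aty[0] * ata[1][0]) / det
    return x0, x1
```

Note how a shared peptide (an incidence row like `[1, 1]`) still constrains both abundances, which is why proteins with no unique peptide remain quantifiable.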

12.
Virtualization is widely used in cloud computing environments to manage resources efficiently, but it also raises several challenges. One of them is fairness in resource allocation among virtual machines. Traditional virtualized resource allocation approaches distribute physical resources equally, without taking into account the actual workload of each virtual machine, and thus often lead to waste. In this paper, we propose a virtualized resource auction and allocation model (VRAA) based on incentives and penalties to correct this waste. In our approach, we use the Nash equilibrium of cooperative games to fairly allocate resources among multiple virtual machines and maximize the revenue of the system. To illustrate the effectiveness of the proposed approach, we then apply the basic laws of auction gaming to investigate how CPU allocation and contention affect applications' performance (i.e., response time) and CPU utilization. We find that in our VRAA model the fairness index is high and the resource allocation is closely proportional to the actual workloads of the virtual machines, so the waste of resources is reduced. Experimental results show that our model is general and can be applied to other virtualized non-CPU resources.
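The property the paper reports, allocation closely proportional to actual workloads with a high fairness index, can be illustrated with a workload-proportional split scored by Jain's fairness index. This sketch deliberately omits VRAA's auction and game-theoretic machinery; both helper names are hypothetical:

```python
def allocate_proportional(capacity, workloads):
    """Allocate a shared resource to VMs in proportion to their actual
    workloads rather than equally (illustrative, not the VRAA auction)."""
    total = sum(workloads)
    return [capacity * w / total for w in workloads]

def jain_fairness(allocs, workloads):
    """Jain's fairness index over per-unit-of-workload shares;
    1.0 means perfectly workload-proportional allocation."""
    shares = [a / w for a, w in zip(allocs, workloads)]
    return sum(shares) ** 2 / (len(shares) * sum(s * s for s in shares))
```

An equal split across unequal workloads scores below 1.0 on this index, which is the waste the abstract attributes to traditional equal allocation.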

13.
With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team, transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.

14.
EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a 'node', a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets biomedical scientists in Switzerland and elsewhere, offering them access to a collection of important sequence analysis tools mirrored from other sites or developed locally. We describe here the Swiss EMBnet node web site (http://www.ch.embnet.org), which presents a number of original services not available anywhere else.

15.
COVID-19 vaccines have been approved for children of age five and older in many countries. However, there is an ongoing debate as to whether children should be vaccinated and at what priority. In this work, we use mathematical modeling and optimization to study how vaccine allocations to different age groups affect epidemic outcomes. In particular, we consider the effect of extending vaccination campaigns to include the vaccination of children. When vaccine availability is limited, we consider Pareto-optimal allocations with respect to competing measures of the number of infections and mortality and systematically study the trade-offs among them. In the scenarios considered, when some weight is given to the number of infections, we find that it is optimal to allocate vaccines to adolescents in the age group 10-19, even when they are assumed to be less susceptible than adults. We further find that age group 0-9 is included in the optimal allocation for sufficiently high values of the basic reproduction number.
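Pareto optimality over two competing objectives reduces to a dominance check: an allocation is kept only if no other candidate is at least as good in both measures. This sketch assumes each candidate allocation has already been simulated down to an (infections, mortality) pair, where lower is better in both coordinates:

```python
def pareto_front(points):
    """Return the Pareto-optimal points among (infections, mortality)
    pairs; a point is dominated if some other point is <= in both
    coordinates (lower is better)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

The surviving points are exactly the trade-off curve the paper studies: moving along the front trades infections against mortality.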

16.
In this article we consider the problem of determining the minimum cost configuration (number of machines and pallets) for a flexible manufacturing system with the constraint of meeting a prespecified throughput, while simultaneously allocating the total workload among the machines (or groups of machines). Our procedure allows consideration of upper and lower bounds on the workload at each machine group. These bounds arise as a consequence of precedence constraints among the various operations and/or limitations on the number or combinations of operations that can be assigned to a machine because of constraints on tool slots or the space required to store assembly components. Earlier work on problems of this nature assumes that the workload allocation is given. For the single-machine-type problem we develop an efficient implicit enumeration procedure that uses fathoming rules to eliminate dominated configurations, and we present computational results. We discuss how this procedure can be used as a building block in solving the problem with multiple machine types.

17.
We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.

18.
We present a rough-cut analysis tool that quickly determines a few potential cost-effective designs at the initial design stage of flexible assembly systems (FASs) prior to a detailed analysis such as simulation. It uses quantitative methods for selecting and configuring the components of an FAS suitable for medium to high volumes of several similar products. The system is organized as a series of assembly stations linked with an automated material-handling system moving parts in a unidirectional flow. Each station consists of a single machine or of identical parallel machines. The methods exploit the ability of flexible hardware to switch almost instantaneously from product to product. Our approach is particularly suitable where the product mix is expected to be stable, since we combine the hardware-configuration phase with the task-allocation phase. For the required volume of products, we use integer programming to select the number of stations and the number of machines at each station and to allocate tasks to stations. We use queueing network analysis, which takes into account the mean and variance of processing times among different products to determine the necessary capacity of the material-handling system. We iterate between the two analyses to find the combined solution with the lowest costs. Work-in-process costs are also included in the analysis. Computational results are presented.

19.
There is growing incentive to reduce the power consumed by large-scale data centers that host online services such as banking, retail commerce, and gaming. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. By dynamically provisioning virtual machines, consolidating the workload, and turning servers on and off as needed, data center operators can maintain the desired quality-of-service (QoS) while achieving higher server utilization and energy efficiency. We implement and validate a dynamic resource provisioning framework for virtualized server environments wherein the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a lookahead control scheme. The proposed approach accounts for the switching costs incurred while provisioning virtual machines and explicitly encodes the corresponding risk in the optimization problem. Experiments using the Trade6 enterprise application show that a server cluster managed by the controller conserves, on average, 22% of the power required by a system without dynamic control while still maintaining QoS goals. Finally, we use trace-based simulations to analyze controller performance on server clusters larger than our testbed, and show how concepts from approximation theory can be used to further reduce the computational burden of controlling large systems.
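The lookahead control idea, choosing a provisioning schedule that minimizes power plus SLA-violation penalty plus switching cost over a short horizon, can be sketched by brute-force search. All parameter names and the linear cost model below are simplifying assumptions for illustration, not the paper's controller:

```python
from itertools import product

def lookahead_plan(demand_forecast, current_servers, max_servers,
                   cap_per_server, power_per_server, sla_penalty, switch_cost):
    """Pick the server-count schedule over a short horizon minimizing
    power + SLA penalty + switching cost (toy lookahead controller)."""
    best_plan, best_cost = None, float("inf")
    # Enumerate every schedule of server counts over the horizon.
    for plan in product(range(1, max_servers + 1), repeat=len(demand_forecast)):
        cost, prev = 0.0, current_servers
        for n, demand in zip(plan, demand_forecast):
            cost += n * power_per_server                      # energy cost
            unmet = max(0.0, demand - n * cap_per_server)
            cost += unmet * sla_penalty                       # QoS violation
            cost += abs(n - prev) * switch_cost               # provisioning churn
            prev = n
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return list(best_plan)
```

The exhaustive search is exponential in the horizon length, which is precisely why the paper turns to approximation techniques for large clusters.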
Corresponding author: Guofei Jiang

20.
Objective: To explore the value of indocyanine green combined with methylene blue for sentinel lymph node identification during endometrial cancer surgery. Methods: Ninety-three patients with endometrial cancer treated at our hospital from August 2016 to September 2017 were divided into two groups using a random number table. In the control group, lymph nodes stained blue by methylene blue were taken as sentinel lymph nodes; in the observation group, indocyanine green was added, and lymph nodes that were blue-stained or fluorescent were taken as sentinel lymph nodes. The two groups were compared on sentinel lymph node resection time, intraoperative blood loss, number of lymph nodes resected, number of cases undergoing para-aortic lymph node resection, and sentinel lymph node identification rate; the accuracy, sensitivity, and specificity of the two methods were also compared. Patients were followed up for 12 months after surgery, and recurrence and related complications were compared between the groups. Results: There were no statistically significant differences between the groups in sentinel lymph node resection time, intraoperative blood loss, number of lymph nodes resected, or number of para-aortic lymph node resections (P>0.05). The sentinel lymph node identification rate, accuracy, and specificity in the observation group were significantly higher than in the control group (P<0.05), while sensitivity and recurrence rate did not differ significantly between groups (P>0.05). No related adverse reactions such as skin necrosis, allergy, or permanent staining occurred in either group during follow-up. Conclusion: Indocyanine green combined with methylene blue is of significantly greater value than methylene blue alone for identifying sentinel lymph nodes during endometrial cancer surgery.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号