Similar Documents
20 similar documents found (search time: 15 ms)
1.
There are typically multiple heterogeneous servers providing various services in cloud computing. The high power consumption of these servers increases the cost of running a data center, so there is a need to reduce power cost with tolerable performance degradation. In this paper, we optimize the tradeoff between performance and power consumption for multiple heterogeneous servers. We consider the following problems: (1) optimal job scheduling with fixed service rates; (2) joint optimal service speed scaling and job scheduling. For problem (1), we present the Karush-Kuhn-Tucker (KKT) conditions and provide a closed-form solution. For problem (2), both continuous and discrete speed scaling are considered; in discrete speed scaling, the feasible service rates are discrete and bounded. We formulate the problem as an MINLP problem and propose a distributed algorithm based on online value iteration, which has lower complexity than a centralized algorithm. Our approach provides an analytical way to manage the tradeoff between performance and power consumption. The simulation results demonstrate the gain from speed scaling and confirm the effectiveness and efficiency of the proposed algorithms.
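For intuition, job scheduling across heterogeneous servers with fixed service rates resembles classic convex load splitting over parallel M/M/1 queues. The sketch below is an illustrative formulation (not necessarily the paper's exact model): it minimizes the total mean queue length by solving the KKT conditions, finding the Lagrange multiplier by bisection.

```python
import math

def split_load(mu, lam):
    """Split total arrival rate `lam` across M/M/1 servers with service
    rates `mu`, minimizing sum of l_i / (mu_i - l_i). The KKT conditions
    give l_i = max(0, mu_i - sqrt(mu_i / nu)); we find the multiplier nu
    by bisection so that the per-server rates sum to lam."""
    assert lam < sum(mu), "system must be stable"

    def total(nu):  # total assigned load; increasing in nu
        return sum(max(0.0, m - math.sqrt(m / nu)) for m in mu)

    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if total(mid) < lam:
            lo = mid
        else:
            hi = mid
    nu = (lo + hi) / 2
    return [max(0.0, m - math.sqrt(m / nu)) for m in mu]
```

As expected, the faster server receives the larger share of the load, and servers too slow to be worth using receive none.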

2.
The complexity and requirements of web applications are increasing in order to meet more sophisticated business models (web services and cloud computing, for instance). For this reason, characteristics such as performance, scalability, and security are addressed in web server cluster design. Due to rising energy costs and environmental concerns, energy consumption in this type of system has become a main issue. This paper presents energy consumption reduction techniques that use a load forecasting method, combined with DVFS (Dynamic Voltage and Frequency Scaling) and dynamic configuration techniques (turning servers on and off), in a soft real-time clustered web server environment. Our system reduces energy consumption while maintaining users' satisfaction with respect to request deadlines being met. The results obtained show that prediction capabilities increase the QoS (Quality of Service) of the system while maintaining or improving the energy savings over state-of-the-art power management mechanisms. To validate this predictive policy, a web application running a real workload profile was deployed in an Apache server cluster testbed running Linux.
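To make the combination of dynamic configuration and DVFS concrete, here is a deliberately simplified provisioning step (names, frequency levels, and the linear capacity model are my illustrative assumptions, not the paper's policy): given a forecast load, pick the fewest servers and, for that count, the lowest frequency whose combined capacity still covers the forecast.

```python
def plan_capacity(forecast_rps, per_server_rps, freqs=(0.6, 0.8, 1.0)):
    """Toy provisioning step: choose (server count, DVFS frequency level)
    to cover `forecast_rps`. `per_server_rps` is capacity at full
    frequency; capacity is assumed to scale linearly with frequency
    (a simplification). Prefers fewer servers, then lower frequency."""
    for n in range(1, 65):
        for f in freqs:
            if n * per_server_rps * f >= forecast_rps:
                return n, f
    raise ValueError("forecast load exceeds cluster capacity")
```

A real controller would weigh the energy cost of an extra active server against running existing servers at a higher frequency; this sketch only shows where the forecast enters the decision.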

3.
In this work we focus on reducing response time and bandwidth requirements for high-performance web servers. Much research has been done to improve web server performance by modifying the web server architecture. In contrast to these approaches, we take a different point of view, considering web server performance from an OS perspective rather than from the web server architecture itself. To this end we explore two approaches. The first is running the web server within the OS kernel. We use kHTTPd as the basis for our implementation, but it has several drawbacks, such as redundant data copying, synchronous writes, and serving only static data; we propose techniques to remedy these flaws. The second approach is caching dynamic data. Dynamic data can seriously reduce the performance of web servers, and it has been considered difficult to cache because it changes far more frequently than static pages and because the web server must access a database to serve it. To this end, we propose a solution for higher-performance web service that caches dynamic data by separating content into static and dynamic portions. Benchmark results using WebStone show that our architecture can improve server performance by up to 18 percent and can significantly reduce users' perceived latency.
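The content-separation idea can be sketched in a few lines (a minimal illustration, not the paper's implementation): the expensive, mostly static template of a page is cached once, and only the small dynamic fragment is regenerated per request.

```python
# Toy content-separation cache: the static template of each page is
# fetched once and cached; only the dynamic fragment is regenerated
# on every request. Placeholder name "{dynamic}" is illustrative.
template_cache = {}

def render(page_id, fetch_template, fetch_dynamic):
    if page_id not in template_cache:
        template_cache[page_id] = fetch_template(page_id)  # expensive, cached
    return template_cache[page_id].replace("{dynamic}", fetch_dynamic(page_id))
```

The win is that repeat requests skip the expensive template generation entirely, paying only for the dynamic portion (e.g., a single database lookup).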

4.
As the number of cores per node keeps increasing, it becomes increasingly important for MPI to leverage shared memory for intranode communication. This paper investigates the design and optimization of MPI collectives for clusters of NUMA nodes. We develop performance models for collective communication using shared memory and demonstrate several algorithms for various collectives. Experiments are conducted on both Xeon X5650 and Opteron 6100 InfiniBand clusters. The measurements agree with the model and indicate that different algorithms dominate for short vectors and for long vectors. We compare our shared-memory allreduce with several MPI implementations—Open MPI, MPICH2, and MVAPICH2—that utilize system shared memory to facilitate interprocess communication. On a 16-node Xeon cluster and an 8-node Opteron cluster, our implementation achieves geometric-mean speedups of 2.3X and 2.1X, respectively, over the best MPI implementation. Our techniques enable an efficient implementation of collective operations on future multi- and manycore systems.

5.
6.
A number of biological data resources (i.e., databases and data analytical tools) are searchable and usable online thanks to the internet and World Wide Web (WWW) servers. The output from a web server is easy for a human to browse. However, it is laborious and sometimes impossible to write a computer program that finds a useful data resource, sends a proper query, and processes the output. This is a serious obstacle to the integration of distributed heterogeneous data resources. To solve this issue, we have implemented a SOAP (Simple Object Access Protocol) server and web services that provide a program-friendly interface. The web services are accessible at http://www.xml.nig.ac.jp/.

7.
Excessive weight in adults is a national concern, with over two-thirds of the US population deemed overweight. Because being overweight has been correlated with numerous diseases, such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight, and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large, nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates of final weight against measured data from two recent underfeeding studies and one overfeeding study. Mean absolute error and standard deviation between model predictions and observed measurements of final weight are less than 1.8±1.3 kg for the underfeeding studies and 2.5±1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating the reliability of individual weight-change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight-change studies.
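The core of such a model is a single ordinary differential equation driven by the energy balance. The sketch below is a much-reduced illustration (constants are generic textbook-style values, not the paper's fitted parameters, and the fat-free/fat-mass partition is omitted): weight changes at a rate proportional to the gap between intake and expenditure, with expenditure growing with body weight.

```python
def simulate_weight(w0, intake_kcal, days, ee_per_kg=22.0, rho=7700.0):
    """Forward-Euler sketch of a one-compartment energy-balance model:
    dW/dt = (intake - expenditure(W)) / rho, with expenditure taken as
    proportional to body weight. ee_per_kg (kcal/kg/day) and rho
    (kcal per kg of tissue) are illustrative constants only."""
    w = w0
    for _ in range(days):
        expenditure = ee_per_kg * w
        w += (intake_kcal - expenditure) / rho
    return w
```

Because expenditure rises as weight rises, the model naturally predicts a plateau: a fixed caloric deficit produces diminishing weight loss over time rather than a linear decline.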

8.
Qiu J, Sheffler W, Baker D, Noble WS. Proteins. 2008;71(3):1175-1182.
Protein structure prediction is an important problem of both intellectual and practical interest. Most protein structure prediction approaches first generate multiple candidate models and then use a scoring function to select the best model among these candidates. In this work, we develop a scoring function using support vector regression (SVR). Both consensus-based features and features from individual structures are extracted from a training data set containing native protein structures and predicted structural models submitted to CASP5 and CASP6. The SVR learns a scoring function that is a linear combination of these features. We test this scoring function on two data sets. First, when used to rank server models submitted to CASP7, the SVR score selects predictions that are comparable to the best performing server in CASP7, Zhang-Server, and significantly better than all the other servers. Even when the SVR score is not allowed to select Zhang-Server models, it still selects predictions that are significantly better than all the other servers. In addition, the SVR selects significantly better models and yields significantly better Pearson correlation coefficients than the two best quality-assessment groups in CASP7, QA556 (LEE) and QA634 (Pcons). Second, this work aims to improve the ability of the Robetta server to select the best models, and hence we evaluate the performance of the SVR score on ranking the Robetta server's template-based models for the CASP7 targets. The SVR selects significantly better models than the Robetta K*Sync consensus alignment score.

9.
Storing enormous amounts of data on hybrid storage systems has become a widely accepted solution for today's production-level applications in order to trade off performance and cost. However, how to improve the performance of large-scale storage systems with hybrid components (e.g., solid state disks, hard drives, and tapes) and complicated user behaviors is not fully explored. In this paper, we conduct an in-depth case study (which we call FastStor) on designing a high-performance hybrid storage system to support one of the world's largest satellite image distribution systems, operated by the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) center. We demonstrate how to combine conventional caching policies with innovative current-popularity-oriented and user-specific prefetching algorithms to improve the performance of the EROS system. We evaluate the effectiveness of our proposed solution using over 5 million real-world user download requests provided by EROS. Our experimental results show that using the Least Recently Used (LRU) caching policy alone, we achieve an overall 64 % or 70 % hit ratio on a 100 TB or 200 TB FTP server farm composed of Solid State Disks (SSDs), respectively. The hit ratio can be further improved to 70 % (for 100 TB of SSDs) and 76 % (for 200 TB of SSDs) if intelligent prefetching algorithms are used together with LRU.
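The LRU baseline in the evaluation above is a standard recency-ordered cache. A minimal sketch (capacity counted in items rather than terabytes, and without the prefetching layer):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache of the kind used as the baseline policy above
    (illustrative; real systems size capacity in bytes, not items)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()       # oldest entry first
        self.hits = self.misses = 0

    def get(self, key, fetch):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)  # mark as most recently used
        else:
            self.misses += 1
            self.store[key] = fetch(key)
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used
        return self.store[key]
```

A prefetcher would sit in front of `get`, inserting items it expects to be requested soon (e.g., currently popular scenes or a user's habitual downloads) before the miss occurs.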

10.
The metal removal potential of indigenous mining microorganisms from acid mine drainage (AMD) has been well recognised in situ at mine sites; however, their removal capacity needs to be investigated for AMD treatment. In the reported study, the capacity of an indigenous AMD microbial consortium dominated by Klebsormidium sp., immobilised in a photo-rotating biological contactor (PRBC), was investigated for removing various elements from a multi-ion synthetic AMD. The synthetic AMD was composed of major elements (Cu, Mn, Mg, Zn, Ca, Na, Ni) and trace elements (Fe, Al, Cr, Co, Se, Ag, Mo) at initial concentrations of 2 to 100 mg/L and 0.005 to 1 mg/L, respectively. The PRBC was operated for two 7-day batch periods at pH 3 and pH 5. Maximum removal was observed after 3 days at pH 3 and 6 days at pH 5. Daily water analysis demonstrated the ability of the algal–microbial biofilm to remove an overall average of 25–40 % of the major elements at pH 3, in the order Na > Cu > Ca > Mg > Mn > Ni > Zn, whereas higher removal (35–50 %) was observed at pH 5, in the order Cu > Mn > Mg > Ca > Ni > Zn > Na. The removal efficiency of the system for trace elements varied widely, between 3 and 80 %, at both pH conditions. The batch results demonstrate the ability of an indigenous AMD algal–microbial biofilm to remove a variety of elements from AMD in a PRBC. The work presents the potential for further development and scale-up of PRBCs inoculated with AMD microorganisms at mine sites for primary or secondary AMD treatment.

11.
Novotny M, Madsen D, Kleywegt GJ. Proteins. 2004;54(2):260-270.
When a new protein structure has been determined, comparison with the database of known structures enables classification of its fold as new or as belonging to a known class of proteins. This in turn may provide clues about the function of the protein. A large number of fold comparison programs have been developed, but they have never been subjected to a comprehensive and critical comparative analysis. Here we describe an evaluation of 11 publicly available, web-based servers for automatic fold comparison. Both their functionality (e.g., user interface, presentation, and annotation of results) and their performance (i.e., how well established structural similarities are recognized) were assessed. The servers were subjected to a battery of performance tests covering a broad spectrum of folds as well as special cases, such as multidomain proteins, Calpha-only models, new folds, and NMR-based models. The CATH structural classification system was used as a reference. These tests revealed the strengths and weaknesses of each server. On the whole, CE, DALI, MATRAS, and VAST showed the best performance, but none of the servers achieved a 100% success rate. Where no structurally similar proteins are found by any individual server, it is recommended to try one or two other servers before any conclusions concerning the novelty of a fold are put on paper.

12.
During recent years many protein fold recognition methods have been developed, based on different algorithms and using various kinds of information. To examine the performance of these methods, several evaluation experiments have been conducted, including blind tests in CASP/CAFASP, large-scale benchmarks, and long-term, continuous assessment with newly solved protein structures. These studies confirm the expectation that different methods produce the best predictions for different targets, and that the final prediction accuracy could be improved if the available methods were combined in a perfect manner. In this article a neural-network-based consensus predictor, Pcons, is presented that attempts this task. Pcons selects the best model out of those produced by six prediction servers, each using different methods. Pcons translates the confidence scores reported by each server into uniformly scaled values corresponding to the expected accuracy of each model. The translated scores, as well as the similarity between models produced by different servers, are used in the final selection. According to the analysis based on two unrelated sets of newly solved proteins, Pcons outperforms any single server by generating approximately 8%-10% more correct predictions. Furthermore, the specificity of Pcons is significantly higher than that of any individual server. From analyzing different input data to Pcons it can be shown that the improvement is mainly attributable to the measurement of similarity between the different models. Pcons is freely accessible to the academic community through the protein structure-prediction metaserver at http://bioinfo.pl/meta/.

13.
Fold recognition techniques assist the exploration of protein structures, and web-based servers are part of the standard set of tools used in the analysis of biochemical problems. Despite their success, current methods are only able to predict the correct fold in a relatively small number of cases. We propose an approach that improves the selection of correct folds from among the results of two methods implemented as web servers (SAMT99 and 3DPSSM). Our approach is based on training a system of neural networks with models generated by the servers and a set of associated characteristics, such as the quality of the sequence-structure alignment, the distribution of sequence features (sequence-conserved positions and apolar residues), and the compactness of the resulting models. Our results show that it is possible to detect adequate folds to model 80% of the sequences with a high level of confidence. The improvements achieved by taking sequence characteristics into account open the door to future improvements by directly including such factors in the model generation step. This approach has been implemented as an automatic system, LIBELLULA, available as a public web server at http://www.pdg.cnb.uam.es/servers/libellula.html.

14.
An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior, and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx, and the resemblance inspires a honeybee biomimetic algorithm. This paper describes the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems, and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.
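One honeybee-inspired allocation step might look like the following sketch (names, the explore rate, and the profit-proportional rule are my illustrative assumptions, not the paper's exact model): each server either follows the "dance floor", choosing a service with probability proportional to its recently observed profit, or explores a service uniformly at random.

```python
import random

def reallocate(servers, profits, explore=0.1, rng=random):
    """One honeybee-style allocation step: servers mostly follow the
    profit-weighted 'dance' (exploitation) but occasionally scout a
    random service (exploration), keeping the allocation adaptive."""
    services = list(profits)
    total = sum(profits.values())
    assignment = {}
    for s in servers:
        if total == 0 or rng.random() < explore:
            assignment[s] = rng.choice(services)   # scout a random service
        else:
            r = rng.random() * total               # profit-proportional pick
            for svc, p in profits.items():
                r -= p
                if r <= 0:
                    assignment[s] = svc
                    break
            else:
                assignment[s] = services[-1]       # float round-off fallback
    return assignment
```

The small exploration term is what lets the ensemble notice a service whose demand (and thus profit) has just surged, mirroring scout bees discovering a fresh flower patch.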

15.
Protein-protein docking plays an important role in the computational prediction of the complex structure between two proteins. Over the years, a variety of docking algorithms have been developed, as witnessed by the Critical Assessment of PRediction of Interactions (CAPRI) experiments. However, despite their successes, many docking algorithms often require a series of manual operations, such as modeling structures from sequences, incorporating biological information, and selecting final models. The difficulty of these manual steps has significantly limited the application of protein-protein docking, as most users in the community are not docking experts. Therefore, automated docking, such as a web server that gives performance comparable to a human docking protocol, is pressingly needed. As such, we participated in the blind CAPRI experiments for Rounds 38-45 and the CASP13-CAPRI challenge for Round 46 with both our HDOCK automated docking web server and our human docking protocol. Our HDOCK server achieved an "acceptable" or higher CAPRI-rated model in the top 10 submitted predictions for 65.5% and 59.1% of the targets in the docking experiments of CAPRI and CASP13-CAPRI, respectively, which is comparable to the 66.7% and 54.5% achieved by the human docking protocol. Similar trends are observed in the scoring experiments. These results validate our HDOCK server as an efficient automated docking protocol for nonexpert users. Challenges and opportunities of automated docking are also discussed.

16.
Evaluation of protein structure prediction methods is difficult and time-consuming. Here, we describe EVA, a web server for assessing protein structure prediction methods in an automated, continuous, and large-scale fashion. Currently, EVA evaluates the performance of a variety of prediction methods available through the internet. Every week, the sequences of the latest experimentally determined protein structures are sent to prediction servers, results are collected, performance is evaluated, and a summary is published on the web. EVA has so far collected data for more than 3000 protein chains. These results may provide valuable insight to both developers and users of prediction methods. AVAILABILITY: http://cubic.bioc.columbia.edu/eva. CONTACT: eva@cubic.bioc.columbia.edu

17.
Weather forecasting is essential in applications such as smart olive farming, where farmers use predicted weather data to take appropriate actions aimed at increasing crop production. Many deep learning models have been developed for this problem. However, olive groves are located in remote areas with no internet connectivity, so these models are not applicable, as they require either powerful processors or communication with cloud servers for inference. In this work, we propose a deep learning encoder-decoder model that uses a seasonal attention mechanism for time-series forecasting of weather variables. The proposed model is non-complex, yet more powerful than the more complex models in the literature. We use this model as the core of a framework that preprocesses the training and testing data, trains the model, and deploys it on a resource-constrained microcontroller. Using real-life weather datasets from Spanish, Greek, and Chinese weather stations, we show that the proposed model achieves higher prediction accuracy than the existing literature: a prediction mean absolute error (MAE) of 2.13 °C and a root mean squared error (RMSE) of 2.64 °C. This accuracy is achieved with the model requiring only 37.6 kB of memory for storing the model parameters, and a total memory requirement of 50.1 kB. Since the model is relatively non-complex, we implement it on the Raspberry Pi Pico platform, which has a very low cost and minimal power consumption compared to other embedded platforms. We also build and test a prototype to verify the model's ability to achieve the target objective in real-life scenarios.

18.
There is growing incentive to reduce the power consumed by large-scale data centers that host online services such as banking, retail commerce, and gaming. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. By dynamically provisioning virtual machines, consolidating the workload, and turning servers on and off as needed, data center operators can maintain the desired quality of service (QoS) while achieving higher server utilization and energy efficiency. We implement and validate a dynamic resource provisioning framework for virtualized server environments in which the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a lookahead control scheme. The proposed approach accounts for the switching costs incurred while provisioning virtual machines and explicitly encodes the corresponding risk in the optimization problem. Experiments using the Trade6 enterprise application show that a server cluster managed by the controller conserves, on average, 22% of the power required by a system without dynamic control while still maintaining QoS goals. Finally, we use trace-based simulations to analyze controller performance on server clusters larger than our testbed, and show how concepts from approximation theory can be used to further reduce the computational burden of controlling large systems.

19.
Content-Aware Dispatching Algorithms for Cluster-Based Web Servers
Cluster-based web servers are leading architectures for highly accessed websites. The most common web cluster architecture consists of replicated server nodes and a web switch that routes client requests among the nodes. In this paper, we consider content-aware web switches that can use application-level information to assign client requests. We evaluate the performance of some representative state-of-the-art dispatching algorithms for web switches operating at layer 7 of the OSI protocol stack. Specifically, we consider dispatching algorithms that use only client information, as well as those combining client and server information for load sharing, reference locality, or service partitioning. We demonstrate through a wide set of simulation experiments that dispatching policies aiming to improve locality in server caches give the best results for traditional web publishing sites serving static information and simple database searches. On the other hand, for more recent websites providing dynamic and secure services, dispatching policies that aim to share the load are the most effective.
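The locality-versus-load trade-off described above can be shown in a few lines (an illustrative sketch, not one of the evaluated algorithms): a locality-aware switch hashes the requested URL so repeat requests hit the same node's cache, while a load-sharing switch simply picks the least-loaded node.

```python
import hashlib

def dispatch(url, nodes, loads, locality=True):
    """Layer-7 dispatching sketch. With `locality`, requests for the same
    URL always map to the same node (improving server-cache hit rates,
    good for static content). Otherwise, pick the least-loaded node
    (better for dynamic or secure workloads)."""
    if locality:
        h = int(hashlib.md5(url.encode()).hexdigest(), 16)
        return nodes[h % len(nodes)]
    return min(nodes, key=lambda n: loads[n])
```

Note that the content-aware choice is only possible at layer 7, where the switch can see the URL; a layer-4 switch sees only the connection tuple and must fall back to load- or connection-based policies.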

20.

Background

The accurate prediction of ligand binding residues from amino acid sequences is important for the automated functional annotation of novel proteins. In the previous two CASP experiments, the most successful methods in the function prediction category were those which used structural superpositions of 3D models and related templates with bound ligands in order to identify putative contacting residues. However, whilst most of this prediction process can be automated, visual inspection and manual adjustment of parameters, such as the distance thresholds used for each target, have often been required to prevent overprediction. Here we describe a novel method, FunFOLD, which uses an automatic approach for cluster identification and residue selection. The software provided can easily be integrated into existing fold recognition servers, requiring only a 3D model and a list of templates as inputs. A simple web interface is also provided, allowing access for non-expert users. The method has been benchmarked against the top servers and manual prediction groups tested at both CASP8 and CASP9.

Results

The FunFOLD method shows a significant improvement over the best available servers and is shown to be competitive with the top manual prediction groups that were tested at CASP8. The FunFOLD method is also competitive with both the top server and manual methods tested at CASP9. When tested using common subsets of targets, the predictions from FunFOLD achieve significantly higher mean Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) scores than all server methods that were tested at CASP8. Testing on the CASP9 set showed no statistically significant separation in performance between FunFOLD and the other top server groups tested.

Conclusions

The FunFOLD software is freely available as both a standalone package and a prediction server, providing competitive ligand binding site residue predictions for expert and non-expert users alike. The software provides a new fully automated approach for structure-based function prediction using 3D models of proteins.
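For reference, the headline metric in the results above, the Matthews Correlation Coefficient, is a standard formula over the binary confusion matrix (this is the textbook definition, not code from FunFOLD):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient for binary predictions:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Ranges from -1 (total disagreement) through 0 (chance level)
    to +1 (perfect prediction); 0 is returned for a degenerate
    confusion matrix where the denominator vanishes."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

Unlike plain accuracy, MCC stays informative when binding residues are rare relative to non-binding residues, which is why it is favoured for binding-site evaluation.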

