Similar Literature
 20 similar articles found (search time: 15 ms)
1.
This paper proposes solutions for monitoring and balancing the load of a cloud data center. The proposed solutions work in two phases, and graph-theoretical concepts are applied in both. In the first phase, the cloud data center is modeled as a network graph, which is augmented with the minimum dominating set concept of graph theory for monitoring its load. For constructing the minimum dominating set, this paper proposes a new variant of the minimum dominating set (V-MDS) algorithm and compares it with the existing construction algorithms of Rooji and Fomin. The V-MDS approach to querying cloud data center load information is compared with a Central monitor approach. The second phase focuses on system- and network-aware live virtual machine migration for load balancing the cloud data center. For this, a new system- and traffic-aware live VM migration for load balancing (ST-LVM-LB) algorithm is proposed and compared with the existing benchmarked algorithms, the dynamic management algorithm (DMA) and Sandpiper. To study the performance of the proposed algorithms, the CloudSim 3.0.3 simulator is used. The experimental results show that the V-MDS algorithm has quadratic time complexity, whereas the Rooji and Fomin algorithms have exponential time complexity. The V-MDS approach for querying cloud data center load information reduces the number of message updates by half compared with the Central monitor approach. On load balancing, the results show that the developed ST-LVM-LB algorithm triggers fewer virtual machine migrations and incurs lower migration time and cost with minimal network overhead. Thus the proposed algorithms improve the service delivery performance of the cloud data center by incorporating graph-theoretical solutions in monitoring and balancing the load.
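
To illustrate the dominating-set idea used for monitoring, the following is a minimal greedy construction of a small dominating set over a graph given as an adjacency dictionary. It is a generic heuristic sketch, not the paper's V-MDS algorithm; the node names and toy topology are hypothetical.

# Illustrative greedy construction of a (small) dominating set for a data-center
# graph represented as an adjacency dict. Monitors placed on the dominating set
# can observe every node directly (each node is in the set or adjacent to it).
def greedy_dominating_set(adj):
    uncovered = set(adj)              # nodes not yet dominated
    dominators = set()
    while uncovered:
        # pick the node whose closed neighborhood covers the most uncovered nodes
        best = max(adj, key=lambda v: len(({v} | set(adj[v])) & uncovered))
        dominators.add(best)
        uncovered -= {best} | set(adj[best])
    return dominators

if __name__ == "__main__":
    topology = {
        "h1": ["h2", "h3"],
        "h2": ["h1", "h4"],
        "h3": ["h1", "h4"],
        "h4": ["h2", "h3", "h5"],
        "h5": ["h4"],
    }
    print(greedy_dominating_set(topology))   # e.g. {'h4', 'h1'}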

2.
This paper focuses on devising an efficient algorithm for load balancing on the promising biswapped interconnection networks, which were recently proposed as a better architecture than the well-known OTIS networks. The proposed algorithm, called GPM, noticeably reduces the number of load-balancing steps required by existing algorithms. GPM first schedules load flows on inter-group links to achieve a balanced status among groups. Then a general load-balancing strategy is executed within each group to balance processor loads. An analytical model proves that the GPM algorithm is efficient, and simulation results indicate that GPM performs load balancing efficiently in biswapped interconnection environments across various parameters.
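
A minimal sketch of the two-phase idea follows: phase 1 plans transfers between groups so that each group holds its proportional share of the total load, and phase 2 evens out the processors inside each group. This is an illustration of the general structure only, not the GPM algorithm; the data layout and transfer model are assumptions.

# Phase 1: schedule inter-group transfers so each group reaches its target share.
def plan_inter_group_transfers(group_loads, group_sizes):
    total, n = sum(group_loads), sum(group_sizes)
    targets = [total * s / n for s in group_sizes]
    surplus = [group_loads[i] - targets[i] for i in range(len(group_loads))]
    donors = [[i, d] for i, d in enumerate(surplus) if d > 0]
    takers = [[i, -d] for i, d in enumerate(surplus) if d < 0]
    transfers = []
    while donors and takers:
        (si, s), (ti, t) = donors[0], takers[0]
        amount = min(s, t)
        transfers.append((si, ti, amount))     # (source group, destination group, load)
        donors[0][1] -= amount
        takers[0][1] -= amount
        if donors[0][1] <= 1e-9:
            donors.pop(0)
        if takers[0][1] <= 1e-9:
            takers.pop(0)
    return transfers

# Phase 2: inside a group, every processor converges to the group mean.
def balance_within_group(loads):
    mean = sum(loads) / len(loads)
    return [mean] * len(loads)

if __name__ == "__main__":
    groups = [[9, 7, 8], [2, 1, 3], [4, 4, 4]]
    print(plan_inter_group_transfers([sum(g) for g in groups], [len(g) for g in groups]))
    # phase 2 shown on the original groups for illustration
    print([balance_within_group(g) for g in groups])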

3.
The evolution of control algorithms for closed-loop regulation of blood glucose levels is described. Because of the rapid response time of the BIOSTATOR Glucose Analyzer, a derivative algorithm can be applied to replace the previous generation of "predictor" algorithms for the calculation of dynamic insulin infusion rates.
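
The following is a generic proportional-derivative (PD) sketch of the idea: the infusion rate responds to the current glucose level and to its rate of change rather than to a model-based prediction. The gains, setpoint and clamping limits are illustrative assumptions, not the BIOSTATOR's actual algorithm.

def insulin_infusion_rate(glucose, prev_glucose, dt,
                          setpoint=100.0, kp=0.02, kd=0.5, max_rate=10.0):
    """Glucose in mg/dL, dt in minutes; returns an infusion rate in arbitrary units/h."""
    error = glucose - setpoint                  # proportional term: deviation from target
    derivative = (glucose - prev_glucose) / dt  # derivative term: glucose trend
    rate = kp * error + kd * derivative
    return min(max(rate, 0.0), max_rate)        # never negative, capped at a safe maximum

if __name__ == "__main__":
    # glucose rising from 150 to 156 mg/dL over 3 minutes -> infusion increases
    print(insulin_infusion_rate(156, 150, 3.0))   # 0.02*56 + 0.5*2 = 2.12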

4.
Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas, whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is highly important for improving the expected poor model results when predictor-response correlations are low. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite imagery. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged between 0.2 and 17.7 kg m⁻², displaying huge variability, with diffuse insolation and curvatures at different scales guiding the spatial pattern. Predictor selection and model tuning improved the predictive performance of all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Choosing predictors by their individual performance was outperformed by the two procedures that accounted for predictor interaction.
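
A compact sketch of the evaluation protocol (hyperparameter tuning nested inside five repetitions of tenfold cross-validation) is shown below with scikit-learn. The models, parameter grids and synthetic data are illustrative stand-ins for the paper's algorithms and SOC dataset.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=30, noise=10.0, random_state=0)
outer_cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)

candidates = {
    "boosted_trees": (GradientBoostingRegressor(random_state=0),
                      {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}),
    "random_forest": (RandomForestRegressor(random_state=0),
                      {"n_estimators": [200], "max_features": ["sqrt", 1.0]}),
}

for name, (model, grid) in candidates.items():
    # the inner grid search tunes hyperparameters; the outer repeated CV scores the tuned model
    tuned = GridSearchCV(model, grid, cv=5, scoring="neg_root_mean_squared_error")
    scores = cross_val_score(tuned, X, y, cv=outer_cv,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.2f} +/- {scores.std():.2f}")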

5.
3D morphing is a popular technique for creating a smooth transition between two objects. In this paper we integrate volume morphing and rendering in a distributed network environment to improve computational efficiency. We describe the proposed system architecture for distributed volume morphing and the proposed algorithms, along with their implementation and performance on networked workstations. A load evaluation function is proposed to partition the workload and the workstation cluster for better load balancing, and thereby improve performance under highly uneven load situations. Performance evaluations of five load-balancing strategies are conducted; among them, the 'Request' strategy performs best in terms of speedup.

6.
Problems related to the flow management of a flexible manufacturing system (FMS) are formulated here in terms of combinatorial optimization. We consider a system consisting of several multitool automated machines, each equipped with a possibly different tool set and linked to the others by a transportation system for part moving. The system operates with a given production mix. The flow-management problem addressed is that of finding the part routings that allow an optimal machine workload balancing. The problem is formulated as a particular capacity assignment problem. With the proposed approach, a balanced solution can be achieved by routing parts on a limited number of different paths, and such a balancing routing can be found in polynomial time. We also give polynomial-time and -space algorithms for choosing, among all workload-balancing routings, the ones that minimize the global amount of part transfer among the machines.

7.
The availability of low-cost microcomputers and the evolution of computer networks have increased the development of distributed systems. In order to obtain better process allocation in distributed environments, several load balancing algorithms have been proposed. Generally, these algorithms use the length of the CPU's process waiting queue as the information policy's load index. This paper modifies the Server-Initiated Lowest algorithm by using a load index based on resource occupation. Using this load index, the Server-Initiated Lowest algorithm is compared with the Stable Symmetrically Initiated algorithm, currently regarded as the best choice. The comparisons are made using simulations, which showed that the modified Server-Initiated Lowest algorithm obtained better results than the Symmetrically Initiated one.
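
A minimal sketch of a resource-occupation load index follows, combining the utilisation of several resources instead of only the CPU queue length. The chosen resources and weights are illustrative assumptions, not the paper's exact index.

def load_index(cpu_util, mem_util, disk_util, net_util,
               weights=(0.4, 0.3, 0.2, 0.1)):
    """Each utilisation is in [0, 1]; returns a weighted occupation index in [0, 1]."""
    usage = (cpu_util, mem_util, disk_util, net_util)
    return sum(w * u for w, u in zip(weights, usage))

def pick_receiver(hosts):
    """Server-initiated policies send work to the least loaded host."""
    return min(hosts, key=lambda h: load_index(**hosts[h]))

if __name__ == "__main__":
    hosts = {
        "node-a": dict(cpu_util=0.9, mem_util=0.6, disk_util=0.2, net_util=0.3),
        "node-b": dict(cpu_util=0.3, mem_util=0.4, disk_util=0.1, net_util=0.2),
    }
    print(pick_receiver(hosts))   # node-b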

8.
Task scheduling is one of the most challenging aspects of improving the overall performance of cloud computing and optimizing cloud utilization and Quality of Service (QoS). This paper focuses on task scheduling optimization using a novel approach based on Dynamic dispatch Queues (TSDQ) and hybrid meta-heuristic algorithms. We propose two hybrid meta-heuristic algorithms, the first using Fuzzy Logic with Particle Swarm Optimization (TSDQ-FLPSO) and the second using Simulated Annealing with Particle Swarm Optimization (TSDQ-SAPSO). Several experiments have been carried out on an open-source simulator (CloudSim) using synthetic and real data sets from real systems. The experimental results demonstrate the effectiveness of the proposed approach, with TSDQ-FLPSO providing the best results compared with TSDQ-SAPSO and other existing scheduling algorithms, especially for high-dimensional problems. The TSDQ-FLPSO algorithm shows a clear advantage in terms of waiting time, queue length, makespan, cost, resource utilization, degree of imbalance, and load balancing.
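
The sketch below shows the PSO core shared by both hybrids: particles encode task-to-VM assignments and are moved towards personal and global bests to minimise makespan. The fuzzy-logic and simulated-annealing hybridisations and the dynamic dispatch queues are not reproduced; problem sizes and parameters are assumptions.

import random

TASKS = [random.uniform(1, 10) for _ in range(30)]   # task lengths
N_VMS = 4

def makespan(position):
    """position[i] in [0, N_VMS) assigns task i to a VM; makespan = busiest VM."""
    vm_time = [0.0] * N_VMS
    for length, p in zip(TASKS, position):
        vm_time[int(p) % N_VMS] += length
    return max(vm_time)

def pso(n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(TASKS)
    pos = [[random.uniform(0, N_VMS) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=makespan)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0), N_VMS - 1e-9)
            if makespan(pos[i]) < makespan(pbest[i]):
                pbest[i] = pos[i][:]
                if makespan(pbest[i]) < makespan(gbest):
                    gbest = pbest[i][:]
    return gbest, makespan(gbest)

if __name__ == "__main__":
    schedule, ms = pso()
    print("best makespan:", round(ms, 2), "ideal:", round(sum(TASKS) / N_VMS, 2))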

9.
Simulated annealing (SA) is a general-purpose optimization technique widely used in various combinatorial optimization problems. However, its main drawback is the long computation time required to obtain a good-quality solution. Clusters have emerged as a feasible and popular platform for parallel computing in many applications, and the computing nodes of many clusters available today are temporally heterogeneous. In this study, multiple Markov chain (MMC) parallel simulated annealing (PSA) algorithms have been implemented on a temporally heterogeneous cluster of workstations to solve the graph partitioning problem, and their performance has been analyzed in detail. The temporal heterogeneity of the cluster is harnessed by employing static and dynamic load balancing techniques to further improve the efficiency and scalability of the MMC PSA algorithms.
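
Below is a sequential sketch of the annealing core that each Markov chain in an MMC PSA run would execute for two-way graph partitioning. The multiple-chain coordination, the balance constraint on partition sizes and the load balancing across heterogeneous nodes are omitted; the toy graph and cooling schedule are illustrative assumptions.

import math, random

def cut_size(edges, part):
    return sum(1 for u, v in edges if part[u] != part[v])

def anneal(n_nodes, edges, t0=5.0, cooling=0.995, steps=5000):
    part = [random.randint(0, 1) for _ in range(n_nodes)]
    cost, t = cut_size(edges, part), t0
    for _ in range(steps):
        v = random.randrange(n_nodes)             # propose: flip one node's side
        part[v] ^= 1
        new_cost = cut_size(edges, part)
        # accept improvements always, uphill moves with Boltzmann probability
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            part[v] ^= 1                          # reject: undo the flip
        t *= cooling
    return part, cost

if __name__ == "__main__":
    random.seed(1)
    edges = [(i, (i + 1) % 12) for i in range(12)] + [(0, 6), (3, 9)]
    print(anneal(12, edges))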

10.
Identifying genes indispensable for an organism's life, and their characteristics, is one of the central questions in current biological research; hence it would be helpful to develop computational approaches for the prediction of essential genes. The performance of a predictor is usually measured by the area under the receiver operating characteristic curve (AUC). We propose a novel method that implements genetic algorithms to maximize the partial AUC restricted to a specific interval of low false positive rate (FPR), the region relevant to follow-up experimental validation. Our predictor uses various features based on sequence information, protein-protein interaction network topology, and gene expression profiles. A feature selection wrapper was developed to alleviate the over-fitting problem and to weigh each feature's relevance to prediction. We evaluated our method using the proteome of budding yeast. Our implementation of genetic algorithms maximizing the partial AUC below an FPR of 0.05 or 0.10 outperformed other popular classification methods. [BMB Reports 2013; 46(1): 41-46]
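
The quantity being maximized, the partial AUC over a low-FPR interval, can be computed as below with scikit-learn; the genetic algorithm that searches over features and weights to maximize it is not shown, and the toy scores are made up.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(50), np.zeros(200)])
y_score = np.concatenate([rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 200)])

# McClish-standardised partial AUC over FPR in [0, 0.10] (0.5 = random, 1.0 = perfect)
print(roc_auc_score(y_true, y_score, max_fpr=0.10))

# Raw partial AUC by trapezoidal integration of the ROC curve up to FPR = 0.10
# (no interpolation at the cut-off, so this slightly underestimates the area)
fpr, tpr, _ = roc_curve(y_true, y_score)
mask = fpr <= 0.10
print(np.trapz(tpr[mask], fpr[mask]))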

11.
12.
In heterogeneous environments, dynamic scheduling algorithms are a powerful tool for improving the performance of scientific applications via load balancing. However, these scheduling techniques employ heuristics that require prior knowledge about the workload obtained via profiling, resulting in higher overhead as problem sizes and numbers of processors increase. In addition, load imbalance may appear only at run-time, making profiling work tedious and sometimes even obsolete. Recently, the integration of dynamic loop scheduling algorithms into a number of scientific applications has proven effective. This paper reports on performance improvements obtained by integrating Adaptive Weighted Factoring, a recently proposed dynamic loop scheduling technique that addresses these concerns, into two scientific applications: computational field simulation on unstructured grids, and N-body simulations. The reported experimental results confirm the benefits of this methodology and emphasize its high potential for future integration into other scientific applications that exhibit substantial performance degradation due to load imbalance.
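
The sketch below shows the chunk-size rule behind factoring-style dynamic loop scheduling, of which Adaptive Weighted Factoring is a weighted variant: each batch hands out roughly half of the remaining iterations, split among processors according to weights that AWF adapts from measured performance. The weights here are static assumptions; the adaptive update from timing data is not reproduced.

import math

def factoring_chunks(n_iters, weights):
    """Yield (processor, chunk_size) assignments until all iterations are scheduled."""
    total_w = sum(weights)
    remaining = n_iters
    while remaining > 0:
        batch = max(1, remaining // 2)          # each batch: half of what is left
        for proc, w in enumerate(weights):
            chunk = min(remaining, max(1, math.ceil(batch * w / total_w)))
            if chunk == 0:
                continue
            yield proc, chunk
            remaining -= chunk

if __name__ == "__main__":
    # faster processors (larger weight) receive larger chunks within each batch
    for proc, chunk in factoring_chunks(1000, weights=[2.0, 1.0, 1.0]):
        print(f"P{proc}: {chunk} iterations")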

13.
Purpose: It is vital to appropriately power clinical trials towards the discovery of novel disease-modifying therapies for Parkinson's disease (PD); thus, it is critical to improve the prediction of outcome in PD patients. Methods: We systematically probed a range of robust predictor algorithms, aiming to find the best combinations of features for significantly improved prediction of motor outcome (MDS-UPDRS-III) in PD. We analyzed 204 PD patients with 18 features (clinical measures; dopamine-transporter (DAT) SPECT imaging measures), performing different randomized arrangements and utilizing data from 64%/6%/30% of patients in each arrangement for training/training validation/final testing. We pursued three approaches: (i) 10 predictor algorithms (with automated machine learning hyperparameter tuning) were first applied to 32 experimentally created combinations of the 18 features, (ii) Feature Subset Selector Algorithms (FSSAs) were utilized for more systematic initial feature selection, and (iii) all possible combinations of the 18 features (262,143 states) were considered to assess the contributions of individual features. Results: A specific set (set 18) applied to the LOLIMOT (Local Linear Model Trees) predictor machine resulted in the lowest absolute error, 4.32 ± 0.19, when we first experimentally created 32 combinations of the 18 features. Subsequently, two FSSAs (Genetic Algorithm (GA) and Ant Colony Optimization (ACO)) selecting 5 features, combined with LOLIMOT, reached an error of 4.15 ± 0.46. Our final analysis indicated that longitudinal motor measures (MDS-UPDRS-III years 0 and 1) were highly significant predictors of motor outcome. Conclusions: We demonstrate excellent prediction of motor outcome in PD patients by employing automated hyperparameter tuning and optimal utilization of FSSAs and predictor algorithms.
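
A small-scale sketch of the exhaustive feature-combination search described above follows: every non-empty subset of a handful of candidate features is scored by cross-validated absolute error and the best subset is kept. The data, the hypothetical feature names and the regressor (a stand-in for LOLIMOT, which scikit-learn does not provide) are illustrative assumptions.

from itertools import combinations
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=204, n_features=6, n_informative=3,
                       noise=5.0, random_state=0)
features = [f"f{i}" for i in range(X.shape[1])]   # hypothetical feature names

best_subset, best_mae = None, np.inf
for k in range(1, len(features) + 1):
    for subset in combinations(range(len(features)), k):
        model = RandomForestRegressor(n_estimators=50, random_state=0)
        mae = -cross_val_score(model, X[:, subset], y, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        if mae < best_mae:
            best_subset, best_mae = subset, mae

print("best features:", [features[i] for i in best_subset], "MAE:", round(best_mae, 2))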

14.
As a result of genome and other sequencing projects, the gap between the number of known protein sequences and the number of proteins with known structural class is widening rapidly. In order to narrow this gap, it is vitally important to develop computational methods for determining protein structural class quickly and accurately. In this paper, a novel predictor is developed for predicting protein structural class. It is characterized by employing a support vector machine learning system and a different pseudo-amino acid composition (PseAA), which was introduced to take the sequence-order effects into account, to some extent, when representing protein samples. As a demonstration, the jackknife cross-validation test was performed on a working dataset containing 204 non-homologous proteins. The predicted results are very encouraging, indicating that the current predictor, featuring the PseAA, may play an important complementary role to the elegant covariant discriminant predictor and other existing algorithms.
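
A simplified sketch of a pseudo-amino-acid composition feature vector fed into a support vector machine is given below. Only a single property (Kyte-Doolittle hydropathy) drives the sequence-order terms, so this is a reduced illustration of the idea rather than the exact encoding used in the paper, and the training sequences are made up.

from sklearn.svm import SVC

HYDROPATHY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
              "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
              "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
              "Y": -1.3, "V": 4.2}
AA = sorted(HYDROPATHY)

def pseaa(seq, lam=3, w=0.05):
    """20 composition terms plus lam sequence-order correlation terms."""
    comp = [seq.count(a) / len(seq) for a in AA]
    theta = []
    for j in range(1, lam + 1):
        pairs = [(HYDROPATHY[seq[i]] - HYDROPATHY[seq[i + j]]) ** 2
                 for i in range(len(seq) - j)]
        theta.append(sum(pairs) / len(pairs))
    denom = sum(comp) + w * sum(theta)
    return [c / denom for c in comp] + [w * t / denom for t in theta]

# toy training set: two hypothetical structural classes
train = [("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 0),
         ("GSSGSSGGGSSGGSGGSSGGSSGSSGGGSSGGS", 1),
         ("MLLAVLYCLLWSFQTSAGHFPRACVSSKNLMEK", 0),
         ("GGGSGGGSSSGGSGGGSSGGGSGSSGGSSGSSG", 1)]
X = [pseaa(s) for s, _ in train]
y = [c for _, c in train]
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([pseaa("GSSGGSSGGSGGSSGGGSSGGSGSSGGSSGGGS")]))   # likely class 1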

15.
The integration of multiple predictors promises higher prediction accuracy than can be obtained with a single predictor. The challenge is how to select the best predictor at any given moment. Traditionally, multiple predictors are run in parallel and the one that generates the best result is selected for prediction. In this paper, we propose a novel approach for predictor integration based on learning from historical predictions. Compared with the traditional approach, it does not require running all the predictors simultaneously. Instead, it uses classification algorithms such as k-Nearest Neighbor (k-NN) and Bayesian classification, together with a dimensionality reduction technique such as Principal Component Analysis (PCA), to forecast the best predictor for the workload under study based on the learning of historical predictions; only the forecasted best predictor is then run. Our experimental results show that this approach achieved 20.18% higher best-predictor forecasting accuracy than the cumulative-MSE-based predictor selection approach used in the popular Network Weather Service system. In addition, it outperformed the observed most accurate single predictor in the pool for 44.23% of the performance traces.
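
A minimal sketch of the integration idea follows: learn, from history, which predictor was best for a given workload signature, then run only the forecast winner. PCA compresses the signature and k-NN performs the classification. The workload features, the pool of "predictors" and the data are all illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# historical workload signatures (e.g. recent load samples) ...
history = rng.random((200, 16))
# ... and, for each, the predictor that produced the lowest error at that time
# (0 = moving average, 1 = autoregressive, 2 = last value; a hypothetical pool)
best_predictor = (history.mean(axis=1) * 3).astype(int).clip(0, 2)

selector = make_pipeline(PCA(n_components=4),
                         KNeighborsClassifier(n_neighbors=5))
selector.fit(history, best_predictor)

new_workload = rng.random((1, 16))
print("run predictor:", selector.predict(new_workload)[0])   # only this one is executed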

16.
The delivery of scalable, rich multimedia applications and services on the Internet requires sophisticated technologies for transcoding, distributing, and streaming content. Cloud computing provides an infrastructure for such technologies, but specific challenges remain in the areas of task management, load balancing, and fault tolerance. To address these issues, we propose a cloud-based distributed multimedia streaming service (CloudDMSS), which is designed to run on all major cloud computing services. CloudDMSS is highly adapted to the structure and policies of Hadoop, and thus has additional capabilities for transcoding, task distribution, load balancing, and content replication and distribution. To satisfy the design requirements of our service architecture, we propose four important algorithms: content replication, system recovery for Hadoop distributed multimedia streaming, management for cloud multimedia, and streaming resource-based connection (SRC) for streaming job distribution. To evaluate the proposed system, we conducted several performance tests on a local testbed: transcoding, streaming job distribution using SRC, streaming service deployment, and robustness to data node and task failures. In addition, we performed three tests in an actual cloud computing environment, Cloudit 2.0: transcoding, streaming job distribution using SRC, and streaming service deployment.

17.
Compressed sensing has been shown to be promising for accelerating magnetic resonance imaging. In this technology, magnetic resonance images are usually reconstructed by enforcing sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a new sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of the frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model recovers the solutions of all three models. We find that the balanced model has performance comparable to the analysis model; moreover, both achieve better results than the synthesis model regardless of the value of the balancing parameter. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed accelerated proximal gradient algorithm (APG) and the alternating direction method of multipliers for the balanced model (ADMM-B).
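
For readers unfamiliar with the three models, one common way of writing them for tight-frame CS-MRI is sketched below; the notation (tight frame W with W^T W = I, undersampled measurement operator A, data b, balancing parameter beta) is ours and may differ from the paper's.

\text{synthesis:}\quad \min_{\alpha}\ \|\alpha\|_1 \quad \text{s.t.}\quad \|A W^{\mathsf T}\alpha - b\|_2 \le \varepsilon
\text{analysis:}\quad \min_{x}\ \|W x\|_1 \quad \text{s.t.}\quad \|A x - b\|_2 \le \varepsilon
\text{balanced:}\quad \min_{\alpha}\ \|\alpha\|_1 + \tfrac{\beta}{2}\,\|(I - W W^{\mathsf T})\alpha\|_2^2 \quad \text{s.t.}\quad \|A W^{\mathsf T}\alpha - b\|_2 \le \varepsilon

As beta tends to zero the penalty vanishes and the balanced model reduces to the synthesis model; as beta tends to infinity the coefficients are forced into the range of W, recovering the analysis model.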

18.
Flexibility in part process representation and highly adaptive routing algorithms are two major sources of improvement in the control of flexible manufacturing systems (FMSs). This article reports an investigation of the impact of these two kinds of flexibility on system performance. We argue that, when feasible, the choice of operations and the sequencing of part process plans should be deferred until detailed knowledge about the real-time factory state is available. To test these ideas, a flexible routing control simulation system (FRCS) was constructed, and a programming language for modeling FMS part process plans, control strategies, and environments was designed and implemented. In addition, a scheme for implementing flexible process routing, called the data flow dispatching rule (DFDR), was derived. The simulation results indicate that flexible processing can reduce mean flow time while increasing system throughput and machine utilization. We observed that this form of flexibility makes automatic load balancing of the machines possible. On the other hand, it also makes the control and scheduling process more complicated and calls for new control algorithms.

19.

Aim

Ideally, datasets for species distribution modelling (SDM) contain evenly sampled records covering the entire distribution of the species, confirmed absences and auxiliary ecophysiological data allowing informed decisions on relevant predictors. Unfortunately, these criteria are rarely met for marine organisms for which distributions are too often only scantly characterized and absences generally not recorded. Here, we investigate predictor relevance as a function of modelling algorithms and settings for a global dataset of marine species.

Location

Global marine.

Methods

We selected well‐studied and identifiable species from all major marine taxonomic groups. Distribution records were compiled from public sources (e.g., OBIS, GBIF, Reef Life Survey) and linked to environmental data from Bio‐ORACLE and MARSPEC. Using this dataset, predictor relevance was analysed under different variations of modelling algorithms, numbers of predictor variables, cross‐validation strategies, sampling bias mitigation methods, evaluation methods and ranking methods. SDMs for all combinations of predictors from eight correlation groups were fitted and ranked, from which the top five predictors were selected as the most relevant.

Results

We collected two million distribution records from 514 species across 18 phyla. Mean sea surface temperature and calcite are, respectively, the most relevant and the most irrelevant predictors. A less clear pattern emerged for the other predictors. The biggest differences in predictor relevance were induced by varying the number of predictors, the modelling algorithm and the sample selection bias correction. The distribution data and associated environmental data are made available through the R package marinespeed and at http://marinespeed.org.

Main conclusions

While temperature is a relevant predictor of global marine species distributions, considerable variation in predictor relevance is linked to the SDM set‐up. We promote the usage of a standardized benchmark dataset (MarineSPEED) for methodological SDM studies.

20.
Based on Bayesian networks, methods were created that address protein sequence-based bacterial subcellular location prediction. Distinct predictive algorithms were created for the eight bacterial subcellular locations. Several variant methods were explored. These variations included the number of residues considered within the query sequence, which ranged from the N-terminal 10 residues to the whole sequence, and the residue representation, which took the form of amino acid composition, percentage amino acid composition, or normalised amino acid composition. The accuracies of the best performing networks were then compared with those of PSORTB. All individual location methods outperform PSORTB except for the Gram+ cytoplasmic protein predictor, for which accuracies were essentially equal, and for outer membrane protein prediction, where PSORTB outperforms the binary predictor. The method described here is an important new approach to method development for subcellular location prediction. It is also a new, potentially valuable tool for candidate subunit vaccine selection.
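
The sketch below illustrates the composition-based representation described above, paired with a simple Bayesian classifier (Gaussian naive Bayes as a stand-in for the paper's Bayesian networks). The sequences, labels and the choice of an N-terminal window are illustrative assumptions.

from sklearn.naive_bayes import GaussianNB

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq, n_terminal=None):
    """Fractional amino acid composition, optionally of the first n_terminal residues."""
    window = seq[:n_terminal] if n_terminal else seq
    return [window.count(a) / len(window) for a in AA]

# hypothetical training data: 0 = cytoplasmic, 1 = secreted
train = [("MKKTAIAIAVALAGFATVAQA" + "GSTAEL" * 10, 1),   # signal-peptide-like start
         ("MSTNPKPQRKTKRNTNRRPQD" + "KLEDAR" * 10, 0),
         ("MKQSTIALALLPLLFTPVTKA" + "ANDDSG" * 10, 1),
         ("MAHHHHHHVGTGSNDDDDKSP" + "GIRKEL" * 10, 0)]
X = [composition(s, n_terminal=30) for s, _ in train]
y = [label for _, label in train]

clf = GaussianNB().fit(X, y)
query = "MKKLLIASLSLALATSAQA" + "GVSDDE" * 10
print(clf.predict([composition(query, n_terminal=30)]))   # likely 1 (secreted-like)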
