Similar Articles
 Found 20 similar articles (search time: 31 ms)
1.
Cheng Feng, Huang Yifeng, Tanpure Bhavana, Sawalani Pawan, Cheng Long, Liu Cong. Cluster Computing, 2022, 25(1): 619-631

As the services provided by cloud vendors offer better performance, auto-scaling, load balancing, and optimized performance with low infrastructure maintenance, more and more companies migrate their services to the cloud. Since cloud workloads are dynamic and complex, scheduling the jobs submitted by users effectively is a challenging task. Although many advanced job scheduling approaches have been proposed in past years, almost all of them are designed to handle batch jobs rather than real-time workloads, in which user requests may be submitted at any time and in any quantity. In this work, we propose a Deep Reinforcement Learning (DRL) based job scheduler that dispatches jobs in real time to tackle this problem. Specifically, we focus on scheduling user requests so as to provide quality of service (QoS) to the end user while significantly reducing the cost of executing jobs on virtual instances. We implemented our method with a Deep Q-Network (DQN) model, and our experimental results demonstrate that our approach significantly outperforms commonly used real-time scheduling algorithms.
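For illustration, the value-based dispatching idea in this abstract can be sketched with a tabular Q-learning loop. This is a toy stand-in for the paper's deep network: the state encoding (coarse VM-load level), reward, and hyperparameters below are invented for the sketch.

```python
import random

# Toy value-based job dispatcher: a Q-table stands in for the DQN.
# States are coarse load levels, actions are VM choices. All constants
# below are illustrative assumptions, not the paper's settings.
N_VMS, LOAD_LEVELS = 3, 4          # assumed problem size
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # assumed hyperparameters

Q = {(s, a): 0.0 for s in range(LOAD_LEVELS) for a in range(N_VMS)}

def choose_vm(state, rng):
    """Epsilon-greedy action selection over the Q-table."""
    if rng.random() < EPS:
        return rng.randrange(N_VMS)
    return max(range(N_VMS), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One temporal-difference backup: Q += alpha*(r + gamma*max Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in range(N_VMS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

rng = random.Random(0)
loads = [0] * N_VMS
for _ in range(500):               # simulate a stream of arriving jobs
    state = min(max(loads), LOAD_LEVELS - 1)
    vm = choose_vm(state, rng)
    loads[vm] += 1
    reward = -loads[vm]            # penalize queueing delay (proxy for QoS/cost)
    loads = [max(0, l - 1) for l in loads]   # each VM finishes one job per step
    next_state = min(max(loads), LOAD_LEVELS - 1)
    update(state, vm, reward, next_state)
```

Replacing the Q-table with a neural network over a richer state vector recovers the DQN setting the abstract describes.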


2.
In hybrid clouds, a technique named cloud bursting allows companies to expand their capacity to meet the demands of peak workloads at low cost. In this work, a cost-aware job scheduling approach based on queueing theory in hybrid clouds is proposed. The job scheduling problem in the private cloud is modeled as a queueing model. A genetic algorithm is applied to achieve optimal queues for jobs to improve the utilization rate of the private cloud. Then, task execution time is predicted by a back-propagation neural network. The max-min strategy is applied to schedule tasks in hybrid clouds according to the prediction results. Experiments show that our cost-aware job scheduling algorithm can reduce the average job waiting time and average job response time in the private cloud. In addition, it can improve the system throughput of the private cloud, and it can reduce the average task waiting time, average task response time, and total costs in hybrid clouds.
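The max-min dispatching step in this abstract can be sketched as follows. The predicted execution times are taken as given here (the paper obtains them from a back-propagation neural network), and the function name is illustrative.

```python
def max_min_schedule(exec_times):
    """Max-min heuristic: exec_times[i][j] is the predicted runtime of
    task i on VM j. Repeatedly pick the task whose best achievable
    completion time is largest and assign it to that best VM."""
    n_tasks = len(exec_times)
    n_vms = len(exec_times[0])
    ready = [0.0] * n_vms                      # VM ready times
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # best (earliest) completion time of each remaining task
        best = {t: min((ready[v] + exec_times[t][v], v) for v in range(n_vms))
                for t in unscheduled}
        # max-min rule: schedule the task with the largest best completion time
        task = max(unscheduled, key=lambda t: best[t][0])
        ct, vm = best[task]
        assignment[task] = vm
        ready[vm] = ct
        unscheduled.remove(task)
    return assignment, max(ready)              # task-to-VM map and makespan
```

Extending `ready` with per-VM cost rates would add the cost-awareness the paper layers on top of this strategy.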

3.
Reactive scheduling is a procedure followed in production systems to react to unforeseen events that disturb the normal operation of the system. In this paper, a novel operations insertion heuristic is proposed to solve the deadlock-free reactive scheduling problem in flexible job shops upon the arrival of new jobs. The heuristic utilizes rank matrices (Latin rectangles) to insert new jobs into schedules while preventing the occurrence of deadlocks or resolving them using the available buffer space (if any). Jobs with alternative processing routes through the system are also considered. The heuristic can be employed to execute two reactive scheduling approaches in a time-efficient manner: inserting the new jobs into the already existing schedule (job insertion) or rescheduling all the jobs in the system (total rescheduling). Using experimental design and analysis of variance (ANOVA), the relative performance of the two approaches is studied and analyzed to provide measures and guidelines for selecting the appropriate reactive scheduling approach for different problem settings. Three performance measures are considered in the analysis: efficiency of the revised schedules in terms of mean flow time, resulting system nervousness, and required solution time. The results show that, on average, job insertion obtains revised schedules featuring significantly lower system nervousness and slightly higher mean flow time than total rescheduling. However, depending on the system size, the number and processing times of the new jobs, and the available flexibility in the system, a trade-off between the two approaches should sometimes be considered.

4.
Many real-world situations exist where job scheduling is required. This is the case for entities, machines, or workers who have to execute certain jobs as soon as possible. Frequently, several workers or machines are not available to perform their activities during some time periods, due to different circumstances. This paper deals with these situations and considers stochastic scheduling models to study these problems. When scheduling models are used in practice, they have to take into account that some machines may not be working. This temporary lack of machine availability is known as a breakdown, and breakdowns happen randomly at any time. The times required to repair those machines are also random variables. The jobs have operations with stochastic processing times and their own release times, and there is no precedence between them. Each job is divided into operations, and each operation is performed on the corresponding specialized machine. In addition, in the problems considered, the order in which the operations of each job are done is irrelevant. We develop a heuristic approach to solve these stochastic open-shop scheduling problems in which random machine breakdowns can happen. The proposed approach is general and does not depend on the distribution types of the considered random input data. It provides solutions that minimize the expected makespan. Computational experiments are also reported. The results show that the proposed approach gives solid performance, finding suitable solutions with short CPU times.

5.
There are typically multiple heterogeneous servers providing various services in cloud computing. The high power consumption of these servers increases the cost of running a data center. Thus, there is a problem of reducing the power cost with tolerable performance degradation. In this paper, we optimize the tradeoff between performance and power consumption for multiple heterogeneous servers. We consider the following problems: (1) optimal job scheduling with fixed service rates; (2) joint optimal service speed scaling and job scheduling. For problem (1), we present the Karush-Kuhn-Tucker (KKT) conditions and provide a closed-form solution. For problem (2), both continuous and discrete speed scaling are considered. In discrete speed scaling, the feasible service rates are discrete and bounded. We formulate the problem as a mixed-integer nonlinear programming (MINLP) problem and propose a distributed algorithm based on online value iteration, which has lower complexity than a centralized algorithm. Our approach provides an analytical way to manage the tradeoff between performance and power consumption. The simulation results show the gain from using speed scaling and confirm the effectiveness and efficiency of the proposed algorithms.
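The flavor of the closed-form KKT solution for problem (1) can be illustrated with the classic optimal load split across heterogeneous M/M/1 servers. This is an assumed model choice for the sketch, not necessarily the paper's exact formulation.

```python
import math

def split_load(lam, mus):
    """Optimal static split of arrival rate lam across M/M/1 servers with
    service rates mus, minimizing mean response time. KKT interior
    closed form (assumes every server receives positive load):
        lam_i = mu_i - sqrt(mu_i) * (sum(mu) - lam) / sum(sqrt(mu))
    Illustrative sketch only."""
    slack = sum(mus) - lam
    assert slack > 0, "total capacity must exceed demand"
    denom = sum(math.sqrt(m) for m in mus)
    lams = [m - math.sqrt(m) * slack / denom for m in mus]
    assert all(l >= 0 for l in lams), "closed form valid only in the interior"
    return lams
```

At the optimum the marginal delays mu_i / (mu_i - lam_i)^2 are equalized across servers, which is exactly the stationarity condition the KKT analysis yields for this model.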

6.
Most typical job shop scheduling approaches deal with the processing sequence of parts under a fixed routing condition. In this paper, we suggest a genetic algorithm (GA) to solve the job-sequencing problem for a production shop characterized by flexible routing and flexible machines. This means that all parts, of all part types, can be processed through alternative routings, and there can be several machines of each machine type. To solve these general scheduling problems, a genetic algorithm approach is proposed and the concepts of virtual and real operations are introduced. Chromosome coding and the genetic operators of the GA are defined for the problem. A minimum weighted tardiness objective function is used to define code fitness, which is used for selecting species and producing a new generation of codes. Finally, several experimental results are given.
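A minimal permutation-GA sketch of the chromosome coding, crossover, and weighted-tardiness fitness described above. Single-machine tardiness stands in for the flexible shop, and all names and parameters are illustrative assumptions.

```python
import random

def weighted_tardiness(seq, proc, due, weight):
    """Fitness: total weighted tardiness of a job sequence on one machine."""
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += weight[j] * max(0, t - due[j])
    return total

def order_crossover(p1, p2, rng):
    """OX crossover: copy a slice of p1, fill the rest in p2's order."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child[a:b]]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_sequence(proc, due, weight, pop_size=30, gens=60, seed=1):
    """Elitist GA over permutation chromosomes (illustrative parameters)."""
    rng = random.Random(seed)
    n = len(proc)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: weighted_tardiness(s, proc, due, weight))
        elite = pop[:pop_size // 2]            # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            c = order_crossover(rng.choice(elite), rng.choice(elite), rng)
            if rng.random() < 0.2:             # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    return min(pop, key=lambda s: weighted_tardiness(s, proc, due, weight))
```

In the paper's setting the chromosome would additionally encode routing and machine choices (the virtual/real operation concepts); the permutation core above is the common skeleton.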

7.
This paper starts with a discussion of computer-aided shift scheduling. After a brief review of earlier approaches, two conceptualizations of this field are introduced. First, shift scheduling is a field that ranges from extremely stable rosters at one pole to rather market-like approaches at the other; unfortunately, even small alterations of a scheduling problem (e.g., the number of groups or the number of shifts) may call for rather different approaches and tools. Second, scheduling problems are shaped by their environment, and scheduling has to be done within idiosyncratic organizational settings. This calls for the amalgamation of scheduling with other tasks (e.g., accounting) and for reflection on whether better solutions might become possible through changes in the problem definition (e.g., other service levels, organizational changes). Shift scheduling should therefore be understood as a highly connected problem. Building upon these two conceptualizations, a few examples of software that ease scheduling in some areas of this field are given and future research questions are outlined.

8.
With the growing uncertainty and complexity of the manufacturing environment, most scheduling problems have been proven to be NP-complete, which can degrade the performance of conventional operations research (OR) techniques. This article presents a system-attribute-oriented knowledge-based scheduling system (SAOSS) with inductive learning capability. With its rich heritage from artificial intelligence (AI), SAOSS takes a multialgorithm paradigm that makes it more intelligent, flexible, and suitable than others for tackling complicated, dynamic scheduling problems. SAOSS employs an efficient and effective inductive learning method, the continuous Iterative Dichotomiser 3 (CID3) algorithm, to induce decision rules for scheduling by converting the corresponding decision trees into hidden layers of a self-generated neural network. The connection weights between hidden units encode the scheduling heuristics, which are then formulated into scheduling rules. An FMS scheduling problem is given for illustration. The scheduling results show that the system-attribute-oriented knowledge-based approach is capable of addressing dynamic scheduling problems.

9.
There can be different approaches to the management of resources within the context of multi-project scheduling problems. In general, approaches to multi-project scheduling consider the resources as a pool shared by all projects. However, when projects are geographically distributed or sharing resources between projects is not preferred, this resource-sharing policy may not be feasible; in such cases, the resources must be dedicated to individual projects throughout the project durations. This multi-project problem environment is defined here as the resource dedication problem (RDP). RDP is defined as the optimal dedication of resource capacities to different projects within the overall limits of the resources and with the objective of minimizing a predetermined objective function. The projects involved are multi-mode resource-constrained project scheduling problems with finish-to-start zero time lags, non-preemptive activities, and limited renewable and nonrenewable resources. Here, the characterization of RDP, its mathematical formulation, and two different solution methodologies are presented. The first solution approach is a genetic algorithm employing a new improvement move called combinatorial auction for RDP, which is based on the preferences of projects for resources; two methods for calculating these preferences, based on linear and Lagrangian relaxation, are proposed. The second solution approach is a Lagrangian-relaxation-based heuristic employing subgradient optimization. Numerical studies demonstrate that the proposed approaches are powerful methods for solving this problem.

10.
Nowadays, scientists and companies are confronted with multiple competing goals, such as makespan in high-performance computing and economic cost in Clouds, that have to be optimised simultaneously. Multi-objective scheduling of scientific applications in these systems is therefore receiving increasing research attention. Most existing approaches aggregate all objectives into a single function, defined a priori without any knowledge of the problem being solved, which negatively impacts the quality of the solutions. In contrast, Pareto-based approaches, whose outcome is a set of (nearly) optimal solutions representing a tradeoff among the different objectives, have been scarcely studied. In this paper, we analyse MOHEFT, a Pareto-based list scheduling heuristic that provides the user with a set of tradeoff-optimal solutions from which the one that best suits the user's requirements can be manually selected. We demonstrate the potential of our method for multi-objective workflow scheduling on the commercial Amazon EC2 Cloud. We compare the quality of the MOHEFT tradeoff solutions with two state-of-the-art approaches using different synthetic and real-world workflows: the classical HEFT algorithm for single-objective scheduling and the SPEA2* genetic algorithm used in multi-objective optimisation problems. The results demonstrate that our approach computes solutions of higher quality than SPEA2*. In addition, we show that MOHEFT is more suitable than SPEA2* for workflow scheduling in the context of commercial Clouds, since the genetic-based approach is unable to deal with some of the constraints imposed by these systems.
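The core of any Pareto-based approach such as MOHEFT is maintaining the non-dominated tradeoff set. A minimal filter over hypothetical (makespan, cost) pairs, assuming both objectives are minimized:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (makespan, cost) pairs,
    minimizing both objectives. Simple O(n^2) sketch: a solution is
    kept unless some other solution is at least as good in both
    objectives (weak dominance)."""
    front = []
    for s in solutions:
        dominated = any(o != s and o[0] <= s[0] and o[1] <= s[1]
                        for o in solutions)
        if not dominated:
            front.append(s)
    return front
```

A Pareto-based list scheduler carries a bounded set of such partial tradeoff solutions through each scheduling step instead of a single best candidate.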

11.
The stochastic nature of both patient arrivals and lengths of stay leads inevitably to periodic bed shortages in healthcare units. Physicians are challenged to fit demand to service capacity. If all beds are occupied, eligible patients are usually referred to another ward or hospital, and scheduled surgeries may be cancelled. A lack of beds may also have consequences for patients, who may be discharged early when the number of occupied beds is so high as to compromise the medical care of new incoming patients. In this paper we deal with the problem of obtaining efficient bed-management policies. We introduce a queueing control problem in which neither the arrival rates nor the number of servers can be modified. Bed occupancy control is addressed by modifying the service-time rates to make them dependent on the state of the system. The objective functions are two quality-of-service components: minimizing patient rejections and minimizing the shortening of length of stay. The first objective has a clear mathematical formulation: minimize the probability of rejecting a patient. The second objective admits several formulations; four different expressions, all leading to nonlinear optimization problems, are proposed. The solutions of these optimization problems define different control policies. We obtain the analytical solutions by adopting Markov-type assumptions and compare them in terms of the two quality-of-service components. We extend these results to the general case using optimization with simulation, and propose a way to simulate general length-of-stay distributions that enables the inclusion of state-dependent service rates.
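The first objective, the probability of rejecting a patient when all beds are occupied, can be computed for an illustrative M/M/c/c loss model via Erlang's B formula. This is a modeling assumption for the sketch; the paper's state-dependent service rates are not captured here.

```python
def erlang_b(c, rho):
    """Blocking (rejection) probability for an M/M/c/c loss system with
    c beds and offered load rho = lambda/mu, using the numerically
    stable recursion B(k) = rho*B(k-1) / (k + rho*B(k-1)), B(0) = 1."""
    b = 1.0
    for k in range(1, c + 1):
        b = rho * b / (k + rho * b)
    return b
```

With one bed and rho = 1 this gives rho/(1+rho) = 0.5; adding a second bed drops the rejection probability to 0.2, showing how the control problem trades capacity against blocking.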

12.
The planning, scheduling, and control of manufacturing systems can all be viewed as problem-solving activities. In flexible manufacturing systems (FMSs), the computer program carrying out these problem-solving activities must additionally be able to handle the shorter lead time, the flexibility of job routing, the multiprocessing environment, the dynamic changing states, and the versatility of machines. This article presents an artificial intelligence (AI) method to perform manufacturing problem solving. Since the method is driven by manufacturing scenarios represented by symbolic patterns, it is referred to as pattern-directed. The method is based on three AI techniques. The first is the pattern-directed inference technique to capture the dynamic nature of FMSs. The second is the nonlinear planning technique to construct schedules and assign resources. The third is the inductive learning method to generate the pattern-directed heuristics. This article focuses on solving the FMS scheduling problem. In addition, this article reports the computation results to evaluate the utility of various heuristic functions, to identify important design parameters, and to analyze the resulting computational performance in using the pattern-directed approach for manufacturing problem-solving tasks such as scheduling.

13.
Data centers are the backbone of the cloud infrastructure platform, supporting large-scale data processing and storage. More and more business-to-consumer and enterprise applications are based on cloud data centers. However, data center energy consumption inevitably leads to high operating costs. The aim of this paper is to comprehensively reduce the energy consumption of cloud data center servers, networks, and cooling systems. We first build an energy-efficient cloud data center system, including its architecture and its job and power consumption models. Then, we combine linear regression and wavelet neural network techniques into a prediction method, which we call MLWNN, to forecast the short-term workload of the cloud data center. Third, we propose a heuristic energy-efficient job scheduling with workload prediction solution, which is divided into a resource management strategy and an online energy-efficient job scheduling algorithm. Our extensive simulation results clearly demonstrate that our proposed solution performs well and is very suitable for low-workload cloud data centers.
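The linear-regression half of the MLWNN predictor can be sketched as a least-squares trend forecast over the recent workload history. The wavelet neural network component is omitted, and the function name is illustrative.

```python
def linear_forecast(history, horizon=1):
    """Least-squares linear trend fit over evenly spaced workload
    samples, extrapolated `horizon` steps past the last observation."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + horizon)
```

In a combined scheme like MLWNN, a forecast of this kind would capture the smooth trend while the neural component models the residual nonlinear fluctuations.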

14.
This paper is part of an original mathematical modeling approach for solving cyclic scheduling problems. More precisely, we consider the cyclic job shop. This kind of manufacturing system is well suited to medium and large production demands. Many methods have been proposed to solve the cyclic scheduling problem; among them, we chose exact techniques and focus on the mathematical programming approach. In an earlier study, we proposed a mathematical programming model for cyclic scheduling with work-in-process minimization. Here we propose several cutting techniques to improve the practical performance of the model's resolution. Numerical experiments are used to assess the relevance of our propositions: we compare the original mathematical model with the one augmented with the proposed cuts, based on a set of benchmarks generated for this purpose. In addition, we make another comparison based on examples from the literature.

15.
Failure-aware workflow scheduling in cluster environments
The goal of workflow application scheduling is to achieve minimal makespan for each workflow. Scheduling workflow applications in high performance cluster environments is an NP-Complete problem, and becomes more complicated when potential resource failures are considered. While more research on failure prediction has been witnessed in recent years to improve system availability and reliability, very few of them attack the problem in the context of workflow application scheduling. In this paper, we study how a workflow scheduler benefits from failure prediction and propose FLAW, a failure-aware workflow scheduling algorithm. We propose two important definitions on accuracy, Application Oblivious Accuracy (AOA) and Application Aware Accuracy (AAA), from the perspectives of system and scheduling respectively, as we observe that the prediction accuracy defined conventionally imposes different performance implications on different applications and fails to measure how that improves scheduling effectiveness. The comprehensive evaluation results using real failure traces show that FLAW performs well with practically achievable prediction accuracy by reducing the average makespan, the loss time and the number of job rescheduling.

16.
Ahmad S, Mizuguchi K. PLoS ONE, 2011, 6(12): e29104
Computational prediction of residues that participate in protein-protein interactions is a difficult task, and state-of-the-art methods have shown only limited success in this arena. One possible problem with these methods is that they try to predict interacting residues without incorporating information about the partner protein, although it is unclear how much partner information could enhance prediction performance. To address this issue, the following two comparisons are of crucial significance: (a) a comparison of the predictability of inter-protein residue pairs, i.e., predicting exactly which residue pairs interact with each other given two protein sequences; this can be achieved either by combining conventional single-protein predictions or by making predictions using a new model trained directly on the residue pairs, and the performance of these two approaches may be compared; (b) a comparison of the predictability of the interacting residues in a single protein (irrespective of the partner residue or protein) between conventional methods and predictions converted from the pair-wise trained model. Using these two streams of training and validation procedures and employing similar two-stage neural networks, we showed that the models trained on pair-wise contacts outperformed the partner-unaware models in predicting both interacting pairs and interacting single-protein residues. Prediction performance decreased with the size of the conformational change upon complex formation; this trend is similar to docking, even though no structural information was used in our prediction. An example application that predicts two partner-specific interfaces of a protein was shown to be effective, highlighting the potential of the proposed approach. Finally, a preliminary attempt was made to score docking decoy poses using the prediction of interacting residue pairs; this analysis produced an encouraging result.

17.
Using the example of a membrane-supported biofilm reactor for industrial effluent treatment, different non-mechanistic approaches for the modelling of complex bioprocesses are presented and evaluated. The models were obtained employing feedforward artificial neural network analysis for the association of process operation with process performance. Three modelling approaches are discussed, i.e. autonomous static (AS) modelling, non-autonomous static (NAS) modelling, as well as a novel approach termed dynamic modelling with embedding of artificial neural network inputs. They are compared with regard to their ability to infer process performance for two different pollutant case studies, employing 1,2-dichloroethane and 3-chloro-4-methylaniline, respectively. The suitability of the different approaches was found to be strongly dependent on process configuration. Especially in configurations where lag times are apparent, the dynamic modelling approach was found to be superior, and process performance prediction was found to be strongly dependent on the history of process operation.

18.
Dynamically forecasting network performance using the Network Weather Service
The Network Weather Service is a generalizable and extensible facility designed to provide dynamic resource performance forecasts in metacomputing environments. In this paper, we outline its design and detail the predictive performance of the forecasts it generates. While the forecasting methods are general, we focus on their ability to predict the TCP/IP end-to-end throughput and latency that is attainable by an application using systems located at different sites. Such network forecasts are needed both to support scheduling (Berman et al., 1996) and, by the metacomputing software infrastructure, to develop quality-of-service guarantees (DeFanti et al., to appear; Grimshaw et al., 1994). This revised version was published online in July 2006 with corrections to the Cover Date.

19.
The dynamics of a series of responses for four pigeons are analyzed using a discrete fast Fourier transform (FFT) and a measure of the behavior's predictability. FFTs of moment-to-moment response rates reliably exhibited a continuous distribution for three of the four birds, with most of the power falling in the low frequencies (red noise). An information analysis of the predictability of a series of inter-response times (IRTs) reveals that there is some gain in prediction from knowing past behavior; moreover, predictability increases the more past behaviors are taken into account. However, the further into the future these predictions are extended, the less reliable they become (entropy). These findings suggest that the dynamic controlling behavior at the level of the individual response is, to some extent, deterministic and probably chaotic.
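The spectral analysis described above amounts to computing a power spectrum of the response series. A naive DFT sketch follows (an FFT would be used in practice, as in the paper); a red-noise series would show this power concentrated in the lowest bins.

```python
import cmath
import math

def power_spectrum(xs):
    """Naive O(n^2) DFT power spectrum of a mean-centered series.
    Returns the power at each nonzero frequency bin k = 1 .. n//2."""
    n = len(xs)
    mean = sum(xs) / n
    centered = [x - mean for x in xs]
    spec = []
    for k in range(1, n // 2 + 1):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(centered))
        spec.append(abs(s) ** 2)
    return spec
```

Feeding in a pure cosine at a known frequency puts essentially all the power in the matching bin, which is the sanity check used below.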

20.
Information about the state of the system is of paramount importance in determining the dynamics underlying manufacturing systems. In this paper, we present an adaptive scheduling policy for dynamic manufacturing system scheduling using information obtained from snapshots of the system at various points in time. Specifically, the framework presented allows for information-based dynamic scheduling in which information collected about the system is used to (1) adjust appropriate parameters in the system and (2) search or optimize using genetic algorithms. The main feature of this policy is that it tailors the dispatching rule used at a given point in time to the prevailing state of the system. Experimental studies indicate the superiority of the suggested approach over the alternative approach of repeatedly applying a single dispatching rule, for randomly generated test problems as well as a real system. In particular, its relative performance improves further when there are frequent disruptions and when disruptions are caused by the introduction of tight-due-date jobs and machine breakdowns, two of the most common sources of disruption in most manufacturing systems. From an operational perspective, the most important characteristics of the pattern-directed scheduling approach are its ability to incorporate the idiosyncratic characteristics of the given system into the dispatching rule selection process and its ability to refine itself incrementally on a continual basis by taking new system parameters into account.

