Similar Documents
20 similar documents found.
1.
While the MPP is still the most common architecture in supercomputer centers today, a simpler and cheaper machine configuration is appearing at many supercomputing sites. This alternative setup may be described simply as a collection of multiprocessors, or a distributed server system. This collection of multiprocessors is fed by a single common stream of jobs, where each job is dispatched to exactly one of the multiprocessor machines for processing. The biggest question that arises in such distributed server systems is what makes a good rule for assigning jobs to host machines: i.e., what is a good task assignment policy. Many task assignment policies have been proposed, but not systematically evaluated under supercomputing workloads. In this paper we start by comparing existing task assignment policies using trace-driven simulation under supercomputing workloads. We validate our experiments by providing analytical proofs of the performance of each of these policies. These proofs also help provide much intuition. We find that while the performance of supercomputing servers varies widely with the task assignment policy, none of the above task assignment policies perform as well as we would like. We observe that all policies proposed thus far aim to balance load among the hosts. We propose a policy that purposely unbalances load among the hosts yet, counter to intuition, is also fair in that it achieves the same expected slowdown for all jobs, so no jobs are discriminated against. We evaluate this policy using both trace-driven simulation and analysis. We find that the performance of the load-unbalancing policy is significantly better than the best of the policies that balance load.
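
A rough, self-contained illustration of this comparison: the sketch below dispatches a stream of heavy-tailed jobs under three task assignment policies, including a size-interval rule that deliberately unbalances load. The host count, arrival rate, Pareto tail, and size cutoffs are invented for illustration and are not the paper's traces or exact policies.

```python
import random

def simulate(policy, jobs, n_hosts=4):
    """Dispatch each job to one host; FCFS per host; return mean slowdown,
    where slowdown = (waiting time + size) / size."""
    free_at = [0.0] * n_hosts               # time at which each host goes idle
    slowdowns = []
    for arrival, size in jobs:
        h = policy(size, free_at, arrival)
        start = max(arrival, free_at[h])
        free_at[h] = start + size
        slowdowns.append((free_at[h] - arrival) / size)
    return sum(slowdowns) / len(slowdowns)

def random_policy(size, free_at, arrival):
    return random.randrange(len(free_at))

def least_work(size, free_at, arrival):
    # load balancing: send the job to the host with the least unfinished work
    return min(range(len(free_at)), key=lambda h: max(free_at[h] - arrival, 0))

def size_interval(size, free_at, arrival, cutoffs=(2.0, 5.0, 20.0)):
    # deliberate unbalancing: short jobs get hosts of their own
    for h, c in enumerate(cutoffs):
        if size <= c:
            return h
    return len(cutoffs)

random.seed(1)
t, jobs = 0.0, []
for _ in range(20000):                      # Poisson arrivals, Pareto job sizes
    t += random.expovariate(1.0)
    jobs.append((t, random.paretovariate(1.5)))

for name, pol in [("random", random_policy), ("least-work", least_work),
                  ("size-interval", size_interval)]:
    print(f"{name:13s} mean slowdown = {simulate(pol, jobs, 4):.2f}")
```

Under heavy-tailed job sizes, isolating short jobs from long ones typically cuts mean slowdown far more than balancing unfinished work, which is the intuition behind the unbalancing policy.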

2.
The two important but often conflicting metrics for any primary care practice are: (1) Timely Access and (2) Patient–Physician Continuity. Timely access focuses on the ability of a patient to get access to a physician (or provider, in general) as soon as possible. Patient–physician continuity refers to building a strong or permanent relationship between a patient and a specific physician by maximizing patient visits to that physician. In the past decade, a new paradigm called advanced access or open access has been adopted by practices nationwide to encourage physicians to “do today’s work today.” However, most clinics still reserve pre-scheduled slots for long lead-time appointments due to patient preference and clinical necessities. Therefore, an important problem for clinics is how to optimally manage and allocate limited physician capacities to meet the two types of demand—pre-scheduled (non-urgent) and open access (urgent, as perceived by the patient)—while simultaneously maximizing timely access and patient–physician continuity. In this study we adapt ideas of manufacturing process flexibility to capacity management in a primary care practice. Flexibility refers to the ability of a primary care physician to see patients of other physicians. We develop generalizable analytical algorithms for capacity allocation for an individual physician and a two-physician practice. For multi-physician practices, we use a two-stage stochastic integer programming approach to investigate the value of flexibility. We find that flexibility has the greatest benefit when system workload is balanced, when the physicians have unequal workloads, and when the number of physicians in the practice increases. We also find that partial flexibility, which restricts the number of physicians a patient sees and thereby promotes continuity, simultaneously succeeds in providing high levels of timely access.
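
The pooling effect that drives the value of flexibility can be seen in a toy Monte Carlo comparison of a fully dedicated practice against a fully flexible one. The panel count, capacities, and demand distribution below are hypothetical, and the sketch stands in for, rather than reproduces, the paper's two-stage stochastic program.

```python
import random

def unmet_today(demand, capacity, flexible):
    """Same-day requests that cannot be seen today."""
    if not flexible:
        # dedicated panels: each patient may only see his or her own physician
        return sum(max(d - c, 0) for d, c in zip(demand, capacity))
    # full flexibility: any physician may absorb another's overflow
    return max(sum(demand) - sum(capacity), 0)

random.seed(7)
K, cap, days = 4, 10, 10000
totals = {"dedicated": 0, "fully flexible": 0}
for _ in range(days):
    demand = [random.randint(5, 15) for _ in range(K)]  # uncertain urgent demand
    totals["dedicated"] += unmet_today(demand, [cap] * K, False)
    totals["fully flexible"] += unmet_today(demand, [cap] * K, True)

for name, v in totals.items():
    print(f"{name:15s}: {v / days:.2f} unmet same-day requests per day")
```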

3.
Loading problems in flexible manufacturing systems involve assigning operations for selected part types and their associated tools to machines or machine groups. One of the objectives might be to maximize the expected production rate (throughput) of the system. Because of the difficulty in dealing with this objective directly, a commonly used surrogate objective is the closeness of the actual workload allocation to the continuous workload allocation that maximizes throughput. We test several measures of closeness and discuss correlations between these measures and throughput. Using the best measure, we show how to modify an existing branch and bound algorithm which was developed for the case of equal target workloads for all machine groups to accommodate unequal target workloads. We also develop a new branch and bound algorithm which can be used for both types of problems. The efficiency of the algorithm in finding optimal solutions is achieved through the application of better branching rules and improved dominance results. Computational results on randomly generated test problems indicate that the new algorithm performs well.
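
The surrogate objective can be made concrete as follows: given the continuous workload allocation that maximizes throughput as a target, score how close an achievable allocation comes to it. The three measures below (maximum, total absolute, and total squared deviation) are generic examples of such closeness measures; the paper evaluates its own set and correlates them with throughput.

```python
def closeness_measures(actual, target):
    """Candidate measures of how close an achievable workload allocation is
    to the continuous allocation that maximizes throughput."""
    dev = [a - t for a, t in zip(actual, target)]
    return {"max_abs": max(abs(d) for d in dev),
            "sum_abs": sum(abs(d) for d in dev),
            "sum_sq": sum(d * d for d in dev)}

# hypothetical: three machine groups with unequal target workloads
target = [40.0, 35.0, 25.0]
for actual in ([42.0, 33.0, 25.0], [38.0, 38.0, 24.0]):
    print(actual, closeness_measures(actual, target))
```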

4.
In this study, we address the meta-task scheduling problem in heterogeneous computing (HC) systems, which is to find a task assignment that minimizes the schedule length of a meta-task composed of several independent tasks with no data dependencies. The fact that the meta-task scheduling problem in HC systems is NP-hard has motivated the development of many heuristic scheduling algorithms. These heuristic algorithms, however, neglect the stochastic nature of task execution times in an attempt to minimize a deterministic objective function, which is the maximum of the expected values of machine loads. Contrary to existing heuristics, we account for this stochastic nature by modeling task execution times as random variables. We, then, formulate a stochastic scheduling problem where the objective is to minimize the expected value of the maximum of machine loads. We prove that this new objective is underestimated by the deterministic objective function and that an optimal task assignment obtained with respect to the deterministic objective function could be inefficient in a real computing platform. In order to solve the stochastic scheduling problem posed, we develop a genetic algorithm based scheduling heuristic. Our extensive simulation studies show that the proposed genetic algorithm can produce better task assignments as compared to existing heuristics. Specifically, we observe a performance improvement on the relative cost heuristic (M.-Y. Wu and W. Shu, A high-performance mapping algorithm for heterogeneous computing systems, in: Int. Parallel and Distributed Processing Symposium, San Francisco, CA, April 2001) by up to 61%.
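
The key inequality, namely that the deterministic objective (maximum over machines of the expected load) underestimates the stochastic objective (expected maximum load), follows from the convexity of the max function and is easy to check numerically. The sketch below uses exponential task times on a small hypothetical assignment; the distribution and numbers are illustrative assumptions.

```python
import random

def objectives(assignment, n_machines, n_samples=20000, seed=3):
    """Compare the deterministic objective (max of expected machine loads)
    with the stochastic one (expected maximum load) for a fixed assignment
    of tasks, given as (machine, mean execution time) pairs."""
    exp_load = [0.0] * n_machines
    for machine, mean in assignment:
        exp_load[machine] += mean
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        load = [0.0] * n_machines
        for machine, mean in assignment:
            load[machine] += rng.expovariate(1.0 / mean)  # random task time
        total += max(load)
    return max(exp_load), total / n_samples

tasks = [(0, 4.0), (0, 4.0), (1, 3.0), (1, 5.0), (2, 8.0)]  # hypothetical
det, sto = objectives(tasks, 3)
print(f"max of expected loads = {det:.2f}")
print(f"expected max of loads = {sto:.2f}   # always >= the line above")
```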

5.
In this work we consider the problem of selecting a set of patients from a given waiting list of elective patients and assigning them to a set of available operating room blocks. We assume a block scheduling strategy in which the number and the length of available blocks are given. As each block is tied to a specific day, assigning a patient to a block also fixes his or her surgery date. Each patient is characterized by a recommended maximum waiting time and an uncertain surgery duration. In practical applications, new patients enter the waiting list continuously. Patient selection and assignment is performed by surgery departments on a short-term, usually weekly, regular basis. We propose a rolling horizon approach for patient selection and assignment. At each iteration, the short-term patient assignment is decided; in a look-ahead perspective, however, a longer planning horizon is considered for patient selection. The mid-term assignment over the next n weeks is generated by solving an ILP problem, minimizing a penalty function based on the total waiting time and tardiness of patients. The approach is applied iteratively by shifting the mid-term planning horizon ahead. When the first-week solution is executed, unpredictable extensions of surgeries may disrupt the schedule. Such disruptions are recovered in the next iteration: the mid-term solution is rescheduled while limiting the number of variations from the previously computed plan. The approach also accommodates new patient arrivals. To limit the number of disruptions due to uncertain surgery durations, we also propose a robust formulation of the ILP problem. The deterministic and robust formulation based frameworks are compared over a set of instances, including different stochastic realizations of surgery times.
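
A minimal sketch of the rolling mechanism, with a greedy most-urgent-first, first-fit rule standing in for the paper's ILP; block lengths, arrival rates, surgery durations, and due dates are all invented for illustration.

```python
import random

def plan_week(waitlist, blocks, week):
    """Greedy stand-in for the ILP: most urgent patients first,
    first-fit into this week's OR blocks (capacities in minutes)."""
    waitlist.sort(key=lambda p: p["due_week"] - week)   # smallest slack first
    remaining, done = list(blocks), []
    for p in list(waitlist):
        for b in range(len(remaining)):
            if p["duration"] <= remaining[b]:
                remaining[b] -= p["duration"]
                waitlist.remove(p)
                done.append(p)
                break
    return done

random.seed(5)
waitlist, tardiness = [], 0
for week in range(20):
    # new patients enter the waiting list continuously
    waitlist += [{"duration": random.choice([60, 90, 120]),
                  "due_week": week + random.randint(2, 8)} for _ in range(15)]
    for p in plan_week(waitlist, blocks=[480, 480, 480], week=week):
        tardiness += max(week - p["due_week"], 0)
print("still waiting:", len(waitlist), "| total tardiness (weeks):", tardiness)
```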

6.
7.
Divisible load scenarios occur in modern media server applications since most multimedia applications typically require access to continuous and discrete data. A high performance Continuous Media (CM) server greatly depends on the ability of its disk IO subsystem to serve both types of workloads efficiently. Disk scheduling algorithms for mixed media workloads, although they play a central role in this task, have been overlooked by related research efforts. These algorithms must satisfy several stringent performance goals, such as achieving low response time and ensuring fairness, for the discrete-data workload, while at the same time guaranteeing the uninterrupted delivery of continuous data, for the continuous-data workload. The focus of this paper is on disk scheduling algorithms for mixed media workloads in a multimedia information server. We propose novel algorithms, present a taxonomy of relevant algorithms, and study their performance through experimentation. Our results show that our algorithms offer drastic improvements in discrete request average response times, are fair, serve continuous requests without interruptions, and that the disk technology trends are such that the expected performance benefits can be even greater in the future.
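
A common way to serve such mixed workloads is round-based scheduling: each disk round first reserves time for the admitted continuous streams, and the leftover slack goes to discrete requests. The sketch below illustrates that idea with invented numbers, serving discrete requests cheapest-first as a stand-in for a seek-optimizing order; it is not one of the paper's algorithms.

```python
import heapq

def schedule_round(continuous_ms, discrete_queue, round_ms=100):
    """One service round: time for the admitted continuous-media streams is
    reserved up front so their delivery is never interrupted; whatever slack
    remains serves discrete requests, cheapest-first."""
    slack = round_ms - continuous_ms
    served = []
    while discrete_queue and discrete_queue[0][0] <= slack:
        cost, name = heapq.heappop(discrete_queue)
        slack -= cost
        served.append(name)
    return served, slack

# hypothetical discrete requests as (service time in ms, id), kept in a heap
pending = [(12, "d1"), (5, "d2"), (30, "d3"), (9, "d4")]
heapq.heapify(pending)
done, left = schedule_round(continuous_ms=60, discrete_queue=pending)
print("served this round:", done, "| slack left:", left, "ms")
```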

8.
Many retailers find it useful to partition customers into multiple classes based on certain characteristics. We consider the case in which customers are primarily distinguished by whether they are willing to wait for backordered demand. A firm that faces demand from customers that are differentiated in this way may want to adopt an inventory management policy that takes advantage of this differentiation. We propose doing so by imposing a critical level (CL) policy: when inventory is at or below the critical level demand from those customers that are willing to wait is backordered, while demand from customers unwilling to wait will still be served as long as there is any inventory available. This policy reserves inventory for possible future demands from impatient customers by having other, patient, customers wait. We model a system that operates a continuous review replenishment policy, in which a base stock policy is used for replenishments. Demands as well as lead times are stochastic. We develop an exact and efficient procedure to determine the average infinite horizon performance of a given CL policy. Leveraging this procedure we develop an efficient algorithm to determine the optimal CL policy parameters. Then, in a numerical study we compare the cost of the optimal CL policy to the globally optimal state-dependent policy along with two alternative, more naïve, policies. The CL policy is slightly over 2% from optimal, whereas the alternative policies are 7% and 27% from optimal. We also study the sensitivity of our policy to the coefficient of variation of the lead time distribution, and find that the optimal CL policy is fairly insensitive, which is not the case for the globally optimal policy.
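
A minimal simulation sketch of the critical level rule, assuming Bernoulli demand per period, a 50/50 patient/impatient split, and a fixed lead time (all invented simplifications of the paper's exact continuous-review analysis):

```python
import random

def simulate_cl(crit, base_stock, periods=200_000, lead=5, seed=11):
    """Sketch of a critical-level (CL) base-stock policy: patient demand is
    backordered once on-hand stock is at or below the critical level, so
    the remaining units stay reserved for impatient customers."""
    rng = random.Random(seed)
    on_hand, waiting, due, lost = base_stock, 0, [], 0
    for t in range(periods):
        on_hand += sum(1 for d in due if d == t)     # replenishments arrive
        due = [d for d in due if d != t]
        while waiting and on_hand > crit:            # clear backorders above CL
            waiting -= 1
            on_hand -= 1
        if rng.random() < 0.4:                       # one unit of demand
            if rng.random() < 0.5:                   # patient customer
                due.append(t + lead)                 # one-for-one reorder
                if on_hand > crit:
                    on_hand -= 1
                else:
                    waiting += 1                     # willing to wait
            elif on_hand > 0:                        # impatient customer
                due.append(t + lead)
                on_hand -= 1
            else:
                lost += 1                            # lost sale: no reorder
    return lost / (0.2 * periods)                    # fraction of impatient lost

for c in (0, 1, 2):
    print(f"critical level {c}: impatient demand lost = {simulate_cl(c, 4):.3%}")
```

Raising the critical level trades patient-customer waiting for a higher fill rate on impatient demand, which is exactly the lever the CL policy optimizes.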

9.
Genetic algorithms are powerful search methods inspired by Darwinian evolution. To date, they have been applied to many optimization problems because of their ease of use and their robustness in finding good solutions to difficult problems. The good performance of genetic algorithms is due in part to their two main variation operators, namely the crossover and mutation operators. Typically, the literature uses a single crossover and a single mutation operator. However, studies have shown that using multiple operators produces synergy and that the operators are mutually complementary. Using multiple operators is not a simple task, because one must determine which operators to use and how to combine them, which is itself an optimization problem. In this paper, it is proposed that the task of exploring the different combinations of crossover and mutation operators can be carried out by evolutionary computing. The crossover and mutation operators used are those typically employed for solving the traveling salesman problem. The process of searching for good combinations was effective, yielding appropriate and synergic combinations of the crossover and mutation operators. The numerical results show that using the combination of operators obtained by evolutionary computing is better than using a single operator or multiple operators combined in the standard way. The results were also better than those of the latest operators reported in the literature.
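
The sketch below shows one simple way such operator combinations can compete during a run: each child is produced by a (crossover, mutation) pair drawn with probability proportional to an adaptive credit score, using two classic TSP crossovers (order and position-based) and two classic mutations (swap and segment inversion). The instance size, credit rule, and GA parameters are illustrative assumptions, not the paper's method.

```python
import random

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_len(t, pts):
    return sum(dist(pts[t[i]], pts[t[i - 1]]) for i in range(len(t)))

def ox(p1, p2):                                  # order crossover
    i, j = sorted(random.sample(range(len(p1)), 2))
    mid = p1[i:j]
    rest = [c for c in p2 if c not in mid]
    return rest[:i] + mid + rest[i:]

def pbx(p1, p2):                                 # position-based crossover
    keep = set(random.sample(range(len(p1)), len(p1) // 2))
    kept = {p1[k] for k in keep}
    fill = iter(c for c in p2 if c not in kept)
    return [p1[k] if k in keep else next(fill) for k in range(len(p1))]

def swap(t):                                     # exchange two cities
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def invert(t):                                   # reverse a segment
    i, j = sorted(random.sample(range(len(t)), 2))
    return t[:i] + t[i:j][::-1] + t[j:]

random.seed(9)
pts = [(random.random(), random.random()) for _ in range(30)]
pop = [random.sample(range(30), 30) for _ in range(60)]
combos = [(c, m) for c in (ox, pbx) for m in (swap, invert)]
credit = {cm: 1.0 for cm in combos}              # adaptive operator credit

for gen in range(300):
    pop.sort(key=lambda t: tour_len(t, pts))
    new = pop[:10]                               # elitism
    while len(new) < 60:
        cx, mu = random.choices(combos, weights=[credit[c] for c in combos])[0]
        p1, p2 = random.sample(pop[:30], 2)
        child = mu(cx(p1, p2))
        if tour_len(child, pts) < min(tour_len(p1, pts), tour_len(p2, pts)):
            credit[(cx, mu)] += 0.1              # reward improving combinations
        new.append(child)
    pop = new

best = min(pop, key=lambda t: tour_len(t, pts))
print("best tour length:", round(tour_len(best, pts), 3))
print({c.__name__ + "+" + m.__name__: round(s, 1)
       for (c, m), s in credit.items()})
```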

10.
Ongoing negotiations on the general practitioner contract raise the question of remunerating general practitioners for increased workload resulting from the shift from secondary to primary care. A review of the literature shows that there is little evidence on whether a shift of services from secondary to primary care is responsible for general practitioners' increased workload, and scope for making generalisations is limited. The implication is that general practitioners have little more than anecdotal evidence to support their claims of greatly increased workloads, and there is insufficient evidence to make informed decisions about remunerating general practitioners for the extra work resulting from the changes. Lack of evidence does not, however, mean that there is no problem with workload. It will be increasingly important to identify mechanisms for ensuring that resources follow workload.

11.
World population is expected to grow from the present 6.8 billion people to about 9 billion by 2050. The growing need for nutritious and healthy food will increase the demand for fisheries products from marine sources, whose productivity is already highly stressed by excessive fishing pressure, growing organic pollution, toxic contamination, coastal degradation and climate change. Looking towards 2050, the question is how fisheries governance, and the national and international policy and legal frameworks within which it is nested, will ensure a sustainable harvest, maintain biodiversity and ecosystem functions, and adapt to climate change. This paper looks at global fisheries production, the state of resources, contribution to food security and governance. It describes the main changes affecting the sector, including geographical expansion, fishing capacity-building, natural variability, environmental degradation and climate change. It identifies drivers and future challenges, while suggesting how new science, policies and interventions could best address those challenges.

12.
Hušek et al. (Popul Ecol 55:363–375, 2013) showed that the numerical response of storks to vole prey was stronger in regions where variability in vole density was higher. This finding is, at first sight, in contradiction with the predictions of life-history theory in stochastic environments. Since the stork productivity-vole density relationship is concave, theory predicts a negative association between the temporal variability in vole density and stork productivity. Here, we illustrate this negative effect of vole variability on stork productivity with a simple mathematical model relating expected stork productivity to vole dynamics. When comparing model simulations to the observed mean density and variability of thirteen Czech and Polish vole populations, we find that the observed positive effect of vole variability on stork numerical response is most likely due to an unusual positive correlation between mean and variability of vole density.
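
The concavity argument is easy to reproduce numerically: for a saturating productivity curve, Jensen's inequality makes expected productivity fall as vole variability rises at a fixed mean density. The curve parameters and the truncated-normal density model below are illustrative assumptions, not the paper's fitted model.

```python
import random

def productivity(v, a=3.0, b=1.0):
    """Concave, saturating productivity as a function of vole density;
    a and b are illustrative parameters, not fitted values."""
    return a * v / (b + v)

random.seed(2)
mean_v = 2.0
for cv in (0.1, 0.5, 1.0):          # coefficient of variation of vole density
    draws = [max(random.gauss(mean_v, cv * mean_v), 0.0)
             for _ in range(100000)]
    e_prod = sum(productivity(v) for v in draws) / len(draws)
    print(f"CV={cv:.1f}: E[productivity] = {e_prod:.3f} "
          f"(productivity at the mean density = {productivity(mean_v):.3f})")
```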

13.
14.
Estimates of transmitted HIV drug-resistance prevalence vary widely among and within epidemiological surveys. Interpretation of trends from available survey data is therefore difficult. Because the emergence of drug-resistance involves small populations of infected drug-resistant individuals, the role of stochasticity (chance events) is likely to be important. The question addressed here is: how much variability in transmitted HIV drug-resistance prevalence patterns arises due to intrinsic stochasticity alone, i.e., if all starting conditions in the different epidemics surveyed were identical? This ‘thought experiment’ gives insight into the minimum expected variabilities within and among epidemics. A simple stochastic mathematical model was implemented. Our results show that stochasticity alone can generate a significant degree of variability and that this depends on the size and variation of the pool of new infections when drug treatment is first introduced. The variability in transmitted drug-resistance prevalence within an epidemic (i.e., the temporal variability) is large when the annual pool of all new infections is small (fewer than 200, typical of the HIV epidemics in Central European and Scandinavian countries) but diminishes rapidly as that pool grows. Epidemiological surveys involving hundreds of new infections annually are therefore needed to allow meaningful interpretation of temporal trends in transmitted drug-resistance prevalence within individual epidemics. The stochastic variability among epidemics shows a similar dependence on the pool of new infections if treatment is introduced after endemic equilibrium is established, but can persist even when there are more than 10,000 new infections annually if drug therapy is introduced earlier. Stochastic models may therefore have an important role to play in interpreting differences in transmitted drug-resistance prevalence trends among epidemiological surveys.
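
A toy version of such a stochastic model (invented rates, not the paper's parameterization) already reproduces the headline effect: the run-to-run spread of transmitted-resistance prevalence shrinks as the annual pool of new infections grows.

```python
import random

def final_prevalences(new_per_year, years=10, runs=100, seed=4):
    """Toy stochastic model: each new infection is drug resistant with a
    probability tracking current resistant prevalence plus a small
    treatment-driven acquisition rate (rates invented for illustration)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(runs):
        resistant = total = 0
        for _ in range(years):
            p = 0.01 + (0.8 * resistant / total if total else 0.0)
            for _ in range(new_per_year):
                total += 1
                resistant += rng.random() < p
        finals.append(resistant / total)
    return finals

for n in (100, 1000, 5000):
    f = final_prevalences(n)
    mean = sum(f) / len(f)
    sd = (sum((x - mean) ** 2 for x in f) / len(f)) ** 0.5
    print(f"{n:5d} new infections/yr: "
          f"prevalence {mean:.3f}, across-run sd {sd:.3f}")
```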

15.
The routing mix problem in flexible assembly systems is considered. The problem consists of assigning the operations for each part to the machines, with the two objectives of balancing the machine workloads and minimizing the burden of the transportation system. These two objectives are sometimes conflicting, since the latter tends to support assigning operations to the same machine(s) as much as possible, and this may be bad for workload balancing. A linear programming problem is presented that, given a constraint on the workload of each machine, finds one solution that minimizes the overall time spent moving the parts from one machine to another. Since such a linear program may have an exponential number of variables, an efficient column generation technique to solve the problem is devised. The efficiency of the method is validated by experiments on a large number of random problems.
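
For intuition, the compact assignment-style LP below (solved with scipy's linprog on a tiny invented instance) minimizes a per-operation transport burden subject to machine workload caps. The paper's actual formulation has one variable per route, hence exponentially many columns and the need for column generation; this sketch only illustrates the objective/constraint structure.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny illustrative instance (all numbers hypothetical): 4 operations to be
# routed over 3 machines. time[o] = processing time of operation o;
# move[o, m] = transport burden incurred if operation o runs on machine m.
time = np.array([4.0, 3.0, 5.0, 2.0])
move = np.array([[1.0, 2.0, 4.0],
                 [3.0, 1.0, 2.0],
                 [2.0, 3.0, 1.0],
                 [4.0, 1.0, 2.0]])
cap = np.array([6.0, 6.0, 6.0])              # workload limit per machine

n_ops, n_m = move.shape
c = move.flatten()                           # decision x[o, m], row-major
A_eq = np.zeros((n_ops, n_ops * n_m))        # sum_m x[o, m] = 1 for each o
for o in range(n_ops):
    A_eq[o, o * n_m:(o + 1) * n_m] = 1.0
A_ub = np.zeros((n_m, n_ops * n_m))          # sum_o time[o]*x[o, m] <= cap[m]
for m in range(n_m):
    for o in range(n_ops):
        A_ub[m, o * n_m + m] = time[o]

res = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=np.ones(n_ops),
              bounds=(0.0, 1.0))
print("minimum transport burden:", round(res.fun, 2))
print("routing fractions:\n", res.x.reshape(n_ops, n_m).round(2))
```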

16.
This paper considers the problem of configuring a printed circuit board (PCB) assembly line experiencing uncertainty in demand and capacity. The PCB assembly process involves a single line of automatic placement machines, a variety of board types, and a number of component types. The line is set up only once, at the beginning of a production cycle, to eliminate setups between board types. Using this strategy, the line can therefore assemble all different types of PCBs without feeder changes. The problem then becomes to partition component types to the different machines in the hope of processing all boards quickly with a good workload balance. In this paper, the board demands and machine breakdowns are random but follow some probability distribution, which can be predicted from past observations of the system. We formulate this problem as a stochastic mixed-integer program with the objective of minimizing the expected makespan for assembling all PCBs during a production cycle. The results obtained indicate significant improvement over the existing methods. We hope that this research will provide more PCB assembly facilities with models and techniques to hedge against variable forecasts and capacity plans.
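
A scenario-sampling sketch of the evaluation step: fix a component-to-machine partition once, then average the resulting makespan over random demand and breakdown scenarios. Board data, demand ranges, and breakdown probabilities below are hypothetical.

```python
import random

# hypothetical data: placements of each component type on each board type
counts = {"b1": {"c1": 4, "c2": 2, "c3": 1},
          "b2": {"c1": 1, "c2": 5, "c3": 3}}

def makespan(partition, demand, uptime):
    """One scenario: a machine's time is the placements of its assigned
    component types across all demanded boards, inflated by lost uptime."""
    load = [0.0] * len(uptime)
    for comp, m in partition.items():
        placements = sum(demand[b] * counts[b].get(comp, 0) for b in demand)
        load[m] += placements / uptime[m]
    return max(load)

def expected_makespan(partition, n=5000, seed=13):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        demand = {b: rng.randint(50, 150) for b in counts}       # demand risk
        uptime = [1.0 - 0.2 * (rng.random() < 0.3)               # breakdowns
                  for _ in range(2)]
        total += makespan(partition, demand, uptime)
    return total / n

for part in ({"c1": 0, "c2": 1, "c3": 1}, {"c1": 0, "c2": 1, "c3": 0}):
    print(part, "-> expected makespan =", round(expected_makespan(part), 1))
```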

17.
In this paper we investigate a manufacturer’s sustainable sourcing strategy that includes recycled materials. To produce a short life-cycle electronic good, strategic raw materials can be bought from virgin material suppliers in advance of the season and via emergency shipments, as well as from a recycler. Hence, we take into account virgin and recycled materials from different sources simultaneously. Recycling makes it possible to integrate raw materials out of steadily increasing waste streams back into production processes. Considering stochastic prices for recycled materials, stochastic supply quantities from the recycler and stochastic demand as well as their potential dependencies, we develop a single-period inventory model to derive the order quantities for virgin and recycled raw materials to determine the related costs and to evaluate the effectiveness of the sourcing strategy. We provide managerial insights into the benefits of such a green sourcing approach with recycling and compare this strategy to standard sourcing without recycling. We conduct a full factorial design and a detailed numerical sensitivity analysis on the key input parameters to evaluate the cost savings potential. Furthermore, we consider the effects of correlations between the stochastic parameters. Green sourcing is especially beneficial in terms of cost savings for high demand variability, high prices of virgin raw material and low expected recycling prices as well as for increasing standard deviation of the recycling price. Besides these advantages it also contributes to environmental sustainability as, compared to sourcing without recycling, it reduces the total quantity ordered and, hence, emissions are reduced.
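
A Monte Carlo sketch of evaluating one such sourcing plan: virgin units are bought in advance at a known price, recycled units arrive in an uncertain quantity at an uncertain price, and any shortfall is covered by expensive emergency shipments. All prices and distributions are invented; the paper derives order quantities from a single-period inventory model rather than by enumeration.

```python
import random

def expected_cost(q_virgin, q_recycled, n=20000, seed=17):
    """Average cost of a single-period green-sourcing plan under stochastic
    demand, recycling price, and recycled supply (all hypothetical)."""
    rng = random.Random(seed)
    c_virgin, c_emergency = 10.0, 18.0
    total = 0.0
    for _ in range(n):
        demand = rng.gauss(100, 25)
        p_rec = max(rng.gauss(7.0, 1.5), 0.0)         # stochastic price
        delivered = min(q_recycled, rng.uniform(0, 60))  # uncertain supply
        cost = q_virgin * c_virgin + delivered * p_rec
        short = max(demand - q_virgin - delivered, 0)
        cost += short * c_emergency                   # emergency shipment
        total += cost
    return total / n

for qv, qr in ((100, 0), (80, 30), (60, 50)):
    print(f"virgin={qv:3d}, recycled={qr:2d}: "
          f"E[cost] = {expected_cost(qv, qr):.0f}")
```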

18.
The objective of the present study was to evaluate flow cytometry analysis (FCA) as a tool for rapidly and objectively estimating the percentage of cells infected with Cryptosporidium parvum in an in vitro model. We compared the results to those obtained with an immunofluorescence assay (IFA) and evaluated the intra-assay variability of both assays and the inter-assay variability of IFA. Human ileocecal adenocarcinoma cells (HCT-8) were infected with different doses of excysted oocysts. After 24 hours, cells were analysed by FCA and by IFA using a monoclonal antibody that recognises a C. parvum antigenic protein and a lectin that binds with glycoproteins present in the parasitophorous vacuoles. The coefficient of variation in terms of the percentage of infected cells was lower for FCA (i.e., 13-14%) than for IFA (i.e., 27-38% when performed by a single operator and 19-22% when performed by three operators), suggesting that FCA is more accurate, in that it is not subject to operator expertise. FCA also has the advantage of allowing the entire culture to be examined, thus avoiding problems with heterogeneity among microscopic fields. In light of these results, this method could also be used to test new anti-Cryptosporidium drugs.
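
The comparison statistic is the coefficient of variation across replicate measurements; a quick sketch with hypothetical replicate readings (not the study's data):

```python
def cv(values):
    """Coefficient of variation (%) = 100 * sample sd / mean, the
    repeatability measure used to compare the two assays."""
    mean = sum(values) / len(values)
    sd = (sum((x - mean) ** 2 for x in values) / (len(values) - 1)) ** 0.5
    return 100 * sd / mean

# hypothetical replicate readings of % infected cells from one culture
fca = [21.0, 23.5, 20.2, 22.8, 21.9]   # flow cytometry replicates
ifa = [18.0, 29.5, 21.0, 31.2, 24.0]   # immunofluorescence replicates
print(f"FCA CV = {cv(fca):.1f}%, IFA CV = {cv(ifa):.1f}%")
```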

19.
Power management in large-scale computational environments can significantly benefit from predictive models. Such models provide information about the power consumption behavior of workloads prior to running them. Power consumption depends on the characteristics of both the machine and the workload. However, combinational features such as the cache miss rate cannot be considered due to their unavailability before running the workload. Therefore, pre-execution power modeling requires both machine-independent workload characteristics and workload-independent machine characteristics. In this paper, the predictive modeling problem is tackled with a two-stage modeling framework. In the first stage, a machine learning approach is taken to predict single-threaded workload power consumption at a specific frequency. The second stage analytically scales this output to any intended thread/frequency configuration. Experimental results show that the proposed approach can yield highly accurate predictions of workload power consumption, with an average error of 3.7% on six different test platforms.
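
A hedged sketch of the two-stage idea: stage one fits an ordinary least-squares model from pre-execution workload features to single-thread power at a reference frequency (synthetic data below), and stage two scales that prediction analytically. The linear-in-threads, cubic-in-frequency scaling rule is a textbook assumption, not necessarily the paper's exact model.

```python
import numpy as np

# Stage 1: learn single-threaded power at a reference frequency f0 from
# pre-execution features (all data below is synthetic/hypothetical).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 3))        # e.g. instruction-mix features
true_w = np.array([12.0, 5.0, 8.0])
y = 20.0 + X @ true_w + rng.normal(0, 0.5, 200)   # watts at f0, 1 thread
A = np.hstack([np.ones((200, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)      # ordinary least squares

def predict_power(features, threads, freq, f0=2.0, idle=15.0):
    """Stage 2: analytic scaling of the stage-1 prediction, assuming dynamic
    power grows linearly with threads and cubically with frequency
    (f * V^2 with V roughly proportional to f)."""
    p1 = coef[0] + features @ coef[1:]            # single-thread power at f0
    dynamic = p1 - idle
    return idle + dynamic * threads * (freq / f0) ** 3

w = np.array([0.4, 0.7, 0.2])                     # hypothetical workload
for t, f in ((1, 2.0), (4, 2.0), (4, 2.6)):
    print(f"{t} threads @ {f} GHz -> {predict_power(w, t, f):.1f} W")
```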

20.
System setup problems in flexible manufacturing systems deal with short-term planning problems such as part type selection, machine grouping, operation assignment, tooling, fixture and pallet allocation, and routing. In this article, we consider three of the subproblems: part type selection, machine grouping, and loading. We suggest a heuristic approach to solve the subproblems consistently with the objective of maximizing the expected production rate. The proposed procedure includes routines to generate all possible machine grouping alternatives for a given set of machines, to obtain optimal target workloads for each grouping alternative, and to allocate operations and tools to machine groups. These routines are executed iteratively until a good solution to the system setup problem is obtained. Computational experience is reported.
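
The first routine, generating all machine grouping alternatives for a given set of machines, amounts to enumerating set partitions, as in this small sketch (the target-workload and tool-allocation routines are omitted):

```python
def partitions(machines):
    """Yield every way to group a set of machines into machine groups
    (all set partitions), the first routine of the setup heuristic."""
    if not machines:
        yield []
        return
    first, rest = machines[0], machines[1:]
    for p in partitions(rest):
        for i in range(len(p)):                 # join an existing group
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p                     # or open a new group

for grouping in partitions([1, 2, 3]):          # Bell(3) = 5 alternatives
    print(grouping)
```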
