Similar Documents
1.
Divisible Load Scheduling in Systems with Limited Memory
In this work we consider scheduling divisible loads on a distributed computing system with limited available memory. The communication delays and heterogeneity of the system are taken into account. The problem studied consists in finding a distribution of the load for which the total communication and computation time is as short as possible. A new robust method is proposed to find the optimal distribution of computations on star networks and on networks in which binomial trees can be embedded (meshes, hypercubes, multistage interconnections). We demonstrate that in many cases memory limitations do not restrict the efficiency of parallel processing as much as computation and communication speeds do.
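The closed form behind such star-network schedules is a classic of divisible load theory. Below is a minimal sketch of that textbook recurrence on a heterogeneous star, ignoring the memory limits this paper adds; the rates c and w are illustrative inputs, not values from the paper.

```python
# Classical divisible-load closed form on a heterogeneous star
# (single-port master, no memory limits).  c[i]: time to send one load
# unit to worker i; w[i]: time for worker i to compute one unit.
# Optimality requires all workers to finish simultaneously, giving
#   alpha[i+1] = alpha[i] * w[i] / (c[i+1] + w[i+1]).

def star_fractions(c, w):
    alpha = [1.0]                        # unnormalized share of worker 0
    for i in range(len(c) - 1):
        alpha.append(alpha[i] * w[i] / (c[i + 1] + w[i + 1]))
    total = sum(alpha)
    return [a / total for a in alpha]    # normalize so the shares sum to 1

c = [0.2, 0.3, 0.5]    # per-unit communication times (illustrative)
w = [1.0, 1.5, 2.0]    # per-unit computation times (illustrative)
print(star_fractions(c, w))
```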

2.
In this paper, we consider the problem of scheduling divisible loads on arbitrary graphs with the objective of minimizing the total processing time of the entire load submitted for processing. We consider an arbitrary graph network comprising heterogeneous processors interconnected via heterogeneous links in an arbitrary fashion. The divisible load is assumed to originate at any processor in the network. We transform the problem into a multi-level unbalanced tree network and schedule the divisible load. We design systematic procedures to identify and eliminate any redundant processor–link pairs (pairs whose consideration in scheduling would penalize the performance) and derive an optimal tree structure that yields an optimal processing time for a fixed sequence of load distribution. Since the algorithm strives to determine an equivalent number of processors (resources) that can be used for processing the entire load, we refer to this approach as the resource-aware optimal load distribution (RAOLD) algorithm. We extend our study by applying the optimal sequencing theorem proposed in the literature for single-level tree networks to multi-level trees to obtain an optimal solution. We evaluate the performance for a wide range of arbitrary graphs with varying connectivity probabilities and processor densities, and also study the effects of network scalability and connectivity. We demonstrate the time performance when the point of load origination differs in the network and highlight certain key features that may be useful for algorithm and/or network system designers. The findings are verified through rigorous simulation experiments under different system parameters.
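The tree-transformation step can be pictured with an ordinary shortest-path tree. The sketch below is only a simplified stand-in for RAOLD's pruning rules, assuming each node keeps its cheapest communication path to the load origin.

```python
import heapq

# Flatten an arbitrary network into a multi-level tree rooted at the
# load origin by keeping each node's cheapest path (Dijkstra tree).
# graph: {node: [(neighbor, link_delay), ...]}.

def shortest_path_tree(graph, origin):
    dist = {origin: 0.0}
    parent = {origin: None}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, delay in graph[u]:
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent                          # tree edges: v -> parent[v]

g = {0: [(1, 1.0), (2, 4.0)], 1: [(0, 1.0), (2, 1.5)], 2: [(0, 4.0), (1, 1.5)]}
print(shortest_path_tree(g, 0))            # {0: None, 1: 0, 2: 1}
```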

3.
In this paper, divisible load scheduling in a linear network of processors is presented. The cases of the processing load originating at the boundary and at the interior of the network are both considered. An equivalent tree network for the given linear network is derived, and using this equivalent tree network we prove all the results obtained in earlier studies. The equivalent tree network methodology presented in this paper is more general than the earlier results: it solves the scheduling problem even for a heterogeneous linear network, whereas earlier studies considered only homogeneous linear networks.

4.
We study the problem of scheduling a divisible load in a three-dimensional mesh of processors. The objective is to find a partition of the load into shares, and a distribution of those shares among the processors, that minimize the load processing time subject to the communication delays involved in sending load from one processor to another. We propose a new scheduling algorithm which distributes the load in a sequence of stages across the network; each stage brings load to the set of processors located at the same distance from the load source. A key feature of our solution is that these sets of processors receive load in the order of decreasing processing capacities. We call this scheduling strategy Largest Layer First. A theorem about the processing time attained by the algorithm is stated, and the performance of the algorithm is compared to earlier results.
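The layering idea is easy to prototype. A hedged sketch, assuming a full 3D mesh with Manhattan-distance layers and an illustrative per-processor speed table:

```python
from collections import defaultdict

# Group the processors of a 3D mesh by Manhattan distance from the
# load source; Largest Layer First then feeds layers in order of
# decreasing total processing capacity.

def layers_by_distance(dims, source):
    X, Y, Z = dims
    sx, sy, sz = source
    layers = defaultdict(list)
    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                d = abs(x - sx) + abs(y - sy) + abs(z - sz)
                layers[d].append((x, y, z))
    return layers

layers = layers_by_distance((3, 3, 3), (0, 0, 0))
speed = {n: 1.0 for d in layers for n in layers[d]}   # uniform speeds for demo
order = sorted(layers, key=lambda d: sum(speed[n] for n in layers[d]),
               reverse=True)                          # largest layer first
print(order)                                          # layers by total capacity
```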

5.
A new model for the divisible load problem is introduced and its characteristics are analyzed. Optimal load distribution algorithms on the new model are presented for tree and linear networks, and applications that fit the model are briefly described. We show that our model outperforms existing models such as the Cheng–Robertazzi model. We show that the linear model is equivalent to a single-level tree network if the intermediate processors do not follow the store-and-forward communication model but instead follow the store-and-bypass model. This paper introduces the concept of store-and-bypass for divisible load theory.
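A back-of-envelope timing model shows why store-and-bypass makes a chain behave like a single-level tree. The numbers and the per-hop latency term below are assumptions for illustration, not the paper's analysis.

```python
# Store-and-forward: every intermediate processor receives the whole
# chunk before relaying it, so each hop costs a full transmission.
# Store-and-bypass: intermediate processors relay the stream on the
# fly, so a single transmission time dominates.

def store_and_forward_time(hops, size, bandwidth):
    return hops * size / bandwidth

def store_and_bypass_time(hops, size, bandwidth, per_hop_latency=0.001):
    return size / bandwidth + hops * per_hop_latency

hops, size, bw = 5, 100.0, 10.0
print(store_and_forward_time(hops, size, bw))   # 50.0
print(store_and_bypass_time(hops, size, bw))    # ~10.005
```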

6.
We analyze the parallel time and speedup for processing a divisible load on (1) a linear array with a corner initial processor; (2) a linear array with an interior initial processor; (3) a mesh with a corner initial processor; (4) a mesh with an interior initial processor; (5) a b-ary complete tree with the root as the initial processor; (6) a pyramid with the apex as the initial processor. Due to communication overhead and limited network connectivity, the speedup of parallel processing for a divisible load on static interconnection networks with constant node degrees is bounded from above by a quantity independent of network size. It is shown that for the above six cases, as the network size becomes large, the asymptotic speedup is approximately β, 2β, 3¾β, 4¾β, (b–1)β, and 3β, respectively, where β is the ratio of the time for computing a unit load to the time for communicating a unit load. We also investigate divisible load distribution on hypercubes. Our strategy takes advantage of the recursive structure of a hypercube. It is proven that linear speedup can be achieved as the communication cost becomes vanishingly small.

7.
Theories of short term memory often include a limited capacity "buffer". Such a buffer contains items which do not decay at all but are overwritten by new data. I show that one of the experiments that fueled the buffer concept, the free recall experiments by Murdock (J Exp Psychol 64(5):482–488, 1962), does not contain such a buffer.

8.
Reactive scheduling is a procedure followed in production systems to react to unforeseen events that disturb the normal operation of the system. In this paper, a novel operations insertion heuristic is proposed to solve the deadlock-free reactive scheduling problem in flexible job shops upon the arrival of new jobs. The heuristic utilizes rank matrices (Latin rectangles) to insert new jobs into schedules while preventing the occurrence of deadlocks or resolving them using the available buffer space (if any). Jobs with alternative processing routes through the system are also considered. The heuristic can be employed to execute two reactive scheduling approaches in a time-efficient manner: inserting the new jobs into the already existing schedule (job insertion) or rescheduling all the jobs in the system (total rescheduling). Using experimental design and analysis of variance (ANOVA), the relative performance of the two approaches is studied and analyzed to provide measures and guidelines for selecting the appropriate reactive scheduling approach for different problem settings. Three measures of performance are considered in the analysis: efficiency of the revised schedules in terms of mean flow time, the resulting system nervousness, and the required solution time. The results show that, on average, job insertion obtains revised schedules featuring significantly lower system nervousness and slightly higher mean flow time than total rescheduling. However, depending on the system size, the number and processing times of the new jobs, and the available flexibility in the system, a trade-off between the two approaches should sometimes be considered.
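Nervousness can be quantified in several ways; one simple proxy, not necessarily the paper's exact metric, is the fraction of previously scheduled operations whose machine or start time changed in the revised schedule:

```python
# A schedule maps operation id -> (machine, start_time).

def nervousness(old, revised):
    changed = sum(1 for op, slot in old.items()
                  if revised.get(op, slot) != slot)
    return changed / len(old) if old else 0.0

old =     {"j1-op1": ("M1", 0), "j1-op2": ("M2", 5), "j2-op1": ("M1", 4)}
revised = {"j1-op1": ("M1", 0), "j1-op2": ("M2", 8), "j2-op1": ("M3", 4),
           "j3-op1": ("M2", 0)}           # new job inserted
print(nervousness(old, revised))          # 2 of 3 original ops moved
```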

9.
The study of working memory capacity is of utmost importance in cognitive psychology, as working memory is at the basis of general cognitive function. Although the working memory capacity limit has been thoroughly studied, its origin still remains a matter of strong debate. Only recently has the role of visual saliency in modulating working memory storage capacity been assessed experimentally and proved to provide valuable insights into working memory function. In the computational arena, attractor networks have successfully accounted for psychophysical and neurophysiological data in numerous working memory tasks, given their ability to produce a sustained elevated firing rate during a delay period. Here we investigate the mechanisms underlying working memory capacity by means of a biophysically realistic attractor network with spiking neurons, while accounting for two recent experimental observations: (1) the presence of a visually salient item reduces the number of items that can be held in working memory, and (2) visually salient items are commonly kept in memory at the cost of not keeping as many non-salient items. Our model suggests that working memory capacity is determined by two fundamental processes: encoding of visual items into working memory, and maintenance of the encoded items upon their removal from the visual display. While maintenance critically depends on the constraints that lateral inhibition imposes on the mnemonic activity, encoding is limited by the ability of the stimulated neural assemblies to reach a sufficiently high level of excitation, a process governed by the dynamics of competition and cooperation among neuronal pools. Encoding is therefore contingent upon the visual working memory task, and has led us to introduce the concept of effective working memory capacity (eWMC), in contrast to the maximal upper capacity limit reached only under ideal conditions.
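A toy firing-rate model, far simpler than the paper's spiking network, already shows how lateral inhibition caps the number of items that survive into the delay period and why more strongly cued (more salient) items win. All parameters here are made up for illustration.

```python
import numpy as np

# Each pool holds one item.  Self-excitation sustains a pool at its
# saturation rate; global lateral inhibition grows with every active
# pool, so only a few pools can stay up once the cues are removed.

def simulate(cues, w_self=2.0, w_inh=0.3, r_max=1.0, steps=4000, dt=0.01):
    r = np.zeros(len(cues))
    for t in range(steps):
        stim = cues if t < steps // 2 else np.zeros_like(cues)  # cue, then delay
        drive = stim + w_self * r - w_inh * (r.sum() - r)       # lateral inhibition
        r += dt * (-r + np.clip(drive, 0.0, r_max))             # saturating rate
    return r

cues = np.array([1.0, 1.0, 0.9, 0.4, 0.3])   # two salient, three weaker items
print(np.round(simulate(cues), 2))           # only a few pools stay elevated
```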

10.
In the present paper the problem of chemostat modelling using neural network techniques is considered. A feedforward neural network with time-delay feedback connections from and to the output neurons, which takes the culture memory into account, is proposed. A model of the growth of a Saccharomyces cerevisiae strain on a glucose-limited medium is developed. Simulation investigations are carried out and the results are discussed.
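The architecture can be sketched in a few lines: a feedforward network whose input vector is augmented with its own outputs from the previous few time steps. Weights are random and untrained here, and the layer sizes, inputs, and delay depth are assumptions.

```python
import numpy as np

# Feedforward net with delayed output feedback: the "culture memory"
# enters as the outputs of the previous `delay` steps appended to the
# current external inputs.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, delay = 2, 6, 1, 3
W1 = rng.normal(size=(n_hidden, n_in + delay * n_out))
W2 = rng.normal(size=(n_out, n_hidden))

def step(u, history):
    x = np.concatenate([u] + list(history))   # inputs + delayed outputs
    y = W2 @ np.tanh(W1 @ x)
    history.pop(0)
    history.append(y)                          # shift the delay line
    return y

history = [np.zeros(n_out) for _ in range(delay)]
for t in range(5):
    u = np.array([1.0, 0.5])                   # e.g. dilution rate, feed glucose
    print(step(u, history))
```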

11.
This paper investigates scheduling strategies for divisible jobs/loads originating from multiple sites in hierarchical networks with heterogeneous processors and communication channels. In contrast, most previous work in the divisible load theory (DLT) literature mainly addressed scheduling problems with loads originating from a single processor; this is one of the first works to address scheduling multiple loads from multiple sites in the DLT paradigm. In addition, scheduling multi-site jobs is common in Grids and other general distributed systems for resource sharing and coordination. An efficient static scheduling algorithm, PPDD (Processor-set Partitioning and Data Distribution), is proposed to near-optimally distribute multiple loads among all processors so that the overall processing time of all jobs is minimized. The PPDD algorithm is applied to two cases: when processors are equipped with front-ends and when they are not. The application of the algorithm to homogeneous systems is also studied, and several important properties exhibited by the PPDD algorithm are proven through lemmas. To implement the PPDD algorithm, we propose a communication strategy. In addition, we compare the performance of the PPDD algorithm with a round-robin scheduling algorithm (RSA), which is the most commonly used alternative. Extensive case studies through numerical analysis verify the theoretical findings.
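For intuition, here is a hedged sketch of the two contrasted ideas: the round-robin baseline (RSA) hands processors out cyclically, while a partitioning scheme in the spirit of PPDD gives each load a contiguous processor set sized roughly in proportion to its volume. The real PPDD derives the split analytically; this is only an illustration.

```python
def round_robin(loads, processors):
    # RSA-style baseline: assign loads to processors cyclically.
    assign = {p: [] for p in processors}
    for i, load in enumerate(loads):
        assign[processors[i % len(processors)]].append(load)
    return assign

def proportional_partition(loads, processors):
    # Give each load a contiguous processor set proportional to its size.
    total = sum(loads)
    assign, start = {}, 0
    for i, load in enumerate(loads):
        k = max(1, round(len(processors) * load / total))
        end = len(processors) if i == len(loads) - 1 else min(start + k, len(processors))
        assign[f"load{i}"] = processors[start:end]
        start = end
    return assign

procs = [f"P{i}" for i in range(8)]
print(round_robin([10, 30, 60], procs))
print(proportional_partition([10, 30, 60], procs))
```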

12.
The problem of scheduling jobs that use wearing tools is studied. Tool wear is assumed to be stochastic, and the jobs are processed in one machining centre equipped with a limited-capacity tool magazine. The aim is to minimize the expected average completion time of the jobs by choosing their processing order and the tool management decisions wisely. All jobs are available at the beginning of the planning period. This kind of situation is met in the production planning of CNC machines. Previous studies of this problem have either assumed deterministic tool wear or omitted wear completely. In our formulation, tool wear is stochastic and the problem becomes very hard to solve analytically. A heuristic based on genetic algorithms is therefore given for the joint problem of job scheduling and tool management. The algorithm searches for the most beneficial job sequence, while the tool management decisions are made by a removal rule that takes into account the planned future usage of the tools. The cost of each job sequence is evaluated by simulating the job processing. Empirical tests with the heuristic indicate that by taking the stochastic information into account, one can reduce the average job processing time considerably.
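A skeleton of the approach, with a stand-in wear simulator (random wear increments and a fixed replacement penalty rather than the paper's magazine model and removal rule):

```python
import random

# GA over job permutations; each candidate sequence is scored by
# simulating processing under stochastic tool wear.

def simulate_cost(seq, trials=20, wear_limit=1.0, change_penalty=5.0):
    total = 0.0
    for _ in range(trials):
        wear, t = 0.0, 0.0
        for job in seq:
            t += job                              # processing time
            wear += random.uniform(0.0, 0.3)      # stochastic wear increment
            if wear > wear_limit:                 # worn tool: replace it
                t += change_penalty
                wear = 0.0
        total += t
    return total / trials

def crossover(a, b):
    # Order crossover: keep a prefix of a, fill the rest in b's order.
    cut = random.randrange(1, len(a))
    tail = [j for j in b if j not in a[:cut]]
    return a[:cut] + tail

jobs = [3.0, 1.0, 4.0, 1.5, 2.0, 5.0]
pop = [random.sample(jobs, len(jobs)) for _ in range(20)]
for gen in range(30):
    pop.sort(key=simulate_cost)                   # evaluate by simulation
    pop = pop[:10] + [crossover(random.choice(pop[:10]),
                                random.choice(pop[:10])) for _ in range(10)]
print(pop[0], simulate_cost(pop[0]))
```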

13.
The general problem scenario of this paper is the following: jobs of various priorities, stationed in a common storage area, are waiting to be dispatched to two non-identical workstations. Any of the waiting jobs can be accessed from the storage at any given time. Each job can be processed on either of the workstations, but once a job has been assigned it may not be preempted. By job priority it is meant that a higher priority job has dispatch preference over a lower priority job. The processing time of a job on a given workstation is assumed to be random, with a distribution depending on the job type and the configuration of the workstation. Specifically, the first problem studied considers only two classes of jobs: (1) “hot” jobs, whose processing is to be expedited and which thus have the higher dispatch priority, and (2) “routine” jobs, which may be assigned to an available workstation only if the workstation has been rejected by all “hot” jobs. The processing times are assumed to be exponentially distributed with means depending on the job class and workstation. We assume that, on average, one workstation is faster than the other with regard to processing any job. The dispatching objective for each job class is to minimize its expected flowtime. It is shown that threshold dispatching policies are optimal for this problem. That is, the faster processor should be utilized whenever possible, and for each class there exists an explicit threshold such that when the number of jobs of that class in the buffer exceeds this threshold, a job of that class is dispatched to the slower processor; otherwise these jobs wait for the faster processor to become available. For the higher priority jobs, this threshold is shown to be a function only of the various processing rates of the two workstations. For the lower priority jobs, the threshold also depends on the number of higher priority jobs in the buffer. The result is extended to a system with n priority classes. Again, it is shown that when the processing times are exponentially distributed with different rates and the dispatching objective for each class is to minimize its expected flowtime, the optimal dispatching policies are of threshold type, and explicit thresholds are easily derived.
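The threshold structure is easy to express in code. In this hedged sketch the thresholds are made-up inputs; in the paper they are derived from the processing rates, and for lower-priority classes they also depend on the higher-priority queue lengths.

```python
# Threshold policy: a class waits for the fast workstation unless its
# queue length exceeds its threshold, in which case one of its jobs
# goes to the slow workstation.

def dispatch(queue_lengths, thresholds, fast_free, slow_free):
    for cls in sorted(queue_lengths):            # class 0 = highest priority
        if queue_lengths[cls] == 0:
            continue
        if fast_free:
            return cls, "fast"
        if slow_free and queue_lengths[cls] > thresholds[cls]:
            return cls, "slow"
    return None                                  # everyone keeps waiting

print(dispatch({0: 1, 1: 4}, {0: 2, 1: 3}, fast_free=False, slow_free=True))
# -> (1, 'slow'): the hot queue is under its threshold, the routine queue is not
```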

14.
The flexible manufacturing system (FMS) considered in this paper is composed of two CNC machines working in series: a punching machine and a bending machine, connected through rollers acting as a buffer system of finite capacity. The main difference between the present problem and the standard two-machine flow shop problem with finite intermediate capacity is precisely the buffer system, which in our problem consists of two stacks of parts supported by rollers: the first stack contains the output of the punching machine, while the second stack contains the input for the bending machine. When the second stack is empty, the first stack may be moved over. Furthermore, the capacity of each stack depends on the particular part type being processed. The FMS can manufacture a wide range of parts of different types. Processing times on the two machines are usually different, so that an unbalance results in their total workloads. Furthermore, whenever there is a change of the part type in production, the machines must be properly reset; that is, some tools need to be changed or repositioned. A second important difference between the present problem and the usual two-machine flow shop problem is the objective. Given a list of p part types to be produced in known quantities, the problem considered here is how to sequence or alternate the production of the required part types so as to achieve various hierarchical targets: minimize the makespan (the total time needed to complete production) and, for instance, compress the idle periods of the machine with the smaller workload into a few intervals long enough to be used for maintenance or other purposes. Although Johnson's rule is optimal in some particular cases, the problem addressed in the paper is NP-hard in general; heuristic procedures are therefore provided.
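For reference, Johnson's rule itself, which the abstract cites as optimal in particular cases of this problem, is a few lines:

```python
# Johnson's rule for the two-machine flow shop: jobs whose first-machine
# time is no longer than their second-machine time go first (increasing
# first-machine time); the rest go last (decreasing second-machine time).

def johnson(jobs):                       # jobs: [(name, t_machine1, t_machine2)]
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return front + back

jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2), ("D", 7, 5)]
print([j[0] for j in johnson(jobs)])     # ['C', 'A', 'D', 'B']
```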

15.
Puah WC, Cheok LP, Biro M, Ng WT, Wasser M. BioTechniques 2011, 51(1):49-50, 52-3
Automated microscopy enables in vivo studies in developmental biology over long periods of time. Time-lapse recordings in three or more dimensions used to study the dynamics of developmental processes can produce huge data sets that extend into the terabyte range. However, depending on the available computational resources and software design, downstream processing of very large image data sets can become highly inefficient, if not impossible. To address the lack of open-source and commercial software tools for efficiently reorganizing time-lapse data on a desktop computer with limited system resources, we developed TLM-Converter. The software either fragments oversized files or concatenates multiple files representing single time frames, and saves the output in open standard formats. Our application is undemanding on system resources, as it does not require the whole data set to be loaded into system memory. We tested our tool on time-lapse data sets of live Drosophila specimens recorded by laser scanning confocal microscopy. Image data reorganization dramatically enhances the productivity of time-lapse data processing and allows the use of downstream image analysis software that is unable to handle data sets of ≥2 GB. In addition, saving the outputs in open standard image file formats enables data sharing between independently developed software tools.
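The memory-friendly idea, streaming data in bounded chunks so the full set never sits in RAM, can be sketched generically (this is not TLM-Converter itself, and real image formats would need header handling on top):

```python
import shutil

# Concatenate many per-time-point files into one output by streaming
# fixed-size chunks, so the whole data set is never loaded into memory.

def concat_streaming(inputs, output, chunk=16 * 1024 * 1024):
    with open(output, "wb") as out:
        for path in inputs:
            with open(path, "rb") as src:
                shutil.copyfileobj(src, out, length=chunk)

# Hypothetical usage with per-frame raw files:
# concat_streaming([f"frame_{t:04d}.raw" for t in range(100)], "stack.raw")
```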

16.
Recently, continuous downstream processing has become a topic of discussion and analysis at conferences, although no industrial applications of continuous downstream processing for biopharmaceutical manufacturing have been reported. There is significant potential to increase the productivity of a Protein A capture step by converting the operation to simulated moving bed (SMB) mode. In this mode, shorter columns are operated at higher process flow rates and correspondingly short residence times. The ability to significantly shorten the product residence time during loading without appreciable capacity loss can dramatically increase the productivity of the capture step and consequently reduce the amount of Protein A resin required in the process. Previous studies have not considered the physical limitations on how short columns can be packed, nor the flow rate limitations due to the pressure drop of stacked columns. In this study, we evaluate the process behavior of a continuous Protein A capture column cycling operation under the known pressure drop constraints of a compressible medium. The results are compared to the same resin operated under traditional batch conditions. We analyze the optimum system design point for a range of feed concentrations, bed heights, and load residence times, and determine the achievable productivity for any feed concentration and any column bed height. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:938–948, 2016
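The productivity trade-off the study explores can be illustrated with simple capture-step arithmetic. All numbers below are assumptions, not the paper's data: productivity rises as residence time drops, until the loss of dynamic binding capacity (and pressure-drop limits on flow) eats the gain.

```python
# Load time to reach the dynamic binding capacity (DBC) at residence
# time tau: load_volume = DBC*V/titer and flow Q = V/tau, so
# load_time = DBC * tau / titer, independent of column volume.

def productivity(q_dbc, titer, residence_min, overhead_min=30.0):
    load_min = q_dbc * residence_min / titer    # minutes to load to DBC
    cycle_min = load_min + overhead_min         # + wash/elute/regenerate
    return q_dbc / (cycle_min / 60.0)           # g captured per L resin per hour

for tau in (6.0, 2.0, 1.0):                     # residence time, minutes
    q = 40.0 if tau >= 2.0 else 25.0            # assumed DBC loss at short tau
    print(tau, round(productivity(q, titer=2.0, residence_min=tau), 1))
```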

17.
UNICOR: a species connectivity and corridor network simulator
We introduce the UNIversal CORridor network simulator (UNICOR), a species connectivity and corridor identification tool. UNICOR applies Dijkstra's shortest path algorithm to individual-based simulations. Its outputs can be used to designate movement corridors, identify isolated populations, and prioritize conservation plans to promote species persistence. Key features include a driver-module framework, connectivity mapping with thresholding and buffering, and calculation of graph-theory metrics. Through parallel processing, computational efficiency is greatly improved, allowing analyses of large extents and entire populations. Previously available approaches are limited by prolonged computational times and poor algorithmic efficiency, which restrict problem size and require artificial subsampling of populations.
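The core computation is conventional: run shortest paths between many source/target pairs over a resistance surface and count how often each cell is used. A hedged, pure-Python sketch (the real tool adds buffering, thresholding, and parallel processing):

```python
import heapq
from collections import defaultdict

# Dijkstra shortest path; graph: {node: [(neighbor, resistance), ...]}.
def dijkstra_path(graph, s, t):
    dist, parent, heap = {s: 0.0}, {s: None}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [], t
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

graph = {0: [(1, 1), (2, 5)], 1: [(0, 1), (2, 1), (3, 4)],
         2: [(0, 5), (1, 1), (3, 1)], 3: [(1, 4), (2, 1)]}
usage = defaultdict(int)
for s, t in [(0, 3), (3, 0), (0, 2)]:          # many pairs in practice
    for node in dijkstra_path(graph, s, t):
        usage[node] += 1
print(dict(usage))      # nodes on many paths = corridor candidates
```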

18.
In this paper, task scheduling in MapReduce is considered for geo-distributed data centers on heterogeneous networks, taking adaptive heartbeats, job deadlines and data locality into account. Job deadlines are divided according to the maximum data volume of tasks. With the considered constraints, task scheduling is formulated as an assignment problem in each heartbeat, in which adaptive heartbeats are calculated from the processing times of tasks, jobs are sequenced according to their divided deadlines, and tasks are scheduled by the Hungarian algorithm. Taking into account both data transfer and processing times, the most suitable data center for all mapped jobs is determined in the reduce phase. Experimental results show that the proposed algorithms outperform existing ones, and that the proposals with sorted task sequences perform better than those with random task sequences.
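The per-heartbeat assignment step can be sketched with an off-the-shelf Hungarian solver; the cost model below (transfer plus processing time, made-up numbers) is only an assumption about how such a matrix might be built.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost of running task i in slot j = data transfer time + processing time.
transfer = np.array([[0.0, 3.0, 8.0],       # task i -> data center j
                     [4.0, 0.0, 6.0],
                     [9.0, 2.0, 0.0]])
process = np.array([5.0, 4.0, 7.0])         # per-slot processing time
cost = transfer + process                   # broadcast over columns

rows, cols = linear_sum_assignment(cost)    # Hungarian method
print(list(zip(rows, cols)), cost[rows, cols].sum())
# -> [(0, 0), (1, 1), (2, 2)] 16.0
```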

19.
Real-time scheduling and load control of FMSs are complex processes in which the control logic must consider a broad spectrum of instantaneous state variables while taking into account the probabilistic future impact of each decision at each time epoch. These processes are particularly important in the management of a modern FMS environment, since they are known to have a significant impact on the FMS's productive capacity and economic viability. In this article we outline the approach developed for dynamic load control within an FMS producing a variety of glass lenses. Two revenue-influencing objective functions are evaluated for this capital-intensive facility. It is shown that by using semi-Markovian modeling concepts, the FMS states need to be observed only at certain decision epochs. The mean holding time in each state is then obtained using the probability distribution function of the conditional state occupancy times. Several key performance measures are then derived by means of the value equations. In addition, the structure of the optimal policies is exemplified for a variety of operational parameters. It is shown that the optimal policies tend to build higher buffer stocks of parts in the work centers with the highest revenue-generation rates, and that these buffer stocks shrink as the relative processing capacity of the centers increases. These observations lead us to introduce several promising heuristics that capture the structural properties of the optimal policies with significantly less computational effort. Results of the empirical evaluation of these heuristics are also analyzed.
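The semi-Markov machinery can be illustrated with a toy discounted value iteration in which the discount applied to the next state depends on the mean holding time of the chosen action. All numbers are illustrative.

```python
import numpy as np

# SMDP value iteration: states are observed only at decision epochs;
# the effective discount factor exp(-rho * tau) depends on the mean
# holding time tau of each (state, action) pair.

rho = 0.1                                   # continuous discount rate
R   = np.array([[4.0, 6.0], [2.0, 1.0]])    # reward per (state, action)
tau = np.array([[1.0, 2.0], [1.0, 0.5]])    # mean holding times
P   = np.array([[[0.9, 0.1], [0.4, 0.6]],   # transition probs P[s, a, s']
                [[0.3, 0.7], [0.8, 0.2]]])

v = np.zeros(2)
for _ in range(500):
    q = R + np.exp(-rho * tau) * (P @ v)    # discounted value equations
    v = q.max(axis=1)
print(v, q.argmax(axis=1))                  # values and the optimal policy
```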
