Similar Articles
20 similar articles found (search time: 140 ms)
1.
Real-time systems increasingly appear in complex and dynamic environments such as cruise controllers, life-support systems and nuclear reactors. These systems contain components that sense, control and stabilize the environment towards achieving the mission or target. These cooperating components synchronize, compute and control themselves locally, or rely on a centralized component, to achieve their mission. Distributed computing techniques improve the overall performance and reliability of large real-time systems with spread components. Partially Clairvoyant scheduling was introduced in Saksena, M., PhD thesis (1994) to determine the schedulability of hard real-time jobs with variable execution times. The problem of deciding the Partially Clairvoyant schedulability of a constrained set of jobs was well studied in Gerber, R., et al., IEEE Trans. Comput. 44(3), 471–479 (1995), Choi, S. and Agrawala, A.K., Real-Time Syst. 19(1), 5–40 (2000), and Subramani, K., J. Math. Model. Algorithms 2(2), 97–119 (2003). These algorithms determine the schedulability of a job-set offline and produce a set of dispatch functions for each job in the job-set. The dispatch functions of a job depend on the start and execution times of the jobs sequenced before it. The dispatching problem is concerned with the online computation of the start-time interval of a job such that none of the constraints are violated. In certain situations the dispatcher fails to dispatch a job because computing the interval within which the job must be dispatched takes longer than the time available; this phenomenon is called Loss of Dispatchability. Existing approaches to this problem have been along sequential lines, through the use of stored function lists; for a job-set of size n, such approaches suffer from two major drawbacks, viz., Ω(n) dispatching time and the Loss of Dispatchability phenomenon. In this paper, we propose and evaluate three distributed dispatching algorithms for Partially Clairvoyant schedules. For a job-set of size n, the algorithms have dispatch times of O(1) per job. In the first algorithm, one processor executes all the jobs and the other processors compute the dispatch functions; this scenario simplifies design and suits situations where one processor controls all other devices. In the other algorithms, all processors execute jobs pre-assigned to them and compute the dispatch functions, which is a plausible scenario in distributed control.
A. Osman
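As a rough illustration of the setting only (the paper's distributed algorithms are not given in the abstract), the Python sketch below shows a sequential online dispatcher that evaluates each job's precomputed dispatch functions against the recorded start and execution times of earlier jobs; with stored function lists this is the Ω(n)-per-job pattern the paper's algorithms reduce to O(1). All names here are hypothetical.

```python
# A minimal sketch, not the paper's algorithm: sequential online dispatching
# of a Partially Clairvoyant schedule. Each job carries precomputed dispatch
# functions giving its start-time window from the (start, execution) times
# of earlier jobs.
import random

class Job:
    def __init__(self, lo_fn, hi_fn, exec_range):
        self.lo_fn = lo_fn            # lower-bound dispatch function
        self.hi_fn = hi_fn            # upper-bound dispatch function
        self.exec_range = exec_range  # (min, max) variable execution time

def dispatch(jobs):
    """Evaluate dispatch functions job by job; with stored function lists
    this costs Omega(n) per job, the pattern the paper reduces to O(1)."""
    history = []  # (start, execution) of already-dispatched jobs
    for i, job in enumerate(jobs):
        lo, hi = job.lo_fn(history), job.hi_fn(history)
        if lo > hi:  # empty window: Loss of Dispatchability
            raise RuntimeError(f"job {i} cannot be dispatched")
        start = lo   # dispatch as early as possible
        history.append((start, random.uniform(*job.exec_range)))
    return history

# Two-job example: job 1 must start within 3 time units of job 0 finishing.
jobs = [Job(lambda h: 0.0, lambda h: 5.0, (1.0, 2.0)),
        Job(lambda h: h[0][0] + h[0][1],
            lambda h: h[0][0] + h[0][1] + 3.0, (1.0, 4.0))]
print(dispatch(jobs))
```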

2.
Networks of workstations offer large amounts of unused processing time. Resource management systems are able to exploit this computing capacity by assigning compute-intensive tasks to idle workstations. To avoid interference between multiple, concurrently running applications, such resource management systems have to schedule application jobs carefully. Continuously arriving jobs and dynamically changing amounts of available CPU capacity make traditional scheduling algorithms difficult to apply in workstation networks. Online scheduling algorithms promise better results by adapting schedules to changing situations. This paper compares six online scheduling algorithms by simulating several workload scenarios. Based on the insights gained from simulation, the three best-performing online scheduling algorithms were implemented in the Winner resource management system. Experiments conducted with Winner in a real workstation network confirm the simulation results. This revised version was published online in July 2006 with corrections to the Cover Date.

3.
The traditional assembly system consists of a series of balanced workstations operating at the same rate with fixed cycle times. Recent advances in technology allow more flexible assembly systems, in which workstations operate independently and cycle times vary from job to job. This article develops an analytical model for comparing the throughputs (jobs per hour) of assembly systems with fixed and variable cycle times. The throughputs are compared on a common basis by requiring that both systems allow sufficient processing time to ensure product quality and that they have the same total times in system per job. Results indicate that an assembly system with variable cycle times can operate at a significantly higher throughput than one with fixed cycle times, provided there is sufficient buffer storage space between workstations to accommodate queueing. This benefit must be weighed against possible increased capital investment and practical considerations associated with system control.

4.
In this paper, we propose a flexible neighbourhood search strategy for quay crane scheduling problems based on the framework of the tabu search (TS) algorithm. In the literature, the container workload of a ship is partitioned into a number of fixed jobs to deal with the complexity of the problem. In this paper, we instead propose flexible jobs, which are dynamically changed by TS throughout the search process to eliminate the impact of fixed jobs on the generated schedules. Alternative job sequences are investigated for the quay cranes, and a new quay crane dispatching policy is developed to generate schedules. Computational experiments conducted on problem instances available in the literature show that our algorithm is capable of generating quality schedules for quay crane handling operations in reasonable time.

5.
A multiproduct assembly system produces a family of similar products, where the assembly of each product entails an ordered set of tasks. An assembly system consists of a sequence of workstations. For each workstation, we assign a subset of the assembly tasks to be performed at the workstation and select the type of assembly equipment or resource to be used by the workstation. The assembly of each product requires a visit to each workstation in the fixed sequence. The problem of system design is to find the system that is capable of producing all the products in the desired volumes at minimum cost. The system cost includes the fixed capital costs for the assembly equipment and tools and the variable operating costs for the various workstations. We present and illustrate an optimization procedure that assigns tasks to workstations and selects assembly equipment for each workstation.

6.
The view according to which damselfly males practice two alternative reproductive tactics of access to females is critically discussed. It is widely accepted that some males ("territorial" ones) have priority as potential female partners, while others ("sneakers" or "wanderers") are incapable of retaining an individual territory. The latter have a chance of mating only by intruding briefly into the area defended by a "territorial" male when a female is present there. Thus, the tactic of a "territorial" male consists in waiting for a female in its territory and copulating with it "by agreement," whereas non-territorial males resort to forced copulations. Observation of individually marked males (48 out of 118) showed that every male could be regarded as "territorial" during a certain period and as a "wanderer" before and after it. Thus, no correlation between the mode of space use by a male (residence/mobility) and the characters of its external morphology and/or signal behavior appears to be possible in principle. According to the data obtained, a more plausible explanation is that the female chooses not the male but the best area for oviposition. In addition, it was ascertained that adherence to forced copulations cannot constitute a successful "tactic," since they rarely result in insemination, whether by "territorial" or "non-territorial" males. In other words, we are dealing not with alternative tactics (i.e., specialized adaptive mechanisms that have evolved in the species) but simply with the results of different sets of circumstances at a given moment.

7.
The problem of scheduling jobs that use wearing tools is studied. Tool wear is assumed to be stochastic, and the jobs are processed in one machining centre equipped with a limited-capacity tool magazine. The aim is to minimize the expected average completion time of the jobs by choosing their processing order and the tool management decisions wisely. All jobs are available at the beginning of the planning period. This kind of situation is met in production planning for CNC machines. Previous studies of this problem have either assumed deterministic wear for the tools or omitted wear completely. In our formulation, tool wear is stochastic and the problem becomes very hard to solve analytically. A heuristic based on genetic algorithms is therefore given for the joint problem of job scheduling and tool management. The algorithm searches for the most beneficial job sequence, while the tool management decisions are made by a removal rule that takes into account the future planned usage of the tools. The cost of each job sequence is evaluated by simulating the job processing. Empirical tests with the heuristic indicate that by taking the stochastic information into account, one can reduce the average job processing time considerably.
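As a hedged sketch of the kind of genetic algorithm the abstract describes (the paper's actual encoding, operators and removal rule are not given there), the Python skeleton below searches over job sequences and scores each candidate with a caller-supplied simulation, mirroring the simulate-to-evaluate loop above. Permutation encoding, order crossover and swap mutation are illustrative assumptions.

```python
# A hedged sketch only: GA over job sequences with simulation-based fitness.
# Permutation encoding, order crossover (OX), swap mutation and truncation
# selection are illustrative assumptions, not the paper's operators.
import random

def order_crossover(p1, p2):
    """OX: copy a random slice from p1, fill remaining slots in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    rest = [j for j in p2 if j not in p1[a:b]]
    for i in list(range(a)) + list(range(b, n)):
        child[i] = rest.pop(0)
    return child

def genetic_job_sequencing(jobs, simulate_cost, pop=30, gens=100):
    """simulate_cost(seq) -> simulated expected average completion time."""
    population = [random.sample(jobs, len(jobs)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=simulate_cost)   # fitness = simulated cost
        population = population[: pop // 2]  # keep the better half
        while len(population) < pop:
            child = order_crossover(*random.sample(population[: pop // 2], 2))
            if random.random() < 0.2:        # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            population.append(child)
    return min(population, key=simulate_cost)
```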

8.
A computer simulation model has been designed to predict the effects of changes in the work load and resources of an x-ray department. The model has been used to produce histograms of patient waiting times and to show the effect on these of introducing changes in the speed of processing films and in the numbers of cubicles and radiographers available. The predicted benefit of using a faster film processor has been confirmed in practice.

9.
While the MPP is still the most common architecture in supercomputer centers today, a simpler and cheaper machine configuration is appearing at many supercomputing sites. This alternative setup may be described simply as a collection of multiprocessors, or a distributed server system. This collection of multiprocessors is fed by a single common stream of jobs, where each job is dispatched to exactly one of the multiprocessor machines for processing. The biggest question which arises in such distributed server systems is what is a good rule for assigning jobs to host machines, i.e. what is a good task assignment policy. Many task assignment policies have been proposed, but not systematically evaluated under supercomputing workloads. In this paper we start by comparing existing task assignment policies using a trace-driven simulation under supercomputing workloads. We validate our experiments by providing analytical proofs of the performance of each of these policies. These proofs also help provide much intuition. We find that while the performance of supercomputing servers varies widely with the task assignment policy, none of the above task assignment policies perform as well as we would like. We observe that all policies proposed thus far aim to balance load among the hosts. We propose a policy which purposely unbalances load among the hosts yet, counter to intuition, is also fair in that it achieves the same expected slowdown for all jobs – thus no jobs are biased against. We evaluate this policy again using both trace-driven simulation and analysis. We find that the performance of the load-unbalancing policy is significantly better than the best of the policies which balance load.
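The abstract does not spell the policy out, so the sketch below is only an illustration of size-interval task assignment, the family of policies this line of work builds on: jobs are routed to hosts by size cutoffs, and skewing the cutoffs away from a balanced split deliberately underloads the short-job host. All numbers and names are hypothetical.

```python
# A minimal sketch (not the paper's policy): a size-interval dispatcher
# routes each job to a host by job-size cutoffs. Balanced cutoffs equalize
# load; deliberately skewed cutoffs "unbalance" load so that short jobs
# always see a lightly loaded host. Cutoff values are illustrative.

def make_dispatcher(cutoffs):
    """cutoffs: ascending size thresholds; host i serves jobs whose
    size falls at or below cutoffs[i] (and above cutoffs[i-1])."""
    def dispatch(job_size):
        for host, limit in enumerate(cutoffs):
            if job_size <= limit:
                return host
        return len(cutoffs)          # the largest jobs go to the last host
    return dispatch

# Two hosts: a balanced split vs. a skewed split that keeps the
# short-job host underloaded (hypothetical numbers).
balanced = make_dispatcher([100.0])   # roughly half the work to each host
unbalanced = make_dispatcher([30.0])  # short-job host gets far less work

for size in [5.0, 60.0, 500.0]:
    print(size, balanced(size), unbalanced(size))
```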

10.
In hybrid clouds, a technique called cloud bursting allows companies to expand their capacity to meet peak workload demands at low cost. In this work, a cost-aware job scheduling approach based on queueing theory in hybrid clouds is proposed. The job scheduling problem in the private cloud is modeled as a queueing model. A genetic algorithm is applied to achieve optimal queues for jobs, improving the utilization rate of the private cloud. Task execution times are then predicted by a back-propagation neural network, and the max–min strategy is applied to schedule tasks in the hybrid cloud according to the prediction results. Experiments show that our cost-aware job scheduling algorithm can reduce the average job waiting time and average job response time in the private cloud. In addition, our proposed job scheduling algorithm can improve the system throughput of the private cloud. It can also reduce the average task waiting time, average task response time and total costs in hybrid clouds.
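The max–min strategy the abstract names is a classic list-scheduling heuristic; the sketch below shows it in Python, driven by a matrix of predicted task execution times standing in for the paper's neural-network predictions. The numbers are hypothetical.

```python
# A minimal sketch of the classic max-min heuristic: for every unscheduled
# task, find the machine giving its earliest completion time; then schedule
# the task whose best completion time is LARGEST, so long tasks are placed
# first. exec_time is a hypothetical matrix of predicted run times.

def max_min_schedule(exec_time):
    """exec_time[t][m]: predicted run time of task t on machine m."""
    n_tasks, n_machines = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_machines           # machine available times
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        best = {}                        # task -> (completion, machine)
        for t in unscheduled:
            c, m = min((ready[m] + exec_time[t][m], m)
                       for m in range(n_machines))
            best[t] = (c, m)
        t = max(best, key=lambda k: best[k][0])   # max over the mins
        c, m = best[t]
        assignment[t] = m
        ready[m] = c
        unscheduled.remove(t)
    return assignment, max(ready)        # task-to-machine map and makespan

# Example with hypothetical predicted times (3 tasks, 2 machines):
assignment, makespan = max_min_schedule([[4.0, 6.0], [2.0, 3.0], [7.0, 5.0]])
print(assignment, makespan)
```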

11.
12.
A sequential pipeline processor (which we have named the ADC-500 preprocessor) has been developed which performs scene segmentation of the three-color image data from the ADC-500 optics one image element at a time, groups together the image elements belonging to each object in the scene, and extracts features from each object. The processing occurs at television frame rates, requiring 16.7 msec to process the entire image. This speed was instrumental in allowing the ADC-500 automated differential analyzer to perform routine 500-cell differentials. The preprocessor also contains hardware which simplifies compilation of the three color histograms. The segmentation algorithms implemented in the preprocessor are multicolor extensions of the classical monochrome density histogram threshold method. For most cell image analysis tasks, a sequential pipeline processor of this type should be more economical than, and as fast as or faster than, a parallel processor.
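For orientation, the following Python sketch shows the classical monochrome density-histogram threshold method that the preprocessor extends to three colors; the valley-picking rule and the NumPy implementation are illustrative choices, not the ADC-500 hardware logic.

```python
# A minimal sketch of monochrome density-histogram threshold segmentation:
# build an intensity histogram, pick a threshold in the valley between the
# background and object peaks, and label pixels. The simple peak/valley
# rule below is an illustrative stand-in for real valley detection.
import numpy as np

def histogram_threshold(image, n_bins=256):
    """Segment a 2-D grayscale ndarray by a histogram-valley threshold."""
    hist, edges = np.histogram(image, bins=n_bins, range=(0, 256))
    p1 = int(np.argmax(hist))              # tallest peak
    hist2 = hist.copy()
    lo, hi = max(0, p1 - 10), min(n_bins, p1 + 10)
    hist2[lo:hi] = 0                       # suppress the first peak
    p2 = int(np.argmax(hist2))             # second peak
    a, b = sorted((p1, p2))
    valley = a + int(np.argmin(hist[a:b + 1]))
    return image > edges[valley]           # boolean object mask

# A three-color extension would threshold each channel's histogram and
# combine the masks, one image element at a time as in the pipeline.
```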

13.
We investigate a difficult scheduling problem in a semiconductor manufacturing process that seeks to minimize the number of tardy jobs and the makespan under sequence-dependent setup times, release times, due dates and tool constraints. We propose a mixed integer programming (MIP) formulation which treats tardy jobs as soft constraints, so that our objective seeks the minimum weighted sum of makespan and heavily penalized tardy jobs. Although our polynomial-sized MIP formulation correctly models this scheduling problem, it is so difficult that even a feasible solution cannot be computed efficiently for small-scale problems. We then propose a technique to estimate the upper bound on the number of jobs processed by a machine, and use it to effectively reduce the size of the MIP formulation. To handle real-world large-scale scheduling problems, we propose an efficient dispatching rule that assigns the job with the earliest due date to the machine with the least recipe changeover (EDDLC), and re-optimize the solution by local search heuristics involving interchange, translocation and transposition of assigned jobs. Our computational experiments indicate that EDDLC and our proposed re-optimization techniques are very efficient and effective. In particular, our method usually gives solutions very close to the exact optimum for smaller scheduling problems, and computes good solutions for scheduling up to 200 jobs on 40 machines within 10 min.
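A minimal sketch of the EDDLC rule as the abstract states it (earliest due date first, then the machine with the least recipe changeover) is given below; the data layout, the subset test for tool constraints, and the omission of release times are simplifying assumptions.

```python
# A minimal sketch of the EDDLC idea under simplifying assumptions: release
# times are ignored, tool sets are Python sets, and field names are
# illustrative. setup_time models sequence-dependent recipe changeovers.

def eddlc_dispatch(jobs, machines, setup_time):
    """jobs: dicts with 'due', 'recipe', 'proc', 'tools' (a set);
    machines: dicts with 'recipe', 'ready', 'tools' (a set);
    setup_time(old_recipe, new_recipe) -> changeover duration."""
    schedule = []
    for job in sorted(jobs, key=lambda j: j["due"]):      # earliest due date
        eligible = [m for m in machines
                    if job["tools"] <= m["tools"]]        # tool constraint
        m = min(eligible,                                 # least changeover
                key=lambda mc: setup_time(mc["recipe"], job["recipe"]))
        start = m["ready"] + setup_time(m["recipe"], job["recipe"])
        m["ready"] = start + job["proc"]
        m["recipe"] = job["recipe"]
        schedule.append((job, m, start))
    return schedule
```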

14.
This article examines the performance effects caused by repeated part visits at the workstations of a flexible manufacturing system (FMS). Such repeated part visits to the same workstations are commonly associated with fixture changes for machining complex parts, reclamping, and remounting or reorienting them. Since each of the repeated visits to a workstation may have different processing requirements, the resulting queueing network does not have a product-form solution. We therefore develop an approximate mean value analysis model for performance evaluation of an FMS that may produce multiple part types with distinct repeated visits. We provide numerical examples and validate the accuracy of our solution algorithm against simulation. These examples show that the proposed model produces accurate throughput and utilization predictions with minimal computational effort. They also reveal that increasing the total pallet population may result in a reduction of the aggregate throughput, and that the FMS's performance can be more sensitive to the mix of pallets and part routes than to the total number of pallets. Our model will be of use, in particular, when managers wish to control individual operations (e.g., to adjust individual operation times to achieve economic savings in tool wear and breakage costs) or to investigate the performance implications of route changes due to alternate assignments of particular manufacturing tasks to certain workstations.
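The paper's approximate multi-class MVA is not reproduced in the abstract; for intuition, the sketch below implements the exact single-class MVA recursion for a closed queueing network, where a repeated visit with identical requirements is simply folded into a station's visit count, precisely the case the paper generalizes beyond. The parameter values are hypothetical.

```python
# A minimal sketch of exact single-class Mean Value Analysis for a closed
# queueing network. The paper develops an APPROXIMATE multi-class MVA for
# distinct repeated visits; this simplified recursion only illustrates the
# underlying arrival-theorem logic. visits/service are assumed data.

def mva(visits, service, n_pallets):
    """Exact MVA recursion: returns throughput and per-station queues."""
    k = len(visits)
    q = [0.0] * k                       # mean queue lengths at population 0
    x = 0.0
    for n in range(1, n_pallets + 1):
        # Residence time per visit: own service plus queue seen on arrival.
        r = [service[i] * (1.0 + q[i]) for i in range(k)]
        cycle = sum(visits[i] * r[i] for i in range(k))
        x = n / cycle                   # system throughput (jobs per time)
        q = [x * visits[i] * r[i] for i in range(k)]
    return x, q

# Example: 3 workstations, a part visits station 0 twice (the repeated
# visit folded into its visit count); numbers are hypothetical.
throughput, queues = mva(visits=[2, 1, 1], service=[1.0, 2.0, 1.5],
                         n_pallets=5)
print(throughput, queues)
```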

15.
The design of a decision support system for capacity planning in clinical laboratories is discussed. The DSS supports decisions concerning the following questions: how should the laboratory be divided into job shops (departments/sections), how should staff be assigned to workstations, and how should samples be assigned to workstations for testing. The decision support system contains modules for supporting decisions at the overall laboratory level (concerning the division of the laboratory into job shops) and at the job shop level (assignment of staff to workstations and sample scheduling). Experiments with these modules are described, showing both the functionality and the validity of the system.

16.
Reactive scheduling is a procedure followed in production systems to react to unforeseen events that disturb the normal operation of the system. In this paper, a novel operations insertion heuristic is proposed to solve the deadlock-free reactive scheduling problem in flexible job shops upon the arrival of new jobs. The heuristic utilizes rank matrices (Latin rectangles) to insert new jobs into schedules while preventing the occurrence of deadlocks or resolving them using the available buffer space (if any). Jobs with alternative processing routes through the system are also considered. The heuristic can be employed to execute two reactive scheduling approaches in a time-efficient manner: inserting the new jobs into the already existing schedule (job insertion), or rescheduling all the jobs in the system (total rescheduling). Using experimental design and analysis of variance (ANOVA), the relative performance of the two approaches is studied and analyzed to provide measures and guidelines for selecting the appropriate reactive scheduling approach for different problem settings. Three measures of performance are considered in the analysis: efficiency of the revised schedules in terms of mean flow time, resulting system nervousness, and required solution time. The results show that, on average, job insertion obtains revised schedules featuring significantly lower system nervousness and slightly higher mean flow time than total rescheduling. However, depending on the system size, the number and processing times of the new jobs, and the available flexibility in the system, a trade-off between the two approaches should sometimes be considered.

17.
We analysed the effects of patch size and isolation on vascular plants in Quercus cerris forest surrounding Rome (Italy). We randomly sampled 96 plots within 18 forest patches with homogeneous environmental variables; the patches ranged from 1.4 ha to 424.5 ha and were divided into four size classes. We performed the analyses at the patch level using linear regression. At the size class level, the analysis of species richness response to fragmentation (area effect) was performed with ANOVA, while the effect on community composition was analysed by means of PERMANOVA. We also investigated which species could be used as indicator species for each size class. Lastly, to evaluate the advantages of conserving several small patches as opposed to few large ones, we used a cumulative area approach ranking forest fragments. The correlation between species richness and patch area was positive, with a significant difference between the “large” and “small” size classes, while analysis on community composition showed that “large” versus “medium” and “large” versus “small” were significantly different. Nemoral species were recognised as indicators in the “large” class, and shrub and edge species in the “small” class. Our results indicate that 10 ha may be a suitable forest size threshold for planning and conservation.

18.
The results of the experiment showed that the leaf elongation rate of two wheat cultivars decreased under soil water stress. On rewatering after water stress, growth restoration of “Changle No.5” was faster than that of “Lumai No.5”. As water potential decreased, the osmotic adjustment ability of the leaves increased, to 0.41 MPa for “Changle No.5” and 0.33 MPa for “Lumai No.5”. At the same leaf elongation rate, the water potential and osmotic potential of “Changle No.5” decreased more than those of “Lumai No.5”. Leaf elongation rate fell to zero when water potential and osmotic potential reached –1.50 MPa and –1.70 MPa for “Changle No.5”, and –1.20 MPa and –1.30 MPa for “Lumai No.5”. The threshold turgor pressure for elongation growth in leaf cells differed, being 0.22 MPa for “Changle No.5” and 0.15 MPa for “Lumai No.5”. The difference in the gross extensible coefficient of the growing leaf was very small.

19.
Large amounts of veterinary medicines are widely used as therapeutic drugs and feed additives (growth promoters) in China, the environmental presence of which possibly poses challenges to the environment and human health. Therefore, it is important to list the veterinary medicines that are considered to be of relatively high priority in China for environmental management. In this study, a three-stage prioritization scheme was applied to veterinary medicines in China. In Stage I, exposure assessment was conducted based on usage amounts and the possibility of entering the environment. In Stage II, the ecotoxicity and human health effects of compounds having a high potential to enter the environment were assessed. In Stage III, considering both the results of Stages I and II, veterinary medicines were assigned into four priority classifications. Using the approach, 38 compounds were assigned to “H,” 7 compounds to “M,” 2 compounds to “L,” and 22 compounds to “VL.” Among the top-ranked compounds, antibiotics, endoparasiticides, and aquacultural medicines accounted for 57.9%, 28.9%, and 10.5%, respectively. Insecticides used widely in China's aquaculture need to be taken into account due to their high priority rank. This is the first study on the prioritization of veterinary pharmaceuticals in China.

20.
The Network RamDisk: Using remote memory on heterogeneous NOWs
Efficient data storage, a major concern in the modern computer industry, is mostly provided today by traditional magnetic disks. However, the cost of a disk transfer (measured in processor cycles) continues to increase with time, making disk accesses increasingly expensive. In this paper we describe the design, implementation and evaluation of a Network RamDisk device that uses the main memory of remote workstations as a faster-than-disk storage device. In our study we propose various reliability policies, making the device tolerant to single workstation crashes. We show that the Network RamDisk is portable and flexible, and can operate under any of the existing Unix file systems. The Network RamDisk has been implemented on both the Linux and the Digital Unix operating systems, as a block device driver without any modifications to the kernel code. Using several real applications and benchmarks, we measure the performance of the Network RamDisk over an Ethernet and an ATM network, and find it to be usually four to eight times better than the magnetic disk. In one benchmark, our system was two orders of magnitude faster than the disk. We believe that a Network RamDisk can be efficiently used to provide reliable low-latency access to files that would otherwise be stored on magnetic disks.
