Similar Documents (20 results)
1.
Flexible manufacturing systems (FMSs) for two-stage production may possess a variety of operating flexibilities in the form of tooling capabilities for the machines and alternative routings for each operation. In this paper, we compare the throughput performance of several flexible flow shop and job shop designs. We consider two-stage assembly flow shops with m parallel machines in stage 1 and a single assembly facility in stage 2. Every upstream operation can be processed by any one of the machines in stage 1 prior to the assembly stage. We also study a similar design where every stage 1 operation is processed by a predetermined machine. For both designs, we present heuristic algorithms with good worst-case error bounds and show that the average performance of these algorithms is near optimal. The algorithms presented are used to compare the performance of the two designs with each other and with other related flexible flow shop designs. It is shown, both analytically and experimentally, that the mode of flexibility possessed by a design has implications for the throughput performance of the production system.

2.
The objective of this review paper is to describe the development and application of a suite of more than 40 computerized dairy farm decision support tools hosted at the University of Wisconsin-Madison (UW) Dairy Management website http://DairyMGT.info. These data-driven decision support tools are aimed at helping dairy farmers improve their decision-making, environmental stewardship and economic performance. Dairy farm systems are highly dynamic: changing market conditions and prices, evolving policies and environmental restrictions, together with increasingly variable climate conditions, determine performance. Dairy farm systems are also highly integrated, with heavily interrelated components such as the dairy herd, soils, crops, weather and management. Under these premises, it is critical to evaluate a dairy farm following a dynamic, integrated systems approach. Such an approach depends on meaningful data records, which are increasingly available and should be used within decision support tools for optimal decision-making and economic performance. The decision support tools on the UW Dairy Management website (http://DairyMGT.info) have been developed by combining and adapting multiple methods together with empirical techniques, always with the primary goal that these tools be: (1) highly user-friendly, (2) built on the latest software and computer technologies, (3) farm- and user-specific, (4) grounded in the best scientific information available, (5) relevant over time and (6) able to provide fast, concrete and simple answers to complex farmers' questions. DairyMGT.info is a translational, innovative research website covering various areas of dairy farm management, including nutrition, reproduction, calf and heifer management, replacement, price risk and environment. This paper discusses the development and application of 20 selected DairyMGT.info decision support tools.

3.
This paper presents Scalanytics, a declarative platform that supports high-performance application-layer analysis of network traffic. Scalanytics uses (1) stateful network packet processing techniques for extracting application-layer data from network packets, (2) a declarative rule-based language called Analog for compactly specifying analysis pipelines from reusable modules, and (3) a task-stealing architecture for processing network packets at high throughput within these pipelines, leveraging multi-core processing capabilities in a load-balanced manner without the need for explicit performance profiling. In a cluster of machines, Scalanytics further improves throughput through the use of a consistent-hashing-based load partitioning strategy. Our evaluation on a 16-core machine demonstrates that Scalanytics achieves up to 11.4× improvement in throughput compared with the best uniprocessor implementation. Moreover, Scalanytics outperforms the Bro intrusion detection system by an order of magnitude when used for analyzing SMTP traffic. We further observed increased throughput when running Scalanytics pipelines across multiple machines.
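The cluster-level load partitioning named here is consistent hashing. As an illustration only (the abstract does not show Scalanytics' implementation, and all names below are hypothetical), a minimal consistent-hash ring that maps packet flows to machines might look like this:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring; each node owns several virtual points."""
    def __init__(self, nodes, vnodes=64):
        self._ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, flow_key):
        """Map a packet flow (e.g. a 5-tuple string) to a machine."""
        idx = bisect.bisect(self._keys, self._hash(flow_key)) % len(self._keys)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("10.0.0.1:5353->10.0.0.2:25"))
```

Because each node owns many virtual points on the ring, adding or removing a machine remaps only a small fraction of flows.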

4.
This paper presents a mathematical programming model to help select equipment for a flexible manufacturing system, i.e., the selection of the types and numbers of CNC machines, washing stations, load/unload stations, transportation vehicles, and pallets. The objective is to minimize equipment costs and work-in-process inventory cost while fulfilling production requirements for an average period. Queueing aspects and part-flow interactions are captured by a Jacksonian-type closed queueing network model used to evaluate the system's performance. Since the decision problem related to our model can be shown to be NP-complete, the proposed solution procedure is based on implicit enumeration. Four bounds are provided: two lower and two upper. A tight lower bound is obtained by linearizing the model through the application of asymptotic bound analysis, which also allows the calculation of a lower bound for the number of pallets in the system. The first upper bound is given by the best feasible solution, and the second is based on the anti-starshaped form of the throughput function.
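The abstract relies on a closed queueing network to evaluate throughput. A standard way to compute the throughput of a single-class product-form closed network is exact mean value analysis (MVA); the sketch below is generic textbook MVA, not the paper's model, and the demand values are made up:

```python
def mva_throughput(demands, n_pallets):
    """Exact MVA for a single-class closed product-form network.
    demands[i] = service demand of station i (visit ratio x service time).
    Returns system throughput with n_pallets pallets circulating."""
    q = [0.0] * len(demands)          # mean queue length at each station
    x = 0.0
    for n in range(1, n_pallets + 1):
        r = [d * (1.0 + qi) for d, qi in zip(demands, q)]  # residence times
        x = n / sum(r)                 # throughput with n pallets
        q = [x * ri for ri in r]       # Little's law, per station
    return x

# e.g. three machine groups and a transporter, 8 pallets
print(mva_throughput([0.8, 0.5, 0.3, 0.2], 8))
```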

5.
Semiconductor wafer fabrication lines are characterized by re-entrant product flow, long production lead times, a large variety of production processes, and large capital investment. These distinctive characteristics make flow control in the fab very complicated. Throughput rate and lead time are among the most important performance measures. The throughput rate is usually determined by a bottleneck resource, and the lead time depends on the machine utilization level and the amount of variability in the system. Because they handle material efficiently and generate fewer particles, automated material handling systems such as automatic guided vehicles (AGVs), overhead hoist transporters (OHTs), and overhead shuttles (OHSs) are widely used in wafer fabrication lines (wafer fabs) in place of human operators. Although the material handling system itself is seldom a production bottleneck in a fab, it must effectively support the bottleneck machines to maximize throughput and reduce production lead time. This paper presents a vehicle dispatching procedure based on the theory of constraints, in which vehicle dispatching decisions are made to utilize the bottleneck machines at the maximum level. Simulation experiments compare the proposed vehicle dispatching procedure with existing ones under different levels of machine utilization, vehicle utilization, and local buffer capacity.
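As a toy illustration of a theory-of-constraints dispatching rule (the paper's actual procedure is more elaborate, and every name here is hypothetical), a vehicle could always serve the transport request that keeps the emptiest bottleneck buffer fed:

```python
def dispatch(requests, bottlenecks, buffer_level):
    """Pick the transport request that best keeps bottleneck machines fed.
    requests: list of (job, destination_machine) transport requests.
    bottlenecks: set of bottleneck machine ids.
    buffer_level: dict machine_id -> current input-buffer occupancy."""
    def priority(req):
        _, dest = req
        # serve bottlenecks first; the emptiest bottleneck buffer is most urgent
        return (dest not in bottlenecks, buffer_level.get(dest, 0))
    return min(requests, key=priority) if requests else None

reqs = [("lot1", "etcher"), ("lot2", "litho"), ("lot3", "cleaner")]
print(dispatch(reqs, {"litho"}, {"litho": 1, "etcher": 4, "cleaner": 2}))
```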

6.
Our experience of fisheries management is one of regular disappointments. As well as occasional spectacular collapses, fisheries have often had to make severe and painful adjustments in the face of overexploitation and overinvestment. The failures of fisheries management may result from failing to consider the management of fisheries as a whole system. A management-oriented paradigm (MOP) crosses the boundaries of traditional fisheries scientific, economic and policy research. It involves formulating management objectives that are measurable, specifying sets of decision rules, and specifying the data and methods to be used, all in such a way that the properties of the resultant system can be prospectively evaluated. The prospective evaluation of a management system involves the use of computer simulations and the development of performance measures that demonstrate the likely success of a management system in meeting its objectives.

7.
The capacity of a flexible manufacturing system (FMS) is optimized with the objective of maximizing the system's throughput subject to a budget constraint. Decisions concern the capacity of machine groups (sets of identical machines), the transportation system and, when it has a significant cost impact, the number of pallets in the system. Throughput is evaluated either by an open finite queueing network or, if the number of pallets is included in the decision process, by a closed queueing network. In both cases the solution procedure is based on the marginal allocation scheme.
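Marginal allocation can be sketched as a greedy loop: repeatedly buy the capacity unit with the best throughput gain per unit cost until the budget runs out. The following is a generic sketch under that assumption, with a stand-in throughput evaluator rather than the paper's queueing network:

```python
def marginal_allocation(options, throughput, budget):
    """Greedy marginal allocation: repeatedly add the capacity unit with the
    best throughput gain per unit cost until the budget is exhausted.
    options: dict resource -> unit cost (machine in a group, vehicle, pallet).
    throughput(config): evaluator, e.g. a queueing-network model."""
    config = {r: 1 for r in options}            # start with one of each
    spent = sum(options.values())
    while True:
        base = throughput(config)
        best, best_ratio = None, 0.0
        for r, cost in options.items():
            if spent + cost > budget:
                continue
            config[r] += 1
            gain = throughput(config) - base
            config[r] -= 1
            if gain / cost > best_ratio:
                best, best_ratio = r, gain / cost
        if best is None:
            return config
        config[best] += 1
        spent += options[best]

opts = {"machines": 10.0, "vehicles": 4.0, "pallets": 1.0}
toy = lambda c: sum(n ** 0.5 for n in c.values())   # stand-in evaluator
print(marginal_allocation(opts, toy, budget=30.0))
```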

8.
In this paper, we compare the operational performance of two machine-sharing configurations: total flexibility and chaining. We show that chaining captures most of the benefits of total flexibility while limiting the number of part types processed on any individual machine to only two. We examine the relative desirability of the two configurations under varying buffer sizes, loading conditions, numbers of machines, and setup times, as well as for different control policies. For nonzero setup times, we show that chained configurations can outperform fully flexible ones. This is particularly the case when either the number of machines or the length of setup times is high. We also find that the effect of system size on performance diminishes with the number of machines, meaning that multiple smaller chains can perform almost as well as a single long one. Our results are consistent with the recent findings of Jordan and Graves (1995), who examined the economic benefits of chaining relative to full flexibility.

9.
Improving analytical throughput is the focus of many quantitative workflows being developed for early drug discovery. For drug candidate screening, it is common practice to use ultra-high performance liquid chromatography (U-HPLC) coupled with triple quadrupole mass spectrometry. This approach certainly results in short analytical run times; however, in assessing the true throughput, all aspects of the workflow need to be considered, including instrument optimization and the necessity to re-run samples when information is missed. Here we describe a high-throughput metabolic stability assay with a simplified instrument set-up that significantly improves overall assay efficiency. In addition, because the data are acquired in a non-biased manner, high information content on both the parent compound and its metabolites is gathered at the same time, facilitating the decision of which compounds to advance through the drug discovery pipeline.

10.
With the proliferation of quad- and multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services, and their service-oriented business successor now termed cloud computing, can deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consuming high-performance grid services. Rapid, exponential growth in dataset sizes is a constant concern in the medical and pharmaceutical industries due to the continual discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment. This research was undertaken to compare our previous results with those from running the same test dataset on a virtual Grid platform built from virtual machines. The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Because of the dynamic nature of distributed systems (as discussed in our previous work), all interactions were transparently kept local within a team. This paper is a proof of concept comparing an experimental mini-Grid test bed with the same platform running on local virtual machines in a local test cluster; this gives every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. This paper extends our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.

11.
In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistently providing the best possible access across networks with widely varying characteristics requires seamless mobility management, so the vertical handover process poses important technical challenges. Handover decisions are triggered to maintain continuous connectivity for mobile terminals; however, poor network selection and overload conditions in the chosen network can lead to handover failure. In order to maintain the required Quality of Service during the handover process, decision algorithms should incorporate intelligent techniques. In this paper, a new and efficient vertical handover mechanism is implemented using a dynamic programming method from the operations research discipline. This dynamic programming approach, integrated with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), provides the mobile user with the best handover decisions. Moreover, the proposed handover algorithm incorporates into the network server a deterministic approach that divides the network into zones in order to derive an optimal solution. The study revealed that this method achieves better performance and QoS support for users and greatly reduces handover failures compared with the traditional TOPSIS method. The decision reached at the zone gateway using this operations research method (a dynamic programming knapsack approach combined with TOPSIS) yields markedly better results in terms of network performance measures such as throughput and delay.
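TOPSIS itself is a well-defined procedure: normalize the decision matrix, weight it, locate the ideal and anti-ideal alternatives, and rank by relative closeness. A compact generic implementation (the criteria, weights, and values below are illustrative, not from the paper):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with classic TOPSIS.
    matrix[i][j]: score of network i on criterion j (e.g. bandwidth, delay).
    weights[j]: criterion weight. benefit[j]: True if larger is better."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))   # closeness to the ideal
    return scores

# candidate networks scored on (throughput Mb/s, delay ms, cost)
nets = [[54, 40, 3], [11, 20, 1], [100, 70, 5]]
print(topsis(nets, [0.5, 0.3, 0.2], [True, False, False]))
```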

12.
This paper introduces a generic decision-making framework for assigning resources of a manufacturing system to production tasks. Resources are broadly defined production units, such as machines, human operators, or material handling vehicles; tasks are activities performed by resources. In the specific context of an FMS, resources correspond to individual machines and tasks to operations to be performed on parts. The framework assumes a hierarchical structure of the system and calls for the execution of four consecutive steps to decide on the assignment of a resource to a task: 1) establishment of decision-making criteria, 2) formation of alternative assignments, 3) estimation of the consequences of the assignments, and 4) selection of the best alternative assignment. This framework has been applied to an existing FMS as an operational policy that decides which task will be executed on which resource of the FMS. Simulation runs provide initial results on the application of this policy, showing that it provides flexibility in terms of system performance and computational effort.
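The four steps can be read as a generic scoring loop. The sketch below is only a schematic rendering of the framework, with every name hypothetical:

```python
def assign(resources, tasks, can_do, estimate, weights):
    """Generic four-step assignment: (1) criteria arrive as `weights`,
    (2) form feasible (resource, task) alternatives, (3) estimate each
    alternative's consequences per criterion, (4) pick the best by
    weighted score. All names are illustrative, not the paper's API."""
    alternatives = [(r, t) for r in resources for t in tasks if can_do(r, t)]
    def score(alt):
        consequences = estimate(alt)            # e.g. {"util": 0.8, ...}
        return sum(weights[c] * consequences[c] for c in weights)
    return max(alternatives, key=score, default=None)

pick = assign(["m1", "m2"], ["op7"],
              can_do=lambda r, t: True,
              estimate=lambda alt: {"util": 0.9 if alt[0] == "m2" else 0.5},
              weights={"util": 1.0})
print(pick)   # -> ('m2', 'op7')
```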

13.
VPM tokens: virtual machine-aware power budgeting in datacenters
Power consumption and cooling overheads are becoming increasingly significant for enterprise datacenters, affecting overall costs and the ability to extend resource capacities. To help mitigate these issues, active power management technologies are being deployed aggressively, including power budgeting, which enables improved power provisioning and can address critical periods when power delivery or cooling capabilities are temporarily reduced. Given the use of virtualization to encapsulate application components into virtual machines (VMs), however, such power management capabilities must address the interplay between budgeting physical resources and the performance of the virtual machines used to run these applications. This paper proposes a set of management components and abstractions for use by software power budgeting policies. The key idea is to manage power from a VM-centric point of view, where the goal is to be aware of global utility tradeoffs between different virtual machines (and their applications) when maintaining power constraints for the physical hardware on which they run. Our approach to VM-aware power budgeting uses multiple distributed managers integrated into the VirtualPower Management (VPM) framework whose actions are coordinated via a new abstraction, termed VPM tokens. An implementation with the Xen hypervisor illustrates technical benefits of VPM tokens that include up to 43% improvements in global utility, highlighting the ability to dynamically improve cluster performance while still meeting power budgets. We also demonstrate how VirtualPower based budgeting technologies can be leveraged to improve datacenter efficiency in the context of cooling infrastructure management.
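The abstract does not spell out the budgeting algorithm itself. As a hedged illustration of token-style proportional budgeting, a cluster power budget could be split among VMs by utility weight, with watts exceeding a VM's cap redistributed to the others (all names and units below are illustrative, not the paper's VPM implementation):

```python
def allocate_power(budget_w, vms):
    """Split a cluster power budget among VMs in proportion to utility
    weights, respecting per-VM caps; leftover watts are redistributed.
    vms: dict name -> (weight, max_w)."""
    alloc = {v: 0.0 for v in vms}
    active = set(vms)
    remaining = budget_w
    while active and remaining > 1e-9:
        total_w = sum(vms[v][0] for v in active)
        spill = 0.0
        for v in list(active):
            share = remaining * vms[v][0] / total_w
            take = min(share, vms[v][1] - alloc[v])   # respect the cap
            alloc[v] += take
            spill += share - take
            if alloc[v] >= vms[v][1] - 1e-9:
                active.discard(v)                     # capped out
        remaining = spill                             # redistribute leftovers
    return alloc

print(allocate_power(300, {"web": (2, 120), "db": (1, 200), "batch": (1, 80)}))
```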

14.
Natural resource management (NRM) is becoming increasingly important at local, regional, national and global scales because of an increasing human population and increasing per capita use of resources and space. Conflicts are intensifying between different interest groups. Production and conservation aspects are particularly debated because conservation often conflicts with economic and social sustainability. There is public demand for objective, decision-based NRM, but limitations are pervasive owing to spatial and temporal complexity and the interdisciplinary nature of the problem. This special issue explores the use of spatial data and models to overcome some limitations of NRM decision making. The papers in this issue show modern approaches to natural resource management with a particular focus on spatial data collection, analysis and the development of spatial indicators. The issue presents a balanced mix of review and research papers that give examples of how to find or improve the spatial information base for evidence-based decision making. This overview argues that understanding complex spatial patterns and processes, and developing spatial indicators, is an essential aspect of evidence-based NRM. Where spatial and temporal patterns are complex, ecological evidence from field data or experiments may have limited value for NRM, and observational study designs become more appropriate for understanding complex spatial patterns and processes. Data quality should be documented as a combination of accuracy and spatio-temporal representativeness in order to be useful in the NRM decision process.

15.
MOSIX is a cluster management system that supports preemptive process migration. This paper presents the MOSIX Direct File System Access (DFSA) provision, which can improve the performance of cluster file systems by allowing a migrated process to access files directly in its current location. This capability, when combined with an appropriate file system, can substantially increase I/O performance and reduce network congestion by migrating an I/O-intensive process to a file server, rather than the traditional approach of bringing the file's data to the process. DFSA is suitable for clusters that manage a pool of shared disks among multiple machines. With DFSA, it is possible to migrate parallel processes from a client node to file servers for parallel access to different files. Any consistent file system can be adjusted to work with DFSA. To test its performance, we developed the MOSIX File System (MFS), which allows consistent parallel operations on different files. The paper describes DFSA and presents the performance of MFS with and without DFSA.

16.
We report the results of an evaluation project on three Beowulf-type clusters. The purpose of this study was to assess both the performance of the clusters and the availability and quality of software for cluster management and resource management. The latter goal could hardly be achieved because, at the time this project was undertaken, much of the management software was either very immature or not yet available. It was, however, possible to assess cluster performance, both from the point of view of single-program execution and with respect to throughput, by loading the systems according to a predefined schedule via the available batch systems. To this end, a set of application programs ranging from astronomy to quantum chemistry, together with a synthetic benchmark, was employed. From the results we wanted to determine the viability of using cluster systems routinely in a multi-user environment with maintenance cost and effort comparable to those of an integrated parallel machine.

17.
The private-sector decision-making situations which LCA addresses must also eventually take the economic consequences of alternative products or product designs into account. However, neither the internal nor the external economic aspects of these decisions are within the scope of developed LCA methodology, nor are they properly addressed by existing LCA tools. This traditional separation of life cycle environmental assessment from economic analysis has limited the influence and relevance of LCA for decision-making, and left uncharacterized the important relationships and trade-offs between the economic and life cycle environmental performance of alternative product design decision scenarios. Still, standard methods of LCA can be, and have been, tightly, logically, and practically integrated with standard methods for cost accounting, life cycle cost analysis, and scenario-based economic risk modeling. The result is the ability to take both economic and environmental performance, and the tradeoff relationships between them, into account in product/process design decision making.

18.
This paper describes the design and implementation of a parallel programming environment called Distributed Shared Array (DSA), which provides a shared global array abstraction across different machines connected by a network. In DSA, users can define and use global arrays that can be accessed uniformly from any machine in the network. Array allocation, replication, and migration are managed through explicit calls for array manipulation: defining array regions, reading and writing array regions, synchronization, and control of replication and migration. DSA is integrated with Grid (Globus) services. The paper also describes the use of our model for gene cluster analysis, multiple alignment, and molecular dynamics simulation; in these applications, global arrays store the distance matrix, the alignment matrix, and atom coordinates, respectively. Large array areas that cannot be stored in the memory of individual machines are made available by DSA. DSA achieved scalable performance compared to that of conventional parallel programs written in MPI.
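For a rough feel of the interface described (define regions, read/write regions, synchronize), here is a toy single-process stand-in; the method names are illustrative and not the actual DSA API, and a real implementation would store blocks on remote nodes and replicate or migrate them on demand:

```python
import numpy as np

class DSArray:
    """Toy single-process stand-in for a DSA-style global array."""
    def __init__(self, shape):
        self._data = np.zeros(shape)

    def read(self, region):           # region: tuple of slices
        return self._data[region].copy()

    def write(self, region, values):
        self._data[region] = values

    def sync(self):                   # barrier + flush in a real system
        pass

dist = DSArray((1000, 1000))          # e.g. a distance matrix
dist.write((slice(0, 2), slice(0, 2)), np.eye(2))
print(dist.read((slice(0, 2), slice(0, 2))))
```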

19.
The high therapeutic and financial value offered by polyclonal antibodies and their fragments has prompted extensive commercialization for the treatment of a wide range of acute clinical indications. Large-scale manufacture typically includes antibody-specific chromatography steps that employ custom-made affinity matrices to separate product-specific IgG from the remainder of the contaminating antibody repertoire. The high cost of such matrices necessitates efficient process design in order to maximize their economic potential. Techniques that identify the most suitable operating conditions for achieving desired levels of manufacturing performance are therefore of significant utility. This paper describes the development of a computer model that incorporates the effects of capacity changes over consecutive chromatographic operational cycles in order to identify combinations of protein load and loading flowrate that satisfy preset constraints of product yield and throughput. The method is illustrated by application to the manufacture of DigiFab, an FDA-approved polyclonal antibody fragment purified from ovine serum, which is used to treat digoxin toxicity (Protherics U.K. Limited). The model was populated with data obtained from scale-down experimental studies of the commercial-scale affinity purification step, which correlated measured changes in matrix capacity with the total protein load and the number of resin re-uses. To enable a tradeoff between yield and throughput, output values were integrated into a single metric by multi-attribute decision-making techniques to identify the most suitable flowrate and feed concentration for achieving target levels of DigiFab yield and throughput. Results indicated that reducing the flowrate by 70% (from the current level) and using a protein load at the midpoint of the range currently employed at production scale (approximately 200-500 g/L) would provide the most satisfactory tradeoff between yield and throughput.
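The multi-attribute step can be illustrated generically: min-max-normalize each attribute and fold them into one weighted score. The ranges and weights below are placeholders, not the paper's values:

```python
def combined_score(yield_pct, throughput, w_yield=0.5,
                   yield_range=(60, 100), thr_range=(0, 50)):
    """Fold yield and throughput into one 0-1 metric via weighted,
    min-max-normalized attributes (a generic multi-attribute scheme)."""
    ny = (yield_pct - yield_range[0]) / (yield_range[1] - yield_range[0])
    nt = (throughput - thr_range[0]) / (thr_range[1] - thr_range[0])
    return w_yield * ny + (1 - w_yield) * nt

# compare two candidate operating points (protein load, flowrate choices)
print(combined_score(92, 18), combined_score(85, 30))
```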

20.
In this article we consider the problem of determining the minimum-cost configuration (number of machines and pallets) for a flexible manufacturing system under the constraint of meeting a prespecified throughput, while simultaneously allocating the total workload among the machines (or groups of machines). Our procedure allows consideration of upper and lower bounds on the workload at each machine group. These bounds arise as a consequence of precedence constraints among the various operations and/or limitations on the number or combinations of operations that can be assigned to a machine because of constraints on tool slots or the space required to store assembly components. Earlier work on problems of this nature assumes that the workload allocation is given. For the single-machine-type problem we develop an efficient implicit enumeration procedure that uses fathoming rules to eliminate dominated configurations, and we present computational results. We discuss how this procedure can be used as a building block in solving the problem with multiple machine types.
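Implicit enumeration with fathoming can be sketched for the single-machine-type case: walk the configuration space in increasing cost order and prune any branch whose cost already matches or exceeds the incumbent. This is a generic sketch with a stand-in throughput model, not the authors' procedure:

```python
def min_cost_config(machine_cost, pallet_cost, throughput, target,
                    max_m=10, max_p=30):
    """Implicit enumeration over (machines, pallets): for each machine
    count, find the fewest pallets meeting the target throughput; fathom
    configurations that cannot beat the incumbent. `throughput(m, p)` is
    the evaluator (e.g. a closed queueing network model)."""
    best_cost, best = float("inf"), None
    for m in range(1, max_m + 1):
        if m * machine_cost >= best_cost:      # fathom: machines alone too costly
            break
        for p in range(1, max_p + 1):
            cost = m * machine_cost + p * pallet_cost
            if cost >= best_cost:              # fathom: dominated by incumbent
                break
            if throughput(m, p) >= target:
                best_cost, best = cost, (m, p)
                break                          # more pallets only add cost
    return best

toy = lambda m, p: m * p / (p + m)             # stand-in throughput model
print(min_cost_config(50.0, 2.0, toy, target=3.5))
```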
