Similar Articles
20 similar articles found (search time: 31 ms)
1.
Protein NMR peak assignment refers to the process of assigning a group of "spin systems" obtained experimentally to a protein sequence of amino acids. The automation of this process is still an unsolved and challenging problem in NMR protein structure determination. Recently, protein NMR peak assignment has been formulated as an interval scheduling problem (ISP): a protein sequence P of amino acids is viewed as a discrete time interval I (the amino acids of P correspond one-to-one to the time units of I), each subset S of spin systems known to originate from consecutive amino acids of P is viewed as a "job" j(S), the preference of assigning S to a subsequence P′ of consecutive amino acids of P is viewed as the profit of executing job j(S) in the subinterval of I corresponding to P′, and the goal is to maximize the total profit of executing the jobs (on a single machine) during I. The interval scheduling problem is MAX SNP-hard in general, but in practice each job j(S) usually requires at most 10 consecutive time units, and the jobs that require one or two consecutive time units are typically the most difficult to assign/schedule. To solve these most difficult assignments, we present an efficient 13/7-approximation algorithm for the special case of the interval scheduling problem in which each job takes one or two consecutive time units. Combining this algorithm with a greedy filtering strategy for handling long jobs (i.e., jobs that need more than two consecutive time units), we obtain a new efficient heuristic for protein NMR peak assignment. Our experimental study shows that the new heuristic produces the best peak assignment in most cases, compared with the NMR peak assignment algorithms in the recent literature. The algorithm is also the first approximation algorithm for a nontrivial case of the well-known interval scheduling problem that breaks the ratio-2 barrier.
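The interval-scheduling formulation above can be made concrete with a tiny exact solver. This is a brute-force sketch for toy instances, not the paper's 13/7-approximation algorithm, and the job/profit encoding is an assumption for illustration:

```python
def best_assignment(n, jobs):
    """Exact brute-force solver for a tiny interval scheduling instance.

    n    : number of time units (amino-acid positions), numbered 1..n
    jobs : list of (length, {start: profit}) where profit is earned by
           placing the job at time units start..start+length-1
    Returns the maximum total profit over all non-overlapping placements
    (jobs may also be left unassigned).
    """
    best = 0

    def rec(idx, used, total):
        nonlocal best
        if idx == len(jobs):
            best = max(best, total)
            return
        rec(idx + 1, used, total)  # branch 1: leave this job unassigned
        length, profits = jobs[idx]
        for start, p in profits.items():
            span = set(range(start, start + length))
            # branch 2: place the job if it fits inside I and is conflict-free
            if span <= set(range(1, n + 1)) and not (span & used):
                rec(idx + 1, used | span, total + p)

    rec(0, frozenset(), 0)
    return best
```

For example, with three time units, a length-1 job, and a length-2 job, the solver enumerates all conflict-free placements and returns the most profitable combination. The exponential search is only feasible for toy instances, which is precisely why approximation algorithms matter here.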

2.
We investigate a difficult scheduling problem in a semiconductor manufacturing process that seeks to minimize the number of tardy jobs and the makespan under sequence-dependent setup times, release times, due dates, and tool constraints. We propose a mixed integer programming (MIP) formulation that treats tardy jobs as soft constraints, so that the objective is the minimum weighted sum of the makespan and heavily penalized tardy jobs. Although our polynomial-sized MIP formulation correctly models this scheduling problem, it is so difficult that even a feasible solution cannot be computed efficiently for small-scale problems. We therefore propose a technique to estimate an upper bound on the number of jobs processed by a machine, and use it to effectively reduce the size of the MIP formulation. To handle real-world large-scale scheduling problems, we propose an efficient dispatching rule that assigns the job with the earliest due date to the machine with the least recipe changeover (EDDLC), and we re-optimize the solution by local search heuristics involving interchange, translocation, and transposition between assigned jobs. Our computational experiments indicate that EDDLC and our proposed re-optimization techniques are very efficient and effective. In particular, our method usually gives solutions very close to the exact optimum for smaller scheduling problems, and computes good solutions for scheduling up to 200 jobs on 40 machines within 10 minutes.
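A single step of an EDDLC-style rule might be sketched as below. The field names (`due`, `recipe`, `proc_time`, `free_at`) and the unit setup-time default are assumptions for illustration, not the paper's implementation:

```python
def eddlc_dispatch(jobs, machines):
    """One dispatch step of an EDDLC-style rule (sketch): pick the waiting
    job with the earliest due date, then assign it to the machine that is
    already set up for the job's recipe (least changeover), breaking ties
    by earliest availability."""
    job = min(jobs, key=lambda j: j["due"])
    # sort key: (recipe mismatch?, time the machine becomes free)
    machine = min(machines,
                  key=lambda m: (m["recipe"] != job["recipe"], m["free_at"]))
    changeover = 0 if machine["recipe"] == job["recipe"] else job.get("setup", 1)
    machine["free_at"] += job["proc_time"] + changeover
    machine["recipe"] = job["recipe"]
    jobs.remove(job)
    return job, machine
```

Repeated until the job list is empty, this yields the initial schedule that the paper's local-search moves (interchange, translocation, transposition) would then try to improve.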

3.
Reactive scheduling is a procedure followed in production systems to react to unforeseen events that disturb the normal operation of the system. In this paper, a novel operations insertion heuristic is proposed to solve the deadlock-free reactive scheduling problem in flexible job shops upon the arrival of new jobs. The heuristic utilizes rank matrices (Latin rectangles) to insert new jobs into schedules while preventing the occurrence of deadlocks or resolving them using the available buffer space (if any). Jobs with alternative processing routes through the system are also considered. The heuristic can be employed to execute two reactive scheduling approaches in a time-efficient manner: inserting the new jobs into the already existing schedule (job insertion) or rescheduling all the jobs in the system (total rescheduling). Using experimental design and analysis of variance (ANOVA), the relative performance of the two approaches is studied and analyzed to provide measures and guidelines for selecting the appropriate reactive scheduling approach for different problem settings. Three measures of performance are considered in the analysis: efficiency of the revised schedules in terms of the mean flow time, the resulting system nervousness, and the required solution time. The results show that, on average, job insertion obtains revised schedules featuring significantly lower system nervousness and slightly higher mean flow time than total rescheduling. However, depending on the system size, the number and processing times of the new jobs, and the available flexibility in the system, a trade-off between the two approaches should sometimes be considered.

4.
The rehabilitation needs and goals of older people differ in many respects from those of the young. In younger individuals a crippling condition may affect an otherwise healthy body, while in an older person it may be superimposed on other pre-existing degenerative diseases. Thus, in older patients the restorative or rehabilitative phase rarely can be separated from the phase of definitive medical treatment. The primary goal of rehabilitation of younger individuals is usually vocational. In the older group this goal or objective is, by and large, secondary. This does not minimize, however, the value of medical and social services in the rehabilitation of older persons. The simple ability to care for his own personal needs can do much to help the elderly disabled patient regain his dignity and self-respect and remove his fears of becoming a burden on his family or society.

5.
Motivated by a layout design problem in the electronics industry, we study in this article the allocation of buffer space among a set of cells. Each cell processes a given part family and has its own revenue-cost structure. The objective of the optimal allocation is to maximize the net profit function (total production profits minus total buffer allocation costs). According to the flow pattern of jobs, the cells are categorized into two types. A type 1 cell is modeled as a Jackson network; a type 2 cell is modeled as an ordered-entry system with heterogeneous servers. Both models have finite waiting room, due to the buffer capacity allocated to the cells. We show that under quite general conditions, the production rate of each cell of either type is an increasing and concave function of its buffer allocation. Exploiting this property, a marginal allocation scheme efficiently solves the optimal buffer allocation problem under increasing concave production profits and convex buffer space costs.
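A marginal allocation scheme of the kind described can be sketched as a greedy loop. This is a minimal sketch assuming increasing concave profit functions (as the abstract requires) and, for simplicity, linear per-slot buffer costs:

```python
def marginal_allocation(profit_fns, costs, total_buffers):
    """Greedy marginal allocation sketch.

    profit_fns[i](k) : cell i's production profit with k buffer slots
                       (assumed increasing and concave in k)
    costs[i]         : cost of one additional buffer slot in cell i
    Each of the total_buffers slots goes to the cell with the largest net
    marginal gain; concavity of the profits makes this greedy rule optimal.
    """
    alloc = [0] * len(profit_fns)
    for _ in range(total_buffers):
        gains = [profit_fns[i](alloc[i] + 1) - profit_fns[i](alloc[i]) - costs[i]
                 for i in range(len(profit_fns))]
        i = max(range(len(gains)), key=gains.__getitem__)
        if gains[i] <= 0:   # no cell profits from another slot: stop early
            break
        alloc[i] += 1
    return alloc
```

Because each profit function is concave, marginal gains only shrink as a cell accumulates buffers, so the slot-by-slot greedy choice never needs to be revisited.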

6.
The problem of scheduling jobs using wearing tools is studied. Tool wearing is assumed to be stochastic, and the jobs are processed in one machining centre provided with a limited-capacity tool magazine. The aim is to minimize the expected average completion time of the jobs by choosing their processing order and tool management decisions wisely. All jobs are available at the beginning of the planning period. This kind of situation is met in production planning of CNC machines. Previous studies of this problem have either assumed deterministic wearing for the tools or omitted the wearing completely. In our formulation, tool wearing is stochastic and the problem becomes very hard to solve analytically. A heuristic based on genetic algorithms is therefore given for the joint problem of job scheduling and tool management. The algorithm searches for the most beneficial job sequence, while the tool management decisions are made by a removal rule taking into account the future planned usage of the tools. The cost of each job sequence is evaluated by simulating the job processing. Empirical tests with the heuristic indicate that by taking the stochastic information into account, one can reduce the average job processing time considerably.

7.
Job scheduling is required in many real-world situations where entities, machines, or workers have to execute certain jobs as soon as possible. Frequently, several workers or machines are unavailable during some time periods due to various circumstances. This paper deals with such situations and uses stochastic scheduling models to study these problems. When scheduling models are used in practice, they have to take into account that some machines may not be working. This temporary lack of machine availability is known as a breakdown; breakdowns happen randomly at any time, and the times required to repair the machines are also random variables. The jobs have operations with stochastic processing times and their own release times, and there is no precedence between them. Each job is divided into operations, and each operation is performed on the corresponding specialized machine. In addition, in the problems considered, the order in which the operations of each job are done is irrelevant. We develop a heuristic approach to solve these stochastic open-shop scheduling problems in which random machine breakdowns can happen. The proposed approach is general and does not depend on the distribution types of the random input data. It provides solutions that minimize the expected makespan. Computational experiences are also reported. The results show that the proposed approach gives a solid performance, finding suitable solutions within short CPU times.

8.
9.
Deng K  Chu T 《PloS one》2011,6(5):e19014
The inconsistency of predictions from solution concepts of conventional game theory with experimental observations is an enduring question. These solution concepts are based on the canonical rationality assumption that people are exclusively self-regarding utility maximizers. In this article, we argue that this assumption is problematic and, instead, assume that rational economic agents act as if they were maximizing their implicit utilities, which turns out to be a natural extension of the canonical rationality assumption. Implicit utility is defined by a player's character to reflect his personal weighting between cooperative, individualistic, and competitive social value orientations. The player who actually faces an implicit game chooses his strategy based on the common belief about the character distribution for a general player and the self-estimation of his own character; he is not concerned about which strategies other players will choose and will never feel regret about his decision. By solving five paradigmatic games, the Dictator game, the Ultimatum game, the Prisoner's Dilemma game, the Public Goods game, and the Battle of the Sexes game, it is shown that the framework of implicit games and its corresponding solution concept, implicit equilibrium, based on this alternative assumption have potential for better explaining people's actual behavior in social decision-making situations.

10.
This paper investigates scheduling strategies for divisible jobs/loads originating from multiple sites in hierarchical networks with heterogeneous processors and communication channels. In contrast, most previous work in the divisible load scheduling theory (DLT) literature addressed scheduling problems with loads originating from a single processor; this is one of the first works to address scheduling multiple loads from multiple sites in the DLT paradigm. In addition, scheduling multi-site jobs is common in Grids and other general distributed systems for resource sharing and coordination. An efficient static scheduling algorithm, PPDD (Processor-set Partitioning and Data Distribution), is proposed to near-optimally distribute multiple loads among all processors so that the overall processing time of all jobs is minimized. The PPDD algorithm is applied to two cases: when processors are equipped with front-ends and when they are not. The application of the algorithm to homogeneous systems is also studied, and several important properties exhibited by the PPDD algorithm are proven through lemmas. To implement the PPDD algorithm, we propose a communication strategy. In addition, we compare the performance of the PPDD algorithm with a Round-robin Scheduling Algorithm (RSA), which is the most commonly used. Extensive case studies through numerical analysis have been conducted to verify the theoretical findings.

11.
盖玲 《生物数学学报》2009,24(1):166-170
This paper considers a scheduling model in which the processing speed of a job increases as its due date approaches. An optimal algorithm for minimizing the total processing time is given for the case where the initial speed is nonzero, and the maximum lateness of jobs under incomplete information is also discussed. The results for this model can be applied to the optimal foraging problem of populations searching for food.

12.
Given a two-person continuous non-constant sum game, a “solution” can be arrived at even in ignorance of the pay-off functions as each player tries to maximize his own pay-off by trial and error. The solution of the game considered here is either the intersection of two “optimal lines” (the case of the stable equilibrium) or parasitism of one player upon the other (the case of unstable equilibrium). The introduction of secondary satisfactions (linear combinations of the pay-offs) affects both the position and the stability of the equilibrium and thus the resulting primary satisfactions (the pay-offs). These effects of the matrix of transformation are investigated. In particular, it is shown that the social optimum, which is also the solution of the game where collusion is allowed, is attained if and only if the columns of the matrix are identical, and that this solution is generally stable. Some suggested connections with biological situations are discussed.  相似文献   

13.
SAGA: sequence alignment by genetic algorithm.
We describe a new approach to multiple sequence alignment using genetic algorithms and an associated software package called SAGA. The method involves evolving a population of alignments in a quasi-evolutionary manner and gradually improving the fitness of the population, as measured by an objective function that measures multiple alignment quality. SAGA uses an automatic scheduling scheme to control the usage of 22 different operators for combining alignments or mutating them between generations. When used to optimise the well-known sums-of-pairs objective function, SAGA performs better than some of the widely used alternative packages. This is seen with respect to the ability to achieve an optimal solution and with regard to the accuracy of alignment, by comparison with reference alignments based on sequences of known tertiary structure. The general attraction of the approach is the ability to optimise any objective function that one can invent.
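The sums-of-pairs objective that such a genetic algorithm optimises can be illustrated with a minimal scorer. The match/mismatch/gap scores below are illustrative assumptions; real aligners like SAGA use substitution matrices and more elaborate gap penalties:

```python
def sums_of_pairs(alignment, match=1, mismatch=-1, gap=-2):
    """Minimal sums-of-pairs scorer (illustrative scores, not SAGA's).

    alignment : list of equal-length strings over residues and '-' gaps.
    For every column and every pair of sequences, add the score of the
    two aligned symbols; gap-gap pairs are conventionally scored 0.
    """
    score = 0
    ncols = len(alignment[0])
    for c in range(ncols):
        col = [seq[c] for seq in alignment]
        for i in range(len(col)):
            for j in range(i + 1, len(col)):
                a, b = col[i], col[j]
                if a == '-' and b == '-':
                    continue                     # gap paired with gap: 0
                elif a == '-' or b == '-':
                    score += gap                 # residue paired with gap
                else:
                    score += match if a == b else mismatch
    return score
```

In a GA such as SAGA, a function of this shape is the fitness: crossover and mutation operators perturb the population of alignments, and selection keeps the higher-scoring ones.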

14.
Organisms are expected to be sensitive to cues of genetic relatedness when making decisions about social behaviour. Relatedness can be assessed in several ways, one of which is phenotype matching: the assessment of similarity between others' traits and either one's own traits or those of known relatives. One candidate cue of relatedness in humans is facial resemblance. Here, I report the effects of an experimental manipulation of facial resemblance in a two-person sequential trust game. Subjects were shown faces of ostensible playing partners manipulated to resemble either themselves or an unknown person. Resemblance to the subject's own face raised the incidence of trusting a partner, but had no effect on the incidence of selfish betrayals of the partner's trust. Control subjects playing with identical pictures failed to show such an effect. In a second experiment, resemblance of the playing partner to a familiar (famous) person had no effect on either trusting or betrayals of trust.

15.
There can be different approaches to the management of resources within the context of multi-project scheduling problems. In general, approaches to multi-project scheduling consider the resources as a pool shared by all projects. On the other hand, when projects are geographically distributed or sharing resources between projects is not preferred, this resource-sharing policy may not be feasible. In such cases, the resources must be dedicated to individual projects throughout the project durations. This multi-project problem environment is defined here as the resource dedication problem (RDP): the optimal dedication of resource capacities to different projects within the overall limits of the resources, with the objective of minimizing a predetermined objective function. The projects involved are multi-mode resource-constrained project scheduling problems with finish-to-start zero time lags, non-preemptive activities, and limited renewable and nonrenewable resources. Here, the characterization of RDP, its mathematical formulation, and two different solution methodologies are presented. The first solution approach is a genetic algorithm employing a new improvement move called combinatorial auction for RDP, which is based on the preferences of projects for resources; two different methods for calculating the projects' preferences, based on linear and Lagrangian relaxation, are proposed. The second solution approach is a Lagrangian relaxation based heuristic employing subgradient optimization. Numerical studies demonstrate that the proposed approaches are powerful methods for solving this problem.

16.
We study the problem of scheduling a divisible load in a three-dimensional mesh of processors. The objective is to find a partition of the load into shares and a distribution of the load shares among processors that minimize the load processing time, subject to the communication delays involved in sending load from one processor to another. We propose a new scheduling algorithm that distributes load in a sequence of stages across the network, where each stage brings load to the set of processors located at the same distance from the load source. A key feature of our solution is that sets of processors receive load in order of decreasing processing capacity. We call this scheduling strategy Largest Layer First. A theorem about the processing time attained by the algorithm is stated, and the performance of the algorithm is compared to earlier results.
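The equal-finish-time principle underlying divisible load scheduling can be illustrated in the communication-free special case. This is a deliberate simplification: the models above include communication delays and mesh topology, which change the closed form:

```python
def equal_finish_shares(total_load, speeds):
    """Divisible-load sketch: split total_load so that every processor
    finishes at the same time.  Ignoring communication delays, the
    equal-finish-time share of processor i is proportional to its speed:
    alpha_i = total_load * speed_i / sum(speeds)."""
    s = sum(speeds)
    return [total_load * v / s for v in speeds]
```

With speeds 3 and 2 and a load of 10, the shares are 6 and 4, and both processors finish after 2 time units; any other split leaves one processor idle while the other still works, lengthening the schedule.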

17.
Here we examine the roles of interpersonal valuation and gratitude in the formation of cooperative relationships. Building on prior work, we draw on the concept of a welfare tradeoff ratio (WTR), an internally computed index of the extent to which one person values another person's welfare relative to his or her own. We test several predictions regarding the effects of benefit delivery on changes in WTR, gratitude, and subsequent cooperation. We show that benefit delivery by a stranger: (i) raises the beneficiary's valuation of the stranger's welfare, (ii) predicts downstream cooperative behavior by the beneficiary toward the stranger, and (iii) is coincident with beneficiaries' expressions of gratitude. We find evidence that cooperation and gratitude, while both sparked via benefit delivery and both underpinned by estimates of welfare valuation, are nevertheless produced in parallel via different paths. Specifically, the updated magnitude, not the initial magnitude or degree of change, of a beneficiary's WTR toward a stranger predicts the beneficiary's downstream cooperative behavior. By contrast, the extent to which the beneficiary's WTR positively changes, and not the initial or updated WTR magnitude, predicts gratitude production, a feature proposed to reinforce the benefactor's actions and foreshadow future cooperative intent on the part of the beneficiary. Taken together, our findings point to the possibility that cooperative behavior might operate via internal estimates of welfare valuation, and that gratitude signals benefit reception and the intent to engage in a cooperative relationship.

18.
In this study, younger and older persons were compared with regard to their stereotypes about both age groups, their self-concept, and their self-esteem. We examined the relation between age and stereotypes about younger and older adults, the relation between stereotypes about one's own age group and self-concepts, and the relation between self-concepts and self-esteem. Stereotypes and self-concepts were measured on two dimensions, warmth and competence. Twenty-eight younger adults (16-25 years) and 26 older adults (65-85 years) participated in this study. Both age groups perceived younger persons as more competent than older persons and older persons as warmer than younger persons. Older persons rated themselves higher than their in-group on competence and warmth; younger respondents did the same, but on warmth only. Rating oneself as more competent than the stereotype of one's own age group is related to self-esteem for older persons. Distancing oneself from negative stereotypes about one's own age group is an important key to maintaining high levels of self-esteem, but only in old age.

19.
Lovering RP 《Bioethics》2005,19(2):131-145
The traditional approach to the abortion debate revolves around numerous issues, such as whether the foetus is a person, whether the foetus has rights, and more. Don Marquis suggests that this traditional approach leads to a standoff and that the abortion debate 'requires a different strategy.' Hence his 'future of value' strategy, which is summarized as follows: (1) A normal foetus has a future of value. (2) Depriving a normal foetus of a future of value imposes a misfortune on it. (3) Imposing a misfortune on a normal foetus is prima facie wrong. (4) Therefore, depriving a normal foetus of a future of value is prima facie wrong. (5) Killing a normal foetus deprives it of a future of value. (6) Therefore, killing a normal foetus is prima facie wrong. In this paper, I argue that Marquis's strategy is not different, since it involves the concept of person, a concept deeply rooted in the traditional approach. Specifically, I argue that futures are valuable insofar as they are not only dominated by goods of consciousness but are experienced by psychologically continuous persons. Moreover, I argue that his strategy is not sound, since premise (1) is false. Specifically, I argue that a normal foetus, at least during the first trimester, is not a person. Thus, during that stage of development it is not capable of experiencing its future as a psychologically continuous person and, hence, it does not have a future of value.

20.
The general problem scenario of this paper is the following: jobs of various priorities, stationed in a common storage area, are waiting to be dispatched to two non-identical workstations. Any of the waiting jobs can be accessed from the storage at any given time. Each job can be processed on either of the workstations, but once a job has been assigned it may not be preempted. By job priority it is meant that a higher-priority job has dispatch preference over a lower-priority job. The processing time of a job on a given workstation is assumed to be random, with a distribution depending on the job type and the configuration of the workstation. Specifically, the first problem studied considers only two classes of jobs: (1) "hot" jobs, whose processing is to be expedited and which thus have the higher dispatch priority, and (2) "routine" jobs, which may be assigned to an available workstation only if the workstation has been rejected by all "hot" jobs. The processing times are assumed to be exponentially distributed, with means depending on the job class and workstation. We assume that, on average, one workstation is faster than the other with regard to processing any job. The dispatching objective for each job class is to minimize its expected flowtime. It is shown that threshold dispatching policies are optimal for this problem. That is, the faster processor should be utilized whenever possible, and for each class there exists an explicit threshold such that when the number of jobs of that class in the buffer exceeds this threshold, a job of that class is dispatched to the slower processor; otherwise these jobs wait for the faster processor to become available. For the higher-priority jobs, this threshold is shown to be a function only of the various processing rates of the two workstations. For the lower-priority jobs, the threshold also depends on the number of higher-priority jobs in the buffer. The result is extended to a system with n priority classes. Again, it is shown that when the processing times are exponentially distributed with different rates and the dispatching objective for each class is to minimize its expected flowtime, the optimal dispatching policies are of threshold type. Explicit thresholds are easily derived.
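The structure of such a threshold policy, for one job class, might be sketched as a single dispatch decision. This is a minimal sketch; the paper derives the threshold values analytically from the workstations' processing rates rather than taking them as inputs:

```python
def dispatch_decision(queue_len, threshold, fast_free, slow_free):
    """Threshold-policy sketch for one job class: use the fast workstation
    whenever it is free; send a job to the slow workstation only when the
    number of waiting jobs of this class exceeds the class's threshold;
    otherwise hold the jobs and wait for the fast workstation."""
    if fast_free:
        return "fast"
    if slow_free and queue_len > threshold:
        return "slow"
    return "wait"
```

The intuition is that committing a job to the slow, non-preemptible workstation only pays off once the queue is long enough that waiting for the fast one would cost more in expected flowtime.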
