Similar Literature
20 similar documents found (search time: 46 ms)
1.
MOTIVATION: Side-chain positioning is a central component of homology modeling and protein design. In a common formulation of the problem, the backbone is fixed, side-chain conformations come from a rotamer library, and a pairwise energy function is optimized. It is NP-complete to find even a reasonable approximate solution to this problem. We seek to put this hardness result into practical context. RESULTS: We present an integer linear programming (ILP) formulation of side-chain positioning that allows us to tackle large problem sizes. We relax the integrality constraint to give a polynomial-time linear programming (LP) heuristic. We apply LP to position side chains on native and homologous backbones and to choose side chains for protein design. Surprisingly, when positioning side chains on native and homologous backbones, optimal solutions using a simple, biologically relevant energy function can usually be found using LP. On the other hand, the design problem often cannot be solved using LP directly; however, optimal solutions for large instances can still be found using the computationally more expensive ILP procedure. While different energy functions also affect the difficulty of the problem, the LP/ILP approach is able to find optimal solutions. Our analysis is the first large-scale demonstration that LP-based approaches are highly effective in finding optimal (and successive near-optimal) solutions for the side-chain positioning problem.
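The pairwise-energy objective described in this abstract is easy to state in code. Below is a minimal illustrative sketch that enumerates rotamer choices exhaustively on a tiny invented instance, purely to fix the objective being optimized; the paper's LP/ILP machinery, not shown here, is what makes realistic problem sizes tractable, since exhaustive search is exponential. All names and energy values are hypothetical.

```python
from itertools import product

def best_rotamer_assignment(self_energy, pair_energy):
    """Exhaustively minimize E = sum_i E(i, r_i) + sum_{i<j} E(i, r_i, j, r_j).

    self_energy[i][r]        : energy of rotamer r at residue position i
    pair_energy[(i, j)][r][s]: interaction energy between rotamers r, s (i < j)
    """
    n = len(self_energy)
    best = (float("inf"), None)
    for rot in product(*(range(len(e)) for e in self_energy)):
        energy = sum(self_energy[i][rot[i]] for i in range(n))
        energy += sum(pair_energy[(i, j)][rot[i]][rot[j]]
                      for i in range(n) for j in range(i + 1, n))
        best = min(best, (energy, rot))
    return best

# Toy instance: 3 positions, 2 rotamers each (values are invented).
self_e = [[0.0, 1.0], [0.5, 0.0], [0.0, 0.3]]
pair_e = {(0, 1): [[2.0, 0.0], [0.0, 2.0]],
          (0, 2): [[0.0, 0.0], [0.0, 0.0]],
          (1, 2): [[0.0, 1.0], [0.0, 0.0]]}
print(best_rotamer_assignment(self_e, pair_e))  # (0.0, (0, 1, 0))
```

In the ILP formulation, each rotamer choice becomes a 0/1 indicator variable and the same objective becomes linear; relaxing integrality gives the LP heuristic the abstract refers to.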

2.
A theoretical investigation is presented which allows the calculation of rate constants and phenomenological parameters in states of maximal reaction rates for unbranched enzymic reactions. The analysis is based on the assumption that an increase in reaction rates was an important characteristic of the evolution of the kinetic properties of enzymes. The corresponding nonlinear optimization problem is solved taking into account the constraint that the rate constants of the elementary processes do not exceed certain upper limits. One-substrate-one-product reactions with two, three and four steps are treated in detail. Generalizations concern ordered uni-uni-reactions involving an arbitrary number of elementary steps. It could be shown that, depending on the substrate and product concentrations, different types of solutions can be found, which are classified according to the number of rate constants that assume submaximal values in the optimal state. A general rule is derived concerning the number of possible solutions of the given optimization problem. For high values of the equilibrium constant one solution always applies to a very large range of the concentrations of the reactants. This solution is characterized by maximal values of the rate constants of all forward reactions and by non-maximal values of the rate constants of all backward reactions. Optimal kinetic parameters of ordered enzymic mechanisms with two substrates and one product (bi-uni-mechanisms) are calculated for the first time. Depending on the substrate and product concentrations, a complete set of solutions is found. In all cases studied the model predicts a matching of the concentrations of the reactants and the corresponding Michaelis constants, which is in good accordance with the experimental data. It is discussed how the model can be applied to the calculation of the optimal kinetic design of real enzymes.

3.
The success of hierarchical production planning approaches for flexible manufacturing systems lies in the consistency of decision outcomes at various decision levels. For instance, the loading problem, which is solved at a lower level, may not yield a feasible loading solution to a set of part types selected at a higher level. This paper attempts to address the issue of recognizing the infeasibility of a loading solution. We present a modified loading model that includes a penalty for each operation not assigned to any machine. We develop a Lagrangian-based heuristic procedure and provide a sufficient condition on the quality of heuristic solutions that, if satisfied, will enable us to use the heuristic solutions to recognize the infeasibility of a loading problem. The proposed model and the dual-based heuristic can be effectively incorporated in an FMS hierarchical production planning approach that finds a good loading solution by iteratively comparing different part grouping scenarios.

4.
Given a multiple alignment of orthologous DNA sequences and a phylogenetic tree for these sequences, we investigate the problem of reconstructing a most parsimonious scenario of insertions and deletions capable of explaining the gaps observed in the alignment. This problem, called the Indel Parsimony Problem, is a crucial component of the problem of ancestral genome reconstruction, and its solution provides valuable information to many genome functional annotation approaches. We first show that the problem is NP-complete. Second, we provide an algorithm, based on the fractional relaxation of an integer linear programming formulation. The algorithm is fast in practice, and the solutions it produces are, in most cases, provably optimal. We describe a divide-and-conquer approach that makes it possible to solve very large instances on a simple desktop machine, while retaining guaranteed optimality. Our algorithms are tested and shown efficient and accurate on a set of 1.8 Mb mammalian orthologous sequences in the CFTR region.

5.
Metabolic models are typically characterized by a large number of parameters. Traditionally, metabolic control analysis is applied to differential equation-based models to investigate the sensitivity of predictions to parameters. A corresponding theory for constraint-based models is lacking, due to their formulation as optimization problems. Here, we show that optimal solutions of optimization problems can be efficiently differentiated using constrained optimization duality and implicit differentiation. We use this to calculate the sensitivities of predicted reaction fluxes and enzyme concentrations to turnover numbers in an enzyme-constrained metabolic model of Escherichia coli. The sensitivities quantitatively identify rate-limiting enzymes and are mathematically precise, unlike current finite-difference-based approaches used for sensitivity analysis. Further, efficient differentiation of constraint-based models unlocks the ability to use gradient information for parameter estimation. We demonstrate this by improving, genome-wide, the state-of-the-art turnover number estimates for E. coli. Finally, we show that this technique can be generalized to arbitrarily complex models. By differentiating the optimal solution of a model incorporating both thermodynamic and kinetic rate equations, the effect of metabolite concentrations on biomass growth can be elucidated. We benchmark these metabolite sensitivities against a large experimental gene knockdown study, and find good alignment between the predicted sensitivities and in vivo metabolome changes. In sum, we demonstrate several applications of differentiating optimal solutions of constraint-based metabolic models, and show how it connects to classic metabolic control analysis.

6.
In comparative genomics, algorithms that sort permutations by reversals are often used to propose evolutionary scenarios of rearrangements between species. One of the main problems of such methods is that they give one solution while the number of optimal solutions is huge, with no criteria to discriminate among them. Bergeron et al. started to give some structure to the set of optimal solutions, in order to be able to deliver more presentable results than only one solution or a complete list of all solutions. However, no algorithm exists so far to compute this structure except through the enumeration of all solutions, which takes too much time even for small permutations. Bergeron et al. state as an open problem the design of such an algorithm. We propose in this paper an answer to this problem, that is, an algorithm which gives all the classes of solutions and counts the number of solutions in each class, with a better theoretical and practical complexity than the complete enumeration method. We give an example of how to reduce the number of classes obtained, using further constraints. Finally, we apply our algorithm to analyse the possible scenarios of rearrangement between mammalian sex chromosomes.
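As background to sorting by reversals, the classic lower bound on reversal distance counts breakpoints: positions where adjacent elements of the permutation are not consecutive. A short sketch of that standard measure (this is textbook background, not the authors' class-enumeration algorithm):

```python
def breakpoints(perm):
    """Count breakpoints of an unsigned permutation of 1..n relative to the
    identity. The permutation is framed with 0 and n+1; each adjacent pair
    that is not a pair of consecutive integers is one breakpoint."""
    framed = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(framed, framed[1:]) if abs(a - b) != 1)

print(breakpoints([1, 2, 3, 4]))  # 0: the identity has no breakpoints
print(breakpoints([3, 1, 2, 4]))  # 3
```

Since one reversal can remove at most two breakpoints, breakpoints(p)/2 lower-bounds the reversal distance, which is why enumeration-based methods like the one discussed above become expensive so quickly.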

7.
This paper describes a computational method for solving optimal control problems involving large-scale, nonlinear, dynamical systems. Central to the approach is the idea that any optimal control problem can be converted into a standard nonlinear programming problem by parameterizing each control history using a set of nodal points, which then become the variables in the resulting parameter optimization problem. A key feature of the method is that it dispenses with the need to solve the two-point, boundary-value problem derived from the necessary conditions of optimal control theory. Gradient-based methods for solving such problems do not always converge due to computational errors introduced by the highly nonlinear characteristics of the costate variables. Instead, by converting the optimal control problem into a parameter optimization problem, any number of well-developed and proven nonlinear programming algorithms can be used to compute the near-optimal control trajectories. The utility of the parameter optimization approach for solving general optimal control problems for human movement is demonstrated by applying it to a detailed optimal control model for maximum-height human jumping. The validity of the near-optimal control solution is established by comparing it to a solution of the two-point, boundary-value problem derived on the basis of a bang-bang optimal control algorithm. Quantitative comparisons between model and experiment further show that the parameter optimization solution reproduces the major features of a maximum-height, countermovement jump (i.e., trajectories of body-segmental displacements, vertical and fore-aft ground reaction forces, displacement, velocity, and acceleration of the whole-body center of mass, pattern of lower-extremity muscular activity, jump height, and total ground contact time).

8.
Stochastic models, such as hidden Markov models or stochastic context-free grammars (SCFGs), can fail to return the correct, maximum likelihood solution in the case of semantic ambiguity. This problem arises when the algorithm implementing the model inspects the same solution in different guises. It is a difficult problem in the sense that proving semantic nonambiguity has been shown to be algorithmically undecidable, while compensating for it (by coalescing scores of equivalent solutions) has been shown to be NP-hard. For stochastic context-free grammars modeling RNA secondary structure, it has been shown that the distortion of results can be quite severe. Much less is known about the case when stochastic context-free grammars model the matching of a query sequence to an implicit consensus structure for an RNA family. We find that three different, meaningful semantics can be associated with the matching of a query against the model: a structural, an alignment, and a trace semantics. Rfam models correctly implement the alignment semantics, and are ambiguous with respect to the other two semantics, which are more abstract. We show how provably correct models can be generated for the trace semantics. For approaches where such a proof is not possible, we present an automated pipeline to check post factum for ambiguity of the generated models. We propose that both the structural and the trace semantics are worthwhile concepts for further study, possibly better suited to capture remotely related family members.

9.
Yang CN, Yang CB. Bio Systems, 2005, 81(1): 1-9
Among various DNA computing algorithms, it is very common to create an initial data pool that covers correct and incorrect answers in the first place, followed by a series of selection processes that destroy the incorrect ones. The surviving DNA sequences are read as the solutions to the problem. However, algorithms based on such a brute-force search are limited by problem size: as the number of parameters in the studied problem grows, the algorithm eventually becomes impractical owing to the tremendous size of the initial data pool. In this theoretical work, we modify a well-known sticker model to design an algorithm for the SAT problem that does not require an initial data pool. We propose to build solution sequences in parts, satisfying one clause per step, and eventually solve the whole Boolean formula after a number of steps. Accordingly, the size of the data pool grows from a single sort of molecule to the number of satisfying assignments. The proposed algorithm is expected to provide a solution to the SAT problem and to remain practical as the problem size scales up.
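The clause-by-clause construction described above can be mirrored in silico: start from a single blank "molecule" and, for each clause, keep or extend only the partial assignments consistent with it, so the pool never holds more than the satisfying assignments. A minimal sketch under that assumption (function names are invented; this illustrates the construction strategy, not the biochemical sticker operations):

```python
def extend_by_clause(partial_assignments, clause):
    """Keep or extend partial assignments so that `clause` is satisfied.

    clause: iterable of nonzero ints; +v means variable v is true, -v false.
    Each assignment is a dict {var: bool}. Assignments contradicting every
    literal of the clause are discarded, mimicking the destructive selection
    step; supersets of an already-satisfying assignment may coexist.
    """
    out = []
    for a in partial_assignments:
        for lit in clause:
            v, want = abs(lit), lit > 0
            if v in a:
                if a[v] == want and a not in out:
                    out.append(a)            # already satisfies this clause
            else:
                ext = dict(a)
                ext[v] = want                # extend to satisfy via this literal
                if ext not in out:
                    out.append(ext)
    return out

def solve_sat(clauses):
    pool = [{}]                              # one blank molecule, no initial data pool
    for c in clauses:
        pool = extend_by_clause(pool, c)
    return pool

# (x1 or x2) and (not x1 or x2): pool holds only satisfying partial assignments.
print(solve_sat([[1, 2], [-1, 2]]))
```

An unsatisfiable formula such as `[[1], [-1]]` drives the pool to empty, which is how the wet-lab analogue would report "no solution".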

10.
Due to their increasing applicability in modern industry, flexible manufacturing systems (FMSs), their design, and their control have been studied extensively in the recent literature. One of the most important issues that has arisen in this context is the FMS scheduling problem. This article is concerned with a new FMS model, motivated by practical applications, that takes into account both machine and vehicle scheduling. For the case of a given machine schedule, a simple polynomial-time algorithm is presented that checks the feasibility of a vehicle schedule and constructs one whenever it exists. Then a dynamic programming approach to construct optimal machine and vehicle schedules is proposed. This technique results in a pseudopolynomial-time algorithm for a fixed number of machines.

11.
The optimal allocation of conservation resources between biodiverse conservation regions has generally been calculated using stochastic dynamic programming, or using myopic heuristics. These solutions are hard to interpret and may not be optimal. To overcome these two limitations, this paper approaches the optimal conservation resource allocation problem using optimal control theory. A solution using Pontryagin’s maximum principle provides novel insight into the general properties of efficient conservation resource allocation strategies, and allows more extensive testing of the performance of myopic heuristics. We confirmed that a proposed heuristic (minimize short-term loss) yields near-optimal results in complex allocation situations, and found that a qualitative allocation feature observed in previous analyses (bang-bang allocation) is a general property of the optimal allocation strategy.

12.
We consider a model of invasion prevention in a system of lakes that are connected via traffic of recreational boats. It is shown that in the presence of an Allee effect, the general optimal control problem can be reduced to a significantly simpler stationary optimization problem of optimal invasion stopping. We consider possible values of model parameters for zebra mussels. The general N-lake control problem has to be solved numerically, and we show a number of typical features of solutions: the distribution of control efforts in space and optimal stopping configurations related to clusters in the lake connection structure.

13.
Computing genomic distances between whole genomes is a fundamental problem in comparative genomics. Recent research has produced different genomic distance definitions, for example, number of breakpoints, number of common intervals, number of conserved intervals, and Maximum Adjacency Disruption number. Unfortunately, it turns out that, in the presence of duplications, most problems are NP-hard, and hence several heuristics have been proposed recently. However, while it is relatively easy to compare heuristics with one another, until now very little has been known about their absolute accuracy. Therefore, there is a great need for algorithmic approaches that compute exact solutions for these genomic distances. In this paper, we present a novel generic pseudo-boolean approach for computing the exact genomic distance between two whole genomes in the presence of duplications, with strong emphasis on common intervals under the maximum matching model. Of particular importance, we show that three heuristics provide very good results on a well-known public dataset of gamma-Proteobacteria.

14.
Society increasingly focuses on managing nature for the services it provides people rather than for the existence of particular species. How much biodiversity protection would result from this modified focus? Although biodiversity contributes to ecosystem services, the details of which species are critical, and whether they will go functionally extinct in the future, are fraught with uncertainty. Explicitly considering this uncertainty, we develop an analytical framework to determine how much biodiversity protection would arise solely from optimising net value from an ecosystem service. Using stochastic dynamic programming, we find that protecting a threshold number of species is optimal, and uncertainty surrounding how biodiversity produces services makes it optimal to protect more species than are presumed critical. We define conditions under which the economically optimal protection strategy is to protect all species, no species, and cases in between. We show how the optimal number of species to protect depends upon different relationships between species and services, including considering multiple services. Our analysis provides simple criteria to evaluate when managing for particular ecosystem services could warrant protecting all species, given uncertainty. Evaluating this criterion with empirical estimates from different ecosystems suggests that optimising some services will be more likely to protect most species than others.

15.
Two methods to improve on the accuracy of the Tikhonov regularization technique commonly used for the stable recovery of solutions to ill-posed problems are presented. These methods do not require a priori knowledge of the properties of the solution or of the error. Rather they exploit the observed properties of overregularized and underregularized Tikhonov solutions so as to impose linear constraints on the sought-after solution. The two methods were applied to the inverse problem of electrocardiography using a spherical heart-torso model and simulated inner-sphere (epicardial) and outer-sphere (body) potential distributions. It is shown that if the overregularized and underregularized Tikhonov solutions are chosen properly, the two methods yield epicardial solutions that are not only more accurate than the optimal Tikhonov solution but also provide other qualitative information, such as correct position of the extrema, not obtainable using ordinary Tikhonov regularization. A heuristic method to select the overregularized and underregularized solutions is discussed.
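The over- and underregularized solutions this method exploits both come from the standard Tikhonov normal equations, x = (AᵀA + λI)⁻¹Aᵀb, with the regularization parameter λ set too large or too small. A minimal numpy sketch on an invented ill-conditioned 2×2 system (the paper's linear-constraint refinement is not shown), illustrating the classic trade-off: a small λ fits the noisy data but amplifies noise in x, a large λ yields a smoother x with a worse fit:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution x = argmin ||Ax - b||^2 + lam * ||x||^2,
    computed via the normal equations (A^T A + lam * I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Invented ill-conditioned toy problem with slightly noisy data.
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
x_true = np.array([1.0, 2.0])
b = A @ x_true + np.array([1e-4, -1e-4])

x_under = tikhonov(A, b, 1e-12)  # underregularized: tiny residual, noisy x
x_over = tikhonov(A, b, 1.0)     # overregularized: smooth x, larger residual
print(x_under, x_over)
```

As λ decreases the data residual ||Ax - b|| shrinks monotonically while ||x|| grows; the constraint-based methods in the abstract combine information from both regimes rather than picking a single λ.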

16.
A polymeric solution and a reinforcement phase can work as an injectable material to fill up bone defects. However, the properties of the solution should be suitable to enable the transport of that extra phase. Additionally, the use of biocompatible materials is a requirement for tissue regeneration. Thus, we intended to optimize a biocompatible polymeric solution able to carry hydroxyapatite microspheres into bone defects using an orthopedic injectable device. To achieve that goal, polymers usually regarded as biocompatible were selected, namely sodium carboxymethylcellulose, hydroxypropylmethylcellulose, and Na-alginate (ALG). The rheological properties of the polymeric solutions at different concentrations were assessed by viscosimetry before and after moist heat sterilization. In order to correlate rheological properties with injectability, solutions were tested using an orthopedic device applied for minimal invasive surgeries. Among the three polymers, ALG solutions presented the most suitable properties for our goal and a non-sterile ALG 6% solution was successfully used to perform preliminary injection tests of hydroxyapatite microspheres. Sterile ALG 7.25% solution was found to closely match non-sterile ALG 6% properties and it was selected as the optimal vehicle. Finally, sterile ALG 7.25% physical stability was studied at different temperatures over a 3-month period. It was observed that its rheological properties presented minor changes when stored at 25°C or at 4°C.

17.
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from <1 ha to 2,500 km². On average, RWR solutions were more efficient than Zonation solutions. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
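RWR is indeed simple to implement: a species found in k sites contributes 1/k to the score of each of those sites, so sites holding range-restricted species score highest. A short sketch with invented site and species names (the abstract suggests R or a spreadsheet; the logic is the same):

```python
def rwr_ranking(site_species):
    """Rank sites by rarity-weighted richness (RWR), highest first.

    site_species: dict mapping site -> set of species present at that site.
    A species occurring in k sites contributes 1/k to each such site's score.
    """
    counts = {}
    for spp in site_species.values():
        for s in spp:
            counts[s] = counts.get(s, 0) + 1
    scores = {site: sum(1.0 / counts[s] for s in spp)
              for site, spp in site_species.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented example: sp2 is unique to A, sp4 unique to C, sp1 is widespread.
sites = {"A": {"sp1", "sp2"},
         "B": {"sp1", "sp3"},
         "C": {"sp1", "sp3", "sp4"}}
print(rwr_ranking(sites))  # ['C', 'A', 'B']
```

Taking sites in this order, the top two (C and A) already represent all four species, which is the minimum-set behavior the abstract describes.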

18.
Evolutionary algorithms are widespread heuristic methods inspired by natural evolution to solve difficult problems for which analytical approaches are not suitable. In many domains experimenters are not only interested in discovering optimal solutions, but also in finding the largest number of different solutions satisfying minimal requirements. However, the formulation of an effective performance measure describing these requirements, also known as a fitness function, represents a major challenge. The difficulty of combining and weighting multiple problem objectives and constraints of possibly varying nature and scale into a single fitness function often leads to unsatisfactory solutions. Furthermore, selective reproduction of the fittest solutions, which is inspired by competition-based selection in nature, leads to loss of diversity within the evolving population and premature convergence of the algorithm, hindering the discovery of many different solutions. Here we present an alternative abstraction of artificial evolution, which does not require the formulation of a composite fitness function. Inspired by viability theory in dynamical systems, natural evolution, and ethology, the proposed method puts emphasis on the elimination of individuals that do not meet a set of changing criteria, which are defined on the problem objectives and constraints. Experimental results show that the proposed method maintains higher diversity in the evolving population and generates more unique solutions when compared to classical competition-based evolutionary algorithms. Our findings suggest that incorporating viability principles into evolutionary algorithms can significantly improve the applicability and effectiveness of evolutionary methods to numerous complex problems of science and engineering, ranging from protein structure prediction to aircraft wing design.

19.

Land use optimization as a resource allocation problem can be defined as the process of assigning different land uses to a region. Sustainable development also involves the exploitation of environmental resources, investment orientation, technology development, and industrial changes in a coordinated form. This paper studies the multi-objective sustainable land use planning problem and proposes an integrated framework, including simulation, forecasting, and optimization approaches for this problem. Land use optimization, a multifaceted process, requires complex decisions, including selection of land uses, forecasting land use allocation percentages, and assigning locations to land uses. The land use allocation percentages in the selected horizons are simulated and predicted by designing a System Dynamics (SD) model based on socio-economic variables. Furthermore, land use assignment is accomplished with a multi-objective integer programming model that is solved using augmented ε-constraint and non-dominated sorting genetic algorithm II (NSGA-II) methods. According to the results of the SD model, land use changes depend on population growth rate and labor productivity variables. Among the possible scenarios, a scenario focusing more on sustainable planning is chosen and the forecasting results of this scenario are used for optimal land use allocation. The computational results show that the augmented ε-constraint method cannot solve this problem even for medium-sized instances. The NSGA-II method not only solves the problem at large sizes in reasonable time, but also generates good-quality solutions. NSGA-II showed better performance on metrics including the number of non-dominated Pareto solutions (NNPS), mean ideal distance (MID), and dispersion metric (DM). The integrated framework is applied to allocate four types of land use (residential, commercial, industrial, and agricultural) to a given region with 900 cells.
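The ranking step at the heart of NSGA-II is non-dominated sorting: solutions are grouped into successive Pareto fronts, where no member of a front is dominated by another member. A minimal sketch for minimization objectives (this simplified version recomputes dominance per front rather than using NSGA-II's fast bookkeeping, and omits crowding distance):

```python
def pareto_fronts(points):
    """Sort objective vectors (all objectives minimized) into successive
    non-dominated fronts, the core ranking step of NSGA-II."""
    def dominates(p, q):
        # p dominates q: no worse in every objective, strictly better in one.
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Invented 2-objective values (e.g. cost vs. environmental impact).
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_fronts(pts))  # [[(1, 5), (2, 3), (4, 1)], [(3, 4)], [(5, 5)]]
```

In NSGA-II, front membership (plus crowding distance within a front) drives selection, which is how the algorithm returns the set of trade-off solutions reported in the abstract rather than a single optimum.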


20.
The purpose of this paper is to use analytical methods and optimization tools to suggest a vaccination program intensity for a basic SIR epidemic model with limited resources for vaccination. We show that there are two different scenarios for optimal vaccination strategies, and obtain analytical solutions for the optimal control problem that minimizes the total cost of disease under the assumption that the daily vaccine supply is limited. These solutions and their corresponding optimal control policies are derived explicitly in terms of initial conditions, model parameters, and resources for vaccination. With sufficient resources, the optimal control strategy is the usual bang-bang control. However, with limited resources, the optimal strategy requires switching to time-variant vaccination.
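The supply-limited setting is easy to explore numerically: vaccinate at full intensity each day, but never remove more susceptibles than the daily supply allows. A toy Euler-discretized sketch with invented parameters (this illustrates the capped bang-bang policy only; the paper's analytical optimal solutions are not reproduced here):

```python
def simulate_sir(beta, gamma, S0, I0, days, daily_vax=0.0):
    """Euler-stepped SIR model with at most `daily_vax` susceptibles
    vaccinated per day (full intensity while susceptibles remain).
    Returns cumulative infections over the horizon."""
    S, I, cum_infected = S0, I0, 0.0
    for _ in range(days):
        new_inf = beta * S * I
        v = max(0.0, min(daily_vax, S - new_inf))  # supply-limited vaccination
        S = S - new_inf - v
        I = I + new_inf - gamma * I
        cum_infected += new_inf
    return cum_infected

# Invented parameters: vaccination at 50 doses/day vs. no vaccination.
no_vax = simulate_sir(0.0005, 0.1, 1000.0, 1.0, 100)
with_vax = simulate_sir(0.0005, 0.1, 1000.0, 1.0, 100, daily_vax=50.0)
print(no_vax, with_vax)
```

Comparing total-cost outcomes of such capped policies against the unconstrained bang-bang policy is one way to see why the optimal strategy changes character when resources bind.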


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号