Related Articles
1.
The optimal design and operation of dynamic bioprocesses often gives rise in practice to optimisation problems with multiple and conflicting objectives. As a result, there is typically not a single optimal solution but a set of Pareto optimal solutions. From this set of Pareto optimal solutions, one has to be chosen by the decision maker. Hence, efficient approaches are required for a fast and accurate generation of the Pareto set such that the decision maker can easily and systematically evaluate optimal alternatives. In the current paper the multi-objective optimisation of several dynamic bioprocess examples is performed using the freely available ACADO Multi-Objective Toolkit (http://www.acadotoolkit.org). This toolkit integrates efficient multiple objective scalarisation strategies (e.g., Normal Boundary Intersection and (Enhanced) Normalised Normal Constraint) with fast deterministic approaches for dynamic optimisation (e.g., single and multiple shooting). It has been found that the toolkit is able to efficiently and accurately produce the Pareto sets for all bioprocess examples. The resulting Pareto sets are added as supplementary material to this paper.
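As an illustration of the scalarisation idea underlying such toolkits, the following is a minimal sketch that traces a Pareto front by sweeping the weight of a normalised weighted-sum objective over a toy bi-objective problem. It does not use ACADO or the NBI/NNC schemes themselves, and the two objective functions are placeholders rather than a dynamic bioprocess model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2 on x in [0, 2].
# Both objectives are normalised via their anchor points before being combined,
# mimicking the idea behind normalised scalarisation schemes.
def f1(x): return float(x[0] ** 2)
def f2(x): return float((x[0] - 2.0) ** 2)

# Anchor points: the individual optima of each objective.
x1_star = minimize(f1, x0=[1.0], bounds=[(0, 2)]).x
x2_star = minimize(f2, x0=[1.0], bounds=[(0, 2)]).x
f1_range = (f1(x1_star), f1(x2_star))   # (best, worst) value of f1
f2_range = (f2(x2_star), f2(x1_star))   # (best, worst) value of f2

def scalarised(x, w):
    """Weighted sum of objectives normalised to [0, 1] via the anchor points."""
    n1 = (f1(x) - f1_range[0]) / (f1_range[1] - f1_range[0])
    n2 = (f2(x) - f2_range[0]) / (f2_range[1] - f2_range[0])
    return w * n1 + (1.0 - w) * n2

pareto = []
for w in np.linspace(0.0, 1.0, 11):      # sweep the trade-off weight
    res = minimize(scalarised, x0=[1.0], args=(w,), bounds=[(0, 2)])
    pareto.append((f1(res.x), f2(res.x)))

for p in pareto:
    print("f1 = %.3f, f2 = %.3f" % p)
```

Note that a plain weighted sum only recovers points on convex parts of the front; NBI and (E)NNC were introduced precisely to produce evenly spread points on non-convex fronts as well.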

2.
This paper demonstrates a simple graphical approach for the design and analysis of a bioprocess flowsheet in which process interactions are significant. Results are presented showing how the feasible space for operation can be simulated and used both to address key design and operating decisions and to identify suitable trade-offs between operating variables, such as fermentation growth rate and disruption conditions, in order to achieve prespecified levels of process performance. Using verified models to describe the production and isolation of an intracellular protein alcohol dehydrogenase (ADH) in yeast as a test bed, a series of so-called "windows of operation" are developed at growth rates in the range of 0.06–0.28 h⁻¹ and for a range of overall process specifications. The effects of altering the process design performance specification, as defined by the level of cell debris removal and the overall process productivity, on the size and position of the feasible space were investigated to demonstrate the sensitivity of the flowsheet to changes in process objectives. Using the approach it has been possible to visualise the processing trade-offs required to increase performance in terms of the level of cell debris removal by 50% and the overall process productivity by 400% from a defined base level. The approach provides a convenient tool when designing integrated bioprocesses by enabling process options to be compared visually and can help in achieving better process designs and accelerating process development for the biological process industry.
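A window of operation can be reproduced conceptually by gridding the operating space and testing each point against the performance specifications. The sketch below does this for two operating variables and purely illustrative performance correlations; the functions stand in for, and do not reproduce, the verified ADH process models of the study.

```python
import numpy as np

# Hypothetical performance correlations standing in for verified process models;
# the functional forms and coefficients below are illustrative only.
def debris_removal(growth_rate, n_passes):
    # Assume disruption efficiency rises with homogeniser passes and drops
    # slightly at low growth rates (placeholder behaviour).
    return 1.0 - np.exp(-0.8 * n_passes) * (1.0 - 0.3 * growth_rate)

def productivity(growth_rate, n_passes):
    # Assume productivity scales with growth rate while each pass adds time/cost.
    return growth_rate * 100.0 / (1.0 + 0.15 * n_passes)

# Operating space: growth rate 0.06-0.28 1/h, 1-10 homogeniser passes.
mu = np.linspace(0.06, 0.28, 50)
passes = np.arange(1, 11)
MU, NP = np.meshgrid(mu, passes)

# The "window of operation" is the region meeting both specifications.
spec_debris = 0.95   # at least 95 % debris removal
spec_prod = 10.0     # at least 10 (arbitrary units) productivity
window = (debris_removal(MU, NP) >= spec_debris) & (productivity(MU, NP) >= spec_prod)

print("Feasible fraction of the operating grid: %.1f %%" % (100 * window.mean()))
```

Plotting the boolean `window` array over the (growth rate, passes) grid gives the kind of feasible-region figure the paper calls a window of operation; tightening either specification shrinks and shifts the window.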

3.
Simulation may be used as a powerful tool for accelerating bioprocess design. This paper demonstrates the use of simulations in exploring the nature and impact of the interactions that exist in a typical bioprocess for the recovery of an intracellular protein. The study shows that an integrated approach to design must be adopted in order to achieve acceptable process designs. Data from a fed-batch fermentation, with verified models for cell harvesting, cell disruption and cell debris removal have been integrated to demonstrate the consequence of process design and operating decisions on the resulting process performance. The trade-offs between product recovery and the extent of cell debris removal for a range of operating conditions have been represented through a series of windows of operation which show how process conditions must be altered in order for given process performance levels to be realised. The capacity to account for process performance including the impact of interactions is seen as a pre-requisite for rigorous bioprocess sequence design and optimisation.

4.
The development of digital bioprocessing technologies is critical to operate modern industrial bioprocesses. This study conducted the first investigation on the efficiency of using physics-based and data-driven models for the dynamic optimisation of long-term bioprocesses. More specifically, this study exploits a predictive kinetic model and a cutting-edge data-driven model to compute open-loop optimisation strategies for the production of microalgal lutein during a fed-batch operation. Light intensity and nitrate inflow rate are used as control variables given their key impact on biomass growth and lutein synthesis. By employing different optimisation algorithms, several optimal control sequences were computed. Due to the distinct model construction principles and sophisticated process mechanisms, the physics-based and the data-driven models yielded contradictory optimisation strategies. The experimental verification confirms that the data-driven model predicted results closer to the experimental observations than the physics-based model did. Both models succeeded in improving lutein intracellular content by over 40% compared to the highest previous record; however, the data-driven model outperformed the kinetic model when optimising total lutein production and achieved an increase of 40–50%. This indicates the possible advantages of using data-driven modelling for the optimisation and prediction of complex dynamic bioprocesses, and its potential in industrial bio-manufacturing systems.
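The open-loop strategy computation can be illustrated with a control-vector-parameterisation sketch: piecewise-constant light and nitrate profiles are optimised against a toy kinetic model. The model equations and parameter values below are invented for illustration and are not the lutein kinetics or the data-driven model used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy fed-batch model: states are biomass X and product P; controls are light
# intensity I and nitrate inflow N, held piecewise-constant over n control stages.
T_FINAL, N_STAGES = 120.0, 6        # batch time (h), number of control intervals

def rhs(t, y, I, N):
    X, P = y
    growth = 0.05 * I / (I + 50.0) * N / (N + 5.0)   # illustrative kinetics only
    return [growth * X, 0.02 * I / (I + 100.0) * X]

def simulate(u):
    """Integrate the model over the batch for a stacked control vector u."""
    I, N = u[:N_STAGES], u[N_STAGES:]
    y = [0.1, 0.0]
    dt = T_FINAL / N_STAGES
    for k in range(N_STAGES):
        sol = solve_ivp(rhs, (0.0, dt), y, args=(I[k], N[k]))
        y = sol.y[:, -1]
    return y

def neg_product(u):
    return -simulate(u)[1]          # maximise the final product amount

u0 = np.concatenate([np.full(N_STAGES, 100.0), np.full(N_STAGES, 10.0)])
bounds = [(20.0, 400.0)] * N_STAGES + [(0.0, 20.0)] * N_STAGES
res = minimize(neg_product, u0, bounds=bounds, method="L-BFGS-B")
print("Optimised final product:", -res.fun)
print("Light profile:", np.round(res.x[:N_STAGES], 1))
print("Nitrate profile:", np.round(res.x[N_STAGES:], 2))
```

The same optimisation loop can be run with the simulator swapped for a trained data-driven surrogate, which is essentially how the two modelling approaches are compared in the study.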

5.
The goal of protein engineering and design is to identify sequences that adopt three-dimensional structures of desired function. Often, this is treated as a single-objective optimization problem, identifying the sequence–structure solution with the lowest computed free energy of folding. However, many design problems are multi-state, multi-specificity, or otherwise require concurrent optimization of multiple objectives. There may be trade-offs among objectives, where improving one feature requires compromising another. The challenge lies in determining solutions that are part of the Pareto optimal set: designs where no further improvement can be achieved in any of the objectives without degrading one of the others. Pareto optimality problems are found in all areas of study, from economics to engineering to biology, and computational methods have been developed specifically to identify the Pareto frontier. We review progress in multi-objective protein design and the development of Pareto optimization methods, and present a specific case study using multi-objective optimization methods to model the trade-off among three parameters (stability, specificity, and complexity) of a set of interacting synthetic collagen peptides.
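The core operation behind such studies is extracting the non-dominated (Pareto optimal) designs from a set of scored candidates. A minimal sketch follows; the design scores are hypothetical placeholders, not values from the collagen peptide case study.

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated rows; all objectives are minimised."""
    scores = np.asarray(scores, dtype=float)
    keep = np.ones(len(scores), dtype=bool)
    for i in range(len(scores)):
        if not keep[i]:
            continue
        # j dominates i if j is <= in every objective and < in at least one.
        dominated = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Hypothetical designs scored on (instability, non-specificity, complexity),
# all to be minimised; the numbers are placeholders, not measured values.
designs = [(1.2, 0.8, 3.0), (0.9, 1.1, 2.5), (1.0, 0.9, 4.0), (1.5, 1.5, 5.0)]
print("Pareto optimal designs:", pareto_front(designs))
```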

6.
In order for biobased industrial products to compete economically with petroleum-derived products, significant reduction in their processing cost is necessary. Since most bioprocesses are operated in batch or fed-batch mode, their optimization involves theoretical and computational challenges. Simulated annealing (SA), a stochastic optimization algorithm, is used in this study to solve a number of challenging optimization problems related to the design and operation of bioreactors. Two well-known case studies are considered in which the robustness and efficiency of the SA algorithm are demonstrated. More specifically, in the first case study it is shown that the global optimal solution located by SA achieves significantly improved productivity when compared with the results of previous investigations. In the second case study a realistic objective function is considered in which the economic performance of a bioprocess is optimized. SA exhibited impeccable performance and robustness and was able to locate the global optimal solution irrespective of the initial point selected.
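A compact, self-contained simulated annealing sketch for a bounded decision vector is shown below. The quadratic objective is a stand-in for a bioreactor performance model, and all tuning parameters (initial temperature, cooling rate, step size) are arbitrary illustrative choices, not those of the case studies.

```python
import math
import random

def objective(x):
    # Placeholder "cost" (e.g. negative productivity) of a feed-rate profile x.
    return sum((xi - 0.7) ** 2 for xi in x)

def simulated_annealing(dim=5, lo=0.0, hi=1.0, t0=1.0, cooling=0.98, n_iter=5000):
    random.seed(0)
    x = [random.uniform(lo, hi) for _ in range(dim)]
    best_x, best_f = list(x), objective(x)
    f, t = best_f, t0
    for _ in range(n_iter):
        # Propose a random perturbation of one coordinate, clipped to the bounds.
        cand = list(x)
        i = random.randrange(dim)
        cand[i] = min(hi, max(lo, cand[i] + random.gauss(0.0, 0.1)))
        f_cand = objective(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if f_cand < f or random.random() < math.exp(-(f_cand - f) / t):
            x, f = cand, f_cand
            if f < best_f:
                best_x, best_f = list(x), f
        t *= cooling                     # geometric cooling schedule
    return best_x, best_f

x_opt, f_opt = simulated_annealing()
print("best objective found:", round(f_opt, 4))
```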

7.
Durability and kinematics are two critical factors which must be considered during total knee replacement (TKR) implant design. It is hypothesized, however, that there exists a competing relationship between these two performance measures, such that improvement of one requires sacrifice with respect to the other. No previous studies have used rigorous and systematic methods to quantify this relationship. During this study, multiobjective design optimization (MOO) using the adaptive weighted sum (AWS) method is used to determine a set of Pareto-optimal implant designs considering durability and kinematics simultaneously. Previously validated numerical simulations and a parametric modeller are used in conjunction with the AWS method in order to generate a durability-versus-kinematics Pareto curve. In terms of kinematics, a design optimized for kinematics alone outperformed a design optimized for durability by 61.8%. In terms of durability, the design optimized for durability outperformed the kinematics-optimized design by 70.6%. Considering the entire Pareto curve, a balanced (1:1) trade-off could be obtained when equal weighting was placed on both performance measures; however improvement of one performance measure required greater sacrifices with respect to the other when the weighting was extremized. For the first time, the competing relationship between durability and kinematics was confirmed and quantified using optimization methods. This information can aid future developments in TKR design and can be expanded to other total joint replacement designs.

8.
Accurate and reliable models are required for a range of unit operations if simulations are to be used for accelerating the design and optimisation of bioprocesses. This paper presents results of pilot-plant studies that have been used to verify process simulations for a sequence of operations comprising cell disruption, fractional protein precipitation and centrifugal separation. These have been tested using the purification of the enzyme alcohol dehydrogenase (ADH) from Saccharomyces cerevisiae as being representative of the recovery of a labile intracellular enzyme. Comparison of pilot-plant against simulated data highlights where improvements to the models are required and has resulted in increased confidence in the simulations for a wide range of conditions including the operational scale and the nature of the starting material. Results demonstrate the effectiveness of the verification approach for the development of reliable predictive models to assess the feasibility of process designs and the performance of a train of bioprocess operations. UCL is the Biotechnology and Biological Sciences Research Council's sponsored Advanced Centre for Biochemical Engineering and the Council's support is gratefully acknowledged. The authors would like to thank Ms. N. Abidi for technical assistance.

9.
Biotechnology Advances, 2019, 37(8): 107444
Photosynthetic biogas upgrading using microalgae provides a promising alternative to commercial upgrading processes as it allows for carbon capture and re-use, improving the sustainability of the process in a circular economy system. A two-step absorption column–photobioreactor system employing an alkaline carbonate solution and flat plate photobioreactors is proposed. Together with process optimisation, the choice of microalgae species is vital to ensure continuous performance with optimal efficiency. In this paper, in addition to critically assessing the system design and operating conditions for optimisation, five criteria are selected for choosing optimal microalgae species for biogas upgrading. These include: ability for mixotrophic growth; high pH tolerance; external carbonic anhydrase activity; high CO2 tolerance; and ease of harvesting. Based on these criteria, five common microalgae species were identified as potential candidates. Of these, Spirulina platensis is deemed the most favourable species. An industrial perspective on the technology further reveals the significant challenges for successful commercial application of microalgal upgrading of biogas, including: a significant land footprint; the need to decrease the microalgae solution recirculation rate; and the selection of a preferable microalgae utilisation pathway.

10.
Soft sensors for on-line biomass measurements
One of the difficulties encountered in control and optimisation of bioprocesses is the lack of reliable on-line sensors for their key state variables. This paper investigates the suitability of using on-line recurrent neural networks to predict biomass concentrations. Input variables of the proposed recurrent neural network are feed rate, liquid volume and dissolved oxygen. Experimental results revealed that the proposed neural network is able to predict biomass concentrations with an accuracy of ±11%.
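A minimal sketch of such a recurrent soft sensor is given below: an Elman-style recurrent cell maps the three on-line inputs (feed rate, liquid volume, dissolved oxygen) to a biomass estimate. The weights are random placeholders; in the paper they would be trained on fermentation data, which is not reproduced here, and the exact network architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 8
W_in = rng.normal(0, 0.3, (n_hidden, n_in))      # input weights (placeholders)
W_rec = rng.normal(0, 0.3, (n_hidden, n_hidden)) # recurrent weights (placeholders)
w_out = rng.normal(0, 0.3, n_hidden)             # linear read-out weights

def soft_sensor(inputs):
    """inputs: array of shape (T, 3) -> biomass estimate at each time step."""
    h = np.zeros(n_hidden)
    estimates = []
    for u in inputs:
        h = np.tanh(W_in @ u + W_rec @ h)        # recurrent state update
        estimates.append(float(w_out @ h))       # biomass estimate read-out
    return np.array(estimates)

# Synthetic on-line measurements over 10 sampling instants.
measurements = np.column_stack([
    np.linspace(0.1, 0.5, 10),     # feed rate (L/h)
    np.linspace(2.0, 3.5, 10),     # liquid volume (L)
    np.linspace(60.0, 30.0, 10),   # dissolved oxygen (%)
])
print(np.round(soft_sensor(measurements), 3))
```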

11.
Determining the regulation of metabolic networks at genome scale is a hard task. It has been hypothesized that biochemical pathways and metabolic networks might have undergone an evolutionary process of optimization with respect to several criteria over time. In this contribution, a multi-criteria approach has been used to optimize parameters for the allosteric regulation of enzymes in a model of a metabolic substrate cycle. This has been carried out by calculating the Pareto set of optimal solutions according to two objectives: the proper direction of flux in a metabolic cycle and the energetic cost of applying the set of parameters. Different Pareto fronts have been calculated for eight different "environments" (specific time courses of end product concentrations). For each resulting front the so-called knee point is identified, which can be considered a preferred trade-off solution. Interestingly, the optimal control parameters corresponding to each of these points also lead to optimal behaviour in all the other environments. By calculating the average of the different parameter sets for the knee solutions most frequently found, a final and optimal consensus set of parameters can be obtained, which is an indication of the existence of a universal regulation mechanism for this system. The implications of such a universal regulatory switch are discussed in the framework of large metabolic networks.
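One common way to identify a knee point, which may differ in detail from the procedure used in the study, is to take the Pareto point farthest from the straight line joining the two extreme solutions. A sketch with an illustrative bi-objective front (flux-direction error versus energetic cost) follows.

```python
import numpy as np

def knee_point(front):
    """Knee = Pareto point with maximal perpendicular distance from the line
    joining the two extreme (anchor) solutions of a bi-objective front."""
    f = np.asarray(front, dtype=float)
    f = f[np.argsort(f[:, 0])]                 # order along the first objective
    a, b = f[0], f[-1]                         # extreme solutions
    line = (b - a) / np.linalg.norm(b - a)
    rel = f - a
    # Perpendicular distance of each point from the line through a and b.
    dist = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0])
    return f[np.argmax(dist)]

# Illustrative front: (flux-direction error, energetic cost), both minimised.
front = [(0.0, 10.0), (0.5, 4.0), (1.0, 2.0), (2.0, 1.2), (4.0, 1.0), (8.0, 0.9)]
print("knee solution:", knee_point(front))
```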

12.
Bioprocess design problems are frequently multivariate and complex. However, they may be visualised by a graphical representation of the design constraints and correlations governing both the process and system under consideration, namely windows of operation. Windows of operation exist at all stages of process design and find use both in the identification of key constraints from limited information and, with more detailed knowledge, in assessing the sensitivity of a process to design or operating changes. In this way windows of operation may be used to help understand and optimise a bioprocess design. In this paper the formulation, development and application of windows of operation are discussed for a range of biological processes including fermentation, protein recovery and biotransformation. UCL is the Biotechnology and Biological Sciences Research Council's Interdisciplinary Research Centre for Biochemical Engineering and the Council's support is gratefully acknowledged.

13.
Optimality models are frequently used in studies of long-distance bird migration to help understand and predict migration routes, stopover strategies and fuelling behaviour in a spatially varying environment. These models typically evaluate bird behaviour by focusing on a single optimization currency, such as total migration time or energy use, without explicitly considering trade-offs between the objectives involved. In this paper, we demonstrate that this classic single-objective approach downplays the importance of variability in bird behaviour. In the light of these considerations, we propose to use a full multi-criteria optimization method to isolate the set of non-dominated, efficient or Pareto optimal solutions. Unlike single-objective optimization, where there is only one combination of bird behaviours maximizing fitness, the Pareto solution set represents a range of optimal solutions to conflicting objectives. Our results demonstrate that this multi-objective approach provides important new ways of analyzing how environmental factors and behavioural constraints have driven the evolution of migratory behaviour.

14.
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.

15.
There have been several proposals on how to apply the ant colony optimization (ACO) metaheuristic to multi-objective combinatorial optimization problems (MOCOPs). This paper proposes a new formulation of these multi-objective ant colony optimization (MOACO) algorithms. This formulation is based on adding specific algorithm components for tackling multiple objectives to the basic ACO metaheuristic. Examples of these components are how to represent multiple objectives using pheromone and heuristic information, how to select the best solutions for updating the pheromone information, and how to define and use weights to aggregate the different objectives. This formulation reveals more similarities than previously thought in the design choices made in existing MOACO algorithms. The main contribution of this paper is an experimental analysis of how particular design choices affect the quality and the shape of the Pareto front approximations generated by each MOACO algorithm. This study provides general guidelines to understand how MOACO algorithms work, and how to improve their design.
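One of the design choices discussed, aggregating per-objective pheromone information with a weight when computing an ant's transition probabilities, can be sketched as follows. The bi-objective instance below is a random toy, and the aggregation rule is one illustrative option rather than any specific MOACO variant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                       # number of nodes in a toy instance
tau1 = rng.uniform(0.1, 1.0, (n, n))        # pheromone w.r.t. objective 1
tau2 = rng.uniform(0.1, 1.0, (n, n))        # pheromone w.r.t. objective 2
eta = rng.uniform(0.1, 1.0, (n, n))         # heuristic information (e.g. 1/distance)
ALPHA, BETA = 1.0, 2.0                      # usual ACO exponents

def transition_probabilities(i, visited, lam):
    """Probability of moving from node i to each unvisited node for weight lam."""
    tau = tau1 ** lam * tau2 ** (1.0 - lam)  # weighted aggregation of pheromones
    scores = (tau[i] ** ALPHA) * (eta[i] ** BETA)
    scores[list(visited)] = 0.0              # exclude already-visited nodes
    return scores / scores.sum()

print(np.round(transition_probabilities(0, {0}, lam=0.3), 3))
```

Sweeping `lam` across the colony (or across iterations) is one of the ways such algorithms spread their search over different regions of the Pareto front.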

16.
In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability versus novelty, affinity versus specificity, activity versus immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not "dominated"; that is, no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, Protein Engineering Pareto FRontier (PEPFR), that hierarchically subdivides the objective space, using appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria.
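The divide-and-conquer idea can be illustrated for two objectives with the sketch below, which repeatedly calls an exact scalarised optimiser (here brute force over a random toy design set) on the region between already-found Pareto points. Because it uses weighted scalarisations it recovers only supported (convex-hull) Pareto points; it is an illustration of the general strategy, not the PEPFR dynamic/integer programming formulation itself.

```python
import numpy as np

rng = np.random.default_rng(1)
designs = rng.uniform(0, 1, size=(200, 2))   # toy design scores (obj1, obj2), minimised

def argmin_weighted(w):
    """Exact optimiser for the scalarised problem min_design w . f(design)."""
    return int(np.argmin(designs @ w))

def divide_and_conquer(i_left, i_right, found):
    # Weight vector normal to the segment joining the two known Pareto points.
    a, b = designs[i_left], designs[i_right]
    w = np.array([a[1] - b[1], b[0] - a[0]])
    if w[0] <= 0 or w[1] <= 0:
        return
    i_new = argmin_weighted(w)
    if i_new in found or np.isclose(designs[i_new] @ w, a @ w):
        return                               # nothing strictly better in between
    found.add(i_new)
    divide_and_conquer(i_left, i_new, found)
    divide_and_conquer(i_new, i_right, found)

i1 = argmin_weighted(np.array([1.0, 0.0]))   # design best for objective 1
i2 = argmin_weighted(np.array([0.0, 1.0]))   # design best for objective 2
pareto = {i1, i2}
divide_and_conquer(i1, i2, pareto)
print("Pareto designs found:", sorted(pareto))
```

The efficiency argument carries over: each recursive call either terminates or discovers a new Pareto design, so the number of optimiser calls stays proportional to the number of designs found.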

17.
Like most real-life processes, the operation of a liquid–solid circulating fluidized bed (LSCFB) system for continuous protein recovery is associated with several objectives, such as maximization of the production rate and recovery of protein and minimization of the amount of solid ion-exchange resin required, all of which need to be optimized simultaneously. In this article, multiobjective optimization of an LSCFB system for continuous protein recovery was carried out using an experimentally validated mathematical model to find the scope for further improvements in its operation. An elitist non-dominated sorting genetic algorithm with its jumping gene adaptation was used to solve a number of bi- and tri-objective optimization problems. The optimization resulted in Pareto-optimal solutions, which provide a broad range of non-dominated alternatives due to the conflicting effects of the operating parameters on the system performance indicators. Significant improvements were achieved; for example, the production rate at optimal operation increased by 33%, using 11% less solid, compared to reported experimental results for the same recovery level. The effects of operating variables on the optimal solutions are discussed in detail. The multiobjective optimization study reported here can be easily extended for the improvement of LSCFB systems for other applications. Biotechnol. Bioeng. 2009;103: 873–890.

18.
Parallel miniaturized stirred tank bioreactors are an efficient tool for "high-throughput bioprocess design." As most industrial bioprocesses are pH-controlled and/or are operated in a fed-batch mode, an exact scale-down of these reactions with continuous dosing of fluids into the miniaturized bioreactors is highly desirable. Here, we present the development, characterization, and application of a novel concept for a highly integrated microfluidic device for a bioreaction block with 48 parallel milliliter-scale stirred tank reactors (V = 12 mL). The device consists of an autoclavable fluidic section to dispense up to three liquids individually per reactor. The fluidic section contains 144 membrane pumps, which are magnetically driven by a clamped-on actuator section. The micropumps are designed to dose 1.6 μL per pump lift. Each micropump enables a continuous addition of liquid with a flow rate of up to 3 mL h⁻¹. Viscous liquids up to a viscosity of 8.2 mPa s (corresponding to a 60% v/v glycerine solution) can be pumped without changes in the flow rates. Thus, nearly all feeding solutions commonly used in bioprocesses can be delivered. The functionality of the first prototype of this microfluidic device was demonstrated by double-sided pH-controlled cultivations of Saccharomyces cerevisiae based on signals of fluorimetric sensors embedded at the bottom of the bioreactors. Furthermore, fed-batch cultivations with constant and exponential feeding profiles were successfully performed. Thus, the presented novel microfluidic device will be a useful tool for parallel and, thus, efficient optimization of controlled fed-batch bioprocesses in small-scale stirred tank bioreactors. This can help to reduce bioprocess development times drastically.
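A quick back-of-the-envelope check of the dosing figures quoted in the abstract (1.6 μL per lift, 3 mL h⁻¹ maximum flow, 12 mL reactor volume) gives the pump frequency and relative feed rate these numbers imply.

```python
# Dosing numbers taken from the abstract; the derived quantities are simple arithmetic.
STROKE_UL = 1.6          # µL dispensed per pump lift
MAX_FLOW_ML_H = 3.0      # mL/h maximum continuous flow per micropump
REACTOR_ML = 12.0        # working volume of each milliliter-scale reactor

lifts_per_hour = MAX_FLOW_ML_H * 1000.0 / STROKE_UL
print("pump frequency at max flow: %.0f lifts/h (about %.2f Hz)"
      % (lifts_per_hour, lifts_per_hour / 3600.0))

# Relative feeding rate at the maximum flow, as a sanity check for fed-batch use.
print("max feed rate: %.1f %% of reactor volume per hour"
      % (100.0 * MAX_FLOW_ML_H / REACTOR_ML))
```

This works out to roughly 1875 lifts per hour (about 0.5 Hz) and a maximum feed of 25% of the reactor volume per hour, which is comfortably above typical fed-batch feeding rates.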

19.
Neurons encounter unavoidable evolutionary trade-offs between multiple tasks. They must consume as little energy as possible while effectively fulfilling their functions. Cells displaying the best performance for such multi-task trade-offs are said to be Pareto optimal, with their ion channel configurations underpinning their functionality. Ion channel degeneracy, however, implies that multiple ion channel configurations can lead to functionally similar behaviour. Therefore, instead of a single model, neuroscientists often use populations of models with distinct combinations of ionic conductances. This approach is called population (database or ensemble) modelling. It remains unclear which ion channel parameters in the vast population of functional models are more likely to be found in the brain. Here we argue that Pareto optimality can serve as a guiding principle for addressing this issue by helping to identify the subpopulations of conductance-based models that perform best for the trade-off between economy and functionality. In this way, the high-dimensional parameter space of neuronal models might be reduced to geometrically simple low-dimensional manifolds, potentially explaining experimentally observed ion channel correlations. Conversely, Pareto inference might also help deduce neuronal functions from high-dimensional Patch-seq data. In summary, Pareto optimality is a promising framework for improving population modelling of neurons and their circuits.

20.
In today's highly competitive and uncertain project environments, it is of crucial importance to develop analytical models and algorithms to schedule and control project activities so that deviations from the project objectives are minimized. This paper addresses integrated scheduling and control in multi-mode project environments. We propose an optimization model that captures the dynamic behavior of projects and integrates optimal control into a practically relevant project scheduling problem. From the scheduling perspective, we address the discrete time/cost trade-off problem, whereas an optimal control formulation is used to capture the effect of project control. Moreover, we develop a solution algorithm for two particular instances of the optimal project control. This algorithm combines a tabu search strategy and nonlinear programming. It is applied to a large-scale test bed and its efficiency is tested by means of computational experiments. To the best of our knowledge, this research is the first application of optimal control theory to multi-mode project networks. The models and algorithms developed in this research are targeted as a support tool for project managers in both scheduling and deciding on the timing and quantity of control activities.
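The scheduling layer, a discrete time/cost trade-off over activity modes, can be illustrated with the tabu search sketch below. Activities are assumed to run in series against a deadline, the mode data are invented, and the optimal-control layer of the paper is not modelled; this is a conceptual sketch, not the paper's algorithm.

```python
# Toy discrete time/cost trade-off: pick one (duration, cost) mode per activity
# so that total cost is minimised while the serial project meets a deadline.
MODES = [  # per activity: list of (duration, cost) options (illustrative data)
    [(4, 10), (3, 16), (2, 25)],
    [(6, 12), (4, 20), (3, 30)],
    [(5, 8),  (4, 14), (2, 28)],
    [(7, 15), (5, 24), (4, 33)],
]
DEADLINE = 16

def evaluate(sol):
    dur = sum(MODES[i][m][0] for i, m in enumerate(sol))
    cost = sum(MODES[i][m][1] for i, m in enumerate(sol))
    return cost + 1000 * max(0, dur - DEADLINE)   # deadline violation penalty

def tabu_search(n_iter=100, tenure=4):
    sol = [0] * len(MODES)                        # start from the cheapest modes
    best, best_val = list(sol), evaluate(sol)
    tabu = {}                                     # (activity, mode) -> iteration until forbidden
    for it in range(n_iter):
        best_move, best_move_val = None, float("inf")
        # Enumerate the single-mode-change neighbourhood of the current solution.
        for i in range(len(MODES)):
            for m in range(len(MODES[i])):
                if m == sol[i]:
                    continue
                cand = list(sol)
                cand[i] = m
                val = evaluate(cand)
                # A tabu move is only allowed if it improves the best solution (aspiration).
                if tabu.get((i, m), -1) > it and val >= best_val:
                    continue
                if val < best_move_val:
                    best_move, best_move_val = (i, m), val
        if best_move is None:
            break
        i, m = best_move
        tabu[(i, sol[i])] = it + tenure           # forbid reverting this activity for a while
        sol[i] = m
        if best_move_val < best_val:
            best, best_val = list(sol), best_move_val
    return best, best_val

modes, objective = tabu_search()
print("selected modes:", modes, "cost plus penalty:", objective)
```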
