Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
Upstream bioprocess characterization and optimization are time- and resource-intensive tasks. In the biopharmaceutical industry, statistical design of experiments (DoE) combined with response surface models (RSMs) is routinely used, neglecting the process trajectories and dynamics. Generating process understanding with time-resolved, dynamic process models makes it possible to understand the impact of temporal deviations and production dynamics, and provides a better understanding of the process variations that stem from the biological subsystem. The authors propose to use DoE studies in combination with hybrid modeling for process characterization. This approach is showcased on Escherichia coli fed-batch cultivations at the 20 L scale, evaluating the impact of three critical process parameters. The performance of a hybrid model is compared to a purely data-driven model and the widely adopted RSM of the process endpoints. Further, the ability of the time-resolved models to simultaneously predict biomass and titer is evaluated. The superior behavior of the hybrid model compared to the pure black-box approaches for process characterization is presented. The evaluation considers important criteria, such as the prediction accuracy of the biomass and titer endpoints as well as the time-resolved trajectories. This showcases the high potential of hybrid models for soft sensing and model predictive control.
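The hybrid-modeling idea in the abstract above, a mechanistic growth backbone plus a learned correction term, can be sketched in a few lines. Everything here (Monod kinetics, the correction polynomial, all coefficients) is an illustrative assumption, not the authors' actual model:

```python
# Minimal hybrid-model sketch: a mechanistic Monod backbone with a
# data-driven correction term standing in for a trained black-box component.
# All kinetic parameters and the correction form are hypothetical.

def monod_mu(S, mu_max=0.4, Ks=0.05):
    """Mechanistic part: Monod specific growth rate (1/h)."""
    return mu_max * S / (Ks + S)

def correction(S, temperature):
    """Placeholder for the learned residual model (invented coefficients)."""
    return -0.01 * (temperature - 37.0) ** 2 * S / (S + 0.1)

def simulate(X0=0.1, S0=10.0, temperature=37.0, feed=0.5, dt=0.1, t_end=10.0):
    """Euler integration of a fed-batch biomass/substrate trajectory."""
    X, S, t = X0, S0, 0.0
    traj = []
    while t < t_end:
        mu = monod_mu(S) + correction(S, temperature)  # hybrid growth rate
        dX = mu * X
        dS = -mu * X / 0.5 + feed      # yield Yxs = 0.5, constant feed rate
        X, S = X + dX * dt, max(S + dS * dt, 0.0)
        t += dt
        traj.append((t, X))
    return traj

traj = simulate()
print(f"final biomass: {traj[-1][1]:.2f} g/L")
```

Because the whole trajectory is simulated, such a model can be queried at any time point, which is exactly what makes it usable for soft-sensing and model predictive control.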

2.
3.
4.
The principle of quality by design (QbD) has been widely applied to biopharmaceutical manufacturing processes. Process characterization is an essential step in implementing the QbD concept to establish the design space and to define the proven acceptable ranges (PAR) for critical process parameters (CPPs). In this study, we present the characterization of a Saccharomyces cerevisiae fermentation process using risk assessment analysis, statistical design of experiments (DoE), and the multivariate Bayesian predictive approach. The critical quality attributes (CQAs) and CPPs were identified with a risk assessment. The statistical model for each attribute was established using the results from the DoE study, with consideration given to interactions between CPPs. Both the conventional overlapping contour plot and the multivariate Bayesian predictive approaches were used to establish the region of process operating conditions where all attributes met their specifications simultaneously. The quantitative Bayesian predictive approach was chosen to define the PARs for the CPPs, which apply to the manufacturing control strategy. Experience from the 10,000 L manufacturing-scale process validation, including 64 continued process verification batches, indicates that the CPPs remain in a state of control and within the established PARs. The end-product quality attributes were within their drug substance specifications. The probability generated with the Bayesian approach was also used as a tool to assess CPP deviations. This approach can be extended to characterize other production processes and to quantify a reliable operating region. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:799–812, 2016
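The multivariate Bayesian predictive approach described above boils down to computing, at each candidate operating point, the probability that the quality attributes meet specification under predictive uncertainty. A minimal Monte Carlo sketch with an invented single-CQA response model (not the study's fitted DoE models):

```python
import random

# Monte Carlo sketch of a predictive probability of meeting specification.
# The quadratic response surface, spec limit, and noise level are all
# illustrative assumptions.

random.seed(0)

def predict_cqa(temp, ph, noise_sd=0.3):
    """Hypothetical response model for one CQA with predictive uncertainty."""
    mean = 10.0 - 0.5 * (temp - 30.0) ** 2 - 2.0 * (ph - 5.0) ** 2
    return mean + random.gauss(0.0, noise_sd)

def prob_in_spec(temp, ph, lower=8.0, n=2000):
    """P(CQA >= lower spec limit) at a given operating point."""
    hits = sum(predict_cqa(temp, ph) >= lower for _ in range(n))
    return hits / n

# High probability at the center of the design space, low toward its edge.
print(prob_in_spec(30.0, 5.0), prob_in_spec(32.0, 5.5))
```

Operating points where the probability stays above a chosen threshold (e.g., 0.9) for all CQAs simultaneously would constitute the PAR.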

5.
《Trends in biotechnology》2014,32(6):329-336
Increasingly elaborate and voluminous datasets are generated by the (bio)pharmaceutical industry and pose a major challenge for the application of PAT and QbD principles. Multivariate data analysis (MVDA) is required to delineate relevant process information from large multi-factorial and multi-collinear datasets. Here, the key role of MVDA for industrial (bio)process data is discussed, with a focus on the progress and limitations of MVDA as a PAT solution for biopharmaceutical cultivation processes. MVDA-based models have proven useful and should be routinely implemented for bioprocesses. It is concluded that although the highest level of PAT, with real-time process control within the design space during manufacturing, has not yet been reached, MVDA will be central to reaching this ultimate objective for cell cultivations.
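The workhorse of the MVDA methods discussed above is projection onto latent variables, as in PCA. A stdlib-only sketch that recovers the dominant loading of a small synthetic "batch × sensor" matrix by power iteration (the data, dimensions, and planted correlation are illustrative):

```python
import random

# PCA via power iteration on a tiny synthetic process dataset: two
# correlated "sensors" plus one pure-noise channel. The first loading
# should recover the planted 1:2 correlation structure.

random.seed(1)

n = 100
data = []
for _ in range(n):
    t = random.gauss(0, 1)                       # latent process state
    data.append([t + random.gauss(0, 0.1),       # sensor 1 ~ t
                 2 * t + random.gauss(0, 0.1),   # sensor 2 ~ 2t
                 random.gauss(0, 0.1)])          # sensor 3 = noise

# Mean-center columns.
p = len(data[0])
means = [sum(row[j] for row in data) / n for j in range(p)]
X = [[row[j] - means[j] for j in range(p)] for row in data]

# Sample covariance matrix.
C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
      for b in range(p)] for a in range(p)]

# Power iteration for the first principal component (loading vector).
v = [1.0] * p
for _ in range(100):
    w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

print("first loading:", [round(x, 2) for x in v])
```

In practice one would use a library PCA/PLS implementation on the full multi-collinear dataset; the point here is only that the loading vector exposes which sensors co-vary, which is what makes MVDA interpretable for process data.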

6.

7.
Despite the advantages of mathematical bioprocess modeling, successful model implementation starts with experimental planning and accordingly can fail at this early stage. In this study, two different modeling approaches (mechanistic and hybrid) based on a four-dimensional antibody-producing CHO fed-batch process are compared. Overall, 33 experiments are performed in the fractional factorial four-dimensional design space and separated into four different complex data partitions subsequently used for model comparison and evaluation. The mechanistic model demonstrates the advantage of prior knowledge (i.e., known equations), yielding informative value relatively independently of the utilized data partition. The hybrid approach displays a higher data dependency but simultaneously yields a higher accuracy on all data partitions. Furthermore, our results demonstrate that, independent of the chosen modeling framework, a smart selection of only four initial experiments can already yield a very good representation of the full design space. Academic and industry researchers are recommended to pay more attention to experimental planning to maximize the process understanding obtained from mathematical modeling.

8.
刘志凤  王勇 《生物工程学报》2021,37(5):1494-1509
In the 1990s, Bailey, Stephanopoulos, and others proposed the concept of classical metabolic engineering: using recombinant DNA technology to modify metabolic networks in order to improve cellular performance and increase the yield of target products. In the thirty years since metabolic engineering was born, the life sciences have flourished, and new disciplines such as genomics, systems biology, and synthetic biology have continually emerged, injecting new substance and vitality into the development of metabolic engineering. Classical metabolic engineering research has entered an unprecedented stage of systems metabolic engineering. The application of synthetic biology tools and strategies, including omics technologies, genome-scale metabolic models, part assembly, circuit design, dynamic control, and genome editing, has greatly enhanced the ability to design and construct complex metabolism; the involvement of machine learning and the combination of evolutionary engineering with metabolic engineering have opened new directions for the future of systems metabolic engineering. This article reviews the development of metabolic engineering over the past 30 years and introduces the continually evolving theories and methods of metabolic engineering and their applications.

9.
Various types of unwanted and uncontrollable signal variation in MS-based metabolomics and proteomics datasets severely disturb the accuracy of metabolite and protein profiling. Therefore, pooled quality control (QC) samples are often employed in quality management processes, which are indispensable to the success of metabolomics and proteomics experiments, especially in high-throughput cases and long-term projects. However, data consistency and QC sample stability are still difficult to guarantee because of the complexity of experimental operations and differences between experimenters. Worse still, numerous proteomics projects do not take QC samples into consideration at the beginning of experimental design. Herein, a powerful and interactive web-based software tool, named pseudoQC, is presented to simulate QC sample data for actual metabolomics and proteomics datasets using four different machine learning-based regression methods. The simulated data were used for correction and normalization of two published datasets, and the obtained results suggest that nonlinear regression methods perform better than linear ones. Additionally, the software is available through a web-based graphical user interface and can be utilized by scientists without a bioinformatics background. pseudoQC is open-source and freely available at https://www.omicsolution.org/wukong/pseudoQC/.
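The core idea behind QC-based correction tools such as pseudoQC is to model the signal drift as a function of injection order (normally learned from the QC samples) and subtract it from all measurements. The sketch below uses a plain least-squares line, deliberately simpler than the four regression methods the tool offers, with invented numbers:

```python
import random

# Drift-correction sketch: simulate a run where signal intensity decays
# with injection order, fit the trend by least squares, and remove it.
# Drift rate, noise level, and run length are illustrative.

random.seed(2)

n = 50
order = list(range(n))
drift = [100.0 - 0.8 * i + random.gauss(0, 1.0) for i in order]

# Least-squares fit of intensity vs injection order
# (in a real run, QC-sample injections would supply these points).
mx = sum(order) / n
my = sum(drift) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(order, drift))
         / sum((x - mx) ** 2 for x in order))
intercept = my - slope * mx

# Correct each measurement back to the run-start level.
corrected = [y - slope * x for x, y in zip(order, drift)]

spread_before = max(drift) - min(drift)
spread_after = max(corrected) - min(corrected)
print(round(spread_before, 1), round(spread_after, 1))
```

After correction the remaining spread is just the measurement noise, which is why the abstract reports better profiling accuracy once the simulated QC trend is removed; nonlinear regressors simply capture non-straight-line drift better.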

10.

11.
Continuous biopharmaceutical manufacturing is currently a field of intense research due to its potential to make the entire production process better suited to the modern, ever-evolving biopharmaceutical market. Compared to traditional batch manufacturing, continuous bioprocessing is more efficient, adjustable, and sustainable and has reduced capital costs. However, despite its clear advantages, continuous bioprocessing is yet to be widely adopted in commercial manufacturing. This article provides an overview of the technological roadblocks to wider adoption and points out recent advances that could help overcome them. In total, three key areas for improvement are identified: Quality by Design (QbD) implementation, integration of upstream and downstream technologies, and data and knowledge management. First, the challenges to QbD implementation are explored; specifically, process control, process analytical technology (PAT), critical process parameter (CPP) identification, and mathematical models for bioprocess control and design are recognized as crucial for successful QbD realization. Next, the difficulties of end-to-end process integration are examined, with a particular emphasis on downstream processing. Finally, the problem of data and knowledge management and its potential solutions are outlined, where ontologies and data standards are pointed out as key drivers of progress.

12.
Single chain variable fragment-IgGs (scFv-IgG) are a class of bispecific antibodies consisting of two single chain variable fragments (scFv) fused to an intact IgG molecule. A common trend observed for expression of scFv-IgGs in mammalian cell culture is a higher level of aggregates (10%–30%) compared to mAbs, which results in lower purification yields in order to meet product quality targets. Furthermore, the high aggregate levels also pose robustness risks to a conventional mAb three-column platform purification process, which relies only on the polishing steps (e.g., cation exchange chromatography [CEX]) for aggregate removal. Protein A chromatography with pH gradient elution, high performance tangential flow filtration (HP-TFF), and calcium phosphate precipitation were evaluated at the bench scale as means of introducing orthogonal aggregate removal capabilities into other aspects of the purification process. The two most promising process variants, namely Protein A pH gradient elution followed by calcium phosphate precipitation, were evaluated at pilot scale, demonstrating comparable performance. Implementing Protein A chromatography with gradient elution and/or calcium phosphate precipitation removed a sufficient portion of the aggregate burden prior to the CEX polishing step, enabling CEX to be operated robustly under conditions favoring higher monomer yield. From starting aggregate levels ranging from 15% to 23% in the conditioned media, levels were reduced to between 2% and 3% at the end of the CEX step. The overall yield for the optimal process was 71%. The results of this work suggest that an improved three-column mAb platform-like purification process for purification of high-aggregate scFv-IgG bispecific antibodies is feasible. © 2018 The Authors. Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers. Biotechnol. Prog., 35: e2720, 2019

13.
Background: In recent years, the availability of high-throughput technologies, the establishment of large molecular patient data repositories, and advances in computing power and storage have allowed the elucidation of complex mechanisms implicated in therapeutic response in cancer patients. The breadth and depth of such data, alongside experimental noise and missing values, require a sophisticated human-machine interaction that allows effective learning from complex data and accurate forecasting of future outcomes, ideally embedded in the core of machine learning design. Objective: In this review, we discuss machine learning techniques utilized for modeling of treatment response in cancer, including random forests, support vector machines, neural networks, and linear and logistic regression. We overview their mathematical foundations and discuss their limitations and alternative approaches in light of their application to therapeutic response modeling in cancer. Conclusion: We hypothesize that the increase in the number of patient profiles and the potential temporal monitoring of patient data will establish even more complex techniques, such as deep learning and causal analysis, as central players in therapeutic response modeling.
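Of the methods surveyed in the review above, logistic regression is the simplest to show end to end. A self-contained sketch fit by gradient descent on a synthetic "biomarker vs. responder" dataset (all data, coefficients, and hyperparameters are illustrative, not drawn from any real cohort):

```python
import math
import random

# Logistic regression from scratch: classify synthetic patients as
# responders (1) or non-responders (0) from a single biomarker value.
# Responders are drawn around +1, non-responders around -1.

random.seed(3)
data = ([(random.gauss(1.0, 1.0), 1) for _ in range(100)]
        + [(random.gauss(-1.0, 1.0), 0) for _ in range(100)])

w, b, lr = 0.0, 0.0, 0.1
for _ in range(300):
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted P(responder)
        gw += (p - y) * x                          # gradient of log-loss
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

accuracy = sum((1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5) == (y == 1)
               for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

With overlapping class distributions like these, accuracy plateaus well below 100%, which is precisely the kind of irreducible noise the review cites as a motivation for more expressive models and careful experimental design.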

14.
Quality by design (QbD) is a structured approach to designing processes that yield a quality product. Knowledge and process understanding cannot be achieved without proper experimental data; hence, requirements for the measurement error and measurement frequency of bioprocess variables have to be defined. In this contribution, a model-based approach is used to investigate the factors affecting calculated rates in order to predict the information obtainable from real-time measurements (= signal quality). Measurement error, biological activity, and averaging window (= period of observation) were identified as the biggest impact factors on signal quality. Moreover, signal quality was set in context with a quantifiable measure using statistical error testing, which can be used as a benchmark for process analytics and exploitation of data. The results were validated with data from an E. coli batch process. This approach is useful for estimating beforehand which process dynamics can be observed with a given bioprocess setup and sampling strategy.
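The averaging-window effect described above can be illustrated directly: for white measurement noise, the error of a finite-difference rate estimate shrinks in proportion to the window length, so widening the window trades temporal resolution for signal quality. A small simulation with assumed noise levels (not the paper's E. coli data):

```python
import random
import statistics

# Rate-error sketch: estimate a constant true rate by differencing two
# noisy measurements separated by an averaging window, and measure the
# spread of the estimate. True rate and noise SD are illustrative.

random.seed(4)

def rate_error(window_h, true_rate=0.5, meas_sd=0.05, n=2000):
    """SD of a finite-difference rate estimate over a window of given hours."""
    errors = []
    for _ in range(n):
        y0 = 0.0 + random.gauss(0, meas_sd)
        y1 = true_rate * window_h + random.gauss(0, meas_sd)
        errors.append((y1 - y0) / window_h - true_rate)
    return statistics.stdev(errors)

# For white noise, quadrupling the window cuts the rate error ~fourfold.
print(round(rate_error(0.5), 3), round(rate_error(2.0), 3))
```

This is the quantifiable trade-off the abstract refers to: given a sensor's measurement error and a sampling strategy, one can compute in advance whether a process dynamic of a given magnitude will be statistically distinguishable from noise.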

15.
By analyzing the workflow and quality control status of each stage of clinical laboratory testing, a new management and quality control workflow for the testing service was formulated, and each stage of the workflow was implemented in a digital system. The construction of a digital management and quality control system covering the entire testing workflow effectively improved the efficiency, quality, and safety of laboratory work and further promoted the hospital's overall level of medical quality management.

16.
Tangential flow microfiltration (MF) is a cost-effective and robust bioprocess separation technique, but successful full-scale implementation is hindered by the empirical, trial-and-error nature of scale-up. We present an integrated approach leveraging at-line process analytical technology (PAT) and mass balance based modeling to de-risk MF scale-up. Chromatography-based PAT was employed to improve the consistency of an MF step that had been a bottleneck in the process used to manufacture a therapeutic protein. A 10-min reverse phase ultra high performance liquid chromatography (RP-UPLC) assay was developed to provide at-line monitoring of protein concentration. The method was successfully validated, and its performance was comparable to previously validated methods. The PAT tool revealed areas of divergence from a mass balance based model, highlighting specific opportunities for process improvement. Adjustment of appropriate process controls led to improved operability and significantly increased yield, providing a successful example of PAT deployment in the downstream purification of a therapeutic protein. The general approach presented here should be broadly applicable for reducing risk during scale-up of filtration processes and should be suitable for feed-forward and feed-back process control. © 2015 American Institute of Chemical Engineers Biotechnol. Prog., 32:108–115, 2016
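The mass balance at the heart of such a filtration model is simple when the protein is fully retained by the membrane: protein mass is conserved while permeate volume is removed, so concentration follows C = C0 · V0 / V. A toy batch-concentration sketch with assumed volumes and a constant permeate flux (not the paper's process values):

```python
# Mass-balance sketch of a tangential flow filtration concentration step.
# Assumptions (all illustrative): full protein retention, constant
# permeate flow, well-mixed retentate.

V0, C0 = 10.0, 2.0           # starting retentate volume (L) and conc (g/L)
flux = 0.5                   # permeate flow (L/min)
dt = 0.1                     # time step (min)

mass = C0 * V0               # protein mass (g), conserved throughout
V, C, t = V0, C0, 0.0
while V > 2.0:               # run until a 5x volume reduction
    V -= flux * dt           # permeate removed; protein stays behind
    C = mass / V             # mass balance fixes the concentration
    t += dt

print(f"{t:.1f} min to 5x concentration, final conc {C:.1f} g/L")
```

Comparing at-line concentration measurements (e.g., from the RP-UPLC assay) against this predicted C(t) curve is what exposes deviations such as membrane fouling or product losses, which is how the PAT tool in the abstract located opportunities for improvement.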

17.
18.
19.
The burgeoning pipeline of new biologic drugs has increased the need for high-throughput process characterization to use process development resources efficiently. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250 system with a 10-factor DSD to demonstrate a systematic experimental workflow for efficiently characterizing an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers Biotechnol. Prog., 31:1388–1395, 2015

20.
The development of a biopharmaceutical production process usually occurs sequentially, and tedious optimization of each individual unit operation is very time-consuming. In this approach, the conditions established as optimal for one step serve as input for the following step. Yet, this strategy does not consider potential interactions between a priori distant process steps and therefore cannot guarantee optimal overall process performance. To overcome these limitations, we established a smart approach to develop and utilize integrated process models using machine learning techniques and genetic algorithms. We evaluated the application of the data-driven models to explore potential efficiency increases and compared them to a conventional development approach for one of our development products. First, we developed a data-driven integrated process model using gradient boosting machines and Gaussian processes as machine learning techniques and a genetic algorithm as the recommendation engine for two downstream unit operations, namely solubilization and refolding. Through projection of the results into our large-scale facility, we predicted a twofold increase in productivity. Second, we extended the model to a three-step model by including the capture chromatography. Here, depending on the baseline process chosen for comparison, we obtained between a 50% and 100% increase in productivity. These data show the successful application of machine learning techniques and optimization algorithms for downstream process development. Finally, our results highlight the importance of considering integrated process models for the whole process chain, including all unit operations.
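The combination described above, a data-driven surrogate searched by a genetic algorithm, can be sketched with a toy two-parameter "process" standing in for the trained gradient-boosting/Gaussian-process models. The coupled yield surface below is invented for illustration; its only realistic feature is that the optimum of the second step depends on the first, which is exactly what sequential optimization misses:

```python
import random

# Genetic-algorithm sketch over an integrated two-unit-operation surrogate.
# Parameter names and the yield surface are hypothetical.

random.seed(5)

def overall_yield(solubilization_ph, refold_time):
    """Invented coupled response: the best refold time shifts with pH."""
    return (1.0 - 0.1 * (solubilization_ph - 8.0) ** 2
                - 0.05 * (refold_time - (12.0 - solubilization_ph)) ** 2)

def ga(pop_size=40, generations=60):
    pop = [(random.uniform(6, 10), random.uniform(0, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: overall_yield(*ind), reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2 + random.gauss(0, 0.1),  # crossover
                     (a[1] + b[1]) / 2 + random.gauss(0, 0.1))  # + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: overall_yield(*ind))

best = ga()
print("best operating point:", [round(x, 2) for x in best])
```

Because the surrogate couples the two steps, the recommended operating point differs from what two independent per-step optimizations would return, which is the efficiency gain the abstract reports when extending the model across the process chain.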


Copyright©北京勤云科技发展有限公司  京ICP备09084417号