Similar Documents
20 similar documents found.
2.
Chromatography operations are identified as critical steps in a monoclonal antibody (mAb) purification process and can represent a significant proportion of the purification material costs. This becomes even more critical with increasing product titers that result in higher mass loads onto chromatography columns, potentially causing capacity bottlenecks. In this work, a mixed‐integer nonlinear programming (MINLP) model was created and applied to an industrially relevant case study to optimize the design of a facility by determining the most cost‐effective chromatography equipment sizing strategies for the production of mAbs. Furthermore, the model was extended to evaluate the ability of a fixed facility to cope with higher product titers up to 15 g/L. Examination of the characteristics of the optimal chromatography sizing strategies across different titer values enabled the identification of the maximum titer that the facility could handle using a sequence of single column chromatography steps as well as multi‐column steps. The critical titer levels for different ratios of upstream to downstream trains where multiple parallel columns per step resulted in the removal of facility bottlenecks were identified. Different facility configurations in terms of number of upstream trains were considered and the trade‐off between their cost and ability to handle higher titers was analyzed. The case study insights demonstrate that the proposed modeling approach, combining MINLP models with visualization tools, is a valuable decision‐support tool for the design of cost‐effective facility configurations and to aid facility fit decisions. © 2013 The Authors. Published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 29:1472–1483, 2013
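To make the sizing decision concrete, the toy sketch below enumerates a small discrete design space (column diameter x cycles per batch) and keeps the cheapest design whose resin capacity covers the harvest mass. It is not the authors' MINLP model, and every parameter value (titer, harvest volume, binding capacity, resin price, candidate diameters) is a hypothetical placeholder.

```python
# Minimal sketch of the discrete chromatography-sizing trade-off: enumerate
# (diameter, cycles) designs and pick the cheapest feasible one. NOT the
# authors' MINLP model; all numbers are invented for illustration.
import math

TITER_G_PER_L = 5.0          # product titer (hypothetical)
HARVEST_L = 2_000            # harvest volume per batch
BED_HEIGHT_CM = 20.0
CAPACITY_G_PER_L = 35.0      # dynamic binding capacity of the resin
RESIN_PRICE_PER_L = 12_000.0 # $/L resin (hypothetical)

DIAMETERS_CM = [30, 45, 60, 80, 100]   # available column diameters
CYCLES = [1, 2, 3, 4, 5, 6]            # cycles per batch

batch_mass_g = TITER_G_PER_L * HARVEST_L

def column_volume_l(diameter_cm):
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return area_cm2 * BED_HEIGHT_CM / 1000.0  # cm^3 -> L

best = None
for d in DIAMETERS_CM:
    for n in CYCLES:
        cv = column_volume_l(d)
        # feasibility: the whole batch mass must fit across the cycles
        if n * cv * CAPACITY_G_PER_L < batch_mass_g:
            continue
        cost = cv * RESIN_PRICE_PER_L   # resin investment for this design
        if best is None or cost < best[0]:
            best = (cost, d, n)

cost, d, n = best
print(f"cheapest feasible design: {d} cm column, {n} cycles, "
      f"resin cost ${cost:,.0f}")
```

Sweeping the titer upward in such a loop until no design remains feasible is one simple way to expose the kind of facility limit the abstract discusses.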

3.
Increases in cell culture titers in existing facilities have prompted efforts to identify strategies that alleviate purification bottlenecks while controlling costs. This article describes the application of a database‐driven dynamic simulation tool to identify optimal purification sizing strategies and visualize their robustness to future titer increases. The tool harnessed the benefits of MySQL to capture the process, business, and risk features of multiple purification options and to better manage the large datasets required for uncertainty analysis and optimization. The database was linked to a discrete‐event simulation engine to model the dynamic features of biopharmaceutical manufacture and the impact of resource constraints. For a given titer, the tool performed brute‐force optimization to identify optimal purification sizing strategies that minimized the batch material cost while maintaining the schedule. The tool was applied to industrial case studies based on a platform monoclonal antibody purification process in a multisuite clinical scale manufacturing facility. The case studies assessed the robustness of optimal strategies to batch‐to‐batch titer variability and extended this to assess the long‐term fit of the platform process as titers increase from 1 to 10 g/L, given a range of equipment sizes available to enable scale intensification efforts. Novel visualization plots consisting of multiple Pareto frontiers with tie‐lines connecting the positions of optimal configurations over a given titer range were constructed. These enabled rapid identification of robust purification configurations given titer fluctuations, as well as the facility limit that the purification suites could handle in terms of the maximum titer and hence harvest load. © 2012 American Institute of Chemical Engineers Biotechnol. Prog., 28: 1019–1028, 2012
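The loop below sketches what such a brute-force search looks like for a single step: enumerate candidate equipment sizes, discard any that violate the batch schedule or the mass capacity, keep the cheapest survivor, and sweep the titer to expose the facility limit. It mirrors only the spirit of the described tool (no MySQL database or discrete-event engine), and every number is invented.

```python
# Illustrative brute-force sizing search across titers. All parameters
# (areas, costs, fluxes, capacities, schedule) are hypothetical.
HARVEST_L = 2_000
SCHEDULE_H = 24.0                     # time budget for the step

AREAS_M2 = [1, 2, 5, 10, 20]          # candidate equipment sizes
COST_PER_M2 = 3_000.0                 # $/m^2 (hypothetical)
FLUX_L_PER_M2_H = 50.0                # throughput per unit area
CAPACITY_G_PER_M2 = 500.0             # mass capacity per m^2 (hypothetical)

def optimise(titer_g_l):
    mass_g = titer_g_l * HARVEST_L
    feasible = []
    for area in AREAS_M2:
        time_h = HARVEST_L / (area * FLUX_L_PER_M2_H)
        if time_h > SCHEDULE_H:                 # schedule constraint
            continue
        if area * CAPACITY_G_PER_M2 < mass_g:   # mass-capacity constraint
            continue
        feasible.append((area * COST_PER_M2, area, time_h))
    return min(feasible) if feasible else None  # cheapest feasible option

for titer in [1, 2, 5, 10]:
    result = optimise(titer)
    if result is None:
        print(f"titer {titer:>2} g/L: no feasible sizing -> facility limit")
    else:
        cost, area, time_h = result
        print(f"titer {titer:>2} g/L: {area} m^2, {time_h:4.1f} h, ${cost:,.0f}")
```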

4.
Biomolecular simulations at millisecond and longer time‐scales can provide vital insights into functional mechanisms. Because post‐simulation analyses of such large trajectory datasets can be a limiting factor in obtaining biological insights, there is an emerging need to identify key dynamical events and to relate these events to the biological function online, that is, as the simulations are progressing. Recently, we introduced a novel computational technique, quasi‐anharmonic analysis (QAA) (Ramanathan et al., PLoS One 2011;6:e15827), for partitioning the conformational landscape into a hierarchy of functionally relevant sub‐states. The unique capabilities of QAA are enabled by exploiting anharmonicity in the form of fourth‐order statistics for characterizing atomic fluctuations. In this article, we extend QAA for analyzing long time‐scale simulations online. In particular, we present HOST4MD, a higher‐order statistical toolbox for molecular dynamics simulations, which (1) identifies key dynamical events as simulations are in progress, (2) explores potential sub‐states, and (3) identifies conformational transitions that enable the protein to access those sub‐states. We demonstrate HOST4MD on microsecond timescale simulations of the enzyme adenylate kinase in its apo state. HOST4MD identifies several conformational events in these simulations, revealing how the intrinsic coupling between the three subdomains (LID, CORE, and NMP) changes during the simulations. Further, it also identifies an inherent asymmetry in the opening/closing of the two binding sites. We anticipate that HOST4MD will provide a powerful and extensible framework for detecting biophysically relevant conformational coordinates from long time‐scale simulations. Proteins 2012. © 2012 Wiley Periodicals, Inc.
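A minimal numerical illustration of the fourth-order idea: harmonic (Gaussian) fluctuations have zero excess kurtosis, so coordinates with large fourth-order statistics flag anharmonic motions such as hopping between sub-states. The real QAA/HOST4MD pipeline performs a full higher-order decomposition; this sketch only ranks individual synthetic coordinates.

```python
# Toy version of the fourth-order-statistics criterion behind QAA: rank
# trajectory coordinates by excess kurtosis to flag anharmonic fluctuations.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n_frames, n_coords = 5_000, 6

# synthetic "trajectory": mostly harmonic (Gaussian) coordinates ...
traj = rng.normal(size=(n_frames, n_coords))
# ... plus one anharmonic coordinate hopping between two sub-states
hop = rng.choice([-2.0, 2.0], size=n_frames)
traj[:, 3] = hop + 0.3 * rng.normal(size=n_frames)

fluct = traj - traj.mean(axis=0)           # remove the mean structure
k4 = kurtosis(fluct, axis=0, fisher=True)  # excess kurtosis, ~0 if Gaussian

for i, k in enumerate(k4):
    tag = "  <-- anharmonic (two sub-states)" if abs(k) > 0.5 else ""
    print(f"coordinate {i}: excess kurtosis {k:+.2f}{tag}")
```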

5.
A mixture of multivariate contaminated normal distributions is developed for model‐based clustering. In addition to the parameters of the classical normal mixture, our contaminated mixture has, for each cluster, one parameter controlling the proportion of mild outliers and another specifying the degree of contamination. Crucially, these parameters do not have to be specified a priori, adding flexibility to our approach. Parsimony is introduced via eigen‐decomposition of the component covariance matrices, and sufficient conditions for the identifiability of all the members of the resulting family are provided. An expectation‐conditional maximization algorithm is outlined for parameter estimation, and various implementation issues are discussed. Using a large‐scale simulation study, the behavior of the proposed approach is investigated and a comparison with well‐established finite mixtures is provided. The performance of this novel family of models is also illustrated on artificial and real data.
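For readers unfamiliar with the contaminated normal, the sketch below evaluates the density of a single component: a two-part mixture with shared mean, a proportion alpha of good points, and covariance inflated by a factor eta > 1 for the mild outliers. The paper's ECM estimation and eigen-decomposed parsimony are not reproduced; alpha and eta are set by hand here rather than estimated.

```python
# One component of a contaminated normal mixture: alpha * N(mu, Sigma)
# + (1 - alpha) * N(mu, eta * Sigma). Estimation (ECM) is not shown.
import numpy as np
from scipy.stats import multivariate_normal

def contaminated_normal_pdf(x, mu, sigma, alpha=0.95, eta=4.0):
    """alpha: proportion of good observations; eta: degree of contamination."""
    good = multivariate_normal.pdf(x, mean=mu, cov=sigma)
    bad = multivariate_normal.pdf(x, mean=mu, cov=eta * sigma)
    return alpha * good + (1.0 - alpha) * bad

mu = np.zeros(2)
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])

# the contaminated density keeps mass in the tails, where mild outliers live
for point in [np.zeros(2), np.array([4.0, 4.0])]:
    p = contaminated_normal_pdf(point, mu, sigma)
    g = multivariate_normal.pdf(point, mean=mu, cov=sigma)
    print(f"x={point}: contaminated pdf={p:.2e}, plain normal pdf={g:.2e}")
```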

6.
Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small‐scale model systems. Because of the importance of the results derived from these studies, the small‐scale model should be predictive of the large scale. Typically, small‐scale bioreactors, which are considered superior to shake flasks in simulating large‐scale bioreactors, are used as the scale‐down models for characterizing mammalian cell culture processes. In this article, we describe a case study in which a cell culture unit operation run in large‐scale bioreactors using one‐sided pH control, together with its satellites (small‐scale runs conducted using the same post‐inoculation cultures and nutrient feeds) in 3‐L bioreactors and shake flasks, indicated that shake flasks mimicked the large‐scale performance better than 3‐L bioreactors. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3‐L scale‐down model. Relevant statistical techniques such as principal component analysis, partial least squares, orthogonal partial least squares, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at the different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that the observed similarities between 15,000‐L and shake flask runs, and the differences between 15,000‐L and 3‐L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at the 3‐L scale. By reducing the initial sparge rate in the 3‐L bioreactors, process performance and product quality data moved closer to those of the large scale. © 2015 American Institute of Chemical Engineers Biotechnol. Prog., 31:1370–1380, 2015
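As a generic illustration of this kind of multivariate assessment (not the authors' dataset or their full PLS/OPLS-DA workflow), the sketch below projects synthetic run profiles from three scales onto principal components: scales with similar behavior co-locate, while a systematically shifted scale separates along the first component.

```python
# PCA score comparison of runs from three scales on synthetic data. The
# shift applied to the "3L" runs stands in for a systematic difference
# (e.g. a different pCO2/pH trajectory); all values are invented.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_vars = 8  # hypothetical process/product-quality variables per run

base = rng.normal(size=n_vars)
runs, labels = [], []
for scale, shift, n in [("15000L", 0.0, 6), ("flask", 0.2, 6), ("3L", 2.0, 6)]:
    for _ in range(n):
        runs.append(base + shift + 0.3 * rng.normal(size=n_vars))
        labels.append(scale)

scores = PCA(n_components=2).fit_transform(np.array(runs))
for scale in ("15000L", "flask", "3L"):
    pts = scores[[i for i, s in enumerate(labels) if s == scale]]
    print(f"{scale:>7}: mean PC1 = {pts[:, 0].mean():+.2f}")
```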

7.
The bioreactor volume at which one primary clarification technology becomes preferable to another is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale‐up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi‐stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi‐stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ~2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi‐product facility selected multi‐stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale‐up effects, and process robustness are examined. © 2013 The Authors. American Institute of Chemical Engineers Biotechnol. Prog., 29:1239–1245, 2013
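A back-of-envelope version of the cost comparison: depth-filtration cost grows roughly linearly with harvest volume (consumables), while centrifugation adds a large per-batch fixed cost but a low variable cost, so the two curves cross at some volume. All dollar figures below are invented for illustration; only the qualitative crossover behavior reflects the abstract.

```python
# Toy clarification cost model: linear depth-filtration cost vs fixed-plus-
# variable centrifugation cost. Every number is hypothetical.
DEPTH_FILTER_COST_PER_L = 6.0        # $/L, multi-stage depth filtration
CENTRIFUGE_FIXED_PER_BATCH = 8_000.0 # $/batch, amortised capital + cleaning
CENTRIFUGE_VARIABLE_PER_L = 2.0      # $/L, polish depth filter after the disc stack

def depth_filtration_cost(volume_l):
    return DEPTH_FILTER_COST_PER_L * volume_l

def centrifugation_cost(volume_l):
    return CENTRIFUGE_FIXED_PER_BATCH + CENTRIFUGE_VARIABLE_PER_L * volume_l

# volume at which the two cost curves intersect
crossover_l = CENTRIFUGE_FIXED_PER_BATCH / (
    DEPTH_FILTER_COST_PER_L - CENTRIFUGE_VARIABLE_PER_L)
print(f"crossover volume: {crossover_l:,.0f} L")

for v in (500, 1_000, 2_000, 5_000, 10_000):
    d, c = depth_filtration_cost(v), centrifugation_cost(v)
    winner = "depth filtration" if d < c else "centrifugation"
    print(f"{v:>6} L: depth ${d:>8,.0f} vs centrifuge ${c:>8,.0f} -> {winner}")
```

With these made-up numbers the crossover happens to land at 2,000 L, echoing the scale at which the abstract reports similar costs for the two options.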

8.
Disparity‐through‐time analyses can be used to determine how morphological diversity changes in response to mass extinctions, or to investigate the drivers of morphological change. These analyses are routinely applied to palaeobiological datasets, yet, although there is much discussion about how best to calculate disparity, there has been little consideration of how taxa should be sub‐sampled through time. Standard practice is to group taxa into discrete time bins, often based on stratigraphic periods. However, this can introduce biases when bins are of unequal size, and it implicitly assumes a punctuated model of evolution. In addition, many time bins may have few or no taxa, meaning that disparity cannot be calculated for the bin, making it harder to complete downstream analyses. Here we describe a different method to complement the disparity‐through‐time tool‐kit: time‐slicing. This method uses a time‐calibrated phylogenetic tree to sample disparity‐through‐time at any fixed point in time rather than binning taxa. It uses all available data (tips, nodes and branches) to increase the power of the analyses, specifies the implied model of evolution (punctuated or gradual), and is implemented in R. We test the time‐slicing method on four example datasets and compare its performance in common disparity‐through‐time analyses. We find that the way we sub‐sample taxa through time can change our interpretation of the results of disparity‐through‐time analyses. We advise using multiple methods for time sub‐sampling taxa, rather than just time binning, to gain a better understanding of disparity through time.
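The sketch below shows the core of the time-slicing idea on toy data: each lineage of a time-calibrated tree spans an age interval and carries a morphospace position (a punctuated model, where one value holds along the whole branch), and disparity is computed as the sum of variances over all lineages crossing a chosen time t. The published method is implemented in R with richer options; this is a minimal Python analogue with invented branches and traits.

```python
# Time-slicing disparity on a toy time-calibrated tree (punctuated model).
import numpy as np

# each lineage: (start_age, end_age, morphospace coordinates); ages in Ma,
# larger = older, so a lineage is "alive" at t when end <= t <= start
lineages = [
    (100.0, 60.0, np.array([0.0, 0.0])),
    (100.0, 40.0, np.array([1.0, 0.5])),
    (60.0, 0.0, np.array([0.5, 2.0])),
    (60.0, 20.0, np.array([-1.0, 1.0])),
    (40.0, 0.0, np.array([2.0, -1.0])),
]

def disparity_at(t):
    """Sum-of-variances disparity over lineages crossing time t."""
    alive = np.array([x for start, end, x in lineages if end <= t <= start])
    if len(alive) < 2:
        return None   # disparity undefined with fewer than two lineages
    return alive.var(axis=0, ddof=1).sum()

for t in (90, 70, 50, 30, 10):
    d = disparity_at(t)
    print(f"t = {t:>3} Ma: disparity = {d if d is None else round(d, 3)}")
```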

9.
Monte‐Carlo simulation methods are commonly used for assessing the performance of statistical tests under finite sample scenarios. They help us ascertain the nominal level for tests with approximate level, e.g. asymptotic tests. Additionally, a simulation can assess the quality of a test on the alternative. The latter can be used to compare new tests and established tests under certain assumptions in order to determine a preferable test given characteristics of the data. The key problem for such investigations is the choice of a goodness criterion. We expand the expected p‐value as considered by Sackrowitz and Samuel‐Cahn (1999) to the context of univariate equivalence tests. This presents an effective tool to evaluate new proposals for equivalence testing because of its independence of the distribution of the test statistic under the null hypothesis. It helps to avoid the often tedious search for the null distribution of test statistics which offer no considerable advantage over already available methods. To demonstrate the usefulness in biometry, a comparison of established equivalence tests with a nonparametric approach is conducted in a simulation study for three distributional assumptions.
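The Monte-Carlo sketch below estimates the expected p-value of a standard TOST equivalence procedure (two one-sided t-tests) at one chosen alternative: simulate many samples, compute each TOST p-value, and average. The margins, effect size, and sample size are arbitrary illustration choices, not values from the study.

```python
# Expected p-value of a TOST equivalence test, estimated by simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
LOW, HIGH = -0.5, 0.5      # equivalence margins (arbitrary)
n, true_mean, sd = 30, 0.1, 1.0
n_sim = 5_000

def tost_p(sample):
    """TOST p-value for LOW < mu < HIGH: max of the two one-sided tests."""
    m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(len(sample))
    df = len(sample) - 1
    p_lower = stats.t.sf((m - LOW) / se, df)    # H0: mu <= LOW
    p_upper = stats.t.cdf((m - HIGH) / se, df)  # H0: mu >= HIGH
    return max(p_lower, p_upper)

p_values = [tost_p(rng.normal(true_mean, sd, n)) for _ in range(n_sim)]
print(f"expected p-value at mu={true_mean}: {np.mean(p_values):.3f}")
```

A smaller expected p-value at a given alternative indicates the better-performing test, which is what makes the criterion useful for comparing competing equivalence procedures.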

11.
Four groups of organophosphonate derivative enantiomers were separated on an N‐(3,5‐dinitrobenzoyl)‐S‐leucine chiral stationary phase. The three‐dimensional structures of the complexes between the individual enantiomers and the chiral stationary phase have been studied using molecular modeling and molecular dynamics simulations. Detailed results regarding the conformation, auto‐docking, and thermodynamic estimation are presented. The elution order of the enantiomers could be determined from the calculated energies, and the chiral discrimination was thus predicted from the computational results. Chirality 25:101–106, 2013. © 2012 Wiley Periodicals, Inc.

12.
The essential‐oil profile of a Calamintha species, wild‐growing in the urban settings of the city of Niš (South Serbia) and botanically tentatively identified as C. vardarensis (an endemic species native to FYR Macedonia and East Serbia), has been statistically compared (by multivariate statistical analyses) with those of other Calamintha species, including two previously investigated C. vardarensis populations, as a means of corroborating the surprising occurrence of this Calamintha population outside of its natural distributional range. Agglomerative hierarchical clustering reveals a close link between C. vardarensis from Niš (with neo‐menthol (40.0%), menthone (21.8%), and pulegone (27.2%) as its major oil contributors) and C. vardarensis from FYR Macedonia.

13.
Recently, although advances have been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log‐normal model (Aitchison and Ho, 1989) cannot be used to fit multivariate count data with excess zero‐vectors; (ii) the multivariate zero‐inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero‐truncated/deflated count data and is difficult to apply to high‐dimensional cases; (iii) the Type I multivariate zero‐adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure, in which the correlations between components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods.
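To fix ideas, the sketch below implements only the univariate ZAP building block: the zero probability phi is a free parameter (allowing inflation or deflation relative to the Poisson), and positive counts follow a zero-truncated Poisson. The paper's multivariate construction and its flexible correlation structure are not reproduced here.

```python
# Univariate zero-adjusted Poisson (ZAP): P(X=0) = phi, and positive counts
# from a zero-truncated Poisson(lam). phi above/below the Poisson zero
# probability gives zero-inflation/deflation respectively.
import math

def zap_pmf(k, phi, lam):
    """P(X = k) under the ZAP distribution."""
    if k == 0:
        return phi
    poisson_k = math.exp(-lam) * lam**k / math.factorial(k)
    return (1.0 - phi) * poisson_k / (1.0 - math.exp(-lam))

lam = 2.0
plain_zero = math.exp(-lam)   # ordinary Poisson zero probability (~0.135)
for phi, label in [(0.4, "zero-inflated"), (0.05, "zero-deflated")]:
    probs = [zap_pmf(k, phi, lam) for k in range(10)]
    print(f"{label} (phi={phi}): P(0)={probs[0]:.3f} vs Poisson {plain_zero:.3f},"
          f" partial sum={sum(probs):.4f}")
```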

14.
Count data sets are traditionally analyzed using the ordinary Poisson distribution. However, such a model has limited applicability, as it can be too restrictive to handle specific data structures. In this case, the need arises for alternative models that accommodate, for example, (a) zero‐modification (inflation or deflation of the frequency of zeros), (b) overdispersion, and (c) individual heterogeneity arising from clustering or repeated (correlated) measurements made on the same subject. Cases (a)–(b) and (b)–(c) are often treated together in the statistical literature with several practical applications, but models supporting all at once are less common. Hence, this paper's primary goal was to jointly address these issues by deriving a mixed‐effects regression model based on the hurdle version of the Poisson–Lindley distribution. In this framework, the zero‐modification is incorporated by assuming that a binary probability model determines which outcomes are zero‐valued, and a zero‐truncated process is responsible for generating positive observations. Approximate posterior inferences for the model parameters were obtained from a fully Bayesian approach based on the Adaptive Metropolis algorithm. Intensive Monte Carlo simulation studies were performed to assess the empirical properties of the Bayesian estimators. The proposed model was considered for the analysis of a real data set, and its competitiveness regarding some well‐established mixed‐effects models for count data was evaluated. A sensitivity analysis to detect observations that may impact parameter estimates was performed based on standard divergence measures. The Bayesian p‐value and the randomized quantile residuals were considered for model diagnostics.
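The sketch below writes out the hurdle likelihood in its simplest fixed-effects form: a Bernoulli hurdle governs the zeros, and positives come from a zero-truncated Poisson-Lindley distribution with pmf theta^2 (theta + 2 + k) / (theta + 1)^(k + 3). The paper's random effects and Adaptive Metropolis sampler are omitted; the grid evaluation at the end merely probes the likelihood surface on toy data.

```python
# Hurdle Poisson-Lindley log-likelihood (fixed effects only, toy data).
import math

def pl_pmf(k, theta):
    """Poisson-Lindley pmf: theta^2 (theta + 2 + k) / (theta + 1)^(k + 3)."""
    return theta**2 * (theta + 2 + k) / (theta + 1) ** (k + 3)

def hurdle_loglik(counts, pi0, theta):
    """pi0 = P(zero); positives from the zero-truncated Poisson-Lindley."""
    p0 = pl_pmf(0, theta)
    ll = 0.0
    for k in counts:
        if k == 0:
            ll += math.log(pi0)
        else:
            ll += math.log(1 - pi0) + math.log(pl_pmf(k, theta) / (1 - p0))
    return ll

data = [0, 0, 0, 1, 1, 2, 3, 5, 0, 2]   # invented counts
# crude grid evaluation instead of MCMC, just to show the surface
best = max(((hurdle_loglik(data, pi0, th), pi0, th)
            for pi0 in (0.2, 0.4, 0.6)
            for th in (0.5, 1.0, 2.0)))
print(f"best grid point: pi0={best[1]}, theta={best[2]}, loglik={best[0]:.2f}")
```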

15.
Analysis by GC and GC/MS of the essential‐oil samples obtained from dry above‐ground parts of Hypericum rumeliacum Boiss. (collected in the flowering and fruit‐forming vegetative stages) allowed the identification of 212 components in total, comprising ≥97.8% of the total oil composition. In the flowering phase, the major identified volatile compounds were undecane (6.6%), dodecanal (10.8%), and germacrene D (14.1%), whereas α‐pinene (7.3%), β‐pinene (26.1%), (Z)‐β‐ocimene (8.5%), (E)‐β‐ocimene (10.2%), bicyclogermacrene (7.7%), and germacrene D (15.1%) were dominant in the fruit‐forming phase. Some of the minor constituents found in the studied oil samples (e.g., a homologous series of four 6‐alkyl‐5,6‐dihydro‐2H‐pyran‐2‐ones, i.e., massoia dodeca‐, trideca‐, tetradeca‐, and hexadecalactones) have a restricted occurrence in the Plant Kingdom, and their presence in Hypericum L. spp. has not been previously reported. The chemical compositions of an additional 34 oils from selected Hypericum taxa, also studied herein, were compared using multivariate statistical analysis (agglomerative hierarchical cluster analysis and principal component analysis). The results of these statistical analyses could not be used to either confirm or discard the existence of different H. rumeliacum chemotypes. However, they implied that the volatile profile of this plant species is determined by the stage of its phenological development.

16.
Bioprocesses for therapeutic protein production typically require significant resources to be invested in their development. Underlying these efforts are analytical methods, which must be fit for the purpose of monitoring product and contaminants in the process. It is highly desirable, especially in early‐phase development when material and established analytical methods are limiting, to be able to determine what happens to the product and impurities at each process step with small sample volumes in a rapid and readily performed manner. This study evaluates the utility of surface‐enhanced laser desorption/ionization mass spectrometry (SELDI‐MS), known for its rapid analysis and minimal sample volumes, as an analytical process development tool. In‐process samples from an E. coli process for apolipoprotein A‐IM (ApoA‐IM) manufacture were used, along with traditional analytical methods such as HPLC to verify the SELDI‐MS results. ApoA‐IM is a naturally occurring variant of ApoA‐I that appears to confer protection against cardiovascular disease on those who carry the mutated gene. The results show that, unlike many other analytical methods, SELDI‐MS can handle early process samples that contain complex mixtures of biological molecules with limited sample pretreatment and can thereby provide meaningful process‐relevant information. At present, this technique seems most suited to early‐phase development, particularly when methods for traditional analytical approaches are still being established. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2010

17.
Genome‐scale flux balance analysis (FBA) is a powerful systems biology tool for characterizing intracellular reaction fluxes during cell cultures. FBA estimates intracellular reaction rates by optimizing an objective function, subject to the constraints of a metabolic model and media uptake/excretion rates. A dynamic extension to FBA, dynamic flux balance analysis (DFBA), can calculate intracellular reaction fluxes as they change during cell cultures. In a previous study (Read et al., Biotechnol. Prog. 2013;29:745–753), a series of informed amino acid supplementation experiments was performed on twelve parallel murine hybridoma cell cultures, and these data were leveraged for further analysis. In order to understand the effects of media changes on the model murine hybridoma cell line, a systems biology approach is applied in the current study. Dynamic flux balance analysis was performed using a genome‐scale mouse metabolic model, and multivariate data analysis was used for interpretation. The calculated reaction fluxes were examined using partial least squares and partial least squares discriminant analysis. The results indicate that media supplementation increases product yield because it raises nutrient levels, extending the growth phase, and the increased cell density allows for greater culture performance. At the same time, the directed supplementation does not change the overall metabolism of the cells. This supports the conclusion that product quality, as measured by glycoform assays, remains unchanged because the metabolism remains in a similar state. Additionally, the DFBA shows that the metabolic state varies more at the beginning of the culture but less by the middle of the growth phase, possibly due to stress on the cells during inoculation. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1163–1173, 2016
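For orientation, the sketch below solves a toy flux balance problem with linear programming: maximize a growth flux subject to steady-state mass balance S v = 0 and flux bounds. A dynamic FBA would re-solve this program at each time step with uptake bounds updated from the simulated medium. The three-reaction network here is a fiction, not the genome-scale mouse model used in the study.

```python
# Toy FBA: maximise growth flux subject to S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# metabolites x reactions: uptake -> A, A -> biomass (growth), A -> byproduct
#                 v_up  v_grow  v_by
S = np.array([[ 1.0,  -1.0,  -1.0]])   # mass balance on metabolite A
bounds = [(0, 10.0),   # uptake limited by the medium (hypothetical units)
          (0, None),   # growth reaction, unbounded above
          (0, 2.0)]    # capped byproduct secretion

# linprog minimises, so negate the growth flux to maximise it
res = linprog(c=[0, -1.0, 0], A_eq=S, b_eq=[0.0], bounds=bounds,
              method="highs")
v_up, v_grow, v_by = res.x
print(f"optimal fluxes: uptake={v_up:.1f}, growth={v_grow:.1f}, "
      f"byproduct={v_by:.1f}")
```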

18.
Quantification of three‐dimensional (3D) refractive index (RI) with sub‐cellular resolution is achieved by digital holographic microtomography (DHμT) using quantitative phase images measured at multiple illumination angles. The DHμT system achieves sensitive and fast phase measurements based on an iterative phase extraction algorithm and asynchronous phase‐shifting interferometry, without any phase monitoring or active control mechanism. A reconstruction algorithm, optical diffraction tomography with projection onto convex sets and total variation minimization, is implemented to substantially reduce the number of angular scattered fields needed for reconstruction without sacrificing the accuracy and quality of the reconstructed 3D RI distribution. A tomogram of a living CA9‐22 cell is presented to demonstrate the performance of the method. Further, a statistical analysis of the average RI of the nucleoli, the nucleus excluding the nucleoli, and the cytoplasm of twenty CA9‐22 cells is performed. (© 2013 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
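The loop below is a schematic 2-D analogue of such a POCS-with-TV reconstruction: alternate between re-imposing the measured Fourier samples (the convex data-consistency projection) and a total-variation denoising step with a positivity constraint, starting from a zero-filled estimate. A Shepp-Logan phantom stands in for the 3-D refractive-index volume, and the sampling mask and TV weight are arbitrary choices, not the paper's parameters.

```python
# Schematic POCS + TV loop for reconstruction from sparse Fourier samples.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_tv_chambolle
from skimage.transform import resize

truth = resize(shepp_logan_phantom(), (128, 128))
rng = np.random.default_rng(0)
mask = rng.random(truth.shape) < 0.25          # keep only 25% of frequencies
measured = np.fft.fft2(truth) * mask

recon = np.zeros_like(truth)                   # zero-filled starting point
for it in range(30):
    # (i) project onto the data-consistency set: restore measured frequencies
    spec = np.fft.fft2(recon)
    spec[mask] = measured[mask]
    recon = np.fft.ifft2(spec).real
    # (ii) TV regularisation + positivity of the refractive-index contrast
    recon = denoise_tv_chambolle(recon, weight=0.02)
    recon = np.clip(recon, 0, None)

err = np.linalg.norm(recon - truth) / np.linalg.norm(truth)
print(f"relative error after 30 POCS/TV iterations: {err:.3f}")
```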

20.
Aim To test whether it is possible to establish a common biogeographical regionalization for plants and vertebrates in sub‐Saharan Africa (the Afrotropical Region), using objective multivariate methods.
Location Sub‐Saharan Africa (Afrotropical Region).
Methods We used 1° grid cell resolution databases for birds, mammals, amphibians and snakes (4142 vertebrate species) and c. 13% of the plants (5881 species) from the Afrotropical Region. These databases were analysed using cluster analysis techniques to define biogeographical regions. A β(sim) dissimilarity matrix was subjected to a hierarchical classification using the unweighted pair‐group method with arithmetic averages (UPGMA). The five group‐specific biogeographical regionalizations were compared against a regionalization developed from a combined database, and a regionalization that is maximally congruent with the five group‐specific datasets was determined using a consensus classification. The regionalizations were interpreted against measures of spatial turnover in richness and composition for the five datasets as well as the combined dataset.
Results We demonstrate the existence of seven well‐defined and consistent biogeographical regions in sub‐Saharan Africa. These regionalizations are statistically defined and robust between groups, with minor taxon‐specific biogeographical variation. The proposed biogeographical regions are: Congolian, Zambezian, Southern African, Sudanian, Somalian, Ethiopian and Saharan. East Africa, the West African coast, and the transitions between the Congolian, Sudanian and Zambezian regions are unassigned. The Cape area in South Africa, Afromontane areas and the coastal region of East Africa do not emerge as distinct regions but are characterized by high neighbourhood heterogeneity, rapid turnover of species and high levels of narrow endemism.
Main conclusions Species distribution data and modern cluster analysis techniques can be used to define biogeographical regions in Africa that reflect the patterns found in both vertebrates and plants. The consensus of the regionalizations between different taxonomic groups is high. These regions are broadly similar to those proposed using expert opinion approaches. Some previously proposed transitional zones are not recognized in this classification.
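The sketch below reproduces the analysis skeleton on invented data: compute a β(sim) dissimilarity matrix from presence/absence records and cluster the grid cells with UPGMA (scipy's "average" linkage). The real study used 1° grid cells and thousands of species; the five-cell, six-species matrix here is purely illustrative.

```python
# beta_sim dissimilarity + UPGMA clustering of grid cells (toy data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# rows = grid cells, columns = species (True = present); invented records
X = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
], dtype=bool)

def beta_sim(u, v):
    """beta_sim = min(b, c) / (min(b, c) + a), with a = shared species."""
    a = np.sum(u & v)
    b, c = np.sum(u & ~v), np.sum(~u & v)
    return min(b, c) / (min(b, c) + a) if (min(b, c) + a) else 0.0

n = len(X)
D = np.array([[beta_sim(X[i], X[j]) for j in range(n)] for i in range(n)])

tree = linkage(squareform(D), method="average")   # UPGMA
regions = fcluster(tree, t=2, criterion="maxclust")
print("cluster assignment per grid cell:", regions)
```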

