Similar articles
 20 similar articles found (search time: 15 ms)
1.
《Trends in biotechnology》2023,41(8):1013-1026
The robustness of bioprocesses is becoming increasingly important. The main driving forces behind this development are increasing demands on product purity as well as economic considerations. In general, bioprocesses exhibit extremely high complexity and variability. Biological systems often have a much higher intrinsic variability than chemical processes, which makes the development and characterization of robust processes a tedious task. To predict and control robustness, a clear understanding of the interactions between input and output variables is necessary. Robust bioprocesses can be realized, for example, by using advanced control strategies for the different unit operations. In this review, we discuss the different biological, technical, and mathematical tools for the analysis and control of bioprocess robustness.

2.
Parametric-linkage analysis applied to large pedigrees with many affected individuals has helped in the identification of highly penetrant genes; but, for diseases lacking a clear Mendelian inheritance pattern or caused by several genes of low to moderate penetrance, a more robust strategy is nonparametric analysis applied to small sets of affected relatives, such as affected sib pairs. Here we show that the robustness of affected-sib-pair tests is related to the shape of the constraint set for the sibs' identity-by-descent (IBD) probabilities. We also derive a set of constraints for the IBD probabilities of affected sib triples and use common features of the shapes of the two constraint sets to introduce new nonparametric tests (called "minmax" tests) that are more robust than those in current use. Asymptotic-power computations support the robustness of the proposed minmax tests.
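For orientation, the affected-sib-pair constraint set referred to above is conventionally the "possible triangle" on the identity-by-descent sharing probabilities; the following is a sketch from standard nonparametric-linkage theory, not reproduced from the paper (which derives the analogous, more involved set for sib triples):

% Possible-triangle constraints on the probabilities (z_0, z_1, z_2) that an
% affected sib pair shares 0, 1 or 2 alleles identical by descent.
\begin{align}
  z_0 + z_1 + z_2 &= 1, \qquad z_i \ge 0, \\
  z_1 &\le \tfrac{1}{2}, \\
  2\,z_0 &\le z_1 .
\end{align}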

3.
Evolution of adaptive phenotypic flexibility requires a system that can dynamically restore and update a functional phenotype in response to environmental change. The architecture of such a system evolves under the conflicting demands of versatility and robustness, and resolution of these demands should be particularly evident in organisms that require external inputs for reiterative trait production within a generation, such as in the metabolic networks that underlie the yearly acquisition of diet-dependent coloration in birds. Here, we show that a key structural feature of carotenoid networks, the redundancy of biochemical pathways, enables these networks to translate variable environmental inputs into consistent phenotypic outcomes. We closely followed life-long changes in the structure and utilization of metabolic networks in a large cohort of free-living birds and found that greater individual experience with dietary change between molts leads to wider occupancy of the metabolic network and progressive accumulation of redundant pathways in the functionally active network. This generated a regime of emergent buffering whereby greater dietary experience was mechanistically linked to greater robustness of the resulting traits and an increasing ability to retain and implement previous adaptive solutions. Thus, experience-related buffering links evolvability and robustness in carotenoid-metabolizing networks, and we argue that this mechanistic principle facilitates the evolution of phenotypic flexibility.
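As a toy illustration of what "redundancy of biochemical pathways" means operationally, the sketch below counts the distinct routes from a dietary input to a deposited pigment in a small invented network; compound names and topology are hypothetical, not taken from the study.

# Pathway redundancy as the number of distinct routes from a dietary
# carotenoid to a deposited pigment in a toy directed metabolic network.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("dietary_A", "intermediate_1"),
    ("dietary_A", "intermediate_2"),
    ("intermediate_1", "pigment_X"),
    ("intermediate_2", "intermediate_3"),
    ("intermediate_3", "pigment_X"),
])

routes = list(nx.all_simple_paths(G, source="dietary_A", target="pigment_X"))
print(len(routes), "redundant pathways to pigment_X:")
for route in routes:
    print(" -> ".join(route))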

4.
Finding optimal three-dimensional molecular configurations based on a limited amount of experimental and/or theoretical data requires efficient nonlinear optimization algorithms. Optimization methods must be able to find atomic configurations that are close to the absolute, or global, minimum error and also satisfy known physical constraints such as minimum separation distances between atoms (based on van der Waals interactions). The most difficult obstacles in these types of problems are that 1) using a limited amount of input data leads to many possible local optima and 2) introducing physical constraints, such as minimum separation distances, helps to limit the search space but often makes convergence to a global minimum more difficult. We introduce a constrained global optimization algorithm that is robust and efficient in yielding near-optimal three-dimensional configurations that are guaranteed to satisfy known separation constraints. The algorithm uses an atom-based approach that reduces the dimensionality and allows for tractable enforcement of constraints while maintaining good global convergence properties. We evaluate the new optimization algorithm using synthetic data from the yeast phenylalanine tRNA and several proteins, all with known crystal structures taken from the Protein Data Bank. We compare the results to commonly applied optimization methods, such as distance geometry, simulated annealing, continuation, and smoothing. We show that, compared to other optimization approaches, our algorithm is able to combine sparse input data with physical constraints in an efficient manner to yield structures with lower root mean squared deviation.
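To make the problem setup concrete, here is a minimal sketch that fits 3D atom positions to a sparse set of pairwise distances while penalizing violations of a minimum separation distance. It is a generic penalty-based stand-in with invented data and parameters; the algorithm described above, by contrast, guarantees that the separation constraints are satisfied.

# Fit atom positions to sparse distance data with a soft minimum-separation
# penalty, using a general-purpose local optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_atoms, d_min = 6, 1.5                            # hypothetical sizes/limits
true = rng.uniform(0.0, 5.0, size=(n_atoms, 3))    # synthetic "structure"

# Sparse input data: distances for a handful of atom pairs only.
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
targets = {p: np.linalg.norm(true[p[0]] - true[p[1]]) for p in pairs}

def objective(x, w_penalty=100.0):
    pos = x.reshape(n_atoms, 3)
    err = sum((np.linalg.norm(pos[i] - pos[j]) - d) ** 2
              for (i, j), d in targets.items())
    pen = 0.0                                      # min-separation violations
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            r = np.linalg.norm(pos[i] - pos[j])
            pen += max(0.0, d_min - r) ** 2
    return err + w_penalty * pen

x0 = rng.uniform(0.0, 5.0, size=3 * n_atoms)       # random starting configuration
result = minimize(objective, x0, method="L-BFGS-B")
print("final objective value:", result.fun)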

5.
The genetic code is known to have a high level of error robustness and has been shown to be very error robust compared to randomly selected codes, but to be significantly less error robust than a certain code found by a heuristic algorithm. We formulate this optimization problem as a Quadratic Assignment Problem and use this to formally verify that the code found by the heuristic algorithm is the global optimum. We also argue that it is strongly misleading to compare the genetic code only with codes sampled from the fixed block model, because the real code space is orders of magnitude larger. We thus enlarge the space from which random codes can be sampled from approximately 2.433 × 10^18 codes to approximately 5.908 × 10^45 codes. We do this by leaving the fixed block model, and using the wobble rules to formulate the characteristics acceptable for a genetic code. By relaxing more constraints, three larger spaces are also constructed. Using a modified error function, the genetic code is found to be more error robust compared to a background of randomly generated codes with increasing space size. We point out that these results do not necessarily imply that the code was optimized during evolution for error minimization, but that other mechanisms could be the reason for this error robustness.
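The error functions used in this literature typically measure the mean squared change in an amino-acid property over all single-nucleotide substitutions; a schematic version (the paper itself uses a modified error function) is:

% a(c)  = amino acid assigned to codon c by the code,
% p(a)  = a physicochemical property of amino acid a (e.g. polarity),
% N1(c) = codons reachable from c by one nucleotide substitution.
\begin{equation}
  \mathrm{MS}(\text{code}) =
  \frac{\sum_{c}\sum_{c' \in N_1(c)} \bigl(p(a(c)) - p(a(c'))\bigr)^{2}}
       {\sum_{c} \lvert N_1(c) \rvert}
\end{equation}

A code is then called error robust if its MS value lies in the low tail of the MS distribution over the sampled code space.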

6.
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high-dimensional images. A few such methods do exist, but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four-dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at the positions of the initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement through automated estimation of the major parameters, which brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended to nuclei volume segmentation. The same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in 4D space. Experimental evaluations with five image sequences (each having 271 sequential 3D images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from one embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (about 98% mean F-measure) irrespective of large variations in filter parameters and noise levels.
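The temporal feedback idea can be sketched with an off-the-shelf multiscale blob detector; the following is a schematic stand-in (scikit-image Laplacian-of-Gaussian detection, with invented default diameters), not the authors' adaptive filter.

# Estimate nuclei diameters in one frame and reuse them as the scale range
# for multiscale detection in the next frame.
import numpy as np
from skimage.feature import blob_log

def detect_nuclei(volume, d_min, d_max):
    """Multiscale LoG detection on a 3D volume; the sigma range is derived
    from the expected minimum/maximum nucleus diameters."""
    s_min, s_max = d_min / (2 * np.sqrt(3)), d_max / (2 * np.sqrt(3))
    blobs = blob_log(volume, min_sigma=s_min, max_sigma=s_max,
                     num_sigma=5, threshold=0.05)
    centroids = blobs[:, :3]
    diameters = 2 * np.sqrt(3) * blobs[:, 3]       # sigma -> approx. diameter
    return centroids, diameters

def process_sequence(frames, d_min=4.0, d_max=12.0):
    """frames: iterable of 3D arrays; d_min/d_max are initial guesses that
    are replaced by estimates fed back from the previous frame."""
    detections = []
    for volume in frames:
        centroids, diameters = detect_nuclei(volume, d_min, d_max)
        if diameters.size:                         # temporal feedback step
            d_min, d_max = diameters.min(), diameters.max()
        detections.append(centroids)
    return detections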

7.
Biological robustness
Robustness is a ubiquitously observed property of biological systems. It is considered to be a fundamental feature of complex evolvable systems. It is attained by several underlying principles that are universal to both biological organisms and sophisticated engineering systems. Robustness facilitates evolvability and robust traits are often selected by evolution. Such a mutually beneficial process is made possible by specific architectural features observed in robust systems. But there are trade-offs between robustness, fragility, performance and resource demands, which explain system behaviour, including the patterns of failure. Insights into inherent properties of robust systems will provide us with a better understanding of complex diseases and a guiding principle for therapy design.

8.
9.
10.
The confinement method is a robust and conceptually simple free energy simulation method that allows the calculation of conformational free energy differences between highly dissimilar states. Application of the method to explicitly solvated systems requires a multi-stage simulation protocol for the calculation of desolvation free energies. Here we show that these desolvation free energies can be readily obtained from an implicit treatment, which is simpler and less costly. The accuracy and robustness of this protocol were shown by calculating conformational free energy differences for a series of explicitly solvated test systems. Given the accuracy and ease with which these free energy differences were obtained, the confinement method is promising for the treatment of conformational changes in large and complex systems.
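For context, the confinement cycle is usually written along the following lines (a generic sketch under the assumption of harmonic restraints of strength k toward reference structures; not the paper's exact protocol or notation):

% Conformational free energy difference between end states A and B obtained
% by confining each state to its reference structure.
\begin{align}
  \Delta G_{A \to B} &\approx \Delta G^{\mathrm{conf}}_{A}
     - \Delta G^{\mathrm{conf}}_{B} + \Delta G^{\mathrm{harm}}_{A \to B}, \\
  \Delta G^{\mathrm{conf}}_{S} &= \int_{0}^{k_{\max}}
     \Bigl\langle \tfrac{1}{2}\textstyle\sum_i
     \lVert \mathbf{x}_i - \mathbf{x}_i^{\mathrm{ref},S} \rVert^{2}
     \Bigr\rangle_{k}\,\mathrm{d}k,
\end{align}
% where the harmonic term between the two fully confined states is
% evaluated analytically (e.g. from normal modes).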

11.
Mutational robustness is the degree to which a phenotype, such as fitness, is resistant to mutational perturbations. Since most of these perturbations will tend to reduce fitness, robustness provides an immediate benefit for the mutated individual. However, robust systems decay due to the accumulation of deleterious mutations that would otherwise have been cleared by selection. This decay has received very little theoretical attention. At equilibrium, a population or asexual lineage is expected to have a mutation load that is invariant with respect to the selection coefficient of deleterious alleles, so the benefit of robustness (at the level of the population or asexual lineage) is temporary. However, previous work has shown that robustness can be favoured when robustness loci segregate independently of the mutating loci they act upon. We examine a simple two-locus model that allows for intermediate rates of recombination and inbreeding to show that increasing the effective recombination rate allows for the evolution of greater mutational robustness.
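The equilibrium invariance alluded to above ("a mutation load that is invariant with respect to the selection coefficient") is the classical Haldane mutation-load principle; schematically, for a genomic deleterious mutation rate U and multiplicative selection:

% At mutation-selection balance the load depends on U but not on the
% selection coefficient s of individual deleterious mutations.
\begin{equation}
  \bar{w}_{\mathrm{eq}} = e^{-U}, \qquad
  L = 1 - \bar{w}_{\mathrm{eq}} = 1 - e^{-U}.
\end{equation}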

12.
Because of the low number of observation points per animal attainable in insect movement studies, linear parameters are frequently used to quantify the data. These linear parameters, as well as several home range estimators, are evaluated by means of an empirical study of the bush cricket Phaneroptera falcata (Insecta: Ensifera) and a Monte Carlo simulation model. The examination of differences between complete and artificially reduced data sets, as well as between the "real" (i.e. simulated) data set and the data recorded by a simulated observer, allows us to quantify the robustness and bias of the evaluated parameters. We show that nearly all tested methods are strongly influenced by the resight number of the investigated individuals. Hence, those parameters should be used cautiously in studies with few resights, i.e. in insect studies as well as studies on vertebrates; results of earlier studies should be reconsidered, and some comparisons between different studies are questionable. Estimates of home range using a 95% ellipse with fewer than five locations are extremely unreliable (overestimation), while the minimum convex polygon leads to a clear underestimation; the robustness of both parameters is low. Among the home range parameters, the kernel method is the most robust, but it leads to an overestimating bias. The harmonic mean method is the only home range parameter whose results are comparable to the area actually used; however, it requires a minimum of 11 observation points per individual. Among the linear parameters, the mean daily movement and the total recorded movement are inappropriate for statistical analyses because of their high sensitivity to the resight number. The maximum activity radius and the dispersal range are much more robust: they are not as sensitive to sample size and deviate little from the "real" values of the parameters, although this bias is statistically significant. The mean activity radius of an individual is the most useful linear parameter; this measure is very robust down to a sample size of four individual locations and compares well with the real parameter values.
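For reference, the linear parameters discussed above can be computed as in the following sketch (a hypothetical helper, not the study's code), given the resight coordinates of one individual:

# Common linear movement parameters and the minimum convex polygon (MCP)
# area from chronologically ordered resight locations.
import numpy as np
from scipy.spatial import ConvexHull

def movement_parameters(xy):
    """xy: (n, 2) array of resight coordinates in chronological order."""
    xy = np.asarray(xy, dtype=float)
    centre = xy.mean(axis=0)                       # arithmetic activity centre
    radii = np.linalg.norm(xy - centre, axis=1)
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return {
        "mean_activity_radius": radii.mean(),      # most robust linear measure
        "max_activity_radius": radii.max(),
        "total_recorded_movement": steps.sum(),    # sensitive to resight number
        "mcp_area": ConvexHull(xy).volume,         # in 2D, .volume is the area
    }

# Example with fabricated coordinates (metres):
print(movement_parameters([(0, 0), (3, 1), (5, 4), (2, 6), (-1, 3)]))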

13.
Han B  Wang J 《Biophysical journal》2007,92(11):3755-3763
We study the origin of the robustness of the yeast cell cycle cellular network by uncovering its underlying energy landscape. This is realized from the information in the steady-state probabilities obtained by solving a discrete set of kinetic master equations for the network. We discovered that the potential landscape of the yeast cell cycle network is funneled toward the global minimum, the G1 state. The ratio of the energy gap between the G1 state and the average of the landscape to the roughness of the landscape, termed the robustness ratio (RR), provides a quantitative measure of the robustness and stability of the network. The funneled landscape is quite robust against random perturbations of the inherent wiring or connections of the network. There exists a global phase transition between a more sensitive-response (less self-degradation) phase, which leads to an underlying funneled global landscape with large RR, and an insensitive-response (more self-degradation) phase, which leads to a shallower underlying landscape of the network with small RR. Furthermore, we show that the more robust landscape also leads to a lower dissipation cost of the network. Least dissipation and a robust landscape might be a realization of the Darwinian principle of natural selection at the cellular network level, and may provide an optimal criterion for network wiring connections and design.
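Schematically, the landscape quantities described above can be written as follows (a sketch consistent with the abstract; here the roughness is taken as the spread of the landscape, which may differ from the paper's exact definition):

% Potential landscape from the steady-state probability of network state x,
% and the robustness ratio comparing the gap between the average landscape
% level and the G1 global minimum with the roughness (spread).
\begin{equation}
  U(\mathbf{x}) = -\ln P_{\mathrm{ss}}(\mathbf{x}), \qquad
  \mathrm{RR} = \frac{\delta E}{\Delta E}
              = \frac{\langle U \rangle - U_{G1}}
                     {\sqrt{\langle U^{2} \rangle - \langle U \rangle^{2}}}.
\end{equation}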

14.
To contribute towards designing more cost-efficient, robust and flexible downstream processes for the manufacture of monoclonal antibodies (mAbs), a framework consisting of an evolutionary multiobjective optimization algorithm (EMOA) linked to a biomanufacturing process economics model is presented. The EMOA is tuned to discover sequences of chromatographic purification steps and column sizing strategies that provide the best trade-off with respect to multiple objectives, including cost of goods per gram (COG/g), robustness in COG/g, and impurity removal capabilities. Additional complexities accounted for by the framework include uncertainties and constraints. The framework is validated on industrially relevant case studies varying in upstream and downstream processing train ratios, annual demands, and impurity loads. Results obtained by the framework are presented using a range of visualization tools, and indicate that the performance impact of uncertainty is a function of both the level of uncertainty and the objective being optimized, and that uncertainty can cause otherwise optimal processes to become suboptimal. The optimal purification processes discovered outperform the industrial standard, with, for example, savings in COG/g of up to 10%. Guidelines are provided for choosing an optimal purification process as a function of the objectives being optimized and the impurity levels present.
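The trade-off analysis can be illustrated with a generic non-dominated (Pareto) filter over candidate purification sequences; the numbers below are invented for illustration and are not outputs of the framework's economics model.

# Keep the non-dominated candidates when minimizing both cost of goods
# (COG/g) and the residual impurity level.
def pareto_front(candidates):
    """candidates: list of (name, cog_per_g, impurity); lower is better."""
    front = []
    for name, cog, imp in candidates:
        dominated = any(c2 <= cog and i2 <= imp and (c2, i2) != (cog, imp)
                        for _, c2, i2 in candidates)
        if not dominated:
            front.append((name, cog, imp))
    return front

processes = [                       # illustrative chromatography sequences
    ("ProtA-CEX-AEX", 82.0, 0.02),
    ("ProtA-AEX",     74.0, 0.09),
    ("ProtA-MMC-AEX", 88.0, 0.01),
    ("CEX-AEX-HIC",   95.0, 0.05),
]
print(pareto_front(processes))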

15.
The robustness of large-scale critical infrastructures, which can be modeled as complex networks, is of great significance. One of the most important means of enhancing robustness is to optimize the allocation of resources. Traditional allocation of resources is mainly based on topology information, which is neither realistic nor systematic. In this paper, we try to build a framework for searching for the most favorable pattern of node capacity allocation to reduce vulnerability to cascading failures at a low cost. A nonlinear, multi-objective optimization model is proposed and tackled using a particle swarm optimization (PSO) algorithm. It is found that the network becomes more robust and economical when less capacity is left on the heavily loaded nodes, and that the optimized network is better at resisting noise. Our work is helpful for designing robust and economical networks.
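A generic form of the allocation problem described above is the bi-objective program below (schematic only; the paper's nonlinear model and PSO implementation are not reproduced here):

% Minimize total invested capacity (cost) and the vulnerability to cascading
% failures, subject to each node's capacity covering its initial load.
\begin{equation}
  \min_{C_1,\dots,C_N}\;
  \Bigl( \textstyle\sum_{i=1}^{N} C_i,\; V(C_1,\dots,C_N) \Bigr)
  \quad \text{s.t.} \quad C_i \ge L_i,\; i = 1,\dots,N,
\end{equation}
% where V measures the load or connectivity lost after a triggered cascade.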

16.
Conventional microbiology methods used to monitor microbial biofuel production are based on off-line analyses, which are unfortunately insufficient for bioprocess optimization. Real-time process control strategies, such as flow cytometry (FC), can be used to monitor bioprocess development (at-line) by providing single-cell information that improves process model formulation and validation. This paper reviews the current uses and potential applications of FC in biodiesel, bioethanol, biomethane, biohydrogen and fuel cell processes. By highlighting the inherent accuracy and robustness of the technique across a range of biofuel processing parameters, the review shows how more robust monitoring and control may be implemented to enhance process efficiency.

17.
Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce a peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment over a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common-peak alignment into a closed-loop framework, which finds the optimal decomposition using both peak intensity and common-peak information. The common-peak information is derived, and iteratively refined, from density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a complete MS analysis system. Experiments show that our peak detection method resists spectrum variations better and provides higher sensitivity and lower false detection rates than conventional methods. The benefits of our peak-tree-based system for MS disease analysis are also demonstrated on real SELDI data.
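A simple multiscale baseline (not the paper's peak-tree method) already conveys the idea of judging candidate peaks over a range of scales; the sketch below uses the continuous-wavelet-transform peak detector from SciPy on a synthetic spectrum.

# Detect peaks in a noisy synthetic MS spectrum over a range of widths.
import numpy as np
from scipy.signal import find_peaks_cwt

mz = np.linspace(1000, 1100, 2000)
spectrum = (50 * np.exp(-((mz - 1020) / 0.8) ** 2)
            + 30 * np.exp(-((mz - 1050) / 1.5) ** 2)
            + 20 * np.exp(-((mz - 1080) / 0.6) ** 2)
            + np.random.default_rng(1).normal(0, 0.5, mz.size))

peak_idx = find_peaks_cwt(spectrum, widths=np.arange(5, 60))
print("detected m/z peaks:", np.round(mz[peak_idx], 1))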

18.
Biological gene networks appear to be dynamically robust to mutation, stochasticity, and changes in the environment, and also appear to be sparsely connected. Studies with computational models, however, have suggested that denser gene networks evolve to be more dynamically robust than sparser networks. We resolve this discrepancy by showing that misassumptions about how to measure robustness in artificial networks have inadvertently discounted the costs of network complexity. We show that, when the costs of complexity are taken into account, robustness implies a parsimonious network structure that is sparsely connected and not unnecessarily complex, and that selection will favor sparse networks when network topology is free to evolve. Because a robust system of heredity is necessary for the adaptive evolution of complex phenotypes, the maintenance of frugal network complexity is likely a crucial design constraint that underlies biological organization.

19.
Stable isotope-assisted metabolic flux analysis (MFA) is a powerful method to estimate carbon flow and partitioning in metabolic networks. At its core, MFA is a parameter estimation problem wherein the fluxes and metabolite pool sizes are model parameters that are estimated, via optimization, to account for measurements of steady-state or isotopically-nonstationary isotope labeling patterns. As MFA problems advance in scale, they require efficient computational methods for fast and robust convergence. The structure of the MFA problem enables it to be cast as an equality-constrained nonlinear program (NLP), where the equality constraints are constructed from the MFA model equations, and the objective function is defined as the sum of squared residuals (SSR) between the model predictions and a set of labeling measurements. This NLP can be solved by using an algebraic modeling language (AML) that offers state-of-the-art optimization solvers for robust parameter estimation and superior scalability to large networks. When implemented in this manner, the optimization is performed with no distinction between state variables and model parameters. During each iteration of such an optimization, the system state is updated instead of being calculated explicitly from scratch, and this occurs concurrently with improvement in the model parameter estimates. This optimization approach contrasts starkly with traditional "shooting" methods, in which the state variables and model parameters are kept distinct and the system state is computed afresh during each iteration of a stepwise optimization. Our NLP formulation uses the MFA modeling framework of Wiechert et al. [1], which is amenable to incorporation of the model equations into an NLP. The NLP constraints consist of balances on either elementary metabolite units (EMUs) or cumomers. In this formulation, both the steady-state and the isotopically-nonstationary MFA (inst-MFA) problems may be solved as an NLP. For the inst-MFA case, the ordinary differential equation (ODE) system describing the labeling dynamics is transcribed into a system of algebraic constraints for the NLP using collocation. This large-scale NLP may be solved efficiently using an NLP solver implemented on an AML; in our implementation, we used the reduced-gradient solver CONOPT, implemented in the General Algebraic Modeling System (GAMS). The NLP framework is particularly advantageous for inst-MFA, scaling well to large networks with many free parameters and having more robust convergence properties than the shooting methods that recompute the system state and sensitivities at each iteration. Additionally, this NLP approach supports the use of tandem-MS data for both steady-state and inst-MFA when the cumomer framework is used. We assembled a software package, eiFlux, written in Python and GAMS, that uses the NLP approach and supports both steady-state and inst-MFA. We demonstrate the effectiveness of the NLP formulation on several examples, including a genome-scale inst-MFA model, to highlight the scalability and robustness of this approach. In addition to typical inst-MFA applications, we expect that this framework and our associated software, eiFlux, will be particularly useful for applying inst-MFA to complex MFA models, such as those developed for eukaryotes (e.g. algae) and co-cultures with multiple cell types.
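In schematic form, the equality-constrained NLP described above reads as follows (symbols chosen here for illustration; see the paper for the precise model):

% Fluxes v, pool sizes c and labeling state x are all decision variables;
% the EMU/cumomer balances (and, for inst-MFA, the collocation equations)
% enter as equality constraints rather than being solved at each iteration.
\begin{align}
  \min_{v,\,c,\,x}\quad & \mathrm{SSR}
     = \sum_{k} \left( \frac{x_{k}^{\mathrm{sim}} - x_{k}^{\mathrm{meas}}}
                            {\sigma_{k}} \right)^{2} \\
  \text{s.t.}\quad & S\,v = 0, \\
                   & g(v, c, x) = 0, \\
                   & v_{\mathrm{lb}} \le v \le v_{\mathrm{ub}}, \quad c \ge 0.
\end{align}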

20.
Krauss P  Metzner C  Lange J  Lang N  Fabry B 《PloS one》2012,7(5):e36575
We present a method to reconstruct a disordered network of thin biopolymers, such as collagen gels, from three-dimensional (3D) image stacks recorded with a confocal microscope. The method is based on a template matching algorithm that simultaneously performs a binarization and skeletonization of the network. The size and intensity pattern of the template are automatically adapted to the input data so that the method is scale-invariant and generic. Furthermore, the template matching threshold is iteratively optimized to ensure that the final skeletonized network obeys a universal property of voxelized random line networks, namely, that solid-phase voxels most likely have three solid-phase neighbors in a 3 x 3 x 3 neighborhood. This optimization criterion makes our method free of user-defined parameters and the output exceptionally robust against imaging noise.
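The optimization criterion described above is easy to check directly; the sketch below (an illustrative implementation, not the authors' code) computes the most frequent number of solid-phase neighbors in the 3 x 3 x 3 neighborhood of the skeletonized network.

# Most common 26-neighborhood solid-voxel count of a binarized 3D network.
import numpy as np
from scipy import ndimage

def most_common_neighbor_count(binary_volume):
    """binary_volume: 3D array, non-zero = solid phase (skeleton voxel)."""
    solid = np.asarray(binary_volume).astype(bool)
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0                            # exclude the centre voxel
    counts = ndimage.convolve(solid.astype(int), kernel,
                              mode="constant", cval=0)
    return np.bincount(counts[solid]).argmax()

# The template-matching threshold would be tuned until this returns 3
# (here "skeleton" stands for a hypothetical binarized output volume):
# assert most_common_neighbor_count(skeleton) == 3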
