Similar documents
20 similar documents retrieved (search time: 46 ms)
1.
Optimization in dynamic optimization problems (DOPs) requires the optimization algorithms not only to locate, but also to continuously track, the moving optima. Particle swarm optimization (PSO) is a population-based optimization algorithm, originally developed for static problems. Recently, several researchers have proposed variants of PSO for optimization in DOPs. This paper presents a novel multi-swarm PSO algorithm, namely competitive clustering PSO (CCPSO), designed specifically for DOPs. Employing a multi-stage clustering procedure, CCPSO splits the particles of the main swarm over a number of sub-swarms based on the particles' positions and their objective function values. The algorithm automatically adjusts the number of sub-swarms and the corresponding region of each sub-swarm. In addition to the sub-swarms, there is also a group of free particles that explore the environment to locate new emerging optima or exploit current optima that are not followed by any sub-swarm. The adaptive search strategy adopted by the sub-swarms improves both the exploitation and tracking characteristics of CCPSO. A set of experiments is conducted to study the behavior of the proposed algorithm in different DOPs and to provide guidelines for setting the algorithm's parameters in different problems. The results of CCPSO on a variety of moving peaks benchmark (MPB) functions are compared with those of several state-of-the-art PSO algorithms, indicating the efficiency of the proposed model.
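As an illustration of the multi-swarm idea only (the authors' CCPSO uses a multi-stage clustering procedure, adaptive sub-swarm regions and free particles, none of which are reproduced here), the following sketch clusters the particles by position and runs an independent PSO update in each cluster; all function names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2  # simple clustering stand-in for the multi-stage procedure

def multiswarm_pso_step(pos, vel, pbest, pbest_val, f, n_subswarms=3,
                        w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
    """One iteration of a clustered multi-swarm PSO (illustrative sketch)."""
    # Cluster particles by position; each cluster acts as an independent sub-swarm.
    _, labels = kmeans2(pos, n_subswarms, minit='points')
    for k in range(n_subswarms):
        idx = np.where(labels == k)[0]
        if idx.size == 0:
            continue
        # The sub-swarm best replaces the single global best of standard PSO.
        sbest = pbest[idx[np.argmin(pbest_val[idx])]]
        r1, r2 = rng.random(pos[idx].shape), rng.random(pos[idx].shape)
        vel[idx] = w * vel[idx] + c1 * r1 * (pbest[idx] - pos[idx]) + c2 * r2 * (sbest - pos[idx])
        pos[idx] += vel[idx]
    # Update personal bests after the move.
    vals = np.apply_along_axis(f, 1, pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    return pos, vel, pbest, pbest_val

# Usage on a toy static objective standing in for one environment of the MPB
f = lambda x: np.sum((x - 2.0) ** 2)
rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, (30, 2)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_val = np.apply_along_axis(f, 1, pos)
for _ in range(100):
    pos, vel, pbest, pbest_val = multiswarm_pso_step(pos, vel, pbest, pbest_val, f)
print(pbest[np.argmin(pbest_val)])
```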

2.
This paper presents a study of the performance of TRIBES, an adaptive particle swarm optimization algorithm. Particle Swarm Optimization (PSO) is a biologically-inspired optimization method. Recently, researchers have used it effectively in solving various optimization problems. However, like most optimization heuristics, PSO suffers from the drawback of being greatly influenced by the selection of its parameter values. Thus, the common belief is that the performance of a PSO algorithm is directly related to the tuning of such parameters. Usually, such tuning is a lengthy, time-consuming and delicate process. A new adaptive PSO algorithm called TRIBES avoids manual tuning by defining adaptation rules which aim at automatically changing the particles' behaviors as well as the topology of the swarm. In TRIBES, the topology is changed according to the swarm behavior, and the strategies of displacement are chosen according to the performances of the particles. A comparative study carried out on a large set of benchmark functions shows that the performance of TRIBES is quite competitive compared to most other similar PSO algorithms that need manual tuning of parameters. The performance evaluation of TRIBES follows the testing procedure introduced during the 2005 IEEE Conference on Evolutionary Computation. The main objective of the present paper is to perform a global study of the behavior of TRIBES under several conditions, in order to determine the strengths and drawbacks of this adaptive algorithm.

3.
Network translation has recently been used to establish steady-state properties of mass action systems by relating the given system to a generalized one that is either dynamically or steady-state equivalent. In this work, we further use network translation to identify network structures which give rise to the well-studied property of absolute concentration robustness in the corresponding mass action systems. In addition to establishing the capacity for absolute concentration robustness, we show that network translation can often provide a method for deriving the steady-state value of the robust species. We furthermore present a MILP algorithm for the identification of translated chemical reaction networks that improves on previous approaches, allowing for easier application of the theory.

4.
An algorithm is presented for the optimization of molecular geometries and general nonquadratic functions using the nonlinear conjugate gradient method with a restricted step and a restart procedure. The algorithm only requires the evaluation of the energy function and its gradient, therefore less memory storage is needed than for other conjugate gradient algorithms. Some numerical results are also presented, and the efficiency and behaviour of the algorithm are compared with the standard conjugate gradient method. We also present comparisons of both conjugate gradient and variable metric methods with and without the trust region technique. One of the main conclusions of the present work is that a trust region always improves the convergence of any optimization method. A sketch of the algorithm is also given.
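A hedged sketch of the general technique described here, nonlinear conjugate gradients with a restricted step length and periodic restarts, is given below; it is not the paper's algorithm, and the trust radius, restart interval and line-search constants are illustrative assumptions.

```python
import numpy as np

def cg_restricted_step(f, grad, x0, max_step=0.5, restart_every=10,
                       tol=1e-6, max_iter=500):
    """Polak-Ribiere nonlinear CG with a capped step length and periodic restarts."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g                                   # initial direction: steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along the current direction.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d) and t > 1e-12:
            t *= 0.5
        step = t * d
        # Restricted step: never move farther than max_step (trust-region-like cap).
        norm = np.linalg.norm(step)
        if norm > max_step:
            step *= max_step / norm
        x = x + step
        g_new = grad(x)
        if (k + 1) % restart_every == 0:     # periodic restart to steepest descent
            d = -g_new
        else:
            beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))   # Polak-Ribiere+
            d = -g_new + beta * d
        g = g_new
    return x

# Usage on the Rosenbrock function as a stand-in for a molecular energy surface
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(cg_restricted_step(rosen, rosen_grad, [-1.2, 1.0]))
```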

5.
We present a new particle tracking software algorithm designed to accurately track the motion of low-contrast particles against a background with large variations in light levels. The method is based on a polynomial fit of the intensity around each feature point, weighted by a Gaussian function of the distance from the centre, and is especially suitable for tracking endogenous particles in the cell, imaged with bright field, phase contrast or fluorescence optical microscopy. Furthermore, the method can simultaneously track particles of all different sizes, and allows significant freedom in their shape. The algorithm is evaluated using the quantitative measures of accuracy and precision of previous authors, using simulated images at variable signal-to-noise ratios. To these we add new tests: the error due to a non-uniform background, and the error due to two particles approaching each other. Finally, the tracking of particles in real cell images is demonstrated. The method is made freely available for non-commercial use as a software package with a graphical user interface, which can be run within the Matlab programming environment.
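The core idea, a polynomial (quadratic) fit of the intensity around a candidate point weighted by a Gaussian function of the distance from the centre, can be sketched as follows; this is a minimal illustration rather than the released Matlab package, and the window size, weighting width and test image are assumptions.

```python
import numpy as np

def refine_peak(img, y0, x0, half=5, sigma=3.0):
    """Refine a peak position by a Gaussian-weighted quadratic fit of intensity."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    patch = img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].astype(float)
    # Gaussian weights of the distance from the candidate centre (sqrt for weighted LSQ).
    sw = np.sqrt(np.exp(-(xs**2 + ys**2) / (2 * sigma**2))).ravel()
    x, y, z = xs.ravel(), ys.ravel(), patch.ravel()
    # Design matrix for I ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
    a, b, c, d, e, f2 = coef
    # The refined position is the stationary point of the fitted quadratic surface.
    H = np.array([[2 * d, e], [e, 2 * f2]])
    dx, dy = np.linalg.solve(H, -np.array([b, c]))
    return y0 + dy, x0 + dx

# Usage: a synthetic low-contrast spot on a sloping background
yy, xx = np.mgrid[0:64, 0:64]
img = 0.05 * xx + np.exp(-((xx - 30.3)**2 + (yy - 25.7)**2) / 8.0)
print(refine_peak(img, 26, 30))   # close to (25.7, 30.3)
```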

6.
We show in this paper how simple considerations about bio-array images lead to a peak segmentation that allows the analysis of gene activity. Bio-array images have a particular structure, and the aim of the paper is to present a mathematical method allowing their automatic processing. The differential geometry approach used here can also be employed for other types of images presenting grey-level peaks corresponding to a functional activity or to a chemical concentration. The mathematical method is based on elementary techniques of differential geometry and dynamical systems theory and provides a simple, efficient algorithm when the peaks to be segmented are isolated.

7.
We present a method for automatically extracting groups of orthologous genes from a large set of genomes by a new clustering algorithm on a weighted multipartite graph. The method assigns a score to an arbitrary subset of genes from multiple genomes to assess the orthologous relationships between genes in the subset. This score is computed using sequence similarities between the member genes and the phylogenetic relationship between the corresponding genomes. An ortholog cluster is found as the subset with the highest score, so ortholog clustering is formulated as a combinatorial optimization problem. The algorithm for finding an ortholog cluster runs in time O(|E| + |V| log |V|), where V and E are the sets of vertices and edges, respectively, in the graph. However, if we discretize the similarity scores into a constant number of bins, the runtime improves to O(|E| + |V|). The proposed method was applied to seven complete eukaryote genomes on which the manually curated database of eukaryotic ortholog clusters, KOG, is constructed. A comparison of our results with the manually curated ortholog clusters shows that our clusters are well correlated with the existing clusters.

8.
We present an optimization model for fishing vessel evacuation during typhoon emergencies. Typhoon forecasts often do not accurately predict the typhoon path, so a risk assessment method is developed to evaluate the risk of fishing vessel evacuation processes. This risk assessment serves as the objective function of the fishing vessel evacuation optimization model. The model is a many-to-many nonlinear assignment problem. A hybrid closed-loop algorithm is used, and its efficiency exceeds that of the traditional algorithm. The model is tested on a typhoon scenario based on the distribution of harbors in Zhejiang Province, China.

9.
BACKGROUND: Several deterministic and stochastic combinatorial optimization algorithms have been applied to computational protein design and homology modeling. As structural targets increase in size, however, it has become necessary to find more powerful methods to address the increased combinatorial complexity. RESULTS: We present a new deterministic combinatorial search algorithm called 'Branch-and-Terminate' (B&T), which is derived from the Branch-and-Bound search method. The B&T approach is based on the construction of an efficient but very restrictive bounding expression, which is used for the search of a combinatorial tree representing the protein system. The bounding expression is used both to determine the optimal organization of the tree and to perform a highly effective pruning procedure named 'termination'. For some calculations, the B&T method rivals the current deterministic standard, dead-end elimination (DEE), sometimes finding the solution up to 21 times faster. A more significant feature of the B&T algorithm is that it can provide an efficient way to complete the optimization of problems that have been partially reduced by a DEE algorithm. CONCLUSIONS: The B&T algorithm is an effective optimization algorithm when used alone. Moreover, it can increase the problem size limit of amino acid sidechain placement calculations, such as protein design, by completing DEE optimizations that reach a point at which the DEE criteria become inefficient. Together the two algorithms make it possible to find solutions to problems that are intractable by either algorithm alone.
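For orientation, the following is a generic branch-and-bound sketch for discrete rotamer selection with an optimistic lower bound used for pruning; it illustrates the family of methods B&T belongs to, but the actual B&T bounding expression, tree organization and 'termination' rule are not reproduced here, and the energy tables are random placeholders.

```python
import numpy as np

def branch_and_bound(E_self, E_pair):
    """Pick one rotamer per position minimizing self plus pairwise energy.

    E_self[i] is a 1-D array of self-energies for position i;
    E_pair[(i, j)][a, b] is the interaction energy of rotamer a at i with b at j (i < j).
    """
    n = len(E_self)
    best = {"energy": np.inf, "assign": None}

    def lower_bound(assign, partial_e):
        # Optimistic bound: best possible remaining self + pair contributions.
        m, lb = len(assign), partial_e
        for j in range(m, n):
            cand = E_self[j].copy()
            for i, a in enumerate(assign):
                cand = cand + E_pair[(i, j)][a, :]
            lb += cand.min()
        for j in range(m, n):
            for k in range(j + 1, n):
                lb += E_pair[(j, k)].min()     # best case for still-unassigned pairs
        return lb

    def recurse(assign, partial_e):
        if len(assign) == n:
            if partial_e < best["energy"]:
                best["energy"], best["assign"] = partial_e, list(assign)
            return
        i = len(assign)
        for a in np.argsort(E_self[i]):        # try promising rotamers first
            e = partial_e + E_self[i][a] + sum(E_pair[(j, i)][assign[j], a] for j in range(i))
            if lower_bound(assign + [a], e) < best["energy"]:   # otherwise prune this branch
                recurse(assign + [a], e)

    recurse([], 0.0)
    return best["assign"], best["energy"]

# Usage on a tiny random 4-position, 3-rotamer problem
rng = np.random.default_rng(0)
E_self = [rng.normal(size=3) for _ in range(4)]
E_pair = {(i, j): rng.normal(size=(3, 3)) for i in range(4) for j in range(i + 1, 4)}
print(branch_and_bound(E_self, E_pair))
```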

10.
MOTIVATION: The inference of biochemical networks, such as gene regulatory networks, protein-protein interaction networks, and metabolic pathway networks, from time-course data is one of the main challenges in systems biology. The ultimate goal of such modeling is to obtain expressions that quantitatively capture every detail and principle of biological systems. To infer a realizable S-system structure, most articles have applied the sum of the magnitudes of the kinetic orders as a penalty term in the fitness evaluation. How to tune the penalty weight to yield a realizable model structure is the main issue for the inverse problem. No guideline has been published for tuning a suitable penalty weight to infer a suitable model structure of biochemical networks. RESULTS: We introduce an interactive inference algorithm to infer a realizable S-system structure for biochemical networks. The inference problem is formulated as a multiobjective optimization problem to minimize simultaneously the concentration error, the slope error and the interaction measure in order to find a suitable S-system model structure and its corresponding model parameters. The multiobjective optimization problem is solved by the epsilon-constraint method, minimizing the interaction measure subject to the expectation constraints for the concentration and slope error criteria. Theorems are provided to guarantee that the minimum solution of the epsilon-constrained problem achieves the minimum-interaction network for the inference problem. The approach avoids having to assign a penalty weight to the sum of the magnitudes of the kinetic orders.
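The epsilon-constraint idea, minimizing one criterion subject to upper bounds on the others, can be illustrated with a generic sketch such as the one below; the objective functions here are placeholders standing in for the interaction measure and the concentration/slope errors of an S-system fit, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder criteria standing in for the interaction measure (f_interaction)
# and the concentration/slope error criteria of an S-system fit.
f_interaction = lambda k: np.sum(np.abs(k))                  # sparsity of kinetic orders
f_conc_error  = lambda k: (k[0] - 1.0)**2 + (k[1] - 0.5)**2  # pretend data-fit error
f_slope_error = lambda k: (k[2] + 0.3)**2

eps_conc, eps_slope = 1e-3, 1e-3   # epsilon levels chosen by the modeler

# Epsilon-constraint method: minimize one objective, bound the others.
res = minimize(
    f_interaction, x0=np.ones(3), method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda k: eps_conc - f_conc_error(k)},
                 {"type": "ineq", "fun": lambda k: eps_slope - f_slope_error(k)}])
print(res.x)   # smallest-magnitude kinetic orders that still meet the error tolerances
```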

11.
DNA can adopt different conformations depending on the base sequence, solvent, electrolyte composition and concentration, pH, temperature, and interaction with proteins. Here we present a model for calculating the three-dimensional atomic structure of double-stranded DNA oligomers. A theoretical energy function is used for calculating the interactions within the base steps and an empirical backbone function is used to restrict the conformational space accessible to the bases and to account for the conformational coupling of neighboring steps in a sequence. Conformational searching on large structures or a large number of structures is possible, because each base step can be described by just two primary degrees of freedom (slide and shift). A genetic algorithm is used to search for low-energy structures in slide-shift space, and this allows very rapid optimization of DNA oligomers. The other base step parameters have been previously optimized for all possible slide-shift sequence combinations, and a heuristic algorithm is used to add the atomic details of the backbone conformation in the final step of the calculation. The structures obtained by this method are very similar to the corresponding X-ray crystal structures observed experimentally. The average RMSD is 2.24 Angstroms for a set of 20 oligomer structures. For 15 of these sequences, the X-ray crystal structure is the global energy minimum. The other 5 are bistable sequences that have B-form global energy minima but crystallize as A-DNA.

12.
A multivariable on-line adaptive optimization algorithm using a bilevel forgetting factor method was developed and applied to a continuous baker's yeast culture in simulation and experimental studies to maximize the cellular productivity by manipulating the dilution rate and the temperature. The algorithm showed a good optimization speed and a good adaptability and reoptimization capability. The algorithm was able to stably maintain the process around the optimum point for an extended period of time. Two cases were investigated: an unconstrained and a constrained optimization. In the constrained optimization the ethanol concentration was used as an index for the baking quality of yeast cells. An equality constraint with a quadratic penalty was imposed on the ethanol concentration to keep its level close to a hypothetical "optimum" value. The developed algorithm was experimentally applied to a baker's yeast culture to demonstrate its validity. Only unconstrained optimization was carried out experimentally. A set of tuning parameter values was suggested after evaluating the results from several experimental runs. With those tuning parameter values the optimization took 50-90 h. At the attained steady state the dilution rate was 0.310 h(-1), the temperature 32.8 degrees C, and the cellular productivity 1.50 g/L/h.

13.
We present a three-dimensional tracking routine for nondiffraction-limited particles, which significantly reduces pixel bias. Our technique allows for increased resolution compared to that of previous methods, especially at low magnification or at high signal/noise ratio. This enables tracking with nanometer accuracy in a wide field of view and tracking of many particles. To reduce bias induced by pixelation, the tracking algorithm uses interpolation of the image on a circular grid to determine the x-, y-, and z-positions. We evaluate the proposed algorithm by tracking simulated images and compare it to well-known center-of-mass and cross-correlation methods. The final resolution of the described method improves up to an order of magnitude in three dimensions compared to conventional tracking methods. We show that errors in x,y-tracking can seriously affect z-tracking if interpolation is not used. We validate our results with experimental data obtained for conditions matching those used in the simulations. Finally, we show that the increased performance of the proposed algorithm uniquely enables it to extract accurate data for the persistence length and end-to-end distance of 107 DNA tethers in a single experiment.
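A minimal sketch of the central ingredient, interpolating the image on a circular grid around a candidate centre and refining the centre against a radial-symmetry criterion, is shown below; it is not the published routine, and the radii, interpolation order and symmetry metric are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def polar_samples(img, cy, cx, radii=np.arange(1, 15), n_theta=64):
    """Interpolate the image on concentric circles around (cy, cx)."""
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    yy = cy + radii[:, None] * np.sin(theta)
    xx = cx + radii[:, None] * np.cos(theta)
    # Sub-pixel (cubic) interpolation on the circular grid reduces pixel-edge bias.
    return map_coordinates(img, [yy.ravel(), xx.ravel()], order=3).reshape(len(radii), n_theta)

def asymmetry(center, img):
    """For a radially symmetric spot, intensity should be constant on each circle."""
    rings = polar_samples(img, *center)
    return float(np.sum(np.var(rings, axis=1)))

# Usage: refine the centre of a synthetic ring-like (out-of-focus) particle image
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 31.6, xx - 33.2)
img = np.exp(-(r - 6.0)**2 / 4.0)                 # non-diffraction-limited ring pattern
res = minimize(asymmetry, x0=[31.0, 33.0], args=(img,), method="Nelder-Mead")
print(res.x)   # close to (31.6, 33.2)
```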

14.
We describe a new automatic technique for the study of intracellular mobility. It is based on the visualization of colloidal gold particles by video-enhanced contrast light microscopy (nanometer video microscopy) combined with modern tracking algorithms and image processing hardware. The approach can be used for determining the complete statistics of saltatory motility of a large number of individual moving markers. Complete distributions of jump time, jump velocity, stop time, and orientation can be generated. We also show that this method allows one to study the characteristics of random motion in the cytoplasm of living cells or on cell membranes. The concept is illustrated by two studies. First we present the motility of colloidal gold in an in vitro system of microtubules and a protein extract containing a kinesin-like factor. The algorithm is thoroughly tested by manual tracking of the videotapes. The second study involves the motion of gold particles microinjected in the cytoplasm of PTK-2 cells. Here the results are compared to a study using the spreading of colloidal gold particles after microinjection.

15.
Random spherically constrained (RSC) single particle reconstruction is a method to obtain structures of membrane proteins embedded in lipid vesicles (liposomes). As in all single-particle cryo-EM methods, structure determination is greatly aided by reliable detection of protein "particles" in micrographs. After fitting and subtraction of the membrane density from a micrograph, normalized cross-correlation (NCC) and estimates of the particle signal amplitude are used to detect particles, using as references the projections of a 3D model. At each pixel position, the NCC is computed with only those references that are allowed by the geometric constraint of the particle's embedding in the spherical vesicle membrane. We describe an efficient algorithm for computing this position-dependent correlation, and demonstrate its application to selection of membrane-protein particles, GluA2 glutamate receptors, which present very different views from different projection directions.
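A bare-bones sketch of the scoring step, normalized cross-correlation of an image patch against only the geometrically allowed reference projections, might look as follows; the membrane fitting and subtraction, amplitude estimation and efficiency tricks of the actual algorithm are omitted, and the toy references are random placeholders.

```python
import numpy as np

def ncc(patch, ref):
    """Normalized cross-correlation between two equally sized images."""
    a = patch - patch.mean()
    b = ref - ref.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_reference(patch, references, allowed):
    """Score a candidate position against only the geometrically allowed references."""
    scores = [ncc(patch, references[i]) for i in allowed]
    k = int(np.argmax(scores))
    return allowed[k], scores[k]

# Usage with toy data: 8 random "projections", only 3 allowed at this position
rng = np.random.default_rng(1)
references = rng.normal(size=(8, 32, 32))
patch = references[5] + 0.5 * rng.normal(size=(32, 32))      # noisy copy of reference 5
print(best_reference(patch, references, allowed=[2, 5, 7]))  # picks 5
```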

16.
Accurately determining a cryoEM particle's alignment parameters is crucial to high-resolution single particle 3-D reconstruction. We developed Multi-Path Simulated Annealing, a Monte Carlo-type optimization algorithm, for globally aligning the center and orientation of a particle simultaneously. A consistency criterion was developed to ensure the alignment parameters are correct and to remove some bad particles from a large pool of images of icosahedral particles. Without using any a priori model, this procedure is able to reconstruct a structure from a random initial model. Combining the procedure above with a new empirical double-threshold particle selection method, we are able to pick tens of the best quality particles to reconstruct a subnanometer-resolution map from scratch. Using the best 62 particles of rice dwarf virus (RDV), the reconstruction reached 9.6 Å resolution, at which four helices of the P3A subunit of RDV are resolved. Furthermore, with the 284 best particles, the reconstruction is improved to 7.9 Å resolution, and 21 of 22 helices and six of seven beta sheets are resolved.
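As a rough illustration of the multi-path idea, several independent simulated-annealing runs followed by a consistency check across their results, consider the sketch below; it is a generic optimizer on a toy objective, not the authors' cryo-EM alignment code, and the cooling schedule, path count and tolerance are assumptions.

```python
import numpy as np

def anneal(f, x0, steps=2000, t0=1.0, t_end=1e-3, step=0.5, rng=None):
    """One simulated-annealing path minimizing f."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, float)
    fx = f(x)
    for k in range(steps):
        t = t0 * (t_end / t0) ** (k / steps)         # geometric cooling schedule
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
    return x, fx

def multi_path_anneal(f, x0, n_paths=5, tol=0.1):
    """Run several independent paths; accept only if the paths agree (consistency check)."""
    results = [anneal(f, x0, rng=np.random.default_rng(seed)) for seed in range(n_paths)]
    xs = np.array([x for x, _ in results])
    best = min(results, key=lambda r: r[1])
    consistent = np.linalg.norm(xs - best[0], axis=1).mean() < tol
    return best, consistent

# Usage on a toy multimodal objective standing in for an alignment score
f = lambda x: np.sum(x**2) + np.sin(5 * x).sum()
print(multi_path_anneal(f, x0=[3.0, -2.0]))
```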

17.
Cheon S, Liang F. Biosystems 2011, 105(3): 243-249.
Recently, the stochastic approximation Monte Carlo algorithm has been proposed by Liang et al. (2007) as a general-purpose stochastic optimization and simulation algorithm. An annealing version of this algorithm was developed for real small protein folding problems. The numerical results indicate that it outperforms simulated annealing and conventional Monte Carlo algorithms as a stochastic optimization algorithm. We also propose a method for using secondary structures in protein folding. The predicted protein structures are rather close to the true structures.

18.
19.
Hidden Markov modeling (HMM) provides an effective approach for modeling single channel kinetics. Standard HMM is based on Baum's reestimation. As applied to single channel currents, the algorithm is unable to optimize the rate constants directly. We present here an alternative approach, considering the problem as a general optimization problem. The quasi-Newton method is used for searching the likelihood surface. The analytical derivatives of the likelihood function are derived, thereby maximizing the efficiency of the optimization. Because the rate constants are optimized directly, the approach has advantages such as the allowance for model constraints and the ability to simultaneously fit multiple data sets obtained at different experimental conditions. Numerical examples are presented to illustrate the performance of the algorithm. Comparisons with Baum's reestimation suggest that the approach has a superior convergence speed when the likelihood surface is poorly defined due to, for example, a low signal-to-noise ratio or the aggregation of multiple states having identical conductances.
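A simplified sketch of the approach, treating likelihood maximization as a general optimization problem and letting a quasi-Newton routine search directly over the (log) rate constants, is given below for a discrete-time two-state HMM with Gaussian noise; single-channel analysis proper uses continuous-time aggregated Markov models and the analytical derivatives derived in the paper, neither of which is reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(log_rates, y, dt=1e-3, levels=(0.0, 1.0), sigma=0.3):
    """Scaled forward-algorithm negative log-likelihood of a two-state HMM."""
    k_co, k_oc = np.exp(log_rates)                    # closed->open, open->closed rates (1/s)
    A = np.array([[1 - k_co * dt, k_co * dt],
                  [k_oc * dt, 1 - k_oc * dt]])        # per-sample transition matrix
    emis = norm.pdf(y[:, None], loc=np.array(levels), scale=sigma)
    alpha = np.array([0.5, 0.5]) * emis[0]
    c = alpha.sum(); ll = np.log(c); alpha = alpha / c
    for t in range(1, len(y)):
        alpha = (alpha @ A) * emis[t]
        c = alpha.sum(); ll += np.log(c); alpha = alpha / c   # scaling avoids underflow
    return -ll

# Simulate a noisy two-level trace from known rates, then recover them.
rng = np.random.default_rng(0)
true_k, dt = (50.0, 80.0), 1e-3
state, states = 0, []
for _ in range(5000):
    if rng.random() < (true_k[0] if state == 0 else true_k[1]) * dt:
        state = 1 - state
    states.append(state)
y = np.array(states, float) + 0.3 * rng.normal(size=len(states))

# Quasi-Newton (L-BFGS-B) search directly over the log rate constants.
res = minimize(neg_log_likelihood, x0=np.log([10.0, 10.0]), args=(y,),
               method="L-BFGS-B", bounds=[(np.log(1.0), np.log(500.0))] * 2)
print(np.exp(res.x))   # roughly (50, 80)
```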

20.
Particle swarm optimization (PSO) is a population-based algorithm designed to find good solutions to optimization problems. However, if the problems are subject to noise, the quality of its results significantly deteriorates. Previous works have addressed such deterioration by developing noise mitigation mechanisms to target specific issues such as handling the inaccurate memories of the particles and aiding the particles to correctly select their neighborhood best solutions. However, in spite of the improvements achieved, the extent to which these issues affect the particles, and the underlying reasons for the deterioration in the quality of the results, remain uncertain. In this article, we formally define deception, blindness and disorientation as the conditions responsible for such deterioration, and we develop a set of population statistics to measure the extent to which these conditions affect the particles throughout the search process. The population statistics are computed for the regular PSO algorithm and for PSO with equal resampling (PSO-ER) on 20 large-scale benchmark functions subject to different levels of multiplicative Gaussian noise. The key findings that we reveal with the population statistics on optimization problems subject to noise are the following: (a) the quality of the results significantly deteriorates as particles suffer from large proportions of deception and blindness; (b) the presence of deception, blindness and disorientation, and their effects on the quality of the results, makes the performance of the swarms sensitive to optimization problems subject to noise; (c) the incorporation of resampling methods into PSO significantly improves the quality of the results; and (d) it is better to first address the conditions of blindness and disorientation before addressing deception.
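A minimal sketch of equal resampling, re-evaluating every particle's noisy objective the same number of times and averaging before updating the personal and global bests, is shown below; the swarm size, resampling budget and test function are illustrative and do not correspond to the article's benchmark setup.

```python
import numpy as np

def pso_er(f_noisy, dim=10, n_particles=30, resamples=5, iters=200,
           w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """PSO with equal resampling (PSO-ER): average m noisy evaluations per particle."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(*bounds, (n_particles, dim))
    vel = np.zeros_like(pos)

    def evaluate(x):
        # Equal resampling: the same budget of re-evaluations for every particle.
        return np.mean([f_noisy(x, rng) for _ in range(resamples)])

    pbest, pbest_val = pos.copy(), np.array([evaluate(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, *bounds)
        vals = np.array([evaluate(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

# Usage: sphere function corrupted by multiplicative Gaussian noise
f_noisy = lambda x, rng: np.sum(x**2) * (1.0 + 0.3 * rng.normal())
print(np.linalg.norm(pso_er(f_noisy)))   # near 0 despite the noise
```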

