Automated neuron model optimization techniques: a review
Authors: W. Van Geit, E. De Schutter, P. Achard
Affiliation:1. Computational Neuroscience Unit, Okinawa Institute of Science and Technology, 7542 Onna, Onna-Son, Okinawa, 904-0411, Japan
2. Theoretical Neurobiology, University of Antwerp, Universiteitsplein 1, Wilrijk, Antwerp, 2610, Belgium
3. Volen Center for Complex Systems, Brandeis University, 415 South Street, Waltham, MA, 02454, USA
Abstract: The increase in complexity of computational neuron models makes the hand tuning of model parameters more difficult than ever. Fortunately, the parallel increase in computer power allows scientists to automate this tuning. Optimization algorithms need two essential components. The first is a function that measures the difference between the output of the model with a given set of parameters and the data. This error function, or fitness function, makes it possible to rank different parameter sets. The second component is a search algorithm that explores the parameter space to find the best parameter set in a minimal amount of time. In this review we distinguish three types of error functions: feature-based functions, point-by-point comparisons of voltage traces, and multi-objective functions. We then detail several popular search algorithms, including brute-force methods, simulated annealing, genetic algorithms, evolution strategies, differential evolution, and particle-swarm optimization. Finally, we briefly describe Neurofitter, a free software package that combines a phase-plane trajectory density fitness function with several search algorithms.
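
The sketch below is not taken from the review or from Neurofitter; it only illustrates, under simplifying assumptions, how the two components described in the abstract fit together: a point-by-point error function comparing a model trace to a target trace, driven by one of the listed search algorithms (differential evolution, via SciPy). The two-parameter "model" is a hypothetical stand-in for an actual neuron simulation.

```python
# Minimal sketch: point-by-point voltage-trace error function + differential
# evolution search. The toy model_trace() is a hypothetical placeholder for a
# real neuron model; parameter names and bounds are illustrative only.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 100.0, 1000)  # time grid (ms)

def model_trace(params):
    """Hypothetical two-parameter model producing a 'voltage' trace."""
    amplitude, rate = params
    return amplitude * np.exp(-rate * t) * np.sin(0.5 * t)

def error_function(params, target):
    """Point-by-point comparison: mean squared difference between traces."""
    return np.mean((model_trace(params) - target) ** 2)

# Stand-in "experimental" data: a target trace with known parameters plus noise.
true_params = (10.0, 0.03)
target = model_trace(true_params) + np.random.default_rng(0).normal(0.0, 0.1, t.size)

# The search algorithm explores the parameter space within the given bounds
# and returns the parameter set that minimizes the error function.
result = differential_evolution(error_function,
                                bounds=[(0.0, 20.0), (0.0, 0.1)],
                                args=(target,), seed=0)
print("best parameters:", result.x, "error:", result.fun)
```

In practice the error function could equally be feature-based (comparing spike counts, firing rates, etc.) or a phase-plane trajectory density measure as in Neurofitter, and the search loop could be replaced by any of the other algorithms the review covers.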
Keywords:
This article is indexed in SpringerLink and other databases.