Similar documents
20 similar documents found (search time: 343 ms)
1.
According to the basic optimization principle of artificial neural networks, a novel neural network model for solving the quadratic programming problem is presented. The methodology is based on Lagrange multiplier theory in optimization and seeks solutions satisfying the necessary conditions of optimality. The equilibrium point of the network satisfies the Kuhn-Tucker conditions for the problem. The stability and convergence of the neural network are investigated, and the strategy of neural optimization is discussed. The feasibility of the neural network method is verified with computational examples, and simulation results for solving optimization problems are presented to illustrate the computational power of the method.
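A minimal sketch of the Lagrangian dynamics this abstract describes, for the equality-constrained QP min ½xᵀQx + cᵀx s.t. Ax = b. This is an illustrative primal-descent/dual-ascent flow whose equilibrium satisfies the KKT (Kuhn-Tucker) conditions, not the paper's exact model; all names and parameter values are ours.

```python
import numpy as np

def lagrangian_nn_qp(Q, c, A, b, lr=0.01, steps=20000):
    """Euler-discretized Lagrangian network: dx/dt = -dL/dx, dlam/dt = +dL/dlam."""
    n, m = Q.shape[0], A.shape[0]
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(steps):
        x = x - lr * (Q @ x + c + A.T @ lam)   # primal descent on the Lagrangian
        lam = lam + lr * (A @ x - b)           # dual ascent on the constraint residual
    return x, lam

# toy problem: min x'x subject to x1 + x2 = 1, whose solution is (0.5, 0.5)
Q = 2.0 * np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = lagrangian_nn_qp(Q, c, A, b)
print(x)  # ≈ [0.5, 0.5]
```

At the fixed point the stationarity condition Qx + c + Aᵀλ = 0 and feasibility Ax = b hold simultaneously, which is exactly the KKT system for this QP.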

2.
A Class of Neural Network Models for Solving Constrained Nonlinear Programming Problems   (Cited by: 1; self-citations: 0; citations by others: 1)
A class of neural network models for solving nonlinear programming problems over closed convex sets is proposed. Theoretical analysis and computer simulation show that, under appropriate assumptions, the proposed neural network converges globally and exponentially to the solution set of the nonlinear programming problem. The method adopted by the network belongs to the class of generalized steepest-descent methods, and even when the programming problem is positive-definite quadratic, the present model is simpler than existing neural network models.

3.
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons is developed for solving nonlinear discrete dynamic optimization problems. A direct method for assigning the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information that occurs during synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach to dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Illustrative application examples, including the shortest-path and fuzzy decision-making problems, show how this approach works.
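The Bellman recursion underlying this abstract can be sketched with ordinary value iteration for the shortest-path problem; the graph below is a hypothetical example, and the code illustrates only the optimality principle, not the network mapping itself.

```python
import math

def bellman_shortest_path(edges, n, source):
    """Shortest path by repeated Bellman relaxation: d[v] = min(d[v], d[u] + w(u,v))."""
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(n - 1):            # n-1 relaxation sweeps suffice without negative cycles
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w  # Bellman update
    return dist

# hypothetical directed graph: (from, to, cost)
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0), (2, 3, 5.0)]
print(bellman_shortest_path(edges, 4, 0))  # [0.0, 3.0, 1.0, 4.0]
```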

4.
A trainable recurrent neural network, the Simultaneous Recurrent Neural network, is proposed to address the scaling problem faced by neural network algorithms in static optimization. The proposed algorithm derives its computational power to address the scaling problem through its ability to "learn", in contrast to existing recurrent neural algorithms, which are not trainable. The recurrent backpropagation algorithm is employed to train the recurrent, relaxation-based neural network in order to associate fixed points of the network dynamics with locally optimal solutions of the static optimization problems. Performance of the algorithm is tested on the NP-hard Traveling Salesman Problem in the range of 100 to 600 cities. Simulation results indicate that the proposed algorithm is able to consistently locate high-quality solutions for all problem sizes tested. In other words, the proposed algorithm scales demonstrably well with problem size with respect to solution quality, at the expense of increased computational cost for large problem sizes.

5.
A neural network model for solving constrained nonlinear optimization problems with bounded variables is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. The network is shown to be completely stable and globally convergent to the solutions of constrained nonlinear optimization problems. A fuzzy logic controller is incorporated in the network to minimize convergence time. Simulation results are presented to validate the proposed approach.

6.
Constrained optimization problems arise in a wide variety of scientific and engineering applications. Since single recurrent neural networks applied to constrained optimization in real-time engineering applications have shown some limitations, cooperative recurrent neural network approaches have been developed to overcome the drawbacks of single networks. This paper surveys in detail work on cooperative recurrent neural networks for solving constrained optimization problems and their engineering applications, and points out the outstanding models from the viewpoint of both convergence to the optimal solution and model complexity. We provide examples and comparisons to show the advantages of these models in the given applications.

7.
8.
A novel neural network approach using the maximum neuron model is presented for N-queens problems. The goal of the N-queens problem is to find a set of locations of N queens on an N×N chessboard such that no pair of queens attacks each other. The maximum neuron model proposed by Takefuji et al. has previously been applied to two optimization problems in which objective functions are optimized without constraints. This paper demonstrates the effectiveness of the maximum neuron model for constraint satisfaction problems through the N-queens problem. The performance is verified through simulations on problems of up to 500 queens in the sequential mode, the N-parallel mode, and the N²-parallel mode, where our maximum neural network shows far better performance than the existing neural networks. Received: 4 June 1996 / Accepted in revised form: 13 November 1996
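The maximum neuron idea can be sketched as a winner-take-all update: each row of the board is a group of N neurons, exactly the least-inhibited neuron in the group fires, so "one queen per row" holds by construction and only column/diagonal conflicts remain as inhibitory inputs. This is an illustrative reimplementation in the sequential mode, not Takefuji's exact formulation.

```python
import random

def conflicts(cols):
    """Number of attacking queen pairs for one queen per row at columns cols."""
    n, c = len(cols), 0
    for i in range(n):
        for j in range(i + 1, n):
            if cols[i] == cols[j] or abs(cols[i] - cols[j]) == abs(i - j):
                c += 1
    return c

def max_neuron_nqueens(n, sweeps=50, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]      # one queen per row, random columns
    for _ in range(sweeps):
        for row in range(n):                          # sequential update mode
            def inhibition(col):                      # conflicts induced by placing at col
                return sum(1 for r in range(n) if r != row and
                           (cols[r] == col or abs(cols[r] - col) == abs(r - row)))
            cols[row] = min(range(n), key=inhibition)  # the maximum (least-inhibited) neuron fires
        if conflicts(cols) == 0:
            break
    return cols

sol = max_neuron_nqueens(8)
```

Because the winner-take-all group enforces the row constraint structurally, the energy function only needs the column and diagonal terms, which is the point the abstract makes about constraint satisfaction.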

9.
An algorithm using a feedforward neural network model for determining optimal substrate feeding policies for fed-batch fermentation processes is presented in this work. The algorithm involves developing a neural network model of the process from sampled data; the trained neural network model is in turn used for optimization. The advantages of this technique are that optimization can be achieved without a detailed kinetic model of the process, and that the computation of the gradient of the objective function with respect to the control variables is straightforward. The application of the technique is demonstrated with two examples, namely the production of secreted protein and of invertase. The simulation results show that the discrete-time dynamics of a fed-batch bioreactor can be satisfactorily approximated by a feedforward sigmoidal neural network. The optimal policies obtained with the neural network model agree reasonably well with previously reported results.

10.
Special food safety supervision by means of intelligent models and methods is of great significance for the health of local people and tourists. Models such as the BP neural network suffer from low accuracy and poor robustness in food safety prediction. Therefore, principal component analysis was first used to extract the key factors influencing the amount of coliform communities, reducing the dimension of the input variables of the BP neural network. Second, particle swarm optimization (PSO) was used to optimize the initial weights and thresholds of the BP network, and a PSO-BP model was constructed to predict the amount of coliform bacteria in the Dai special snack Sa pie. Finally, the predicted values of the model were verified. The results show an MSE of 0.0097, a MAPE of 0.3198, and an MAE of 0.0079, indicating that the PSO-BP model has better accuracy and robustness and can effectively predict the amount of coliform bacteria. The research has important guiding significance for the quality and production of Sa pie.
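The PSO half of the PSO-BP scheme can be sketched on its own: particles search weight space and the global best position found would seed the BP network's initial weights and thresholds. The fitness below is a stand-in for the network's training error, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=200, seed=0):
    """Standard global-best PSO minimizing `fitness` over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # candidate weight vectors
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f                        # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()        # update global best
    return gbest

# stand-in fitness: sum of squared "weights" instead of BP training error
best = pso(lambda w: float(np.sum(w ** 2)), dim=5)
```

In the full method, `best` would be reshaped into the BP network's weight matrices and bias (threshold) vectors before gradient training begins.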

11.
The neural network method of Hopfield and Tank claims to be able to find nearly-optimum solutions for discrete optimization problems, e.g. the travelling salesman problem. In the present paper, an example is given which shows that the Hopfield-Tank algorithm systematically prefers certain solutions even if the energy values of these solutions are clearly higher than the energy of the global minimum.
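The Hopfield-Tank energy at issue, in its usual textbook form: penalty terms drive the neuron matrix V (city x at tour position i) toward a permutation matrix, plus a tour-length term. The penalty weights below are illustrative, and the distance matrix is assumed to have a zero diagonal.

```python
import numpy as np

def tsp_energy(V, dist, A=1.0, B=1.0, C=1.0, Dw=1.0):
    """Hopfield-Tank TSP energy; for a valid tour it reduces to Dw * tour length."""
    n = V.shape[0]
    e1 = float(np.sum(V.sum(axis=1) ** 2 - (V ** 2).sum(axis=1)))  # >1 position per city
    e2 = float(np.sum(V.sum(axis=0) ** 2 - (V ** 2).sum(axis=0)))  # >1 city per position
    e3 = float((V.sum() - n) ** 2)                                 # exactly n neurons on
    Vnext = np.roll(V, -1, axis=1) + np.roll(V, 1, axis=1)         # tour neighbours of each slot
    e4 = float(np.sum(dist * (V @ Vnext.T)))                       # counts each edge twice
    return 0.5 * (A * e1 + B * e2 + C * e3 + Dw * e4)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # unit square, tour length 4
dist = np.sqrt(((pts[:, None] - pts[None, :]) ** 2).sum(-1))
tour = np.eye(4)                                                    # city i at position i
print(tsp_energy(tour, dist))  # 4.0
```

The paper's observation is that gradient descent on this energy can systematically favor certain (non-optimal) minima; the energy itself is still zero-penalty exactly on valid permutation matrices.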

12.

Background

A profile-comparison method with position-specific scoring matrix (PSSM) is among the most accurate alignment methods. Currently, cosine similarity and correlation coefficients are used as scoring functions of dynamic programming to calculate similarity between PSSMs. However, it is unclear whether these functions are optimal for profile alignment methods. By definition, these functions cannot capture nonlinear relationships between profiles. Therefore, we attempted to discover a novel scoring function, which was more suitable for the profile-comparison method than existing functions, using neural networks.

Results

Neural networks require a derivative of the cost function, but the problem addressed in this study lacks one. Therefore, we implemented a novel derivative-free neural network by combining a conventional neural network with an evolutionary strategy optimization method used as the solver. Using this novel neural network system, we optimized the scoring function to align remote sequence pairs. Our results showed that the pairwise-profile aligner using the novel scoring function significantly improved both alignment sensitivity and precision relative to aligners using existing functions.
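The derivative-free training idea can be sketched with a simple (1+1) evolution strategy: perturb the full weight vector and keep the perturbation whenever the (possibly non-differentiable) cost improves. Nepal's actual solver is more sophisticated; the toy L1 cost and all names below are ours.

```python
import numpy as np

def es_train(cost, dim, sigma=0.1, iters=500, seed=0):
    """(1+1) evolution strategy: only cost comparisons, no gradients."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.5, dim)     # initial weight vector
    f = cost(w)
    for _ in range(iters):
        cand = w + sigma * rng.normal(size=dim)  # Gaussian mutation
        fc = cost(cand)
        if fc <= f:                   # greedy selection of the better candidate
            w, f = cand, fc
    return w, f

# toy non-differentiable cost: L1 fit of a one-weight "network" y = w * x to y = 2x
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs
w, f = es_train(lambda v: float(np.abs(v[0] * xs - ys).sum()), dim=1)
```

The L1 objective has no derivative at the optimum, which is exactly the situation where such comparison-only solvers remain applicable.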

Conclusions

We developed and implemented a novel derivative-free neural network and aligner (Nepal) for optimizing sequence alignments. Nepal improved alignment quality by adapting to remote sequence alignments and increasing the expressiveness of similarity scores. Additionally, this novel scoring function can be realized using a simple matrix operation and easily incorporated into other aligners. Moreover, our scoring function could potentially improve the performance of homology detection and/or multiple-sequence alignment of remote homologous sequences. The goal of the study was to provide a novel scoring function for profile alignment methods and to develop a novel learning system capable of addressing derivative-free problems. Our system is capable of optimizing the performance of other sophisticated methods and of solving problems without derivatives of cost functions, which do not always exist in practical problems. Our results demonstrated the usefulness of this optimization method for derivative-free problems.

13.
This paper proposes a non-recurrent training algorithm, resilient propagation, for the Simultaneous Recurrent Neural network operating in relaxation mode to compute high-quality solutions of static optimization problems. Implementation details related to the adaptation of the recurrent neural network weights through the non-recurrent training algorithm, resilient backpropagation, are formulated through an algebraic approach. Performance of the proposed neuro-optimizer on a well-known static combinatorial optimization problem, the Traveling Salesman Problem, is evaluated on the basis of computational-complexity measures and subsequently compared to the performance of the Simultaneous Recurrent Neural network trained with standard backpropagation and with recurrent backpropagation on the same problem. Simulation results indicate that the network trained with the resilient backpropagation algorithm is able to locate superior-quality solutions with a comparable amount of computational effort for the Traveling Salesman Problem.
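The resilient-propagation update itself is easy to state: each weight keeps its own step size, which grows when the gradient sign repeats and shrinks when it flips, and only the sign of the gradient is used. The sketch below is the plain Rprop⁻ variant with standard constants, not the paper's SRN-specific formulation.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One Rprop update: adapt per-weight steps from gradient sign agreement."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    w = w - np.sign(grad) * step          # move by the adapted step, ignoring magnitude
    return w, step

# minimize f(w) = w^2 from w = 5 using only gradient signs
w, step, prev = np.array([5.0]), np.array([0.5]), np.array([0.0])
for _ in range(200):
    g = 2.0 * w                           # gradient of w^2
    w, step = rprop_step(w, g, prev, step)
    prev = g
```

Because the step sizes, not the raw gradients, control the motion, the method is insensitive to the gradient scaling problems that plague standard backpropagation in deep or recurrent settings.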

14.
This paper presents a new scheme for training MLPs which employs a relaxation method for multi-objective optimization. The algorithm works by obtaining a reduced set of solutions, from which the one with the best generalization is selected. This approach allows balancing the training error against the norm of the network weight vector, which are the two objective functions of the multi-objective optimization problem. The method is applied to classification and regression problems and compared with Weight Decay (WD), Support Vector Machines (SVMs) and standard Backpropagation (BP). It is shown that the proposed systematic training procedure results in neural models with good generalization, and outperforms traditional methods.

15.
This paper considers the robust stability of a class of neural networks with Markovian jumping parameters and time-varying delay. By employing a new Lyapunov-Krasovskii functional, a sufficient condition for the global exponential stability of the delayed Markovian jumping neural networks is established. The proposed condition is also extended to the uncertain cases, and is shown to improve and extend existing results. Finally, the validity of the results is illustrated by an example.

16.
A simple formulation of the TSP energy function is described which, in combination with a normalized Hopfield-Tank neural network, eliminates the difficulty in finding valid tours. This technique is applicable to many other optimization problems involving n-way decisions (such as VLSI layout and resource allocation) and is easily implemented in a VLSI neural network. The solution quality is shown to be dependent on the formation of seed-points which are influenced by the constraint penalties and the temperature (i.e. the neural gain). Near-optimal tours are found by annealing the network down to a critical temperature at which a single seed-point is dominant. The seed-points and critical temperature (which also affect standard Hopfield network solutions to the TSP) can be predicted with reasonable accuracy. It is also shown that the annealing process is not necessary and good tours result if the network is allowed to converge solely at the critical temperature. The seed-points can be eliminated entirely by assigning different temperatures to groups of neurons such that the tour evolves uniformly throughout the cities. The resulting network finds the optimum tour in a 30-city example in 30% of the trials.

17.
The self-organizing map (SOM), a kind of unsupervised neural network, has been used for both static data management and dynamic data analysis. To further exploit its search abilities, in this paper we propose an SOM-based algorithm (SOMS) for optimization problems involving both static and dynamic functions. Furthermore, a new SOM weight-updating rule is proposed to enhance the learning efficiency; it dynamically adjusts the neighborhood function of the SOM while learning the system parameters. As a demonstration, the proposed SOMS is applied to function optimization and to dynamic trajectory prediction, and its performance is compared with that of the genetic algorithm (GA), given the similar ways in which both methods conduct their searches.
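The standard SOM update that SOMS builds on: the best-matching unit and its neighbours move toward each input, with learning rate and neighbourhood width shrinking over time. The dynamic neighbourhood adjustment proposed in the paper is not reproduced here; the linear decay schedules below are a common default.

```python
import numpy as np

def som_train(data, n_units=10, iters=500, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D chain of SOM units on `data` (shape: samples x features)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(data.min(), data.max(), (n_units, data.shape[1]))
    idx = np.arange(n_units)
    for t in range(iters):
        lr = lr0 * (1.0 - t / iters)                       # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / iters), 0.5)       # shrinking neighbourhood width
        x = data[rng.integers(len(data))]                  # random training sample
        bmu = np.argmin(np.sum((W - x) ** 2, axis=1))      # best-matching unit
        h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                     # pull BMU and neighbours toward x
    return W

data = np.linspace(0.0, 1.0, 50)[:, None]                  # toy 1-D input distribution
W = som_train(data)
```

After training, the units tile the data range, so the quantization error (distance from each sample to its nearest unit) is small, which is the property SOMS exploits when using the map as a searcher.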

18.
Applications of Artificial Neural Networks in the Fermentation Industry   (Cited by: 2; self-citations: 0; citations by others: 2)
Artificial neural network techniques have a strong capacity for nonlinear mapping and offer unmatched advantages for nonlinear system modeling; they are widely used in fermentation processes for medium optimization and for system modeling and control. This paper introduces the basic principles and usage of artificial neural networks and the advantages of the BP neural network in approximating nonlinear functions, and describes in detail application examples in the optimization of fermentation media, neural network estimation for continuous stirred-tank reactors, and the modeling, control, and optimization of batch and fed-batch fermentation processes.

19.
One of the main challenges to the adaptationist program in general, and to the use of optimization models in behavioral and evolutionary ecology in particular, is that organisms are so constrained by ontogeny and phylogeny that they may not be able to attain optimal solutions, however those are defined. This paper responds to the challenge through a comparison of optimality and neural network models for the behavior of an individual polychaete worm. The evolutionary optimization model is used to compute behaviors (movement in and out of a tube) that maximize a measure of Darwinian fitness based on individual survival and reproduction. The neural network involves motor, sensory, energetic-reserve and clock neuronal groups. Ontogeny of the neural network is the change of connections of a single individual in response to its experiences in the environment. Evolution of the neural network is the natural selection of initial values of connections between groups and of learning rules for changing connections. Taken together, these can be viewed as design parameters. The best neural networks have fitnesses between 85% and 99% of the fitness of the evolutionary optimization model. More complicated models for polychaete worms are discussed. Formulation of a neural network model for host-acceptance decisions by tephritid fruit flies leads to predictions about the neurobiology of the flies. The general conclusion is that neural networks appear to be sufficiently rich and plastic that even weak evolution of design parameters may be sufficient for organisms to achieve behaviors that give fitnesses close to the evolutionary optimal fitness, particularly if the behaviors are relatively simple.

20.
Recurrent neural networks with higher order connections, from here on referred to as higher-order neural networks (HONNs), may be used for the solution of combinatorial optimization problems. In Ref. 5 a mapping of the traveling salesman problem (TSP) onto a HONN of arbitrary order was developed, thereby creating a family of related networks that can be used to solve the TSP. In this paper, we explore the trade-off between network complexity and quality of solution that is made available by the HONN mapping of the TSP. The trade-off is investigated by undertaking an analysis of the stability of valid solutions to the TSP in a HONN of arbitrary order. The techniques used to perform the stability analysis are not new, but have been widely used elsewhere in the literature. The original contribution in this paper is the application of these techniques to a HONN of arbitrary order used to solve the TSP. The results of the stability analysis show that the quality of solution is improved by increasing the network complexity, as measured by the order of the network. Furthermore, it is shown that the Hopfield network, as the simplest network in the family of higher-order networks, is expected to produce the poorest quality of solution.
