Similar Documents
20 similar documents retrieved (search time: 187 ms).
1.
The global extended Kalman filtering (EKF) algorithm for recurrent neural networks (RNNs) suffers from high computational cost and storage requirements. In this paper, we present a local EKF training-pruning approach that solves this problem. In particular, the by-products obtained along with the local EKF training can be used to measure the importance of the network weights. Compared with the original global approach, the proposed local approach has a much lower computational cost and storage requirement, and is therefore more practical for solving real-world problems. Simulations show that our approach is an effective joint training-pruning method for RNNs under online operation.
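The abstract does not spell out how the EKF by-products rank the weights. The following minimal Python sketch assumes an OBS-style saliency computed from the weight values and the diagonal of the EKF error covariance; the function name, the saliency formula, and the pruning fraction are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def ekf_prune_candidates(weights, P_diag, prune_frac=0.1):
    """Rank weights by an OBS-style saliency built from EKF by-products.

    weights : 1-D array holding one local group of network weights
    P_diag  : diagonal of the EKF error covariance for those weights
    Returns the indices of the least salient weights (pruning candidates).
    """
    # A weight with small magnitude and large uncertainty is cheap to remove.
    saliency = weights ** 2 / (2.0 * P_diag + 1e-12)
    n_prune = max(1, int(prune_frac * weights.size))
    return np.argsort(saliency)[:n_prune]

# Toy usage: 20 weights with a made-up covariance diagonal.
rng = np.random.default_rng(0)
w = rng.normal(size=20)
p = rng.uniform(0.1, 1.0, size=20)
print(ekf_prune_candidates(w, p, prune_frac=0.2))
```

In the local-EKF setting described in the abstract, `weights` and `P_diag` would correspond to one neuron's weight group rather than the whole network, which is what keeps the cost and storage low.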

2.
Trained radial basis function networks are well-suited for use in extracting rules and explanations because they contain a set of locally tuned units. However, for rule extraction to be useful, these networks must first be pruned to eliminate unnecessary weights. The pruning algorithm cannot search the network exhaustively because of the computational effort involved. It is shown that using multiple pruning methods with smart ordering of the pruning candidates, the number of weights in a radial basis function network can be reduced to a small fraction of the original number. The complexity of the pruning algorithm is quadratic (instead of exponential) in the number of network weights. Pruning performance is shown using a variety of benchmark problems from the University of California, Irvine machine learning database.  相似文献   
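As a rough illustration of ordered candidate pruning with roughly quadratic cost, here is a hedged Python sketch. The scoring heuristic, the tolerance, and the toy "validation error" are invented for the example and are not taken from the paper.

```python
import numpy as np

def greedy_ordered_pruning(weights, score_fn, val_error_fn, tol=1e-3):
    """Greedy pruning with smart candidate ordering (illustrative sketch).

    weights      : 1-D array of output-layer weights of an RBF network
    score_fn     : cheap heuristic giving a 'removability' score per weight
    val_error_fn : callable returning the validation error for a weight vector
    Each candidate is tested once in score order, so the cost is one error
    evaluation per weight rather than an exhaustive subset search.
    """
    w = weights.copy()
    baseline = val_error_fn(w)
    order = np.argsort(score_fn(w))          # most removable first
    for i in order:
        trial = w.copy()
        trial[i] = 0.0                        # tentatively remove weight i
        if val_error_fn(trial) <= baseline + tol:
            w = trial                         # accept the pruning step
    return w

# Toy usage: the 'validation error' is the distance to a sparse target vector.
target = np.array([0.0, 2.0, 0.0, -1.5, 0.0])
w0 = np.array([0.1, 2.0, -0.05, -1.5, 0.2])
pruned = greedy_ordered_pruning(
    w0,
    score_fn=lambda w: np.abs(w),             # small magnitude => try removing first
    val_error_fn=lambda w: float(np.sum((w - target) ** 2)),
)
print(pruned)
```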

3.
Large-scale artificial neural networks contain many redundant structures, which makes the network prone to local optima and prolongs training time. Moreover, existing neural network topology optimization algorithms require heavy computation and complex modeling of the network structure. We propose a Dynamic Node-based neural network Structure optimization algorithm (DNS) to handle these issues. DNS consists of two steps: a generation step and a pruning step. In the generation step, the network generates hidden layers layer by layer until the accuracy reaches a threshold. In the pruning step, the network is then adapted with a pruning algorithm based on Hebb's rule or Pearson's correlation. In addition, we combine DNS with a genetic algorithm (GA-DNS). Experimental results show that, compared with traditional neural network topology optimization algorithms, GA-DNS can generate neural networks with higher construction efficiency, lower structural complexity, and higher classification accuracy.
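The correlation-based pruning step can be illustrated with a short sketch that drops hidden units whose activations nearly duplicate an already-kept unit. The threshold value and the helper name below are illustrative assumptions rather than the DNS implementation.

```python
import numpy as np

def correlation_prune(activations, threshold=0.95):
    """Drop redundant hidden units whose activations are strongly correlated.

    activations : (n_samples, n_units) matrix of hidden-unit outputs
    Returns the indices of the units to keep, retaining one representative
    of each group of highly correlated units.
    """
    corr = np.corrcoef(activations, rowvar=False)
    n_units = activations.shape[1]
    keep = []
    for j in range(n_units):
        # Keep unit j only if no already-kept unit is (nearly) a duplicate of it.
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return keep

# Toy usage: unit 2 is a scaled copy of unit 0 and should be pruned.
rng = np.random.default_rng(1)
h = rng.normal(size=(200, 3))
h[:, 2] = 0.5 * h[:, 0] + 0.01 * rng.normal(size=200)
print(correlation_prune(h))   # -> [0, 1]
```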

4.
This paper presents a pruning method for artificial neural networks (ANNs) based on the 'Lempel-Ziv complexity' (LZC) measure. We call this method the 'silent pruning algorithm' (SPA). The term 'silent' is used in the sense that SPA prunes ANNs without causing much disturbance during network training. SPA prunes hidden units during the training process according to their ranks computed from LZC. LZC counts the number of unique patterns in a time sequence obtained from the output of a hidden unit; a smaller LZC value indicates a more redundant hidden unit. SPA resembles biological brains in that it encourages higher complexity during the training process. SPA is similar to, yet different from, existing pruning algorithms. The algorithm has been tested on a number of challenging benchmark problems in machine learning, including the cancer, diabetes, heart, card, iris, glass, thyroid, and hepatitis problems. We compared SPA with other pruning algorithms and found that SPA performs better than the 'random deletion algorithm' (RDA), which prunes hidden units randomly. Our experimental results show that SPA can simplify ANNs while preserving good generalization ability.
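A minimal sketch of the ranking idea follows, using an LZ78-style phrase count as a simple stand-in for the Lempel-Ziv complexity measure and median-based binarisation of each hidden unit's output over time. Both choices are assumptions made for illustration, not SPA's exact procedure.

```python
import numpy as np

def lz_phrase_count(binary_seq):
    """LZ78-style phrase count of a binary string (a simple proxy for LZC)."""
    phrases, current = set(), ""
    for bit in binary_seq:
        current += bit
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def rank_hidden_units(hidden_outputs):
    """Rank hidden units by the complexity of their binarised output over time.

    hidden_outputs : (n_steps, n_units) array of hidden activations.
    Units with the lowest complexity (most repetitive output) come first,
    mirroring SPA's idea of pruning the most redundant units first.
    """
    scores = []
    for j in range(hidden_outputs.shape[1]):
        col = hidden_outputs[:, j]
        med = np.median(col)
        bits = "".join('1' if v > med else '0' for v in col)
        scores.append(lz_phrase_count(bits))
    return np.argsort(scores)

# Toy usage: unit 0 has a step-like binarised output, so its phrase count is
# low and it should be ranked first (most prunable).
rng = np.random.default_rng(2)
h = rng.normal(size=(1024, 3))
h[:, 0] = np.linspace(-1.0, 1.0, 1024)
print(rank_hidden_units(h))
```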

5.
A new approach for nonlinear system identification and control based on modular neural networks (MNN) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems; this decomposition is obtained using a partitioning algorithm. Each local nonlinear model is associated with a nonlinear controller, both implemented by neural networks. Switching between the neural controllers is performed by a dynamical switcher, also implemented by neural networks, that tracks the different operating points. The proposed multiple modelling and control strategy has been successfully tested on a simulated laboratory-scale liquid-level system.

6.
The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the growing complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm for inferring large-scale GRNs on high-performance parallel computing environments. We propose a novel asynchronous parallel framework that improves the accuracy and lowers the time complexity of large-scale GRN inference by combining a splitting technique with ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split a whole large-scale GRN into many small-scale modular subnetworks. Through ODE-based optimization of all subnetworks in parallel and their asynchronous communication, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue on Reverse Engineering Assessment and Methods (DREAM) challenge, the experimentally determined GRN of Escherichia coli, and one published dataset containing more than 10,000 genes, and compared the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm clearly outperforms these algorithms in inferring large-scale GRNs.

7.
This paper describes a new method for pruning artificial neural networks that uses a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network and is similar to, yet different from, measures used in previous research on pruning. The method shows how overly large and complex networks can be reduced in size whilst retaining learnt behaviour and fitness, and it helps to discover a network topology that matches the complexity of the problem the network is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, magnitude-based pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network from which they originate. This means that the pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network due to its reduced dimensionality.

8.
In this paper, an online self-organizing scheme for Parsimonious and Accurate Fuzzy Neural Networks (PAFNN), and a novel structure learning algorithm incorporating a pruning strategy into novel growth criteria are presented. The proposed growing procedure without pruning not only simplifies the online learning process but also facilitates the formation of a more parsimonious fuzzy neural network. By virtue of optimal parameter identification, high performance and accuracy can be obtained. The learning phase of the PAFNN involves two stages, namely structure learning and parameter learning. In structure learning, the PAFNN starts with no hidden neurons and parsimoniously generates new hidden units according to the proposed growth criteria as learning proceeds. In parameter learning, parameters in premises and consequents of fuzzy rules, regardless of whether they are newly created or already in existence, are updated by the extended Kalman filter (EKF) method and the linear least squares (LLS) algorithm, respectively. This parameter adjustment paradigm enables optimization of parameters in each learning epoch so that high performance can be achieved. The effectiveness and superiority of the PAFNN paradigm are demonstrated by comparing the proposed method with state-of-the-art methods. Simulation results on various benchmark problems in the areas of function approximation, nonlinear dynamic system identification and chaotic time-series prediction demonstrate that the proposed PAFNN algorithm can achieve more parsimonious network structure, higher approximation accuracy and better generalization simultaneously.

9.
This paper proposes a non-recurrent training algorithm, resilient propagation, for the Simultaneous Recurrent Neural Network operating in relaxation mode to compute high-quality solutions of static optimization problems. Implementation details related to the adaptation of the recurrent neural network weights through the non-recurrent training algorithm, resilient backpropagation, are formulated through an algebraic approach. Performance of the proposed neuro-optimizer on a well-known static combinatorial optimization problem, the Traveling Salesman Problem, is evaluated on the basis of computational complexity measures and subsequently compared to the performance of the Simultaneous Recurrent Neural Network trained with standard backpropagation and with recurrent backpropagation on the same static optimization problem. Simulation results indicate that the Simultaneous Recurrent Neural Network trained with the resilient backpropagation algorithm is able to locate superior-quality solutions for the Traveling Salesman Problem with a comparable amount of computational effort.
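For reference, the core resilient propagation update uses only gradient signs and per-weight step sizes. The sketch below follows the common iRprop- variant with typical default constants, which may differ in detail from the exact formulation used in the paper.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, step_min=1e-6, step_max=50.0,
               eta_plus=1.2, eta_minus=0.5):
    """One resilient propagation update (iRprop- style sketch).

    Only the sign of the gradient is used; each weight's step size grows when
    successive gradients agree in sign and shrinks when they disagree.
    Returns the updated weights, the gradient to remember, and the new steps.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # iRprop-: skip the update after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# Toy usage: minimise f(w) = sum(w**2) from a random start.
rng = np.random.default_rng(3)
w = rng.normal(size=5)
prev_g = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(100):
    g = 2.0 * w                                   # gradient of sum(w**2)
    w, prev_g, step = rprop_step(w, g, prev_g, step)
print(np.round(w, 4))                             # should end up near the minimum at zero
```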

10.
Rasmussen TK, Krink T. Bio Systems, 2003, 72(1-2): 5-17.
Multiple sequence alignment (MSA) is one of the basic problems in computational biology. Realistic problem instances of MSA are computationally intractable for exact algorithms. One way to tackle MSA is to use Hidden Markov Models (HMMs), which are known to be very powerful in the related problem domain of speech recognition. However, the training of HMMs is computationally hard and there is no known exact method that can guarantee optimal training within reasonable computing time. Perhaps the most powerful training method is the Baum-Welch algorithm, which is fast, but bears the problem of stagnation at local optima. In the study reported in this paper, we used a hybrid algorithm combining particle swarm optimization with evolutionary algorithms to train HMMs for the alignment of protein sequences. Our experiments show that our approach yields better alignments for a set of benchmark protein sequences than the most commonly applied HMM training methods, such as Baum-Welch and Simulated Annealing.

11.
Zhang Hancui, Zhou Weida. Cluster Computing, 2022, 25(1): 203-214.

Detecting abnormal virtual machine behavior helps cloud platform administrators monitor the platform's running status and improve its reliability, and it has become a research hotspot in cloud computing. To address the high computational complexity and high false-alarm rate of existing virtual machine anomaly monitoring mechanisms, this paper proposes a two-stage detection mechanism for abnormal virtual machine behavior. First, a workload-based incremental clustering algorithm monitors and analyzes both virtual machine workload information and performance-index information. Then, an online anomaly detection mechanism based on the incremental local outlier factor algorithm is designed to improve detection efficiency. This two-phase detection mechanism significantly reduces the computational complexity and meets real-time requirements. The experimental results are verified on the mainstream OpenStack cloud platform.
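The online stage relies on an incremental local outlier factor (LOF) algorithm. The sketch below is a simplified, non-incremental stand-in that scores incoming metric vectors against a window of recent data using scikit-learn's LocalOutlierFactor in novelty mode; the metric layout and parameter values are illustrative, and a true incremental LOF would update its neighborhood structures in place instead of refitting.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def detect_vm_anomalies(history, new_samples, n_neighbors=20):
    """Flag anomalous VM metric vectors with LOF (simplified stand-in).

    history     : (n, d) array of recent 'normal' workload/performance metrics
    new_samples : (m, d) array of incoming metric vectors to score online
    Returns a boolean mask, True where a new sample looks anomalous.
    """
    lof = LocalOutlierFactor(n_neighbors=n_neighbors, novelty=True)
    lof.fit(history)                                  # fit on the sliding window
    return lof.predict(new_samples) == -1             # -1 marks outliers

# Toy usage: CPU% and memory% pairs; one sample is far outside the normal cloud.
rng = np.random.default_rng(4)
normal = rng.normal(loc=[40.0, 55.0], scale=3.0, size=(300, 2))
incoming = np.array([[41.0, 56.0], [95.0, 97.0]])
print(detect_vm_anomalies(normal, incoming))           # expect [False, True]
```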


12.

Background

Visualising the evolutionary history of a set of sequences is a challenge for molecular phylogenetics. One approach is to use undirected graphs, such as median networks, to visualise phylogenies where reticulate relationships such as recombination or homoplasy are displayed as cycles. Median networks contain binary representations of sequences as nodes, with edges connecting those sequences differing at one character; hypothetical ancestral nodes are invoked to generate a connected network which contains all most parsimonious trees. Quasi-median networks are a generalisation of median networks which are not restricted to binary data, although phylogenetic information contained within the multistate positions can be lost during the preprocessing of data. Where the history of a set of samples contains frequent homoplasies or recombination events, quasi-median networks will have a complex topology. Graph reduction or pruning methods have been used to reduce network complexity, but some of these methods are inapplicable to datasets in which recombination has occurred, and others are procedurally complex and/or result in disconnected networks.

Results

We address the problems inherent in construction and reduction of quasi-median networks. We describe a novel method of generating quasi-median networks that uses all characters, both binary and multistate, without imposing an arbitrary ordering of the multistate partitions. We also describe a pruning mechanism which maintains at least one shortest path between observed sequences, displaying the underlying relations between all pairs of sequences while maintaining a connected graph.

Conclusion

Application of this approach to 5S rDNA sequence data from sea beet produced a pruned network within which genetic isolation by distance between populations was evident, demonstrating the value of this approach for exploring evolutionary relationships.
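To make the pruning idea concrete, here is a hedged sketch that keeps the union of one shortest path per pair of observed nodes in a networkx graph, which preserves connectivity between observed sequences by construction. The grid graph and its node labels are a toy substitute for a real quasi-median network, and the function name is illustrative.

```python
import itertools
import networkx as nx

def prune_keep_shortest_paths(graph, observed):
    """Prune a network while keeping at least one shortest path per observed pair.

    graph    : networkx.Graph (e.g. a quasi-median network with latent nodes)
    observed : iterable of node labels corresponding to observed sequences
    Returns the subgraph induced by the union of one shortest path for every
    pair of observed nodes; the result is connected because every observed
    node is linked to every other through a kept path.
    """
    keep_nodes = set(observed)
    for u, v in itertools.combinations(observed, 2):
        keep_nodes.update(nx.shortest_path(graph, u, v))
    return graph.subgraph(keep_nodes).copy()

# Toy usage: a small grid of hypothetical ancestral nodes with three observed tips.
G = nx.grid_2d_graph(4, 4)                 # 16 nodes labelled (row, col)
observed = [(0, 0), (3, 3), (0, 3)]
pruned = prune_keep_shortest_paths(G, observed)
print(pruned.number_of_nodes(), pruned.number_of_edges())
```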

13.
Reconstruction of a biological system from its experimental time series data is a challenging task in systems biology. The S-system, which consists of a group of nonlinear ordinary differential equations (ODEs), is an effective model for characterizing molecular biological systems and analyzing system dynamics. However, inferring S-systems without knowledge of the system structure is not a trivial task because of their nonlinearity and complexity. In this paper, a pruning separable parameter estimation algorithm (PSPEA) is proposed for inferring S-systems. This novel algorithm combines the separable parameter estimation method (SPEM) with a pruning strategy, which includes adding an l1 regularization term to the objective function and pruning the solution with a threshold value. This algorithm is then combined with the continuous genetic algorithm (CGA) to form a hybrid algorithm that inherits the properties of both. The performance of the pruning strategy in the proposed algorithm is evaluated from two aspects: the parameter estimation error and the structure identification accuracy. The results show that the proposed algorithm with the pruning strategy has a much lower estimation error and much higher identification accuracy than the existing method.
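The pruning strategy (an l1 penalty followed by hard thresholding) can be illustrated on a linear-in-parameters toy problem. The sketch below uses scikit-learn's Lasso as a stand-in for the regularised fit and does not reproduce SPEM, the S-system dynamics, or the CGA stage; the regressor matrix, alpha, and threshold are invented for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

def prune_sparse_parameters(X, y, alpha=0.05, threshold=1e-2):
    """Illustrate the pruning idea: l1-regularised fit, then hard thresholding.

    X : (n_samples, n_params) regressor matrix from the separable (linear) part
    y : (n_samples,) observed responses
    Coefficients whose magnitude falls below `threshold` are pruned to zero,
    corresponding to removing the associated term from the model.
    """
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(X, y).coef_
    coef[np.abs(coef) < threshold] = 0.0
    return coef

# Toy usage: only 2 of 8 candidate terms are truly active.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))
true = np.array([1.5, 0.0, 0.0, -0.8, 0.0, 0.0, 0.0, 0.0])
y = X @ true + 0.05 * rng.normal(size=200)
print(np.round(prune_sparse_parameters(X, y), 3))
```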

14.
A new approach to the long-standing local minimum problem of molecular energy minimization is proposed. The approach relies upon a field of computer mathematics known as combinatorial optimization, together with methods of conformational analysis derived from distance geometry. The advantages over the usual numerical techniques of optimization are, first, that the algorithms derived are globally convergent, and second, that the mathematical problems involved are well-posed and suitable for study within the modern theory of computational complexity. In this paper we introduce the approach, and describe a computer program based on it.

15.
MOTIVATION: The evolution of viruses is very rapid, and in addition to local point mutations (insertion, deletion, substitution) it also includes frequent recombinations, genome rearrangements and horizontal transfers of genetic material (HGTs). Evolutionary analysis of viral sequences is therefore a complicated matter for two main reasons: First, due to HGTs and recombinations, the right model of evolution is a network and not a tree. Second, due to genome rearrangements, an alignment of the input sequences is not guaranteed. These facts encourage developing methods for inferring phylogenetic networks that do not require aligned sequences as input. RESULTS: In this work, we present the first computational approach that deals with both genome rearrangements and horizontal gene transfers and does not require a multiple alignment as input. We formalize a new set of computational problems which involve analyzing such complex models of evolution. We investigate their computational complexity, and devise algorithms for solving them. Moreover, we demonstrate the viability of our methods on several synthetic datasets as well as four biological datasets. AVAILABILITY: The code is available from the authors upon request.

16.
In this article, a novel technique for non-linear global optimization is presented. The main goal is to find the global optimum of non-linear problems while avoiding sub-optimal local solutions and inflection points. The proposed technique is based on a two-step concept: progressively decrease the value of the objective function, and compute the corresponding independent variables by approximating its inverse function. The decreasing process can continue even after reaching local minima and, in general, the algorithm stops when it converges to solutions near the global minimum. Implementing the proposed technique with conventional numerical methods may require considerable computational effort for approximating the inverse function. Thus, a novel Artificial Neural Network (ANN) approach is implemented here to reduce the computational requirements of the proposed optimization technique. This approach is successfully tested on several highly non-linear functions possessing multiple local minima. The results obtained demonstrate that the proposed approach compares favorably with some current conventional numerical (Matlab functions) methods and with other non-conventional (Evolutionary Algorithms, Simulated Annealing) optimization methods.
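A minimal 1-D sketch of the two-step idea: sample the objective, fit a small neural network to the inverse mapping from objective value back to the variable, and query it at steadily decreasing target values. The test function, network size, and step schedule are illustrative assumptions; a 1-D regressor also averages over the multiple branches of the inverse, so treat this only as an illustration of the concept, not as the paper's method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def f(x):
    """A 1-D multimodal test function with several local minima."""
    return np.sin(3.0 * x) + 0.1 * x ** 2

# Sample the function and fit an ANN to the inverse mapping f(x) -> x.
rng = np.random.default_rng(6)
xs = rng.uniform(-4.0, 4.0, size=2000)
ys = f(xs)
inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0).fit(ys.reshape(-1, 1), xs)

# Keep lowering the target objective value, ask the inverse model where that
# value might be attained, and remember the best point actually observed.
target = ys.max()
best_x, best_y = None, np.inf
for _ in range(60):
    target -= 0.05 * (ys.max() - ys.min())
    if target < ys.min():
        break                                   # nothing lower is reachable in the sample
    x_guess = float(inverse_net.predict(np.array([[target]]))[0])
    y_guess = f(x_guess)
    if y_guess < best_y:
        best_x, best_y = x_guess, y_guess
print(round(best_x, 3), round(best_y, 3))
```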

17.
In this paper, we present a new evolutionary technique to train three general neural networks. Based on family competition principles and adaptive rules, the proposed approach integrates decreasing-based mutations and self-adaptive mutations so that they collaborate with each other. The different mutations act as global and local strategies, respectively, to balance the trade-off between solution quality and convergence speed. Our algorithm is then applied to three different task domains: Boolean functions, regular language recognition, and artificial ant problems. Experimental results indicate that the proposed algorithm is very competitive with comparable evolutionary algorithms. We also discuss the search power of our proposed approach.

18.
One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which involves the parameters learning rate (eta), momentum factor (alpha) and steepness parameter (lambda). The appropriate selection of these parameters has a large effect on the convergence of the algorithm, and many techniques that adaptively adjust them have been developed to increase the speed of convergence. In this paper, we present several classes of learning-automata-based solutions to the problem of adapting the BP algorithm parameters. By interconnecting learning automata with the feedforward neural networks, we use a learning automata scheme to adjust the parameters eta, alpha, and lambda based on observation of the random response of the neural networks. One of the important aspects of the proposed schemes is their ability to escape from local minima with high probability during training. The feasibility of the proposed methods is shown through simulations on several problems.
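A hedged sketch of the general idea follows, using a linear reward-inaction (L_RI) automaton to choose the learning rate eta from a small candidate set and rewarding it whenever the training error drops. The candidate values, the reward scheme, and the simulated environment are illustrative assumptions, not the specific automata classes studied in the paper.

```python
import numpy as np

class LearningAutomaton:
    """Linear reward-inaction (L_RI) automaton over a finite set of actions."""

    def __init__(self, actions, a=0.1, seed=0):
        self.actions = list(actions)
        self.p = np.full(len(self.actions), 1.0 / len(self.actions))
        self.a = a                                     # reward step size
        self.rng = np.random.default_rng(seed)

    def choose(self):
        self.last = self.rng.choice(len(self.actions), p=self.p)
        return self.actions[self.last]

    def reward(self):
        # Shift probability mass toward the action that was just rewarded.
        self.p *= (1.0 - self.a)
        self.p[self.last] += self.a

# Toy environment: each candidate eta reduces the training error with a fixed
# probability (a crude stand-in for how BP responds to the parameter choice).
automaton = LearningAutomaton(actions=[0.01, 0.05, 0.1, 0.5])
improve_prob = {0.01: 0.3, 0.05: 0.6, 0.1: 0.9, 0.5: 0.2}
rng = np.random.default_rng(1)
for epoch in range(500):
    eta = automaton.choose()
    if rng.random() < improve_prob[eta]:    # training error went down this epoch
        automaton.reward()                  # L_RI: do nothing on penalty
# The probability mass typically concentrates on eta = 0.1, the most rewarded action.
print(dict(zip(automaton.actions, np.round(automaton.p, 3))))
```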

19.
Despite the many successful applications of backpropagation for training multi-layer neural networks, it has many drawbacks. For complex problems it may require a long time to train the networks, and it may not train at all. Long training times can result from non-optimal parameter settings, and it is not easy to choose appropriate values of the parameters for a particular problem. In this paper, by interconnecting fixed-structure learning automata (FSLA) with the feedforward neural networks, we apply a learning automata (LA) scheme to adjust these parameters based on observation of the random response of the neural networks. The main motivation for using learning automata as the adaptation algorithm is their capability for global optimization when dealing with a multi-modal error surface. The feasibility of the proposed method is shown through simulations on three learning problems: the exclusive-or, encoding, and digit recognition problems. The simulation results show that adapting these parameters with this method not only increases the convergence rate of learning but also increases the likelihood of escaping from local minima.

20.
MOTIVATION: Haplotype information has become increasingly important in analyzing fine-scale molecular genetics data, such as disease gene mapping and drug design. Parsimony haplotyping is one of the haplotyping problems and belongs to the NP-hard class. RESULTS: In this paper, we develop a novel algorithm for the haplotype inference problem under the parsimony criterion, based on a parsimonious tree-grow method (PTG). PTG is a heuristic algorithm that can find the minimum number of distinct haplotypes based on the criterion of keeping all genotypes resolved during the tree-grow process. In addition, a block-partitioning method is also proposed to improve the computational efficiency. We show that the proposed approach is not only effective, with a high accuracy, but also very efficient, with computational complexity on the order of O(m²n) time for n single nucleotide polymorphism sites in m individual genotypes. AVAILABILITY: The software is available upon request from the authors, or from http://zhangroup.aporc.org/bioinfo/ptg/ CONTACT: chen@elec.osaka-sandai.ac.jp SUPPLEMENTARY INFORMATION: Supporting material is available from http://zhangroup.aporc.org/bioinfo/ptg/bti572supplementary.pdf
