Similar Literature
 20 similar documents found (search time: 0 ms)
1.
A new structure and training method for multilayer neural networks is presented. The proposed method is based on cascade training of subnetworks and on optimizing the weights layer by layer. The training procedure is completed in two steps. First, a subnetwork with m inputs and n outputs, matching the format of the training samples, is trained on those samples. Second, another subnetwork with n inputs and n outputs is trained, taking the outputs of the first subnetwork as its inputs and the desired outputs of the training samples as its targets. Finally, the two trained subnetworks are connected to form a trained multilayer neural network. Numerical simulation results based on both the linear least-squares backpropagation (LSB) and the traditional backpropagation (BP) algorithms demonstrate the efficiency of the proposed method.
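The two-step cascade above can be sketched with plain linear least squares standing in for the layer-wise optimization (a minimal sketch on an illustrative linear toy problem, not the paper's actual sigmoidal subnetworks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: m = 3 inputs, n = 2 targets (sizes are illustrative).
X = rng.normal(size=(100, 3))
T = X @ rng.normal(size=(3, 2))           # linear teacher for this sketch

# Step 1: train subnetwork 1 (m inputs -> n outputs) by least squares.
W1, *_ = np.linalg.lstsq(X, T, rcond=None)
H = X @ W1                                # outputs of subnetwork 1

# Step 2: train subnetwork 2 (n inputs -> n outputs) on (H, T) pairs.
W2, *_ = np.linalg.lstsq(H, T, rcond=None)

# Cascade the two trained subnetworks into one multilayer network.
Y = H @ W2
print(float(np.mean((Y - T) ** 2)))       # near zero on this linear toy problem
```

Each least-squares solve plays the role of training one subnetwork; the cascade is just function composition of the two trained maps.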

2.
A new strategy is presented for implementing threshold logic functions with binary-output Cellular Neural Networks (CNNs). The objective is to optimize the CNN weights to obtain a robust implementation. To this end, the concept of a generative set is introduced as a convenient representation of any linearly separable Boolean function. Our analysis of threshold logic functions leads to a complete algorithm that automatically provides an optimized generative set. New weights are deduced from it, and a more robust CNN template realizing the same function can thus be implemented. The strategy is illustrated by a detailed example.
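The robustness notion at play can be illustrated with a plain threshold unit: a linearly separable Boolean function is realized by sign(w·x + b), and the smallest |w·x + b| over all inputs measures how robustly it is realized (the `implements` helper and this margin definition are illustrative, not the paper's generative-set algorithm):

```python
def implements(weights, bias, truth_table):
    """Check that a threshold unit sign(w.x + b) reproduces a Boolean
    function, and return its robustness margin: the smallest |w.x + b|
    over all inputs (a larger margin tolerates larger weight errors)."""
    margin = float("inf")
    for x, y in truth_table.items():
        s = sum(w * xi for w, xi in zip(weights, x)) + bias
        if (s > 0) != y:
            return False, 0.0
        margin = min(margin, abs(s))
    return True, margin

# 2-input AND is linearly separable; these weights realize it.
AND = {(0, 0): False, (0, 1): False, (1, 0): False, (1, 1): True}
ok, margin = implements([1.0, 1.0], -1.5, AND)
print(ok, margin)   # → True 0.5
```

Optimizing the weights, as the paper does via generative sets, amounts to maximizing such a margin over all weight choices that realize the function.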

3.
A spectrophotometric method for the simultaneous analysis of glycine and lysine is proposed, based on applying neural networks to spectral kinetic data. The method relies on the reaction of glycine and lysine with 1,2-naphthoquinone-4-sulfonate (NQS) in slightly basic medium. On the basis of the difference in rate between the two reactions, these two amino acids can be determined simultaneously in binary mixtures. Feed-forward neural networks were trained to quantify the considered amino acids in mixtures under optimum conditions. A single-hidden-layer network was trained, with sigmoidal and linear transfer functions in the hidden and output layers, respectively. Linear calibration graphs were obtained in the concentration ranges of 1 to 25 μg/ml for glycine and 1 to 19 μg/ml for lysine. The analytical performance of the method was characterized by the relative standard error. The proposed method was applied to the determination of the considered amino acids in synthetic samples.

4.
The global extended Kalman filtering (EKF) algorithm for recurrent neural networks (RNNs) is plagued by high computational cost and storage requirements. In this paper, we present a local EKF training-and-pruning approach that solves this problem. In particular, the by-products obtained along with the local EKF training can be utilized to measure the importance of the network weights. Compared with the original global approach, the proposed local approach has much lower computational cost and storage requirements, and is hence more practical for solving real-world problems. Simulations showed that our approach is an effective joint training-and-pruning method for RNNs under online operation.
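The pruning idea can be sketched as follows: the EKF's error covariance matrix P is a by-product of training, and a weight's importance can be estimated from its magnitude relative to its estimated uncertainty. The saliency formula w_i^2 / P_ii used below is one plausible choice, assumed for illustration rather than taken from the paper, and the values of w and P are made up:

```python
import numpy as np

# By-products of EKF training: weight vector w and error covariance P
# (illustrative values; in real training P comes from the EKF recursion).
w = np.array([0.9, -0.02, 1.4, 0.01])
P = np.diag([0.1, 0.2, 0.05, 0.5])

# Assumed saliency estimate: a weight whose magnitude is small relative
# to its estimated uncertainty contributes least to the trained network.
saliency = w ** 2 / np.diag(P)
prune = np.argsort(saliency)[:2]   # indices of the two least important weights
print(sorted(prune.tolist()))      # → [1, 3]
```

Because the covariance is already maintained by the filter, this importance measure comes essentially for free, which is the point of reusing the training by-products.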

5.
MOTIVATION: In this paper we present YASPIN, a secondary structure prediction method that, unlike current state-of-the-art methods, utilizes a single neural network to predict secondary structure elements in a 7-state local structure scheme and then optimizes the output using a hidden Markov model, which provides more information for the prediction. RESULTS: YASPIN was compared with the current top-performing secondary structure prediction methods, such as PHDpsi, PROFsec, SSPro2, JNET and PSIPRED. Its overall prediction accuracy on the independent EVA5 sequence set is comparable with that of the top performers, according to the Q3, SOV and Matthews correlation accuracy measures. YASPIN shows the highest accuracy in terms of Q3 and SOV scores for strand prediction. AVAILABILITY: YASPIN is available on-line at the Centre for Integrative Bioinformatics website (http://ibivu.cs.vu.nl/programs/yaspinwww/) at the Vrije University in Amsterdam and will soon be mirrored on the Mathematical Biology website (http://www.mathbio.nimr.mrc.ac.uk) at the NIMR in London. CONTACT: kxlin@nimr.mrc.ac.uk

6.
It has been shown that adding a chaotic sequence to the weight update during the training of neural networks makes the chaos injection-based gradient method (CIBGM) superior to the standard backpropagation algorithm. This paper presents a theoretical convergence analysis of CIBGM for training feedforward neural networks, covering both batch learning and online learning. Under mild conditions, we prove weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
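A minimal sketch of chaos injection on a toy one-parameter problem: the chaotic sequence comes from the logistic map at r = 4, and its contribution is scaled down over time so the gradient term eventually dominates. The decay schedule and injection strength here are assumptions for illustration, not the paper's conditions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression: fit w in y = w * x by gradient descent.
X = rng.normal(size=200)
Y = 3.0 * X

w, lr = 0.0, 0.05
z = 0.37                        # state of the logistic map (chaotic at r = 4)
for t in range(200):
    grad = np.mean((w * X - Y) * X)
    z = 4.0 * z * (1.0 - z)     # next term of the chaotic sequence in (0, 1)
    beta = 0.5 / (1 + t)        # injection strength decaying to zero (assumed)
    w -= lr * grad + beta * (z - 0.5)   # gradient step plus chaos injection
print(w)
```

The decaying injection perturbs early updates, which is what helps escape poor regions, while the convergence analysis requires the perturbation to vanish so the error gradient can tend to zero.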

7.
In a computer simulation, a neural network first received a simultaneous procedure in which the interstimulus interval (ISI) was 0 time-steps (ts). Output activations were near zero under this procedure. The network then received a forward-delay procedure in which the ISI was 8 ts. Output activations increased to the near-maximum level faster than those of a control network that had first received an explicitly unpaired procedure. Comparable results were obtained with rats that first received trials in which a retractable lever was presented for 3 s concurrently with access to water. Lever pressing was low under this procedure. The rats then received trials in which the lever was followed 15 s later by water. Lever pressing emerged faster than in a control group that received the 15-s ISI after an explicitly unpaired procedure. The model used in the simulation explains these results as connection-weight increments that produce little output activation in a simultaneous procedure but facilitate acquisition at an optimal ISI.

8.
Multilayer feedforward neural networks trained with the backpropagation algorithm have been used successfully in many applications. However, the level of generalization depends heavily on the quality of the training data: some of the training patterns can be redundant or irrelevant. It has been shown that careful dynamic selection of training patterns can yield better generalization performance. Nevertheless, generalization is usually carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be predicted. The method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: two artificial approximation problems and a real time-series prediction problem. Results have been compared with standard backpropagation using the complete training data set, and the new method shows better generalization abilities.
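The lazy strategy described above can be sketched in a few lines: for each novel sample, select the k training patterns nearest to it and build a local approximation centered on that neighbourhood. Here the local model is a straight line fitted to a toy function; the paper trains networks rather than lines, so this is illustrative only:

```python
import numpy as np

# Training data for f(x) = x^2 (a toy stand-in for the approximation tasks).
X = np.linspace(-2, 2, 81)
Y = X ** 2

def lazy_predict(x_new, k=7):
    """Lazy strategy: pick the k training patterns nearest the novel
    sample and fit a local model (here a line) on that neighbourhood."""
    idx = np.argsort(np.abs(X - x_new))[:k]
    A = np.stack([X[idx], np.ones(k)], axis=1)   # design matrix [x, 1]
    coef, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    return coef[0] * x_new + coef[1]

print(lazy_predict(1.0))
```

Nothing is fitted until a query arrives, which is exactly what makes the approach "lazy": each prediction gets its own model, centered on the novel sample.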

9.
The training of neural networks using the extended Kalman filter (EKF) algorithm is plagued by high computational complexity and storage requirements that may become prohibitive even for networks of moderate size. In this paper, we present a local EKF training-and-pruning approach that solves this problem. In particular, the by-products obtained along with the local EKF training can be utilized to measure the importance of the network weights. Compared with the original global approach, the proposed local EKF training-and-pruning approach has much lower computational complexity and storage requirements, and is hence more practical for solving real-world problems. The performance of the proposed algorithm is demonstrated on one medium-scale and one large-scale problem, namely sunspot data prediction and handwritten digit recognition.

10.
The complex-valued backpropagation algorithm has been widely used in telecommunications, speech recognition, and image processing with Fourier transforms. However, the local minima problem often occurs during learning. To address this problem and to speed up the learning process, we propose a modified error function obtained by adding to the conventional error function a term corresponding to the hidden-layer error. Simulation results show that the proposed algorithm prevents learning from getting stuck in local minima and speeds up learning.

11.
12.
It is demonstrated that formation of cellular aggregates in a slowly rotating suspension is accompanied by a decrease in total cell concentration in the top layer of the suspension. Both the average particle size and the initial cell concentration of the homogeneous suspension are parameters which determine the magnitude of the effect. The method is exemplified by:
1. aggregation of HeLa cells after treatment with neuraminidase;
2. agglutination of HeLa cells with concanavalin A;
3. agglutination of human erythrocytes with poly-L-lysine;
4. agglutination of human erythrocytes with poly-L-lysine following pretreatment with neuraminidase.

13.
The use of computer simulations as a neurophysiological tool creates new possibilities to understand complex systems and to test whether a given model can explain experimental findings. Simulations, however, require a detailed specification of the model, including the nerve cell action potential and synaptic transmission. We describe a neuron model of intermediate complexity, with a small number of compartments representing the soma and the dendritic tree, equipped with Na+, K+, Ca2+, and Ca2+-dependent K+ channels. Conductance changes in the different compartments are used to model conventional excitatory and inhibitory synaptic interactions. Voltage-dependent NMDA-receptor channels are also included, and influence both the electrical conductance and the inflow of Ca2+ ions. This neuron model has been designed for the analysis of neural networks, and specifically for the simulation of the network generating locomotion in a simple vertebrate, the lamprey. By assigning experimentally established properties to the simulated cells and their synapses, it has been possible to verify that these properties are sufficient to account for a number of experimental findings on the network in operation. The model is, however, sufficiently general to be useful for realistic simulation of other neural systems as well.

14.
Almost all artificial neural networks are fully connected by default, which often implies high redundancy and complexity. Little research has been devoted to partially connected neural networks, despite their potential advantages: reduced training and recall time, improved generalization capabilities, reduced hardware requirements, and a step closer to biological reality. This publication presents an extensive survey of the various kinds of partially connected neural networks, clustered into a clear framework, followed by a detailed comparative discussion.
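One common way to obtain a partially connected layer is to apply a fixed binary mask to the weight matrix, so that only a fraction of the connections exist. This sketch shows the idea; the mask density and the random masking scheme are illustrative, not a specific method from the survey:

```python
import numpy as np

rng = np.random.default_rng(3)

# A partially connected layer: a fixed binary mask removes most
# connections, cutting redundancy relative to full connectivity.
n_in, n_out, density = 8, 4, 0.25
mask = (rng.random((n_in, n_out)) < density).astype(float)
W = rng.normal(size=(n_in, n_out)) * mask

x = rng.normal(size=n_in)
y = np.tanh(x @ W)            # forward pass uses only surviving weights

# During training, gradients would be masked the same way so pruned
# connections stay pruned (a sketch, not the survey's taxonomy).
print(int(mask.sum()), "of", n_in * n_out, "connections kept")
```

The same masking trick also reduces hardware cost, since zeroed connections need not be stored or computed at all in a sparse implementation.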

15.
We propose a class of counting process models for analyzing firing times of an ensemble of neurons. We allow the counting process intensities to be unspecified, unknown functions of the times passed since the most recent firings. Under this assumption we derive a class of statistics with their respective thresholds as well as graphical methods for detecting neural connectivity. We introduce a model under which detection is shown to be certain for long series of observations. We suggest ways to classify interactions as inhibition or excitation and to estimate their strengths. The power of the proposed methods is compared by simulating observations from artificial networks. By analyzing empirically obtained series we obtain results which are consistent with those obtained from cross-correlation-based methods but in addition obtain new insights on further aspects of the interactions. Received: 7 February 1996 / Accepted in revised form: 5 March 1997

16.
A new learning algorithm is described for a general class of recurrent analog neural networks that ultimately settle down to a steady state. Recently, Pineda (Pineda 1987; Almeida 1987; Ikeda et al. 1988) introduced a learning rule for recurrent nets in which the connection weights are adjusted so that the distance between the stable outputs of the current system and the desired outputs is maximally decreased. In this method, many cycles are needed to reach a target system. In each cycle, the recurrent net is run until it reaches a stable state; the weight change is then calculated using a linearized recurrent net which receives the current error of the system as a bias input. In the new algorithm, the weights are changed so that the total error of the neuron outputs over the entire trajectory is minimized, and the weights are adjusted in real time while the network is running. In this method, the trajectory to the target system can be controlled, whereas Pineda's algorithm only controls the position of the fixed point. The relation to the backpropagation method (Hinton et al. 1986) is also discussed.

17.
Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.

18.
The synchronization frequency of neural networks and its dynamics play important roles in deciphering the working mechanisms of the brain. It is widely recognized that the properties of functional network synchronization and its dynamics are jointly determined by the network topology, the network connection strength, i.e., the strength of the different edges in the network, and the external input signals, among other factors. However, mathematical and computational characterizations of the relationships between network synchronization frequency and these three factors are still lacking. This paper presents a novel computational simulation framework to quantitatively characterize the relationships between neural network synchronization frequency, network attributes, and input signals. Specifically, we constructed a series of neural networks, including simulated small-world networks, a real functional working-memory network derived from functional magnetic resonance imaging, and real large-scale structural brain networks derived from diffusion tensor imaging, and performed synchronization simulations on these networks via the Izhikevich neuron spiking model. Our experiments demonstrate that both the network synchronization strength and the synchronization frequency change according to the combination of the input signal frequency and the network's self-synchronization frequency. In particular, our extensive experiments show that the network synchronization frequency can be represented as a linear combination of the network self-synchronization frequency and the input signal frequency. This finding could reflect an intrinsically preserved principle across different types of neural systems, offering novel insights into the working mechanisms of neural systems.

19.
Synchronous firing of a population of neurons has been observed in many experimental preparations; in addition, various mathematical neural network models have been shown, analytically or numerically, to contain stable synchronous solutions. In order to assess the level of synchrony of a particular network over some time interval, quantitative measures of synchrony are needed. We develop here various synchrony measures which utilize only the spike times of the neurons; these measures are applicable in both experimental situations and in computer models. Using a mathematical model of the CA3 region of the hippocampus, we evaluate these synchrony measures and compare them with pictorial representations of network activity. We illustrate how synchrony is lost and synchrony measures change as heterogeneity amongst cells increases. Theoretical expected values of the synchrony measures for different categories of network solutions are derived and compared with results of simulations. Received: 6 June 1994/Accepted in revised form: 13 January 1995
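Synchrony measures built only from spike times, as the abstract describes, can be as simple as a coincidence count. The following sketch scores the fraction of spikes that fall within a fixed time window of a spike in another train (an illustrative measure, not one of the paper's specific statistics):

```python
def synchrony(spike_trains, window=5.0):
    """A simple spike-time synchrony measure: the fraction of spikes of
    each train that fall within `window` of a spike of another train,
    averaged over ordered pairs of trains."""
    scores = []
    for i, a in enumerate(spike_trains):
        for j, b in enumerate(spike_trains):
            if i == j:
                continue
            hits = sum(1 for t in a if min(abs(t - s) for s in b) <= window)
            scores.append(hits / len(a))
    return sum(scores) / len(scores)

sync = [[10, 50, 90], [11, 49, 91]]       # nearly coincident firing
desync = [[10, 50, 90], [25, 65, 700]]    # mostly non-coincident firing
print(synchrony(sync), synchrony(desync))   # → 1.0 0.0
```

Such a measure needs no voltage traces, only spike times, which is what makes this family of measures usable for both experiments and simulations.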

20.
Determining processes constraining adaptation is a major challenge facing evolutionary biology, and sex allocation has proved a useful model system for exploring different constraints. We investigate the evolution of suboptimal sex allocation in a solitary parasitoid wasp system by modelling information acquisition and processing using artificial neural networks (ANNs) evolving according to a genetic algorithm. Theory predicts an instantaneous switch from the production of male to female offspring with increasing host size, whereas data show gradual changes. We found that simple ANNs evolved towards producing sharp switches in sex ratio, but additional biologically reasonable assumptions of costs of synapse maintenance, and simplification of the ANNs, led to more gradual adjustment. Switch sharpness was robust to uncertainty in fitness consequences of host size, challenging interpretations of previous empirical findings. Our results also question some intuitive hypotheses concerning the evolution of threshold traits and confirm how neural processing may constrain adaptive behaviour.
