20 similar documents retrieved (search time: 15 ms)
1.
Backpropagation, which is frequently used in neural network training, often takes a great deal of time to converge on an acceptable solution. Momentum is a standard technique used to speed up convergence and maintain generalization performance. In this paper we present the Windowed Momentum algorithm, which yields greater speedup than standard momentum. Windowed Momentum is designed to use a fixed-width history of recent weight updates for each connection in a neural network. By using this additional information, Windowed Momentum gives significant speedup on a set of applications with the same or improved accuracy. Windowed Momentum achieved an average speedup of 32% in convergence time on 15 data sets, including a large OCR data set with over 500,000 samples. In addition to this speedup, we examine the consequences of sample presentation order, and show that Windowed Momentum is able to overcome the effects of poor presentation order while maintaining its speedup advantages.
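The abstract does not give the update rule, but the idea of keeping a fixed-width history of recent weight updates per connection can be sketched as follows. This is a hypothetical illustration: the function name, hyperparameters, and the use of the mean of recent deltas are assumptions, not the paper's exact formulation.

```python
from collections import deque
import numpy as np

def windowed_momentum_step(weight, grad, history, lr=0.1, beta=0.9):
    """One weight update using a fixed-width history of recent deltas.

    `history` is a deque(maxlen=window) holding the last few weight updates
    for this connection; the momentum term is the mean of that window rather
    than a single exponentially decayed value. Sketch only: the paper's
    exact rule may differ.
    """
    momentum_term = float(np.mean(history)) if history else 0.0
    delta = -lr * grad + beta * momentum_term
    history.append(delta)  # maxlen automatically drops the oldest entry
    return weight + delta
```

The caller creates one `deque(maxlen=window)` per connection, so the window width is fixed up front and old updates fall out automatically.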
2.
Chaos in the nervous system is a fascinating but controversial field of investigation. To approach the role of chaos in the real brain, we theoretically and numerically investigate the occurrence of chaos in artificial neural networks. Recurrent networks (with feedback) are usually fully connected; since this architecture is not biologically plausible, the occurrence of chaos is studied here for a randomly diluted architecture. By normalizing the variance of the synaptic weights, we obtain a bifurcation parameter, dependent on this variance and on the slope of the transfer function, that allows sustained activity and the occurrence of chaos beyond a critical value. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected, infinite-sized networks. The route towards chaos is numerically checked to be quasi-periodic, whatever the type of the first bifurcation. Our results suggest that such high-dimensional networks behave like low-dimensional dynamical systems.
3.
4.
Taking a global analogy with the structure of perceptual biological systems, we present a system composed of two layers of real-valued sigmoidal neurons. The primary layer receives stimulating spatiotemporal signals, and the secondary layer is a fully connected random recurrent network. This secondary layer spontaneously displays complex chaotic dynamics. All connections have a constant time delay. We use for our experiments a Hebbian (covariance) learning rule, which slowly modifies the weights under the influence of a periodic stimulus. The effect of learning is twofold: (i) it simplifies the secondary-layer dynamics, which eventually stabilize to a periodic orbit; and (ii) it connects the secondary layer to the primary layer, realizing a feedback from the secondary to the primary layer. This feedback signal is added to the incoming signal and matches it (i.e., the secondary layer performs a one-step prediction of the forthcoming stimulus). After learning, a resonant behavior can be observed: the system resonates with familiar stimuli, which activate a feedback signal. In particular, this resonance allows the recognition and retrieval of partial signals, and the dynamic maintenance of the memory of past stimuli. The resonance is highly sensitive to the temporal relationships and to the periodicity of the presented stimuli; when we present stimuli that do not match in time or space, the feedback remains silent. We analyze the number of different stimuli for which resonant behavior can be learned. As with Hopfield networks, the capacity is proportional to the size of the second, recurrent layer. Moreover, the high capacity displayed allows the implementation of our model on real-time systems interacting with their environment; such an implementation is reported for a simple behavior-based recognition task on a mobile robot. Finally, we present some functional analogies with biological systems in terms of autonomy and dynamic binding, and offer some hypotheses on the computational role of feedback connections.
Received: 27 April 2001 / Accepted in revised form: 15 January 2002
5.
A neural network with a broad distribution of transmission delays was used to study numerically the retrieval of sequences having several types of correlations between successive patterns. For sequences consisting of patterns correlated over a finite time, the quality of retrieval was found to be (more or less) independent of the pattern correlation width and of the delay distribution. On the other hand, the quality of retrieval does depend on these factors for sequences whose pattern correlation functions have long time tails. Finally, we studied to what extent the storage capacity depends on the pattern correlation function and the delay distribution.
6.
This work presents a new class of neural network models constrained by biological levels of sparsity and weight precision, and employing only local weight updates. Concept learning is accomplished through the rapid recruitment of existing network knowledge, with complex knowledge realised as a combination of existing basis concepts. Prior network knowledge is obtained through the random generation of feedforward networks, with the resulting concept library tailored through distributional bias to suit a particular target class. Learning is exclusively local, through supervised Hebbian and Winnow updates, avoiding the need for backpropagation of error and allowing remarkably rapid learning. The approach is demonstrated on concepts of varying difficulty, culminating in the well-known MONK's and LED benchmark problems.
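The Winnow update mentioned above is a purely local, multiplicative rule, which is what makes it biologically plausible here. A minimal sketch of the classic Winnow algorithm on Boolean features follows; this is the textbook version, not necessarily the paper's exact variant, and `alpha` and the threshold are conventional defaults.

```python
def winnow_update(w, x, y, alpha=2.0, theta=None):
    """One Winnow step on a Boolean example.

    w: positive weights; x: 0/1 features; y: target label in {0, 1}.
    Weights change only on a mistake, and only multiplicatively on the
    active features -- a strictly local update, no error backpropagation.
    """
    if theta is None:
        theta = len(w)  # conventional threshold: number of features
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
    if pred == y:
        return w  # no mistake, no change
    factor = alpha if y == 1 else 1.0 / alpha  # promote or demote
    return [wi * factor if xi else wi for wi, xi in zip(w, x)]
```

Because updates are multiplicative, Winnow's mistake bound grows only logarithmically with the number of irrelevant features, which suits the sparse concept libraries described in the abstract.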
7.
Genetic algorithms have been successfully applied to the learning process of neural networks simulating artificial life. In previous research we compared mutation and crossover as genetic operators on neural networks directly encoded as real vectors (Manczer and Parisi 1990). With reference to crossover, we were in fact testing the building-blocks hypothesis, as the effectiveness of recombination relies on the validity of that hypothesis. Even with the real genotype used, we found that the average fitness of a population of neural networks is optimized much more quickly by crossover than by mutation. This indicated that the intrinsic parallelism of crossover is not reduced by the high cardinality, as seems reasonable and has indeed been suggested in GA theory (Antonisse 1989). In this paper we first summarize those findings and then propose an interpretation in terms of the spatial correlation of the fitness function with respect to the metric defined by the average steps of the genetic operators. Some numerical evidence for this interpretation is given, showing that the fitness surface appears smoother to crossover than it does to mutation. This indirectly confirms that crossover moves along privileged directions, and at the same time provides a geometric rationale for hyperplanes.
8.
This paper presents an original mathematical framework based on graph theory as a first attempt to investigate the dynamics of a model of neural networks with embedded spike-timing-dependent plasticity. The neurons correspond to integrate-and-fire units located at the vertices of a finite subset of a 2D lattice. There are two types of vertices, corresponding to inhibitory and excitatory neurons. The edges are directed and labelled by the discrete values of the synaptic strength. We assume that there is an initial firing pattern corresponding to a subset of units that generate a spike; the number of externally activated vertices is a small fraction of the entire network. The model presented here describes how such a pattern propagates throughout the network as a random walk on a graph. Several results are compared with computational simulations, and new data are presented for identifying critical parameters of the model.
9.
Masa-aki Sato, Biological Cybernetics, 1990, 62(3): 237-241
A new learning algorithm is described for a general class of recurrent analog neural networks that ultimately settle down to a steady state. Recently, Pineda (Pineda 1987; Almeida 1987; Ikeda et al. 1988) introduced a learning rule for recurrent nets in which the connection weights are adjusted so that the distance between the stable outputs of the current system and the desired outputs is maximally decreased. In this method, many cycles are needed to reach a target system; in each cycle, the recurrent net is run until it reaches a stable state, after which the weight change is calculated using a linearized recurrent net that receives the current error of the system as a bias input. In the new algorithm, the weights are changed so that the total error of the neuron outputs over the entire trajectory is minimized, and the weights are adjusted in real time while the network is running. In this method, the trajectory to the target system can be controlled, whereas Pineda's algorithm controls only the position of the fixed point. The relation to the backpropagation method (Hinton et al. 1986) is also discussed.
10.
In this paper, we propose a successive learning method for hetero-associative memories, such as bidirectional associative memories and multidirectional associative memories, using chaotic neural networks. It can distinguish unknown data from the stored known data and can learn the unknown data successively. The proposed model exploits the difference in the response to the input data in order to distinguish unknown data from the stored known data; when input data is judged to be unknown, it is memorized. Furthermore, the proposed model can estimate and learn correct data from noisy or incomplete unknown data by considering the temporal summation of the continuous data input. In addition, the behavior of the proposed model shows similarities to the physiological findings of Freeman in the olfactory bulb of the rabbit. A series of computer simulations demonstrates the effectiveness of the proposed model.
11.
12.
Li-Chih Wang, Hui-Min Chen, Chih-Ming Liu, Flexible Services and Manufacturing Journal, 1995, 7(2): 147-175
With the growing uncertainty and complexity of the manufacturing environment, most scheduling problems have been proven to be NP-complete, which can degrade the performance of conventional operations research (OR) techniques. This article presents a system-attribute-oriented knowledge-based scheduling system (SAOSS) with inductive learning capability. Drawing on its rich heritage from artificial intelligence (AI), SAOSS adopts a multialgorithm paradigm that makes it more intelligent, flexible, and better suited than other approaches to tackling complicated, dynamic scheduling problems. SAOSS employs an efficient and effective inductive learning method, the continuous iterative dichotomiser 3 (CID3) algorithm, to induce decision rules for scheduling by converting the corresponding decision trees into hidden layers of a self-generated neural network. Connection weights between hidden units encode the scheduling heuristics, which are then formulated into scheduling rules. An FMS scheduling problem is given for illustration. The scheduling results show that the system-attribute-oriented knowledge-based approach is capable of addressing dynamic scheduling problems.
13.
M. A. Rezaei, P. Abdolmaleki, Z. Karami, E. B. Asadabadi, M. A. Sherafat, H. Abrishami-Moghaddam, M. Fadaie, M. Forouzanfar, Journal of Theoretical Biology, 2008, 254(4): 817-820
In this study, membrane proteins were classified using the information hidden in their sequences. This was achieved by applying wavelet analysis to the sequences and extracting several features, each revealing a portion of the information content of the sequence. The resulting features were normalized and fed into a cascaded model developed to reduce the effect of the bias in the dataset arising from the differences in size of the membrane protein classes. The results indicate an improvement in the prediction accuracy of the model compared with similar works. The application of the presented model can be extended to other fields of structural biology owing to its efficiency, simplicity, and flexibility.
14.
Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this "transfer of learning" is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a "self-sensing" network state, and we compare and contrast this with compressed sensing.
15.
An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks
Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It is much faster than traditional gradient-descent-based learning algorithms because the output weights are determined analytically once the input weights and hidden-layer biases are chosen randomly. However, this algorithm often requires a large number of hidden units and thus responds slowly to new observations. The evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it uses the differential evolution algorithm to select the input weights and hidden-layer biases. However, E-ELM requires much time to search for optimal parameters through its iterative process and is not suitable for data sets with a large number of input features. In this paper, a new approach for training SLFNs is proposed, in which the input weights and biases of hidden units are determined by a fast regularized least-squares scheme. Experimental results on many real applications with both small and large numbers of input features show that the proposed approach achieves good generalization performance with much more compact networks and extremely high speed for both learning and testing.
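The basic ELM procedure referenced above, random input weights and biases followed by an analytic least-squares solve for the output weights, can be sketched in a few lines. This is a minimal illustration of standard ELM, not the paper's regularized variant; the tanh activation and hidden-layer size are arbitrary choices.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, rng=None):
    """Train a single-hidden-layer net the ELM way.

    Input weights W and biases b are random and never trained; only the
    output weights beta are fitted, by ordinary least squares.
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activation matrix
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The single `lstsq` call replaces the entire iterative gradient-descent loop, which is where ELM's speed comes from; the price, as the abstract notes, is that many hidden units may be needed.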
16.
The stability of brain networks with randomly connected excitatory and inhibitory neural populations is investigated using a simplified physiological model of brain electrical activity. Neural populations are randomly assigned to be excitatory or inhibitory, and the stability of a brain network is determined by the spectrum of the network's matrix of connection strengths. The probability that a network is stable is determined from its spectral density, which is computed numerically and approximated by a spectral distribution recently derived by Rajan and Abbott. The probability that a brain network is stable is maximal when the total connection strength into a population is approximately zero, and is shown to depend on the arrangement of the excitatory and inhibitory connections and on the parameters of the network. The maximum excitatory and inhibitory input into a structure allowed by stability occurs when the net input equals zero and, in contrast to networks with randomly distributed excitatory and inhibitory connections, increases substantially as the number of connections increases. Networks with the largest excitatory and inhibitory input allowed by stability have multiple marginally stable modes, are highly responsive and adaptable to external stimuli, have the same total input into each structure with minimal variance in the excitatory and inhibitory connection strengths, and exhibit a wide range of flexible, adaptable, and complex behavior.
17.
Clustering with neural networks
Behzad Kamgar-Parsi, J. A. Gualtieri, J. E. Devaney, Behrooz Kamgar-Parsi, Biological Cybernetics, 1990, 63(3): 201-208
Partitioning a set of N patterns in a d-dimensional metric space into K clusters, in such a way that those in a given cluster are more similar to each other than to the rest, is a problem of interest in many fields, such as image analysis, taxonomy, and astrophysics. As there are approximately K^N/K! possible ways of partitioning the patterns among K clusters, finding the best solution is beyond exhaustive search when N is large. We show that this problem, in spite of its exponential complexity, can be formulated as an optimization problem for which very good, but not necessarily optimal, solutions can be found by using a Hopfield model of neural networks. To obtain a very good solution, the network must start from many randomly selected initial states. The network is simulated on the MPP, a 128 × 128 SIMD array machine, where we use the massive parallelism not only in solving the differential equations that govern the evolution of the network, but also in starting the network from many initial states at once, thus obtaining many solutions in one run. We achieve speedups of two to three orders of magnitude over serial implementations, and Analog VLSI implementations promise further speedups of three to six orders of magnitude.
Supported by a National Research Council-NASA Research Associateship
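The K^N/K! count quoted in the abstract grows explosively with N, which is why exhaustive search is hopeless. A one-line helper makes this concrete; note that the exact count is the Stirling number of the second kind, for which K^N/K! is the standard leading-order approximation.

```python
from math import factorial

def approx_partitions(n, k):
    """Approximate number of ways to split n patterns into k clusters,
    using the K^N/K! asymptotic form quoted in the abstract."""
    return k ** n / factorial(k)
```

Even modest inputs overwhelm enumeration: 100 patterns in 5 clusters already give on the order of 10^67 partitions.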
18.
In this paper a new learning rule for tuning the coupling weights of Hopfield-like chaotic neural networks is developed such that all neurons behave in a synchronous manner while the desired structure of the network is preserved during the learning process. The proposed learning rule is based on sufficient synchronization criteria, on the eigenvalues of the weight matrix of the neural network, and on the idea of the structured inverse eigenvalue problem. The developed learning rule not only synchronizes all neurons' outputs with each other in a desired topology, but also makes it possible to enhance the synchronizability of the network by choosing an appropriate set of weight-matrix eigenvalues. The method is evaluated by simulations on a scale-free topology.
19.
20.
In this paper, an improved and much stronger RNH-QL method, based on an RBF network and heuristic Q-learning, is put forward for route searching in a large state space. First, it addresses the inefficiency of reinforcement learning when the state space grows and prior information about the environment is lacking. Second, with the RBF network serving as the weight-updating rule, reward shaping provides additional feedback to the agent in intermediate states, helping to guide it toward the goal state in a more controlled fashion; meanwhile, through the Q-learning process, the underlying dynamic knowledge becomes accessible, removing the need for background knowledge at the level of the RBF network. Third, it improves learning efficiency by incorporating a greedy exploitation strategy to train the neural network, as the experimental results confirm.
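The abstract combines Q-learning with reward shaping; a tabular sketch of a single shaped Q-learning step conveys the core update. This is an illustrative assumption rather than the paper's method: the function, its parameters, and the potential `phi` are hypothetical, and the paper itself replaces the table with an RBF-network approximator.

```python
def shaped_q_update(Q, s, a, r, s_next, actions, phi, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step with potential-based reward shaping.

    `phi` maps a state to a shaping potential; the shaped reward is
    r + gamma*phi(s') - phi(s), which adds intermediate feedback without
    changing the optimal policy (potential-based shaping).
    """
    shaped_r = r + gamma * phi(s_next) - phi(s)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (shaped_r + gamma * best_next - old)
    return Q
```

With `phi` constant the update reduces to plain Q-learning; a potential that rises toward the goal steers the agent through intermediate states, which is the shaping effect the abstract describes.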