Similar articles
20 similar articles found (search time: 7 ms)
1.
2.
A neural network that efficiently and near-optimally solves difficult optimization problems is defined. A convergence proof is given for the Markovian neural network that asynchronously updates its neurons' states. The performance of the Markovian neural network is compared with various combinatorial optimization methods in two domains, and the network is shown to be an efficient tool for solving optimization problems.
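The abstract does not give the update rule, so here is a minimal sketch under our own assumptions: asynchronous greedy updates of binary neurons on a Hopfield-style energy, the classical setting for convergence proofs of this kind (a Markovian network would randomize the accept step; everything below is illustrative, not the paper's model).

```python
# Illustrative sketch, not the paper's model: asynchronous binary updates on
# a Hopfield-style energy E(s) = -0.5 * s^T W s - b^T s. With symmetric W and
# zero diagonal, no greedy flip can increase E, so the dynamics converge.
import numpy as np

rng = np.random.default_rng(0)

def async_minimize(W, b, steps=10_000):
    s = rng.choice([-1, 1], size=len(b))          # random initial state
    for _ in range(steps):
        i = rng.integers(len(b))                  # pick one neuron at random
        s[i] = 1 if W[i] @ s + b[i] >= 0 else -1  # greedy local update
    return s

n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                                 # symmetric couplings
np.fill_diagonal(W, 0.0)                          # no self-coupling
b = rng.normal(size=n)
s = async_minimize(W, b)
print(s, -0.5 * s @ W @ s - b @ s)                # final state and its energy
```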

3.
Cephalopods have arguably the largest and most complex nervous systems amongst the invertebrates; but despite the squid giant axon being one of the best studied nerve cells in neuroscience, and the availability of superb information on the morphology of some cephalopod brains, there is surprisingly little known about the operation of the neural networks that underlie the sophisticated range of behaviour these animals display. This review focuses on a few of the best studied neural networks: the giant fiber system, the chromatophore system, the statocyst system, the visual system and the learning and memory system, with a view to summarizing our current knowledge and stimulating new studies, particularly on the activities of identified central neurons, to provide a more complete understanding of networks within the cephalopod nervous system.

4.
A neural network that uses the basic Hebbian learning rule and a Bayesian combination function is defined. Analogously to Hopfield's neural network, convergence is proved for the Bayesian neural network that asynchronously updates its neurons' states. The performance of the Bayesian neural network in four medical domains is compared with various classification methods. The Bayesian neural network uses a more sophisticated combination function than Hopfield's network and uses the available information more economically. The naive Bayesian classifier typically outperforms the basic Bayesian neural network, since iterations in the network make too many mistakes; by restricting the number of iterations and increasing the number of fixed points, the network performs better than the naive Bayesian classifier. The Bayesian neural network is designed to learn very quickly and incrementally.
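For concreteness, here is a minimal sketch of the naive Bayesian classifier the abstract uses as a baseline, written to learn incrementally, one example at a time; the paper's Bayesian neural network itself (Hebbian weights plus a Bayesian combination function) is not reproduced, and all names below are ours.

```python
# Illustrative sketch of the abstract's baseline: an incrementally trained
# naive Bayesian classifier (NOT the paper's Bayesian neural network).
# Discrete features; Laplace-smoothed counts double as the "weights".
import numpy as np

class IncrementalNaiveBayes:
    def __init__(self, n_features, n_values, n_classes):
        self.class_count = np.ones(n_classes)                         # priors
        self.feat_count = np.ones((n_classes, n_features, n_values))  # likelihoods

    def learn(self, x, y):
        # One example at a time: incremental, single-pass learning.
        self.class_count[y] += 1
        self.feat_count[y, np.arange(len(x)), x] += 1

    def predict(self, x):
        log_prior = np.log(self.class_count / self.class_count.sum())
        cond = self.feat_count[:, np.arange(len(x)), x]               # (C, F)
        log_cond = np.log(cond / self.feat_count.sum(axis=2))
        return int(np.argmax(log_prior + log_cond.sum(axis=1)))

nb = IncrementalNaiveBayes(n_features=3, n_values=2, n_classes=2)
nb.learn(np.array([0, 1, 1]), 1)
print(nb.predict(np.array([0, 1, 1])))                                # -> 1
```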

5.
Clustering with neural networks
Partitioning a set of N patterns in a d-dimensional metric space into K clusters, in such a way that patterns in a given cluster are more similar to each other than to the rest, is a problem of interest in many fields, such as image analysis, taxonomy and astrophysics. As there are approximately K^N/K! possible ways of partitioning the patterns among K clusters, finding the best solution is beyond exhaustive search when N is large. We show that this problem, in spite of its exponential complexity, can be formulated as an optimization problem for which very good, but not necessarily optimal, solutions can be found by using a Hopfield model of neural networks. To obtain a very good solution, the network must start from many randomly selected initial states. The network is simulated on the MPP, a 128 × 128 SIMD array machine, where we use the massive parallelism not only in solving the differential equations that govern the evolution of the network, but also in starting the network from many initial states at once, thus obtaining many solutions in one run. We achieve speedups of two to three orders of magnitude over serial implementations, with the promise, through analog VLSI implementations, of further speedups of three to six orders of magnitude. (Supported by a National Research Council-NASA Research Associateship.)
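A minimal sketch of the approach, with the analog differential-equation dynamics replaced by a discrete asynchronous stand-in: minimize total within-cluster dissimilarity, restart from many random initial states, and keep the best local minimum. Everything here (data, energy, restart count) is an illustrative assumption, not the MPP implementation.

```python
# Illustrative discrete stand-in for the paper's analog Hopfield dynamics:
# minimize total within-cluster dissimilarity by asynchronous reassignment,
# restarting from many random initial states and keeping the best solution.
import numpy as np

rng = np.random.default_rng(1)

def energy(D, labels, K):
    # Sum of pairwise dissimilarities inside each cluster (each pair once).
    return sum(D[np.ix_(labels == k, labels == k)].sum() for k in range(K)) / 2

def one_run(D, K, sweeps=20):
    labels = rng.integers(K, size=len(D))      # random initial state
    for _ in range(sweeps):
        for i in rng.permutation(len(D)):      # asynchronous updates
            labels[i] = int(np.argmin([D[i, labels == k].sum() for k in range(K)]))
    return labels

# Three well-separated Gaussian blobs as toy patterns.
pts = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in ((0, 0), (3, 0), (0, 3))])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
runs = [one_run(D, K=3) for _ in range(10)]    # many random starts
best = min(runs, key=lambda L: energy(D, L, 3))
print(energy(D, best, 3))
```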

6.
In a series of articles (Leung et al., 1973, 1974; Oğuztöreli, 1972, 1975, 1978, 1979; Stein et al., 1974) we have investigated some of the physiologically significant properties of a general neural model. In those papers the nature of the oscillations occurring in the model was analyzed only briefly, by omitting the effects of the discrete time-lags in the interaction of neurons, although these time-lags were incorporated in the general model. In the present work we investigate the effects of the time-lags on the oscillations that are intrinsic to the neural model, depending on structural parameters such as external inputs, interaction coefficients, and self-inhibition, self-excitation and self-adaptation coefficients. The numerical solution of the neural model, the computation of the steady-state solutions and the natural modes of the oscillations around the steady-state solutions are described. (This work was partly supported by the Natural Sciences and Engineering Research Council of Canada under Grant NRC-A-4345 through the University of Alberta.)
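To illustrate how a discrete time-lag enters such numerics, here is a toy delayed unit of our own choosing (not the authors' general model): u'(t) = -a·u(t) + c·tanh(u(t - tau)) + I, integrated by forward Euler with a ring buffer for the delayed history; for these parameter values the delayed feedback sustains oscillations.

```python
# Toy delayed unit (our own, not the authors' model):
#   u'(t) = -a*u(t) + c*tanh(u(t - tau)) + I
# Forward-Euler integration; a ring buffer supplies the lagged value.
import numpy as np

a, c, I = 1.0, -4.0, 0.5        # decay, delayed feedback gain, external input
tau, dt, T = 1.0, 0.01, 40.0    # time-lag, step size, total time
lag = int(tau / dt)

buf = np.zeros(lag)             # history u(t) = 0 for t in [-tau, 0)
head = 0
u, out = 0.1, []
for _ in range(int(T / dt)):
    u_delayed = buf[head]       # value from tau seconds ago
    buf[head] = u               # overwrite oldest sample with the current u
    head = (head + 1) % lag
    u += dt * (-a * u + c * np.tanh(u_delayed) + I)
    out.append(u)

# A constant solution would make these equal; the delayed negative feedback
# (|c| > a, tau past the critical lag) keeps the unit oscillating instead.
print(min(out[-2 * lag:]), max(out[-2 * lag:]))
```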

7.
In the first part, explicit methods are given, following the work of Refs. [1-3], for the design of networks whose reverberations cannot exceed pre-fixed periods no matter how the coefficients are changed, as well as of networks obeying pre-assigned constants of motion. In the second part, the role of coupling strengths in determining cyclic behaviors is investigated and shown to lead to new methods for the design of reverberating networks.

8.
The state of the art in computer modelling of neural networks with associative memory is reviewed. The available experimental data on learning and memory are considered for small neural systems, for isolated synapses and at the molecular level. Computer simulations demonstrate that realistic models of neural ensembles exhibit properties which can be interpreted as image recognition, categorization, learning, prototype forming, etc. A bilayer model of an associative neural network is proposed: one layer corresponds to short-term memory, the other to long-term memory. Patterns are stored in terms of the synaptic strength matrix. We have studied the relaxational dynamics of neuron firing and suppression within the short-term memory layer under the influence of the long-term memory layer. The interaction among the layers has been found to create a number of novel stable states which are not among the learned patterns. These synthetic patterns may consist of elements belonging to different non-intersecting learned patterns. Within the framework of a hypothesis of selective and definite coding of images in the brain, one can interpret the observed effect as an "idea generating" process.
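A minimal sketch of the standard way patterns are stored "in terms of the synaptic strength matrix", i.e. the Hebbian outer-product rule, with recall by relaxation; the paper's bilayer short-term/long-term interaction is not reproduced, and with several interacting attractors such relaxation can also land on spurious mixed states like the synthetic patterns described above.

```python
# Illustrative Hebbian storage and recall (the bilayer short-term/long-term
# interaction of the paper is not modelled). Patterns are stored as an
# outer-product synaptic strength matrix; recall relaxes a corrupted cue.
import numpy as np

rng = np.random.default_rng(7)
n, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, n))

W = (patterns.T @ patterns) / n        # Hebbian synaptic strength matrix
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
flip = rng.choice(n, size=15, replace=False)
cue[flip] *= -1                        # corrupt 15% of the stored pattern

s = cue.astype(float)
for _ in range(20):                    # synchronous relaxation to a fixed point
    s = np.where(W @ s >= 0, 1.0, -1.0)
print(int((s == patterns[0]).mean() * 100), "% of bits recovered")
```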

9.
Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and harder to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are, however, some problems that have to be solved before such networks can be implemented on VLSI chips. First, an approximation function needs to be developed, because CMOS neural networks have an activation function different from any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by the discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 μm technology. Using adequate approximation functions solved the problem of the activation function, and with this approach the trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, the errors increased by an order of magnitude. However, even with the enlarged errors, the results obtained from the neural network hardware implementations were superior to those obtained with the fuzzy system approach.
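A small illustration of the quantization effect (our own toy, not the paper's chips): the output weights of a tiny trained model are snapped to a uniform grid of n levels, mimicking weights realized by discrete transistor W/L ratios, and the approximation error grows as the number of levels shrinks.

```python
# Illustrative quantization experiment (not the paper's chips): snap the
# output weights of a tiny trained model to a uniform grid, as if each weight
# were realized by discrete transistor W/L ratios, and watch the error grow.
import numpy as np

rng = np.random.default_rng(2)

def quantize(w, levels):
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

# Tiny "network": random tanh hidden layer, least-squares output weights.
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x).ravel()
H = np.tanh(x @ rng.normal(size=(1, 30)) + rng.normal(size=30))
w = np.linalg.lstsq(H, y, rcond=None)[0]

for levels in (256, 16, 8):            # shrinking weight resolution
    err = np.mean((H @ quantize(w, levels) - y) ** 2)
    print(levels, err)
```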

10.
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of the network, which is similar to, yet different from, measures used in previous pruning research. The method shows how overly large and complex networks can be reduced in size while retaining learnt behaviour and fitness, and helps to discover a network topology that matches the complexity of the problem the network is meant to solve. The pruning technique is tested in a robot control domain, simulating a racecar, and is shown to be a significant improvement over the most commonly used pruning method, magnitude-based pruning (see the sketch below). Furthermore, some of the pruned networks prove to be faster learners than the benchmark network they originate from, meaning the method can also unleash hidden potential in a network: learning time decreases substantially for a pruned network, due to the reduced dimensionality.
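A minimal sketch of the benchmark method, magnitude-based pruning, which simply zeroes the connections with the smallest absolute weights; the paper's information-theoretic complexity measure is not reproduced here.

```python
# Illustrative magnitude-based pruning, the benchmark the paper compares
# against (its own complexity-based criterion is not reproduced here).
import numpy as np

def magnitude_prune(W, p):
    """Return a copy of W with the fraction p of smallest-|w| entries zeroed."""
    k = int(p * W.size)
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)

W = np.random.default_rng(3).normal(size=(6, 6))
print(magnitude_prune(W, 0.5))          # roughly half the connections removed
```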

11.
D. Koruga, BioSystems, 1990, 23(4): 297-303
We describe a new approach to neural network research, based on molecular networks within the neuron. Using molecular networks as a sub-neuron component of neural networks is a more realistic approach than today's concepts in this new computing field, because the resulting artificial neural activity profile is similar to the action potential profile of the natural neuron. The molecular networks approach can be used in three technologies: the neurocomputer, the neurochip and the molecular chip. Molecular networks thus open new fields of science and engineering, termed molecular-like machines and molecular machines.

12.
Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
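A minimal sketch of the idea, under our own simplifying assumptions (a fixed correction gain rather than the derived Bayesian update, and a fully observed state): an RNN generates noisy observations, and recognition alternates a top-down prediction with a correction driven by the bottom-up prediction error, the two message types the rRNN neurons exchange.

```python
# Illustrative predictive-coding decoder: a fixed gain k stands in for the
# derived Bayesian update, and the state is fully observed for simplicity
# (assumptions ours, not the paper's equations).
import numpy as np

rng = np.random.default_rng(4)
n, T, k = 10, 200, 0.8
W = rng.normal(scale=1.2 / np.sqrt(n), size=(n, n))  # generative RNN weights
C = np.eye(n)                                        # observation map

# Generative model: hidden RNN state drives noisy observations y_t.
x = rng.normal(size=n)
xs, ys = [], []
for _ in range(T):
    x = np.tanh(W @ x)
    xs.append(x)
    ys.append(C @ x + 0.05 * rng.normal(size=n))

# Recognizing RNN: start from the wrong state, exchange prediction and
# prediction-error messages, and track the hidden trajectory anyway.
z = np.zeros(n)
for t in range(T):
    z_pred = np.tanh(W @ z)            # top-down prediction
    err = ys[t] - C @ z_pred           # bottom-up prediction error
    z = z_pred + k * C.T @ err         # error-driven correction
print(np.linalg.norm(z - xs[-1]))      # small: robust to initial state & noise
```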

13.
14.
In the framework of neural network theory, effects similar to hypnotic displays are constructed. They are based on an associative paradigm involving non-linear interaction of excitatory and inhibitory channels with synaptic memory. The non-linearity of long-term memorizing processes may cause effects exhibited by blind spots, which are interpreted as the first stage of hypnosis. More complicated phenomena are discussed in terms of a two-layer network.

15.
Massively parallel (neural-like) networks are receiving increasing attention as a mechanism for expressing information processing models. By exploiting powerful primitive units and stability-preserving construction rules, various workers have been able to construct and test quite complex models, particularly in vision research. But all of the detailed technical work was concerned with the structure and behavior of fixed networks. The purpose of this paper is to extend the methodology to cover several aspects of change and memory.

16.
Spontaneous behaviour in neural networks

17.
We analyse the stochastic properties of dynamical systems with finite populations of a few different replicator species. Our main interest is to evaluate the typical lifetime, i.e. the time to the extinction of the first species in the network, for different catalytic structures, as a function of the population size.
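A minimal sketch of one way to set this up (the ensemble and dynamics are our assumptions, not the authors'): a Moran-type process over a few replicator species whose reproduction rates are boosted by a catalytic matrix, with the lifetime measured as the first time any species dies out.

```python
# Illustrative finite-population replicator simulation (ensemble and update
# rule are our assumptions): a Moran-type process where reproduction is
# boosted by catalytic partners; the lifetime is the first extinction time.
import numpy as np

rng = np.random.default_rng(5)

def lifetime(N, A, max_steps=100_000):
    s = len(A)
    counts = np.full(s, N // s, dtype=float)          # near-even start
    for t in range(max_steps):
        if (counts == 0).any():
            return t                                  # first species extinct
        fitness = 1.0 + A @ (counts / counts.sum())   # catalytic support
        p_birth = counts * fitness / (counts * fitness).sum()
        counts[rng.choice(s, p=p_birth)] += 1                 # one birth...
        counts[rng.choice(s, p=counts / counts.sum())] -= 1   # ...one death
    return max_steps

A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)  # 3-species hypercycle
for N in (12, 24, 48):
    print(N, np.mean([lifetime(N, A) for _ in range(20)]))
```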

18.
In connection with some problems that arise in the study of neural networks, random matrices are considered and the probability that they have a certain rank is investigated. Two models are studied in a simple-minded approach to problems of this type. (On leave of absence from the Institute for Mathematical Sciences, Madras, India.)
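A minimal sketch of one such question (the ensemble is our assumption, not necessarily one of the paper's two models): the probability that a random n × n 0/1 matrix has full rank over GF(2), estimated by Monte Carlo and compared with the exact product formula prod_{k=1..n} (1 - 2^(-k)).

```python
# Illustrative rank experiment (ensemble is our assumption): probability that
# a random n x n 0/1 matrix is nonsingular over GF(2), Monte Carlo versus the
# exact formula  prod_{k=1..n} (1 - 2^(-k)).
import numpy as np

rng = np.random.default_rng(8)

def rank_gf2(M):
    M = M.copy()
    r = 0
    for c in range(M.shape[1]):
        pivots = np.nonzero(M[r:, c])[0]
        if pivots.size == 0:
            continue                                   # no pivot in this column
        M[[r, r + pivots[0]]] = M[[r + pivots[0], r]]  # swap pivot row up
        clear = (M[:, c] == 1) & (np.arange(len(M)) != r)
        M[clear] ^= M[r]                               # eliminate mod 2
        r += 1
        if r == M.shape[0]:
            break
    return r

n, trials = 8, 2000
hits = sum(rank_gf2(rng.integers(0, 2, (n, n))) == n for _ in range(trials))
exact = np.prod([1.0 - 2.0 ** -k for k in range(1, n + 1)])
print(hits / trials, round(exact, 4))                  # both near 0.29
```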

19.
20.
This paper presents a new approach to speed up the operation of time delay neural networks. The entire data stream is collected into one long vector and then tested as a single input pattern. The proposed fast time delay neural networks (FTDNNs) use cross correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented time delay neural networks is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.
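A minimal sketch of the core trick with assumed sizes: the sliding dot products between a long input vector and one neuron's weight window, computed once directly and once by FFT-based cross correlation in the frequency domain; the two agree, but the FFT route costs O(n log n) rather than O(n·w). Python stands in for the paper's MATLAB.

```python
# Illustrative core trick in Python (the paper uses MATLAB): sliding dot
# products between a long input vector and one neuron's weight window,
# computed directly and via FFT-based cross correlation in the frequency
# domain. Results match; the FFT route is O(n log n) instead of O(n * w).
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=4096)     # long input sequence collected in one vector
w = rng.normal(size=64)       # one neuron's weight window

# Conventional TDNN: test the window at every position explicitly.
direct = np.array([x[i:i + len(w)] @ w for i in range(len(x) - len(w) + 1)])

# FTDNN: cross correlation via conj(FFT(w)) * FFT(x), then inverse FFT.
n = len(x)
fast = np.fft.irfft(np.conj(np.fft.rfft(w, n)) * np.fft.rfft(x, n), n)
fast = fast[: len(x) - len(w) + 1]    # discard circular wrap-around tail

print(np.allclose(direct, fast))      # True: same outputs, fewer steps
```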
