1.
Previously, one of the authors proposed a new hypothesis on the organization of synaptic connections and constructed a model of a self-organizing multi-layered neural network, the cognitron (Fukushima, 1975). The cognitron consists of a number of neural layers of similar structure connected in a cascade one after another. We have modified the structure of the cognitron and developed a new network with the ability of associative memory. The new network, named a feedback-type cognitron, has not only the feedforward connections of the conventional cognitron but also modifiable feedback connections from the last-layer cells to the front-layer ones. This network has been simulated on a digital computer. If several stimulus patterns are repeatedly presented to the network, the interconnections between the cells are gradually organized. The feedback connections, as well as the conventional feedforward ones, are self-organized depending on the characteristics of the externally presented stimulus patterns. After an adequate number of stimulus presentations, each cell usually acquires selective responsiveness to one of the stimulus patterns that have been frequently given. That is, every different stimulus pattern comes to elicit an individual response from the network. After the completion of the self-organization, several stimulus patterns are presented to the network and the responses are observed. Once a stimulus is given to the network, the signal keeps circulating in the network even after the stimulus is cut off, and the response gradually changes. Even when an imperfect or ambiguous pattern is presented, the response usually converges to one of the patterns that have been frequently given during the process of self-organization. In some cases, however, a new pattern that has never been presented before emerges. This feedback-type cognitron thus has characteristics quite similar to some functions of the brain, such as the associative recall of memory or the creation of a new idea by intuition.
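The abstract does not reproduce Fukushima's equations, so the following is only a minimal sketch of the general idea it describes: Hebbian self-organization of paired feedforward and feedback connections, after which a signal can keep circulating and settle toward a stored pattern. The layer sizes, learning rate, winner threshold, normalization, and test patterns are illustrative assumptions, not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns, n_hidden=16, lr=0.1, epochs=50):
    """Hebbian self-organization of feedforward and feedback weights (illustrative only)."""
    n_in = patterns.shape[1]
    w_ff = rng.normal(scale=0.01, size=(n_hidden, n_in))   # input layer -> last layer
    w_fb = rng.normal(scale=0.01, size=(n_in, n_hidden))   # last layer -> input layer (feedback)
    for _ in range(epochs):
        for x in patterns:
            drive = w_ff @ x
            h = np.where(drive > np.percentile(drive, 90), 1.0, 0.0)  # sparse winning cells
            w_ff += lr * np.outer(h, x)                               # Hebbian feedforward update
            w_fb += lr * np.outer(x, h)                               # Hebbian feedback update
        w_ff /= np.linalg.norm(w_ff, axis=1, keepdims=True)           # keep weights bounded
        w_fb /= np.linalg.norm(w_fb, axis=0, keepdims=True)
    return w_ff, w_fb

def recall(w_ff, w_fb, x, steps=10):
    """Let the signal circulate between the layers after the stimulus is removed."""
    for _ in range(steps):
        drive = w_ff @ x
        h = np.where(drive > np.percentile(drive, 90), 1.0, 0.0)
        back = w_fb @ h
        x = np.where(back > 0.5 * back.max(), 1.0, 0.0)
    return x

patterns = (rng.random((3, 64)) > 0.5).astype(float)     # three made-up binary stimulus patterns
w_ff, w_fb = train(patterns)
noisy = patterns[0].copy(); noisy[:8] = 1 - noisy[:8]    # an imperfect version of pattern 0
print(np.mean(recall(w_ff, w_fb, noisy) == patterns[0])) # compare the recalled pattern with pattern 0
```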
2.
This paper describes a neural network model whose structure is designed to fit neuroanatomical and neurophysiological data closely, rather than to be most amenable to rigorous mathematical analysis. It is shown by computer simulation that a process of self-organization that departs from a fixed retinotopic order at peripheral layers and includes Hebbian modification of synaptic connectivity at higher processing levels leads to a system capable of mimicking various functions of visual systems. In the initial state the overall structure of the network is preset; individual connections at higher levels are randomly selected, and their strengths are initialized with random numbers. For this model the outcome of the self-organization process is determined by the stimulation during the developmental phase. Depending on the type of stimuli used, the model can develop either towards a feature-selective preprocessor stage in a complex vision system or towards a subsystem for associative recall of abstract patterns. This flexibility supports the hypothesis that the principles embodied are rather universal and can account for the development of various nervous system structures.
Presented at the 9th Cybernetics Congress, Göttingen, March 1986
3.
On the basis of recent neurophysiological findings on the mammalian visual cortex, a self-organizing neural network model is proposed for understanding the development of complex cells. The model is composed of two kinds of connections from LGN cells to a complex cell: direct excitatory connections, and indirect inhibitory connections via simple cells. The inhibitory synapses between simple cells and complex cells are assumed to be modifiable. The model was simulated on a computer to confirm its behavior.
4.
Existing neural network models are capable of tracking the linear trajectories of moving visual objects. This paper describes an additional neural mechanism, disfacilitation, that enhances the ability of a visual system to track curved trajectories. The added mechanism combines information about an object's trajectory with information about changes in that trajectory to improve the estimate of the object's next probable location. Computational simulations are presented that show how the neural mechanism can learn to track the speed of objects and how the network operates to predict the trajectories of accelerating and decelerating objects.
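The paper's network equations are not given in the abstract; as a rough illustration of the stated idea, combining trajectory information with information about changes in that trajectory to estimate the next probable location, here is a hypothetical second-order extrapolator. The function name and the circular test track are assumptions, not part of the model.

```python
import numpy as np

def predict_next(positions):
    """Estimate the next location from recent positions using velocity plus its change.

    positions: array of shape (t, 2) with at least three time steps.
    """
    p = np.asarray(positions, dtype=float)
    v = p[-1] - p[-2]              # current velocity estimate (trajectory information)
    dv = v - (p[-2] - p[-3])       # change in velocity (change-of-trajectory information)
    return p[-1] + v + dv          # combine both to anticipate a curved path

# A point moving on a circle: the change-of-velocity term bends the prediction along the curve.
t = np.linspace(0, 0.3, 4)
track = np.c_[np.cos(t), np.sin(t)]
print(predict_next(track[:3]), "true:", track[3])
```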
5.
The MMSOM identification method, previously presented by the authors, is improved to multiple modeling by the irregular self-organizing map (MMISOM) using the irregular SOM (ISOM). The inputs to the neural network are the parameters of the instantaneous model, computed adaptively at every instant. The neural network learns these models. The reference vectors of its output nodes are estimates of the parameters of the local models. At every instant, the model whose output is closest to the plant output is selected as the model of the plant. The ISOM used in this paper is a graph of all the nodes and some of the weighted links between them, forming a minimum spanning tree. It is shown that it is possible to add new models if the number of models is initially less than the appropriate one. The MMISOM shows more flexibility in covering the linear model space of the plant when that space is concave.
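The MMISOM equations are not reproduced in the abstract, so the sketch below only illustrates the selection step it describes: at every instant, pick the stored local model whose predicted output is closest to the measured plant output. The ARX-style model form, the number of nodes, and the data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each node stores a parameter vector theta of a local ARX-like model y_hat = phi @ theta.
nodes = rng.normal(size=(5, 3))             # 5 hypothetical local models, 3 parameters each

def select_model(phi, y_plant, nodes):
    """Return the index of the node whose predicted output is closest to the plant output."""
    y_hat = nodes @ phi                      # predicted output of every local model
    return int(np.argmin(np.abs(y_hat - y_plant)))

phi = np.array([0.8, -0.2, 1.0])             # regressor at the current instant
y_plant = 0.35                               # measured plant output
best = select_model(phi, y_plant, nodes)
print("selected local model:", best, "parameters:", nodes[best])
```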
6.
This paper presents a spiking neural network (SNN) architecture for mobile robot navigation. The SNN contains four layers in which dynamic synapses route information to the appropriate neurons in each layer, and the neurons are modeled using the leaky integrate-and-fire (LIF) model. The SNN learns by self-organizing its connectivity as new environmental conditions are experienced, so knowledge about the environment is stored in the connectivity. A further novel feature of the proposed architecture is its use of working memory, in which present and previous sensor states are stored. Results are presented for a wall-following application.
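Beyond naming the leaky integrate-and-fire neuron model, the abstract gives no parameters; a minimal LIF update with arbitrary membrane constants looks roughly like this.

```python
import numpy as np

def lif(input_current, dt=1e-3, tau=20e-3, v_rest=-70e-3, v_reset=-70e-3,
        v_thresh=-54e-3, r_m=10e6):
    """Simulate a single leaky integrate-and-fire neuron; returns spike times in seconds."""
    v, spikes = v_rest, []
    for k, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) * dt / tau   # leaky integration of the input current
        v += dv
        if v >= v_thresh:                              # threshold crossing -> emit a spike
            spikes.append(k * dt)
            v = v_reset                                # reset the membrane potential
    return spikes

spike_times = lif(np.full(1000, 2e-9))                 # 1 s of constant 2 nA drive
print(len(spike_times), "spikes in 1 s")
```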
7.
Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
Kunihiko Fukushima. Biological Cybernetics, 1980, 36(4): 193-202
A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by learning without a teacher and acquires the ability to recognize stimulus patterns on the basis of the geometrical similarity (Gestalt) of their shapes, without being affected by their positions. This network is given the nickname "neocognitron". After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in cascade. The first layer of each module consists of S-cells, which show characteristics similar to simple cells or lower-order hypercomplex cells, and the second layer consists of C-cells similar to complex cells or higher-order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: no teacher is needed during the process of self-organization; it is only necessary to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repeated presentation of a set of stimulus patterns, each stimulus pattern comes to elicit an output from only one of the C-cells of the last layer, and conversely, this C-cell becomes selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected at all by the pattern's position, nor is it affected by a small change in the shape or size of the stimulus pattern.
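As a rough sketch of a single S-cell/C-cell module of the kind described (feature extraction followed by positional tolerance), with a hand-written kernel, a simple rectifying threshold, and max pooling standing in for the C-cell operation; none of these details are taken from the paper, and the trained, multi-module neocognitron is much richer than this.

```python
import numpy as np

def s_layer(image, kernel, theta=0.5):
    """S-cells: thresholded, rectified correlation with one local feature (a single kernel)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out - theta, 0.0)              # rectification after subtracting a threshold

def c_layer(s_map, pool=2):
    """C-cells: pool over a neighbourhood, giving tolerance to small positional shifts."""
    h, w = s_map.shape
    return np.array([[s_map[i:i + pool, j:j + pool].max()
                      for j in range(0, w - pool + 1, pool)]
                     for i in range(0, h - pool + 1, pool)])

img = np.zeros((8, 8)); img[2, 1:6] = 1.0             # a short horizontal bar
edge = np.array([[1.0, 1.0, 1.0]])                    # horizontal-line feature detector
print(c_layer(s_layer(img, edge)))
```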
8.
Forest health grading evaluation of the Baihe Forestry Bureau based on a SOM neural network
The self-organizing feature map (SOM) neural network was introduced into the field of forest health evaluation and combined with geographic information system (GIS) technology to quantitatively evaluate, at the forest-management sub-compartment scale, the health status of three main forest types (broadleaved mixed forest, coniferous-broadleaved mixed forest, and Changbai larch forest) of the Baihe Forestry Bureau in the Changbai Mountains, and the health status of sub-compartments with different mean age classes, mean tree heights, and canopy densities was analyzed. The results showed that the SOM neural network is a relatively advanced method for automated quantitative evaluation of forest health. Its greatest advantage for health grading is that it requires neither prior knowledge of the grading categories nor manually predetermined weights for the evaluation indicators, so it effectively overcomes the interference of subjective factors and makes the grading results more objective and accurate. The proportional ranking of health grades was III > II > I > IV > V for broadleaved mixed forest, II > IV > I > III > V for coniferous-broadleaved mixed forest, and I > II > III > V > IV for Changbai larch forest. In relative terms, the greater the mean age, mean tree height, and canopy density of a sub-compartment, the higher the proportion of sub-compartments in a healthy condition. These evaluation results can provide theoretical support for sustainable forest management and multifunctional utilization by the Baihe Forestry Bureau.
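The paper's indicator set and SOM configuration are not given in this abstract. The sketch below shows only the generic unsupervised step the method relies on: training a small self-organizing map on stand-level indicator vectors and reading off the winning node as a health class. The 1-D map, the three indicators, and the random data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(data, n_nodes=5, epochs=200, lr0=0.5, sigma0=2.0):
    """Train a 1-D self-organizing map; each node ends up as a cluster prototype."""
    w = rng.normal(size=(n_nodes, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = max(sigma0 * (1 - t / epochs), 0.5)
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))       # best-matching unit
            d = np.abs(np.arange(n_nodes) - bmu)              # distance along the map
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))          # neighbourhood function
            w += lr * h[:, None] * (x - w)                    # pull prototypes toward the sample
    return w

def grade(x, w):
    """Assign a sample to the node (health class) whose prototype is closest."""
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))

# Hypothetical sub-compartment indicators: [mean age, mean height, canopy density], standardized.
stands = rng.normal(size=(60, 3))
som = train_som(stands)
print([grade(s, som) for s in stands[:10]])
```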
9.
In order to probe the self-organizing emergence of simple-cell orientation selectivity, we tried to construct a neural network model that consists of LGN neurons and simple cells in the visual cortex and obeys the Hebbian learning rule. Using this model, we investigated the neural coding and representation of a natural image by simple cells. The results show that the structures of the receptive fields are determined by the preferred orientation selectivity of the simple cells; however, they are also shaped by the emergence of self-organization in the unsupervised learning process. This kind of orientation selectivity results from dynamic self-organization based on the interactions between the LGN and the cortex.
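The model's exact learning rule is not given beyond "Hebbian"; as an illustration of how Hebbian learning on image input can yield an oriented receptive field, here is a sketch using Oja's normalized Hebbian rule on synthetic oriented patches. The rule variant, the patch generator, and all parameters are assumptions rather than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)

def oja_receptive_field(patches, lr=0.01, epochs=30):
    """Hebbian learning with Oja's normalization; the weight vector becomes the receptive field."""
    w = rng.normal(scale=0.1, size=patches.shape[1])
    for _ in range(epochs):
        for x in rng.permutation(patches):
            y = w @ x                        # simple-cell response to the LGN-like input
            w += lr * y * (x - y * w)        # Hebbian growth with implicit weight normalization
    return w

# Synthetic input: 8x8 patches dominated by one orientation (vertical gratings, random phase).
xs = np.cos(np.linspace(0, 2 * np.pi, 8))
patches = np.array([np.tile(np.roll(xs, rng.integers(8)), (8, 1)).ravel() for _ in range(300)])
w = oja_receptive_field(patches)
print(np.round(w.reshape(8, 8), 2))          # the learned weights form an oriented, grating-like field
```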
10.
An automated computer-based method for mapping protein surface cavities was developed and applied to a set of 176 metalloproteinases containing zinc cations in their active sites. With very few exceptions, the cavity search routine detected the active site among the five largest cavities and produced reasonable active-site surfaces. Cavities were described by means of solvent-accessible surface patches. For a given protein, these patches were calculated in three steps: (i) definition of the cavity atoms forming surface cavities by a grid-based technique; (ii) generation of solvent-accessible surfaces; (iii) assignment of an accessibility value and a generalized atom type to each surface point. Topological correlation vectors were generated from the set of surface points forming the cavities and projected onto the plane by a self-organizing network. The resulting map of 865 enzyme cavities displays clusters of active sites that are clearly separated from the other cavities. It is demonstrated that both fully automated recognition of active sites and prediction of enzyme class can be performed for novel protein structures with high accuracy.
11.
A study is presented of a set of coupled nets proposed to function as a global competitive network. One net, of hidden nodes, is composed solely of inhibitory neurons; it is excitatorily driven and feeds back in a disinhibitory manner to an input net, which itself feeds excitatorily to a (cortical) output net. The manner in which the input and hidden inhibitory nets function so as to enhance outputs as compared with inputs, and the further enhancements when the cortical net is added, are explored both mathematically and by simulation. This is extended to learning on cortical afferent and lateral connections. A global wave structure, arising on the inhibitory net in a manner similar to pattern formation in a negative-Laplacian net, is seen to be important to all of these activities. Simulations are performed only in one dimension, although the global nature of the activity is expected to extend to higher dimensions. Possible implications are briefly discussed.
Received: 21 November 1993 / Accepted in revised form: 30 June 1994
12.
This paper describes a model of the neural visual system of a higher animal, in which the capability of pattern recognition develops adaptively. To produce this adaptability, we adopted self-organizing cells and with them modeled the feature-detecting cells that were discovered by Hubel and Wiesel and whose plasticity was found by Blakemore and Cooper. Combining the self-organizing cells with the learning principle of a Perceptron-type system, we constructed a model of the whole visual system. The model is also equipped with an eye-movement control mechanism for gazing, which reduces the number of self-organizing cells required for pattern recognition and thus contributes to their quick self-organization. Computer simulation and an experiment using a hardware simulator showed that the self-organizing cells quickly become sensitive to frequently seen features and that the resulting system can classify patterns with a rather small number of feature-detecting cells.
13.
The capacities of a specially designed neural network for familiarity recognition and recollection have been compared. Recognition is based on calculating an "image familiarity" measure as a modified Hopfield energy function in which the value of the inner sum is replaced by its sign. This replacement makes the calculation of familiarity compatible with the basic dynamic equations of the Hopfield network and in fact reduces it to calculating the scalar product of the network state vectors at two successive time steps.
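Following the description literally, replacing the inner sum of the Hopfield energy by its sign turns the familiarity measure, up to sign and normalization, into the scalar product of the current network state with the synchronously updated next state. A small sketch under assumed ±1 dynamics and made-up stored patterns:

```python
import numpy as np

rng = np.random.default_rng(3)

def hopfield_weights(patterns):
    """Standard Hebbian weight matrix of a Hopfield network (zero diagonal)."""
    p = np.asarray(patterns, dtype=float)
    w = p.T @ p / p.shape[1]
    np.fill_diagonal(w, 0.0)
    return w

def familiarity(w, s):
    """Scalar product of the state with its synchronously updated successor, normalized by size."""
    s_next = np.sign(w @ s)
    s_next[s_next == 0] = 1
    return float(s @ s_next) / s.size      # near 1 for stored-like states, near 0 for random ones

stored = np.sign(rng.normal(size=(3, 64)))           # three stored +/-1 patterns
w = hopfield_weights(stored)
novel = np.sign(rng.normal(size=64))
print("stored:", familiarity(w, stored[0]), "novel:", familiarity(w, novel))
```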
14.
Williamson R, Chrachri A. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 2007, 362(1479): 473-481
Artificial neural networks (ANNs) have become increasingly sophisticated and are widely used for the extraction of patterns or meaning from complicated or imprecise datasets. At the same time, our knowledge of the biological systems that inspired these ANNs has also progressed, and a range of model systems is emerging for which there is detailed information not only on the architecture and components of the system but also on their ontogeny, plasticity and the adaptive characteristics of their interconnections. We describe here a biological neural network contained in the cephalopod statocysts; the statocysts are analogous to the vertebrate vestibular system and provide the animal with sensory information on its orientation and movements in space. The statocyst network comprises only a small number of cells, made up of just three classes of neurons, but, in combination with the large efferent innervation from the brain, it forms an 'active' sense organ that uses feedback and feed-forward mechanisms to alter and dynamically modulate the activity within cells and the way the various components are interconnected. The neurons are fully accessible to physiological investigation, and the system provides an excellent model for describing the mechanisms underlying the operation of a sophisticated neural network.
15.
16.
This paper presents a pruning method for artificial neural networks (ANNs) based on the Lempel-Ziv complexity (LZC) measure. We call this method the 'silent pruning algorithm' (SPA). The term 'silent' is used in the sense that SPA prunes ANNs without causing much disturbance during network training. SPA prunes hidden units during the training process according to their ranks computed from LZC. LZC measures the number of unique patterns in a time sequence obtained from the output of a hidden unit, and a smaller LZC value indicates higher redundancy of a hidden unit. SPA bears a great resemblance to biological brains since it encourages higher complexity during the training process. SPA is similar to, yet different from, existing pruning algorithms. The algorithm has been tested on a number of challenging benchmark problems in machine learning, including the cancer, diabetes, heart, card, iris, glass, thyroid, and hepatitis problems. We compared SPA with other pruning algorithms and found that SPA is better than the 'random deletion algorithm' (RDA), which prunes hidden units randomly. Our experimental results show that SPA can simplify ANNs with good generalization ability.
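The SPA training schedule is not reproduced here; the sketch below shows only the ranking step the abstract describes: estimating the Lempel-Ziv complexity of a binarized hidden-unit output sequence and treating the lowest-ranked (most redundant) units as pruning candidates. The binarization threshold and the toy activations are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def lz_complexity(bits):
    """Lempel-Ziv (LZ76) complexity: number of phrases in a left-to-right parsing."""
    s = "".join("1" if b else "0" for b in bits)
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1                      # extend the phrase while it is still reproducible from history
        c += 1                          # a new phrase ends here
        i += l
    return c

def rank_hidden_units(hidden_outputs, threshold=0.5):
    """Rank hidden units by the LZC of their binarized output sequences (low = redundant)."""
    scores = [lz_complexity(seq > threshold) for seq in hidden_outputs.T]
    return np.argsort(scores), scores   # the first indices are pruning candidates

# Toy activations over 200 training examples for 4 hidden units; unit 0 is constant (redundant).
acts = np.c_[np.full(200, 0.9), rng.random((200, 3))]
order, scores = rank_hidden_units(acts)
print("prune first:", order[0], "LZC scores:", scores)
```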
17.
We studied the dynamics of a neural network that has both recurrent excitatory and random inhibitory connections. Neurons started to become active when a relatively weak transient excitatory signal was presented, and the activity was sustained by the recurrent excitatory connections. The sustained activity stopped when a strong transient signal was presented or when the neurons were disinhibited. The random inhibitory connections modulated the activity patterns of the neurons so that the patterns evolved over time without recurring. Hence, the passage of time between the onsets of the two transient signals was represented by the sequence of activity patterns. We then applied this model to represent trace eyeblink conditioning, which is mediated by the hippocampus. We treated this model as CA3 of the hippocampus and considered an output neuron corresponding to a neuron in CA1. The activity pattern of the output neuron was similar to the experimentally observed activity of CA1 neurons during trace eyeblink conditioning.
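A toy rendering of the described dynamics, assuming binary neurons, sparse random recurrent excitation, a fixed random inhibitory matrix, and a k-winners-take-all update to keep activity bounded; the parameters are arbitrary, and the sketch only shows activity continuing after a transient input, not the full behavior reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 100, 10                                              # network size; active neurons kept per step

w_exc = rng.random((n, n)) * (rng.random((n, n)) < 0.2)     # sparse recurrent excitatory connections
w_inh = rng.random((n, n)) * 0.3                            # random inhibitory connections
np.fill_diagonal(w_exc, 0.0)

def step(state, external=0.0):
    """One update: recurrent excitation minus random inhibition, keeping the k most driven neurons."""
    drive = w_exc @ state - w_inh @ state + external
    new = np.zeros(n)
    new[np.argsort(drive)[-k:]] = 1.0                       # k-winners-take-all keeps activity going
    return new

state = np.zeros(n)
state = step(state, external=rng.random(n))                 # weak transient signal starts the activity
prev = state.copy()
for t in range(20):                                         # the external stimulus is now switched off
    state = step(state)
    changed = int(np.sum(np.abs(state - prev)) / 2)         # neurons swapped since the previous step
    print(f"step {t}: {int(state.sum())} active, {changed} changed")
    prev = state.copy()
```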
18.
Wayne M. Getz. Bulletin of Mathematical Biology, 1991, 53(6): 805-823
Several critical issues associated with the processing of olfactory stimuli in animals (but focusing on insects) are discussed with a view to designing a neural network that can process olfactory stimuli. This leads to the construction of a neural network that can learn and identify the quality (direction cosines) of an input vector or extract information from a sequence of correlated input vectors, where the latter corresponds to sampling a time-varying olfactory stimulus (or other generically similar pattern recognition problems). The network is constructed around a discrete-time content-addressable memory (CAM) module that basically satisfies the Hopfield equations with the addition of a unit time-delay feedback. This modification improves the convergence properties of the network and is used to control a switch that activates the learning or template-formation process when the input is "unknown". The network dynamics are embedded within a sniff cycle that includes a larger time delay (i.e., an integer t_s > 1) that is also used to control the template-formation switch. In addition, this time delay is used to modify the input into the CAM module so that the more dominant of two mingling odors, or an odor increasing against a background of odors, is more readily identified. The performance of the network is evaluated using Monte Carlo simulations, and numerical results are presented.
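A minimal sketch of the stated core of the CAM module, discrete-time Hopfield dynamics with an added unit-time-delay feedback term; the stored patterns, the delay gain alpha, and the synchronous update schedule are assumptions, and the sniff-cycle and template-formation control logic are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

def hopfield_weights(patterns):
    """Hebbian weight matrix storing +/-1 patterns (zero diagonal)."""
    p = np.asarray(patterns, dtype=float)
    w = p.T @ p / p.shape[1]
    np.fill_diagonal(w, 0.0)
    return w

def cam_with_delay(w, s0, alpha=0.5, steps=20):
    """Discrete-time Hopfield update plus a unit-time-delay feedback term alpha * s(t-1)."""
    prev, s = s0.copy(), s0.copy()
    for _ in range(steps):
        h = w @ s + alpha * prev          # field from the current state plus the delayed state
        prev, s = s, np.where(h >= 0, 1.0, -1.0)
    return s

stored = np.sign(rng.normal(size=(2, 32)))      # two made-up +/-1 odor templates
w = hopfield_weights(stored)
probe = stored[0].copy(); probe[:6] *= -1       # a corrupted version of the first template
out = cam_with_delay(w, probe)
print("overlap with stored template:", float(out @ stored[0]) / 32)
```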