Similar Articles
20 similar articles found.
1.
Two types of auto-associative networks are well known: one based on the correlation matrix and the other based on the orthogonal projection matrix, both of which are calculated from the pattern vectors to be memorized. Although the latter type of network has more desirable associative properties than the former, conventional models require either nonlocal calculation over the pattern vectors (i.e., the pseudoinverse of a matrix) or a learning procedure based on the error-correction paradigm. This paper proposes a new model of auto-associative networks in which the orthogonal projection is implemented through a special relation among the connections linking neuron-like elements. The connection weights can be determined by local Hebbian learning, requiring neither pseudoinverse calculation nor error-correction learning.
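The two classical weight constructions that the abstract contrasts can be sketched in a few lines of NumPy. This is a generic illustration with arbitrary pattern dimensions, not the paper's local Hebbian rule:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three bipolar patterns to memorize, stored as the columns of X.
X = rng.choice([-1.0, 1.0], size=(50, 3))

# Orthogonal-projection network: W = X pinv(X) projects onto the span
# of the stored patterns (the pseudoinverse rule the paper avoids).
W_proj = X @ np.linalg.pinv(X)

# Correlation-matrix (Hebbian outer-product) network for comparison.
W_corr = X @ X.T / X.shape[0]

# A stored pattern is an exact fixed point of the projection network.
p = X[:, 0]
err_proj = float(np.max(np.abs(W_proj @ p - p)))
```

The projection network recalls stored patterns exactly because they lie in the subspace it projects onto; the correlation network only approximates this when patterns are correlated, which is the gap the paper's local learning rule addresses.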

2.
Gaussian processes compare favourably with backpropagation neural networks as a tool for regression, and Bayesian neural networks have Gaussian process behaviour when the number of hidden neurons tends to infinity. We describe a simple recurrent neural network with connection weights trained by one-shot Hebbian learning. This network amounts to a dynamical system which relaxes to a stable state in which it generates predictions identical to those of Gaussian process regression. In effect an infinite number of hidden units in a feed-forward architecture can be replaced by a merely finite number, together with recurrent connections.
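For reference, the Gaussian process regression whose predictions the recurrent network reproduces has a simple closed form for the posterior mean. This is a textbook sketch with an RBF kernel and illustrative data, not the paper's network:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
x_train = np.linspace(-3, 3, 20)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(20)
x_test = np.array([0.0, 1.5])

# GP posterior mean: k(x*, X) [K + sigma^2 I]^{-1} y
K = rbf(x_train, x_train) + 0.1 ** 2 * np.eye(20)
mean = rbf(x_test, x_train) @ np.linalg.solve(K, y_train)
```

The recurrent network in the paper relaxes to a state that produces exactly this posterior mean, replacing the explicit matrix solve with network dynamics.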

3.
A number of memory models have been proposed. These share a basic structure in which excitatory neurons are reciprocally connected by recurrent connections, together with connections to inhibitory neurons, which yields associative memory (i.e., pattern completion) and successive retrieval of memory. Most of these models adopt a simple mathematical neuron model in the form of a discrete map. It has not, however, been clarified whether behaviors like associative memory and successive retrieval of memory appear when a biologically plausible neuron model is used. In this paper, we propose a network model for associative memory and successive retrieval of memory based on Pinsky-Rinzel neurons. Pattern completion in associative memory can be observed with an appropriate balance of excitatory and inhibitory connection strengths. Increasing the connection strength of the inhibitory interneurons changes the state of memory retrieval from associative memory to successive retrieval of memory. We investigate this transition.
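The "simple discrete map" baseline the abstract refers to is typified by a Hopfield-style network; a minimal sketch of pattern completion under that baseline (the sizes and noise level are illustrative assumptions, and this is not the Pinsky-Rinzel model used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))

# Hebbian outer-product weights with zero diagonal.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0.0)

# Pattern completion: corrupt ~10% of one pattern, then iterate the sign map.
cue = patterns[0].astype(float)
flip = rng.choice(N, size=6, replace=False)
cue[flip] *= -1
state = cue
for _ in range(10):
    state = np.sign(W @ state)

overlap = float(state @ patterns[0]) / N  # 1.0 means perfect recall
```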

4.
This study compares the ability of excitatory, feed-forward neural networks to construct good transformations on their inputs. The quality of such a transformation is judged by the minimization of two information measures: the information loss of the transformation and the statistical dependency of the output. The networks that are compared differ from each other in the parametric properties of their neurons and in their connectivity. The particular network parameters studied are output firing threshold, synaptic connectivity, and associative modification of connection weights. The network parameters that most directly affect firing levels are threshold and connectivity. Networks incorporating neurons with dynamic threshold adjustment produce better transformations. When firing threshold is optimized, sparser synaptic connectivity produces a better transformation than denser connectivity. Associative modification of synaptic weights confers only a slight advantage in the construction of optimal transformations. Additionally, our research shows that some environments are better suited than others for recoding. Specifically, input environments high in statistical dependence, i.e. those environments most in need of recoding, are more likely to undergo successful transformations.

5.
The acts of learning and memory are thought to emerge from the modifications of synaptic connections between neurons, as guided by sensory feedback during behavior. However, much is unknown about how such synaptic processes can sculpt and are sculpted by neuronal population dynamics and an interaction with the environment. Here, we embodied a simulated network, inspired by dissociated cortical neuronal cultures, with an artificial animal (an animat) through a sensory-motor loop consisting of structured stimuli, detailed activity metrics incorporating spatial information, and an adaptive training algorithm that takes advantage of spike timing dependent plasticity. By using our design, we demonstrated that the network was capable of learning associations between multiple sensory inputs and motor outputs, and the animat was able to adapt to a new sensory mapping to restore its goal behavior: move toward and stay within a user-defined area. We further showed that successful learning required proper selections of stimuli to encode sensory inputs and a variety of training stimuli with adaptive selection contingent on the animat's behavior. We also found that an individual network had the flexibility to achieve different multi-task goals, and the same goal behavior could be exhibited with different sets of network synaptic strengths. While lacking the characteristic layered structure of in vivo cortical tissue, the biologically inspired simulated networks could tune their activity in behaviorally relevant manners, demonstrating that leaky integrate-and-fire neural networks have an innate ability to process information. This closed-loop hybrid system is a useful tool to study the network properties intermediating synaptic plasticity and behavioral adaptation. The training algorithm provides a stepping stone towards designing future control systems, whether with artificial neural networks or biological animats themselves.

6.
This report demonstrates the effectiveness of two processes in constructing simple feedforward networks which perform good transformations on their inputs. Good transformations are characterized by the minimization of two information measures: the information loss incurred with the transformation and the statistical dependency of the output. The two processes build appropriate synaptic connections in initially unconnected networks. The first process, synaptogenesis, creates new synaptic connections; the second process, associative synaptic modification, adjusts the connection strength of existing synapses. Synaptogenesis produces additional innervation for each output neuron until each output neuron achieves a firing rate of approximately 0.50. Associative modification of existing synaptic connections lends robustness to network construction by adjusting suboptimal choices of initial synaptic weights. Networks constructed using synaptogenesis and synaptic modification successfully preserve the information content of a variety of inputs. By recoding a high-dimensional input into an output of much smaller dimension, these networks drastically reduce the statistical dependence of neuronal representations. Networks constructed with synaptogenesis and associative modification perform good transformations over a wide range of neuron firing thresholds.
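Both measures used above to judge a transformation reduce to entropy estimates. A plug-in sketch of the statistical-dependency term for two binary channels, with illustrative data rather than the paper's network outputs:

```python
import numpy as np

def entropy_bits(samples):
    """Plug-in Shannon entropy (bits) of discrete samples."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(5)
x = rng.integers(0, 2, size=(10_000, 2))   # two independent fair bits
joint = x[:, 0] * 2 + x[:, 1]              # code the pair as one symbol 0..3

# Statistical dependency: sum of marginal entropies minus the joint entropy.
# For independent channels this is (near) zero; a good transformation
# drives this quantity down while keeping the joint entropy high.
dep = entropy_bits(x[:, 0]) + entropy_bits(x[:, 1]) - entropy_bits(joint)
```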

7.
Inspired by the facts that information encoding can increase the storage capacity of associative memory and that active mechanisms exist in the brain, an active associative memory model is proposed. The model consists of two neural networks: one is the input-output network, and the other is an active network that autonomously generates excitation patterns during the learning period. Synaptic connections exist between the neurons of the two networks. Because the autonomously generated excitation patterns are independent of the inputs and can be close to mutually orthogonal, the model attains a relatively high storage capacity. Preliminary analysis and computer simulation confirm that the model does have a higher storage capacity than conventional associative memory models, especially when the input patterns are highly correlated. Finally, the relations of the proposed model to bidirectional associative memory and to optical holographic storage mechanisms are discussed.

8.
Finding out the physical structure of neuronal circuits that governs neuronal responses is an important goal for brain research. With fast advances in large-scale recording techniques, identification of a neuronal circuit with multiple neurons and stages or layers becomes both possible and in high demand. Although methods for mapping the connection structure of circuits have been greatly developed in recent years, they are mostly limited to simple scenarios of a few neurons in a pairwise fashion; and dissecting dynamical circuits, particularly mapping out a complete functional circuit that converges to a single neuron, is still a challenging question. Here, we show that a recent method, termed spike-triggered non-negative matrix factorization (STNMF), can address these issues. By simulating different scenarios of spiking neural networks with various connections between neurons and stages, we demonstrate that STNMF is an effective method to dissect functional connections within a circuit. Using spiking activities recorded at neurons of the output layer, STNMF can obtain a complete circuit consisting of all cascade computational components of presynaptic neurons, as well as their spiking activities. For simulated simple and complex cells of the primary visual cortex, STNMF allows us to dissect the pathway of visual computation. Taken together, these results suggest that STNMF could provide a useful approach for investigating neuronal systems leveraging recorded functional neuronal activity.
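STNMF builds on ordinary non-negative matrix factorization applied to the spike-triggered stimulus ensemble. A generic multiplicative-update NMF sketch on synthetic rank-3 data; the data, sizes, and iteration count are stand-ins, not the paper's simulated circuits:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for a spike-triggered ensemble: each of 200 "spikes"
# mixes 3 non-negative subunit filters of length 40.
true_modules = np.abs(rng.standard_normal((3, 40)))
mix = np.abs(rng.standard_normal((200, 3)))
V = mix @ true_modules                      # 200 x 40, non-negative

# Lee-Seung multiplicative updates: V ~ W @ H with W, H >= 0.
k = 3
W = np.abs(rng.standard_normal((200, k))) + 0.1
H = np.abs(rng.standard_normal((k, 40))) + 0.1
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

rel_err = float(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

In the STNMF setting, the rows of `H` play the role of recovered presynaptic subunit filters and `W` their per-spike contributions.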

9.
The neural integrator of the oculomotor system is a privileged field for artificial neural network simulation. In this paper, we were interested in an improvement of the biologically plausible features of the Arnold-Robinson network. This improvement was done by fixing the sign of the connection weights in the network (in order to respect the biological Dale's Law). We also introduced a notion of distance in the network in the form of transmission delays between its units. These modifications necessitated the introduction of a general supervisor in order to train the network to act as a leaky integrator. When examining the lateral connection weights of the hidden layer, the distribution of the weights values was found to exhibit a conspicuous structure: the high-value weights were grouped in what we call clusters. Other zones are quite flat and characterized by low-value weights. Clusters are defined as particular groups of adjoining neurons which have strong and privileged connections with another neighborhood of neurons. The clusters of the trained network are reminiscent of the small clusters or patches that have been found experimentally in the nucleus prepositus hypoglossi, where the neural integrator is located. A study was conducted to determine the conditions of emergence of these clusters in our network: they include the fixation of the weight sign, the introduction of a distance, and a convergence of the information from the hidden layer to the motoneurons. We conclude that this spontaneous emergence of clusters in artificial neural networks, performing a temporal integration, is due to computational constraints, with a restricted space of solutions. Thus, information processing could induce the emergence of iterated patterns in biological neural networks. Received: 18 September 1996 / Accepted in revised form: 7 January 1997
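The leaky integration the network is trained to perform is, in discrete time, a one-line recurrence. A sketch with an illustrative time constant (the paper's network learns this behavior rather than hard-coding it):

```python
import numpy as np

# Discrete-time leaky integrator: x[t+1] = a*x[t] + b*u[t], a just below 1.
tau = 1.0                 # leak time constant in seconds (illustrative)
dt = 0.01
a = 1.0 - dt / tau
b = dt

# A brief velocity pulse should leave a slowly decaying position trace,
# as eye-position signals do downstream of the oculomotor integrator.
u = np.zeros(500)
u[50:60] = 1.0
x = np.zeros(501)
for t in range(500):
    x[t + 1] = a * x[t] + b * u[t]

peak = float(x.max())
half_sec_later = float(x[110])   # 0.5 s after the pulse ends
```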

10.
Dynamic recurrent neural networks were derived to simulate neuronal populations generating bidirectional wrist movements in the monkey. The models incorporate anatomical connections of cortical and rubral neurons, muscle afferents, segmental interneurons and motoneurons; they also incorporate the response profiles of four populations of neurons observed in behaving monkeys. The networks were derived by gradient descent algorithms to generate the eight characteristic patterns of motor unit activations observed during alternating flexion-extension wrist movements. The resulting model generated the appropriate input-output transforms and developed connection strengths resembling those in physiological pathways. We found that this network could be further trained to simulate additional tasks, such as experimentally observed reflex responses to limb perturbations that stretched or shortened the active muscles, and scaling of response amplitudes in proportion to inputs. In the final comprehensive network, motor units are driven by the combined activity of cortical, rubral, spinal and afferent units during step tracking and perturbations. The model displayed many emergent properties corresponding to physiological characteristics. The resulting neural network provides a working model of premotoneuronal circuitry and elucidates the neural mechanisms controlling motoneuron activity. It also predicts several features to be experimentally tested, for example the consequences of eliminating inhibitory connections in cortex and red nucleus. It also reveals that co-contraction can be achieved by simultaneous activation of the flexor and extensor circuits without invoking features specific to co-contraction.

11.
Spike-timing-dependent plasticity (STDP) is believed to structure neuronal networks by slowly changing the strengths (or weights) of the synaptic connections between neurons depending upon their spiking activity, which in turn modifies the neuronal firing dynamics. In this paper, we investigate the change in synaptic weights induced by STDP in a recurrently connected network in which the input weights are plastic but the recurrent weights are fixed. The inputs are divided into two pools with identical constant firing rates and equal within-pool spike-time correlations, but with no between-pool correlations. Our analysis uses the Poisson neuron model in order to predict the evolution of the input synaptic weights and focuses on the asymptotic weight distribution that emerges due to STDP. The learning dynamics induces a symmetry breaking for the individual neurons, namely for sufficiently strong within-pool spike-time correlation each neuron specializes to one of the input pools. We show that the presence of fixed excitatory recurrent connections between neurons induces a group symmetry-breaking effect, in which neurons tend to specialize to the same input pool. Consequently STDP generates a functional structure on the input connections of the network.
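The pair-based STDP window underlying such analyses is standard: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, both decaying exponentially with the spike-time difference. A sketch with commonly used parameter values (assumed here, not taken from the paper):

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:        # pre before post: potentiation (LTP)
        return a_plus * np.exp(-dt_ms / tau)
    else:                # post before pre: depression (LTD)
        return -a_minus * np.exp(dt_ms / tau)

ltp = stdp_dw(10.0)      # pre leads post by 10 ms
ltd = stdp_dw(-10.0)     # post leads pre by 10 ms
```

With `a_minus` slightly larger than `a_plus`, depression dominates at equal timing offsets, a common choice that keeps uncorrelated inputs from growing without bound.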

12.
Brain networks store new memories using functional and structural synaptic plasticity. Memory formation is generally attributed to Hebbian plasticity, while homeostatic plasticity is thought to have an ancillary role in stabilizing network dynamics. Here we report that homeostatic plasticity alone can also lead to the formation of stable memories. We analyze this phenomenon using a new theory of network remodeling, combined with numerical simulations of recurrent spiking neural networks that exhibit structural plasticity based on firing rate homeostasis. These networks are able to store repeatedly presented patterns and recall them upon the presentation of incomplete cues. Storage is fast, governed by the homeostatic drift. In contrast, forgetting is slow, driven by a diffusion process. Joint stimulation of neurons induces the growth of associative connections between them, leading to the formation of memory engrams. These memories are stored in a distributed fashion throughout the connectivity matrix, and individual synaptic connections have only a small influence. Although memory-specific connections are increased in number, the total numbers of inputs and outputs of neurons undergo only small changes during stimulation. We find that homeostatic structural plasticity induces a specific type of "silent memories", different from conventional attractor states.

13.
The connectome, or the entire connectivity of a neural system represented by a network, ranges across various scales from synaptic connections between individual neurons to fibre tract connections between brain regions. Although the modularity they commonly show has been extensively studied, it is unclear whether the connection specificity of such networks can already be fully explained by the modularity alone. To answer this question, we study two networks, the neuronal network of Caenorhabditis elegans and the fibre tract network of human brains obtained through diffusion spectrum imaging. We compare them to their respective benchmark networks with varying modularities, which are generated by link swapping to have desired modularity values. We find several network properties that are specific to the neural networks and cannot be fully explained by the modularity alone. First, the clustering coefficient and the characteristic path length of both C. elegans and human connectomes are higher than those of the benchmark networks with similar modularity. High clustering coefficient indicates efficient local information distribution, and high characteristic path length suggests reduced global integration. Second, the total wiring length is smaller than for the alternative configurations with similar modularity. This is due to lower dispersion of connections, which means each neuron in the C. elegans connectome or each region of interest in the human connectome reaches fewer ganglia or cortical areas, respectively. Third, both neural networks show lower algorithmic entropy compared with the alternative arrangements. This implies that fewer genes are needed to encode for the organization of neural systems. While the first two findings show that the neural topologies are efficient in information processing, the third suggests that they are also efficient from a developmental point of view. Together, these results show that neural systems are organized in such a way as to yield efficient features beyond those given by their modularity alone.

14.
The nematode Caenorhabditis elegans, with information on neural connectivity, three-dimensional position and cell lineage, provides a unique system for understanding the development of neural networks. Although C. elegans has been widely studied in the past, we present the first statistical study from a developmental perspective, with findings that raise interesting suggestions on the establishment of long-distance connections and network hubs. Here, we analyze the neuro-development for temporal and spatial features, using birth times of neurons and their three-dimensional positions. Comparisons of growth in C. elegans with random spatial network growth highlight two findings relevant to neural network development. First, most neurons which are linked by long-distance connections are born around the same time and early on, suggesting the possibility of early contact or interaction between connected neurons during development. Second, early-born neurons are more highly connected (tendency to form hubs) than later-born neurons. This indicates that the longer time frame available to them might underlie high connectivity. Both outcomes are not observed for random connection formation. The study finds that around one-third of electrically coupled long-range connections are late forming, raising the question of what mechanisms are involved in ensuring their accuracy, particularly in light of the extremely invariant connectivity observed in C. elegans. In conclusion, the sequence of neural network development highlights the possibility of early contact or interaction in securing long-distance and high-degree connectivity.

15.
In this paper, the oscillations and synchronization status of two different network connectivity patterns based on the Izhikevich model are studied. One of the connectivity patterns is a randomly connected neuronal network, the other is a small-world neuronal network. The Izhikevich model is a simple model which can not only reproduce the rich behaviors of biological neurons but also has only two equations and one nonlinear term. Detailed investigations reveal that by varying some key parameters, such as the connection weights of neurons, the external current injection, the noise intensity and the neuron number, the randomly coupled neuronal network will exhibit various collective behaviors. In addition, we show that changing the number of nearest neighbors and the connection probability in the small-world topology can also affect the collective dynamics of neuronal activity. These results may be instructive in understanding the collective dynamics of the mammalian cortex.
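The Izhikevich model mentioned here is indeed just two equations with one nonlinear term. A single-neuron sketch with the standard regular-spiking parameters from Izhikevich's original formulation and an assumed constant input current:

```python
# Izhikevich model, regular-spiking parameters: v' = 0.04v^2 + 5v + 140 - u + I,
# u' = a(bv - u), with reset v <- c, u <- u + d when v crosses 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0
I = 10.0                      # constant input current (illustrative)

spikes = 0
for _ in range(1000):         # 1000 steps of 1 ms
    # two 0.5 ms half-steps for numerical stability, as commonly done
    v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
    v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    if v >= 30.0:             # spike: reset membrane and bump recovery
        v, u = c, u + d
        spikes += 1
```

With this input the neuron fires tonically; sweeping `I`, the weights, or the coupling topology produces the collective regimes the paper explores.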

16.
In this paper, the collective behaviors of a small-world neuronal network motivated by the anatomy of a mammalian cortex, based on both the Izhikevich model and the Rulkov model, are studied. The Izhikevich model can not only reproduce the rich behaviors of biological neurons but also has only two equations and one nonlinear term. The Rulkov model is a set of difference equations that generates a sequence of membrane potential samples at discrete moments in time, improving computational efficiency. Both models are suitable for the construction of large-scale neural networks. By varying some key parameters, such as the connection probability and the number of nearest neighbors of each node, the coupled neurons exhibit various temporal and spatial characteristics. It is demonstrated that a GPU implementation achieves increasing acceleration over the CPU as the neuron number and the number of iterations grow. These two small-world network models and GPU acceleration give us a new opportunity to reproduce real biological networks containing large numbers of neurons.

17.
This report continues our research into the effectiveness of adaptive synaptogenesis in constructing feed-forward networks which perform good transformations on their inputs. Good transformations are characterized by the maintenance of input information and the removal of statistical dependence. Adaptive synaptogenesis stochastically builds and sculpts a synaptic connectivity in initially unconnected networks using two mechanisms. The first, synaptogenesis, creates new, excitatory, feed-forward connections. The second, associative modification, adjusts the strength of existing synapses. Our previous implementations of synaptogenesis only incorporated a postsynaptic regulatory process, receptivity to new innervation (Adelsberger-Mangan and Levy 1993a, b). In the present study, a presynaptic regulatory process, presynaptic avidity, which regulates the tendency of a presynaptic neuron to participate in a new synaptic connection as a function of its total synaptic weight, is incorporated into the synaptogenesis process. In addition, we investigate a third mechanism, selective synapse removal. This process removes synapses between neurons whose firing is poorly correlated. Networks that are constructed with the presynaptic regulatory process maintain more information and remove more statistical dependence than networks constructed with postsynaptic receptivity and associative modification alone. Selective synapse removal also improves network performance, but only when implemented in conjunction with the presynaptic regulatory process. Received: 20 August 1993/Accepted in revised form: 16 April 1994

18.
Synchronization and associative memory in Izhikevich neuronal networks
Associative memory is an important function of the human brain. A neural network is constructed with Izhikevich model neurons as nodes, using all-to-all connections between neurons, and the associative memory function of the constructed network is studied within the spatio-temporal coding theory of neuronal populations. With Gaussian white noise added, the connection strengths between the neurons are tuned; when the connection strength and noise intensity reach a threshold, a subset of neurons in the network fires synchronously, realizing associative recall and recovery of the stored patterns. Simulation results show that the connection strengths between neurons play an important role in the associative memory process, and that noise can promote synchronous firing among neurons, helping the network achieve associative recall and recovery of stored patterns.

19.
Autonomic oscillatory activities exist in almost every living thing and most of them are produced by rhythmic activities of the corresponding neural systems (locomotion, respiration, heart beat, etc.). This paper mathematically discusses sustained oscillations generated by mutual inhibition of the neurons, which are represented by a continuous-variable model with a kind of fatigue or adaptation effect. If the neural network has no stable stationary state for constant input stimuli, it will generate and sustain some oscillation for any initial state and for any disturbance. Some sufficient conditions for that are given for three types of neural networks: lateral inhibition networks of linearly arrayed neurons, symmetric inhibition networks and cyclic inhibition networks. The result suggests that the adaptation of the neurons plays a very important role in the appearance of the oscillations. Some computer simulations of rhythmic activities are also presented for cyclic inhibition networks consisting of a few neurons.
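A minimal instance of sustained oscillation from mutual inhibition plus adaptation is the two-unit half-center of this model class. The parameters below are illustrative values chosen to satisfy the oscillation conditions (inhibition strong enough to destabilize the symmetric state, but not strong enough to lock in a winner), not values from the paper:

```python
import numpy as np

# Two continuous-variable units with rectified output, mutual inhibition (w)
# and a slow adaptation (fatigue) variable v with gain beta.
dt, tau, tau_a = 0.01, 0.1, 1.0
beta, w, s = 2.5, 2.5, 1.0
x = np.array([0.1, 0.0])        # membrane states, asymmetric start
v = np.zeros(2)                 # adaptation states
y_trace = []
for _ in range(5000):
    y = np.maximum(x, 0.0)      # rectified firing rates
    x += dt / tau * (-x + s - beta * v - w * y[::-1])  # cross-inhibition
    v += dt / tau_a * (-v + y)                          # slow fatigue
    y_trace.append(y.copy())
y_trace = np.array(y_trace)

# Count how often dominance alternates between the two units.
d = y_trace[:, 0] - y_trace[:, 1]
switches = int(np.sum(np.diff(np.sign(d)) != 0))
```

Without the adaptation term (`beta = 0`) the stronger unit simply suppresses the other forever, illustrating the paper's point that adaptation is essential for the oscillation.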

20.
How different is local cortical circuitry from a random network? To answer this question, we probed synaptic connections with several hundred simultaneous quadruple whole-cell recordings from layer 5 pyramidal neurons in the rat visual cortex. Analysis of this dataset revealed several nonrandom features in synaptic connectivity. We confirmed previous reports that bidirectional connections are more common than expected in a random network. We found that several highly clustered three-neuron connectivity patterns are overrepresented, suggesting that connections tend to cluster together. We also analyzed synaptic connection strength as defined by the peak excitatory postsynaptic potential amplitude. We found that the distribution of synaptic connection strength differs significantly from the Poisson distribution and can be fitted by a lognormal distribution. Such a distribution has a heavier tail and implies that synaptic weight is concentrated among few synaptic connections. In addition, the strengths of synaptic connections sharing pre- or postsynaptic neurons are correlated, implying that strong connections are even more clustered than the weak ones. Therefore, the local cortical network structure can be viewed as a skeleton of stronger connections in a sea of weaker ones. Such a skeleton is likely to play an important role in network dynamics and should be investigated further.
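The heavy-tail implication of a lognormal weight distribution is easy to see numerically: a small fraction of connections carries a disproportionate share of the total synaptic weight. The lognormal parameters here are illustrative, not fitted to the recordings:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic EPSP amplitudes drawn from a lognormal distribution,
# the shape reported for cortical connection strengths.
amps = rng.lognormal(mean=-0.7, sigma=1.0, size=10_000)

# Concentration of weight: share of total strength carried by the
# strongest 10% of connections (the "skeleton" of strong synapses).
amps_sorted = np.sort(amps)[::-1]
top10_share = float(amps_sorted[: len(amps) // 10].sum() / amps.sum())

log_std = float(np.log(amps).std())   # ~1.0 by construction
```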


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. 京ICP备09084417号