Similar literature
20 similar documents retrieved.
1.
This paper investigates the possible role of neuroanatomical features in Pavlovian conditioning, via computer simulations with layered, feedforward artificial neural networks. The networks’ structure and functioning are described by a strongly bottom-up model that takes into account the roles of hippocampal and dopaminergic systems in conditioning. Neuroanatomical features were simulated as generic structural or architectural features of neural networks. We focused on the number of units per hidden layer and connectivity. The effect of the number of units per hidden layer was investigated through simulations of resistance to extinction in fully connected networks. Large networks were more resistant to extinction than small networks, a stochastic effect of the asynchronous random procedure used in the simulator to update activations and weights. These networks did not simulate second-order conditioning because weight competition prevented conditioning to a stimulus after conditioning to another. Partially connected networks simulated second-order conditioning and devaluation of the second-order stimulus after extinction of a similar first-order stimulus. Similar stimuli were simulated as nonorthogonal input-vectors.
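As a concrete illustration of the last point, the sketch below represents two similar stimuli as overlapping (nonorthogonal) binary input vectors and measures their overlap; the vector size and sparseness are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch (not the authors' simulator): representing two "similar"
# stimuli as nonorthogonal input vectors and quantifying their overlap.
rng = np.random.default_rng(0)

cs1 = rng.choice([0.0, 1.0], size=16, p=[0.7, 0.3])    # first-order stimulus
noise = rng.choice([0.0, 1.0], size=16, p=[0.9, 0.1])
cs2 = np.clip(cs1 + noise, 0.0, 1.0)                   # similar stimulus: shares active units

def overlap(a, b):
    """Cosine similarity; 0 for orthogonal vectors, 1 for identical ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(f"overlap(cs1, cs2) = {overlap(cs1, cs2):.2f}")  # > 0, i.e. nonorthogonal
```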

2.
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
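The connection-cost idea can be illustrated with a minimal sketch: a fitness that subtracts a penalty proportional to the number of connections. The `task_performance` stand-in and the cost constant are placeholders, not the paper's actual tasks or evolutionary setup.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): a fitness in which the
# number of connections is penalized, which in the paper's setup pressures
# evolution toward sparse, modular networks.
rng = np.random.default_rng(1)

def task_performance(weights):
    # Placeholder for an agent's score on the learning tasks (assumption).
    return -np.mean(weights ** 2)

def fitness(weights, connection_cost=0.05):
    n_connections = np.count_nonzero(weights)
    return task_performance(weights) - connection_cost * n_connections

population = [rng.normal(size=(8, 8)) * (rng.random((8, 8)) < p)
              for p in (0.2, 0.5, 1.0)]          # sparse to fully connected
for w in population:
    print(np.count_nonzero(w), round(fitness(w), 3))
```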

3.
Recently, models of neural networks that can directly deal with complex numbers, complex-valued neural networks, have been proposed, and several studies on their information-processing abilities have been carried out. Furthermore, models of neural networks that can deal with quaternions, which are an extension of complex numbers, have also been proposed. However, they are all multilayer quaternion neural networks. This paper proposes models of fully connected recurrent quaternion neural networks, Hopfield-type quaternion neural networks. Since quaternion multiplication is non-commutative, several different models can be considered. We investigate the dynamics of these proposed models from the point of view of the existence of an energy function and derive the conditions for its existence.
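For reference, the non-commutativity that the proposed models have to deal with is just that of the Hamilton product; a small sketch (not tied to any particular network model in the paper):

```python
import numpy as np

# Illustration only: the Hamilton product of two quaternions (w, x, y, z),
# showing the non-commutativity that gives rise to several distinct
# Hopfield-type quaternion network models.
def hamilton(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

p = np.array([0.0, 1.0, 0.0, 0.0])   # i
q = np.array([0.0, 0.0, 1.0, 0.0])   # j
print(hamilton(p, q))   # i*j =  k -> [0, 0, 0,  1]
print(hamilton(q, p))   # j*i = -k -> [0, 0, 0, -1]
```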

4.
In this paper, we study the combined dynamics of the neural activity and the synaptic efficiency changes in a fully connected network of biologically realistic neurons with simple synaptic plasticity dynamics including both potentiation and depression. Using a mean-field technique, we analyzed the equilibrium states of neural networks with dynamic synaptic connections and found a class of bistable networks. For this class of networks, one of the stable equilibrium states shows strong connectivity and coherent responses to external input. In the other stable equilibrium, the network is loosely connected and responds non-coherently to external input. Transitions between the two states can be achieved by positively or negatively correlated external inputs. Such networks can therefore switch between their phases according to the statistical properties of the external input. Non-coherent input can only “read” the state of the network, while a correlated one can change its state. We speculate that this property, specific to plastic neural networks, can provide a clue to understanding fully unsupervised learning models.

5.
A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding, the most regular networks are statistically those with the best learning abilities.

6.
We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, consisting of a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified the software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent with high performance. Notably, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, owing to the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e. increasing the number of training cycles) to ensure that short-term plasticity occurred, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity of neural networks, and may also offer theoretical inspiration for next-generation neuro-prostheses based on the bi-directional exchange of information with hierarchical neural networks.

7.
For many biological networks, the topology of the network constrains its dynamics. In particular, feedback loops play a crucial role. The results in this paper quantify the constraints that (unsigned) feedback loops exert on the dynamics of a class of discrete models for gene regulatory networks. Conjunctive (resp. disjunctive) Boolean networks, obtained by using only the AND (resp. OR) operator, comprise a subclass of networks that consist of canalyzing functions, used to describe many published gene regulation mechanisms. For the study of feedback loops, it is common to decompose the wiring diagram into linked components each of which is strongly connected. It is shown that for conjunctive Boolean networks with strongly connected wiring diagram, the feedback loop structure completely determines the long-term dynamics of the network. A formula is established for the precise number of limit cycles of a given length, and it is determined which limit cycle lengths can appear. For general wiring diagrams, the situation is much more complicated, as feedback loops in one strongly connected component can influence the feedback loops in other components. This paper provides a sharp lower bound and an upper bound on the number of limit cycles of a given length, in terms of properties of the partially ordered set of strongly connected components.
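A toy illustration of the setting: in a conjunctive Boolean network every node updates to the AND of its inputs, and on a strongly connected wiring diagram the limit cycles can be enumerated directly. The 3-node cycle below is an arbitrary example, not one from the paper.

```python
from itertools import product

# Minimal sketch: a conjunctive Boolean network (every update rule is the AND
# of a node's inputs) on a strongly connected 3-node cycle, and brute-force
# enumeration of its limit cycles.
inputs = {0: [2], 1: [0], 2: [1]}            # node i reads from inputs[i]

def step(state):
    return tuple(all(state[j] for j in inputs[i]) for i in range(len(state)))

def limit_cycle(state):
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return seen[seen.index(state):]          # the periodic part of the orbit

cycles = {tuple(sorted(limit_cycle(s))) for s in product([False, True], repeat=3)}
for c in cycles:
    print(len(c), c)                          # cycle length and its states
```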

8.
Information about the external world is delivered to the brain in the form of temporally structured spike trains. During further processing in higher areas, information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, this mechanism should cause the dynamics to settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, in particular the influence of the geometric size of the network on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains. We run the dynamics until it becomes periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks, and conclude that it depends strongly on the network's geometric size.
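A minimal sketch of the information measure: Shannon's formula applied to a set of binned spike-train patterns, comparing the entropy of the inputs with the entropy of the periodic states they condense into. The patterns are invented placeholders, not the paper's data.

```python
import math
from collections import Counter

# Minimal sketch: Shannon's formula applied to (binned) spike-train patterns.
patterns = ["10110", "10110", "01001", "11100", "10110", "01001"]

def shannon_entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h_inputs = shannon_entropy(patterns)
h_periodic = shannon_entropy(["10110"] * len(patterns))    # all inputs condensed to one state
print(f"input entropy  = {h_inputs:.3f} bits")
print(f"output entropy = {h_periodic:.3f} bits")           # 0 bits: full condensation
```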

9.
Optimization of the xylitol fermentation medium based on neural networks and genetic algorithms
The fermentation process has complex mechanisms and many influencing factors. Once the physiological and biochemical characteristics of the strain and the fermentation process have been fixed, a suitable medium formulation becomes the decisive factor for the fermentation yield and the cost of raw materials. To optimize the medium formulation, the genetic algorithm is an effective approach. The genetic algorithm (GA) is a model-free optimization algorithm, based on Darwinian evolution and Mendelian genetics, that performs a stochastic, adaptive, parallel global search. Compared with other search methods, the main advantages of GA are: (1) during the search, GA does not easily become trapped in local optima; even when the defined objective function is discontinuous, irregular, or noisy, it can find the global optimum with high probability; (2) owing to its inherent parallelism, GA is well suited to large-scale parallel…
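A minimal genetic-algorithm sketch in the spirit of this abstract; the three medium components, their bounds, and the surrogate yield function are invented stand-ins (in the paper the fitness would come from a neural-network model fitted to fermentation data).

```python
import random

# Minimal GA sketch for medium optimization (all numbers are placeholders).
random.seed(0)
BOUNDS = [(10, 80), (0, 20), (0, 10)]        # e.g. xylose, yeast extract, salts (g/L)

def predicted_yield(x):                       # placeholder surrogate model
    return -(x[0] - 55) ** 2 - 2 * (x[1] - 8) ** 2 - 5 * (x[2] - 3) ** 2

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate=0.2):
    return [min(hi, max(lo, v + random.gauss(0, 2))) if random.random() < rate else v
            for v, (lo, hi) in zip(x, BOUNDS)]

pop = [random_individual() for _ in range(30)]
for _ in range(50):
    pop.sort(key=predicted_yield, reverse=True)
    parents = pop[:10]                        # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
best = max(pop, key=predicted_yield)
print([round(v, 1) for v in best], round(predicted_yield(best), 2))
```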

10.
Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information flow among networks.
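For reference, the core of the PLV approach is straightforward once instantaneous phases are available; a minimal sketch using the Hilbert transform on two synthetic 40 Hz signals (the paper additionally applies singular spectrum decomposition before the transform):

```python
import numpy as np
from scipy.signal import hilbert

# Minimal sketch of a phase-locking value (PLV) between two noisy oscillations,
# using the Hilbert transform to obtain instantaneous phases.
fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 40 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"PLV = {plv:.2f}")   # close to 1 for phase-locked, near 0 for unrelated signals
```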

11.
A new structure and training method for multilayer neural networks is presented. The proposed method is based on cascade training of subnetworks and optimizing the weights layer by layer. The training procedure is completed in two steps. First, a subnetwork with m inputs and n outputs, matching the format of the training samples, is trained on the training samples. Second, taking the outputs of this subnetwork as inputs and the outputs of the training samples as desired outputs, another subnetwork with n inputs and n outputs is trained. Finally, the two trained subnetworks are connected, yielding a trained multilayer neural network. Numerical simulation results based on both the linear least squares back-propagation (LSB) and the traditional back-propagation (BP) algorithm demonstrate the efficiency of the proposed method.
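A much-simplified sketch of the two-step cascade using purely linear "subnetworks" fitted by least squares (a stand-in for the LSB-trained layers; the data and dimensions are invented):

```python
import numpy as np

# Simplified cascade: two linear subnetworks fitted in sequence, then connected.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))                 # m = 5 inputs
Y = np.tanh(X @ rng.standard_normal((5, 2)))      # n = 2 target outputs

# Step 1: subnetwork 1, m inputs -> n outputs, trained on (X, Y).
W1, *_ = np.linalg.lstsq(X, Y, rcond=None)
H = X @ W1

# Step 2: subnetwork 2, n inputs -> n outputs, trained on (H, Y).
W2, *_ = np.linalg.lstsq(H, Y, rcond=None)

# Connect the two trained subnetworks into one multilayer mapping.
Y_hat = (X @ W1) @ W2
print("MSE after cascade:", float(np.mean((Y - Y_hat) ** 2)))
```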

12.
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problems is formally known as partially observable reinforcement learning (PORL), a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach.
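For reference, the quantity being approximated is the standard free energy of a binary restricted Boltzmann machine; a minimal sketch with arbitrary small dimensions and random weights (not the paper's spiking approximation):

```python
import numpy as np

# Free energy of a binary RBM: F(v) = -b·v - sum_j log(1 + exp(c_j + v·W[:, j]))
rng = np.random.default_rng(4)
n_visible, n_hidden = 6, 4
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)       # visible biases
c = np.zeros(n_hidden)        # hidden biases

def free_energy(v):
    return -v @ b - np.sum(np.logaddexp(0.0, c + v @ W))

v = rng.integers(0, 2, size=n_visible).astype(float)   # a binary observation
print(f"F(v) = {free_energy(v):.3f}")
```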

13.
Computational neuroscience models can be used to understand the diminished stability and noisy neurodynamical behaviour of prefrontal cortex networks in schizophrenia. These neurodynamical properties can be captured by simulated neural networks with randomly spiking neurons that introduce noise into the system and produce trial-by-trial variation of postsynaptic potentials. Theoretical and experimental studies have aimed to understand schizophrenia in relation to noise and signal-to-noise ratio, which are promising concepts for understanding the symptoms that characterize this heterogeneous illness. Simulations of biologically realistic neural networks show how the functioning of NMDA (N-methyl-D-aspartate), GABA (gamma-aminobutyric acid) and dopamine receptors is connected to the concepts of noise and variability, and to related neurophysiological findings and clinical symptoms in schizophrenia.

14.
Conductance-based neuron models are frequently employed to study the dynamics of biological neural networks. For speed and ease of use, these models are often reduced in morphological complexity. Simplified dendritic branching structures may process inputs differently than full branching structures, however, and could thereby fail to reproduce important aspects of biological neural processing. It is not yet well understood which processing capabilities require detailed branching structures. Therefore, we analyzed the processing capabilities of full or partially branched reduced models. These models were created by collapsing the dendritic tree of a full morphological model of a globus pallidus (GP) neuron while preserving its total surface area and electrotonic length, as well as its passive and active parameters. Dendritic trees were either collapsed into single cables (unbranched models) or the full complement of branch points was preserved (branched models). Both reduction strategies allowed us to compare dynamics between all models using the same channel density settings. Full model responses to somatic inputs were generally preserved by both types of reduced model while dendritic input responses could be more closely preserved by branched than unbranched reduced models. However, features strongly influenced by local dendritic input resistance, such as active dendritic sodium spike generation and propagation, could not be accurately reproduced by any reduced model. Based on our analyses, we suggest that there are intrinsic differences in processing capabilities between unbranched and branched models. We also indicate suitable applications for different levels of reduction, including fast searches of full model parameter space.

15.
This work clarifies the relation between network circuit (topology) and behaviour (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how one can find network topologies that are able to transmit a large amount of information, possess a large number of communication channels, and are robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy some special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically connected chaotic Hindmarsh-Rose neurons.
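A minimal sketch of the object at the heart of this approach: the Laplacian matrix of a small coupling graph and its eigenvalue spectrum. The toy topology is arbitrary, not one of the networks analyzed in the paper.

```python
import numpy as np

# Build the Laplacian of a small undirected coupling graph and inspect its eigenvalues.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)      # adjacency / coupling strengths
L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian
eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals)          # e.g. the spectral gap eigvals[1] relates to synchronizability
```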

16.
A new approach for nonlinear system identification and control based on modular neural networks (MNN) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems. This is achieved using a partitioning algorithm. Each local nonlinear model is associated with a nonlinear controller; these are also implemented by neural networks. The switching between the neural controllers is done by a dynamical switcher, also implemented by neural networks, that tracks the different operating points. The proposed multiple modelling and control strategy has been successfully tested on a simulated laboratory-scale liquid-level system.

17.
Neurons in the brain are known to operate under a careful balance of excitation and inhibition, which maintains neural microcircuits within the proper operational range. How this balance is played out at the mesoscopic level of neuronal populations is, however, less clear. In order to address this issue, here we use a coupled neural mass model to study computationally the dynamics of a network of cortical macrocolumns operating in a partially synchronized, irregular regime. The topology of the network is heterogeneous, with a few of the nodes acting as connector hubs while the rest are relatively poorly connected. Our results show that in this type of mesoscopic network excitation and inhibition spontaneously segregate, with some columns acting mainly in an excitatory manner while some others have predominantly an inhibitory effect on their neighbors. We characterize the conditions under which this segregation arises, and relate the character of the different columns with their topological role within the network. In particular, we show that the connector hubs are preferentially inhibitory, the more so the larger the node's connectivity. These results suggest a potential mesoscale organization of the excitation-inhibition balance in brain networks.

18.
Wuchty S. Proteomics 2002, 2(12): 1715–1723.
Data from the currently available protein-protein interaction sets and protein domain sets of yeast are used to set up protein and domain interaction networks and domain sequence networks. All of them are far from being random or regular networks. In fact, they turn out to be sparse and locally well clustered, indicating so-called scale-free and partially small-world topology. These subtle topologies display considerable indirect properties, which are measured with a newly introduced transitivity coefficient. Fairly small sets of highly connected proteins and domains shape the topologies of the underlying networks, emphasizing a kind of backbone that the networks are based on. The biological nature of these particular nodes is further investigated. Since highly connected proteins and domains have accumulated a significantly higher number of links through their involvement in important cellular processes, their mutational effect on the cell is considered by a perturbation analysis. Finally, in comparison to yeast domains, the factors that drive domains to accumulate links to other domains in the protein sequences of higher eukaryotes are investigated.
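For orientation, the sketch below computes the standard global transitivity (three times the number of triangles over the number of connected triples) of a small graph; this is the textbook measure, not necessarily the transitivity coefficient introduced in the paper.

```python
import numpy as np

# Global transitivity of a small undirected graph (toy adjacency matrix).
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

triangles = np.trace(A @ A @ A) / 6.0                 # each triangle counted 6 times
deg = A.sum(axis=1)
triples = np.sum(deg * (deg - 1)) / 2.0               # connected triples (paths of length 2)
print("transitivity =", 3.0 * triangles / triples)
```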

19.
We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamics regime, the so called edge of chaos. The reasons, however, for this maximized performance are not completely understood. We adopt an information-theoretical framework and are for the first time able to quantify the computational capabilities between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized ways of improving performance in recurrent neural networks, driven by input data. Moreover, the networks we study share important features with biological systems such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which was shown to also operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on reasons why biological systems are tuned into this specific regime.
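A common way to place a random recurrent layer near this border is to rescale its weight matrix to a target spectral radius; the sketch below does this for an echo-state-style reservoir (sizes and sparsity are arbitrary, and this illustrates the regime only, not the paper's information-theoretic analysis).

```python
import numpy as np

# Rescale a random recurrent weight matrix to a target spectral radius;
# values near 1 place the dynamics close to the order/chaos transition.
rng = np.random.default_rng(5)
n = 100
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.1)   # sparse recurrent weights

def rescale(W, target_radius):
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (target_radius / radius)

for rho in (0.5, 0.95, 1.5):                 # stable, near edge of chaos, unstable
    Wr = rescale(W, rho)
    x = rng.standard_normal(n) * 0.1
    for _ in range(200):                      # run the autonomous dynamics
        x = np.tanh(Wr @ x)
    print(f"spectral radius {rho}: final activity norm = {np.linalg.norm(x):.3f}")
```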

20.
All higher order central nervous systems exhibit spontaneous neural activity, though the purpose and mechanistic origin of such activity remains poorly understood. We quantitatively analyzed the ignition and spread of collective spontaneous electrophysiological activity in networks of cultured cortical neurons growing on microelectrode arrays. Leader neurons, which form a mono-synaptically connected primary circuit and initiate a majority of network bursts, were found to be a small subset of recorded neurons. Leader/follower firing delay times formed temporally stable, positively skewed distributions. Blocking inhibitory synapses usually resulted in shorter delay times with reduced variance. These distributions characterize general aspects of internal network dynamics and provide estimates of pair-wise synaptic distances. The resulting analysis produced specific quantitative constraints and insights into the activation patterns of collective neuronal activity in self-organized cortical networks, which may prove useful for models emulating spontaneously active systems.
