Similar Articles
20 similar articles found (search time: 31 ms).
1.
We investigate the memory structure and retrieval of the brain and propose a hybrid neural network of addressable and content-addressable memory, a special database model that can memorize and retrieve any piece of information (a binary pattern) both addressably and content-addressably. The architecture of this hybrid neural network is hierarchical, taking the form of a tree of slabs that consist of binary neurons in identical arrays. Simplex memory neural networks serve as the slabs of basic memory units, distributed over the terminal vertices of the tree. Theoretical analysis shows that the hybrid neural network can be constructed with Hebbian and competitive learning rules, and other important characteristics of its learning and memory behavior are also consistent with those of the brain. Moreover, we demonstrate the hybrid neural network on a set of ten binary numeral patterns.
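As a toy illustration of the content-addressable half of such a system (not the authors' architecture), a minimal Hopfield-style memory with a Hebbian outer-product rule retrieves a stored binary pattern from a corrupted probe; all patterns and parameters here are illustrative:

```python
def hebbian_weights(patterns):
    # patterns: list of +1/-1 vectors; Hebbian outer-product rule, zero diagonal
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, probe, max_steps=10):
    # synchronous sign updates until the state stops changing
    s = list(probe)
    for _ in range(max_steps):
        s_new = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
                 for i in range(len(s))]
        if s_new == s:
            break
        s = s_new
    return s

patterns = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1]]
w = hebbian_weights(patterns)
noisy = [1, 1, 1, 1, -1, -1, -1, 1]   # first pattern with one flipped bit
restored = recall(w, noisy)           # → recovers the first stored pattern
```

Content-addressable retrieval here means the corrupted probe itself, not an address, selects which stored pattern is returned.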

2.
The maintenance of short-term memories is critical for survival in a dynamically changing world. Previous studies suggest that this memory can be stored in the form of persistent neural activity or by a synaptic mechanism such as short-term plasticity. Here, we compare the predictions of these two mechanisms to neural and behavioral measurements in a visual change detection task. Mice were trained to respond to changes in a repeated sequence of natural images while neural activity was recorded using two-photon calcium imaging. We also trained two types of artificial neural networks on the same change detection task as the mice. Following fixed pre-processing using a pretrained convolutional neural network, either a recurrent neural network (RNN) or a feedforward neural network with short-term synaptic depression (STPNet) was trained to the same level of performance as the mice. While both networks are able to learn the task, the STPNet model contains units whose activity is more similar to the in vivo data and produces errors that are more similar to those of the mice. When images are omitted, an unexpected perturbation that was absent during training, mice often do not respond to the omission but are more likely to respond to the subsequent image. Unlike the RNN model, STPNet produces a similar pattern of behavior. These results suggest that simple neural adaptation mechanisms may serve as an important bottom-up memory signal in this task, which can be used by downstream areas in the decision-making process.
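A minimal sketch of the kind of short-term synaptic depression STPNet relies on, in the style of a Tsodyks–Markram resource model; the parameters and discretization are assumptions for illustration, not the paper's implementation:

```python
def depressing_synapse(spikes, dt=0.01, tau_rec=0.5, use=0.5):
    # x = fraction of available synaptic resources (1.0 = fully recovered)
    x, out = 1.0, []
    for s in spikes:                       # s is 1 on a presynaptic spike
        x += dt * (1.0 - x) / tau_rec      # slow recovery toward 1
        if s:
            out.append(use * x)            # transmitted amplitude
            x *= (1.0 - use)               # resources depleted by the spike
        else:
            out.append(0.0)
    return out

# repeated stimulation: responses adapt downward, then recover after a pause
train = [1, 0, 1, 0, 1] + [0] * 100 + [1]
resp = [a for a in depressing_synapse(train) if a > 0]
```

The depleted resource variable carries an implicit memory of the recent stimulus history, which is the bottom-up memory signal the abstract describes.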

3.
Game Dynamics with Learning and Evolution of Universal Grammar
We investigate a model of language evolution, based on population game dynamics with learning. First, we examine the case of two genetic variants of universal grammar (UG), the heart of the human language faculty, assuming each admits two possible grammars. The dynamics are driven by a communication game. We prove using dynamical systems techniques that if the payoff matrix obeys certain constraints, then the two UGs are stable against invasion by each other, that is, they are evolutionarily stable. Then, we prove a similar theorem for an arbitrary number of disjoint UGs. In both theorems, the constraints are independent of the learning process. Intuitively, if a mutation in UG results in grammars that are incompatible with the established languages, then the mutation will die out because mutants will be unable to communicate and therefore unable to realize any potential benefit of the mutation. An example for which these theorems do not apply shows that compatible mutations may or may not be able to invade, depending on the population's history and the learning process. These results suggest that the genetic history of language is constrained by the need for compatibility and that mutations in the language faculty may have died out or taken over due more to historical accident than to any straightforward notion of relative fitness. MSC 1991: 37N25 · 92D15 · 91F20
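The flavor of the invasion argument can be sketched with textbook replicator dynamics; the payoff matrix below is a hypothetical example in which mutants communicate poorly with residents (residents get payoff 1.0 among themselves, 0.1 with the other variant), not a matrix from the paper:

```python
def replicator_step(x, payoff, dt=0.01):
    # x: frequency of UG variant A; payoff[i][j]: payoff of i against j
    fA = payoff[0][0] * x + payoff[0][1] * (1 - x)
    fB = payoff[1][0] * x + payoff[1][1] * (1 - x)
    avg = x * fA + (1 - x) * fB
    return x + dt * x * (fA - avg)     # standard replicator update

# residents communicate well among themselves, poorly across variants
payoff = [[1.0, 0.1],
          [0.1, 1.0]]
x = 0.99                               # variant A established, 1% mutants
for _ in range(5000):
    x = replicator_step(x, payoff)
# incompatible mutants fail to invade: x → 1
```

Because the mutants' payoff comes almost entirely from interactions with residents they cannot communicate with, their average fitness stays below the residents' and their frequency shrinks to zero, mirroring the evolutionary stability result.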

4.
No need for a cognitive map: decentralized memory for insect navigation
In many animals the ability to navigate over long distances is an important prerequisite for foraging. For example, it is widely accepted that desert ants and honey bees, but also mammals, use path integration for finding the way back to their home site. It has, however, been a matter of long-standing debate whether animals are in addition able to acquire and use so-called cognitive maps. Such a 'map', a global spatial representation of the foraging area, is generally assumed to allow the animal to find shortcuts between two sites even when the direct connection has never been travelled before. Using the artificial neural network approach, we here develop an artificial memory system based on path integration and various landmark guidance mechanisms (a bank of individual and independent landmark-defined memory elements). Activation of the individual memory elements depends on a separate motivation network and a partly asymmetrical lateral inhibition network. The information concerning the absolute position of the agent is present, but resides in a separate memory that can only be used by the path integration subsystem to control the behaviour; it cannot be used for computational purposes with other memory elements of the system. Thus, in this simulation there is no neural basis for a cognitive map. Nevertheless, an agent controlled by this network is able to accomplish various navigational tasks known from ants and bees and often discussed as being dependent on a cognitive map. For example, map-like behaviour as observed in honey bees arises as an emergent property of a decentralized system. This behaviour can thus be explained without assuming that a cognitive map, a coherent representation of foraging space, must exist. We hypothesize that the proposed network essentially resides in the mushroom bodies of the insect brain.
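The path-integration component can be sketched in a few lines as vector summation of the outbound path; this is a generic illustration of the principle, not the authors' network implementation:

```python
import math

def path_integrate(steps):
    # steps: list of (heading_in_radians, distance) along the outbound path;
    # returns the home vector pointing back to the nest
    x = y = 0.0
    for heading, d in steps:
        x += d * math.cos(heading)
        y += d * math.sin(heading)
    return (-x, -y)

# a meandering outbound foraging trip: east 3, north 4, west 1
trip = [(0.0, 3.0), (math.pi / 2, 4.0), (math.pi, 1.0)]
home = path_integrate(trip)   # → approximately (-2.0, -4.0)
```

The agent needs only this running sum, continuously updated from self-motion cues, to head straight home without any global map of the terrain.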

5.
Using models based on generative grammars, a theory of ecosystem assembly can be formulated that maps a set of species onto a set of environments (Haefner, 1977). Such a theory must incorporate a minimal set of ecological properties in order to correctly describe the adaptive strategies of species and the non-random collection of species comprising an ecosystem. These properties include (1) concordance between activities performed by the individuals of a species, (2) the elaboration of niches due to species invasion, (3) concordance between resources and the users of resources, and (4) the plasticity of species behavior. These properties are used to define the criteria for the weak and strong empirical adequacy of grammars. Weak empirical adequacy is the ability of a grammar to generate the sentences of a language. Strong empirical adequacy is the ability of a grammar to generate the correct relationships between the elements of the sentences of a language. The adequacy of the members of the Chomsky hierarchy (regular grammars, context-free phrase-structure grammars, context-sensitive phrase-structure grammars, and transformational grammars) is evaluated by comparing their generative capacities against the criteria for empirical adequacy. This analysis indicates that, for the representations of the phenomena considered, strong empirical adequacy requires at least the generative capacity of a transformational grammar.

6.
Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring these efforts together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from a tabula rasa, without any a priori knowledge of the structure of phrases, the meanings of words, or the roles of the different word classes, but only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on the literature on early language assessment at the level of a child of about 4 years, and produced 521 output sentences expressing a broad range of language processing functionalities.

7.
It is well accepted that the brain's computation relies on the spatiotemporal activity of neural networks. In particular, there is growing evidence of the importance of continuously and precisely timed spiking activity. It is therefore important to characterize memory states in terms of spike-timing patterns that give both reliable memory of firing activities and precise memory of firing timings. The relationship between memory states and spike-timing patterns has been studied empirically with large-scale recordings of neuron populations in recent years. Here, using a recurrent neural network model with dynamics at two time scales, we construct a dynamical memory network model that embeds both fast neural and synaptic variation and slow learning dynamics. A state vector is proposed to describe memory states in terms of spike-timing patterns of the neural population, and a distance measure on state vectors is defined to study several important phenomena of memory dynamics: partial memory recall, learning efficiency, and learning with correlated stimuli. We show that the distance measure can capture the timing difference of memory states. In addition, we examine the influence of network topology on learning ability, and show that local connections can increase the network's ability to embed more memory states. Together, these results suggest that the proposed system based on spike-timing patterns gives a productive model for the study of detailed learning and memory dynamics.
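One simple way to realize a spike-timing state vector and a distance on it is to bin each neuron's spike train and take a Euclidean distance over the concatenated population vector; this is an assumed construction for illustration, and the paper's exact definitions may differ:

```python
import math

def state_vector(spike_times, t_end, bin_width):
    # discretize one neuron's spike times into a binary bin vector
    n_bins = int(t_end / bin_width)
    v = [0] * n_bins
    for t in spike_times:
        if 0 <= t < t_end:
            v[int(t / bin_width)] = 1
    return v

def state_distance(state_a, state_b):
    # Euclidean distance between concatenated population state vectors
    flat_a = [x for v in state_a for x in v]
    flat_b = [x for v in state_b for x in v]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)))

# two neurons; the second memory state shifts one spike by one bin
mem1 = [state_vector([1.0, 5.0], 10, 1), state_vector([3.0], 10, 1)]
mem2 = [state_vector([1.0, 6.0], 10, 1), state_vector([3.0], 10, 1)]
d = state_distance(mem1, mem2)   # → sqrt(2): one spike changed bins
```

A distance of this kind is sensitive to *when* each neuron fires, not just its firing rate, which is the property the abstract emphasizes.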

8.
In models of working memory, transient stimuli are encoded by feature-selective persistent neural activity. Network models of working memory are also implicitly bistable. In the absence of a brief stimulus, only spontaneous, low-level, and presumably nonpatterned neural activity is seen. In many working-memory models, local recurrent excitation combined with long-range inhibition (Mexican hat coupling) can result in a network-induced, spatially localized persistent activity or “bump state” that coexists with a stable uniform state. There is now renewed interest in the concept that individual neurons might have some intrinsic ability to sustain persistent activity without recurrent network interactions. A recent visuospatial working-memory model (Camperi and Wang 1998) incorporates both intrinsic bistability of individual neurons within a firing rate network model and a single population of neurons on a ring with lateral inhibitory coupling. We have explored this model in more detail and have characterized the response properties with changes in background synaptic input Io and stimulus width. We find that only a small range of Io yields a working-memory-like coexistence of bump and uniform solutions that are both stable. There is a rather larger range where only the bump solution is stable that might correspond instead to a feature-selective long-term memory. Such a network therefore requires careful tuning to exhibit working-memory-like function. Interestingly, where bumps and uniform stable states coexist, we find a continuous family of stable bumps representing stimulus width. Thus, in the range of parameters corresponding to working memory, the model is capable of capturing a two-parameter family of stimulus features including both orientation and width.
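A crude rate-model sketch of a ring network with Mexican-hat coupling shows a stimulus-induced bump that outlives the cue; the step-function coupling and saturating rate function are hand-tuned simplifications for illustration, not the Camperi–Wang model itself:

```python
def simulate_bump(n=60, center=30, steps=200, dt=0.1):
    def dist(a, b):                 # distance on the ring
        return min(abs(a - b), n - abs(a - b))
    def w(d):                       # Mexican hat: local excitation, broad inhibition
        return 1.0 if d <= 3 else -1.0
    def f(x):                       # saturating threshold-linear rate function
        return min(max(x, 0.0), 1.0)
    r = [0.0] * n
    for step in range(steps):
        stim = 1.0 if step < 100 else 0.0   # transient cue, then a delay period
        r = [ri + dt * (-ri + f(sum(w(dist(i, j)) * r[j] for j in range(n))
                                + (stim if dist(i, center) <= 3 else 0.0)))
             for i, ri in enumerate(r)]
    return r

rates = simulate_bump()
# the cue is off after step 100, yet a localized bump persists at `center`,
# coexisting with near-zero activity elsewhere
```

Local excitation keeps the stimulated patch active while the surrounding inhibition prevents the activity from spreading, which is the coexistence of bump and uniform states the abstract analyzes.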

9.
Experimental evidence suggests that the maintenance of an item in working memory is achieved through persistent activity in selective neural assemblies of the cortex. To understand the mechanisms underlying this phenomenon, it is essential to investigate how persistent activity is affected by external inputs or neuromodulation. We have addressed these questions using a recurrent network model of object working memory. Recurrence is dominated by inhibition, although persistent activity is generated through recurrent excitation in small subsets of excitatory neurons. Our main findings are as follows. (1) Because of the strong feedback inhibition, persistent activity shows an inverted-U shape as a function of increased external drive to the network. (2) A transient external excitation can switch a network from a selective persistent state to its spontaneous state. (3) The maintenance of the sample stimulus in working memory is not affected by intervening stimuli (distractors) during the delay period, provided the stimulation intensity is not large. On the other hand, if the stimulation intensity is large enough, distractors disrupt sample-related persistent activity, and the network is able to maintain a memory only of the last shown stimulus. (4) A concerted modulation of GABA-A and NMDA conductances leads to a decrease of spontaneous activity but an increase of persistent activity; the enhanced signal-to-noise ratio is shown to increase the resistance of the network to distractors. (5) Two mechanisms are identified that produce an inverted-U-shaped dependence of persistent activity on modulation. The present study therefore points to several mechanisms that enhance the signal-to-noise ratio in working memory states. These mechanisms could be implemented in the prefrontal cortex by dopaminergic projections from the midbrain.

10.
In this contribution, the advantages of the artificial neural network approach to the identification and control of a laboratory-scale biochemical reactor are demonstrated. In biosystems control it is very important to be able to maintain the levels of two process variables, pH and dissolved oxygen (DO) concentration, over the course of fermentation. A PC-supported, fully automated, multi-task control system has been designed and built by the authors. Forward and inverse neural process models are used to identify and control both the pH and the DO concentration in a fermenter containing a Saccharomyces cerevisiae-based culture. The models are trained off-line, using a modified back-propagation algorithm based on conjugate gradients. The inverse neural controller is augmented by a new adaptive term that results in a system with robust performance. Experimental results have confirmed that the regulatory and tracking performance of the proposed control system is good.

11.
Synchronization of the oscillatory discharge of cortical neurons could be a part of the mechanism that is involved in cortical information processing. On the assumption that the basic functional unit is the column composed of local excitatory and inhibitory cells and generating oscillatory neural activity, a network model that attains associative memory function is proposed. The synchronization of oscillation in the model is studied analytically using a sublattice analysis. In particular, the retrieval of a single memory pattern can be studied in the system, which can be derived from the original network model of interacting columns and is formally equivalent to a system of an isolated column. The network model simulated numerically shows a remarkable performance in which retrieval is achieved simultaneously for more than one memory pattern. The manifestations of this simultaneous retrieval in the network dynamics are successive transitions of the network state from a synchronized oscillation for a memory pattern to that for another memory pattern.

12.
Short-term memory in the brain cannot in general be explained the way long-term memory can – as a gradual modification of synaptic weights – since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.

13.
The default mode network (DMN) is a functional brain network with a unique activity pattern: high activity in resting states but low activity in task states. This pattern has been shown to relate to higher cognitive functions such as learning, memory and decision-making. However, the neural mechanisms of the interactions between the default network and task-related networks are still poorly understood. In this paper, a theoretical model coupling the DMN and a working memory network (WMN) is proposed. The WMN and DMN both consist of excitatory and inhibitory neurons connected by AMPA, NMDA and GABA synapses, and are coupled with each other only by excitatory synapses. The model is used to demonstrate dynamical processes in a working memory task containing encoding, maintenance and retrieval phases. Simulation results show that: (1) AMPA channels can produce significant synchronous oscillations in neuron populations, which is beneficial for changing oscillation patterns in the WMN and DMN. (2) Different NMDA conductances between the networks can generate multiple neural activity modes in the whole network, which may be an important mechanism for switching the networks between the three phases of working memory. (3) The number of sequentially memorized stimuli is related to the energy consumption determined by the network's internal parameters, and the DMN contributes to a more stable working memory process. (4) Finally, the model demonstrates that the three phases of working memory correspond to different functional connections between the DMN and WMN. The coupling strengths that measure these functional connections differ in terms of phase synchronization, and the phase synchronization characteristics are consistent with the negative and positive correlations between the WMN and DMN reported in the referenced fMRI experiments. The results suggest that the coupled interaction between the WMN and DMN plays an important role in working memory. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-021-09674-1.

14.
Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
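One standard way to obtain neuron-scaled (rather than synapse-scaled) memory is a per-neuron circular delay buffer in which coincident deliveries accumulate into a slot instead of being stored as separate events; this generic sketch illustrates the scaling idea and is not necessarily the algorithm introduced in the paper:

```python
class SpikeScheduler:
    """Per-neuron circular buffers of future synaptic input.

    Memory scales as O(n_neurons * max_delay), independent of how many
    synapses target each neuron, because coincident deliveries are summed
    into the same time slot rather than queued individually.
    """
    def __init__(self, n_neurons, max_delay):
        self.size = max_delay + 1
        self.buffers = [[0.0] * self.size for _ in range(n_neurons)]
        self.t = 0

    def schedule(self, target, delay, weight):
        assert 0 < delay < self.size
        self.buffers[target][(self.t + delay) % self.size] += weight

    def advance(self):
        # deliver everything due now, clear the slot, move one step forward
        slot = self.t % self.size
        due = [buf[slot] for buf in self.buffers]
        for buf in self.buffers:
            buf[slot] = 0.0
        self.t += 1
        return due

sched = SpikeScheduler(n_neurons=3, max_delay=5)
sched.schedule(target=1, delay=2, weight=0.5)
sched.schedule(target=1, delay=2, weight=0.25)   # coincident events accumulate
```

Calling `advance()` repeatedly delivers the accumulated 0.75 to neuron 1 exactly two time steps after scheduling, with no per-event storage.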

15.
A neural mechanism for control of the dynamics and function of associative processes in a hierarchical memory system is demonstrated. For the representation and processing of abstract knowledge, the semantic declarative memory system of the human brain is considered. The dynamics control mechanism is based on the influence of neuronal adaptation on the complexity of neural network dynamics. Different dynamical modes correspond to different levels of the ultrametric structure of the hierarchical memory being invoked during an associative process. The mechanism is deterministic but may also underlie free associative thought processes. The formulation of an abstract neural network model of hierarchical associative memory utilizes a recent approach to incorporating neuronal adaptation. It includes a generalized neuronal activation function recently derived from a Hodgkin-Huxley-type model. It is shown that the extent to which a hierarchically organized memory structure is searched is controlled by the neuronal adaptability, i.e. the strength of coupling between neuronal activity and excitability. In the brain, the concentration of various neuromodulators can in turn regulate the adaptability. An autonomously controlled sequence of bifurcations of an associative process, from an initial exploratory to a final retrieval phase, is shown to result from an activity-dependent release of neuromodulators. The dynamics control mechanism may be important in the context of various disorders of the brain and may also extend the range of applications of artificial neural networks. Received: 19 April 1995 / Accepted in revised form: 8 August 1995

16.
Synchronization and Associative Memory in Izhikevich Neural Networks
Associative memory is an important function of the human brain. A neural network is constructed using the Izhikevich neuron model as its nodes, with all-to-all connections between neurons, and the associative memory function of the network is studied within the framework of spatio-temporal coding in neural populations. With Gaussian white noise added, the connection strength between neurons is tuned; when the connection strength and the noise intensity reach a threshold, a subset of neurons in the network fires synchronously, achieving associative recall and restoration of the stored patterns. Simulation results show that the connection strength between neurons plays an important role in the associative memory process, and that noise can promote synchronous firing among neurons, helping the network achieve associative recall and restoration of stored patterns.
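For reference, the Izhikevich model used as the network node can be simulated with simple Euler updates; this single-neuron sketch uses the standard regular-spiking parameter set and omits the network coupling and noise terms studied in the paper:

```python
def izhikevich(I, T=200, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    # regular-spiking parameters; I is a constant input current (dimensionless
    # units as in the original model); returns the list of spike times (ms)
    v, u, spikes = -65.0, -65.0 * b, []
    for t in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: reset v, bump recovery variable
            spikes.append(t * dt)
            v, u = c, u + d
    return spikes

quiet = izhikevich(I=0.0)     # no input → the neuron rests, no spikes
active = izhikevich(I=10.0)   # sustained input → repetitive firing
```

In the network setting, each neuron's input current would additionally include the weighted spikes of all other neurons plus Gaussian white noise, which is what drives the synchronous firing described above.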

17.
To reduce the increasing amount of time spent on literature search in the life sciences, several methods for automated knowledge extraction have been developed. Co-occurrence-based approaches can deal with large text corpora like MEDLINE in an acceptable time but are not able to extract any specific type of semantic relation. Semantic relation extraction methods based on syntax trees, on the other hand, are computationally expensive, and the interpretation of the generated trees is difficult. Several natural language processing (NLP) approaches for the biomedical domain exist that focus specifically on the detection of a limited set of relation types. For systems biology, generic approaches are needed that detect a multitude of relation types and can also process large text corpora, but the number of systems meeting both requirements is very limited. We introduce the use of SENNA (“Semantic Extraction using a Neural Network Architecture”), a fast and accurate neural-network-based Semantic Role Labeling (SRL) program, for the large-scale extraction of semantic relations from the biomedical literature. A comparison of processing times of SENNA and other SRL systems or syntactic parsers used in the biomedical domain revealed that SENNA is the fastest Proposition Bank (PropBank)-conforming SRL program currently available. 89 million biomedical sentences were tagged with SENNA on a 100-node cluster within three days. The accuracy of the presented relation extraction approach was evaluated on two test sets of annotated sentences, resulting in precision/recall values of 0.71/0.43. We show that the accuracy as well as the processing speed of the proposed semantic relation extraction approach is sufficient for large-scale application to biomedical text. The proposed approach is highly generalizable regarding the supported relation types and appears especially suited for general-purpose, broad-scale text mining systems. The presented approach bridges the gap between fast, co-occurrence-based approaches lacking semantic relations and highly specialized, computationally demanding NLP approaches.

18.
A Neural Network Model of Short-Term Memory
A short-term memory neural network model with a pointer loop is proposed. The model comprises two neural networks: one is a content-representation network shared with long-term memory, and the other is a short-term pointer-neuron loop. Because the pointer loop serves only as a temporary pointer to memory content, various short-term memory tasks can be completed with very few storage units. Computer simulations confirm that the model exhibits two basic characteristics of short-term memory: limited storage capacity and chunked coding.

19.
We study the properties of the dynamical phase transition occurring in neural network models in which a competition between associative memory and sequential pattern recognition exists. This competition occurs through a weighted mixture of the symmetric and asymmetric parts of the synaptic matrix. Through a generating functional formalism, we determine the structure of the parameter space at non-zero temperature and near saturation (i.e., when the number of stored patterns scales with the size of the network), identifying the regions of high and weak pattern correlations, the spin-glass solutions, and the order-disorder transition between these regions. This analysis reveals that, when associative memory is dominant, smooth transitions appear between high correlated regions and spurious states. In contrast when sequential pattern recognition is stronger than associative memory, the transitions are always discontinuous. Additionally, when the symmetric and asymmetric parts of the synaptic matrix are defined in terms of the same set of patterns, there is a discontinuous transition between associative memory and sequential pattern recognition. In contrast, when the symmetric and asymmetric parts of the synaptic matrix are defined in terms of independent sets of patterns, the network is able to perform both associative memory and sequential pattern recognition for a wide range of parameter values.
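A toy sketch of mixing a symmetric (associative) and an asymmetric (sequence-generating) Hebbian term in one weight matrix; the orthogonal patterns and mixing weights are illustrative, and the paper's analysis concerns large stochastic networks near saturation rather than this deterministic miniature:

```python
def outer_sum(pairs):
    # sum of outer products post * pre^T over the given (post, pre) pairs
    n = len(pairs[0][0])
    w = [[0.0] * n for _ in range(n)]
    for post, pre in pairs:
        for i in range(n):
            for j in range(n):
                w[i][j] += post[i] * pre[j]
    return w

def step(w, s):
    # one synchronous sign update
    return [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
            for i in range(len(s))]

p = [[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1]]   # mutually orthogonal
S = outer_sum([(q, q) for q in p])                   # symmetric: p_mu * p_mu^T
A = outer_sum([(p[1], p[0]), (p[2], p[1])])          # asymmetric: p_{mu+1} * p_mu^T

def mix(lam):   # lam weights the symmetric part
    return [[lam * S[i][j] + (1 - lam) * A[i][j] for j in range(4)]
            for i in range(4)]

seq_next = step(mix(0.1), p[0])   # asymmetric dominant → advances to p[1]
fixed    = step(mix(0.9), p[0])   # symmetric dominant → p[0] is a fixed point
```

Sweeping the mixing weight between these two regimes is, in miniature, the competition between sequential pattern recognition and associative memory that the abstract studies.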

20.
The paper presents a novel memory-based Self-Generated Basis Function Neural Network (SGBFN) that is composed of small CMACs. The SGBFN requires much smaller memory space than the conventional CMAC and has an excellent learning convergence property compared to multilayer neural networks. Each CMAC in the new structure takes a subset of problem inputs as its inputs. Several CMACs that have different subsets of inputs form a submodule and a group of submodules form a neural network. The output of a submodule is the product of its CMACs' outputs. Each submodule implements a self-generated basis function, which is developed during the learning. The output of the neural network is the sum of the outputs from the submodules. Using only a subset of inputs in each CMAC significantly reduces the required memory space in high-dimensional modeling. With the same size of memory, the new structure is able to achieve a much smaller learning error compared to the conventional CMAC.
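A minimal 1-D CMAC sketch showing the overlapping-tilings idea that SGBFN builds on; the structure and parameters below are illustrative of a plain CMAC, not the SGBFN architecture itself:

```python
class CMAC:
    # 1-D CMAC: several offset quantizations (tilings) share the approximation;
    # a sparse dict stands in for the (hashed) weight memory
    def __init__(self, n_tilings=4, resolution=0.25, lr=0.3):
        self.n_tilings, self.res, self.lr = n_tilings, resolution, lr
        self.weights = {}                  # (tiling, tile_index) -> weight

    def _tiles(self, x):
        # each tiling is shifted by a fraction of the tile width
        return [(k, int((x + k * self.res / self.n_tilings) / self.res))
                for k in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.weights.get(t, 0.0) for t in self._tiles(x))

    def train(self, x, target):
        # LMS update: spread the correction evenly over the active tiles
        err = target - self.predict(x)
        for t in self._tiles(x):
            self.weights[t] = (self.weights.get(t, 0.0)
                               + self.lr * err / self.n_tilings)

net = CMAC()
for _ in range(50):
    for x, y in [(0.1, 1.0), (0.9, 2.0)]:
        net.train(x, y)
# the table converges on the training points; overlapping tiles generalize
# to nearby inputs
```

In the SGBFN each such CMAC sees only a subset of the input dimensions, and a submodule multiplies its CMACs' outputs, which is what shrinks the memory footprint in high-dimensional modeling.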


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号