Similar Documents
20 similar documents retrieved; search time: 142 ms
1.
It is still unclear how information is actually stored in biological neural networks. We propose here that information could first be orthogonalized and then stored, in a manner similar to how a set of vectors is transformed into a set of orthogonal (i.e. mutually perpendicular) vectors. Orthogonalization may overcome the limits of conventional artificial networks, particularly catastrophic interference, the abrupt forgetting caused by overlap between stored inputs. The features needed to perform orthogonalization are common to biological networks, suggesting that it may be a widespread network mechanism. To illustrate this hypothesis, we characterize the underlying features that an archetypal biological network must have in order to perform orthogonalization, and point out that a number of actual networks show this archetypal organization.
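By way of illustration, the following is a minimal NumPy sketch of the vector-orthogonalization step described above, using classical Gram-Schmidt. The biological implementation the authors propose is not modeled here; all names and dimensions are illustrative.

```python
import numpy as np

def gram_schmidt(patterns):
    """Orthogonalize a set of row vectors; near-dependent inputs are dropped."""
    basis = []
    for p in patterns:
        v = p.astype(float)
        for b in basis:
            v = v - np.dot(v, b) * b        # remove the component along b
        norm = np.linalg.norm(v)
        if norm > 1e-10:                    # keep only the independent part
            basis.append(v / norm)
    return np.array(basis)

rng = np.random.default_rng(0)
inputs = rng.standard_normal((4, 8))        # four "input patterns", 8 units
stored = gram_schmidt(inputs)
print(np.round(stored @ stored.T, 6))       # ~identity: no mutual overlap
```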

2.
Connectionist models of memory storage have been studied for many years with the aim of providing insight into potential mechanisms of memory storage in the brain. A problem faced by these systems is that as the number of items to be stored across a finite set of neurons/synapses increases, the cumulative changes in synaptic weight eventually lead to a sudden and dramatic loss of the stored information (catastrophic interference, CI), as the previous changes in synaptic weight are effectively overwritten. This effect does not occur in the brain, where information loss is gradual. Various attempts have been made to overcome CI, but these generally use schemes that impose restrictions on the system or its inputs rather than allowing the system to cope intrinsically with increasing storage demands. We show here that CI arises from interference among stored patterns once their number exceeds a critical limit. However, when Gram-Schmidt orthogonalization is combined with the Hebb-Hopfield model, the model becomes able to eliminate CI. This approach differs from previous orthogonalization schemes used in connectionist networks, which essentially amount to sparse coding of the input. Here CI is avoided in a network of fixed size without limiting the rate or number of patterns encoded, and without separating encoding from retrieval, thus allowing associations between incoming and stored patterns. PACS Nos.: 87.10.+e, 87.18.Bb, 87.18.Sn, 87.19.La
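The effect can be made concrete with a small sketch (not the authors' code): QR decomposition stands in for Gram-Schmidt, and the usual Hopfield zero-diagonal and sign nonlinearity are omitted so that the disappearance of the crosstalk term is easy to see.

```python
import numpy as np

rng = np.random.default_rng(1)
raw = np.sign(rng.standard_normal((6, 64)))     # six +/-1 patterns, 64 units

Q = np.linalg.qr(raw.T)[0].T                    # Gram-Schmidt-equivalent rows

W_raw = raw.T @ raw / raw.shape[1]              # plain Hebbian storage
W_orth = Q.T @ Q                                # Hebbian storage after G-S

err_raw = np.abs(W_raw @ raw.T - raw.T).max()   # crosstalk between patterns
err_orth = np.abs(W_orth @ Q.T - Q.T).max()     # projector: recall is exact
print(f"raw Hebbian recall error:     {err_raw:.3f}")
print(f"orthogonalized recall error:  {err_orth:.1e}")
```

Because the orthonormalized patterns make the Hebbian weight matrix a projector, recall of every stored pattern is exact regardless of how many (up to the dimensionality) are stored.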

3.
Perception of objects and motions in the visual scene is one of the basic problems of the visual system. 'What' and 'Where' pathways exist in the higher visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture, while the latter perceives 'where', for example, the velocity and direction of the spatial movement of objects. This paper explores brain-like computational architectures for visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first and second layers, i.e. the receptive fields of the neurons, are learned self-adaptively based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as a measure of independence between neural responses and derive a learning algorithm that minimizes the resulting cost function. The algorithm is applied to train the basis functions, namely the receptive fields, which come out localized, oriented, and bandpass, resembling the characteristics of simple cells in the primary visual cortex. On top of these basis functions we construct the third layer, which performs the 'what' and 'where' perception of the higher visual cortex. The proposed model perceives objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the efficiency of the learning algorithm.
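A rough sketch of the second-layer learning principle, in the spirit of Olshausen-Field sparse coding: random vectors stand in for natural-image patches, and a Cauchy log-prior on the coefficients stands in for the paper's KL-divergence-based independence cost. Both substitutions are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, lam, eta = 64, 32, 0.1, 0.01
Phi = rng.standard_normal((D, K))
Phi /= np.linalg.norm(Phi, axis=0)              # columns = receptive fields

def infer(x, Phi, steps=50, lr=0.1):
    """Gradient inference of coefficients a for the cost
       0.5*||x - Phi a||^2 + lam * sum(log(1 + a^2))."""
    a = np.zeros(Phi.shape[1])
    for _ in range(steps):
        grad = -Phi.T @ (x - Phi @ a) + lam * 2 * a / (1 + a ** 2)
        a -= lr * grad
    return a

for _ in range(200):                            # loop over "image patches"
    x = rng.standard_normal(D)
    a = infer(x, Phi)
    Phi += eta * np.outer(x - Phi @ a, a)       # Hebbian-like basis update
    Phi /= np.linalg.norm(Phi, axis=0)          # keep basis norms bounded
```

On real whitened image patches, this style of update is what yields the localized, oriented, bandpass basis functions the abstract describes; on the random data used here it merely demonstrates the mechanics.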

4.
The aim of this paper is to propose an interdisciplinary, evolutionary-connectionist approach to the study of the evolution of modularity. It is argued that neural networks, as models of the nervous system, and genetic algorithms, as simulation models of biological evolution, allow us to formulate a clear and operational definition of module and to simulate the different evolutionary scenarios proposed for the origin of modularity. I present a recent model in which the evolution of primate cortical visual streams is possible starting from non-modular neural networks. Simulation results not only confirm the existence of neural interference in non-modular network architectures but also reveal, for the first time, another kind of interference at the genetic level, i.e. genetic interference, a new population-genetic mechanism that is independent of network architecture. Our simulations clearly show that genetic interference reduces the evolvability of visual neural networks and that sexual reproduction can at least partially solve the problem of genetic interference. Finally, it is shown that entrusting the task of finding the network architecture to evolution and that of finding the connection weights to learning completely avoids the problem of genetic interference. On the basis of this evidence, it is possible to formulate a new hypothesis on the origin of structural modularity, and thus to overcome the traditional dichotomy between innatist and empiricist theories of mind.

5.
Previous studies have shown that newly encoded memories are more resistant to retroactive interference when participants are allowed to sleep after learning the original material, suggesting a sleep-related strengthening of memories. In the present study, we investigated delayed, long-term effects of sleep vs. sleep deprivation (SD) on the first post-training night on memory consolidation and resistance to interference. On day 1, participants learned a list of unrelated word pairs (AB), either in the morning or in the evening, then spent the post-training night in a sleep or sleep-deprivation condition, in a within-subject paradigm. On day 4, at the same time of day, they learned a novel list of word pairs (AC) in which 50% of the pairs began with the same cue word as in the AB list, producing retroactive interference. Participants then had to recall items from the AB list upon presentation of the "A" cue. Recall was marginally better in the evening learning group than in the morning group. Most importantly, retroactive interference effects were found in the evening sleep group only, contrary to the hypothesis that sleep protects against intrusion by novel but similar learning. We tentatively suggest that these results can be explained within the framework of memory reconsolidation theory, which states that exposure to similar information sets consolidated items back into a labile form that is again sensitive to retroactive interference. In this context, sleep might not protect against interference but would promote an update of existing episodic memories while preventing saturation of the memory network due to the accumulation of dual traces.

6.
We investigate the memory structure and retrieval of the brain and propose a hybrid neural network combining addressable and content-addressable memory; it is a special database model that can memorize and retrieve any piece of information (a binary pattern) both addressably and content-addressably. The architecture of this hybrid neural network is hierarchical and takes the form of a tree of slabs consisting of binary neurons arranged in identical arrays. Simplex memory neural networks serve as the slabs of basic memory units, distributed on the terminal vertices of the tree. Theoretical analysis shows that the hybrid neural network can be constructed with Hebbian and competitive learning rules, and that other important characteristics of its learning and memory behavior are consistent with those of the brain. Moreover, we demonstrate the hybrid neural network on a set of ten binary numeral patterns.
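As a toy illustration of the two access modes being combined (not the tree-of-slabs architecture itself), a single slab of binary units can be read both by index and by content:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.sign(rng.standard_normal((4, 32)))       # four stored +/-1 patterns

def recall_by_address(k):
    return P[k]                                 # addressable: direct indexing

def recall_by_content(cue, steps=5):
    """Content-addressable cleanup via a Hopfield-style Hebbian network."""
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    s = cue.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                         # break ties deterministically
    return s

noisy = P[2] * np.sign(rng.standard_normal(32) + 1.5)   # flip ~7% of bits
print(np.array_equal(recall_by_content(noisy), P[2]))   # usually True
```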

7.
A template-matching model for pattern recognition is proposed. Following a previously proposed algorithm for synaptic modification (Hirai, 1980), the template of a stimulus pattern is self-organized as a spatial distribution of matured synapses on the cells receiving modifiable synapses. Template matching is performed by a disinhibitory neural network cascaded beyond the neural layer composed of the cells receiving the modifiable synapses. The performance of the model has been simulated on a digital computer. After repeated presentations of a stimulus pattern, a cell receiving the modifiable synapses comes to carry the template of that pattern, and the cell in the latter layer of the disinhibitory network that receives disinhibitory input from it becomes selectively sensitive to that pattern. Learned patterns are not restricted by previously learned ones: they can be subsets or supersets of patterns learned earlier. If an unknown pattern is presented to the model, no cell beyond the disinhibitory network responds. However, if previously learned patterns are embedded in that pattern, the cells holding the templates of those patterns respond and are assumed to transmit the information to higher centers. The computer simulation also shows that the model can organize a clean template in a noisy environment.

8.
Anterograde interference emerges when two differing tasks are learned in close temporal proximity, an effect repeatedly attributed to competition between the memories of the two tasks. However, recent work alternatively suggests that initial learning may trigger a refractory period that occludes neuroplasticity and impairs subsequent learning, thereby mediating interference independently of memory competition. Accordingly, this study tested the hypothesis that interference can emerge when the same motor task is learned twice, that is, when competition between memories is prevented. In a first experiment, the inter-session interval (ISI) between two identical motor learning sessions was set to 2 min, 1 h or 24 h. Results revealed that retention of the second session was impaired compared to the first when the ISI was 2 min, but not when it was 1 h or 24 h, indicating a time-dependent process. A second experiment replicated these results and revealed that adding a third motor learning session with a 2 min ISI further impaired retention, indicating a dose-dependent process. A third experiment revealed that the retention impairments did not occur when a learning session was preceded by simple rehearsal of the motor task without concurrent learning, ruling out fatigue and confirming that retention is impaired specifically when preceded by a learning session. Altogether, the present results suggest that memory competition is not the sole mechanism mediating anterograde interference and introduce the possibility that a time- and dose-dependent refractory period, independent of fatigue, also contributes to its emergence. One possibility is that learning transiently perturbs the homeostasis of learning-related neuronal substrates; introducing additional learning while homeostasis is still perturbed may impair not only performance improvements but also memory formation.

9.
Avoiding toxins in food is as important as obtaining nutrition. Conditioned food aversions have been studied in animals as diverse as nematodes and humans [1, 2], but the neural signaling mechanisms underlying this form of learning have been difficult to pinpoint. Honeybees quickly learn to associate floral cues with food [3], a trait that makes them an excellent model organism for studying the neural mechanisms of learning and memory. Here we show that honeybees not only detect toxins but can also learn to associate odors with both the taste of toxins and the postingestive consequences of consuming them. We found that two distinct monoaminergic pathways mediate learned food aversions in the honeybee. As for other insect species conditioned with salt or electric shock reinforcers [4-7], learned avoidances of odors paired with bad-tasting toxins are mediated by dopamine. Our experiments are the first to identify a second, postingestive pathway for learned olfactory aversions that involves serotonin. This second pathway may represent an ancient mechanism for food aversion learning conserved across animal lineages.

10.
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial-intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity should intuitively reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost on neural connections. We show that this connection-cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance: they learn new skills faster while retaining old skills better, and they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
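The neuromodulation mechanism can be sketched as a reward signal that gates a Hebbian update for each connection; the tiny network below is a placeholder, not one of the paper's evolved architectures.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, eta = 8, 4, 0.1
W = 0.1 * rng.standard_normal((n_out, n_in))    # one plastic module

def forward_and_update(x, reward):
    """Hebbian change gated by a modulatory signal m (here, the reward)."""
    y = np.tanh(W @ x)
    m = reward                                  # m = 0 freezes the weights
    dW = eta * m * np.outer(y, x)               # dW = eta * m * post * pre
    return y, dW

x = rng.standard_normal(n_in)
y, dW = forward_and_update(x, reward=1.0)       # learning "on"
W += dW
y, dW = forward_and_update(x, reward=0.0)       # learning "off"
W += dW                                         # no change: dW is all zeros
```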

11.
A priori knowledge of secondary structure content can be of great use in theoretical and experimental determination of protein structure. We present a method that uses two computer-simulated neural networks placed in "tandem" to predict the secondary structure content of water-soluble, globular proteins. The first of the two networks, NET1, predicts a protein's helix and strand content given information about the protein's amino acid composition, molecular weight and heme presence. Because NET1 contained more adjustable parameters (network weights) than learning examples, it suffered from memorization, the inability to generalize to new, never-before-seen examples. To overcome this problem, we designed a second network, NET2, which learned to determine when NET1 was in a state of generalization. Together, these two networks produce prediction errors as low as 5.0% and 5.6% for helix and strand content, respectively, on a set of protein crystal structures bearing little homology to those used in network training. A comparison with three other methods, including multiple linear regression, a network without hidden nodes, and a secondary structure assignment analysis, shows that our tandem neural network scheme is the best method for predicting secondary structure content. The results of our analysis suggest that sequence information is not necessary for highly accurate predictions of protein secondary structure content.
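A schematic of the tandem arrangement on synthetic data, with linear least squares standing in for both multilayer networks: a stand-in "NET1" regresses helix and strand content from composition-like features, and a stand-in "NET2" is fitted to flag inputs on which NET1's held-out error is small. Everything here, including the feature set, is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 20))                       # composition-like features
W_true = 0.1 * rng.random((20, 2))
Y = X @ W_true + 0.02 * rng.standard_normal((200, 2))   # helix, strand content

W1 = np.linalg.lstsq(X[:100], Y[:100], rcond=None)[0]   # "NET1": content
err = np.abs(X[100:] @ W1 - Y[100:]).sum(axis=1)        # held-out errors
ok = (err < np.median(err)).astype(float)               # 1 = generalizing
W2 = np.linalg.lstsq(X[100:], ok[:, None], rcond=None)[0]  # "NET2": trust

x_new = rng.random(20)
content = x_new @ W1                            # predicted helix/strand
trust = float(x_new @ W2)                       # crude confidence score
print(content, trust)
```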

12.
Dopaminergic neuron activity has been modeled during learning and appetitive behavior, most commonly using the temporal-difference (TD) algorithm. However, a proper representation of elapsed time and of the exact task is usually required for the model to work. Most models use timing elements such as delay-line representations of time that are not biologically realistic for intervals in the range of seconds. The interval-timing literature provides several alternatives. One of them is that timing could emerge from general network dynamics, instead of coming from a dedicated circuit. Here, we present a general rate-based learning model based on long short-term memory (LSTM) networks that learns a time representation when needed. Using a naïve network learning its environment in conjunction with TD, we reproduce dopamine activity in appetitive trace conditioning with a constant CS-US interval, including probe trials with unexpected delays. The proposed model learns a representation of the environment dynamics in an adaptive biologically plausible framework, without recourse to delay lines or other special-purpose circuits. Instead, the model predicts that the task-dependent representation of time is learned by experience, is encoded in ramp-like changes in single-neuron activity distributed across small neural networks, and reflects a temporal integration mechanism resulting from the inherent dynamics of recurrent loops within the network. The model also reproduces the known finding that trace conditioning is more difficult than delay conditioning and that the learned representation of the task can be highly dependent on the types of trials experienced during training. Finally, it suggests that the phasic dopaminergic signal could facilitate learning in the cortex.
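For concreteness, here is the textbook TD(0) computation of the dopamine-like prediction error in trace conditioning, using exactly the kind of fixed time-since-CS state representation that the paper argues against and replaces with an LSTM-learned one:

```python
import numpy as np

N, gamma, alpha = 15, 0.98, 0.1       # time steps from CS onset
us = 10                               # reward arrives 10 steps after the CS
V = np.zeros(N + 1)                   # V[t]: predicted value t steps post-CS

for trial in range(2000):
    for t in range(N):
        r = 1.0 if t == us else 0.0
        V[t] += alpha * (r + gamma * V[t + 1] - V[t])   # TD(0) update

delta_cs = V[0] - 0.0                             # jump from pre-CS baseline
delta_us = 1.0 + gamma * V[us + 1] - V[us]        # ~0: reward fully predicted
print(f"phasic signal at CS: {delta_cs:.2f}, at US: {delta_us:.2f}")
```

After training, the prediction error is large at CS onset and vanishes at the now-predicted reward, the classic dopamine signature that the paper reproduces with a learned, rather than hand-built, time representation.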

13.
Crook N, Goh WJ, Hawarat M. Bio Systems 2007, 87(2-3): 267-274
This research investigates the potential utility of chaotic dynamics in neural information processing. A novel chaotic spiking neural network model is presented which is composed of non-linear dynamic state (NDS) neurons. The activity of each NDS neuron is driven by a set of non-linear equations coupled with a threshold based spike output mechanism. If time-delayed self-connections are enabled then the network stabilises to a periodic pattern of activation. Previous publications of this work have demonstrated that the chaotic dynamics which drive the network activity ensure that an extremely large number of such periodic patterns can be generated by this network. This paper presents a major extension to this model which enables the network to recall a pattern of activity from a selection of previously stabilised patterns.

14.
One symbolic (rule-based inductive learning) and one connectionist (neural network) machine learning technique were used to reconstruct muscle activation patterns from kinematic data measured during normal human walking at several speeds. The activation patterns (or desired outputs) consisted of surface electromyographic (EMG) signals from the semitendinosus and vastus medialis muscles. The inputs consisted of flexion and extension angles measured at the hip and knee of the ipsilateral leg, their first and second derivatives, and bilateral foot contact information. The training set consisted of data from six trials, at two different speeds. The testing set consisted of data from two additional trials (one at each speed), which were not in the training set. It was possible to reconstruct the muscular activation at both speeds using both techniques. Timing of the reconstructed signals was accurate. The integrated value of the activation bursts was less accurate. The neural network gave a continuous output, whereas the rule-based inductive learning rule tree gave a quantised activation level. The advantage of rule-based inductive learning was that the rules used were both explicit and comprehensible, whilst the rules used by the neural network were implicit within its structure and not easily comprehended. The neural network was able to reconstruct the activation patterns of both muscles from one network, whereas two separate rule sets were needed for the rule-based technique. It is concluded that machine learning techniques, in comparison to explicit inverse muscular skeletal models, show good promise in modelling nearly cyclic movements such as locomotion at varying walking speeds. However, they do not provide insight into the biomechanics of the system, because they are not based on the biomechanical structure of the system.

15.
Regularities in the environment are accessible to an autonomous agent as reproducible relations between actions and perceptions and can be exploited by unsupervised learning. Our approach is based on the possibility of performing and verifying predictions about the perceivable consequences of actions. It is implemented as a three-layer neural network that combines predictive perception, internal-state transitions and action selection into a loop which closes via the environment. In addition to minimizing prediction errors, the goal of network adaptation also includes an optimization of the minimization rate, such that new behaviors are favored over already learned ones, for which the improvement in predictability is vanishing. Previously learned behaviors are reactivated or continued if triggering stimuli are available and an external or other reward overcompensates the decay of the learning rate. In the model, behavior learning and learning behavior are brought about by the same mechanism, namely the drive to continuously experience learning success. Behavior learning comprises the representation and storage of learned behaviors and finally their inhibition, so that further exploration of the environment is possible. Learning behavior, in contrast, detects the frontiers of the manifold of learned behaviors and provides estimates of the learnability of behaviors leading beyond the field of expertise. The network module has been implemented in a Khepera miniature robot. We also consider hierarchical architectures consisting of several modules in one agent, as well as groups of several agents controlled by such networks.

16.
A short-term memory neural network model with competing pointers
Building on the short-term memory neural network model we proposed previously [3], we introduce a synaptic competition mechanism and present a new short-term memory model. The model still consists of two neural networks: an information-content representation network shared with long-term memory, and a loop of pointer neurons. Competition among the synaptic weights between representation neurons and pointer neurons allows the model to exhibit interference-induced forgetting of short-term memories. The model was simulated on a computer for two psychological experiments: the serial-position effect in free recall and the chunking of Chinese characters. The simulations show that the model's behavior agrees quantitatively with both experiments, indicating that the present model is better suited as a model of short-term memory.

17.
Kurikawa T, Kaneko K. PLoS ONE 2011, 6(3): e17432
Learning is a process that helps create neural dynamical systems so that an appropriate output pattern is generated for a given input. Often, such a memory is considered to reside in one of the attractors of the neural dynamical system, selected by the initial neural state that an input specifies. Neither the neural activity observed in the absence of inputs nor the changes in activity caused when an input is provided were studied extensively in the past. However, recent experimental studies have reported the existence of structured spontaneous neural activity and of its changes when an input is provided. With this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon application of an input, a phenomenon known as bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, I/O relations are successively memorized when the difference between the time scales is appropriate. After the learning process is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns increases, the spontaneous neural activity generated after learning itinerates over the previously learned output patterns. This theoretical finding shows remarkable agreement with recent experimental reports in which spontaneous activity in the visual cortex, in the absence of stimuli, itinerates over patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity can be a natural outcome of successive learning of several patterns, and that it facilitates bifurcation of the network when an input is provided.
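A toy sketch (not the authors' equations) of the two-time-scale idea: a fast, labile synaptic component driven by current pre/post activity, and a slow component that gradually consolidates it.

```python
import numpy as np

rng = np.random.default_rng(0)
eta_fast, eta_slow, decay = 0.5, 0.02, 0.1
w_fast, w_slow = 0.0, 0.0

for t in range(300):
    pre = rng.choice([0.0, 1.0])                # presynaptic activity
    post = pre                                  # a consistently paired output
    w_fast += eta_fast * pre * post - decay * w_fast   # rapid, labile trace
    w_slow += eta_slow * (w_fast - w_slow)      # slow consolidation of w_fast

w_effective = w_fast + w_slow                   # what the neuron actually uses
print(f"fast: {w_fast:.2f}, slow: {w_slow:.2f}")
```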

18.
Animals with rudimentary innate abilities require substantial learning to transform those abilities into useful skills, where a skill can be considered as a set of sensory–motor associations. Using linear neural network models, it is proved that if skills are stored as distributed representations, then within-lifetime learning of part of a skill can induce automatic learning of the remaining parts of that skill. More importantly, it is shown that this "free-lunch" learning (FLL) is responsible for accelerated evolution of skills, when compared with networks which either 1) cannot benefit from FLL or 2) cannot learn. Specifically, it is shown that FLL accelerates the appearance of adaptive behaviour, both in its innate form and as FLL-induced behaviour, and that FLL can accelerate the rate at which learned behaviours become innate.
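The FLL effect itself is easy to reproduce in a linear network when the associations share structure. In the sketch below the inputs span a low-dimensional subspace (an assumption chosen for clarity, not the paper's exact setup); retraining on half of the damaged skill then also restores the untrained half.

```python
import numpy as np

rng = np.random.default_rng(0)
basis = rng.standard_normal((20, 3))
X = basis @ rng.standard_normal((3, 10))        # 10 inputs in a 3-D subspace
W_skill = rng.standard_normal((5, 20))
Y = W_skill @ X                                 # the full sensory-motor skill

W = W_skill + 0.5 * rng.standard_normal(W_skill.shape)  # damaged weights
Xa, Ya = X[:, :5], Y[:, :5]                     # relearn associations 1-5 only
W = W + (Ya - W @ Xa) @ np.linalg.pinv(Xa)      # minimal-change refit, i.e.
                                                # what gradient descent finds
err = np.abs(W @ X[:, 5:] - Y[:, 5:]).max()
print(f"error on the never-retrained associations: {err:.1e}")   # ~0
```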

19.
The interplay between hippocampus and prefrontal cortex (PFC) is fundamental to spatial cognition. Complementing hippocampal place coding, prefrontal representations provide more abstract and hierarchically organized memories suitable for decision making. We model a prefrontal network mediating distributed information processing for spatial learning and action planning. Specific connectivity and synaptic adaptation principles shape the recurrent dynamics of the network arranged in cortical minicolumns. We show how the PFC columnar organization is suitable for learning sparse topological-metrical representations from redundant hippocampal inputs. The recurrent nature of the network supports multilevel spatial processing, allowing structural features of the environment to be encoded. An activation diffusion mechanism spreads the neural activity through the column population leading to trajectory planning. The model provides a functional framework for interpreting the activity of PFC neurons recorded during navigation tasks. We illustrate the link from single unit activity to behavioral responses. The results suggest plausible neural mechanisms subserving the cognitive "insight" capability originally attributed to rodents by Tolman & Honzik. Our time course analysis of neural responses shows how the interaction between hippocampus and PFC can yield the encoding of manifold information pertinent to spatial planning, including prospective coding and distance-to-goal correlates.
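A toy version of the activation-diffusion planning step: activity injected at the goal unit spreads through a learned topological map, and the agent greedily ascends the resulting gradient. The grid graph and all parameters are placeholders rather than the paper's PFC column model.

```python
import numpy as np

n = 5                                           # 5x5 grid of "columns"

def idx(r, c):
    return r * n + c

A = np.zeros((n * n, n * n))                    # adjacency of the learned map
for r in range(n):
    for c in range(n):
        if c + 1 < n:
            A[idx(r, c), idx(r, c + 1)] = A[idx(r, c + 1), idx(r, c)] = 1.0
        if r + 1 < n:
            A[idx(r, c), idx(r + 1, c)] = A[idx(r + 1, c), idx(r, c)] = 1.0

goal, pos = idx(4, 4), idx(0, 0)
act = np.zeros(n * n)
for _ in range(50):                             # diffuse activity from goal
    act = np.maximum(0.9 * (A @ act) / 4.0, act)
    act[goal] = 1.0                             # goal column stays clamped

path = [pos]
while pos != goal:                              # greedy ascent of activation
    nbrs = np.flatnonzero(A[pos])
    pos = int(nbrs[np.argmax(act[nbrs])])
    path.append(pos)
print(path)                                     # a shortest path to the goal
```

Because the diffused activation decays monotonically with graph distance from the goal, greedy ascent traces out a shortest route, which is the essence of planning by activation spreading.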
