Similar Articles
 20 similar articles found
1.
A Neural Network Model of Short-Term Memory   Cited by: 2 (self-citations: 1, other citations: 1)
We propose a short-term memory neural network model with a pointer loop. The model contains two neural networks: one is a content-representation network shared with long-term memory storage; the other is a loop of short-term pointer neurons. Because the pointer loop serves only as a temporary pointer to memory content, a wide variety of short-term memory tasks can be completed with very few storage units. Computer simulations confirm that the model indeed exhibits two basic characteristics of short-term memory: limited storage capacity and chunked encoding.
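The pointer idea above can be caricatured in a few lines (my own sketch, not the authors' model; `PointerSTM`, `CAPACITY`, and all values are hypothetical): a small fixed pool of pointer slots indexes content held in a shared long-term store, so short-term capacity is bounded by the number of pointers, not by the size of the content network.

```python
CAPACITY = 4  # hypothetical number of pointer neurons in the loop

class PointerSTM:
    def __init__(self, capacity=CAPACITY):
        self.content = {}      # shared long-term content store: id -> pattern
        self.pointers = []     # short-term pointer loop (FIFO of content ids)
        self.capacity = capacity

    def store(self, item_id, pattern):
        self.content[item_id] = pattern   # content lives in the shared network
        if item_id in self.pointers:
            self.pointers.remove(item_id)
        self.pointers.append(item_id)     # the loop holds only a reference
        if len(self.pointers) > self.capacity:
            self.pointers.pop(0)          # oldest pointer is overwritten

    def recall(self):
        return [self.content[i] for i in self.pointers]

stm = PointerSTM()
for i, p in enumerate(["A", "B", "C", "D", "E"]):
    stm.store(i, p)
print(stm.recall())  # only the last CAPACITY items remain recallable
```

Storing a sixth item would again evict the oldest pointer, while the content itself stays in the long-term store.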

2.
Neural model of the genetic network   Cited by: 4 (self-citations: 0, other citations: 4)
Many cell control processes consist of networks of interacting elements that affect each other's state over time. Such an arrangement resembles the principles of artificial neural networks, in which the state of a particular node depends on the combination of the states of other neurons. The lambda bacteriophage lysis/lysogeny decision circuit can be represented by such a network and is used here as a model for testing the validity of a neural approach to the analysis of genetic networks. The model considers multigenic regulation including positive and negative feedback. It is used to simulate the dynamics of the lambda phage regulatory system, and the results are compared with experimental observations. The comparison shows that the neural network model describes the behavior of the system in full agreement with experiments; moreover, it predicts the system's function in experimentally inaccessible situations and explains the experimental observations. Applying the principles of neural networks to the cell control system leads to conclusions about the stability and redundancy of genetic networks and cell functionality. Reverse engineering of biochemical pathways from proteomics and DNA microarray data using the suggested neural network model is discussed.
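The core analogy can be sketched in a few lines (a toy of my own, loosely inspired by the lambda cI/cro switch, not the paper's actual model; weights and biases are invented): genes are threshold units, and cross-repression between two regulators yields a bistable decision circuit.

```python
import numpy as np

# Two genes as threshold "neurons" with mutual repression (hypothetical weights)
W = np.array([[ 0.0, -2.0],   # gene 0 is repressed by gene 1
              [-2.0,  0.0]])  # gene 1 is repressed by gene 0
b = np.array([1.0, 1.0])      # basal activation for both genes

def step(x):
    # Neural-network-style synchronous update: active iff net input > 0
    return (W @ x + b > 0).astype(float)

x = np.array([1.0, 0.0])      # start with gene 0 expressed
for _ in range(10):
    x = step(x)
print(x)  # settles into a stable state: one gene on, the other off
```

Starting from the opposite state `[0, 1]` locks in the other gene, which is the bistability that makes such a circuit a decision switch.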

3.
Double-layer neural networks with mutually inhibiting interconnections are analyzed using a continuous-variable model of the neuron. The first layer consists of excitatory neurons while the second layer consists of inhibitory neurons. Both feedforward and feedback interconnections exist between the two layers. An autonomous system of nonlinear differential equations is introduced to describe the network dynamics, and the stability conditions for some classes of equilibria are investigated in detail. Several simulation results are also presented. It is shown that even those networks which are formed with rather powerless synapses are capable of carrying out input pattern sharpening, temporary information storage, and periodic signal generation.
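A minimal numerical sketch of such a continuous-variable excitatory/inhibitory pair (my own illustration with invented weights, not the paper's equations) integrates the coupled nonlinear ODEs with Euler's method:

```python
import math

def f(u):
    # Continuous-variable neuron activation (sigmoid)
    return 1.0 / (1.0 + math.exp(-u))

E, I = 0.1, 0.0   # excitatory and inhibitory unit activities
dt = 0.05         # Euler step
trace = []
for _ in range(2000):
    dE = -E + f(12 * E - 10 * I + 0.5)  # E excites itself, I inhibits E (feedback)
    dI = -I + f(10 * E - 2 * I - 1.0)   # E drives I (feedforward), I self-inhibits
    E += dt * dE
    I += dt * dI
    trace.append(E)

# Inspect the tail of the trajectory; depending on the weights such a pair
# settles to an equilibrium or generates a periodic signal
print(min(trace[-400:]), max(trace[-400:]))
```

Because the activation is bounded in (0, 1) and the leak term `-E` pulls each unit toward its sigmoid drive, the trajectory stays bounded regardless of the weight choice.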

4.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed-point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large-N and sparse coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), and (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
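The classic Willshaw model mentioned in scenario (1) is simple enough to sketch directly (illustrative parameter choices of my own, not the optimized values from the paper): binary synapses are set by Hebbian clipping, and recall thresholds at the number of active cue units.

```python
import numpy as np

rng = np.random.default_rng(0)
N, f, P = 200, 0.05, 20                    # neurons, coding level, stored patterns
patterns = (rng.random((P, N)) < f).astype(np.uint8)

W = np.zeros((N, N), dtype=np.uint8)       # binary synaptic matrix
for p in patterns:
    W |= np.outer(p, p)                    # Hebbian clipping: a synapse is set once and stays 1

cue = patterns[0]                          # recall with a stored pattern as the cue
k = cue.sum()                              # threshold = number of active cue units
recalled = (W @ cue >= k).astype(np.uint8)
print((recalled >= patterns[0]).all())     # every stored unit is recovered exactly;
                                           # at sparse coding, spurious activations are rare
```

Errors of omission are impossible in this scheme (each stored unit receives exactly `k` set synapses from its own pattern); capacity is limited instead by spurious activations, which accumulate as more patterns fill the binary matrix.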

5.
We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamics regime, the so-called edge of chaos. The reasons, however, for this maximized performance are not completely understood. We adopt an information-theoretical framework and are for the first time able to quantify the computational capabilities between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized ways of improving performance in recurrent neural networks, driven by input data. Moreover, the networks we study share important features with biological systems such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which was shown to also operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on reasons why biological systems are tuned into this specific regime.
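The stable/unstable transition itself is easy to demonstrate (an illustrative sketch of my own, not the paper's information-theoretic analysis): a random recurrent tanh network is driven through the order-chaos transition by scaling a gain `g`, and the fate of a tiny perturbation distinguishes the two regimes.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
J = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # random recurrent weight matrix

def divergence(g, steps=200, eps=1e-6):
    """Distance between two initially near-identical trajectories after `steps`."""
    x = rng.normal(0, 1, N)
    y = x + eps * rng.normal(0, 1, N)          # tiny perturbation
    for _ in range(steps):
        x = np.tanh(g * J @ x)
        y = np.tanh(g * J @ y)
    return float(np.linalg.norm(x - y))

for g in (0.5, 1.5):
    print(g, divergence(g))  # sub-critical gain washes the perturbation out;
                             # super-critical gain typically amplifies it (chaos)
```

The edge of chaos is the narrow band around `g ≈ 1` separating these behaviors, which is where the abstract reports information transfer and storage are maximized.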

6.
Memory and DNA   Cited by: 1 (self-citations: 0, other citations: 1)
A model is presented for the storage of long-term memory. In our model, consolidation takes place via specific DNA sequences. These DNA sequences are obtained by recombination of DNA, in a way similar to that occurring during meiosis and the production of immunological antibodies. DNA has the potential to produce large numbers of specific sequences, and these sequences can be attached to images of neural networks. The following considerations lead to the theory: (1) Most of the DNA is not used: only approximately 3% of our DNA is used. (2) There are no cell divisions in the brain after adulthood is reached, so structural DNA arrangements will not be altered or disrupted as a consequence of cell division and mitosis. (3) Chromosomal pairing has been demonstrated in the brain, which could indicate the exchange of DNA. In addition, in our first survey experiments we found a positive reaction of components of the synaptonemal complex (SC) in the nuclei of brain cells. The SC is highly meiosis-specific and plays a major role in genetic recombination.

7.
8.
Prediction of protein secondary structure is an important step towards elucidating its three-dimensional structure and its function, and it is a challenging problem in bioinformatics. Segmental semi-Markov models (SSMMs) are among the best-studied methods in this field. However, incorporating evolutionary information into these methods is somewhat difficult. Systems of multiple neural networks (NNs), on the other hand, are powerful tools for multi-class pattern classification which can easily be applied to take this sort of information into account. To overcome the weakness of SSMMs in prediction, in this work we use an SSMM as a decision function on the outputs of three NNs that use multiple sequence alignment profiles. We consider four types of observations for the outputs of a neural network, so the profile table for each sequence is reduced to a sequence of four observations. To predict the secondary structure of each amino acid, we then apply the SSMM as a decision function over the outputs of the three neural networks; the proposed SSMM has discriminative power and weights over different dependency models for the network outputs. The results show that the accuracy of our predictions, particularly for strands, is considerably increased.

9.
Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time, thus making it crucial for context-dependent computation. Here, we use a top-down modeling approach to examine network-level mechanisms of working memory, an enigmatic issue and central topic of study in neuroscience. We optimize thousands of recurrent rate-based neural networks on a working memory task and then perform dynamical systems analysis on the ensuing optimized networks, wherein we find that four distinct dynamical mechanisms can emerge. In particular, we show the prevalence of a mechanism in which memories are encoded along slow stable manifolds in the network state space, leading to a phasic neuronal activation profile during memory periods. In contrast to mechanisms in which memories are directly encoded at stable attractors, these networks naturally forget stimuli over time. Despite this seeming functional disadvantage, they are more efficient in terms of how they leverage their attractor landscape and paradoxically, are considerably more robust to noise. Our results provide new hypotheses regarding how working memory function may be encoded within the dynamics of neural circuits.

10.
In this paper, I investigate the use of artificial neural networks in the study of prey coloration. I briefly review the anti-predator functions of prey coloration and describe the use of neural network models in prey coloration research, both in general terms and through two specific example studies. The first example investigates the effect of the visual complexity of the background on the evolution of camouflage. The second example deals with the evolutionary choice of defence strategy: crypsis or aposematism. I conclude that visual information processing by predators is central to the evolution of prey coloration. The capability to process patterns, as well as to imitate aspects of a predator's information processing and responses to visual information, therefore makes neural networks a well-suited modelling approach for the study of prey coloration. In addition, their suitability for evolutionary simulations is an advantage when complex or dynamic interactions are modelled. Since not all behaviours of neural network models are necessarily biologically relevant, it is important to validate a neural network model with empirical data. Bringing together knowledge about neural networks with knowledge about prey coloration offers a potential way to deepen our understanding of the specific appearances of prey coloration.

11.
While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared the representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamics of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by offering new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.

12.
A Network Model of Dynamic Neurons: II. Computer Simulation   Cited by: 4 (self-citations: 3, other citations: 1)
Computer simulations were carried out on the basis of the network model of dynamic neurons. The results show that our unit model reproduces receptor adaptation, post-excitation inhibition, phase locking, and position coding. A lateral inhibition network built from fifty such units reproduces the transient characteristics of the lateral inhibition network of the Limulus compound eye, and at steady state exhibits the Mach band phenomenon. The simulations further predict that the lateral inhibition network is especially sensitive to moving targets. The consistency of the model's internal information-processing mechanisms and external characteristics with those of biological neurons, together with the agreement between the resulting lateral inhibition network and that of the Limulus compound eye, suggests that this model is a promising candidate for a network model closer to biological neural networks.
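The Mach band effect mentioned above falls out of lateral inhibition directly, as a small sketch shows (my own one-dimensional steady-state illustration with an invented center-surround kernel, not the paper's fifty-unit dynamic simulation):

```python
import numpy as np

signal = np.array([1.0] * 10 + [2.0] * 10)            # a luminance step edge
kernel = np.array([-0.2, -0.2, 1.0, -0.2, -0.2])      # excite center, inhibit neighbors
response = np.convolve(signal, kernel, mode="same")   # steady-state lateral inhibition

low_interior  = response[5]    # flat region far from the edge, dark side
high_interior = response[14]   # flat region far from the edge, bright side
# Units just beside the edge undershoot/overshoot their neighbors: Mach bands
print(response[9] < low_interior, response[10] > high_interior)
```

The unit just on the dark side of the edge is inhibited by its brighter neighbors and dips below the dark-side plateau, while the unit just on the bright side is inhibited less than its bright-side neighbors and overshoots, which is exactly the perceptual over/undershoot at edges.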

13.
We describe a class of feedforward neural network models for associative content-addressable memory (ACAM) which utilize sparse internal representations for stored data. In addition to the input and output layers, our networks incorporate an intermediate processing layer which serves to label each stored memory and to perform error correction and association. We study two classes of internal label representations: the unary representation and various sparse, distributed representations. Finally, we consider storage of sparse data and sparsification of data. These models are found to have advantages in terms of storage capacity, hardware efficiency, and recall reliability when compared to the Hopfield model, and to possess analogies to both biological neural networks and standard digital computer memories.

14.
An effective forecasting model for short-term load plays a significant role in improving the management efficiency of an electric power system. This paper proposes a new forecasting model based on improved neural networks with random weights (INNRW). The key is to introduce a weighting technique for the inputs of the model and to use a novel neural network to forecast the daily maximum load. Eight factors are selected as inputs, and a mutual information weighting algorithm is then used to allocate different weights to them. The neural network with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load, owing to its fast learning speed and good generalization performance. In an application to daily load data from Dalian, the proposed INNRW is compared with several previously developed forecasting models. The simulation experiments show that the proposed model performs best overall in short-term load forecasting.
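The input-weighting step can be sketched with synthetic data (a toy of my own, not the paper's INNRW; the factor names, the histogram-based MI estimator, and all parameters are illustrative): each candidate input is scored by its mutual information with the target, and the normalized scores become input weights.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
temp  = rng.normal(25, 5, n)                 # hypothetical factor: temperature
noise = rng.normal(0, 1, n)                  # hypothetical irrelevant factor
load  = 100 + 3 * temp + rng.normal(0, 2, n) # target depends on temperature only

def mutual_info(x, y, bins=16):
    """Histogram estimate of mutual information (in nats) between x and y."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

mi = np.array([mutual_info(temp, load), mutual_info(noise, load)])
weights = mi / mi.sum()
print(weights)  # the informative input receives most of the weight
```

The weighted inputs would then be fed to the random-weight network; an input carrying no information about the load is automatically down-weighted.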

15.
Synchronized gamma frequency oscillations in neural networks are thought to be important to sensory information processing, and their effects have been intensively studied. Here we describe a mechanism by which the nervous system can readily control gamma oscillation effects, depending selectively on visual stimuli. Using a model neural network simulation, we found that sensory response in the primary visual cortex is significantly modulated by the resonance between “spontaneous” and “stimulus-driven” oscillations. This gamma resonance can be precisely controlled by the synaptic plasticity of thalamocortical connections, and cortical response is regulated differentially according to the resonance condition. The mechanism produces a selective synchronization between the afferent and downstream neural population. Our simulation results explain experimental observations such as stimulus-dependent synchronization between the thalamus and the cortex at different oscillation frequencies. The model generally shows how sensory information can be selectively routed depending on its frequency components.

16.
Understanding the neural mechanisms of object and face recognition is one of the fundamental challenges of visual neuroscience. The neurons in inferior temporal (IT) cortex have been reported to exhibit dynamic responses to face stimuli. However, little is known about how the dynamic properties of IT neurons emerge in face information processing. To address this issue, we built a model of IT cortex which performs face perception via an interaction between different IT networks. The model is based on face information processed by three resolution maps in early visual areas. The network model of IT cortex consists of four kinds of networks, in which the information about a whole face is combined with the information about its face parts and their arrangements. We show here that learning of face stimuli creates functional connections between these IT networks, causing high spike correlations between pairs of IT neurons. A dynamic property of the subthreshold membrane potential of IT neurons, produced by a Hodgkin–Huxley model, enables the coordination of temporal information without changing the firing rate, providing the basis of the mechanism underlying face perception. We also show that the hierarchical processing of face information allows IT cortex to perform a “coarse-to-fine” processing of face information. The results presented here appear to be compatible with experimental data on the dynamic properties of IT neurons.

17.
This paper addresses the stability problem for memristive neural networks with time-varying impulses. Based on memristor theory and neural network theory, a model of the memristor-based neural network is established. Unlike most publications on memristive networks, which assume fixed-time impulse effects, we consider the case of time-varying impulses: both destabilizing and stabilizing impulses exist in the model simultaneously. By controlling the time intervals of the stabilizing and destabilizing impulses, we ensure that the overall effect of the impulses is stabilizing. Several sufficient conditions for the global exponential stability of memristive neural networks with time-varying impulses are proposed. Simulation results demonstrate the effectiveness of the theoretical results.

18.
This paper proposes a physical model involving the key structures within the neural cytoskeleton as major players in molecular-level processing of information required for learning and memory storage. In particular, actin filaments and microtubules are macromolecules having highly charged surfaces that enable them to conduct electric signals. The biophysical properties of these filaments relevant to the conduction of ionic current include a condensation of counterions on the filament surface and a nonlinear complex physical structure conducive to the generation of modulated waves. Cytoskeletal filaments are often directly connected with both ionotropic and metabotropic types of membrane-embedded receptors, thereby linking synaptic inputs to intracellular functions. Possible roles for cable-like, conductive filaments in neurons include intracellular information processing, regulating developmental plasticity, and mediating transport. The cytoskeletal proteins form a complex network capable of emergent information processing, and they stand to intervene between inputs to and outputs from neurons. In this manner, the cytoskeletal matrix is proposed to work with neuronal membrane and its intrinsic components (e.g., ion channels, scaffolding proteins, and adaptor proteins), especially at sites of synaptic contacts and spines. An information processing model based on cytoskeletal networks is proposed that may underlie certain types of learning and memory.

19.
Inspired by the temporal correlation theory of brain function, researchers have presented a number of neural oscillator networks to solve visual scene segmentation problems. Recently, it has been shown that many biological neural networks are typical small-world networks. In this paper, we propose and investigate two small-world models derived from the well-known LEGION (locally excitatory, globally inhibitory oscillator network) model. To form a small-world network, we add a proper proportion of unidirectional shortcuts (random long-range connections) to the original LEGION model. With both local connections and shortcuts, the neural oscillators can not only communicate with neighbors but also exchange phase information with remote partners. Model 1 introduces excitatory shortcuts to enhance synchronization within an oscillator group representing the same object. Model 2 goes further and replaces the global inhibitor with a sparse set of inhibitory shortcuts. Simulation results indicate that the proposed small-world models achieve synchronization faster than the original LEGION model and are more likely to bind disconnected image regions that belong together. In addition, we argue that these two models are more biologically plausible.
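The structural trick behind both models can be demonstrated without any oscillator dynamics (a Watts-Strogatz-style sketch of my own, not the paper's LEGION implementation; all counts are arbitrary): adding a few unidirectional shortcuts to a locally connected ring sharply reduces the mean path length, which is what lets remote oscillators exchange phase information quickly.

```python
import random
from collections import deque

random.seed(0)
N, K, SHORTCUTS = 100, 2, 20   # ring size, neighbors per side, shortcuts to add

# Ring lattice: each node connects to its K nearest neighbors on each side
edges = {i: {(i + d) % N for d in range(-K, K + 1) if d} for i in range(N)}

def avg_path(adj):
    """Mean shortest-path length over all ordered node pairs (BFS from each node)."""
    total = count = 0
    for s in range(N):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

before = avg_path(edges)
for _ in range(SHORTCUTS):                  # unidirectional long-range links
    a, b = random.randrange(N), random.randrange(N)
    edges[a].add(b)
after = avg_path(edges)
print(before, after)  # mean shortest path drops once shortcuts are added
```

Local clustering is barely affected by a handful of shortcuts, so the rewired graph keeps the lattice's neighborhood structure while gaining near-random-graph path lengths: the small-world property.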

20.
During development, biological neural networks produce more synapses and neurons than needed. Many of these synapses and neurons are later removed in a process known as neural pruning. Why networks should initially be over-populated, and the processes that determine which synapses and neurons are ultimately pruned, remain unclear. We study the mechanisms and significance of neural pruning in model neural networks. In a deep Boltzmann machine model of sensory encoding, we find that (1) synaptic pruning is necessary to learn efficient network architectures that retain computationally relevant connections, (2) pruning by synaptic weight alone does not optimize network size, and (3) pruning based on a locally available measure of importance based on Fisher information allows the network to identify structurally important vs. unimportant connections and neurons. This locally available measure of importance has a biological interpretation in terms of the correlations between presynaptic and postsynaptic neurons, and implies an efficient activity-driven pruning rule. Overall, we show how local activity-dependent synaptic pruning can solve the global problem of optimizing a network architecture. We relate these findings to biology as follows: (I) Synaptic over-production is necessary for activity-dependent connectivity optimization. (II) In networks that have more neurons than needed, cells compete for activity, and only the most important and selective neurons are retained. (III) Cells may also be pruned due to a loss of synapses on their axons. This occurs when the information they convey is not relevant to the target population.
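Point (2), that weight magnitude alone is a poor pruning criterion, can be shown with a toy linear model (my own illustration, not the paper's deep Boltzmann machine; the features, weights, and the loss-increase importance measure standing in for the Fisher criterion are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000
X = rng.normal(0, 1, (n, 3))
X[:, 0] *= 10.0                          # feature 0: high-variance input
y = 0.1 * X[:, 0] + 0.5 * X[:, 1]        # the target ignores feature 2 entirely
w = np.array([0.1, 0.5, 0.2])            # suppose training left a small spurious weight on feature 2

# Importance of each weight = increase in mean squared error when that single
# weight is deleted (a locally computable stand-in for the Fisher criterion)
base = ((X @ w - y) ** 2).mean()
loss_increase = []
for i in range(3):
    w_pruned = w.copy()
    w_pruned[i] = 0.0
    loss_increase.append(((X @ w_pruned - y) ** 2).mean() - base)

print(np.argmin(np.abs(w)))       # magnitude pruning deletes weight 0 -- the most useful one
print(np.argmin(loss_increase))   # importance-based pruning deletes weight 2 -- the spurious one
```

Weight 0 is small only because its input has large variance; its contribution to the output is the largest of the three, so an importance measure tied to the loss (or, in the paper, to Fisher information) ranks the connections correctly where raw magnitude does not.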


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号