Similar Articles
20 similar articles found (search time: 109 ms)
1.
Dynamic recurrent neural networks composed of units with continuous activation functions provide a powerful tool for simulating a wide range of behaviors, since the requisite interconnections can be readily derived by gradient descent methods. However, it is not clear whether more realistic integrate-and-fire cells with comparable connection weights would perform the same functions. We therefore investigated methods to convert dynamic recurrent neural networks of continuous units into networks with integrate-and-fire cells. The transforms were tested on two recurrent networks derived by backpropagation. The first simulates a short-term memory task with units that mimic neural activity observed in cortex of monkeys performing instructed delay tasks. The network utilizes recurrent connections to generate sustained activity that codes the remembered value of a transient cue. The second network simulates patterns of neural activity observed in monkeys performing a step-tracking task with flexion/extension wrist movements. This more complicated network provides a working model of the interactions between multiple spinal and supraspinal centers controlling motoneurons. Our conversion algorithm replaced each continuous unit with multiple integrate-and-fire cells that interact through delayed "synaptic potentials". Successful transformation depends on obtaining an appropriate fit between the activation function of the continuous units and the input-output relation of the spiking cells. This fit can be achieved by adapting the parameters of the synaptic potentials to replicate the input-output behavior of a standard sigmoidal activation function (shown for the short-term memory network). Alternatively, a customized activation function can be derived from the input-output relation of the spiking cells for a chosen set of parameters (demonstrated for the wrist flexion/extension network). 
In both cases the resulting networks of spiking cells exhibited activity that replicated the activity of corresponding continuous units. This confirms that the network solutions obtained through backpropagation apply to spiking networks and provides a useful method for deriving recurrent spiking networks performing a wide range of functions.
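A leaky integrate-and-fire cell is the spiking unit underlying conversions like the one above. The sketch below simulates a single generic LIF cell driven by a constant current, with illustrative parameters (membrane time constant, threshold, reset); it is not the study's synaptic-potential fitting procedure:

```python
def simulate_lif(i_input, t_total=0.5, dt=0.001, tau=0.02,
                 v_rest=-70e-3, v_thresh=-54e-3, v_reset=-70e-3, r_m=10e6):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R_m * I.
    Returns the spike times (in seconds) for a constant input current (A)."""
    v = v_rest
    spikes = []
    for step in range(int(t_total / dt)):
        # Euler step of the membrane equation
        v += (-(v - v_rest) + r_m * i_input) * dt / tau
        if v >= v_thresh:          # threshold crossing -> emit spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes
```

With these parameters a 2 nA current is suprathreshold (steady-state -50 mV exceeds the -54 mV threshold) and produces regular firing, while 1 nA stays subthreshold and produces no spikes.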

2.
Transient, task-related synchronous activity within neural populations has been recognized as the substrate of temporal coding in the brain. The mechanisms underlying the induction and propagation of transient synchronous activity are still unknown, and we propose that short-term plasticity (STP) of neural circuits may serve as a supplemental mechanism therein. By computational modeling, we showed that short-term facilitation greatly increases the reactivation rate of population spikes and decreases the latency of response to reactivation stimuli in local recurrent neural networks. Meanwhile, the timing of population spike reactivation is controlled by the memory effect of STP, and it is mediated primarily by the facilitation time constant. Furthermore, we demonstrated that synaptic facilitation dramatically enhances synchrony propagation in feedforward neural networks and that response timing mediated by synaptic facilitation offers a scheme for information routing. In addition, we verified that synaptic strengthening of intralayer or interlayer coupling enhances synchrony propagation, and that other factors such as the delay of synaptic transmission and the mode of synaptic connectivity are also involved in regulating synchronous activity propagation. Overall, our results highlight the functional role of STP in regulating the induction and propagation of transient synchronous activity, and they may inspire testable hypotheses for future experimental studies.
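Short-term facilitation of the kind modeled above is commonly described by Tsodyks-Markram synapse dynamics: a facilitation variable u jumps at each presynaptic spike and decays slowly, while a resource variable x is depleted and recovers quickly. The sketch below uses that standard formulation with illustrative time constants, not necessarily the parameters of the study:

```python
import math

def tm_facilitating_synapse(spike_times, tau_f=0.6, tau_d=0.1, U=0.1):
    """Tsodyks-Markram facilitating synapse. For each spike, u (facilitation)
    jumps by U*(1-u) and decays with tau_f; x (resources) is depleted by u*x
    and recovers with tau_d. Returns the relative PSP amplitude u*x per spike."""
    u, x = 0.0, 1.0
    last_t = None
    amplitudes = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u *= math.exp(-dt / tau_f)                    # facilitation decays
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_d)   # resources recover
        u += U * (1.0 - u)        # spike-triggered facilitation
        amplitudes.append(u * x)  # effective synaptic strength for this spike
        x *= (1.0 - u)            # spike-triggered resource depletion
        last_t = t
    return amplitudes
```

Because tau_f greatly exceeds tau_d here, a high-frequency spike train yields growing amplitudes over the first few spikes, i.e. facilitation dominates.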

3.
Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are the difficulty of kinetic parameter estimation and the numerical solution of a large number of equations, making them more suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses the above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents the concentration change of one molecule, represented by an ODE. Weights of the RNN correspond to kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system – a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we present a modified form of the RNN that estimates parameters and captures systems dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with the current biological knowledge of this system.
As more quantitative data become available on individual proteins, the RNN would be able to refine parameter estimation and mapping of temporal dynamics of individual signalling molecules as well as signalling networks as a system. Moreover, RNN can be used to modularise large signalling networks.
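The core idea of an RNN whose units each integrate an ODE can be sketched with a standard continuous-time RNN and forward-Euler integration. The weights, biases and sigmoid below are generic placeholders, not the p53-Mdm2 parameterization of the article:

```python
import math

def ctrnn_step(y, W, b, tau, dt=0.01):
    """One forward-Euler step of a continuous-time RNN in which each unit's
    state obeys tau_i * dy_i/dt = -y_i + sigma(sum_j W[i][j]*y_j + b_i).
    In the signalling-network reading, y_i is a molecule concentration and
    W holds the (trainable) kinetic parameters."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    n = len(y)
    return [y[i] + dt * (-y[i] + sig(sum(W[i][j] * y[j] for j in range(n)) + b[i])) / tau[i]
            for i in range(n)]

def simulate(y0, W, b, tau, steps=2000):
    """Integrate the network forward from initial state y0."""
    y = list(y0)
    for _ in range(steps):
        y = ctrnn_step(y, W, b, tau)
    return y
```

Because each unit relaxes toward a sigmoid of its inputs, trajectories are globally bounded: after a transient, every state variable settles into the sigmoid's (0, 1) output range regardless of the initial condition.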

4.
The state of the art in computer modelling of neural networks with associative memory is reviewed. The available experimental data are considered on learning and memory of small neural systems, on isolated synapses and on the molecular level. Computer simulations demonstrate that realistic models of neural ensembles exhibit properties which can be interpreted as image recognition, categorization, learning, prototype forming, etc. A bilayer model of an associative neural network is proposed. One layer corresponds to short-term memory, the other to long-term memory. Patterns are stored in terms of the synaptic strength matrix. We have studied the relaxational dynamics of neuron firing and suppression within the short-term memory layer under the influence of the long-term memory layer. The interaction among the layers has been found to create a number of novel stable states which are not the learning patterns. These synthetic patterns may consist of elements belonging to different non-intersecting learning patterns. Within the framework of a hypothesis of selective and definite coding of images in the brain, one can interpret the observed effect as an "idea generating" process.
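Storing patterns "in terms of the synaptic strength matrix" is classically done with a Hebbian outer-product rule, as in a Hopfield network. The following minimal single-layer sketch illustrates that storage-and-recall mechanism only; it is not the bilayer model of the review:

```python
def store(patterns):
    """Hebbian outer-product weight matrix for +/-1 patterns (Hopfield-style).
    W[i][j] accumulates p_i * p_j / n over all stored patterns; no self-coupling."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates; the state relaxes toward a stored attractor."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s
```

Corrupting a single bit of a stored pattern and running `recall` recovers the original pattern, which is the pattern-completion behavior underpinning associative memory models of this kind.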

5.
Animals must respond selectively to specific combinations of salient environmental stimuli in order to survive in complex environments. A task with these features, biconditional discrimination, requires responses to select pairs of stimuli that are opposite to responses to those stimuli in another combination. We investigate the characteristics of synaptic plasticity and network connectivity needed to produce stimulus-pair neural responses within randomly connected model networks of spiking neurons trained in biconditional discrimination. Using reward-based plasticity for synapses from the random associative network onto a winner-takes-all decision-making network representing perceptual decision-making, we find that reliably correct decision making requires upstream neurons with strong stimulus-pair selectivity. By chance, selective neurons were present in initial networks; appropriate plasticity mechanisms improved task performance by enhancing the initial diversity of responses. We find long-term potentiation of inhibition to be the most beneficial plasticity rule by suppressing weak responses to produce reliably correct decisions across an extensive range of networks.

6.
Brain networks memorize previous performance to adjust their output in light of past experience. These activity-dependent modifications generally result from changes in synaptic strengths or ionic conductances, and ion pumps have only rarely been demonstrated to play a dynamic role. Locomotor behavior is produced by central pattern generator (CPG) networks and modified by sensory and descending signals to allow for changes in movement frequency, intensity, and duration, but whether or how the CPG networks recall recent activity is largely unknown. In Xenopus frog tadpoles, swim bout duration correlates linearly with interswim interval, suggesting that the locomotor network retains a short-term memory of previous output. We discovered an ultraslow, minute-long afterhyperpolarization (usAHP) in network neurons following locomotor episodes. The usAHP is mediated by an activity- and sodium spike-dependent enhancement of electrogenic Na(+)/K(+) pump function. By integrating spike frequency over time and linking the membrane potential of spinal neurons to network performance, the usAHP plays a dynamic role in short-term motor memory. Because Na(+)/K(+) pumps are ubiquitously expressed in neurons of all animals and because sodium spikes inevitably accompany network activity, the usAHP may represent a phylogenetically conserved but largely overlooked mechanism for short-term memory of neural network function.
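The usAHP can be caricatured as a hyperpolarizing offset that is incremented by every sodium spike and decays with a minute-scale time constant, so the membrane potential integrates recent spike count. The sketch below uses assumed parameters (0.2 mV per spike, 60 s decay) purely for illustration; they are not measurements from the study:

```python
import math

def usahp_trace(spike_times, tau_pump=60.0, increment=0.2e-3,
                t_end=120.0, dt=0.1):
    """Minute-scale afterhyperpolarization: each spike adds a hyperpolarizing
    pump-driven voltage offset `a` that decays with a long time constant,
    so the trace integrates spike count over time. Returns -a sampled every dt."""
    a = 0.0
    trace = []
    spikes = iter(sorted(spike_times))
    next_spike = next(spikes, None)
    t = 0.0
    while t < t_end:
        while next_spike is not None and next_spike <= t:
            a += increment                  # spike-triggered pump enhancement
            next_spike = next(spikes, None)
        trace.append(-a)                    # hyperpolarizing offset (V)
        a *= math.exp(-dt / tau_pump)       # slow, minute-scale decay
        t += dt
    return trace
```

A longer swim episode (more spikes) produces a deeper afterhyperpolarization, and the trace slowly recovers toward baseline, which is the proposed short-term motor memory signal.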

7.
Renart A, Song P, Wang XJ. Neuron. 2003;38(3):473-485
The concept of bell-shaped persistent neural activity represents a cornerstone of the theory for the internal representation of analog quantities, such as spatial location or head direction. Previous models, however, relied on the unrealistic assumption of network homogeneity. We investigate this issue in a network model where fine tuning of parameters is destroyed by heterogeneities in cellular and synaptic properties. Heterogeneities result in the loss of stored spatial information in a few seconds. Accurate encoding is recovered when a homeostatic mechanism scales the excitatory synapses to each cell to compensate for the heterogeneity in cellular excitability and synaptic inputs. Moreover, the more realistic model produces a wide diversity of tuning curves, as commonly observed in recordings from prefrontal neurons. We conclude that recurrent attractor networks in conjunction with appropriate homeostatic mechanisms provide a robust, biologically plausible theoretical framework for understanding the neural circuit basis of spatial working memory.

8.
9.
Kaur H, Raghava GP. In Silico Biology. 2006;6(1-2):111-125
In this study, an attempt has been made to develop a method for predicting weak hydrogen bonding interactions, namely, C alpha-H...O and C alpha-H...pi interactions in proteins using artificial neural networks. Both standard feed-forward neural networks (FNN) and recurrent neural networks (RNN) have been trained and tested using five-fold cross-validation on a non-homologous dataset of 2298 protein chains where no pair of sequences has more than 25% sequence identity. It has been found that the prediction accuracy varies with the separation distance between donor and acceptor residues. The maximum sensitivity achieved with RNN for C alpha-H...O is 51.2% when donor and acceptor residues are four residues apart (i.e. at delta D-A = 4) and for C alpha-H...pi is 82.1% at delta D-A = 3. The performance of RNN is increased by 1-3% for both types of interactions when PSIPRED-predicted protein secondary structure is used. Overall, RNN performs better than feed-forward networks at all separation distances between the donor-acceptor pair for both types of interactions. Based on these observations, a web server CHpredict (available at http://www.imtech.res.in/raghava/chpredict/) has been developed for predicting donor and acceptor residues in C alpha-H...O and C alpha-H...pi interactions in proteins.

10.
A neural network model of short-term memory
We propose a short-term memory neural network model with a pointer loop. The model comprises two neural networks: one is a content-representation network shared with long-term memory, and the other is a loop of short-term pointer neurons. Because the pointer loop serves only as a temporary pointer to the stored content, a variety of short-term memory tasks can be accomplished with very few storage units. Computer simulations confirm that the model exhibits two basic characteristics of short-term memory: limited storage capacity and chunk-based coding.

11.
We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of the underlying noise. Assuming Hebbian weights for maximal synaptic conductances (which naturally balances the network with excitatory and inhibitory synapses) and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system: a memory phase in which populations of neurons remain synchronized, an oscillatory phase in which transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron while the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We showed that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it also occurs in actual neural systems, as recent psychophysical experiments suggest.

12.
Saha S, Raghava GP. Proteins. 2006;65(1):40-48
B-cell epitopes play a vital role in the development of peptide vaccines, in the diagnosis of diseases, and in allergy research. Experimental methods used for characterizing epitopes are time consuming and demand large resources. The availability of epitope prediction methods can rapidly aid experimenters by simplifying this problem. Standard feed-forward (FNN) and recurrent neural networks (RNN) have been used in this study for predicting B-cell epitopes in an antigenic sequence. The networks have been trained and tested on a clean data set, which consists of 700 non-redundant B-cell epitopes obtained from the Bcipep database and an equal number of non-epitopes obtained randomly from the Swiss-Prot database. The networks have been trained and tested at different input window lengths and numbers of hidden units. Maximum accuracy has been obtained using a recurrent neural network (Jordan network) with a single hidden layer of 35 hidden units for a window length of 16. The final network yields an overall prediction accuracy of 65.93% when tested by fivefold cross-validation. The corresponding sensitivity, specificity, and positive prediction values are 67.14, 64.71, and 65.61%, respectively. It has been observed that the RNN (JE) was more successful than the FNN in the prediction of B-cell epitopes. The length of the peptide is also important in the prediction of B-cell epitopes from antigenic sequences. The webserver ABCpred is freely available at www.imtech.res.in/raghava/abcpred/.
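Window-based predictors like the one described typically feed the network a sparse one-hot encoding of each overlapping peptide window. The sketch below shows that generic input representation for a window length of 16; the exact encoding used by ABCpred may differ:

```python
AMINO = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def encode_windows(seq, window=16):
    """Encode every overlapping window of `seq` as a flat one-hot vector of
    length window * 20, the usual input for window-based neural predictors.
    Non-standard residues map to an all-zero slot."""
    vecs = []
    for start in range(len(seq) - window + 1):
        v = []
        for aa in seq[start:start + window]:
            onehot = [0] * len(AMINO)
            if aa in AMINO:
                onehot[AMINO.index(aa)] = 1
            v.extend(onehot)
        vecs.append(v)
    return vecs
```

Each window yields a 320-dimensional binary vector (16 positions x 20 residue types), and a sequence of length L yields L - 15 such training or prediction examples.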

13.
Jensen et al. (Learn Memory 3(2–3):243–256, 1996b) proposed an auto-associative memory model using an integrated short-term memory (STM) and long-term memory (LTM) spiking neural network. Their model requires that distinct pyramidal cells encoding different STM patterns are fired in different high-frequency gamma subcycles within each low-frequency theta oscillation. Auto-associative LTM is formed by modifying the recurrent synaptic efficacy between pyramidal cells. In order to store auto-associative LTM correctly, the recurrent synaptic efficacy must be bounded. The synaptic efficacy must be upper bounded to prevent re-firing of pyramidal cells in subsequent gamma subcycles. If cells encoding one memory item were to re-fire synchronously with other cells encoding another item in a subsequent gamma subcycle, LTM stored via modifiable recurrent synapses would be corrupted. The synaptic efficacy must also be lower bounded so that memory pattern completion can be performed correctly. This paper uses the original model by Jensen et al. as the basis to illustrate the following points. Firstly, the importance of coordinated long-term memory (LTM) synaptic modification. Secondly, the use of a generic mathematical formulation (the spike response model) that can theoretically extend the results to other spiking networks utilizing a threshold-fire spiking neuron model. Thirdly, the interaction of long-term and short-term memory networks, which possibly explains the asymmetric distribution of spike density in the theta cycle through the merger of STM patterns under the interaction of the LTM network.

14.
The prefrontal cortex (PFC) plays a crucial role in flexible cognitive behavior by representing task-relevant information with its working memory. The working memory with sustained neural activity is described as a neural dynamical system composed of multiple attractors, each of which corresponds to an active state of a cell assembly representing a fragment of information. Recent studies have revealed that the PFC not only represents multiple sets of information but also switches between multiple representations and transforms one set of information into another depending on a given task context. This representational switching between different sets of information is possibly generated endogenously by flexible network dynamics, but the details of the underlying mechanisms are unclear. Here we propose a dynamically reorganizable attractor network model based on certain internal changes in synaptic connectivity, or short-term plasticity. We construct a network model based on a spiking neuron model with dynamical synapses, which can qualitatively reproduce experimentally demonstrated representational switching in the PFC when a monkey was performing a goal-oriented action-planning task. The model holds multiple sets of information that are required for action planning before and after representational switching by reconfiguration of functional cell assemblies. Furthermore, we analyzed the population dynamics of this model with a mean-field model and showed that the changes in cell assemblies' configuration correspond to those in attractor structure that can be viewed as a bifurcation process of the dynamical system. This dynamical reorganization of a neural network could be a key to uncovering the mechanism of flexible information processing in the PFC.

15.
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.

16.
Converging evidence suggests the brain encodes time in dynamic patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Most temporal tasks, however, require more than just encoding time, and can have distinct computational requirements including the need to exhibit temporal scaling, generalize to novel contexts, or be robust to noise. It is not known how neural circuits can encode time and satisfy these distinct computational requirements, nor is it known whether similar patterns of neural activity at the population level can exhibit dramatically different computational or generalization properties. To begin to answer these questions, we trained RNNs on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework we quantified whether RNNs encoded two intervals using one of three different timing strategies: scaling, absolute, or stimulus-specific dynamics. We found that similar neural dynamic patterns at the level of single intervals could exhibit fundamentally different properties, including generalization, the connectivity structure of the trained networks, and the contribution of excitatory and inhibitory neurons. Critically, depending on the task structure, RNNs were better suited for either generalization or robustness to noise. Further analysis revealed different connection patterns underlying the different regimes. Our results predict that apparently similar neural dynamic patterns at the population level (e.g., neural sequences) can exhibit fundamentally different computational properties with regard to their ability to generalize to novel stimuli and their robustness to noise, and that these differences are associated with differences in network connectivity and distinct contributions of excitatory and inhibitory neurons.
We also predict that the task structure used in different experimental studies accounts for some of the experimentally observed variability in how networks encode time.

17.
On the dynamics of operant conditioning
Simple psychological postulates are presented which are used to derive possible anatomical and physiological substrates of operant conditioning. These substrates are compatible with much psychological data about operants. A main theme is that aspects of operant and respondent conditioning share a single learning process. Among the phenomena which arise are the following: UCS-activated arousal; formation of conditioned, or secondary, reinforcers; a non-specific arousal system distinct from sensory and motor representations whose activation is required for sensory processing; polyvalent cells responsive to the sum of CS and UCS inputs and anodal d.c. potential shifts; neural loci responsive to the combined effect of sensory events and drive deprivation; “go”-like or “now print”-like mechanisms which, for example, influence incentive-motivational increases in general activity; a mechanism for learning repetitively to press a bar which electrically stimulates suitable arousal loci in the absence of drive reduction; uniformly distributed potentials, driven by the CS, in the “cerebral cortex” of a trained network; the distinction between short-term and long-term memory, and the possibility of eliminating transfer from short-term to long-term memory in the absence of suitable arousal; networks that can learn and perform arbitrarily complex sequences of acts or sensory memories, without continuous control by sensory feedback, whose rate of performance can be regulated by the level of internal arousal; networks with eidetic memory; network analogs of “therapeutic resistance” and “repression”; the possibility of conditioning the sensory feedback created by a motor act to the neural controls of this act, with consequences for sensory-motor adaptation and child development. This paper introduces explicit minimal anatomies and physiological rules that formally give rise to analogous phenomena. These networks consider only aspects of positive conditioning.
They are derived from simple psychological facts.

18.
The default mode network (DMN) is a functional brain network with a unique neural activity pattern that shows high activity in resting states but low activity in task states. This unique pattern has been shown to relate to higher cognition such as learning, memory and decision-making. But the neural mechanisms of interactions between the default network and task-related networks are still poorly understood. In this paper, a theoretical model coupling the DMN and the working memory network (WMN) is proposed. The WMN and DMN both consist of excitatory and inhibitory neurons connected by AMPA, NMDA and GABA synapses, and are coupled with each other only by excitatory synapses. This model is implemented to demonstrate dynamical processes in a working memory task containing encoding, maintenance and retrieval phases. Simulation results show that: (1) AMPA channels could produce significant synchronous oscillations in population neurons, which is beneficial to changing oscillation patterns in the WMN and DMN. (2) Different NMDA conductances between the networks could generate multiple neural activity modes in the whole network, which may be an important mechanism to switch states of the networks between the three phases of working memory. (3) The number of sequentially memorized stimuli was related to the energy consumption determined by the network's internal parameters, and the DMN contributed to a more stable working memory process. (4) Finally, the model demonstrated that the three phases of working memory corresponded to different functional connections between the DMN and WMN. Coupling strengths that measured these functional connections differed in terms of phase synchronization. Phase synchronization characteristics of the contained energy were consistent with the observations of negative and positive correlations between the WMN and DMN reported in referenced fMRI experiments.
The results suggested that the coupled interaction between the WMN and DMN plays important roles in working memory. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-021-09674-1.

19.
Uroshlev LA, Bal NV, Chesnokova EA. Biophysics. 2020;65(4):574-576
Several models of long short-term memory (LSTM) neural networks were constructed. Each model was trained on the complete mouse genome to predict the exon–intron structure of a...

20.
The acts of learning and memory are thought to emerge from the modifications of synaptic connections between neurons, as guided by sensory feedback during behavior. However, much is unknown about how such synaptic processes can sculpt and are sculpted by neuronal population dynamics and an interaction with the environment. Here, we embodied a simulated network, inspired by dissociated cortical neuronal cultures, with an artificial animal (an animat) through a sensory-motor loop consisting of structured stimuli, detailed activity metrics incorporating spatial information, and an adaptive training algorithm that takes advantage of spike-timing-dependent plasticity. By using our design, we demonstrated that the network was capable of learning associations between multiple sensory inputs and motor outputs, and the animat was able to adapt to a new sensory mapping to restore its goal behavior: move toward and stay within a user-defined area. We further showed that successful learning required proper selections of stimuli to encode sensory inputs and a variety of training stimuli with adaptive selection contingent on the animat's behavior. We also found that an individual network had the flexibility to achieve different multi-task goals, and the same goal behavior could be exhibited with different sets of network synaptic strengths. While lacking the characteristic layered structure of in vivo cortical tissue, the biologically inspired simulated networks could tune their activity in behaviorally relevant manners, demonstrating that leaky integrate-and-fire neural networks have an innate ability to process information. This closed-loop hybrid system is a useful tool to study the network properties mediating between synaptic plasticity and behavioral adaptation. The training algorithm provides a stepping stone towards designing future control systems, whether with artificial neural networks or biological animats themselves.
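The spike-timing-dependent plasticity exploited by such training algorithms is often modeled with a pair-based exponential rule: a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, and the reverse order depresses it. A generic sketch with illustrative amplitudes and time constants (not the study's exact rule):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Pair-based STDP weight change for a spike pair with timing difference
    dt = t_post - t_pre (seconds). Pre-before-post (dt > 0) potentiates;
    post-before-pre (dt < 0) depresses; the effect decays exponentially
    with the timing difference."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)    # long-term potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)  # long-term depression
    return 0.0
```

Summing `stdp_dw` over all pre/post spike pairs gives the net weight update for a synapse during one training epoch; the slight asymmetry (a_minus > a_plus) is a common choice that keeps weights from growing without bound.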


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号