Similar Literature
20 similar documents found.
1.
2.
A neural network model based on the analogy with the immune system   (total citations: 9; self-citations: 0; citations by others: 9)
The similarities between the immune system and the central nervous system lead to the formulation of an unorthodox neural network model. The similarities between the two systems are strong at the system level, but do not seem to be so striking at the level of the components. A new model of a neuron is therefore formulated so that the analogy can be used. The essential feature of the hypothetical neuron is that it exhibits hysteresis at the single-neuron level. A network of N such neurons is modelled by an N-dimensional system of ordinary differential equations, which exhibits almost 2^N attractors. The model has a property that resembles free will. A conjecture concerning how the network might learn stimulus-response behaviour is described. According to the conjecture, learning does not involve modifications of the strengths of synaptic connections. Instead, stimuli ("questions") selectively applied to the network by a "teacher" can be used to take the system to a region of the N-dimensional phase space where the network gives the desired stimulus-response behaviour. A key role for sleep in the learning process is suggested. The model for sleep leads to the prediction that the variance in the firing rates of the neurons associated with memory should increase during waking hours and decrease during sleep.
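As an illustration of the hysteresis-to-attractors argument: if each unit is assumed, hypothetically, to obey dx_i/dt = -x_i + tanh(g*x_i) with gain g > 1, every unit is bistable, so even the uncoupled N-unit system already has 2^N attractors. A minimal sketch (not the paper's equations):

```python
import numpy as np

# A bistable unit: dx/dt = -x + tanh(g*x) has two stable fixed points
# when g > 1, i.e. it shows hysteresis. N independent such units give
# a system with 2^N attractors, as the abstract describes.
def simulate(x0, g=2.0, dt=0.01, steps=5000):
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.tanh(g * x))
    return x

rng = np.random.default_rng(0)
N = 8
finals = {tuple(np.sign(simulate(rng.uniform(-1, 1, N))).astype(int))
          for _ in range(200)}
print(f"distinct attractors reached from 200 random starts: {len(finals)}")
print(f"theoretical maximum: 2^{N} = {2 ** N}")
```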

3.
The aim of the present paper is to study the effects of Hebbian learning in random recurrent neural networks with biological connectivity, i.e. sparse connections and separate populations of excitatory and inhibitory neurons. We furthermore consider that the neuron dynamics may occur on a (shorter) time scale than synaptic plasticity, and consider the possibility of learning rules with passive forgetting. We show that the application of such Hebbian learning leads to drastic changes in the network dynamics and structure. In particular, the learning rule contracts the norm of the weight matrix and yields a rapid decay of the dynamics' complexity and entropy. In other words, the network is rewired by Hebbian learning into a new synaptic structure that emerges with learning on the basis of the correlations that progressively build up between neurons. We also observe that, within this emerging structure, the strongest synapses organize as a small-world network. The second effect of the decay of the weight-matrix spectral radius is a rapid contraction of the spectral radius of the Jacobian matrix. This drives the system through the "edge of chaos", where sensitivity to the input pattern is maximal. Taken together, this scenario is remarkably well predicted by theoretical arguments derived from dynamical systems and graph theory.
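The contraction mechanism can be illustrated with a minimal numpy sketch, assuming a rate network x -> tanh(Wx) and a Hebbian rule with passive forgetting, dW = -lam*W + alpha*x x^T / N; the rates and network size are illustrative, not the paper's:

```python
import numpy as np

# Sketch of a Hebbian rule with passive forgetting: the -lam*W term
# contracts the weight-matrix norm and with it the spectral radius,
# driving the network from the chaotic regime toward the "edge of
# chaos". All parameters are illustrative.
rng = np.random.default_rng(1)
N = 100
W = rng.normal(0, 1.5 / np.sqrt(N), (N, N))    # initially chaotic regime
x = rng.uniform(-1, 1, N)
lam, alpha = 0.01, 0.005

for step in range(1501):
    x = np.tanh(W @ x)                          # fast neuron dynamics
    W += -lam * W + alpha * np.outer(x, x) / N  # slow Hebbian learning
    if step % 500 == 0:
        radius = np.max(np.abs(np.linalg.eigvals(W)))
        print(f"step {step:4d}: spectral radius of W = {radius:.3f}")
```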

4.
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of its synaptic update rules come at the cost of poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
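A sketch of the three-threshold update as described, with placeholder threshold values and learning rate; the clipping of weights at zero (reflecting the excitatory-only synapses) is our assumption:

```python
import numpy as np

# Three-threshold plasticity sketch: only synapses with active inputs
# change; no plasticity outside [TH_LOW, TH_HIGH]; inside that band,
# potentiation if the local field exceeds TH_MID, depression otherwise.
# Threshold values and learning rate are illustrative placeholders.
TH_LOW, TH_MID, TH_HIGH = -1.0, 0.0, 1.0
LR = 0.01

def update_synapses(w, x, h):
    """w: weight vector, x: binary (0/1) input pattern, h: local field."""
    w = w.copy()
    active = x > 0                       # only synapses with active inputs
    if TH_LOW < h < TH_HIGH:             # outside this band: no plasticity
        if h > TH_MID:
            w[active] += LR              # potentiation
        else:
            w[active] -= LR              # depression
        w[active] = np.maximum(w[active], 0.0)  # keep weights excitatory
    return w

rng = np.random.default_rng(2)
w = rng.uniform(0, 0.1, 50)
x = (rng.random(50) < 0.3).astype(float)
h = w @ x - 0.5                          # local field: input minus inhibition
print(update_synapses(w, x, h)[:5])
```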

5.
We systematically investigate the detectability of a weak electric field in a noisy neural network based on the Izhikevich neuron model. The network is composed of excitatory and inhibitory neurons in a ratio similar to that of the mammalian neocortex, and axonal conduction delays between neurons are also considered. We find that the noise intensity can modulate the detectability of the weak electric field. A stochastic resonance (SR) phenomenon induced by white noise is observed when the weak electric field is applied to the network. Interestingly, SR almost disappears when the connections between neurons are removed, suggesting that neural coupling amplifies the synchronization of neuronal spiking. Furthermore, network parameters such as the connection probability, the synaptic coupling strength, the size of the neuron population and the neuron heterogeneity can also affect the detectability of the weak electric field. Finally, the model's sensitivity is studied in detail, and the results show that the neural network model has an optimal region for the detectability of the weak electric field signal.
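The ingredients can be sketched for a single Izhikevich neuron (standard regular-spiking parameters) driven by a subthreshold periodic field plus white noise; the network, the delays, and the SR measurement itself are omitted, and the signal and noise levels are made up:

```python
import numpy as np

# One Izhikevich neuron (regular-spiking parameters a,b,c,d) receiving
# a subthreshold periodic "weak field" plus white noise. Sweeping the
# noise intensity D and measuring how well spikes lock to the signal
# would trace out the stochastic-resonance curve.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T = 0.5, 2000.0                      # ms
A, f = 1.5, 5.0                          # weak field: amplitude, Hz

for D in (0.0, 2.0, 8.0):                # noise intensities to compare
    rng = np.random.default_rng(3)
    v, u, spikes = c, b * c, 0
    for t in np.arange(0.0, T, dt):
        I = A * np.sin(2 * np.pi * f * t / 1000.0)   # weak periodic field
        I += D * rng.normal() / np.sqrt(dt)          # white noise
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike threshold: reset
            v, u, spikes = c, u + d, spikes + 1
    print(f"D = {D}: {spikes} spikes in {T:.0f} ms")
```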

6.
Many learning rules for neural networks derive from abstract objective functions. The weights in such networks are typically optimized using gradient ascent on the objective function. In these networks each neuron needs to store two variables. One variable, called activity, contains the bottom-up sensory-fugal information involved in the core signal processing. The other variable typically describes the derivative of the objective function with respect to the cell's activity and is used exclusively for learning. This variable allows the objective function's derivative to be calculated with respect to each weight, and thus the weight update. Although this approach is widely used, the mapping of these two variables onto physiology is unclear, and such learning algorithms are often considered biologically unrealistic. However, recent research on the properties of cortical pyramidal neurons shows that these cells have at least two sites of synaptic integration, the basal and the apical dendrite, and are thus appropriately described by at least two variables. Here we discuss whether these results could constitute a physiological basis for the described abstract learning rules. As examples, we demonstrate an implementation of the backpropagation-of-error algorithm and a specific self-supervised learning algorithm using these principles. Thus, compared with standard one-integration-site neurons, it is possible to incorporate interesting physiologically inspired properties into neural networks with only a modest increase in complexity.
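A toy sketch, assuming standard backpropagation on a two-layer network, written so that each hidden neuron explicitly carries the two per-neuron variables in question ('basal' activity and an 'apical' error signal); the sizes, data and learning rate are invented:

```python
import numpy as np

# Backprop on a toy two-layer network, with each hidden neuron storing
# the two variables discussed above: 'basal' holds its bottom-up
# activity, 'apical' the derivative of the objective with respect to
# that activity.
rng = np.random.default_rng(4)
W1 = rng.normal(0.0, 0.5, (8, 2))            # input -> hidden
W2 = rng.normal(0.0, 0.5, (1, 8))            # hidden -> output
X = rng.uniform(-1, 1, (200, 2))
Y = (X[:, :1] * X[:, 1:] > 0).astype(float)  # XOR-like target
lr = 0.2

for epoch in range(5000):
    basal = np.tanh(X @ W1.T)                # per-neuron activity variable
    out = basal @ W2.T                       # linear readout
    err = out - Y                            # output error
    apical = err @ W2                        # per-neuron error variable
    W2 -= lr * err.T @ basal / len(X)
    W1 -= lr * ((apical * (1 - basal**2)).T @ X) / len(X)

print(f"final mean squared error: {np.mean(err**2):.4f}")
```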

7.
Long after a new language has been learned and forgotten, relearning a few words seems to trigger the recall of other words. This "free-lunch learning" (FLL) effect has been demonstrated both in humans and in neural network models. Specifically, previous work proved that linear networks that learn a set of associations, then partially forget them all, and finally relearn some of the associations show improved performance on the remaining (i.e., non-relearned) associations. Here, we prove that relearning forgotten associations can instead decrease performance on non-relearned associations, an effect we call negative free-lunch learning. The difference between free-lunch learning and the negative free-lunch learning presented here is due to the particular method used to induce forgetting. Specifically, if forgetting is induced by isotropic drift of weight vectors (i.e., by adding isotropic noise), then free-lunch learning is observed. However, as proved here, if forgetting is induced by weight values simply decaying or falling towards zero, then negative free-lunch learning is observed. From a biological perspective, and assuming that nervous systems are analogous to the networks used here, this suggests that evolution may have selected physiological mechanisms that implement forgetting through a form of synaptic drift rather than synaptic decay, because synaptic drift, but not synaptic decay, yields free-lunch learning.
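The drift-versus-decay contrast can be probed numerically in a small linear network; the sketch below uses invented sizes and rates, and simply reports how the error on the non-relearned half changes under each forgetting method:

```python
import numpy as np

# A linear network learns 20 associations, "forgets" either by
# isotropic weight noise (drift) or by shrinkage toward zero (decay),
# relearns the first 10, and is tested on the 10 it did not relearn.
rng = np.random.default_rng(5)
n, d = 20, 50
X = rng.normal(0, 1, (d, n))                 # input patterns (columns)
Y = rng.normal(0, 1, (d, n))                 # target outputs
W0 = Y @ np.linalg.pinv(X)                   # learn all associations

def relearn(W, Xs, Ys, lr=0.01, steps=2000):
    for _ in range(steps):                   # gradient descent on subset
        W = W - lr * (W @ Xs - Ys) @ Xs.T
    return W

noise = rng.normal(0, 1, W0.shape)
noise *= 0.5 * np.linalg.norm(W0) / np.linalg.norm(noise)
for name, Wf in (("drift", W0 + noise), ("decay", 0.5 * W0)):
    Wr = relearn(Wf, X[:, :10], Y[:, :10])
    before = np.mean((Wf @ X[:, 10:] - Y[:, 10:]) ** 2)
    after = np.mean((Wr @ X[:, 10:] - Y[:, 10:]) ** 2)
    print(f"{name}: error on non-relearned half {before:.3f} -> {after:.3f}")
```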

8.
In order to control voluntary movements, the central nervous system (CNS) must solve three computational problems at different levels: the determination of a desired trajectory in visual coordinates, the transformation of those coordinates into body coordinates, and the generation of the motor command. Based on physiological knowledge and previous models, we propose a hierarchical neural network model that accounts for the generation of the motor command. In our model, the association cortex provides the motor cortex with the desired trajectory in body coordinates, and the motor command is then calculated by means of long-loop sensory feedback. Within the spinocerebellum–magnocellular red nucleus system, an internal neural model of the dynamics of the musculoskeletal system is acquired with practice, owing to heterosynaptic plasticity, while the motor command and the results of movement are monitored. Internal feedback control with this dynamics model updates the motor command by predicting a possible error of movement. Within the cerebrocerebellum–parvocellular red nucleus system, an internal neural model of the inverse dynamics of the musculoskeletal system is acquired while the desired trajectory and the motor command are monitored. The inverse-dynamics model substitutes for other brain regions in the complex computation of the motor command. The dynamics and inverse-dynamics models are realized by a parallel distributed neural network, which comprises many subsystems computing various nonlinear transformations of input signals and a neuron with heterosynaptic plasticity (that is, changes of synaptic weights are assumed to be proportional to the product of two kinds of synaptic inputs). The control and learning performance of the model were investigated by computer simulation, in which a robotic manipulator was used as the controlled system, with the following results: (1) Both the dynamics and the inverse-dynamics models were acquired during control of movements. (2) As motor learning proceeded, the inverse-dynamics model gradually took the place of external feedback as the main controller; concomitantly, overall control performance became much better. (3) Once the neural network model had learned to control some movement, it could control quite different and faster movements. (4) The neural network model worked well even when only very limited information about the fundamental dynamical structure of the controlled system was available. Consequently, the model not only accounts for the learning and control capability of the CNS, but also provides a promising parallel distributed control scheme for large-scale complex objects whose dynamics are only partially known.
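The handover from feedback control to a learned inverse-dynamics model can be sketched, under strong simplifications, with feedback-error learning on a one-dimensional point mass; the linear feature set, gains and rates are assumptions, not the paper's:

```python
import numpy as np

# A fixed PD feedback controller initially drives the movement, while
# an adaptive inverse-dynamics model (here linear in [acc_des, vel, 1])
# learns feedforward commands using the feedback command as its error
# signal, and gradually takes over.
m, b = 2.0, 0.5                          # unknown mass and friction
dt, lr = 0.01, 0.05
kp, kd = 100.0, 20.0                     # PD feedback gains
theta = np.zeros(3)                      # inverse-model weights

for trial in range(50):
    x, v, fb_total = 0.0, 0.0, 0.0
    for t in np.arange(0, 1, dt):
        xd = np.sin(np.pi * t)           # desired trajectory
        vd = np.pi * np.cos(np.pi * t)
        ad = -np.pi**2 * np.sin(np.pi * t)
        feats = np.array([ad, v, 1.0])
        u_ff = theta @ feats             # inverse-model command
        u_fb = kp * (xd - x) + kd * (vd - v)
        u = u_ff + u_fb
        theta += lr * u_fb * feats * dt  # feedback command as error signal
        a = (u - b * v) / m              # true (unknown) dynamics
        x += dt * v
        v += dt * a
        fb_total += abs(u_fb) * dt
    if trial % 10 == 0:
        print(f"trial {trial}: feedback effort = {fb_total:.3f}")
```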

9.
The state of the art in computer modelling of neural networks with associative memory is reviewed. Available experimental data on learning and memory are considered for small neural systems, isolated synapses, and the molecular level. Computer simulations demonstrate that realistic models of neural ensembles exhibit properties that can be interpreted as image recognition, categorization, learning, prototype forming, etc. A bilayer model of an associative neural network is proposed: one layer corresponds to short-term memory, the other to long-term memory. Patterns are stored in terms of the synaptic strength matrix. We have studied the relaxational dynamics of neuron firing and suppression within the short-term memory layer under the influence of the long-term memory layer. The interaction between the layers has been found to create a number of novel stable states which are not among the learned patterns. These synthetic patterns may consist of elements belonging to different, non-intersecting learned patterns. Within the framework of a hypothesis of selective and definite coding of images in the brain, one can interpret the observed effect as an "idea-generating" process.

10.
Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources, with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm combines synaptogenesis, synaptic shedding, and bidirectional synaptic weight modification to produce a network whose outputs (i.e., neural codes) represent input patterns in proportion to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects the allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts.

11.
Some aspects of the communicational and computational features of the central nervous system are discussed. The existence in the central nervous system of two main types of interneuronal communication has been proposed: wiring transmission (i.e., the classical type of synaptic transmission) and volume transmission (i.e., a humoral type of non-synaptic transmission). Some features of these types of transmission are discussed, with special reference to the informational properties of peptide transmitters. With respect to the computational aspects of neural function, the identification of putative computational structures at the macroscopic (network) and microscopic (local circuit, synapse) levels suggests the existence of a hierarchical computational organization. In this context, the existence of a compartmental organization of various cerebral regions is discussed. It is hypothesized that membrane domains, made up of patches of membrane in which preselected molecular movements, and hence molecular interactions, are possible, can have an important role in the integrative capabilities of neural tissue. The coexistence of multiple neuroactive substances in central synapses is analyzed in the framework of information-transfer processes at this level. The presence of putative homeostatic, heterostatic and mnestic mechanisms in the synapse is also discussed.

12.
Motor learning with unstable neural representations   (total citations: 2; self-citations: 0; citations by others: 2)
Rokni U, Richardson AG, Bizzi E, Seung HS. Neuron 2007, 54(4):653-666
It is often assumed that learning takes place by changing an otherwise stable neural representation. To test this assumption, we studied changes in the directional tuning of primate motor cortical neurons during reaching movements performed in familiar and novel environments. During the familiar task, tuning curves exhibited slow random drift. During learning of the novel task, random drift was accompanied by systematic shifts of tuning curves. Our analysis suggests that motor learning is based on a surprisingly unstable neural representation. To explain these results, we propose that motor cortex is a redundant neural network, i.e., any single behavior can be realized by multiple configurations of synaptic strengths. We further hypothesize that synaptic modifications underlying learning contain a random component, which causes wandering among synaptic configurations with equivalent behaviors but different neural representations. We use a simple model to explore the implications of these assumptions.

13.
Deriving tractable reduced equations of biological neural networks that capture the macroscopic dynamics of sub-populations of neurons has been a longstanding problem in computational neuroscience. In this paper, we propose a reduction of large-scale multi-population stochastic networks based on mean-field theory. We derive, for a wide class of spiking neuron models, a system of differential equations of Wilson-Cowan type describing the macroscopic activity of the populations, under the assumption that synaptic integration is linear with random coefficients. Our reduction involves one unknown function, the effective non-linearity of the network of populations, which can be determined analytically in simple cases and computed numerically in general. This function depends on the underlying properties of the cells, in particular the noise level. Appropriate parameters and functions involved in the reduction are given for different neuron models: the McKean, FitzHugh-Nagumo and Hodgkin-Huxley models. Simulations of the reduced model show precise agreement with the macroscopic dynamics of the networks for the first two models.
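A reduced two-population (excitatory/inhibitory) system of this Wilson-Cowan form is straightforward to integrate; in the sketch below the effective nonlinearity S, which the paper derives from the single-cell model and the noise level, is replaced by a sigmoid placeholder with invented parameters:

```python
import numpy as np

# Reduced two-population system of the form d(nu)/dt = -nu + S(W nu + I).
# S stands in for the paper's effective nonlinearity; all weights,
# inputs, and the sigmoid gain are illustrative.
def S(x, gain=4.0):
    return 1.0 / (1.0 + np.exp(-gain * x))     # placeholder nonlinearity

W = np.array([[12.0, -10.0],                   # E<-E, E<-I
              [10.0,  -2.0]])                  # I<-E, I<-I
I_ext = np.array([-2.0, -4.0])
nu = np.array([0.1, 0.1])                      # population rates
dt = 0.001

trace = []
for _ in range(20000):
    nu = nu + dt * (-nu + S(W @ nu + I_ext))   # forward-Euler integration
    trace.append(nu.copy())
trace = np.array(trace)
print("final rates (E, I):", np.round(trace[-1], 3))
print("E-rate range over last 5000 steps:",
      np.round(trace[-5000:, 0].min(), 3), "to",
      np.round(trace[-5000:, 0].max(), 3))
```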

14.
Presynaptic function   (total citations: 5; self-citations: 0; citations by others: 5)
Changing the strength of synapses is key to the adaptive modifications of what neuronal circuits compute. Unsurprisingly, many different mechanisms have evolved to alter synaptic strength. Some of these mechanisms depend on the history of synaptic use, others reflect the activity of modulatory neurons that are controlled through neural computations, and still others involve more global measures of neural activity. The molecular machinery synapses use to convey information from one neuron to the next not only plays an essential part in brain function but also is at the basis of processes that are vital to all cells. Because membrane fusion events at synapses are so precisely controlled, synapses offer an especially favorable system in which to study these basic processes. Here, I review some of the recent progress that has been made in understanding both how synaptic strength is regulated and how fundamental cell biological mechanisms are used to accomplish neuronal intercommunication.

15.
BACKGROUND: It is now well established that persistent nonsynaptic neuronal plasticity occurs after learning and that, like synaptic plasticity, it can be the substrate for long-term memory. What remains unclear, though, is how nonsynaptic plasticity contributes to the altered neural network properties on which memory depends. Understanding how nonsynaptic plasticity is translated into modified network and behavioral output therefore represents an important objective of current learning and memory research. RESULTS: Using behavioral single-trial classical conditioning together with electrophysiological analysis and calcium imaging, we have explored the cellular mechanisms by which experience-induced nonsynaptic electrical changes in a neuronal soma remote from the synaptic region are translated into synaptic- and circuit-level effects. We show that after single-trial food-reward conditioning in the snail Lymnaea stagnalis, identified modulatory neurons that are extrinsic to the feeding network become persistently depolarized between 16 and 24 hr after training. This is delayed with respect to early memory formation but concomitant with the establishment and duration of long-term memory. The persistent nonsynaptic change is extrinsic to, and maintained independently of, synaptic effects occurring within the network directly responsible for the generation of feeding. Artificial membrane-potential manipulation and calcium-imaging experiments suggest a novel mechanism whereby the somal depolarization of an extrinsic neuron recruits command-like intrinsic neurons of the circuit underlying the learned behavior. CONCLUSIONS: We show that nonsynaptic plasticity in an extrinsic modulatory neuron encodes information that enables the expression of long-term associative memory, and we describe how this information can be translated into modified network and behavioral output.

16.
Long-term modification of synaptic strength is thought to be the basic mechanism underlying the activity-dependent refinement of neural circuits and the formation of the memories engrammed in them. Studies ranging from cell-culture preparations to human subjects indicate that the decision of whether a synapse will undergo strengthening or weakening depends critically on the temporal order of presynaptic and postsynaptic activity. At many synapses, potentiation is induced only when the presynaptic neuron fires an action potential within milliseconds before the postsynaptic neuron fires, whereas weakening occurs when the postsynaptic neuron fires first. Such processes might be important for the remodeling of neural circuits by activity during development and for network functions such as sequence learning and prediction. Ultimately, this synaptic property might also be fundamental to the cognitive process by which we structure our experience through cause-and-effect relations.
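The timing dependence described here is commonly fitted with an exponential pair-based STDP window; the amplitudes and time constants below are typical illustrative values, not measurements:

```python
import numpy as np

# Pair-based STDP window: potentiation when the presynaptic spike
# leads the postsynaptic spike, depression when it lags. Amplitudes
# and time constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0        # ms

def stdp(dt_ms):
    """Weight change for dt = t_post - t_pre (ms)."""
    if dt_ms > 0:                       # pre fires before post: potentiate
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    else:                               # post fires before pre: depress
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

for dt in (-50, -10, -1, 1, 10, 50):
    print(f"t_post - t_pre = {dt:+4d} ms -> dw = {stdp(dt):+.5f}")
```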

17.
The local field potential (LFP) is among the most important experimental measures for probing neural population activity, but a proper understanding of the link between the underlying neural activity and the LFP signal is still missing. Here we investigate this link by mathematically modeling the contributions to the LFP from a single layer-5 pyramidal neuron and a single layer-4 stellate neuron receiving synaptic input. An intrinsic dendritic low-pass filtering effect on the LFP signal, previously demonstrated for extracellular signatures of action potentials, is seen to strongly affect the LFP power spectra, even for frequencies as low as 10 Hz for the example pyramidal neuron. Further, the LFP signal is found to depend sensitively on both the recording position and the position of the synaptic input: the LFP power spectra recorded close to the active synapse are typically less low-pass filtered than spectra recorded further away, and some recording positions display striking band-pass characteristics. The frequency dependence of the current dipole moment set up by the synaptic input current is found to qualitatively account for several salient features of the observed LFP. Two approximate schemes for calculating the LFP, the dipole approximation and the two-monopole approximation, are tested and found to be potentially useful for translating results from large-scale neural network models into predictions for electroencephalographic (EEG) or electrocorticographic (ECoG) recordings.
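The two approximation schemes can be sketched for a single current source/sink pair in an infinite homogeneous medium; the positions, current amplitude and conductivity below are arbitrary illustrative values:

```python
import numpy as np

# Potential from a current source/sink pair (e.g., a synaptic sink and
# its return current) in an infinite homogeneous medium of
# conductivity sigma, computed with both approximation schemes. The
# dipole approximation converges to the two-monopole result at
# distances large compared with the source separation.
sigma = 0.3                              # S/m
I = 1e-9                                 # 1 nA source/sink pair
r_sink = np.array([0.0, 0.0, 0.0])       # synaptic input site (sink)
r_src = np.array([0.0, 0.0, 200e-6])     # return-current site (source)

def phi_two_monopole(r):
    return (I / (4 * np.pi * sigma)) * (1 / np.linalg.norm(r - r_src)
                                        - 1 / np.linalg.norm(r - r_sink))

def phi_dipole(r):
    p = I * (r_src - r_sink)             # current dipole moment
    r0 = (r_src + r_sink) / 2            # dipole location
    d = r - r0
    return (p @ d) / (4 * np.pi * sigma * np.linalg.norm(d) ** 3)

for dist in (0.5e-3, 2e-3, 10e-3):       # electrode distances (m)
    r = np.array([dist, 0.0, 100e-6])
    print(f"r = {dist * 1e3:5.1f} mm: two-monopole "
          f"{phi_two_monopole(r):.3e} V, dipole {phi_dipole(r):.3e} V")
```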

18.
Cyclic patterns of motor neuron activity are involved in the production of many rhythmic movements, such as walking, swimming, and scratching. These movements are controlled by neural circuits referred to as central pattern generators (CPGs). Some of these circuits function in the absence of both internal pacemakers and external feedback. We describe an associative neural network model whose dynamic behavior is similar to that of CPGs. The theory predicts the strength of all possible connections between pairs of neurons on the basis of the outputs of the CPG. It also allows the mean operating levels of the neurons to be deduced from the measured synaptic strengths between the pairs of neurons. We apply our theory to the CPG controlling escape swimming in the mollusk Tritonia diomedea. The basic rhythmic behavior is shown to be consistent with a simplified model that approximates neurons as threshold units and slow synaptic responses as elementary time delays. The model we describe may have relevance to other fixed action behaviors, as well as to the learning, recall, and recognition of temporally ordered information.
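A minimal two-unit illustration of the model class (threshold units, with slow synaptic responses idealized as an elementary delay D): the loop in which unit 0 excites unit 1 and unit 1 inhibits unit 0 contains one net sign inversion, which is enough to sustain a rhythm. All values are invented:

```python
import numpy as np
from collections import deque

# Two threshold units with an elementary synaptic delay D: unit 0
# receives tonic drive and delayed inhibition from unit 1; unit 1
# receives delayed excitation from unit 0. The single net inversion
# around the loop produces a sustained cyclic pattern of period 4*D.
D = 10                                   # synaptic delay, in time steps
hist = deque([np.array([0, 0])] * D, maxlen=D)

pattern = []
for t in range(80):
    past = hist[0]                       # activity D steps ago
    s0 = 1 if 0.5 - past[1] > 0 else 0   # tonic drive minus inhibition
    s1 = 1 if past[0] - 0.5 > 0 else 0   # delayed excitation from unit 0
    s = np.array([s0, s1])
    hist.append(s)
    pattern.append(s)

for row, label in zip(np.array(pattern).T, ("unit 0", "unit 1")):
    print(label, "".join("#" if v else "." for v in row))
```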

19.
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression, given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model of the ventral visual pathway, VisNet, is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.

20.
Q Gan, Y Wei. Bio Systems 1992, 27(3):137-144
A variant of the FitzHugh-Nagumo model is proposed in order to make full use of the computational properties of intraneuronal dynamics. The mechanisms of threshold and refractory periods resulting from the double dynamical processes are studied qualitatively through computer simulation. The results show that the variant neuron model has the property that its threshold, refractory period and response amplitude are dynamically adjustable. The paper also discusses some problems relating to the collective properties, learning and implementation of neural networks based on the proposed neuron model. It is noted that the implicit way of describing the threshold and refractory period is advantageous for adaptive learning in neural networks, and that molecular electronics probably provides an effective approach to implementing the above neuron model.
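The paper's variant is not reproduced here; the standard FitzHugh-Nagumo system below already exhibits the threshold behavior arising from the double dynamical process (fast voltage v, slow recovery w): a brief weak pulse decays while a stronger one triggers a full spike. Pulse amplitudes and timing are illustrative:

```python
import numpy as np

# Standard FitzHugh-Nagumo equations: dv/dt = v - v^3/3 - w + I,
# dw/dt = eps*(v + a - b*w). A brief current pulse is applied and the
# peak voltage afterwards is reported, showing the threshold between
# subthreshold responses and full spikes.
a, b, eps, dt = 0.7, 0.8, 0.08, 0.05

def run_pulse(amp, steps=2000, on=(200, 240)):
    v, w, vmax = -1.20, -0.62, -10.0     # start near the resting state
    for i in range(steps):
        I = amp if on[0] <= i < on[1] else 0.0
        v, w = (v + dt * (v - v**3 / 3 - w + I),
                w + dt * eps * (v + a - b * w))
        if i >= on[0]:
            vmax = max(vmax, v)
    return vmax

for amp in (0.1, 0.3, 0.8, 1.5):
    print(f"pulse amplitude {amp:4.1f}: peak v after pulse = "
          f"{run_pulse(amp):+.2f}")
```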

