Similar Documents
20 similar documents found (search time: 15 ms).
2.
We consider a neural network model in which the single neurons are chosen to closely resemble known physiological properties. The neurons are assumed to be linked by synapses which change their strength according to Hebbian rules on a short time scale (100 ms). The dynamics of the network (the time evolution of the cell potentials and the synapses) is investigated by computer simulation. As in more abstract network models (Cooper 1973; Hopfield 1982; Kohonen 1984), it is found that the local dynamics of the cell potentials and the synaptic strengths result in global cooperative properties of the network and enable the network to process an incoming flux of information and to learn and store patterns associatively. A trained net can fill in missing details of a pattern, correct wrong details, and suppress noise in a pattern. The network can further abstract the prototype from a series of patterns with variations. A suitable coupling constant connecting the dynamics of the cell potentials with the synaptic strengths is derived by a mean-field approximation. This coupling constant controls the neural sensitivity and thereby avoids both extremes of the network state: permanent inactivity and epileptic hyperactivity.
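A minimal sketch of this kind of coupled dynamics, assuming leaky rate units, a sigmoidal firing function, and an illustrative value for the activity-synapse coupling constant (none of these specifics are taken from the paper):

```python
# Sketch: membrane potentials and fast Hebbian synapses evolve together.
# All parameter values and the sigmoidal rate function are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                 # neurons
dt, tau_u, tau_w = 1.0, 10.0, 100.0     # ms; synapses change on ~100 ms
gamma = 0.1                             # activity-synapse coupling constant

u = np.zeros(N)                         # cell potentials
W = np.zeros((N, N))                    # synaptic strengths
pattern = (rng.random(N) < 0.2).astype(float)   # input pattern to learn

def rate(u):
    return 1.0 / (1.0 + np.exp(-u))     # firing rate vs. potential

for _ in range(1000):
    r = rate(u)
    u += dt / tau_u * (-u + W @ r + 4.0 * pattern - 2.0)  # leak + input
    W += dt / tau_w * (-W + gamma * np.outer(r, r))       # fast Hebb rule
    np.fill_diagonal(W, 0.0)            # no self-synapses
```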

3.
In this paper, we study the combined dynamics of the neural activity and the synaptic efficacy changes in a fully connected network of biologically realistic neurons with simple synaptic plasticity dynamics including both potentiation and depression. Using a mean-field technique, we analyzed the equilibrium states of neural networks with dynamic synaptic connections and found a class of bistable networks. For this class of networks, one of the stable equilibrium states shows strong connectivity and coherent responses to external input. In the other stable equilibrium, the network is loosely connected and responds non-coherently to external input. Transitions between the two states can be achieved by positively or negatively correlated external inputs. Such networks can therefore switch between their phases according to the statistical properties of the external input. Non-coherent input can only "read" the state of the network, while a correlated one can change its state. We speculate that this property, specific to plastic neural networks, can give a clue to understanding fully unsupervised learning models. Received: 8 August 1999 / Accepted in revised form: 16 March 2000
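The bistability can be illustrated with a two-variable mean-field caricature (mean activity m, mean efficacy J); the equations and parameter values below are assumptions chosen to produce two stable states, not the paper's actual reduction:

```python
# Sketch: coupled mean activity m and mean synaptic efficacy J.
import numpy as np

def simulate(m0, J0, h=-5.0, steps=5000, dt=0.1):
    m, J = m0, J0
    tau_m, tau_J, g, alpha = 1.0, 50.0, 30.0, 1.0   # illustrative values
    for _ in range(steps):
        m += dt / tau_m * (-m + 1 / (1 + np.exp(-(g * J * m + h))))
        J += dt / tau_J * (-J + alpha * m * m)  # potentiation vs. decay
    return round(m, 3), round(J, 3)

print(simulate(0.0, 0.0))   # settles in the loosely connected state
print(simulate(1.0, 1.0))   # settles in the strongly connected state
```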

4.
Deriving tractable reduced equations of biological neural networks that capture the macroscopic dynamics of sub-populations of neurons has been a longstanding problem in computational neuroscience. In this paper, we propose a reduction of large-scale multi-population stochastic networks based on mean-field theory. We derive, for a wide class of spiking neuron models, a system of differential equations of the type of the usual Wilson-Cowan systems describing the macroscopic activity of populations, under the assumption that synaptic integration is linear with random coefficients. Our reduction involves one unknown function, the effective non-linearity of the network of populations, which can be determined analytically in simple cases and computed numerically in general. This function depends on the underlying properties of the cells, in particular the noise level. Appropriate parameters and functions involved in the reduction are given for different neuron models: the McKean, FitzHugh-Nagumo and Hodgkin-Huxley models. Simulations of the reduced model show precise agreement with the macroscopic dynamics of the networks for the first two models.
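The target form of such a reduction is a Wilson-Cowan-type system; the sketch below stands in a sigmoid for the paper's unknown effective non-linearity (that substitution, and all parameter values, are assumptions):

```python
# Sketch: two populations driven through an effective non-linearity F.
import numpy as np

def F(x, noise_level=1.0):
    # effective population non-linearity; flattens as the noise level grows
    return 1 / (1 + np.exp(-x / noise_level))

J = np.array([[10.0, -8.0],     # population-to-population couplings
              [ 9.0, -2.0]])
I = np.array([1.0, 0.5])        # external drives
tau, dt = 10.0, 0.1
nu = np.zeros(2)                # macroscopic population activities

for _ in range(2000):
    nu += dt / tau * (-nu + F(J @ nu + I))   # Wilson-Cowan-type dynamics
```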

5.
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general, we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
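The core of the point-process approach can be sketched as a Poisson GLM: regress each neuron's binned spike count on the recent spiking history of all neurons and read the fitted coupling weights as effective-connectivity estimates. The synthetic data and every parameter below are illustrative assumptions:

```python
# Sketch: GLM-based effective connectivity from binned spike trains.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)
N, T, lag = 3, 20000, 1
spikes = rng.poisson(0.1, size=(T, N))   # stand-in binned spike trains

X = spikes[:-lag]                        # one-bin spiking history
coupling = np.zeros((N, N))
for i in range(N):
    y = spikes[lag:, i]
    coupling[i] = PoissonRegressor(alpha=1e-3).fit(X, y).coef_

# thresholding |coupling| then yields a binary connectivity estimate
```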

6.
Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to what degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their ability to generate networks with large structural variability. In particular, we consider the statistical constraints which the respective construction scheme imposes on the generated networks. After having identified the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for being generic statistical laws of complex networks. In fact, we find that generic, not model-related dependencies between different network characteristics do exist. This makes it possible to infer global features from local ones using regression models trained on networks with high generalization power. Our results confirm and extend previous findings regarding the synchronization properties of neural networks. Our method seems especially relevant for large networks, which are difficult to map completely, like the neural networks in the brain. The structure of such large networks cannot be fully sampled with present technology. Our approach provides a method to estimate global properties of under-sampled networks to a good approximation. Finally, we demonstrate on three different data sets (the C. elegans neuronal network, the R. prowazekii metabolic network, and a network of synonyms extracted from Roget's Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.
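The regression idea can be sketched directly: generate networks from several classical models, compute cheap local statistics, and regress a global property on them (the particular features and target below are assumptions):

```python
# Sketch: predicting a global network property from local statistics.
import numpy as np
import networkx as nx
from sklearn.linear_model import LinearRegression

graphs = (
    [nx.erdos_renyi_graph(60, 0.1, seed=s) for s in range(20)]
    + [nx.watts_strogatz_graph(60, 6, 0.2, seed=s) for s in range(20)]
    + [nx.barabasi_albert_graph(60, 3, seed=s) for s in range(20)]
)

X = np.array([[nx.average_clustering(g),                  # local features
               np.mean([d for _, d in g.degree()]),
               nx.transitivity(g)] for g in graphs])
y = np.array([nx.global_efficiency(g) for g in graphs])   # global target

reg = LinearRegression().fit(X, y)
print(reg.score(X, y))   # how well local statistics predict the global one
```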

8.
This work proposes an artificial modular neural network (MNN) in which every module network exchanges input/output information with the others simultaneously, and studies the basic dynamical characteristics of this network through both computer simulations and analytical considerations. A notable feature of this model is that its representation is generic with regard to the number of composed modules, network topologies, and classes of introduced interactions. The information processing of the MNN is described as the minimization of a total-energy function that consists of partial-energy functions for the modules and their interactions, and the activity and weight dynamics are derived from the total-energy function under the Lyapunov stability condition. This concept was realized by the Cross-Coupled Hopfield Nets (CCHN) that one of the authors proposed. In this paper, in order to investigate the basic dynamical properties of CCHN, we offer a representative model called Cross-Coupled Hopfield Nets with Local And Global Interactions (CCHN-LAGI), in which two distinct classes of interactions (local and global) are introduced. Through a conventional test for associative memories, it is confirmed that our energy-function-based approach yields proper dynamics of CCHN-LAGI even if the networks have different modularity. We also discuss the contribution of a single interaction and the joint contribution of the two distinct interactions through eigenvalue analysis of the connection matrices. Received: 18 July 1995 / Accepted in revised form: 2 October 1997
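The total-energy construction can be sketched as follows, assuming symmetric within-module weights with zero diagonal and symmetric treatment of each cross-module coupling (module sizes, coupling forms, and names are illustrative):

```python
# Sketch: total energy of coupled Hopfield modules and a descent update.
import numpy as np

def total_energy(Ws, Cs, states):
    # Ws[k]: within-module weights; Cs[(k, l)]: cross-module coupling
    E = -0.5 * sum(states[k] @ Ws[k] @ states[k] for k in range(len(Ws)))
    E -= sum(states[k] @ C @ states[l] for (k, l), C in Cs.items())
    return E

def async_update(Ws, Cs, states, k, i):
    # set unit i of module k by the sign of its total field; under the
    # symmetry assumptions above this never increases total_energy
    field = Ws[k][i] @ states[k]
    field += sum(C[i] @ states[l] for (a, l), C in Cs.items() if a == k)
    field += sum(C[:, i] @ states[a] for (a, b), C in Cs.items() if b == k)
    states[k][i] = 1.0 if field >= 0 else -1.0
```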

10.
In this paper, we investigate the problem of global robust stability for a class of interval Hopfield neural networks with time-varying delays. Criteria for the global robust stability of such networks are derived by constructing suitable Lyapunov functionals. As a by-product, for conventional Hopfield neural networks with time-varying delays, we also obtain some new criteria for global asymptotic stability.
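Robust stability results of this kind are typically stated for delayed Hopfield dynamics with interval-bounded parameters; one standard formulation (an illustrative assumption, not quoted from the paper) is:

```latex
\dot{x}_i(t) = -c_i x_i(t)
  + \sum_{j=1}^{n} a_{ij} f_j\big(x_j(t)\big)
  + \sum_{j=1}^{n} b_{ij} f_j\big(x_j(t-\tau_{ij}(t))\big) + u_i,
\qquad
c_i \in [\underline{c}_i,\,\overline{c}_i],\quad
a_{ij} \in [\underline{a}_{ij},\,\overline{a}_{ij}],\quad
b_{ij} \in [\underline{b}_{ij},\,\overline{b}_{ij}],\quad
0 \le \tau_{ij}(t) \le \tau.
```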

11.
Influence of noise on the function of a “physiological” neural network
A model neural network with stochastic elements in its millisecond dynamics is investigated. The network consists of neuronal units which are modelled in close analogy to physiological neurons. The dynamical variables of the network are the cellular potentials, axonic currents and synaptic efficacies. The dynamics of the synapses obeys a modified Hebbian rule and, as proposed by v. d. Malsburg (1981, 1985), develops on a time scale of a tenth of a second. In a previous publication (Buhmann and Schulten 1986) we have confirmed that the resulting noiseless autoassociative network is capable of the well-known computational tasks of formal associative networks (Cooper 1973; Kohonen et al. 1981, 1984; Hopfield 1982). In the present paper we demonstrate that random fluctuations of the membrane potential improve the performance of the network. In comparison to a deterministic network, a noisy neural network can learn at lower input frequencies and with lower average neural firing rates. The electrical activity of a noisy network is very reminiscent of that observed in physiological recordings. We demonstrate furthermore that associative storage reduces the effective dimension of the phase space in which the electrical activity of the network develops.
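The qualitative benefit of noise can be seen in a one-line caricature: a weak, subthreshold input never drives a deterministic threshold unit, while membrane fluctuations carry a noisy one across threshold (all values below are illustrative assumptions):

```python
# Sketch: noise lets a unit respond to a subthreshold input.
import numpy as np

rng = np.random.default_rng(3)
T, theta, drive = 10000, 1.0, 0.8        # threshold 1.0, input only 0.8

def firing_rate(noise_sigma):
    v = drive + noise_sigma * rng.standard_normal(T)   # membrane potential
    return (v > theta).mean()

print(firing_rate(0.0))   # 0.0: the deterministic unit never fires
print(firing_rate(0.3))   # > 0: fluctuations cross the threshold
```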

12.
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of its synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
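The three-threshold rule as stated above transcribes almost directly into code; the threshold values and step size below are illustrative assumptions:

```python
# Sketch: three-threshold plasticity on synapses with active inputs.
import numpy as np

def update_synapses(W, x, theta_low=-1.0, theta_mid=0.0, theta_high=1.0,
                    dw=0.01):
    """W: weight matrix; x: binary input pattern (1 = active)."""
    h = W @ x                                  # local fields
    for i in range(len(h)):
        if h[i] <= theta_low or h[i] >= theta_high:
            continue                           # outside the band: no change
        sign = 1.0 if h[i] > theta_mid else -1.0   # potentiate / depress
        W[i, x > 0] += sign * dw           # only synapses of active inputs
    return W
```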

13.
We investigate an artificial neural network model with a modified Hebb rule. It is an auto-associative neural network similar to the Hopfield and Willshaw models, and it has properties of both. A further property is that the patterns are sparsely coded and are stored in cycles of synchronous neural activities. For some parameter ranges, the cycles of activity increase the capacity of the model. We discuss basic properties of the model and some implementation issues, namely optimization of the algorithms. We describe the modification of the Hebb learning rule, the learning algorithm, the generation of patterns, the decomposition of patterns into cycles, and pattern recall.
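Since the abstract does not fully specify the modified rule, the sketch below stands in the clipped (0/1) Hebbian storage shared by the two cited model families, with sparse patterns and winners-take-all recall (all sizes are assumptions):

```python
# Sketch: clipped Hebbian storage and recall of sparse binary patterns.
import numpy as np

rng = np.random.default_rng(4)
N, K, P = 200, 10, 30                  # neurons, active units, patterns
patterns = np.zeros((P, N), dtype=int)
for p in range(P):
    patterns[p, rng.choice(N, K, replace=False)] = 1

W = np.zeros((N, N), dtype=int)
for xi in patterns:
    W |= np.outer(xi, xi)              # clipped (0/1) Hebbian learning
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
cue[rng.choice(np.flatnonzero(cue), 3, replace=False)] = 0   # degrade cue
recalled = np.zeros(N, dtype=int)
recalled[np.argsort(W @ cue)[-K:]] = 1     # K winners-take-all recall
print((recalled == patterns[0]).all())     # typically recovers the pattern
```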

14.
A system's wiring constrains its dynamics, yet modelling of neural structures often overlooks the specific networks formed by their neurons. We developed an approach for constructing anatomically realistic networks and reconstructed the GABAergic microcircuit formed by the medium spiny neurons (MSNs) and fast-spiking interneurons (FSIs) of the adult rat striatum. We grew dendrite and axon models for these neurons and extracted probabilities for the presence of these neurites as a function of distance from the soma. From these, we found the probabilities of intersection between the neurites of two neurons given their inter-somatic distance, and used these to construct three-dimensional striatal networks. The MSN dendrite models predicted that half of all dendritic spines are within 100 μm of the soma. The constructed networks predict distributions of gap junctions between FSI dendrites, synaptic contacts between MSNs, and synaptic inputs from FSIs to MSNs that are consistent with current estimates. The models predict that to achieve this, FSIs should be at most 1% of the striatal population. They also show that the striatum is sparsely connected: FSI-MSN and MSN-MSN contacts respectively form 7% and 1.7% of all possible connections. The models predict two striking network properties: the dominant GABAergic input to an MSN arises from neurons with somas at the edge of its dendritic field, and FSIs are interconnected on two different spatial scales: locally by gap junctions and distally by synapses. We show that both properties influence striatal dynamics: the most potent inhibition of an MSN arises from a region of striatum at the edge of its dendritic field, and the combination of local gap-junction and distal synaptic networks between FSIs sets a robust input-output regime for the MSN population. Our models thus intimately link striatal micro-anatomy to its dynamics, providing a biologically grounded platform for further study.
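The construction step can be sketched by scattering somas in a volume and connecting pairs with a distance-dependent probability; the exponential profile below is a stand-in assumption (the paper derives its profiles from reconstructed neurites):

```python
# Sketch: network construction from distance-dependent connection rules.
import numpy as np

rng = np.random.default_rng(5)
n, side, lam = 500, 300.0, 80.0          # cells, box size (um), length scale
pos = rng.uniform(0, side, size=(n, 3))  # soma positions

d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
p = 0.5 * np.exp(-d / lam)               # connection probability vs distance
adj = (rng.random((n, n)) < p) & ~np.eye(n, dtype=bool)
print(adj.mean())                        # realized connection density
```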

15.
We describe a class of feedforward neural network models for associative content-addressable memory (ACAM) which utilize sparse internal representations for stored data. In addition to the input and output layers, our networks incorporate an intermediate processing layer which serves to label each stored memory and to perform error correction and association. We study two classes of internal label representations: the unary representation and various sparse, distributed representations. Finally, we consider storage of sparse data and sparsification of data. These models are found to have advantages in terms of storage capacity, hardware efficiency, and recall reliability when compared to the Hopfield model, and to possess analogies to both biological neural networks and standard digital computer memories.
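With the unary label representation, recall reduces to a best-match lookup followed by a read-out of the stored content; the sketch below assumes random bipolar memories and illustrative sizes:

```python
# Sketch: three-layer ACAM with a unary (one-hot) internal label layer.
import numpy as np

rng = np.random.default_rng(6)
N, P = 64, 8
memories = rng.choice([-1, 1], size=(P, N))

W_in = memories                  # input -> label layer: match scores
W_out = memories                 # label layer -> output: stored content

def recall(x):
    label = np.zeros(P)
    label[np.argmax(W_in @ x)] = 1        # winning label (error correction)
    return np.sign(label @ W_out)         # associated stored memory

noisy = memories[0] * np.where(rng.random(N) < 0.2, -1, 1)   # flip 20%
print((recall(noisy) == memories[0]).all())
```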

16.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed-point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large-N and sparse coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
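For the Willshaw scenario, the basic bookkeeping is classical: after storing P sparse patterns with clipped Hebbian learning, the fraction of potentiated synapses and a naive bits-per-synapse estimate follow closed forms (the numbers below reproduce the textbook estimate, not the paper's refined analysis):

```python
# Sketch: classic Willshaw capacity bookkeeping.
import numpy as np

def H2(x):                                  # binary entropy in bits
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

N, f = 10000, 0.005                         # neurons, coding level
print(np.log(2) / -np.log(1 - f * f))       # P at which half the synapses are on
for P in (10_000, 30_000, 100_000):
    p1 = 1 - (1 - f * f) ** P               # density of potentiated synapses
    bits_per_synapse = P * N * H2(f) / N**2 # naive stored-information estimate
    print(P, round(p1, 3), round(bits_per_synapse, 3))
```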

17.
Short-term synaptic dynamics differ markedly across connections and strongly regulate how action potentials communicate information. To model the range of synaptic dynamics observed in experiments, we have developed a flexible mathematical framework based on a linear-nonlinear operation. This model can capture various experimentally observed features of synaptic dynamics and different types of heteroskedasticity. Despite its conceptual simplicity, we show that it is more adaptable than previous models. Combined with a standard maximum-likelihood approach, synaptic dynamics can be accurately and efficiently characterized using naturalistic stimulation patterns. These results make explicit that synaptic processing bears algorithmic similarities to information processing in convolutional neural networks.
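The linear-nonlinear operation can be sketched in a few lines: convolve the presynaptic spike train with an efficacy kernel, then pass the result through a sigmoidal readout that sets the amplitude of each response (the kernel shape and every parameter are illustrative assumptions):

```python
# Sketch: linear-nonlinear model of short-term synaptic dynamics.
import numpy as np

rng = np.random.default_rng(7)
dt, T = 1.0, 1000                               # ms, duration
t = np.arange(0, 200, dt)
kernel = -0.8 * np.exp(-t / 50.0)               # depressing efficacy kernel

spikes = (rng.random(T) < 0.02).astype(float)   # presynaptic spike train
drive = np.convolve(spikes, kernel)[:T]         # linear stage
amplitude = 1 / (1 + np.exp(-(drive + 1.0)))    # nonlinear readout
psp_size = amplitude * spikes                   # response at each spike
```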

18.
Recurrent neural networks with full, symmetric connectivity have been extensively studied as associative memories and pattern recognition devices. However, there is considerable evidence that sparse, asymmetrically connected, mainly excitatory networks with broadly directed inhibition are more consistent with biological reality. In this paper, we use the technique of return maps to study the dynamics of random networks with sparse, asymmetric connectivity and nonspecific inhibition. These networks show three qualitatively different kinds of behavior: fixed points, cycles of low period, and extremely long cycles verging on aperiodicity. Using statistical arguments, we relate these behaviors to network parameters and present empirical evidence for the accuracy of this statistical model. The model, in turn, leads to methods for controlling the level of activity in networks. Studying random, untrained networks provides an understanding of the intrinsic dynamics of these systems. Such dynamics could provide a substrate for the much more complex behavior shown when synaptic modification is allowed.
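A return map for such a network plots total activity at step t+1 against step t; the sketch below builds a sparse random excitatory network with nonspecific inhibition (all parameters are illustrative assumptions; sweeping the inhibition strength and threshold moves the network between the regimes described above):

```python
# Sketch: return map of a sparse, asymmetric network with global inhibition.
import numpy as np

rng = np.random.default_rng(8)
N, p_conn, k_inh, theta = 400, 0.05, 0.04, 2.0
W = (rng.random((N, N)) < p_conn).astype(float)   # sparse, asymmetric
np.fill_diagonal(W, 0.0)

s = (rng.random(N) < 0.1).astype(float)
activity = []
for _ in range(200):
    h = W @ s - k_inh * s.sum()          # excitation minus global inhibition
    s = (h > theta).astype(float)
    activity.append(s.mean())

return_map = list(zip(activity[:-1], activity[1:]))   # points (a_t, a_t+1)
```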

19.
Neural networks are increasingly being used in science to infer hidden dynamics of natural systems from noisy observations, a task typically handled by hierarchical models in ecology. This article describes a class of hierarchical models parameterised by neural networks, termed neural hierarchical models. The derivation of such models analogises the relationship between regression and neural networks. A case study is developed for a neural dynamic occupancy model of North American bird populations, trained on millions of detection/non-detection time series for hundreds of species, providing insights into colonisation and extinction at a continental scale. Flexible models are increasingly needed that scale to large data and represent ecological processes. Neural hierarchical models satisfy this need, providing a bridge between deep learning and ecological modelling that combines the function representation power of neural networks with the inferential capacity of hierarchical models.
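The basic shape of a neural occupancy model can be sketched with two small networks mapping covariates to occupancy and detection probabilities, combined through the standard occupancy likelihood (the one-hidden-layer nets, shapes, and stand-in data are all assumptions):

```python
# Sketch: a neural occupancy model's likelihood, with stand-in data.
import numpy as np

rng = np.random.default_rng(9)

def mlp(x, W1, W2):                       # one hidden layer, sigmoid output
    return 1 / (1 + np.exp(-(np.tanh(x @ W1) @ W2)))

S, V, D = 100, 4, 3                       # sites, visits, covariate dims
site_cov = rng.normal(size=(S, D))
visit_cov = rng.normal(size=(S, V, D))
y = rng.integers(0, 2, size=(S, V))       # stand-in detection histories

W1a, W2a = rng.normal(size=(D, 8)), rng.normal(size=8)
W1b, W2b = rng.normal(size=(D, 8)), rng.normal(size=8)

psi = mlp(site_cov, W1a, W2a)             # P(site occupied)
p = mlp(visit_cov, W1b, W2b)              # P(detection | occupied), per visit

lik = psi * np.prod(p**y * (1 - p)**(1 - y), axis=1) \
    + (1 - psi) * (y.sum(axis=1) == 0)    # marginal site likelihood
loglik = np.log(lik).sum()                # objective one would maximize
```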

20.
The state of the art in computer modelling of neural networks with associative memory is reviewed. The available experimental data are considered on learning and memory in small neural systems, in isolated synapses, and at the molecular level. Computer simulations demonstrate that realistic models of neural ensembles exhibit properties which can be interpreted as image recognition, categorization, learning, prototype formation, etc. A bilayer model of an associative neural network is proposed. One layer corresponds to short-term memory, the other to long-term memory. Patterns are stored in terms of the synaptic strength matrix. We have studied the relaxational dynamics of neuron firing and suppression within the short-term memory layer under the influence of the long-term memory layer. The interaction among the layers has been found to create a number of novel stable states which are not the learning patterns. These synthetic patterns may consist of elements belonging to different non-intersecting learning patterns. Within the framework of a hypothesis of selective and definite coding of images in the brain, one can interpret the observed effect as an "idea generating" process.
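The bilayer interaction can be caricatured by letting a fast layer relax under a fixed long-term Hopfield matrix; mixed states then show up as comparable overlaps with several stored patterns (sizes and the coupling are illustrative assumptions):

```python
# Sketch: short-term layer relaxing under a long-term Hopfield matrix.
import numpy as np

rng = np.random.default_rng(10)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
W_long = patterns.T @ patterns / N        # long-term memory matrix (fixed)
np.fill_diagonal(W_long, 0.0)

s = rng.choice([-1, 1], size=N)           # short-term layer state
for _ in range(20 * N):                   # asynchronous relaxation
    i = rng.integers(N)
    s[i] = 1 if W_long[i] @ s >= 0 else -1

print(np.round(patterns @ s / N, 2))      # overlaps; a mixed "synthetic"
                                          # state has several nonzero ones
```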
