Similar Articles
20 similar articles found (search time: 31 ms)
1.
We examine the performance of Hebbian-like attractor neural networks, recalling stored memory patterns from their distorted versions. Searching for an activation (firing-rate) function that maximizes the performance in sparsely connected low-activity networks, we show that the optimal activation function is a threshold-sigmoid of the neuron's input field. This function is shown to be in close correspondence with the dependence of the firing rate of cortical neurons on their integrated input current, as described by neurophysiological recordings and conductance-based models. It also accounts for the decreasing-density shape of firing rates that has been reported in the literature. Received: 9 December 1994 / Accepted in revised form: 9 January 1996
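As a rough illustration of the recall setting described above, the following sketch recalls a stored low-activity pattern from a distorted cue using a threshold-sigmoid of each neuron's input field. The covariance-rule weight matrix and all parameter values (network size, coding level, connection probability, threshold, gain) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, a = 1000, 5, 0.1           # neurons, stored patterns, coding level (activity)
c = 0.05                         # connection probability (sparse connectivity)

# Low-activity binary patterns stored with a covariance-type Hebbian rule
xi = (rng.random((P, N)) < a).astype(float)
C = rng.random((N, N)) < c                    # sparse connectivity mask
np.fill_diagonal(C, False)
W = C * ((xi - a).T @ (xi - a)) / (c * N * a * (1 - a))

def threshold_sigmoid(h, theta=0.0, gain=5.0):
    """Threshold-sigmoid of the input field: zero below threshold, saturating above."""
    return np.where(h > theta, np.tanh(gain * (h - theta)), 0.0)

# Recall: start from a distorted version of pattern 0 and iterate the dynamics
state = xi[0].copy()
flip = rng.random(N) < 0.2                     # 20% of units corrupted
state[flip] = 1.0 - state[flip]
for _ in range(20):
    state = threshold_sigmoid(W @ state)

overlap = (state * (xi[0] - a)).sum() / (N * a * (1 - a))
print(f"overlap with stored pattern after recall: {overlap:.2f}")
```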

2.
Neural networks are modelling tools that are, in principle, able to capture the input-output behaviour of arbitrary systems that may include the dynamics of animal populations or brain circuits. While a neural network model is useful if it captures phenomenologically the behaviour of the target system in this way, its utility is amplified if key mechanisms of the model can be discovered, and identified with those of the underlying system. In this review, we first describe, at a fairly high level with minimal mathematics, some of the tools used in constructing neural network models. We then go on to discuss the implications of network models for our understanding of the system they are supposed to describe, paying special attention to those models that deal with neural circuits and brain systems. We propose that neural nets are useful for brain modelling if they are viewed in a wider computational framework originally devised by Marr. Here, neural networks are viewed as an intermediate mechanistic abstraction between 'algorithm' and 'implementation', which can provide insights into biological neural representations and their putative supporting architectures.

3.
Neural population dynamics are often highly coordinated, allowing task-related computations to be understood as neural trajectories through low-dimensional subspaces. How the network connectivity and input structure give rise to such activity can be investigated with the aid of low-rank recurrent neural networks, a recently-developed class of computational models which offer a rich theoretical framework linking the underlying connectivity structure to emergent low-dimensional dynamics. This framework has so far relied on the assumption of all-to-all connectivity, yet cortical networks are known to be highly sparse. Here we investigate the dynamics of low-rank recurrent networks in which the connections are randomly sparsified, which makes the network connectivity formally full-rank. We first analyse the impact of sparsity on the eigenvalue spectrum of low-rank connectivity matrices, and use this to examine the implications for the dynamics. We find that in the presence of sparsity, the eigenspectra in the complex plane consist of a continuous bulk and isolated outliers, a form analogous to the eigenspectra of connectivity matrices composed of a low-rank and a full-rank random component. This analogy allows us to characterise distinct dynamical regimes of the sparsified low-rank network as a function of key network parameters. Altogether, we find that the low-dimensional dynamics induced by low-rank connectivity structure are preserved even at high levels of sparsity, and can therefore support rich and robust computations even in networks sparsified to a biologically-realistic extent.
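A minimal numerical sketch of the kind of analysis described above: build a unit-rank connectivity matrix, randomly sparsify it, and inspect the eigenvalue spectrum for a continuous bulk plus isolated outliers. The network size, rank-one structure and 80% sparsification level are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
p_keep = 0.2                                  # fraction of connections retained

# Rank-one connectivity J = m n^T / N (unit-rank structure)
m = rng.standard_normal(N)
n = rng.standard_normal(N)
J_low_rank = np.outer(m, n) / N

# Random sparsification: zero out a random subset of entries (rescaled to keep the mean)
mask = rng.random((N, N)) < p_keep
J_sparse = (J_low_rank * mask) / p_keep

eigs = np.linalg.eigvals(J_sparse)
print(f"largest |eigenvalue| (outlier candidate): {np.abs(eigs).max():.3f}")
print(f"mean |eigenvalue| (rough bulk scale):     {np.abs(eigs).mean():.3f}")
```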

4.
J Yang, P Li. PLoS ONE, 2012, 7(8): e42993
Are explicit versus implicit learning mechanisms reflected in the brain as distinct neural structures, as previous research indicates, or are they distinguished by brain networks that involve overlapping systems with differential connectivity? In this functional MRI study we examined the neural correlates of explicit and implicit learning of artificial grammar sequences. Using effective connectivity analyses we found that brain networks of different connectivity underlie the two types of learning: while both processes involve activation in a set of cortical and subcortical structures, explicit learners engage a network that uses the insula as a key mediator whereas implicit learners evoke a direct frontal-striatal network. Individual differences in working memory also differentially impact the two types of sequence learning.

5.
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
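The paper's construction relies on non-reversible Markov chains tailored to spiking dynamics; the toy sketch below shows only the generic baseline idea of neural sampling that such work builds on, with stochastic binary units performing ordinary Gibbs updates on a small Boltzmann distribution. The weights, biases and chain length are arbitrary illustrative choices, and this is explicitly not the paper's sampler.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 5                                          # number of binary random variables / units

# Symmetric weights and biases defining p(z) proportional to exp(0.5 z^T W z + b^T z)
W = rng.standard_normal((K, K)) * 0.5
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.standard_normal(K) * 0.5

def gibbs_sample(n_steps=50_000, burn_in=5_000):
    """Each unit turns on (z_k = 1) with probability sigmoid of its input field."""
    z = rng.integers(0, 2, K).astype(float)
    samples = []
    for t in range(n_steps):
        k = t % K                              # sweep through the units in order
        field = W[k] @ z + b[k]
        z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-field)))
        if t >= burn_in:
            samples.append(z.copy())
    return np.array(samples)

samples = gibbs_sample()
print("sampled marginals P(z_k = 1):", samples.mean(axis=0).round(3))
```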

6.
Non-linear data structure extraction using simple Hebbian networks
We present a class of neural network algorithms based on simple Hebbian learning which allows the finding of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing principal component analysis (PCA). In this paper, this is extended to exploratory projection pursuit (EPP), which is a statistical method for investigating structure in high-dimensional data sets. As opposed to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on non-linear PCA. Received: 30 May 1994 / Accepted in revised form: 18 November 1994
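A minimal sketch of the negative-feedback Hebbian architecture described above, with illustrative dimensions, data and learning rate: activation is fed back subtractively to the inputs, and the simple Hebbian update then extracts the principal subspace without explicit weight normalisation, decay or clipping.

```python
import numpy as np

rng = np.random.default_rng(3)
n_inputs, n_outputs, eta = 10, 3, 0.01

# Correlated toy data: a few strong directions plus isotropic noise
basis = rng.standard_normal((3, n_inputs)) / 3.0
data = rng.standard_normal((5000, 3)) @ basis + 0.1 * rng.standard_normal((5000, n_inputs))

W = 0.01 * rng.standard_normal((n_outputs, n_inputs))
for x in data:
    y = W @ x                  # feedforward activation
    e = x - W.T @ y            # negative feedback: residual returned to the input layer
    W += eta * np.outer(y, e)  # simple Hebbian update on activation x residual
    # For exploratory projection pursuit, a nonlinear function of y (e.g. y**3) would
    # replace y in the update, steering the projections toward non-Gaussian structure.

# The learned rows should approximately span the leading principal subspace
print("row norms of W (no explicit normalisation used):", np.linalg.norm(W, axis=1).round(2))
```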

7.
The state of the art in computer modelling of neural networks with associative memory is reviewed. The available experimental data are considered on learning and memory of small neural systems, on isolated synapses and on the molecular level. Computer simulations demonstrate that realistic models of neural ensembles exhibit properties which can be interpreted as image recognition, categorization, learning, prototype forming, etc. A bilayer model of an associative neural network is proposed. One layer corresponds to the short-term memory, the other to the long-term memory. Patterns are stored in terms of the synaptic strength matrix. We have studied the relaxational dynamics of neuron firing and suppression within the short-term memory layer under the influence of the long-term memory layer. The interaction among the layers has been found to create a number of novel stable states which are not the learning patterns. These synthetic patterns may consist of elements belonging to different non-intersecting learning patterns. Within the framework of a hypothesis of selective and definite coding of images in the brain, one can interpret the observed effect as an "idea generating" process.
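The sketch below illustrates only the single-layer core of such a model: Hebbian storage of patterns in a synaptic strength matrix and relaxation to stable states, including mixture states built from several learning patterns. The bilayer short-term/long-term interaction of the proposed model is not reproduced, and all sizes and pattern counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 10

# Bipolar (+1/-1) learning patterns stored in the synaptic strength matrix (Hebb rule)
patterns = rng.choice([-1.0, 1.0], size=(P, N))
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def recall(cue, steps=10):
    """Deterministic relaxation toward a stored (or spurious) stable state."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(J @ s)
        s[s == 0] = 1.0
    return s

# A cue mixing elements of several patterns can relax to a 'synthetic' mixture state,
# loosely analogous to the novel stable states described in the abstract.
cue = np.sign(patterns[0] + patterns[1] + patterns[2])
final = recall(cue)
overlaps = patterns @ final / N
print("overlaps with the stored patterns:", overlaps.round(2))
```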

8.
MOTIVATION: Apoptosis has drawn the attention of researchers because of its importance in treating some diseases through finding a proper way to block or slow down the apoptosis process. Having understood that caspase cleavage is the key to apoptosis, we find that novel methods or algorithms are essential for studying the specificity of caspase cleavage activity, which in turn supports effective drug design. As bio-basis function neural networks have proven to outperform some conventional neural learning algorithms, there is a motivation, in this study, to investigate the application of bio-basis function neural networks for the prediction of caspase cleavage sites. RESULTS: Thirteen protein sequences with experimentally determined caspase cleavage sites were downloaded from NCBI. Bayesian bio-basis function neural networks are investigated and comparisons with single-layer perceptrons, multilayer perceptrons, the original bio-basis function neural networks and support vector machines are given. The impact of the sliding window size used to generate sub-sequences for modelling on prediction accuracy is studied. The results show that the Bayesian bio-basis function neural network with two Gaussian distributions for model parameters (weights) performed the best, and the highest prediction accuracy is 97.15 ± 1.13%. AVAILABILITY: The package of Bayesian bio-basis function neural networks can be obtained by request to the author.
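To make the sliding-window idea concrete, here is a simplified sketch: fixed-length sub-sequences are generated around each residue and scored against a few 'basis' sub-sequences. The real bio-basis function scores alignments with an amino-acid substitution matrix and uses Bayesian-trained weights; the identity-match score, the toy sequence and the basis strings below are placeholders.

```python
import numpy as np

def sliding_windows(sequence, window):
    """Generate fixed-length sub-sequences centred on each candidate cleavage position."""
    half = window // 2
    padded = "X" * half + sequence + "X" * half
    return [padded[i:i + window] for i in range(len(sequence))]

def bio_basis_similarity(sub, basis):
    """Simplified bio-basis score: fraction of matching residues.
    (The actual method scores the alignment with an amino-acid substitution matrix.)"""
    return sum(a == b for a, b in zip(sub, basis)) / len(basis)

sequence = "MSAEVDGTDALKQRDEVDSKGF"           # toy sequence, not a real caspase substrate
basis_sequences = ["DEVDGIDE", "LETDSGVD"]   # toy 'basis' sub-sequences
for sub in sliding_windows(sequence, window=8)[:5]:
    features = [bio_basis_similarity(sub, basis) for basis in basis_sequences]
    print(sub, np.round(features, 2))
```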

9.
Accurate prediction of species distributions based on sampling and environmental data is essential for further scientific analysis, such as stock assessment, detection of abundance fluctuation due to climate change or overexploitation, and to underpin management and legislation processes. The evolution of computer science and statistics has allowed the development of sophisticated and well-established modelling techniques as well as a variety of promising innovative approaches for modelling species distribution. The appropriate selection of modelling approach is crucial to the quality of predictions about species distribution. In this study, modelling techniques based on different approaches are compared and evaluated in relation to their predictive performance, utilizing fish density acoustic data. Generalized additive models and mixed models amongst the regression models, associative neural networks (ANNs) and an artificial neural network ensemble amongst the artificial neural networks, and ordinary kriging amongst the geostatistical techniques are applied and evaluated. A verification dataset is used for estimating the predictive performance of these models. A combination of outputs from the different models is applied for prediction optimization to exploit the ability of each model to explain certain aspects of variation in species acoustic density. Neural networks and especially ANNs appear to provide more accurate results in fitting the training dataset, while generalized additive models appear more flexible in predicting the verification dataset. The efficiency of each technique in relation to certain sampling and output strategies is also discussed.

10.
A new approach for nonlinear system identification and control based on modular neural networks (MNN) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems. This decomposition is obtained using a partitioning algorithm. Each local nonlinear model is associated with a nonlinear controller. These are also implemented by neural networks. The switching between the neural controllers is done by a dynamical switcher, also implemented by neural networks, that tracks the different operating points. The proposed multiple modelling and control strategy has been successfully tested on a simulated laboratory-scale liquid-level system.

11.
On-center off-surround shunting neural networks are often applied as models for content-addressable memory (CAM), the equilibria being the stored memories. One important demand of biologically plausible CAMs is that they function under a broad range of parameters, since several parameters vary due to postnatal maturation or learning. Ellias, Cohen and Grossberg have put much effort into showing the stability properties of several configurations of on-center off-surround shunting neural networks. In this article we present numerical bifurcation analysis of distance-dependent on-center off-surround shunting neural networks with fixed external input. We varied four parameters that may be subject to postnatal maturation: the range of both excitatory and inhibitory connections and the strength of both inhibitory and excitatory connections. These analyses show that fold bifurcations occur in the equilibrium behavior of the network by variation of all four parameters. The most important result is that the number of activation peaks in the equilibrium behavior varies from one to many if the range of inhibitory connections is decreased. Moreover, under a broad range of the parameters the stability of the network is maintained. The examined network is implemented in an ART network, Exact ART, where it functions as the classification layer F2. The stability of the ART network with the F2-field in different dynamic regimes is maintained and the behavior is functional in Exact ART. Through a bifurcation the learning behavior of Exact ART may even change from forming local representations to forming distributed representations. Received: 23 January 1996 / Accepted in revised form: 1 July 1996
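A simplified numerical sketch of a distance-dependent on-center off-surround shunting network with fixed external input, integrated to equilibrium and then scanned for activation peaks. The kernel widths, signal function and shunting constants below are illustrative stand-ins for the parameters varied in the paper, and the equation is a reduced form of the general shunting dynamics.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 60
A, B, D = 1.0, 1.0, 0.5                       # decay, upper bound, lower bound (illustrative)

def gaussian_kernel(sigma):
    d = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    k = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return k / k.sum(axis=1, keepdims=True)

K_exc = gaussian_kernel(sigma=1.5)            # narrow on-center excitation
K_inh = gaussian_kernel(sigma=8.0)            # broad off-surround inhibition

def f(x):
    return np.maximum(x, 0.0) ** 2            # faster-than-linear signal function

I = 0.5 + 0.5 * rng.random(N)                 # fixed external input
x = np.zeros(N)
dt = 0.01
for _ in range(20_000):
    exc = I + K_exc @ f(x)
    inh = K_inh @ f(x)
    x += dt * (-A * x + (B - x) * exc - (x + D) * inh)

peaks = np.sum((x > np.roll(x, 1)) & (x > np.roll(x, -1)) & (x > 0.1))
print(f"number of activation peaks at equilibrium: {peaks}")
```

Widening or narrowing the inhibitory kernel in this sketch changes the number of equilibrium peaks, which is the qualitative effect the bifurcation analysis above examines systematically.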

12.
While learning and development are well characterized in feedforward networks, these features are more difficult to analyze in recurrent networks due to the increased complexity of dual dynamics – the rapid dynamics arising from activation states and the slow dynamics arising from learning or developmental plasticity. We present analytical and numerical results that consider dual dynamics in a recurrent network undergoing Hebbian learning with either constant weight decay or weight normalization. Starting from initially random connections, the recurrent network develops symmetric or near-symmetric connections through Hebbian learning. Reciprocity and modularity arise naturally through correlations in the activation states. Additionally, weight normalization may be better than constant weight decay for the development of multiple attractor states that allow a diverse representation of the inputs. These results suggest a natural mechanism by which synaptic plasticity in recurrent networks such as cortical and brainstem premotor circuits could enhance neural computation and the generation of motor programs. Received: 27 April 1998 / Accepted in revised form: 16 March 1999
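A small sketch of the dual dynamics discussed above, with illustrative sizes and rates: the activation state relaxes quickly for each input, a slow Hebbian update then modifies the recurrent weights, and row-wise weight normalisation (constant decay would be the alternative) keeps the weights bounded while near-symmetric connections develop.

```python
import numpy as np

rng = np.random.default_rng(6)
N, eta, dt = 50, 0.02, 0.1

W = 0.1 * rng.standard_normal((N, N))
np.fill_diagonal(W, 0.0)

def symmetry(M):
    """~1.0 for a symmetric off-diagonal structure, ~0 for a random one."""
    off = M - np.diag(np.diag(M))
    return (off * off.T).sum() / (off ** 2).sum()

print(f"initial symmetry index: {symmetry(W):.2f}")

for epoch in range(200):
    inp = rng.standard_normal(N)              # a random input pattern per epoch
    x = np.zeros(N)
    for _ in range(100):                      # fast dynamics: relax the activation state
        x += dt * (-x + np.tanh(W @ x + inp))
    W += eta * np.outer(x, x)                 # slow dynamics: Hebbian update
    np.fill_diagonal(W, 0.0)
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # weight normalisation
    # (constant weight decay, W *= (1 - gamma), is the other option discussed above)

print(f"symmetry index after learning: {symmetry(W):.2f}")
```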

13.
Rich clubs arise when nodes that are ‘rich’ in connections also form an elite, densely connected ‘club’. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour.
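For reference, the rich-club coefficient itself is straightforward to compute; the sketch below evaluates it on a toy network with a planted club of hubs. A full analysis would additionally normalise against degree-preserving random surrogates, which is omitted here, and the toy network parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def rich_club_coefficient(A, k):
    """phi(k): edge density of the subgraph induced by nodes whose degree exceeds k."""
    deg = A.sum(axis=0)
    rich = np.where(deg > k)[0]
    n = len(rich)
    if n < 2:
        return np.nan
    edges = A[np.ix_(rich, rich)].sum() / 2
    return edges / (n * (n - 1) / 2)

# Toy network: a densely interconnected 'club' of hubs embedded in a sparse background
N, n_hubs = 100, 10
A = (rng.random((N, N)) < 0.05).astype(float)
A[:n_hubs, :n_hubs] = (rng.random((n_hubs, n_hubs)) < 0.8).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # undirected, no self-loops

for k in (5, 10, 15):
    print(f"phi(k > {k}) = {rich_club_coefficient(A, k):.2f}")
```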

14.
This study investigates the contributions of network topology features to the dynamic behavior of hierarchically organized excitable networks. Representatives of different types of hierarchical networks as well as two biological neural networks are explored with a three-state model of node activation for systematically varying levels of random background network stimulation. The results demonstrate that two principal topological aspects of hierarchical networks, node centrality and network modularity, correlate with the network activity patterns at different levels of spontaneous network activation. The approach also shows that the dynamic behavior of the cerebral cortical systems network in the cat is dominated by the network's modular organization, while the activation behavior of the cellular neuronal network of Caenorhabditis elegans is strongly influenced by hub nodes. These findings indicate the interaction of multiple topological features and dynamic states in the function of complex biological networks.
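A compact sketch of a three-state (susceptible, active, refractory) activation model on a random network with spontaneous background stimulation, in the spirit of the dynamics described above. The network and rate parameters are illustrative rather than those of the cat or C. elegans networks studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
N, p_edge = 200, 0.05
p_spont, p_transmit = 0.001, 0.1              # background stimulation and spreading probability

A = np.triu(rng.random((N, N)) < p_edge, 1)
A = (A | A.T).astype(float)                   # undirected adjacency, no self-loops

SUS, ACT, REF = 0, 1, 2                       # susceptible -> active -> refractory -> susceptible
state = np.zeros(N, dtype=int)
activity = []
for t in range(2000):
    active = state == ACT
    # susceptible nodes activate spontaneously or via an active neighbour
    drive = (A @ active.astype(float)) > 0
    fire = (state == SUS) & ((rng.random(N) < p_spont) | (drive & (rng.random(N) < p_transmit)))
    state[state == REF] = SUS                 # refractory nodes recover
    state[active] = REF                       # active nodes become refractory
    state[fire] = ACT
    activity.append(fire.mean())

print(f"mean fraction of newly activated nodes per step: {np.mean(activity):.4f}")
```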

15.
Characterizing metastable neural dynamics in finite-size spiking networks remains a daunting challenge. We propose to address this challenge in the recently introduced replica-mean-field (RMF) limit. In this limit, networks are made of infinitely many replicas of the finite network of interest, but with randomized interactions across replicas. Such randomization renders certain excitatory networks fully tractable at the cost of neglecting activity correlations, but with explicit dependence on the finite size of the neural constituents. However, metastable dynamics typically unfold in networks with mixed inhibition and excitation. Here, we extend the RMF computational framework to point-process-based neural network models with exponential stochastic intensities, allowing for mixed excitation and inhibition. Within this setting, we show that metastable finite-size networks admit multistable RMF limits, which are fully characterized by stationary firing rates. Technically, these stationary rates are determined as the solutions of a set of delayed differential equations under certain regularity conditions that any physical solutions shall satisfy. We solve this original problem by combining the resolvent formalism and singular-perturbation theory. Importantly, we find that these rates specify probabilistic pseudo-equilibria which accurately capture the neural variability observed in the original finite-size network. We also discuss the emergence of metastability as a stochastic bifurcation, which can be interpreted as a static phase transition in the RMF limits. In turn, we expect to leverage the static picture of RMF limits to infer purely dynamical features of metastable finite-size networks, such as the transition rates between pseudo-equilibria.

16.
Dynamic Bayesian networks (DBNs) are considered a promising model for inferring gene networks from time series microarray data. DBNs have overtaken Bayesian networks (BNs) because they can model cyclic regulation using time-delay information. In this paper, a general framework for DBN modelling is outlined. Both discrete and continuous DBN models are constructed systematically and criteria for learning network structures are introduced from a Bayesian statistical viewpoint. This paper reviews the applications of DBNs over the past years. Real data applications for Saccharomyces cerevisiae time series gene expression data are also shown.

17.
Journal of Physiology, 2009, 103(6): 342-347
The purpose of this study is to investigate information processing in the primary somatosensory system with the help of oscillatory network modelling. Specifically, we consider interactions in the oscillatory 600 Hz activity between the thalamus and the cortical Brodmann areas 3b and 1. This type of cortical activity occurs after electrical stimulation of peripheral nerves such as the median nerve. Our measurements consist of simultaneous 31-channel MEG and 32-channel EEG recordings and individual 3D MRI data. We perform source localization by means of a multi-dipole model. The dipole activation time courses are then modelled by a set of coupled oscillators, described by linear second-order ordinary delay differential equations (DDEs). In particular, a new model for the thalamic activity is included in the oscillatory network. The parameters of the DDE system are successfully fitted to the data by a nonlinear evolutionary optimization method. To activate the oscillatory network, an individual input function is used, based on measurements of the propagated stimulation signal at the biceps. A significant feedback from the cortex to the thalamus could be detected by comparing the network modelling with and without feedback connections. Our finding in humans is supported by earlier animal studies. We conclude that this type of rhythmic brain activity can be modelled by oscillatory networks in order to disentangle feed-forward and feedback information transfer.
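To indicate what a set of coupled oscillators described by linear second-order delay differential equations can look like numerically, here is a minimal two-oscillator sketch with a delayed coupling term and a transient input, integrated by a simple fixed-step scheme. The frequencies, delays, coupling strengths and input shape are illustrative placeholders, not fitted values from the study.

```python
import numpy as np

# Two coupled damped oscillators with delayed interaction (semi-implicit Euler integration).
dt, T = 1e-4, 0.05                             # 0.1 ms steps, 50 ms of simulated time
steps = int(T / dt)
omega = 2 * np.pi * 600.0                      # ~600 Hz intrinsic frequency
zeta = 0.05                                    # damping ratio
tau = 0.003                                    # 3 ms coupling delay
delay_steps = int(tau / dt)
C = np.array([[0.0, 2e5],                      # coupling strengths (illustrative)
              [2e5, 0.0]])

x = np.zeros((steps, 2))                       # oscillator amplitudes (dipole activations)
v = np.zeros(2)                                # velocities

def stimulus(t):
    """Brief transient input to the first oscillator, mimicking an afferent volley."""
    return np.array([np.exp(-((t - 0.005) / 0.001) ** 2) * 1e6, 0.0])

for i in range(1, steps):
    t = i * dt
    delayed = x[i - 1 - delay_steps] if i - 1 - delay_steps >= 0 else np.zeros(2)
    accel = -2 * zeta * omega * v - omega ** 2 * x[i - 1] + C @ delayed + stimulus(t)
    v += dt * accel
    x[i] = x[i - 1] + dt * v

print("peak amplitude of each oscillator:", np.abs(x).max(axis=0).round(3))
```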

18.
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
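The analysis above concerns full balanced networks; as a reminder of the elementary building block, the sketch below implements a standard pair-based STDP rule with exponential traces for a single synapse driven by independent Poisson pre- and postsynaptic spike trains. All amplitudes, time constants and rates are illustrative, and this is not the paper's network-level theory.

```python
import numpy as np

rng = np.random.default_rng(9)

# Pair-based STDP applied to Poisson pre/post spike trains (illustrative parameters).
dt, T = 1e-3, 50.0                             # 1 ms bins, 50 s of simulated time
steps = int(T / dt)
A_plus, A_minus = 0.005, 0.00525               # potentiation / depression amplitudes
tau_plus = tau_minus = 0.020                   # 20 ms STDP time constants
rate_pre, rate_post = 10.0, 10.0               # firing rates in Hz

w = 0.5
x_pre = x_post = 0.0                           # exponentially decaying spike traces
for _ in range(steps):
    pre = rng.random() < rate_pre * dt
    post = rng.random() < rate_post * dt
    x_pre *= np.exp(-dt / tau_plus)
    x_post *= np.exp(-dt / tau_minus)
    if pre:
        x_pre += 1.0
        w -= A_minus * x_post                  # pre after post -> depression
    if post:
        x_post += 1.0
        w += A_plus * x_pre                    # post after pre -> potentiation
    w = min(max(w, 0.0), 1.0)                  # hard weight bounds

print(f"synaptic weight after {T:.0f} s of uncorrelated firing: {w:.3f}")
```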

19.
In this paper, local synchronization is considered for coupled delayed neural networks with discontinuous activation functions. Under the framework of Filippov solution and in the sense of generalized derivative, a novel sufficient condition is obtained to ensure the synchronization based on the Lyapunov exponent and the detailed analysis in Danca (Int J Bifurcat Chaos 12(8):1813–1826, 2002; Chaos Solitons Fractals 22:605–612, 2004). Simulation results are given to illustrate the theoretical results.

20.
D Koruga. BioSystems, 1990, 23(4): 297-303
We describe a new approach to neural network research based on molecular networks within the neuron. Using molecular networks as a sub-neuronal factor of neural networks is a more realistic approach than today's concepts in this new computer technology field, because the resulting artificial neural activity profile is similar to the profile of the action potential in the natural neuron. The molecular networks approach can be used in three technologies: neurocomputers, neurochips and molecular chips. This means that molecular networks open new fields of science and engineering called molecular-like machines and molecular machines.
