Similar Literature
20 similar documents found (search time: 31 ms)
1.
In biological systems, instead of actual encoders at different joints, proprioception signals are acquired through distributed receptive fields. In robotics, a single and accurate sensor output per link (encoder) is commonly used to track the position and the velocity. Interfacing bio-inspired control systems built on spiking neural networks emulating the cerebellum with conventional robots is not a straightforward task. Therefore, it is necessary to adapt this one-dimensional measure (encoder output) into a multidimensional space (inputs for a spiking neural network) to connect, for instance, the spiking cerebellar architecture; i.e., a translation from an analog space into a distributed population coding in terms of spikes. This paper analyzes how evolved receptive fields (optimized towards information transmission) can efficiently generate a sensorimotor representation that facilitates its discrimination from other "sensorimotor states". This can be seen as an abstraction of the Cuneate Nucleus (CN) functionality in a robot-arm scenario. We model the CN as a spiking neuron population coding in time according to the response of mechanoreceptors during a multi-joint movement in a robot joint space. An encoding scheme that takes into account the relative spiking time of the signals propagating from peripheral nerve fibers to second-order somatosensory neurons is proposed. Due to the enormous number of possible encodings, we have applied an evolutionary algorithm to evolve the sensory receptive-field representation from a random to an optimized encoding. Following the nature-inspired analogy, the evolved configurations have been shown to outperform simple hand-tuned configurations and other homogenized configurations based on the solution provided by the optimization engine (evolutionary algorithm). We used artificial evolutionary engines as the optimization tool to cope with the nonlinear responses of the receptive fields.
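
To make the analog-to-population translation described above concrete, here is a minimal, generic sketch of Gaussian receptive-field encoding of a single joint reading into relative spike times; the centres, the width `sigma`, and the latency mapping are illustrative assumptions, not the evolved encoding reported in the paper.

```python
# Hypothetical sketch: encode one analog joint reading into a population of
# relative spike times via Gaussian receptive fields (not the paper's evolved code).
import numpy as np

def encode_to_spike_times(value, centers, sigma, t_max=10.0):
    """Stronger receptive-field activation -> earlier spike (latency in ms)."""
    activation = np.exp(-((value - centers) ** 2) / (2 * sigma ** 2))
    return t_max * (1.0 - activation)

# Receptive-field centres tiling an assumed joint range of [-pi, pi] rad.
centers = np.linspace(-np.pi, np.pi, 8)
print(encode_to_spike_times(0.3, centers, sigma=0.5))
```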

2.
Dynamic recurrent neural networks composed of units with continuous activation functions provide a powerful tool for simulating a wide range of behaviors, since the requisite interconnections can be readily derived by gradient descent methods. However, it is not clear whether more realistic integrate-and-fire cells with comparable connection weights would perform the same functions. We therefore investigated methods to convert dynamic recurrent neural networks of continuous units into networks with integrate-and-fire cells. The transforms were tested on two recurrent networks derived by backpropagation. The first simulates a short-term memory task with units that mimic neural activity observed in cortex of monkeys performing instructed delay tasks. The network utilizes recurrent connections to generate sustained activity that codes the remembered value of a transient cue. The second network simulates patterns of neural activity observed in monkeys performing a step-tracking task with flexion/extension wrist movements. This more complicated network provides a working model of the interactions between multiple spinal and supraspinal centers controlling motoneurons. Our conversion algorithm replaced each continuous unit with multiple integrate-and-fire cells that interact through delayed "synaptic potentials". Successful transformation depends on obtaining an appropriate fit between the activation function of the continuous units and the input-output relation of the spiking cells. This fit can be achieved by adapting the parameters of the synaptic potentials to replicate the input-output behavior of a standard sigmoidal activation function (shown for the short-term memory network). Alternatively, a customized activation function can be derived from the input-output relation of the spiking cells for a chosen set of parameters (demonstrated for the wrist flexion/extension network). In both cases the resulting networks of spiking cells exhibited activity that replicated the activity of corresponding continuous units. This confirms that the network solutions obtained through backpropagation apply to spiking networks and provides a useful method for deriving recurrent spiking networks performing a wide range of functions.
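
As a rough illustration of the fitting step mentioned above (matching a sigmoidal activation to the input-output relation of spiking cells), the sketch below fits a sigmoid to the analytic f-I curve of a leaky integrate-and-fire neuron; the `lif_rate` and `sigmoid` helpers and all parameter values are assumptions for illustration, not the authors' conversion code.

```python
# Hypothetical sketch: fit a sigmoidal activation to a LIF neuron's f-I curve,
# the kind of fit the conversion algorithm relies on (parameters are invented).
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(i_const, tau_m=20e-3, v_th=1.0, v_reset=0.0, t_ref=2e-3):
    """Analytic steady-state firing rate (Hz) of a leaky integrate-and-fire
    neuron driven by a constant input current i_const (arbitrary units)."""
    rates = np.zeros_like(i_const)
    supra = i_const > v_th                          # only supra-threshold inputs fire
    isi = t_ref + tau_m * np.log((i_const[supra] - v_reset) / (i_const[supra] - v_th))
    rates[supra] = 1.0 / isi
    return rates

def sigmoid(x, gain, x0, r_max):
    """Standard sigmoidal activation used by the continuous units."""
    return r_max / (1.0 + np.exp(-gain * (x - x0)))

currents = np.linspace(0.0, 4.0, 200)
rates = lif_rate(currents)
params, _ = curve_fit(sigmoid, currents, rates, p0=[2.0, 1.5, 300.0])
print("fitted gain, midpoint, max rate:", params)
```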

3.
Heterogeneity of firing rate statistics is known to have severe consequences on neural coding. Recent experimental recordings in weakly electric fish indicate that the width of the distribution of trial- and time-averaged firing rates of superficial pyramidal cells in the electrosensory lateral line lobe (ELL) depends on the stimulus, and also that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes in firing rate heterogeneity observed in in vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effect across various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low and negatively correlated with excitability when firing rate heterogeneity is high. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex, stimulus-dependent fashion to control neural heterogeneity, and we discuss how this can ultimately shape population codes.

4.
It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information.
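
A toy illustration of the ensemble idea above, reading a rapidly varying probability off the pooled firing of many neurons; the gain factor, window length, and synthetic data are invented for the sketch and are not the published model.

```python
# Hypothetical sketch: track a time-varying Bernoulli probability from the
# pooled activity of an ensemble (all numbers here are invented).
import numpy as np

rng = np.random.default_rng(0)
T, n_ensemble, window, gain = 1000, 100, 20, 0.2
p_true = 0.5 + 0.4 * np.sin(np.linspace(0, 4 * np.pi, T))        # varying probability
spikes = rng.random((T, n_ensemble)) < gain * p_true[:, None]    # rate proportional to p

pooled = spikes.mean(axis=1)                                     # ensemble firing fraction
p_est = np.convolve(pooled, np.ones(window) / window, mode="same") / gain
print("mean absolute tracking error:", np.mean(np.abs(p_est - p_true)))
```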

5.
We investigate the phenomenon of epileptiform activity using a discrete model of cortical neural networks. Our model is reduced to the elementary features of neurons and assumes simplified dynamics of action potentials and postsynaptic potentials. The discrete model provides a comparatively high simulation speed, which allows the rendering of phase diagrams and simulations of large neural networks in reasonable time. Further, the reduction to the basic features of neurons provides insight into the essentials of a possible mechanism of epilepsy. Our computer simulations suggest that the detailed dynamics of postsynaptic and action potentials are not indispensable for obtaining epileptiform behavior at the system level. The simulation results of autonomously evolving networks exhibit a regime in which the network dynamics spontaneously switch between fluctuating and oscillating behavior and produce isolated network spikes without external stimulation. Inhibitory neurons have been found to play an important part in the synchronization of neural firing: an increased number of synapses established by inhibitory neurons onto other neurons induces a transition to the spiking regime. A decreased frequency accompanying the hypersynchronous population activity occurred only with slow inhibitory postsynaptic potentials.

6.
Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate link travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first-order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers; as the number of input samples increases, the cluster centers are modified and the membership functions are updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria, mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE), are used to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including multiple linear regression (MLR), the instantaneous model (IM), the linear model (LM), a neural network (NN), and cumulative plots (CP).
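
The sketch below strings together the ingredients named in this abstract (K-means cluster centres, Gaussian memberships, first-order Takagi-Sugeno consequents); the toy traffic data, the shared membership width, and the batch least-squares fit standing in for the weighted recursive least-squares estimator are all assumptions made for illustration.

```python
# Illustrative sketch of a first-order Takagi-Sugeno fuzzy inference system with
# Gaussian memberships around K-means centres (toy data; not the paper's EFNN).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((500, 3))                                   # toy volume, occupancy, speed
y = X @ np.array([2.0, 1.5, -1.0]) + 0.1 * rng.standard_normal(500)   # toy travel times

k, sigma = 4, 0.3                                          # number of rules, membership width
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

def memberships(x):
    """Normalised Gaussian membership of sample x to each cluster centre."""
    mu = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * sigma ** 2))
    return mu / mu.sum()

def ts_predict(x, theta):
    """Membership-weighted average of linear consequents theta[j] = [bias, w1, w2, w3]."""
    return memberships(x) @ (theta @ np.concatenate(([1.0], x)))

# Batch weighted least squares on the consequent parameters (a stand-in for the
# weighted recursive least-squares estimator used in the paper).
Phi = np.array([np.outer(memberships(x), np.concatenate(([1.0], x))).ravel() for x in X])
theta = np.linalg.lstsq(Phi, y, rcond=None)[0].reshape(k, 4)
preds = np.array([ts_predict(x, theta) for x in X])
print("MAE:", np.mean(np.abs(preds - y)))
```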

7.
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations that are noisy or that occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problems is formally known as partially observable reinforcement learning (PORL); it generalizes reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous, high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing, which can only be discovered through such a top-down approach.
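
For reference, the quantity being approximated here, the free energy of a binary restricted Boltzmann machine, has a simple closed form; the sketch below uses generic random weights and biases rather than anything learned in the paper.

```python
# Free energy of a binary RBM, F(v) = -b.v - sum_j softplus(c_j + W[:, j].v);
# the weights and the observation below are generic placeholders.
import numpy as np

def rbm_free_energy(v, W, b, c):
    """Free energy of visible configuration v for a binary-unit RBM."""
    return -b @ v - np.sum(np.logaddexp(0.0, c + v @ W))

rng = np.random.default_rng(1)
n_visible, n_hidden = 8, 4
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
b, c = np.zeros(n_visible), np.zeros(n_hidden)             # visible / hidden biases
v = rng.integers(0, 2, n_visible).astype(float)            # one observation-action code
print("free energy:", rbm_free_energy(v, W, b, c))
```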

8.
9.
In this paper, a method for automatic construction of a fuzzy rule-based system from numerical data using the Incremental Learning Fuzzy Neural (ILFN) network and a genetic algorithm is presented. The ILFN network was developed for pattern classification applications. The ILFN network, which employs fuzzy sets and neural network theory, is equipped with a fast, one-pass, online, incremental learning algorithm. Once trained, the ILFN network stores numerical knowledge in its hidden units, which can then be directly interpreted as if-then rule bases. However, the rules extracted from the ILFN network are not in an optimized fuzzy linguistic form. In this paper, a knowledge base for a fuzzy expert system is extracted from the hidden units of the ILFN classifier. A genetic algorithm is then invoked, in an iterative manner, to reduce the number of rules and to select only the discriminative features of the input patterns needed to provide a fuzzy rule-based system. Three computer simulations were performed, using simulated 2-D three-class data, the well-known Fisher's Iris data set, and the Wisconsin breast cancer data set. The fuzzy rule-based system derived from the proposed method achieved 100% and 97.33% correct classification on the 75 training patterns and the 75 test patterns, respectively. For the Wisconsin breast cancer data set, using 400 patterns for training and 299 patterns for testing, the derived fuzzy rule-based system achieved 99.5% and 98.33% correct classification on the training set and the test set, respectively.

10.
Recent experimental reports have suggested that cortical networks can operate in regimes where sensory information is encoded by relatively small populations of spikes and their precise relative timing. Combined with the discovery of spike-timing-dependent plasticity, these findings have sparked growing interest in the capabilities of neurons to encode and decode spike-timing-based neural representations. To address these questions, a novel family of methodologically diverse supervised learning algorithms for spiking neuron models has been developed. These models have demonstrated the high capacity of simple neural architectures to operate beyond the regime of well-established independent rate codes and to exploit the theoretical advantages of spike timing as an additional coding dimension.

11.
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: we assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single-unit properties as widely used population code models (e.g., tuning curves, Poisson-distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
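
A minimal sketch of the core idea, a neuron fires only if its spike reduces the error of a leaky linear readout; the greedy one-spike-per-step rule, the random decoding weights, and the sine-wave signal are illustrative assumptions, not the derived network from the paper.

```python
# Hypothetical sketch: greedy "spike only if it improves the representation"
# rule tracking a 1-D signal with a leaky linear decoder (parameters invented).
import numpy as np

rng = np.random.default_rng(2)
N, T, dt, lam = 50, 2000, 1e-3, 10.0
gamma = rng.choice([-0.1, 0.1], size=N)            # decoding weight of each cell
x = np.sin(2 * np.pi * np.arange(T) * dt)          # signal to be represented
x_hat, readout = 0.0, np.zeros(T)
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    x_hat *= 1.0 - lam * dt                        # leaky decay of the readout
    V = gamma * (x[t] - x_hat)                     # each cell's projected coding error
    eligible = V - gamma ** 2 / 2.0                # a spike lowers the squared error iff > 0
    i = int(np.argmax(eligible))
    if eligible[i] > 0.0:                          # greedy: the most eligible cell fires
        spikes[t, i] = True
        x_hat += gamma[i]                          # the spike updates the readout
    readout[t] = x_hat

print("mean |x - x_hat|:", np.mean(np.abs(x - readout)))
print("total spikes:", int(spikes.sum()))
```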

12.
Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and harder to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are, however, some problems that have to be solved before such networks can be implemented on VLSI chips. First, an approximation function needs to be developed, because CMOS neural networks have an activation function different from any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by the discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 μm technology. Using adequate approximation functions solved the activation-function problem; with this approach, the trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, the errors increased by an order of magnitude. However, even with the enlarged errors, the results obtained from the neural network hardware implementations were superior to those obtained with the fuzzy system approach.
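
To make the quantization effect concrete, the toy sketch below trains the output weights of a small random-hidden-layer approximator of a control surface and then snaps every weight to a uniform grid; the surface, the network, and the grid are invented for illustration and are unrelated to the actual 1.5 μm designs.

```python
# Hypothetical sketch: how quantizing trained weights (mimicking discrete W/L
# transistor geometries) degrades a learned control surface.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (2000, 2))
z = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])      # toy control surface

# Random-hidden-layer network; only the output weights are trained (least squares).
H = np.tanh(X @ rng.standard_normal((2, 40)) + rng.standard_normal(40))
w = np.linalg.lstsq(H, z, rcond=None)[0]

def quantize(weights, levels):
    """Snap weights to a uniform grid with the given number of levels."""
    lo, hi = weights.min(), weights.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((weights - lo) / step) * step

print("unquantized MSE:", np.mean((H @ w - z) ** 2))
for levels in (256, 16, 8):
    err = np.mean((H @ quantize(w, levels) - z) ** 2)
    print(f"{levels:3d} levels   MSE: {err:.5f}")
```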

13.
Coherent neural spiking and local field potentials are believed to be signatures of the binding and transfer of information in the brain. Coherent activity has now been measured experimentally in many regions of mammalian cortex. Recently, experimental evidence has been presented suggesting that neural information is encoded and transferred in packets, i.e., in stereotypical, correlated spiking patterns of neural activity. Due to their relevance to coherent spiking, synfire chains are one of the main theoretical constructs that have been invoked to describe coherent spiking and information-transfer phenomena. However, it has long been known that synchronous activity in feedforward networks asymptotically either approaches an attractor with fixed waveform and amplitude, or fails to propagate. This has limited the classical synfire chain's ability to explain graded neuronal responses. Recently, we have shown that pulse-gated synfire chains are capable of propagating graded information coded in mean population current or firing rate amplitudes. In particular, we showed that it is possible to use one synfire chain to provide gating pulses and a second, pulse-gated synfire chain to propagate graded information. We called these circuits synfire-gated synfire chains (SGSCs). Here, we present SGSCs in which graded information can rapidly cascade through a neural circuit, and show a correspondence between this type of transfer and a mean-field model in which gating pulses overlap in time. We show that SGSCs are robust in the presence of variability in population size, pulse timing and synaptic strength. Finally, we demonstrate the computational capabilities of SGSC-based information coding by implementing a self-contained, spike-based, modular neural circuit that is triggered by streaming input, processes the input, then makes a decision based on the processed information and shuts itself down.

14.
The problem of rule extraction from neural networks is NP-hard. This work presents a new technique to extract "if-then-else" rules from ensembles of DIMLP neural networks. Rules are extracted in polynomial time with respect to the dimensionality of the problem, the number of examples, and the size of the resulting network. Further, the degree of matching between extracted rules and neural network responses is 100%. Ensembles of DIMLP networks were trained on four data sets in the public domain. Extracted rules were on average significantly more accurate than those extracted from C4.5 decision trees.

15.
Burst firing is a functionally important behavior displayed by neural circuits, playing a primary role in the reliable transmission of electrical signals for neuronal communication. However, with respect to the computational capability of neural networks, most relevant studies are based on the spiking dynamics of individual neurons, while burst firing is seldom considered. In this paper, we carry out a comprehensive study to compare the performance of spiking and bursting dynamics on liquid computing, which is an effective approach for intelligent computation with neural networks. The results show that neural networks with bursting dynamics have much better computational performance than those with spiking dynamics, especially for complex computational tasks. Further analysis demonstrates that the fast firing pattern of bursting dynamics can markedly enhance the efficiency of synaptic integration from presynaptic neurons both temporally and spatially. This indicates that bursting dynamics can significantly enhance the complexity of network activity, implying high efficiency in information processing.

16.
Developing neuronal networks can exhibit spontaneous, synchronous activity called neural bursts, which can be important in the organization of functional neural circuits. Before the network matures, the activity level of a burst can reverberate in repeated rises and falls with periods of hundreds of milliseconds following an initial wave-like propagation of spiking activity, while the burst itself lasts for seconds. To investigate the spatiotemporal structure of these reverberatory bursts, we culture dissociated rat cortical neurons on a high-density multi-electrode array to record the dynamics of neural activity over the growth and maturation of the network. We find that the synchrony of spiking is significantly reduced following the initial wave and that the activity becomes broadly distributed spatially. The synchrony recovers as the system reverberates, until the end of the burst. Using a propagation model, we infer the spreading speed of the spiking activity, which increases as the culture ages. We perform computer simulations of the system using a physiological model of spiking networks in two spatial dimensions and find the parameters that reproduce the observed resynchronization of spiking in the bursts. An analysis of the simulated dynamics suggests that the depletion of synaptic resources causes the resynchronization. The spatial propagation dynamics of the simulations match the observations well over the course of a burst and point to an interplay between synaptic efficacy and noisy neuronal self-activation in producing the morphology of the bursts.
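
One simple, assumed way to infer a spreading speed of the kind mentioned above is to regress first-spike latency at each electrode on its distance from the initiation site; the synthetic numbers below are only for illustration and do not reproduce the authors' propagation model.

```python
# Hypothetical sketch: estimate burst-front propagation speed from a latency-vs-
# distance regression over electrodes (synthetic data, invented numbers).
import numpy as np

rng = np.random.default_rng(4)
true_speed = 40.0                                          # mm/s, synthetic ground truth
distances = rng.uniform(0.0, 2.0, 60)                      # electrode distances (mm)
latencies = distances / true_speed + 0.002 * rng.standard_normal(60)   # first-spike times (s)

slope, intercept = np.polyfit(distances, latencies, 1)     # latency = distance/speed + offset
print(f"estimated speed: {1.0 / slope:.1f} mm/s (true: {true_speed} mm/s)")
```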

17.
Wang Ziyin, Wang Rubin, Fang Ruiyan. Cognitive Neurodynamics, 2015, 9(2): 129-144
This paper aims to assess and compare the effects of inhibitory neurons on the neural energy distribution and on network activity, relative to networks without inhibitory neurons, in order to understand the nature of neural energy distribution and neural energy coding. Under stimulation, synchronous oscillations differ significantly between neural networks with and without inhibitory neurons, and this difference can be quantitatively evaluated by the characteristic energy distribution. In addition, the difference in synchronous oscillation of the neural activity can be quantitatively described by changes in the energy distribution as the network parameters are gradually adjusted. Compared with the traditional method of correlation-coefficient analysis, quantitative indicators based on the characteristics of the neural energy distribution are more effective in reflecting the dynamic features of neural network activity. Meanwhile, this coding method, which takes a global perspective on neural activity, avoids the current shortcomings of neural encoding and decoding theory and the enormous difficulties they entail. Our studies have shown that neural energy coding is a new coding theory with high efficiency and great potential.

18.
As the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require the prior building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. First, we derive constraints under which classes of spiking neural networks lend themselves as substrates for powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have constrained connectivity. We then combine such networks with learning rules for the outputs or the recurrent connections. We show that this allows even difficult benchmark tasks to be learned, such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.

19.
Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks, however, cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such a system, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided that the spiking neural network can take full advantage of the inherent parallelism of hardware. In this work, we introduce a configurable, FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, allowing neuroscientists to put together sophisticated computational experiments with models of their own. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception phenomena.
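
As a software-level illustration of the orientation-selectivity case study (independent of the FPGA platform itself), the sketch below builds a single feed-forward unit from a Gabor-shaped weight kernel and probes it with gratings at several orientations; the kernel parameters and stimulus size are arbitrary assumptions.

```python
# Hypothetical sketch: a feed-forward orientation-selective unit built from a
# Gabor-shaped weight kernel, probed with gratings (toy parameters).
import numpy as np

def gabor(size, theta, wavelength=8.0, sigma=3.0):
    """Gabor weight kernel tuned to orientation theta (radians)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def grating(size, theta, wavelength=8.0):
    """Sinusoidal grating stimulus at orientation theta (radians)."""
    ax = np.arange(size)
    xx, yy = np.meshgrid(ax, ax)
    return np.cos(2 * np.pi * (xx * np.cos(theta) + yy * np.sin(theta)) / wavelength)

size = 32
unit = gabor(size, theta=0.0)                              # unit tuned to 0 degrees
for deg in (0, 45, 90):
    response = np.sum(unit * grating(size, np.deg2rad(deg)))
    print(f"stimulus {deg:2d} deg -> response {response:8.2f}")
```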

20.
