Similar articles
20 similar articles found (search time: 31 ms)
1.
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
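The normalisation itself is simple to state concretely. Below is a minimal NumPy sketch, assuming (as an illustration, not the paper's code) that a sparse layer is represented by a dense weight matrix plus a binary connectivity mask: each unit's afferent weights are divided by its number of incoming contacts.

```python
import numpy as np

def dendritic_normalisation(weights, mask):
    """Divide each unit's afferent weights by its number of afferent
    contacts (in-degree).  `weights` and `mask` are (n_in, n_out) arrays;
    the binary mask marks which connections exist in the sparse network."""
    in_degree = np.maximum(mask.sum(axis=0), 1)  # avoid division by zero
    return (weights * mask) / in_degree          # broadcasts over columns

# Example: 4 inputs feeding 3 units with in-degrees 3, 2 and 1.
mask = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [0, 1, 0],
                 [1, 0, 1]])
w_norm = dendritic_normalisation(np.ones((4, 3)), mask)
```

With unit weights, every column of `w_norm` sums to one, so total afferent drive is equalised across units regardless of how many contacts each receives.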

2.
C Müller, H Beck, D Coulter, S Remy. Neuron, 2012, 75(5): 851-864
The transformation of dendritic excitatory synaptic inputs to axonal action potential output is the fundamental computation performed by all principal neurons. We show that in the hippocampus this transformation is potently controlled by recurrent inhibitory microcircuits. However, excitatory input on highly excitable dendritic branches could resist inhibitory control by generating strong dendritic spikes and trigger precisely timed action potential output. Furthermore, we show that inhibition-sensitive branches can be transformed into inhibition-resistant, strongly spiking branches by intrinsic plasticity of branch excitability. In addition, we demonstrate that the inhibitory control of spatially defined dendritic excitation is strongly regulated by network activity patterns. Our findings suggest that dendritic spikes may serve to transform correlated branch input into reliable and temporally precise output even in the presence of inhibition.

4.
Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.

5.
Secretomotor neurons, immunoreactive for vasoactive intestinal peptide (VIP), are important in controlling chloride secretion in the small intestine. These neurons form functional synapses with other submucosal VIP neurons and transmit via slow excitatory postsynaptic potentials (EPSPs). Thus they form a recurrent network with positive feedback. Intrinsic sensory neurons within the submucosa are also likely to form recurrent networks with positive feedback, provide substantial output to VIP neurons, and receive input from VIP neurons. If positive feedback within recurrent networks is sufficiently large, then neurons in the network respond to even small stimuli by firing at their maximum possible rate, even after the stimulus is removed. However, it is not clear whether such a mechanism operates within the recurrent networks of submucous neurons. We investigated this question by performing computer simulations of realistic models of VIP and intrinsic sensory neuron networks. In the expected range of electrophysiological properties, we found that activity in the VIP neuron network decayed slowly after cessation of a stimulus, indicating that positive feedback is not strong enough to support the uncontrolled firing state. The addition of intrinsic sensory neurons produced a low stable firing rate consistent with the common finding that basal secretory activity is, in part, neurogenic. Changing electrophysiological properties enables these recurrent networks to support the uncontrolled firing state, which may have implications for hypersecretion in the presence of enterotoxins such as cholera toxin.
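The central question, whether positive feedback is strong enough to sustain uncontrolled firing after a stimulus is removed, can be caricatured in a one-population rate model. The sketch below uses illustrative parameters, not the authors' biophysically realistic networks: with feedback gain below one the activity decays once the stimulus ends, while above one the network latches at its maximum rate.

```python
import numpy as np

def simulate(w_fb, stim_amp=0.5, t_stim=50.0, t_total=200.0, tau=5.0, dt=0.1):
    """Mean firing rate of one recurrent population with positive feedback
    of strength w_fb.  Rates saturate at 1 (the maximum possible rate);
    a step stimulus is applied for t_stim time units, then removed."""
    f = lambda x: min(max(x, 0.0), 1.0)          # saturating rate function
    r, rates = 0.0, []
    for step in range(int(t_total / dt)):
        stim = stim_amp if step * dt < t_stim else 0.0
        r += dt / tau * (-r + f(w_fb * r + stim))
        rates.append(r)
    return np.array(rates)

weak = simulate(w_fb=0.8)   # feedback gain < 1: activity decays after stimulus
strong = simulate(w_fb=1.5) # gain > 1: uncontrolled firing outlasts stimulus
```

After the stimulus ends at t = 50, `weak` relaxes back towards zero while `strong` stays pinned at the maximum rate, the uncontrolled firing state discussed above.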

7.
Certain premotor neurons of the oculomotor system fire at a rate proportional to desired eye velocity. Their output is integrated by a network of neurons to supply an eye position command to the motoneurons of the extraocular muscles. This network, known as the neural integrator, is calibrated during infancy and then maintained through development and trauma with remarkable precision. We have modeled this system with a self-organizing neural network that learns to integrate vestibular velocity commands to generate appropriate eye movements. It learns by using current eye movement on any given trial to calculate the amount of retinal image slip, which is used as the error signal. The synaptic weights are then changed using a straightforward algorithm that is independent of the network configuration and does not necessitate backwards propagation of information. Minimization of the error in this fashion causes the network to develop multiple positive feedback loops that enable it to integrate a push-pull signal without integrating the background rate on which it rides. The network is also capable of recovering from various lesions and of generating more complicated signals to simulate induced postsaccadic drift and compensation for eye muscle mechanics.

8.
Feedback loops play an important role in determining the dynamics of biological networks. To study the role of negative feedback loops, this article introduces the notion of distance-to-positive-feedback which, in essence, captures the number of independent negative feedback loops in the network, a property inherent in the network topology. Through a computational study using Boolean networks, it is shown that distance-to-positive-feedback has a strong influence on network dynamics and correlates very well with the number and length of limit cycles in the phase space of the network. To be precise, it is shown that, as the number of independent negative feedback loops increases, the number (length) of limit cycles tends to decrease (increase). These conclusions are consistent with the fact that certain natural biological networks exhibit generally regular behavior and have fewer negative feedback loops than randomized networks with the same number of nodes and same connectivity.
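The relation between feedback loops and limit cycles can be explored directly on small synchronous Boolean networks by enumerating the state space. The sketch below (the three-node rule is illustrative, not taken from the article) finds all attractors by following each trajectory until it revisits a state.

```python
from itertools import product

def limit_cycles(update, n):
    """Enumerate all 2**n states of a synchronous Boolean network and
    return its limit cycles, following each trajectory until it closes a
    new cycle or reaches previously explored territory."""
    cycles, seen = [], set()
    for start in product((0, 1), repeat=n):
        path, state = [], start
        while state not in seen and state not in path:
            path.append(state)
            state = update(state)
        if state not in seen:                    # trajectory closed a new cycle
            cycles.append(path[path.index(state):])
        seen.update(path)
    return cycles

# Three-node ring with a single negative feedback loop: x0 -> x1 -> x2 -| x0
step = lambda s: (1 - s[2], s[0], s[1])
cycle_lengths = sorted(len(c) for c in limit_cycles(step, 3))
```

For this one-negative-loop ring the eight states fall into a 2-cycle and a 6-cycle; modifying the rule to add independent negative loops and re-running the enumeration is a direct way to observe the trend the article reports.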

9.
The directionality of network information flow dictates how networks process information. A central component of information processing in both biological and artificial neural networks is their ability to perform synergistic integration, a type of computation. We established previously that synergistic integration varies directly with the strength of feedforward information flow. However, the relationships between both recurrent and feedback information flow and synergistic integration remain unknown. To address this, we analyzed the spiking activity of hundreds of neurons in organotypic cultures of mouse cortex. We asked how empirically observed synergistic integration, determined from partial information decomposition, varied with local functional network structure that was categorized into motifs with varying recurrent and feedback information flow. We found that synergistic integration was elevated in motifs with greater recurrent information flow beyond that expected from the local feedforward information flow. Feedback information flow was interrelated with feedforward information flow and was associated with decreased synergistic integration. Our results indicate that synergistic integration is distinctly influenced by the directionality of local information flow.

10.
Shah MM, Anderson AE, Leung V, Lin X, Johnston D. Neuron, 2004, 44(3): 495-508
The entorhinal cortex (EC) provides the predominant excitatory drive to the hippocampal CA1 and subicular neurons in chronic epilepsy. Discerning the mechanisms underlying signal integration within EC neurons is essential for understanding network excitability alterations involving the hippocampus during epilepsy. Twenty-four hours following a single seizure episode, when there were no behavioral or electrographic seizures, we found enhanced spontaneous activity still present in the rat EC in vivo and in vitro. The increased excitability was accompanied by a profound reduction in I(h) in EC layer III neurons and a significant decline in the HCN1 and HCN2 subunits that encode h channels. Consequently, dendritic excitability was enhanced, resulting in increased neuronal firing despite hyperpolarized membrane potentials. The loss of I(h) and the increased neuronal excitability persisted for 1 week following seizures. Our results suggest that dendritic I(h) plays an important role in determining the excitability of EC layer III neurons and their associated neural networks.

11.
The dendritic tree contributes significantly to the elementary computations a neuron performs while converting its synaptic inputs into action potential output. Traditionally, these computations have been characterized as both temporally and spatially localized. Under this localist account, neurons compute near-instantaneous mappings from their current input to their current output, brought about by somatic summation of dendritic contributions that are generated in functionally segregated compartments. However, recent evidence about the presence of oscillations in dendrites suggests a qualitatively different mode of operation: the instantaneous phase of such oscillations can depend on a long history of inputs, and under appropriate conditions, even dendritic oscillators that are remote may interact through synchronization. Here, we develop a mathematical framework to analyze the interactions of local dendritic oscillations and the way these interactions influence single cell computations. Combining weakly coupled oscillator methods with cable theoretic arguments, we derive phase-locking states for multiple oscillating dendritic compartments. We characterize how the phase-locking properties depend on key parameters of the oscillating dendrite: the electrotonic properties of the (active) dendritic segment, and the intrinsic properties of the dendritic oscillators. As a direct consequence, we show how input to the dendrites can modulate phase-locking behavior and hence global dendritic coherence. In turn, dendritic coherence is able to gate the integration and propagation of synaptic signals to the soma, ultimately leading to an effective control of somatic spike generation. Our results suggest that dendritic oscillations enable the dendritic tree to operate on more global temporal and spatial scales than previously thought; notably that local dendritic activity may be a mechanism for generating on-going whole-cell voltage oscillations.
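The weakly coupled oscillator reduction used here boils each oscillating compartment down to a single phase variable; whether two compartments phase-lock then depends on coupling strength versus frequency mismatch. A minimal sketch of that reduction, with generic sinusoidal coupling and illustrative parameters rather than the paper's cable-theoretic coupling functions:

```python
import numpy as np

def phase_difference(omega1, omega2, K, t_end=200.0, dt=0.01):
    """Two sinusoidally coupled phase oscillators, the standard reduction
    for weakly coupled oscillators.  Returns the phase difference over time."""
    th1, th2, diffs = 0.0, 1.0, []
    for _ in range(int(t_end / dt)):
        d1 = omega1 + K * np.sin(th2 - th1)
        d2 = omega2 + K * np.sin(th1 - th2)
        th1, th2 = th1 + dt * d1, th2 + dt * d2
        diffs.append(th2 - th1)
    return np.array(diffs)

locked = phase_difference(1.0, 1.2, K=0.5)     # 2K > frequency mismatch: lock
drifting = phase_difference(1.0, 1.2, K=0.05)  # 2K < mismatch: phases drift
```

The difference Δ obeys dΔ/dt = Δω − 2K sin Δ, so the pair locks whenever 2K exceeds the frequency mismatch Δω, settling at sin Δ* = Δω / 2K; otherwise the phases drift apart indefinitely.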

12.
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
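The one-dimensional compound dynamics invoked in this explanation can be illustrated with a toy model: a population-rate variable driven by white noise, where inhibitory feedback simply adds to the leak. The sketch below uses illustrative parameters, not the paper's leaky integrate-and-fire simulations:

```python
import numpy as np

def population_variance(g, tau=10.0, sigma=1.0, dt=0.01, steps=200_000, seed=0):
    """Stationary variance of a noise-driven population-rate variable whose
    leak is boosted by inhibitory feedback of strength g (effective
    one-dimensional compound dynamics: ds/dt = -(1+g)*s/tau + noise)."""
    rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(steps)
    s, samples = 0.0, np.empty(steps)
    for t in range(steps):
        s += dt * (-(1.0 + g) * s / tau) + noise[t]
        samples[t] = s
    return samples[steps // 2:].var()            # discard the transient

v_open = population_variance(g=0.0)    # feedback channel removed
v_closed = population_variance(g=9.0)  # strong inhibitory feedback
```

Theory predicts a stationary variance of sigma² · tau / (2(1+g)), so closing the inhibitory loop here suppresses population-rate fluctuations by roughly a factor of ten, the same qualitative effect as comparing the feedback and feedforward systems above.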

13.
Cortical circuits process information by rich recurrent interactions between excitatory neurons and inhibitory interneurons. One of the prime functions of interneurons is to stabilize the circuit by feedback inhibition, but the level of specificity on which inhibitory feedback operates is not fully resolved. We hypothesized that inhibitory circuits could enable separate feedback control loops for different synaptic input streams, by means of specific feedback inhibition to different neuronal compartments. To investigate this hypothesis, we adopted an optimization approach. Leveraging recent advances in training spiking network models, we optimized the connectivity and short-term plasticity of interneuron circuits for compartment-specific feedback inhibition onto pyramidal neurons. Over the course of the optimization, the interneurons diversified into two classes that resembled parvalbumin (PV) and somatostatin (SST) expressing interneurons. Using simulations and mathematical analyses, we show that the resulting circuit can be understood as a neural decoder that inverts the nonlinear biophysical computations performed within the pyramidal cells. Our model provides a proof of concept for studying structure-function relations in cortical circuits by a combination of gradient-based optimization and biologically plausible phenomenological models.

14.
In this paper we present an oscillatory neural network composed of two coupled neural oscillators of the Wilson-Cowan type. Each of the oscillators describes the dynamics of average activities of excitatory and inhibitory populations of neurons. The network serves as a model for several possible network architectures. We study how the type and the strength of the connections between the oscillators affect the dynamics of the neural network. We investigate, separately from each other, four possible connection types (excitatory→excitatory, excitatory→inhibitory, inhibitory→excitatory, and inhibitory→inhibitory) and compute the corresponding bifurcation diagrams. In case of weak connections (small strength), connecting populations of different types leads to periodic in-phase oscillations, while connecting populations of the same type leads to periodic anti-phase oscillations. For intermediate connection strengths, the networks can enter quasiperiodic or chaotic regimes, and can also exhibit multistability. More generally, our analysis highlights the great diversity of the response of neural networks to a change of the connection strength, for different connection architectures. In the discussion, we address in particular the problem of information coding in the brain using quasiperiodic and chaotic oscillations. In modeling low levels of information processing, we propose that feature binding should be sought as a temporally coherent phase-locking of neural activity. This phase-locking is provided by one or more interacting convergent zones and does not require a central “top level” subcortical circuit (e.g. the septo-hippocampal system). We build a two layer model to show that although the application of a complex stimulus usually leads to different convergent zones with high frequency oscillations, it is nevertheless possible to synchronize these oscillations at a lower frequency level using envelope oscillations. This is interpreted as a feature binding of a complex stimulus.
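A minimal version of such a two-oscillator network can be simulated directly. The sketch below replaces the Wilson-Cowan sigmoid with a clipped-linear gain for transparency and uses illustrative parameters that put each E-I unit in an oscillatory regime; it only checks that weakly coupled units keep oscillating, not the full bifurcation structure reported in the paper.

```python
import numpy as np

def coupled_oscillators(c=0.1, t_end=200.0, dt=0.01):
    """Two Wilson-Cowan-style units, each an excitatory (E) and inhibitory
    (I) population, coupled E->E with strength c.  A clipped-linear gain
    replaces the sigmoid; parameters place each unit in an oscillatory
    (unstable-spiral) regime around its only fixed point."""
    f = lambda x: np.clip(x, 0.0, 1.0)
    wee, wei, wie, P = 2.2, 2.0, 2.0, 0.5
    E, I = np.array([0.5, 0.1]), np.array([0.1, 0.1])
    trace = []
    for _ in range(int(t_end / dt)):
        E_in = wee * E - wei * I + P + c * E[::-1]   # cross-coupling E->E
        I_in = wie * E
        E = E + dt * (-E + f(E_in))
        I = I + dt * (-I + f(I_in))
        trace.append(E.copy())
    return np.array(trace)

trace = coupled_oscillators()
late = trace[len(trace) // 2:]
amplitudes = late.max(axis=0) - late.min(axis=0)   # both units keep oscillating
```

Sweeping `c` (and moving the coupling term onto `I_in` for excitatory→inhibitory connections) is a simple way to explore how connection type and strength reshape the joint dynamics, though mapping the full bifurcation diagrams requires the proper Wilson-Cowan equations.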

15.
Dendritic morphology has been shown to have a dramatic impact on neuronal function. However, population features such as the inherent variability in dendritic morphology between cells belonging to the same neuronal type are often overlooked when studying computation in neural networks. While detailed models for morphology and electrophysiology exist for many types of single neurons, the role of detailed single cell morphology in the population has not been studied quantitatively or computationally. Here we use the structural context of the neural tissue in which dendritic trees exist to drive their generation in silico. We synthesize the entire population of dentate gyrus granule cells, the most numerous cell type in the hippocampus, by growing their dendritic trees within their characteristic dendritic fields bounded by the realistic structural context of (1) the granule cell layer that contains all somata and (2) the molecular layer that contains the dendritic forest. This process enables branching statistics to be linked to larger scale neuroanatomical features. We find large differences in dendritic total length and individual path length measures as a function of location in the dentate gyrus and of somatic depth in the granule cell layer. We also predict the number of unique granule cell dendrites invading a given volume in the molecular layer. This work enables the complete population-level study of morphological properties and provides a framework to develop complex and realistic neural network models.

16.
Rochel O, Cohen N. Bio Systems, 2007, 87(2-3): 260-266
Information processing in nervous systems intricately combines computation at the neuronal and network levels. Many computations may be envisioned as sequences of signal processing steps along some pathway. How can information encoded by single cells be mapped onto network population codes, and how do different modules or layers in the computation synchronize their communication and computation? These fundamental questions are particularly severe when dealing with real time streams of inputs. Here we study this problem within the context of a minimal signal perception task. In particular, we encode neuronal information by externally applying a space- and time-localized stimulus to individual neurons within a network. We show that a pulse-coupled recurrent neural network can successfully handle this task in real time, and obeys three key requirements: (i) stimulus dependence, (ii) initial-conditions independence, and (iii) accessibility by a readout mechanism. In particular, we suggest that the network's overall level of activity can be used as a temporal cue for a robust readout mechanism. Within this framework, the network can rapidly map a local stimulus onto a population code that can then be reliably read out during some narrow but well defined window of time.

17.
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of neural networks is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of the neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples, including shortest-path and fuzzy decision-making problems, are presented to show how this approach works.
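For comparison, the conventional dynamic-programming computation that the network parallelises can be written in a few lines. A sketch of Bellman-Ford-style relaxation on a toy graph (the graph itself is illustrative, not from the paper):

```python
import math

def shortest_path(graph, source, target):
    """Dynamic programming via edge relaxation: the best cost to each node
    becomes the minimum over predecessors of (cost to predecessor + edge
    cost), i.e. Bellman's optimality principle.  `graph` maps
    node -> {neighbour: edge_cost}; returns (cost, path)."""
    dist = {v: math.inf for v in graph}
    pred = {v: None for v in graph}
    dist[source] = 0
    for _ in range(len(graph) - 1):              # |V|-1 relaxation sweeps
        for u, edges in graph.items():
            for v, w in edges.items():
                if dist[u] + w < dist[v]:
                    dist[v], pred[v] = dist[u] + w, u
    path, node = [], target
    while node is not None:
        path.append(node)
        node = pred[node]
    return dist[target], path[::-1]

g = {'A': {'B': 1, 'C': 4}, 'B': {'C': 2, 'D': 6}, 'C': {'D': 3}, 'D': {}}
cost, path = shortest_path(g, 'A', 'D')
```

Here the cheapest route A→B→C→D (cost 6) beats the direct relaxations A→C→D and A→B→D, exactly the kind of subproblem reuse the neural implementation distributes across its recurrent units.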

18.
An algorithm called bidirectional long short-term memory networks (BLSTM) for processing sequential data is introduced. This supervised learning method trains a special recurrent neural network to use very long-range symmetric sequence context using a combination of nonlinear processing elements and linear feedback loops for storing long-range context. The algorithm is applied to the sequence-based prediction of protein localization and correctly predicts 93.3 percent of novel nonplant proteins and 88.4 percent of novel plant proteins, an improvement over feedforward and standard recurrent networks solving the same problem. The BLSTM system is available as a Web service at http://stepc.stepc.gr/-synaptic/blstm.html.
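The key architectural idea, combining a left-to-right and a right-to-left recurrent pass so that every sequence position sees symmetric context, can be sketched without the LSTM gating machinery. Below is a minimal bidirectional vanilla-RNN pass in NumPy with random illustrative weights; the actual BLSTM additionally uses gated memory cells to store long-range context.

```python
import numpy as np

rng = np.random.default_rng(1)

def bidirectional_rnn(x, W_in, W_rec):
    """One forward and one backward tanh-RNN pass over sequence x.  Each
    position's feature vector concatenates a summary of everything to its
    left with a summary of everything to its right; for brevity the two
    directions share weights."""
    T, h = len(x), W_rec.shape[0]
    fwd, bwd = np.zeros((T, h)), np.zeros((T, h))
    state = np.zeros(h)
    for t in range(T):                        # left-to-right pass
        state = np.tanh(W_in @ x[t] + W_rec @ state)
        fwd[t] = state
    state = np.zeros(h)
    for t in reversed(range(T)):              # right-to-left pass
        state = np.tanh(W_in @ x[t] + W_rec @ state)
        bwd[t] = state
    return np.concatenate([fwd, bwd], axis=1)

x = rng.standard_normal((5, 3))               # toy sequence: 5 steps, 3 features
W_in = 0.5 * rng.standard_normal((4, 3))
W_rec = 0.5 * rng.standard_normal((4, 4))
feats = bidirectional_rnn(x, W_in, W_rec)     # (5, 8): symmetric context
```

A downstream classifier (for localization, a per-sequence softmax in the real system) would then read out from these concatenated forward/backward features.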

