Similar Documents
Found 20 similar documents (search time: 390 ms)
1.
Some cortical circuit models study the mechanisms of the transform from visual inputs to neural responses. They model neural properties such as feature tuning and pattern sensitivity, and how these depend on intracortical connections and contextual inputs. Other cortical circuit models are more concerned with the computational goals of this transform, or with the roles of the neural responses in visual behavior. The appropriate complexity of a cortical circuit model depends on the question asked. Modeling neural circuits of many interacting hypercolumns is a necessary challenge, one that is providing insights into cortical computations, such as visual saliency computation, and linking physiology with global visual cognitive behavior such as bottom-up attentional selection.

2.
Lateral and recurrent connections are ubiquitous in biological neural circuits. Yet while the strong computational abilities of feedforward networks have been extensively studied, our understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Foundational studies by Minsky and Roelfsema argued that computations that require propagation of global information for local computation to take place would particularly benefit from the sequential, parallel nature of processing in recurrent networks. Such “tag propagation” algorithms perform repeated, local propagation of information and were originally introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness, to explore feedforward solutions more fully and to robustly establish the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance, and construct hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to propagating multiple interacting tags and demonstrate that these are efficient computational substrates for more general computations of connectedness by introducing and solving an abstracted biologically inspired decision-making task. Our work thus clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.
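
To make the "tag propagation" idea concrete, the following minimal sketch (an illustration of the general scheme, not the authors' implementation) seeds a tag at one pixel of a binary image and repeatedly propagates it to active 4-connected neighbors; after convergence the tag covers exactly the connected component containing the seed, so connectedness of two pixels reduces to a membership test. The grid, seed, and iteration cap are assumptions for the demo.

```python
import numpy as np

def propagate_tag(image, seed, n_steps=None):
    """Iteratively spread a tag from `seed` to 4-connected active pixels.

    image: 2D binary array (1 = active pixel); seed: (row, col) of an
    active pixel. Returns a boolean mask of the connected component.
    """
    h, w = image.shape
    tag = np.zeros((h, w), dtype=bool)
    tag[seed] = True
    if n_steps is None:
        n_steps = h * w  # enough iterations to cover any component
    for _ in range(n_steps):
        # Local, parallel update: a pixel acquires the tag if it is
        # active and any 4-neighbor already carries the tag.
        spread = np.zeros_like(tag)
        spread[1:, :] |= tag[:-1, :]
        spread[:-1, :] |= tag[1:, :]
        spread[:, 1:] |= tag[:, :-1]
        spread[:, :-1] |= tag[:, 1:]
        new_tag = tag | (spread & (image == 1))
        if np.array_equal(new_tag, tag):
            break  # converged: component fully tagged
        tag = new_tag
    return tag

# Two pixels are connected iff one lies in the tag spread from the other.
img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 0, 0],
                [0, 0, 0, 1]])
component = propagate_tag(img, seed=(0, 0))
print(component[2, 1])  # True: connected to the seed
print(component[0, 3])  # False: a different component
```

Each update uses only local neighborhood information, which is what makes the computation natural for recurrent or laterally connected circuits and costly for fixed-depth feedforward ones.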

3.
The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity patterns and behavior that can be modeled, and suggest a unified setting in which diverse cognitive computations and mechanisms can be studied.
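
One biological feature mentioned above, Dale's principle, is straightforward to impose in gradient-based training. The sketch below (a minimal NumPy illustration of the general idea, not the authors' Theano code) keeps an unconstrained trainable matrix, rectifies it, and multiplies by a fixed diagonal sign matrix so that each unit's outgoing weights share one sign. The network size and the 80/20 excitatory/inhibitory split are assumptions for the example.

```python
import numpy as np

n_units = 10
frac_exc = 0.8  # assumed 80/20 excitatory/inhibitory split

# Fixed diagonal sign matrix: +1 for excitatory units, -1 for inhibitory.
signs = np.ones(n_units)
signs[int(frac_exc * n_units):] = -1.0
D = np.diag(signs)

# Unconstrained trainable parameters (these would receive the gradients).
rng = np.random.default_rng(0)
W_raw = rng.normal(scale=0.5, size=(n_units, n_units))

def effective_weights(W_raw):
    # Rectify so magnitudes are non-negative, then apply the fixed signs.
    # Under dynamics r <- f(W r), column j holds unit j's outgoing weights,
    # so each column becomes entirely excitatory or entirely inhibitory.
    return np.maximum(W_raw, 0.0) @ D

W = effective_weights(W_raw)
# Every column now has a single sign, as Dale's principle requires.
assert all((W[:, j] >= 0).all() or (W[:, j] <= 0).all() for j in range(n_units))
```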

4.
A large body of experimental and theoretical work on neural coding suggests that the information stored in brain circuits is represented by time-varying patterns of neural activity. Reservoir computing, where the activity of a recurrently connected pool of neurons is read by one or more units that provide an output response, successfully exploits this type of neural activity. However, the question of system robustness to small structural perturbations, such as failing neurons and synapses, has been largely overlooked. This contrasts with well-studied dynamical perturbations that lead to divergent network activity in the presence of chaos, as is the case for many reservoir networks. Here, we distinguish between two types of structural network perturbations, namely local (e.g., individual synaptic or neuronal failure) and global (e.g., network-wide fluctuations). Surprisingly, we show that while global perturbations have a limited impact on the ability of reservoir models to perform various tasks, local perturbations can produce drastic effects. To address this limitation, we introduce a new architecture where the reservoir is driven by a layer of oscillators that generate stable and repeatable trajectories. This model outperforms previous implementations while being resistant to relatively large local and global perturbations. This finding has implications for the design of reservoir models that capture the capacity of brain circuits to perform cognitively and behaviorally relevant tasks while remaining robust to various forms of perturbations. Further, our work proposes a novel role for neuronal oscillations found in cortical circuits, where they may serve as a collection of inputs from which a network can robustly generate complex dynamics and implement rich computations.
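
A toy version of the proposed architecture might look like the following sketch (the sizes, frequencies, and target trajectory are my assumptions, not the paper's): a bank of fixed sinusoidal oscillators drives a random recurrent reservoir, and a ridge-regression readout maps reservoir states to a desired output. Because the oscillator drive is stable and repeatable, the reservoir trajectories it entrains are too.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_osc, T = 200, 5, 1000
dt = 0.01
t = np.arange(T) * dt

# A bank of stable oscillators with assumed fixed frequencies (Hz).
freqs = np.array([1.0, 2.0, 3.5, 5.0, 8.0])
osc = np.sin(2 * np.pi * freqs[None, :] * t[:, None])  # (T, n_osc)

# Random recurrent reservoir, spectral radius scaled below 1 for stability.
W = rng.normal(size=(n_res, n_res)) / np.sqrt(n_res)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=(n_res, n_osc))

# Run the reservoir driven by the oscillators.
x = np.zeros(n_res)
states = np.empty((T, n_res))
for k in range(T):
    x = np.tanh(W @ x + W_in @ osc[k])
    states[k] = x

# Ridge-regression readout onto a target trajectory (a chirp, for example).
target = np.sin(2 * np.pi * (1.0 + 2.0 * t) * t)
lam = 1e-3
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ target)
print("training MSE:", np.mean((states @ W_out - target) ** 2))
```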

5.
From single-cell organisms to complex neural networks, biological systems evolved to provide control solutions that generate context- and goal-specific actions. Neural circuits performing sensorimotor computations to drive navigation employ inhibitory control as a gating mechanism as they hierarchically transform (multi)sensory information into motor actions. Here, the focus is on this literature, to critically discuss the proposition that prominent inhibitory projections form sensorimotor circuits. After reviewing the neural circuits of navigation across various invertebrate species, it is argued that, with increased neural circuit complexity and the emergence of parallel computations, inhibitory circuits acquire new functions. The contribution of inhibitory neurotransmission to navigation goes beyond shaping the communication that drives motor neurons; it includes the encoding of emergent sensorimotor representations. A mechanistic understanding of the neural circuits performing sensorimotor computations in invertebrates will unravel the minimum circuit requirements driving adaptive navigation.

6.
7.
Local neocortical circuits are characterized by stereotypical physiological and structural features that subserve generic computational operations. These basic computations of the cortical microcircuit emerge through the interplay of neuronal connectivity, cellular intrinsic properties, and synaptic plasticity dynamics. How these interacting mechanisms generate specific computational operations in the cortical circuit remains largely unknown. Here, we identify the neurophysiological basis of two computations on synaptic inputs in a cortical circuit: rate of change and anticipation. Through biophysically realistic computer simulations and neuronal recordings, we show that the rate-of-change computation is carried out robustly in cortical networks through the combination of two ubiquitous brain mechanisms: short-term synaptic depression and spike-frequency adaptation. We then show how this rate-of-change circuit can be embedded in a convergently connected network to temporally anticipate incoming synaptic inputs, in quantitative agreement with experimental findings on anticipatory responses to moving stimuli in the primary visual cortex. Given the robustness of the mechanism and the widespread nature of the physiological machinery involved, we suggest that rate-of-change computation and temporal anticipation are principal, hard-wired functions of neural information processing in the cortical microcircuit.
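
The rate-of-change mechanism can be illustrated with a minimal rate-based model of short-term synaptic depression in the Tsodyks-Markram style (a sketch with assumed parameters, not the paper's biophysically detailed simulations): the steady-state synaptic drive is compressive in the presynaptic rate, while changes in rate produce transient deviations from steady state, yielding a derivative-like signal.

```python
import numpy as np

dt = 1e-3          # time step (s)
T = int(3.0 / dt)
t = np.arange(T) * dt

# Presynaptic firing rate (Hz): constant, then a ramp, then constant again.
rate = np.where(t < 1.0, 10.0, np.where(t < 2.0, 10.0 + 40.0 * (t - 1.0), 50.0))

# Tsodyks-Markram-style short-term depression (assumed parameters).
tau_rec, U = 0.5, 0.5   # recovery time constant (s), release fraction
x = 1.0                 # available synaptic resources
drive = np.empty(T)
for k in range(T):
    # Resources recover toward 1 and are consumed in proportion to the rate.
    dx = (1.0 - x) / tau_rec - U * x * rate[k]
    x += dt * dx
    drive[k] = U * x * rate[k]  # effective postsynaptic drive

# Steady-state drive is strongly compressive in the rate, so sustained
# differences in absolute rate are suppressed, while *changes* in rate
# produce transient excursions: a derivative-like signal.
print("drive during constant 10 Hz:", drive[int(0.9 / dt)].round(3))
print("drive during the ramp:      ", drive[int(1.5 / dt)].round(3))
print("drive during constant 50 Hz:", drive[int(2.9 / dt)].round(3))
```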

8.
Identifying the physical structure of the neuronal circuits that governs neuronal responses is an important goal of brain research. With rapid advances in large-scale recording techniques, identifying a neuronal circuit comprising multiple neurons and stages or layers is becoming both possible and highly desirable. Although methods for mapping the connection structure of circuits have advanced greatly in recent years, they are mostly limited to simple scenarios involving a few neurons in a pairwise fashion; dissecting dynamical circuits, and in particular mapping out a complete functional circuit that converges onto a single neuron, remains a challenging problem. Here, we show that a recent method, termed spike-triggered non-negative matrix factorization (STNMF), can address these issues. By simulating different scenarios of spiking neural networks with various connections between neurons and stages, we demonstrate that STNMF is an effective method for dissecting the functional connections within a circuit. Using spiking activity recorded at neurons of the output layer, STNMF can recover a complete circuit consisting of all cascaded computational components of the presynaptic neurons, as well as their spiking activities. For simulated simple and complex cells of the primary visual cortex, STNMF allows us to dissect the pathway of visual computation. Taken together, these results suggest that STNMF provides a useful approach for investigating neuronal systems by leveraging recorded functional neuronal activity.
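
Schematically, STNMF factorizes the spike-triggered stimulus ensemble with non-negative matrix factorization. The sketch below (synthetic data and parameters are my assumptions, using scikit-learn's NMF rather than the authors' code) simulates an output cell that pools two rectified subunits and then checks whether the NMF modules recover the subunit filters.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
dim, n_frames = 40, 50_000

# Two hypothetical presynaptic subunits with localized filters.
filt1 = np.zeros(dim); filt1[5:15] = 1.0
filt2 = np.zeros(dim); filt2[25:35] = 1.0

# White-noise stimulus; the output cell spikes when the pooled, rectified
# subunit activations cross a threshold (a simple LN-LN cascade).
stim = rng.normal(size=(n_frames, dim))
pooled = np.maximum(stim @ filt1, 0) + np.maximum(stim @ filt2, 0)
spikes = pooled > np.quantile(pooled, 0.98)

# Spike-triggered ensemble, shifted to be non-negative for NMF.
ste = stim[spikes]
ste -= ste.min()

# Factorize: with enough spikes, the modules should approximate the subunits.
nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
weights = nmf.fit_transform(ste)   # per-spike module weights
modules = nmf.components_          # candidate subunit filters, shape (2, dim)

for m in modules:
    print("module peaks at stimulus position", np.argmax(m))
```

How cleanly the modules separate depends on the spike budget and the NMF settings; the abstract's point is that the recovered modules correspond to the presynaptic computational components.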

9.
10.
Biological data suggest that activity patterns emerging in small- and large-scale neural systems may play an important role in performing the functions of the neural system and, in particular, neural computations. This paper proposes that neural systems can be understood in terms of pattern computation and abstract communication systems theory. It is shown that, by analysing high-resolution surface EEG data, it is possible to determine abstract probabilistic rules that describe how emerging activity patterns follow earlier activity patterns. The results indicate the applicability of the proposed approach to understanding the working of complex neural systems.
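
As a minimal illustration of such probabilistic pattern rules (entirely my construction, with synthetic data standing in for the EEG recordings): discretize the multichannel signal into pattern labels, then estimate a first-order transition matrix giving the probability that each pattern follows each other pattern.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic stand-in for multichannel surface-EEG samples (time x channels).
T, n_chan, n_patterns = 5000, 16, 4
data = rng.normal(size=(T, n_chan)).cumsum(axis=0)  # smooth, drifting signal

# Step 1: discretize instantaneous topographies into pattern labels.
labels = KMeans(n_clusters=n_patterns, n_init=10, random_state=0).fit_predict(data)

# Step 2: estimate P(next pattern | current pattern) from the label sequence.
counts = np.zeros((n_patterns, n_patterns))
for a, b in zip(labels[:-1], labels[1:]):
    counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)

np.set_printoptions(precision=2, suppress=True)
print(transition)  # rows: current pattern; columns: next pattern
```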

11.
Neurons in sensory systems can represent information not only by their firing rate, but also by the precise timing of individual spikes. For example, certain retinal ganglion cells, first identified in the salamander, encode the spatial structure of a new image by their first-spike latencies. Here we explore how this temporal code can be used by downstream neural circuits for computing complex features of the image that are not available from the signals of individual ganglion cells. To this end, we feed the experimentally observed spike trains from a population of retinal ganglion cells to an integrate-and-fire model of post-synaptic integration. The synaptic weights of this integration are tuned according to the recently introduced tempotron learning rule. We find that this model neuron can perform complex visual detection tasks in a single synaptic stage that would require multiple stages for neurons operating instead on neural spike counts. Furthermore, the model computes rapidly, using only a single spike per afferent, and can signal its decision in turn by just a single spike. Extending these analyses to large ensembles of simulated retinal signals, we show that the model can detect the orientation of a visual pattern independent of its phase, an operation thought to be one of the primitives in early visual processing. We analyze how these computations work and compare the performance of this model to other schemes for reading out spike-timing information. These results demonstrate that the retina formats spatial information into temporal spike sequences in a way that favors computation in the time domain. Moreover, complex image analysis can be achieved already by a simple integrate-and-fire model neuron, emphasizing the power and plausibility of rapid neural computing with spike times.
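
A simplified version of this readout scheme can be sketched as follows (a reduced implementation of the published tempotron rule, with an invented latency-classification task and assumed constants): the neuron sums weighted PSP kernels at the afferents' spike latencies, fires if the peak voltage crosses threshold, and on errors nudges each weight by the PSP its afferent contributed at the time of the voltage maximum.

```python
import numpy as np

rng = np.random.default_rng(4)
n_aff, dt = 50, 1e-3                     # afferents, time step (s)
t = np.arange(0.0, 0.5, dt)              # trial time axis
tau, tau_s = 0.015, 0.015 / 4            # PSP kernel time constants (s)

# Fixed normalization so the kernel peaks at 1.
t_peak = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)
v0 = 1.0 / (np.exp(-t_peak / tau) - np.exp(-t_peak / tau_s))

def kernel(dt_spike):
    """Double-exponential PSP kernel, zero before the spike."""
    k = v0 * (np.exp(-dt_spike / tau) - np.exp(-dt_spike / tau_s))
    return np.where(dt_spike >= 0.0, k, 0.0)

def voltage(latencies, w):
    """Membrane trace from one spike per afferent at the given latencies."""
    return (w[:, None] * kernel(t[None, :] - latencies[:, None])).sum(axis=0)

# Toy task: two classes defined by opposite latency shifts on a template.
base = rng.uniform(0.05, 0.35, size=n_aff)
shift = 0.04 * np.sin(np.arange(n_aff))

def make_trial(label):
    jitter = rng.normal(0.0, 0.002, size=n_aff)
    return base + (shift if label else -shift) + jitter

w, lr, theta = rng.normal(0.0, 0.01, n_aff), 0.02, 1.0
for trial in range(500):
    label = bool(rng.integers(2))
    lat = make_trial(label)
    v = voltage(lat, w)
    t_max = np.argmax(v)
    if (v[t_max] >= theta) != label:
        # Tempotron rule: nudge each weight by the PSP its afferent
        # contributed at the time of the voltage maximum.
        contrib = kernel(t[t_max] - lat)
        w += lr * contrib if label else -lr * contrib
print("weight range after training:", w.min().round(3), w.max().round(3))
```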

12.
It is quite difficult to construct circuits of spiking neurons that can carry out complex computational tasks. On the other hand, even randomly connected circuits of spiking neurons can in principle be used for complex computational tasks, such as time-warp-invariant speech recognition. This is possible because such circuits have an inherent tendency to integrate incoming information in such a way that simple linear readouts can be trained to transform the current circuit activity into the target output for a very large number of computational tasks. Consequently, we propose to analyze circuits of spiking neurons in terms of their roles as analog fading memory and non-linear kernels, rather than as implementations of specific computational operations and algorithms. This article is a sequel to [W. Maass, T. Natschläger, H. Markram, Real-time computing without stable states: a new framework for neural computation based on perturbations, Neural Comput. 14 (11) (2002) 2531-2560], and contains new results about the performance of generic neural microcircuit models for the recognition of speech that is subject to linear and non-linear time-warps, as well as for computations on time-varying firing rates. These computations rely, apart from general properties of generic neural microcircuit models, just on the capabilities of simple linear readouts trained by linear regression. This article also provides detailed data on the fading-memory property of generic neural microcircuit models, and a quick review of other new results on the computational power of such circuits of spiking neurons.
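
The fading-memory property can be demonstrated directly with a small sketch (a rate-based stand-in for the spiking microcircuit, with assumed sizes and delays): drive a random recurrent circuit with a time-varying input and train one linear readout per delay, by plain linear regression, to reconstruct the input from some number of steps in the past. Reconstruction quality decays gracefully with the delay, which is the fading memory the article analyzes.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 300, 4000

# A generic random recurrent circuit driven by a random time-varying input.
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable
w_in = rng.normal(size=N)
u = rng.uniform(-1, 1, size=T)

x = np.zeros(N)
states = np.empty((T, N))
for k in range(T):
    x = np.tanh(W @ x + w_in * u[k])
    states[k] = x

# Fading memory: train one linear readout per delay d to reconstruct
# u(t - d) from the instantaneous circuit state, via linear regression.
for d in (1, 5, 10, 20, 40):
    X, y = states[d:], u[:-d]
    w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = np.corrcoef(X @ w_out, y)[0, 1]
    print(f"delay {d:2d} steps: readout correlation {r:.2f}")
```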

13.
The response of a population of neurons to time-varying synaptic inputs can show a rich phenomenology that is hardly predictable from the membrane's inherent time constants. For example, a network of neurons in a state of spontaneous activity can respond significantly more rapidly than each single neuron taken individually. Under the assumption that the statistics of the synaptic input are the same for a population of similarly behaving neurons (the mean-field approximation), it is possible to greatly simplify the study of neural circuits, both in the case in which the input statistics are stationary (reviewed in La Camera et al., Biol Cybern, 2008) and in the case in which they are time-varying and unevenly distributed over the dendritic tree. Here, we review theoretical and experimental results on the single-neuron properties that are relevant for the dynamical collective behavior of a population of neurons. We focus on the response of integrate-and-fire neurons and real cortical neurons to long-lasting, noisy, in vivo-like stationary inputs, and show how the theory can predict the observed rhythmic activity of cultures of neurons. We then show how cortical neurons adapt on multiple time scales in response to input with stationary statistics in vitro. Next, we review how it is possible to study the general response properties of a neural circuit to time-varying inputs by estimating the response of single neurons to noisy sinusoidal currents. Finally, we address the dendrite-soma interactions in cortical neurons leading to gain modulation and spike bursts, and show how these effects can be captured by a two-compartment integrate-and-fire neuron. Most of the experimental results reviewed in this article have been successfully reproduced by simple integrate-and-fire model neurons.
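
The sinusoidal-response method mentioned above can be sketched in a few lines (LIF parameters and drive statistics are assumptions, for illustration only): simulate a population of integrate-and-fire neurons receiving a noisy, sinusoidally modulated current, and read off the gain and phase of the population rate at the drive frequency.

```python
import numpy as np

rng = np.random.default_rng(6)
n_neurons, dt, T = 2000, 1e-4, 2.0
steps = int(T / dt)
t = np.arange(steps) * dt

# LIF parameters (assumed; dimensionless voltage between reset 0 and threshold 1).
tau_m, v_th, v_reset = 0.02, 1.0, 0.0
mu, eps, f, sigma = 0.9, 0.1, 5.0, 0.5   # mean drive, modulation, Hz, noise

drive = mu * (1.0 + eps * np.sin(2 * np.pi * f * t))
v = rng.random(n_neurons)                 # random initial conditions
psth = np.zeros(steps)
for k in range(steps):
    noise = sigma * np.sqrt(dt / tau_m) * rng.normal(size=n_neurons)
    v += dt / tau_m * (drive[k] - v) + noise
    spiked = v >= v_th
    psth[k] = spiked.mean() / dt          # instantaneous population rate (Hz)
    v[spiked] = v_reset

# Gain and phase at the drive frequency, by projecting the rate onto
# sine and cosine components.
rate = psth - psth.mean()
a = 2 * np.mean(rate * np.sin(2 * np.pi * f * t))
b = 2 * np.mean(rate * np.cos(2 * np.pi * f * t))
amp, phase = np.hypot(a, b), np.arctan2(b, a)
print(f"mean rate {psth.mean():.1f} Hz, modulation {amp:.1f} Hz, "
      f"phase {np.degrees(phase):.0f} deg")
```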

14.
Frontal cortex is thought to underlie many advanced cognitive capacities, from self-control to long-term planning. Reflecting these diverse demands, frontal neural activity is notoriously idiosyncratic, with tuning properties that are correlated with endless numbers of behavioral and task features. This menagerie of tuning has made it difficult to extract organizing principles that govern frontal neural activity. Here, we contrast two successful yet seemingly incompatible approaches that have begun to address this challenge. Inspired by the indecipherability of single-neuron tuning, the first approach casts frontal computations as dynamical trajectories traversed by arbitrary mixtures of neurons. The second approach, by contrast, attempts to explain the functional diversity of frontal activity with the biological diversity of cortical cell types. Motivated by the recent discovery of functional clusters in frontal neurons, we propose a consilience between these population and cell-type-specific approaches to neural computation, advancing the conjecture that evolutionarily inherited cell-type constraints create the scaffold within which frontal population dynamics must operate.

15.
Understanding the computations that take place in neural circuits requires identifying how the neurons in those circuits are connected to one another. In addition, recent research indicates that aberrant neuronal wiring may be the cause of several neurodevelopmental disorders, further emphasizing the importance of identifying the wiring diagrams of brain circuits. To address this issue, several new approaches have recently been developed. In this review, we describe several methods that are currently available to investigate the structure and connectivity of the brain, and discuss their strengths and limitations.

16.
Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt correlator (HRC), relates visual inputs to neural activity and behavioral responses to motion, but the circuits that implement this computation remain unknown. By using cell-type-specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, "reverse phi," that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection.
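
For reference, the HRC itself is compact enough to state in code. The sketch below (stimulus and delay are arbitrary choices for the demo) correlates each photoreceptor signal with a delayed copy of its neighbor's in two mirror-symmetric arms and subtracts them; the sign of the mean output reports motion direction.

```python
import numpy as np

def hrc_response(stimulus, delay):
    """Mean Hassenstein-Reichardt correlator output for a 1-D movie.

    stimulus: array (time, space); delay: delay-line length in frames.
    Each adjacent photoreceptor pair feeds two mirror-symmetric
    delay-and-multiply arms: R(t) = A(t-d)B(t) - A(t)B(t-d).
    """
    a, b = stimulus[:, :-1], stimulus[:, 1:]       # adjacent photoreceptors
    r = a[:-delay] * b[delay:] - a[delay:] * b[:-delay]
    return r.mean()

# Sinusoidal grating drifting rightwards and leftwards.
T, X = 500, 64
t, x = np.arange(T)[:, None], np.arange(X)[None, :]
right = np.sin(2 * np.pi * (x / 16.0 - t / 25.0))
left = np.sin(2 * np.pi * (x / 16.0 + t / 25.0))

print("rightward motion:", hrc_response(right, delay=5).round(3))
print("leftward motion: ", hrc_response(left, delay=5).round(3))
# Opposite signs: the correlator reports direction. Inverting contrast
# between frames ("reverse phi") flips the sign of this correlation,
# matching the illusory reversed-motion percept mentioned above.
```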

17.
Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time, thus making it crucial for context-dependent computation. Here, we use a top-down modeling approach to examine network-level mechanisms of working memory, an enigmatic issue and central topic of study in neuroscience. We optimize thousands of recurrent rate-based neural networks on a working memory task and then perform dynamical systems analysis on the ensuing optimized networks, wherein we find that four distinct dynamical mechanisms can emerge. In particular, we show the prevalence of a mechanism in which memories are encoded along slow stable manifolds in the network state space, leading to a phasic neuronal activation profile during memory periods. In contrast to mechanisms in which memories are directly encoded at stable attractors, these networks naturally forget stimuli over time. Despite this seeming functional disadvantage, they are more efficient in terms of how they leverage their attractor landscape and, paradoxically, are considerably more robust to noise. Our results provide new hypotheses regarding how working memory function may be encoded within the dynamics of neural circuits.

18.
Several efforts are currently underway to decipher the connectome or parts thereof in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is out of the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints—that arise by virtue of the connectome—connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model which satisfies a small number of properties, we ask if just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form, for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network, before we can discern such results for other classes of networks.

19.
Katsov AY, Clandinin TR. Neuron. 2008;59(2):322-335.
Motion vision is an ancient faculty, critical to many animals in a range of ethological contexts, the underlying algorithms of which provide central insights into neural computation. However, how motion cues guide behavior is poorly understood, as the neural circuits that implement these computations are largely unknown in any organism. We develop a systematic, forward genetic approach using high-throughput, quantitative behavioral analyses to identify the neural substrates of motion vision in Drosophila in an unbiased fashion. We then delimit the behavioral contributions of both known and novel circuit elements. Contrary to expectation from previous studies, we find that orienting responses to motion are shaped by at least two neural pathways. These pathways are sensitive to different visual features, diverge immediately postsynaptic to photoreceptors, and are coupled to distinct behavioral outputs. Thus, behavioral responses to complex stimuli can rely on surprising neural specialization from even the earliest sensory processing stages.

20.
Presynaptic function (total citations: 5; self-citations: 0; citations by others: 5)
Changing the strength of synapses is key to the adaptive modifications of what neuronal circuits compute. Unsurprisingly, many different mechanisms have evolved to alter synaptic strength. Some of these mechanisms depend on the history of synaptic use, others reflect the activity of modulatory neurons that are controlled through neural computations, and still others involve more global measures of neural activity. The molecular machinery synapses use to convey information from one neuron to the next not only plays an essential part in brain function but also is at the basis of processes that are vital to all cells. Because membrane fusion events at synapses are so precisely controlled, synapses offer an especially favorable system in which to study these basic processes. Here, I review some of the recent progress that has been made in understanding both how synaptic strength is regulated and how fundamental cell biological mechanisms are used to accomplish neuronal intercommunication.
