Similar Literature
20 similar documents found (search time: 15 ms)
1.
The ability of the human brain to carry out logical reasoning can be interpreted, in general, as a by-product of the adaptive capacities of complex neural networks. Thus, we seek to ground abstract logical operations in the general properties of neural networks designed as learning modules. We show that logical operations executable by McCulloch–Pitts binary networks can also be programmed in analog neural networks built with associative memory modules that process inputs as logical gates. These modules can interact among themselves to generate dynamical systems that extend the repertoire of logical operations. We demonstrate how the operations of the exclusive-OR or the implication appear as outputs of these interacting modules. In particular, we provide a model of the exclusive-OR that succeeds in evaluating an odd number of options (the exclusive-OR of classical logic fails in this case), thus paving the way for a more plausible biological model of this important logical operator. We propose that a brain trained to compute can associate a complex logical operation with an orderly structured but temporally contingent episode by establishing a codified association among memory modules. This explanation offers an interpretation of complex logical processes (eventually learned) as associations of contingent events in memorized episodes. We suggest, as an example, a cognitive model that describes these “logical episodes”.
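To make the contrast concrete, here is a minimal sketch (not the authors' associative-memory model) of XOR composed from McCulloch–Pitts threshold units, together with the n-input parity that chaining the two-input gate computes; all function names are ours.

```python
# Minimal sketch: McCulloch-Pitts threshold units composed into XOR,
# and the n-ary parity that a chained two-input XOR computes.
def mp_unit(inputs, weights, threshold):
    """A McCulloch-Pitts neuron: fires (1) iff the weighted sum reaches threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def xor2(a, b):
    # XOR is not linearly separable, so it needs two layers of threshold units.
    h1 = mp_unit([a, b], [1, -1], 1)     # a AND NOT b
    h2 = mp_unit([a, b], [-1, 1], 1)     # b AND NOT a
    return mp_unit([h1, h2], [1, 1], 1)  # OR of the two

def chained_xor(bits):
    """Folding xor2 over n inputs yields parity: 1 iff an odd number of 1s."""
    out = 0
    for b in bits:
        out = xor2(out, b)
    return out
```

Note that for three true inputs `chained_xor` returns 1 (odd parity), which is the behaviour the abstract's model recovers and which a single "exactly one of n" reading of XOR does not.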

2.
In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components (genetic circuits, biochemical cascades, ion channels, and others) enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic and metabolic constraints. Neurons provide an example: fluctuations in their biophysical states limit the information they can encode, and some 20–60% of the brain's total energy budget is used for signalling purposes, either via action potentials or via synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link the information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma.
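The variational free energy mentioned here decomposes into complexity (divergence of beliefs from the prior) minus accuracy (expected log-likelihood of observations). A small discrete sketch, with a toy two-state model whose numbers are purely illustrative, shows that the exact posterior minimises this quantity:

```python
import math

def free_energy(q, prior, likelihood, obs):
    """Variational free energy F = complexity - accuracy for discrete states.
    q: approximate posterior over states; prior: p(s); likelihood[s][o]: p(o|s)."""
    complexity = sum(q[s] * math.log(q[s] / prior[s]) for s in q if q[s] > 0)
    accuracy = sum(q[s] * math.log(likelihood[s][obs]) for s in q if q[s] > 0)
    return complexity - accuracy

# Toy two-state generative model (all numbers are illustrative assumptions).
prior = {'s1': 0.5, 's2': 0.5}
lik = {'s1': {'o': 0.9}, 's2': {'o': 0.1}}

# The exact posterior p(s|o) here is (0.9, 0.1); at that q, F equals the
# negative log evidence -log p(o). A mismatched q incurs a higher F.
post = {'s1': 0.9, 's2': 0.1}
F_post = free_energy(post, prior, lik, 'o')
F_flat = free_energy({'s1': 0.5, 's2': 0.5}, prior, lik, 'o')
```

Minimising F therefore trades complexity against accuracy, which is the link to metabolically efficient coding drawn in the abstract.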

3.
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle, there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to establish a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that, under some conditions, the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect the inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, in both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
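For reference, the baseline method the abstract contrasts with spiking dynamics, Gibbs sampling over binary units, can be sketched on a toy pairwise model; the weights below are illustrative, and this is the standard reversible scheme, not the paper's non-reversible chain:

```python
import math, random

def gibbs_chain(W, b, n_steps, seed=1):
    """Gibbs sampler for a binary pairwise model p(x) ~ exp(b.x + x'Wx/2).
    Each step resamples one unit from its exact conditional -- the classic
    (reversible) MCMC scheme the abstract contrasts with spiking dynamics."""
    rng = random.Random(seed)
    n = len(b)
    x = [0] * n
    samples = []
    for t in range(n_steps):
        i = t % n  # systematic scan over units
        field = b[i] + sum(W[i][j] * x[j] for j in range(n) if j != i)
        x[i] = 1 if rng.random() < 1 / (1 + math.exp(-field)) else 0
        samples.append(tuple(x))
    return samples

# Toy 2-unit model (weights are illustrative assumptions).
W = [[0.0, 1.0], [1.0, 0.0]]
b = [-0.5, -0.5]
samples = gibbs_chain(W, b, 20000)
# Empirical probability of the jointly-active state, to compare with the
# exact value 1 / (2 + 2*exp(-0.5)) obtained by enumerating all four states.
p11 = sum(1 for s in samples if s == (1, 1)) / len(samples)
```

The long-run state frequencies approach the target distribution, which is the property the paper recovers from spiking activity via a different (non-reversible) chain.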

4.
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with diverse, data-constrained nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving.
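The two inference tasks named here, marginalization and marginal maximum-a-posteriori estimation, are easy to state concretely on a small stored joint distribution; the toy joint below is an illustrative assumption, not the paper's microcircuit distribution:

```python
from itertools import product

# Toy stored joint distribution over three binary variables
# (the scoring rule is arbitrary and purely illustrative).
joint = {}
for x in product([0, 1], repeat=3):
    joint[x] = 1.0 + x[0] + 2 * x[1] * x[2]
Z = sum(joint.values())
joint = {x: p / Z for x, p in joint.items()}

def marginal(joint, i):
    """P(x_i): sum the joint over all other variables."""
    m = {0: 0.0, 1: 0.0}
    for x, p in joint.items():
        m[x[i]] += p
    return m

def marginal_map(joint, i):
    """Marginal maximum-a-posteriori estimate for variable i."""
    m = marginal(joint, i)
    return max(m, key=m.get)
```

A sampling network performs the same computations implicitly: the fraction of time it spends in states with x_i = 1 estimates the marginal, and the more probable value wins the marginal-MAP comparison.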

5.
Neurons are spatially extended structures that receive and process inputs on their dendrites. It is generally accepted that neuronal computations arise from the active integration of synaptic inputs along a dendrite between the input location and the location of spike generation in the axon initial segment. However, many applications, such as simulations of brain networks, use point neurons (neurons without a morphological component) as computational units to keep the conceptual complexity and computational costs low. Inevitably, these applications thus omit a fundamental property of neuronal computation. In this work, we present an approach to model an artificial synapse that mimics dendritic processing without the need to explicitly simulate dendritic dynamics. The model synapse employs an analytic solution of the cable equation to compute the neuron’s membrane potential following dendritic inputs. Green’s function formalism is used to derive this closed-form solution. We show that by using this synapse model, point neurons can achieve results that were previously limited to the realms of multi-compartmental models. Moreover, a computational advantage is achieved when only a small number of simulated synapses impinge on a morphologically elaborate neuron. Opportunities and limitations are discussed.
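The flavour of such an analytic solution can be illustrated with the Green's function of the infinite passive cable in dimensionless units (distance in space constants, time in membrane time constants); this linear-cable sketch conveys the idea of superposing impulse responses, and is not the paper's exact synapse model:

```python
import math

def cable_green(x, t):
    """Green's function of the infinite passive cable, dimensionless units:
    G(x, t) = exp(-x^2/(4t) - t) / sqrt(4*pi*t) for t > 0."""
    if t <= 0:
        return 0.0
    return math.exp(-x * x / (4 * t) - t) / math.sqrt(4 * math.pi * t)

def soma_potential(t, inputs):
    """Superpose impulse responses for a list of (distance, onset_time, charge)
    synaptic events, exploiting the linearity of the passive cable."""
    return sum(q * cable_green(x, t - t0) for x, t0, q in inputs)

# A distal input arrives at the soma attenuated relative to a proximal one.
proximal = soma_potential(0.5, [(0.2, 0.0, 1.0)])
distal = soma_potential(0.5, [(1.5, 0.0, 1.0)])
```

Because the response is evaluated analytically, no spatial discretisation of the dendrite has to be simulated, which is the computational saving the abstract describes.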

6.
7.
In many regions of the brain, information is represented by patterns of activity occurring over populations of neurons. Understanding the encoding of information in neural population activity is important both for grasping the fundamental computations underlying brain function, and for interpreting signals that may be useful for the control of prosthetic devices. We concentrate on the representation of information in neurons with Poisson spike statistics, in which information is contained in the average spike firing rate. We analyze the properties of population codes in terms of the tuning functions that describe individual neuron behavior. The discussion centers on three computational questions: what information is encoded in a population; how the brain computes using populations; and when a population code is optimal. To answer these questions, we discuss several methods for decoding population activity in an experimental setting. We also discuss how computation can be performed within the brain in networks of interconnected populations. Finally, we examine questions of optimal design of population codes that may help to explain their particular form and the set of variables that are best represented. We show that for population codes based on neurons that have a Poisson distribution of spike probabilities, the behavior and computational properties of the code can be understood in terms of the tuning properties of individual cells.
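One standard decoding method for such a population is maximum-likelihood estimation under Poisson statistics: with tuning curves f_i(s), the log-likelihood of observed counts n_i is sum_i (n_i log f_i(s) - f_i(s)). The sketch below uses Gaussian tuning curves with illustrative parameters (peak rate, width, preferred values are all assumptions):

```python
import math, random

def tuning(pref, s, peak=20.0, width=15.0):
    """Mean spike count of a neuron with preferred stimulus `pref`
    (Gaussian tuning plus a small background rate; toy parameters)."""
    return peak * math.exp(-((s - pref) ** 2) / (2 * width ** 2)) + 0.1

prefs = list(range(0, 181, 10))  # preferred orientations, degrees

def poisson(rng, lam):
    # Knuth's method for sampling Poisson counts at modest rates.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def decode_ml(counts):
    """Maximise the Poisson log-likelihood over a grid of candidate stimuli."""
    def loglik(s):
        return sum(n * math.log(tuning(p, s)) - tuning(p, s)
                   for p, n in zip(prefs, counts))
    return max(range(0, 181), key=loglik)

rng = random.Random(0)
true_s = 90
counts = [poisson(rng, tuning(p, true_s)) for p in prefs]
est = decode_ml(counts)
```

The estimate depends only on the tuning curves and the counts, illustrating the abstract's point that the code's computational properties follow from single-cell tuning properties.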

8.

Background

This paper addresses the problem of finding attractors in biological regulatory networks. We focus here on non-deterministic synchronous and asynchronous multi-valued networks, modeled using automata networks (ANs). ANs are a general formalism well suited to studying complex interactions between different components (genes, proteins, ...). An attractor is a minimal trap domain, that is, a part of the state-transition graph that cannot be escaped. Such structures are terminal components of the dynamics and take the form of steady states (singletons) or complex compositions of cycles (non-singletons). Studying the effect of a disease or a mutation on an organism requires finding the attractors in the model to understand the long-term behaviors.

Results

We present a computational logical method based on answer set programming (ASP) to identify all attractors. Performed without any network reduction, the method can be applied to any dynamical semantics. In this paper, we present the two most widespread non-deterministic semantics: the asynchronous and the synchronous updating modes. The logical approach goes through a complete enumeration of the states of the network in order to find the attractors without the necessity to construct the whole state-transition graph. We carried out extensive computational experiments, which show good performance and match the theoretical results expected from the literature.

Conclusion

The originality of our approach lies in the exhaustive enumeration of all possible (sets of) states verifying the properties of an attractor, thanks to the use of ASP. Our method is applied to non-deterministic semantics in two different schemes (asynchronous and synchronous). The merits of our methods are illustrated by applying them to biological examples of various sizes and comparing the results with some existing approaches. It turns out that our approach succeeds in exhaustively enumerating, on a desktop computer, all existing attractors up to a given size (20 states) in a large model (100 components). This size is only limited by memory and computation time.
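The definition of an attractor as a minimal trap domain can be illustrated by brute force on a tiny Boolean network under the asynchronous semantics; unlike the ASP method above, this sketch simply walks the full state-transition graph, and the update rules are an illustrative assumption:

```python
from itertools import product

# Toy Boolean network (illustrative rules, not from the paper):
# x0' = x1, x1' = x0, x2' = NOT x2 (the last rule forces cyclic attractors).
rules = [
    lambda x: x[1],
    lambda x: x[0],
    lambda x: not x[2],
]

def successors(x):
    """Asynchronous semantics: update one component at a time."""
    succ = set()
    for i, f in enumerate(rules):
        y = list(x)
        y[i] = int(f(x))
        succ.add(tuple(y))
    return succ

def reachable(x):
    seen, stack = {x}, [x]
    while stack:
        for y in successors(stack.pop()):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def attractors():
    """A state lies in an attractor iff every state it reaches can reach it back;
    the attractor is then exactly its reachable set (a minimal trap domain)."""
    found = set()
    for x in product([0, 1], repeat=len(rules)):
        r = reachable(x)
        if all(x in reachable(y) for y in r):
            found.add(frozenset(r))
    return found
```

On this toy network there is no steady state (x2 must keep flipping), so the method finds two cyclic attractors, one for each stable assignment of x0 = x1. The ASP approach scales where this enumeration of the full graph does not.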

9.
How spiking neurons cooperate to control behavioral processes is a fundamental problem in computational neuroscience. Such cooperative dynamics are required during visual perception when spatially distributed image fragments are grouped into emergent boundary contours. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity occur in response to binary spikes with irregular timing across many interacting cells. Some models have demonstrated spiking dynamics in recurrent laminar neocortical circuits, but not how perceptual grouping occurs. Other models have analyzed the fast speed of certain percepts in terms of a single feedforward sweep of activity, but cannot explain other percepts, such as illusory contours, wherein perceptual ambiguity can take hundreds of milliseconds to resolve by integrating multiple spikes over time. The current model reconciles fast feedforward with slower feedback processing, and binary spikes with analog network-level properties, in a laminar cortical network of spiking cells whose emergent properties quantitatively simulate parametric data from neurophysiological experiments, including the formation of illusory contours; the structure of non-classical visual receptive fields; and self-synchronizing gamma oscillations. These laminar dynamics shed new light on how the brain resolves local informational ambiguities through the use of properly designed nonlinear feedback spiking networks which run as fast as they can, given the amount of uncertainty in the data that they process.

10.
Investigation of the mechanisms of information handling in neural assemblies involved in computational and cognitive tasks is a challenging problem. Synergetic cooperation of neurons in the time domain, through synchronization of firing of multiple spatially distant neurons, has become the dominant paradigm. Complementarily, the brain may also employ information coding and processing in the spatial dimension. The result of a computation then also depends on the spatial distribution of long-scale information. This spatial alternative is notably less explored in the literature. Here, we propose and theoretically illustrate a concept of spatiotemporal representation and processing of long-scale information in laminar neural structures. We argue that relevant information may be hidden in self-sustained traveling waves of neuronal activity, and that their nonlinear interaction then yields efficient wave-processing of spatiotemporal information. Using as a testbed a chain of FitzHugh-Nagumo neurons, we show that wave-processing can be achieved by incorporating into the single-neuron dynamics an additional voltage-gated membrane current. This local mechanism provides a chain of such neurons with new emergent network properties. In particular, nonlinear waves as carriers of long-scale information exhibit a variety of functionally different regimes of interaction: from complete or asymmetric annihilation to transparent crossing. Thus neuronal chains can work as computational units performing different operations over spatiotemporal information. Exploiting complexity resonance, these composite units can discard stimuli of too high or too low frequency, while selectively compressing those in the natural frequency range. We also show how neuronal chains can contextually interpret raw wave information. The same stimulus can be processed differently or identically according to the context set by a periodic wave train injected at the opposite end of the chain.
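The basic substrate of this scheme, a pulse travelling along a chain of diffusively coupled FitzHugh-Nagumo units, can be reproduced in a few lines; the parameters below are conventional illustrative values, not the paper's, and the extra voltage-gated current the authors add is omitted:

```python
# Sketch: launch an excitation at the left end of a FitzHugh-Nagumo chain
# and watch the pulse arrive at the far end (explicit Euler integration).
N, dt, steps = 60, 0.05, 8000
a, b, eps, D = 0.7, 0.8, 0.08, 1.0     # classic FHN parameters, toy coupling
v = [-1.2] * N                          # approximate resting state
w = [-0.62] * N
arrived = -1                            # step at which the pulse reaches the end
for t in range(steps):
    I = 0.8 if t * dt < 3.0 else 0.0    # brief stimulus at the left end only
    dv = []
    for i in range(N):
        # discrete Laplacian with no-flux boundaries
        left = v[i - 1] if i > 0 else v[i]
        right = v[i + 1] if i < N - 1 else v[i]
        coup = D * (left - 2 * v[i] + right)
        dv.append(v[i] - v[i] ** 3 / 3 - w[i] + coup + (I if i == 0 else 0.0))
    dw = [eps * (v[i] + a - b * w[i]) for i in range(N)]
    v = [v[i] + dt * dv[i] for i in range(N)]
    w = [w[i] + dt * dw[i] for i in range(N)]
    if arrived < 0 and v[-1] > 1.0:
        arrived = t
```

Launching waves from both ends of such a chain, as the paper does, then lets one study annihilation versus crossing of counter-propagating pulses.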

11.
Critical dynamics are assumed to be an attractive mode for normal brain functioning, as information processing and computational capabilities are found to be optimal in the critical state. Recent experimental observations of neuronal activity patterns following power-law distributions, a hallmark of systems at a critical state, have led to the hypothesis that human brain dynamics could be poised at a phase transition between ordered and disordered activity. A so far unresolved question concerns the medical significance of critical brain activity and how it relates to pathological conditions. Using data from invasive electroencephalogram recordings from humans, we show that during epileptic seizures neuronal activity patterns deviate from the normally observed power-law distribution characterizing critical dynamics. The comparison of these observations to results from a computational model exhibiting self-organized criticality (SOC) based on adaptive networks allows further insights into the underlying dynamics. Together these results suggest that brain dynamics deviates from criticality during seizures, owing to the failure of adaptive SOC.
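The power-law signature of criticality can be illustrated with a branching-process model of neuronal avalanches: at a branching parameter of 1 (critical) avalanche sizes follow a heavy-tailed power law, while below 1 (subcritical) the distribution acquires an exponential cutoff. This is a generic textbook sketch, not the paper's adaptive-network model:

```python
import random

def avalanche_sizes(branching, n_trials, rng, cap=10000):
    """Total size of branching-process avalanches. Each active unit produces
    0 or 2 offspring with mean `branching`; at mean 1 sizes follow a power
    law P(s) ~ s^(-3/2), below 1 the tail is exponentially suppressed."""
    sizes = []
    for _ in range(n_trials):
        active, size = 1, 1
        while active and size < cap:
            active = sum(2 if rng.random() < branching / 2 else 0
                         for _ in range(active))
            size += active
        sizes.append(size)
    return sizes

rng = random.Random(42)
critical = avalanche_sizes(1.0, 2000, rng)
subcritical = avalanche_sizes(0.5, 2000, rng)

# Heavy tail vs. exponential cutoff: fraction of large avalanches.
frac_large_crit = sum(s >= 100 for s in critical) / len(critical)
frac_large_sub = sum(s >= 100 for s in subcritical) / len(subcritical)
```

The seizure-related deviation from a power law reported in the abstract corresponds to the system leaving the critical branching regime, which in this sketch shows up as the collapse of the large-avalanche fraction.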

12.
13.
The fundamental process that underlies volume transmission in the brain is the extracellular diffusion of neurotransmitters from release sites to distal target cells. Dopaminergic neurons display a range of activity states, from low-frequency tonic firing to bursts of high-frequency action potentials (phasic firing). However, it is not clear how this activity affects volume transmission on a subsecond time scale. To evaluate this, we developed a finite-difference model that predicts the lifetime and diffusion of dopamine in brain tissue. We first used this model to decode in vivo amperometric measurements of electrically evoked dopamine, and obtained rate constants for release and uptake as well as the extent of diffusion. Accurate predictions were made under a variety of conditions including different regions, different stimulation parameters and with uptake inhibited. Second, we used the decoded rate constants to predict how heterogeneity of dopamine release and uptake sites would affect dopamine concentration fluctuations during different activity states in the absence of an electrode. These simulations show that synchronous phasic firing can produce spatially and temporally heterogeneous concentration profiles whereas asynchronous tonic firing elicits uniform, steady-state dopamine concentrations.
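A minimal version of such a finite-difference model, one-dimensional diffusion with impulsive release and Michaelis-Menten uptake, already reproduces the qualitative phasic/tonic contrast; all parameter values below are illustrative stand-ins, not the paper's fitted rate constants:

```python
# 1-D explicit finite-difference sketch of extracellular dopamine:
# diffusion + Michaelis-Menten uptake + impulsive release at a central site.
N, dx, dt = 101, 1.0, 0.01           # grid spacing and time step (toy units)
D, Vmax, Km = 7.6, 4.0, 0.2          # diffusion and uptake parameters (toy)
release_per_spike = 0.25

def simulate(spike_times, T=2.0):
    """Return the peak concentration at the release site for a spike train."""
    C = [0.0] * N
    peak = 0.0
    spikes = sorted(spike_times)
    for n in range(int(T / dt)):
        t = n * dt
        lap = [0.0] * N
        for i in range(1, N - 1):
            lap[i] = (C[i - 1] - 2 * C[i] + C[i + 1]) / dx ** 2
        for i in range(N):
            uptake = Vmax * C[i] / (Km + C[i])   # saturable transporter
            C[i] += dt * (D * lap[i] - uptake)
        while spikes and spikes[0] <= t:
            C[N // 2] += release_per_spike       # impulsive release event
            spikes.pop(0)
        peak = max(peak, C[N // 2])
    return peak

# Phasic: 5 spikes in a 50 ms burst; tonic: the same 5 spikes spread over 2 s.
phasic_peak = simulate([0.5 + 0.01 * k for k in range(5)])
tonic_peak = simulate([0.2 + 0.4 * k for k in range(5)])
```

Because uptake and diffusion cannot clear the transmitter between the closely spaced phasic releases, the burst produces a much larger transient than the same number of tonic spikes, matching the heterogeneity the simulations in the abstract report.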

14.
Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires settling on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

15.
State-dependent computation is key to cognition in both biological and artificial systems. Alan Turing recognized the power of stateful computation when he created the Turing machine, with its theoretically infinite computational capacity, in 1936. Independently, by 1950, ethologists such as Tinbergen and Lorenz had also begun to implicitly embed rudimentary forms of state-dependent computation in qualitative models of internal drives and naturally occurring animal behaviors. Here, we reformulate core ethological concepts in explicitly dynamical-systems terms for stateful computation. We examine, based on a wealth of recent neural data collected during complex innate behaviors across species, the neural dynamics that determine the temporal structure of internal states. We also discuss the degree to which the brain can be hierarchically partitioned into nested dynamical systems and the need for a multi-dimensional state-space model of the neuromodulatory system that underlies motivational and affective states.

16.
In view of ever-changing conditions both in the external world and in intrinsic brain states, maintaining the robustness of computations poses a challenge, adequate solutions to which we are only beginning to understand. At the level of cell-intrinsic properties, biophysical models of neurons permit one to identify relevant physiological substrates that can serve as regulators of neuronal excitability, and to test how feedback loops can stabilize crucial variables such as long-term calcium levels and firing rates. Mathematical theory has also revealed a rich set of complementary computational properties arising from distinct cellular dynamics and even shaping processing at the network level. Here, we provide an overview of recently explored homeostatic mechanisms derived from biophysical models and hypothesize how multiple dynamical characteristics of cells, including their intrinsic neuronal excitability classes, can be stably controlled.
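The feedback loops mentioned here, which stabilize firing rates via slow activity sensors such as intracellular calcium, amount to integral control. A minimal sketch with purely illustrative dynamics: a "calcium-like" variable tracks the rate, and a gain (standing in for excitability) is slowly adjusted toward a set point:

```python
# Homeostatic rate control as integral feedback (illustrative toy dynamics).
target, tau_ca, eta = 5.0, 50.0, 0.02   # set point, sensor time constant, learning rate
gain, ca = 0.2, 0.0                      # excitability parameter, slow activity sensor
drive = 40.0                             # fixed external input; rate = gain * drive

history = []
for step in range(5000):
    rate = gain * drive
    ca += (rate - ca) / tau_ca                 # slow sensor integrates activity
    gain += eta * (target - ca) / tau_ca       # homeostatic adjustment of excitability
    gain = max(gain, 0.0)
    history.append(rate)
final_rate = history[-1]
```

The loop converges to the unique equilibrium where the sensor equals the set point (rate = target), regardless of the initial gain, which is the stabilizing behaviour the biophysical models in the abstract are probed for.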

17.
Estimating the difficulty of a decision is a fundamental process in elaborating complex, adaptive behaviour. In this paper, we show that the movement time of behaving monkeys performing a decision-making task is correlated with decision difficulty, and that the activity of a population of neurons in ventral premotor cortex correlates with the movement time. Moreover, we found another population of neurons that encodes the discriminability of the stimulus, thereby supplying another source of information about the difficulty of the decision. The activity of neurons encoding the difficulty can be produced by very different computations. Accordingly, we show that decision difficulty can be encoded through three different mechanisms: (1) switch-time coding, (2) rate coding and (3) binary coding. This rich representation reflects the basis of different functional aspects of difficulty in the making of a decision and the possible role of difficulty estimation in complex decision scenarios.

18.
The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard integrate-and-fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. To this end, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter, describing somatic integration, and the spike-history filter, accounting for spike-frequency adaptation, dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights into the computational role of different biophysical processes known to underlie adaptive coding in single neurons, and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
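The core ingredient, a moving firing threshold, is easy to sketch: a leaky integrate-and-fire unit whose threshold jumps after each spike and decays back. This toy model (parameters are illustrative, and it omits the paper's conductance-based adaptation and nonlinear threshold coupling) already produces spike-frequency adaptation:

```python
# Leaky integrate-and-fire with a dynamic threshold: spike-triggered
# increment plus exponential decay back to baseline (illustrative parameters).
dt, tau_m, tau_th = 0.1, 10.0, 50.0   # time step, membrane and threshold taus
v, v_rest = 0.0, 0.0
theta0, dtheta = 1.0, 0.5             # baseline threshold, post-spike jump
theta = theta0
I = 0.15                              # constant suprathreshold drive
spikes = []
for n in range(20000):
    t = n * dt
    v += dt * (-(v - v_rest) / tau_m + I)
    theta += dt * (theta0 - theta) / tau_th   # threshold relaxes to baseline
    if v >= theta:
        spikes.append(t)
        v = v_rest                            # reset membrane potential
        theta += dtheta                       # raise the threshold

# Inter-spike intervals lengthen as the threshold accumulates: adaptation.
isis = [b - a for a, b in zip(spikes, spikes[1:])]
```

In the paper's full model the threshold dynamics are additionally driven by the membrane potential itself, which is what re-tunes the effective integration timescale to the input statistics.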

19.
Brain computation is metabolically expensive and requires the supply of significant amounts of energy. Mitochondria are highly specialized organelles whose main function is to generate cellular energy. Due to their complex morphologies, neurons are especially dependent on a set of tools necessary to regulate mitochondrial function locally in order to match energy provision with local demands. By regulating mitochondrial transport, neurons control the local availability of mitochondrial mass in response to changes in synaptic activity. Neurons also modulate mitochondrial dynamics locally to adjust metabolic efficiency to energetic demand. Additionally, neurons remove inefficient mitochondria through mitophagy. Neurons coordinate these processes through signalling pathways that couple energetic expenditure with energy availability. When these mechanisms fail, neurons can no longer support brain function, giving rise to neuropathological states like metabolic syndromes or neurodegeneration.

20.
Suppression of excessively synchronous beta-band oscillatory activity in the brain is believed to suppress hypokinetic motor symptoms of Parkinson’s disease. Recently, considerable interest has been devoted to desynchronizing delayed-feedback deep brain stimulation (DBS). This type of synchrony control was shown to destabilize the synchronized state in networks of simple model oscillators as well as in networks of coupled model neurons. However, the dynamics of neural activity in Parkinson’s disease exhibits complex intermittent synchronous patterns, far from the idealized synchronous dynamics used to study delayed-feedback stimulation. This study explores the action of delayed-feedback stimulation on partially synchronized oscillatory dynamics, similar to what one observes experimentally in parkinsonian patients. We employ a computational model of the basal ganglia networks which reproduces the experimentally observed fine temporal structure of the synchronous dynamics. When the parameters of our model are such that the synchrony is unphysiologically strong, the feedback exerts a desynchronizing action. However, when the network is tuned to reproduce the highly variable temporal patterns observed experimentally, the same kind of delayed feedback may actually increase the synchrony. As network parameters are changed from the range which produces complete synchrony to those favoring less synchronous dynamics, desynchronizing delayed feedback may gradually turn into synchronizing stimulation. This suggests that delayed-feedback DBS in Parkinson’s disease may boost rather than suppress synchronization and is unlikely to be clinically successful. The study also indicates that delayed-feedback stimulation may not necessarily exhibit a desynchronization effect when acting on physiologically realistic, partially synchronous dynamics, and provides an example of how to estimate the stimulation effect.
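The idealized baseline the abstract refers to, delayed mean-field feedback desynchronizing a population of simple phase oscillators, can be sketched with Kuramoto dynamics; all parameters are illustrative, and this is the simple-oscillator setting, not the paper's basal ganglia model:

```python
import math, random

def run(N=50, K=4.0, F=0.0, tau=0.1, dt=0.01, T=30.0, seed=3):
    """Kuramoto oscillators with coupling K plus delayed mean-field feedback
    of strength F and delay tau; returns the time-averaged order parameter R
    over the second half of the run (R near 1 = synchronized, near 0 = not)."""
    rng = random.Random(seed)
    omega = [rng.gauss(3.0, 0.5) for _ in range(N)]       # natural frequencies
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(N)]
    delay_steps = int(tau / dt)
    hist, R_vals = [], []
    steps = int(T / dt)
    for n in range(steps):
        cx = sum(math.cos(t) for t in theta) / N
        sx = sum(math.sin(t) for t in theta) / N
        R, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        hist.append((R, psi))                              # mean-field history
        Rd, psid = hist[max(0, n - delay_steps)]           # delayed mean field
        theta = [t + dt * (w + K * R * math.sin(psi - t)
                           - F * Rd * math.sin(psid - t))
                 for t, w in zip(theta, omega)]
        if n > steps // 2:
            R_vals.append(R)
    return sum(R_vals) / len(R_vals)

R_no_fb = run(F=0.0)   # strong coupling alone: synchronized
R_fb = run(F=4.0)      # delayed feedback on: synchrony suppressed
```

The paper's point is precisely that this clean suppression need not survive when the feedback acts on realistic, intermittently synchronized dynamics rather than on this idealized fully synchronous regime.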


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号