Similar articles
Found 20 similar articles (search time: 15 ms)
3.
Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

4.
Migliore M, Messineo L, Cardaci M. Bio Systems. 2000;58(1-3):187-193
How and where the brain calculates elapsing time is not known, and one or more internal pacemakers or other timekeeping systems have been suggested. Experiments have shown that the accuracy in estimating or producing time intervals depends on many factors and, in particular, both on the length of the intervals to be estimated and on the additional, and unrelated, cognitive load required during the task. The psychological 'attentional approach' is able to explain the experimental data in terms of perturbations of a cognitive timer. However, the basic biophysical mechanisms that could be involved at the single-neuron level are still not clear. Here we propose a computational model suggesting how the process of focusing attention on a non-temporal task could alter the perception of time intervals as observed in the experiments. The model suggests that an attention-based excitatory and/or inhibitory background synaptic noise, impinging on the pacemaker circuit, could represent both qualitative and quantitative features of the cognitive load. These effects are predicted to be independent of the number, location or specific implementations of the internal timing systems.
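The pacemaker-accumulator picture behind such timing models can be caricatured in a few lines. Here attentional load is reduced to a probability of missing pulses — a deliberate simplification for illustration, not the paper's synaptic-noise mechanism; all names and parameters are invented:

```python
import random

def perceived_interval(true_ms, pulse_hz=100.0, p_miss=0.0, seed=0):
    """Pacemaker-accumulator sketch: pulses emitted at pulse_hz are
    counted; attentional load is modeled as a probability p_miss of
    missing each pulse, which shortens the perceived duration."""
    rng = random.Random(seed)
    n_pulses = int(true_ms / 1000.0 * pulse_hz)
    counted = sum(1 for _ in range(n_pulses) if rng.random() >= p_miss)
    return counted / pulse_hz * 1000.0  # perceived duration in ms

# With no distractor task the estimate is veridical; under load it shrinks.
print(perceived_interval(2000, p_miss=0.0))  # 2000.0
print(perceived_interval(2000, p_miss=0.2))  # shorter than 2000
```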

5.
Cognitive stability and flexibility are core functions in the successful pursuit of behavioral goals. While there is evidence for a common frontoparietal network underlying both functions and for a key role of dopamine in the modulation of flexible versus stable behavior, the exact neurocomputational mechanisms underlying these executive functions and their adaptation to environmental demands are still unclear. In this work we study the neurocomputational mechanisms underlying cue-based task switching (flexibility) and distractor inhibition (stability) in a paradigm specifically designed to probe both functions. We develop a physiologically plausible, explicit model of neural networks that maintain the currently active task rule in working memory and implement the decision process. We simplify the four-choice decision network to a nonlinear drift-diffusion process that we canonically derive from a generic winner-take-all network model. By fitting our model to the behavioral data of individual subjects, we can reproduce their full behavior in terms of decisions and reaction time distributions in baseline as well as distractor inhibition and switch conditions. Furthermore, we predict the individual hemodynamic response timecourse of the rule-representing network and localize it to a frontoparietal network including the inferior frontal junction area and the intraparietal sulcus, using functional magnetic resonance imaging. This refines the understanding of task-switch-related frontoparietal brain activity as reflecting attractor-like working memory representations of task rules. Finally, we estimate the subject-specific stability of the rule-representing attractor states in terms of the minimal action associated with a transition between different rule states in the phase-space of the fitted models. This stability measure correlates with switching-specific thalamocorticostriatal activation, i.e., with a system associated with flexible working memory updating and dopaminergic modulation of cognitive flexibility. These results show that stochastic dynamical systems can implement the basic computations underlying cognitive stability and flexibility and explain neurobiological bases of individual differences.

6.
Predictive ADMET is the new 'hip' area in drug discovery. The aim is to use large databases of ADMET data associated with structures to build computational models that link structural changes with changes in response, from which compounds with improved properties can be designed and predicted. These databases also provide the means to enable predictions of human ADMET properties to be made from human in vitro and animal in vivo ADMET measurements. Both methods are limited by the amount of data available to build such predictive models, the limitations of modelling methods and our understanding of the systems we wish to model. The current failures, successes and opportunities are reviewed.

7.
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
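The optimal-integration benchmark mentioned here has a standard closed form for two independent Gaussian cues: the minimum-variance estimate weights each cue by its inverse variance, and the fused variance is smaller than either cue's. A minimal sketch with illustrative numbers (not from the study):

```python
def integrate_cues(mu_a, var_a, mu_b, var_b):
    """Bayes-optimal fusion of two independent Gaussian cues:
    the posterior mean weights each cue by its inverse variance,
    and the posterior variance is below both cue variances."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return mu, var

# A visual cue at 10 deg (variance 1) and an auditory cue at 14 deg (variance 3):
mu, var = integrate_cues(10.0, 1.0, 14.0, 3.0)
print(mu, var)  # 11.0 0.75 — pulled toward the more reliable cue
```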

8.
Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.

9.
Recent computational and behavioral studies suggest that motor adaptation results from the update of multiple memories with different timescales. Here, we designed a model-based functional magnetic resonance imaging (fMRI) experiment in which subjects adapted to two opposing visuomotor rotations. A computational model of motor adaptation with multiple memories was fitted to the behavioral data to generate time-varying regressors of brain activity. We identified regional specificity to timescales: in particular, the activity in the inferior parietal region and in the anterior-medial cerebellum was associated with memories for intermediate and long timescales, respectively. A sparse singular value decomposition analysis of variability in specificities to timescales over the brain identified four components, two fast, one intermediate, and one slow, each associated with different brain networks. Finally, a multivariate decoding analysis showed that activity patterns in the anterior-medial cerebellum progressively represented the two rotations. Our results support the existence of brain regions associated with multiple timescales in adaptation and a role of the cerebellum in storing multiple internal models.

10.
Pavlovian predictions of future aversive outcomes lead to behavioral inhibition, suppression, and withdrawal. There is considerable evidence for the involvement of serotonin in both the learning of these predictions and the inhibitory consequences that ensue, although less for a causal relationship between the two. In the context of a highly simplified model of chains of affectively charged thoughts, we interpret the combined effects of serotonin in terms of pruning a tree of possible decisions (i.e., eliminating those choices that have low or negative expected outcomes). We show how a drop in behavioral inhibition, putatively resulting from an experimentally or psychiatrically influenced drop in serotonin, could result in unexpectedly large negative prediction errors and a significant aversive shift in reinforcement statistics. We suggest an interpretation of this finding that helps dissolve the apparent contradiction between the fact that inhibition of serotonin reuptake is the first-line treatment for depression and the fact that serotonin itself is most strongly linked with aversive rather than appetitive outcomes and predictions.
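The pruning idea can be made concrete with a toy tree of choices: evaluation stops at transitions worse than an inhibition threshold, so a pruned evaluator never looks past large immediate losses — and also never mentally visits catastrophic branches. A minimal sketch (the tree and threshold are invented for illustration, not the paper's model):

```python
def tree_value(node, prune_below=None):
    """Evaluate the value of a decision tree by depth-first search.
    Each node is (reward, [children]); at a choice point the best
    child is taken.  If prune_below is set, subtrees reached through
    a transition worse than the threshold are cut off unexplored."""
    reward, children = node
    if prune_below is not None:
        children = [c for c in children if c[0] >= prune_below]
    if not children:
        return reward
    return reward + max(tree_value(c, prune_below) for c in children)

tree = (0, [(-5, [(20, [])]),     # short-term loss hiding a long-term gain
            (1,  [(-70, [])])])   # small gain followed by a catastrophe
print(tree_value(tree, prune_below=-4))  # 1: both aversive transitions pruned
print(tree_value(tree))                  # 15: full search crosses the -5 loss
```

With inhibition intact, neither the -5 nor the -70 transition is ever evaluated; without it, search ranges over the full tree, including its large negative outcomes.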

11.
Andras P, Wennekers T. Bio Systems. 2007;87(2-3):179-185
Neural computations are modelled in various ways, but there is still no clear understanding of how the brain performs its computational tasks. This paper presents new results on the analysis of neural processes in terms of activity pattern computations. It is shown that it is possible to extract from high-resolution EEG data a first-order Markov approximation of a neural communication system employing pattern computations, which is significantly different from similar purely random systems. In our view this result shows that it is likely that neural activity patterns measurable at the macro-level by EEG are correlated with underlying neural computations.

12.
Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
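The standard RNN update this builds on — each neuron applying a nonlinearity to its integrated recurrent and external input — can be sketched in pure Python (the weights here are arbitrary illustrative values, not from the paper):

```python
import math

def rnn_step(h, x, W, U, b):
    """One update of a rate-based RNN: unit i applies tanh to the sum
    of its recurrent input (W h), external input (U x), and bias."""
    n = len(h)
    return [math.tanh(sum(W[i][j] * h[j] for j in range(n))
                      + sum(U[i][k] * x[k] for k in range(len(x)))
                      + b[i])
            for i in range(n)]

# Two units, one input channel; drive the network with a brief pulse.
W = [[0.0, 0.5], [-0.5, 0.0]]   # recurrent weights
U = [[1.0], [0.0]]              # input weights
b = [0.0, 0.0]                  # biases
h = [0.0, 0.0]                  # initial state
for x in ([1.0], [0.0], [0.0]):
    h = rnn_step(h, x, W, U, b)
print(h)  # the pulse reverberates through the recurrent loop
```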

13.
Uncertainty, neuromodulation, and attention
Yu AJ, Dayan P. Neuron. 2005;46(4):681-692
Uncertainty in various forms plagues our interactions with the environment. In a Bayesian statistical framework, optimal inference and prediction, based on unreliable observations in changing contexts, require the representation and manipulation of different forms of uncertainty. We propose that the neuromodulators acetylcholine and norepinephrine play a major role in the brain's implementation of these uncertainty computations. Acetylcholine signals expected uncertainty, coming from known unreliability of predictive cues within a context. Norepinephrine signals unexpected uncertainty, as when unsignaled context switches produce strongly unexpected observations. These uncertainty signals interact to enable optimal inference and learning in noisy and changeable environments. This formulation is consistent with a wealth of physiological, pharmacological, and behavioral data implicating acetylcholine and norepinephrine in specific aspects of a range of cognitive processes. Moreover, the model suggests a class of attentional cueing tasks that involve both neuromodulators and shows how their interactions may be part-antagonistic, part-synergistic.

14.
State-dependent computation is key to cognition in both biological and artificial systems. Alan Turing recognized the power of stateful computation when he created the Turing machine with theoretically infinite computational capacity in 1936. Independently, by 1950, ethologists such as Tinbergen and Lorenz also began to implicitly embed rudimentary forms of state-dependent computation to create qualitative models of internal drives and naturally occurring animal behaviors. Here, we reformulate core ethological concepts in explicitly dynamical systems terms for stateful computation. We examine, based on a wealth of recent neural data collected during complex innate behaviors across species, the neural dynamics that determine the temporal structure of internal states. We also discuss the degree to which the brain can be hierarchically partitioned into nested dynamical systems and the need for a multi-dimensional state-space model of the neuromodulatory system that underlies motivational and affective states.
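State-dependent computation in the ethological sense can be illustrated with a toy finite-state controller whose output depends on an internal drive and the current state, not just the current input. The states, drive dynamics, and thresholds below are invented for illustration, not taken from the review:

```python
def simulate(steps, gain=1.0, threshold=5.0):
    """Toy state-dependent behavior model: an internal 'hunger' drive
    accumulates during REST and is discharged during FORAGE; the
    drive gates transitions between the two behavioral states."""
    state, hunger, trace = "REST", 0.0, []
    for _ in range(steps):
        hunger += gain if state == "REST" else -2.0
        if state == "REST" and hunger >= threshold:
            state = "FORAGE"
        elif state == "FORAGE" and hunger <= 0.0:
            state = "REST"
        trace.append(state)
    return trace

trace = simulate(10)
print(trace)  # rest builds the drive, foraging discharges it, and the cycle repeats
```

The same input (none) produces different behavior at different times — the defining signature of stateful computation.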

15.
How the brain uses success and failure to optimize future decisions is a long-standing question in neuroscience. One computational solution involves updating the values of context-action associations in proportion to a reward prediction error. Previous evidence suggests that such computations are expressed in the striatum and, as they are cognitively impenetrable, represent an unconscious learning mechanism. Here, we formally test this by studying instrumental conditioning in a situation where we masked contextual cues, such that they were not consciously perceived. Behavioral data showed that subjects nonetheless developed a significant propensity to choose cues associated with monetary rewards relative to punishments. Functional neuroimaging revealed that, during conditioning, cue values and prediction errors generated from a computational model both correlated with activity in ventral striatum. We conclude that, even without conscious processing of contextual cues, our brain can learn their reward value and use them to provide a bias on decision making.
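Updating a value in proportion to a reward prediction error is the classic delta rule; a minimal sketch (the learning rate and reward sequence are illustrative, not from the study):

```python
def update_value(value, reward, alpha=0.1):
    """Delta-rule update: the prediction error is delta = reward - value,
    and the value moves a fraction alpha of the way toward the reward."""
    delta = reward - value
    return value + alpha * delta, delta

v = 0.0
for r in [1.0, 1.0, 0.0, 1.0]:
    v, delta = update_value(v, r)
    print(round(v, 3), round(delta, 3))  # value creeps up; errors shrink with surprise
```

The prediction error `delta` is the quantity that, in the study's model-based analysis, serves as a regressor for striatal activity.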

16.
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
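For contrast with the paper's non-reversible chains, the standard reversible Metropolis baseline over binary states can be sketched as follows. The energy function and parameters are invented for illustration; this is the textbook sampler the paper moves beyond, not its construction:

```python
import math
import random

def metropolis_binary(energy, n_units, n_steps, seed=0):
    """Reversible Metropolis sampling over binary states s in {0,1}^n,
    targeting p(s) proportional to exp(-energy(s))."""
    rng = random.Random(seed)
    s = [0] * n_units
    counts = {}
    for _ in range(n_steps):
        i = rng.randrange(n_units)  # propose flipping one unit
        s2 = list(s)
        s2[i] = 1 - s2[i]
        # accept with probability min(1, exp(E(s) - E(s2)))
        if rng.random() < math.exp(min(0.0, energy(s) - energy(s2))):
            s = s2
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return counts

# Toy energy favoring agreement between two binary units.
energy = lambda s: 0.0 if s[0] == s[1] else 1.0
counts = metropolis_binary(energy, 2, 20000)
agree = (counts.get((0, 0), 0) + counts.get((1, 1), 0)) / 20000
print(agree)  # roughly 2 / (2 + 2 * math.exp(-1)), i.e. about 0.73
```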

17.
This article introduces a simulation model of rat behavior in the elevated plus-maze, designed through a decision-tree approach using classification and regression algorithms. Starting from an analysis of the behavior of a sample of 18 Sprague-Dawley male rats, probabilistic rules describing the animals' behavioral patterns were extracted and used as the basis of the model's computations. The model's adequacy was tested by contrasting a simulated sample against an independent sample of real animals. Statistical tests showed that the simulated sample exhibits behaviors similar to those displayed by the real animals, both in terms of the number of entries into open and closed arms and in terms of the time spent by the animals in those arms. However, the performance of the model on parameters related to the behavioral patterns was only partially satisfactory. Given that previous attempts in the literature have included neither this kind of pattern nor time as a crucial model parameter, the present model offers a suitable alternative for the computational simulation of this paradigm. Compared with earlier models, the present simulation produced similar or better results on all the considered parameters. Beyond the goal of establishing an appropriate simulation model, the extracted rules also reveal important regularities in rat behavior previously ignored by other models, i.e. that specific rat behaviors in the elevated plus-maze are time dependent. These and other considerations important for improving the model's performance are discussed.

18.
The manner in which microorganisms utilize their metabolic processes can be predicted using constraint-based analysis of genome-scale metabolic networks. Herein, we present the constraint-based reconstruction and analysis toolbox, a software package running in the Matlab environment, which allows for quantitative prediction of cellular behavior using a constraint-based approach. Specifically, this software allows predictive computations of both steady-state and dynamic optimal growth behavior, the effects of gene deletions, comprehensive robustness analyses, sampling the range of possible cellular metabolic states and the determination of network modules. Functions enabling these calculations are included in the toolbox, allowing a user to input a genome-scale metabolic model distributed in Systems Biology Markup Language format and perform these calculations with just a few lines of code. The results are predictions of cellular behavior that have been verified as accurate in a growing body of research. After software installation, calculation time is minimal, allowing the user to focus on the interpretation of the computational results.

19.
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
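The signature of sloppiness — sensitivity eigenvalues spread over many decades — already shows up for two nearly redundant parameters. A minimal sketch with an invented 2×2 sensitivity matrix (purely illustrative, not one of the paper's models):

```python
import math

def sym2x2_eigvals(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]],
    via the closed form mean +/- radius."""
    mean = (a + c) / 2.0
    r = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return mean + r, mean - r

# Two strongly correlated parameters give one stiff and one sloppy direction.
stiff, sloppy = sym2x2_eigvals(1.0, 0.999, 1.0)
print(stiff, sloppy)               # about 1.999 and 0.001
print(math.log10(stiff / sloppy))  # the eigenvalues sit over 3 decades apart
```

Fitting constrains the stiff combination of parameters tightly while the sloppy combination stays almost free, which is why individual parameter values can be poorly determined even when predictions are well constrained.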

20.
On the basis of brain imaging studies, Doyon and Ungerleider recently proposed a model describing the cerebral plasticity that occurs in both cortico-striatal and cortico-cerebellar systems of the adult brain during the learning of new skilled motor behaviors. This theoretical framework makes several testable predictions with regard to the contribution of these neural systems based on the phase (fast, slow, consolidation, automatization, and retention) and nature of the motor learning processes (motor sequence versus motor adaptation) acquired through repeated practice. Recent behavioral, lesion, and additional neuroimaging studies have addressed the assumptions made in this theory and will help in the revision of this model.
