Similar Articles
1.
Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.

2.
It has been proved, for several classes of continuous and discrete dynamical systems, that the presence of a positive (resp. negative) circuit in the interaction graph of a system is a necessary condition for the presence of multiple stable states (resp. a cyclic attractor). A positive (resp. negative) circuit is said to be functional when it “generates” several stable states (resp. a cyclic attractor). However, there are no definite mathematical frameworks translating the underlying meaning of “generates.” Focusing on Boolean networks, we recall and propose some definitions concerning the notion of functionality along with associated mathematical results.
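The circuit conditions above can be illustrated with a minimal sketch (two hypothetical two-node Boolean networks, not taken from the paper): a positive circuit permits multiple fixed points, while a negative circuit has no fixed point at all, so its dynamics must cycle.

```python
from itertools import product

# Positive circuit: x activates y and y activates x. (Two mutual inhibitions
# would also form a positive circuit, since a circuit's sign is the product
# of its edge signs.)
def positive_circuit(state):
    x, y = state
    return (y, x)

# Negative circuit: x activates y, but y inhibits x.
def negative_circuit(state):
    x, y = state
    return (1 - y, x)

def fixed_points(f, n=2):
    """All states left unchanged by the update function f."""
    return [s for s in product((0, 1), repeat=n) if f(s) == s]

print(fixed_points(positive_circuit))  # [(0, 0), (1, 1)] -> multistability
print(fixed_points(negative_circuit))  # [] -> every attractor is cyclic
```

Having no fixed point forces a cyclic attractor under any update schedule, which is the intuition behind the negative-circuit condition.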

3.
B. Doyon, Acta Biotheoretica, 1992, 40(2-3):113-119
Chaos theory is a rapidly growing field. As a technical term, “chaos” refers to deterministic but unpredictable processes that are sensitively dependent on initial conditions. Neurobiological models and experimental results are very complicated, and several research groups have sought evidence of “neuronal chaos”. Babloyantz's group has studied the fractal dimension (d) of electroencephalograms (EEG) in various physiological and pathological states. From deep sleep (d=4) to full awakening (d>8), a hierarchy of “strange” attractors parallels the hierarchy of states of consciousness. In epilepsy (petit mal), despite the turbulent aspect of a seizure, the attractor dimension was close to 2. In Creutzfeldt-Jakob disease, the regular EEG activity corresponded to an attractor dimension lower than that measured in deep sleep. Is it healthy to be chaotic? An “active desynchronisation” could be favourable to a physiological system. Rapp's group reported variations of fractal dimension according to particular tasks: during a mental arithmetic task, this dimension increased; in another task, a P300 fractal index decreased when a target was identified. It is clear that the EEG does not represent mere noise. Its underlying dynamics depends on only a few degrees of freedom, although the relevant parameters remain difficult to compute accurately. What is the cognitive role of such chaotic dynamics? Freeman has studied the olfactory bulb in rabbits and rats for 15 years. Multi-electrode recordings over a few mm2 showed a chaotic hierarchy from deep anaesthesia to the alert state. When an animal identified a previously learned odour, the fractal dimension of the dynamics dropped (toward near-limit cycles). The chaotic activity corresponding to an alert-and-waiting state seems to be a field of all possibilities, and a focused activity corresponds to a reduction of the attractor in state space. For several years, Freeman has been developing a model of the olfactory bulb-cortex system. The behaviour of the simple model “without learning” was quite similar to the real behaviour, and a model “with learning” is under development. Recently, more and more authors have insisted on the importance of the dynamic aspect of nervous functioning in cognitive modelling. Most models in the neural-network field are designed to converge to a stable state (fixed point), because such behaviour is easy to understand and to control. However, some theoretical studies in physics seek to understand how chaotic behaviour can emerge from neural networks. Sompolinsky's group showed that a sharp transition from a stable state to a chaotic state occurs in fully interconnected networks as a single control parameter is varied. Learning in such systems is an open field. In conclusion, chaos does exist in neurophysiological processes; it is neither a kind of noise nor a pathological sign. Its main role could be to provide diversity and flexibility to physiological processes. Could “strange” attractors in the nervous system embody mental forms? This is a difficult but fascinating question.

4.
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of ∼10–100 meters and ∼1–10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.

5.
Neuromodulatory inputs are known to play a major role in the adaptive plasticity of rhythmic neural networks in adult animals. Using the crustacean stomatogastric nervous system, we have investigated the role of modulatory inputs in the development of rhythmic neural networks. We found that the same neuronal population is organised into a single network in the embryo, as opposed to the two networks present in the adult. However, these adult networks pre-exist in the embryo and can be unmasked by specific alterations of the neuromodulatory environment. Similarly, adult networks may switch back to the embryonic phenotype when neuromodulatory inputs are manipulated. During development, we found that the early-established neuromodulatory population displays alterations in expressed neurotransmitter phenotypes: although the population of modulatory neurones is established early, with morphology and projection patterns similar to those of the adult, their neurotransmitter phenotypes may appear gradually. Therefore the abrupt switch from embryonic to adult network expression occurring at metamorphosis may be due to network reconfiguration in response to changes in modulatory input, as found in adult adaptive plasticity. Strikingly, related crustacean species express different motor outputs using the same basic network circuitry, owing to species-specific alterations in neuromodulatory substances within homologous projecting neurones. We therefore propose that alterations within the neuromodulatory systems of a given rhythmic neural network displaying the same basic circuitry may account for the generation of different motor outputs throughout development (ontogenetic plasticity), adulthood (adaptive plasticity) and evolution (phylogenetic plasticity).
Abbreviations: CoG, commissural ganglion; OG, oesophageal ganglion; STG, stomatogastric ganglion; STNS, stomatogastric nervous system

6.
MacNeil D, Eliasmith C, PLoS ONE, 2011, 6(9):e22885
A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely-tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.

7.
The oculomotor integrator is a brainstem neural network that converts velocity signals into the position commands necessary for eye-movement control. The cerebellum can independently adjust the amplitude of eye-movement commands and the temporal characteristics of neural integration, but the percentage of integrator neurons that receive cerebellar input is very small. Adaptive dynamic systems models, configured using the genetic algorithm, show how sparse cerebellar inputs could morph the dynamics of the oculomotor integrator and independently adjust its overall response amplitude and time course. Dynamic morphing involves an interplay of opposites, in which some model Purkinje cells exert positive feedback on the network, while others exert negative feedback. Positive feedback can be increased to prolong the integrator time course at virtually any level of negative feedback. The more these two influences oppose each other, the larger become the response amplitudes of the individual units and of the overall integrator network. Action Editor: Jonathan D. Victor

8.
How the brain combines information from different sensory modalities and of differing reliability is an important and still-unanswered question. Using the head direction (HD) system as a model, we explored the resolution of conflicts between landmarks and background cues. Sensory cue integration models predict averaging of the two cues, whereas attractor models predict capture of the signal by the dominant cue. We found that a visual landmark mostly captured the HD signal at low conflicts: however, there was an increasing propensity for the cells to integrate the cues thereafter. A large conflict presented to naive rats resulted in greater visual cue capture (less integration) than in experienced rats, revealing an effect of experience. We propose that weighted cue integration in HD cells arises from dynamic plasticity of the feed-forward inputs to the network, causing within-trial spatial redistribution of the visual inputs onto the ring. This suggests that an attractor network can implement decision processes about cue reliability using simple architecture and learning rules, thus providing a potential neural substrate for weighted cue integration.

9.
Wang XJ, Neuron, 2002, 36(5):955-968
Recent physiological studies of alert primates have revealed cortical neural correlates of key steps in a perceptual decision-making process. To elucidate synaptic mechanisms of decision making, I investigated a biophysically realistic cortical network model for a visual discrimination experiment. In the model, slow recurrent excitation and feedback inhibition produce attractor dynamics that amplify the difference between conflicting inputs and generate a binary choice. The model is shown to account for salient characteristics of the observed decision-correlated neural activity, as well as the animal's psychometric function and reaction times. These results suggest that recurrent excitation mediated by NMDA receptors provides a candidate cellular mechanism for the slow time integration of sensory stimuli and the formation of categorical choices in a decision-making neocortical network.

10.
Hippocampal neural codes for different, familiar environments are thought to reflect distinct attractor states, possibly implemented in the recurrent CA3 network. A defining property of an attractor network is its ability to undergo sharp and coherent transitions between pre-established (learned) representations when the inputs to the network are changed. To determine whether hippocampal neuronal ensembles exhibit such discontinuities, we recorded in CA3 and CA1 when a familiar square recording enclosure was morphed in quantifiable steps into a familiar circular enclosure while leaving other inputs constant. We observed a gradual noncoherent progression from the initial to the final network state. In CA3, the transformation was accompanied by significant hysteresis, resulting in more similar end states than when only square and circle were presented. These observations suggest that hippocampal cell assemblies are capable of incremental plastic deformation, with incongruous information being incorporated into pre-existing representations.

11.
As a method for the analysis of neural spike trains, we examine fundamental characteristics of interspike interval (ISI) reconstruction theoretically with a leaky-integrator neuron model and experimentally with cricket wind receptor cells. Both the input to the leaky integrator and the stimulus to the wind receptor cells are the time series generated from the Rössler system. By numerical analysis of the leaky integrator, it is shown that, even if ISI reconstruction is possible, sometimes the entire structure of the Rössler attractor may not be reconstructed with ISI reconstruction. For analysis of the in vivo physiological responses of cricket wind receptor cells, we apply ISI reconstruction, nonlinear prediction and the surrogate data method to the experimental data. As a result of the analysis, it is found that there is a significant deterministic structure in the spike trains. By this analysis of physiological data, it is also shown that, even if ISI reconstruction is possible, the entire attractor may not be reconstructed.
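The idea of attractor reconstruction from interspike intervals can be sketched as follows. This is a simplified stand-in, not the study's setup: a leaky integrate-and-fire unit driven by a sinusoid rather than the Rössler system, with illustrative parameter values throughout. Consecutive ISIs are delay-embedded as points (I_k, I_{k+1}).

```python
import numpy as np

def lif_isis(drive, dt=1e-4, tau=0.02, vth=1.0, n_spikes=200):
    """Interspike intervals of a leaky integrator driven by a deterministic
    signal: dv/dt = -v/tau + drive(t); reset to 0 at threshold vth."""
    v, t, last, isis = 0.0, 0.0, 0.0, []
    while len(isis) < n_spikes:
        v += dt * (-v / tau + drive(t))
        t += dt
        if v >= vth:
            isis.append(t - last)
            last, v = t, 0.0
    return np.array(isis)

# Deterministic (here sinusoidal) drive standing in for the chaotic stimulus.
drive = lambda t: 60.0 + 20.0 * np.sin(2 * np.pi * 3.0 * t)
isis = lif_isis(drive)

# Delay embedding of the ISI sequence: each point is (I_k, I_{k+1}).
emb = np.stack([isis[:-1], isis[1:]], axis=1)
print(emb.shape)
```

With a deterministic drive the embedded ISI points fall on a structured set rather than filling the plane, which is what ISI reconstruction exploits; the paper's point is that this structure may still miss parts of the original attractor.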

12.
Two neuronal models are analyzed in which subthreshold inputs are integrated either without loss (perfect integrator) or with a decay which follows an exponential time course (leaky integrator). Linear frequency response functions for these models are compared using sinusoids, Poisson-distributed impulses, or Gaussian white noise as inputs. The responses of both models show the nonlinear behavior characteristic of a rectifier for sinusoidal inputs of sufficient amplitude. The leaky integrator shows another nonlinearity in which responses become phase-locked to cyclic stimuli. Addition of white noise reduces the distortions due to phase locking. Both models also show selective attenuation of high-frequency components with white noise inputs. Input, output, and cross-spectra are computed using inputs having a broad frequency spectrum. Measures of the coherence and information transmission between the input and output of the models are also derived. Steady inputs, which produce a constant “carrier” rate, and intrinsic sources, which produce variability in the discharge of neurons, may either increase or decrease coherence; however, information transmission using inputs with a broad spectrum is generally increased by steady inputs and reduced by intrinsic variability.
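A minimal simulation of the two subthreshold models contrasted above (Euler integration; parameter values are illustrative, not from the paper): under constant input I, the perfect integrator ramps linearly without bound, while the leaky integrator forgets with time constant tau and saturates at I*tau.

```python
import numpy as np

def integrate(inp, dt=0.001, tau=None):
    """Subthreshold membrane trajectory: dv/dt = I (perfect, tau=None)
    or dv/dt = -v/tau + I (leaky)."""
    v = np.zeros(len(inp) + 1)
    for t, i in enumerate(inp):
        leak = 0.0 if tau is None else v[t] / tau
        v[t + 1] = v[t] + dt * (i - leak)
    return v

inp = np.ones(1000)               # 1 s of constant unit input
perfect = integrate(inp)          # ramps linearly: v(T) = I*T = 1.0
leaky = integrate(inp, tau=0.02)  # saturates at I*tau = 0.02
print(perfect[-1], leaky[-1])
```

The gap between the two final values is the "loss" that distinguishes the models' low-frequency behavior: the perfect integrator's response to a step grows without bound, so its frequency response diverges at DC, while the leaky integrator's flattens below the corner frequency 1/tau.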

13.
The comparative study of electronic and neural networks involved in pattern recognition starts with the analogies of structure and function which exist between the electronic “basic integrative unit” and the neuron. Both elements represent the basic components in each system of networks and may be considered as functionally equivalent. According to the kind of response given to a standard input signal, four types of integrative units, either electronic or neural, may be distinguished: the fixed, the accommodative, the signal-prolongating and the adaptive type. The integrative units perform many different functions. Those involved in pattern recognition, however, can all be grouped into three categories according to the function they perform: contrast detection, pattern detection and pattern discrimination. A “contrast detecting unit” gives responses in two senses, positive or negative, according to the position of the stimulus over its receptive field. A “pattern detecting unit” gives responses in one sense only, with a maximum for a pattern having the spatial distribution corresponding to the positively acting receptors of its receptive field. For performing the function of discrimination, which leads to reliable identification of any pattern, a network arrangement called a “maximum amplitude filter” is necessary. Examples of such units and arrangements existing in the nervous system are provided. It is concluded that a “logical analysis of neural networks” based on engineering principles is possible and that this could provide a new tool to the neurophysiologist in the study of the nervous system.

14.
During the development of the nervous system embryonic neurons are incorporated into neural networks that underlie behaviour. For example, during embryogenesis in Drosophila, motor neurons in every body segment are wired into the circuitry that drives the simple peristaltic locomotion of the larva. Very little is known about the way in which the necessary central synapses are formed in such a network or how their properties are controlled. One possibility is that presynaptic and postsynaptic elements form relatively independently of each other. Alternatively, there might be an interaction between presynaptic and postsynaptic neurons that allows for adjustment and plasticity in the embryonic network. Here we have addressed this issue by analysing the role of synaptic transmission in the formation of synaptic inputs onto identified motor neurons as the locomotor circuitry is assembled in the Drosophila embryo. We targeted the expression of tetanus toxin light chain (TeTxLC) to single identified neurons using the GAL4 system. TeTxLC prevents the evoked release of neurotransmitter by enzymatically cleaving the synaptic-vesicle-associated protein neuronal-Synaptobrevin (n-Syb) [1]. Unexpectedly, we found that the cells that expressed TeTxLC, which were themselves incapable of evoked release, showed a dramatic reduction in synaptic input. We detected this reduction both electrophysiologically and ultrastructurally.

15.
I hypothesize that re-occurring prior experience of complex systems mobilizes a fast response, whose attractor is encoded by their strongly connected network core. In contrast, responses to novel stimuli are often slow and require the weakly connected network periphery. Upon repeated stimulus, peripheral network nodes remodel the network core that encodes the attractor of the new response. This “core-periphery learning” theory reviews and generalizes the heretofore fragmented knowledge on attractor formation by neural networks, periphery-driven innovation, and a number of recent reports on the adaptation of protein, neuronal, and social networks. The core-periphery learning theory may increase our understanding of signaling, memory formation, information encoding and decision-making processes. Moreover, the power of the network-periphery-related “wisdom of crowds” in inventing creative, novel responses suggests that deliberative democracy is a slow yet efficient learning strategy honed over a billion years of evolution. Also see the video abstract here: https://youtu.be/IIjP7zWGjVE .

16.
We investigate the role of adaptation in a neural field model, composed of ON and OFF cells, with delayed all-to-all recurrent connections. As external spatially profiled inputs drive the network, ON cells receive inputs directly, while OFF cells receive an inverted image of the original signals. Via global and delayed inhibitory connections, these signals can cause the system to enter states of sustained oscillatory activity. We perform a bifurcation analysis of our model to elucidate how neural adaptation influences the ability of the network to exhibit oscillatory activity. We show that slow adaptation encourages input-induced rhythmic states by decreasing the Andronov–Hopf bifurcation threshold. We further determine how the feedback and adaptation together shape the resonant properties of the ON and OFF cell network and how this affects the response to time-periodic input. By introducing an additional frequency in the system, adaptation alters the resonance frequency by shifting the peaks where the response is maximal. We support these results with numerical experiments of the neural field model. Although developed in the context of the circuitry of the electric sense, these results are applicable to any network of spontaneously firing cells with global inhibitory feedback to themselves, in which a fraction of these cells receive external input directly, while the remaining ones receive an inverted version of this input via feedforward di-synaptic inhibition. Thus the results are relevant beyond the many sensory systems where ON and OFF cells are usually identified, and provide the backbone for understanding dynamical network effects of lateral connections and various forms of ON/OFF responses.

17.
While learning and development are well characterized in feedforward networks, these features are more difficult to analyze in recurrent networks due to the increased complexity of dual dynamics – the rapid dynamics arising from activation states and the slow dynamics arising from learning or developmental plasticity. We present analytical and numerical results that consider dual dynamics in a recurrent network undergoing Hebbian learning with either constant weight decay or weight normalization. Starting from initially random connections, the recurrent network develops symmetric or near-symmetric connections through Hebbian learning. Reciprocity and modularity arise naturally through correlations in the activation states. Additionally, weight normalization may be better than constant weight decay for the development of multiple attractor states that allow a diverse representation of the inputs. These results suggest a natural mechanism by which synaptic plasticity in recurrent networks such as cortical and brainstem premotor circuits could enhance neural computation and the generation of motor programs. Received: 27 April 1998 / Accepted in revised form: 16 March 1999
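The emergence of (near-)symmetric connectivity under Hebbian learning with constant weight decay can be sketched in a toy reduction. This is an assumption-laden stand-in for the paper's model: the fast activation dynamics are replaced by random "relaxed" activity states, and all parameter values are illustrative. Because each Hebbian update eta * x x^T is exactly symmetric, the asymmetric part of the initial random weights decays geometrically while the symmetric part accumulates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
W = rng.normal(scale=0.1, size=(n, n))  # random initial recurrent weights
np.fill_diagonal(W, 0.0)

def asymmetry(W):
    """Relative size of the antisymmetric component of W."""
    return np.linalg.norm(W - W.T) / np.linalg.norm(W)

a0 = asymmetry(W)                        # close to sqrt(2) for a random matrix
eta, decay = 0.05, 0.02
for _ in range(500):
    # Stand-in for the relaxed activity state of the fast dynamics.
    x = np.tanh(rng.normal(size=n))
    W = (1 - decay) * W + eta * np.outer(x, x)  # Hebb + constant weight decay
    np.fill_diagonal(W, 0.0)

print(a0, asymmetry(W))  # asymmetry collapses as learning proceeds
```

The antisymmetric component shrinks by (1 - decay) per step, so after 500 steps it is essentially gone; with weight normalization instead of decay the bookkeeping differs, but the symmetrizing effect of the outer-product updates is the same.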

18.
The stunning possibility of “reprogramming” differentiated somatic cells to express a pluripotent stem cell phenotype (iPS, induced pluripotent stem cell) and the “ground state” character of pluripotency reveal fundamental features of cell fate regulation that lie beyond existing paradigms. The rarity of reprogramming events appears to contradict the robustness with which the unfathomably complex phenotype of stem cells can reliably be generated. This apparent paradox, however, is naturally explained by the rugged “epigenetic landscape” with valleys representing “preprogrammed” attractor states that emerge from the dynamical constraints of the gene regulatory network. This article provides a pedagogical primer to the fundamental principles of gene regulatory networks as integrated dynamic systems and reviews recent insights in gene expression noise and fate determination, thereby offering a formal framework that may help us to understand why cell fate reprogramming events are inherently rare and yet so robust.

19.
We analyze a competitive neural network model of perceptual rivalry that receives time-varying inputs. Time-dependence of inputs can be discrete or smooth. Spike frequency adaptation provides negative feedback that generates network oscillations when inputs are constant in time. Oscillations that resemble perceptual rivalry involve only one population being “ON” at a time, which represents the dominance of a single percept at a time. As shown in Laing and Chow (J. Comput. Neurosci. 12(1):39–53, 2002), for sufficiently high contrast, one can derive relationships between dominance times and contrast that agree with Levelt’s propositions (Levelt in On binocular rivalry, 1965). Time-dependent stimuli give rise to novel network oscillations where both, one, or neither populations are “ON” at any given time. When a single population receives an interrupted stimulus, the fundamental mode of behavior we find is phase-locking, where the temporally driven population locks its state to the stimulus. Other behaviors are analyzed as bifurcations from this forced oscillation, using fast/slow analysis that exploits the slow timescale of adaptation. When both populations receive time-varying input, we find mixtures of fusion and sole population dominance, and we partition parameter space into particular oscillation types. Finally, when a single population’s input contrast is smoothly varied in time, 1:n mode-locked states arise through period-adding bifurcations beyond phase-locking. Our results provide several testable predictions for future psychophysical experiments on perceptual rivalry.
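The constant-input case described above can be sketched with a minimal rate model: two populations with mutual inhibition and slow spike-frequency adaptation, so that adaptation gradually undermines whichever population is dominant and dominance alternates. The sigmoid and all parameter values here are illustrative assumptions, not the model of the paper.

```python
import numpy as np

def rivalry(T=20.0, dt=0.001, I=1.0, beta=2.0, g=1.5, tau_a=1.0, tau_u=0.01):
    """Two-population competition with slow adaptation; returns, per time
    step, 1 if population 0 dominates and 0 if population 1 does."""
    f = lambda x: 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))  # rate nonlinearity
    u = np.array([0.6, 0.4])   # population activities (slightly asymmetric start)
    a = np.zeros(2)            # slow adaptation variables
    dom = []
    for _ in range(int(T / dt)):
        cross = beta * u[::-1]                       # mutual inhibition
        u += dt / tau_u * (-u + f(I - cross - g * a))
        a += dt / tau_a * (-a + u)                   # slow negative feedback
        dom.append(int(u[0] > u[1]))
    return np.array(dom)

d = rivalry()
switches = int(np.sum(np.abs(np.diff(d))))
print(switches)   # several alternations of dominance over 20 s
```

With constant, equal inputs the fast winner-take-all competition picks one "ON" population, and the slow adaptation variable then destabilizes it on the tau_a timescale, which is the oscillation mechanism the fast/slow analysis in the paper exploits.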

20.
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
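The interplay of depression and facilitation can be sketched with one common spike-to-spike discretisation of the Tsodyks–Markram dynamic-synapse model (parameter values are illustrative): resources R are consumed by each spike and recover with time constant tau_rec, while utilisation u is transiently boosted by each spike and decays back with tau_facil. The relative efficacy of spike n is u*R at that spike.

```python
import numpy as np

def tm_synapse(spike_times, U=0.2, tau_rec=0.5, tau_facil=1.0):
    """Per-spike relative synaptic efficacy under depression (R) and
    facilitation (u). One common discretisation; conventions vary."""
    R, u, last = 1.0, U, None
    eff = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_rec)  # resources recover
            u = U + (u - U) * np.exp(-dt / tau_facil)    # facilitation decays
        eff.append(u * R)
        R -= u * R             # resources consumed by this spike
        u += U * (1.0 - u)     # facilitation increment after the spike
        last = t
    return eff

train = [0.02 * k for k in range(10)]  # regular 50 Hz train
print(tm_synapse(train))  # efficacy first facilitates, then depresses
```

With these parameters the second response is larger than the first (facilitation dominates early) but depression wins over the train; shifting U, tau_rec and tau_facil moves a synapse between depressing and facilitating regimes, which is the combination space the paper analyzes.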

