Similar Articles
20 similar articles found (search time: 15 ms)
1.
Short-term synaptic depression (STD) and spike-frequency adaptation (SFA) are two basic physiological cortical mechanisms for reducing the system's excitability under repetitive stimulation. The computational implications of each of these mechanisms for information processing have been studied in detail, but the dynamics arising from their combination in a realistic biological scenario have received far less attention. We show here, both experimentally with intracellular recordings from cortical slices of the ferret and computationally using a biologically realistic model of a feedforward cortical network, that STD combined with presynaptic SFA results in the resensitization of cortical synaptic efficacies in the course of sustained stimulation. This fundamental effect is then shown in the computational model to have important implications for the network response to time-varying inputs. The main findings are: (1) the addition of SFA to the model endowed with STD improves the network sensitivity to the degree of synchrony in the incoming inputs; (2) presynaptic SFA, whether slow or fast, combined with STD results in postsynaptic neurons responding briskly to abrupt changes in the presynaptic input current and ignoring sustained stimulation, much more effectively than either SFA or STD alone; (3) for slow presynaptic SFA, postsynaptic responses to strong inputs decrease inversely with the input, whereas for weak input currents to presynaptic neurons transient postsynaptic responses are strongly facilitated, thus enhancing the system's sensitivity to subtle changes in weak presynaptic inputs. Taken together, these results suggest that in systems designed to respond to temporal aspects of the input, SFA and STD might constitute two necessary, linked elements whose simultaneous interplay is important for the performance of the system.
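The resensitization effect can be illustrated with a toy calculation (a hedged sketch, not the authors' ferret-slice model): a Tsodyks-Markram-style depressing synapse driven by a presynaptic train whose interspike intervals (ISIs) lengthen under adaptation recovers its efficacy, while a constant-rate train stays depressed. The parameters `U`, `tau_rec` and the ISI schedules are illustrative assumptions.

```python
import numpy as np

def std_efficacy(isis, U=0.5, tau_rec=0.8):
    """Per-spike efficacy of a depressing (Tsodyks-Markram-style) synapse.

    x tracks the available synaptic resources: each spike releases a
    fraction U of x, and x recovers toward 1 with time constant tau_rec
    between spikes.
    """
    x, eff = 1.0, []
    for isi in isis:
        eff.append(U * x)                                # efficacy of this spike
        x -= U * x                                       # resource depletion
        x += (1.0 - x) * (1.0 - np.exp(-isi / tau_rec))  # recovery until next spike
    return np.array(eff)

# Constant presynaptic rate: the synapse stays depressed.
eff_fixed = std_efficacy([0.05] * 40)
# Presynaptic SFA slows the train (ISIs lengthen), so resources recover:
# the synaptic efficacy "resensitizes" late in the sustained stimulus.
eff_adapt = std_efficacy(list(0.05 * np.exp(np.linspace(0.0, 2.0, 40))))
```

Late in the train, the adapting input leaves the synapse markedly more effective than the constant-rate input, which is the resensitization the abstract describes.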

2.
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models of cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results with computer simulations that investigate the resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enable them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, one that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving.
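The claimed exponential convergence despite non-reversibility can already be seen in a toy Markov chain (an illustrative sketch, not the paper's microcircuit model): a cyclically biased, non-reversible three-state chain still contracts geometrically to its stationary distribution.

```python
import numpy as np

# A non-reversible three-state chain with a cyclic bias
# (pi_i * P[i, j] != pi_j * P[j, i]). P is doubly stochastic,
# so the stationary distribution is uniform.
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
pi = np.full(3, 1.0 / 3.0)

mu = np.array([1.0, 0.0, 0.0])   # start far from stationarity
tv = []                          # total-variation distance over time
for _ in range(50):
    mu = mu @ P
    tv.append(0.5 * np.abs(mu - pi).sum())
```

The distance to stationarity shrinks by a constant factor per step (here set by the second-largest eigenvalue modulus of `P`, about 0.7), so fifty steps leave it negligibly small.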

3.
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. 
Hence they may also throw new light on the computational role of feedback in other complex biological dynamical systems, such as genetic regulatory networks.

4.
The dynamics of cerebellar neuronal networks is controlled by the underlying building blocks of neurons and the synapses between them. In particular, the computation of Purkinje cells (PCs), the only output cells of the cerebellar cortex, is implemented through several types of neural pathways that interactively route excitation and inhibition converging onto PCs. Such tuning of excitation and inhibition, arising from the gating of specific pathways as well as from short-term plasticity (STP) of the synapses, plays a dominant role in controlling PC dynamics in terms of firing rate and spike timing. PCs receive cascaded feedforward inputs from two major neural pathways: the feedforward excitatory pathway from granule cells (GCs) to PCs, and the feedforward inhibition pathway from GCs, via molecular layer interneurons (MLIs), to PCs. The GC-PC pathway, together with the short-term dynamics of excitatory synapses, has been a focus over past decades, whereas recent experimental evidence shows that MLIs also contribute substantially to controlling PC activity. It is therefore expected that the diversity of excitation gated by STP of GC-PC synapses, modulated by strong inhibition from MLI-PC synapses, can promote the computation performed by PCs. However, it remains unclear how these two neural pathways interact to modulate PC dynamics. Here, using a computational model of a PC network incorporating these two neural pathways, we investigated the change of PC firing dynamics at the single-cell and network levels. We show that the nonlinear characteristics of excitatory STP dynamics can significantly modulate PC spiking dynamics mediated by inhibition. The changes in PC firing rate, firing phase, and temporal spike pattern are strongly modulated by these two factors in different ways. MLIs mainly contribute to variable delays in the postsynaptic action potentials of PCs, modulated in turn by excitatory STP. Notably, the diversity of synchronization and pause responses in the PC network is governed not only by the balance of excitation and inhibition but also by synaptic STP, depending on input burst patterns. In particular, the pause response in the PC network emerges only through the interaction of both pathways. Together with other recent findings, our results show that the interaction of feedforward excitatory and inhibitory pathways, together with synaptic short-term dynamics, can dramatically regulate PC activity and consequently change the network dynamics of the cerebellar circuit.

5.
Recent experimental data from the rodent cerebral cortex and olfactory bulb indicate that specific connectivity motifs are correlated with short-term dynamics of excitatory synaptic transmission. It was observed that neurons with short-term facilitating synapses form predominantly reciprocal pairwise connections, while neurons with short-term depressing synapses form predominantly unidirectional pairwise connections. The cause of these structural differences in excitatory synaptic microcircuits is unknown. We show that these connectivity motifs emerge in networks of model neurons, from the interactions between short-term synaptic dynamics (SD) and long-term spike-timing dependent plasticity (STDP). While the impact of STDP on SD was shown in simultaneous neuronal pair recordings in vitro, the mutual interactions between STDP and SD in large networks are still the subject of intense research. Our approach combines an SD phenomenological model with an STDP model that faithfully captures long-term plasticity dependence on both spike times and frequency. As a proof of concept, we first simulate and analyze recurrent networks of spiking neurons with random initial connection efficacies and where synapses are either all short-term facilitating or all depressing. For identical external inputs to the network, and as a direct consequence of internally generated activity, we find that networks with depressing synapses evolve unidirectional connectivity motifs, while networks with facilitating synapses evolve reciprocal connectivity motifs. We then show that the same results hold for heterogeneous networks, including both facilitating and depressing synapses. This does not contradict a recent theory that proposes that motifs are shaped by external inputs, but rather complements it by examining the role of both the external inputs and the internally generated network activity. 
Our study highlights the conditions under which SD-STDP might explain the correlation between facilitation and reciprocal connectivity motifs, as well as between depression and unidirectional motifs.
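The core mechanism can be caricatured with a pair-based STDP rule (a minimal sketch under illustrative parameters, not the authors' frequency-dependent model): when neuron A reliably fires before neuron B, the causal A→B pairings potentiate while the anti-causal B→A pairings depress, pushing an initially reciprocal pair toward a unidirectional motif.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pair-based STDP weight change for dt = t_post - t_pre (seconds)."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * np.exp(dt / tau)       # post before pre: depression

# Neuron A fires 5 ms before neuron B on every event (A drives B).
t_a = np.arange(0.0, 1.0, 0.1)   # A's spike times
t_b = t_a + 0.005                # B's spike times

dw_ab = sum(stdp_dw(tb - ta) for ta, tb in zip(t_a, t_b))  # causal pairs
dw_ba = sum(stdp_dw(ta - tb) for ta, tb in zip(t_a, t_b))  # anti-causal pairs
```

The A→B weight grows while the B→A weight shrinks, so internally generated causal activity alone is enough to break a reciprocal motif into a unidirectional one.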

6.
The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, they suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.

7.
Chaos and synchrony in a model of a hypercolumn in visual cortex
Neurons in cortical slices emit spikes or bursts of spikes regularly in response to a suprathreshold current injection. This behavior is in marked contrast to the behavior of cortical neurons in vivo, whose response to electrical or sensory input displays a strong degree of irregularity. Correlation measurements show a significant degree of synchrony in the temporal fluctuations of neuronal activities in cortex. We explore the hypothesis that these phenomena are the result of the synchronized chaos generated by the deterministic dynamics of local cortical networks. A model of a hypercolumn in the visual cortex is studied. It consists of two populations of neurons, one inhibitory and one excitatory. The dynamics of the neurons is based on a Hodgkin-Huxley type model of excitable voltage-clamped cells with several cellular and synaptic conductances. A slow potassium current is included in the dynamics of the excitatory population to reproduce the observed adaptation of the spike trains emitted by these neurons. The pattern of connectivity has a spatial structure which is correlated with the internal organization of hypercolumns in orientation columns. Numerical simulations of the model show that in an appropriate parameter range, the network settles in a synchronous chaotic state, characterized by a strong temporal variability of the neural activity which is correlated across the hypercolumn. Strong inhibitory feedback is essential for the stabilization of this state. These results show that the cooperative dynamics of large neuronal networks are capable of generating variability and synchrony similar to those observed in cortex. Auto-correlation and cross-correlation functions of neuronal spike trains are computed, and their temporal and spatial features are analyzed. In other parameter regimes, the network exhibits two additional states: synchronized oscillations and an asynchronous state. We use our model to study cortical mechanisms for orientation selectivity. 
It is shown that in a suitable parameter regime, when the input is not oriented, the network has a continuum of states, each representing an inhomogeneous population activity which is peaked at one of the orientation columns. As a result, when a weakly oriented input stimulates the network, it yields sharp orientation tuning. The properties of the network in this regime, including the appearance of virtual rotations and broad stimulus-dependent cross-correlations, are investigated. The results agree with the predictions of the mean-field theory previously derived for a simplified model of stochastic, two-state neurons. The relation between the results of the model and experiments in visual cortex is discussed.
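The sharpening of a weakly oriented input can be reproduced in a drastically reduced linear-rate ring model (a caricature of the mean-field picture, not the Hodgkin-Huxley network; the interaction strength `J2` and the 10% input anisotropy are illustrative):

```python
import numpy as np

N = 64
theta = np.linspace(0.0, np.pi, N, endpoint=False)    # orientation columns
# Cosine interaction on the ring; the cos(2*dtheta) mode has gain J2/2 = 0.8,
# so the tuned component of the input is amplified ~5x at the fixed point.
J2 = 1.6
J = J2 * np.cos(2.0 * (theta[:, None] - theta[None, :])) / N

h = 1.0 + 0.1 * np.cos(2.0 * (theta - np.pi / 2))     # weakly oriented input
r = np.zeros(N)
for _ in range(800):
    r += 0.1 * (-r + np.maximum(J @ r + h, 0.0))      # rectified rate dynamics

mod_in = (h.max() - h.min()) / (h.max() + h.min())    # input tuning depth
mod_out = (r.max() - r.min()) / (r.max() + r.min())   # output tuning depth
```

The output tuning depth comes out several times larger than the input's, with the peak still at the stimulated orientation, which is the essence of recurrent sharpening.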

8.
《Bio Systems》2007,87(1-3):53-62
The dynamics of activity in interactive neural populations is simulated by the networks of Wilson–Cowan oscillators. Two extreme cases of connection architectures in the networks are considered: (1) 1D and 2D regular and homogeneous grids with local connections and (2) sparse random coupling. Propagating waves in the network have been found under the stationary external input and the regime of partial synchronization has been obtained for the periodic input. It has been shown that in the case of random coupling about 60% of neural populations demonstrate oscillatory activity and some of these oscillations are synchronous. The role of different types of dynamics in information processing is discussed. In particular, we discuss the regime of partial synchronization in the context of cortical microcircuits.
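The oscillatory building block of such networks can be sketched as a single Wilson-Cowan excitatory-inhibitory pair (using the classic textbook limit-cycle parameter set, which is not necessarily the one used in the paper):

```python
import numpy as np

def S(x, a, th):
    """Wilson-Cowan sigmoid, shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - th))) - 1.0 / (1.0 + np.exp(a * th))

# Classic limit-cycle parameters from Wilson & Cowan (1972).
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
ae, the, ai, thi = 1.3, 4.0, 2.0, 3.7
P, Q = 1.25, 0.0

dt, n = 0.01, 20000
E = np.zeros(n)   # excitatory population activity
I = np.zeros(n)   # inhibitory population activity
for t in range(1, n):
    e, i = E[t - 1], I[t - 1]
    E[t] = e + dt * (-e + (1.0 - e) * S(c1 * e - c2 * i + P, ae, the))
    I[t] = i + dt * (-i + (1.0 - i) * S(c3 * e - c4 * i + Q, ai, thi))
```

With constant drive `P` the pair settles onto a limit cycle rather than a fixed point; a grid or random network of such units is then the substrate for the propagating waves and partial synchronization described in the abstract.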

10.
Recent studies have shown that local cortical feedback can have an important effect on the response of neurons in primary visual cortex to the orientation of visual stimuli. In this work, we study the role of the cortical feedback in shaping the spatiotemporal patterns of activity in cortex. Two questions are addressed: one, what are the limitations on the ability of cortical neurons to lock their activity to rotating oriented stimuli within a single receptive field? Two, can the local architecture of visual cortex lead to the generation of spontaneous traveling pulses of activity? We study these issues analytically by a population-dynamic model of a hypercolumn in visual cortex. The order parameter that describes the macroscopic behavior of the network is the time-dependent population vector of the network. We first study the network dynamics under the influence of a weakly tuned input that slowly rotates within the receptive field. We show that if the cortical interactions have strong spatial modulation, the network generates a sharply tuned activity profile that propagates across the hypercolumn in a path that is completely locked to the stimulus rotation. The resultant rotating population vector maintains a constant angular lag relative to the stimulus, the magnitude of which grows with the stimulus rotation frequency. Beyond a critical frequency the population vector does not lock to the stimulus but executes a quasi-periodic motion with an average frequency that is smaller than that of the stimulus. In the second part we consider the stable intrinsic state of the cortex under the influence of isotropic stimulation. We show that if the local inhibitory feedback is sufficiently strong, the network does not settle into a stationary state but develops spontaneous traveling pulses of activity. Unlike recent models of wave propagation in cortical networks, the connectivity pattern in our model is spatially symmetric, hence the direction of propagation of these waves is arbitrary. The interaction of these waves with an external oriented stimulus is studied. It is shown that the system can lock to a weakly tuned rotating stimulus if the stimulus frequency is close to the frequency of the intrinsic wave.

11.
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.

12.
Artificial neural networks are usually built on rather few elements such as activation functions, learning rules, and the network topology. When modelling the more complex properties of realistic networks, however, a number of higher-level structural principles become important. In this paper we present a theoretical framework for modelling cortical networks at a high level of abstraction. Based on the notion of a population of neurons, this framework can accommodate the common features of cortical architecture, such as lamination, multiple areas and topographic maps, input segregation, and local variations of the frequency of different cell types (e.g., cytochrome oxidase blobs). The framework is meant primarily for the simulation of activation dynamics; it can also be used to model the neural environment of single cells in a multiscale approach. Received: 9 January 1996 / Accepted in revised form: 24 July 1996

13.
In vitro neural networks of cortical neurons interfaced to a computer via multichannel microelectrode arrays (MEAs) provide a unique paradigm for creating a hybrid neural computer. Unfortunately, only rudimentary information is known about these in vitro networks' computational properties or the extent of their abilities. To study those properties, a liquid state machine (LSM) approach was employed in which the liquid (typically an artificial neural network) was replaced with a living cortical network and the input and readout functions were replaced by the MEA-computer interface. A key requirement of the LSM architecture is that inputs into the liquid state must result in separable outputs based on the liquid's response (the separation property). In this paper, high- and low-frequency multi-site stimulation patterns were applied to the living cortical networks. Two template-based classifiers, one based on Euclidean distance and a second based on cross-correlation, were then applied to measure the separation of the input-output relationship. The result was over 95% (99.8% when nonstationarity is compensated for) input reconstruction accuracy for the high- and low-frequency patterns, confirming the existence of the separation property in these biological networks.
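The separation measurement can be sketched with synthetic data (illustrative surrogate responses, not MEA recordings): two templates stand in for the network's responses to high- and low-frequency stimulation, and both a Euclidean-distance and a cross-correlation template classifier are scored on noisy trials.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
# Surrogate "network responses" to the two stimulation classes.
templ = {"high": np.sin(2 * np.pi * t / 10),
         "low": np.sin(2 * np.pi * t / 50)}

def classify_euclid(x):
    """Assign x to the template with the smallest Euclidean distance."""
    return min(templ, key=lambda k: np.linalg.norm(x - templ[k]))

def classify_xcorr(x):
    """Assign x to the template with the largest zero-lag cross-correlation."""
    return max(templ, key=lambda k: np.dot(x - x.mean(), templ[k] - templ[k].mean()))

trials = [(k, templ[k] + 0.5 * rng.normal(size=t.size))
          for k in templ for _ in range(50)]
acc_euclid = np.mean([classify_euclid(x) == k for k, x in trials])
acc_xcorr = np.mean([classify_xcorr(x) == k for k, x in trials])
```

If the two classes produce separable responses, both classifiers recover the input class well above chance; with real recordings the templates would be trial-averaged network responses rather than clean sinusoids.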

14.
Computational studies as well as in vivo and in vitro results have shown that many cortical neurons fire in a highly irregular manner and at low average firing rates. These patterns seem to persist even when highly rhythmic signals are recorded by local field potential electrodes or other methods that quantify the summed behavior of a local population. Models of the 30-80 Hz gamma rhythm in which network oscillations arise through 'stochastic synchrony' capture the variability observed in the spike output of single cells while preserving network-level organization. We extend upon these results by constructing model networks constrained by experimental measurements and using them to probe the effect of biophysical parameters on network-level activity. We find in simulations that gamma-frequency oscillations are enabled by a high level of incoherent synaptic conductance input, similar to the barrage of noisy synaptic input that cortical neurons have been shown to receive in vivo. This incoherent synaptic input increases the emergent network frequency by shortening the time scale of the membrane in excitatory neurons and by reducing the temporal separation between excitation and inhibition due to decreased spike latency in inhibitory neurons. These mechanisms are demonstrated in simulations and in vitro current-clamp and dynamic-clamp experiments. Simulation results further indicate that the membrane potential noise amplitude has a large impact on network frequency and that the balance between excitatory and inhibitory currents controls network stability and sensitivity to external inputs.

15.
Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity. This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission.

16.
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. 
We argue that reservoir computing is a pertinent framework to model local cortical dynamics and their contribution to higher cognitive function.

17.
Persistent neuronal activity is usually studied in the context of short-term memory localized in central cortical areas. Recent studies show that early sensory areas also can have persistent representations of stimuli which emerge quickly (over tens of milliseconds) and decay slowly (over seconds). Traditional positive feedback models cannot explain sensory persistence for at least two reasons: (i) They show attractor dynamics, with transient perturbations resulting in a quasi-permanent change of system state, whereas sensory systems return to the original state after a transient. (ii) As we show, those positive feedback models which decay to baseline lose their persistence when their recurrent connections are subject to short-term depression, a common property of excitatory connections in early sensory areas. Dual time constant network behavior has also been implemented by nonlinear afferents producing a large transient input followed by much smaller steady state input. We show that such networks require unphysiologically large onset transients to produce the rise and decay observed in sensory areas. Our study explores how memory and persistence can be implemented in another model class, derivative feedback networks. We show that these networks can operate with two vastly different time courses, changing their state quickly when new information is coming in but retaining it for a long time, and that these capabilities are robust to short-term depression. Specifically, derivative feedback networks with short-term depression that acts differentially on positive and negative feedback projections are capable of dynamically changing their time constant, thus allowing fast onset and slow decay of responses without requiring unrealistically large input transients.

18.
Neural oscillations occur within a wide frequency range, with different brain regions exhibiting resonance-like characteristics at specific points in the spectrum. At the microscopic scale, single neurons possess intrinsic oscillatory properties, and it is not yet known whether cortical resonance is a consequence of these single-neuron oscillations or an emergent property of the networks that interconnect them. Using a network model of loosely coupled Wilson-Cowan oscillators to simulate a patch of cortical sheet, we demonstrate that the size of the activated network is inversely related to its resonance frequency. Further analysis of the parameter space indicated that the number of excitatory and inhibitory connections, as well as the average transmission delay between units, determined the resonance frequency. The model predicted that if an activated network within the visual cortex increased in size, the resonance frequency of the network would decrease. We tested this prediction experimentally using the steady-state visual evoked potential, stimulating the visual cortex with different-sized stimuli at a range of driving frequencies. We demonstrate that the frequency corresponding to the peak steady-state response correlated inversely with the size of the network. We conclude that although individual neurons possess resonance properties, oscillatory activity at the macroscopic level is strongly influenced by network interactions, and that the steady-state response can be used to investigate functional networks.
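The inverse relation between loop timescale and oscillation frequency can be seen in a one-variable delayed-feedback caricature (not the paper's Wilson-Cowan network; the gain and delays are illustrative): a longer feedback delay, standing in for the longer average transmission path of a larger network, yields a slower oscillation.

```python
import numpy as np

def osc_freq(delay, gain=3.0, dt=0.01, T=400.0):
    """Oscillation frequency of x'(t) = -x(t) - gain * tanh(x(t - delay))."""
    n, nd = int(T / dt), int(delay / dt)
    x = np.zeros(n)
    x[0] = 0.1
    for t in range(1, n):
        xd = x[t - 1 - nd] if t - 1 - nd >= 0 else 0.1  # constant pre-history
        x[t] = x[t - 1] + dt * (-x[t - 1] - gain * np.tanh(xd))
    late = x[n // 2:]                                    # discard the transient
    ups = np.sum((late[:-1] < 0.0) & (late[1:] >= 0.0))  # upward zero crossings
    return ups / (T / 2.0)

f_short_loop, f_long_loop = osc_freq(1.0), osc_freq(2.0)
```

Doubling the delay roughly halves the limit-cycle frequency, mirroring the model's prediction that larger activated networks (longer effective delays) resonate at lower frequencies.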

19.
Randomly connected networks of integrate-and-fire (IF) neurons are known to display asynchronous irregular (AI) activity states, which resemble the discharge activity recorded in the cerebral cortex of awake animals. However, it is not clear whether such activity states are specific to simple IF models, or whether they also exist in networks where neurons are endowed with complex intrinsic properties similar to electrophysiological measurements. Here, we investigate the occurrence of AI states in networks of nonlinear IF neurons, such as the adaptive exponential IF (Brette-Gerstner-Izhikevich) model. This model can display intrinsic properties such as low-threshold spiking (LTS), regular spiking (RS) or fast spiking (FS). We successively investigate the oscillatory and AI dynamics of thalamic, cortical and thalamocortical networks using such models. AI states can be found in each case, sometimes with surprisingly small network sizes, of the order of a few tens of neurons. We show that the presence of LTS neurons in cortex or in thalamus explains the robust emergence of AI states for relatively small network sizes. Finally, we investigate the role of spike-frequency adaptation (SFA). In cortical networks with strong SFA in RS cells, the AI state is transient, but when SFA is reduced, AI states can be self-sustained for long times. In thalamocortical networks, AI states are found when the cortex is itself in an AI state, but with strong SFA, the thalamocortical network displays Up and Down state transitions, similar to intracellular recordings during slow-wave sleep or anesthesia. Self-sustained Up and Down states could also be generated by two-layer cortical networks with LTS cells. These models suggest that intrinsic properties such as adaptation and low-threshold bursting activity are crucial for the genesis and control of AI states in thalamocortical networks.
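The adaptation mechanism central to these results can be sketched for a single adaptive exponential IF neuron (parameters are the standard Brette-Gerstner values, not the paper's network settings; the step current is illustrative): a suprathreshold step elicits spiking whose interspike intervals lengthen as the adaptation current `w` accumulates.

```python
import numpy as np

# Adaptive exponential integrate-and-fire (aEIF), standard parameter set.
C, gL, EL = 281.0, 30.0, -70.6    # pF, nS, mV
VT, DT = -50.4, 2.0               # spike-initiation threshold and slope (mV)
tau_w, a, b = 144.0, 4.0, 80.5    # ms, nS, pA
Vr, Vspike = -70.6, 0.0           # reset and spike-detection voltage (mV)
I = 1000.0                        # suprathreshold step current (pA)

dt, T = 0.05, 500.0               # ms
V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= Vspike:               # spike: reset V, increment adaptation current
        V = Vr
        w += b
        spikes.append(step * dt)

isis = np.diff(spikes)            # interspike intervals (ms)
```

The spike-triggered increments `b` make `w` grow with each spike, so the ISIs stretch toward an adapted steady rate; this is the SFA that the abstract shows can make the AI state transient.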

20.
In this paper, we study the combined dynamics of the neural activity and the synaptic efficiency changes in a fully connected network of biologically realistic neurons with simple synaptic plasticity dynamics including both potentiation and depression. Using a mean-field technique, we analyzed the equilibrium states of neural networks with dynamic synaptic connections and found a class of bistable networks. For this class of networks, one of the stable equilibrium states shows strong connectivity and coherent responses to external input. In the other stable equilibrium, the network is loosely connected and responds non-coherently to external input. Transitions between the two states can be achieved by positively or negatively correlated external inputs. Such networks can therefore switch between their phases according to the statistical properties of the external input. Non-coherent input can only “read” the state of the network, while a correlated one can change its state. We speculate that this property, specific to plastic neural networks, can offer a clue to understanding fully unsupervised learning models. Received: 8 August 1999 / Accepted in revised form: 16 March 2000
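The switching behavior can be caricatured by a one-dimensional bistable mean-field toy model (a deliberately minimal sketch, not the paper's plastic-synapse network; the sigmoid gain, threshold, and input strength are illustrative): two stable equilibria coexist, and a transient "correlated" input pulse moves the system between them, with the new state persisting after the pulse ends.

```python
import numpy as np

def relax(m0, h=0.0, gain=8.0, theta=0.5, steps=400):
    """Relax mean activity m under dm/dt = -m + sigmoid(gain*(m - theta) + h)."""
    m = m0
    for _ in range(steps):
        m += 0.1 * (-m + 1.0 / (1.0 + np.exp(-gain * (m - theta) - h)))
    return m

m_lo = relax(0.0)    # loosely connected branch: low, non-coherent activity
m_hi = relax(1.0)    # strongly connected branch: high, coherent activity
# A transient correlated input (h = 4) switches the network to the coherent
# state; the state persists after the input is removed (h back to 0).
m_switched = relax(relax(m_lo, h=4.0), h=0.0)
```

Weak or uncorrelated input (small `h`) leaves the system on its current branch, so it only "reads" the state, while a sufficiently strong correlated pulse rewrites it, the qualitative behavior the mean-field analysis attributes to these plastic networks.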


Copyright©北京勤云科技发展有限公司  京ICP备09084417号