Similar literature (20 records found)
1.
Kurikawa T, Kaneko K. PLoS ONE 2011, 6(3): e17432
Learning is a process that shapes neural dynamical systems so that an appropriate output pattern is generated for a given input. Such a memory is often considered to reside in one of the attractors of a neural dynamical system, selected by the initial neural state that an input specifies. Neural activity observed in the absence of inputs, and the change in that activity when an input is provided, were not studied extensively in the past. However, recent experimental studies have reported the existence of structured spontaneous neural activity and its changes when an input is provided. With this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon the application of an input, a phenomenon known as bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, I/O relations are successively memorized when the difference between the time scales is appropriate. After the learning process is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns increases, the spontaneous neural activity generated after learning shows itinerancy over the previously learned output patterns. This theoretical finding agrees remarkably well with recent experimental reports in which spontaneous neural activity in the visual cortex, recorded without stimuli, itinerates over patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity can be a natural outcome of successive learning of several patterns, and that it facilitates bifurcation of the network when an input is provided.
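As a rough illustration of learning with two synaptic time scales, the Python sketch below combines a fast, quickly decaying weight component with a slowly consolidating one under a node-perturbation (reward-modulated) update. The single-layer network, time constants, and the specific update rule are illustrative assumptions, not the equations of the model described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                # illustrative network size
w_fast = np.zeros((N, N))             # fast synaptic component (quickly written, quickly forgotten)
w_slow = np.zeros((N, N))             # slow synaptic component (consolidated memory)
tau_fast, tau_slow = 20.0, 2000.0     # the two synaptic time scales
r_avg = 0.0                           # running reward baseline

def learn_step(x_in, target, lr=0.5, sigma=0.1):
    """Node-perturbation reinforcement step: fast weights explore, slow weights consolidate."""
    global w_fast, w_slow, r_avg
    noise = sigma * rng.normal(size=N)
    y = np.tanh((w_slow + w_fast) @ x_in + noise)      # rate dynamics with exploration noise
    reward = -np.mean((y - target) ** 2)               # scalar reinforcement signal
    w_fast += -w_fast / tau_fast + lr * (reward - r_avg) * np.outer(noise, x_in)
    w_slow += w_fast / tau_slow                        # slow consolidation of recent fast changes
    r_avg += 0.05 * (reward - r_avg)                   # update the reward baseline
    return reward

# usage: repeatedly present one input/target pair so the I/O relation is memorized
x, target = rng.normal(size=N), 0.5 * np.tanh(rng.normal(size=N))
for step in range(5000):
    r = learn_step(x, target)
print("final mean squared error:", round(-r, 4))
```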

2.
Time is considered to be an important encoding dimension in olfaction, as neural populations generate odour-specific spatiotemporal responses to constant stimuli. However, during pheromone-mediated anemotactic search, insects must discriminate specific ratios of blend components from rapidly time-varying input. The dynamics intrinsic to olfactory processing and those of naturalistic stimuli can therefore potentially collide, thereby confounding ratiometric information. In this paper we use a computational model of the macroglomerular complex of the insect antennal lobe to study the impact on ratiometric information of this potential collision between network and stimulus dynamics. We show that the model exhibits two different dynamical regimes depending upon the connectivity pattern between inhibitory interneurons (which we refer to as fixed point attractor and limit cycle attractor), both of which generate ratio-specific trajectories in the projection neuron output population that are reminiscent of the temporal patterning and periodic hyperpolarisation observed in olfactory antennal lobe neurons. We compare the performance of the two corresponding population codes for reporting ratiometric blend information to higher centres of the insect brain. Our key finding is that whilst the dynamically rich limit cycle attractor spatiotemporal code is faster and more efficient in transmitting blend information under certain conditions, it is also more prone to interference between network and stimulus dynamics, thus degrading ratiometric information under naturalistic input conditions. Our results suggest that rich, intrinsically generated network dynamics can provide a powerful means of encoding multidimensional stimuli with high accuracy and efficiency, but only when isolated from stimulus dynamics. This interference between the temporal dynamics of the stimulus and the temporal patterns of neural activity constitutes a real challenge that must be solved by the nervous system when faced with naturalistic input.

3.
The notion of attractor networks is the leading hypothesis for how associative memories are stored and recalled. A defining anatomical feature of such networks is excitatory recurrent connections. These “attract” the firing pattern of the network to a stored pattern, even when the external input is incomplete (pattern completion). The CA3 region of the hippocampus has been postulated to be such an attractor network; however, the experimental evidence has been ambiguous, leading to the suggestion that CA3 is not an attractor network. In order to resolve this controversy and to better understand how CA3 functions, we simulated CA3 and its input structures. In our simulation, we could reproduce critical experimental results and establish the criteria for identifying attractor properties. Notably, under conditions in which there is continuous input, the output should be “attracted” to a stored pattern. However, contrary to previous expectations, as a pattern is gradually “morphed” from one stored pattern to another, a sharp transition between output patterns is not expected. The observed firing patterns of CA3 meet these criteria and can be quantitatively accounted for by our model. Notably, as morphing proceeds, the activity pattern in the dentate gyrus changes; in contrast, the activity pattern in the downstream CA3 network is attracted to a stored pattern and thus undergoes little change. We furthermore show that other aspects of the observed firing patterns can be explained by learning that occurs during behavioral testing. The CA3 thus displays both the learning and recall signatures of an attractor network. These observations, taken together with existing anatomical and behavioral evidence, make the strong case that CA3 constructs associative memories based on attractor dynamics.
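The core idea of attraction to a stored pattern under continuous, gradually morphed input can be illustrated with a toy Hopfield-style network. The sketch below is only a minimal stand-in for the biophysical CA3/dentate model described above; the pattern size, the cue weighting `gamma`, and the morphing procedure are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
p1, p2 = rng.choice([-1, 1], N), rng.choice([-1, 1], N)
W = (np.outer(p1, p1) + np.outer(p2, p2)) / N      # Hebbian storage of two patterns
np.fill_diagonal(W, 0)

def recall(cue, gamma=0.3, steps=30):
    """Iterate the network while the cue is continuously applied; return the settled state."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s + gamma * cue)           # recurrent drive plus continuous input
        s[s == 0] = 1
    return s

for alpha in np.linspace(0, 1, 11):                # morph the cue from p1 toward p2
    cue = np.sign((1 - alpha) * p1 + alpha * p2 + 1e-9 * rng.normal(size=N))
    out = recall(cue)
    print(f"morph {alpha:.1f}: overlap with p1 = {out @ p1 / N:+.2f}, with p2 = {out @ p2 / N:+.2f}")
```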

4.
Based on theoretical issues and neurobiological evidence, considerable interest has recently focused on dynamic computational elements in neural systems. Such elements respond to stimuli by altering their dynamical behavior rather than by changing a scalar output. In particular, neural oscillators capable of chaotic dynamics represent a potentially very rich substrate for complex spatiotemporal information processing. However, the response properties of such systems must be studied in detail before they can be used as computational elements in neural models. In this paper, we focus on the response of a very simple discrete-time neural oscillator model to a fixed input. We show that the oscillator responds to the stimulus through a fairly complex set of bifurcations, and shows critical switching between attractors. This information can be used to construct very sophisticated dynamic computational elements with well-understood response properties. Examples of such elements are presented in the paper. We end with a brief discussion of simple architectures for networks of dynamical elements, and the relevance of our results to neurobiological models.
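A minimal way to reproduce this kind of input-dependent bifurcation structure is to sweep the fixed input of a simple discrete-time neuron map and plot its asymptotic outputs. The sketch below uses a generic Aihara/Nagumo-Sato-style chaotic neuron map with illustrative parameters; it is not the specific oscillator model analyzed in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def f(y, eps=0.02):
    return 1.0 / (1.0 + np.exp(-y / eps))        # steep sigmoid output function

k, alpha = 0.7, 1.0                              # decay and refractory scaling (illustrative)
inputs = np.linspace(0.0, 1.0, 400)              # range of fixed external inputs
xs, ys = [], []
for a in inputs:
    y = 0.1
    for t in range(600):                         # iterate the map; keep only the asymptotic part
        y = k * y - alpha * f(y) + a
        if t > 500:
            xs.append(a)
            ys.append(f(y))

plt.plot(xs, ys, ",k")                           # bifurcation diagram: output vs fixed input
plt.xlabel("fixed input a")
plt.ylabel("asymptotic output f(y)")
plt.title("Bifurcation structure of a discrete-time neural oscillator (illustrative)")
plt.show()
```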

5.
Recurrence plots of neuronal spike trains
Recurrence plots, a recently developed qualitative method for diagnosing dynamical systems, have been applied to the analysis of the dynamics of neuronal spike trains recorded from the cerebellum and red nucleus of anesthetized cats. Recurrence plots revealed robust and common changes in the similarity structure of interspike interval sequences, as well as significant deviations from randomness in the serial ordering of intervals. Recurring episodes of similar, quasi-deterministic firing patterns suggest spontaneous modulation of the dynamical complexity of the trajectories of the observed neurons. These modulations are associated with changing dynamical properties of the neuronal spike-train-generating system. Their existence is compatible with the information processing paradigm of attractor neural networks.
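A recurrence plot for an interspike-interval sequence can be computed directly from the ISI series: embed the intervals, then mark all pairs of embedded vectors closer than a threshold. The embedding dimension, delay, and threshold below are assumptions for illustration, not the settings used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

def recurrence_plot(isi, dim=3, delay=1, eps=None):
    """Binary recurrence matrix of an interspike-interval sequence."""
    n = len(isi) - (dim - 1) * delay
    emb = np.array([isi[i:i + (dim - 1) * delay + 1:delay] for i in range(n)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * d.max()                      # threshold: 10% of the maximal distance
    return d < eps

# toy spike train: noisy, slowly modulated periodic firing
rng = np.random.default_rng(2)
isi = np.abs(10 + 2 * np.sin(np.arange(300) * 0.2) + rng.normal(0, 0.5, 300))

plt.imshow(recurrence_plot(isi), cmap="binary", origin="lower")
plt.xlabel("interval index j")
plt.ylabel("interval index i")
plt.title("Recurrence plot of an ISI sequence (toy data)")
plt.show()
```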

6.
Collective rhythmic dynamics from neurons is vital for cognitive functions such as memory formation, but how neurons self-organize to produce such activity is not well understood. Attractor-based computational models have been successfully implemented as a theoretical framework for memory storage in networks of neurons. Additionally, activity-dependent modification of synaptic transmission is thought to be the physiological basis of learning and memory. The goal of this study is to demonstrate that a pharmacological treatment shown to increase synaptic strength within in vitro networks of hippocampal neurons produces changes that follow the dynamical postulates theorized by attractor models. We use a grid of extracellular electrodes to study changes in network activity after this perturbation and show that there is a persistent increase in overall spiking and bursting activity after treatment. This increase in activity appears to recruit more “errant” spikes into bursts. Phase plots indicate a conserved activity pattern, suggesting that a synaptic potentiation perturbation to the attractor leaves it unchanged. Lastly, we construct a computational model to demonstrate that these synaptic perturbations can account for the dynamical changes seen within the network.

7.
The early processing of sensory information by neuronal circuits often includes a reshaping of activity patterns that may facilitate further processing in the brain. For instance, in the olfactory system the activity patterns that related odors evoke at the input of the olfactory bulb can be highly similar. Nevertheless, the corresponding activity patterns of the mitral cells, which represent the output of the olfactory bulb, can differ significantly from each other due to strong inhibition by granule cells and peri-glomerular cells. Motivated by these results we study simple adaptive inhibitory networks that aim to separate or even orthogonalize activity patterns representing similar stimuli. Since the animal experiences the different stimuli at different times it is difficult for the network to learn the connectivity based on their similarity; biologically it is more plausible that learning is driven by simultaneous correlations between the input channels. We investigate the connection between pattern orthogonalization and channel decorrelation and demonstrate that networks can achieve effective pattern orthogonalization through channel decorrelation if they simultaneously equalize their output levels. In feedforward networks biophysically plausible learning mechanisms fail, however, for even moderately similar input patterns. Recurrent networks do not have that limitation; they can orthogonalize the representations of highly similar input patterns. Even when they are optimized for linear neuronal dynamics they perform very well when the dynamics are nonlinear. These results provide insights into fundamental features of simplified inhibitory networks that may be relevant for pattern orthogonalization by neuronal circuits in general.
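A minimal sketch of pattern separation through channel decorrelation is a linear recurrent network whose decorrelating weights grow between co-active output channels (an anti-Hebbian rule). The learning rate, network size, and linear steady-state dynamics below are illustrative assumptions rather than the paper's circuit model.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
shared = rng.normal(size=N)
x1 = shared + 0.1 * rng.normal(size=N)     # two highly similar input patterns
x2 = shared + 0.1 * rng.normal(size=N)

M = np.zeros((N, N))                       # recurrent decorrelating weights (learned)

def output(x, M):
    # steady state of  tau * dy/dt = -y + x - M @ y   =>   y = (I + M)^(-1) x
    return np.linalg.solve(np.eye(len(x)) + M, x)

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for _ in range(500):                       # anti-Hebbian learning driven by output correlations
    for x in (x1, x2):
        y = output(x, M)
        M += 1e-3 * np.outer(y, y)         # grow recurrent suppression between co-active channels
        np.fill_diagonal(M, 0)             # no self-connection

print("input  similarity:", round(cos(x1, x2), 3))
print("output similarity:", round(cos(output(x1, M), output(x2, M)), 3))
```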

8.
It is well accepted that the brain's computation relies on spatiotemporal activity of neural networks. In particular, there is growing evidence of the importance of continuously and precisely timed spiking activity. Therefore, it is important to characterize memory states in terms of spike-timing patterns that give both reliable memory of firing activities and precise memory of firing timings. The relationship between memory states and spike-timing patterns has been studied empirically with large-scale recording of neuron populations in recent years. Here, by using a recurrent neural network model with dynamics at two time scales, we construct a dynamical memory network model which embeds both fast neural and synaptic variation and slow learning dynamics. A state vector is proposed to describe memory states in terms of spike-timing patterns of the neural population, and a distance measure on state vectors is defined to study several important phenomena of memory dynamics: partial memory recall, learning efficiency, and learning with correlated stimuli. We show that the distance measure can capture the timing difference of memory states. In addition, we examine the influence of network topology on learning ability, and show that local connections can increase the network's ability to embed more memory states. Together, these results suggest that the proposed system based on spike-timing patterns gives a productive model for the study of detailed learning and memory dynamics.
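One simple way to realize such a state vector is to summarize each memory state by per-neuron spike latencies and compare states with a normalized distance. The latency-based vector and Euclidean distance below are illustrative choices; the paper defines its own state vector and distance measure.

```python
import numpy as np

def state_vector(spike_times, window=50.0):
    """First-spike latency of each neuron within the window (window if silent)."""
    return np.array([min([t for t in ts if t < window], default=window)
                     for ts in spike_times])

def state_distance(v1, v2, window=50.0):
    """Normalized Euclidean distance between two spike-timing state vectors."""
    return np.linalg.norm(v1 - v2) / (np.sqrt(len(v1)) * window)

# toy example: a stored pattern, a jittered recall of it, and an unrelated pattern
rng = np.random.default_rng(7)
stored    = [[float(rng.uniform(0, 50))] for _ in range(100)]
recalled  = [[t + rng.normal(0, 2.0) for t in ts] for ts in stored]
unrelated = [[float(rng.uniform(0, 50))] for _ in range(100)]

v = state_vector(stored)
print("distance to recalled :", round(state_distance(v, state_vector(recalled)), 3))
print("distance to unrelated:", round(state_distance(v, state_vector(unrelated)), 3))
```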

9.
In the absence of sensory stimulation, neocortical circuits display complex patterns of neural activity. These patterns are thought to reflect relevant properties of the network, including anatomical features like its modularity. It is also assumed that the synaptic connections of the network constrain the repertoire of emergent, spontaneous patterns. Although the link between network architecture and network activity has been extensively investigated in the last few years from different perspectives, our understanding of the relationship between the network connectivity and the structure of its spontaneous activity is still incomplete. Using a general mathematical model of neural dynamics we have studied the link between spontaneous activity and the underlying network architecture. In particular, here we show mathematically how the synaptic connections between neurons determine the repertoire of spatial patterns displayed in the spontaneous activity. To test our theoretical result, we have also used the model to simulate spontaneous activity of a neural network, whose architecture is inspired by the patchy organization of horizontal connections between cortical columns in the neocortex of primates and other mammals. The dominant spatial patterns of the spontaneous activity, calculated as its principal components, coincide remarkably well with those patterns predicted from the network connectivity using our theory. The equivalence between the concept of dominant pattern and the concept of attractor of the network dynamics is also demonstrated. This in turn suggests new ways of investigating encoding and storage capabilities of neural networks.
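The predicted correspondence between connectivity and dominant spontaneous patterns can be checked in a toy linear rate model driven by noise: the leading principal component of the simulated activity should align with the leading eigenvector of a symmetric connectivity matrix. The symmetric random connectivity and linear dynamics below are simplifying assumptions, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, dt = 80, 20000, 0.1
A = rng.normal(size=(N, N))
J = (A + A.T) / 2
J = 0.9 * J / np.abs(np.linalg.eigvalsh(J)).max()   # symmetric connectivity, spectral radius 0.9

x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):                                   # linear rate dynamics driven by noise
    x += dt * (-x + J @ x) + np.sqrt(dt) * rng.normal(0, 0.3, N)
    X[t] = x

# dominant spatial pattern of spontaneous activity = first principal component
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
pc1 = Vt[0]

# leading eigenvector of the connectivity matrix
evals, evecs = np.linalg.eigh(J)
v1 = evecs[:, np.argmax(evals)]
print("overlap |<PC1, leading eigenvector>| =", round(abs(pc1 @ v1), 3))
```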

10.
Goldberg JA, Rokni U, Sompolinsky H. Neuron 2004, 42(3): 489-500
Ongoing spontaneous activity in the cerebral cortex exhibits complex spatiotemporal patterns in the absence of sensory stimuli. To elucidate the nature of this ongoing activity, we present a theoretical treatment of two contrasting scenarios of cortical dynamics: (1) fluctuations about a single background state and (2) wandering among multiple "attractor" states, which encode a single or several stimulus features. Studying simplified network rate models of the primary visual cortex (V1), we show that the single state scenario is characterized by fast and high-dimensional Gaussian-like fluctuations, whereas in the multiple state scenario the fluctuations are slow, low dimensional, and highly non-Gaussian. Studying a more realistic model that incorporates correlations in the feed-forward input, spatially restricted cortical interactions, and an experimentally derived layout of pinwheels, we show that recent optical-imaging data of ongoing activity in V1 are consistent with the presence of either a single background state or multiple attractor states encoding many features.

11.
Reverberating spontaneous synchronized brain activity is believed to play an important role in neural information processing. Whether and how external stimuli can influence this spontaneous activity is poorly understood. Because periodic synchronized network activity is also prominent in in vitro neuronal cultures, we used cortical cultures grown on multielectrode arrays to examine how spontaneous activity is affected by external stimuli. Spontaneous network activity before and after low-frequency electrical stimulation was quantified in several ways. Our results show that the initially stable pattern of stereotypical spontaneous activity was transformed into another activity pattern that remained stable for at least 1 h. The transformations consisted of changes in single site and culture-wide network activity as well as in the spatiotemporal dynamics of network bursting. We show for the first time that low-frequency electrical stimulation can induce long-lasting alterations in spontaneous activity of cortical neuronal networks. We discuss whether the observed transformations in network activity could represent a switch in attractor state.

12.
Tang S, Juusola M. PLoS ONE 2010, 5(12): e14455
The small insect brain is often described as an input/output system that executes reflex-like behaviors. It can also initiate neural activity and behaviors intrinsically, seen as spontaneous behaviors, different arousal states and sleep. However, less is known about how intrinsic activity in neural circuits affects sensory information processing in the insect brain and variability in behavior. Here, by simultaneously monitoring Drosophila's behavioral choices and brain activity in a flight simulator system, we identify intrinsic activity that is associated with the act of selecting between visual stimuli. We recorded neural output (multiunit action potentials and local field potentials) in the left and right optic lobes of a tethered flying Drosophila, while its attempts to follow visual motion (yaw torque) were measured by a torque meter. We show that when facing competing motion stimuli on its left and right, Drosophila typically generate large torque responses that flip from side to side. The delayed onset (0.1-1 s) and spontaneous switch-like dynamics of these responses, and the fact that the flies sometimes oppose the stimuli by flying straight, make this behavior different from the classic steering reflexes. Drosophila, thus, seem to choose one stimulus at a time and attempt to rotate toward its direction. With this behavior, the neural output of the optic lobes alternates, being augmented on the side chosen for body rotation and suppressed on the opposite side, even though the visual input to the fly eyes stays the same. Thus, the flow of information from the fly eyes is gated intrinsically. Such modulation can be noise-induced or intentional, with one possibility being that the fly brain highlights chosen information while ignoring the irrelevant, similar to what we know to occur in higher animals.

13.
Anatomic connections between brain areas affect information flow between neuronal circuits and the synchronization of neuronal activity. However, such structural connectivity does not coincide with effective connectivity (or, more precisely, causal connectivity), related to the elusive question “Which areas cause the present activity of which others?”. Effective connectivity is directed and depends flexibly on contexts and tasks. Here we show that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity. Integrating simulation and semi-analytic approaches, we study mesoscale network motifs of interacting cortical areas, modeled as large random networks of spiking neurons or as simple rate units. Through a causal analysis of time series of model neural activity, we show that different dynamical states generated by the same structural connectivity motif correspond to distinct effective connectivity motifs. Such effective motifs can display a dominant directionality, due to spontaneous symmetry breaking and effective entrainment between local brain rhythms, although all connections in the considered structural motifs are reciprocal. We then show that transitions between effective connectivity configurations (for instance, a reversal in the direction of inter-areal interactions) can be triggered reliably by brief perturbation inputs, properly timed with respect to an ongoing local oscillation, without the need for plastic synaptic changes. Finally, we analyze how the information encoded in spiking patterns of a local neuronal population is propagated across a fixed structural connectivity motif, demonstrating that changes in the active effective connectivity regulate both the efficiency and the directionality of information transfer. Previous studies stressed the role played by coherent oscillations in establishing efficient communication between distant areas. Going beyond these early proposals, we advance here that dynamic interactions between brain rhythms also provide the basis for the self-organized control of this “communication-through-coherence”, thus making possible a fast “on-demand” reconfiguration of global information routing modalities.
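Directed interactions of this kind are commonly estimated from time series with Granger-style measures: does adding the past of one signal reduce the prediction error of another? The sketch below applies a minimal lag-1 Granger gain to toy coupled autoregressive signals; it is a generic stand-in, not the causal analysis pipeline used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 5000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):                      # toy coupled AR(1) processes: x drives y
    x[t] = 0.8 * x[t-1] + rng.normal()
    y[t] = 0.8 * y[t-1] + 0.4 * x[t-1] + rng.normal()

def granger_gain(src, dst, lag=1):
    """Log reduction in residual variance of dst when src's past is added as a predictor."""
    Z_red  = np.column_stack([dst[lag-1:-1]])                  # dst's own past only
    Z_full = np.column_stack([dst[lag-1:-1], src[lag-1:-1]])   # plus src's past
    target = dst[lag:]
    def resid_var(Z):
        D = np.column_stack([Z, np.ones(len(Z))])
        beta, *_ = np.linalg.lstsq(D, target, rcond=None)
        return np.var(target - D @ beta)
    return np.log(resid_var(Z_red) / resid_var(Z_full))

print("x -> y:", round(granger_gain(x, y), 3))   # clearly positive: x Granger-causes y
print("y -> x:", round(granger_gain(y, x), 3))   # near zero: no causation in this direction
```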

14.
Brains were built by evolution to react swiftly to environmental challenges. Thus, sensory stimuli must be processed ad hoc, i.e., largely independently of the momentary brain state incidentally prevailing during stimulus occurrence. Accordingly, computational neuroscience strives to model the robust processing of stimuli in the presence of dynamical cortical states. A pivotal feature of ongoing brain activity is the regional predominance of EEG eigenrhythms, such as the occipital alpha or the pericentral mu rhythm, both peaking spectrally at 10 Hz. Here, we establish a novel generalized concept to measure event-related desynchronization (ERD), which allows one to model neural oscillatory dynamics also in the presence of dynamical cortical states. Specifically, we demonstrate that a somatosensory stimulus causes a stereotypic sequence of first an ERD and then an ensuing amplitude overshoot (event-related synchronization), which at a dynamical cortical state becomes evident only if the natural relaxation dynamics of the unperturbed EEG rhythms is used as the reference dynamics. Moreover, this computational approach also encompasses the more general notion of a “conditional ERD,” through which candidate explanatory variables can be scrutinized with regard to their possible impact on a particular oscillatory dynamics under study. Thus, the generalized ERD represents a powerful novel analysis tool for extending our understanding of inter-trial variability of evoked responses and therefore the robust processing of environmental stimuli.
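For reference, the classical baseline-referenced ERD, which the generalized measure described above extends, can be computed as the relative change of band-limited power after the stimulus. The toy alpha-band signal, band edges, and windows below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                     # sampling rate (Hz), illustrative
t = np.arange(-2, 3, 1 / fs)                   # time around a stimulus at t = 0
rng = np.random.default_rng(6)

# toy EEG: 10 Hz alpha whose amplitude dips after the stimulus, plus noise
amp = 1.0 - 0.6 * np.exp(-((t - 0.4) ** 2) / 0.1) * (t > 0)
eeg = amp * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2     # instantaneous alpha power

baseline = power[(t >= -1.5) & (t <= -0.5)].mean()    # pre-stimulus reference
erd = 100 * (power - baseline) / baseline             # percent change relative to baseline
print("mean ERD 0.2-0.8 s after stimulus: %.1f %%" % erd[(t >= 0.2) & (t <= 0.8)].mean())
```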

15.
Cortical neural networks exhibit high internal variability in their spontaneous dynamics, yet they can robustly and reliably respond to external stimuli with multilevel features, from microscopic irregular spiking of neurons to macroscopic oscillatory local field potentials. A comprehensive study integrating these multilevel features in spontaneous and stimulus-evoked dynamics, with seemingly distinct mechanisms, is still lacking. Here, we study the stimulus-response dynamics of biologically plausible excitation-inhibition (E-I) balanced networks. We confirm that networks around critical synchronous transition states can maintain strong internal variability but are sensitive to external stimuli. In this dynamical region, applying a stimulus to the network can reduce the trial-to-trial variability and shift the network oscillatory frequency while preserving the dynamical criticality. These multilevel features, widely observed in different experiments, cannot simultaneously occur in non-critical dynamical states. Furthermore, the dynamical mechanisms underlying these multilevel features are revealed using a semi-analytical mean-field theory that derives the macroscopic network field equations from the microscopic neuronal networks, enabling analysis with nonlinear dynamics theory and the linear noise approximation. The generic dynamical principle revealed here contributes to a more integrative understanding of neural systems and brain functions, incorporating multimodal and multilevel experimental observations. The E-I balanced neural network, in combination with the effective mean-field theory, can serve as a mechanistic modeling framework to study the multilevel neural dynamics underlying neural information and cognitive processes.

16.
It is a long-established fact that neuronal plasticity occupies the central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli. However, these stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widely spread forms of neuronal plasticity, that is, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations, such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, having their grounding in nature, further consolidate the biological relevance of our findings.
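Spike-timing-dependent plasticity, one of the two plasticity mechanisms invoked here, is commonly modeled with an exponential pair-based window: pre-before-post pairs potentiate, post-before-pre pairs depress. The amplitudes and time constants below are illustrative; the intrinsic-plasticity component of the theory is not shown.

```python
import numpy as np

def stdp_dw(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Total weight change from all pre/post spike pairs (times in ms)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:                                  # pre before post: potentiation
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:                                # post before pre: depression
                dw -= a_minus * np.exp(dt / tau_minus)
    return dw

pre  = [10.0, 60.0, 110.0]
post = [15.0, 55.0, 115.0]                              # mixture of causal and acausal pairs
print("net weight change:", round(stdp_dw(pre, post), 4))
```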

17.
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
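Synaptic depression and facilitation of the kind measured in those prefrontal connections are usually described by the Tsodyks-Markram dynamic-synapse model, sketched below for a regular presynaptic train. The parameter values are illustrative, not fitted values from the experimental study.

```python
import numpy as np

def tm_synapse(spike_times, U=0.2, tau_rec=200.0, tau_facil=600.0):
    """Relative PSP amplitudes of a depressing/facilitating synapse (times in ms)."""
    x, u = 1.0, U                 # x: available resources, u: utilization
    last_t, amps = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)      # resources recover
            u = U + (u - U) * np.exp(-dt / tau_facil)        # facilitation decays
        u = u + U * (1.0 - u)     # spike transiently increases utilization (facilitation)
        amps.append(u * x)        # transmitted amplitude
        x = x * (1.0 - u)         # spike depletes resources (depression)
        last_t = t
    return amps

spikes = np.arange(0, 500, 50.0)                             # 20 Hz presynaptic train
print([round(a, 3) for a in tm_synapse(spikes)])             # facilitation, then depression
```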

18.
19.
When the dimensionality of a neural circuit is substantially larger than the dimensionality of the variable it encodes, many different degenerate network states can produce the same output. In this review I will discuss three different neural systems that are linked by this theme. The pyloric network of the lobster, the song control system of the zebra finch, and the odor encoding system of the locust, while different in design, all contain degeneracies between their internal parameters and the outputs they encode. Indeed, although the dynamics of song generation and odor identification are quite different, computationally, odor recognition can be thought of as running the song generation circuitry backwards. In both of these systems, degeneracy plays a vital role in mapping a sparse neural representation devoid of correlations onto external stimuli (odors or song structure) that are strongly correlated. I argue that degeneracy between input and output states is an inherent feature of many neural systems, which can be exploited as a fault-tolerant method of reliably learning, generating, and discriminating closely related patterns.

20.
The prefrontal cortex (PFC) plays a crucial role in flexible cognitive behavior by representing task-relevant information with its working memory. Working memory with sustained neural activity is described as a neural dynamical system composed of multiple attractors, each of which corresponds to an active state of a cell assembly representing a fragment of information. Recent studies have revealed that the PFC not only represents multiple sets of information but also switches between multiple representations and transforms one set of information into another depending on the given task context. This representational switching between different sets of information is possibly generated endogenously by flexible network dynamics, but the details of the underlying mechanisms are unclear. Here we propose a dynamically reorganizable attractor network model based on certain internal changes in synaptic connectivity, or short-term plasticity. We construct a network model based on a spiking neuron model with dynamical synapses, which can qualitatively reproduce the experimentally demonstrated representational switching in the PFC when a monkey was performing a goal-oriented action-planning task. The model holds the multiple sets of information required for action planning before and after representational switching through reconfiguration of functional cell assemblies. Furthermore, we analyzed the population dynamics of this model with a mean-field model and show that the changes in cell-assembly configuration correspond to changes in attractor structure that can be viewed as a bifurcation process of the dynamical system. This dynamical reorganization of a neural network could be a key to uncovering the mechanism of flexible information processing in the PFC.
