Similar Articles
20 similar articles found (search time: 31 ms)
1.
The hippocampus plays an important role in establishing long-term memory, i.e., in forming short-term memories of spatially and temporally associated input information. Tsukada et al. (1996) proposed the spatiotemporal learning rule based on differences observed in hippocampal long-term potentiation (LTP) induced by various spatiotemporal pattern stimuli. An essential point of this learning rule is that the change in synaptic weight depends on both the spatial coincidence and the temporal summation of input pulses. We applied this rule to a single-layered neural network and compared its ability to separate spatiotemporal patterns with that of other rules, including the Hebbian learning rule and its extensions. The simulation results showed that the spatiotemporal learning rule had the highest efficiency in discriminating spatiotemporal pattern sequences, while the Hebbian learning rule (including its extensions) was sensitive to differences in spatial patterns.
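To make the contrast concrete, here is a minimal sketch of the two rule families compared above: a plain Hebbian outer-product update versus a spatiotemporal update whose presynaptic term is a leaky temporal summation of recent input pulses. The exponential trace, learning rate, and time constant are illustrative assumptions, not Tsukada et al.'s exact formulation.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    # Classic Hebb: weight change proportional to coincident pre/post activity,
    # insensitive to the temporal order of earlier input pulses.
    return w + eta * np.outer(post, pre)

def spatiotemporal_update(w, pre_spike_seq, post, tau=5.0, eta=0.01):
    # Leaky temporal summation of each input line's recent pulses, so the
    # update depends on spatial coincidence AND input timing (illustrative
    # trace, not the paper's exact rule).
    trace = np.zeros_like(pre_spike_seq[0], dtype=float)
    for pulses in pre_spike_seq:
        trace = trace * np.exp(-1.0 / tau) + pulses
    return w + eta * np.outer(post, trace)

post = np.array([1.0, 0.0])                 # postsynaptic activity
pre_seq = [np.array([1.0, 0.0, 1.0]),       # pulses at t = 0
           np.array([1.0, 1.0, 0.0])]       # pulses at t = 1
w_st = spatiotemporal_update(np.zeros((2, 3)), pre_seq, post)
w_hb = hebbian_update(np.zeros((2, 3)), pre_seq[-1], post)
```

Two input sequences with the same spatial totals but different timing yield different traces, and hence different weight changes, which is the property that lets the rule separate spatiotemporal sequences.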

2.
Rhythmic activity of the brain often depends on synchronized spiking of interneuronal networks interacting with principal neurons. The quest for physiological mechanisms regulating network synchronization has therefore been firmly focused on synaptic circuits. However, it has recently emerged that synaptic efficacy could be influenced by astrocytes that release signalling molecules into their macroscopic vicinity. To understand how this volume-limited synaptic regulation can affect oscillations in neural populations, here we explore an established artificial neural network mimicking hippocampal basket cells receiving inputs from pyramidal cells. We find that network oscillation frequencies and average cell firing rates are resilient to changes in excitatory input even when such changes occur in a significant proportion of participating interneurons, be they randomly distributed or clustered in space. The astroglia-like, volume-limited regulation of excitatory synaptic input appears to better preserve network synchronization (compared with a similar action evenly spread across the network) while leading to a structural segmentation of the network into cell subgroups with distinct firing patterns. These observations provide us with some previously unknown insights into the basic principles of neural network control by astroglia.

3.
Synchronized oscillation is very commonly observed in many neuronal systems and might play an important role in the response properties of the system. We have studied how spontaneous oscillatory activity affects the responsiveness of a neuronal network, using a neural network model of the visual cortex built from Hodgkin-Huxley type excitatory (E-) and inhibitory (I-) neurons. When the isotropic local E-I and I-E synaptic connections were sufficiently strong, the network commonly generated gamma-frequency oscillatory firing patterns in response to random feed-forward (FF) input spikes. This spontaneous oscillatory network activity injects a periodic local current that can amplify a weak synaptic input and enhance the network's responsiveness. When E-E connections were added, we found that the strength of oscillation can be modulated by varying the FF input strength without any changes in single-neuron properties or interneuron connectivity. The response modulation is proportional to the oscillation strength, which leads to self-regulation such that the cortical network selectively amplifies various FF inputs according to their strength, without requiring any adaptation mechanism. We show that this selective cortical amplification is controlled by E-E cell interactions. We also found that the response amplification is spatially localized, which suggests that the responsiveness modulation may also be spatially selective. These findings point to a generalized mechanism by which neural oscillatory activity can enhance the selectivity of a neural network to FF inputs.

4.
Kurikawa T, Kaneko K. PLoS ONE 2011, 6(3): e17432
Learning is a process that helps create neural dynamical systems so that an appropriate output pattern is generated for a given input. Often, such a memory is considered to be embodied in one of the attractors of the neural dynamical system, the attractor reached depending on the initial neural state specified by an input. Neither the neural activity observed in the absence of inputs nor the changes caused in that activity when an input is provided were studied extensively in the past. However, recent experimental studies have reported the existence of structured spontaneous neural activity and of changes in it when an input is provided. Against this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon the application of an input, a phenomenon known as bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, I/O relations are successively memorized when the difference between the time scales is appropriate. After the learning process is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns is increased, the spontaneous neural activity generated after learning shows itineration over the previously learned output patterns. This theoretical finding shows remarkable agreement with recent experimental reports in which spontaneous neural activity in the visual cortex without stimuli itinerates over patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity can be a natural outcome of successive learning of several patterns, and that it facilitates bifurcation of the network when an input is provided.

5.
The synchronization frequency of neural networks and its dynamics have important roles in deciphering the working mechanisms of the brain. It has been widely recognized that the properties of functional network synchronization and its dynamics are jointly determined by network topology, network connection strength (i.e., the connection strength of different edges in the network), and external input signals, among other factors. However, mathematical and computational characterization of the relationships between network synchronization frequency and these three important factors is still lacking. This paper presents a novel computational simulation framework to quantitatively characterize the relationships between neural network synchronization frequency, network attributes, and input signals. Specifically, we constructed a series of neural networks, including simulated small-world networks, a real functional working-memory network derived from functional magnetic resonance imaging, and real large-scale structural brain networks derived from diffusion tensor imaging, and performed synchronization simulations on these networks via the Izhikevich neuron spiking model. Our experiments demonstrate that both the network synchronization strength and the synchronization frequency change according to the combination of the input signal frequency and the network's self-synchronization frequency. In particular, our extensive experiments show that the network synchronization frequency can be represented as a linear combination of the network self-synchronization frequency and the input signal frequency. This finding could be attributed to an intrinsically preserved principle across different types of neural systems, offering novel insights into the working mechanisms of neural systems.
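The Izhikevich spiking model used in the simulations above is compact enough to reproduce in a few lines. The sketch below uses the standard regular-spiking parameters from Izhikevich's 2003 formulation with simple Euler integration; the input current, duration, and step size are illustrative choices, not the paper's simulation settings.

```python
def izhikevich(current, t_ms=1000.0, dt=0.25,
               a=0.02, b=0.2, c=-65.0, d=8.0):
    # Izhikevich (2003) simple model, regular-spiking parameters,
    # forward-Euler integration; returns spike times in ms.
    v, u = c, b * c
    spikes = []
    for k in range(int(t_ms / dt)):
        if v >= 30.0:                 # spike cutoff
            spikes.append(k * dt)
            v, u = c, u + d           # reset and adapt
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
    return spikes

driven = izhikevich(10.0)   # sustained input: tonic spiking
quiet = izhikevich(0.0)     # no input: neuron settles at rest
```

In the paper's framework, many such units would be coupled through small-world or fMRI/DTI-derived connectivity matrices before the network synchronization frequency is measured.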

6.
A mathematical model of neural processing is proposed which incorporates a theory for the storage of information. The model consists of a network of neurons that linearly processes incoming neural activity. The network stores the input by modifying the synaptic properties of all of its neurons. The model lends support to a distributive theory of memory using synaptic modification. The dynamics of the processing and storage are represented by a discrete system. Asymptotic analysis is applied to the system to show the learning capabilities of the network under constant input. Results are also given to predict the network's ability to learn periodic input, and input subjected to small random fluctuations.

7.
A template matching model for pattern recognition is proposed. Following a previously proposed algorithm for synaptic modification (Hirai, 1980), the template of a stimulus pattern is self-organized as a spatial distribution pattern of matured synapses on the cells receiving modifiable synapses. Template matching is performed by a disinhibitory neural network cascaded beyond the neural layer composed of the cells receiving the modifiable synapses. The performance of the model has been simulated on a digital computer. After repeated presentations of a stimulus pattern, a cell receiving the modifiable synapses comes to hold the template of that pattern, and the cell in the latter layer of the disinhibitory neural network that receives the disinhibitory input from that cell becomes selectively sensitive to that pattern. Learning patterns are not restricted by previously learned ones; they can be subsets or supersets of the patterns previously learned. If an unknown pattern is presented to the model, no cell beyond the disinhibitory neural network will respond. However, if previously learned patterns are embedded in that pattern, the cells holding the templates of those patterns respond and are assumed to transmit the information to a higher center. The computer simulation also shows that the model can organize a clean template in a noisy environment.

8.
Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. This article studies the synaptic pruning that occurs in large networks of simulated spiking neurons in the absence of specific input patterns of activity. The evolution of connections between neurons was governed by an original bio-inspired spike-timing-dependent plasticity (STDP) modification rule that included a slow decay term. The network reached a steady state with a bimodal distribution of synaptic weights: weights were either incremented to the maximum value or decremented to the lowest value. After 1×10^6 time steps, the final number of synapses that remained active was below 10% of the number of initially active synapses, independent of network size. The synaptic modification rule did not introduce spurious biases in the geometrical distribution of the remaining active projections. The results show that, under certain conditions, the model is capable of generating spontaneously emergent cell assemblies.
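A single-synapse sketch of the kind of rule described above: pair-based STDP plus a slow decay term, with hard bounds that drive weights to either extreme. The amplitudes, time constant, and decay rate are invented for illustration and are not the article's parameters.

```python
import math

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    # Pair-based STDP: dt = t_post - t_pre (ms); potentiate when pre precedes post.
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def run_synapse(dt_pair, n_pairs=500, decay=1e-4, w=0.5, w_max=1.0):
    # One spike pair per step plus a slow weight decay; the hard bounds
    # produce the bimodal steady state (weights pinned at either extreme).
    for _ in range(n_pairs):
        w += stdp_dw(dt_pair) - decay
        w = min(max(w, 0.0), w_max)
    return w

w_pot = run_synapse(+2.0)   # pre leads post by 2 ms: saturates at w_max
w_dep = run_synapse(-2.0)   # post leads pre: pruned to zero
```

With consistent pre-before-post pairing the weight saturates at the upper bound; with the opposite ordering (or inactivity, via the decay term) it is pruned to zero, reproducing the bimodal distribution in miniature.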

9.
This paper studies the relation between the functional synaptic connections of two artificial neural networks and the correlation of their spiking activities. The model neurons had realistic non-oscillatory dynamic properties, and the networks showed oscillatory behavior as a result of their internal synaptic connectivity. We found that both excitation and inhibition cause phase locking of the oscillating activities. When the two networks excite each other, the oscillations synchronize with zero phase lag, whereas mutual inhibition between the networks results in anti-phase (half-period phase difference) synchronization. Correlations between the activities of the two networks can also be caused by correlated external inputs driving the systems (common input). Our analysis shows that when the networks exhibit oscillatory behavior and the rate of the common input is smaller than a characteristic network oscillator frequency, the cross-correlation functions between the activities of the two systems still carry information about the mutual synaptic connectivity. This information can be retrieved with linear partialization, removing the influence of the common input. We further explored the network responses to periodic external input. We found that when the input frequency is smaller than a certain threshold, the network responds with bursts at the same frequency as the input; above the threshold, the network responds at a fraction of the input frequency. This frequency threshold, which characterizes the oscillatory properties of the network, is also found to determine the limit up to which linear partialization works. Received: 20 October 1995 / Accepted in revised form: 20 May 1996
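In the simplest zero-lag case, linear partialization reduces to the partial correlation formula. The sketch below fabricates two signals driven by a shared "common input" and shows that partialling out that input removes the spurious correlation; the signals and noise levels are invented stand-ins, not the paper's data.

```python
import math
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def partial_corr(x, y, z):
    # Zero-lag linear partialization: correlation of x and y after
    # removing the linear influence of the common input z.
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

random.seed(1)
z = [random.gauss(0, 1) for _ in range(5000)]   # common input
x = [zi + random.gauss(0, 0.5) for zi in z]     # network 1 activity
y = [zi + random.gauss(0, 0.5) for zi in z]     # network 2 activity
```

The raw correlation of `x` and `y` is large even though neither drives the other; the partialized correlation is close to zero, which is the signature the authors exploit to isolate genuine mutual connectivity.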

10.
This study compares the ability of excitatory, feed-forward neural networks to construct good transformations on their inputs. The quality of such a transformation is judged by the minimization of two information measures: the information loss of the transformation and the statistical dependency of the output. The networks that are compared differ from each other in the parametric properties of their neurons and in their connectivity. The particular network parameters studied are output firing threshold, synaptic connectivity, and associative modification of connection weights. The network parameters that most directly affect firing levels are threshold and connectivity. Networks incorporating neurons with dynamic threshold adjustment produce better transformations. When the firing threshold is optimized, sparser synaptic connectivity produces a better transformation than denser connectivity. Associative modification of synaptic weights confers only a slight advantage in the construction of optimal transformations. Additionally, our research shows that some environments are better suited than others for recoding. Specifically, input environments high in statistical dependence, i.e., those environments most in need of recoding, are more likely to undergo successful transformations.
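For a binary output code, the "statistical dependency of the output" can be scored as the total correlation (multi-information): the sum of per-unit marginal entropies minus the joint entropy. This is one standard way to quantify such dependency and is offered as an assumption about the measure, not a claim about the authors' exact definition.

```python
import math
from collections import Counter

def entropy(samples):
    # Shannon entropy (bits) of an empirical distribution over samples.
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def total_correlation(patterns):
    # Statistical dependency of an output code: sum of per-unit marginal
    # entropies minus the joint entropy; zero iff the units are independent.
    joint = entropy([tuple(p) for p in patterns])
    marginals = sum(entropy([p[i] for p in patterns])
                    for i in range(len(patterns[0])))
    return marginals - joint

dependent = [(0, 0), (1, 1), (0, 0), (1, 1)]    # unit 2 copies unit 1
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]  # all combinations equally likely
```

A transformation that lowers this quantity while preserving the input's entropy is, in the study's terms, a good recoding.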

11.
For a ring neural network to function as a generator of rhythmic oscillation, mechanisms are required by which the oscillation is generated and maintained and its period controlled. This paper demonstrates by simulation that these mechanisms can be realized by employing a synaptic modification algorithm and by applying external inputs to the excitatory and inhibitory cells. When the constants in the synaptic modification algorithm are fixed, it is possible to select between two modes, the modification mode and the non-modification mode, using the excitatory input level to the excitatory cells alone. This property solves the problem of re-modification caused by the dispersion of average impulse densities (AIDs) when excitatory synchronous input is applied to the inhibitory cells.

12.
Several critical issues associated with the processing of olfactory stimuli in animals (but focusing on insects) are discussed with a view to designing a neural network that can process olfactory stimuli. This leads to the construction of a neural network that can learn and identify the quality (direction cosines) of an input vector or extract information from a sequence of correlated input vectors, where the latter corresponds to sampling a time-varying olfactory stimulus (or other generically similar pattern recognition problems). The network is constructed around a discrete-time content-addressable memory (CAM) module that basically satisfies the Hopfield equations with the addition of a unit-time-delay feedback. This modification improves the convergence properties of the network and is used to control a switch that activates the learning, or template formation, process when the input is "unknown". The network dynamics are embedded within a sniff cycle, which includes a larger time delay (i.e., an integer t_s > 1) that is also used to control the template formation switch. In addition, this time delay is used to modify the input into the CAM module so that the more dominant of two mingling odors, or an odor increasing against a background of odors, is more readily identified. The performance of the network is evaluated using Monte Carlo simulations, and numerical results are presented.

13.
In vitro neural networks of cortical neurons interfaced to a computer via multichannel microelectrode arrays (MEAs) provide a unique paradigm for creating a hybrid neural computer. Unfortunately, only rudimentary information about these in vitro networks' computational properties or the extent of their abilities is known. To study those properties, a liquid state machine (LSM) approach was employed in which the liquid (typically an artificial neural network) was replaced with a living cortical network and the input and readout functions were replaced by the MEA-computer interface. A key requirement of the LSM architecture is that inputs into the liquid state must result in separable outputs based on the liquid's response (the separation property). In this paper, high- and low-frequency multi-site stimulation patterns were applied to the living cortical networks. Two template-based classifiers, one based on Euclidean distance and a second based on cross-correlation, were then applied to measure the separation of the input-output relationship. The result was over 95% (99.8% when nonstationarity is compensated) input-reconstruction accuracy for the high- and low-frequency patterns, confirming the existence of the separation property in these biological networks.
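The two classifier types can be sketched as follows. The sinusoidal "responses" standing in for MEA activity patterns, the noise level, and the normalization details are illustrative assumptions, not the study's recordings or exact metrics.

```python
import numpy as np

def classify_euclidean(response, templates):
    # Nearest template by Euclidean distance.
    return int(np.argmin([np.linalg.norm(response - t) for t in templates]))

def classify_xcorr(response, templates):
    # Nearest template by peak normalized cross-correlation (lag-tolerant).
    r = (response - response.mean()) / (response.std() + 1e-12)
    scores = []
    for t in templates:
        s = (t - t.mean()) / (t.std() + 1e-12)
        scores.append(np.correlate(r, s, mode="full").max() / len(r))
    return int(np.argmax(scores))

# Invented stand-ins for the low- and high-frequency response templates.
x = np.linspace(0.0, 1.0, 200)
templates = [np.sin(2 * np.pi * 5 * x), np.sin(2 * np.pi * 20 * x)]
rng = np.random.default_rng(0)
noisy_high = templates[1] + 0.3 * rng.standard_normal(200)
```

If both classifiers consistently assign noisy responses to the template of the stimulus that evoked them, the input-output mapping is separable in the LSM sense.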

14.
We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances (which naturally balance the network's excitatory and inhibitory synapses) and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system: a memory phase, in which populations of neurons remain synchronized; an oscillatory phase, in which transitions between different synchronized populations appear; and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the phases, at well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations, including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity, and complex network topologies, makes it very likely that it also occurs in actual neural systems, as recent psychophysical experiments suggest.

15.
We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye-position gain-modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule, which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule are hypothesized to promote the development of head-centered output neurons during periods when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization.
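The associative trace rule at the heart of the model can be sketched in a few lines: a postsynaptic trace carries activity across consecutive input patterns, binding together inputs that occur close in time (here, retinal inputs at different eye positions but the same head-centered location). The linear output stage, trace constant, and learning rate below are simplifying assumptions, not the model's exact equations.

```python
import numpy as np

def trace_learning(inputs, w, eta=0.1, delta=0.5):
    # Associative trace rule: the postsynaptic trace y_bar persists across
    # consecutive patterns, so inputs arriving close together in time come
    # to drive the same output neurons.
    w = w.copy()
    y_bar = np.zeros(w.shape[0])
    for x in inputs:
        y = w @ x                                # linear output (simplification)
        y_bar = (1 - delta) * y_bar + delta * y  # exponential activity trace
        w += eta * np.outer(y_bar, x)
    return w

# Two input patterns presented in alternation, as if the eyes moved while
# the head stayed fixed (pattern identities are invented for illustration).
inputs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])] * 3
w_out = trace_learning(inputs, np.full((1, 3), 0.1))
```

Because the trace from one pattern is still active when the next arrives, the output neuron strengthens its weights onto both inputs, while the never-presented third input line is left untouched.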

16.
Computational modeling has played an important role in the dissection of the biophysical basis of rhythmic oscillations in thalamus that are associated with sleep and certain forms of epilepsy. In contrast, the dynamic filter properties of thalamic relay nuclei during states of arousal are not well understood. Here we present a modeling and simulation study of the throughput properties of the visually driven dorsal lateral geniculate nucleus (dLGN) in the presence of feedback inhibition from the perigeniculate nucleus (PGN). We employ thalamocortical (TC) and thalamic reticular (RE) versions of a minimal integrate-and-fire-or-burst type model and a one-dimensional, two-layered network architecture. Potassium leakage conductances control the neuromodulatory state of the network and eliminate rhythmic bursting in the presence of spontaneous input (i.e., wake up the network). The aroused dLGN/PGN network model is subsequently stimulated by spatially homogeneous spontaneous retinal input or spatio-temporally patterned input consistent with the activity of X-type retinal ganglion cells during full-field or drifting grating visual stimulation. The throughput properties of this visually-driven dLGN/PGN network model are characterized and quantified as a function of stimulus parameters such as contrast, temporal frequency, and spatial frequency. During low-frequency oscillatory full-field stimulation, feedback inhibition from RE neurons often leads to TC neuron burst responses, while at high frequency tonic responses dominate. Depending on the average rate of stimulation, contrast level, and temporal frequency of modulation, the TC and RE cell bursts may or may not be phase-locked to the visual stimulus. During drifting-grating stimulation, phase-locked bursts often occur for sufficiently high contrast so long as the spatial period of the grating is not small compared to the synaptic footprint length, i.e., the spatial scale of the network connectivity.

17.
The brain is self-writable; as the brain voluntarily adapts itself to a changing environment, the neural circuitry rearranges its functional connectivity by referring to its own activity. How the internal activity modifies synaptic weights is largely unknown, however. Here we report that spontaneous activity causes complex reorganization of synaptic connectivity without any external (or artificial) stimuli. Under physiologically relevant ionic conditions, CA3 pyramidal cells in hippocampal slices displayed spontaneous spikes with bistable slow oscillations of membrane potential, alternating between the so-called UP and DOWN states. The generation of slow oscillations did not require fast synaptic transmission, but their patterns were coordinated by local circuit activity. In the course of generating spontaneous activity, individual neurons acquired bidirectional long-lasting synaptic modification. The spontaneous synaptic plasticity depended on a rise in intracellular calcium concentrations of postsynaptic cells, but not on NMDA receptor activity. The direction and amount of the plasticity varied depending on slow oscillation patterns and synapse locations, and thus, they were diverse in a network. Once this global synaptic refinement occurred, the same neurons now displayed different patterns of spontaneous activity, which in turn exhibited different levels of synaptic plasticity. Thus, active networks continuously update their internal states through ongoing synaptic plasticity. With computational simulations, we suggest that with this slow oscillation-induced plasticity, a recurrent network converges on a more specific state, compared to that with spike timing-dependent plasticity alone.

18.
Overproduction and pruning during development is a phenomenon that can be observed in the number of organisms in a population, the number of cells in many tissue types, and even the number of synapses on individual neurons. The sculpting of synaptic connections in the brain of a developing organism is guided by its personal experience, which on a neural level translates to specific patterns of activity. Activity-dependent plasticity at glutamatergic synapses is an integral part of neuronal network formation and maturation in developing vertebrate and invertebrate brains. As development of the rodent forebrain transitions away from an over-proliferative state, synaptic plasticity undergoes modification. Late developmental changes in synaptic plasticity signal the establishment of a more stable network and relate to pronounced perceptual and cognitive abilities. In large part, activation of glutamate-sensitive N-methyl-d-aspartate (NMDA) receptors regulates synaptic stabilization during development and is a necessary step in memory formation processes that occur in the forebrain. A developmental change in the subunits that compose NMDA receptors coincides with developmental modifications in synaptic plasticity and cognition, and thus much research in this area focuses on NMDA receptor composition. We propose that there are additional, equally important developmental processes that influence synaptic plasticity, including mechanisms that are upstream (factors that influence NMDA receptors) and downstream (intracellular processes regulated by NMDA receptors) from NMDA receptor activation. The goal of this review is to summarize what is known and what is not well understood about developmental changes in functional plasticity at glutamatergic synapses, and in the end, attempt to relate these changes to maturation of neural networks.

19.
Acetylcholine (ACh) is a regulator of neural excitability and one of the neurochemical substrates of sleep. Amongst the cellular effects induced by cholinergic modulation are a reduction in spike-frequency adaptation (SFA) and a shift in the phase response curve (PRC). We demonstrate in a biophysical model how changes in neural excitability and network structure interact to create three distinct functional regimes: localized asynchronous, traveling asynchronous, and traveling synchronous. Our results qualitatively match those observed experimentally. Cortical activity during slow wave sleep (SWS) differs from that during REM sleep or waking states: during SWS there are traveling patterns of activity in the cortex, whereas in other states stationary patterns occur. Our model is a network composed of Hodgkin-Huxley type neurons with an M-current regulated by ACh, and regulation of the ACh level can account for dynamical changes between the functional regimes. Reducing the magnitude of this current recreates the reduction in SFA and the shift from a type 2 to a type 1 PRC observed in the presence of ACh. When SFA is minimal (in waking or REM sleep states, with high ACh), patterns of activity are localized and easily pinned by network inhomogeneities. When SFA is present (decreasing ACh), traveling waves of activity naturally arise, and a further decrease in ACh leads to a high degree of synchrony within the traveling waves. We also show that the level of ACh determines how sensitive network activity is to synaptic heterogeneity. These regimes may have profound functional significance: stationary patterns may play a role in properly encoding external input as memory, while traveling waves could lead to synaptic regularization. This gives unique insight into the role of ACh in determining patterns of cortical activity and the functional differences arising from those patterns.
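The ACh-dependent reduction in SFA described above can be illustrated with a much simpler stand-in than the authors' Hodgkin-Huxley network: an adaptive leaky integrate-and-fire neuron in which a spike-triggered adaptation current plays the role of the M-current. All parameters below are illustrative assumptions, not values from the paper.

```python
def lif_adapt(current=1.5, g_adapt=0.1, t_max=2000.0, dt=0.1,
              tau_m=20.0, tau_w=200.0, v_th=1.0):
    # Leaky integrate-and-fire neuron with an adaptation current w standing
    # in for the ACh-regulated M-current: larger g_adapt (low ACh) gives
    # stronger spike-frequency adaptation. Returns spike times.
    v, w, spikes = 0.0, 0.0, []
    for k in range(int(t_max / dt)):
        v += dt / tau_m * (-v + current - w)
        w += dt / tau_w * (-w)        # adaptation decays slowly
        if v >= v_th:
            spikes.append(k * dt)
            v = 0.0                   # reset
            w += g_adapt              # adaptation increment per spike
    return spikes

low_ach = lif_adapt(g_adapt=0.3)    # strong adaptation: SFA present
high_ach = lif_adapt(g_adapt=0.0)   # adaptation suppressed, as with high ACh
```

With the adaptation conductance turned off the neuron fires regularly at a high rate; with it on, the inter-spike intervals stretch out, which is the SFA that makes traveling waves possible in the full network model.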

20.
Jensen et al. (Learn Memory 3(2–3):243–256, 1996b) proposed an auto-associative memory model using an integrated short-term memory (STM) and long-term memory (LTM) spiking neural network. Their model requires that distinct pyramidal cells encoding different STM patterns fire in different high-frequency gamma subcycles within each low-frequency theta oscillation. Auto-associative LTM is formed by modifying the recurrent synaptic efficacy between pyramidal cells. In order to store auto-associative LTM correctly, the recurrent synaptic efficacy must be bounded. It must be upper bounded to prevent re-firing of pyramidal cells in subsequent gamma subcycles: if cells encoding one memory item were to re-fire synchronously with cells encoding another item in a subsequent gamma subcycle, the LTM stored via the modifiable recurrent synapses would be corrupted. It must also be lower bounded so that memory pattern completion can be performed correctly. This paper uses the original model by Jensen et al. as a basis to illustrate three points: first, the importance of coordinated LTM synaptic modification; second, the use of a generic mathematical formulation (the spike response model) that can theoretically extend the results to other spiking networks utilizing a threshold-fire spiking neuron model; and third, the interaction of the long-term and short-term memory networks, which possibly explains the asymmetric distribution of spike density in the theta cycle through the merger of STM patterns with the interaction of the LTM network.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号