Found 20 similar documents; search took 15 milliseconds
1.
Passive membrane properties of neurons, characterized by a linear voltage response to constant current stimulation, were investigated using a system model approach. This approach utilizes the derived expression for the input impedance of a network, which simulates the passive properties of neurons, to correlate measured intracellular recordings with the response of network models. In this study, the input impedances of different network configurations and of dentate granule neurons were derived as a function of the network elements and were validated with computer simulations. The parameters of the system model, which are the values of the network elements, were estimated using an optimization strategy. The system model provides better estimation of the network elements than the previously described signal model, owing to its explicit nature. In contrast, the signal model is an implicit function of the network elements, which requires intermediate steps to estimate some of the passive properties.
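The linear voltage response described in this abstract is, in its simplest form, that of a parallel RC circuit. As a minimal sketch (illustrative component values only; the paper's actual network elements and topology are not reproduced here), the input impedance of such a passive membrane model can be computed as:

```python
import numpy as np

def input_impedance(f_hz, r_ohm, c_farad):
    """Input impedance of a parallel RC circuit, the simplest
    passive (linear) membrane model: Z(f) = R / (1 + j*2*pi*f*R*C)."""
    return r_ohm / (1 + 1j * 2 * np.pi * f_hz * r_ohm * c_farad)

# Illustrative values (hypothetical, not taken from the paper):
# 100 MOhm input resistance, 100 pF membrane capacitance.
R, C = 100e6, 100e-12
freqs = np.array([0.0, 10.0, 100.0])
z = input_impedance(freqs, R, C)
# At DC the impedance equals R; |Z| falls off above the corner
# frequency f_c = 1 / (2*pi*R*C), here about 15.9 Hz.
```

Fitting the measured impedance of a real neuron would then amount to optimizing R and C (or the elements of a richer network) against intracellular recordings, as the abstract describes.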
2.
At the time of synaptogenesis, typically 50% of the neurons die. The biological role of this is still unclear, but there is
evidence in the visual system that many neurons projecting to topographically inappropriate parts of their target are eliminated
to improve the accuracy of the mapping. The signaling that determines neuronal survival involves electrical activity and trophic
factors. Based on these observations, we have elaborated a computational model for the self-organization of a two-layered
neural network. We observe changes in the topographical organization between the two layers. In layer 1, a traveling wave
of electrical activity is used as input. Activity transmission to layer 2 can generate, according to a Hebbian rule, a retrograde
death signal that is compensated by a trophic survival signal generated by the target cells. Approximately 50% of the neurons
die, and we observe refinement in the topography between the two layers. In alternative versions of the model, we show that
an equivalent reorganization can occur through Hebbian synaptic modification alone, but with less precision and efficiency.
When the two mechanisms are combined, synaptic modification provides no further improvement over that produced by neuronal
death alone. This computational study supports the hypothesis that neuronal death during development can play a role in the
refinement of topographical projections in the nervous system.
Received: 9 November 1998 / Accepted in revised form: 14 April 1999
3.
Jack W. Silverstein. Biological Cybernetics, 1976, 22(2):73-84
A mathematical model of neural processing is proposed which incorporates a theory for the storage of information. The model consists of a network of neurons that linearly processes incoming neural activity. The network stores the input by modifying the synaptic properties of all of its neurons. The model lends support to a distributive theory of memory using synaptic modification. The dynamics of the processing and storage are represented by a discrete system. Asymptotic analysis is applied to the system to show the learning capabilities of the network under constant input. Results are also given to predict the network's ability to learn periodic input, and input subjected to small random fluctuations.
4.
Masatoshi Nishikawa, Marcel Hörning, Masahiro Ueda, Tatsuo Shibata. Biophysical Journal, 2014, 106(3):723-734
Intracellular asymmetry in the signaling network works as a compass to navigate eukaryotic chemotaxis in response to guidance cues. Although the compass variable can be derived from self-organization dynamics, such as excitability, the responsible mechanism remains to be clarified. Here, we analyzed the spatiotemporal dynamics of the phosphatidylinositol 3,4,5-trisphosphate (PtdInsP3) pathway, which is crucial for chemotaxis. We show that spontaneous activation of PtdInsP3-enriched domains is generated by an intrinsic excitable system. Formation of the same signal domain could be triggered by various perturbations; for example, short impulse perturbations activated the intrinsic dynamics to form signal domains. We also observed the refractory behavior exhibited by typical excitable systems. We show that the chemotactic response of PtdInsP3 involves biasing the spontaneous excitation to orient the activation site toward the chemoattractant. Thus, this biased excitability embodies the compass variable that is responsible for both random cell migration and biased random walk. Our finding may explain how cells achieve high sensitivity to and robust coordination of the downstream activation that allows chemotactic behavior in the noisy environment outside and inside the cells.
6.
We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually-guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye position gain modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule is hypothesized to promote the development of head-centered output neurons during periods of time when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization.
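The associative trace rule described above can be sketched in a common textbook form (the abstract does not give the paper's exact equations, so the trace time constant, learning rate, and weight normalization below are assumptions):

```python
import numpy as np

def trace_learning_step(w, x, y_trace, y_now, alpha=0.1, eta=0.8):
    """One step of an associative trace rule (a standard form; the
    paper's exact update may differ). The output trace mixes current
    activity with its recent history, so inputs that occur close
    together in time are strengthened onto the same output neuron."""
    y_trace = (1.0 - eta) * y_now + eta * y_trace   # temporal trace of output
    w = w + alpha * y_trace * x                      # Hebbian update with trace
    return w / np.linalg.norm(w), y_trace            # keep weights bounded

# Two different input patterns presented in quick succession while the
# (hypothetical) output neuron is active: both get associated to it.
w = np.ones(4) / 2.0
y_trace = 0.0
for x, y in [(np.array([1.0, 0, 0, 0]), 1.0),
             (np.array([0.0, 1, 0, 0]), 1.0)]:
    w, y_trace = trace_learning_step(w, x, y_trace, y)
```

This is the mechanism the abstract invokes for head-centered tuning: retinal inputs seen across eye movements (head fixed) arrive close together in time and are bound onto the same output neuron.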
7.
Temporally precise sequences of neuronal spikes that span hundreds of milliseconds are observed in many brain areas, including songbird premotor nucleus, cat visual cortex, and primary motor cortex. Synfire chains, networks in which groups of neurons are connected via excitatory synapses into a unidirectional chain, are thought to underlie the generation of such sequences. It is unknown, however, how synfire chains can form in local neural circuits, especially long chains. Here, we show through computer simulation that long synfire chains can develop through spike-timing-dependent synaptic plasticity and axon remodeling, the pruning of prolific weak connections that follows the emergence of a finite number of strong connections. The formation process begins with a random network. A subset of neurons, called training neurons, intermittently receive superthreshold external input. Gradually, a synfire chain emerges through a recruiting process, in which neurons within the network connect to the tail of the chain started by the training neurons. The model is robust to varying parameters, as well as to natural events like neuronal turnover and massive lesions. Our model suggests that long synfire chains can form during development through self-organization, and that axon remodeling, ubiquitous in developing neural circuits, is essential to the process.
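The two ingredients named in this abstract, pair-based STDP and pruning-style axon remodeling, can be sketched as follows (a standard textbook STDP kernel with illustrative parameters; the paper's exact rule, amplitudes, and pruning threshold are assumptions):

```python
import numpy as np

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP kernel: potentiate when the presynaptic spike
    precedes the postsynaptic spike (dt > 0), depress otherwise.
    Parameters are illustrative, not the paper's."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

def remodel(weights, prune_below=0.05, cap=1.0):
    """Axon remodeling as described in the abstract: once a few strong
    connections emerge, prune the prolific weak ones and bound the
    strong ones. Threshold and cap are hypothetical values."""
    w = np.clip(weights, 0.0, cap)
    w[w < prune_below] = 0.0
    return w
```

Iterating STDP on correlated spike pairs concentrates weight onto a few synapses; `remodel` then removes the residue of weak connections, which is the pruning step the abstract argues is essential for long chains.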
8.
Bieberich E. Bio Systems, 2002, 66(3):145-164
The regulation of biological networks relies significantly on convergent feedback signaling loops that render a global output locally accessible. Ideally, the recurrent connectivity within these systems is self-organized by a time-dependent phase-locking mechanism. This study analyzes recurrent fractal neural networks (RFNNs), which utilize a self-similar or fractal branching structure of dendrites and downstream networks for phase-locking of reciprocal feedback loops: output from outer branch nodes of the network tree enters inner branch nodes of the dendritic tree in single neurons. This structural organization enables RFNNs to amplify re-entrant input by over-the-threshold signal summation from feedback loops with equivalent signal traveling times. The columnar organization of pyramidal neurons in the neocortical layers V and III is discussed as the structural substrate for this network architecture. RFNNs self-organize spike trains and render the entire neural network output accessible to the dendritic tree of each neuron within this network. As the result of a contraction mapping operation, the local dendritic input pattern contains a downscaled version of the network output coding structure. RFNNs perform robust, fractal data compression, thus coping with a limited number of feedback loops for signal transport in convergent neural networks. This property is discussed as a significant step toward the solution of a fundamental problem in neuroscience: how is neuronal computation in separate neurons and remote brain areas unified as an instance of experience in consciousness? RFNNs are promising candidates for engaging neural networks into a coherent activity and provide a strategy for the exchange of global and local information processing in the human brain, thereby ensuring the completeness of a transformation from neuronal computation into conscious experience.
9.
The brain’s activity is characterized by the interaction of a very large number of neurons that are strongly affected by noise.
However, signals often arise at macroscopic scales integrating the effect of many neurons into a reliable pattern of activity.
In order to study such large neuronal assemblies, one is often led to derive mean-field limits summarizing the effect of the
interaction of a large number of neurons into an effective signal. Classical mean-field approaches consider the evolution
of a deterministic variable, the mean activity, thus neglecting the stochastic nature of neural behavior. In this article,
we build upon two recent approaches that include correlations and higher order moments in mean-field equations, and study
how these stochastic effects influence the solutions of the mean-field equations, both in the limit of an infinite number
of neurons and for large yet finite networks. We introduce a new model, the infinite model, which arises from both equations
by a rescaling of the variables; this rescaling is invertible for finite-size networks and hence yields equations equivalent
to the previously derived models. The study of this model allows us to understand the qualitative behavior of such large-scale
networks. We show that, though the solutions of the deterministic mean-field equation constitute uncorrelated solutions of
the new mean-field equations, the stability properties of limit cycles are modified by the presence of correlations, and additional
non-trivial behaviors including periodic orbits appear when there were none in the mean field. The origin of all these behaviors
is then explored in finite-size networks where interesting mesoscopic scale effects appear. This study leads us to show that
the infinite-size system appears as a singular limit of the network equations, and for any finite network, the system will
differ from the infinite system.
10.
A network model that consists of neurons with a restricted range of interaction is presented. The neurons are connected mutually
by inhibition weights. The inhibition of the whole network can be controlled by the range of interaction of a neuron. By this
local inhibition mechanism, the present network can produce low-activity patterns from input patterns with a wide range of
high activities. Moreover, simulations show that the network has attractors for input patterns. The appearance of
attractors is caused by the local interaction of neurons. Thus, we expect that the network not only works as a kind of filter,
but also as a memory device for storing the produced patterns. In the present paper, the fundamental features and behavior
of the network are studied by using a simple network structure and a simple rule of interaction of neurons. In particular,
the relation between the interaction range of a neuron and the activity of input-output patterns is shown in simulation. Furthermore,
the limit of the transformation and the size of the basins of attraction are studied numerically.
Received: 5 January 1995 / Accepted in revised form: 13 November 1997
11.
The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of the striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. This representation is further enhanced by the low-rate asynchronous background activity in the striatum, supported by the balance between feedforward and feedback inhibition in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection, presumably implemented in the basal ganglia.
12.
Yazdanbakhsh A, Babadi B, Rouhani S, Arabzadeh E, Abbassian A. Biological Cybernetics, 2002, 86(5):367-378
In a feedforward network of integrate-and-fire neurons, where the firing of each layer is synchronous (synfire chain), the
final firing state of the network converges to one of two attractor states: either full activation or complete fading of the trailing
layers. In this article, we analyze various modes of pattern propagation in a synfire chain with random connection weights
and delta-type postsynaptic currents. We predict analytically that when the input is fully synchronized and the network is
noise free, varying the characteristics of the weights distribution would result in modes of behavior that are different from
those described in the literature. These are convergence to fixed points, limit cycles, multiple periodic, and possibly chaotic
dynamics. We checked our analytic results by computer simulation of the network, and showed that the above results can be
generalized when the input is asynchronous and neurons are spontaneously active at low rates.
Received: 27 July 2001 / Accepted in revised form: 23 October 2001
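The convergence to either full activation or complete fading can be illustrated with a minimal binary sketch of synchronous propagation (illustrative weight statistics and threshold; this is not the paper's integrate-and-fire model with delta-type postsynaptic currents):

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(n_layers=10, n_per_layer=100, w_mean=0.012,
              w_sd=0.003, threshold=1.0):
    """Binary sketch of synchronous propagation in a feedforward chain
    (all parameter values are illustrative). Each neuron fires iff the
    summed input from the previous layer's synchronous volley crosses
    threshold; the volley size is tracked layer by layer."""
    active = np.ones(n_per_layer)            # fully synchronized input volley
    sizes = [int(active.sum())]
    for _ in range(n_layers - 1):
        w = rng.normal(w_mean, w_sd, size=(n_per_layer, n_per_layer))
        drive = w @ active                   # summed synaptic drive
        active = (drive >= threshold).astype(float)
        sizes.append(int(active.sum()))
    return sizes

full = propagate()                 # strong mean weight: volley survives
faded = propagate(w_mean=0.008)    # weak mean weight: volley dies out
```

Varying the mean and spread of the weight distribution moves the chain between the two attractor states, which is the knob the abstract analyzes (along with the richer limit-cycle and possibly chaotic regimes that this coarse sketch cannot show).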
13.
Interaction mechanisms between excitatory and inhibitory impulse sequences operating on neurons play an important role in the processing of information by the nervous system. For instance, the convergence of excitatory and inhibitory influences on retinal ganglion cells to form their receptive fields has been taken as an example of neuronal sharpening by lateral inhibition. In order to analyze quantitatively the functional behavior of such a system, Shannon's entropy method for multiple access channels has been applied to biological two-input, one-output systems using the theoretical model developed by Tsukada et al. (1979). Here we extend this procedure with the aim of reducing redundancy of information in the input signal space of single neurons, and attempt to obtain a new interpretation of the information processing of the system. The concept of a redundancy-reducing mechanism in single neurons is examined and discussed for the following two processes. The first process is concerned with a signal space formed by superposing two random sequences on the input of a neuron. In this process, we introduce a coding technique to encode the inhibitory sequence by using the timing of the excitatory sequence, which is closely related to an encoding technique for multiple access channels with a correlated source (Marko, 1966, 1970, 1973; Slepian and Wolf, 1973) and which is an invariant transformation of the input signal space that does not change the information content of the input. The second process is concerned with a procedure for reducing redundant signals in the signal space mentioned before. In this connection, an important point is to see how single neurons reduce the dimensionality of the signal space via transformation with a minimum loss of effective information.
For this purpose we introduce the criterion that the average transmission of information from the signal space to the output does not change when redundant signals are added. This assumption is based on the fact that two signals are equivalent if and only if they have identical input-output behavior. The mechanism is examined and estimated using a computer-simulated model. As a result of this simulation we can estimate the minimal segmentation of the signal space which is necessary and sufficient for temporal pattern sensitivity in neurons.
14.
An improved neural-network model for the neural integrator of the oculomotor system: More realistic neuron behavior
The discharge rates of premotor, brain-stem neurons that create eye movements modulate in relation to eye velocity, yet the firing rates of extraocular motoneurons contain both eye-position and eye-velocity signals. The eye-position signal is derived from the eye-velocity command by means of a neural network which functions as a temporal integrator. We have previously proposed a network of lateral-inhibitory neurons that is capable of performing the required integration. That analysis centered on the temporal aspects of the signal processing for a limited class of idealized inputs. All of its cells were identical and carried only the integrated signal. Recordings in the brain stem, however, show that neurons in the region of the neural integrator have a variety of background firing rates, all carry some eye-velocity signal as well as the eye-position signal, and carry the former with different strengths depending on the type of eye movement being made. It was necessary to see if the proposed model could be modified to make its neurons more realistic. By modifying the spatial distribution of afferents to the network, we demonstrate that the same basic model functions properly in spite of afferents with nonuniform background firing rates. To introduce the eye-velocity signal, a double-layer network, consisting of inhibitory and excitatory cells, was necessary. By presenting the velocity input to only local regions of this network, it was shown that all cells in the network still carried the integrated signal and that its cells could carry different eye-velocity signals for different types of eye movements. Thus, this model simulates, quantitatively and qualitatively, the behavior of neurons seen in the region of the neural integrator.
15.
Correlation-based learning (CBL) models and self-organizing maps (SOM) are two classes of Hebbian models that have both been
proposed to explain the activity-driven formation of cortical maps. Both models differ significantly in the way lateral cortical
interactions are treated, leading to different predictions for the formation of receptive fields. The linear CBL models predict
that receptive field profiles are determined by the average values and the spatial correlations of the second order of the
afferent activity patterns, whereas SOM models map stimulus features. Here, we investigate a class of models which are characterized
by a variable degree of lateral competition and which have the CBL and SOM models as limit cases. We show that there exists
a critical value for intracortical competition below which the model exhibits CBL properties and above which feature mapping
sets in. The class of models is then analyzed with respect to the formation of topographic maps between two layers of neurons.
For Gaussian input stimuli we find that localized receptive fields and topographic maps emerge above the critical value for
intracortical competition, and we calculate this value as a function of the size of the input stimuli and the range of the
lateral interaction function. Additionally, we show that the learning rule can be derived via the optimization of a global
cost function in a framework of probabilistic output neurons which represent a set of input stimuli by a sparse code.
Received: 23 June 1999 / Accepted in revised form: 05 November 1999
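The variable degree of lateral competition, with CBL and SOM as limit cases, can be illustrated with a softmax whose inverse temperature plays the role of the competition parameter (an assumed stand-in for the paper's actual model; the critical value it calculates is not reproduced here):

```python
import numpy as np

def competitive_activation(h, beta):
    """Soft competition among output neurons via a softmax with
    inverse temperature beta. In this illustrative stand-in,
    beta -> 0 approaches the near-linear CBL regime (all outputs
    share activity), while large beta approaches the
    winner-take-all regime characteristic of SOM feature mapping."""
    e = np.exp(beta * (h - h.max()))   # shift for numerical stability
    return e / e.sum()

h = np.array([1.0, 0.5, 0.2])               # hypothetical afferent drives
soft = competitive_activation(h, beta=0.1)   # weak competition: nearly uniform
hard = competitive_activation(h, beta=50.0)  # strong competition: one winner
```

Sweeping `beta` between these extremes mimics the abstract's interpolation between the two model classes, with the transition to feature mapping occurring at some intermediate level of competition.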
16.
Önder Gürcan, Kemal S. Türker, Jean-Pierre Mano, Carole Bernon, Oğuz Dikenelli, Pierre Glize. Journal of Computational Neuroscience, 2014, 36(2):235-257
We present a novel computational model that detects temporal configurations of a given human neuronal pathway and constructs its artificial replication. This poses a great challenge since direct recordings from individual neurons are impossible in the human central nervous system, and the underlying neuronal pathway therefore has to be considered as a black box. For tackling this challenge, we used a branch of complex systems modeling called artificial self-organization, in which large sets of software entities interacting locally give rise to bottom-up collective behaviors. The result is an emergent model where each software entity represents an integrate-and-fire neuron. We then applied the model to the reflex responses of single motor units obtained from conscious human subjects. Comparison with appropriate surrogate data shows that the model recovers the functionality of real human neuronal pathways. What makes the model promising is the fact that, to the best of our knowledge, it is the first realistic model to self-wire an artificial neuronal network by efficiently combining neuroscience with artificial self-organization. Although there is no evidence yet of the model's connectivity mapping onto the human connectivity, we anticipate this model will help neuroscientists learn much more about human neuronal networks, and it could also be used to generate hypotheses to guide future experiments.
17.
18.
We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response will be a set of neural spike timings (roughly the instants of successive action potential peaks) that carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by using the maximum likelihood estimation method, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation feature of recurrent dynamical neuron network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series having a fixed amplitude and frequency but a randomly shot phase. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms at the end of this text. In addition, to demonstrate the success of this approach, the results are compared with those of a study involving the same model, nominal parameters, and stimulus structure, and with another study that works on different models.
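The Poisson spike-train likelihood named in this abstract takes a standard form once the response is binned; the sketch below (illustrative rates and bin width, not the paper's data or network model) shows the negative log-likelihood that serves as the fitting criterion:

```python
import numpy as np

def poisson_nll(rate_hz, spike_counts, dt_s):
    """Negative log-likelihood of binned spike counts under an
    inhomogeneous Poisson model (constant log(n!) terms dropped).
    rate_hz and spike_counts are per-bin arrays of equal length."""
    lam = np.clip(rate_hz * dt_s, 1e-12, None)   # expected count per bin
    return float(np.sum(lam - spike_counts * np.log(lam)))

# A model whose rate matches the data should score better (lower NLL)
# than a badly mismatched one; values here are hypothetical.
dt = 0.001
true_rate = np.full(1000, 20.0)                  # 20 Hz held for 1 s
rng = np.random.default_rng(1)
counts = rng.poisson(true_rate * dt)             # simulated spike counts
good = poisson_nll(true_rate, counts, dt)
bad = poisson_nll(np.full(1000, 200.0), counts, dt)
```

In the fitting procedure the abstract describes, `rate_hz` would be the output of the recurrent network driven by the stimulus, and the network parameters would be adjusted to minimize this quantity.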
19.
Nykamp DQ. Journal of Mathematical Biology, 2009, 59(2):147-173
We present an analysis of interactions among neurons in stimulus-driven networks that is designed to control for effects from
unmeasured neurons. This work builds on previous connectivity analyses that assumed connectivity strength to be constant with
respect to the stimulus. Since unmeasured neuron activity can modulate with the stimulus, the effective strength of common
input connections from such hidden neurons can also modulate with the stimulus. By explicitly accounting for the resulting
stimulus-dependence of effective interactions among measured neurons, we are able to remove ambiguity in the classification
of causal interactions that resulted from classification errors in the previous analyses. In this way, we can more reliably
distinguish causal connections among measured neurons from common input connections that arise from hidden network nodes.
The approach is derived in a general mathematical framework that can be applied to other types of networks. We illustrate
the effects of stimulus-dependent connectivity estimates with simulations of neurons responding to a visual stimulus.
This research was supported by the National Science Foundation grants DMS-0415409 and DMS-0748417.
20.
With the aid of a membrane introduction mass spectrometer (MIMS), the major product 2,3-butanediol (2,3-BDL) as well as the other metabolites from the fermentation carried out by Klebsiella oxytoca can be measured on-line simultaneously. A backpropagation neural network (BPN), recognized for its superior mapping ability, was applied in this control study. This neural-network adaptive control differs from conventional controls for fermentation systems in that measurements of cell mass and glucose are not included in the network model; only the product concentrations measured by the MIMS are involved. Oxygen composition was chosen as the control variable for this fermentation system and was directly correlated to the measured product concentrations in the controller model. A two-dimensional (number of input nodes by number of data sets) moving window for on-line, dynamic learning of this fermentation system was applied. The input nodes of the network were also properly selected, and the number of training data sets needed to obtain better control results was determined empirically. Two control structures for this 2,3-BDL fermentation are discussed and compared in this work. The effect of adding a time-delay element to the network controller was also investigated.