Similar Documents
20 similar documents found (search time: 31 ms)
1.
Hangartner RD, Cull P. Bio Systems 2000, 58(1-3): 167-176
In this paper, we address the question: can biologically feasible neural nets compute more than can be computed by deterministic polynomial-time algorithms? Since we want to maintain a claim of plausibility and reasonableness, we restrict ourselves to nets that are algorithmically easy to construct, and we rule out infinite precision in parameters and in any analog parts of the computation. Our approach is to consider the recent advances in randomized algorithms and see if such randomized computations can be described by neural nets. We start with a pair of neurons and show that by connecting them with reciprocal inhibition and some tonic input, the steady state will be one neuron ON and one neuron OFF, but which neuron will be ON and which will be OFF is chosen at random (perhaps it would be better to say that microscopic noise in the analog computation is turned into a megascale random bit). We then show that we can build a small network that uses this process to repeatedly generate random bits. This random bit generator can then be connected with a neural net representing the deterministic part of a randomized algorithm. We therefore demonstrate that these neural nets can carry out probabilistic computation and are thus less limited than classical neural nets.
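The flip-flop mechanism described above can be sketched with a simple rate model (a simplified stand-in for the paper's analog neurons; all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def settle_pair(tonic=1.0, w_inh=2.0, noise=1e-3, steps=2000, dt=0.01, rng=None):
    """Relax two reciprocally inhibiting rate neurons from a near-symmetric
    start. For w_inh > 1 the symmetric fixed point is unstable, so tiny
    initial noise decides which neuron ends up ON; the winner's index is
    effectively a random bit."""
    rng = np.random.default_rng() if rng is None else rng
    x = 0.33 + rng.normal(0.0, noise, size=2)   # start near the unstable symmetric point
    for _ in range(steps):
        drive = np.clip(tonic - w_inh * x[::-1], 0.0, None)  # reciprocal inhibition + tonic input
        x += dt * (drive - x)                   # leaky dynamics with rectified drive
    return int(np.argmax(x))                    # 0 or 1, chosen by the noise

bits = [settle_pair(rng=np.random.default_rng(seed)) for seed in range(200)]
```

Across many runs both outcomes occur, so repeated settling yields a stream of random bits, which is the building block the abstract describes.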

2.
Hopfield and Tank have shown that neural networks can be used to solve certain computationally hard problems; in particular, they studied the Traveling Salesman Problem (TSP). Based on network simulation results, they conclude that analog VLSI neural nets can be promising in solving these problems. Recently, Wilson and Pawley presented the results of their simulations, which contradict the original results and cast doubt on the usefulness of neural nets. In this paper we give the results of our simulations, which clarify some of the discrepancies. We also investigate the scaling of TSP solutions found by neural nets as the size of the problem increases. Further, we consider the neural net solution of the Clustering Problem, also a computationally hard problem, and discuss the types of problems that appear to be well suited for a neural net approach.
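For context, the Hopfield-Tank approach encodes a TSP state as a city-by-position matrix and minimizes an energy whose constraint terms vanish on valid tours. A minimal sketch of that energy (coefficient values are illustrative, not the tuned values from any of the simulations discussed):

```python
import numpy as np

def tsp_energy(V, dist, A=500.0, B=500.0, C=200.0, D=500.0):
    """Hopfield-Tank-style energy for a TSP state matrix V (row = city,
    column = tour position). The A, B, C terms penalize constraint
    violations and are zero on a valid tour (a permutation matrix),
    leaving D/2 times twice the tour length."""
    n = V.shape[0]
    row = np.sum(V.sum(axis=1) ** 2 - np.sum(V ** 2, axis=1))  # each city in one position
    col = np.sum(V.sum(axis=0) ** 2 - np.sum(V ** 2, axis=0))  # each position holds one city
    tot = (V.sum() - n) ** 2                                   # exactly n entries active
    Vnb = np.roll(V, 1, axis=1) + np.roll(V, -1, axis=1)       # tour neighbors (closed tour)
    length = np.einsum('xy,xi,yi->', dist, V, Vnb)             # counts each edge twice on a tour
    return A / 2 * row + B / 2 * col + C / 2 * tot + D / 2 * length

# four cities on a unit square; the identity matrix is the tour 0-1-2-3 (length 4)
pts = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], float)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
E = tsp_energy(np.eye(4), dist)
```

Gradient descent on this energy is what the analog network implements; the difficulty of choosing A, B, C, D so that descent lands on valid tours is one source of the discrepancies the abstract mentions.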

3.
Based on experiments with the locust olfactory system, we demonstrate that model sensory neural networks with lateral inhibition can generate stimulus specific identity-temporal patterns in the form of stimulus-dependent switching among small and dynamically changing neural ensembles (each ensemble being a group of synchronized projection neurons). Networks produce this switching mode of dynamical activity when lateral inhibitory connections are strongly non-symmetric. Such coding uses 'winner-less competitive' (WLC) dynamics. In contrast to the well known winner-take-all competitive (WTA) networks and Hopfield nets, winner-less competition represents sensory information dynamically. Such dynamics are reproducible, robust against intrinsic noise and sensitive to changes in the sensory input. We demonstrate the validity of sensory coding with WLC networks using two different formulations of the dynamics, namely the average and spiking dynamics of projection neurons (PN).

4.
This article describes new aspects of hysteresis dynamics which have been uncovered through computer experiments. There are several motivations to be interested in fast-slow dynamics. For instance, many physiological or biological systems display different time scales. The bursting oscillations which can be observed in neurons, beta-cells of the pancreas and population dynamics are essentially studied via bifurcation theory and analysis of fast-slow systems (Keener and Sneyd, 1998; Rinzel, 1987). Hysteresis is a possible mechanism to generate bursting oscillations. The first part of this article presents the computer techniques (the dotted-phase portrait, the bifurcation of the fast dynamics and the wave form) we have used to represent several patterns specific to hysteresis dynamics. This framework yields a natural generalization of the notion of bursting oscillations where, for instance, the active phase is chaotic and alternates with a quiescent phase. In the second part of the article, we emphasize the evolution to chaos which is often associated with bursting oscillations, on the specific example of the Hindmarsh-Rose system. This evolution to chaos has already been studied with classical tools of dynamical systems, but we give here numerical evidence on hysteresis dynamics and on some aspects of the wave form. The analytical proofs will be given elsewhere.
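The Hindmarsh-Rose system mentioned above is easy to reproduce numerically; a minimal Euler integration with standard textbook parameters (step size chosen for stability, not accuracy) shows the alternation of bursting and quiescent phases driven by the slow variable:

```python
import numpy as np

def hindmarsh_rose(I=3.25, r=0.006, T=1000.0, dt=0.005):
    """Euler-integrate the Hindmarsh-Rose neuron and return x(t).

    The fast (x, y) subsystem is bistable over a range of z; as z slowly
    sweeps back and forth it traverses that hysteresis loop, producing
    bursts of spikes alternating with quiescent phases."""
    a, b, c, d, s, xr = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6
    x, y, z = -1.6, -10.0, 2.0
    n = int(T / dt)
    xs = np.empty(n)
    for i in range(n):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - xr) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

xs = hindmarsh_rose()
```

With I around 3.25 the active phase is chaotic, matching the regime the article analyses; plotting xs against time shows the characteristic wave form.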

5.
Towards an artificial brain
M Conrad, R R Kampfner, K G Kirby, E N Rizki, G Schleis, R Smalz, R Trenary. Bio Systems 1989, 23(2-3): 175-215; discussion 216-218
Three components of a brain model operating on neuromolecular computing principles are described. The first component comprises neurons whose input-output behavior is controlled by significant internal dynamics. Models of discrete enzymatic neurons, reaction-diffusion neurons operating on the basis of the cyclic nucleotide cascade, and neurons controlled by cytoskeletal dynamics are described. The second component of the model is an evolutionary learning algorithm which is used to mold the behavior of enzyme-driven neurons or small networks of these neurons for specific function, usually pattern recognition or target seeking tasks. The evolutionary learning algorithm may be interpreted either as representing the mechanism of variation and natural selection acting on a phylogenetic time scale, or as a conceivable ontogenetic adaptation mechanism. The third component of the model is a memory manipulation scheme, called the reference neuron scheme. In principle it is capable of orchestrating a repertoire of enzyme-driven neurons for coherent function. The existing implementations, however, utilize simple neurons without internal dynamics. Spatial navigation and simple game playing (using tic-tac-toe) provide the task environments that have been used to study the properties of the reference neuron model. A memory-based evolutionary learning algorithm has been developed that can assign credit to the individual neurons in a network. It has been run on standard benchmark tasks, and appears to be quite effective both for conventional neural nets and for networks of discrete enzymatic neurons. The models have the character of artificial worlds in that they map the hierarchy of processes in the brain (at the molecular, neuronal, and network levels), provide a task environment, and use this relatively self-contained setup to develop and evaluate learning and adaptation algorithms.

6.
By “neural net” will be meant “neural net without circles.” Every neural net effects a transformation from inputs (i.e., firing patterns of the input neurons) to outputs (firing patterns of the output neurons). Two neural nets will be called equivalent if they effect the same transformation from inputs to outputs. A canonical form is found for neural nets with respect to equivalence; i.e., a class of neural nets is defined, no two of which are equivalent, and which contains a neural net equivalent to any given neural net. This research was supported by the U.S. Air Force under Contract AF 49(638)-414 monitored by the Air Force Office of Scientific Research.

7.
Frontal cortex is thought to underlie many advanced cognitive capacities, from self-control to long term planning. Reflecting these diverse demands, frontal neural activity is notoriously idiosyncratic, with tuning properties that are correlated with endless numbers of behavioral and task features. This menagerie of tuning has made it difficult to extract organizing principles that govern frontal neural activity. Here, we contrast two successful yet seemingly incompatible approaches that have begun to address this challenge. Inspired by the indecipherability of single-neuron tuning, the first approach casts frontal computations as dynamical trajectories traversed by arbitrary mixtures of neurons. The second approach, by contrast, attempts to explain the functional diversity of frontal activity with the biological diversity of cortical cell-types. Motivated by the recent discovery of functional clusters in frontal neurons, we propose a consilience between these population and cell-type-specific approaches to neural computations, advancing the conjecture that evolutionarily inherited cell-type constraints create the scaffold within which frontal population dynamics must operate.

8.
In this paper, we investigate the use of partial correlation analysis for the identification of functional neural connectivity from simultaneously recorded neural spike trains. Partial correlation analysis allows one to distinguish between direct and indirect connectivities by removing the portion of the relationship between two neural spike trains that can be attributed to linear relationships with recorded spike trains from other neurons. As an alternative to the common frequency domain approach based on the partial spectral coherence we propose a new statistic in the time domain. The new scaled partial covariance density provides additional information on the direction and the type, excitatory or inhibitory, of the connectivities. In simulation studies, we investigated the power and limitations of the new statistic. The simulations show that the detectability of various connectivity patterns depends on various parameters such as connectivity strength and background activity. In particular, the detectability decreases with the number of neurons included in the analysis and increases with the recording time. Further, we show that the method can also be used to detect multiple direct connectivities between two neurons. Finally, the methods of this paper are illustrated by an application to neurophysiological data from spinal dorsal horn neurons.
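The core idea, removing the linear contribution of all other recorded trains, can be illustrated on continuous data with the precision-matrix form of partial correlation (a generic sketch of the principle; the paper's scaled partial covariance density for spike trains is more elaborate):

```python
import numpy as np

def partial_correlation(X):
    """Partial correlation matrix from samples X (n_samples x n_vars).

    Inverting the covariance matrix gives the precision matrix P; the
    partial correlation between variables i and j, controlling for all
    others, is -P_ij / sqrt(P_ii * P_jj)."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# chain A -> B -> C: A and C are marginally correlated, but only through B
rng = np.random.default_rng(0)
a = rng.normal(size=20000)
b = a + 0.5 * rng.normal(size=20000)
c = b + 0.5 * rng.normal(size=20000)
pc = partial_correlation(np.column_stack([a, b, c]))
```

Here the marginal correlation between A and C is large, while their partial correlation is near zero, which is exactly the direct-versus-indirect distinction the abstract describes.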

9.
The role of symmetry in simplifying the theory of complex neural systems is argued. When the structural symmetries of a network are expressed as an isomorphism group, implications emerge for the dynamics. Various qualitative possibilities concerning the stability of uniform motion in homogeneous nets are discussed, and an approach to neural hierarchies is outlined.

10.
The paper presents a methodology for using computational neurogenetic modelling (CNGM) to bring new insights into how genes influence the dynamics of brain neural networks. CNGM is a novel computational approach to brain neural network modelling that integrates dynamic gene networks with an artificial neural network (ANN) model. Interaction of genes in neurons affects the dynamics of the whole ANN model through neuronal parameters, which are no longer constant but change as a function of gene expression. Through optimization of the interactions within the internal gene regulatory network (GRN), the initial gene/protein expression values and the ANN parameters, particular target states of the neural network behaviour can be achieved, and statistics about gene interactions can be extracted. In this way, we have obtained an abstract GRN that contains predictions about particular gene interactions in neurons for the subunit genes of AMPA, GABAA and NMDA neuro-receptors. The extent of sequence conservation for 20 subunit proteins of all these receptors was analysed using standard bioinformatics multiple-alignment procedures. We observed an abundance of conserved residues, but the most interesting observation has been the consistent conservation of phenylalanine (F at position 269) and leucine (L at position 353) in all 20 proteins, with no mutations. We hypothesise that these regions can be the basis for mutual interactions. Existing knowledge on the evolutionary linkage of their protein families and analysis at the molecular level indicate that the expression of these individual subunits should be coordinated, which provides the biological justification for our optimized GRN.

11.
The dynamics of populations of neurons are studied analytically and by computer simulation. The nets are probabilistic but may be coupled into systems of interacting populations in accordance with the netlet approach, first described by Harth et al. (1970a,b). The analysis is here extended to include the situation in which the neuronal activity is given by finite difference equations of order two or greater. It is shown that the formalism developed here can take into account any combination of refractory periods, summation times and effective delays that may exist in a net. Stationary states are represented as the results of an eigenvalue problem. For slowly varying excitatory or inhibitory inputs these states exhibit marked hysteresis effects. Transient behavior is investigated and found to consist generally of damped oscillations about the stationary states.
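The hysteresis effect described above already appears in a first-order caricature of the netlet mean-activity map (a deliberately simplified stand-in for the paper's higher-order finite-difference equations; the gain and threshold values are illustrative):

```python
import numpy as np

def netlet_steady_state(h, w=8.0, theta=4.0, a0=0.5, iters=500):
    """Iterate the mean-activity map a_{t+1} = sigmoid(w*a_t + h - theta),
    where w is the net recurrent excitation and h the external input,
    and return the steady-state activity reached from initial activity a0."""
    a = a0
    for _ in range(iters):
        a = 1.0 / (1.0 + np.exp(-(w * a + h - theta)))
    return a

# sweep the input up from a quiescent start and down from an active start:
# in the bistable range the steady state depends on history (hysteresis)
h_vals = np.linspace(-2, 2, 41)
ups = [netlet_steady_state(h, a0=0.0) for h in h_vals]
downs = [netlet_steady_state(h, a0=1.0) for h in h_vals]
```

For strong recurrent excitation the map has two stable fixed points over a range of inputs, so slowly raising and lowering h traces different branches, the marked hysteresis the abstract reports.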

12.
Burst firing is a functionally important behavior displayed by neural circuits; it plays a primary role in the reliable transmission of electrical signals for neuronal communication. However, with respect to the computational capability of neural networks, most relevant studies are based on the spiking dynamics of individual neurons, while burst firing is seldom considered. In this paper, we carry out a comprehensive study comparing the performance of spiking and bursting dynamics on the capability of liquid computing, which is an effective approach to intelligent computation with neural networks. The results show that neural networks with bursting dynamics have much better computational performance than those with spiking dynamics, especially for complex computational tasks. Further analysis demonstrates that the fast firing pattern of bursting dynamics can markedly enhance the efficiency of synaptic integration from pre-synaptic neurons both temporally and spatially. This indicates that bursting dynamics can significantly enhance the complexity of network activity, implying high efficiency in information processing.

13.
The adult hippocampus is one of the primary neural structures involved in memory formation. In addition to synapse-specific modifications thought to encode information at the subcellular level, changes in intrahippocampal population activity and dynamics at the circuit level may contribute substantively to the functional capacity of this region. Within the hippocampus, the dentate gyrus has the potential to make a preferential contribution to neural circuit modification owing to the continuous addition of new granule cells. The integration of newborn neurons into pre-existing circuitry is hypothesized to deliver a unique processing capacity, as opposed to merely replacing dying granule cells. Recent studies have begun to assess the impact of hippocampal neurogenesis by examining the extent to which adult-born neurons participate in hippocampal networks, including when newborn neurons become engaged in ongoing network activity and how they modulate circuit dynamics via their unique intrinsic physiological properties. Understanding the contributions of adult neurogenesis to hippocampal function will provide new insight into fundamental aspects of brain plasticity, which can be used to guide therapeutic interventions to replace neural populations damaged by disease or injury.

14.
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
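As a point of reference for the Gibbs sampling the abstract contrasts with spiking dynamics, here is the standard (reversible) scheme on a tiny network of binary units; the weights and biases are made up for illustration, and the paper's non-reversible construction is substantially different:

```python
import numpy as np

def gibbs_sample(W, b, steps=20000, burn=2000, rng=None):
    """Gibbs-sample binary states x in {0,1}^n from p(x) proportional to
    exp(x.W.x / 2 + b.x), with W symmetric and zero-diagonal. Each unit is
    updated in turn from its exact conditional distribution. Returns the
    empirical mean activity of each unit after burn-in."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(b)
    x = rng.integers(0, 2, size=n).astype(float)
    acc = np.zeros(n)
    for t in range(steps):
        for i in range(n):
            p_on = 1.0 / (1.0 + np.exp(-(W[i] @ x - W[i, i] * x[i] + b[i])))
            x[i] = float(rng.random() < p_on)
        if t >= burn:
            acc += x
    return acc / (steps - burn)

W = np.array([[0.0, 1.5], [1.5, 0.0]])
b = np.array([0.5, -1.0])
means = gibbs_sample(W, b)
```

For this two-unit distribution the exact marginals can be computed by enumerating the four states, and the sampler's empirical means converge to them; the paper's contribution is achieving the same sampling semantics with dynamics that respect spiking neurons' temporal structure.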

15.
Effective desynchronization can be exploited as a tool for probing the functional significance of synchronized neural activity underlying perceptual and cognitive processes, or as a mild treatment for neurological disorders like Parkinson’s disease. In this article we show that pulse-based desynchronization techniques, originally developed for networks of globally coupled oscillators (Kuramoto model), can be adapted to networks of coupled neurons with dendritic dynamics. Compared to the Kuramoto model, the dendritic dynamics significantly alters the response of the neuron to the stimulation. Under medium stimulation amplitude a bistability of the response of a single neuron is observed. When stimulated at some initial phases, the neuron displays only modulations of its firing, whereas at other initial phases it stops oscillating entirely. Significant alterations in the duration of stimulation-induced transients are also observed. These transients endure after the end of the stimulation and cause maximal desynchronization to occur not during the stimulation, but with some delay after the stimulation has been turned off. To account for this delayed desynchronization effect, we have designed a new calibration procedure for finding the stimulation parameters that result in optimal desynchronization. We have also developed a new desynchronization technique by low-frequency entrainment. The stimulation techniques originally developed for the Kuramoto model, when using the new calibration procedure, can also be applied to networks with dendritic dynamics. However, the mechanism by which desynchronization is achieved is substantially different than for the network of Kuramoto oscillators. In particular, the addition of dendritic dynamics significantly changes the timing of the stimulation required to obtain desynchronization. We propose desynchronization stimulation for experimental analysis of synchronized neural processes and for the therapy of movement disorders.
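For reference, the baseline Kuramoto mean-field model that these stimulation techniques were originally developed for is only a few lines of code; the synchronization that desynchronizing stimulation aims to break up sets in as the coupling strength K crosses a critical value (all parameters here are illustrative):

```python
import numpy as np

def kuramoto_order(K, N=200, T=50.0, dt=0.02, rng=None):
    """Simulate N globally coupled Kuramoto phase oscillators with Gaussian
    natural frequencies and return the final order parameter
    r = |mean(exp(i*theta))|: r near 0 means desynchronized, r near 1
    means fully synchronized."""
    rng = np.random.default_rng(1) if rng is None else rng
    omega = rng.normal(0.0, 0.5, N)            # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)
    for _ in range(int(T / dt)):
        z = np.mean(np.exp(1j * theta))        # complex mean field r * exp(i*psi)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))

r_weak, r_strong = kuramoto_order(0.1), kuramoto_order(2.0)
```

Weak coupling leaves the population incoherent while strong coupling synchronizes it; pulse-based desynchronization acts on the synchronized regime, and the article's point is that dendritic dynamics changes how and when such pulses must be applied.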

16.
Humans are able to form internal representations of the information they process, a capability which enables them to perform many different memory tasks. The neural system therefore has to learn somehow to represent aspects of the environmental situation; this process is assumed to be based on synaptic changes. The situations to be represented vary, for example different types of static patterns but also dynamic scenes. How are neural networks consisting of mutually connected neurons capable of performing such tasks? Here we propose a new neuronal structure for artificial neurons. This structure allows one to disentangle the dynamics of the recurrent connectivity from the dynamics induced by synaptic changes due to the learning processes. The error signal is computed locally within the individual neuron. Thus, online learning is possible without any additional structures. Recurrent neural networks equipped with these computational units cope with different memory tasks. Examples illustrate how information is extracted from environmental situations comprising fixed patterns to produce sustained activity and to deal with simple algebraic relations.

17.
Epileptic seizure dynamics span multiple scales in space and time. Understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. Mathematical models have been developed to reproduce seizure dynamics across scales ranging from the single neuron to the neural population. In this study, we develop a network model of spiking neurons and systematically investigate the conditions, under which the network displays the emergent dynamic behaviors known from the Epileptor, which is a well-investigated abstract model of epileptic neural activity. This approach allows us to study the biophysical parameters and variables leading to epileptiform discharges at cellular and network levels. Our network model is composed of two neuronal populations, characterized by fast excitatory bursting neurons and regular spiking inhibitory neurons, embedded in a common extracellular environment represented by a slow variable. By systematically analyzing the parameter landscape offered by the simulation framework, we reproduce typical sequences of neural activity observed during status epilepticus. We find that exogenous fluctuations from extracellular environment and electro-tonic couplings play a major role in the progression of the seizure, which supports previous studies and further validates our model. We also investigate the influence of chemical synaptic coupling in the generation of spontaneous seizure-like events. Our results argue towards a temporal shift of typical spike waves with fast discharges as synaptic strengths are varied. We demonstrate that spike waves, including interictal spikes, are generated primarily by inhibitory neurons, whereas fast discharges during the wave part are due to excitatory neurons. Simulated traces are compared with in vivo experimental data from rodents at different stages of the disorder. 
We draw the conclusion that slow variations of global excitability, due to exogenous fluctuations from extracellular environment, and gap junction communication push the system into paroxysmal regimes. We discuss potential mechanisms underlying such machinery and the relevance of our approach, supporting previous detailed modeling studies and reflecting on the limitations of our methodology.

18.
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
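A toy version of the "spike only if it improves the representation" rule (a drastic simplification of the article's derivation, with made-up decoding weights and a one-dimensional signal) already shows the key consequence, a readout tracked to within a single spike's contribution:

```python
import numpy as np

def track(x_target=1.0, T=1.0, dt=0.001, lam=10.0, N=20):
    """Greedy spiking readout: each neuron carries a decoding weight d_i,
    its 'voltage' V_i = d_i * (x - x_hat) is the readout error projected
    onto that weight, and a neuron fires only when firing reduces the
    squared readout error, i.e. when V_i exceeds d_i**2 / 2. Returns the
    readout trace x_hat(t)."""
    d = np.where(np.arange(N) % 2 == 0, 0.05, -0.05)  # +/- decoding weights
    xhat, trace = 0.0, np.empty(int(T / dt))
    for t in range(len(trace)):
        xhat -= dt * lam * xhat                       # leaky readout decay
        V = d * (x_target - xhat)                     # per-neuron error voltage
        i = int(np.argmax(V - d ** 2 / 2))            # most-improving candidate
        if V[i] > d[i] ** 2 / 2:                      # spike only if it helps
            xhat += d[i]
        trace[t] = xhat
    return trace

tr = track()
```

Which individual neuron fires on any step is highly variable (any neuron with the right weight sign would do), yet the population readout stays pinned near the target, which is the article's point that variability at the single-unit level need not mean noise at the representation level.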

19.
The activity of networking neurons is largely characterized by the alternation of synchronous and asynchronous spiking sequences. One of the most relevant challenges scientists face today is, then, relating that evidence to the fundamental mechanisms through which the brain computes and processes information, as well as to the onset (or progression) of a number of neurological illnesses. In other words, the problem is how to associate an organized dynamics of interacting neural assemblies with a computational task. Here we show that computation can be seen as a feature emerging from the collective dynamics of an ensemble of networking neurons, which interact by means of adaptive dynamical connections. Namely, by associating logical states with synchronous neural dynamics, we show how the usual Boolean logic can be fully recovered, and a universal Turing machine can be constructed. Furthermore, we show that, besides static binary gates, a wider class of logical operations can be efficiently constructed as the fundamental computational elements interact within an adaptive network, each operation being represented by a specific motif. Our approach qualitatively differs from past attempts to encode information and compute with complex systems, where computation was instead the consequence of control loops enforcing a desired state onto the specific system's dynamics. Being the result of an emergent process, the computation mechanism described here is not limited to binary Boolean logic, but can involve a much larger number of states. As such, our results can shed light on new concepts for understanding the real computing processes taking place in the brain.

20.
Experimental observations of simultaneous activity in large cortical areas have seemed to justify a large network approach in early studies of neural information codes and memory capacity. This approach has overlooked, however, the segregated nature of cortical structure and functionality. Employing graph-theoretic results, we show that, given the estimated number of neurons in the human brain, there are only a few primal sizes that can be attributed to neural circuits under probabilistically sparse connectivity. The significance of this finding is that neural circuits of relatively small primal sizes in cyclic interaction, implied by inhibitory interneuron potentiation and excitatory inter-circuit potentiation, generate relatively long non-repetitious sequences of asynchronous primal-length periods. The meta-periodic nature of such circuit interaction translates into meta-periodic firing-rate dynamics, representing cortical information. It is finally shown that interacting neural circuits of primal sizes 7 or less exhaust most of the capacity of the human brain, with relatively little room to spare for circuits of larger primal sizes. This also appears to ratify experimental findings on the human working memory capacity.
