Similar Articles
20 similar articles found (search time: 46 ms)
1.
Khrennikov AY. Bio Systems, 2007, 90(3): 656-675
We attempt a geometrization of cognitive science and psychology by representing the information states of cognitive systems as points of a mental space given by a hierarchic m-adic tree. Associations are represented by balls, and ideas by collections of balls. We consider the dynamics of ideas based on lifting the dynamics of mental points, and we apply the resulting dynamical model to the flows of unconscious and conscious information in the human brain. In a series of models, Models 1–3, we consider cognitive systems with increasing complexity of psychological behavior, determined by the structure of the flows of associations and ideas.

2.
Bio Systems, 2008, 91(3): 656-675

3.
We propose a mathematical model of the memory retrieval process based on dynamical systems over a metric space of p-adic numbers representing a configuration 'space of ideas', in which two ideas are close if they have a sufficiently long common root. Our aim is to suggest a new way of conceptualizing human memory retrieval that might be useful for simulation purposes or for the construction of artificial intelligence devices, as well as for a deeper understanding of the process itself. The dynamical system is assumed to be located in a black-box processing unit (the 'subconscious') and controlled by an interface control unit (the 'conscious') that fixes parameters of the dynamical system and starts its iteration by sending an initial generating idea to it. We show that even simple p-adic dynamical systems admit behavioral scenarios that could explain some of the essential features of the human memory retrieval process.
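The ultrametric "common root" geometry described in this abstract is easy to make concrete. The sketch below is our own illustration, not the authors' code: ideas are digit strings over {0, ..., p-1}, the distance between two ideas is p^(-k) when they share a common prefix ("root") of length k, and retrieval simply returns the stored idea nearest the cue in this metric.

```python
def padic_distance(a, b, p=2):
    """Distance p**(-k), where k is the length of the common prefix of a and b."""
    if a == b:
        return 0.0
    k = 0
    for x, y in zip(a, b):
        if x != y:
            break
        k += 1
    return float(p) ** (-k)

def retrieve(memory, cue, p=2):
    """Return the stored idea closest to the cue in the p-adic metric."""
    return min(memory, key=lambda m: padic_distance(m, cue, p))

memory = ["0110", "0111", "1000"]
print(retrieve(memory, "0101"))  # the cue shares the root "01" with the first two ideas
```

Note the characteristic ultrametric behavior: all ideas sharing the same root of length k sit in one ball of radius p^(-k), matching the abstract's picture of associations as balls.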

4.
Khrennikov A. Bio Systems, 2003, 70(3): 211-233
We develop a quantum formalism (Hilbert space probabilistic calculus) for measurements performed over cognitive systems. In particular, this formalism is used for the mathematical modelling of the functioning of consciousness as a self-measuring quantum-like system. Using this formalism, we can predict averages of cognitive observables. Reflecting the basic idea of neurophysiological and psychological studies on the hierarchic structure of cognitive processes, we use p-adic hierarchic trees as a mathematical model of mental space. We also briefly discuss the general problem of choosing an adequate mental geometry.

5.
Khrennikov A. Bio Systems, 2011, 105(3): 250-262
We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain" that reduce mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based, surprisingly, on classical electromagnetic signals induced by the joint activity of neurons. This approach builds on a representation of quantum mechanics as a version of classical signal theory that was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, run in parallel, and information is actively transferred from one representation to the other. A QL concept, given in our model by a density operator, can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal; this signal has a covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays a crucial role in the creation of the QLR. Moreover, in our model the electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains of the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations.
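The central correspondence in this abstract, an abstract concept encoded as a density operator whose "concrete images" are realizations of a Gaussian signal with that operator as covariance, can be checked numerically. The following is our own minimal, stdlib-only illustration (the 2x2 `rho` and all names are invented for the example, not taken from the paper): we draw Gaussian samples through a hand-computed Cholesky factor and confirm that their empirical covariance recovers the encoding operator.

```python
import math
import random

# A 2x2 "density operator": positive semi-definite with unit trace (our example values).
rho = [[0.7, 0.3],
       [0.3, 0.3]]

# Cholesky factor L of rho (rho = L L^T), computed by hand for the 2x2 case.
l11 = math.sqrt(rho[0][0])
l21 = rho[1][0] / l11
l22 = math.sqrt(rho[1][1] - l21 ** 2)

rng = random.Random(0)
n = 50_000
xs, ys = [], []
for _ in range(n):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    xs.append(l11 * z1)             # "concrete image", component 1
    ys.append(l21 * z1 + l22 * z2)  # "concrete image", component 2

# The empirical covariance of the images recovers the encoding operator rho.
cov_xx = sum(x * x for x in xs) / n
cov_xy = sum(x * y for x, y in zip(xs, ys)) / n
cov_yy = sum(y * y for y in ys) / n
print(round(cov_xx, 2), round(cov_xy, 2), round(cov_yy, 2))
```

The unit-trace constraint on `rho` is what makes the same object readable both as a quantum state and as the covariance of a (suitably normalized) classical random signal.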

6.
The functional role of synchronization has attracted much interest and debate: in particular, synchronization may allow distant sites in the brain to communicate and cooperate with each other, and therefore may play a role in temporal binding, in attention or in sensory-motor integration mechanisms. In this article, we study another role for synchronization: the so-called “collective enhancement of precision”. We argue, in a full nonlinear dynamical context, that synchronization may help protect interconnected neurons from the influence of random perturbations—intrinsic neuronal noise—which affect all neurons in the nervous system. More precisely, our main contribution is a mathematical proof that, under specific, quantified conditions, the impact of noise on individual interconnected systems and on their spatial mean can essentially be cancelled through synchronization. This property then allows reliable computations to be carried out even in the presence of significant noise (as experimentally found e.g., in retinal ganglion cells in primates). This in turn is key to obtaining meaningful downstream signals, whether in terms of precisely-timed interaction (temporal coding), population coding, or frequency coding. Similar concepts may be applicable to questions of noise and variability in systems biology.

7.
I argue against a growing radical trend in current theoretical cognitive science that moves from the premises of embedded cognition, embodied cognition, dynamical systems theory and/or situated robotics to the conclusion that the mind is not in the brain, that cognition does not require representation, or both. I unearth the considerations at the foundation of this view: Haugeland's bandwidth-component argument to the effect that the brain is not a component in cognitive activity, and arguments inspired by dynamical systems theory and situated robotics to the effect that cognitive activity does not involve representations. Both of these strands depend not only on a shift of emphasis from higher cognitive functions to things like sensorimotor processes, but also on a certain understanding of how sensorimotor processes are implemented - as closed-loop control systems. I describe a much more sophisticated model of sensorimotor processing that is not only more powerful and robust than simple closed-loop control, but for which there is great evidence that it is implemented in the nervous system. This is the emulation theory of representation, according to which the brain constructs inner dynamical models, or emulators, of the body and environment, which are used in parallel with the body and environment to enhance motor control and perception and to provide faster feedback during motor processes, and which can be run off-line to produce imagery and evaluate sensorimotor counterfactuals. I then show that the emulation framework is immune to the radical arguments, and makes apparent why the brain is a component in cognitive activity, and exactly what the representations are in sensorimotor control.
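The control-theoretic advantage of an emulator over a bare closed loop can be shown in a few lines. The following is our own toy illustration (all names such as `plant_step` are invented here, and the "body" dynamics are trivially simple): sensory feedback arrives with a delay, but an internal forward model driven by the same motor commands predicts the current state immediately, giving the controller fresher feedback than the delayed senses.

```python
from collections import deque

def plant_step(x, u, dt=0.1):
    """One step of a trivially simple 'body': an integrator driven by command u."""
    return x + dt * u

def run(use_emulator, target=1.0, delay=5, steps=20):
    x = 0.0                          # true body state
    sensed = deque([0.0] * delay)    # delayed sensory channel
    estimate = 0.0                   # the emulator's state estimate
    for _ in range(steps):
        feedback = estimate if use_emulator else sensed[0]
        u = 2.0 * (target - feedback)        # proportional controller
        x = plant_step(x, u)                 # body moves
        estimate = plant_step(estimate, u)   # emulator mirrors the body in parallel
        sensed.append(x)                     # sensory report arrives 'delay' steps late
        sensed.popleft()
    return x

print(run(use_emulator=False), run(use_emulator=True))
```

With only delayed feedback the loop overshoots the target and oscillates; with the emulator in the loop the controller behaves as if feedback were instantaneous, which is exactly the "faster feedback during motor processes" point made above.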

8.
9.
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
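A minimal baseline for the "neural activity as MCMC sampling" idea can be sketched with a standard Gibbs sampler over binary "neurons" and a Boltzmann target distribution. Note the hedge: the paper itself argues Gibbs sampling is inconsistent with spiking dynamics and develops a non-reversible variant; the sketch below (our own, with invented weights) only illustrates the sampling interpretation that both approaches share.

```python
import itertools
import math
import random

W = [[0.0, 1.0], [1.0, 0.0]]   # symmetric coupling between two binary units
b = [-0.5, -0.5]               # biases

def energy(state):
    e = -sum(b[i] * state[i] for i in range(2))
    e -= 0.5 * sum(W[i][j] * state[i] * state[j]
                   for i in range(2) for j in range(2))
    return e

# Target Boltzmann distribution p(s) proportional to exp(-E(s)).
states = list(itertools.product([0, 1], repeat=2))
z = sum(math.exp(-energy(s)) for s in states)
target = {s: math.exp(-energy(s)) / z for s in states}

# Gibbs sampling: each "neuron" fires with its conditional probability.
rng = random.Random(0)
s = [0, 0]
n_sweeps = 200_000
counts = {st: 0 for st in states}
for _ in range(n_sweeps):
    for i in range(2):
        drive = b[i] + sum(W[i][j] * s[j] for j in range(2))
        s[i] = 1 if rng.random() < 1 / (1 + math.exp(-drive)) else 0
    counts[tuple(s)] += 1
empirical = {st: c / n_sweeps for st, c in counts.items()}
print(empirical)
```

The empirical state frequencies converge to the Boltzmann target, which is the sense in which stochastic "firing" implements probabilistic inference by sampling.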

10.
Conductance-based models of neurons from the lobster stomatogastric ganglion (STG) have been developed to understand the observed chaotic behavior of individual STG neurons. These models identify an additional slow dynamical process – calcium exchange and storage in the endoplasmic reticulum – as a biologically plausible source for the observed chaos in the oscillations of these cells. In this paper we test these ideas further by exploring the dynamical behavior when two model neurons are coupled by electrical or gap junction connections. We compare in detail the model results to the laboratory measurements of electrically-coupled neurons that we reported earlier. The experiments on the biological neurons varied the strength of the effective coupling by applying a parallel, artificial synapse, which changed both the magnitude and polarity of the conductance between the neurons. We observed a sequence of bifurcations that took the neurons from strongly synchronized in-phase behavior, through uncorrelated chaotic oscillations to strongly synchronized – and now regular – out-of-phase behavior. The model calculations reproduce these observations quantitatively, indicating that slow subcellular processes could account for the mechanisms involved in the synchronization and regularization of the otherwise individual chaotic activities. Received: 28 June 1999 / Accepted in revised form: 30 June 2000

11.
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also throw new light on the computational role of feedback in other complex biological dynamical systems, such as, for example, genetic regulatory networks.
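The core mechanism, feedback converting a fading memory into a nonfading one, can be shown with a hand-wired toy (our own sketch, not the paper's trained circuits): without feedback the trace of a brief input pulse decays away; with feedback from a saturating readout the unit latches the pulse indefinitely, like a one-bit working memory.

```python
import math

def run(feedback_gain, steps=200, leak=0.9):
    """Leaky unit driven by one input pulse, optionally closed through a readout."""
    x = 0.0
    trace = []
    for t in range(steps):
        pulse = 1.0 if t == 5 else 0.0       # brief input event
        readout = math.tanh(4.0 * x)         # stand-in for a trained readout
        x = leak * x + pulse + feedback_gain * readout
        trace.append(x)
    return trace

fading = run(feedback_gain=0.0)
latched = run(feedback_gain=0.5)
print(round(fading[-1], 6), round(latched[-1], 3))
```

With `feedback_gain=0.5` the map x -> 0.9x + 0.5*tanh(4x) has a stable fixed point away from zero, so the pulse is held long after the circuit's intrinsic memory (the 0.9 leak) would have erased it.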

12.
Khrennikov A. Bio Systems, 2000, 56(2-3): 95-120
We propose mathematical models of the information processes of unconscious and conscious thinking, based on a p-adic number representation of mental spaces. Unconscious thinking is described by classical cognitive mechanics (which generalizes Newtonian mechanics). Conscious thinking is described by quantum cognitive mechanics (which generalizes the pilot-wave model of quantum mechanics). The information state and motivation of a conscious cognitive system evolve under the action of classical information forces and a new quantum information force, namely the conscious force. Our model might provide mathematical foundations for some cognitive and psychological phenomena: collective conscious behavior, the connection between physiological and mental processes in a biological organism, Freud's psychoanalysis, hypnotism, homeopathy. It may also be used as the basis of a model of the conscious evolution of life.

13.
Generation and control of different dynamical modes of computational processes in a net of interconnected integrate-and-fire neurons are demonstrated. A net architecture resembling a generic cortical structure is formed from pairs of excitatory and inhibitory units with excitatory connections between and inhibitory connections within pairs. Integrate-and-fire model neurons derived from detailed conductance-based models of neocortical pyramidal cells and fast-spiking interneurons are employed for the excitatory and inhibitory units, respectively. Firing-rate adaptation is incorporated into the excitatory units based on the regulation of the slow afterhyperpolarization phase of action potentials by intracellular calcium ions. Saturation of synaptic conductances is implemented for the interconnections between units. It is shown that neuronal adaptation of the excitatory units can generate richer net dynamics than relaxation to fixed-point attractors in a pattern space. At strong adaptivity, i.e. when the neuronal excitability is strongly influenced by the preceding activity, complex dynamics of either aperiodic or limit-cycle character are generated in both the pattern space and the phase space of all dynamical variables. This regime corresponds to an exploratory mode of the system, in which the pattern space can be searched. At weak adaptivity, the dynamics are governed by fixed-point attractors in the pattern space, and this corresponds to a mode for retrieval of a particular pattern. In the brain, neuronal adaptivity can be regulated by various neuromodulators. The results are in accordance with those recently obtained by means of more abstract models formulated in terms of mean firing rates. The increased realism makes the present model reveal more detailed mechanisms and strengthens the relevance of the conclusions to biological systems. The simplicity and realism of the coupled integrate-and-fire neurons make the present model useful for studies of systems in which the temporal aspects of neural coding are important. Received: 8 December 1995 / Accepted in revised form: 23 January 1997
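The firing-rate adaptation mechanism described above can be sketched in its simplest form (our own heavily simplified version, not the paper's conductance-derived units, and all parameter values are invented): an integrate-and-fire neuron where each spike increments a slow hyperpolarizing current, so inter-spike intervals lengthen under constant drive.

```python
def lif_with_adaptation(drive=2.0, g_adapt=0.3, tau_adapt=50.0,
                        dt=0.1, t_max=500.0, threshold=1.0):
    """Euler simulation of a leaky integrate-and-fire neuron with
    spike-triggered adaptation; returns the list of spike times."""
    v, a = 0.0, 0.0
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt * (-v + drive - a)     # membrane, reduced by adaptation current a
        a += dt * (-a / tau_adapt)     # slow decay of adaptation
        if v >= threshold:
            spikes.append(t)
            v = 0.0                    # reset after spike
            a += g_adapt               # spike-triggered adaptation increment
        t += dt
    return spikes

spikes = lif_with_adaptation()
isis = [b - a for a, b in zip(spikes, spikes[1:])]
print(isis[0], isis[-1])  # later inter-spike intervals are longer (adaptation)
```

At strong adaptation the effective drive hovers near threshold, which is the regime the abstract links to richer, exploratory dynamics at the network level.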

14.
Mejias JF, Kappen HJ, Torres JJ. PLoS ONE, 2010, 5(11): e13651
Complex coherent dynamics are present in a wide variety of neural systems. A typical example is the voltage transitions between up and down states observed in cortical areas of the brain. In this work, we study this phenomenon via a biologically motivated stochastic model of up and down transitions. The model consists of a simple bistable rate dynamics in which the synaptic current is modulated by short-term synaptic processes that introduce stochasticity and temporal correlations. A complete analysis of our model, with both mean-field approaches and numerical simulations, shows the appearance of complex transitions between high (up) and low (down) neural activity states, driven by the synaptic noise, with permanence times in the up state distributed according to a power law. We show that the experimentally observed large fluctuations in up and down permanence times can be explained as the result of sufficiently noisy dynamical synapses with sufficiently large recovery times. Static synapses cannot account for this behavior, nor can dynamical synapses in the absence of noise.
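A stripped-down caricature of noise-driven up/down transitions (our own sketch, which uses a generic double-well drift rather than the authors' synaptically modulated model, and does not reproduce their power-law dwell times): the drift alone keeps the state in one well, while noise produces alternating "up" and "down" episodes.

```python
import random

def simulate(sigma=0.6, dt=0.01, steps=200_000, seed=3):
    """Count noise-driven transitions of a bistable rate variable
    x' = x - x**3 + noise, with wells at x = -1 ('down') and x = +1 ('up')."""
    rng = random.Random(seed)
    x = -1.0                       # start in the "down" well
    transitions = 0
    state = -1
    for _ in range(steps):
        x += dt * (x - x ** 3) + sigma * dt ** 0.5 * rng.gauss(0, 1)
        if state == -1 and x > 0.8:      # hysteresis thresholds avoid
            state, transitions = 1, transitions + 1   # double-counting jitter
        elif state == 1 and x < -0.8:
            state, transitions = -1, transitions + 1
    return transitions

print(simulate())  # noise drives repeated up/down transitions
```

Without noise (`sigma=0.0`) the state never leaves its well, mirroring the abstract's point that neither static synapses nor noiseless dynamical synapses can account for the observed transitions.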

15.
We consider the dependence of information transfer by neurons on the Type I vs. Type II classification of their dynamics. Our computational study is based on Type I and II implementations of the Morris-Lecar model. It mainly concerns neurons, such as those in the auditory or electrosensory system, which encode band-limited amplitude modulations of a periodic carrier signal, and which fire at random cycles yet preferred phases of this carrier. We first show that the Morris-Lecar model with additive broadband noise ("synaptic noise") can exhibit such firing patterns with either Type I or II dynamics, with or without amplitude modulations of the carrier. We then compare the encoding of band-limited random amplitude modulations for both dynamical types. The comparison relies on a parameter calibration that closely matches firing rates for both models across a range of parameters. In the absence of synaptic noise, Type I performs slightly better than Type II, and its performance is optimal for perithreshold signals. However, Type II performs well over a slightly larger range of inputs, and this range lies mostly in the subthreshold region. Further, Type II performs marginally better than Type I when synaptic noise, which yields more realistic baseline firing patterns, is present in both models. These results are discussed in terms of the tuning and phase locking properties of the models with deterministic and stochastic inputs.
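For reference, the Morris-Lecar model itself fits in a short script. The sketch below uses a common textbook "Type II" (Hopf) parameter set; the paper's calibrated Type I/Type II pair and its noise terms are not reproduced here, so this only shows the deterministic backbone the study builds on.

```python
import math

def morris_lecar(I=100.0, dt=0.05, t_max=1000.0):
    """Euler simulation of the Morris-Lecar model; returns the spike count.

    Parameters are a standard Type II (Hopf) set, not the paper's calibration.
    """
    C, gL, VL = 20.0, 2.0, -60.0
    gCa, VCa = 4.4, 120.0
    gK, VK = 8.0, -84.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    V, w = -60.0, 0.0
    spikes, above = 0, False
    t = 0.0
    while t < t_max:
        m_inf = 0.5 * (1 + math.tanh((V - V1) / V2))
        w_inf = 0.5 * (1 + math.tanh((V - V3) / V4))
        tau_w = 1.0 / math.cosh((V - V3) / (2 * V4))
        dV = (I - gL * (V - VL) - gCa * m_inf * (V - VCa)
              - gK * w * (V - VK)) / C
        dw = phi * (w_inf - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V > 0 and not above:        # upward crossing of 0 mV counts as a spike
            spikes, above = spikes + 1, True
        elif V < 0:
            above = False
        t += dt
    return spikes

print(morris_lecar())  # repetitive firing above the oscillation threshold
```

In this Type II regime the onset of repetitive firing is a (subcritical) Hopf bifurcation; swapping in Type I (SNIC) parameters changes the onset to arbitrarily low frequencies, which is the distinction the abstract's comparison rests on.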

16.
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance decision-making processes in perception and cognition have been implicated with it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed “winnerless competition”, which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a general approach to study the dynamics of interacting populations of spiking networks.
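The mean-field picture, two attractors separated by a saddle, with noise doing the switching, can be caricatured with a two-population Lotka-Volterra system (our own sketch, far simpler than the paper's spiking networks; all parameter values are invented). Strong cross-inhibition (c > 1) makes the "x wins" and "y wins" states attracting, and the winner is set by the initial condition; with noise, the active population can switch at random times.

```python
import random

def simulate(x0, y0, c=2.0, sigma=0.0, dt=0.01, steps=100_000, seed=7):
    """Euler-Maruyama simulation of two competing populations;
    returns the final (x, y) activity levels."""
    rng = random.Random(seed)
    x, y = x0, y0
    for _ in range(steps):
        nx = sigma * dt ** 0.5 * rng.gauss(0, 1)
        ny = sigma * dt ** 0.5 * rng.gauss(0, 1)
        x = max(0.0, x + dt * x * (1 - x - c * y) + nx)   # rates stay non-negative
        y = max(0.0, y + dt * y * (1 - y - c * x) + ny)
    return x, y

# Deterministic runs: the winner depends only on the initial condition.
print(simulate(1.0, 0.01))   # x dominates
print(simulate(0.01, 1.0))   # y dominates
# With noise, the dominant population can be knocked over to the other attractor.
print(simulate(1.0, 0.01, sigma=0.4))
```

For c > 1 the symmetric coexistence point (1/(1+c), 1/(1+c)) is the saddle between the two single-winner attractors, matching the deterministic phase portrait described in the abstract.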

17.
We present a quantum-like model of decision making in games of the Prisoner's Dilemma type. In this model, the brain processes information using a representation of mental states in a complex Hilbert space. Driven by a master equation, the mental state of a player, say Alice, approaches an equilibrium point in the space of density matrices (which represent mental states); this equilibrium state determines Alice's mixed (i.e., probabilistic) strategy. We use a master equation of the kind with which quantum physics describes decoherence resulting from interaction with an environment. Thus our model is a model of thinking through decoherence of the initially pure mental state, where decoherence is induced by the interaction with memory and the external mental environment. We study (numerically) the dynamics of the quantum entropy of Alice's mental state in the process of decision making, and we also consider the classical entropy corresponding to Alice's choices. We introduce a measure of Alice's diffidence as the difference between the classical and quantum entropies of her mental state, and we find that (at least in our model example) diffidence decreases, approaching zero, in the process of decision making. Finally, we discuss the problem of the neuronal realization of quantum-like dynamics in the brain, especially the roles played by the lateral prefrontal cortex and/or the orbitofrontal cortex.
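The entropy bookkeeping here is easy to reproduce in a minimal setting (our own illustration: we replace the paper's master equation with simple exponential decay of the off-diagonal coherence, a pure dephasing channel). A pure superposition state has zero quantum (von Neumann) entropy but maximal classical choice entropy; as coherence decays, the quantum entropy rises toward the classical one and the "diffidence" gap shrinks to zero.

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a probability vector (zeros skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def diffidence(coherence):
    # Mental state rho = [[0.5, c], [c, 0.5]] has eigenvalues 0.5 +/- c,
    # so the von Neumann entropy is the Shannon entropy of those eigenvalues.
    quantum = entropy_bits([0.5 + coherence, 0.5 - coherence])
    classical = entropy_bits([0.5, 0.5])   # the diagonal (choice probabilities) is fixed
    return classical - quantum

gamma, dt = 0.5, 0.1
c = 0.5                  # initially a pure superposition state
trail = []
for _ in range(100):
    trail.append(diffidence(c))
    c *= math.exp(-gamma * dt)   # dephasing: off-diagonal coherence decays
print(round(trail[0], 3), round(trail[-1], 4))
```

The run starts with diffidence exactly 1 bit (pure state, undecided choice) and decays monotonically toward zero, reproducing the qualitative behavior reported in the abstract.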

18.
19.
20.
Maintaining cognitive processes comes with neurological costs; enhanced cognition and its underlying neural mechanisms should therefore change in response to environmental pressures. Indeed, recent evidence suggests that variation in spatially based cognitive abilities is reflected in the morphology of the hippocampus (Hp), the region of the brain involved in spatial memory. Moreover, recent work on this region establishes a dynamic link between brain plasticity and cognitive experiences, both across populations and within individuals. However, the mechanisms involved in neurological changes resulting from differential space use, and the reversibility of such effects, are unknown. Using a house sparrow (Passer domesticus) model, we experimentally manipulated the space available to birds, testing the hypothesis that reductions in dendritic branching are associated with reduced Hp volume and that such reductions in volume are reversible. We found that the reduced spatial availability associated with captivity produced a profound and significant reduction in sparrow hippocampal volumes, which was highly correlated with the total length of dendrites in the region. This result suggests that changes to the dendritic structure of neurons may, in part, explain volumetric reductions in region size associated with captivity. In addition, small changes in available space, even within captivity, produced significant changes in spine structure on Hp dendrites. These reductions were reversible following increased spatial opportunities. Overall, these results are consistent with the hypothesis that reductions to the Hp in captivity, often assumed to reflect a deleterious process, may be adaptive and a consequence of the trade-off between cognitive and energetic demands. © 2016 Wiley Periodicals, Inc. Develop Neurobiol 77: 93–101, 2017


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号