Similar Articles
20 similar articles found.
1.
Synapses are specialized structures that mediate information flow between neurons and target cells, and thus are the basis for the neuronal system to execute various functions, including learning and memory. There are around 10^11 neurons in the human brain, with each neuron receiving thousands of synaptic inputs, either excitatory or inhibitory. A synapse is an asymmetric structure that is composed of pre-synaptic axon terminals, the synaptic cleft, and postsynaptic compartments. Synapse formation involves a number of cell ...

2.
The human brain contains ∼86 billion neurons, which are precisely organized in specific brain regions and nuclei. High fidelity synaptic communication between subsets of neurons in specific circuits is required for most human behaviors, and is often disrupted in neuropsychiatric disorders. The presynaptic axon terminals of one neuron release neurotransmitters that activate receptors on multiple postsynaptic neuron targets to induce electrical and chemical responses. Typically, postsynaptic neurons integrate signals from multiple presynaptic neurons at thousands of synaptic inputs to control downstream communication to the next neuron in the circuit. Importantly, the strength (or efficiency) of signal transmission at each synapse can be modulated on time scales ranging up to the lifetime of the organism. This “synaptic plasticity” leads to changes in overall neuronal circuit activity, resulting in behavioral modifications. This series of minireviews will focus on recent advances in our understanding of the molecular and cellular mechanisms that control synaptic plasticity.

3.
Improving biological plausibility and functional capacity are two important goals for brain models that connect low-level neural details to high-level behavioral phenomena. We develop a method called “oracle-supervised Neural Engineering Framework” (osNEF) to train biologically-detailed spiking neural networks that realize a variety of cognitively-relevant dynamical systems. Specifically, we train networks to perform computations that are commonly found in cognitive systems (communication, multiplication, harmonic oscillation, and gated working memory) using four distinct neuron models (leaky-integrate-and-fire neurons, Izhikevich neurons, 4-dimensional nonlinear point neurons, and 4-compartment, 6-ion-channel layer-V pyramidal cell reconstructions) connected with various synaptic models (current-based synapses, conductance-based synapses, and voltage-gated synapses). We show that osNEF networks exhibit the target dynamics by accounting for nonlinearities present within the neuron models: performance is comparable across all four systems and all four neuron models, with variance proportional to task and neuron model complexity. We also apply osNEF to build a model of working memory that performs a delayed response task using a combination of pyramidal cells and inhibitory interneurons connected with NMDA and GABA synapses. The baseline performance and forgetting rate of the model are consistent with animal data from delayed match-to-sample tasks (DMTST): we observe a baseline performance of 95% and exponential forgetting with time constant τ = 8.5 s, while a recent meta-analysis of DMTST performance across species observed baseline performances of 58–99% and exponential forgetting with time constants of τ = 2.4–71 s. These results demonstrate that osNEF can train functional brain models using biologically-detailed components and open new avenues for investigating the relationship between biophysical mechanisms and functional capabilities.
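The leaky integrate-and-fire neuron, the simplest of the four models used above, can be sketched in a few lines. This is a generic textbook LIF, not the osNEF training procedure itself, and the parameter values (τ_rc = 20 ms, τ_ref = 2 ms, unit threshold) are illustrative assumptions.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau_rc=0.02, tau_ref=0.002, v_th=1.0):
    """Leaky integrate-and-fire neuron: integrate a normalized input
    current with leak, spike at threshold, then hold a refractory period.
    Returns the list of spike times in seconds."""
    v, refractory, spikes = 0.0, 0.0, []
    for i, J in enumerate(input_current):
        if refractory > 0.0:
            refractory -= dt          # still refractory: no integration
            continue
        v += (dt / tau_rc) * (J - v)  # leak toward the input current
        if v >= v_th:
            spikes.append(i * dt)
            v = 0.0                   # reset and enter refractory period
            refractory = tau_ref
    return spikes

# A constant suprathreshold input J = 2 drives regular firing at roughly
# 1 / (tau_ref + tau_rc * ln(J / (J - v_th))) ≈ 63 Hz.
rate = len(simulate_lif(np.full(10000, 2.0))) / 1.0  # spikes over 1 s
```

The closed-form rate in the comment follows from integrating the leak equation between reset and threshold; the discrete simulation lands within a few hertz of it.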

4.
BACKGROUND: It is now well established that persistent nonsynaptic neuronal plasticity occurs after learning and, like synaptic plasticity, it can be the substrate for long-term memory. What still remains unclear, though, is how nonsynaptic plasticity contributes to the altered neural network properties on which memory depends. Understanding how nonsynaptic plasticity is translated into modified network and behavioral output therefore represents an important objective of current learning and memory research. RESULTS: By using behavioral single-trial classical conditioning together with electrophysiological analysis and calcium imaging, we have explored the cellular mechanisms by which experience-induced nonsynaptic electrical changes in a neuronal soma remote from the synaptic region are translated into synaptic and circuit level effects. We show that after single-trial food-reward conditioning in the snail Lymnaea stagnalis, identified modulatory neurons that are extrinsic to the feeding network become persistently depolarized between 16 and 24 hr after training. This is delayed with respect to early memory formation but concomitant with the establishment and duration of long-term memory. The persistent nonsynaptic change is extrinsic to and maintained independently of synaptic effects occurring within the network directly responsible for the generation of feeding. Artificial membrane potential manipulation and calcium-imaging experiments suggest a novel mechanism whereby the somal depolarization of an extrinsic neuron recruits command-like intrinsic neurons of the circuit underlying the learned behavior. CONCLUSIONS: We show that nonsynaptic plasticity in an extrinsic modulatory neuron encodes information that enables the expression of long-term associative memory, and we describe how this information can be translated into modified network and behavioral output.

5.
It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information.
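The core idea of treating spikes as samples from an encoded distribution can be illustrated with a minimal Gibbs-style sketch. This is a stand-alone toy, not the authors' ensemble model; the network size, weights, and biases are arbitrary assumptions chosen to make the sampled marginal checkable.

```python
import numpy as np

def neural_sampling(W, b, n_steps=20000, seed=0):
    """At each step a randomly chosen 'neuron' k fires (z_k = 1) with
    sigmoid probability of its membrane potential u_k = b_k + sum_j W_kj z_j,
    so the visited states z sample a Boltzmann distribution."""
    rng = np.random.default_rng(seed)
    z = np.zeros(len(b))
    samples = np.zeros((n_steps, len(b)))
    for t in range(n_steps):
        k = rng.integers(len(b))
        u = b[k] + W[k] @ z - W[k, k] * z[k]   # input from the other neurons
        z[k] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-u)) else 0.0
        samples[t] = z
    return samples

# With zero weights and bias b_0 = 1, neuron 0 should be active with
# probability sigmoid(1) ≈ 0.73 across the sampled states.
samples = neural_sampling(np.zeros((2, 2)), np.array([1.0, 0.0]))
p_active = samples[1000:, 0].mean()   # discard burn-in
```

Reading the fraction of time a unit is active as a probability estimate is exactly the "spikes as samples" interpretation the abstract describes.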

6.
Greater socio-environmental instability favors the individual production of knowledge because innovations are adapted to new circumstances. Furthermore, instability stimulates the horizontal transmission of knowledge because this mechanism disseminates adapted information. This study investigates the following hypothesis: Greater socio-environmental instability favors the production of knowledge (innovation) to adapt to new situations, and socio-environmental instability stimulates the horizontal transmission of knowledge, which is a mechanism that diffuses adapted information. In addition, the present study describes “how”, “when”, “from whom” and the “stimulus/context” in which knowledge regarding medicinal plants is gained or transferred. Data were collected through semi-structured interviews from three groups that represented different levels of socio-environmental instability. Socio-environmental instability did not favor individual knowledge production or any cultural transmission modes, including vertical to horizontal, despite increasing the frequency of horizontal pathways. Vertical transmission was the most important knowledge transmission strategy in all of the groups, in which mothers were the most common models (knowledge sources). Significantly, childhood was the most important learning stage, although learning also occurred throughout life. Direct teaching using language was notable as a knowledge transmission strategy. Illness was the main stimulus that triggered local learning. Learning modes about medicinal plants were influenced by the knowledge itself, particularly the dynamic uses of therapeutic resources.

7.
The brain is thought to represent specific memories through the activity of sparse and distributed neural ensembles. In this review, we examine the use of immediate early genes (IEGs), genes that are induced by neural activity, to specifically identify and genetically modify neurons activated naturally by environmental experience. Recent studies using this approach have identified cellular and molecular changes specific to neurons activated during learning relative to their inactive neighbors. By using opto- and chemogenetic regulators of neural activity, the neurons naturally recruited during learning can be artificially reactivated to directly test their role in coding external information. In contextual fear conditioning, artificial reactivation of learning-induced neural ensembles in the hippocampus or neocortex can substitute for the context itself. That is, artificial stimulation of these neurons can apparently cause the animals to “think” they are in the context. This represents a powerful approach to testing the principles by which the brain codes for the external world and how these circuits are modified with learning.A central feature of nervous systems is that, to function properly, specific neurons must become active in response to specific stimuli. The nature of this selective activation and its modification with experience is the focus of much neuroscience research, ranging from studies of sensory processing in experimental animals to disorders of thought such as schizophrenia in humans. The central dogma of neuroscience is that perceptions, memories, thoughts, and higher mental functions arise from the pattern and timing of the activity in neural ensembles in specific parts of the brain at specific points in time. 
Until quite recently, the investigation of these “circuit”-based questions has primarily been limited to observational techniques, such as single unit recording, functional magnetic resonance imaging (fMRI), and calcium imaging, to document the patterns of neural activity evoked by sensory experience or even complex psychological contingencies in human fMRI studies. These techniques have been enormously successful and created a framework for understanding information processing in the brain. For example, recordings in the visual system have indicated that, in the primary visual cortex, neurons are tuned to the orientation of linear stimuli (Hubel and Wiesel 1962). In contrast, neurons in higher brain areas can respond to discrete items. The most striking example of this specificity comes from in vivo recording in the human medial temporal lobe in which single units have been identified that respond to photos of the actress Halle Berry as well as her written name (Quiroga et al. 2005). This highly selective tuning of neural activity is suggestive of function, but how can this be directly tested? What would be the effect of stimulating just this rare population of neurons, a memory of the actress, a sensory illusion of her image? How does this type of specific firing arise? Do these neurons differ from their nonresponsive neighbors in terms of biochemistry, cell biology, or connectivity? Do they undergo molecular alterations when new information is learned about this individual and are these changes required for the learning? These types of questions have recently become accessible to study in mice through the use of activity-based genetic manipulation, in which neurons that are activated by a specific sensory stimulus can be altered to express any gene of experimental interest. These studies and approaches will be the focus of this work.

8.
Short-term memory in the brain cannot in general be explained the way long-term memory can – as a gradual modification of synaptic weights – since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.

9.
Nishida Y, Sugi T, Nonomura M, Mori I. EMBO Reports 2011, 12(8):855-862
Behaviour is a consequence of computation in neural circuits composed of massive synaptic connections among sensory neurons and interneurons. The cyclic AMP response element-binding protein (CREB) responsible for learning and memory is expressed in almost all neurons. Nevertheless, we find that the Caenorhabditis elegans CREB orthologue, CRH-1, is only required in the single bilateral thermosensory neuron AFD, for a memory-related behaviour. Restoration of CRH-1 in AFD of CREB-depleted crh-1 mutants rescues its thermotactic defect, whereas restorations in other neurons do not. In calcium-imaging analyses, the AFD neurons of CREB-depleted crh-1 mutants exhibit an abnormal response to temperature increase. We present a new platform for analysing the mechanism of behavioural memory at single-cellular resolution within the neural circuit.

10.
Collective rhythmic dynamics from neurons is vital for cognitive functions such as memory formation but how neurons self-organize to produce such activity is not well understood. Attractor-based computational models have been successfully implemented as a theoretical framework for memory storage in networks of neurons. Additionally, activity-dependent modification of synaptic transmission is thought to be the physiological basis of learning and memory. The goal of this study is to demonstrate that using a pharmacological treatment that has been shown to increase synaptic strength within in vitro networks of hippocampal neurons follows the dynamical postulates theorized by attractor models. We use a grid of extracellular electrodes to study changes in network activity after this perturbation and show that there is a persistent increase in overall spiking and bursting activity after treatment. This increase in activity appears to recruit more “errant” spikes into bursts. Phase plots indicate a conserved activity pattern suggesting that a synaptic potentiation perturbation to the attractor leaves it unchanged. Lastly, we construct a computational model to demonstrate that these synaptic perturbations can account for the dynamical changes seen within the network.
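A minimal attractor network of the kind these models build on is the classic Hopfield network. The sketch below is a generic textbook construction, not the authors' hippocampal model: it stores one binary pattern with a Hebbian outer-product rule and recovers it from a corrupted cue, illustrating what "falling back into an attractor" means dynamically.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning; self-connections zeroed."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, steps=5):
    """Iterate the sign dynamics; the state settles into a stored attractor."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
cue = pattern.copy()
cue[0] = -cue[0]             # flip one bit to corrupt the cue
restored = recall(W, cue)    # the dynamics restore the stored pattern
```

In this picture, a perturbation that uniformly strengthens W rescales the energy landscape without moving its minima, which is one way to read the conserved phase-plot structure the abstract reports.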

11.
Gap junctions play an important role in the regulation of neuronal metabolism and homeostasis by serving as connections that enable small molecules to pass between cells and synchronize activity between cells. Although recent studies have linked gap junctions to memory formation, it remains unclear how they contribute to this process. Gap junctions are hexameric hemichannels formed from the connexin and pannexin gene families in chordates and the innexin (inx) gene family in invertebrates. Here we show that two modulatory neurons, the anterior paired lateral (APL) neuron and the dorsal paired medial (DPM) neuron, form heterotypic gap junctions within the mushroom body (MB), a learning and memory center in the Drosophila brain. Using RNA interference-mediated knockdowns of inx7 and inx6 in the APL and DPM neurons, respectively, we found that flies showed normal olfactory associative learning and intact anesthesia-resistant memory (ARM) but failed to form anesthesia-sensitive memory (ASM). Our results reveal that the heterotypic gap junctions between the APL and DPM neurons are an essential part of the MB circuitry for memory formation, potentially constituting a recurrent neural network to stabilize ASM.

12.
Many brain regions exhibit lateral differences in structure and function, and also incorporate new neurons in adulthood, thought to function in learning and in the formation of new memories. However, the contribution of new neurons to hemispheric differences in processing is unknown. The present study combines cellular, behavioral, and physiological methods to address whether 1) new neuron incorporation differs between the brain hemispheres, and 2) the degree to which hemispheric lateralization of new neurons correlates with behavioral and physiological measures of learning and memory. The songbird provides a model system for assessing the contribution of new neurons to hemispheric specialization because songbird brain areas for vocal processing are functionally lateralized and receive a continuous influx of new neurons in adulthood. In adult male zebra finches, we quantified new neurons in the caudomedial nidopallium (NCM), a forebrain area involved in discrimination and memory for the complex vocalizations of individual conspecifics. We assessed song learning and recorded neural responses to song in NCM. We found significantly more new neurons labeled in left than in right NCM; moreover, the degree of asymmetry in new neuron numbers was correlated with the quality of song learning and strength of neuronal memory for recently heard songs. In birds with experimentally impaired song quality, the hemispheric difference in new neurons was diminished. These results suggest that new neurons may contribute to an allocation of function between the hemispheres that underlies the learning and processing of complex signals.

13.
In an effort to determine whether the “growth state” and the “mature state” of a neuron are differentiated by different programs of gene expression, we have compared the rapidly transported (group I) proteins in growing and nongrowing axons in rabbits. We observed two polypeptides (GAP-23 and GAP-43) which were of particular interest because of their apparent association with axon growth. GAP-43 was rapidly transported in the central nervous system (CNS) (retinal ganglion cell) axons of neonatal animals, but its relative amount declined precipitously with subsequent development. It could not be reinduced by axotomy of the adult optic nerves, which do not regenerate; however, it was induced after axotomy of an adult peripheral nervous system nerve (the hypoglossal nerve, which does regenerate) which transported only very low levels of GAP-43 before axotomy. The second polypeptide, GAP-23, followed the same pattern of growth-associated transport, except that it was transported at significant levels in uninjured adult hypoglossal nerves and not further induced by axotomy. These observations are consistent with the “GAP hypothesis” that the neuronal growth state can be defined as an altered program of gene expression exemplified in part by the expression of GAP genes whose products are involved in critical growth-specific functions. When interpreted in terms of the GAP hypothesis, they lead to the following conclusions: (a) the growth state can be subdivided into a “synaptogenic state” characterized by the transport of GAP-23 but not GAP-43, and an “axon elongation state” requiring both GAPs; (b) with respect to the expression of GAP genes, regeneration involves a recapitulation of a neonatal state of the neuron; and (c) the failure of mammalian CNS neurons to express the GAP genes may underlie the failure of CNS axons to regenerate after axon injury.

14.
Despite the fact that temporal information processing is of particular significance in biological memory systems, not much has yet been explored about how these systems manage to store temporal information involved in sequences of stimuli. A neural network model capable of learning and recalling temporal sequences is proposed, based on a neural mechanism in which the sequences are expanded into a series of periodic rectangular oscillations. Thus, the mathematical framework underlying the model, to some extent, is concerned with the Walsh function series. The oscillatory activities generated by the interplay between excitatory and inhibitory neuron pools are transmitted to another neuron pool whose role in learning and retrieval is to modify the rhythms and phases of the rectangular oscillations. Thus, a basic functional neural circuit involves three different neuron pools. The modifiability of rhythms and phases is incorporated into the model with the aim of improving the quality of the retrieval. Numerical simulations were conducted to show the characteristic features of the learning as well as the performance of the model in memory recall.

15.
16.
RV Florian. PLoS ONE 2012, 7(8):e40233
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.

17.
The referential aspect of a concept can be defined by a disjunction of conjunctions of attributes. A single neuron can represent a disjunction of conjunctions of attributes if the assumption that neurons are single-threshold devices is discarded. Instead, one must assume that such concept neurons are composed of hundreds or thousands of (high-threshold) receptive areas, each containing tens or hundreds of synaptic sites. When essentially all of the sites of a receptive area are activated in close temporal contiguity, the receptive area generates a local (spike) response which is assumed to be sufficient to fire the cell body and axon of the neuron. If we assume that all concepts possessed by a single human being can be encoded by single neurons in this manner, there are enough neurons in the human cortex only if most of these concept neurons are specified by learning. Genetic specification is ruled out by the enormous (infinite?) number of possible concepts humans appear to be able to learn. Therefore, a speculative neural mechanism is presented regarding how “free” neurons could become specified by learning.
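The disjunction-of-conjunctions scheme described here is easy to state in code: the neuron fires when any one of its receptive areas has essentially all of its synaptic sites active. The sketch below is a direct toy rendering of that idea; the 90% "essentially all" criterion is an illustrative assumption.

```python
def concept_neuron(receptive_areas, area_threshold=0.9):
    """Fires (returns 1) iff ANY receptive area, itself a conjunction of
    synaptic sites, has at least area_threshold of its sites active,
    implementing a disjunction of conjunctions of attributes."""
    for sites in receptive_areas:
        if sum(sites) >= area_threshold * len(sites):
            return 1   # one local spike response suffices to fire the cell
    return 0

# Area 0 fully active: the neuron fires even though area 1 is silent.
fires = concept_neuron([[1, 1, 1, 1], [0, 0, 1, 0]])
# No single area is near-fully active: the neuron stays silent,
# even though half of all sites are active overall.
silent = concept_neuron([[1, 0, 1, 0], [0, 1, 0, 1]])
```

The second call shows why this differs from a single-threshold device: a simple sum over all sites cannot distinguish the two input patterns, but the per-area conjunction can.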

18.
Although William James and, more explicitly, Donald Hebb's theory of cell assemblies already suggested that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, over the last six decades most theoretical work on memory has focused on plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are “potential synapses” defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models for associative learning based on weight plasticity. Therefore, existing theory can be applied to analyze how many memories and how much information structural plasticity can store in a synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity and can achieve a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: Structural plasticity increases the “effectual network connectivity”, that is, the network wiring that specifically supports storage and recall of the memories. Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect.
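The "potential synapse" idea generalizes classic binary associative memories such as the Willshaw network, in which a synapse either exists (has grown) or does not. The sketch below is a minimal Willshaw-style memory under assumed sparse binary patterns, not the paper's full model of effectual connectivity.

```python
import numpy as np

def grow_synapses(pre_patterns, post_patterns):
    """Clipped Hebbian storage: a potential synapse becomes an actual
    synapse (0 -> 1) wherever pre and post units are ever co-active."""
    W = np.zeros((post_patterns.shape[1], pre_patterns.shape[1]), dtype=int)
    for x, y in zip(pre_patterns, post_patterns):
        W |= np.outer(y, x)   # grow, never shrink
    return W

def recall(W, x):
    """A unit fires iff every active input reaches it through a grown synapse."""
    return (W @ x >= x.sum()).astype(int)

X = np.array([[1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0]])   # sparse input patterns
Y = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1]])          # paired output patterns
W = grow_synapses(X, Y)
out = recall(W, X[0])                 # recovers Y[0] from its paired input
```

Because each synapse carries only one bit of structure rather than a graded weight, analyses of such networks give storage-per-synapse bounds of the kind the abstract compares against weight plasticity.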

19.
Humans are able to form internal representations of the information they process—a capability which enables them to perform many different memory tasks. Therefore, the neural system has to learn somehow to represent aspects of the environmental situation; this process is assumed to be based on synaptic changes. The situations to be represented are various: for example, different types of static patterns, but also dynamic scenes. How are neural networks consisting of mutually connected neurons capable of performing such tasks? Here we propose a new neuronal structure for artificial neurons. This structure allows one to disentangle the dynamics of the recurrent connectivity from the dynamics induced by synaptic changes due to the learning processes. The error signal is computed locally within the individual neuron. Thus, online learning is possible without any additional structures. Recurrent neural networks equipped with these computational units cope with different memory tasks. Examples illustrate how information is extracted from environmental situations comprising fixed patterns to produce sustained activity and to deal with simple algebraic relations.

20.
Functional connectivity patterns in the cerebral cortex are closely related to synaptic plasticity and are influenced by many factors, including the spatial distribution of synapses and the stimulation pattern. Although growing evidence indicates that synaptic plasticity depends not only on postsynaptic action potentials but also on local postsynaptic dendritic potentials, it remains unclear whether and how the functional connectivity patterns of neurons depend on local postsynaptic potentials. To address this, we built a plasticity model that depends on the local postsynaptic membrane potential and requires no hard bounds on synaptic strength. The model self-balances synaptic strengths and reproduces the results of a variety of synaptic plasticity experiments. Simulations of the functional connectivity between two pyramidal neurons based on this model show that no functional connection forms when both local postsynaptic potentials remain subthreshold; a unidirectional connection toward one neuron forms when that neuron's local postsynaptic potential exceeds the threshold; and a bidirectional connection forms when both neurons' local postsynaptic potentials exceed the threshold. This indicates that the distribution of local postsynaptic membrane potentials is key to the formation of neuronal functional connectivity patterns. These results deepen our understanding of how connectivity patterns form in neural networks and are significant for the study of learning and memory.
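A self-balancing, hard-bound-free plasticity rule of the kind described above can be caricatured with a soft-bounded, voltage-dependent update. This is a toy sketch under assumed parameter values (threshold −55 mV, learning rate 0.01), not the paper's actual model.

```python
def update_weight(w, pre_active, v_local, v_theta=-55.0, eta=0.01):
    """Toy local-potential-dependent plasticity: potentiate when the local
    postsynaptic potential exceeds the threshold v_theta (mV), depress
    otherwise. Multiplicative factors replace hard bounds: potentiation
    slows as w -> 1 and depression slows as w -> 0, so the weight
    self-balances inside (0, 1) without clipping."""
    dw = eta * pre_active * (v_local - v_theta)
    dw *= (1.0 - w) if dw > 0 else w
    return w + dw

w = 0.5
w_up = update_weight(w, 1.0, -50.0)    # suprathreshold potential: weight grows
w_down = update_weight(w, 1.0, -60.0)  # subthreshold potential: weight shrinks
```

Applying such a rule independently at each end of a pair of neurons yields exactly the three connectivity outcomes listed above: neither, one, or both local potentials crossing threshold produces no, unidirectional, or bidirectional coupling.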
