Similar Literature
20 similar documents found (search time: 31 ms)
1.
Evaluating Traditional Chinese Medicine Quality with a Self-Organizing Learning Associative Neural Tree   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper applies a self-organizing learning associative neural tree to the quality evaluation of traditional Chinese medicine. The method was tried on the herb Magnolia officinalis bark (houpo), using the relative contents of its components obtained by gas chromatography, and achieved a recognition rate of 100%. The results show that the neural tree method performs well and promises to be an effective auxiliary tool for assessing the quality of traditional Chinese medicine.

2.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
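As a rough, hypothetical illustration (not the authors' model, which couples STDP with structural and several homeostatic mechanisms), the rich-get-richer dynamics of additive potentiation combined with homeostatic normalization can be sketched as a weight-proportional reinforcement process:

```python
import random

def simulate(n_syn=200, steps=5000, seed=0):
    """Toy rich-get-richer dynamics: stronger synapses are more likely to
    drive postsynaptic spikes and thus to be potentiated; homeostatic
    renormalization keeps the total weight fixed, so synapses compete."""
    rng = random.Random(seed)
    w = [1.0] * n_syn
    total = float(n_syn)
    for _ in range(steps):
        # choose a synapse with probability proportional to its weight
        r = rng.uniform(0.0, sum(w))
        acc = 0.0
        chosen = n_syn - 1
        for i, wi in enumerate(w):
            acc += wi
            if acc >= r:
                chosen = i
                break
        w[chosen] += 0.5                  # additive potentiation
        s = sum(w)
        w = [wi * total / s for wi in w]  # homeostatic normalization
    return w

weights = simulate()
mean = sum(weights) / len(weights)
# the distribution becomes long-tailed: a few synapses sit far above the mean
print(max(weights) / mean, min(weights) / mean)
```

Even though each potentiation event is purely additive, normalization makes the effective dynamics multiplicative in the weight shares, which is the mechanism the abstract describes.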

3.
It has long been generally assumed that learning in the central nervous system (CNS) is accomplished by modifying the strengths of ties between neurons. Various mechanisms may contribute to this process, but it is not known which specific mechanisms are involved, or by what rules they operate. Theoretical models based on this general assumption are introduced. The purpose of the models is to suggest plausible ways in which learned information may be stored in the neural network and retrieved when needed. The networks in the models consist of four basic subunits, in accordance with identified units in the CNS: sensing, response, feeling, and control, plus association areas. The suggested operation rules are based on established operation rules of individual neurons, and on assumed rules for neurons considered in groups. Computer simulations are performed to check the consistency of the models and to illustrate how they work. They simulate how a hypothetical kitten learns part of its environment, and show how relevant information may be stored in and retrieved from its neuronal network. The suggested mechanisms could be examined experimentally, albeit not easily.

4.
To probe the self-organized emergence of simple-cell orientation selectivity, we constructed a neural network model consisting of LGN neurons and simple cells in visual cortex that obeys the Hebbian learning rule. Using this model, we investigated the neural coding and representation of a natural image by simple cells. The results show that the structures of the simple cells' receptive fields are determined by their preferred orientation selectivity, but also by the emergence of self-organization during unsupervised learning. This kind of orientation selectivity results from dynamic self-organization based on the interactions between LGN and cortex.
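A minimal sketch of the kind of Hebbian update such models rely on; here Oja's normalized Hebbian rule (an assumption for illustration, not the paper's exact equations), which lets a feature detector emerge from input correlations alone:

```python
import random

def oja_update(w, x, lr=0.05):
    """One Hebbian step with Oja's normalization: dw = lr * y * (x - y*w).
    Over many inputs, w converges to the leading principal component of
    the input statistics, i.e. an emergent, unsupervised feature detector."""
    y = sum(wi * xi for wi, xi in zip(w, x))  # postsynaptic activity
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

# toy "LGN" input: channels 0 and 1 co-vary, channel 2 is weak noise
rng = random.Random(1)
w = [0.1, 0.1, 0.1]
for _ in range(2000):
    s = rng.gauss(0.0, 1.0)
    x = [s, s, 0.1 * rng.gauss(0.0, 1.0)]
    w = oja_update(w, x)
print(w)  # weight concentrates on the correlated channels
```

The receptive-field structure (here, a preference for the correlated input pair) is not imposed by a teacher; it self-organizes from the input statistics, which is the point the abstract makes.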

5.

6.
A neural net model is simulated on an IBM-1130 digital computer. The model includes rules for learning the presented patterns. The learning algorithm uses an iteration procedure to compute the ultimate cross-coupling coefficients between the neurons for a specific pattern. The network has a set of latent cyclic modes, or reverberations. If the net is stimulated briefly by presenting a pattern, it will subsequently either return to quiescence or settle into periodic activity in one of its cyclic modes.

7.
Blind source separation is the computation underlying the cocktail party effect: a partygoer can distinguish a particular talker's voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception, and numerous mathematical models have been proposed; however, it remains unclear how neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, distinct classes of neurons in the culture learned to respond to distinct sources after repeated training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes' principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free-energy principle.

8.
We derive generalized spin models for the development of feedforward cortical architecture from a Hebbian synaptic learning rule in a two layer neural network with nonlinear weight constraints. Our model takes into account the effects of lateral interactions in visual cortex combining local excitation and long range effective inhibition. Our approach allows the principled derivation of developmental rules for low-dimensional feature maps, starting from high-dimensional synaptic learning rules. We incorporate the effects of smooth nonlinear constraints on net synaptic weight projected from units in the thalamic layer (the fan-out) and on the net synaptic weight received by units in the cortical layer (the fan-in). These constraints naturally couple together multiple feature maps such as orientation preference and retinotopic organization. We give a detailed illustration of the method applied to the development of the orientation preference map as a special case, in addition to deriving a model for joint pattern formation in cortical maps of orientation preference, retinotopic location, and receptive field width. We show that the combination of Hebbian learning and center-surround cortical interaction naturally leads to an orientation map development model that is closely related to the XY magnetic lattice model from statistical physics. The results presented here provide justification for phenomenological models studied in Cowan and Friedman (Advances in neural information processing systems 3, 1991), Thomas and Cowan (Phys Rev Lett 92(18):e188101, 2004) and provide a developmental model realizing the synaptic weight constraints previously assumed in Thomas and Cowan (Math Med Biol 23(2):119–138, 2006).

9.
We propose a working hypothesis, supported by numerical simulations, that brain networks evolve based on the principle of maximizing their internal information flow capacity. We find that the synchronous behavior and information flow capacity of the evolved networks, modeled as Hindmarsh-Rose neurons coupled on the graphs of these brain networks, reproduce well the behaviors observed in the brain dynamical networks of Caenorhabditis elegans and humans. We make a strong case for our hypothesis by showing that the neural networks with the closest graph distance to the brain networks of Caenorhabditis elegans and humans are the Hindmarsh-Rose networks evolved with coupling strengths that maximize information flow capacity. Surprisingly, we find that global neural synchronization levels decrease during brain evolution, reflecting an underlying globally non-Hebbian-like evolution process, driven by non-Hebbian-like learning behaviors in some clusters during evolution, and by Hebbian-like learning rules in clusters where neurons increase their synchronization.

10.
We investigate the memory structure and retrieval of the brain and propose a hybrid neural network of addressable and content-addressable memory, a special database model that can memorize and retrieve any piece of information (a binary pattern) both addressably and content-addressably. The architecture of this hybrid neural network is hierarchical, taking the form of a tree of slabs that consist of binary neurons with the same array. Simplex memory neural networks serve as the slabs of basic memory units, distributed on the terminal vertexes of the tree. Theoretical analysis shows that the hybrid neural network can be constructed with Hebbian and competitive learning rules, and that other important characteristics of its learning and memory behavior are also consistent with those of the brain. Moreover, we demonstrate the hybrid neural network on a set of ten binary numeral patterns.
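The content-addressable half of such a memory can be illustrated by a minimal Hopfield-style network trained with the Hebbian outer-product rule (a standard textbook construction, much simpler than the hierarchical tree-of-slabs architecture described above):

```python
def train_hebbian(patterns):
    """Hebbian outer-product rule over ±1 patterns; no self-connections."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, probe, steps=5):
    """Synchronous threshold updates: the net settles to a stored pattern,
    i.e. retrieval by content rather than by address."""
    s = list(probe)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [[1, 1, 1, 1, -1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1, 1, -1]]
W = train_hebbian(stored)
noisy = [1, 1, 1, 1, -1, -1, -1, 1]  # one bit flipped from the first pattern
print(recall(W, noisy))              # recovers [1, 1, 1, 1, -1, -1, -1, -1]
```

An addressable front end, as in the hybrid model, would instead route a key directly to the slab holding the pattern; the point of the hybrid design is that both access modes coexist.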

11.
12.
Rich clubs arise when nodes that are ‘rich’ in connections also form an elite, densely connected ‘club’. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour.
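For reference, the rich-club property quantified in such studies is usually measured by the rich-club coefficient φ(k): the edge density among nodes of degree greater than k. A self-contained sketch (the toy graph is illustrative, not brain data):

```python
def rich_club_coefficient(adj, k):
    """phi(k) = 2*E_k / (N_k*(N_k-1)), where N_k is the number of nodes
    with degree > k and E_k counts the edges among them. The graph is an
    undirected dict mapping node -> set of neighbours."""
    rich = [n for n, nbrs in adj.items() if len(nbrs) > k]
    if len(rich) < 2:
        return None  # coefficient undefined for fewer than two rich nodes
    edges = sum(1 for i, a in enumerate(rich)
                for b in rich[i + 1:] if b in adj[a])
    return 2.0 * edges / (len(rich) * (len(rich) - 1))

# toy graph: hubs 0-2 form a fully connected 'club'; leaves attach to hubs
adj = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1, 5}, 3: {0}, 4: {1}, 5: {2}}
print(rich_club_coefficient(adj, 2))  # 1.0: the hubs are fully interconnected
```

In practice φ(k) is normalized against degree-preserving random graphs to decide whether a club is richer than chance, which is the comparison such generative models aim to explain.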

13.
The brain of a honeybee contains only 960,000 neurons and occupies a volume of only 1 mm³. Nevertheless, it supports impressive behavioral capabilities. Honeybees are equipped with sophisticated sensory systems and have well-developed learning and memory capacities, whose essential mechanisms do not differ drastically from those of vertebrates. Here, we focus on non-elemental forms of learning in honeybees. We show that bees exhibit learning abilities that have traditionally been ascribed to a restricted portion of vertebrates, as they go beyond simple stimulus-stimulus or response-stimulus associations. To relate these abilities to neural structures and functioning in the bee brain, we focus on the antennal lobes and the mushroom bodies. We conclude that there is a fair chance of understanding complex behavior in bees, and of identifying the potential neural substrates underlying such behavior, by adopting a cognitive neuroethological approach, in which behavioral and neurobiological studies are combined to understand the rules and mechanisms of plastic behavior in a natural context.

14.
Stimulus representation is a functional interpretation of early sensory cortices. Early sensory cortices are subject to stimulus-induced modifications. Common models for stimulus-induced learning within topographic representations are based on the stimuli's spatial structure and probability distribution. Furthermore, we argue that average temporal stimulus distances reflect the stimuli's relatedness. As topographic representations reflect the stimuli's relatedness, the temporal structure of incoming stimuli is important for the learning in cortical maps. Motivated by recent neurobiological findings, we present an approach of cortical self-organization that additionally takes temporal stimulus aspects into account. The proposed model transforms average interstimulus intervals into representational distances. Thereby, neural topography is related to stimulus dynamics. This offers a new time-based interpretation of cortical maps. Our approach is based on a wave-like spread of cortical activity. Interactions between dynamics and feedforward activations lead to shifts of neural activity. The psychophysical saltation phenomenon may represent an analogue to the shifts proposed here. With regard to cortical plasticity, we offer an explanation for neurobiological findings that other models cannot explain. Moreover, we predict cortical reorganizations under new experimental, spatiotemporal conditions. With regard to psychophysics, we relate the saltation phenomenon to dynamics and interaction in early sensory cortices and predict further effects in the perception of spatiotemporal stimuli. Received: 17 March 1999 / Accepted in revised form: 10 August 1999

15.
Learning-induced synchronization of a neural network at various developing stages is studied by computer simulations using a pulse-coupled neural network model in which the neuronal activity is simulated by a one-dimensional map. Two types of Hebbian plasticity rules are investigated and their differences are compared. For both models, our simulations show a logarithmic increase in the synchronous firing frequency of the network with the culturing time of the neural network. This result is consistent with recent experimental observations. To investigate how to control the synchronization behavior of a neural network after learning, we compare the occurrence of synchronization for four networks with different designed patterns under the influence of an external signal. The effect of such a signal on the network activity highly depends on the number of connections between neurons. We discuss the synaptic plasticity and enhancement effects for a random network after learning at various developing stages.
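A hedged sketch of the general phenomenon (here mean-field-coupled logistic maps rather than the paper's pulse-coupled model; the map, the coupling constant `eps`, and the synchrony measure are all illustrative assumptions): sufficiently strong coupling pulls the map neurons onto a common trajectory, while uncoupled units stay incoherent.

```python
import random

def final_spread(n=50, eps=0.7, steps=500, seed=3):
    """Globally coupled logistic maps: x_i <- (1-eps)*f(x_i) + eps*mean(f(x)).
    Returns max(x) - min(x) after `steps` iterations; near zero means the
    units have synchronized onto a single trajectory."""
    rng = random.Random(seed)
    f = lambda v: 4.0 * v * (1.0 - v)  # chaotic logistic map
    x = [rng.random() for _ in range(n)]
    for _ in range(steps):
        fx = [f(v) for v in x]
        m = sum(fx) / n
        x = [(1.0 - eps) * v + eps * m for v in fx]
    return max(x) - min(x)

print(final_spread(eps=0.7))  # strong coupling: spread collapses toward 0
print(final_spread(eps=0.0))  # no coupling: spread stays of order one
```

In the paper's setting the coupling strengths themselves evolve under Hebbian plasticity, so the degree of synchrony changes with developmental stage rather than being fixed as here.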

16.
After an introduction (1), the article analyzes the evolution of the embodied mind (2) and the innovation of embodied robotics (3), and finally discusses conclusions of embodied robotics for human responsibility (4). Considering the evolution of the embodied mind (2), we start with an introduction to complex systems and nonlinear dynamics (2.1), apply this approach to neural self-organization (2.2), distinguish degrees of complexity of the brain (2.3), explain the emergence of cognitive states by complex systems dynamics (2.4), and discuss criteria for modeling the brain as a complex nonlinear system (2.5). The innovation of embodied robotics (3) is a challenge for future technology. We start with the distinction between symbolic and embodied AI (3.1) and explain embodied robots as dynamical systems (3.2). Self-organization needs self-control of technical systems (3.3). Cellular neural networks (CNN) are an example of self-organizing technical systems offering new avenues for neurobionics (3.4). In general, technical neural networks support different kinds of learning robots (3.5). Finally, embodied robotics aims at the development of cognitive and conscious robots (3.6).

17.
Neural learning algorithms generally involve a number of identical processing units, fully or partially connected, and an update function such as a ramp, a sigmoid, or a Gaussian. Some variations also exist, in which units can be heterogeneous, or in which an alternative update technique is employed, such as a pulse stream generator. Associated with connections are numerical values that must be adjusted using a learning rule and are dictated by parameters that are learning-rule specific, such as momentum, a learning rate, and a temperature, amongst others. Usually, neural learning algorithms involve local updates, and global interaction between units is often discouraged, except where units are fully connected or updates are synchronous. In all of these cases, concurrency within a neural algorithm cannot be fully exploited without a suitable implementation strategy. A design scheme is described for translating a neural learning algorithm from inception to implementation on a parallel machine using PVM or MPI libraries, or onto programmable logic such as FPGAs. A designer must first describe the algorithm using a specialised Neural Language, from which a Petri net (PN) model is constructed automatically for verification and for building a performance model. The PN model can be used to study issues such as synchronisation points, resource sharing, and concurrency within a learning rule. Specialised constructs are provided to enable a designer to express various aspects of a learning rule, such as the number and connectivity of neural nodes, the interconnection strategies, and the information flows required by the learning algorithm. A scheduling and mapping strategy is then used to translate this PN model onto a multiprocessor template.
We demonstrate our technique using Kohonen and backpropagation learning rules, implemented on a loosely coupled workstation cluster and on a dedicated parallel machine, with PVM libraries.

18.
We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as are common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.

19.
Much evidence indicates that recognition memory involves two separable processes, recollection and familiarity discrimination, with familiarity discrimination being dependent on the perirhinal cortex of the temporal lobe. Here, we describe a new neural network model designed to mimic the response patterns of perirhinal neurons that signal information concerning the novelty or familiarity of stimuli. The model achieves very fast and accurate familiarity discrimination while employing biologically plausible parameters and Hebbian learning rules. The fact that the activity patterns of the model's simulated neurons are closely similar to those of neurons recorded from the primate perirhinal cortex indicates that this brain region could discriminate familiarity using principles akin to those of the model. If so, the capacity of the model establishes that the perirhinal cortex alone may discriminate the familiarity of many more stimuli than current neural network models indicate could be recalled (recollected) by all the remaining areas of the cerebral cortex. This efficiency and speed of detecting novelty provides an evolutionary advantage, thereby providing a reason for the existence of a familiarity discrimination network in addition to networks used for recollection.
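A toy sketch in the same spirit (a bare Hebbian familiarity score, far simpler than the perirhinal model described): after outer-product Hebbian learning, the quadratic score s·W·s is systematically larger for trained patterns than for novel ones, so a simple threshold on it discriminates familiarity without any recall dynamics.

```python
import random

def hebbian_familiarity(n=100, n_stored=10, seed=7):
    """Train a Hebbian weight matrix on random ±1 patterns and return the
    familiarity score s.W.s for one trained and one novel pattern."""
    rng = random.Random(seed)
    pats = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n_stored)]
    # Hebbian outer-product rule, no self-connections
    W = [[sum(p[i] * p[j] for p in pats) if i != j else 0 for j in range(n)]
         for i in range(n)]
    def score(s):
        return sum(s[i] * W[i][j] * s[j] for i in range(n) for j in range(n))
    novel = [rng.choice((-1, 1)) for _ in range(n)]
    return score(pats[0]), score(novel)

familiar_score, novel_score = hebbian_familiarity()
print(familiar_score > novel_score)  # trained pattern scores as familiar
```

Because a yes/no familiarity judgment needs far less information than full recollection, such a network can discriminate many more stimuli than an attractor network of the same size can recall, which is the capacity argument the abstract makes.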

20.
Molecular models of 5 nm sized ZnO/Zn(OH)2 core-shell nanoparticles in ethanolic solution were derived as scale-up models (based on an earlier model created from ion-by-ion aggregation and self-organization) and subjected to mechanistic analyses of surface stabilization by block-copolymers. The latter comprise a poly-methacrylate chain accounting for strong surfactant association to the nanoparticle by hydrogen bonding and salt-bridges. While dangling poly-ethylene oxide chains provide only a limited degree of sterical hindering to nanoparticle agglomeration, the key mechanism of surface stabilization is electrostatic shielding arising from the acrylates and a halo of Na+ counter ions associated to the nanoparticle. Molecular dynamics simulations reveal different solvent shells and distance-dependent mobility of ions and solvent molecules. From this, we provide a molecular rationale of effective particle size, net charge and polarizability of the nanoparticles in solution.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号