Similar Documents
20 similar documents found (search time: 31 ms)
1.
Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been the subject of intense research in various fields is the correlation between the noisy activities of nodes in a network. We consider a class of systems where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance-dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.
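The power-series idea can be illustrated on a toy linear network (the coupling matrix, its size, and the noise model below are illustrative assumptions, not taken from the paper): for weak coupling W, the propagator (I - W)^-1 = sum_k W^k collects contributions from paths of length k, so truncating the series shows how short indirect motifs already account for most of the pairwise covariance.

```python
import numpy as np

# Toy linear network: weak, sparse coupling so the power series converges fast.
rng = np.random.default_rng(4)
N = 6
W = 0.05 * (rng.random((N, N)) < 0.3)     # illustrative coupling matrix
np.fill_diagonal(W, 0.0)

# Full propagator: (I - W)^-1 = sum over paths of all lengths.
B = np.linalg.inv(np.eye(N) - W)
C_full = B @ B.T                          # covariance up to a common noise factor

# Truncated series: keep only paths (motifs) of length <= K.
K = 3
B_K = sum(np.linalg.matrix_power(W, k) for k in range(K + 1))
C_K = B_K @ B_K.T

# For weak coupling, short indirect paths already dominate the covariance.
print(np.abs(C_full - C_K).max() < 1e-3)  # True
```

The truncation order K controls which motifs are counted; the paper's analysis concerns spike-train covariances, for which this linear picture is only a caricature.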

2.
This paper investigates finite-time synchronization of an array of coupled neural networks via discontinuous controllers. Based on the Lyapunov function method and the discontinuous version of finite-time stability theory, some sufficient criteria for finite-time synchronization are obtained. Furthermore, we propose switched control and adaptive tuning parameter strategies in order to reduce the settling time. In addition, a pinning control scheme via a single controller is also studied in this paper. Under the hypothesis that the coupling network topology contains a directed spanning tree and each of the strongly connected components is detail-balanced, we prove that finite-time synchronization can be achieved via pinning control. Finally, some illustrative examples are given to show the validity of the theoretical results.

3.
We study how individual memory items are stored assuming that situations given in the environment can be represented in the form of synaptic-like couplings in recurrent neural networks. Previous numerical investigations have shown that specific architectures based on suppression or max units can successfully learn static or dynamic stimuli (situations). Here we provide a theoretical basis concerning the convergence of the learning process and the network response to a novel stimulus. We show that, besides learning “simple” static situations, an n-dimensional network can learn and replicate a sequence of up to n different vectors or frames. We derive limits on the learning rate and show the coupling matrices that develop during training in different cases, including an extension of the network to the case of nonlinear interunit coupling. Furthermore, we show that a specific coupling matrix provides low-pass-filter properties to the units, thus connecting networks constructed from static summation units with continuous-time networks. We also show under which conditions such networks can be used to perform arithmetic calculations by means of pattern completion.

4.
In this paper a new learning rule for tuning the coupling weights of Hopfield-like chaotic neural networks is developed, such that all neurons behave in a synchronous manner while the desired structure of the network is preserved during the learning process. The proposed learning rule is based on sufficient synchronization criteria, on the eigenvalues of the weight matrix of the neural network, and on the idea of the Structured Inverse Eigenvalue Problem. The developed learning rule not only synchronizes all neurons’ outputs with each other in a desired topology, but also enables us to enhance the synchronizability of the network by choosing an appropriate set of weight-matrix eigenvalues. Specifically, the method is evaluated by performing simulations on scale-free topologies.

5.
A large class of neural network models has its units organized in a lattice with fixed topology, or generates its topology during the learning process. These network models can be used as neighborhood-preserving maps of the input manifold, but such a structure is difficult to manage, since these maps are graphs whose number of nodes is just one or two orders of magnitude less than the number of input points (i.e., the complexity of the map is comparable with the complexity of the manifold); several hierarchical algorithms have been proposed to obtain a high-level abstraction of these structures. In this paper a general structure capable of extracting high-order information from the graphs generated by a large class of self-organizing networks is presented. The algorithm builds a two-layer hierarchical structure starting from the results obtained by the neural network best suited to the distribution of the input data. Moreover, the proposed algorithm can also build a topology-preserving map if it is trained using a graph that is itself a topology-preserving map.

6.
Since metabolome data are derived from the underlying metabolic network, reverse engineering of such data to recover the network topology is of wide interest. The Lyapunov equation constrains the link between data and network by coupling the covariance of the data with the strength of interactions (the Jacobian matrix). This equation, when expressed as a linear set of equations at steady state, constitutes a basis for inferring the network structure given the covariance matrix of the data. The sparse structure of metabolic networks points to reactions which are active based on minimal enzyme production, hinting at sparsity as a cellular objective. Therefore, for a given covariance matrix, we solved the Lyapunov equation to calculate the Jacobian matrix by simultaneously minimizing the Euclidean norm of the residuals and maximizing sparsity (the number of zeros in the Jacobian matrix) as objective functions, inferring directed small-scale networks from three kingdoms of life (bacteria, fungi, mammals). The inference performance of the approach was found to be promising, with a False Positive Rate of zero and a True Positive Rate of almost one. The effect of missing data on the results was additionally analyzed, revealing superiority over similarity-based approaches, which infer undirected networks. Our findings suggest that the covariance of metabolome data implies an underlying network with the sparsest pattern. The theoretical analysis forms a framework for further investigation of sparsity-based inference of metabolic networks from real metabolome data.
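The data-to-Jacobian step can be sketched on a toy system (the network, noise matrix, and sizes are illustrative; the paper's sparsity objective is only hinted at in a comment): the stationary covariance C and the Jacobian J are linked by the Lyapunov equation J C + C J^T + D = 0, which is linear in J once C is known.

```python
import numpy as np

# Illustrative 3-node network: a sparse Jacobian with self-decay terms.
n = 3
J_true = np.array([[-1.0, 0.5, 0.0],
                   [0.0, -1.0, 0.0],
                   [0.3, 0.0, -1.0]])
D = np.eye(n)                     # noise (fluctuation) covariance, assumed known

# Forward direction: the stationary covariance C solves J C + C J^T + D = 0.
# Vectorised (row-major): (J (x) I + I (x) J) vec(C) = -vec(D).
I = np.eye(n)
C = np.linalg.solve(np.kron(J_true, I) + np.kron(I, J_true), -D.ravel()).reshape(n, n)

# Inverse direction: with C given, the same equation is linear in vec(J).
# Because J C + C J^T is symmetric, only n(n+1)/2 equations constrain n^2
# unknowns -- the system is underdetermined, which is why the paper adds
# sparsity maximization as a second objective alongside the residual norm.
A = np.zeros((n * n, n * n))
for k in range(n * n):
    E = np.zeros((n, n)); E.flat[k] = 1.0
    A[:, k] = (E @ C + C @ E.T).ravel()
J_est = np.linalg.lstsq(A, -D.ravel(), rcond=None)[0].reshape(n, n)

# Any least-squares solution satisfies the Lyapunov constraint exactly here:
print(np.allclose(J_est @ C + C @ J_est.T, -D))  # True
```

The least-squares step returns the minimum-norm solution; selecting the sparsest Jacobian among all solutions is the part the paper formulates as a separate optimization objective.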

7.
Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to what degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their ability to generate networks with large structural variability. In particular, we consider the statistical constraints which the respective construction scheme imposes on the generated networks. After having identified the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for being generic statistical laws of complex networks. In fact, we find that generic, not model-related dependencies between different network characteristics do exist. This makes it possible to infer global features from local ones using regression models trained on networks with high generalization power. Our results confirm and extend previous findings regarding the synchronization properties of neural networks. Our method seems especially relevant for large networks, which are difficult to map completely, like the neural networks in the brain. The structure of such large networks cannot be fully sampled with present technology. Our approach provides a method to estimate global properties of under-sampled networks in good approximation. Finally, we demonstrate on three different data sets (the C. elegans neuronal network, the R. prowazekii metabolic network, and a network of synonyms extracted from Roget's Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.

8.
Neural networks are usually considered naturally parallel computing models. But the number of operators and the complex connection graph of standard neural models cannot be directly handled by digital hardware devices. Several works show that programmable digital hardware is a real opportunity for flexible hardware implementations of neural networks. Yet many area and topology problems arise when standard neural models are implemented on programmable circuits such as FPGAs, so that the rapid improvements in FPGA technology cannot be fully exploited. Neural network hardware implementations therefore need to reconcile simple hardware topologies with complex neural architectures. The theoretical and practical framework developed here allows this combination by applying principles of configurable hardware to neural computation: Field Programmable Neural Arrays (FPNAs) lead to powerful neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data exchange scheme. This paper shows how FPGAs have led to the definition of the FPNA computation paradigm. It then shows how FPNAs contribute to current and future FPGA-based neural implementations by solving the general problems raised by the implementation of complex neural networks on FPGAs.

9.
Signal transduction networks: topology, response and biochemical processes
Conventionally, biological signal transduction networks are analysed using experimental and theoretical methods to describe specific protein components, interactions, and biochemical processes, and to model network behavior under various conditions. While these studies provide crucial information on specific networks, this information is not easily converted into a broader understanding of signal transduction systems. Here, using a specific model of protein interaction, we analyse small network topologies to understand their response and general properties. In particular, we catalogue the response of all possible topologies of a given network size to generate a response distribution, analyse the effects of specific biochemical processes on this distribution, and analyse the robustness and diversity of responses with respect to internal fluctuations or mutations in the network. The results show that even three- and four-protein networks are capable of creating diverse and biologically relevant responses, that the distribution of response types changes drastically as a function of biochemical processes at the protein level, and that certain topologies strongly predispose the network toward a specific response type while others allow for diverse types of responses. This study sheds light on the response types and properties that can be expected from signal transduction networks, provides possible explanations for the role of certain biochemical processes in signal transduction, and suggests novel approaches to interfere with signaling pathways at the molecular level. Furthermore, it shows that network topology plays a key role in determining response type and properties, and that proper representation of network topology is crucial to discover and understand the so-called building blocks of large networks.

10.
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet distinct from, measures used in previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network from which they originate. This means that the pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network due to the reduced dimensionality of the network.
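The baseline that the proposed method is compared against, Magnitude Based Pruning, simply removes the connections with the smallest absolute weights. A minimal sketch (the pruning fraction and weight shapes are illustrative assumptions, not from the paper):

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the given fraction of connections with the smallest |weight|."""
    w = weights.copy()
    k = int(fraction * w.size)
    if k == 0:
        return w
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    w[np.abs(w) <= threshold] = 0.0
    return w

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))               # illustrative weight matrix
W_pruned = magnitude_prune(W, 0.5)
print(np.count_nonzero(W_pruned))         # 8 of 16 connections survive
```

The complexity-based method in the paper differs precisely in that it uses an information-theoretic criterion instead of this weight-magnitude threshold.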

11.
Yu D, Parlitz U. PLoS ONE 2011, 6(9): e24333
We suggest a control-based approach to topology estimation of networks with N elements. The method first drives the network to steady states by delayed feedback control; it then performs structural perturbations to shift the steady states M times; and finally it infers the connection topology from the steady-state shifts, either by a matrix-inverse algorithm (M = N) or by an l1-norm convex optimization strategy applicable to estimating the topology of sparse networks from M < N perturbations. We also discuss some aspects important for applications, such as the quality of the topology reconstruction and its error sources, advantages and disadvantages of the suggested method, and the influence of (control) perturbations, inhomogeneity, sparsity, coupling functions, and measurement noise. Some examples of networks of Chua's oscillators are presented to illustrate the reliability of the suggested technique.
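The M = N matrix-inverse case can be sketched on a toy linear system (linear dynamics here are an illustrative stand-in for the delayed-feedback-stabilised networks in the paper): if steady states obey x* = -L^(-1) u, then N independent input perturbations let us recover the coupling matrix directly.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
# Illustrative coupling matrix: sparse off-diagonal weights, decaying diagonal.
L = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.3)
np.fill_diagonal(L, -2.0)

# Apply N independent constant perturbations u_k to  x' = L x + u  and record
# the resulting steady states; for this linear toy model  x* = -L^(-1) u.
U = np.eye(N)                    # M = N perturbations, one per node
X = -np.linalg.solve(L, U)       # columns: measured steady states

# Matrix-inverse reconstruction: from X = -L^(-1) U, recover L = -U X^(-1).
L_est = -U @ np.linalg.inv(X)
print(np.allclose(L_est, L))     # True
```

The sparse (M < N) case replaces this exact inversion with l1-norm minimization, which the sketch does not cover.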

12.
Large-scale artificial neural networks have many redundant structures, making them prone to local optima and long training times. Moreover, existing neural network topology optimization algorithms suffer from heavy computation and complex modeling of the network structure. We propose a Dynamic Node-based neural network Structure optimization algorithm (DNS) to handle these issues. DNS consists of two steps: a generation step and a pruning step. In the generation step, the network generates hidden layers layer by layer until accuracy reaches a threshold. In the pruning step, the network then applies a pruning algorithm based on Hebb's rule or Pearson's correlation for adaptation. In addition, we combine a genetic algorithm with DNS (GA-DNS) for further optimization. Experimental results show that, compared with traditional neural network topology optimization algorithms, GA-DNS can generate neural networks with higher construction efficiency, lower structural complexity, and higher classification accuracy.
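The abstract does not spell out the Pearson-correlation pruning criterion, but one plausible reading is to flag hidden nodes whose activations duplicate those of another node. A hypothetical sketch (the function name, threshold, and data are assumptions, not from the paper):

```python
import numpy as np

def correlated_node_mask(activations, threshold=0.95):
    """Keep a hidden node only if its activations are not strongly
    Pearson-correlated (|r| > threshold) with an earlier kept node."""
    corr = np.corrcoef(activations.T)        # activations: (samples, nodes)
    n = corr.shape[0]
    keep = np.ones(n, dtype=bool)
    for j in range(1, n):
        if np.any(np.abs(corr[j, :j])[keep[:j]] > threshold):
            keep[j] = False
    return keep

rng = np.random.default_rng(2)
a = rng.normal(size=(200, 3))                      # three independent hidden nodes
dup = a[:, 0] * 0.5 + 0.01 * rng.normal(size=200)  # node 3 duplicates node 0
acts = np.column_stack([a, dup])
print(correlated_node_mask(acts))                  # node 3 is flagged as redundant
```

Nodes marked False would be pruned (and their incoming/outgoing weights folded into the surviving correlated node, a detail omitted here).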

13.
The interplay between anatomical connectivity and dynamics in neural networks plays a key role in the functional properties of the brain and in the associated connectivity changes induced by neural diseases. However, a detailed experimental investigation of this interplay at both cellular and population scales in the living brain is limited by accessibility. Alternatively, to investigate the basic operational principles with morphological, electrophysiological and computational methods, the activity emerging from large in vitro networks of primary neurons organized with imposed topologies can be studied. Here, we validated the use of a new bio-printing approach, which effectively maintains the topology of hippocampal cultures in vitro and investigated, by patch-clamp and MEA electrophysiology, the emerging functional properties of these grid-confined networks. In spite of differences in the organization of physical connectivity, our bio-patterned grid networks retained the key properties of synaptic transmission, short-term plasticity and overall network activity with respect to random networks. Interestingly, the imposed grid topology resulted in a reinforcement of functional connections along orthogonal directions, shorter connectivity links and a greatly increased spiking probability in response to focal stimulation. These results clearly demonstrate that reliable functional studies can nowadays be performed on large neuronal networks in the presence of sustained changes in the physical network connectivity.

14.
In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix so that they become fixed-point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns at a given coding level, in the large-network and sparse-coding limits. We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
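The classic Willshaw model of scenario (1) can be sketched directly: binary synapses are set by clipped Hebbian learning, and a stored pattern is retrieved by thresholding dendritic sums at the cue activity (the network size, pattern count, and coding level below are illustrative, well below capacity):

```python
import numpy as np

# Willshaw network: binary synapses set by clipped Hebbian learning,
# W_ij = 1 if units i and j were ever co-active in a stored pattern.
rng = np.random.default_rng(3)
N, P, f = 200, 30, 0.1                     # neurons, patterns, coding level
patterns = (rng.random((P, N)) < f).astype(np.uint8)

W = np.zeros((N, N), dtype=np.uint8)
for xi in patterns:
    W |= np.outer(xi, xi)                  # synapses saturate at 1

# Retrieval: each active unit of a stored pattern receives input equal to the
# cue activity m, so thresholding dendritic sums at m recovers the pattern
# (false positives appear only as P approaches the capacity limit).
xi = patterns[0]
h = W.astype(int) @ xi.astype(int)
m = int(xi.sum())
recalled = (h >= m).astype(np.uint8)
print((recalled == xi).all())              # True
```

The paper's capacity analysis asks how large P can become, as a function of N and the coding level, before such false positives destroy retrieval.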

15.
Artificial neural networks are usually built on rather few elements such as activation functions, learning rules, and the network topology. When modelling the more complex properties of realistic networks, however, a number of higher-level structural principles become important. In this paper we present a theoretical framework for modelling cortical networks at a high level of abstraction. Based on the notion of a population of neurons, this framework can accommodate the common features of cortical architecture, such as lamination, multiple areas and topographic maps, input segregation, and local variations of the frequency of different cell types (e.g., cytochrome oxidase blobs). The framework is meant primarily for the simulation of activation dynamics; it can also be used to model the neural environment of single cells in a multiscale approach. Received: 9 January 1996 / Accepted in revised form: 24 July 1996

16.
The synchronization frequency of neural networks and its dynamics play important roles in deciphering the working mechanisms of the brain. It is widely recognized that the properties of functional network synchronization and its dynamics are jointly determined by the network topology, the network connection strength, i.e., the connection strengths of the different edges in the network, and the external input signals, among other factors. However, mathematical and computational characterizations of the relationships between network synchronization frequency and these three important factors are still lacking. This paper presents a novel computational simulation framework to quantitatively characterize the relationships between neural network synchronization frequency, network attributes, and input signals. Specifically, we constructed a series of neural networks, including simulated small-world networks, a real functional working-memory network derived from functional magnetic resonance imaging, and real large-scale structural brain networks derived from diffusion tensor imaging, and performed synchronization simulations on these networks via the Izhikevich neuron spiking model. Our experiments demonstrate that both the network synchronization strength and the synchronization frequency change according to the combination of the input signal frequency and the network's self-synchronization frequency. In particular, our extensive experiments show that the network synchronization frequency can be represented as a linear combination of the network self-synchronization frequency and the input signal frequency. This finding could be attributed to an intrinsically preserved principle in different types of neural systems, offering novel insights into the working mechanism of neural systems.
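The Izhikevich spiking model used in these simulations can be sketched for a single neuron (regular-spiking parameters a, b, c, d are from Izhikevich's original formulation; the constant input current and duration are illustrative, and the paper's network-level setup is not reproduced here):

```python
# Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
# with reset v -> c, u -> u + d when v crosses the 30 mV spike cut-off.
a, b, c, d = 0.02, 0.2, -65.0, 8.0         # regular-spiking parameters
v, u = -65.0, b * (-65.0)                  # membrane potential and recovery
I, dt = 10.0, 0.5                          # input current, Euler step (ms)

spikes = 0
for _ in range(int(1000 / dt)):            # simulate 1000 ms
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                          # spike cut-off
        v, u = c, u + d                    # reset and recovery jump
        spikes += 1
print(spikes > 0)                          # the neuron fires tonically
```

In the paper's framework, many such neurons are coupled according to the measured network topologies and driven by oscillatory input signals.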

17.
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons is developed for solving nonlinear discrete dynamic optimization problems. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach to dynamic programming due to the inherent parallelism of neural networks; furthermore, it reduces the severity of the computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including shortest-path and fuzzy decision-making problems.
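Bellman's Optimality Principle, on which the weight-assignment method rests, reduces shortest-path problems to repeated local relaxations. A minimal stand-alone sketch on an illustrative graph (the paper encodes such stage costs in its network weights rather than in a dictionary):

```python
import math

def shortest_path(graph, start, goal):
    """Bellman-Ford style relaxation: repeatedly apply the optimality
    principle  dist[v] = min(dist[v], dist[u] + w(u, v))."""
    dist = {v: math.inf for v in graph}
    dist[start] = 0.0
    for _ in range(len(graph) - 1):        # enough rounds for all paths
        for u, edges in graph.items():
            for v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    return dist[goal]

graph = {                                  # illustrative stage-cost graph
    'A': [('B', 2.0), ('C', 5.0)],
    'B': [('C', 1.0), ('D', 4.0)],
    'C': [('D', 1.0)],
    'D': [],
}
print(shortest_path(graph, 'A', 'D'))      # 4.0  (A -> B -> C -> D)
```

In the neural formulation, each relaxation step corresponds to a parallel update of the recurrent units, which is where the claimed speed advantage comes from.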

18.
Human brain functions are heavily contingent on neural interactions, both at the single-neuron and the neural-population or system level. Accumulating evidence from neurophysiological studies strongly suggests that coupling of oscillatory neural activity provides an important mechanism for establishing neural interactions. With the availability of whole-head magnetoencephalography (MEG), macroscopic oscillatory activity can be measured non-invasively from the human brain with high temporal and spatial resolution. To localise, quantify and map oscillatory activity and interactions onto individual brain anatomy, we have developed the 'dynamic imaging of coherent sources' (DICS) method, which allows us to identify and analyse cerebral oscillatory networks from MEG recordings. Using this approach we have characterized physiological and pathological oscillatory networks in the human sensorimotor system. Coherent 8 Hz oscillations emerge from a cerebello-thalamo-premotor-motor cortical network and exert an 8 Hz oscillatory drive on the spinal motor neurons, which can be observed as a physiological tremulousness of movement termed movement discontinuities. This network represents the neurophysiological substrate of a discrete mode of motor control. In parkinsonian resting tremor we have identified an extensive cerebral network consisting of primary motor and lateral premotor cortex, supplementary motor cortex, thalamus/basal ganglia, posterior parietal cortex and secondary somatosensory cortex, which are entrained at the tremor frequency or at twice the tremor frequency. This low-frequency entrainment of motor areas likely plays an important role in the pathophysiology of parkinsonian motor symptoms. Finally, studies on patients with postural tremor in hepatic encephalopathy revealed that this type of tremor results from pathologically slow thalamocortical and cortico-muscular coupling during isometric hold tasks.
In conclusion, the analysis of oscillatory cerebral networks provides new insights into the physiological mechanisms of motor control and the pathophysiological mechanisms of tremor disorders.

19.
We investigate the memory structure and retrieval of the brain and propose a hybrid neural network of addressable and content-addressable memory, which is a special database model able to memorize and retrieve any piece of information (a binary pattern) both addressably and content-addressably. The architecture of this hybrid neural network is hierarchical and takes the form of a tree of slabs, which consist of binary neurons arranged in identical arrays. Simplex memory neural networks serve as the slabs of basic memory units, distributed on the terminal vertexes of the tree. Theoretical analysis shows that the hybrid neural network can be constructed with Hebbian and competitive learning rules, and that some other important characteristics of its learning and memory behavior are also consistent with those of the brain. Moreover, we demonstrate the hybrid neural network on a set of ten binary numeral patterns.

20.
Most neural communication and processing tasks are driven by spikes. This has enabled the application of event-driven simulation schemes. However, the simulation of spiking neural networks based on complex models that cannot be simplified to analytical expressions (and thus require numerical calculation) is very time consuming. Here we briefly describe an event-driven simulation scheme that uses pre-calculated table-based neuron characterizations to avoid numerical calculations during a network simulation, allowing the simulation of large-scale neural systems. More concretely, we explain how electrical coupling can be simulated efficiently within this computation scheme, reproducing synchronization processes observed in detailed simulations of neural populations.
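The event-driven scheme can be sketched with a priority queue of spike events (here a fixed propagation delay and a crude refractory period stand in for the paper's pre-calculated table-based neuron characterizations; the ring network is illustrative):

```python
import heapq

def simulate(initial_spikes, connections, delay=1.0, t_end=10.0):
    """Event-driven simulation: spikes live in a priority queue ordered by
    time; between events, no neuron state needs to be updated at all."""
    events = list(initial_spikes)          # entries: (time, neuron)
    heapq.heapify(events)
    fired, last_spike = [], {}
    while events:
        t, n = heapq.heappop(events)
        if t > t_end:
            break
        if last_spike.get(n, -1.0) >= t - 0.5:   # crude refractory period
            continue
        last_spike[n] = t
        fired.append((t, n))
        for m in connections.get(n, []):   # schedule downstream deliveries
            heapq.heappush(events, (t + delay, m))
    return fired

ring = {0: [1], 1: [2], 2: [0]}            # three neurons coupled in a ring
spikes = simulate([(0.0, 0)], ring)
print(len(spikes))                         # 11 spikes up to t_end = 10.0
```

In the scheme the paper describes, the delayed delivery and the post-spike state are looked up in precomputed tables instead of being fixed constants, which is what makes complex neuron models affordable.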


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号