Similar Documents
20 similar documents found (search time: 140 ms)
1.
A new structure and training method for multilayer neural networks is presented. The proposed method is based on cascade training of subnetworks, optimizing the weights layer by layer. The training procedure is completed in two steps. First, a subnetwork with m inputs and n outputs, matching the format of the training samples, is trained on those samples. Second, another subnetwork with n inputs and n outputs is trained, taking the outputs of the first subnetwork as its inputs and the desired outputs of the training samples as its targets. Finally, the two trained subnetworks are connected to create a trained multilayer neural network. Numerical simulation results based on both the linear least squares back-propagation (LSB) and traditional back-propagation (BP) algorithms demonstrate the efficiency of the proposed method.
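As a rough illustration of the two-step cascade, the sketch below fits two single linear layers by least squares, as stand-ins for the paper's LSB-trained subnetworks (the data, dimensions, and single-layer simplification are invented for illustration):

```python
import numpy as np

def fit_linear_layer(inputs, targets):
    """Least-squares fit of one linear layer (weights + bias), a
    stand-in for the linear least squares (LSB) training step."""
    X = np.hstack([inputs, np.ones((inputs.shape[0], 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def apply_layer(W, inputs):
    X = np.hstack([inputs, np.ones((inputs.shape[0], 1))])
    return X @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # m = 3 inputs
Y = X @ rng.normal(size=(3, 2)) + 0.5   # n = 2 desired outputs

# Step 1: train the first subnetwork on the raw training samples.
W1 = fit_linear_layer(X, Y)
H = apply_layer(W1, X)

# Step 2: train a second subnetwork mapping the first subnetwork's
# outputs to the same desired outputs, then cascade the two.
W2 = fit_linear_layer(H, Y)
Y_hat = apply_layer(W2, apply_layer(W1, X))

print(np.allclose(Y_hat, Y, atol=1e-6))  # cascade reproduces the targets
```

Because the toy targets are exactly linear in the inputs, the cascade recovers them; with real multilayer subnetworks, step 2 refines whatever approximation step 1 produced.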

2.
In this paper, the synchronization problem for delayed continuous-time nonlinear complex neural networks is considered. The delay-dependent state-feedback synchronization gain matrix is obtained by considering a more general case of time-varying delay. Using Lyapunov stability theory, sufficient synchronization criteria are derived in terms of linear matrix inequalities (LMIs). By decomposing the delay interval into multiple equidistant subintervals, Lyapunov-Krasovskii functionals (LKFs) are constructed on these intervals. Employing these LKFs, new delay-dependent synchronization criteria are proposed in terms of LMIs for two cases, with and without the derivative of the time-varying delay. Numerical examples illustrate the effectiveness of the proposed method.
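The paper's criteria involve delayed terms and LKFs and are checked with dedicated LMI solvers; as a dependency-free sketch of the underlying Lyapunov machinery, the toy example below (matrix values invented) certifies stability of a simple error dynamics by solving a Lyapunov equation, the scalar ancestor of such LMI feasibility tests:

```python
import numpy as np

# Toy error-dynamics matrix for a synchronized pair (illustrative
# values; the actual criteria also handle time-varying delays).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
n = A.shape[0]

# Solve the Lyapunov equation A P + P A^T = -I by vectorization:
# with row-major vec, vec(A P + P A^T) = (kron(A, I) + kron(I, A)) vec(P).
M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
P = np.linalg.solve(M, -np.eye(n).flatten()).reshape(n, n)

# A positive definite P certifies asymptotic stability of the
# error dynamics, i.e., the LMI  A^T P + P A < 0  is feasible.
print(np.linalg.eigvalsh(P).min() > 0)
```

For the delayed case the same idea extends to LKFs, but the resulting conditions must be solved numerically as LMIs.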

3.
This paper explores the synchronization problem for a class of generalized reaction-diffusion neural networks with mixed time-varying delays, comprising both discrete and distributed delays. Given the development and merits of digital controllers, sampled-data control is a natural choice for establishing synchronization in continuous-time systems. Using a newly introduced integral inequality, less conservative synchronization criteria that ensure the global asymptotic synchronization of the considered generalized reaction-diffusion neural networks with mixed delays are established in terms of linear matrix inequalities (LMIs). The resulting easy-to-test LMI-based synchronization criteria depend on the delay bounds in addition to the reaction-diffusion terms, which makes them more practicable. By solving these LMIs with the MATLAB LMI control toolbox, a desired sampled-data controller gain can be acquired without difficulty. Finally, numerical examples demonstrate the validity of the derived LMI-based synchronization criteria.

4.
In this paper, the design problem of a state estimator for genetic regulatory networks (GRNs) with time delays and randomly occurring uncertainties is addressed by a delay decomposition approach. The norm-bounded uncertainties enter the GRNs in random ways, and such randomly occurring uncertainties (ROUs) obey mutually uncorrelated Bernoulli-distributed white noise sequences. Under these circumstances, the state estimator is designed to estimate the true concentrations of the mRNA and the protein of the uncertain GRNs. Delay-dependent stability criteria are obtained in terms of linear matrix inequalities (LMIs) by constructing a Lyapunov-Krasovskii functional and using some inequality techniques. The desired state estimator, which ensures that the estimation error dynamics are globally asymptotically robustly stochastically stable, is then obtained from the solutions of the LMIs. Finally, a numerical example demonstrates the feasibility of the proposed estimation scheme.

5.
Neural networks are usually considered naturally parallel computing models, but the number of operators and the complex connection graphs of standard neural models cannot be directly handled by digital hardware devices. Several works show that programmable digital hardware is a real opportunity for flexible hardware implementations of neural networks. And yet many area and topology problems arise when standard neural models are implemented on programmable circuits such as FPGAs, so that the rapid improvements in FPGA technology cannot be fully exploited. Neural network hardware implementations therefore need to reconcile simple hardware topologies with complex neural architectures. The theoretical and practical framework developed here allows this combination by applying principles of configurable hardware to neural computation: Field Programmable Neural Arrays (FPNAs) lead to powerful neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data exchange scheme. This paper shows how FPGAs have led to the definition of the FPNA computation paradigm, and then how FPNAs contribute to current and future FPGA-based neural implementations by solving the general problems raised by implementing complex neural networks on FPGAs.

6.
To efficiently simulate very large networks of interconnected neurons, particular consideration has to be given to the computer architecture being used. This article presents techniques for implementing simulators for large neural networks on a number of different computer architectures. The neuronal simulation task and the computer architectures of interest are first characterized, and the potential bottlenecks are highlighted. We then describe the experience gained from adapting an existing simulator, SWIM, to two very different architectures: vector computers and multiprocessor workstations. This work led to the implementation of a new simulation library, SPLIT, designed to allow efficient simulation of large networks on several architectures. Different computer architectures put different demands on the organization of both data structures and computations. Strict separation of such architectural considerations from the neuronal models and other simulation aspects makes it possible to construct both portable and extendible code.

7.
Episodic memory depends on interactions between the hippocampus and interconnected neocortical regions. Here, using data-driven analyses of resting-state functional magnetic resonance imaging (fMRI) data, we identified the networks that interact with the hippocampus: the default mode network (DMN) and a "medial temporal network" (MTN) that includes regions in the medial temporal lobe (MTL) and precuneus. We observed that the MTN plays a critical role in connecting the visual network to the DMN and hippocampus. The DMN could be further divided into three subnetworks: a "posterior medial" (PM) subnetwork comprising the posterior cingulate and lateral parietal cortices; an "anterior temporal" (AT) subnetwork comprising regions in the temporopolar and dorsomedial prefrontal cortex; and a "medial prefrontal" (MP) subnetwork comprising regions primarily in the medial prefrontal cortex (mPFC). These networks vary in their functional connectivity (FC) along the hippocampal long axis and represent different kinds of information during memory-guided decision-making. Finally, a Neurosynth meta-analysis of fMRI studies suggests new hypotheses regarding the functions of the MTN and DMN subnetworks, providing a framework to guide future research on the neural architecture of episodic memory.

Episodic memory depends on interactions between the hippocampus and interconnected neocortical regions. This study uses network analyses of intrinsic brain networks at rest to identify and characterize brain networks that interact with the hippocampus and have distinct functions during memory-guided decision making.

8.
Lateral and recurrent connections are ubiquitous in biological neural circuits. Yet while the strong computational abilities of feedforward networks have been extensively studied, our understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Foundational studies by Minsky and Roelfsema argued that computations that require propagation of global information for local computation to take place would particularly benefit from the sequential, parallel nature of processing in recurrent networks. Such "tag propagation" algorithms perform repeated, local propagation of information and were originally introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness to explore feedforward solutions more fully and establish robustly the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and construct hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to propagating multiple interacting tags and demonstrate that these are efficient computational substrates for more general computations of connectedness by introducing and solving an abstracted biologically inspired decision-making task. Our work thus clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.
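A minimal sketch of the tag-propagation idea for connectedness (the grid, seed, and step count are invented for illustration): a tag planted at a seed pixel spreads recurrently to 4-neighbours that belong to the shape, so after enough iterations a pixel carries the tag exactly when it is connected to the seed.

```python
import numpy as np

def tag_propagation(grid, seed, steps):
    """Repeated local tag spreading: each step, tags move to 4-neighbours
    and survive only on 'on' pixels of the binary grid."""
    tag = np.zeros_like(grid, dtype=bool)
    tag[seed] = True
    for _ in range(steps):
        spread = tag.copy()
        spread[1:, :] |= tag[:-1, :]   # from the pixel above
        spread[:-1, :] |= tag[1:, :]   # from the pixel below
        spread[:, 1:] |= tag[:, :-1]   # from the left
        spread[:, :-1] |= tag[:, 1:]   # from the right
        tag = spread & grid.astype(bool)
    return tag

# Two separate blobs: the tag seeded in the left blob never
# reaches the right one, so they are judged disconnected.
grid = np.array([[1, 1, 0, 0, 1],
                 [1, 0, 0, 0, 1],
                 [0, 0, 0, 1, 1]])
tag = tag_propagation(grid, (0, 0), steps=10)
print(bool(tag[0, 0]), bool(tag[2, 4]))  # True False
```

The number of recurrent steps needed grows with the shape's diameter, which is exactly the computation-time/performance tradeoff the abstract highlights for fixed-depth feedforward alternatives.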

9.
A neural-model-based control design for a class of nonlinear systems is addressed. The design approach is to approximate the nonlinear systems with neural networks whose activation functions satisfy the sector conditions. A novel model termed the standard neural network model (SNNM) is introduced to describe this class of approximating neural networks. Full-order dynamic output feedback control laws are then designed for the SNNMs with inputs and outputs to stabilize the closed-loop systems. The control design equations are shown to be a set of linear matrix inequalities (LMIs), which can be easily solved by various convex optimization algorithms to determine the control signals. It is shown that most neural-network-based nonlinear systems can be transformed into input-output SNNMs, so that stabilizing controllers can be synthesized in a unified way. Finally, some application examples illustrate the control design procedure.

10.
11.
12.
A unified neural network model termed the standard neural network model (SNNM) is advanced. Based on a robust L2 gain (i.e., robust H∞ performance) analysis of the SNNM with external disturbances, a state-feedback control law is designed for the SNNM to stabilize the closed-loop system and eliminate the effect of external disturbances. The control design constraints are shown to be a set of linear matrix inequalities (LMIs), which can be easily solved by various convex optimization algorithms (e.g., interior-point algorithms) to determine the control law. Most discrete-time recurrent neural networks (RNNs) and discrete-time nonlinear systems modelled by neural networks or Takagi-Sugeno (T-S) fuzzy models can be transformed into SNNMs, so that robust H∞ performance analysis or robust H∞ controller synthesis can be carried out in a unified SNNM framework. Finally, some examples illustrate the wide applicability of SNNMs to nonlinear systems, and the proposed approach is compared with related methods reported in the literature.

13.
A model or hybrid network consisting of oscillatory cells interconnected by inhibitory and electrical synapses may express different stable activity patterns without any change of network topology or parameters, and switching between the patterns can be induced by specific transient signals. However, little is known about the properties of such signals. In the present study, we employ numerical simulations of neural networks of different sizes composed of relaxation oscillators to investigate switching between in-phase (IP) and anti-phase (AP) activity patterns. We show that the time windows of susceptibility to switching between the patterns are similar in 2-, 4- and 6-cell fully connected networks. Moreover, in a network (N = 4, 6) expressing a given AP pattern, a stimulus with a given profile, consisting of depolarizing and hyperpolarizing signals sent to different subpopulations of cells, can evoke switching to another AP pattern. Interestingly, the resulting pattern encodes the profile of the switching stimulus. These results can be extended to different network architectures. Indeed, relaxation oscillators are not only models of cellular pacemakers, bursting or spiking, but are also analogous to firing-rate models of neural activity. We show that switching rules similar to those found for relaxation oscillators apply to oscillating circuits of excitatory cells interconnected by electrical synapses and cross-inhibition. Our results suggest that incoming information, arriving in a proper time window, may be stored in an oscillatory network in the form of a specific spatio-temporal activity pattern which is expressed until new pertinent information arrives.

14.
Background: The induction of neural regeneration is vital to the repair of spinal cord injury (SCI). Compared with the peripheral nervous system (PNS), the regenerative capacity of the central nervous system (CNS) is extremely limited. This suggests that modulating the molecular pathways underlying PNS repair may lead to the discovery of potential treatments for CNS injury.
Methods: Based on the gene expression profiles of dorsal root ganglia (DRG) after sciatic nerve injury, we utilized network guided forest (NGF) to rank genes by their capacity to distinguish injured DRG from sham-operated controls. Gene importance scores derived from NGF were used as initial heat in a heat diffusion model (HotNet2) to infer the subnetworks underlying neural regeneration in the DRG. After potential regulators of the subnetworks were found through Connectivity Map (cMap), candidate compounds were experimentally evaluated for their capacity to regenerate damaged neurons.
Results: Gene ontology analysis of the subnetworks revealed that the ubiquinone biosynthetic process is crucial for neural regeneration. Moreover, almost half of the genes in these subnetworks were found, via text mining, to be related to neural regeneration. After screening compounds likely to modulate gene expression in the subnetworks, three compounds were selected for experiments. Of these, trichostatin A, a histone deacetylase inhibitor, was validated to enhance neurite outgrowth in vivo in an optic nerve crush mouse model.
Conclusions: Our study identified subnetworks underlying neural regeneration and validated that a compound can promote neurite outgrowth by modulating these subnetworks. This work also suggests an alternative approach for drug repositioning that can easily be extended to other disease phenotypes.
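The heat diffusion step can be sketched as follows, in the spirit of HotNet2's insulated diffusion (the toy network, scores, and restart parameter are invented; this is not the study's code or data):

```python
import numpy as np

def diffuse_heat(adj, scores, beta=0.4):
    """Insulated heat diffusion: importance scores placed on genes
    spread along network edges; the closed form below equals
    beta * sum_k (1 - beta)^k W^k applied to the scores."""
    W = adj / adj.sum(axis=0, keepdims=True)  # column-normalized walk matrix
    n = adj.shape[0]
    return beta * np.linalg.inv(np.eye(n) - (1 - beta) * W) @ scores

# Toy 4-gene path network 0 - 1 - 2 - 3, with all the initial
# "heat" (e.g. an NGF importance score) on gene 0.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
heat = diffuse_heat(adj, np.array([1.0, 0.0, 0.0, 0.0]))
print(heat)
```

Heat decays with network distance from the seed gene while the total is conserved, so thresholding the diffused values recovers a connected subnetwork around the high-scoring genes, which is the role HotNet2 plays in the pipeline above.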

15.
Network Analysis Tools (NeAT) is a suite of computer tools that integrates various algorithms for the analysis of biological networks: comparison between graphs, between clusters, or between graphs and clusters; network randomization; analysis of degree distributions; and network-based clustering and path finding. The tools are interconnected to enable stepwise analysis of a network through a complete analytical workflow. In this protocol, we present a typical use case in which the tasks above are combined to decipher a protein-protein interaction network retrieved from the STRING database. The results returned by NeAT are typically subnetworks, networks enriched with additional information (i.e., clusters or paths), or tables displaying statistics. Typical networks comprising several thousand nodes and arcs can be analyzed within a few minutes. The complete protocol can be read and executed in approximately 1 h.

16.
Local network alignment is an important component of the analysis of protein-protein interaction networks and may lead to the identification of evolutionarily related complexes. We present AlignNemo, a new algorithm that, given the networks of two organisms, uncovers subnetworks of proteins related in biological function and topology of interactions. The discovered conserved subnetworks have a general topology and need not correspond to specific interaction patterns, so they more closely fit the models of functional complexes proposed in the literature. The algorithm is able to handle sparse interaction data through an expansion process that, at each step, explores the local topology of the networks beyond the proteins directly interacting with the current solution. To assess the performance of AlignNemo, we ran a series of benchmarks using statistical measures as well as biological knowledge. Based on reference datasets of protein complexes, AlignNemo shows better performance than other methods in terms of both precision and recall. We show our solutions to be biologically sound using the concept of semantic similarity applied to Gene Ontology vocabularies. The binaries of AlignNemo and supplementary details about the algorithms and the experiments are available at: sourceforge.net/p/alignnemo.

17.
Does each cognitive task elicit a new cognitive network in the brain each time? Recent data suggest that pre-existing repertoires of a much smaller number of canonical network components are selectively and dynamically used to compute new cognitive tasks. To this end, we propose a novel method (graph-ICA) that seeks to extract these canonical network components from a limited number of resting-state spontaneous networks. Graph-ICA decomposes a weighted mixture of source edge-sharing subnetworks with different edge weights by applying independent component analysis to cross-sectional brain networks represented as graphs. We evaluated the method's plausibility in a simulation study and identified 49 intrinsic subnetworks by applying it to resting-state fMRI data. Using the derived subnetwork repertoires, we decomposed brain networks measured during specific tasks, including motor activity, working memory exercises, and verb generation, and identified subnetworks associated with performance on these tasks. We also analyzed sex differences in the utilization of subnetworks, which was useful in characterizing group networks. These results suggest that this method can effectively be utilized to identify task-specific as well as sex-specific functional subnetworks. Moreover, graph-ICA can provide more direct information on the edge weights among brain regions working together as a network, which cannot be obtained directly through voxel-level spatial ICA.

18.
A new approach to nonlinear system identification and control based on modular neural networks (MNNs) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems; this decomposition is obtained using a partitioning algorithm. Each local nonlinear model is associated with a nonlinear controller, both implemented by neural networks. The switching between the neural controllers is done by a dynamical switcher, also implemented by neural networks, that tracks the different operating points. The proposed multiple modelling and control strategy has been successfully tested on a simulated laboratory-scale liquid-level system.

19.
Neurons in the cortex exhibit a number of patterns that correlate with working memory. Specifically, averaged across trials of working memory tasks, neurons exhibit different firing rate patterns during the delay period of those tasks. These patterns include: 1) persistent fixed-frequency rates elevated above baseline; 2) elevated rates that decay throughout the task's memory period; 3) rates that accelerate throughout the delay; and 4) patterns of inhibited firing (below baseline) analogous to each of the preceding excitatory patterns. Persistent elevated rate patterns are believed to be the neural correlate of working memory retention and of preparation for executing the behavioral/motor responses required in working memory tasks. Models have proposed that such activity corresponds to stable attractors in cortical neural networks with fixed synaptic weights. However, the variability in patterned behavior and the firing statistics of real neurons, across the entire range of those behaviors and across and within trials of working memory tasks, are typically not reproduced. Here we examine the effect of dynamic synapses and of network architectures with multiple cortical areas on the states and dynamics of working memory networks. The analysis indicates that the multiple pattern types exhibited by cells in working memory networks are inherent in networks with dynamic synapses, and that the variability and firing statistics in such networks with distributed architectures agree with those observed in the cortex.
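One dynamic-synapse mechanism that can shape delay-period activity is short-term depression. The sketch below integrates a Tsodyks-Markram-style resource variable driven at a constant presynaptic rate; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

dt, T = 1e-3, 2.0                    # time step and duration (s)
tau_rec, U, rate = 0.5, 0.4, 20.0    # recovery time (s), release fraction, Hz

x = 1.0          # fraction of available synaptic resources
efficacy = []
for _ in range(int(T / dt)):
    # Resources recover with tau_rec and are consumed by presynaptic
    # spikes arriving at the given mean rate (rate-based approximation).
    dx = (1.0 - x) / tau_rec - U * x * rate
    x += dt * dx
    efficacy.append(U * x)           # mean synaptic drive per spike

efficacy = np.array(efficacy)
# Drive starts high and relaxes to a depressed steady state
# U * x*, with x* = 1 / (1 + U * rate * tau_rec) = 0.2 here --
# the kind of decaying delay-period drive described above.
print(efficacy[0] > efficacy[-1])
```

Facilitating synapses (a second, utilization variable) produce the opposite, accelerating trend, which is why dynamic synapses can generate the full family of delay-period patterns.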

20.
Boolean networks have been widely used to model biological processes lacking detailed kinetic information. Despite their simplicity, Boolean network dynamics can still capture important features of biological systems, such as stable cell phenotypes represented by steady states. For small models, steady states can be determined through exhaustive enumeration of all state transitions. As the number of nodes increases, however, the state space grows exponentially, making it difficult to find steady states. Over the last several decades, many studies have addressed how to handle this state space explosion. Recently, increasing attention has been paid to satisfiability (SAT) solving algorithms due to their potential scalability to large networks. However, a problem remains for large models with high maximum node connectivity, where SAT solving is known to be computationally intractable. To address this problem, this paper presents a new partitioning-based method that breaks down a given network into smaller subnetworks. The steady states of each subnetwork are identified by independently applying the SAT solving algorithm, and they are then combined to construct the steady states of the overall network. To apply the SAT solving algorithm efficiently to each subnetwork, it is crucial to find the best partition of the network. We propose a method that makes each subnetwork smallest in size and lowest in maximum node connectivity, minimizing the total cost of finding all steady states across the subnetworks. The proposed algorithm is compared with others for steady-state identification through a number of simulations on both published small models and randomly generated large models with differing maximum node connectivities. The simulation results show that our method can scale up to several hundred nodes, even for Boolean networks with high maximum node connectivity.
The algorithm is implemented and available at http://cps.kaist.ac.kr/∼ckhong/tools/download/PAD.tar.gz.
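For intuition, the brute-force baseline that the partitioned SAT approach replaces can be sketched in a few lines: a state is steady exactly when applying every node's update rule leaves it unchanged. The three-node update rules below are invented for illustration:

```python
from itertools import product

# Toy 3-node Boolean network (illustrative rules, not from the paper):
# x0' = x1 AND x2,  x1' = x0,  x2' = x0 OR x2
def step(state):
    x0, x1, x2 = state
    return (x1 and x2, x0, x0 or x2)

# Exhaustive steady-state search: feasible only while 2^n is small,
# which is exactly the scaling wall the SAT-based method avoids.
steady = [s for s in product((0, 1), repeat=3) if step(s) == s]
print(steady)  # [(0, 0, 0), (0, 0, 1), (1, 1, 1)]
```

In the partitioned scheme, a search like this (or a SAT query) runs per subnetwork, and only mutually consistent subnetwork steady states are combined into global ones.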
