Similar Literature
20 similar documents found (search time: 15 ms)
1.
Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distributions and on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco Bay region. Overall, the recurrent neural network model yields the best prediction accuracies compared with the LMBP and RBF networks. While at present earthquake prediction cannot be made with a high degree of certainty, this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.
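For readers unfamiliar with these verification measures, the sketch below computes them from a 2x2 forecast/observation contingency table. It is a minimal illustration using the standard forecast-verification definitions, not code from the paper, and it assumes the "R score" is the true skill statistic (probability of detection minus probability of false detection).

```python
# Standard contingency-table skill measures (assumed definitions, not the paper's code).
def skill_scores(hits, false_alarms, misses, correct_negatives):
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    pofd = false_alarms / (false_alarms + correct_negatives)
    tss = pod - pofd                                # true skill score ("R score", assumed)
    return pod, far, bias, tss

print(skill_scores(hits=12, false_alarms=4, misses=3, correct_negatives=41))
```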

2.
Lateral and recurrent connections are ubiquitous in biological neural circuits. Yet while the strong computational abilities of feedforward networks have been extensively studied, our understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Foundational studies by Minsky and Roelfsema argued that computations requiring propagation of global information for local computation to take place would particularly benefit from the sequential, parallel nature of processing in recurrent networks. Such "tag propagation" algorithms perform repeated, local propagation of information and were originally introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness, to explore feedforward solutions more fully and to robustly establish the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and construct hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to propagating multiple interacting tags and demonstrate that these are efficient computational substrates for more general computations of connectedness by introducing and solving an abstracted, biologically inspired decision-making task. Our work thus clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.
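As a rough illustration of the tag-propagation idea, the sketch below seeds a tag at one pixel of a binary image and repeatedly spreads it to active 4-neighbours until convergence; two pixels are connected exactly when the tag reaches both. It is a generic reconstruction of the Minsky/Roelfsema-style algorithm, not the authors' code.

```python
import numpy as np

def connected(image, seed, probe):
    """Single-tag propagation: local, parallel spreading of a tag."""
    active = image.astype(bool)
    tag = np.zeros_like(active)
    tag[seed] = active[seed]
    for _ in range(image.size):            # enough iterations to converge
        spread = tag.copy()
        spread[1:, :] |= tag[:-1, :]       # spread down
        spread[:-1, :] |= tag[1:, :]       # spread up
        spread[:, 1:] |= tag[:, :-1]       # spread right
        spread[:, :-1] |= tag[:, 1:]       # spread left
        spread &= active                   # tags live only on active pixels
        if (spread == tag).all():
            break
        tag = spread
    return bool(tag[probe])

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 0, 1]])
print(connected(img, (0, 0), (2, 1)))  # True: same component
print(connected(img, (0, 0), (0, 3)))  # False: separated by the zero column
```

The per-step cost is local and constant, which is why a recurrent substrate handles this task naturally while a fixed-depth feedforward network must commit to a maximum propagation distance.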

3.
The maintenance of short-term memories is critical for survival in a dynamically changing world. Previous studies suggest that this memory can be stored in the form of persistent neural activity or using a synaptic mechanism, such as short-term plasticity. Here, we compare the predictions of these two mechanisms to neural and behavioral measurements in a visual change detection task. Mice were trained to respond to changes in a repeated sequence of natural images while neural activity was recorded using two-photon calcium imaging. We also trained two types of artificial neural networks on the same change detection task as the mice. Following fixed pre-processing using a pretrained convolutional neural network, either a recurrent neural network (RNN) or a feedforward neural network with short-term synaptic depression (STPNet) was trained to the same level of performance as the mice. While both networks are able to learn the task, the STPNet model contains units whose activity is more similar to the in vivo data and produces errors that are more similar to those of the mice. When images are omitted, an unexpected perturbation absent during training, mice often do not respond to the omission but are more likely to respond to the subsequent image. Unlike the RNN model, STPNet produces a similar pattern of behavior. These results suggest that simple neural adaptation mechanisms may serve as an important bottom-up memory signal in this task, which can be used by downstream areas in the decision-making process.
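The depressing-synapse mechanism at the heart of STPNet can be captured with a single resource variable per synapse. The sketch below is a Tsodyks-Markram-style toy model with illustrative parameter names and values, not the paper's implementation; it shows why a repeated image drives the readout progressively more weakly, while the image after an omission drives it strongly again.

```python
import numpy as np

def depressed_drive(inputs, u=0.5, tau=5.0):
    """Effective drive through a depressing synapse (assumed toy parameters)."""
    x = 1.0                                  # available synaptic resource in [0, 1]
    drive = []
    for s in inputs:                         # s = presynaptic activity this step
        drive.append(u * x * s)              # postsynaptic drive
        x += (1.0 - x) / tau - u * x * s     # recover toward 1, deplete with use
        x = min(max(x, 0.0), 1.0)
    return np.array(drive)

stimuli = [1, 1, 1, 1, 0, 1]                 # repeats, one omission, then the image
print(np.round(depressed_drive(stimuli), 3)) # drive decays, then rebounds after omission
```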

4.
5.
Dynamic recurrent neural networks composed of units with continuous activation functions provide a powerful tool for simulating a wide range of behaviors, since the requisite interconnections can be readily derived by gradient descent methods. However, it is not clear whether more realistic integrate-and-fire cells with comparable connection weights would perform the same functions. We therefore investigated methods to convert dynamic recurrent neural networks of continuous units into networks with integrate-and-fire cells. The transforms were tested on two recurrent networks derived by backpropagation. The first simulates a short-term memory task with units that mimic neural activity observed in cortex of monkeys performing instructed delay tasks. The network utilizes recurrent connections to generate sustained activity that codes the remembered value of a transient cue. The second network simulates patterns of neural activity observed in monkeys performing a step-tracking task with flexion/extension wrist movements. This more complicated network provides a working model of the interactions between multiple spinal and supraspinal centers controlling motoneurons. Our conversion algorithm replaced each continuous unit with multiple integrate-and-fire cells that interact through delayed "synaptic potentials". Successful transformation depends on obtaining an appropriate fit between the activation function of the continuous units and the input-output relation of the spiking cells. This fit can be achieved by adapting the parameters of the synaptic potentials to replicate the input-output behavior of a standard sigmoidal activation function (shown for the short-term memory network). Alternatively, a customized activation function can be derived from the input-output relation of the spiking cells for a chosen set of parameters (demonstrated for the wrist flexion/extension network). In both cases the resulting networks of spiking cells exhibited activity that replicated the activity of corresponding continuous units. This confirms that the network solutions obtained through backpropagation apply to spiking networks and provides a useful method for deriving recurrent spiking networks performing a wide range of functions.
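The "appropriate fit" mentioned above can be made concrete: for a noiseless leaky integrate-and-fire cell driven by constant input, the steady firing rate has a closed form, and this rate curve is what gets matched against a sigmoidal activation function. The sketch below uses illustrative parameters, not the paper's.

```python
import numpy as np

def lif_rate(i_const, tau_m=0.02, v_th=1.0, v_reset=0.0, t_ref=0.002):
    """Closed-form steady rate of a noiseless LIF cell under constant drive."""
    if i_const <= v_th:
        return 0.0                          # subthreshold input: never fires
    t_isi = t_ref + tau_m * np.log((i_const - v_reset) / (i_const - v_th))
    return 1.0 / t_isi                      # spikes per second

currents = np.linspace(0.8, 3.0, 6)
print(np.round([lif_rate(i) for i in currents], 1))  # saturating f-I curve
```

Fitting a sigmoid to this f-I curve (or deriving a custom activation from it) is the step that lets a spiking population stand in for a continuous unit.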

6.
We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamics regime, the so-called edge of chaos. The reasons for this maximized performance, however, are not completely understood. We adopt an information-theoretic framework and, for the first time, directly quantify the computational capabilities between elements of these networks as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized, input-driven ways of improving performance in recurrent neural networks. Moreover, the networks we study share important features with biological systems, such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which has been shown to also operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on why biological systems are tuned into this specific regime.
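The usual recipe for placing such a random recurrent layer near the edge of chaos is to rescale its weight matrix to a spectral radius just below 1; the sketch below shows this echo-state-style construction with illustrative sizes (the paper's networks and scaling may differ).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
w = rng.normal(0.0, 1.0, size=(n, n)) / np.sqrt(n)
rho = np.max(np.abs(np.linalg.eigvals(w)))   # current spectral radius
w *= 0.95 / rho                              # rescale to just inside stability

x = np.zeros(n)
for t in range(50):                          # online computation on an input stream
    u = rng.normal()                         # scalar input sample
    x = np.tanh(w @ x + 0.1 * u)
print(float(np.abs(x).mean()))               # activity neither dies out nor explodes
```

Pushing the scaling factor above 1.0 crosses into the chaotic regime, which is exactly the transition across which the information transfer and storage measures are evaluated.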

7.
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require the prior construction of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. First, we derive constraints under which classes of spiking neural networks lend themselves as substrates for powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows such networks to learn even difficult benchmark tasks, such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.

8.
9.
The microarray technology allows the high-throughput quantification of the mRNA levels of thousands of genes under dozens of conditions, generating a wealth of data which must be analyzed by computational means. A popular framework for such analysis is Matlab, a powerful computing language for which many functions have been written. However, although functions for complex topics like neural networks or principal component analysis are freely available in Matlab, efficient functions for more basic tasks like data normalization or hierarchical clustering are not. The MatArray toolbox aims at filling this gap by offering efficient implementations of the most needed functions for microarray analysis. The functions in the toolbox are command-line only, since the toolbox is geared toward seasoned Matlab users.

10.
According to the WHO, pollution is a worldwide public health problem. In Colombia, low-cost strategies for air quality monitoring have been implemented using wireless sensor networks (WSNs), which achieve a better spatial resolution than traditional sensor networks at a lower operating cost. Nevertheless, one of the recurrent issues of WSNs is missing data due to environmental and location conditions, which hinders data collection. Consequently, WSNs should have effective mechanisms to recover missing data, and matrix factorization (MF) has been shown to be a solid alternative for solving this problem. This study proposes a novel MF technique with a neural network architecture (i.e., deep matrix factorization or DMF) to estimate missing particulate matter (PM) data in a WSN in the Aburrá Valley, Colombia. We found that the model that included spatial-temporal features (using embedding layers) captured the behavior of the pollution measured at each node more efficiently, thus producing better estimates than standard matrix factorization and other variations of the model proposed here.
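The core of such an imputation model is factorizing the node-by-time measurement matrix from its observed entries only. The sketch below is plain matrix factorization with latent node and time factors trained by gradient descent; the paper's DMF stacks neural layers on top of such embeddings. All sizes and rates here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_times, k = 8, 50, 3
truth = rng.random((n_nodes, k)) @ rng.random((k, n_times))  # synthetic "PM" matrix
mask = rng.random(truth.shape) > 0.3                         # observed entries only

u = rng.normal(0, 0.1, (n_nodes, k))        # latent node factors
v = rng.normal(0, 0.1, (n_times, k))        # latent time factors
lr = 0.01
for _ in range(5000):                       # gradient descent on observed entries
    err = (u @ v.T - truth) * mask
    u -= lr * (err @ v)
    v -= lr * (err.T @ u)

print(float(np.abs((u @ v.T - truth)[~mask]).mean()))  # error on the missing entries
```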

11.
In this paper we summarize some of the main contributions of models of recurrent neural networks with associative memory properties. We compare the behavior of these attractor neural networks with empirical data from both physiology and psychology. This type of network could be used in models with more complex functions.
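The canonical example of such an attractor network is the Hopfield model: patterns are stored with a Hebbian rule, and a corrupted cue relaxes onto the nearest stored memory. A minimal sketch with illustrative sizes, not tied to any specific model in the survey:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))

w = (patterns.T @ patterns) / n           # Hebbian storage
np.fill_diagonal(w, 0.0)

cue = patterns[0].copy()
flip = rng.choice(n, size=15, replace=False)
cue[flip] *= -1                           # corrupt 15% of the bits

for _ in range(5 * n):                    # asynchronous relaxation to the attractor
    i = rng.integers(n)
    cue[i] = 1 if w[i] @ cue >= 0 else -1

print(int((cue == patterns[0]).sum()), "of", n, "bits recovered")
```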

12.
A modular approach to neural behavior control of autonomous robots is presented. It is based on the assumption that the complex internal dynamics of recurrent neural networks can efficiently solve complex behavior tasks. For the development of appropriate neural control structures, an evolutionary algorithm is introduced which is able to generate neuromodules with specific functional properties, as well as the connectivity structure for a modular synthesis of such modules. This so-called ENS3 algorithm does not use genetic coding. It is primarily designed to develop the size and connectivity structure of neuro-controllers, but at the same time it also optimizes parameters of individual networks such as synaptic weights and bias terms. For demonstration, evolved networks for the control of miniature Khepera robots are presented. The aim is to develop robust controllers, in the sense that neuro-controllers evolved in a simulator show comparably good behavior when loaded onto a real robot acting in a physical environment. The discussed examples of such controllers generate obstacle avoidance and phototropic behaviors in non-trivial environments.

13.
Saha S, Raghava GP. Proteins. 2006;65(1):40-48.
B-cell epitopes play a vital role in the development of peptide vaccines, in the diagnosis of diseases, and in allergy research. Experimental methods used for characterizing epitopes are time consuming and demand large resources. The availability of epitope prediction methods can rapidly aid experimenters by simplifying this problem. Standard feed-forward (FNN) and recurrent (RNN) neural networks have been used in this study for predicting B-cell epitopes in an antigenic sequence. The networks have been trained and tested on a clean data set consisting of 700 non-redundant B-cell epitopes obtained from the Bcipep database and an equal number of non-epitopes obtained randomly from the Swiss-Prot database. The networks have been trained and tested at different input window lengths and numbers of hidden units. Maximum accuracy has been obtained using a recurrent neural network (Jordan network) with a single hidden layer of 35 hidden units and a window length of 16. The final network yields an overall prediction accuracy of 65.93% when tested by fivefold cross-validation. The corresponding sensitivity, specificity, and positive prediction values are 67.14%, 64.71%, and 65.61%, respectively. It has been observed that the RNN (Jordan network) was more successful than the FNN in the prediction of B-cell epitopes. The length of the peptide is also important in the prediction of B-cell epitopes from antigenic sequences. The webserver ABCpred is freely available at www.imtech.res.in/raghava/abcpred/.
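For orientation, the sketch below shows the kind of fixed-window input encoding such networks consume: each position is represented by a window of length 16, one-hot encoded over the 20 amino acids (padding positions stay all-zero). It is a generic reconstruction of the set-up, not ABCpred's trained model.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode_windows(seq, w=16):
    """One-hot window encoding of an antigen sequence (illustrative)."""
    pad = "X" * (w // 2)
    seq = pad + seq + pad
    windows = []
    for i in range(len(seq) - w + 1):
        vec = np.zeros((w, len(AA)))
        for j, aa in enumerate(seq[i:i + w]):
            if aa in AA:                  # padding 'X' remains all-zero
                vec[j, AA.index(aa)] = 1
        windows.append(vec.ravel())
    return np.array(windows)

x = encode_windows("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(x.shape)   # one 320-dimensional input vector per window position
```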

14.
The current status of the effects of ovarian steroids on learning and memory remains somewhat unclear, despite a large undertaking to evaluate these effects. What is emerging from this literature is that estrogen, and perhaps progesterone, influences learning and memory, but does so in a task-dependent manner. Previously, we have shown that ovariectomized rats given acute treatments of estrogen acquire allocentric or "place" tasks more easily than do rats deprived of estrogen, but acquire egocentric or "response" learning tasks more slowly than do those deprived of hormone, suggesting that estrogen treatment may bias the strategy a rat is able to use to solve tasks. To determine whether natural fluctuations in ovarian hormones influence cognitive strategy, we tested whether strategy use fluctuated across the estrous cycle in reproductively intact female rats. We found that in two tasks in which rats freely choose the strategy used to solve the task, rats were more likely to use place strategies at proestrus, that is, when ovarian steroids are high. Conversely, rats in estrus were biased toward response strategies. The data suggest that natural fluctuations in ovarian steroids may bias the neural system used, and thus the cognitive strategies chosen, during learning and memory.

15.
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Intuitively, modularity should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g. to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills better, and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
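The connection-cost technique itself is essentially a one-line change to the objective: selection trades task performance against the number of active connections. A minimal sketch, with an assumed cost weighting and a stand-in performance score:

```python
import numpy as np

def fitness(task_performance, weights, cost_per_connection=0.01):
    """Performance minus a penalty per nonzero connection (assumed weighting)."""
    return task_performance - cost_per_connection * np.count_nonzero(weights)

rng = np.random.default_rng(3)
w_dense = rng.normal(size=(20, 20))
w_sparse = w_dense * (np.abs(w_dense) > 1.5)    # prune weak links
print(fitness(0.90, w_dense))                   # high performance, heavy wiring
print(fitness(0.88, w_sparse))                  # slightly worse task score can win
```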

16.
Solving a multiclass classification task using a small imbalanced database of high-dimensional patterns is difficult due to the curse of dimensionality and the bias of the training toward the majority classes. Such a problem has arisen while diagnosing genetic abnormalities by classifying a small database of fluorescence in situ hybridization signals of types having different frequencies of occurrence. We propose, and experimentally study in the cytogenetic domain, two solutions to the problem. The first is hierarchical decomposition of the classification task, where each hierarchy level is designed to tackle a simpler problem, represented by classes that are approximately balanced. The second solution is balancing the data by up-sampling the minority classes, accompanied by dimensionality reduction. Implemented with either the naive Bayesian classifier or the multilayer perceptron neural network, both solutions have diminished the problem and contributed to improved accuracy. In addition, the experiments suggest that coping with the smallness of the data is more beneficial than dealing with its imbalance.
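The up-sampling step can be illustrated in a few lines: each minority class is re-sampled with replacement until all classes match the majority count, after which dimensionality reduction (e.g. PCA) would be applied. A minimal sketch with made-up data:

```python
import numpy as np

def upsample(x, y, seed=4):
    """Sample each class with replacement up to the majority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    xs, ys = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        pick = rng.choice(idx, size=target, replace=True)
        xs.append(x[pick])
        ys.append(y[pick])
    return np.concatenate(xs), np.concatenate(ys)

x = np.random.default_rng(5).normal(size=(60, 10))
y = np.array([0] * 40 + [1] * 15 + [2] * 5)     # imbalanced class frequencies
xb, yb = upsample(x, y)
print(np.unique(yb, return_counts=True))        # 40 samples of each class
```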

17.
This paper investigates the problem of stability analysis for recurrent neural networks with time-varying delays and polytopic uncertainties. Parameter-dependent Lyapunov functionals are employed to obtain sufficient conditions that guarantee the robust global exponential stability of the equilibrium point of the considered neural network. The derived stability criteria are expressed in terms of a set of relaxed linear matrix inequalities, which can be easily tested using commercially available software. Two numerical examples are provided to demonstrate the effectiveness of the proposed results.
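For orientation, a generic Lyapunov-Krasovskii functional of the kind used in such delay-dependent analyses is sketched below; the paper's parameter-dependent construction is richer than this form, and the matrices here are placeholders.

```latex
% Generic Lyapunov-Krasovskii functional for a delayed state x(t) (illustrative form):
V(x_t) = x^{\top}(t)\, P\, x(t) \;+\; \int_{t-d(t)}^{t} x^{\top}(s)\, Q\, x(s)\, ds,
\qquad P \succ 0, \quad Q \succ 0.
```

Stability is certified when the condition that the derivative of V along network trajectories be negative can be rewritten as a set of linear matrix inequalities that off-the-shelf semidefinite solvers can check.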

18.
The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, “trained” networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale’s principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity patterns and behavior that can be modeled, and suggest a unified setting in which diverse cognitive computations and mechanisms can be studied.
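One of the biological constraints named above, Dale's principle, is commonly imposed during gradient training by optimizing an unconstrained matrix while using a rectified, sign-fixed version in the forward pass; that construction is sketched below with illustrative sizes (the framework's actual code lives in its Theano implementation).

```python
import numpy as np

n, frac_e = 10, 0.8                     # 80% excitatory, 20% inhibitory (assumed split)
signs = np.where(np.arange(n) < int(frac_e * n), 1.0, -1.0)
d = np.diag(signs)                      # fixed sign per presynaptic unit

rng = np.random.default_rng(6)
w_free = rng.normal(size=(n, n))        # unconstrained trained parameter
w_eff = np.maximum(w_free, 0.0) @ d     # effective weights obey Dale's principle

print(bool((w_eff[:, :8] >= 0).all()))  # excitatory columns: non-negative
print(bool((w_eff[:, 8:] <= 0).all()))  # inhibitory columns: non-positive
```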

19.
It has been shown that dynamic recurrent neural networks are successful in identifying the complex mapping relationship between full-wave-rectified electromyographic (EMG) signals and limb trajectories during complex movements. These connectionist models include two types of adaptive parameters: the interconnection weights between the units and the time constants associated with each neuron-like unit; they are governed by continuous-time equations. Due to their internal structure, these models are particularly appropriate for solving dynamical tasks (with time-varying input and output signals). We show in this paper that the introduction of a modular organization dedicated to different aspects of the dynamical mapping, including privileged communication channels, can refine the architecture of these recurrent networks. We first divide the initial individual network into two communicating subnetworks. These two modules receive the same EMG signals as input but are involved in different identification tasks related to position and acceleration. We then show that the introduction of an artificial distance in the model (using a Gaussian modulation factor of weights) induces a reduced modular architecture based on a self-elimination of null synaptic weights. Moreover, this self-selected reduced model based on two subnetworks performs the identification task better than the original single network while using fewer free parameters (better learning curve and better identification quality). We also show that this modular network exhibits several features that can be considered biologically plausible after the learning process: self-selection of a specific inhibitory communicating path between both subnetworks, appearance of tonic and phasic neurons, and a coherent distribution of the values of the time constants within each subnetwork.
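The Gaussian modulation factor mentioned above can be stated compactly: each weight is multiplied by exp(-d^2 / (2 sigma^2)) for an artificial distance d between units, so remote connections fade toward zero and effectively prune themselves. A minimal sketch with assumed one-dimensional "positions":

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma = 12, 2.0
pos = rng.uniform(0, 10, size=n)              # artificial 1-D unit positions (assumed)
d = np.abs(pos[:, None] - pos[None, :])       # pairwise distances between units
w = rng.normal(size=(n, n)) * np.exp(-d**2 / (2 * sigma**2))

pruned = np.abs(w) < 1e-3                     # near-null weights self-eliminate
print(f"{pruned.mean():.0%} of connections effectively removed")
```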

20.
An algorithm called bidirectional long short-term memory networks (BLSTM) for processing sequential data is introduced. This supervised learning method trains a special recurrent neural network to use very long-range, symmetric sequence context, using a combination of nonlinear processing elements and linear feedback loops for storing long-range context. The algorithm is applied to the sequence-based prediction of protein localization and correctly predicts 93.3 percent of novel nonplant proteins and 88.4 percent of novel plant proteins, an improvement over feedforward and standard recurrent networks solving the same problem. The BLSTM system is available as a Web service at http://stepc.stepc.gr/-synaptic/blstm.html.
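As a generic stand-in for such a sequence classifier (not the authors' implementation), the sketch below wires a bidirectional LSTM so each position's output sees symmetric left and right context; sizes and class counts are illustrative.

```python
import torch
import torch.nn as nn

class BLSTMTagger(nn.Module):
    def __init__(self, n_symbols=20, hidden=64, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)   # forward + backward states

    def forward(self, seq):                  # seq: (batch, length) of symbol ids
        h, _ = self.lstm(self.embed(seq))    # h: (batch, length, 2 * hidden)
        return self.out(h)                   # per-position class scores

model = BLSTMTagger()
scores = model(torch.randint(0, 20, (2, 30)))
print(scores.shape)                          # torch.Size([2, 30, 4])
```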
