Similar Literature
20 similar documents found (search time: 0 ms)
1.
In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, is presented for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are well suited to non-stationary data such as financial series. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks, a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-to-Noise Ratio. This work demonstrates the applicability of the Polychronous Spiking Network to financial data forecasting, which in turn indicates the potential of such networks over traditional systems in difficult-to-manage non-stationary environments.
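The two trading metrics cited above can be computed directly from an equity curve. A minimal Python sketch (the function names, the example curve, and the 252-trading-days-per-year convention are illustrative assumptions, not taken from the paper):

```python
def annualised_return(equity, periods_per_year=252):
    """Geometric annualised return from an equity curve (list of portfolio values)."""
    total = equity[-1] / equity[0]
    years = (len(equity) - 1) / periods_per_year
    return total ** (1 / years) - 1

def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak = equity[0]
    worst = 0.0
    for v in equity:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

curve = [100, 110, 105, 120, 90, 130]
print(max_drawdown(curve))  # 0.25 (peak 120 -> trough 90)
```

Annualised return here is geometric; the paper's exact definition may differ.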

2.
Following the unconventional gas revolution, forecasting natural gas prices has become increasingly important because their association with crude oil prices has weakened. Motivated by this, we propose modified hybrid models in which various combinations of wavelet approximation and detail components, autoregressive integrated moving average (ARIMA), generalized autoregressive conditional heteroskedasticity (GARCH), and artificial neural network models are employed to predict natural gas prices. We also examine the boundary problem in wavelet decomposition, and compare results that account for it with those that do not. The empirical results show that the suggested approach handles the boundary problem, allowing appropriate forecasts to be extracted. The wavelet-hybrid approach was superior in all cases, whereas using detail components in the forecasting yielded only a small improvement in performance. Therefore, forecasting with the approximation component alone is acceptable, in consideration of forecasting efficiency.
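The wavelet step can be illustrated with a single-level Haar transform, the simplest orthogonal wavelet. This is only a sketch of the general idea, not the decomposition used in the paper (which also feeds ARIMA/GARCH/ANN stages and treats the right-hand boundary with care):

```python
import math

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients.
    Assumes even length; each coefficient uses only one local pair of
    samples, so no padding past the series end is needed."""
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform (exact reconstruction)."""
    s = math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

a, d = haar_dwt([4.0, 2.0, 5.0, 7.0])
print(haar_idwt(a, d))  # ~ [4.0, 2.0, 5.0, 7.0] (exact up to rounding)
```

Longer wavelet filters overlap the series boundary, which is exactly where the boundary problem discussed above arises.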

3.
4.
Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance.

5.
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. In particular, the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes, so that observations can be pooled over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the computationally heaviest aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscience data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
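For discrete data, transfer entropy pooled over an ensemble of trials at a fixed time point can be estimated by simple counting. The sketch below assumes history length 1 and binary symbols; it illustrates the ensemble idea only and is unrelated to the GPU implementation described above:

```python
import math
import random
from collections import Counter

def transfer_entropy(trials, t):
    """Discrete transfer entropy X -> Y at time t, estimated by pooling
    observations across an ensemble of trials instead of over time
    (so the processes need not be stationary). trials is a list of
    (x, y) pairs of equal-length symbol sequences; history length 1."""
    joint, cond, pair, marg = Counter(), Counter(), Counter(), Counter()
    for x, y in trials:
        joint[(y[t + 1], y[t], x[t])] += 1   # p(y_next, y, x)
        cond[(y[t], x[t])] += 1              # p(y, x)
        pair[(y[t + 1], y[t])] += 1          # p(y_next, y)
        marg[y[t]] += 1                      # p(y)
    n = len(trials)
    te = 0.0
    for (yn, yp, xp), c in joint.items():
        # p * log2[ p(y_next | y, x) / p(y_next | y) ]
        te += (c / n) * math.log2((c / cond[(yp, xp)]) / (pair[(yn, yp)] / marg[yp]))
    return te

# X at time 1 deterministically drives Y at time 2 in every trial:
random.seed(0)
trials = []
for _ in range(200):
    x = [random.randint(0, 1) for _ in range(3)]
    y = [0, 0, x[1]]          # y[2] copies x[1]
    trials.append((x, y))
print(transfer_entropy(trials, 1))  # close to 1 bit
```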

6.
We investigate the dynamics of a deterministic finite-sized network of synaptically coupled spiking neurons and present a formalism for computing the network statistics in a perturbative expansion. The small parameter for the expansion is the inverse number of neurons in the network. The network dynamics are fully characterized by a neuron population density that obeys a conservation law analogous to the Klimontovich equation in the kinetic theory of plasmas. The Klimontovich equation does not possess well-behaved solutions but can be recast as a coupled system of well-behaved moment equations, known as a moment hierarchy. The moment hierarchy is impossible to solve in general, but in the mean field limit of an infinite number of neurons it reduces to a single well-behaved conservation law for the mean neuron density. For a large but finite system, the moment hierarchy can be truncated perturbatively with the inverse system size as a small parameter, but the resulting set of reduced moment equations is still very difficult to solve. However, the entire moment hierarchy can also be re-expressed in terms of a functional probability distribution of the neuron density. The moments can then be computed perturbatively using methods from statistical field theory. Here we derive the complete mean field theory and the lowest order second moment corrections for physiologically relevant quantities. Although we focus on finite-size corrections, our method can be used to compute perturbative expansions in any parameter.

7.
The objectives of this trial were to develop multiple linear regression (MLR) models and three-layer Levenberg-Marquardt back propagation (BP3) neural network models using the Cornell Net Carbohydrate and Protein System (CNCPS) carbohydrate fractions as dietary variables for predicting in vitro rumen volatile fatty acid (VFA) production, and to compare the MLR and BP3 models. Two datasets were established for the trial: the first, containing 45 feed mixtures with concentrate/roughage ratios of 10:90, 20:80, 30:70, 40:60, and 50:50, was used for establishing the models, and the second, containing 10 feed mixtures with the same concentrate/roughage ratios as the first, was used for testing the models. The VFA production of the feed samples was determined using an in vitro incubation technique. The CNCPS carbohydrate fractions (g), i.e. CA (sugars), CB1 (starch and pectin), and CB2 (available cell wall), of the feed samples were calculated from chemical analysis. The performance of the MLR and BP3 models was compared using a paired t-test, the coefficient of determination (R2), and the root mean square prediction error (RMSPE) between observed and predicted values. Statistical analysis indicated that VFA production (mmol) was significantly correlated with the CNCPS carbohydrate fractions (g) CA, CB1, and CB2 in a multiple linear pattern. Compared with the MLR models, the BP3 models were more accurate in predicting acetate, propionate, and total VFA production, and similar in predicting butyrate production. The trial indicated that both MLR and BP3 models are suitable for predicting in vitro rumen VFA production of feed mixtures using the CNCPS carbohydrate fractions CA, CB1, and CB2 as input dietary variables, with the BP3 models showing greater accuracy.
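A multiple linear regression of the kind described, VFA ~ CA + CB1 + CB2, can be fit by ordinary least squares via the normal equations. The sketch below uses synthetic data and hypothetical coefficients, not the trial's measurements:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def mlr_fit(X, y):
    """Least-squares fit of y = b0 + b1*x1 + ... via the normal equations."""
    Z = [[1.0] + row for row in X]           # prepend intercept column
    p = len(Z[0])
    A = [[sum(Z[i][r] * Z[i][c] for i in range(len(Z))) for c in range(p)] for r in range(p)]
    b = [sum(Z[i][r] * y[i] for i in range(len(Z))) for r in range(p)]
    return solve(A, b)

# Synthetic example: VFA = 2 + 0.5*CA + 0.3*CB1 + 0.1*CB2 (hypothetical coefficients)
X = [[1, 2, 3], [2, 1, 0], [3, 3, 1], [0, 1, 2], [4, 0, 5]]
y = [2 + 0.5 * a + 0.3 * b1 + 0.1 * b2 for a, b1, b2 in X]
print([round(c, 6) for c in mlr_fit(X, y)])  # ~ [2.0, 0.5, 0.3, 0.1]
```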

8.
In this paper, the extended Kalman filter (EKF) algorithm is applied to model the gene regulatory network from gene time series data. The gene regulatory network is considered as a nonlinear dynamic stochastic model that consists of the gene measurement equation and the gene regulation equation. After specifying the model structure, we apply the EKF algorithm to identify both the model parameters and the actual values of the gene expression levels. It is shown that the EKF algorithm is an online estimation algorithm that can identify a large number of parameters (including parameters of nonlinear functions) through an iterative procedure using a small number of observations. Four real-world gene expression data sets are employed to demonstrate the effectiveness of the EKF algorithm, and the obtained models are evaluated from the viewpoint of bioinformatics.
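The EKF recursion itself is compact. A minimal scalar sketch (the sine dynamics, noise levels, and direct observation model are illustrative assumptions, not the gene-network model of the paper):

```python
import math

class ScalarEKF:
    """Minimal scalar extended Kalman filter: x' = f(x)+w, z = h(x)+v.
    f, h and their derivatives df, dh are supplied by the caller."""
    def __init__(self, f, df, h, dh, q, r, x0, p0):
        self.f, self.df, self.h, self.dh = f, df, h, dh
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def step(self, z):
        # Predict through the nonlinear dynamics, linearised at x.
        F = self.df(self.x)
        x_pred = self.f(self.x)
        p_pred = F * self.p * F + self.q
        # Update with the measurement, linearised at the prediction.
        H = self.dh(x_pred)
        k = p_pred * H / (H * p_pred * H + self.r)
        self.x = x_pred + k * (z - self.h(x_pred))
        self.p = (1 - k * H) * p_pred
        return self.x

# Track x_{t+1} = sin(x_t) from noiseless direct observations (illustrative).
ekf = ScalarEKF(f=math.sin, df=math.cos, h=lambda x: x, dh=lambda x: 1.0,
                q=1e-4, r=1e-2, x0=0.5, p0=1.0)
truth = 0.9
for _ in range(50):
    truth = math.sin(truth)
    ekf.step(truth)
print(abs(ekf.x - truth) < 0.05)  # True: the estimate converges to the state
```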

9.
Estimating gene regulatory networks from yeast expression time series
Gene regulatory networks are the manifestation of biological function at the level of gene expression. Using combined linear regulatory models, regulatory-element identification, and gene clustering, we inferred the gene regulatory networks of yeast during the cell cycle and under environmental stress from genome-wide expression profiles. The results show that cells adjust their regulatory networks under different environmental conditions. Under favourable conditions, the regulatory networks governing cell growth and proliferation play the dominant role; in response to environmental stress, cells reprogram their regulatory networks, repressing genes related to growth and proliferation, inducing genes related to adaptive carbohydrate metabolism and structural repair, and possibly initiating meiosis to form spores. From the cell-cycle and stress-response gene sets, the binding site TTCCTGGAAA of the transcription factor Mcm1 was recovered, as was the binding site TGAAAAWTTT of Dal82 in genes of the allantoin metabolic pathway. Estimating gene regulatory networks from yeast expression time series is therefore feasible, and the results agree well with experimental observations reported to date.

10.
Global exponential stability of delayed cellular neural network models
Using topological degree theory, a generalized Halanay matrix delay differential inequality, Lyapunov methods, and Dini derivatives, the global exponential stability of delayed cellular neural network models is studied. The conditions required in the related literature, that the output functions fj be bounded and differentiable on the real line R, are removed, and weaker criteria for the existence and uniqueness of the equilibrium point and for its global exponential stability are given, generalizing and improving previous results. A concluding numerical example shows that the present results are not only less conservative but also simple to compute.

11.
Recent years have witnessed a rapid development of network reconstruction approaches, especially a series of methods based on compressed sensing. Although compressed-sensing based methods require much less data than conventional approaches, compressed sensing has not been fully exploited for reconstructing heterogeneous networks because of hubs. Hub neighbors require much more data to be inferred than small-degree nodes, inducing a cask effect for the reconstruction of heterogeneous networks. Here, a conflict-based method is proposed to overcome the cask effect and considerably reduce the amount of data needed for accurate reconstruction. Moreover, an element elimination method is presented that uses partially available structural information to reduce data requirements. Integrating both methods improves reconstruction performance beyond what either technique achieves separately. These methods are validated on two evolutionary games taking place in scale-free networks, where individual information is accessible and an attempt is made to decode the network structure from measurable data. The results demonstrate that in all cases, considerable data are saved compared to the absence of these two methods. Given the prevalence of heterogeneous networks in nature and society and the high cost of data acquisition in large-scale networks, these approaches have wide applications in many fields and are valuable for understanding and controlling the collective dynamics of a variety of heterogeneous networked systems.

12.
13.
Maximum likelihood and maximum parsimony are two key methods for phylogenetic tree reconstruction. Under certain conditions, each of these two methods can perform more or less efficiently, resulting in unresolved or disputed phylogenies. We show that a neural network can distinguish between four-taxon alignments that were evolved under conditions susceptible to either long-branch attraction or long-branch repulsion. When likelihood and parsimony methods are discordant, the neural network can provide insight as to which tree reconstruction method is best suited to the alignment. When applied to the contentious case of Strepsiptera evolution, our method shows robust support for the current scientific view, that is, it places Strepsiptera with beetles, distant from flies.

14.
《IRBM》2022,43(5):422-433
Background
Electrocardiography (ECG) records the electrical activity of the heart and provides a diagnostic means for heart-related diseases. Arrhythmia is any irregularity of the heartbeat that causes an abnormality in the heart rhythm. Early detection of arrhythmia is of great importance for preventing many diseases. Manual analysis of ECG recordings is not practical for quickly identifying arrhythmias that may cause sudden death. Hence, many studies have been presented to develop computer-aided-diagnosis (CAD) systems that identify arrhythmias automatically.
Methods
This paper proposes a novel deep learning approach to identify arrhythmias in ECG signals. The proposed approach identifies arrhythmia classes using a Convolutional Neural Network (CNN) trained on two-dimensional (2D) ECG beat images. First, ECG signals containing 5 different arrhythmia classes are segmented into heartbeats, which are transformed into 2D grayscale images. The images are then used as input for training a new CNN architecture to classify heartbeats.
Results
The experimental results show that the proposed approach reaches an overall accuracy of 99.7%, sensitivity of 99.7%, and specificity of 99.22% in the classification of five different ECG arrhythmias. Further, the proposed CNN architecture is compared to other popular CNN architectures such as LeNet and ResNet-50 to evaluate the performance of the study.
Conclusions
Test results demonstrate that the deep network trained on ECG images provides outstanding classification performance on arrhythmic ECG signals and outperforms similar network architectures. Moreover, the proposed method has lower computational cost than existing methods and is more suitable for mobile-device-based diagnosis systems, as it does not involve any complex preprocessing. Hence, the proposed approach provides a simple and robust automatic cardiac arrhythmia detection scheme for the classification of ECG arrhythmias.
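The beat segmentation step described in Methods can be sketched as a fixed-length window cut around each R-peak. Window length, peak detection, and the actual 2D image construction are omitted; all names and numbers here are illustrative, not the paper's settings:

```python
def segment_beats(signal, r_peaks, half_window=5):
    """Cut a fixed-length window around each R-peak index; beats whose
    window would run past either end of the recording are skipped."""
    beats = []
    for p in r_peaks:
        if p - half_window >= 0 and p + half_window < len(signal):
            beats.append(signal[p - half_window:p + half_window + 1])
    return beats

def to_grayscale(beat, levels=256):
    """Scale one beat to integer gray levels 0..levels-1 (a single image
    row; a real 2D beat image would render the waveform instead)."""
    lo, hi = min(beat), max(beat)
    span = (hi - lo) or 1.0
    return [int((v - lo) / span * (levels - 1)) for v in beat]

sig = [0, 1, 9, 1, 0, 0, 1, 8, 1, 0, 0, 1]
beats = segment_beats(sig, r_peaks=[2, 7], half_window=2)
print(beats)                   # [[0, 1, 9, 1, 0], [0, 1, 8, 1, 0]]
print(to_grayscale(beats[0]))  # [0, 28, 255, 28, 0]
```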

15.
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a "new" computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Moreover, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and by enabling the analysis of the characteristics of those groups.

16.
Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing US land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985–2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location, disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha y⁻¹ of forests were disturbed annually, representing 1.09% y⁻¹ of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular, the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, whereas variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic because the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand-clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.

17.
We report a method using radial basis function (RBF) networks to estimate the time evolution of population activity in topologically organized neural structures from single-neuron recordings. This is an important problem in neuroscience research, as such estimates may provide insights into systems-level function of these structures. Since single-unit neural data tends to be unevenly sampled and highly variable under similar behavioral conditions, obtaining such estimates is a difficult task. In particular, a class of cells in the superior colliculus called buildup neurons can have very narrow regions of saccade vectors for which they discharge at high rates but very large surround regions over which they discharge at low, but not zero, levels. Estimating the dynamic movement fields for these cells for two spatial dimensions at closely spaced timed intervals is a difficult problem, and no general method has been described that can be applied to all buildup cells. Estimation of individual collicular cells' spatiotemporal movement fields is a prerequisite for obtaining reliable two-dimensional estimates of the population activity on the collicular motor map during saccades. Therefore, we have developed several computational-geometry-based algorithms that regularize the data before computing a surface estimation using RBF networks. The method is then expanded to the problem of estimating simultaneous spatiotemporal activity occurring across the superior colliculus during a single movement (the inverse problem). In principle, this methodology could be applied to any neural structure with a regular, two-dimensional organization, provided a sufficient spatial distribution of sampled neurons is available.
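A one-dimensional Gaussian RBF interpolation sketch conveys the core of the surface-estimation step (the paper works in two spatial dimensions plus time and regularizes the data first; the centers, values, and kernel width below are hypothetical):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def rbf_fit(centers, values, sigma):
    """Weights of a Gaussian RBF network interpolating values at the centers."""
    phi = [[math.exp(-((c - x) ** 2) / (2 * sigma ** 2)) for c in centers]
           for x in centers]
    return solve(phi, values)

def rbf_eval(centers, weights, sigma, x):
    return sum(w * math.exp(-((c - x) ** 2) / (2 * sigma ** 2))
               for c, w in zip(centers, weights))

centers = [0.0, 1.0, 2.0, 3.0]
values = [0.0, 1.0, 0.5, 0.2]   # e.g. sampled discharge rates (hypothetical)
w = rbf_fit(centers, values, sigma=0.8)
print(round(rbf_eval(centers, w, 0.8, 1.0), 6))  # 1.0: interpolates the samples
```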

18.
Increasing awareness of the issue of deforestation and degradation in the tropics has resulted in efforts to monitor forest resources in tropical countries. Advances in satellite-based remote sensing and ground-based technologies have allowed for monitoring of forests with high spatial, temporal and thematic detail. Despite these advances, there is a need to engage communities in monitoring activities and include these stakeholders in national forest monitoring systems. In this study, we analyzed activity data (deforestation and forest degradation) collected by local forest experts over a 3-year period in an Afro-montane forest area in southwestern Ethiopia and corresponding Landsat Time Series (LTS). Local expert data included forest change attributes, geo-location and photo evidence recorded using mobile phones with integrated GPS and photo capabilities. We also assembled LTS using all available data from all spectral bands and a suite of additional indices and temporal metrics based on time series trajectory analysis. We predicted deforestation, degradation or stable forests using random forest models trained with data from local experts and LTS spectral-temporal metrics as model covariates. Resulting models predicted deforestation and degradation with an out of bag (OOB) error estimate of 29% overall, and 26% and 31% for the deforestation and degradation classes, respectively. By dividing the local expert data into training and operational phases corresponding to local monitoring activities, we found that forest change models improved as more local expert data were used. Finally, we produced maps of deforestation and degradation using the most important spectral bands. The results in this study represent some of the first to combine local expert based forest change data and dense LTS, demonstrating the complementary value of both continuous data streams. 
Our results underpin the utility of both datasets and provide a useful foundation for integrated forest monitoring systems relying on data streams from diverse sources.

19.
DQ-FIT and CV-SORT have been developed to facilitate the automatic analysis of data sampled by radiotelemetry, but they can also be used with other data sampled in chronobiological settings. After import of data, DQ-FIT performs conventional linear, as well as rhythm analysis according to user-defined specifications. Linear analysis includes calculation of mean values, load values (percentage of values above a defined limit), highest and lowest readings, and areas under the (parameter-time) curve (AUC). All of these parameters are calculated for the total sampling interval and for user-defined day and night periods. Rhythm analysis is performed by fitting of partial Fourier series with up to six harmonics. The contribution of each harmonic to the overall variation of data is tested statistically; only those components are included in the best-fit function that contribute significantly. Parameters calculated in DQ-FIT's rhythm analysis include mesor, amplitudes, and acrophases of all rhythmic components; significance and percentage rhythm of the combined best fit; maximum and minimum of the fitted curve and times of their occurrence. In addition, DQ-FIT uses the first derivative of the fitted curve (i.e., its slope) to determine the time and extent of maximal increases and decreases within the total sampling interval or user-defined intervals of interest, such as the times of lights on or off. CV-SORT can be used to create tables or graphs from groups of data sets analyzed by DQ-FIT. Graphs are created in CV-SORT by calculation of group mean profiles from individual best-fit curves rather than their curve parameters. This approach allows the user to combine data sets that differ in the number and/or period length of harmonics included. In conclusion, DQ-FIT and CV-SORT can be helpful in the analysis of time-dependent data sampled by telemetry or other monitoring systems. The software can be obtained on request by any interested researcher.
(Chronobiology International, 14(6), 561–574, 1997)
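The single-harmonic case of the rhythm analysis described above reduces to a cosinor fit, for which least squares has a closed form when sampling is even over whole periods. A sketch (the 24 h period and synthetic data are illustrative; DQ-FIT itself fits up to six harmonics with significance testing):

```python
import math

def cosinor(t, y, period=24.0):
    """Single-harmonic least-squares fit y ~ M + A*cos(2*pi*t/period - phi).
    The closed form is exact when t covers whole periods at even spacing."""
    n = len(t)
    w = 2 * math.pi / period
    m = sum(y) / n                                               # mesor
    bc = 2 / n * sum(v * math.cos(w * ti) for ti, v in zip(t, y))
    bs = 2 / n * sum(v * math.sin(w * ti) for ti, v in zip(t, y))
    amp = math.hypot(bc, bs)                                     # amplitude
    phi = math.atan2(bs, bc)                                     # acrophase (rad)
    return m, amp, phi

# Recover known parameters from a synthetic 24 h rhythm sampled hourly.
t = [float(h) for h in range(24)]
y = [10 + 3 * math.cos(2 * math.pi * h / 24 - 1.0) for h in t]
m, amp, phi = cosinor(t, y)
print(round(m, 6), round(amp, 6), round(phi, 6))  # 10.0 3.0 1.0
```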

20.

Background

Despite continuing progress in X-ray crystallography and high-field NMR spectroscopy for the determination of three-dimensional protein structures, the number of unsolved and newly discovered sequences grows much faster than the number of determined structures. Protein modeling methods may bridge this huge sequence-structure gap as computational science develops. A grand challenge is to predict three-dimensional protein structure from the primary structure (residue sequence) alone. However, predicting residue contact maps is a crucial and promising intermediate step towards final three-dimensional structure prediction. Better predictions of local and non-local contacts between residues can transform protein sequence alignment into structure alignment, which can in turn greatly improve template-based three-dimensional protein structure predictors.

Methods

CNNcon, an improved contact map predictor based on multiple neural networks, using six sub-networks and one final cascade-network, was developed in this paper. Both the sub-networks and the final cascade-network were trained and tested with their corresponding data sets. For testing, the target protein was first encoded and then input to its corresponding sub-networks for prediction. The intermediate results were then input to the cascade-network to produce the final prediction.

Results

CNNcon accurately predicts an average of 58.86% of contacts at a distance cutoff of 8 Å for proteins with lengths ranging from 51 to 450 residues. The comparison results show that the present method performs better than the compared state-of-the-art predictors. In particular, the prediction accuracy remains steady as protein sequence length increases, indicating that CNNcon overcomes the thin-density problem that troubles other current predictors. This advantage makes the method valuable for the effective prediction of long proteins.
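The ground-truth side of contact map prediction, labeling residue pairs whose coordinates lie within the 8 Å cutoff, can be sketched directly (the three coordinates below are hypothetical; real work derives them from atoms of solved PDB structures):

```python
import math

def contact_map(coords, cutoff=8.0):
    """Binary residue contact map: entry (i, j) is 1 if the two residues'
    coordinates are within cutoff angstroms, matching the 8 A criterion."""
    n = len(coords)
    return [[1 if math.dist(coords[i], coords[j]) <= cutoff else 0
             for j in range(n)] for i in range(n)]

# Three hypothetical residue positions: 0-1 and 1-2 in contact, 0-2 not.
coords = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (11.0, 0.0, 0.0)]
for row in contact_map(coords):
    print(row)
# [1, 1, 0]
# [1, 1, 1]
# [0, 1, 1]
```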
