Similar Documents
20 similar records found
1.
This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper generalizes the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared with some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to modify a single efficacy. Synaptic efficacies are modified by a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network with part of the retinal circuit is presented as well.
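As a reference point for this family of learning rules, the sketch below implements plain Oja's rule for extracting the first principal component. It is not the paper's MHO rule (the modulation term is not reproduced here), and the correlated synthetic data are purely illustrative.

```python
# Minimal sketch of Oja-type Hebbian extraction of the first principal component.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 10)) @ rng.standard_normal((10, 10))  # correlated data
X -= X.mean(axis=0)
X /= X.std()                        # scale for stable online learning

w = rng.standard_normal(10)
w /= np.linalg.norm(w)
eta = 1e-3                          # learning rate
for x in X:
    y = w @ x                       # neuron output
    w += eta * y * (x - y * w)      # Oja's rule: Hebbian term with implicit normalization

# w should align (up to sign) with the leading eigenvector of the covariance
top = np.linalg.eigh(np.cov(X.T))[1][:, -1]
print(abs(w @ top))                 # close to 1 when converged
```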

2.
Non-linear data structure extraction using simple Hebbian networks
We present a class of neural network algorithms based on simple Hebbian learning which allow the discovery of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing principal component analysis (PCA). In this paper, this is extended to exploratory projection pursuit (EPP), a statistical method for investigating structure in high-dimensional data sets. As opposed to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on non-linear PCA. Received: 30 May 1994 / Accepted in revised form: 18 November 1994
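A minimal sketch of the negative-feedback network described above, assuming the standard linear formulation: feedforward activation, subtractive feedback of the reconstruction, and a Hebbian update on the residual. With a linear output this converges to the principal subspace without explicit weight normalisation; the EPP variant would pass the outputs through a nonlinearity before the update.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((10000, 8)) @ rng.standard_normal((8, 8))
X -= X.mean(axis=0)
X /= X.std()

n_out, eta = 3, 5e-4
W = rng.standard_normal((n_out, 8)) * 0.1
for x in X:
    y = W @ x                  # feedforward activation (use g(y) here for EPP)
    e = x - W.T @ y            # negative feedback: subtract the reconstruction
    W += eta * np.outer(y, e)  # simple Hebbian update on the residual

# rows of W should span the 3-D principal subspace of the data
top = np.linalg.eigh(np.cov(X.T))[1][:, -3:]
Q = np.linalg.qr(W.T)[0]                     # orthonormal basis of W's row space
print(np.linalg.norm(Q.T @ top))             # ~ sqrt(3) if the subspaces match
```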

3.
Tables of means, over assessors, are often used to summarize the results of sensory profile experiments. These tables are sometimes further summarized by Principal Components Analysis (PCA) to give plots of the samples in the principal sensory dimensions. An alternative procedure is to use Generalized Procrustes Analysis (GPA) on the assessor data to allow for differences in usage of the vocabulary and in the proportion of the scale used. It is shown that these methods give different configurations in the principal sensory dimensions when applied to the data from a study of cheeses (Muir et al. 1995). Using a jackknife method to calculate the variability of the samples in the principal sensory dimensions, the results from the GPA method are shown to have a higher dimensionality than those from the PCA method. Jackknife estimates of variability are used to calculate confidence ellipses to attach to the sensory space maps.

4.
In this paper, a new method for QRS complex analysis and estimation based on principal component analysis (PCA) and polynomial fitting techniques is presented. Multi-channel ECG signals were recorded and QRS complexes were obtained from every channel and aligned perfectly in matrices. For every channel, the covariance matrix was calculated from the QRS complex data matrix of many heartbeats. Then the corresponding eigenvectors and eigenvalues were calculated and reconstruction parameter vectors were computed by expansion of every beat in terms of the principal eigenvectors. These parameter vectors show short-term fluctuations that have to be discriminated from abrupt changes or long-term trends that might indicate diseases. For this purpose, first-order poly-fit methods were applied to the elements of the reconstruction parameter vectors. In healthy volunteers, subsequent QRS complexes were estimated by calculating the corresponding reconstruction parameter vectors derived from these functions. The similarity, absolute error and RMS error between the original and predicted QRS complexes were measured. Based on this work, thresholds can be defined for changes in the parameter vectors that indicate diseases.
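A sketch of this pipeline on synthetic single-channel beats: eigendecomposition of the QRS covariance, expansion of each beat in the principal eigenvectors, first-order polynomial fits to the reconstruction parameters over beat index, and estimation of the next beat. Data shapes and noise level are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_beats, n_samples = 200, 120
template = np.sin(np.linspace(0, np.pi, n_samples))          # stand-in QRS shape
beats = template + 0.05 * rng.standard_normal((n_beats, n_samples))

mean_beat = beats.mean(axis=0)
C = np.cov((beats - mean_beat).T)                            # covariance over samples
vals, vecs = np.linalg.eigh(C)
pcs = vecs[:, ::-1][:, :5]                                   # 5 principal eigenvectors

coeffs = (beats - mean_beat) @ pcs                           # reconstruction parameter vectors
t = np.arange(n_beats)
trend = np.array([np.polyfit(t, coeffs[:, k], 1)             # first-order poly-fit
                  for k in range(5)])                        # rows: [slope, intercept]

# estimate the next beat from the extrapolated parameter vector
next_coeffs = trend[:, 0] * n_beats + trend[:, 1]
predicted = mean_beat + pcs @ next_coeffs
rms_err = np.sqrt(np.mean((predicted - beats[-1]) ** 2))
print(f"RMS error vs last observed beat: {rms_err:.4f}")
```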

5.
Principal component analysis (PCA) is a dimensionality reduction and data analysis tool commonly used in many areas. The main idea of PCA is to represent high-dimensional data with a few representative components that capture most of the variance present in the data. However, there is an obvious disadvantage of traditional PCA when it is applied to analyze data where interpretability is important. In applications where the features have physical meanings, we lose the ability to interpret the principal components extracted by conventional PCA, because each principal component is a linear combination of all the original features. For this reason, sparse PCA has been proposed to improve the interpretability of traditional PCA by introducing sparsity to the loading vectors of principal components. Sparse PCA can be formulated as an ℓ1-regularized optimization problem, which can be solved by proximal gradient methods. However, these methods do not scale well because computation of the exact gradient is generally required at each iteration. The stochastic gradient framework addresses this challenge by computing an expected gradient at each iteration. Nevertheless, stochastic approaches typically have low convergence rates due to high variance. In this paper, we propose a convex sparse principal component analysis (Cvx-SPCA), which leverages a proximal variance-reduced stochastic scheme to achieve a geometric convergence rate. We further show that the convergence analysis can be significantly simplified by using a weak condition which allows a broader class of objectives to be applied. The efficiency and effectiveness of the proposed method are demonstrated on a large-scale electronic medical record cohort.
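The sketch below illustrates the ℓ1-regularized formulation with a plain full-gradient proximal step (soft-thresholding); the paper's Cvx-SPCA instead uses a proximal variance-reduced stochastic gradient, which is not reproduced here. Data, λ, and step size are illustrative.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 50))
S = np.cov(X.T)                              # sample covariance

lam = 0.1                                    # l1 penalty weight
eta = 1.0 / np.linalg.norm(S, 2)             # step size from the spectral norm
v = rng.standard_normal(50)
v /= np.linalg.norm(v)
for _ in range(500):
    v = soft(v + eta * (S @ v), eta * lam)   # gradient ascent on v'Sv, then prox
    n = np.linalg.norm(v)
    if n > 1:                                # keep the iterate in the unit ball
        v /= n

print(np.sum(v != 0), "nonzero loadings")    # sparse, hence interpretable, loadings
```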

6.
How does the brain prioritize among the contents of working memory (WM) to appropriately guide behavior? Previous work, employing inverted encoding modeling (IEM) of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) datasets, has shown that unprioritized memory items (UMI) are actively represented in the brain, but in a "flipped", or opposite, format compared to prioritized memory items (PMI). To acquire independent evidence for such a priority-based representational transformation, and to explore underlying mechanisms, we trained recurrent neural networks (RNNs) with a long short-term memory (LSTM) architecture to perform a 2-back WM task. Visualization of LSTM hidden layer activity using Principal Component Analysis (PCA) confirmed that stimulus representations undergo a representational transformation, consistent with a flip, while transitioning from the functional status of UMI to PMI. Demixed (d)PCA of the same data identified two representational trajectories, one each within a UMI subspace and a PMI subspace, both undergoing a reversal of stimulus coding axes. dPCA of data from an EEG dataset also provided evidence for priority-based transformations of the representational code, albeit with some differences. This type of transformation could allow for retention of unprioritized information in WM while preventing it from interfering with concurrent behavior. The results from this initial exploration suggest that the algorithmic details of how this transformation is carried out by RNNs, versus by the human brain, may differ.

7.
8.
Least-squares methods for blind source separation based on nonlinear PCA
In standard blind source separation, one tries to extract unknown source signals from their instantaneous linear mixtures by using a minimum of a priori information. We have recently shown that certain nonlinear extensions of principal component type neural algorithms can be successfully applied to this problem. In this paper, we show that a nonlinear PCA criterion can be minimized using least-squares approaches, leading to computationally efficient and fast converging algorithms. Several versions of this approach are developed and studied, some of which can be regarded as neural learning algorithms. A connection to the nonlinear PCA subspace rule is also shown. Experimental results are given, showing that the least-squares methods usually converge clearly faster than stochastic gradient algorithms in blind separation problems.
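A sketch of the stochastic-gradient nonlinear PCA subspace rule on a whitened two-source mixture; it is this slower baseline, not the paper's least-squares variants, that is shown. Source distributions (sub-Gaussian), nonlinearity, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
s = np.vstack([np.sign(rng.standard_normal(n)) * rng.uniform(0.5, 1, n),
               rng.uniform(-1, 1, n)])           # two sub-Gaussian sources
A = rng.standard_normal((2, 2))
x = (A @ s).T                                    # instantaneous linear mixtures

# whiten first, as is standard before nonlinear PCA separation
d, E = np.linalg.eigh(np.cov(x.T))
V = E @ np.diag(d ** -0.5) @ E.T
z = x @ V.T

W = np.linalg.qr(rng.standard_normal((2, 2)))[0] # orthogonal initialization
eta = 1e-3
for zt in z:
    y = W @ zt
    g = np.tanh(y)                               # odd nonlinearity
    e = zt - W.T @ g                             # nonlinear reconstruction error
    W += eta * np.outer(g, e)                    # nonlinear PCA subspace rule

Y = z @ W.T                                      # recovered sources
# abs correlations vs true sources: ideally close to a permutation of ones
print(np.abs(np.corrcoef(np.vstack([Y.T, s]))[:2, 2:]).round(2))
```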

9.
The recent explosion in procurement and availability of high-dimensional gene- and protein-expression profile datasets for cancer diagnostics has necessitated the development of sophisticated machine learning tools with which to analyze them. A major limitation in the ability to accurately classify these high-dimensional datasets stems from the 'curse of dimensionality', which occurs when the number of genes or peptides significantly exceeds the total number of patient samples. Previous attempts at dealing with this issue have mostly centered on the use of a dimensionality reduction (DR) scheme, Principal Component Analysis (PCA), to obtain a low-dimensional projection of the high-dimensional data. However, linear PCA and other linear DR methods, which rely on Euclidean distances to estimate object similarity, do not account for the inherent underlying nonlinear structure associated with most biomedical data. The motivation behind this work is to identify the appropriate DR methods for analysis of high-dimensional gene- and protein-expression studies. Towards this end, we empirically and rigorously compare three nonlinear (Isomap, Locally Linear Embedding, Laplacian Eigenmaps) and three linear DR schemes (PCA, Linear Discriminant Analysis, Multidimensional Scaling) with the intent of determining a reduced subspace representation in which the individual object classes are more easily discriminable.
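A sketch of such a comparison using scikit-learn's implementations of the six named DR schemes, with cross-validated k-NN accuracy in the embedded space as a stand-in measure of class discriminability; the digits dataset substitutes for the gene-/protein-expression cohorts.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import MDS, Isomap, LocallyLinearEmbedding, SpectralEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]                          # subsample to keep runtime modest

embedders = {
    "PCA": PCA(n_components=2),
    "LDA": LinearDiscriminantAnalysis(n_components=2),
    "MDS": MDS(n_components=2),
    "Isomap": Isomap(n_components=2),
    "LLE": LocallyLinearEmbedding(n_components=2),
    "LapEig": SpectralEmbedding(n_components=2), # Laplacian Eigenmaps
}
for name, emb in embedders.items():
    # LDA is supervised and needs the labels for fitting
    Z = emb.fit_transform(X, y) if name == "LDA" else emb.fit_transform(X)
    acc = cross_val_score(KNeighborsClassifier(), Z, y, cv=5).mean()
    print(f"{name}: kNN accuracy in 2-D embedding = {acc:.3f}")
```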

10.
Towards an artificial brain
M Conrad, R R Kampfner, K G Kirby, E N Rizki, G Schleis, R Smalz, R Trenary. Bio Systems, 1989, 23(2-3): 175-215; discussion 216-8
Three components of a brain model operating on neuromolecular computing principles are described. The first component comprises neurons whose input-output behavior is controlled by significant internal dynamics. Models of discrete enzymatic neurons, reaction-diffusion neurons operating on the basis of the cyclic nucleotide cascade, and neurons controlled by cytoskeletal dynamics are described. The second component of the model is an evolutionary learning algorithm which is used to mold the behavior of enzyme-driven neurons or small networks of these neurons for specific function, usually pattern recognition or target seeking tasks. The evolutionary learning algorithm may be interpreted either as representing the mechanism of variation and natural selection acting on a phylogenetic time scale, or as a conceivable ontogenetic adaptation mechanism. The third component of the model is a memory manipulation scheme, called the reference neuron scheme. In principle it is capable of orchestrating a repertoire of enzyme-driven neurons for coherent function. The existing implementations, however, utilize simple neurons without internal dynamics. Spatial navigation and simple game playing (using tic-tac-toe) provide the task environments that have been used to study the properties of the reference neuron model. A memory-based evolutionary learning algorithm has been developed that can assign credit to the individual neurons in a network. It has been run on standard benchmark tasks, and appears to be quite effective both for conventional neural nets and for networks of discrete enzymatic neurons. The models have the character of artificial worlds in that they map the hierarchy of processes in the brain (at the molecular, neuronal, and network levels), provide a task environment, and use this relatively self-contained setup to develop and evaluate learning and adaptation algorithms.

11.
In this paper we propose a computational model of bottom-up visual attention based on a pulsed principal component analysis (PCA) transform, which simply exploits the signs of the PCA coefficients to generate spatial and motional saliency. We further extend the pulsed PCA transform to a pulsed cosine transform that is not only data-independent but also very fast to compute. The proposed model has the following biological plausibilities. First, the PCA projection vectors in the model can be obtained by using the Hebbian rule in neural networks. Second, the outputs of the pulsed PCA transform, which are inherently binary, simulate the neuronal pulses in the human brain. Third, like many Fourier transform-based approaches, our model also accomplishes the cortical center-surround suppression in the frequency domain. Experimental results on psychophysical patterns and natural images show that the proposed model is more effective in saliency detection and predicts human eye fixations better than state-of-the-art attention models.
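A sketch of the data-independent pulsed cosine transform variant for spatial saliency: keep only the signs of the 2-D DCT coefficients (the binary "pulses"), invert, square, and smooth. The smoothing scale and test image are illustrative, and the motion channel is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def pct_saliency(img, sigma=3.0):
    """Pulsed cosine transform saliency: sign of DCT coefficients -> inverse
    transform -> smoothed energy map."""
    pulses = np.sign(dctn(img, norm="ortho"))   # binary +/-1 coefficients
    recon = idctn(pulses, norm="ortho")         # back to the image domain
    return gaussian_filter(recon ** 2, sigma)   # smoothed energy = saliency

rng = np.random.default_rng(5)
img = rng.random((128, 128))
img[40:60, 40:60] += 2.0                        # a salient patch
sal = pct_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))  # should peak near the patch
```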

12.
The goal of this study was to train an artificial neural network to generate accurate saccades in Listing's plane and then determine how the hidden units performed the visuomotor transformation. A three-layer neural network was successfully trained, using back-prop, to take in oculocentric retinal error vectors and three-dimensional eye orientation and to generate the correct head-centric motor error vector within Listing's plane. Analysis of the hidden layer of trained networks showed that explicit representations of desired target direction and eye orientation were not employed. Instead, the hidden-layer units consistently divided themselves into four parallel modules: a dominant "vector-propagation" class (approximately 50% of units) with similar visual and motor tuning but negligible position sensitivity and three classes with specific spatial relations between position, visual, and motor tuning. Surprisingly, the vector-propagation units, and only these, formed a highly precise and consistent orthogonal coordinate system aligned with Listing's plane. Selective "lesions" confirmed that the vector-propagation module provided the main drive for saccade magnitude and direction, whereas a balance between activity in the other modules was required for the correct eye-position modulation. Thus, contrary to popular expectation, error-driven learning in itself was sufficient to produce a "neural" algorithm with discrete functional modules and explicit coordinate systems, much like those observed in the real saccade generator.

13.
A recycling reactor system operated under sequential anoxic and oxic conditions was evaluated, in which the nutrients of piggery slurry were anaerobically and aerobically treated and a portion of the effluent was then recycled to the pigsty. The most dominant aerobic heterotrophs from the reactor were Alcaligenes faecalis (TSA-3), Brevundimonas diminuta (TSA-1) and Abiotrophia defectiva (TSA-2), in decreasing order, whereas lactic acid bacteria (LAB; MRS-1, etc.) were the most dominant in the anoxic tank. Here we have tried to model the nutrient removal process for each tank in the system based on the population densities of heterotrophs and LAB. Principal component analysis (PCA) was first applied to delineate the relationship between the input variables (population densities of heterotrophs and LAB, and treatment parameters such as suspended solids (SS), COD, NH4+-N, ortho-phosphorus, and total phosphorus) and the output. Multi-layer neural networks using an error back-propagation learning algorithm were then employed to model the nutrient removal process for each tank. PCA filtration of the microbial densities used as input data enhanced the generalization performance of the neural network, leading to a better prediction of the measured data. Neural networks independently trained for each treatment tank, combined with analysis of the subsequent tank data, allowed successful prediction of the treatment system for at least 2 days.
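A sketch of the input-filtering idea: standardize the predictors, PCA-filter them, and train a back-propagation network on the scores, here via scikit-learn. The variable count, data, and target are placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.standard_normal((300, 8))    # e.g. microbial densities, SS, COD, NH4+-N, P
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(300)  # stand-in removal rate

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),          # keep components explaining 95% of variance
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(f"R^2 on training data: {model.score(X, y):.3f}")
```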

14.
Principal component analysis (PCA) is a one-group method. Its purpose is to transform correlated variables into uncorrelated ones and to find linear combinations accounting for a relatively large amount of the total variability, thus reducing the number of original variables to a few components only.
In the simultaneous analysis of different groups, similarities between the principal component structures can often be modelled by the methods of common principal components (CPCs) or partial CPCs. These methods assume that either all components or only some of them are common to all groups, the discrepancies being due mainly to sampling error.
Previous authors have dealt with the k-group situation either by pooling the data of all groups, or by pooling the within-group variance-covariance matrices before performing a PCA. The latter technique is known as multiple group principal component analysis or MGPCA (Thorpe, 1983a). We argue that CPC- or partial CPC-analysis is often more appropriate than these previous methods.
A morphometrical example using males and females of Microtus californicus and M. ochrogaster is presented, comparing PCA, CPC and partial CPC analyses. It is shown that the new methods yield estimated components having smaller standard errors than when groupwise analyses are performed. Formulas are given for estimating standard errors of the eigenvalues and eigenvectors, as well as for computing the likelihood ratio statistic used to test the appropriateness of the CPC- or partial CPC-model.
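A rough numerical sketch of the CPC assumption: if all groups truly share their eigenvectors, the eigenvectors of the averaged covariance matrix diagonalize every group's matrix. Flury's maximum-likelihood FG algorithm is the proper CPC estimator; the average-based shortcut below is only an approximation for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
B = np.linalg.qr(rng.standard_normal((4, 4)))[0]      # true common axes
covs = [B @ np.diag(rng.uniform(0.5, 5, 4)) @ B.T     # same axes,
        for _ in range(3)]                            # different eigenvalues per group

avg = sum(covs) / len(covs)
_, V = np.linalg.eigh(avg)                            # approximate common components
for k, S in enumerate(covs):
    D = V.T @ S @ V                                   # rotate each group's covariance
    off = np.abs(D - np.diag(np.diag(D))).max()
    print(f"group {k}: max off-diagonal after rotation = {off:.2e}")  # ~ 0 under CPC
```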

15.
PCA (principal components analysis) and ANN (artificial neural networks) are two broadly used pattern recognition methods in metabolomics data mining, yet their limitations are sometimes great obstacles for researchers. In this paper the wavelet transform (WT) method was integrated with PCA and ANN to improve their performance in handling metabolomics data. A dataset was decomposed by wavelets and then reconstructed. The "hard thresholding" algorithm was used, through which the detail information was discarded and the entire "metabolomics image" was reconstructed from the significant information. It was assumed that the most relevant information was captured by this process. It was found that, owing to its ability to denoise data, the WT method could significantly improve the performance of the non-linear essence-extracting method ANN in classifying samples; further integration of WT with PCA showed that WT could greatly enhance the ability of PCA to distinguish one group of samples from another, and also its ability to identify potential biomarkers. The results highlight WT as a promising approach to bridging the gap between huge volumes of data and instructive biological information.
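A sketch of the WT-PCA integration, assuming PyWavelets: decompose each sample's profile, hard-threshold the detail coefficients, reconstruct, then run PCA on the denoised matrix. The wavelet, decomposition level, and threshold value are illustrative choices.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
X = rng.standard_normal((40, 256))                 # spectra-like metabolomics rows
X[:20] += np.sin(np.linspace(0, 6, 256))           # class-1 samples carry a signal

def wt_denoise(row, wavelet="db4", level=4, thr=0.5):
    """Wavelet decomposition, hard thresholding of details, reconstruction."""
    coeffs = pywt.wavedec(row, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(row)]

Xd = np.array([wt_denoise(r) for r in X])
scores = PCA(n_components=2).fit_transform(Xd)     # groups separate more cleanly
print(scores[:20, 0].mean(), scores[20:, 0].mean())
```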

16.
Standardization by norms of sample position vectors as well as by sample totals has been used frequently in vegetation ecology. Both standardizations are only special cases of the Generalized Standardization Procedure (GSP) described in this paper. The general procedure allows a wide choice of data transformations simply by varying values of a single standardization parameter. Principal Components Analysis (PCA) often involutes opposite ends of a coenospace, producing results that may be difficult to interpret. Experiments with simulated as well as field data sets revealed that involuted gradients can be unfolded if GSP is applied prior to PCA. Compared to Correspondence Analysis, GSP-PCA was superior in recovering the structure of analysed coenoclines.

17.
Inspired by the coarse-to-fine visual perception process of the human vision system, a new approach based on Gaussian multi-scale space for defect detection of industrial products was proposed. By selecting different scale parameters of the Gaussian kernel, a multi-scale representation of the original image data could be obtained and used to constitute a multivariate image, in which each channel represents a perceptual observation of the original image at a different scale. Multivariate Image Analysis (MIA) techniques were used to extract defect feature information. MIA was combined with Principal Component Analysis (PCA) to obtain the principal component scores of the multivariate test image. The Q-statistic image, derived from the residuals after extraction of the first principal component score and noise, could be used to efficiently reveal surface defects with an appropriate threshold value determined from training images. Experimental results show that the proposed method performs better than the gray-histogram-based method: it is less sensitive to inhomogeneous illumination and provides more robust and reliable defect detection with a lower false-reject rate.
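A sketch of this pipeline under stated assumptions: Gaussian blurs at several scales form the channels of the multivariate image, PCA extracts the first component over pixels, and the residual Q-statistic flags defect pixels. The threshold here is ad hoc rather than learned from training images, and the test image is synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
img = rng.normal(1.0, 0.02, (100, 100))
img[50:55, 50:55] -= 0.3                              # a small surface defect

scales = [1, 2, 4, 8]
mvi = np.stack([gaussian_filter(img, s) for s in scales], axis=-1)
Xpix = mvi.reshape(-1, len(scales))                   # pixels x channels

pca = PCA(n_components=1).fit(Xpix)
resid = Xpix - pca.inverse_transform(pca.transform(Xpix))
Q = (resid ** 2).sum(axis=1).reshape(img.shape)       # Q-statistic image
defect_mask = Q > Q.mean() + 6 * Q.std()              # ad hoc threshold
print(defect_mask.sum(), "pixels flagged")
```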

18.
An improved method for deconvoluting complex spectral maps from bidimensional fluorescence monitoring is presented, relying on a combination of principal component analysis (PCA) and feedforward artificial neural networks (ANN). With the aim of reducing ANN complexity, spectral maps are first subjected to PCA, and the scores of the retained principal components are subsequently used as the ANN input vector. The method is presented using the case study of an extractive membrane biofilm reactor, where fluorescence maps of a membrane-attached biofilm, collected under different reactor operating conditions, were analysed. During ANN training, the spectral information is associated with process performance indicators. Originally, 231 excitation/emission pairs per fluorescence map were used as the ANN input vector. Using PCA, each fluorescence map could be represented by at most six principal components, thereby capturing 99.5% of its variance. As a result, the dimension of the ANN input vector, and hence the complexity of the artificial neural network, was significantly reduced, and ANN training speed was increased. Correlations between principal components and ANN-predicted process performance parameters were good, with correlation coefficients of the order of 0.7 or higher.

19.
Skjaerven L, Martinez A, Reuter N. Proteins, 2011, 79(1): 232-243
Principal component analysis (PCA) and normal mode analysis (NMA) have emerged as two invaluable tools for studying conformational changes in proteins. To compare these approaches for studying protein dynamics, we have used a subunit of the GroEL chaperone, whose dynamics is well characterized. We first show that both PCA on trajectories from molecular dynamics (MD) simulations and NMA reveal a general dynamical behavior in agreement with what has previously been described for GroEL. We thus compare the reproducibility of PCA on independent MD runs and subsequently investigate the influence of the length of the MD simulations. We show that there is a relatively poor one-to-one correspondence between eigenvectors obtained from two independent runs and conclude that caution should be taken when analyzing principal components individually. We also observe that increasing the simulation length does not improve the agreement with the experimental structural difference. In fact, relatively short MD simulations are sufficient for this purpose. We observe a rapid convergence of the eigenvectors (after ca. 6 ns). Although there is not always a clear one-to-one correspondence, there is a qualitatively good agreement between the movements described by the first five modes obtained with the three different approaches; PCA, all-atoms NMA, and coarse-grained NMA. It is particularly interesting to relate this to the computational cost of the three methods. The results we obtain on the GroEL subunit contribute to the generalization of robust and reproducible strategies for the study of protein dynamics, using either NMA or PCA of trajectories from MD simulations.
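A sketch of the kind of reproducibility check described above: compare the top principal axes from two independent trajectories via their absolute inner products and a root mean square inner product (RMSIP). The synthetic "trajectories" stand in for MD coordinate frames (frames x 3N coordinates); real data would be aligned structures.

```python
import numpy as np

rng = np.random.default_rng(10)
traj1 = rng.standard_normal((1000, 90))    # frames x (3N atomic coordinates)
traj2 = rng.standard_normal((1000, 90))    # an independent run

def pca_axes(traj, k=5):
    """Top-k principal axes of a centered trajectory via SVD."""
    X = traj - traj.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]

V1, V2 = pca_axes(traj1), pca_axes(traj2)
print(np.abs(V1 @ V2.T).round(2))          # one-to-one correspondence matrix
rmsip = np.sqrt(((V1 @ V2.T) ** 2).sum() / 5)
print(f"RMSIP over the first 5 modes: {rmsip:.2f}")   # subspace-level agreement
```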

20.
The vestibulo-ocular reflex (VOR) is capable of producing compensatory eye movements in three dimensions. It utilizes the head rotational velocity signals from the semicircular canals to control the contractions of the extraocular muscles. Since canal and muscle coordinate frames are not orthogonal and differ from one another, a sensorimotor transformation must be produced by the VOR neural network. Tensor theory has been used to construct a linear transformation that can model the three-dimensional behavior of the VOR. But tensor theory does not take the distributed, redundant nature of the VOR neural network into account. It suggests that the neurons subserving the VOR, such as vestibular nucleus neurons, should have specific sensitivity-vectors. Actual data, however, are not in accord. Data from the cat show that the sensitivity-vectors of vestibular nucleus neurons, rather than aligning with any specific vectors, are dispersed widely. As an alternative to tensor theory, we modeled the vertical VOR as a three-layered neural network programmed using the back-propagation learning algorithm. Units in mature networks had divergent sensitivity-vectors which resembled those of actual vestibular nucleus neurons in the cat. This similarity suggests that the VOR sensorimotor transformation may be represented redundantly rather than uniquely. The results demonstrate how vestibular nucleus neurons can encode the VOR sensorimotor transformation in a distributed manner.
