Similar Documents
 Found 20 similar documents (search time: 15 ms)
1.
Stochastic ICA contrast maximisation using Oja's nonlinear PCA algorithm   (cited by 1: 0 self-citations, 1 other)
Independent Component Analysis (ICA) is an important extension of linear Principal Component Analysis (PCA). PCA performs a data transformation to provide independence to second order, that is, decorrelation. ICA transforms data to provide approximate independence up to and beyond second order, yielding transformed data with fully factorable probability densities. The linear ICA transformation has been applied to the classical statistical signal-processing problem of Blind Separation of Sources (BSS), that is, separating unknown original source signals from a mixture whose mode of mixing is undetermined. In this paper it is shown that Oja's nonlinear PCA algorithm performs a general stochastic online adaptive ICA. This analysis is corroborated with three simulations. The first separates unknown mixtures of natural images, which have sub-Gaussian densities; the second separates linear mixtures of natural speech, whose densities are super-Gaussian; finally, unknown mixtures of images with both sub- and super-Gaussian densities are separated.
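The stochastic online rule this abstract refers to can be sketched as follows: a minimal NumPy implementation of the nonlinear PCA subspace rule applied to whitened mixtures of two sub-Gaussian (uniform) toy sources. The mixing matrix, step size, and tanh nonlinearity are illustrative choices, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
S = rng.uniform(-1, 1, size=(2, n))          # sub-Gaussian toy sources
A = np.array([[1.0, 0.6], [0.5, 1.0]])       # illustrative mixing matrix
X = A @ S

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Oja's nonlinear PCA subspace rule: W <- W + mu * g(y) (x - W^T g(y))^T
W = np.eye(2)
mu = 0.02
for epoch in range(5):
    for k in range(n):
        x = Z[:, k]
        g = np.tanh(W @ x)
        W += mu * np.outer(g, x - W.T @ g)
    mu *= 0.7                                 # anneal the step size

Y = W @ Z                                     # separated outputs
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.max(axis=1))                       # each output tracks one source
```

With whitened data and a tanh nonlinearity this rule separates sub-Gaussian sources, consistent with the image-separation experiment described above.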

2.
3.
Nonlinear blind signal separation is an important but rather difficult problem. Any general nonlinear independent component analysis algorithm for such a problem should specify which solution it tries to find. Several recent neural networks for separating post-nonlinear blind mixtures are limited to diagonal nonlinearities, with no cross-channel nonlinearity. In this paper, a new semi-parametric hybrid neural network is proposed to separate post-nonlinearly mixed blind signals in which cross-channel disturbance is included. This hybrid network consists of two cascaded modules: a neural nonlinear module that approximates the post-nonlinearity, and a linear module that separates the predicted linear blind mixtures. The nonlinear module is a semi-parametric expansion made up of two sub-networks, one a linear model and the other a three-layer perceptron. Together these two sub-networks produce a "weak" nonlinear operator that can approximate relatively strong nonlinearities by tuning parameters. A batch learning algorithm based on entropy maximization and gradient descent is derived. The model is successfully applied to a blind signal separation problem with two sources. Our simulation results indicate that this hybrid model can effectively approximate the cross-channel post-nonlinearity and achieve good visual quality as well as a high signal-to-noise ratio in some cases.

4.
Principal Component Analysis (PCA) is a classical technique in statistical data analysis, feature extraction and data reduction, aiming at explaining observed signals as a linear combination of orthogonal principal components. Independent Component Analysis (ICA) is a technique of array processing and data analysis, aiming at recovering unobserved signals or 'sources' from observed mixtures, exploiting only the assumption of mutual independence between the signals. The separation of sources by ICA has great potential in applications such as the separation of sound signals (for example, voices mixed in simultaneous multiple recordings), in telecommunications, or in the treatment of medical signals. However, ICA is not yet often used by statisticians. In this paper, we present ICA in a statistical framework and compare this method with PCA for electroencephalogram (EEG) analysis. We show that ICA provides a more useful data representation than PCA, for instance for representing a particular characteristic of the EEG called the event-related potential (ERP).
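The PCA-versus-ICA contrast described here can be illustrated on toy data with scikit-learn (not the paper's EEG recordings): PCA only decorrelates the mixed channels, while FastICA recovers the underlying sources.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 4000)
s1 = np.sin(2 * np.pi * t)                    # smooth oscillation
s2 = np.sign(np.sin(3 * np.pi * t))           # square wave
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((4000, 2))
A = np.array([[1.0, 0.5], [0.7, 1.0]])        # non-orthogonal mixing
X = S @ A.T                                   # observed "channels"

Y_pca = PCA(n_components=2).fit_transform(X)
Y_ica = FastICA(n_components=2, random_state=0).fit_transform(X)

def best_match(Y, S):
    # best absolute correlation of each estimated component with any source
    C = np.abs(np.corrcoef(Y.T, S.T)[:2, 2:])
    return C.max(axis=1)

print(best_match(Y_pca, S))   # decorrelated, but generally still mixed
print(best_match(Y_ica, S))   # close to 1: sources recovered
```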

5.
Least-squares methods for blind source separation based on nonlinear PCA   (cited by 2: 0 self-citations, 2 others)
In standard blind source separation, one tries to extract unknown source signals from their instantaneous linear mixtures by using a minimum of a priori information. We have recently shown that certain nonlinear extensions of principal component type neural algorithms can be successfully applied to this problem. In this paper, we show that a nonlinear PCA criterion can be minimized using least-squares approaches, leading to computationally efficient and fast converging algorithms. Several versions of this approach are developed and studied, some of which can be regarded as neural learning algorithms. A connection to the nonlinear PCA subspace rule is also shown. Experimental results are given, showing that the least-squares methods usually converge clearly faster than stochastic gradient algorithms in blind separation problems.

6.

Background  

An alternative to standard approaches for uncovering biologically meaningful structure in microarray data is to treat the data as a blind source separation (BSS) problem. BSS attempts to separate a mixture of signals into its underlying sources, i.e., the problem of recovering signals from several observed linear mixtures. In the context of microarray data, "sources" may correspond to specific cellular responses or to co-regulated genes.

7.
A new method for blind separation of multi-channel EEG signals is proposed, combining the wavelet transform with independent component analysis (ICA). Exploiting the noise-filtering effect of the wavelet transform, part of the high-frequency noise mixed into the raw EEG is filtered out, and the reconstructed EEG is then used as the input to ICA, effectively overcoming the inability of existing ICA algorithms to distinguish noise. Experimental results show that the method is very effective for blind separation of multi-channel EEG.
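The denoise-then-separate pipeline can be sketched in a few lines: a single-level Haar wavelet transform (hand-rolled in NumPy as a minimal stand-in for the paper's wavelet stage) discards high-frequency detail coefficients before the channels are passed to an ICA step. The signals and mixing matrix are toy examples.

```python
import numpy as np
from sklearn.decomposition import FastICA

def haar_denoise(x):
    # One-level Haar DWT: approximation + detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    d[:] = 0.0                      # discard high-frequency detail (noise)
    # Inverse transform back to the signal domain.
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
t = np.linspace(0, 4, 4096)
clean = np.vstack([np.sin(2 * np.pi * 3 * t),
                   np.sign(np.sin(2 * np.pi * 2 * t))])
A = np.array([[1.0, 0.4], [0.3, 1.0]])
noisy = A @ clean + 0.3 * rng.standard_normal((2, t.size))

# Wavelet denoising first, then ICA on the reconstructed channels.
denoised = np.vstack([haar_denoise(ch) for ch in noisy])
Y = FastICA(n_components=2, random_state=0).fit_transform(denoised.T)
```

Zeroing the detail band removes roughly half the white-noise energy while leaving the slowly varying EEG-like content in the approximation band.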

8.

Background

The electrocardiogram (ECG) is a diagnostic tool that records the electrical activity of the heart and depicts it as a series of graph-like tracings, or waves. Being able to interpret these details allows diagnosis of a wide range of heart problems. Fetal electrocardiogram (FECG) extraction has an important impact on medical diagnostics during pregnancy. Since the observed FECG signals are often mixed with the maternal ECG (MECG) and with noise induced by electrode movement or by the mother's motion, separating the ECG signal sources from the observed data becomes quite complicated. One source of complexity is that the ECG sources may be dependent; in this paper we therefore introduce a new approach to blind source separation (BSS) in the noisy context for both independent and dependent ECG signal sources. This approach consists of denoising the observed ECG signals using a bilateral total variation (BTV) filter, then minimizing the Kullback-Leibler divergence between copula densities to separate the FECG signal from the MECG.

Results

We present simulation results illustrating the performance of the proposed method, considering several examples of independent and dependent source component signals. The results are compared with those of the classical method, independent component analysis (ICA), under the same conditions. The accuracy of source estimation is evaluated through the signal-to-noise ratio (SNR). The first experiment shows that the proposed method gives accurate estimation of sources in the standard case of independent components, with performance around 27 dB in terms of SNR. In the second experiment, we show that the proposed algorithm successfully separates two noisy mixtures of dependent source components, a case in which the classical criterion devoted to independent sources fails, demonstrating that our method handles the dependent case with good performance.

Conclusions

In this work, we focus specifically on the separation of ECG signal sources taken from two skin electrodes located on a pregnant woman's body. The ECG separation is interpreted as a noisy linear BSS problem with instantaneous mixtures. First, a denoising step reduces the noise due to motion artifacts using a BTV filter, a very effective one-pass denoising filter. Then, we use the Kullback-Leibler divergence between copula densities to separate the fetal heart signal from the maternal one, in both the independent and the dependent case.

9.
Human muscle activity can be assessed with surface electromyography (SEMG). Depending on electrode location and size, the recording volume under the sensor is likely to pick up electrical potentials emanating from muscles other than the muscle of interest. This crosstalk issue makes interpretation of SEMG data difficult. The purpose of this paper is to study a crosstalk reduction technique called blind source separation (BSS). Most straightforward separation techniques rely on linearity and instantaneity (LI) of the signal mixtures on the sensors, and the literature on BSS for SEMG often assumes a linear and instantaneous mixing model. Using simulated SEMG mixtures and real SEMG recordings on the human extensor indicis (EI) and extensor digiti minimi (EDM) muscles, during a task consisting of selective successive activations of the EI and EDM muscles, cross-correlation between the sensors was shown to depend directly on the instantaneity of the sources. Testing the instantaneity hypothesis on real SEMG recordings showed that it is very sensitive to electrode location along the fiber direction, and the separation gains obtained with the JADE BSS algorithm depend strongly on this hypothesis. Using LI BSS on SEMG therefore requires great attention to electrode positioning; we provide a tool to test these hypotheses on the EI/EDM muscles.

10.
Blind separation of fMRI signals using spatial independent component analysis   (cited by 7: 1 self-citation, 6 others)
The application of independent component analysis (ICA) to functional magnetic resonance imaging (fMRI) has attracted much attention in recent years. This paper briefly introduces the model and method of spatial independent component analysis (SICA), treats fMRI signal analysis as a blind source separation problem, and uses a fast algorithm to achieve blind separation of fMRI signals. Most fMRI studies assume that the event-related time course is known and use correlation analysis to locate activated brain regions. When it is unclear which factors contribute to the fMRI signal, and their time courses are unknown, SICA can blindly separate the fMRI signals, extracting independent components such as the task-related component, the head-motion component, the transient task-related component, noise, and other source signals that give rise to the fMRI signal.

11.
In this paper the problem of unistage selection with inequality constraints is formulated. If the predictor and criterion variables are all normally distributed, this problem can be written as a convex programming problem with a linear objective function, linear constraints, and one quadratic constraint. Using duality theory for convex nonlinear programming, it is proved that its dual can be transformed into a convex minimization problem with non-negativity conditions, for which good computational methods are known. With the help of the dual problem, sufficient conditions for a solution of the original primal problem are derived and illustrated by an example of practical interest.
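A problem of the shape described here (linear objective, linear constraints, one quadratic constraint, non-negativity) can be sketched directly with SciPy's SLSQP solver; the numbers below are made up for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize c^T x  subject to  a^T x <= b,  x >= 0,  x^T x <= r
c = np.array([1.0, 2.0])
a = np.array([1.0, 1.0]); b = 10.0
r = 1.0

res = minimize(
    lambda x: -c @ x,                         # SciPy minimizes, hence the sign
    x0=np.array([0.1, 0.1]),
    method="SLSQP",
    bounds=[(0.0, None)] * 2,
    constraints=[
        {"type": "ineq", "fun": lambda x: b - a @ x},    # linear constraint
        {"type": "ineq", "fun": lambda x: r - x @ x},    # quadratic constraint
    ],
)
# For this toy instance the quadratic constraint is the active one,
# so the optimum is x = c / ||c|| with objective value sqrt(5).
print(res.x, -res.fun)
```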

12.
An iterative algorithm for independent component analysis and experimental results   (cited by 9: 0 self-citations, 9 others)
This paper presents an independent component analysis method for blind source separation. Based on information-theoretic principles, an objective function measuring the statistical independence of the output components is given; optimizing this objective function yields an iterative algorithm for independent component analysis. Compared with most other ICA methods, the advantage of this algorithm is that it does not need to compute higher-order statistics of the signals during the iterations and converges quickly. Computer simulations and experiments on EEG and other signals demonstrate the effectiveness of the algorithm.
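An iterative ICA update of the kind described (information-theoretic objective, no explicit higher-order statistics, only a componentwise nonlinearity) can be sketched with the standard natural-gradient infomax rule on toy super-Gaussian sources; this is a well-known rule of the same family, not necessarily the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
S = rng.laplace(size=(2, n))                  # super-Gaussian toy sources
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = A @ S

# Whiten for faster convergence.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Natural-gradient infomax: W <- W + mu * (I - E[tanh(y) y^T]) W
W = np.eye(2)
mu = 0.1
for _ in range(500):
    Y = W @ Z
    W += mu * (np.eye(2) - np.tanh(Y) @ Y.T / n) @ W

Y = W @ Z
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.max(axis=1))                       # each output tracks one source
```

Only the bounded nonlinearity tanh enters the update; no kurtosis or other higher-order moments are computed explicitly.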

13.
14.
This paper introduces an Independent Component Analysis (ICA) approach to the separation of nonlinear mixtures in the complex domain. Source separation is performed by a complex INFOMAX approach. The neural network that realizes the separation employs the so-called "Mirror Model" and is based on adaptive activation functions whose shape is modified during learning. Nonlinear functions involved in the processing of complex signals are realized by pairs of spline neurons called "splitting functions", working on the real and imaginary parts of the signal respectively. A theoretical proof of existence and uniqueness of the solution under proper assumptions is also provided. In particular, a simple adaptation algorithm is derived, and experimental results demonstrating the effectiveness of the proposed solution are shown.

15.
Flux balance analysis (FBA) has been shown to be a very effective tool for interpreting and predicting the metabolism of various microorganisms when the set of available measurements is not sufficient to determine the fluxes within the cell. In this methodology, an underdetermined stoichiometric model is solved using a linear programming (LP) approach. The predictions of FBA models can be improved if noisy measurements are checked for consistency and these, in turn, are used to estimate model parameters. In this work, a formal methodology for data reconciliation and parameter estimation with underdetermined stoichiometric models is developed and assessed. The procedure is formulated as a nonlinear optimization problem, where the LP is transformed into a set of nonlinear constraints. However, some of these constraints violate standard regularity conditions, making direct numerical solution very difficult. Hence, a barrier formulation is used to represent these constraints, and an iterative procedure is defined that allows solving the problem to the desired degree of convergence. This methodology is assessed using a stoichiometric yeast model. The procedure is used for data reconciliation, where more reliable estimates of noisy measurements are computed. Assuming unknown biomass composition, the procedure is also applied to simultaneous data reconciliation and biomass composition estimation. In both cases it is verified that the number of measurements required to obtain unbiased and reliable estimates is reduced if the LP approach is included as additional constraints in the optimization.
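The LP at the core of FBA can be sketched with `scipy.optimize.linprog` on a toy three-reaction network (made-up stoichiometry, not the paper's yeast model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 uptakes A, v2 converts A -> B, v3 drains B as biomass.
# Steady state S v = 0 for the internal metabolites A and B.
S = np.array([[1.0, -1.0, 0.0],    # A:  +v1 - v2 = 0
              [0.0, 1.0, -1.0]])   # B:  +v2 - v3 = 0
bounds = [(0, 10), (0, 10), (0, 10)]          # flux capacity limits

# Maximize the biomass flux v3 (linprog minimizes, hence the sign flip).
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)    # -> [10. 10. 10.]
```

The steady-state equalities force v1 = v2 = v3, so the optimum simply saturates the uptake limit; in a realistic underdetermined model the LP picks one of many flux distributions consistent with the constraints.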

16.
We present a general quantitative genetic model for the evolution of reaction norms. This model goes beyond previous models by simultaneously permitting any shaped reaction norm and allowing for the imposition of genetic constraints. Earlier models are shown to be special cases of our general model; we discuss in detail models involving just two macroenvironments, linear reaction norms, and quadratic reaction norms. The model predicts that, for the case of a temporally varying environment, a population will converge on (1) the genotype with the maximum geometric mean fitness over all environments, (2) a linear reaction norm whose slope is proportional to the covariance between the environment of development and the environment of selection, and (3) a linear reaction norm even if nonlinear reaction norms are possible. An examination of experimental studies finds some limited support for these predictions. We discuss the limitations of our model and the need for more realistic gametic models and additional data on the genetic and developmental bases of plasticity.

17.
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition 'dipolarity' defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison).
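Pairwise mutual information (PMI) and the reduction a decomposition achieves can be estimated from joint histograms; the following toy sketch compares the PMI between mixed channels and between FastICA components (a plug-in estimator on synthetic data, not the study's EEG estimator).

```python
import numpy as np
from sklearn.decomposition import FastICA

def hist_mi(x, y, bins=16):
    # Plug-in mutual information estimate from a 2-D histogram (in nats).
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
S = np.c_[rng.uniform(-1, 1, 50000), rng.laplace(size=50000)]
X = S @ np.array([[1.0, 0.8], [0.6, 1.0]]).T     # strongly mixed "channels"
Y = FastICA(n_components=2, random_state=0).fit_transform(X)

mi_mixed = hist_mi(X[:, 0], X[:, 1])
mi_ica = hist_mi(Y[:, 0], Y[:, 1])
print(mi_mixed, mi_ica)    # ICA removes most of the pairwise MI
```

The difference mi_mixed - mi_ica is the mutual information reduction (MIR) for this single pair, the pairwise analogue of the criterion used in the comparison above.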

18.
Dependence of epidemic and population velocities on basic parameters   (cited by 11: 1 self-citation, 10 others)
This paper describes the use of linear deterministic models for examining the spread of population processes, discussing their advantages and limitations. Their main advantages are that their assumptions are relatively transparent and that they are easy to analyze, yet they generally give the same velocity as more complex linear stochastic and nonlinear deterministic models. Their simplicity, especially if we use the elegant reproduction and dispersal kernel formulation of Diekmann and van den Bosch et al., allows us greater freedom to choose a biologically realistic model and greatly facilitates examination of the dependence of conclusions on model components and of how these are incorporated into the model and fitted from data. This is illustrated by consideration of a range of examples, including both diffusion and dispersal models and by discussion of their application to both epidemic and population dynamic problems. A general limitation on fitting models results from the poor accuracy of most ecological data, especially on dispersal distances. Confirmation of a model is thus rarely as convincing as those cases where we can clearly reject one. We also need to be aware that linear models provide only an upper bound for the velocity of more realistic nonlinear stochastic models and are almost wholly inadequate when it comes to modeling more complex aspects such as the transition to endemicity and endemic patterns. These limitations are, however, to a great extent shared by linear stochastic and nonlinear deterministic models.
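In the reproduction-and-dispersal-kernel formulation mentioned above, the asymptotic spread velocity of a linear model is c* = min_{s>0} (1/s) ln(R0 M(s)), where M is the moment generating function of the dispersal kernel. For a Gaussian kernel this has the closed form sigma * sqrt(2 ln R0), which a numerical minimization can confirm (illustrative parameter values, not fitted to data):

```python
import numpy as np
from scipy.optimize import minimize_scalar

R0, sigma = 4.0, 2.0     # illustrative reproduction number, dispersal scale

def c_of_s(s):
    # (1/s) * ln(R0 * M(s)) with M(s) = exp(sigma^2 s^2 / 2) (Gaussian kernel)
    return (np.log(R0) + 0.5 * sigma**2 * s**2) / s

numeric = minimize_scalar(c_of_s, bounds=(1e-6, 10.0), method="bounded").fun
analytic = sigma * np.sqrt(2 * np.log(R0))
print(numeric, analytic)    # the two agree
```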

19.
Denitrification and its regulating factors are of great importance to aquatic ecosystems, as denitrification is a critical process for nitrogen removal. Additionally, a by-product of denitrification, nitrous oxide, is a much more potent greenhouse gas than carbon dioxide. However, the estimation of denitrification rates is usually clouded with uncertainty, mainly due to high spatial and temporal variation and to complex regulating factors within wetlands. This also hampers the development of general mechanistic models for denitrification, as most previously developed models were empirical or exhibited low predictive power despite numerous assumptions. In this study, we tested an Artificial Neural Network (ANN) as an alternative to classic empirical models for simulating denitrification rates in wetlands. The ANN, multiple linear regression (MLR) with two different methods, and simplified mechanistic models were applied to estimate the denitrification rates from two years of observations in a mesocosm-scale constructed wetland system. MLR and the simplified mechanistic models resulted in lower predictive power and higher residuals than the ANN. Although the stepwise linear regression model estimated similar average denitrification rates, it could not capture the fluctuation patterns accurately. In contrast, the ANN model achieved fairly high predictability, with an R2 of 0.78 for model validation and 0.93 for model calibration (training), together with a low root mean square error (RMSE) and low bias, indicating a high capacity to simulate the dynamics of denitrification. A sensitivity analysis of the ANN explained the nonlinear relationships between the input variables and denitrification rates well. In addition, we found that water temperature, denitrifying enzyme activity (DEA), and dissolved oxygen (DO) accounted for 70% of the variation in denitrification rates. Our results suggest that the ANN developed in this study performs better at simulating variations in denitrification rates than multivariate linear regressions or a simplified nonlinear mechanistic model.
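The ANN-versus-MLR comparison can be sketched on synthetic data with scikit-learn (not the wetland measurements): a small MLP captures a nonlinear response that a linear regression cannot. The three input columns stand in for predictors like temperature, DEA, and DO; the response function is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 3))        # hypothetical predictors
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] \
    + 0.1 * rng.standard_normal(2000)         # nonlinear synthetic response

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
lin = LinearRegression().fit(Xtr, ytr)
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(Xtr, ytr)
print(lin.score(Xte, yte), mlp.score(Xte, yte))   # MLP R^2 clearly higher
```

The linear model cannot represent the sin and squared terms, which is the same qualitative gap the abstract reports between MLR and the ANN.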

20.
Simultaneous inference in general parametric models   (cited by 6: 0 self-citations, 6 others)
Simultaneous inference is a common problem in many areas of application. If multiple null hypotheses are tested simultaneously, the probability of erroneously rejecting at least one of them increases beyond the pre-specified significance level. Simultaneous inference procedures that adjust for multiplicity, and thus control the overall type I error rate, have to be used. In this paper we describe simultaneous inference procedures in general parametric models, where the experimental questions are specified through a linear combination of elemental model parameters. The framework described here is quite general and extends the canonical theory of multiple comparison procedures in ANOVA models to linear regression problems, generalized linear models, linear mixed effects models, the Cox model, robust linear models, etc. Several examples using a variety of different statistical models illustrate the breadth of the results. For the analyses we use the R add-on package multcomp, which provides a convenient interface to the general approach adopted here.
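A multiplicity adjustment of the kind these procedures perform can be sketched in NumPy with the Holm step-down method, a simple member of the family; the paper's multcomp/max-t approach additionally exploits the correlation structure of the parameter estimates, which this sketch ignores.

```python
import numpy as np

def holm(pvals):
    # Holm step-down adjusted p-values: sort ascending, multiply the i-th
    # smallest p-value by (m - i + 1), enforce monotonicity, clip at 1.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    raw = (m - np.arange(m)) * p[order]
    adj_sorted = np.minimum(np.maximum.accumulate(raw), 1.0)
    adj = np.empty(m)
    adj[order] = adj_sorted          # map back to the original order
    return adj

pvals = [0.01, 0.04, 0.03, 0.005]
print(holm(pvals))                   # adjusted p-values, each >= the raw ones
```

Rejecting hypotheses whose adjusted p-value falls below alpha controls the family-wise error rate, the same overall type I error criterion discussed above.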
