Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
In this paper, we address the important problem of feature selection for a P300-based brain-computer interface (BCI) speller system in several aspects. Firstly, time segment selection and electroencephalogram channel selection are jointly performed for better discriminability between P300 and background signals. Secondly, since labeled training data are often insufficient, we propose an iterative semi-supervised support vector machine for joint spatio-temporal feature selection and classification, in which both labeled training data and unlabeled test data are utilized. More importantly, the semi-supervised learning enables the system to adapt over time. The performance of our algorithm has been evaluated on a P300 dataset provided by BCI Competition 2005 and on another dataset collected from an in-house P300 speller system. The results show that our algorithm for joint feature selection and classification achieves satisfactory performance while significantly reducing the training effort of the system. Furthermore, the algorithm has been implemented online, and the corresponding results demonstrate that it can improve the adaptivity of the P300-based BCI speller.
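The iterative semi-supervised loop described above can be sketched as self-training: train on the labeled trials, pseudo-label the most confident unlabeled trials, and retrain. The sketch below is illustrative only; it substitutes a nearest-centroid classifier for the paper's SVM, and all names and the confidence heuristic are our own assumptions.

```python
import numpy as np

def self_training(X_lab, y_lab, X_unlab, n_iter=10, frac=0.2):
    """Illustrative self-training loop (nearest-centroid stand-in for an SVM).
    Each round, the most confident unlabeled trials are pseudo-labeled and
    moved into the training set."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(n_iter):
        if len(pool) == 0:
            break
        cents = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(pool[:, None, :] - cents[None], axis=2)
        pred = d.argmin(axis=1)
        conf = np.abs(d[:, 0] - d[:, 1])          # margin-like confidence
        k = max(1, int(frac * len(pool)))
        top = np.argsort(conf)[-k:]               # most confident trials
        X = np.vstack([X, pool[top]])
        y = np.concatenate([y, pred[top]])
        pool = np.delete(pool, top, axis=0)
    cents = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
    return lambda Z: np.linalg.norm(Z[:, None, :] - cents[None], axis=2).argmin(axis=1)
```

With well-separated synthetic data and only three labeled trials per class, the loop recovers an accurate classifier from the unlabeled pool.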

2.
《IRBM》2022,43(3):198-209
Background: Frequency band optimization improves the performance of common spatial patterns (CSP) in motor imagery (MI) task classification because MI-related electroencephalograms (EEGs) are highly frequency specific. Many variants of the CSP algorithm divide the EEG into various sub-bands before applying CSP. However, the feature dimension of MI-EEG data increases with the number of frequency sub-bands, which calls for efficient feature selection algorithms. The performance of CSP also depends on the filtering technique. Method: In this study, we designed a dual-tree complex wavelet transform based filter bank to decompose the EEG into sub-bands, instead of traditional filtering methods, which improved the efficiency of spatial feature extraction. After filtering the EEG into sub-bands, we extracted spatial features from each sub-band using CSP and optimized them with a proposed supervised learning framework based on neighbourhood component analysis (NCA). A support vector machine (SVM) was then trained to perform classification. Results: An experimental study, conducted on two datasets (BCI Competition IV Dataset 2b and BCI Competition III Dataset IIIa), validated the MI classification effectiveness of the proposed method in comparison with standard algorithms such as CSP, Filter Bank CSP (FBCSP), and Discriminative FBCSP (DFBCSP). The average classification accuracies obtained by the proposed method are 84.02 ± 12.2% for BCI Competition IV Dataset 2b and 89.1 ± 7.50% for BCI Competition III Dataset IIIa, and were found to be significantly higher than those achieved by the standard methods. Conclusion: These superior results suggest that the proposed algorithm can improve the performance of MI-based brain-computer interface devices.
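The CSP step at the core of this pipeline can be sketched with the standard whitening-plus-eigendecomposition construction. This is a generic CSP sketch, not the paper's implementation; the function names, trace normalization, and synthetic trial shapes below are our assumptions.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Generic CSP sketch: find spatial filters maximizing the variance ratio
    between two classes. trials_* : (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    d, U = np.linalg.eigh(Ca + Cb)                 # whiten composite covariance
    P = np.diag(1.0 / np.sqrt(d)) @ U.T
    w, B = np.linalg.eigh(P @ Ca @ P.T)            # eigenvalues ascending
    W = B.T @ P                                    # full spatial filter matrix
    idx = np.concatenate([np.arange(n_pairs), np.arange(len(w) - n_pairs, len(w))])
    return W[idx]                                  # extreme filters, one per class

def log_var_features(W, trial):
    z = W @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())                     # the usual CSP feature
```

On synthetic trials where each class concentrates variance in a different channel, the two extreme filters yield log-variance features with opposite class ordering.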

3.
In the context of brain-computer interface (BCI) systems, the common spatial patterns (CSP) method has been used to extract discriminative spatial filters for the classification of electroencephalogram (EEG) signals. However, the classification performance of CSP typically deteriorates when only a few training samples are collected from a new BCI user. In this paper, we propose an approach that maintains or improves the recognition accuracy of the system with only a small training data set. The proposed approach is formulated by regularizing the classical CSP technique with the strategy of transfer learning. Specifically, we incorporate into the CSP analysis inter-subject information involving the same task, by minimizing the difference between the inter-subject features. Experimental results on two data sets from BCI competitions show that the proposed approach greatly improves the classification performance over that of the conventional CSP method; the transformed variant proved to be successful in almost every case, based on a small number of available training samples.

4.
Brain-computer interaction (BCI) and physiological computing are terms that refer to using processed neural or physiological signals to influence human interaction with computers, environment, and each other. A major challenge in developing these systems arises from the large individual differences typically seen in the neural/physiological responses. As a result, many researchers use individually-trained recognition algorithms to process this data. In order to minimize time, cost, and barriers to use, there is a need to minimize the amount of individual training data required, or equivalently, to increase the recognition accuracy without increasing the number of user-specific training samples. One promising method for achieving this is collaborative filtering, which combines training data from the individual subject with additional training data from other, similar subjects. This paper describes a successful application of a collaborative filtering approach intended for a BCI system. This approach is based on transfer learning (TL), active class selection (ACS), and a mean squared difference user-similarity heuristic. The resulting BCI system uses neural and physiological signals for automatic task difficulty recognition. TL improves the learning performance by combining a small number of user-specific training samples with a large number of auxiliary training samples from other similar subjects. ACS optimally selects the classes to generate user-specific training samples. Experimental results on 18 subjects, using both nearest neighbors and support vector machine classifiers, demonstrate that the proposed approach can significantly reduce the number of user-specific training data samples. This collaborative filtering approach will also be generalizable to handling individual differences in many other applications that involve human neural or physiological data, such as affective computing.
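The mean-squared-difference user-similarity heuristic mentioned above might look roughly like this. The exact quantities the paper compares are not specified here, so the choice of averaging each subject's feature vectors before comparison is our assumption.

```python
import numpy as np

def rank_auxiliary_subjects(target_feats, subject_feats):
    """Illustrative similarity heuristic: rank other subjects by the mean
    squared difference between their mean feature vector and the target
    user's (names and the averaging step are assumptions, not the paper's)."""
    t = target_feats.mean(axis=0)
    msd = {s: float(np.mean((f.mean(axis=0) - t) ** 2))
           for s, f in subject_feats.items()}
    return sorted(msd, key=msd.get)    # most similar subject first
```

Auxiliary training data would then be drawn preferentially from the subjects at the front of the returned ranking.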

5.
The primary goal of this study was to construct a simulation model of a biofeedback brain-computer interface (BCI) system to analyze the effect of biofeedback training on BCI users. A mathematical model of a man-machine visual-biofeedback BCI system was constructed to simulate a subject using a BCI system to control cursor movements. The model consisted of a visual tracking system, a thalamo-cortical model for EEG generation, and a BCI system. The BCI system in the model was realized for real experiments of visual biofeedback training. Ten sessions of visual biofeedback training were performed in eight normal subjects during a 3-week period. The task was to move a cursor horizontally across a screen, or to hold it at the screen’s center. Experimental conditions and EEG data obtained from real experiments were then simulated with the model. Three model parameters, representing the adaptation rate of gain in the visual tracking system and the relative synaptic strength between the thalamic reticular and thalamo-cortical cells in the Rolandic areas, were estimated by optimization techniques so that the performance of the model best fitted the experimental results. The serial changes of these parameters over the ten sessions, reflecting the effects of biofeedback training, were analyzed. The model simulation could reproduce results similar to the experimental data. The group mean success rate and information transfer rate improved significantly after training (56.6 to 81.1% and 0.19 to 0.76 bits/trial, respectively). All three model parameters displayed similar and statistically significant increasing trends with time. Extensive simulation with systematic changes of these parameters also demonstrated that assigning larger values to the parameters improved the BCI performance. We constructed a model of a biofeedback BCI system that could simulate experimental data and the effect of training. The simulation results implied that the improvement was achieved through a quicker adaptation rate in visual tracking gain and a larger synaptic gain from the visual tracking system to the thalamic reticular cells. In addition to the purpose of this study, the constructed biofeedback BCI model can also be used both to investigate the effects of different biofeedback paradigms and to test, estimate, or predict the performances of other newly developed BCI signal processing algorithms.

6.
This article concerns one of the most important problems of brain-computer interfaces (BCI) based on steady-state visual evoked potentials (SSVEP): the a priori selection of the most suitable stimulation frequencies. Previous work on this problem was done either with measuring systems that have little in common with actual BCI systems (e.g., a single flashing LED), was presented on a small number of subjects, or covered only a narrow frequency range. Their results indicate a strong SSVEP response around 10 Hz, in the range 13–25 Hz, and at high frequencies in the band of 40–60 Hz. BCI interfaces use stimulation frequencies from various ranges, and the frequencies are often adapted for each user separately. The selection of these frequencies, however, had not yet been justified in a quantitative group-level study with a proper statistical account of inter-subject variability. The aim of this study is to determine the SSVEP response curve, that is, the magnitude of the evoked signal as a function of frequency. The SSVEP response was induced in conditions as close as possible to an actual BCI system, using a wide range of frequencies (5–30 Hz, in steps of 1 Hz). Data were obtained for 10 subjects. SSVEP curves for individual subjects and a population curve were determined. Statistical analyses were conducted both at the level of individual subjects and for the group. The main result of the study is the identification of the optimal frequency range for registering the SSVEP phenomenon, which is 12–18 Hz. The applied criterion of optimality was to find the largest contiguous range of frequencies yielding a strong and constant-level SSVEP response.
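A response curve of the kind sought in this study can be approximated, in the simplest case, as spectral power at each candidate stimulation frequency. The sketch below uses a plain FFT periodogram, which is only a stand-in for the magnitude measure actually used in the paper.

```python
import numpy as np

def ssvep_response(signal, fs, freqs):
    """Illustrative SSVEP response curve: periodogram power at the FFT bin
    nearest each candidate stimulation frequency (a simple stand-in for the
    paper's magnitude estimate)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    faxis = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return np.array([spec[np.argmin(np.abs(faxis - f))] for f in freqs])
```

For a synthetic signal flickering at 15 Hz, the curve over 5–30 Hz peaks at the stimulation frequency.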

7.
Yasui Y  Pepe M  Hsu L  Adam BL  Feng Z 《Biometrics》2004,60(1):199-206
Training data in a supervised learning problem consist of the class label and its potential predictors for a set of observations. Constructing effective classifiers from training data is the goal of supervised learning. In biomedical sciences and other scientific applications, class labels may be subject to errors. We consider a setting where there are two classes but observations with labels corresponding to one of the classes may in fact be mislabeled. The application concerns the use of protein mass-spectrometry data to discriminate between serum samples from cancer and noncancer patients. The patients in the training set are classified on the basis of tissue biopsy. Although biopsy is 100% specific in the sense that a tissue that shows itself to have malignant cells is certainly cancer, it is less than 100% sensitive. Reference gold standards that are subject to this special type of misclassification due to imperfect diagnosis certainty arise in many fields. We consider the development of a supervised learning algorithm under these conditions and refer to it as partially supervised learning. Boosting is a supervised learning algorithm geared toward high-dimensional predictor data, such as those generated in protein mass-spectrometry. We propose a modification of the boosting algorithm for partially supervised learning. The proposal is to view the true class membership of the samples that are labeled with the error-prone class label as missing data, and apply an algorithm related to the EM algorithm for minimization of a loss function. To assess the usefulness of the proposed method, we artificially mislabeled a subset of samples and applied the original and EM-modified boosting (EM-Boost) algorithms for comparison. Notable improvements in misclassification rates are observed with EM-Boost.
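The core idea, treating the true class of samples carrying the error-prone label as missing data and iterating an EM-style update, can be illustrated with a toy Gaussian model in place of the boosting learner. Everything below (1-D features, Gaussian class models, the initial mislabeling-rate guess) is a simplifying assumption for illustration, not the EM-Boost algorithm itself.

```python
import numpy as np

def em_relabel(x, y, n_iter=50):
    """Toy partially supervised EM: labels y==1 (biopsy-positive) are trusted;
    samples with y==0 may be mislabeled positives, so their true class is
    treated as missing and inferred with 1-D Gaussian class models."""
    pos = x[y == 1]
    mu1, s1 = pos.mean(), pos.std() + 1e-9        # trusted positive model
    x0 = x[y == 0]
    mu0, s0, pi = x0.mean(), x0.std() + 1e-9, 0.1  # pi: mislabeling-rate guess
    def npdf(v, m, s):
        return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    for _ in range(n_iter):
        # E-step: posterior that an error-prone sample is truly positive
        p1 = pi * npdf(x0, mu1, s1)
        p0 = (1 - pi) * npdf(x0, mu0, s0)
        r = p1 / (p1 + p0 + 1e-12)
        # M-step: refit the negative model and the mislabeling rate
        w = 1 - r
        mu0 = (w * x0).sum() / w.sum()
        s0 = np.sqrt((w * (x0 - mu0) ** 2).sum() / w.sum()) + 1e-9
        pi = r.mean()
    return r    # posterior of being a mislabeled (truly positive) sample
```

On synthetic data where 20 of 100 "negative" samples are actually positives, the posterior cleanly separates the hidden positives from the true negatives.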

8.
Clustering algorithms divide a set of observations into groups so that members of the same group share common features. In most algorithms, tunable parameters are set arbitrarily or by trial and error, resulting in less than optimal clustering. This paper presents a global optimization strategy for the systematic and optimal selection of parameter values associated with a clustering method. In the process, a performance criterion for the optimization model is proposed and benchmarked against popular performance criteria from the literature (namely, the Silhouette coefficient, Dunn's index, and the Davies-Bouldin index). The tuning strategy is illustrated using the support vector clustering (SVC) algorithm and simulated annealing. In order to reduce the computational burden, the paper also proposes an alternative to the adjacency matrix method (used for the assignment of cluster labels), namely a contour plotting approach. Datasets tested include the iris and thyroid datasets from the UCI repository, as well as lymphoma and breast cancer data. The optimal tuning parameters are determined efficiently, while the contour plotting approach leads to significant reductions in computational effort (CPU time), especially for large datasets. The performance criteria comparisons indicate mixed results: the Silhouette coefficient and the Davies-Bouldin index perform better, while Dunn's index performs worse on average than the proposed performance index.
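As one concrete example of the criteria being compared, the silhouette coefficient admits a compact, if naive O(n²), implementation; this is a generic textbook version, not the paper's code.

```python
import numpy as np

def silhouette(X, labels):
    """Naive silhouette coefficient: mean over points of (b - a) / max(a, b),
    where a is the mean intra-cluster distance and b the mean distance to the
    nearest other cluster. Assumes every cluster has at least two members."""
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    n = len(X)
    s = []
    for i, li in enumerate(labels):
        same = labels == li
        a = D[i, same & (np.arange(n) != i)].mean()
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        s.append((b - a) / max(a, b))
    return float(np.mean(s))
```

A well-matched clustering of two separated blobs scores near 1, while an arbitrary labeling of the same points scores near 0 or below, which is what makes the criterion usable as a tuning objective.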

9.
《IRBM》2019,40(5):297-305
Background: Brain-computer interface (BCI) systems have been widely used to develop sustainable assistive technology for people suffering from neurological impairments. A major limitation of current BCI systems is that they are based on a subject-dependent (SD) concept. SD-based BCI systems are time-consuming and inconvenient for physically or mentally disabled people, and are also not suitable for limited computing resources. To overcome these problems, the subject-independent (SI) BCI concept has recently been introduced to identify the mental states of motor-disabled people, but the expected performance of SI-based BCI has not yet been achieved. Hence, this paper presents an efficient scheme for an SI-based BCI system. The goal of this research is to develop a method for classifying mental states that can be used by any user. To attain this target, this study employs a supervised spatial filtering method with four types of feature extraction methods, including Katz fractal dimension, sub-band energy, log variance, and root mean square (RMS); the obtained features are then used as input to a linear discriminant analysis (LDA) classification model for identifying mental states in an SI BCI system. Results: The performance of the proposed design is evaluated in several ways, such as considering different time window lengths, different frequency bands, and different numbers of channels. The mean classification accuracy using the Katz feature is 84.35%, the highest among the tested features, and it outperforms existing methods. Conclusions: Our proposed design will help in building a new technology for the development of real-time SI-based BCI systems that can better support motor-disabled patients.
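Two of the listed feature types (log variance and RMS) and a two-class LDA model are simple enough to sketch directly. The spatial filtering stage and the Katz fractal dimension are omitted here, and the Fisher-discriminant form of LDA below is a generic stand-in for the paper's classifier.

```python
import numpy as np

def features(trial):
    """Two of the feature types named above for one trial of shape
    (n_channels, n_samples): per-channel log variance and RMS."""
    return np.concatenate([np.log(trial.var(axis=1)),
                           np.sqrt((trial ** 2).mean(axis=1))])

def fisher_lda(F0, F1):
    """Minimal two-class Fisher LDA: weight vector and midpoint threshold
    (a generic stand-in, not the paper's exact model)."""
    m0, m1 = F0.mean(axis=0), F1.mean(axis=0)
    Sw = np.cov(F0.T) + np.cov(F1.T) + 1e-6 * np.eye(F0.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    thr = w @ (m0 + m1) / 2
    return w, thr
```

A trial is assigned to class 1 when `features(trial) @ w > thr`; on synthetic trials differing in amplitude the rule separates the classes almost perfectly.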

10.
《IRBM》2022,43(4):317-324
A brain-computer interface (BCI) speller is a system that provides an alternative communication channel for disabled people. Brain waves are translated into machine commands through a BCI speller, which can be used as a communication medium for patients to express their thoughts without any motor movement. A BCI speller aims to spell characters using the electroencephalogram (EEG) signal, and several types of BCI spellers based on the EEG signal are available. A standard BCI speller system consists of the following elements: a BCI speller paradigm, a data acquisition system, and signal processing algorithms. In this work, a systematic review of BCI speller systems is provided, covering speller paradigms, feature extraction, feature optimization, and classification techniques. The advantages and limitations of different speller paradigms and machine learning algorithms are discussed in this article. Future research directions that may overcome the limitations of present state-of-the-art techniques for BCI spellers are also discussed.

11.
Etienne RS 《Ecology letters》2007,10(7):608-618
As the utility of the neutral theory of biodiversity is increasingly being recognized, there is also an increasing need for proper tools to evaluate the relative importance of neutral processes (dispersal limitation and stochasticity). One of the key features of neutral theory is its close link to data: sampling formulas, giving the probability of a data set conditional on a set of model parameters, have been developed for parameter estimation and model comparison. However, only single local samples can be handled with the currently available sampling formulas, whereas data are often available for many small spatially separated plots. Here, I present a sampling formula for multiple, spatially separated samples from the same metacommunity, which is a generalization of earlier sampling formulas. I also provide an algorithm to generate data sets with the model and I introduce a general test of neutrality that does not require an alternative model; this test compares the probability of the observed data (calculated using the new sampling formula) with the probability of model-generated data sets. I illustrate this with tree abundance data from three large Panamanian neotropical forest plots. When the test is performed with model parameters estimated from the three plots, the model cannot be rejected; however, when parameter estimates previously reported for BCI are used, the model is strongly rejected. This suggests that neutrality cannot explain the structure of the three Panamanian tree communities on the local (BCI) and regional (Panama Canal Zone) scale simultaneously. One should be aware, however, that aspects of the model other than neutrality may be responsible for its failure. I argue that the spatially implicit character of the model is a potential candidate.

12.
A major barrier for a broad applicability of brain-computer interfaces (BCIs) based on electroencephalography (EEG) is the large number of EEG sensor electrodes typically used. The necessity for this results from the fact that the relevant information for the BCI is often spread over the scalp in complex patterns that differ depending on subjects and application scenarios. Recently, a number of methods have been proposed to determine an individual optimal sensor selection. These methods have, however, rarely been compared against each other or against any type of baseline. In this paper, we review several selection approaches and propose one additional selection criterion based on the evaluation of the performance of a BCI system using a reduced set of sensors. We evaluate the methods in the context of a passive BCI system that is designed to detect a P300 event-related potential and compare the performance of the methods against randomly generated sensor constellations. For a realistic estimation of the reduced system's performance we transfer sensor constellations found on one experimental session to a different session for evaluation. We identified notable (and unanticipated) differences among the methods and could demonstrate that the best method in our setup is able to reduce the required number of sensors considerably. Though our application focuses on EEG data, all presented algorithms and evaluation schemes can be transferred to any binary classification task on sensor arrays.

13.
In this paper, EEG signals of 20 schizophrenic patients and 20 age-matched control participants are analyzed with the objective of determining the more informative channels and finally distinguishing the two groups. For each case, 22 channels of EEG were recorded. A two-stage feature selection algorithm is designed, such that the more informative channels are first selected to enhance the discriminative information. Two methods, bidirectional search and plus-L minus-R (LRS) techniques, are employed to select these informative channels. Interestingly, most of the selected channels are located in the temporal lobes (containing the limbic system), which confirms the neuropsychological differences in these areas between the schizophrenic and normal participants. After channel selection, a genetic algorithm (GA) is employed to select the best features from the selected channels. In this way, in addition to the elimination of the less informative channels, redundant and less discriminant features are also eliminated. A computationally fast algorithm with excellent classification results is obtained. This efficient approach involves several features, including autoregressive (AR) model parameters, band power, fractal dimension, and wavelet energy. To test the performance of the final subset of features, classifiers including linear discriminant analysis (LDA) and support vector machine (SVM) are employed to classify the reduced feature set of the two groups. Using the bidirectional search for channel selection, classification accuracies of 84.62% and 99.38% are obtained for LDA and SVM, respectively. Using the LRS technique for channel selection, classification accuracies of 88.23% and 99.54% are obtained for LDA and SVM, respectively. Finally, the results are compared and contrasted with two well-known methods, namely single-stage (evolutionary) feature selection and principal component analysis (PCA)-based feature selection. The results show improved classification accuracy in relatively low computational time with the two-stage feature selection.

14.
Feature selection from DNA microarray data is a major challenge due to the high dimensionality of expression data. The number of samples in a microarray data set is much smaller than the number of genes, so the data are ill-suited for directly training a classifier. It is therefore important to select features prior to training the classifier. Only a small subset of genes in the data set exhibits a strong correlation with the class, and finding these relevant genes is often non-trivial. Thus there is a need to develop robust yet reliable methods for gene finding in expression data. We describe the use of several hybrid feature selection approaches for gene finding in expression data. These approaches include a filter phase (filtering out the best genes from the data set) and a wrapper phase (selecting the best subset of genes from the data set). The methods use information gain (IG) and Pearson product-moment correlation (PPMC) as the filtering criteria and biogeography-based optimization (BBO) as the wrapper approach. The k-nearest-neighbour algorithm (KNN) and a back-propagation neural network are used for evaluating the fitness of gene subsets during feature selection. Our analysis shows that the IG-BBO-KNN combination provides impressive performance on different data sets, with high accuracy (>90%) and a low error rate.
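The information-gain filter phase can be sketched for a single gene as the reduction in class-label entropy after discretizing that gene's expression values. The quantile binning below is our assumption for illustration, not necessarily the paper's estimator.

```python
import numpy as np

def info_gain(x, y, bins=4):
    """Illustrative information-gain filter score for one gene: discretize
    its expression x into quantile bins and measure the drop in the entropy
    of the class labels y."""
    def entropy(labels):
        _, c = np.unique(labels, return_counts=True)
        p = c / c.sum()
        return -(p * np.log2(p)).sum()
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    d = np.digitize(x, edges)
    h = entropy(y)
    for b in np.unique(d):
        m = d == b
        h -= m.mean() * entropy(y[m])
    return h
```

Genes would be ranked by this score in the filter phase, and only the top-ranked ones passed to the BBO wrapper; a class-correlated gene scores well above a noise gene.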

15.
The complexity and scale of brain–computer interface (BCI) studies limit our ability to investigate how humans learn to use BCI systems. It also limits our capacity to develop adaptive algorithms needed to assist users with their control. Adaptive algorithm development is forced offline and typically uses static data sets. But this is a poor substitute for the online, dynamic environment where algorithms are ultimately deployed and interact with an adapting user. This work evaluates a paradigm that simulates the control problem faced by human subjects when controlling a BCI, but which avoids the many complications associated with full-scale BCI studies. Biological learners can be studied in a reductionist way as they solve BCI-like control problems, and machine learning algorithms can be developed and tested in closed loop with the subjects before being translated to full BCIs. The method is to map 19 joint angles of the hand (representing neural signals) to the position of a 2D cursor which must be piloted to displayed targets (a typical BCI task). An investigation is presented on how closely the joint angle method emulates BCI systems; a novel learning algorithm is evaluated, and a performance difference between genders is discussed.

16.
DNA microarray technology, originally developed to measure the level of gene expression, has become one of the most widely used tools in genomic studies. The crux of microarray design lies in how to select a unique probe that distinguishes a given genomic sequence from all other sequences. Due to its significance, probe selection has attracted a lot of attention, and various probe selection algorithms have been developed in recent years. Good probe selection algorithms should produce a small number of candidate probes. Efficiency is also crucial because the data involved are usually huge. Most existing algorithms are not sufficiently selective and return quite a large number of probes. We propose a new direction to tackle the problem and give an efficient algorithm based on randomization that selects a small set of probes, and we demonstrate that such a small set is sufficient to distinguish each sequence from all the other sequences. Based on the algorithm, we have developed the probe selection software RandPS, which runs efficiently in practice. The software is available on our website (http://www.csc.liv.ac.uk/~cindy/RandPS/RandPS.htm). We test our algorithm via experiments on different genomes (Escherichia coli, Saccharomyces cerevisiae, etc.), and it is able to output unique probes for most of the genes efficiently. The remaining genes can be identified by a combination of at most two probes.
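The randomized idea, drawing random candidate substrings and keeping one that occurs in no other sequence, can be illustrated in a few lines. This toy sketch ignores the hybridization constraints (melting temperature, self-complementarity, near-matches) that a real probe designer such as RandPS must handle.

```python
import random

def random_probes(seqs, probe_len=8, tries=2000, seed=0):
    """Toy randomized probe selection: for each sequence, repeatedly draw a
    random substring and keep the first one that appears as an exact
    substring of no other sequence."""
    rng = random.Random(seed)
    probes = {}
    for name, s in seqs.items():
        for _ in range(tries):
            i = rng.randrange(len(s) - probe_len + 1)
            cand = s[i:i + probe_len]
            if all(cand not in t for other, t in seqs.items() if other != name):
                probes[name] = cand
                break
    return probes
```

For random sequences of a few hundred bases, a short probe drawn this way is unique with high probability, so the loop terminates after very few draws per sequence.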

17.
In this paper we propose a new technique that adaptively extracts subject specific motor imagery related EEG patterns in the space–time–frequency plane for single trial classification. The proposed approach requires no prior knowledge of reactive frequency bands, their temporal behavior or cortical locations. For a given electrode array, it finds all these parameters by constructing electrode adaptive time–frequency segmentations that are optimized for discrimination. This is accomplished first by segmenting the EEG along the time axis with Local Cosine Packets. Next the most discriminant frequency subbands are selected in each time segment with a frequency axis clustering algorithm to achieve time and frequency band adaptation individually. Finally the subject adapted features are sorted according to their discrimination power to reduce dimensionality and the top subset is used for final classification. We provide experimental results for 5 subjects of the BCI competition 2005 dataset IVa to show the superior performance of the proposed method. In particular, we demonstrate that by using a linear support vector machine as a classifier, the classification accuracy of the proposed algorithm varied between 90.5% and 99.7% and the average classification accuracy was 96%.

18.
Obtaining satisfactory results with neural networks depends on the availability of large data samples. The use of small training sets generally reduces performance. Most classical Quantitative Structure-Activity Relationship (QSAR) studies for a specific enzyme system have been performed on small data sets. We focus on the neuro-fuzzy prediction of biological activities of HIV-1 protease inhibitory compounds when inferring from small training sets. We propose two computational intelligence prediction techniques which are suitable for small training sets, at the expense of some computational overhead. Both techniques are based on the FAMR model. The FAMR is a Fuzzy ARTMAP (FAM) incremental learning system used for classification and probability estimation. During the learning phase, each sample pair is assigned a relevance factor proportional to the importance of that pair. The two proposed algorithms in this paper are: 1) The GA-FAMR algorithm, which is new, consists of two stages: a) During the first stage, we use a genetic algorithm (GA) to optimize the relevances assigned to the training data. This improves the generalization capability of the FAMR. b) In the second stage, we use the optimized relevances to train the FAMR. 2) The Ordered FAMR is derived from a known algorithm. Instead of optimizing relevances, it optimizes the order of data presentation using the algorithm of Dagher et al. In our experiments, we compare these two algorithms with an algorithm not based on the FAM, the FS-GA-FNN introduced in [4], [5]. We conclude that when inferring from small training sets, both techniques are efficient, in terms of generalization capability and execution time. The computational overhead introduced is compensated by better accuracy. Finally, the proposed techniques are used to predict the biological activities of newly designed potential HIV-1 protease inhibitors.

19.
Salari N  Büchel C  Rose M 《PloS one》2012,7(5):e38090
The state of a neural assembly preceding an incoming stimulus is assumed to modulate the processing of subsequently presented stimuli. The nature of this state can differ with respect to the frequency of ongoing oscillatory activity. Oscillatory brain activity of specific frequency ranges such as alpha (8-12 Hz) and gamma (above 30 Hz) band oscillations are hypothesized to play a functional role in cognitive processing. Therefore, a selective modulation of this prestimulus activity could clarify the functional role of these prestimulus fluctuations. For this purpose, we adopted a novel non-invasive brain-computer interface (BCI) strategy to selectively increase alpha or gamma band activity in the occipital cortex combined with an adaptive presentation of visual stimuli within specific brain states. During training, oscillatory brain activity was estimated online and fed back to the participants to enable a deliberate modulation of alpha or gamma band oscillations. Results revealed that volunteers selectively increased alpha and gamma frequency oscillations with a high level of specificity regarding frequency range and localization. At testing, alpha or gamma band activity was classified online and at defined levels of activity, visual objects embedded in noise were presented instantly and had to be detected by the volunteer. In experiment I, the effect of two levels of prestimulus gamma band activity on visual processing was examined. During phases of increased gamma band activity significantly more visual objects were detected. In experiment II, the effect was compared against increased levels of alpha band activity. An improvement of visual processing was only observed for enhanced gamma band activity. Both experiments demonstrate the specific functional role of prestimulus gamma band oscillations for perceptual processing. We propose that the BCI method permits the selective modulation of oscillatory activity and the direct assessment of behavioral consequences to test for functional dissociations of different oscillatory brain states.

20.
Most EEG-based brain-computer interface (BCI) paradigms rely on specific electrode positions. As the structure and activity of the brain vary with each individual, the contributing channels should be chosen based on each BCI's original recordings. Phase measurement is an important approach in EEG analysis, but it is seldom used for channel selection. In this paper, the phase locking and concentrating value-based recursive feature elimination approach (PLCV-RFE) is proposed to produce robust EEG channel selections in a P300 speller. The PLCV-RFE, derived from the phase resetting mechanism, measures the phase relation between EEGs and ranks channels with a recursive strategy. Data recorded from 32 electrodes on 9 subjects are used to evaluate the proposed method. The results show that the PLCV-RFE substantially reduces channel sets and significantly improves recognition accuracy. Moreover, compared with other state-of-the-art feature selection methods (SSNRSF and SVM-RFE), the PLCV-RFE achieves better performance. Thus phase measurement is applicable to channel selection in BCI, and it may indirectly support the view that phase resetting is at least one mechanism of ERP generation.
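Recursive channel elimination in general can be sketched as: score each remaining channel, drop the weakest, repeat until the target set size is reached. The Fisher-ratio score on log-variance features below is a generic placeholder; the paper's PLCV criterion is phase-based and is not reproduced here.

```python
import numpy as np

def channel_rfe(trials, y, n_keep=4):
    """Generic recursive channel elimination sketch. trials has shape
    (n_trials, n_channels, n_samples); y holds binary class labels. Each round
    scores every remaining channel with a Fisher ratio of its log-variance
    feature (a placeholder criterion) and removes the weakest channel."""
    chans = list(range(trials.shape[1]))
    while len(chans) > n_keep:
        scores = []
        for c in chans:
            f = np.log(trials[:, c, :].var(axis=1) + 1e-12)
            f0, f1 = f[y == 0], f[y == 1]
            scores.append((f0.mean() - f1.mean()) ** 2 /
                          (f0.var() + f1.var() + 1e-12))
        chans.pop(int(np.argmin(scores)))
    return chans
```

On synthetic trials where only two of eight channels carry class-dependent variance, the recursion retains exactly those two channels.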
