Similar Articles
20 similar articles found.
1.
Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses a multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potential complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72 %, which is substantially better than that achieved with any single ERP component feature (55.07 % for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90 % higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain–computer interface research.
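The multiple-kernel idea can be sketched as a weighted sum of one kernel per ERP component, fed to a precomputed-kernel SVM. This is only an illustrative sketch, not the authors' implementation; the component windows, kernel weights, and random stand-in data are assumptions.

# Sketch: fusing ERP-component features with a weighted sum of kernels (assumed setup).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 32
# Hypothetical per-component feature matrices (e.g., mean amplitude per channel
# in the P1, N1, P2a and P2b windows); here filled with random data.
components = {c: rng.standard_normal((n_trials, n_channels))
              for c in ["P1", "N1", "P2a", "P2b"]}
y = rng.integers(0, 4, n_trials)              # four object categories

idx_train, idx_test = train_test_split(np.arange(n_trials), random_state=0)

def fused_kernel(weights):
    """Weighted sum of one RBF kernel per ERP component."""
    K_train = sum(w * rbf_kernel(components[c][idx_train], components[c][idx_train])
                  for c, w in weights.items())
    K_test = sum(w * rbf_kernel(components[c][idx_test], components[c][idx_train])
                 for c, w in weights.items())
    return K_train, K_test

weights = {"P1": 0.2, "N1": 0.4, "P2a": 0.2, "P2b": 0.2}   # assumed; normally learned or tuned
K_train, K_test = fused_kernel(weights)
clf = SVC(kernel="precomputed").fit(K_train, y[idx_train])
print("fused-kernel accuracy:", clf.score(K_test, y[idx_test]))

In a genuine multiple-kernel learning setup the weights would be optimized jointly with the classifier rather than fixed by hand as above.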

2.
Multivariate pattern analysis is a technique that allows the decoding of conceptual information such as the semantic category of a perceived object from neuroimaging data. Impressive single-trial classification results have been reported in studies that used fMRI. Here, we investigate the possibility of identifying conceptual representations from event-related EEG based on the presentation of an object in different modalities: its spoken name, its visual representation and its written name. We used Bayesian logistic regression with a multivariate Laplace prior for classification. Marked differences in classification performance were observed for the tested modalities. Highest accuracies (89% correctly classified trials) were attained when classifying object drawings. In auditory and orthographical modalities, results were lower though still significant for some subjects. The employed classification method allowed for a precise temporal localization of the features that contributed to the performance of the classifier for three modalities. These findings could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of real-time brain-computer interface applications.
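The MAP estimate under an independent Laplace prior corresponds to L1-regularized (sparse) logistic regression, which can serve as a simple stand-in for the hierarchical Bayesian decoder used in this study. The sketch below uses that stand-in on hypothetical epoch data; the feature layout and regularization strength are assumptions.

# Sketch: sparse logistic regression as a stand-in for a Laplace-prior decoder (assumed data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 64, 150
X = rng.standard_normal((n_trials, n_channels, n_times))   # hypothetical ERP epochs
y = rng.integers(0, 2, n_trials)                           # e.g., animal vs. tool drawings

X_flat = X.reshape(n_trials, -1)          # channels x time points as one feature vector
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),  # L1 ~ Laplace-prior MAP
)
scores = cross_val_score(clf, X_flat, y, cv=5)
print("cross-validated accuracy:", scores.mean())

The nonzero weights of such a sparse model give the kind of temporal localization of informative features described in the abstract.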

3.
Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently, it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition.
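A minimal version of the distance-to-boundary analysis: at each time point, fit a linear classifier across trials, take each exemplar's signed distance from the decision hyperplane, and correlate the distances with reaction times. The classifier, the Spearman correlation, and the in-sample distances below are assumptions for illustration, not the paper's pipeline.

# Sketch: correlate per-exemplar distance from a decision boundary with reaction time.
import numpy as np
from scipy.stats import spearmanr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 300, 160, 120
X = rng.standard_normal((n_trials, n_sensors, n_times))  # hypothetical MEG epochs
y = rng.integers(0, 2, n_trials)                          # e.g., animate vs. inanimate
rt = rng.uniform(0.4, 0.9, n_trials)                      # hypothetical reaction times (s)

corr_over_time = []
for t in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X[:, :, t], y)
    dist = clf.decision_function(X[:, :, t])   # signed distance; held-out data would be used in practice
    # Prediction: exemplars far from the boundary should be categorized faster,
    # i.e. |distance| and RT should be negatively correlated.
    rho, _ = spearmanr(np.abs(dist), rt)
    corr_over_time.append(rho)

peak_t = int(np.argmax(np.abs(corr_over_time)))
print("strongest distance-RT correlation at time index", peak_t)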

4.
Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
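The cross-participant analysis amounts to leave-one-subject-out classification: train on the pooled epochs of all but one participant and test on the held-out participant. A generic sketch with an assumed classifier and feature layout, not the study's exact pipeline:

# Sketch: leave-one-participant-out decoding of phoneme category (assumed data layout).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_subjects, trials_per_subject, n_features = 10, 80, 64 * 50
X = rng.standard_normal((n_subjects * trials_per_subject, n_features))  # flattened epochs
y = rng.integers(0, 2, len(X))                     # perceived phoneme category
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print("per-held-out-subject accuracies:", np.round(scores, 2))

Restricting the training set to one native-language group, as in the second analysis, only requires masking X, y, and groups before the same call.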

5.

Background  

State-of-the-art signal processing methods are known to detect information in single-trial event-related EEG data, a crucial aspect in development of real-time applications such as brain computer interfaces. This paper investigates one such novel approach, evaluating how individual classifier and feature subset tailoring affects classification of single-trial EEG finger movements. The discrete wavelet transform was used to extract signal features that were classified using linear regression and non-linear neural network models, which were trained and architecturally optimized with evolutionary algorithms. The input feature subsets were also allowed to evolve, thus performing feature selection in a wrapper fashion. Filter approaches were implemented as well by limiting the degree of optimization.
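The feature-extraction step described here, discrete wavelet transform coefficients fed to a classifier, can be sketched with PyWavelets. The wavelet family, decomposition level, and the logistic-regression stand-in for the evolutionarily optimized models are assumptions.

# Sketch: DWT coefficients of single-trial EEG as classification features (assumed settings).
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_samples = 160, 8, 256
X = rng.standard_normal((n_trials, n_channels, n_samples))  # hypothetical finger-movement epochs
y = rng.integers(0, 2, n_trials)                             # e.g., left vs. right finger

def dwt_features(epoch, wavelet="db4", level=4):
    """Concatenate DWT coefficients of every channel into one feature vector."""
    feats = []
    for channel in epoch:
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        feats.append(np.concatenate(coeffs))
    return np.concatenate(feats)

features = np.array([dwt_features(epoch) for epoch in X])
scores = cross_val_score(LogisticRegression(max_iter=1000), features, y, cv=5)
print("DWT-feature accuracy:", scores.mean())

The wrapper-style feature selection described in the abstract would then search over subsets of these coefficient features using classifier accuracy as the fitness criterion.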

6.
The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system.
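The gain from trial averaging can be reproduced schematically: average k epochs of the same class before classification, which raises the signal-to-noise ratio of the P300 response. The sketch below uses a generic linear SVM on synthetic data with a weak injected effect; it is not the paper's pipeline.

# Sketch: single-trial vs. k-trial-averaged classification of attended vs. unattended stimuli.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_trials, n_features = 600, 64 * 76          # e.g., 64 channels x samples in the 200-500 ms window
y = rng.integers(0, 2, n_trials)             # attended (1) vs. unattended (0) direction
X = rng.standard_normal((n_trials, n_features)) + 0.2 * y[:, None]  # weak P300-like effect

def average_trials(X, y, k):
    """Average k random epochs of the same class to raise the signal-to-noise ratio."""
    Xa, ya = [], []
    for label in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == label))
        for group in np.array_split(idx, len(idx) // k):
            Xa.append(X[group].mean(axis=0))
            ya.append(label)
    return np.array(Xa), np.array(ya)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
for k in (1, 10):
    Xk, yk = average_trials(X, y, k)
    print(f"{k:>2}-trial averaging:", cross_val_score(clf, Xk, yk, cv=5).mean())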

7.
Are objects coded by a small number of neurons or cortical regions that respond preferentially to the object in question, or by more distributed patterns of responses, including neurons or regions that respond only weakly? Distributed codes can represent a larger number of alternative items than sparse codes but produce ambiguities when multiple items are represented simultaneously (the "superposition" problem). Recent studies found category information in the distributed pattern of response across the ventral visual pathway, including in regions that do not "prefer" the object in question. However, these studies measured neural responses to isolated objects, a situation atypical of real-world vision, where multiple objects are usually present simultaneously ("clutter"). We report that information in the spatial pattern of fMRI response about standard object categories is severely disrupted by clutter and eliminated when attention is diverted. However, information about preferred categories in category-specific regions is undiminished by clutter and partly preserved under diverted attention. These findings indicate that in natural conditions, the pattern of fMRI response provides robust category information only for objects coded in selective cortical regions and highlight the vulnerability of distributed representations to clutter and the advantages of sparse cortical codes in mitigating clutter costs.

8.
Within the range of images that we might categorize as a “beach”, for example, some will be more representative of that category than others. Here we first confirmed that humans could categorize “good” exemplars better than “bad” exemplars of six scene categories and then explored whether brain regions previously implicated in natural scene categorization showed a similar sensitivity to how well an image exemplifies a category. In a behavioral experiment participants were more accurate and faster at categorizing good than bad exemplars of natural scenes. In an fMRI experiment participants passively viewed blocks of good or bad exemplars from the same six categories. A multi-voxel pattern classifier trained to discriminate among category blocks showed higher decoding accuracy for good than bad exemplars in the PPA, RSC and V1. This difference in decoding accuracy cannot be explained by differences in overall BOLD signal, as average BOLD activity was either equivalent or higher for bad than good scenes in these areas. These results provide further evidence that V1, RSC and the PPA not only contain information relevant for natural scene categorization, but their activity patterns mirror the fundamentally graded nature of human categories. Analysis of the image statistics of our good and bad exemplars shows that variability in low-level features and image structure is higher among bad than good exemplars. A simulation of our neuroimaging experiment suggests that such a difference in variance could account for the observed differences in decoding accuracy. These results are consistent with both low-level models of scene categorization and models that build categories around a prototype.

9.
Haushofer J, Kanwisher N. Neuron, 2007, 53(6): 773-775
How does experience change representations of visual objects in the brain? Do cortical object representations reflect category membership? In this issue of Neuron, Jiang et al. show that category training leads to sharpening of neural responses in high-level visual cortex; in contrast, category boundaries may be represented only in prefrontal cortex.

10.
A new method is presented for quantitative evaluation of single-sweep phase and amplitude electroencephalogram (EEG) characteristics that is a more informative approach in comparison with conventional signal averaging. In the averaged potential, phase-locking and amplitude effects of the EEG response cannot be separated. To overcome this problem, single-trial EEG sweeps are decomposed into separate presentations of their phase relationships and amplitude characteristics. The stability of the phase-coupling to stimulus is then evaluated independently by analyzing the single-sweep phase presentations. The method has the following advantages: information about stability of the phase-locking can be used to assess event-related oscillatory activity; the method permits evaluation of the timing of event-related phase-locking; and a global assessment and comparison of the phase-locking of ensembles of single sweeps elicited in different processing conditions is possible. The method was employed to study auditory alpha and theta responses in young and middle-aged adults. The results showed that whereas amplitudes of frequency responses tended to decrease, the phase-locking increased significantly with age. The synchronization with stimulus (phase-locking) was the only parameter reliably to differentiate the brain responses of the two age groups, as well as to reveal specific age-related changes in frontal evoked alpha activity. Thus, the present approach can be used to evaluate dynamic brain processes more precisely.
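The separation of single-sweep phase and amplitude can be illustrated with a band-pass filter plus Hilbert transform: the inter-trial phase-locking value is computed from the phase alone, independently of amplitude. The filter band, sampling rate, and synthetic sweeps below are assumptions, not the original procedure.

# Sketch: separating single-trial phase and amplitude and measuring phase-locking to the stimulus.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(6)
fs, n_trials, n_samples = 250, 100, 250        # 1-s sweeps at 250 Hz (assumed)
t = np.arange(n_samples) / fs
# Hypothetical single sweeps: a weakly phase-locked 10-Hz (alpha) response plus noise.
sweeps = np.sin(2 * np.pi * 10 * t + rng.normal(0, 0.8, (n_trials, 1))) \
         + rng.standard_normal((n_trials, n_samples))

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)     # alpha band
analytic = hilbert(filtfilt(b, a, sweeps, axis=1), axis=1)
phase = np.angle(analytic)                              # phase of each sweep
amplitude = np.abs(analytic)                            # amplitude envelope of each sweep

# Phase-locking value across trials: 1 = identical phase in every sweep, 0 = random phase.
plv = np.abs(np.mean(np.exp(1j * phase), axis=0))
print("mean alpha amplitude:", amplitude.mean())
print("peak phase-locking value:", plv.max())

Comparing plv between conditions or age groups gives the kind of amplitude-independent phase-stability measure the abstract describes.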

11.
A novel discriminant method, termed local discriminative spatial patterns (LDSP), is proposed for movement-related potentials (MRPs)-based single-trial electroencephalogram (EEG) classification. Different from conventional discriminative spatial patterns (DSP), LDSP explicitly considers local structure of EEG trials in the construction of scatter matrices in the Fisher-like criterion. The underlying manifold structure of two-dimensional spatio-temporal EEG signals contains more discriminative information. LDSP is an extension to DSP in the sense that DSP can be formulated as a special case of LDSP. By constructing an adjacency matrix, LDSP is calculated as a generalized eigenvalue problem, and so is computationally straightforward. Experiments on MRPs-based single-trial EEG classification show the effectiveness of the proposed LDSP method.
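A DSP-style spatial filter, the special case of LDSP, reduces to a generalized eigenvalue problem on between-class and within-class scatter matrices of the spatio-temporal trials. The sketch below shows that computation on hypothetical data; the exact scatter definitions, the ridge term, and the number of filters are assumptions, and the local adjacency weighting that distinguishes LDSP from DSP is omitted.

# Sketch: discriminative spatial patterns (DSP) as a generalized eigenvalue problem.
# The LDSP extension would replace Sw/Sb with locally weighted scatter matrices.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
n_trials, n_channels, n_samples = 120, 32, 200
X = rng.standard_normal((n_trials, n_channels, n_samples))  # hypothetical MRP epochs
y = rng.integers(0, 2, n_trials)

means = {c: X[y == c].mean(axis=0) for c in (0, 1)}          # class-mean spatio-temporal patterns
grand = X.mean(axis=0)

# Between-class scatter: spread of the class means around the grand mean.
Sb = sum((means[c] - grand) @ (means[c] - grand).T for c in (0, 1))
# Within-class scatter: spread of trials around their own class mean.
Sw = sum((x - means[c]) @ (x - means[c]).T for x, c in zip(X, y)) / n_trials

# Fisher-like criterion: maximize w^T Sb w / w^T Sw w  ->  generalized eigenproblem.
eigvals, eigvecs = eigh(Sb, Sw + 1e-6 * np.eye(n_channels))  # small ridge for stability
spatial_filters = eigvecs[:, ::-1][:, :4]                    # filters with the largest ratios
features = np.einsum("cf,tcs->tfs", spatial_filters, X)      # project every trial
print("filtered trial shape:", features.shape)               # (trials, filters, samples)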

12.
Han Li, Liang Zhang, Jiacai Zhang, Changming Wang, Li Yao, Xia Wu, Xiaojuan Guo. Cognitive Neurodynamics, 2015, 9(2): 103-112
A reactive brain-computer interface using electroencephalography (EEG) relies on the classification of evoked ERP responses. Because trial-to-trial variation is inevitable in EEG signals, it is a challenge to capture a consistent distribution of classification features. Clustering EEG trials with similar features and using a specific classifier adjusted to each cluster can improve EEG classification. In this paper, instead of measuring the similarity of ERP features, the brain states during presentation of the image stimuli that evoked N1 responses were used to group EEG trials. The correlation between momentary phases of pre-stimulus EEG oscillations and N1 amplitudes was analyzed. The results demonstrated that the phases of time–frequency points around 5.3 Hz, approximately 0.3 s before stimulus onset, have a significant effect on ERP classification accuracy. Our findings revealed that N1 components of the ERP fluctuate with the momentary phase of the EEG. We further studied the influence of pre-stimulus momentary phases on classification of N1 features. Results showed that linear classifiers demonstrated outstanding classification performance when training and testing trials had similar momentary phases. This suggests a new way to improve EEG classification: group EEG trials with similar pre-stimulus phases and train a separate classifier for each group.
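The proposed grouping can be sketched as follows: estimate each trial's momentary phase at roughly 5.3 Hz about 0.3 s before stimulus onset (here via a complex Morlet wavelet), bin trials by that phase, and train one classifier per bin. The wavelet construction, bin count, classifier, and data are all assumptions for illustration.

# Sketch: group trials by pre-stimulus phase and train one classifier per phase bin.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
fs, n_trials, n_channels, n_samples = 250, 400, 32, 500   # 2-s epochs, stimulus onset at 1 s (assumed)
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)                           # class labels for the N1 features
onset, freq, lag = 250, 5.3, 0.3                           # onset sample, frequency (Hz), pre-stimulus lag (s)

def prestim_phase(trial, channel=0, cycles=3):
    """Momentary phase at `freq` Hz, `lag` s before onset, via a complex Morlet wavelet."""
    centre = onset - int(lag * fs)
    half = int(cycles * fs / (2 * freq))
    t = np.arange(-half, half + 1) / fs
    sigma = cycles / (2 * np.pi * freq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-(t ** 2) / (2 * sigma ** 2))
    segment = trial[channel, centre - half:centre + half + 1]
    return np.angle(np.sum(segment * wavelet))

phases = np.array([prestim_phase(trial) for trial in X])
bins = np.digitize(phases, np.linspace(-np.pi, np.pi, 5)[1:-1])   # four phase bins

for b in np.unique(bins):
    idx = bins == b
    if idx.sum() < 20:
        continue
    feats = X[idx, :, onset:onset + 50].reshape(idx.sum(), -1)     # post-stimulus N1 window
    acc = cross_val_score(LogisticRegression(max_iter=1000), feats, y[idx], cv=5).mean()
    print(f"phase bin {b}: {idx.sum()} trials, accuracy {acc:.2f}")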

13.
谭磊, 赵书河, 罗云霄, 周洪奎, 王安, 雷步云. 生态学报 (Acta Ecologica Sinica), 2014, 34(24): 7251-7260
For pixel-based land cover classification, vegetation is the most difficult class to separate. A multi-temporal, object-oriented classification approach can address this problem well. Taking the hilly region of Yantai, Shandong Province as the study area, we performed automatic land cover classification using Landsat TM (Landsat Thematic Mapper remotely sensed imagery), DEM (Digital Elevation Model), slope, slope position, aspect, and other data with a multi-temporal classification method based on object features. The imagery was first segmented at multiple scales and the segmentation results were examined to select an appropriate scale; the spectral, textural, and shape features of the resulting objects were then analyzed. Differences between land cover classes were characterized according to their spectral features, geographic correlation, shape, and spatial distribution. A decision tree was built and fuzzy classification was carried out using membership functions, with a support vector machine used to improve classification accuracy. The results show that the multi-temporal, object-oriented classification method markedly improves accuracy over traditional pixel-based classification, and in particular resolves the problem of distinguishing trees, shrubs, and grasses.

14.
Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of “sameness” among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16–17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher who wishes to control the similarity of experimental stimuli according to an objective quantification of “sameness.”
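The scaling step reported here can be reproduced with off-the-shelf metric MDS on a dissimilarity matrix, and prototypicality can be indexed by an exemplar's mean distance from the other category members in the resulting space. The random dissimilarity matrix, the two-dimensional solution, and the distance-based index below are assumptions, not the database's exact procedure.

# Sketch: MDS solution and a distance-based prototypicality index for one category (assumed data).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(9)
n_exemplars = 16
# Hypothetical pairwise dissimilarity ratings for the 16 exemplars of one category.
d = rng.uniform(0.1, 1.0, (n_exemplars, n_exemplars))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)            # coordinate locations for each stimulus
print("stress:", round(mds.stress_, 3))

# Prototypicality index: items close to all other items in the space are more prototypical.
mean_dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1).mean(axis=1)
print("most prototypical exemplar:", int(np.argmin(mean_dist)))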

15.
In the present study, we examined whether infant Japanese macaques categorize objects without any training, using a similar technique also used with human infants (the paired-preference method). During the familiarization phase, subjects were presented twice with two pairs of different objects from one global-level category. During the test phase, they were presented twice with a pair consisting of a novel familiar-category object and a novel global-level category object. The subjects were tested with three global-level categories (animal, furniture, and vehicle). It was found that they showed significant novelty preferences as a whole, indicating that they processed similarities between familiarization objects and novel familiar-category objects. These results suggest that subjects responded distinctively to objects without training, indicating the possibility that infant macaques possess the capacity for categorization.

16.
Category formation allows us to group perceptual objects into meaningful classes and is fundamental to cognition. Categories can be derived from similarity relationships of object features by using prototypes or multiple exemplars, or from abstract relationships of features and rules. A variety of brain areas have been implicated in categorization processes, but mechanistic insights on the single-cell and local-network level are still rare and limited to the matching of individual objects to categories. For directional categorization of tone steps, as in melody recognition, abstract relationships between sequential events (higher or lower in frequency) have to be formed. To explore the neuronal mechanisms of this categorical identification of step direction, we trained monkeys for more than two years on a contour-discrimination task with multiple tone sequences. In the auditory cortex of these highly trained monkeys, we identified two interrelated types of neuronal firing: Increased phasic responses to tones categorically represented the reward-predicting downward frequency steps and not upward steps; subsequently, slow modulations of tonic firing predicted the behavioral decisions of the monkeys, including errors. Our results on neuronal mechanisms of categorical stimulus identification and of decision making attribute a cognitive role to auditory cortex, in addition to its role in signal processing.

17.
Antzoulatos EG, Miller EK. Neuron, 2011, 71(2): 243-249
Learning to classify diverse experiences into meaningful groups, like categories, is fundamental to normal cognition. To understand its neural basis, we simultaneously recorded from multiple electrodes in lateral prefrontal cortex and dorsal striatum, two interconnected brain structures critical for learning. Each day, monkeys learned to associate novel abstract, dot-based categories with a right versus left saccade. Early on, when they could acquire specific stimulus-response associations, striatum activity was an earlier predictor of the corresponding saccade. However, as the number of exemplars increased and monkeys had to learn to classify them, PFC activity began to predict the saccade associated with each category before the striatum. While monkeys were categorizing novel exemplars at a high rate, PFC activity was a strong predictor of their corresponding saccade early in the trial before the striatal neurons. These results suggest that striatum plays a greater role in stimulus-response association and PFC in abstraction of categories.

18.
Fragment-based learning of visual object categories
When we perceive a visual object, we implicitly or explicitly associate it with a category we know. It is known that the visual system can use local, informative image fragments of a given object, rather than the whole object, to classify it into a familiar category. How we acquire informative fragments has remained unclear. Here, we show that human observers acquire informative fragments during the initial learning of categories. We created new, but naturalistic, classes of visual objects by using a novel "virtual phylogenesis" (VP) algorithm that simulates key aspects of how biological categories evolve. Subjects were trained to distinguish two of these classes by using whole exemplar objects, not fragments. We hypothesized that if the visual system learns informative object fragments during category learning, then subjects must be able to perform the newly learned categorization by using only the fragments as opposed to whole objects. We found that subjects were able to successfully perform the classification task by using each of the informative fragments by itself, but not by using any of the comparable, but uninformative, fragments. Our results not only reveal that novel categories can be learned by discovering informative fragments but also introduce and illustrate the use of VP as a versatile tool for category-learning research.

19.
Beauchamp MS, Lee KE, Argall BD, Martin A. Neuron, 2004, 41(5): 809-823
Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).

20.
Using a rapid serial visual presentation paradigm, we previously showed that the average amplitudes of six event-related potential (ERP) components were affected by different categories of emotional faces. In the current study, we investigated the six discriminating components on a single-trial level to clarify whether the amplitude difference between experimental conditions results from a difference in the real variability of single-trial amplitudes or from latency jitter across trials. We found consistent amplitude differences in the single-trial P1, N170, VPP, N3, and P3 components, demonstrating that a substantial proportion of the average amplitude differences can be explained by the pure variability in amplitudes on a single-trial basis between experimental conditions. These single-trial results verified the three-stage scheme of facial expression processing beyond multitrial ERP averaging, and showed the three processing stages of "fear popup", "emotional/unemotional discrimination", and "complete separation" based on the single-trial ERP dynamics.
