Similar literature
20 similar documents retrieved; search time: 31 ms
1.
2.
Brain–computer interfaces (BCIs) provide a new approach to human–computer communication, in which control is realised by performing mental tasks such as motor imagery (MI). In this study, we investigate a novel method to automatically segment electroencephalographic (EEG) data within a trial and extract features accordingly, in order to improve the performance of MI data classification techniques. A new local discriminant bases (LDB) algorithm using common spatial patterns (CSP) projection as the transform function is proposed for automatic trial segmentation. CSP is also used for feature extraction following trial segmentation. The new technique also makes it possible to obtain a more accurate picture of the most relevant temporal–spatial points in the EEG during MI. The results are compared with other standard temporal segmentation techniques, such as sliding windows and LDB based on the local cosine transform (LCT).
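The CSP projection at the core of this approach reduces to a generalised eigenvalue problem on the two class covariance matrices. The following is a minimal illustrative sketch, not the authors' implementation; the trial array shapes and the synthetic data are assumptions:

```python
import numpy as np

def csp_filters(X_a, X_b, n_filters=2):
    """Compute CSP spatial filters from two sets of trials.

    X_a, X_b: arrays of shape (trials, channels, samples).
    Returns W of shape (2*n_filters, channels): filters that maximise
    variance for one class while minimising it for the other.
    """
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C_a, C_b = mean_cov(X_a), mean_cov(X_b)
    # Generalised eigenvalue problem: C_a w = lambda (C_a + C_b) w
    evals, evecs = np.linalg.eig(np.linalg.solve(C_a + C_b, C_a))
    order = np.argsort(evals.real)
    # Keep filters from both ends of the eigenvalue spectrum
    idx = np.concatenate([order[:n_filters], order[-n_filters:]])
    return evecs[:, idx].real.T

def csp_features(trial, W):
    """Normalised log-variance features of one (channels, samples) trial."""
    Z = W @ trial
    var = Z.var(axis=1)
    return np.log(var / var.sum())
```

In practice the log-variance features would then feed a simple classifier (e.g. LDA), one feature vector per segmented sub-trial.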

3.
A dual-channel segmentation method for the EEG signal has been developed. Its purpose is to divide the signals into segments according to information common to the two channels. The segmentation criterion is based on changes in the cross-spectrum of the two signals. It has been shown theoretically, as well as by simulation studies and by analysis of real EEG data, that the method is sensitive to changes common to both channels, whereas segmentation does not occur as a result of changes in either channel separately.
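A minimal sketch of the idea, assuming a windowed magnitude cross-spectrum and a simple normalised spectral distance as the change criterion (the authors' actual test statistic is not specified here):

```python
import numpy as np

def cross_spectrum(x, y):
    """Magnitude of the cross-spectrum of two equal-length windows."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    return np.abs(X * np.conj(Y))

def segment_dual_channel(ch1, ch2, win=128, threshold=2.0):
    """Mark segment boundaries where the windowed cross-spectrum of the
    two channels deviates from a running reference window."""
    ref = cross_spectrum(ch1[:win], ch2[:win])
    boundaries = []
    for start in range(win, len(ch1) - win + 1, win):
        cur = cross_spectrum(ch1[start:start + win], ch2[start:start + win])
        # Normalised spectral distance relative to the reference window
        dist = np.sum(np.abs(cur - ref)) / np.sum(ref)
        if dist > threshold:
            boundaries.append(start)
            ref = cur  # restart the reference after a detected change
    return boundaries
```

Because the statistic is built on the cross-spectrum, a change confined to one channel contributes little: the cross-term stays small wherever the other channel has no matching power.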

4.
The history of quantitative, computerized electroencephalogram (EEG) analysis is reviewed. It is shown that, until very recently, the basic approach to EEG analysis assumed that the EEG is stochastic. Consequently, statistical pattern recognition techniques, segmentation procedures, syntactic methods, knowledge-based approaches, and even artificial neural network methods have been developed with varying levels of success. A fundamentally different approach to computerized EEG analysis, however, is making its way into the laboratories. The basic idea, inspired by recent advances in the area of non-linear dynamics, and especially the theory of chaos, is to view an EEG as the output of a deterministic system of relatively low complexity, but containing non-linearities. This suggests that studying the geometrical dynamics of EEGs, and developing neurophysiologically realistic models of EEG generation, may produce more successful automated EEG analysis techniques than the classical, stochastic methods. Evidence supporting the non-linear dynamics paradigm is reviewed, and possible research paths are indicated.

5.
The phasic organisation of human EEG alpha activity was studied in a pilot investigation using a previously proposed EEG segmental analysis methodology. The EEG was recorded in three normal subjects under resting conditions. The segmentation procedure enabled effective identification of periods with different amplitudes in the alpha band and of the short-term transitions between them. Mean intersegmental variability of the amplitude envelope was computed for the eyes-closed and eyes-open EEG in each of 16 standard derivations. Analysis of segment amplitude distributions showed that the difference between the average alpha activity amplitudes in these conditions was determined mainly by variations in the number of segments of different amplitude classes, and not by a shift of the distribution or by a change in its width. Distribution and quartile analysis of mean segment amplitudes provides evidence for possible functional heterogeneity of the upper and middle subranges of the amplitude range.

6.
Time-varying AR (TV-AR) modeling is applied to the sleep EEG signal in order to perform parameter estimation and to detect changes in the signal characteristics (segmentation). Several types of basis functions were analyzed to determine how closely they can approximate the parameter changes characteristic of the EEG signal. The TV-AR model was applied to a large number of simulated signal segments in order to examine the behaviour of the estimation under various conditions, such as variations in the EEG parameters, in the location of segment boundaries, and in the order of the basis functions. The set of functions underlying the Discrete Cosine Transform (DCT), and the Walsh functions, were found to be the most efficient for estimating the model parameters. A segmentation algorithm based on an “identification function” calculated from the estimated model parameters is suggested.
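The basis-expansion idea can be illustrated with a least-squares TV-AR fit using a DCT-style basis: each AR coefficient is written as a_i(n) = Σ_k c_ik f_k(n), turning a time-varying problem into ordinary least squares on the c_ik. This is a hedged sketch, not the paper's estimator; the basis definition and fitting procedure are assumptions:

```python
import numpy as np

def dct_basis(n_samples, n_basis):
    """First n_basis DCT-II-style basis functions on n_samples points."""
    n = np.arange(n_samples)
    return np.stack([np.cos(np.pi * k * (n + 0.5) / n_samples)
                     for k in range(n_basis)])

def tvar_fit(x, order=2, n_basis=3):
    """Least-squares fit of a time-varying AR model via basis expansion.

    Model: x[n] = sum_i a_i(n) x[n-i] + e[n], with a_i(n) = sum_k c_ik f_k(n).
    Returns the (order, n_basis) coefficient matrix c.
    """
    N = len(x)
    F = dct_basis(N, n_basis)                # (n_basis, N)
    rows, target = [], x[order:]
    for n in range(order, N):
        # Regressor couples the past samples with the basis values at time n
        phi = np.outer(x[n - order:n][::-1], F[:, n]).ravel()
        rows.append(phi)
    c, *_ = np.linalg.lstsq(np.array(rows), target, rcond=None)
    return c.reshape(order, n_basis)

def tvar_coeffs(c, N):
    """Recover the time-varying AR coefficients a_i(n) from c."""
    return c @ dct_basis(N, c.shape[1])
```

A segmentation "identification function" could then be built from the prediction error of the fitted model, flagging boundaries where the error rises.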

7.
The use of the dynamic clusters method for automatic extraction of compressed information about a recorded EEG signal is presented. The computer first divides the record into quasi-stationary segments by means of adaptive segmentation. Second, the extracted segments are classified by the dynamic clusters method into homogeneous classes. One component of the clustering algorithm makes it possible to identify and display the most typical class members, which may represent the whole studied EEG signal and may serve as input for the next phase of automatic EEG analysis, i.e. the classification of whole EEG records. The procedure was applied to a 75-second EEG record of an anaesthetized cat intoxicated with CO.

8.
The elucidation of the complex machinery the human brain uses to segregate and integrate information while performing high-level cognitive functions is a subject with far-reaching consequences. The most significant contributions to date in this field, known as cognitive neuroscience, have been achieved using innovative neuroimaging techniques, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), which measure variations in both time and space of some interpretable physical magnitudes. Remarkable maps of cerebral activation involving function-restricted brain areas, as well as graphs of the functional connectivity between them, have been obtained from EEG and fMRI data by solving spatio-temporal inverse problems, which constitutes a top-down approach. However, in many cases a natural bridge between these maps/graphs and the causal physiological processes is lacking, leading to misunderstandings in their interpretation. Recent advances in the comprehension of the underlying physiological mechanisms at different cerebral scales have provided researchers with an excellent scenario for developing sophisticated biophysical models that permit an integration of these neuroimaging modalities, which must share a common aetiology. This paper proposes a bottom-up approach in which physiological parameters enter a specific system of mesoscopic dynamic equations. Observation equations encapsulating the relationship between the mesostates and the EEG/fMRI data are then obtained from the physical foundations of these techniques. A methodology for estimating parameters from fused EEG/fMRI data is also presented. In this context, the concepts of activation and effective connectivity are carefully revised. The new approach permits us to examine and discuss some future prospects for the integration of multimodal neuroimages.

9.
Respiratory cycle-related EEG change (RCREC) is characterized by significant relative EEG power changes within different stages of respiration during sleep. RCREC has been demonstrated to predict sleepiness in patients with obstructive sleep apnoea and is hypothesized to represent microarousals; as such, RCREC may provide a sensitive marker of respiratory arousals. A key step in the quantification of RCREC is respiratory signal segmentation, which is conventionally based on the local maxima and minima of the nasal flow signal. We have investigated an alternative respiratory cycle segmentation method based on inspiratory/expiratory transitions. Sixty-two healthy paediatric participants were recruited through staff of local universities in Bolivia. Subjects underwent attended polysomnography on a single night (Compumedics PS2 system). Studies were sleep-staged according to standard criteria. The C3/A2 EEG channel and time-locked nasal flow (thermistor) were used in RCREC quantification. Forty-seven subjects aged 7–17 (11.4 ± 3) years (24M:23F) were found to have polysomnograms usable for RCREC calculation. Respiratory cycles were segmented using both the conventional and the novel (transition) methods, and differences in RCREC derived from the two methods were compared in each frequency band. The significance of the transition RCREC, as measured by Fisher's F value through analysis of variance (ANOVA), was found to be significantly higher than that of the conventional RCREC in all frequency bands except beta (P < 0.05). This increase in the statistical significance of RCREC with the novel transition segmentation approach suggests better alignment of the respiratory cycle segments with the underlying physiology driving RCREC.
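The transition-based segmentation step can be approximated by detecting zero crossings of the baseline-corrected flow signal, since inspiratory/expiratory transitions are where flow changes sign. This is a simplified sketch, not the study's exact algorithm:

```python
import numpy as np

def transition_segments(flow):
    """Segment a respiratory flow signal at inspiratory/expiratory transitions.

    Transitions are taken as zero crossings of the baseline-removed flow,
    rather than the conventional local maxima/minima of the flow signal.
    Returns a list of (start, end) sample-index pairs, one per half-cycle.
    """
    flow = flow - np.mean(flow)          # remove baseline offset
    signs = np.sign(flow)
    signs[signs == 0] = 1                # treat exact zeros as positive
    crossings = np.where(np.diff(signs) != 0)[0] + 1
    return list(zip(crossings[:-1], crossings[1:]))
```

Real nasal-flow signals would need band-limiting or smoothing first, since noise around the baseline produces spurious crossings.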

10.
This article deals with a new approach to sleep characterization that combines EEG source localisation methods with standard frequency analysis of multielectrode EEGs. First, we describe the theoretical methodology and the benefits obtained from a three-dimensional image (LORETA) of the cerebral activity related to a frequency band. This new application is then used as a signal-processing technique on sleep EEG recordings obtained from young male adults, using four frequency bands (delta 0.5–3.5 Hz, theta 4.0–7.5 Hz, alpha 8.0–12.5 Hz and beta 13.0–32.0 Hz) in different sleep stages. Finally, we show that the results are highly consistent with other physiological assessments (standard EEG mapping, functional magnetic resonance imaging, etc.), but provide additional, more realistic information on the generators of electromagnetic cerebral activity.

11.
Electroencephalographic (EEG) analysis has emerged as a powerful tool for brain state interpretation and diagnosis, but not for the diagnosis of mental disorders, which may be explained by its low spatial resolution and depth sensitivity. This paper concerns the diagnosis of schizophrenia using EEG, which currently suffers from several cardinal problems: it depends heavily on assumptions, conditions and prior knowledge regarding the patient; the diagnostic experiments take hours; and the accuracy of the analysis is low or unreliable. This article presents “TFFO” (Time-Frequency transformation Followed by Feature Optimization), a novel approach to schizophrenia detection showing great success in classification accuracy with no false positives. The methodology is designed for single-electrode recording, and it aims to make the data acquisition process feasible and quick for most patients.

12.
In this paper, a method is described for evaluating EEGs by means of a piece-wise analysis. The procedure involves the recursive computation of a 5th-order autoregressive model by means of a Kalman filter. As an illustration, the result of applying this method to sleep recordings is described. An objective comparison of the method with a more conventional approach (based on analyzing 30-s intervals) and with adaptive segmentation (Praetorius et al., 1977) was also carried out on the same data. The results indicate that the method is useful for extracting elementary patterns from an EEG and that the piece-wise analysis approach is to be favored over more conventional techniques.
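A recursive Kalman-filter AR estimator of this kind can be sketched by modelling the coefficient vector as a random walk and treating each new sample as a scalar observation. A minimal sketch, not the paper's implementation; the noise parameters q and r are illustrative assumptions:

```python
import numpy as np

def kalman_ar(x, order=5, q=1e-5, r=1.0):
    """Recursively estimate AR coefficients with a random-walk Kalman filter.

    State: the AR coefficient vector a(n); observation model:
    x[n] = phi(n)^T a(n) + e[n], with phi(n) the `order` most recent samples.
    Returns the coefficient trajectory, shape (len(x) - order, order).
    """
    a = np.zeros(order)
    P = np.eye(order)                    # state covariance
    Q, R = q * np.eye(order), r
    history = []
    for n in range(order, len(x)):
        phi = x[n - order:n][::-1]       # regressor of past samples
        P = P + Q                        # predict (random-walk state)
        k = P @ phi / (phi @ P @ phi + R)        # Kalman gain
        a = a + k * (x[n] - phi @ a)     # update with the prediction error
        P = P - np.outer(k, phi) @ P
        history.append(a.copy())
    return np.array(history)
```

Jumps in the coefficient trajectory (or in the innovation x[n] - phi @ a) are then natural candidates for segment boundaries in a piece-wise analysis.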

13.
The unpredictability of epileptic seizures makes this condition difficult to detect and treat effectively. An automatic system that characterizes epileptic activity in EEG signals would allow patients or those near them to take appropriate precautions, would allow clinicians to manage the condition better, and could provide more insight into these phenomena, thereby revealing important clinical information. Various methods have been proposed to detect epileptic activity in EEG recordings. Because of the nonlinear and dynamic nature of EEG signals, the use of nonlinear Higher Order Spectra (HOS) features is a promising approach. This paper presents the methodology employed to extract HOS features (specifically, cumulants) from normal, interictal, and epileptic EEG segments and to use the significant features in classifiers for the detection of these three classes. In this work, 300 sets of EEG data belonging to the three classes were used for feature extraction and for classifier development and evaluation. The results show that the HOS-based measures have unique ranges for the different classes, with a high confidence level (p-value < 0.0001). On evaluating several classifiers with the significant features, the Support Vector Machine (SVM) was observed to present a high detection accuracy of 98.5%, thereby establishing the possibility of effective EEG segment classification using the proposed technique.
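A direct estimator of third-order cumulants, the simplest HOS feature family of the kind mentioned above, can be sketched as follows. The lag range and normalisation are assumptions for illustration, not the paper's exact feature set:

```python
import numpy as np

def third_order_cumulant(x, max_lag=2):
    """Estimate third-order cumulants C3(l1, l2) of a zero-mean 1-D signal.

    Returns a (2*max_lag+1, 2*max_lag+1) array over lags -max_lag..max_lag.
    For a Gaussian process these cumulants are asymptotically zero, which is
    what makes them useful for detecting nonlinear/non-Gaussian EEG dynamics.
    """
    x = x - np.mean(x)
    N = len(x)
    lags = range(-max_lag, max_lag + 1)
    C = np.zeros((len(lags), len(lags)))
    for i, l1 in enumerate(lags):
        for j, l2 in enumerate(lags):
            # Valid index range so that n, n+l1 and n+l2 all stay in bounds
            lo = max(0, -l1, -l2)
            hi = min(N, N - l1, N - l2)
            n = np.arange(lo, hi)
            C[i, j] = np.mean(x[n] * x[n + l1] * x[n + l2])
    return C
```

The flattened cumulant array (or summary statistics of it) would then serve as the feature vector handed to a classifier such as an SVM.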

14.
Texture discontinuities are a fundamental cue by which the visual system segments objects from their background. The neural mechanisms supporting texture-based segmentation are therefore critical to visual perception and cognition. In the present experiment, we employed an EEG source-imaging approach to study the time course of texture-based segmentation in the human brain. Visual evoked potentials were recorded to four types of stimuli in which periodic temporal modulation of a central 3° figure region could either support figure-ground segmentation, or carry identical local texture modulations without producing changes in global image segmentation. The image discontinuities were defined either by orientation or by phase differences across image regions. Evoked responses to these four stimuli were analyzed both at the scalp and on the cortical surface, in retinotopic and functional regions of interest (ROIs) defined separately for each subject using fMRI. Texture segmentation (tsVEP: segmenting versus non-segmenting) and cue-specific (csVEP: orientation versus phase) responses exhibited distinctive patterns of activity. Alternations between uniform and segmented images produced highly asymmetric responses that were larger after transitions from the uniform to the segmented state. Texture modulations that signaled the appearance of a figure evoked a pattern of increased activity starting at ~143 ms that was larger in the V1 and LOC ROIs, relative to identical modulations that did not signal figure-ground segmentation. This segmentation-related activity occurred after an initial response phase that did not depend on the global segmentation structure of the image. The two cue types evoked similar tsVEPs up to ~230 ms, after which they differed in the V4 and LOC ROIs. The evolution of the response proceeded largely in the feed-forward direction, with only weak evidence for feedback-related activity.

15.
The ability to automatically segment an image into distinct regions is critical in many visual processing applications. Because automatic segmentation is often inaccurate, manual segmentation is necessary in some application domains to correct mistakes, as required, for example, in the reconstruction of neuronal processes from microscopic images. The goal of an automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by similarity to the ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time-consuming than automated segmentation, often making large images intractable to handle. We therefore propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a given level of similarity with the ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy that guides manual segmentation toward the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth, and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.

16.
Sleep spindles occur thousands of times during normal sleep and can easily be detected by visual inspection of EEG signals. These characteristics make spindles one of the most studied EEG structures in mammalian sleep. In this work we consider global spindles: spindles that are observed simultaneously in all EEG channels. We propose a methodology that investigates both the signal envelope and the phase/frequency of each global spindle. By analysing the global spindle phase, we show that 90% of spindles synchronize with an average latency of 0.1 s. We also measured the frequency modulation (chirp) of global spindles and found that global spindle chirp and synchronization are not correlated. By investigating the signal envelopes and implementing a homogeneous, isotropic propagation model, we could estimate both the signal origin and its velocity in global spindles. Our results indicate that this simple, non-invasive approach can determine the spindle origin with reasonable precision, and allowed us to estimate a signal speed of 0.12 m/s. Finally, we consider whether synchronization might be useful as a non-invasive diagnostic tool.
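Envelope and instantaneous phase of the kind analysed here are commonly obtained from the analytic signal. The following is a minimal FFT-based Hilbert-transform sketch, not the authors' pipeline:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction:
    zero the negative frequencies, double the positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    h[1:(N + 1) // 2] = 2
    if N % 2 == 0:
        h[N // 2] = 1                    # Nyquist bin for even lengths
    return np.fft.ifft(X * h)

def envelope_and_phase(x):
    """Instantaneous envelope and unwrapped phase of a narrow-band signal."""
    z = analytic_signal(x)
    return np.abs(z), np.unwrap(np.angle(z))
```

Given band-passed spindle traces per channel, inter-channel phase lags from the unwrapped phase give synchronization latencies, and the slope of the instantaneous frequency gives the chirp.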

17.
Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for the automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross-correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases, with pre-segmented, manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We show that our multi-atlas framework results in significantly higher segmentation accuracy than both single-atlas-based segmentation and the original STAPLE framework.
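As a much-simplified stand-in for STAPLE/STEPS label fusion, per-voxel majority voting over propagated atlas labels illustrates the fusion step; the actual framework additionally weights atlases by estimated performance and local image similarity:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse propagated atlas segmentations by per-voxel majority vote.

    label_maps: array of shape (n_atlases, ...) of integer labels,
    all registered to the same target space. Each voxel takes the label
    on which the most atlases agree.
    """
    label_maps = np.asarray(label_maps)
    labels = np.unique(label_maps)
    # One vote count per candidate label, per voxel
    votes = np.stack([(label_maps == L).sum(axis=0) for L in labels])
    return labels[np.argmax(votes, axis=0)]
```

STAPLE replaces the equal votes with per-atlas sensitivity/specificity estimates obtained by expectation-maximisation, which is why it outperforms plain voting when atlas quality varies.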

18.
In this article, we discuss the application of a fictitious domain method to the numerical simulation of the mechanical process induced by press-fitting cementless femoral implants in total hip replacement surgery. The primary goal here is to demonstrate the feasibility of the method and its advantages over competing numerical methods for a wide range of applications in which the primary input originates from computed tomography, magnetic resonance imaging or other regular-grid medical imaging data. For this class of problems, the fictitious domain method is a natural choice, because it avoids the segmentation, surface reconstruction and meshing phases required by unstructured, geometry-conforming simulation methods. We consider the implantation of a press-fit femoral prosthesis as a prototype problem for sketching the application path of the methodology. Of concern is the assessment of the robustness and speed of the methodology, for both factors are critical if one is to consider patient-specific modelling. To this end, we report numerical results that exhibit optimal convergence rates and thus shed a favourable light on the approach.

19.
IRBM, 2009, 30(3): 104–113
We propose a new technique for general-purpose, semi-interactive, multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach combines a multi-agent scheme with a supervised classification methodology, allowing the introduction of a priori information while offering fast computation times. The multi-agent system is organised around a communicating agent that manages a population of situated agents, which segment the image through cooperative and competitive interactions. The technique has been tested on several patient data sets; some typical results are presented and discussed.

20.
Giant unilamellar lipid vesicles, artificial stand-ins for cell membranes, are a promising tool for the in vitro assessment of interactions between the products of nanotechnology and biological membranes. However, the effect of nanoparticles cannot be derived from observations on a single specimen; vesicle populations should be observed instead. We propose an adaptation of the Markov random field image segmentation model that allows the detection and segmentation of numerous vesicles in micrographs. The reliability of the model under different lighting, blur, and noise characteristics of the micrographs is examined and discussed. Moreover, the automatic segmentation is tested on micrographs with thousands of vesicles, and the result is compared with that of manual segmentation. The segmentation step presented here is part of a methodology we are developing for bio-nano interaction assessment studies on lipid vesicles.
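MRF-based segmentation of this kind is often optimised with Iterated Conditional Modes (ICM). The following binary sketch uses fixed class means and a 4-neighbour smoothness term; both are illustrative assumptions, not the authors' vesicle model:

```python
import numpy as np

def icm_segment(img, beta=0.01, n_iter=5):
    """Binary MRF segmentation of a grayscale image via ICM.

    Energy per pixel: squared distance to an assumed class mean
    (0.25 for background, 0.75 for foreground) plus beta times the
    number of disagreeing 4-neighbours. ICM greedily minimises this
    energy one pixel at a time.
    """
    labels = (img > img.mean()).astype(int)   # threshold initialisation
    means = np.array([0.25, 0.75])
    H, W = img.shape
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                costs = []
                for L in (0, 1):
                    data = (img[y, x] - means[L]) ** 2
                    smooth = sum(
                        labels[ny, nx] != L
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= ny < H and 0 <= nx < W
                    )
                    costs.append(data + beta * smooth)
                labels[y, x] = int(np.argmin(costs))
    return labels
```

A vesicle-detection pipeline would run such a step per candidate region and then extract connected components as individual vesicles.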
