20 similar references found
1.
Xiaoying Tang Kenichi Oishi Andreia V. Faria Argye E. Hillis Marilyn S. Albert Susumu Mori Michael I. Miller 《PloS one》2013,8(6)
This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high-dimensional segmentations of MRI within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields whose mean fields are functions of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures are demonstrated in populations of demented subjects, including the use of multiple atlases across multiple disease groups.
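A minimal sketch of the likelihood-fusion idea, assuming the atlases have already been deformed onto the target and reducing the model to per-voxel Gaussian intensity likelihoods (the diffeomorphic estimation and full EM iterations are omitted); the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def fuse_labels(image, atlas_means, atlas_vars, atlas_labels, n_labels):
    """Intensity-only likelihood fusion over pre-deformed atlases (toy sketch).

    image        : (V,) observed voxel intensities of the target MRI
    atlas_means  : (A, V) mean intensity predicted by each deformed atlas chart
    atlas_vars   : (A,)  noise variance associated with each atlas
    atlas_labels : (A, V) segmentation label proposed by each deformed atlas
    """
    A, V = atlas_means.shape
    # Gaussian log-likelihood of the target under each atlas
    log_lik = -0.5 * ((image[None, :] - atlas_means) ** 2 / atlas_vars[:, None]
                      + np.log(2 * np.pi * atlas_vars)[:, None])
    # Convex weights: posterior responsibility of each atlas (uniform prior assumed)
    w = np.exp(log_lik - log_lik.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    # Fuse the chart-specific likelihoods and take the MAP label at every voxel
    fused = np.zeros((n_labels, V))
    for a in range(A):
        np.add.at(fused, (atlas_labels[a], np.arange(V)), w[a])
    return fused.argmax(axis=0)
```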
2.
Sophie Lancelot Roxane Roche Afifa Slimen Caroline Bouillot Elise Levigoureux Jean-Baptiste Langlois Luc Zimmer Nicolas Costes 《PloS one》2014,9(10)
Introduction
Preclinical in vivo imaging requires precise and reproducible delineation of brain structures. Manual segmentation is time-consuming and operator-dependent. Automated segmentation, as usually performed via single-atlas registration, fails to account for anatomo-physiological variability. We present, evaluate, and make available a multi-atlas approach for automatically segmenting rat brain MRI and extracting PET activities.
Methods
High-resolution 7T 2D T2-weighted MR images of 12 Sprague-Dawley rat brains were manually segmented into 27-VOI label volumes using detailed protocols. Automated methods were developed with 7 of the 12 atlas datasets, i.e. the MRIs and their associated label volumes. MRIs were registered to a common space, where an MRI template and a maximum probability atlas were created. Three automated methods were tested: (1) registering individual MRIs to the template and using a single atlas (SA), (2) using the maximum probability atlas (MP), and (3) registering the MRIs from the multi-atlas dataset to an individual MRI, propagating the label volumes and fusing them in individual MRI space (propagation & fusion, PF). Evaluation was performed on the five remaining rats, which additionally underwent [18F]FDG PET. Automated and manual segmentations were compared for morphometric performance (assessed by comparing volume bias and Dice overlap index) and functional performance (evaluated by comparing extracted PET measures).
Results
Only the SA method showed volume bias. Dice indices were significantly different between methods (PF>MP>SA). PET regional measures were more accurate with the multi-atlas methods than with the SA method.
Conclusions
Multi-atlas methods outperform SA for automated anatomical brain segmentation and PET measure extraction. They perform comparably to manual segmentation for FDG-PET quantification. Multi-atlas methods are suitable for rapid, reproducible VOI analyses.
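A toy sketch of the propagation-and-fusion step and the subsequent PET extraction, assuming the atlas label volumes have already been registered and resampled into the individual MRI space; majority voting stands in for whichever fusion rule was actually used, and all names are illustrative:

```python
import numpy as np

def majority_vote_fusion(propagated_labels, n_labels):
    """Fuse label volumes propagated from several atlases into the target space.

    propagated_labels : (A, X, Y, Z) integer label volumes, one per atlas,
    already resampled into the individual MRI space (registration not shown).
    """
    propagated_labels = np.asarray(propagated_labels)
    votes = np.zeros((n_labels,) + propagated_labels.shape[1:], dtype=np.int32)
    for lab in range(n_labels):
        votes[lab] = (propagated_labels == lab).sum(axis=0)
    return votes.argmax(axis=0)

def extract_pet_mean(pet_volume, label_volume, voi_label):
    """Mean PET activity inside one fused VOI, e.g. for [18F]FDG quantification."""
    return float(pet_volume[label_volume == voi_label].mean())
```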
3.
Da Ma Manuel J. Cardoso Marc Modat Nick Powell Jack Wells Holly Holmes Frances Wiseman Victor Tybulewicz Elizabeth Fisher Mark F. Lythgoe Sébastien Ourselin 《PloS one》2014,9(1)
Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework.
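The STEPS idea can be pictured with the sketch below: atlases are ranked per voxel by a locally normalised cross-correlation with the target, and only the best-matching ones vote. This replaces the full STEPS/STAPLE probabilistic estimation with a simple ranked vote for illustration; the window size, `top_k`, and all names are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_ncc(target, atlas_mri, size=5):
    """Locally normalised cross-correlation between the target and one registered atlas."""
    target, atlas_mri = np.asarray(target, float), np.asarray(atlas_mri, float)
    mt, ma = uniform_filter(target, size), uniform_filter(atlas_mri, size)
    cov = uniform_filter(target * atlas_mri, size) - mt * ma
    vt = uniform_filter(target ** 2, size) - mt ** 2
    va = uniform_filter(atlas_mri ** 2, size) - ma ** 2
    return cov / np.sqrt(np.clip(vt * va, 1e-12, None))

def ranked_local_fusion(target, atlas_mris, atlas_labels, n_labels, top_k=5):
    """Let only the locally best-matching atlases vote at each voxel."""
    sims = np.stack([local_ncc(target, a) for a in atlas_mris])   # (A, ...) similarity maps
    keep = np.argsort(-sims, axis=0)[:top_k]                      # atlas ranking per voxel
    atlas_labels = np.asarray(atlas_labels)
    votes = np.zeros((n_labels,) + np.asarray(target).shape)
    for r in range(top_k):
        # label proposed by the r-th ranked atlas at every voxel
        lab = np.take_along_axis(atlas_labels, keep[r][None], axis=0)[0]
        for l in range(n_labels):
            votes[l] += (lab == l)
    return votes.argmax(axis=0)
```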
4.
《IRBM》2022,43(3):161-168
Background
Accurate delineation of organs at risk (OARs) is critical in radiotherapy. Manual delineation is tedious and suffers from both interobserver and intraobserver variability. Automatic segmentation of brain MR images has a wide range of applications in brain tumor radiotherapy. In this paper, we propose a multi-atlas based adaptive active contour model for automatic OAR segmentation in brain MR images.
Methods
The proposed method consists of two parts: multi-atlas based OAR contour initialization and an adaptive edge and local region based active contour evolution. In the adaptive active contour model, we define an energy functional with an adaptive edge intensity fitting force, which is responsible for driving the contour inwards or outwards, and a local region intensity fitting force, which guides the evolution of the contour.
Results
Experimental results show that the proposed method achieved more accurate segmentation results for the brainstem, eyes and lenses, with Dice Similarity Coefficient (DSC) values of 87.19%, 91.96% and 77.11%, respectively. Besides, the dosimetric parameters also demonstrate the high consistency between the manual OAR delineations and the automatic segmentation results of the proposed method in brain tumor radiotherapy.
Conclusions
The geometric and dosimetric evaluations show the desirable performance of the proposed method for OAR segmentation in brain tumor radiotherapy.
5.
Brain-computer interaction (BCI) and physiological computing are terms that refer to using processed neural or physiological signals to influence human interaction with computers, environment, and each other. A major challenge in developing these systems arises from the large individual differences typically seen in the neural/physiological responses. As a result, many researchers use individually-trained recognition algorithms to process this data. In order to minimize time, cost, and barriers to use, there is a need to minimize the amount of individual training data required, or equivalently, to increase the recognition accuracy without increasing the number of user-specific training samples. One promising method for achieving this is collaborative filtering, which combines training data from the individual subject with additional training data from other, similar subjects. This paper describes a successful application of a collaborative filtering approach intended for a BCI system. This approach is based on transfer learning (TL), active class selection (ACS), and a mean squared difference user-similarity heuristic. The resulting BCI system uses neural and physiological signals for automatic task difficulty recognition. TL improves the learning performance by combining a small number of user-specific training samples with a large number of auxiliary training samples from other similar subjects. ACS optimally selects the classes to generate user-specific training samples. Experimental results on 18 subjects, using both nearest neighbors and support vector machine classifiers, demonstrate that the proposed approach can significantly reduce the number of user-specific training data samples. This collaborative filtering approach will also be generalizable to handling individual differences in many other applications that involve human neural or physiological data, such as affective computing.
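A compact sketch of the transfer-learning step under a mean-squared-difference similarity heuristic: the few user-specific samples are pooled with data from the most similar auxiliary subjects before training a classifier. The exact similarity measure, the SVM choice and the parameter names are illustrative assumptions, and the active class selection step is not shown:

```python
import numpy as np
from sklearn.svm import SVC

def msd_similarity(user_X, subj_X):
    """Mean squared difference between per-feature means: smaller = more similar
    (a simple stand-in for the user-similarity heuristic)."""
    return float(np.mean((user_X.mean(axis=0) - subj_X.mean(axis=0)) ** 2))

def train_with_transfer(user_X, user_y, aux_subjects, n_source=5):
    """Pool the few user-specific samples with data from the most similar subjects.

    aux_subjects : list of (X_i, y_i) pairs, one per auxiliary subject.
    Active class selection (which classes to ask the user for) is not shown.
    """
    dists = [msd_similarity(user_X, X) for X, _ in aux_subjects]
    chosen = np.argsort(dists)[:n_source]
    X = np.vstack([user_X] + [aux_subjects[i][0] for i in chosen])
    y = np.concatenate([user_y] + [aux_subjects[i][1] for i in chosen])
    return SVC(kernel="rbf").fit(X, y)   # or a nearest-neighbour classifier, as in the study
```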
6.
Sergi Herrando Verena Keller Hans-Günther Bauer Lluís Brotons Mark Eaton Mikhail Kalyakin 《Bird Study》2013,60(2):149-158
Capsule: The first European Bird Census Council (EBCC) Atlas of European Breeding Birds has been widely used in scientific publications.
Aims: To quantify how scientific publications have used data from the first European Bird Census Council (EBCC) Atlas of European Breeding Birds, what the topics of these studies have been, and to identify key aspects in which a second European Breeding Bird Atlas will provide new opportunities for basic and applied science.
Methods: We searched Google Scholar to find papers published in scientific journals that cited the first atlas. We analysed the contents of a random selection of 100 papers citing this atlas and described the way these papers used information from it.
Results: The first atlas has been cited in 3150 scientific publications, and can be regarded as a fundamental reference for studies about birds in Europe. It was extensively used as a key reference for the studied bird species. A substantial number of papers re-analysed atlas data to derive new information on species distribution, ecological traits and population sizes. Distribution and ecology were the most frequent topics of studies referring to the atlas, but this source of information was used in a diverse range of studies. In this context, climate change, impact of agriculture and habitat loss were, in that order, the most frequently studied environmental pressures. Constraints in the atlas, such as the poor coverage in the east of Europe, the lack of information on distribution change and the coarse resolution, were identified as issues limiting the use of the atlas for some purposes.
Conclusions: This study demonstrates the scientific value of European-wide breeding bird atlases. A second atlas, with its almost complete coverage across Europe, the incorporation of changes in distribution between the two atlases and the inclusion of modelled maps at a resolution of 10 × 10 km, will certainly become a key data source and reference for researchers in the near future.
7.
《IRBM》2022,43(6):678-686
Objectives
Feature selection in data sets is an important task that helps to alleviate various machine learning and data mining issues. The main objectives of a feature selection method are to build simpler and more understandable classifier models in order to improve data mining and processing performance. Therefore, a comparative evaluation of the Chi-square method, the recursive feature elimination method, and the tree-based method (using Random Forest), applied with three common machine learning methods (K-Nearest Neighbor, naïve Bayesian classifier and decision tree classifier), is performed to select the most relevant features from a large set of attributes. Furthermore, the most suitable couple (i.e., feature selection method and machine learning method) that provides the best performance is determined.
Materials and methods
In this paper, an overview of the most common feature selection techniques is first provided: the Chi-square method, the Recursive Feature Elimination method (RFE) and the tree-based method (using Random Forest). A comparative evaluation of the improvement brought by such feature selection methods to the three common machine learning methods (K-Nearest Neighbor, naïve Bayesian classifier and decision tree classifier) is performed. For evaluation purposes, the following measures are used on the stroke disease data set: micro-F1, accuracy and root mean square error.
Results
The obtained results show that the proposed approach (i.e., the Tree-Based Method using Random Forest, TBM-RF, with the decision tree classifier, DTC) provides accuracy higher than 85% and an F1-score higher than 88%, thus performing better than KNN and NB with the Chi-square, RFE and TBM-RF methods.
Conclusion
This study shows that the couple formed by the Tree-Based Method using Random Forest (TBM-RF) and the decision tree classifier successfully and efficiently contributes to finding the most relevant features and to predicting and classifying patients suffering from stroke disease.
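A scikit-learn sketch of the kind of selector-classifier grid described above; the base estimator inside RFE, the number of retained features and the cross-validation setup are assumptions, not the paper's exact protocol:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, RFE, SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def compare_couples(X, y, k=10):
    """Cross-validated accuracy for every (feature selector, classifier) couple.
    X is assumed non-negative (e.g. min-max scaled) so that chi2 applies."""
    selectors = {
        "chi2": SelectKBest(chi2, k=k),
        "rfe": RFE(LogisticRegression(max_iter=1000), n_features_to_select=k),
        "tbm_rf": SelectFromModel(RandomForestClassifier(n_estimators=200),
                                  threshold=-np.inf, max_features=k),
    }
    classifiers = {"knn": KNeighborsClassifier(),
                   "nb": GaussianNB(),
                   "dtc": DecisionTreeClassifier()}
    scores = {}
    for s_name, selector in selectors.items():
        # selection done once on all data for brevity; nested CV would be cleaner
        Xs = selector.fit_transform(X, y)
        for c_name, clf in classifiers.items():
            scores[(s_name, c_name)] = cross_val_score(clf, Xs, y, cv=5,
                                                       scoring="accuracy").mean()
    return scores
```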
8.
The problem of multiple surface clustering is a challenging task, particularly when the surfaces intersect. Available methods such as Isomap fail to capture the true shape of the surface near the intersection and result in incorrect clustering. The Isomap algorithm uses shortest paths between points. The main drawback of the shortest-path algorithm is the lack of a curvature constraint, which allows a path to pass between points on different surfaces. In this paper we tackle this problem by imposing a curvature constraint on the shortest-path algorithm used in Isomap. The algorithm chooses several landmark nodes at random and then checks whether there is a curvature-constrained path between each landmark node and every other node in the neighborhood graph. We build a binary feature vector for each point, where each entry represents the connectivity of that point to a particular landmark. The binary feature vectors can then be used as input to a conventional clustering algorithm such as hierarchical clustering. We apply our method to simulated and some real datasets and show that it performs comparably to the best methods, such as K-manifold and spectral multi-manifold clustering.
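A self-contained sketch of the idea: build a k-nearest-neighbour graph, grow paths from random landmarks only while the turning angle at each step stays small, and use the resulting landmark-connectivity bits as features for hierarchical clustering. The angle threshold, the graph construction and all names are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.cluster.hierarchy import linkage, fcluster

def knn_graph(X, k=8):
    """Indices of the k nearest neighbours of every point (self excluded)."""
    _, idx = cKDTree(X).query(X, k=k + 1)
    return idx[:, 1:]

def curvature_reachable(X, nbrs, start, max_angle_deg=30.0):
    """Nodes reachable from `start` along graph paths whose turning angle at
    every step stays below `max_angle_deg` (the curvature constraint)."""
    cos_min = np.cos(np.deg2rad(max_angle_deg))
    reached = {start}
    frontier = [(start, n) for n in nbrs[start]]
    seen_edges = set(frontier)
    while frontier:
        prev, cur = frontier.pop()
        reached.add(cur)
        d_in = X[cur] - X[prev]
        d_in /= np.linalg.norm(d_in) + 1e-12
        for nxt in nbrs[cur]:
            d_out = X[nxt] - X[cur]
            d_out /= np.linalg.norm(d_out) + 1e-12
            if d_in @ d_out >= cos_min and (cur, nxt) not in seen_edges:
                seen_edges.add((cur, nxt))
                frontier.append((cur, nxt))
    return reached

def landmark_features(X, n_landmarks=20, k=8, seed=0):
    """Binary feature vector per point: connectivity to each random landmark
    under the curvature constraint; cluster these vectors afterwards."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    nbrs = knn_graph(X, k)
    landmarks = rng.choice(len(X), n_landmarks, replace=False)
    F = np.zeros((len(X), n_landmarks))
    for j, lm in enumerate(landmarks):
        F[list(curvature_reachable(X, nbrs, lm)), j] = 1.0
    return F

# e.g. labels = fcluster(linkage(landmark_features(X), "ward"), t=2, criterion="maxclust")
```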
9.
Elizabeth M. Sweeney Joshua T. Vogelstein Jennifer L. Cuzzocreo Peter A. Calabresi Daniel S. Reich Ciprian M. Crainiceanu Russell T. Shinohara 《PloS one》2014,9(4)
Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
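A minimal sketch of the kind of pipeline being compared: per-voxel features built from the T1-w, T2-w and FLAIR intensities plus local neighbourhood summaries, fed to a simple classifier such as logistic regression. The specific neighbourhood sizes and the use of mean filters are illustrative assumptions, not the evaluated feature sets:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

def voxel_features(t1, t2, flair, sizes=(3, 5, 9)):
    """Stack the raw T1-w, T2-w and FLAIR intensities with local means over
    growing neighbourhoods; neighbourhood information is what mattered most."""
    chans = [np.asarray(c, dtype=float) for c in (t1, t2, flair)]
    feats = list(chans)
    for s in sizes:
        feats += [uniform_filter(c, s) for c in chans]
    return np.stack([f.ravel() for f in feats], axis=1)    # (n_voxels, n_features)

def train_lesion_classifier(studies, manual_masks):
    """studies: list of (t1, t2, flair) tuples; manual_masks: matching binary masks."""
    X = np.vstack([voxel_features(*s) for s in studies])
    y = np.concatenate([np.asarray(m).ravel().astype(int) for m in manual_masks])
    return LogisticRegression(max_iter=1000).fit(X, y)
```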
10.
Visualization of gene expression profile data based on manifold learning
Visualization of gene expression profiles is essentially a dimensionality-reduction problem for high-dimensional data. Manifold learning algorithms are applied to the dimensionality reduction and visualization of gene expression profiles, and the applicability of two representative manifold learning algorithms (Isomap and LLE) to expression-profile dimensionality reduction is discussed. The quality of the reduction is quantified by within-class/between-class distances. Dimensionality-reduction analysis of two benchmark microarray data sets (a colon cancer gene expression data set and an acute leukemia gene expression data set) shows that the intrinsic dimensionality of both data sets is below 3, so manifold learning methods can visualize them in a low-dimensional projection space. Compared with the projections of traditional dimensionality-reduction methods such as PCA and MDS, the Isomap manifold learning method gives better visualization results.
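A short scikit-learn sketch of the comparison described above: embed the expression profiles with Isomap, LLE, PCA and MDS, then score each embedding by the ratio of mean within-class to mean between-class distance (one plausible reading of the evaluation measure; neighbourhood sizes and names are assumptions):

```python
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding, MDS
from sklearn.decomposition import PCA
from sklearn.metrics import pairwise_distances

def class_separation(Y, labels):
    """Ratio of mean within-class to mean between-class distance (lower is better)."""
    labels = np.asarray(labels)
    D = pairwise_distances(Y)
    same = labels[:, None] == labels[None, :]
    diff = ~same                                    # pairs from different classes
    np.fill_diagonal(same, False)                   # drop self-pairs
    return D[same].mean() / D[diff].mean()

def compare_embeddings(X, labels, n_components=2, n_neighbors=10):
    """Embed expression profiles with manifold and linear methods and score each."""
    methods = {
        "isomap": Isomap(n_neighbors=n_neighbors, n_components=n_components),
        "lle": LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components),
        "pca": PCA(n_components=n_components),
        "mds": MDS(n_components=n_components),
    }
    return {name: class_separation(m.fit_transform(X), labels)
            for name, m in methods.items()}
```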
11.
Lijie Huang Guangfu Zhou Zhaoguo Liu Xiaobin Dang Zetian Yang Xiang-Zhen Kong Xu Wang Yiying Song Zonglei Zhen Jia Liu 《PloS one》2016,11(1)
The functional region of interest (fROI) approach has increasingly become a favored methodology in functional magnetic resonance imaging (fMRI) because it can circumvent inter-subject anatomical and functional variability, and thus increase the sensitivity and functional resolution of fMRI analyses. The standard fROI method requires human experts to meticulously examine and identify subject-specific fROIs within activation clusters. This process is time-consuming and heavily dependent on experts’ knowledge. Several algorithmic approaches have been proposed for identifying subject-specific fROIs; however, these approaches cannot easily incorporate prior knowledge of inter-subject variability. In the present study, we improved the multi-atlas labeling approach for defining subject-specific fROIs. In particular, we used a classifier-based atlas-encoding scheme and an atlas selection procedure to account for the large spatial variability across subjects. Using a functional atlas database for face recognition, we showed that with these two features, our approach efficiently circumvented inter-subject anatomical and functional variability and thus improved labeling accuracy. Moreover, in comparison with a single-atlas approach, our multi-atlas labeling approach showed better performance in identifying subject-specific fROIs.
12.
Purpose
Semi-automated diffusion tensor imaging (DTI) analysis of white matter (WM) microstructure offers a clinically feasible technique to assess neonatal brain development and provide early prognosis, but is limited by variable methods and insufficient evidence regarding optimal parameters. The purpose of this research was to investigate the influence of threshold values on semi-automated, atlas-based brain segmentation in very-low-birth-weight (VLBW) preterm infants at near-term age.
Materials and Methods
DTI scans were analyzed from 45 VLBW preterm neonates at near-term age with no brain abnormalities evident on MRI. Brain regions were selected with a neonatal brain atlas and threshold values: trace <0.006 mm2/s, fractional anisotropy (FA)>0.15, FA>0.20, and FA>0.25. Relative regional volumes, FA, axial diffusivity (AD), and radial diffusivity (RD) were compared for twelve WM regions.
Results
Near-term brain regions demonstrated differential effects from segmentation with the three FA thresholds. Regional DTI values and volumes selected in the PLIC, CereP, and RLC varied the least with the application of different FA thresholds. Overall, application of higher FA thresholds significantly reduced brain region volume selected, increased variability, and resulted in higher FA and lower RD values. The lower threshold FA>0.15 selected 78±21% of original volumes segmented by the atlas, compared to 38±12% using threshold FA>0.25.
Conclusion
Results indicate substantial and differential effects of atlas-based DTI threshold parameters on regional volume and diffusion scalars. A lower, more inclusive FA threshold than typically applied for adults is suggested for consistent analysis of WM regions in neonates.
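A small sketch of the measurement step the thresholds feed into: mask an atlas-propagated region by an FA threshold and report the retained volume fraction and mean diffusion scalars. The function and argument names are illustrative:

```python
import numpy as np

def region_measures(fa, ad, rd, atlas_labels, region_label, fa_threshold=0.15):
    """Mean DTI scalars inside one atlas region after FA thresholding.

    fa, ad, rd   : scalar maps in subject space
    atlas_labels : integer label volume propagated from the neonatal atlas
    """
    region = atlas_labels == region_label
    mask = region & (fa > fa_threshold)
    kept_fraction = mask.sum() / max(region.sum(), 1)   # e.g. ~0.78 at FA>0.15 in the study
    return {
        "volume_fraction_kept": float(kept_fraction),
        "mean_fa": float(fa[mask].mean()),
        "mean_ad": float(ad[mask].mean()),
        "mean_rd": float(rd[mask].mean()),
    }
```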
13.
Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training datasets. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views, namely depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality-reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human point cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation.
14.
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties in the MR images. Dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method, based on the maximum relevance-maximum significance criterion, is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical morphology based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.
15.
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as an enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression with simultaneous weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
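A numpy sketch of computing such a convolution-based external force with the FFT: the edge map is convolved with a vector kernel that points toward the kernel origin, using a smoothed distance in place of the paper's specific modified distance. The kernel shape, the exponent and all names are assumptions:

```python
import numpy as np

def convef_force(edge_map, gamma=2.0, h=1.0):
    """External force field as an FFT convolution of the edge map with a vector kernel.

    The kernel points toward its origin and decays with a smoothed distance
    r = sqrt(x^2 + y^2 + h^2); the convolution is circular (no padding) for brevity.
    """
    ny, nx = edge_map.shape
    y = (np.arange(ny) - ny // 2).astype(float)[:, None]
    x = (np.arange(nx) - nx // 2).astype(float)[None, :]
    r = np.sqrt(x ** 2 + y ** 2 + h ** 2)
    kx = np.fft.ifftshift(-x / r ** gamma)        # x-component of the vector kernel
    ky = np.fft.ifftshift(-y / r ** gamma)        # y-component
    E = np.fft.fft2(edge_map)
    fx = np.real(np.fft.ifft2(E * np.fft.fft2(kx)))
    fy = np.real(np.fft.ifft2(E * np.fft.fft2(ky)))
    mag = np.sqrt(fx ** 2 + fy ** 2) + 1e-12
    return fx / mag, fy / mag                     # normalised external force field
```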
16.
17.
Topic models and neural networks can discover meaningful low-dimensional latent representations of text corpora; as such, they have become a key technology for document representation. However, such models treat all documents as non-discriminatory, resulting in latent representations that depend on all other documents and an inability to provide discriminative document representations. To address this problem, we propose a semi-supervised manifold-inspired autoencoder to extract meaningful latent representations of documents, taking the local perspective that the latent representations of nearby documents should be correlated. We first determine the discriminative neighbor set with Euclidean distance in the observation space. Then, the autoencoder is trained by jointly minimizing the Bernoulli cross-entropy error between input and output and the sum of the squared errors between neighbors of the input and output. Results on two widely used corpora show that our method yields at least a 15% improvement in document clustering and a nearly 7% improvement in classification tasks compared to competing methods. The evidence demonstrates that our method can readily capture more discriminative latent representations of new documents. Moreover, some meaningful combinations of words can be efficiently discovered by activating features, which promotes the comprehensibility of the latent representation.
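A hedged PyTorch sketch of one way to read the training objective: Bernoulli cross-entropy reconstruction plus a squared penalty that pulls the latent codes of neighbouring documents together. Whether the neighbour term acts on the latent codes or on the reconstructions is not spelled out above, so treat this reading, and all names and sizes, as assumptions:

```python
import torch
import torch.nn as nn

class ManifoldAE(nn.Module):
    """Single-hidden-layer autoencoder over bag-of-words rows scaled to [0, 1]."""
    def __init__(self, n_vocab, n_hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_vocab, n_hidden), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(n_hidden, n_vocab), nn.Sigmoid())

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def loss_fn(model, x, x_neighbor, lam=0.1):
    """Bernoulli cross-entropy reconstruction plus a squared penalty pulling the
    latent codes of neighbouring documents together (lam is an assumed weight)."""
    z, x_rec = model(x)
    z_nb, _ = model(x_neighbor)
    recon = nn.functional.binary_cross_entropy(x_rec, x)
    manifold = ((z - z_nb) ** 2).sum(dim=1).mean()
    return recon + lam * manifold
```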
18.
Wendeson S. Oliveira Joyce Vitor Teixeira Tsang Ing Ren George D. C. Cavalcanti Jan Sijbers 《PloS one》2016,11(2)
Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular related diseases, such as hypertension and diabetes, which are known to affect the appearance of the retinal blood vessels. This work proposes an unsupervised method for the segmentation of retinal vessel images using a combined matched filter, Frangi’s filter and Gabor wavelet filter to enhance the images. The combination of these three filters in order to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images using the weighted mean approach. The first method is based on deformable models and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, Drive and Stare. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
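A sketch of the enhancement-and-combination stage, assuming scikit-image for the Frangi and Gabor filters and a classic Gaussian matched filter built by hand; the weights, the rank-based median fusion and the parameter values are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import convolve, rotate
from skimage.filters import frangi, gabor

def matched_filter(img, sigma=2.0, length=9, n_angles=12):
    """Classic Gaussian matched filter: an elongated negative-Gaussian profile
    rotated over several orientations, keeping the maximum response."""
    img = np.asarray(img, dtype=float)
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    profile = -np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel = np.tile(profile, (length, 1))
    kernel -= kernel.mean()                        # zero-mean kernel
    responses = [convolve(img, rotate(kernel, ang, reshape=False))
                 for ang in np.linspace(0, 180, n_angles, endpoint=False)]
    return np.max(responses, axis=0)

def enhance_and_combine(img):
    """Combine the three enhancements by weighted mean and by median ranking."""
    img = np.asarray(img, dtype=float)
    f1, f2 = matched_filter(img), frangi(img)
    f3, _ = gabor(img, frequency=0.2)              # real part of the Gabor response
    filters = [f1, f2, f3]
    norm = [(f - f.min()) / (np.ptp(f) + 1e-12) for f in filters]
    weighted = np.average(norm, axis=0, weights=[1 / 3, 1 / 3, 1 / 3])

    def rank_image(f):                             # rank-transform one response map
        flat = f.ravel().argsort().argsort()
        return flat.reshape(f.shape) / flat.size

    median_rank = np.median([rank_image(f) for f in filters], axis=0)
    return weighted, median_rank                   # threshold either map to segment
```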
19.
Bert Vandeghinste Stefaan Vandenberghe Chris Vanhove Steven Staelens Roel Van Holen 《PloS one》2013,8(7)
The aim of this study is to investigate whether reliable and accurate 3D geometrical models of the murine aortic arch can be constructed from sparse-view data in vivo micro-CT acquisitions. This would considerably reduce acquisition time and X-ray dose. In vivo contrast-enhanced micro-CT datasets were reconstructed using a conventional filtered back projection algorithm (FDK), the image space reconstruction algorithm (ISRA) and total variation regularized ISRA (ISRA-TV). The reconstructed images were then semi-automatically segmented. Segmentations of high- and low-dose protocols were compared and evaluated based on voxel classification, 3D model diameters and centerline differences. FDK reconstruction does not lead to accurate segmentation in the case of low-view acquisitions. ISRA manages accurate segmentation with 1024 or more projection views. ISRA-TV needs a minimum of 256 views. These results indicate that accurate vascular models can be obtained from micro-CT scans with 8 times less X-ray dose and acquisition time, as long as regularized iterative reconstruction is used.
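A toy sketch of the multiplicative ISRA update at the heart of the iterative reconstructions compared above, with a plain matrix standing in for the micro-CT projector; the TV-regularised variant is not reproduced, and everything here is illustrative:

```python
import numpy as np

def isra(A, y, n_iter=200, eps=1e-12):
    """Plain multiplicative ISRA update x <- x * (A^T y) / (A^T A x).

    A : (n_rays, n_voxels) system matrix for a sparse-view geometry
    y : measured projection data (assumed non-negative)
    """
    x = np.full(A.shape[1], max(y.mean(), eps))   # positive initial image
    At_y = A.T @ y
    for _ in range(n_iter):
        x = x * At_y / np.maximum(A.T @ (A @ x), eps)
    return x
```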
20.
Zhu Shenghuo Wang Dingding Yu Kai Li Tao Gong Yihong 《IEEE/ACM Transactions on Computational Biology and Bioinformatics》2010,7(1):25-36
Gene expression data usually contain a large number of genes but a small number of samples. Feature selection for gene expression data aims at finding a set of genes that best discriminate biological samples of different types. Using machine learning techniques, traditional gene selection based on empirical mutual information suffers from the data sparseness issue due to the small number of samples. To overcome the sparseness issue, we propose a model-based approach to estimate the entropy of class variables on the model, instead of on the data themselves. Here, we use multivariate normal distributions to fit the data, because multivariate normal distributions have maximum entropy among all real-valued distributions with a specified mean and standard deviation and are widely used to approximate various distributions. Given that the data follow a multivariate normal distribution, and since the conditional distribution of the class variables given the selected features is normal, its entropy can be computed from the log-determinant of its covariance matrix. Because of the large number of genes, computing all possible log-determinants is not efficient. We propose several algorithms to greatly reduce the computational cost. Experiments on seven gene data sets and comparisons with five other approaches show the accuracy of the multivariate Gaussian generative model for feature selection, and the efficiency of our algorithms.
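A brute-force sketch of the log-determinant criterion: fit one multivariate Gaussian, score each candidate gene by the conditional entropy of the class variables given the selected set (via the Schur complement), and select greedily. Treating the class labels as continuous indicator columns is an assumption, and the paper's efficient log-determinant updates are not reproduced:

```python
import numpy as np

def conditional_entropy(cov, y_idx, x_idx):
    """Entropy (up to additive constants) of the Gaussian conditional of the class
    variables y given the selected features x: 0.5 * logdet(S_yy - S_yx S_xx^-1 S_xy)."""
    S_yy = cov[np.ix_(y_idx, y_idx)]
    S_yx = cov[np.ix_(y_idx, x_idx)]
    S_xx = cov[np.ix_(x_idx, x_idx)]
    schur = S_yy - S_yx @ np.linalg.solve(S_xx, S_yx.T)
    return 0.5 * np.linalg.slogdet(schur)[1]

def greedy_select(data, y_idx, n_select=20):
    """Forward selection of the genes that most reduce the conditional entropy of
    the class variables, after fitting a single multivariate Gaussian to `data`.

    data  : (n_samples, n_columns) with the class-indicator columns included
    y_idx : column indices of the class indicators
    """
    cov = np.cov(data, rowvar=False) + 1e-6 * np.eye(data.shape[1])
    candidates = [i for i in range(data.shape[1]) if i not in y_idx]
    selected = []
    for _ in range(n_select):
        best = min(candidates,
                   key=lambda g: conditional_entropy(cov, y_idx, selected + [g]))
        selected.append(best)
        candidates.remove(best)
    return selected
```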