Similar Documents
20 similar documents found.
1.
Dividing an image into superpixels facilitates further processing. The simple linear iterative clustering (SLIC) algorithm achieves good segmentation by clustering pixels on their color and spatial-distance characteristics; however, a finite number of superpixels easily leads to under-segmentation. This work therefore corrects the SLIC segmentation with a k-means clustering method whose similarity measure is a weighted Euclidean distance: the under-segmented superpixel blocks are re-clustered by a binary k-means classification. Results show that the corrected SLIC segmentation achieves better visual quality and better evaluation indices.
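A minimal sketch of this kind of pipeline, assuming scikit-image and scikit-learn: SLIC superpixels are computed first, then the superpixel mean colours are re-clustered by k-means under a weighted Euclidean distance (realised by scaling the feature axes). The file name, channel weights, and cluster count are illustrative assumptions, not the authors' values.

```python
import numpy as np
from skimage import io, color, segmentation
from sklearn.cluster import KMeans

image = io.imread("example.jpg")                       # hypothetical input image
labels = segmentation.slic(image, n_segments=400, compactness=10, start_label=0)

# One feature vector per superpixel: its mean Lab colour.
lab = color.rgb2lab(image)
n_sp = labels.max() + 1
features = np.array([lab[labels == i].mean(axis=0) for i in range(n_sp)])

# Weighted Euclidean distance, realised by scaling the feature axes before k-means.
weights = np.array([0.5, 1.0, 1.0])                    # assumed channel weights
merged = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features * weights)

# Map the superpixel-level (binary) decision back onto the pixel grid.
refined = merged[labels]
```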

2.

Background

The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It remains a challenge, however, to build an FEM mesh directly from a volumetric image, partly because the regions (or structures) of interest (ROIs) may be irregular and fuzzy.

Methods

A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. The software uses a semi-automatic method to detect ROIs within the image context, including neighboring tissues and organs, completes the segmentation of the different tissues, and meshes the organ into elements.

Results

The ImageParser is shown to build up an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues.

Conclusion

The ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate the FEM mesh with customer-defined segmentation information.

3.
BACKGROUND: Measurement of muscle fiber size and determination of size distribution are important in the assessment of neuromuscular disease. Fiber size estimation by simple inspection is inaccurate and subjective, and manual segmentation and measurement are time-consuming and tedious. We therefore propose an automated image analysis method for objective, reproducible, and time-saving measurement of muscle fibers in routinely hematoxylin-eosin stained cryostat sections. METHODS: The proposed segmentation technique makes use of recent advances in level-set based segmentation, where classical edge-based active contours are extended by region-based cues such as color and texture. Segmentation and measurement are performed fully automatically. Multiple morphometric parameters, i.e., cross-sectional area, lesser diameter, and perimeter, are assessed in a single pass. The performance of the automated method was compared with results obtained by manual measurement by experts. RESULTS: The correct classification rate of the automated method was high (98%). Segmentation and measurement results obtained manually or automatically did not reveal any significant differences. CONCLUSIONS: The presented region-based active contour approach has been proven to accurately segment and measure muscle fibers. Complete automation minimizes user interaction; thus, batch processing as well as objective and reproducible muscle fiber morphometry are provided.
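As a rough illustration only, not the published pipeline: region-based active contours of this kind can be run with scikit-image's morphological Chan-Vese implementation, followed by the per-fibre measurements named above. The file name, iteration count, and size filter are assumptions.

```python
from skimage import io, color, measure
from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

rgb = io.imread("he_section.png")                      # hypothetical H&E image
gray = color.rgb2gray(rgb)

# Region-based (Chan-Vese style) evolution from a checkerboard initialisation.
init = checkerboard_level_set(gray.shape, square_size=25)
mask = morphological_chan_vese(gray, 100, init_level_set=init, smoothing=3)

# Morphometry per fibre: cross-sectional area and perimeter, as in the abstract.
fibres = measure.label(mask)
for region in measure.regionprops(fibres):
    if region.area > 200:                              # assumed size filter
        print(region.label, region.area, region.perimeter)
```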

4.
BACKGROUND: Epiluminescence microscopy (ELM) is a noninvasive clinical tool recently developed for the diagnosis of pigmented skin lesions (PSLs), with the aim of improving melanoma screening strategies. However, the complexity of the ELM grading protocol means that considerable expertise is required for differential diagnosis. In this paper we propose a computer-based tool able to screen ELM images of PSLs in order to aid clinicians in the detection of lesion patterns useful for differential diagnosis. METHODS: The method proposed is based on the supervised classification of pixels of digitized ELM images, and leads to the construction of classes of pixels used for image segmentation. This process has two major phases, i.e., a learning phase, where several hundred pixels are used in order to train and validate a classification model, and an application step, which consists of a massive classification of billions of pixels (i.e., the full image) by means of the rules obtained in the first phase. RESULTS: Our results show that the proposed method is suitable for lesion-from-background extraction, for complete image segmentation into several typical diagnostic patterns, and for artifact rejection. Hence, our prototype has the potential to assist in distinguishing lesion patterns which are associated with diagnostic information such as diffuse pigmentation, dark globules (black dots and brown globules), and the gray-blue veil. CONCLUSIONS: The system proposed in this paper can be considered as a tool to assist in PSL diagnosis.

5.
刘国成  张杨  黄建华  汤文亮 《昆虫学报》2015,58(12):1338-1343
Objective: Spider mites are major pests of many crops. Traditional identification relies on visual inspection, which is time-consuming and laborious; to develop a fast automatic identification method, a computer image analysis algorithm is introduced. Methods: The method segments and identifies spider mite images on field crops based on the K-means clustering algorithm. Results: Compared with conventional RGB color segmentation, K-means clustering effectively segmented and identified spider mite images on leaves, with an average identification time of 3.56 s and an average identification accuracy of 93.95%. The identification time T increased with the total number of image pixels Pi. Conclusion: The combined K-means clustering algorithm can be applied to spider mite image segmentation and identification.
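A minimal sketch of the general approach (k-means over pixel colours) using OpenCV, not the authors' implementation; the cluster count and the rule for picking the mite cluster are illustrative assumptions.

```python
import numpy as np
import cv2

bgr = cv2.imread("leaf_with_mites.jpg")                    # hypothetical field image
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)

# K-means over all pixel colours.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(lab, 3, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

# Assume the darkest cluster (lowest L channel) corresponds to the mites.
mite_cluster = int(np.argmin(centers[:, 0]))
mask = (labels.reshape(bgr.shape[:2]) == mite_cluster).astype(np.uint8) * 255
cv2.imwrite("mite_mask.png", mask)
```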

6.
Objective: Many insect pests attack oil-tea (Camellia oleifera) trees, and larvae of the tea tussock moth Euproctis pseudoconspersa are among the most damaging. Automatic detection of the larvae requires segmenting their images, and the segmentation quality directly affects automatic recognition. Methods: This paper proposes a segmentation algorithm for E. pseudoconspersa larva images based on maximum neighborhood difference and region merging. The method computes the differences of the three RGB components between adjacent pixels; if the maximum difference is 0, the adjacent pixels are merged to produce an initial segmentation, which is then merged further according to a merging criterion to obtain the final result. Results: Experiments show that the algorithm quickly and effectively separates the larval body from the background. Conclusion: Segmenting larva images with the JSEG algorithm, K-means clustering, a fast geometric deformable model, and the proposed algorithm, and comparing the results, shows that the proposed method gives the best segmentation with a relatively short processing time.

7.
In this paper, we address the problems of fully automatic localization and segmentation of 3D vertebral bodies from CT/MR images. We propose a learning-based, unified random forest regression and classification framework to tackle these two problems. More specifically, in the first stage, the localization of 3D vertebral bodies is solved with random forest regression where we aggregate the votes from a set of randomly sampled image patches to get a probability map of the center of a target vertebral body in a given image. The resultant probability map is then further regularized by Hidden Markov Model (HMM) to eliminate potential ambiguity caused by the neighboring vertebral bodies. The output from the first stage allows us to define a region of interest (ROI) for the segmentation step, where we use random forest classification to estimate the likelihood of a voxel in the ROI being foreground or background. The estimated likelihood is combined with the prior probability, which is learned from a set of training data, to get the posterior probability of the voxel. The segmentation of the target vertebral body is then done by a binary thresholding of the estimated probability. We evaluated the present approach on two openly available datasets: 1) 3D T2-weighted spine MR images from 23 patients and 2) 3D spine CT images from 10 patients. Taking manual segmentation as the ground truth (each MR image contains at least 7 vertebral bodies from T11 to L5 and each CT image contains 5 vertebral bodies from L1 to L5), we evaluated the present approach with leave-one-out experiments. Specifically, for the T2-weighted MR images, we achieved for localization a mean error of 1.6 mm, and for segmentation a mean Dice metric of 88.7% and a mean surface distance of 1.5 mm, respectively. For the CT images we achieved for localization a mean error of 1.9 mm, and for segmentation a mean Dice metric of 91.0% and a mean surface distance of 0.9 mm, respectively.
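A minimal sketch of the classification stage only, assuming per-voxel feature vectors and labels have already been extracted; the files and parameters below are hypothetical, and the regression/HMM localization stage is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X_train: (n_voxels, n_features) features from training scans;
# y_train: 1 for vertebral-body voxels, 0 for background (both assumed given).
X_train = np.load("train_features.npy")              # hypothetical files
y_train = np.load("train_labels.npy")

forest = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
forest.fit(X_train, y_train)

# Foreground likelihood for every voxel in the ROI, then a binary threshold,
# standing in for the posterior/thresholding step described in the abstract.
X_roi = np.load("roi_features.npy")
p_fg = forest.predict_proba(X_roi)[:, 1]
vertebra_mask = p_fg > 0.5
```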

8.
MOTIVATION: Inner holes, artifacts and blank spots are common in microarray images, but current image analysis methods do not pay them enough attention. We propose a new robust model-based method for processing microarray images so as to estimate foreground and background intensities. The method starts with a very simple but effective automatic gridding method, and then proceeds in two steps. The first step applies model-based clustering to the distribution of pixel intensities, using the Bayesian Information Criterion (BIC) to choose the number of groups up to a maximum of three. The second step is spatial, finding the large spatially connected components in each cluster of pixels. The method thus combines the strengths of the histogram-based and spatial approaches. It deals effectively with inner holes in spots and with artifacts. It also provides a formal inferential basis for deciding when the spot is blank, namely when the BIC favors one group over two or three. RESULTS: We apply our methods for gridding and segmentation to cDNA microarray images from an HIV infection experiment. In these experiments, our method had better stability across replicates than a fixed-circle segmentation method or the seeded region growing method in the SPOT software, without introducing noticeable bias when estimating the intensities of differentially expressed genes. AVAILABILITY: spotSegmentation, an R language package implementing both the gridding and segmentation methods, is available through the Bioconductor project (http://www.bioconductor.org). The segmentation method requires the contributed R package MCLUST for model-based clustering (http://cran.us.r-project.org). CONTACT: fraley@stat.washington.edu.
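The paper's implementation is the R packages spotSegmentation and MCLUST; purely as an illustration of the BIC-based model selection it describes, a Python sketch with scikit-learn might look like the following. The file name and the "blank spot" reading are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

intensities = np.load("spot_pixels.npy").reshape(-1, 1)    # hypothetical spot pixels

# Fit 1-, 2- and 3-component Gaussian mixtures and keep the BIC-best model
# (scikit-learn's BIC is "lower is better").
models = {k: GaussianMixture(n_components=k, random_state=0).fit(intensities)
          for k in (1, 2, 3)}
best_k = min(models, key=lambda k: models[k].bic(intensities))

if best_k == 1:
    print("BIC favours one group: spot treated as blank")
else:
    groups = models[best_k].predict(intensities)
    print(f"{best_k} intensity groups found; brightest component taken as foreground")
```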

9.
Digital image analysis of cell nuclei is useful to obtain quantitative information for the diagnosis and prognosis of cancer. However, the lack of a reliable automatic nuclear segmentation is a limiting factor for high-throughput nuclear image analysis. We have developed a method for automatic segmentation of nuclei in Feulgen-stained histological sections of prostate cancer. A local adaptive thresholding with an object perimeter gradient verification step detected the nuclei and was combined with an active contour model that featured an optimized initialization and worked within a restricted region to improve convergence of the segmentation of each nucleus. The method was tested on 30 randomly selected image frames from three cases, comparing the results from the automatic algorithm to a manual delineation of 924 nuclei. The automatic method segmented a few more nuclei compared to the manual method, and about 73% of the manually segmented nuclei were also segmented by the automatic method. For each nucleus segmented both manually and automatically, the accuracy (i.e., agreement with manual delineation) was estimated. The mean segmentation sensitivity and specificity were 95% and 96%, respectively. The results from the automatic method were not significantly different from the ground truth provided by manual segmentation. This opens the possibility for large-scale nuclear analysis based on automatic segmentation of nuclei in Feulgen-stained histological sections.
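A minimal sketch of two of the ingredients mentioned above: local adaptive thresholding of the nuclei and a sensitivity/specificity comparison against a manual mask. Block size, offset, and file names are assumptions, not the published algorithm.

```python
import numpy as np
from skimage import io, filters

gray = io.imread("feulgen_section.png", as_gray=True)      # hypothetical image
manual = io.imread("manual_mask.png", as_gray=True) > 0    # hypothetical ground truth

# Dark nuclei on a brighter background: foreground where below the local threshold.
local_thr = filters.threshold_local(gray, block_size=101, offset=0.01)
auto = gray < local_thr

# Pixel-wise agreement with the manual delineation.
tp = np.logical_and(auto, manual).sum()
fn = np.logical_and(~auto, manual).sum()
fp = np.logical_and(auto, ~manual).sum()
tn = np.logical_and(~auto, ~manual).sum()
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```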

10.

Background  

The finite element method (FEM) has been used to simulate cardiac and hepatic radiofrequency (RF) ablation. The FEM allows modeling of complex geometries that cannot be solved by analytical methods or finite difference models. In both hepatic and cardiac RF ablation, a common control mode is temperature control. Commercial FEM packages do not support automated temperature control, so most researchers manually adjust the applied power by trial and error to keep the tip temperature of the electrodes constant.

11.
Objective: To investigate the application of CT images in the diagnosis of lung cancer based on finite mixture models. Method: A rat model of lung cancer was established in 120 clean-grade healthy rats, which then underwent lung CT examination. After preprocessing of the CT image data, the images were segmented with different methods: lung nodule segmentation based on an Adaptive Particle Swarm Optimization–Gaussian mixture model (APSO-GMM), lung nodule segmentation based on an Adaptive Particle Swarm Optimization–gamma mixture model (APSO-GaMM), lung nodule segmentation based on statistical information and a self-selected mixed distribution model, and lung nodule segmentation based on neighborhood information and a self-selected mixed distribution model. The segmentation results were then evaluated. Results: Compared with segmentation based on statistical information and the self-selected mixed distribution model, segmentation based on neighborhood information and the self-selected mixed distribution model achieved a higher Dice coefficient and a smaller relative final measurement accuracy, i.e., more accurate segmentation, but a longer running time. Compared with APSO-GMM and APSO-GaMM, the self-selected mixed distribution segmentation methods achieved larger Dice values and smaller final measurement accuracy. Conclusion: Among the methods compared, the self-selected mixed distribution model based on neighborhood information achieved the largest Dice value and the smallest relative final measurement accuracy, indicating that it gives the best segmentation.

12.
We present a rectangle-based segmentation algorithm that sets up a graph and performs a graph cut to separate an object from the background. Conventional graph-based algorithms, however, distribute the graph's nodes uniformly and equidistantly on the image, and a smoothness term is then added to force the cut to prefer a particular shape. This strategy does not allow the cut to prefer a certain structure, especially when areas of the object are indistinguishable from the background. We solve this problem by referring to a rectangle shape of the object when sampling the graph nodes, i.e., the nodes are distributed non-uniformly and non-equidistantly on the image. This strategy can be useful when areas of the object are indistinguishable from the background. For evaluation, we focus on vertebrae images from Magnetic Resonance Imaging (MRI) datasets to support the time-consuming manual slice-by-slice segmentation performed by physicians. The ground-truth vertebra boundaries were manually extracted by two clinical experts (neurological surgeons) with several years of experience in spine surgery and afterwards compared with the automatic segmentation results of the proposed scheme, yielding an average Dice Similarity Coefficient (DSC) of 90.97±2.2%.
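The reported agreement is a Dice Similarity Coefficient; as a reminder of what that score measures, a minimal sketch for two binary masks (not the authors' code):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two partially overlapping 5x5 squares.
a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
print(f"DSC = {dice(a, b):.2f}")   # 2*16 / (25 + 25) = 0.64
```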

13.
A novel machine-learning method to distinguish between tumor and normal tissue in optical coherence tomography (OCT) has been developed. A pre-clinical murine ear model implanted with mouse colon carcinoma CT-26 was used. Structural-image-based feature sets were defined for each pixel, and machine learning classifiers were trained using "ground truth" OCT images manually segmented by comparison with histology. The accuracy of the OCT tumor segmentation method was then quantified by comparing with fluorescence imaging of tumors expressing the genetically encoded fluorescent protein KillerRed, which clearly delineates tumor borders. Because the resultant 3D tumor/normal structural maps are inherently co-registered with OCT-derived maps of tissue microvasculature, the latter can be color coded as belonging to either tumor or normal tissue. Applications to radiomics-based multimodal OCT analysis are envisioned.

14.
Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm evaluated on 26 complex micrographs at single-pixel resolution had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at . This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.

15.
Accurate insect detection and recognition relies mainly on proper automatic segmentation. This paper deals with butterfly segmentation in ecological images characterized by several artifacts, such as complex environmental decors and cluttered backgrounds. The distractors contained in the rich ecological environment and the large differences between butterfly species severely complicate segmentation and make it a challenging task. As butterflies appear well contrasted against their surroundings, we suggest exploiting this saliency property to delineate butterfly boundaries accurately. To this end, we perform a graph-ranking process with high-level guidance from foreground and background cues to improve the quality of the segmentation. The ranking accuracy is improved through a weighting scheme that accurately combines color, texture, and spatial information; the contribution of each feature is controlled according to its relevance in highlighting butterfly regions. We then initialize foreground seeds from the most salient pixels and background seeds from the least salient pixels as input to a graph-cut algorithm that extracts the butterfly from the background. Comparative evaluation shows that our segmentation scheme outperforms several existing segmentation methods that provide high segmentation scores.
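A minimal sketch of the final extraction step in the same spirit, seeding a graph cut from a saliency map, using a crude colour-contrast saliency proxy and OpenCV's GrabCut as stand-ins. The saliency model, seed thresholds, and file name are assumptions, not the authors' weighted graph-ranking scheme.

```python
import numpy as np
import cv2

bgr = cv2.imread("butterfly.jpg")                           # hypothetical image

# Crude saliency proxy: distance of each pixel's Lab colour from the image mean.
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
saliency = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=2)
saliency /= saliency.max()

# Seed GrabCut: most salient pixels as foreground, least salient as background.
mask = np.full(bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)
mask[saliency > 0.7] = cv2.GC_FGD
mask[saliency < 0.1] = cv2.GC_BGD

bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
butterfly = np.where(np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)), 255, 0).astype(np.uint8)
```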

16.
Unsupervised clustering represents a powerful technique for self-organized segmentation of biomedical image time series data describing groups of pixels exhibiting similar properties of local signal dynamics. The theoretical background is presented in the beginning, followed by several medical applications demonstrating the flexibility and conceptual power of these techniques. These applications range from functional MRI data analysis to dynamic contrast-enhanced perfusion MRI and breast MRI. For fMRI, these methods can be employed to identify and separate time courses of interest, along with their associated spatial patterns. When applied to dynamic perfusion MRI, they identify groups of voxels associated with time courses that are clinically informative and straightforward to interpret. In breast MRI, a segmentation of the lesion is achieved and in addition a subclassification is obtained within the lesion with regard to regions characterized by different MRI signal time courses. In the present paper, we conclude that unsupervised clustering techniques provide a robust method for blind analysis of time series image data in the important and current field of functional and dynamic MRI.
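A minimal sketch of the core idea: clustering per-voxel signal time courses so that voxels with similar dynamics fall into the same group. The 4-D array, mask threshold, and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

series = np.load("dynamic_mri.npy")             # hypothetical array, shape (x, y, z, t)
mask = series.mean(axis=-1) > 50                # crude tissue mask (assumed threshold)

# Normalise each voxel's time course, then cluster the courses.
courses = series[mask].astype(np.float64)
courses -= courses.mean(axis=1, keepdims=True)
courses /= courses.std(axis=1, keepdims=True) + 1e-8

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(courses)

cluster_map = np.zeros(mask.shape, dtype=int)   # 0 = outside the mask
cluster_map[mask] = labels + 1                  # voxel-wise cluster assignment
```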

17.
To develop and test a system for computer-assisted image analysis, repeated video recordings of reed canary-grass roots (Phalaris arundinacea L.) were made in an 18-window rhizotron. The images were digitized and processed using a Unix computer and the Khoros software development environment. Two image sizes, 126×95 mm and 61×46 mm, both comprising 650×490 pixels, were compared. Among the image processing techniques used were median filtering, segmentation and skeletonization. Root area and length in both the topsoil and subsoil were estimated using the two image sizes. The resolution (image size) strongly affected the calculated root lengths. The results were compared with root length measurements obtained manually. Statistically significant differences in root length and area in the topsoil were detected between the sampling dates using the computer-assisted methods. Possible sources of error and methods for reducing them are discussed.
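A minimal sketch of the chain named above (median filtering, segmentation, skeletonization) with root length estimated from the skeleton pixel count. The threshold, size filter, and file name are assumptions; the scale is taken from the 95 mm / 490-pixel image dimension quoted above.

```python
from scipy import ndimage
from skimage import io, filters, morphology

gray = io.imread("rhizotron_window.png", as_gray=True)      # hypothetical image

smoothed = ndimage.median_filter(gray, size=3)               # median filtering
roots = smoothed < filters.threshold_otsu(smoothed)          # dark roots on lighter soil
roots = morphology.remove_small_objects(roots, min_size=64)  # drop speckle

skeleton = morphology.skeletonize(roots)                     # 1-pixel-wide root traces

mm_per_px = 95.0 / 490.0                                     # from the image size above
root_length_mm = skeleton.sum() * mm_per_px                  # coarse length estimate
root_area_mm2 = roots.sum() * mm_per_px ** 2
print(f"root length ≈ {root_length_mm:.0f} mm, root area ≈ {root_area_mm2:.0f} mm²")
```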

18.
Objective: Taking the adult humerus as an example, to combine 3-D reconstruction of medical images with the finite element method for the study of bone-setting manipulation, to build a finite element model of the normal humerus, to validate the model, and to perform biomechanical analysis. Methods: A young male volunteer was selected, and his upper limb was scanned in consecutive tomographic slices from the upper ends of the ulna and radius to the humeral head to obtain CT images. The CT data were imported into MIMICS, and a finite element model of the normal humerus was constructed through image segmentation, 3-D reconstruction, and assignment of material properties. Mechanical analysis was performed in ANSYS, and the results were compared with biomechanical data for the humerus reported in the literature to validate the model. Results: A 3-D geometric model and a finite element model of the normal humerus were established and validated with ANSYS. The physical characteristics of the model are close to those of real bone, it reflects the mechanical behavior of the bone well, and it enables quantitative analysis of manipulation. Conclusion: The humerus model is realistic in shape, its stress values under different loads agree with the literature, and it can serve as a virtual fracture model in a traditional Chinese medicine manipulation simulation system.

19.
Quantitative analysis of digitized IHC-stained tissue sections is increasingly used in research studies and clinical practice. Accurate quantification of IHC staining, however, is often complicated by conventional tissue counterstains, owing to the color convolution of the IHC chromogen and the counterstain. To overcome this issue, we implemented a new counterstain, Acid Blue 129, which provides homogeneous tissue background staining. Furthermore, we combined this counterstaining technique with a simple, robust, fully automated image segmentation algorithm, which takes advantage of the high degree of color separation between the 3-amino-9-ethyl-carbazole (AEC) chromogen and the Acid Blue 129 counterstain. Rigorous validation of the automated technique against manual segmentation data, using Ki-67 IHC sections from rat C6 glioma and β-amyloid IHC sections from transgenic mice with amyloid precursor protein (APP) mutations, has shown the automated method to produce highly accurate results compared with ground truth estimates based on the manually segmented images. The synergistic combination of the novel tissue counterstaining and image segmentation techniques described in this study will allow for accurate, reproducible, and efficient quantitative IHC studies for a wide range of antibodies and tissues. (J Histochem Cytochem 56:873–880, 2008)

20.
The mechanism underlying pollen tube growth involves diverse genes and molecular pathways. Alterations in the regulatory genes or pathways cause phenotypic changes reflected in cellular morphology, which can be captured using fluorescence microscopy. Determining and classifying pollen tube morphological phenotypes in such microscopic images is key to understanding the involvement of these genes and pathways. In this context, we propose a computational method to extract quantitative morphological features, and demonstrate that these features reflect morphological differences relevant for distinguishing different defects of pollen tube growth. The corresponding software tool furthermore includes a novel semi-automated image segmentation approach that allows the boundary of a pollen tube in a microscopic image to be identified with high accuracy.
