Similar Literature
 A total of 20 similar articles were retrieved (search time: 15 ms).
1.

Purpose

To overcome the severe intensity inhomogeneity and blurry boundaries in HIFU (High Intensity Focused Ultrasound) ultrasound images, a multi-scale and shape-constrained localized region-based active contour model (MSLCV) was developed to segment the target region in HIFU ultrasound images of uterine fibroids accurately and efficiently.

Methods

Localized region-based active contour models are well suited to ultrasound images, but on their own they still cannot produce satisfactory segmentations of HIFU ultrasound images of uterine fibroids. We therefore incorporated a new shape constraint into the localized region-based level set framework; it constrains the evolving contour toward the desired segmentation and avoids boundary leakage and excessive contraction. Several measures were introduced to reduce sensitivity to initialization, a multi-scale segmentation scheme was proposed to improve efficiency, and an adaptive function for selecting the localizing radius was designed to further improve segmentation quality.

Results

Experimental results demonstrated that the MSLCV model was significantly more accurate and efficient than conventional methods. Quantitative validation yielded an average DSC (Dice similarity coefficient) of 0.94 and an average MSSD (mean sum of square distance) of 25.16. Moreover, with the multi-scale segmentation scheme, the average segmentation time of the MSLCV model was reduced to approximately one eighth of that of the localized region-based active contour (LCV) model.

Conclusions

An accurate and efficient multi-scale and shape-constrained localized region-based active contour model was designed for the semi-automatic segmentation of uterine fibroid ultrasound (UFUS) images in HIFU therapy. Compared with other methods, it provides more accurate and more efficient segmentation results that are very close to manual segmentations by a specialist.
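The abstract does not include implementation details of the MSLCV model. The NumPy sketch below only illustrates the underlying region-based level-set idea (a plain Chan–Vese-style evolution); the localization, shape constraint, multi-scale scheme, and adaptive radius selection are omitted, and all function names and parameter values are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5, mu=0.2, lam1=1.0, lam2=1.0):
    """One explicit update of a simple two-region (Chan-Vese-style) level set.

    phi : level-set function (object where phi > 0); img : grayscale image in [0, 1].
    This global-region sketch omits the localization and shape constraint
    used by the MSLCV model described above.
    """
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0      # mean intensity inside
    c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean intensity outside

    # Curvature (regularization) term: divergence of the normalized gradient.
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    ky = np.gradient(gy / norm)[0]
    kx = np.gradient(gx / norm)[1]
    curvature = kx + ky

    # Region-competition force: grow toward pixels closer to the inside mean.
    force = -lam1 * (img - c1) ** 2 + lam2 * (img - c2) ** 2
    return phi + dt * (mu * curvature + force)

def segment(img, n_iter=300):
    """Run the evolution from a centered disk-shaped initial level set."""
    img = img.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    phi = (min(h, w) / 4.0) - np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    for _ in range(n_iter):
        phi = chan_vese_step(phi, img)
    return phi > 0
```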

2.
Active contour models are of great importance for image segmentation and can extract smooth, closed boundary contours of the desired objects with promising results. However, they do not work well in the presence of intensity inhomogeneity. This paper therefore proposes a novel region-based active contour model that considers image intensities and 'vesselness values' from local phase-based vesselness enhancement simultaneously to define a multi-feature Gaussian distribution fitting energy. This energy is then incorporated into a level set formulation with a regularization term for accurate segmentation. Experimental results on the publicly available STructured Analysis of the Retina (STARE) database demonstrate that our model is more accurate than several existing typical methods and can successfully segment most small vessels of varying width.
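On the vesselness feature mentioned above: the paper uses a local phase-based vesselness measure, for which the Frangi filter in scikit-image can serve as a rough stand-in when experimenting with multi-feature region energies. The sketch below is only such a stand-in; the sigma values and the two-channel feature stack are illustrative choices, not the authors' settings.

```python
import numpy as np
from skimage.filters import frangi

def vesselness_map(gray, sigmas=(1, 2, 3)):
    """Multi-scale vesselness as one extra feature channel.

    The paper above uses a local phase-based vesselness measure; here the
    Frangi filter from scikit-image stands in for it purely as an
    illustration. Retinal vessels are darker than the background, hence
    black_ridges=True.
    """
    return frangi(gray.astype(float), sigmas=sigmas, black_ridges=True)

def combined_features(gray, sigmas=(1, 2, 3)):
    # Stack raw intensity and vesselness so a region-based model can fit a
    # multi-feature (e.g. Gaussian) distribution to each region.
    v = vesselness_map(gray, sigmas)
    return np.stack([gray.astype(float), v], axis=-1)
```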

3.
Intensity inhomogeneity causes many difficulties in the segmentation and understanding of magnetic resonance (MR) images. Bias correction is an important step for addressing the intensity inhomogeneity of MR images before quantitative analysis. In this paper, a modified model is developed that segments images with intensity inhomogeneity and estimates the bias field simultaneously. In the modified model, a clustering criterion energy function is defined by considering the difference between the measured and estimated images in a local region. By using this local difference, the method obtains accurate segmentation results and an accurate estimate of the bias field. The energy function is incorporated into a level set formulation with a level set regularization term, and the energy is minimized by a level set evolution process. The model is first presented as a two-phase model and then extended to a multi-phase one. The experimental results demonstrate the advantages of our model in terms of accuracy and insensitivity to the location of the initial contours. In particular, our method has been applied to various synthetic and real images with desirable results.

4.
谭磊  赵书河  罗云霄  周洪奎  王安  雷步云 《生态学报》2014,34(24):7251-7260
Vegetation is difficult to classify in pixel-based land cover classification, and a multi-temporal object-oriented classification approach can address this problem well. Taking the hilly area of Yantai City, Shandong Province as the study area, land cover was classified automatically using an object-feature-based multi-temporal classification method with Landsat TM (Landsat Thematic Mapper remotely sensed imagery), a DEM (Digital Elevation Model), slope, slope position, aspect, and other data. The imagery was first segmented at multiple scales and the segmentation results were examined to select a suitable segmentation scale; the spectral, textural, and shape features of the objects were then analyzed. Differences between classes were established according to the spectral characteristics, geographic correlation, shape, and spatial distribution of each land cover type. A decision tree was built, fuzzy classification was performed with membership functions, and a support vector machine was used to improve classification accuracy. The results show that applying the object-oriented classification method to multi-temporal imagery clearly improves classification accuracy over traditional pixel-based classification; in particular, it solves the problem of distinguishing trees, shrubs, and grasses.
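The study relies on object-based (rather than pixel-based) classification refined with an SVM. A loose analogue can be assembled from open-source tools, for example SLIC superpixels from scikit-image as the segmentation step and an SVM from scikit-learn as the classifier. The sketch below is only such an analogue (it assumes scikit-image ≥ 0.19 for the channel_axis argument); it does not reproduce the multi-scale segmentation, membership functions, or decision tree of the original workflow, and all names are illustrative.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def object_features(image, labels):
    """Per-object mean spectral features: a simplified stand-in for the
    spectral/textural/shape features used in the study above."""
    obj_ids = np.unique(labels)
    feats = [image[labels == lab].mean(axis=0) for lab in obj_ids]
    return np.array(feats), obj_ids

def classify_objects(image, train_labels_per_object, n_segments=2000):
    """1) segment the multi-band image into objects, 2) compute per-object
    features, 3) train and apply an SVM using the labelled objects.

    train_labels_per_object : dict {object_id: class_label} for training objects.
    """
    objects = slic(image, n_segments=n_segments, compactness=10, channel_axis=-1)
    X, obj_ids = object_features(image, objects)
    known = np.array([train_labels_per_object.get(i, -1) for i in obj_ids])
    clf = SVC(kernel='rbf').fit(X[known >= 0], known[known >= 0])
    predicted = clf.predict(X)
    return objects, dict(zip(obj_ids, predicted))
```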

5.

Background  

The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology.

6.
Feature segmentation is an essential phase of geometric modeling and shape processing in anatomical studies of the human skeleton and in clinical digital orthopedic treatment. Because of the many degrees of freedom of the bone surface, existing segmentation algorithms can hardly meet specific medical needs. To address this, a novel segmentation methodology for anatomical features of the femur model based on medical semantics is put forward. First, anatomical reference objects (AROs) are created to represent typical characteristics of femur anatomy by 3D point fitting combined with medical prior knowledge. Then, local point clouds between adjacent anatomical regions are selected according to the AROs to extract boundary feature points (BFPs). Finally, the complete femur model is divided into anatomical regions by an enhanced watershed algorithm guided by the BFPs. Experimental results show that the proposed method automatically segments the femoral head, neck, and other complex areas, and that the segmentation results carry better medical semantics. In addition, the segmentation results can be slightly modified by adjusting a few threshold parameters, which makes modification more convenient for ordinary users.

7.
Right ventricle segmentation is a challenging task in cardiac image analysis because of the ventricle's complex anatomy and large shape variations. In this paper, we propose a semi-automatic approach that incorporates right ventricle region and shape information into a livewire framework and uses the segmentation result of one slice to segment adjacent slices. The region term is created with our previously proposed region growing algorithm combined with the SUSAN edge detector, while the shape prior is obtained by forming signed distance functions (SDFs) from a set of binary masks of the right ventricle and applying PCA to them. Short-axis slices are divided into two groups: primary and secondary slices. A primary slice is segmented by the proposed modified livewire, and the livewire seeds are transferred to pre-processed versions of the upper and lower (secondary) slices to find new seed positions in those slices. The shortest-path algorithm is applied to each pair of seeds for segmentation. The method was applied to 48 MR patients (from the MICCAI'12 Right Ventricle Segmentation Challenge) and yielded an average Dice metric of 0.937 ± 0.58 and a Hausdorff distance of 5.16 ± 2.88 mm for endocardium segmentation. The correlations with the ground-truth contours were 0.99, 0.98, and 0.93 for EDV, ESV, and EF, respectively. The qualitative and quantitative results indicate that the proposed method outperforms state-of-the-art methods on the same dataset and that the global cardiac functional parameters are computed robustly.
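One concrete piece of this pipeline, building the PCA shape prior from signed distance functions of binary right-ventricle masks, can be sketched directly with SciPy and NumPy. The sketch below assumes the masks are already spatially aligned; the sign convention and the number of retained modes are illustrative choices, not the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance function: negative inside the mask, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def sdf_shape_model(masks, n_modes=5):
    """Build a PCA shape prior from aligned binary right-ventricle masks.

    masks : array (n_shapes, H, W) of booleans. Returns the mean SDF, the
    first principal modes of SDF variation, and their singular values.
    """
    sdfs = np.stack([signed_distance(m.astype(bool)).ravel() for m in masks])
    mean = sdfs.mean(axis=0)
    centered = sdfs - mean
    # PCA via SVD of the centered data matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    n_modes = min(n_modes, vt.shape[0])
    shape = masks[0].shape
    return mean.reshape(shape), vt[:n_modes].reshape(n_modes, *shape), s[:n_modes]
```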

8.
Segmentation methods for insect images and their applications
王江宁  纪力强 《昆虫学报》2011,54(2):211-217
Automatic identification from images is a rapid way to identify insects, and image segmentation is a key step in it. A survey of recent domestic and international research on insect image segmentation shows that such studies are steadily increasing. With the development of computer image processing, insect image segmentation has absorbed many emerging techniques from the general image segmentation field, such as level sets, edge flow, and intelligent segmentation combining shape, texture, color, and other cues (e.g., the JSEG method). Although many segmentation methods have been introduced into insect image research, segmentation remains the key obstacle to the wide application of insect images. Our review and analysis show that current insect image segmentation studies often perform well on their own test sets but lack a unified evaluation standard, so many methods are difficult to generalize to broader insect image applications. To address these problems, a sound evaluation system for insect image segmentation is needed; we suggest building a unified insect image library and studying evaluation methods for insect image segmentation in depth, tasks that current research urgently needs to complete.

9.
Liver segmentation from abdominal computed tomography (CT) volumes is extremely important for computer-aided liver disease diagnosis and for surgical planning of liver transplantation. Because of ambiguous edges, tissue adhesion, and variation in liver intensity and shape across patients, accurate liver segmentation is a challenging task. In this paper, we present an efficient semi-automatic method that uses intensity, local context, and the spatial correlation of adjacent slices to segment healthy liver regions in CT volumes. An intensity model is combined with a principal component analysis (PCA)-based appearance model to exclude complex background and highlight the liver region. These are then integrated, together with location information from neighboring slices, into graph cuts to segment the liver in each slice automatically. Finally, a boundary refinement method based on bottleneck detection is used to increase segmentation accuracy. Our method does not require a heavy training process or statistical model construction, and it copes with complicated shape and intensity variations. We apply the proposed method to the XHCSU14 and SLIVER07 databases and evaluate it with the MICCAI criteria and the Dice similarity coefficient. Experimental results show that our method outperforms several existing liver segmentation methods.

10.
Determining the stresses in soft tissues such as ligaments and tendons under uniaxial tension requires accurate measurement of their cross-sectional area. Among the many methods available, contact methods raise concerns because they exert external loads that deform the cross-sectional shape of soft tissues and thus affect the area measurements. Non-contact methods, on the other hand, have difficulty with complex shapes, especially those with concavities. To address these problems, a new measurement system using a charge-coupled device (CCD) laser displacement sensor was developed and tested. The system measures the complete surface profile of the object by rotating the laser 360° around the soft tissue; the cross-sectional shape is then reconstructed and the cross-sectional area determined via Simpson's rule. The system's accuracy was first verified with objects of various cross-sectional shapes and areas (cylinders: 23.1, 76.5, 510.3 mm²; cuboids: 34.3, 163.8, 316.7 mm²; and a cylinder with concavities: 121.4 mm²). The CCD laser reflectance system was accurate to within 2.0% for these objects. To test biological application, goat Achilles tendon specimens and the anteromedial bundle of porcine anterior cruciate ligament specimens were measured and compared with values obtained using another accepted technique, the laser micrometer system. The areas obtained with the CCD laser reflectance system were 4.4% and 9.7% lower, respectively, than those obtained with the laser micrometer; these differences can be attributed mainly to concavities. The CCD laser reflectance system is thus an improved method for measuring the cross-sectional shape and area of soft tissues, since it can detect and account for concavities without physically contacting the specimen.
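The area computation itself is straightforward once the 360° profile is available. Assuming the reconstructed cross-section is expressed as radius samples at equally spaced angles about the rotation axis (a simplification that ignores deep concavities, for which a polygon-based area would be needed), the polar area formula A = ½∮r(θ)² dθ can be evaluated with composite Simpson's rule, as in this illustrative sketch:

```python
import numpy as np

def cross_sectional_area(radii):
    """Cross-sectional area from a closed polar profile r(theta).

    radii : radius samples at equally spaced angles over a full 360° sweep
    (the last sample should NOT repeat the first). Uses the polar area
    formula A = 0.5 * integral of r(theta)^2 dtheta, evaluated with
    composite Simpson's rule, so the number of samples must be even.
    """
    r = np.asarray(radii, dtype=float)
    n = r.size
    if n % 2 != 0:
        raise ValueError("use an even number of equally spaced samples")
    # Close the curve by appending the first sample, giving n intervals.
    r2 = np.concatenate([r, r[:1]]) ** 2
    h = 2.0 * np.pi / n
    # Composite Simpson weights over n (even) intervals: 1, 4, 2, 4, ..., 4, 1.
    weights = np.ones(n + 1)
    weights[1:-1:2] = 4.0
    weights[2:-1:2] = 2.0
    integral = h / 3.0 * np.sum(weights * r2)
    return 0.5 * integral

# Sanity check: a circle of radius 10 mm should give ~314.16 mm^2.
# cross_sectional_area(np.full(360, 10.0))  # -> approximately 100 * pi
```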

11.
Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. Because many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated before the available data can be analyzed. Alternatively, genes and arrays can be removed until no missing values remain; however, for genes or arrays with only a small number of missing values it is preferable to impute them. For the subsequent analysis to be as informative as possible, the estimates of the missing gene expression values must be accurate: even a small number of badly estimated values may be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Accurate methods for missing value estimation are therefore needed. We present novel methods for estimating missing values in microarray data sets that are based on the least squares principle and that exploit correlations between both genes and arrays; we refer to this set of methods by the common name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling it as missing). From these tests, we conclude that the LSimpute methods produce estimates that are consistently more accurate than those obtained with KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM), referred to here as EMimpute, and compare its estimation errors with those of our novel methods. The results indicate that, on average, the estimates from our best performing LSimpute method are at least as accurate as those from the best EMimpute algorithm.
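The least-squares idea behind LSimpute can be illustrated with a small NumPy sketch: estimate the missing entries of one gene by regressing it on its most correlated, fully observed genes. This is only an illustration of the principle, not the published LSimpute algorithm (which also combines gene- and array-based estimates); the function name and the choice of k are assumptions.

```python
import numpy as np

def ls_impute_gene(data, gene, k=10):
    """Estimate missing entries of one gene (row) by least-squares regression
    on its k most correlated, fully observed genes.

    data : (genes x arrays) matrix with np.nan marking missing values.
    Illustrative only; the published LSimpute method is more elaborate.
    """
    y = data[gene]
    miss = np.isnan(y)
    obs = ~miss
    complete = [g for g in range(data.shape[0])
                if g != gene and not np.isnan(data[g]).any()]
    # Rank candidate genes by |correlation| on the observed arrays.
    corrs = [abs(np.nan_to_num(np.corrcoef(y[obs], data[g][obs])[0, 1]))
             for g in complete]
    best = [complete[i] for i in np.argsort(corrs)[::-1][:k]]

    # Least-squares fit y ~ [1, X] * beta on observed arrays, then predict.
    X = np.column_stack([data[g] for g in best])
    A = np.column_stack([np.ones(obs.sum()), X[obs]])
    beta, *_ = np.linalg.lstsq(A, y[obs], rcond=None)
    y_filled = y.copy()
    y_filled[miss] = np.column_stack([np.ones(miss.sum()), X[miss]]) @ beta
    return y_filled
```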

12.

Background

Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it.

Methodology/Principal Findings

From a technical point of view, detecting motion boundaries for segmentation based on optic flow is a difficult task, because the flow estimated along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of the human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for detecting motion discontinuities and occlusion regions, based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different model components of visual processing through feedback connections. In particular, mutual interactions between the detection of motion discontinuities and of temporal occlusions considerably improve kinetic boundary detection.

Conclusions/Significance

A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusions. We suggest that by combining the results for motion discontinuities and object occlusion, object segmentation within the model can be improved; this idea could also be applied in other models of object segmentation. In addition, we discuss how the model relates to neurophysiological findings. The model was successfully tested on both artificial and real sequences, including self-motion and object motion.
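Stripped of its neural machinery, the model's two cues can be caricatured in a few lines of NumPy: motion discontinuities as high spatial contrast of the flow field, and occlusion regions as high temporal contrast between consecutive flow fields. The sketch below is only such a caricature, not the proposed V1/MT/MSTl architecture; the function names are illustrative.

```python
import numpy as np

def motion_boundary_strength(flow_u, flow_v):
    """Simplified kinetic-boundary cue: spatial contrast of the flow field.

    flow_u, flow_v : horizontal and vertical optic-flow components (H x W).
    Returns a map that is large where the flow changes abruptly, i.e. near
    motion discontinuities.
    """
    duy, dux = np.gradient(flow_u)
    dvy, dvx = np.gradient(flow_v)
    return np.sqrt(dux ** 2 + duy ** 2 + dvx ** 2 + dvy ** 2)

def occlusion_cue(flow_u_prev, flow_v_prev, flow_u_next, flow_v_next):
    """Simplified occlusion cue: temporal contrast of the flow field.

    Large frame-to-frame flow changes often coincide with regions being
    covered or uncovered (occlusion / disocclusion).
    """
    return np.sqrt((flow_u_next - flow_u_prev) ** 2 +
                   (flow_v_next - flow_v_prev) ** 2)
```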

13.
Liver-vessel segmentation plays an important role in vessel structure analysis for liver surgical planning. This paper presents a liver-vessel segmentation method based on an extreme learning machine (ELM). First, an anisotropic filter is used to remove noise from the original computed tomography (CT) images while preserving vessel boundaries. Then, based on prior knowledge of vessel shapes and geometrical structures, three classical vessel filters (the Sato, Frangi, and offset medialness filters) together with the strain energy filter are used to extract vessel structure features. Finally, the ELM is applied to separate liver-vessel voxels from background voxels. Experimental results show that the proposed method effectively segments liver vessels from abdominal CT images and achieves good accuracy, sensitivity, and specificity.
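The ELM classification step is simple enough to sketch directly: hidden-layer weights are drawn at random and kept fixed, and only the output weights are solved by least squares. The sketch below is a generic ELM applied to per-voxel feature vectors; the hidden-layer size, activation, and threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def train_elm(X, y, n_hidden=200, seed=0):
    """Train a basic extreme learning machine (ELM) for voxel classification.

    X : (n_samples, n_features) vessel-filter responses per voxel,
    y : (n_samples,) binary labels (1 = vessel, 0 = background).
    Hidden weights are random and fixed; only the output weights are solved
    by least squares, which is the core idea of ELM.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                    # random hidden-layer features
    beta, *_ = np.linalg.lstsq(H, y.astype(float), rcond=None)
    return W, b, beta

def predict_elm(X, model, threshold=0.5):
    """Classify voxels with a trained ELM; returns a boolean vessel mask."""
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta) >= threshold
```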

14.
This paper presents a new module for heart sound segmentation based on the S-transform. The segmentation process divides the PhonoCardioGram (PCG) signal into four parts: S1 (first heart sound), systole, S2 (second heart sound), and diastole; it can be considered one of the most important phases in the automatic analysis of PCG signals. The proposed segmentation module consists of three main blocks: localization of heart sounds, detection of the boundaries of the localized heart sounds, and a classification block to distinguish between S1 and S2. An original localization method, named SSE, is proposed in this study: it calculates the Shannon energy of the local spectrum obtained by the S-transform for each sample of the heart sound signal. The second block contains a novel approach for detecting the boundaries of S1 and S2: the energy concentrations of the S-transform of the localized sounds are optimized with a window-width optimization algorithm, the SSE envelope is then recalculated, and a local adaptive threshold is applied to refine the estimated boundaries. To distinguish between S1 and S2, a feature extraction method based on the singular value decomposition (SVD) of the S-matrix is applied. The proposed segmentation module is evaluated block by block on a database of 80 sounds, including 40 sounds with cardiac pathologies.
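The SSE localization idea, Shannon energy of the local spectrum at each time instant, can be illustrated with an ordinary short-time Fourier transform standing in for the S-transform (SciPy has no built-in S-transform). The window length and normalization in the sketch below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import stft

def sse_envelope(pcg, fs, nperseg=256):
    """Shannon-energy envelope of the local spectrum of a PCG signal.

    The SSE method described above computes the Shannon energy of the local
    spectrum given by the S-transform; here a short-time Fourier transform
    stands in for the S-transform purely for illustration.
    """
    _, _, Z = stft(pcg, fs=fs, nperseg=nperseg)
    A = np.abs(Z)
    A = A / (A.max() + 1e-12)                           # normalize amplitudes
    # Shannon energy of each local spectrum: -sum(a^2 * log(a^2)).
    shannon = -np.sum(A ** 2 * np.log(A ** 2 + 1e-12), axis=0)
    return shannon / (shannon.max() + 1e-12)

# S1 and S2 then appear as peaks of the envelope; their boundaries can be
# refined with a local adaptive threshold, as described in the abstract.
```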

15.

Background

Manual cell localization and outlining are so onerous that automated tracking methods would seem mandatory for handling huge image sequences; nevertheless, manual tracking is, astonishingly, still widely practiced in areas such as cell biology that lie outside the influence of most image processing research. The goal of our research is to address this gap by developing automated methods for cell tracking, localization, and segmentation. Since even an optimal frame-to-frame association method cannot compensate for and recover from poor detection, the quality of cell tracking depends on the quality of cell detection within each frame.

Methods

Cell detection performs poorly where the background is not uniform and includes temporal illumination variations, spatial non-uniformities, and stationary objects such as well boundaries (which confine the cells under study). To improve cell detection, the signal-to-noise ratio of the input image can be increased through accurate background estimation. In this paper we investigate background estimation for the purpose of cell detection. We propose a cell model and a background estimation method driven by that model, such that the well structure can be identified, and explicitly rejected, when estimating the background.

Results

The resulting background-removed images have fewer artifacts and allow cells to be localized and detected more reliably. The experimental results generated by applying the proposed method to different Hematopoietic Stem Cell (HSC) image sequences are quite promising.

Conclusion

Understanding cell behavior relies on precise information about the temporal dynamics and spatial distribution of cells. Such information may play a key role in disease research and regenerative medicine, so automated methods for observing and measuring cells in microscopic images are in high demand. The method proposed in this paper is capable of localizing single cells in microwells and can be adapted to other cell types that may not have a circular shape. It can potentially be used for single cell analysis to study the temporal dynamics of cells.
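For comparison, a very simple generic background estimator, grey-scale morphological opening with a structuring element larger than a cell, is sketched below. Unlike the cell-model-driven method proposed in the paper, it does not identify or reject well structure; the size parameter and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import grey_opening

def remove_background(frame, cell_diameter=25):
    """Rough background estimate for microscopy frames.

    Grey-scale opening with a structuring element larger than a single cell
    suppresses the cells and keeps slow illumination variations and static
    structures; subtracting it leaves the cells. This is a generic baseline,
    not the cell-model-driven estimator proposed in the paper above.
    """
    size = int(2 * cell_diameter)
    background = grey_opening(frame.astype(float), size=(size, size))
    return frame.astype(float) - background, background
```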

16.
In this paper, a novel watershed approach based on seed region growing and image entropy is presented that can improve medical image segmentation. The proposed algorithm incorporates prior information from seed region growing and image entropy into its calculation. The algorithm starts by partitioning the image into several intensity levels using a watershed multi-degree immersion process. These intensity levels are the input to a computationally efficient seed region growing process, which produces an initial partitioning of the image into regions. The regions are then fed to an entropy-based procedure that performs suitable merging and produces the final segmentation; this procedure uses a region-based similarity representation of the image regions to decide whether regions can be merged. Each region is isolated from its level and the residual pixels are passed on to the next level, and so on; we refer to this as the multi-level process and to the watershed as a multi-level watershed. The proposed algorithm is applied to a challenging application: grey matter–white matter segmentation in magnetic resonance images (MRIs). The established methods and the proposed approach are compared on this application under several configurations: simulated immersion, multi-degree immersion, multi-level seed region growing, and multi-level seed region growing with entropy. The results show that the proposed method achieves more accurate results and less oversegmentation on medical images.

17.
The authors propose a CT image segmentation method using structural analysis that is useful for objects with structural dynamic characteristics. The motivation for our research comes from the study of genetic activity: in order to reveal the roles of genes, it is necessary to create mutant mice and measure differences among them by scanning their skeletons with an X-ray CT scanner, and the CT image must then be manually segmented into individual bones. Manually segmenting many mutant mouse models is very time consuming, so it is desirable to make this segmentation procedure automatic. Although numerous papers have proposed segmentation techniques, no general segmentation method for the skeletons of living creatures has been established. Against this background, the authors propose a segmentation method based on the concept of a destruction analogy. To realize this concept, structural analysis is performed with the finite element method (FEM), since structurally weak areas can be expected to break under stress. The contribution of the method is its novelty: no previous studies have used structural analysis for image segmentation. The implementation involves three steps. First, finite elements are created directly from the pixels of a CT image, and candidate areas where segmentation appears appropriate are selected. Second, the destruction analogy is used to find the single candidate with high strain, which is chosen as the segmentation target; the boundary conditions for the FEM are set automatically. Third, the destruction analogy is carried out by relabeling high-strain pixels as background, and this process is iterated until the object decomposes into two parts. CT image segmentation is demonstrated on various types of CT imagery.

18.
Partial caching of large media objects such as video files has recently been proposed because caching entire objects can easily exhaust the storage resources of a proxy server. This paper examines the idea of segmenting video files into chunks and applying replacement decisions at the chunk level rather than to entire videos. It is shown that a higher byte hit ratio (BHR) can be achieved by appropriately adjusting the replacement granularity; the price paid for the improved BHR is that the replacement algorithm takes longer to converge to the steady-state BHR. Two methods for segmenting video into chunks are presented: the Fixed Chunk Size scheme, which is rather simple and reveals the basic trade-off between BHR and responsiveness to changes in popularity, and the Variable Chunk Size scheme, which uses request frequencies to dynamically adjust the chunk size and is shown to combine a small response time with a high BHR. Moreover, a variation of the fixed-chunk-size scheme is presented that improves its performance by switching between different chunk sizes. Video segmentation is also considered as a mechanism for differentiated caching based on access costs: by employing access-cost-dependent chunk sizes, an overall reduction in access cost is demonstrated.
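The chunk-level replacement idea of the Fixed Chunk Size scheme can be illustrated with a minimal LRU cache that also tracks the byte hit ratio. The sketch below is not the paper's algorithm (it uses plain LRU rather than the replacement policies the paper evaluates), and the class and method names are illustrative.

```python
from collections import OrderedDict

class ChunkCache:
    """Chunk-level cache with LRU replacement and byte-hit-ratio accounting.

    Videos are split into fixed-size chunks (as in the Fixed Chunk Size
    scheme above) and replacement decisions are taken per chunk rather than
    per video. Minimal illustration only; names and policy are assumptions.
    """

    def __init__(self, capacity_bytes, chunk_bytes):
        self.capacity = capacity_bytes
        self.chunk = chunk_bytes
        self.store = OrderedDict()        # (video_id, chunk_index) -> True
        self.hit_bytes = 0
        self.total_bytes = 0

    def request_chunk(self, video_id, chunk_index):
        """Request one chunk of a video; returns True on a cache hit."""
        self.total_bytes += self.chunk
        key = (video_id, chunk_index)
        if key in self.store:
            self.store.move_to_end(key)   # refresh LRU position
            self.hit_bytes += self.chunk
            return True
        # Evict least-recently-used chunks until the new chunk fits.
        while (len(self.store) + 1) * self.chunk > self.capacity and self.store:
            self.store.popitem(last=False)
        if (len(self.store) + 1) * self.chunk <= self.capacity:
            self.store[key] = True
        return False

    def byte_hit_ratio(self):
        return self.hit_bytes / self.total_bytes if self.total_bytes else 0.0
```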

19.
20.
The basic concepts, notions and methods of geometric morphometrics (GM) are considered. This approach implies multivariate analysis of landmark coordinates located, following certain rules, on the surface of a morphological object. The aim of GM is to reveal differences between morphological objects by their shapes as such, with the "size factor" excluded. GM is based on the concept of Kendall's space (KS), defined as a hypersphere with points distributed on its surface. These points are the shapes, defined as aligned landmark configurations. KS is a non-Euclidean space; its metric, called the Procrustes metric, is defined by the landmark configuration of a reference shape relative to which other shapes are aligned and compared. The differences among shapes are measured as Procrustes distances between the respective points. For the linear methods of multivariate statistics to be applied to the comparison of shapes, the respective points are projected onto the tangent plane (tangent space), the tangent point being defined by the reference. There are two principal methods of shape comparison in GM: Procrustes superimposition (a version of least squares analysis) and thin-plate spline analysis. In the first case, Procrustes residuals are the outcome shape variables that remain after isometric alignment of the shapes being compared; their summation over all landmarks yields the Procrustes distances among these shapes, which can be used in multivariate analyses just like Euclidean distances. In the second case, the shapes are fitted to the references by stretching/compressing and shearing until their landmark configurations are completely identical. Eigenvectors of the resulting bending energy matrix are defined as new shape variables, the principal warps, which yield another shape space with the origin defined by the reference. Projections of the compared shapes onto the principal warps yield partial warps, and decomposition of their covariance matrix into eigenvectors yields relative warps, which are similar to principal components (in particular, they are mutually orthogonal). Both partial and relative warps can be used in many multivariate statistical analyses as quantitative shape variables. Results of thin-plate spline analysis can be represented graphically by a transformation grid, which displays the type, amount, and localization of the shape differences. Basic rules of sample composition and landmark positioning for GM are considered. At present, rigid (with minimal degrees of freedom) 2D morphological objects are most suitable for GM applications. It is important to recognize three types of real landmarks, and additionally semi-landmarks and "virtual" landmarks. Some procedures of thin-plate spline analysis are considered and illustrated with study cases, as are applications of some standard multivariate methods to GM results. These make it possible to evaluate correlation between different shapes, as well as between a shape and some non-shape variables (linear measurements, etc.); to evaluate differences among organisms by the shape of a morphological structure; and to identify the landmarks that account most for both correlation and differences between the shapes. An annotated list of the most popular software packages for GM is provided.
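The core operation of GM, Procrustes superimposition of landmark configurations and the resulting Procrustes distance, can be sketched in a few lines of NumPy: remove location, scale each configuration to unit centroid size, and solve for the optimal rotation by SVD. The sketch below performs partial Procrustes alignment of one 2D configuration onto a reference (reflections are not excluded); it is an illustration, not a replacement for the GM software packages mentioned in the abstract.

```python
import numpy as np

def align_shape(shape, reference):
    """Partial Procrustes superimposition of one landmark configuration onto
    a reference: remove location, scale to unit centroid size, and find the
    optimal rotation by SVD.

    shape, reference : (k, 2) arrays of k landmark coordinates.
    Note: reflections are not excluded in this minimal version; a stricter
    implementation would flip the sign of the last singular vector when the
    rotation matrix has negative determinant.
    """
    def normalize(x):
        x = x - x.mean(axis=0)            # remove location
        return x / np.linalg.norm(x)      # scale to unit centroid size
    a, b = normalize(shape), normalize(reference)
    u, _, vt = np.linalg.svd(a.T @ b)
    rotation = u @ vt                     # orthogonal matrix minimizing ||aR - b||
    return a @ rotation, b

def procrustes_distance(shape1, shape2):
    """(Partial) Procrustes distance between two landmark configurations:
    the square root of the sum of squared differences after superimposition."""
    a, b = align_shape(shape1, shape2)
    return np.sqrt(((a - b) ** 2).sum())
```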

