Similar Articles
19 similar articles found.
1.
Objective: To extract cardiac chamber contours from echocardiograms through ultrasound image preprocessing and an improved segmentation method. Methods: First, the ultrasound image was denoised with a speckle-index-based filter. Second, a piecewise nonlinear gray-level transform was applied to raise image contrast. Finally, an improved level-set algorithm based on the C-V (Chan-Vese) model segmented the image to obtain an accurate initial contour. Results: (1) The speckle-index-based filter removed noise from the ultrasound images without losing detail. (2) The piecewise nonlinear gray-level transform effectively improved image contrast. (3) The improved C-V model successfully segmented ultrasound images containing speckle noise. Conclusion: The proposed preprocessing and segmentation methods effectively extract cardiac chamber contours and reduce the influence of speckle noise on the segmentation result.
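The contrast-enhancement step above can be sketched as follows. The abstract does not give the exact transform, so this is a minimal illustration of a piecewise nonlinear gray-level mapping: power-law compression of the dark and bright tails and a linear stretch of the mid-range; all breakpoints and exponents are assumed values.

```python
import numpy as np

def piecewise_stretch(img, low=0.3, high=0.7, low_out=0.1, high_out=0.9,
                      g_dark=2.0, g_bright=2.0):
    """Piecewise nonlinear gray-level transform on a float image in [0, 1]:
    compress the dark and bright tails with power laws, linearly stretch the
    mid-range onto a wider output interval. Parameters are illustrative."""
    img = np.asarray(img, dtype=float)
    out = np.empty_like(img)
    dark = img < low
    bright = img > high
    mid = ~dark & ~bright
    # dark tail: exponent > 1 compresses values toward 0
    out[dark] = low_out * (img[dark] / low) ** g_dark
    # mid-range: linear stretch from [low, high] onto [low_out, high_out] (slope > 1)
    out[mid] = low_out + (high_out - low_out) * (img[mid] - low) / (high - low)
    # bright tail: power law mapped onto [high_out, 1]
    out[bright] = high_out + (1 - high_out) * ((img[bright] - high) / (1 - high)) ** g_bright
    return out
```

The mapping is continuous and monotone, so gray-level ordering is preserved while mid-range contrast roughly doubles with the default breakpoints.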

2.
Microarray image filtering based on mathematical morphology (cited 5 times: 0 self-citations, 5 by others)
Objective: To filter 256-level grayscale microarray (gene chip) images. Methods: The opening operation from mathematical morphology. Results: The vast majority of the noise in the chip images was removed while image detail was well preserved. Conclusion: Qualitative and quantitative comparisons show that opening-based filtering denoises effectively and runs fast, making it better suited to microarray image processing than mean or median filtering.
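A grayscale opening is an erosion (local minimum) followed by a dilation (local maximum), which removes bright specks smaller than the structuring element while keeping larger structures. A minimal numpy-only sketch, assuming a flat 3×3 structuring element (the paper does not specify one):

```python
import numpy as np

def _morph(img, size, op):
    """Apply `op` (np.min for erosion, np.max for dilation) over a flat
    size x size neighbourhood, with edge-replicate padding."""
    r = size // 2
    p = np.pad(img, r, mode='edge')
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(size) for j in range(size)]
    return op(np.stack(stack), axis=0)

def grey_opening(img, size=3):
    """Grayscale opening: erosion then dilation with a flat structuring
    element. Removes bright noise specks smaller than the element."""
    return _morph(_morph(img, size, np.min), size, np.max)
```

A single bright pixel (spot-sized noise) is erased, while a bright block at least as large as the structuring element passes through unchanged — which is why opening preserves spot detail better than mean filtering.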

3.
To address the poor signal-to-noise ratio of raw photoacoustic signals and the low contrast and insufficient resolution of reconstructed images, a Renyi-entropy-based filtering algorithm for photoacoustic image reconstruction is proposed. The algorithm first determines a segmentation threshold from the Renyi entropy distribution of the raw photoacoustic signal and removes clutter; the filtered data are then used for delay-and-sum image reconstruction. The filter was applied to photoacoustic signals from samples of different dimensionality: a pencil-lead cross-section (zero-dimensional), a hair (one-dimensional), and mouse cortical blood vessels (two-dimensional). Compared with reconstruction before Renyi-entropy processing, image contrast increased by 32.45% on average, resolution by 30.78%, and SNR by 47.66%, while mean squared error fell by 35.01%. Compared with two typical filtering algorithms (modulus maxima and threshold denoising), contrast, resolution, and SNR improved by 25.94%/10.60%, 27.90%/19.48%, and 35.21%/10.60% respectively, and MSE decreased by 28.57%/16.66%. The Renyi-entropy filter therefore substantially improves the quality of reconstructed photoacoustic images.
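The threshold-selection step can be sketched with a Sahoo-style Renyi-entropy criterion: pick the amplitude threshold that maximizes the sum of the Renyi entropies of the below- and above-threshold parts of the amplitude histogram, then suppress sub-threshold samples as clutter. The paper's exact rule is not given, so the order `alpha`, bin count, and the maximization criterion here are assumptions.

```python
import numpy as np

def renyi_threshold(signal, alpha=0.5, bins=64):
    """Choose an amplitude threshold maximizing the sum of Renyi entropies
    of the two histogram parts (a common entropy-thresholding criterion;
    the paper's exact rule may differ)."""
    amp = np.abs(signal)
    hist, edges = np.histogram(amp, bins=bins)
    p = hist / hist.sum()
    best_t, best_h = edges[1], -np.inf
    for k in range(1, bins):
        p0, p1 = p[:k].sum(), p[k:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        h0 = np.log(np.sum((p[:k] / p0) ** alpha) + 1e-12) / (1 - alpha)
        h1 = np.log(np.sum((p[k:] / p1) ** alpha) + 1e-12) / (1 - alpha)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, edges[k]
    return best_t

def filter_clutter(signal, alpha=0.5):
    """Zero out samples whose amplitude falls below the Renyi threshold."""
    t = renyi_threshold(signal, alpha)
    out = signal.copy()
    out[np.abs(out) < t] = 0.0
    return out
```

The filtered signal would then feed the usual delay-and-sum beamformer unchanged.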

4.
王小兵  孙久运 《生物磁学》2011,(20):3954-3957
Objective: Medical images are contaminated by noise to varying degrees during acquisition, storage, and transmission, which greatly limits their use in clinical diagnosis. To remove this noise effectively, a hybrid filtering algorithm is proposed. Methods: The algorithm first applies a morphological opening to the image containing Gaussian and salt-and-pepper noise, then performs a two-dimensional wavelet decomposition to obtain high- and low-frequency coefficients. The low-frequency coefficients are kept unchanged, the high-frequency coefficients are passed through a Wiener filter, and the wavelet coefficients are finally reconstructed. Results: The hybrid algorithm, wavelet threshold denoising, median filtering, and Wiener filtering were each applied to medical images containing mixed noise; the PSNR of images denoised by the hybrid algorithm was clearly higher than that of the other three methods. Conclusion: The hybrid filtering algorithm is an effective method for removing noise from medical images.
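The pipeline described — opening, one-level 2-D wavelet decomposition, Wiener filtering of the detail (high-frequency) subbands only, then reconstruction — can be sketched end to end in numpy. The Haar wavelet, 3×3 windows, and the locally adaptive Wiener estimator are assumed choices; the paper does not name its wavelet or filter sizes.

```python
import numpy as np

def _shift_stack(img, size):
    r = size // 2
    p = np.pad(img, r, mode='edge')
    return np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(size) for j in range(size)])

def opening(img, size=3):
    # grayscale opening: erosion (local min) then dilation (local max)
    eroded = _shift_stack(img, size).min(axis=0)
    return _shift_stack(eroded, size).max(axis=0)

def wiener(x, size=3):
    # locally adaptive Wiener filter; noise variance = mean local variance
    m = _shift_stack(x, size).mean(axis=0)
    v = _shift_stack(x * x, size).mean(axis=0) - m * m
    gain = np.clip(1 - v.mean() / np.maximum(v, 1e-12), 0, 1)
    return m + gain * (x - m)

def haar2(x):
    # one-level orthonormal 2-D Haar decomposition (even-sized input)
    a, b, c, d = x[0::2, 0::2], x[0::2, 1::2], x[1::2, 0::2], x[1::2, 1::2]
    return (a+b+c+d)/2, (a-b+c-d)/2, (a+b-c-d)/2, (a-b-c+d)/2

def ihaar2(ll, lh, hl, hh):
    x = np.zeros((2*ll.shape[0], 2*ll.shape[1]))
    x[0::2, 0::2] = (ll+lh+hl+hh)/2
    x[0::2, 1::2] = (ll-lh+hl-hh)/2
    x[1::2, 0::2] = (ll+lh-hl-hh)/2
    x[1::2, 1::2] = (ll-lh-hl+hh)/2
    return x

def hybrid_denoise(img):
    """Opening -> Haar DWT -> Wiener on the three detail subbands
    (approximation band kept unchanged) -> inverse DWT."""
    ll, lh, hl, hh = haar2(opening(img))
    return ihaar2(ll, wiener(lh), wiener(hl), wiener(hh))
```

Opening removes the bright impulse noise before the wavelet stage, so the Wiener filter only has to suppress the remaining Gaussian component in the detail bands.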

5.
A hybrid filtering algorithm for removing noise from medical images (cited 1 time: 0 self-citations, 1 by others)
Objective: Medical images are contaminated by noise to varying degrees during acquisition, storage, and transmission, which greatly limits their use in clinical diagnosis. To remove this noise effectively, a hybrid filtering algorithm is proposed. Methods: The algorithm first applies a morphological opening to the image containing Gaussian and salt-and-pepper noise, then performs a two-dimensional wavelet decomposition to obtain high- and low-frequency coefficients. The low-frequency coefficients are kept unchanged, the high-frequency coefficients are passed through a Wiener filter, and the wavelet coefficients are finally reconstructed. Results: The hybrid algorithm, wavelet threshold denoising, median filtering, and Wiener filtering were each applied to medical images containing mixed noise; the PSNR of images denoised by the hybrid algorithm was clearly higher than that of the other three methods. Conclusion: The hybrid filtering algorithm is an effective method for removing noise from medical images.

6.
Insect image segmentation methods and their applications (cited 1 time: 0 self-citations, 1 by others)
王江宁  纪力强 《昆虫学报》2011,54(2):211-217
Automatic identification from images is a rapid way to identify insects, and image segmentation is a key step in it. A survey of recent domestic and international work shows that research on insect image segmentation is growing. With advances in computer image technology, insect image segmentation has absorbed many newer techniques from the broader segmentation field, such as level sets, edge flow, and intelligent segmentation combining shape, texture, and color cues (e.g., the JSEG method). Although many segmentation methods have been introduced into insect image research, segmentation remains the key obstacle to the wide application of insect images. Our analysis finds that existing methods tend to perform well on their own test sets but lack a unified evaluation standard, so many of them generalize poorly to insect images at large. To address these problems, a sound evaluation system for insect image segmentation is needed: we suggest building a unified insect image database and studying evaluation methods for insect image segmentation in depth; these are the most pressing tasks for current research on insect image segmentation.

7.
Objective: To combine MR brain tumor image segmentation with the method of moments in order to obtain the contours of specific organs and tissues. Methods: MR brain tumor images were segmented and the segmentation results were described with moments. After analyzing segmentation methods in common medical use, a deformable-model-based method was adopted; the images were processed following the corresponding theoretical model and implementation steps, the method was programmed in Visual C++ 6.0, and segmentation experiments were run on MR brain tumor images. Results: The segmented images show clear boundaries and low overall uncertainty, and the moment-based features extracted are effective for content-based image retrieval. Conclusion: The segmentation method is practical and performs well, providing an effective tool for further MR brain tumor image analysis and research.
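Moment description of a segmented region can be illustrated briefly. A sketch of central moments and the first Hu invariant φ1 = η20 + η02 for a binary region mask (the paper does not say which moment set it uses, so Hu's first invariant is an assumed example):

```python
import numpy as np

def central_moments(mask, p_max=2):
    """Central moments mu_pq of a binary region mask, plus its centroid."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu = {}
    for p in range(p_max + 1):
        for q in range(p_max + 1 - p):
            mu[(p, q)] = np.sum((xs - cx) ** p * (ys - cy) ** q)
    return mu, (cx, cy)

def hu_phi1(mask):
    """First Hu moment invariant: translation-invariant shape descriptor."""
    mu, _ = central_moments(mask)
    m00 = mu[(0, 0)]
    # normalized central moments eta_pq = mu_pq / m00^(1 + (p+q)/2)
    return (mu[(2, 0)] + mu[(0, 2)]) / m00 ** 2
```

Because the moments are taken about the centroid, φ1 does not change when the segmented region is translated — useful for retrieval across differently positioned scans.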

8.
To overcome the shortcomings of the traditional hybrid model of shock filtering and anisotropic diffusion, a new method for image enhancement and denoising is proposed. The method introduces an improved shock-filter term and an image-detail fidelity term into the enhancement-and-denoising equation simultaneously, so that the evolution adapts its magnitude to the image structure. Experiments show that the proposed method achieves good enhancement and denoising: biomedical images are smoothed well and their edges enhanced, while as much structural and detail information as possible is preserved, and computation time is greatly reduced.
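The general form of such hybrid models can be sketched as an explicit evolution coupling three terms: a shock-filter term that sharpens edges, Perona-Malik diffusion that smooths noise, and a fidelity term that keeps the result close to the input. This is a generic sketch, not the paper's exact (improved) equation; all weights are illustrative.

```python
import numpy as np

def _dx(u):
    p = np.pad(u, ((0, 0), (1, 1)), mode='edge')
    return (p[:, 2:] - p[:, :-2]) / 2

def _dy(u):
    p = np.pad(u, ((1, 1), (0, 0)), mode='edge')
    return (p[2:, :] - p[:-2, :]) / 2

def _lap(u):
    p = np.pad(u, 1, mode='edge')
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

def enhance_denoise(f, n_iter=20, dt=0.1, lam=0.2, mu=0.05, k=0.1):
    """u_t = -sign(lap u)|grad u| + lam*div(g(|grad u|) grad u) + mu*(f - u):
    shock sharpening + Perona-Malik diffusion + fidelity to the input."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux, uy = _dx(u), _dy(u)
        grad = np.hypot(ux, uy)
        g = 1.0 / (1.0 + (grad / k) ** 2)        # Perona-Malik conductance
        diffusion = _dx(g * ux) + _dy(g * uy)    # div(g * grad u)
        shock = -np.sign(_lap(u)) * grad         # edge-sharpening shock term
        u += dt * (shock + lam * diffusion + mu * (f - u))
    return u
```

On a blurred edge the shock term pushes pixels below the inflection down and those above it up, so the transition steepens while flat regions stay smooth.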

9.
[Objective] Among the many pests of oil-tea (Camellia oleifera) trees, larvae of the tea tussock moth Euproctis pseudoconspersa are among the most damaging. Automatic detection of the larvae requires segmenting their images, and the segmentation quality directly affects automatic recognition. [Methods] A segmentation algorithm for E. pseudoconspersa larva images based on maximum neighborhood difference and region merging is proposed: the differences of the three RGB components of adjacent pixels are computed, and adjacent pixels whose maximum difference is 0 are merged to produce an initial segmentation, which is then merged further according to a merging criterion to give the final result. [Results] Experiments show that the algorithm quickly and effectively separates the larvae from the background. [Conclusion] Compared with the JSEG algorithm, K-means clustering, and fast geometric deformable-model segmentation applied to the same larva images, the proposed method gives the best segmentation with a relatively short processing time.
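The initial merging rule from the abstract — merge 4-adjacent pixels whose maximum per-channel RGB difference is 0 — can be sketched with union-find; the paper's further merging criterion is not reproduced here.

```python
import numpy as np

def segment_by_max_diff(img):
    """Initial segmentation: 4-adjacent pixels with identical RGB values
    (maximum channel difference 0) are merged into one region via union-find.
    `img` is an (H, W, 3) integer image."""
    h, w, _ = img.shape
    parent = list(range(h * w))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    px = img.astype(int)
    for y in range(h):
        for x in range(w):
            if x + 1 < w and np.max(np.abs(px[y, x] - px[y, x + 1])) == 0:
                union(y * w + x, y * w + x + 1)
            if y + 1 < h and np.max(np.abs(px[y, x] - px[y + 1, x])) == 0:
                union(y * w + x, (y + 1) * w + x)
    return np.array([find(i) for i in range(h * w)]).reshape(h, w)
```

On a real photograph the strict "difference 0" rule over-segments, which is exactly why the abstract follows it with a second, criterion-based merging pass.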

10.
[Objective] Extracting and counting colonies on test films is a routine and important task in agriculture, the food industry, and healthcare. Most existing automatic colony-counting algorithms target Petri dishes and adapt poorly to colony count test films. Moreover, although current techniques separate ordinary touching objects well, the morphology of colonies makes the separation and counting of touching colonies insufficiently accurate. [Methods] To solve these problems, a colony segmentation and counting algorithm based on a target-color basis and gradient-direction matching is proposed. First, the colony color in the image is used as a basis and the image is transformed into that basis space, enhancing the contrast between colonies and background; next, the gradient-magnitude features of the colony image are used to filter the gradient directions; matching on gradient direction then separates touching colonies; finally, non-maximum suppression selects the colonies, which are counted. [Results] In experiments the counting accuracy of the algorithm reached 98.00%, meeting practical needs. [Conclusion] For colony segmentation and counting the algorithm is both accurate and robust, performing well on test films from different manufacturers; its accuracy drops, however, when detecting and segmenting large-area targets, so it is better suited to small targets such as colonies.
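The color-basis step can be read as projecting each pixel onto the direction of the target colony color, producing a scalar map in which colonies stand out from the background. This is an assumed reading of the abstract's "color basis" transform; the function and parameter names are hypothetical.

```python
import numpy as np

def project_onto_color_basis(img, colony_rgb):
    """Map an (H, W, 3) image to a scalar similarity map: the cosine
    similarity of each pixel's RGB vector to the target colony colour.
    High values mark colony-coloured pixels."""
    basis = np.asarray(colony_rgb, dtype=float)
    basis /= np.linalg.norm(basis)
    flat = img.reshape(-1, 3).astype(float)
    sims = flat @ basis / (np.linalg.norm(flat, axis=1) + 1e-9)
    return sims.reshape(img.shape[:2])
```

Thresholding or gradient analysis would then run on this enhanced map rather than on raw RGB.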

11.
Advances in the ecological filtering mechanisms of plant community assembly (cited 1 time: 0 self-citations, 1 by others)
许驭丹  董世魁  李帅  沈豪 《生态学报》2019,39(7):2267-2281
The formation and maintenance of biodiversity, i.e., the mechanism of community assembly, has long been a core question in community ecology. The deterministic side of plant community assembly is mainly the result of ecological filtering, comprising environmental filtering and biotic filtering, the latter including interspecific competition and intraspecific functional trait variation. Researchers have built numerous theories, methods, and models to explain and test how ecological filtering shapes community assembly, with notable success. However, many questions remain about how ecological filtering acts at different scales and how its components can be decomposed and quantified. This review summarizes recent progress on environmental filtering, interspecific competition, and intraspecific functional trait variation, and points out the shortcomings of current research. Future work should focus on decomposing and quantifying the components of ecological filtering, combine multiple research approaches, attend to the effects of spatiotemporal dynamics on plant community assembly, seek both the common and the distinctive features of assembly mechanisms across different plant communities, and integrate ecological filtering with other ecological processes and assembly mechanisms. Such efforts will help us better understand the role of ecological filtering in plant community assembly.

12.
Image segmentation of medical images is a challenging problem with several still not fully solved issues, such as noise interference and image artifacts. Region-based and histogram-based segmentation methods have been widely used in image segmentation. Problems arise when we use these methods, such as the selection of a suitable threshold value for the histogram-based method and the over-segmentation followed by time-consuming merge processing in the region-based algorithm. To provide an efficient approach that not only produces better results but also maintains low computational complexity, a new region-dividing technique is developed for image segmentation, which combines the advantages of both region-based and histogram-based methods. The proposed method is applied to a challenging application: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) segmentation in brain MR images. The method is evaluated on both simulated and real data, and compared with other segmentation techniques. The obtained results have demonstrated its improved performance and robustness.

13.
In this paper, we present a weighted radial edge filtering algorithm with adaptive recovery of dropout regions for the semi-automatic delineation of endocardial contours in short-axis echocardiographic image sequences. The proposed algorithm requires minimal user intervention at the end-diastolic frame of the image sequence for specifying the candidate points of the contour. The region of interest is identified by fitting an ellipse in the region defined by the specified points. Subsequently, the ellipse centre is used for originating the radial lines for filtering. A weighted radial edge filter is employed for the detection of edge points. The outliers are corrected by global as well as local statistics. Dropout regions are recovered by incorporating the important temporal information from the previous frame by means of a recursive least squares adaptive filter. This ensures fairly accurate segmentation of the cardiac structures for further determination of the functional cardiac parameters. The proposed algorithm was applied to 10 datasets over a full cardiac cycle and the results were validated by comparing computer-generated boundaries to those manually outlined by two experts using the Hausdorff distance (HD) measure, radial mean square error (rmse) and a contour similarity index. The rmse was 1.83 mm with an HD of 5.12 ± 1.21 mm. We have also compared our results with two existing approaches, level set and optical flow. The results indicate an improvement when compared with ground truth due to incorporation of temporal clues. The weighted radial edge filtering algorithm in conjunction with adaptive dropout recovery offers semi-automatic segmentation of heart chambers in 2D echocardiography sequences for accurate assessment of global left ventricular function to guide therapy and staging of cardiovascular diseases.
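The radial edge search at the heart of this approach can be sketched in a simplified, unweighted form: cast rays from the fitted ellipse centre, sample the image along each ray, and take the radius of the strongest intensity gradient as the candidate edge point. The weighting, outlier correction, and temporal dropout recovery of the paper are omitted.

```python
import numpy as np

def radial_edge_points(img, center, n_rays=36, r_max=None):
    """Cast `n_rays` radial rays from `center` (row, col), sample the image
    by nearest neighbour along each ray, and return per-ray the radius of
    the strongest intensity step (simplified radial edge filter)."""
    h, w = img.shape
    cy, cx = center
    if r_max is None:
        r_max = min(h, w) // 2 - 1
    radii = np.arange(1, r_max)
    edges = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        ys = np.clip((cy + radii * np.sin(theta)).round().astype(int), 0, h - 1)
        xs = np.clip((cx + radii * np.cos(theta)).round().astype(int), 0, w - 1)
        profile = img[ys, xs]
        grad = np.abs(np.diff(profile))
        edges.append(radii[np.argmax(grad) + 1])
    return np.array(edges)
```

In the full algorithm each ray's gradient would be weighted and outlier radii smoothed against neighbouring rays and the previous frame.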

14.
Lütz-Meindl U  Aichinger N 《Protoplasma》2004,223(2-4):155-162
Summary. In the present study energy-filtering transmission electron microscopy by use of an in-column spectrometer is employed as a powerful tool for ultrastructural analysis of plant cells. Images of unstained very thin (50 nm) and thick (140 nm) sections of the unicellular green alga Micrasterias denticulata, as a model system for a growing plant cell, taken by conventional transmission electron microscopy are compared to those obtained from filtering at zero energy loss (elastic bright field) and to those generated by energy filtering below the carbon-specific absorption edge at about 250 eV. The results show that the high-contrast images produced by the latter technique are distinctly superior in contrast and information content to micrographs taken at conventional transmission electron microscopy mode or at elastic bright field. Post- or en bloc staining with heavy metals, which is indispensable for conventional bright-field transmission electron microscopy, can be completely omitted. Delicate structural details such as membranous or filamentous connections between organelles, organelle interactions, or vesicle and vacuole contents are clearly outlined against the cytoplasmic background. Also, immunoelectron microscopic localization of macromolecules benefits from energy-filtering transmission electron microscopy by a better and more accurate assignment of antigens and structures and by facilitating the detection of immunomarkers without renunciation of contrast.

15.
In this paper, we present an objective method for localization of proteins in blood brain barrier (BBB) vasculature using standard immunohistochemistry (IHC) techniques and bright-field microscopy. Images from the hippocampal region at the BBB are acquired using bright-field microscopy and subjected to our segmentation pipeline which is designed to automatically identify and segment microvessels containing the protein glucose transporter 1 (GLUT1). Gabor filtering and k-means clustering are employed to isolate potential vascular structures within cryosectioned slabs of the hippocampus, which are subsequently subjected to feature extraction followed by classification via decision forest. The false positive rate (FPR) of microvessel classification is characterized using synthetic and non-synthetic IHC image data for image entropies ranging between 3 and 8 bits. The average FPR for synthetic and non-synthetic IHC image data was found to be 5.48% and 5.04%, respectively.

16.
《IRBM》2020,41(6):304-315
Vascular segmentation is often required in medical image analysis for various imaging modalities. Despite the rich literature in the field, the proposed methods most of the time need adaptation to the particular investigation and may sometimes lack the desired accuracy in terms of true positive and false positive detection rate. This paper proposes a general method for vascular segmentation based on locally connected filtering applied in a multiresolution scheme. The filtering scheme performs progressive detection and removal of the vessels from the image relief at each resolution level, by combining directional 2D-3D locally connected filters (LCF). An important property of the LCF is that it preserves (positive contrasted) structures in the image if they are topologically connected with other similar structures in their local environment. Vessels, which appear as curvilinear structures, can be filtered out by an appropriate LCF set-up which will minimally affect sheet-like structures. The implementation in a multiresolution framework allows dealing with different vessel sizes. The outcome of the proposed approach is illustrated on several image modalities including lung, liver and coronary arteries. It is shown that besides preserving high accuracy in detecting small vessels, the proposed technique is less sensitive with respect to noise and the presence of pathologies of positive-contrast appearance on the images. The detection accuracy is compared with a previously developed approach on the 20-patient database from the VESSEL12 challenge.

17.
Functional characters have the potential to act as indicators of species turnover between local communities. Null models provide a powerful statistical approach to test for patterns using functional character information. A combined null model/functional character approach provides the ability to distinguish between the effect of competition and environmental filtering on species turnover. We measured 13 functional characters relating directly to resource use for the fish species found in French lakes. We combined this functional character data with a null model approach to test whether co-occurring species overlapped more or less than expected at random for four primary niche axes. We used an environmentally constrained null model approach to determine if the same mechanisms were responsible for species turnover at different sections of the altitudinal gradient. Functional diversity indices were used to examine the variation in functional character diversity with altitude, as a test of the hypothesis that competitive intensity decreases with increasing environmental adversity. The unconstrained null model showed that environmental filtering was the dominant influence on species turnover between lakes. In the constrained null model, there was much less evidence for environmental filtering, emphasising the strong effect of altitude on turnover in functional character values between local communities. Different results were obtained for low-altitude and high-altitude lake subsets, with more evidence for the effect of environmental filtering being found in the high-altitude lakes. This demonstrates that different processes may influence species turnover throughout an environmental gradient. Functional diversity values showed a slight decrease with altitude, indicating that there was only weak evidence that competitive intensity decreased with increasing altitude. Variation in resource availability and environmental stress probably cause the observed turnover in functional characters along the altitudinal gradient, though the effects of dispersal limitation and species introductions in high-altitude lakes cannot be ruled out.

18.
Intensity normalization is an important pre-processing step in the study and analysis of DaTSCAN SPECT imaging. As most automatic supervised image segmentation and classification methods base their assumptions regarding the intensity distributions on a standardized intensity range, intensity normalization takes on a very significant role. In this work, a comparison between different novel intensity normalization methods is presented. These proposed methodologies are based on Gaussian Mixture Model (GMM) image filtering and mean-squared error (MSE) optimization. The GMM-based image filtering method is achieved according to a probability threshold that removes the clusters whose likelihoods are negligible in the non-specific regions. The MSE optimization method consists of a linear transformation that is obtained by minimizing the MSE in the non-specific region between the intensity-normalized image and the template. The proposed intensity normalization methods are compared to: i) a standard approach based on the specific-to-non-specific binding ratio that is widely used, and ii) a linear approach based on the α-stable distribution. This comparison is performed on a DaTSCAN image database comprising analysis and classification stages for the development of a computer aided diagnosis (CAD) system for Parkinsonian syndrome (PS) detection. In addition, these proposed methods correct spatially varying artifacts that modulate the intensity of the images. Finally, using the leave-one-out cross-validation technique over these two approaches, the system achieves accuracy of up to 92.91%, sensitivity of 94.64% and specificity of 92.65%, outperforming previous approaches based on a standard and a linear approach, which are used as a reference. The use of advanced intensity normalization techniques, such as GMM-based image filtering and MSE optimization, improves the diagnosis of PS.
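The MSE-optimization method described — a linear transformation a·I + b fitted by minimizing the MSE against the template over the non-specific region — reduces to an ordinary least-squares fit. A minimal sketch (function name and return convention are assumptions):

```python
import numpy as np

def linear_intensity_normalize(img, template, mask):
    """Find a, b minimizing ||a*img + b - template||^2 over the
    non-specific region `mask`, then apply the linear transform to
    the whole image (the MSE-optimization normalization)."""
    x = img[mask].ravel().astype(float)
    y = template[mask].ravel().astype(float)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a * img + b, (a, b)
```

Restricting the fit to the non-specific region keeps disease-related specific-binding differences from being normalized away.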

19.
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT), which ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
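STAPLE (Warfield et al.) fuses candidate segmentations by EM: it alternates between estimating a consensus probability map and re-estimating each rater's sensitivity and specificity. A minimal binary version can be sketched as follows; the foreground prior, initialization, and iteration count are assumed simplifications of the full algorithm.

```python
import numpy as np

def staple(decisions, prior=0.5, n_iter=30):
    """Minimal binary STAPLE. `decisions` is (n_raters, n_voxels) of {0,1}.
    Returns the consensus probability map W and per-rater sensitivity p
    and specificity q estimates."""
    D = np.asarray(decisions, float)
    n_r, _ = D.shape
    p = np.full(n_r, 0.9)   # initial sensitivities
    q = np.full(n_r, 0.9)   # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate rater performance against the consensus
        p = (D @ W) / (W.sum() + 1e-12)
        q = ((1 - D) @ (1 - W)) / ((1 - W).sum() + 1e-12)
    return W, p, q
```

Because unreliable raters receive low sensitivity/specificity estimates, their votes are automatically down-weighted in the consensus — the property that lets STAPLE outperform a plain majority vote.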
