Similar Documents
Found 18 similar documents (search time: 343 ms)
1.
Research on the Snake Model in Ultrasound Image Processing    Cited by: 3 (self-citations: 0, other citations: 3)
The Snake model is an effective contour-extraction algorithm driven by high-level information; its advantage is that the object contour, both during evolution and in the final result, is a complete curve, which has attracted wide attention. Because medical ultrasound images have a low signal-to-noise ratio, classical edge-detection algorithms give poor results on them, so many variants of the Snake model have been proposed and are increasingly applied to medical ultrasound image processing. In this paper, after a series of preprocessing steps on breast ultrasound images (threshold segmentation, morphological filtering, and so on), an improved Snake model is used to extract tumor boundaries, with good results.
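The preprocessing stage the abstract describes (threshold segmentation followed by morphological filtering) can be sketched as follows; this is a minimal NumPy/SciPy illustration, where the threshold rule, structuring element, and toy image are our assumptions, not the paper's:

```python
import numpy as np
from scipy import ndimage

def preprocess(img):
    """Global thresholding followed by a morphological opening that
    suppresses small speckle-like responses (illustrative parameters)."""
    thresh = img.mean() + 2 * img.std()          # crude global threshold
    mask = img > thresh
    return ndimage.binary_opening(mask, structure=np.ones((3, 3)))

# toy "ultrasound" image: a bright lesion-like disc on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] += 0.6
mask = preprocess(img)
```

The resulting binary mask would then seed the improved Snake model's initial contour.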

2.
Objective: To combine MR brain tumor image segmentation with moment-based methods in order to obtain the contours of specific organs and tissues. Methods: MR brain tumor images were segmented and the segmentation results described with moments. After reviewing commonly used medical image segmentation methods, a segmentation method based on deformable models was adopted; the images were processed following the corresponding algorithmic model and implementation steps, the method was implemented in Visual C++ 6.0, and segmentation experiments were run on MR brain tumor images. Results: The segmented images show clear boundaries and low overall uncertainty, and the image features extracted with moment techniques are effective for content-based image retrieval. Conclusion: The segmentation method is practical and produces good results, providing an effective tool for further analysis and study of MR brain tumor images.
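The moment description applied to the segmented region can be illustrated with a short sketch; the first Hu invariant below is a standard translation/scale/rotation-invariant shape feature of the kind the abstract refers to (the toy binary region is our assumption):

```python
import numpy as np

def central_moments(mask):
    """Central moments mu_pq (p, q <= 2) of a binary region."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    return {(p, q): ((xs - cx) ** p * (ys - cy) ** q).sum()
            for p in range(3) for q in range(3)}

def hu_phi1(mask):
    """First Hu invariant phi1 = eta20 + eta02; it is invariant to
    translation, scale and rotation, and equals about 1/6 for a solid square."""
    mu = central_moments(mask)
    eta = lambda p, q: mu[(p, q)] / mu[(0, 0)] ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)

square = np.zeros((32, 32), bool)
square[8:24, 8:24] = True
phi1 = hu_phi1(square)
```

Such invariants make segmented tumor shapes comparable across scans, which is what enables the content-based retrieval mentioned in the results.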

3.
Objective: To address the drawbacks of the GVF Snake model, namely convergence to local minima and sensitivity to the initial contour position, a dynamic directional gradient vector flow (DDGVF) model is proposed to make the algorithm better suited to medical image segmentation. Methods: The active contour model's ability to extract and track object contours in a specific region was applied to medical images such as CT, MRI, and ultrasound to obtain the contours of particular organs and tissues. Results: The dynamic directional gradient vector flow field (DDGVF) extracts brain tumor regions well. Conclusion: The method segments and extracts the tumor lesion region from brain tumor images effectively, providing a reliable basis for subsequent description and analysis of texture, shape, and other features.

4.
Objective: To extract heart chamber contours from echocardiograms through ultrasound image preprocessing and an improved segmentation method. Methods: First, speckle-index-based filtering was applied to denoise the ultrasound images. Second, a piecewise nonlinear gray-level transform was applied to increase image contrast. Finally, an improved level-set algorithm based on the C-V (Chan-Vese) model segmented the images, yielding an accurate initial contour. Results: (1) Speckle-index-based filtering removed noise from the ultrasound images without losing detail. (2) The piecewise nonlinear gray-level transform effectively improved contrast. (3) The improved C-V model successfully segmented ultrasound images containing speckle noise. Conclusion: The proposed preprocessing and segmentation methods effectively extract heart chamber contours and reduce the influence of speckle noise on the segmentation results.
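The second step, a piecewise nonlinear gray-level transform, can be sketched as below; the breakpoints and the S-curve used for the mid-gray band are illustrative assumptions, not the paper's actual transform:

```python
import numpy as np

def piecewise_stretch(img, low=0.2, high=0.8):
    """Piecewise nonlinear gray-level transform (illustrative): compress the
    dark and bright tails into [0, 0.1] and [0.9, 1], and stretch the
    mid-gray band with a smooth S-curve to raise contrast."""
    out = np.empty_like(img, dtype=float)
    dark, bright = img <= low, img >= high
    mid = ~dark & ~bright
    out[dark] = img[dark] * (0.1 / low)
    t = (img[mid] - low) / (high - low)
    out[mid] = 0.1 + 0.8 * np.sin(0.5 * np.pi * t) ** 2
    out[bright] = 0.9 + (img[bright] - high) * (0.1 / (1 - high))
    return out

g = piecewise_stretch(np.array([0.0, 0.4, 0.6, 1.0]))
```

Note how the mid-gray gap between 0.4 and 0.6 is doubled in the output, which is the contrast gain the method relies on before level-set segmentation.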

5.
This paper proposes a new method combining maximum entropy with an improved PCNN (Pulse Coupled Neural Network), in which maximum entropy determines the number of PCNN iterations. The proposed method requires no manual selection of PCNN parameters, can automatically segment a variety of medical images, and uses maximum entropy to obtain the optimal segmentation result. The method is of practical significance for applying PCNN theory to medical image segmentation.
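The maximum-entropy criterion used to pick the best segmentation can be illustrated in isolation with a Kapur-style threshold search; this sketch shows only the entropy criterion, not the PCNN iteration it steers in the paper, and the bimodal toy data is our assumption:

```python
import numpy as np

def max_entropy_threshold(img, bins=256):
    """Kapur-style maximum-entropy threshold: choose the split that
    maximizes the summed entropies of the two gray-level distributions."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0 = p[:t][p[:t] > 0] / w0
        p1 = p[t:][p[t:] > 0] / w1
        h = -(p0 * np.log(p0)).sum() - (p1 * np.log(p1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return edges[best_t]

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.25, 0.03, 2000),
                      rng.normal(0.75, 0.03, 1000)]).clip(0, 1)
thr = max_entropy_threshold(img)
```

On well-separated bimodal intensities the entropy-maximizing split lands between the two modes.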

6.
Based on Monte Carlo simulation and a photon-transport model from tissue optics, this paper proposes a new image segmentation algorithm that reduces the complex segmentation problem to a large number of simple random photon-transport experiments, recovering the target region by analyzing the transport behavior. In subsequent experiments, a simple optical transport model was built for the problem of cell-nucleus extraction and used to segment both synthetic and real images. The results on synthetic images demonstrate the feasibility and some advantages of the algorithm, while the results on real images reveal its shortcomings; the paper analyzes these problems and the aspects of the algorithm that need improvement.
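The idea of turning segmentation into many simple photon-transport experiments can be sketched as a toy Monte Carlo simulation; everything here (random-walk propagation, the absorption rule, the synthetic image) is an illustrative assumption, as the paper's tissue-optics model is considerably richer:

```python
import numpy as np

def photon_absorption_map(img, n_photons=2000, absorb_thresh=0.5, seed=0):
    """Toy Monte Carlo sketch: each photon starts at a random pixel, takes
    random 4-neighbour steps, and is absorbed where the intensity exceeds a
    threshold; the accumulated absorption map highlights the target region."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    hits = np.zeros((H, W))
    steps = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
    for _ in range(n_photons):
        y, x = rng.integers(H), rng.integers(W)
        for _ in range(50):                      # bounded free path
            if img[y, x] > absorb_thresh:        # absorbed by target tissue
                hits[y, x] += 1
                break
            dy, dx = steps[rng.integers(4)]
            y = int(np.clip(y + dy, 0, H - 1))
            x = int(np.clip(x + dx, 0, W - 1))
    return hits

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0                          # absorbing "nucleus"
hits = photon_absorption_map(img)
```

The absorption counts concentrate on the absorbing region, which is then read off as the segmentation target.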

7.
Dendrochronology requires counting tree rings and measuring ring widths in order to infer environmental change and tree-growth information, so accurate extraction of ring features is essential. Precisely identifying earlywood, latewood, and bark in ring images is the first step toward automated measurement of ring parameters. During growth, the boundary between earlywood and latewood can be blurred, and knots and false rings occur; in addition, disc surfaces acquire burrs and noise during felling and collection, so traditional image segmentation algorithms struggle to achieve good results. Drawing on the strengths of deep neural networks, this paper builds a semantic segmentation model for tree-ring images based on the U-Net convolutional neural network. First, 100 tree-ring disc images were annotated and augmented by rotation, perspective transformation, and image deformation into a dataset of 20,000 images, of which 16,000 were randomly selected as the training set and 4,000 as the test set. Second, a U-Net-based segmentation network for ring-disc images was designed in the TensorFlow deep learning framework according to the characteristics of the dataset. The training samples were then fed into the network and the optimization parameters tuned, and the network was trained iteratively until the evaluation metrics and loss function no longer changed. Finally, the trained model segmented the test samples and the segmentation metrics were evaluated. The results show that the algorithm effectively avoids interference from burrs, saw marks, and knots and segments the latewood and bark regions completely, achieving a mean accuracy of 96.51% and a mean region overlap of 82.30% on the 4,000 test images. Compared with traditional image-processing algorithms, the U-Net-based ring-image segmentation algorithm adopted here achieves better segmentation results with stronger generalization and robustness.
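The augmentation step (growing 100 annotated discs into 20,000 samples) can be sketched with the simplest geometric variants; this is a simplified stand-in using only rotations and mirroring, whereas the paper also uses perspective transforms and deformation:

```python
import numpy as np

def augment(img):
    """Return the eight dihedral variants (four rotations, each optionally
    mirrored) of a square image; a minimal augmentation sketch."""
    variants = []
    for k in range(4):
        r = np.rot90(img, k)
        variants += [r, np.fliplr(r)]
    return variants

disc = np.arange(16).reshape(4, 4)   # stand-in for an annotated disc image
aug = augment(disc)
```

In practice the same transform must be applied to the image and its annotation mask together so that the training pairs stay aligned.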

8.
Objective: To study automatic segmentation of the hippocampus in magnetic resonance (MR) brain images and morphological analysis of the hippocampus, providing a basis for early diagnosis of Alzheimer's disease (AD). Methods: Twenty AD patients and 60 normal controls underwent 3D volumetric T1-weighted MRI. A three-dimensional active appearance model of the hippocampus was built and used to automatically identify and segment the hippocampus in each individual's brain MR images. Statistical shape models of the hippocampus were then built for the normal control group and the AD group, and differences in hippocampal shape between the two groups were compared. Results: Hippocampal volume measurements from the automatic 3D segmentation did not differ statistically from manual segmentation (P>0.05); the hippocampal head was atrophied in AD patients (P<0.05). Conclusion: Automatic identification and 3D segmentation of the hippocampus in MR brain images based on active appearance models is accurate and reliable; hippocampal head atrophy may serve as one basis for AD diagnosis.

9.
This paper describes a knowledge-based automatic segmentation method for three-dimensional medical images, applied to the segmentation and analysis of intracranial hemorrhage (ICH). First, CT films are digitized and the digitized films automatically classified as normal or abnormal. Then, thresholding combined with fuzzy C-means clustering partitions each image into multiple regions of uniform brightness. Finally, using prior knowledge and predefined rules, a knowledge-based expert system labels each region as background, calcification, hematoma, skull, or brainstem.
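The fuzzy C-means step can be sketched on 1-D intensities; the evenly spread initialization and the toy data are our assumptions (the paper combines this with thresholding and rule-based labeling):

```python
import numpy as np

def fcm_1d(x, c=3, m=2.0, iters=50):
    """Minimal fuzzy C-means on a 1-D intensity sample: alternate between
    membership updates u ~ d^(-2/(m-1)) and weighted center updates."""
    centers = np.linspace(x.min(), x.max(), c)   # illustrative init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)        # fuzzy memberships
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centers)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(mu, 0.02, 300) for mu in (0.1, 0.5, 0.9)])
centers = fcm_1d(x)
```

Each pixel's final membership vector then says how strongly it belongs to each brightness class, which the expert-system rules can label.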

10.
[Objective] Foreground-background segmentation of butterfly images with complex backgrounds is difficult. This study explores automatic butterfly image segmentation based on deep-learning salient-object detection. [Methods] The F3Net salient-object-detection algorithm was trained on the DUTS-TR dataset to build a foreground-background prediction model, which was then applied to a dataset of butterfly images with complex backgrounds for automatic foreground-background segmentation. On this basis, transfer learning was adopted, keeping the ResNet backbone unchanged and…

11.
Predictive models of tumor response based on heterogeneity metrics in medical images, such as textural features, are highly promising. However, the demonstrated sensitivity of these features to noise affects the models being developed. An in-depth analysis of the influence of noise on texture-feature extraction was performed, on the assumption that improving information quality can also improve the predictive model. A heuristic approach was used that recognizes from the outset that noise has a texture of its own, and its effect on the quantitative signal data was analyzed. A simple procedure for estimating a noise image is presented, which makes it possible to extract noise-texture features for each observation. The distance between the textural features of the signal image and the estimated noise image identifies the features affected by noise in each observation so that, for example, they can be excluded from the model. A demonstration was carried out on synthetic images using realistic noise models found in medical imaging. The conclusions were then applied to a public cohort of clinical FDG-PET images to show how the predictive model could be improved: using noise-texture information yielded a gain of 10 to 20% in the area under the receiver operating characteristic curve and an improvement of 20 to 30% in the estimated model quality.

12.
Conducting tele-3D computer-assisted operations, as well as other telemedicine procedures, often requires the highest possible quality of transmitted medical images and video. Unfortunately, these data types carry high telecommunication and storage costs that sometimes prevent more frequent use of such procedures. We present a novel algorithm for lossless compression of medical images that substantially reduces telecommunication and storage costs. The algorithm models the image properties around the current (still unknown) pixel and adapts itself to the local image region. The main contribution of this work is an enhancement of the well-known predictor-blend approach through highly adaptive determination of the blending context on a pixel-by-pixel basis using a classification technique. We show that this approach is well suited to medical image data compression. Results obtained with the proposed compression method on medical images are very encouraging, beating several well-known lossless compression methods. The proposed predictor can also be used in other image-processing applications such as segmentation and extraction of image regions.
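To make the predictor idea concrete, here is a sketch of one classic fixed predictor, the median edge detector (MED) used in JPEG-LS; the paper's contribution is to blend several such predictors adaptively per pixel, which this simple single-predictor sketch does not attempt:

```python
import numpy as np

def med_predict(img):
    """Median edge detector: predict each pixel from its west (a), north (b)
    and northwest (c) causal neighbours; out-of-image neighbours are 0."""
    img = img.astype(int)
    pred = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0
            if c >= max(a, b):
                pred[y, x] = min(a, b)       # horizontal/vertical edge cases
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c       # smooth-gradient case
    return pred

img = np.add.outer(np.arange(8), np.arange(8))   # smooth ramp image
res = img - med_predict(img)
```

On smooth regions the residuals are small and low-entropy, which is what an entropy coder then exploits for lossless compression.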

13.
T. Janani, Y. Darak, M. Brindha. IRBM, 2021, 42(2): 83-93
Recent advances in digital medical imaging and cloud storage are creating more demand for efficient and secure image retrieval and management. Medical images are highly sensitive to change: any modification of their content may lead to an erroneous medical diagnosis. Securing medical images is therefore essential, and the major requirement is that their sensitive content be preserved at reconstruction. The proposed methodology performs secure image encryption and efficient search over an encrypted medical image database without leaking any sensitive data. It also ensures medical data integrity by introducing an efficient recovery mechanism for the region of interest (ROI) of the image. The scheme derives recovery information from the ROI of the medical data and embeds it in the region of non-interest (RONI) using an integer wavelet transform (IWT), which acts as a reversible watermark. If the ROI is altered or tampered with at the third-party end, the tampering can be detected and the ROI recovered from the embedded recovery data. In addition, the model includes a copyright-protection scheme to identify authorized users who illegally duplicate and distribute the retrieved image to unauthorized entities.
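The reason an integer wavelet transform suits reversible watermarking is that it maps integers to integers and is exactly invertible. A minimal sketch of the 1-D integer Haar step (the S-transform) illustrates this; the specific IWT and embedding strategy of the paper are not reproduced here:

```python
import numpy as np

def haar_iwt(x):
    """Forward 1-D integer Haar (S-transform): floor-average s and
    difference d, both integers, computed from even/odd sample pairs."""
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    d = a - b
    s = (a + b) >> 1                 # floor of the pair average
    return s, d

def haar_iwt_inv(s, d):
    """Exact inverse: recover the pair (a, b) from (s, d) with no loss."""
    a = s + ((d + 1) >> 1)
    b = a - d
    out = np.empty(2 * len(s), int)
    out[0::2], out[1::2] = a, b
    return out

rng = np.random.default_rng(0)
x = rng.integers(0, 256, 64)         # stand-in for a row of RONI pixels
s, d = haar_iwt(x)
x_back = haar_iwt_inv(s, d)
```

Because the round trip is bit-exact, recovery data hidden in the transform coefficients can later be removed, restoring the cover region perfectly.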

14.
About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models, which explicitly address the global structure of images in every layer, sparse HMAX addresses local-to-global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in the human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.

15.
Leaf disease is an important factor limiting the quality and yield of the soybean plant. Insufficient control of soybean diseases damages the local ecological environment and disrupts the stability of the food chain. To overcome the low accuracy of traditional deep learning models in recognizing soybean leaf diseases and the complexity of chemical analysis, this study proposes a recognition model for soybean leaf diseases based on an improved deep learning model. First, four types of soybean disease (Septoria glycines Hemmi, soybean brown leaf spot, soybean frogeye leaf spot, and soybean Phyllosticta leaf spot) were taken as the research objects. Second, image preprocessing and data expansion of the original images were carried out using image registration, image segmentation, region calibration, and data enhancement. The resulting dataset of 53,250 samples was randomly divided into training, validation, and test sets in the ratio 7:2:1. Third, the convolutional-layer weights of a model pre-trained on the open ImageNet dataset were transferred to the convolutional layers of a ResNet18 model, and the global average pooling layer and fully connected layer were rebuilt to construct the TRNet18 recognition model. Finally, the recognition accuracy for the four leaf diseases reached 99.53%, the Macro-F1 was 99.54%, and the average recognition time was 0.047184 s. Compared with the AlexNet, ResNet18, ResNet50, and TRNet50 models, the TRNet18 model improved recognition accuracy and Macro-F1 by 6.03% and 5.99% respectively and reduced recognition time by 16.67%. The results show that the proposed TRNet18 model has higher classification accuracy and stronger robustness; it can serve as a reference for accurate recognition of other crop diseases and can also be ported to mobile terminals for recognition of crop leaf diseases.
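The 7:2:1 random partition of the 53,250 samples can be sketched in a few lines; the seed and the index-based interface are our assumptions:

```python
import numpy as np

def split_721(n, seed=0):
    """Shuffle n sample indices and split them 7:2:1 into
    train / validation / test subsets (integer arithmetic keeps
    the split sizes exact)."""
    idx = np.random.default_rng(seed).permutation(n)
    n_tr = n * 7 // 10
    n_va = n * 2 // 10
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

tr, va, te = split_721(53250)
```

The shuffled indices are then used to select the corresponding image files for each subset.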

16.
The ultimate goal of machine vision is image understanding: the ability not only to recover image structure but also to know what it represents. By definition, this involves the use of models which describe and label the expected structure of the world. Over the past decade, model-based vision has been applied successfully to images of man-made objects. It has proved much more difficult to develop model-based approaches to the interpretation of images of complex and variable structures such as faces or the internal organs of the human body (as visualized in medical images). In such cases it has been problematic even to recover image structure reliably without a model to organize the often noisy and incomplete image evidence. The key problem is that of variability. To be useful, a model needs to be specific, that is, capable of representing only "legal" examples of the modelled object(s). It has proved difficult to achieve this while allowing for natural variability. Recent developments have overcome this problem: it has been shown that specific patterns of variability in shape and grey-level appearance can be captured by statistical models that can be used directly in image interpretation. The details of the approach are outlined, and practical examples from medical image interpretation and face recognition illustrate how previously intractable problems can now be tackled successfully. It is also interesting to ask whether these results provide any insights into natural vision; for example, we show that the apparent changes in shape which result from viewing three-dimensional objects from different viewpoints can be modelled quite well in two dimensions, which may lend some support to the "characteristic views" model of natural vision.

17.
The automatic computerized detection of regions of interest (ROI) is an important step in medical image processing and analysis. The reasons are many, including the increasing amount of available medical imaging data, inter-observer and inter-scanner variability, and the need to improve the accuracy of automatic detection so as to help doctors diagnose faster and on time. A novel algorithm based on visual saliency is developed here to identify tumor regions in MR images of the brain. The GBM saliency detection model is designed by taking a cue from the concept of visual saliency in natural scenes. A visually salient region is typically rare in an image and contains highly discriminating information, with attention immediately focused upon it. Although color is typically considered the most important feature in a bottom-up saliency detection model, we circumvent this issue in the inherently gray-scale MR framework. We develop a novel pseudo-coloring scheme based on the three MRI sequences FLAIR, T2, and T1C (contrast-enhanced with gadolinium). A bottom-up strategy, based on a new pseudo-color distance and the spatial distance between image patches, is defined for highlighting the salient regions in the image. This multi-channel representation of the image and the saliency detection model help to isolate the tumor region automatically and quickly for subsequent delineation, as required in medical diagnosis. The effectiveness of the proposed model is evaluated on MRI of 80 subjects from the BRATS database in terms of saliency map values. Using ground truth of the tumor regions for both high- and low-grade gliomas, the results are compared with four highly cited saliency detection models from the literature. In all cases the AUC scores from the ROC analysis exceed 0.999 ± 0.001 across different tumor grades, sizes, and positions.
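The patch-rarity idea behind the bottom-up strategy can be sketched as follows; this simplified sketch scores each patch only by its mean feature distance to the other patches, whereas the paper additionally uses a pseudo-color distance over three MRI channels and a spatial weighting:

```python
import numpy as np

def patch_saliency(img, k=8):
    """Bottom-up rarity saliency: score each non-overlapping k-by-k patch
    by its mean feature distance to every other patch; rare patches
    (e.g. a tumor among normal tissue) get high scores."""
    H, W = img.shape
    coords, feats = [], []
    for y in range(0, H - k + 1, k):
        for x in range(0, W - k + 1, k):
            coords.append((y, x))
            feats.append(img[y:y + k, x:x + k].ravel())
    F = np.asarray(feats, float)
    dist = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    return coords, dist.mean(axis=1)

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0                  # one anomalous bright patch
coords, scores = patch_saliency(img)
```

The highest-scoring patch marks the salient region, which subsequent delineation would refine into a tumor boundary.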

18.
This paper investigates image processing and pattern recognition techniques to estimate atmospheric visibility from the visual content of images taken by off-the-shelf cameras. We propose a prediction model that first relates image contrast, measured with standard image processing techniques, to atmospheric transmission. This is then related to the most common measure of atmospheric visibility, the coefficient of light extinction. The regression model is learned from a training set of images and corresponding light extinction values measured with a transmissometer. The major contributions of this paper are twofold. First, we propose two predictive models that incorporate multiple scene regions into the estimation: regression trees and multivariate linear regression. Incorporating multiple regions is important, since regions at different distances are effective for estimating light extinction under different visibility regimes. The second major contribution is a semi-supervised learning framework that incorporates unlabeled training samples to improve the learned models. Leveraging unlabeled data is important since, in many applications, it is easier to obtain observations than to label them. We evaluate our models using a dataset of images and ground-truth light extinction values from a visibility camera system in Phoenix, Arizona.
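The multivariate linear regression variant, mapping per-region contrasts to the coefficient of light extinction, can be sketched with ordinary least squares; the synthetic training data below stand in for the paper's transmissometer-labelled images and the coefficients are invented for illustration:

```python
import numpy as np

def fit_extinction(contrasts, extinction):
    """Multivariate linear regression from per-region image contrasts to
    the coefficient of light extinction; returns region weights plus an
    intercept, fitted by ordinary least squares."""
    X = np.column_stack([contrasts, np.ones(len(contrasts))])
    coef, *_ = np.linalg.lstsq(X, extinction, rcond=None)
    return coef

# synthetic stand-in for transmissometer-labelled training data
rng = np.random.default_rng(1)
C = rng.uniform(0, 1, (200, 3))          # contrast in three scene regions
beta = np.array([-2.0, -1.0, -0.5])      # hazier air -> lower contrast
y = C @ beta + 3.0 + rng.normal(0, 0.01, 200)
coef = fit_extinction(C, y)
```

Using several regions lets distant regions drive the fit in clear conditions and nearby regions take over in haze, which is the motivation the paper gives for multi-region models.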


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号