Similar Documents
17 similar documents found (search time: 876 ms)
1.
【Objective】 Many insect pests attack the oil-tea tree Camellia oleifera; larvae of the tea tussock moth Euproctis pseudoconspersa are among the most damaging. Automatic detection of these larvae requires segmenting their images, and the segmentation quality directly affects automatic recognition. 【Method】 This paper proposes a segmentation algorithm for E. pseudoconspersa larva images based on the maximum difference between neighboring pixels and region merging. The method computes the differences of the three RGB components between adjacent pixels; if the maximum difference is 0, the adjacent pixels are merged to produce an initial segmentation, which is then refined by further merging under a merging criterion to yield the final result. 【Results】 Experiments show that the algorithm quickly and effectively separates the larvae from the background. 【Conclusion】 Comparing the results of the JSEG algorithm, K-means clustering, fast geometric deformable models and the proposed algorithm on larva images shows that the proposed method gives the best segmentation with a relatively short processing time.
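The initial merging step described above can be sketched as follows. This is a minimal illustration of the neighboring-pixel difference test, not the authors' code; the toy pixel values are invented:

```python
import numpy as np

def max_channel_diff(img):
    """Per-pixel maximum absolute RGB difference to the right-hand neighbour.

    A zero difference means the two pixels are identical in all three
    channels and can be merged into one region (the paper's initial step).
    """
    diff = np.abs(img[:, 1:, :].astype(int) - img[:, :-1, :].astype(int))
    return diff.max(axis=2)  # shape (H, W-1)

# toy 1x3 image: two identical pixels followed by a different one
img = np.array([[[10, 20, 30], [10, 20, 30], [50, 60, 70]]], dtype=np.uint8)
d = max_channel_diff(img)
# d[0, 0] == 0  -> the first two pixels merge; d[0, 1] == 40 -> region boundary
```

In the full algorithm, zero-difference neighbors form the initial regions, which are then merged further by the (unspecified here) merging criterion.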

2.
【Objective】 Extracting and counting colonies on colony test plates is a common and important task in agriculture, the food industry, health care and other fields. Most existing automatic colony-counting algorithms target Petri dishes and transfer poorly to colony test plates. Moreover, although current techniques separate ordinary touching objects well, the morphology of colonies makes the segmentation and counting of touching colonies insufficiently accurate. 【Method】 To address these problems, this paper proposes a colony segmentation and counting algorithm based on a target-color basis and gradient-direction matching. The colony color in the image is first used as a basis, and the image is transformed into this basis space to enhance the contrast between colonies and background. The gradient magnitude is then used to filter the gradient directions; matching on gradient direction separates touching colonies, and finally non-maximum suppression selects and counts the colonies. 【Results】 In experiments, the counting accuracy reached 98.00%, meeting practical requirements. 【Conclusion】 For colony segmentation and counting, the algorithm is both accurate and robust, performing well on total-count test plates from different manufacturers; its accuracy drops on large targets, so it is better suited to small targets such as colonies.
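The final non-maximum suppression step can be illustrated with a short sketch. The coordinates, scores and suppression radius below are invented for illustration; the paper does not specify its parameters:

```python
def non_max_suppression(dets, radius=5.0):
    """Keep the highest-scoring detection in each neighbourhood.

    `dets` is a list of (x, y, score) tuples; any detection within `radius`
    of an already-kept, higher-scoring one is discarded, which is how
    duplicate candidates for the same colony are filtered out.
    """
    kept = []
    for x, y, s in sorted(dets, key=lambda d: -d[2]):
        if all((x - kx) ** 2 + (y - ky) ** 2 > radius ** 2 for kx, ky, _ in kept):
            kept.append((x, y, s))
    return kept

# two candidates on the same colony plus one isolated colony
dets = [(10, 10, 0.9), (12, 11, 0.6), (30, 30, 0.8)]
kept = non_max_suppression(dets)
# -> 2 colonies survive: (10, 10, 0.9) and (30, 30, 0.8)
```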

3.
【Objective】 This study explores a computer-vision method for foreground-background segmentation of lepidopteran specimen images. 【Method】 Backgrounds were first removed from the specimen images used for training and testing to obtain reference segmentations, and oversized images were downscaled. The training images were augmented by rotation, translation and scaling, and central regions were cropped as effective images. The mean image of all training samples was computed and subtracted from every input; test images were only normalized, without augmentation. A fully convolutional network was fine-tuned, focusing on the parameters of the structurally changed convolutional and deconvolutional layers, and trained on this dataset until convergence. To segment a new image, it is simply normalized and fed to the trained network, which outputs the foreground-background segmentation. 【Results】 On a test set of 823 samples the method achieved a mean Intersection over Union (mIoU) of 94.96%, with visual quality very close to manual segmentation. 【Conclusion】 The results show that training a fully convolutional network can effectively automate foreground-background segmentation of lepidopteran specimen images.
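The mIoU metric reported above is straightforward to compute from a predicted and a reference mask. A minimal sketch for the two-class (background/insect) case, with toy masks:

```python
import numpy as np

def mean_iou(pred, ref, n_classes=2):
    """Mean Intersection over Union across classes (here: background, insect)."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, ref == c).sum()
        union = np.logical_or(pred == c, ref == c).sum()
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred = np.array([[0, 0, 1, 1]])
ref  = np.array([[0, 1, 1, 1]])
# class 0: 1/2, class 1: 2/3 -> mIoU = (0.5 + 0.667) / 2 ≈ 0.583
score = mean_iou(pred, ref)
```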

4.
Liu Guocheng, Zhang Yang, Huang Jianhua, Tang Wenliang. Acta Entomologica Sinica, 2015, 58(12): 1338-1343
【Objective】 Spider mites are major pests of many crops. Traditional identification relies on the naked eye and is time-consuming and laborious; to develop a fast automatic method, computer image-analysis algorithms were introduced. 【Method】 The method segments and identifies spider-mite images on field crops using the K-means clustering algorithm. 【Results】 Compared with traditional RGB color segmentation, K-means clustering effectively segments and identifies spider mites on leaves, with an average recognition time of 3.56 s and an average accuracy of 93.95%. Recognition time T increases with the total number of pixels Pi. 【Conclusion】 The combined K-means clustering algorithm is applicable to spider-mite image segmentation and recognition.
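The K-means pixel-clustering idea can be sketched compactly. This is an illustration only: the toy "leaf" and "mite" colors and the deterministic initialization are assumptions, not the paper's settings:

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=10):
    """Tiny K-means on an (N, 3) array of RGB pixels.

    Centres are initialised from the first k distinct colours (a simple
    deterministic heuristic; the paper does not specify its initialisation).
    """
    centers = np.unique(pixels, axis=0)[:k].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest centre
        labels = ((pixels[:, None, :] - centers) ** 2).sum(-1).argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# toy data: 20 leaf-green pixels and 5 dark-red "mite" pixels (illustrative values)
leaf = np.tile([40.0, 160.0, 40.0], (20, 1))
mite = np.tile([120.0, 30.0, 30.0], (5, 1))
labels, centers = kmeans_pixels(np.vstack([leaf, mite]))
# the two clusters separate the leaf pixels from the mite pixels
```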

5.
Zhu Leqing, Zhang Zhen. Acta Entomologica Sinica, 2013, 56(11): 1335-1341
【Objective】 To give workers in forestry, agriculture, plant quarantine and related fields a convenient and fast way to identify insect species, this paper proposes a novel automatic identification method for lepidopteran insect images. 【Method】 Preprocessing first removes the background from the captured specimen image, segments the two wings and corrects the wing position. The corrected wing surface is then partitioned into superpixels, and the mean l, a, b color values and mean x, y coordinates of each superpixel are used as its feature data. A codebook is trained with sparse coding (SC), the resulting codes are pooled into feature vectors, and a scaled conjugate gradient back-propagation neural network (SCG-BPNN) is trained on them for classification. 【Results】 Tested on a database of 576 insect images, the method achieved over 99% recognition accuracy with good time performance, robustness and stability. 【Conclusion】 The experiments demonstrate the effectiveness of the method for recognizing lepidopteran insect images.
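The 5-dimensional per-superpixel descriptor (mean l, a, b, x, y) described above can be computed like this. A minimal sketch with a toy label map; any coordinate normalization the authors may apply is omitted:

```python
import numpy as np

def superpixel_features(lab_img, labels):
    """Mean (l, a, b, x, y) per superpixel: the 5-D descriptor fed to
    sparse coding in the pipeline above."""
    h, w, _ = lab_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = []
    for sp in np.unique(labels):
        m = labels == sp
        color = [lab_img[..., c][m].mean() for c in range(3)]
        feats.append(color + [xs[m].mean(), ys[m].mean()])
    return np.array(feats)

lab_img = np.arange(12, dtype=float).reshape(2, 2, 3)
labels = np.array([[0, 0], [1, 1]])   # two one-row superpixels
feats = superpixel_features(lab_img, labels)
# feats[0] = [1.5, 2.5, 3.5, 0.5, 0.0]
```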

6.
【Objective】 This study examines the feasibility and generalization ability of deep learning models for automatic family-level identification of butterfly specimen images. 【Method】 To improve robustness and generalization, specimen images of 1,117 species from 6 families of Rhopalocera were augmented by horizontal flipping, increasing contrast and brightness, and adding noise. Under the Caffe framework and using transfer learning, a CaffeNet model was first trained on the ImageNet dataset; after 310,000 iterations the resulting weights were used for initialization. The pretrained CaffeNet was then trained on the butterfly images and fine-tuned to obtain a convolutional neural network for family-level identification of butterfly specimens. To compare the generalization of deep learning against traditional pattern recognition, global and local features were extracted from the same training samples to train support vector machine (SVM) classifiers. All models were tested on two sample sets, one from the same source as the training images and one from a different source. 【Results】 When the test samples were specimen images like the training samples, the CaffeNet model averaged 95.8% accuracy over the 6 families, and the Gabor-based SVM reached 94.8%. When the test samples were butterfly photographs taken in natural environments, both methods degraded, but CaffeNet still averaged 65.6% on natural images while the Gabor-based SVM dropped to 38.9%. 【Conclusion】 CaffeNet is feasible for family-level identification of butterfly specimen images, and the deep-learning model generalizes better than traditional pattern recognition.

7.
This paper describes a knowledge-based automatic segmentation method for three-dimensional medical images, applied to the segmentation and analysis of intracranial hemorrhage (ICH). CT films are first digitized and automatically classified as normal or abnormal. Thresholding combined with fuzzy C-means clustering then partitions each image into regions of uniform intensity. Finally, based on prior knowledge and predefined rules, a knowledge-based expert system labels each region as background, calcification, hematoma, skull or brainstem.

8.
【Objective】 To prepare fluorescently stable copper nanoclusters protected by aneurine hydrochloride (vitamin B1, VB1) as both ligand and reductant (aneurine hydrochloride protected copper nanoclusters, VB1-Cu NCs) and apply them to the detection of trace Fe3+. 【Method】 VB1-Cu NCs were synthesized with VB1 as the protective ligand and reducing agent, characterized by UV-visible absorption spectroscopy, fluorescence spectroscopy and particle-size analysis, and their pH response, Fe3+ selectivity and linear range were investigated. 【Results】 The prepared VB1-Cu NCs show good water solubility, excellent stability and ultrafine size. As a fluorescent probe for Fe3+ they are linear over both the 0-5 μmol/L and 5-500 μmol/L ranges, with a detection limit of 0.085 μmol/L. The method was applied to detect Fe3+ in a real microbial sample, Trichophyto...

9.
【Objective】 To reduce the workload of grassroots forecasting staff, improve the accuracy and timeliness of pheromone-trap monitoring of the rice leaf folder Cnaphalocrocis medinalis, and make monitoring data traceable, an intelligent machine-vision pheromone monitoring system for C. medinalis was built. 【Method】 The system consists of an intelligent pheromone trap based on machine vision, a deep-learning detection model for C. medinalis, a Web front end and a server. The trap's vision system was built from an industrial camera, a light source and an Android tablet. The detection model combines an improved YOLOv3 with a DBTNet-101 two-layer network. The Web front end, built with HTML, CSS, JavaScript and Vue, displays detection and count results; the server, built on the Django framework, receives images uploaded from the trap over a 4G network and returns results; images and detection results are stored in a MySQL database. 【Results】 The intelligent trap periodically uploads images of C. medinalis to the server, where the deployed detection model automatically detects adults in real time with a precision of 97.6% and a recall of 98.6%; users can view the detection images through the Web front end. 【Conclusion】 The system achieves scheduled automatic image capture and accurate detection and counting of C. medinalis adults, makes pheromone monitoring intelligent and real-time, reduces forecasters' workload, and keeps monitoring data traceable.

10.
Zebrafish are a widely used model organism, and automatic extraction of information from their experimental images has considerable application value. Quantitative analysis of the physiological effects of drug exposure on developing zebrafish embryos is hard to do accurately by visual inspection alone and usually requires a computer. The prerequisite for quantitative analysis is segmenting the embryo image into the three anatomical parts: head, trunk and yolk. In practice, however, the number of images available for a given drug treatment group is limited, so the segmentation cannot be learned by training a machine-learning model and must instead rely on image-processing methods. This paper applies three approaches to the semantic segmentation task: distance transform with the watershed algorithm, subtractive clustering with K-means clustering, and subtractive clustering with flood filling. Subtractive clustering combined with flood filling achieved the anatomical segmentation required by the study, laying a good foundation for quantitative analysis in subsequent medical research.
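The flood-fill component of the best-performing combination can be sketched as a breadth-first fill from a seed pixel. This is a generic illustration on a toy greyscale image; the tolerance value and image are invented, and the seed selection by subtractive clustering is omitted:

```python
from collections import deque
import numpy as np

def flood_fill(img, seed, tol=10):
    """Label the connected region around `seed` whose grey values stay
    within `tol` of the seed value: the flood-fill step that, in the paper,
    grows regions from seeds chosen by subtractive clustering."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    base = int(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - base) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

img = np.array([[10, 12, 90],
                [11, 13, 95],
                [88, 92, 91]], dtype=np.uint8)
region = flood_fill(img, (0, 0))
# region covers exactly the four dark pixels in the top-left corner
```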

11.
Accurate insect detection and recognition relies mainly on proper automatic segmentation. This paper deals with butterfly segmentation in ecological images characterized by several artifacts, such as complex environmental decors and cluttered backgrounds. The distractors in the rich ecological environment and the large differences between butterfly species severely complicate segmentation and make it a challenging task. As butterflies appear well contrasted against their surroundings, we exploit the saliency property to delineate butterfly boundaries accurately. To this end, we perform a graph ranking process with high-level guidance from foreground and background cues to improve segmentation quality. The ranking accuracy is improved through a weighting scheme that accurately combines color, texture and spatial information; the contribution of each feature is controlled according to its relevance in highlighting butterfly regions. We then initialize foreground seeds from the most salient pixels and background seeds from the least salient pixels as input to a graph-cut algorithm that extracts the butterfly from the background. Comparative evaluation shows that our segmentation scheme outperforms several existing segmentation methods that report high segmentation scores.

12.
【Objective】 To explore the feasibility of deep learning for automatic identification and counting of adult fall armyworm Spodoptera frugiperda, evaluate the models' accuracy, and provide an image-based recognition and counting method for intelligent pest monitoring. 【Method】 A pheromone-based pest image monitoring device was designed to photograph trapped S. frugiperda adults automatically at fixed intervals; together with images of adults on sticky boards in boat-shaped traps, these formed the dataset. The YOLOv5 deep-learning object detection model was trained on differently processed datasets: the original images (Yolov5s-A1); with edge-truncated targets removed (Yolov5s-A2); with a similar species added (adult Spodoptera litura) (Yolov5s-AB); and additionally with target-free negative samples (Yolov5s-ABC). The four models were compared on test samples across occlusion gradients using precision (P), recall (R), F1 score, average precision (AP) and counting accuracy (CA). 【Results】 Yolov5s-A1 achieved 87.37% precision, 90.24% recall and an F1 of 88.78; Yolov5s-A2 achieved 93.15%, 84.77% and 88.76; Yolov5s-AB achieved 96.23%, 91.85% and 93.99; Yolov5s-ABC achieved 94.76%, 88.23% and 91.38. The AP values ranked Yolov5s-AB > Yolov5s-ABC > Yolov5s-A2 > Yolov5s-A1, with Yolov5s-AB and Yolov5s-ABC close; CA ranked in the same order. 【Conclusion】 The proposed method is feasible for identifying and counting S. frugiperda adults on the monitoring device and on trap sticky boards under controlled conditions, and deep learning is effective for this task. The method is robust to pose variation and clutter, can count adults across poses and from damaged specimens, and has broad application prospects in pest population monitoring.
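The evaluation metrics above follow standard definitions; a minimal sketch with invented counts is shown below. Note that the counting-accuracy formula here is an assumption (relative count error subtracted from 1), since the abstract does not define CA:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from true/false positives and false negatives."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

def counting_accuracy(predicted, actual):
    """CA sketched as 1 - |predicted - actual| / actual (assumed definition)."""
    return 1 - abs(predicted - actual) / actual

# invented counts for illustration
p, r, f1 = detection_metrics(tp=90, fp=5, fn=10)
ca = counting_accuracy(predicted=95, actual=100)
# p ≈ 0.947, r = 0.9, f1 ≈ 0.923, ca = 0.95
```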

13.
Automatic image segmentation of immunohistologically stained breast tissue sections helps pathologists discover cancer earlier. Counting the true number of cancer nuclei in an image is a tedious and time-consuming task, and segmenting cancer nuclei, especially touching nuclei, is difficult for traditional segmentation algorithms. This paper presents a new automatic scheme that both classifies breast stained nuclei and segments touching nuclei in order to obtain the total number of cancer nuclei in each class. First, a modified geometric active contour model is used for multiple-contour detection of positive and negative nuclear staining in the microscopic image. Second, a touching-nuclei separation method based on the watershed algorithm and a concave vertex graph is proposed to quantify the different stains accurately. Finally, benign nuclei are identified by their morphological features and removed automatically from the segmented image for positive cancer nuclei assessment. The proposed classification and segmentation schemes are tested on two datasets of breast cancer cell images containing different levels of malignancy. The experimental results show the superiority of the proposed methods over existing classification and segmentation methods. On the complete image database, the segmentation accuracy in terms of cancer nuclei number is over 97%, an improvement of 3-4% over earlier methods.
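The core idea behind watershed-based separation of touching nuclei is that a distance transform turns each nucleus centre into a local maximum, from which one marker per nucleus can be extracted. The sketch below shows only that marker-extraction idea on two overlapping synthetic discs (it uses scipy and a simple threshold in place of the paper's concave-vertex-graph refinement):

```python
import numpy as np
from scipy import ndimage as ndi

# two overlapping discs stand in for a pair of touching nuclei
yy, xx = np.mgrid[0:40, 0:64]
mask = (((yy - 20) ** 2 + (xx - 20) ** 2 <= 144)
        | ((yy - 20) ** 2 + (xx - 42) ** 2 <= 144))

# the distance transform makes each nucleus centre a local maximum
dist = ndi.distance_transform_edt(mask)

# thresholding near the maxima yields one marker per nucleus
markers, n_nuclei = ndi.label(dist > 0.7 * dist.max())
# n_nuclei == 2: the touching pair is resolved into two seeds,
# which a watershed would then grow back into separated nuclei
```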

14.
Purpose: To assess the impact of lung segmentation accuracy in an automatic pipeline for quantitative analysis of CT images. Methods: Four platforms for automatic lung segmentation, based on convolutional neural networks (CNN), region-growing techniques and atlas-based algorithms, were considered. The platforms were tested on CT images of 55 COVID-19 patients with severe lung impairment. Four radiologists assessed the segmentations using a 5-point qualitative score (QS). For each CT series, a manually revised reference segmentation (RS) was obtained. Histogram-based quantitative metrics (QMs) were calculated from the CT histogram using the lung segmentations from all platforms and the RS. The Dice index (DI) and differences in QMs (ΔQMs) were calculated between the RS and the other segmentations. Results: The highest QS and lowest ΔQM values were associated with the CNN algorithm. However, only 45% of CNN segmentations were judged to need no or only minimal corrections, and in only 17 cases (31%) did the automatic segmentation match the RS without manual correction. Median DI values for the four algorithms ranged from 0.904 to 0.993. Significant differences in all QMs between automatic segmentations and the RS were found both when the data were pooled and when stratified by QS, indicating a relationship between qualitative and quantitative measurements. The most unstable QM was the 90th percentile of the histogram, with median ΔQM values ranging from 10 HU to 158 HU across algorithms. Conclusions: None of the tested algorithms provided fully reliable segmentation. Segmentation accuracy affects different quantitative metrics differently, and each metric should be evaluated individually according to the purpose of the subsequent analysis.
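The Dice index used to compare automatic and reference segmentations is a two-line computation. A minimal sketch with toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice index between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0

ref  = np.array([[1, 1, 1, 0]])   # reference segmentation (RS)
auto = np.array([[1, 1, 0, 0]])   # automatic segmentation
d = dice(auto, ref)
# d = 2*2 / (2 + 3) = 0.8
```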

15.
A mobile-terminal-based survey method for rice planthoppers in paddy fields   Cited by: 2 (self-citations: 0, citations by others: 2)
【Objective】 To establish a mobile-terminal-based survey method for rice planthoppers that reduces forecasters' labor, improves the objectivity of field surveys, and makes survey results traceable. 【Method】 A field image-acquisition instrument for planthoppers was built from an Android camera, a telescopic handheld pole and an Android phone running a camera-control app. In the Android development environment, socket communication and video-coding techniques implement the camera's video capture and encoding, video transmission and command-control modules; Android NDK development and Java web techniques implement the phone's video preview, control and image-upload modules. Live video from the camera is compressed to H.264 and streamed to the phone over RTSP/RTP; the phone decompresses it for real-time preview, triggers the camera to photograph planthoppers at the base of rice stems, and receives the images. The planthopper recognition algorithm is deployed on a cloud server: images selected on the phone are uploaded, the server runs automatic recognition, and results are returned to the phone. 【Results】 With this method, the phone previews the stem-base video in real time and controls image capture. The cloud recognition algorithm achieved an average detection rate of 86.9% for planthoppers with a false-alarm rate of 11.2%, and an average detection rate of 81.7% across planthopper life stages with a false-alarm rate of 16.6%. 【Conclusion】 The method conveniently captures images of planthoppers at rice stem bases and recognizes and counts their different life stages. It can greatly reduce forecasters' workload, avoid the subjectivity of field surveys, and make planthopper surveys traceable.

16.
Saliency detection is widely used in many visual applications, such as image segmentation, object recognition and classification. In this paper we introduce a new method for detecting salient objects in natural images. The approach is based on a regional principal color contrast model that incorporates low-level and medium-level visual cues. The method combines a simple computation of color features with two categories of spatial relationships into a saliency map, achieving higher F-measure rates. We also present an interpolation approach for evaluating the resulting curves and analyze parameter selection. Our method enables effective computation on images of arbitrary resolution. Experimental results on a saliency database show that our approach produces high-quality saliency maps and performs favorably against ten saliency detection algorithms.

17.
Retinal blood vessel detection and analysis play vital roles in the early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques, namely contrast enhancement, filtration and thresholding, for more efficient segmentation. To evaluate its performance, experiments were conducted on 40 images from the DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, higher than the results achieved by other known algorithms.
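The enhance-filter-threshold pipeline can be sketched on a toy image. This is a generic illustration, not the paper's algorithm: the global mean-offset threshold and 3x3 box filter below are assumptions standing in for its unspecified enhancement and thresholding rules:

```python
import numpy as np

def segment_vessels(img, k=3):
    """Toy enhance -> filter -> threshold pipeline for dark vessels."""
    # 1. contrast stretch to [0, 1]
    img = (img - img.min()) / (img.max() - img.min())
    # 2. k x k box filter for noise suppression
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smooth /= k * k
    # 3. threshold: darker-than-average pixels become vessel candidates
    return smooth < smooth.mean()

toy = np.full((7, 7), 0.9)
toy[3, :] = 0.1           # one dark "vessel" row
mask = segment_vessels(toy)
# mask marks the vessel row (slightly dilated by the smoothing)
```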

