Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
竺乐庆  张大兴  张真 《昆虫学报》2015,58(12):1331-1337
[Objective] This study aims to explore an automatic classification method for insect images based on advanced computer vision techniques. [Methods] The collected specimen images were preprocessed to remove the background and obtain a foreground mask; the minimum bounding box of the foreground was computed from the contour defined by the mask, and the effective foreground region determined by this bounding box was cropped out for feature extraction. First, color-name features were extracted: the pixel values of the original RGB (red-green-blue) image were mapped into an 11-color-name space, in which each value gives the probability that the RGB value belongs to that color name; each color-name plane was divided into 3×3-pixel grids, the mean probability of each grid was used as the descriptor of its center point, and a spatial-pyramid histogram was built to form the color-name bag-of-visual-words feature. Second, OpponentSIFT (Opponent Scale Invariant Feature Transform) features were extracted: the RGB image was transformed into the opponent color space, SIFT features were extracted from each channel of that space, and spatial pooling with histogram statistics formed the OpponentSIFT bag of visual words. The two bag-of-words features were concatenated to give the feature vector of the insect image. Feature vectors extracted from a training set of insect images were used to train SVM (Support Vector Machine) classifiers, which were then used to recognize lepidopteran insects. [Results] The method was tested on an insect image database containing 576 samples of 10 species and achieved a recognition accuracy of 100%. [Conclusion] The results demonstrate that color-name and OpponentSIFT features can effectively recognize images of lepidopteran insects.
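As a hedged illustration of the final classification stage described in this abstract (not the authors' code), the sketch below assumes the color-name and OpponentSIFT bag-of-visual-words histograms have already been computed; the array shapes and SVM kernel are placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder bag-of-visual-words histograms; in the paper these come from
# the color-name and OpponentSIFT channels described above.
n_samples, n_species = 576, 10
X_color = np.random.rand(n_samples, 300)    # hypothetical color-name BoW
X_sift = np.random.rand(n_samples, 1000)    # hypothetical OpponentSIFT BoW
y = np.random.randint(0, n_species, size=n_samples)

# Concatenate the two bag-of-words vectors, as the abstract describes.
X = np.hstack([X_color, X_sift])

# Kernel and parameters are assumptions; the abstract does not specify them.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```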

2.
3.
  1. Color variation is one of the most obvious examples of variation in nature, but biologically meaningful quantification and interpretation of variation in color and complex patterns are challenging. Many current methods for assessing variation in color patterns classify color patterns using categorical measures, provide aggregate measures that ignore spatial pattern, or both, losing potentially important aspects of the color pattern.
  2. Here, we present Colormesh, a novel method for analyzing complex color patterns that offers unique capabilities. Our approach is based on unsupervised color quantification combined with geometric morphometrics to identify regions of putative spatial homology across samples, from histology sections to whole organisms. Colormesh quantifies color at individual sampling points across the whole sample.
  3. We demonstrate the utility of Colormesh using digital images of Trinidadian guppies (Poecilia reticulata), for which the evolution of color has been frequently studied. Guppies have repeatedly evolved in response to ecological differences between up- and downstream locations in Trinidadian rivers, resulting in extensive parallel evolution of many phenotypes. Previous studies have, for example, compared the area and quantity of discrete colors (e.g., area of orange, number of black spots) between these up- and downstream locations, neglecting the spatial placement of these areas. Using the Colormesh pipeline, we show that patterns of whole-animal color variation do not match expectations suggested by previous work.
  4. Colormesh can be deployed to address a much wider range of questions about color pattern variation than previous approaches. Colormesh is thus especially suited for analyses that seek to identify the biologically important aspects of color pattern when there are multiple competing hypotheses or even no a priori hypotheses at all.
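The Colormesh software itself is distributed by the authors; purely as a rough sketch, the snippet below illustrates the underlying idea of sampling color at mesh points interpolated between aligned landmarks. The helper names and sampling density are assumptions, not the published pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_points(landmarks, density=5):
    """Interpolate extra sampling points inside each Delaunay triangle of the
    landmark configuration -- a stand-in for Colormesh's sampling mesh."""
    tri = Delaunay(landmarks)
    pts = []
    for simplex in tri.simplices:
        a, b, c = landmarks[simplex]
        for i in range(1, density):
            for j in range(1, density - i):
                k = density - i - j
                pts.append((i * a + j * b + k * c) / density)
    return np.array(pts)

def sample_colors(image, points):
    """Nearest-pixel RGB sample at each (x, y) mesh point; image is H x W x 3."""
    ij = np.round(points).astype(int)
    return image[ij[:, 1], ij[:, 0], :]
```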

4.
Itti and Koch's (Vision Research 40:1489–1506, 2000) saliency-based visual attention model is a broadly accepted model that describes how attention is deployed in the visual cortex in a purely bottom-up manner. This work complements their model by modifying the color feature calculation. Evidence suggests that, in some cells, S-cone responses are elicited in the same spatial distribution and with the same sign as responses to M-cone stimuli; these cells are tentatively referred to as red-cyan cells. For other cells, the S-cone input appears to be aligned with the L-cone input; these might be green-magenta cells. To model red-cyan and green-magenta double-opponent cells, we adapt the center-surround difference approach of the aforementioned model. The resulting color maps elicited enhanced responses to color-salient stimuli compared with the classic maps, at high levels of statistical significance. We also show that the modified model improves the prediction of locations attended by human viewers.
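Purely as a hedged illustration of double-opponent color maps built with center-surround differences (the channel definitions and Gaussian scales below are assumptions, not the published model):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def opponent_channels(rgb):
    """rgb: float image in [0, 1], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    red_cyan = r - (g + b) / 2.0         # assumed red-vs-cyan axis
    green_magenta = g - (r + b) / 2.0    # assumed green-vs-magenta axis
    return red_cyan, green_magenta

def center_surround(channel, sigma_center=2.0, sigma_surround=8.0):
    """Difference-of-Gaussians approximation of a center-surround response."""
    center = gaussian_filter(channel, sigma_center)
    surround = gaussian_filter(channel, sigma_surround)
    return np.abs(center - surround)

rgb = np.random.rand(240, 320, 3)
rc_map = center_surround(opponent_channels(rgb)[0])
gm_map = center_surround(opponent_channels(rgb)[1])
```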

5.
6.
Inspired by theories of higher-order local autocorrelation (HLAC), this paper presents a simple, novel, yet very powerful approach to wood recognition. The method is suitable for wood database applications, which are of great importance in wood-related industries and administrations. At the feature extraction stage, a set of features is extracted from the Mask Matching Image (MMI). The MMI features preserve the mask-matching information gathered by the HLAC methods. The texture information in the image can then be accurately extracted from the statistical and geometrical features. In particular, richer information and enhanced discriminative power are achieved through the length histogram, a new histogram that embodies the width and height histograms. The performance of the proposed approach is compared with state-of-the-art HLAC approaches on the wood stereogram dataset ZAFU WS 24. Extensive experiments on ZAFU WS 24 show that our approach significantly improves classification accuracy.
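For context on the HLAC idea the paper builds on, here is a minimal sketch of higher-order local autocorrelation features. Only a few representative masks are shown; the paper's MMI and length-histogram features are not reproduced.

```python
import numpy as np

def hlac_feature(img, displacements):
    """Sum over all valid positions of the product of pixel values at the
    given (dy, dx) displacements; (0, 0) is the reference pixel."""
    h, w = img.shape
    prod = np.ones((h - 2, w - 2))            # valid region for a 3x3 window
    for dy, dx in displacements:
        prod = prod * img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
    return float(prod.sum())

# A few example masks: one 0th-order, one 1st-order and two 2nd-order patterns
# (the full HLAC set for a 3x3 window has 25 masks).
masks = [
    [(0, 0)],
    [(0, 0), (0, 1)],
    [(0, 0), (-1, -1), (1, 1)],
    [(0, 0), (0, -1), (0, 1)],
]
img = np.random.rand(64, 64)
features = np.array([hlac_feature(img, m) for m in masks])
```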

7.
8.
9.
10.
Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks, such as video genre classification and content-based image retrieval. Recently, there has been increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene, such as its valence. To determine the emotional category of images using eye movements, existing methods often learn a classifier from several features extracted from eye movements. Although eye movements have been shown to be potentially useful for recognizing scene valence, the contribution of each feature is not well studied. To address this issue, we study the contribution of features extracted from eye movements to the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion: histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We take a machine learning approach, analyzing the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that 'saliency map', 'fixation histogram', 'histogram of fixation duration', and 'histogram of saccade slope' are the most contributing features. The selected features signify the influence of fixation information and the angular behavior of eye movements in recognizing the valence of images.
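A minimal sketch of the early-fusion evaluation described above, assuming the eye-movement histograms have already been extracted; the arrays, dimensions, and SVM settings below are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

n_images = 120
features = {
    "saliency_map":           np.random.rand(n_images, 256),
    "fixation_histogram":     np.random.rand(n_images, 64),
    "fixation_duration_hist": np.random.rand(n_images, 16),
    "saccade_slope_hist":     np.random.rand(n_images, 16),
}
y = np.random.randint(0, 3, size=n_images)  # pleasant / neutral / unpleasant

# Early fusion: concatenate the chosen feature histograms per image.
X = np.hstack([features[name] for name in sorted(features)])

scores = cross_val_score(SVC(kernel="rbf", C=1.0, gamma="scale"), X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```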

11.
To provide ordinary pest-management technicians in production units with a simple and easy-to-use insect identification method, this paper proposes a novel insect image recognition method based on color and texture features. Lepidopteran wing images are preprocessed to determine the target region, and features are then extracted. The color image is first converted from the red-green-blue (RGB) space to the hue-saturation-value (HSV) space, and hue and saturation histogram features are extracted within the effective region; after image position alignment, dual-tree complex wavelet transform (DTCWT) features are extracted from the grayscale image. Matching first computes the correlation between the two color histogram feature vectors; samples whose correlation exceeds a threshold are then further matched with the DTCWT features. DTCWT matching is implemented by computing the Canberra distance, and the nearest neighbor among the samples that passed the first-layer color matching is taken as the final matched class. The algorithm was validated on an image database containing 100 species of lepidopteran insects, achieving a recognition rate of 76%, with a forewing recognition rate of 92%, together with satisfactory runtime performance. The experimental results demonstrate the effectiveness of the proposed method.
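A hedged sketch of the two-stage matching described in this abstract. The hue/saturation histograms and DTCWT texture vectors are assumed to have been computed already (e.g., with a dual-tree complex wavelet library) and are passed in as plain arrays; the correlation threshold is an assumption.

```python
import numpy as np
from scipy.spatial.distance import canberra

def hist_correlation(h1, h2):
    """Normalized correlation between two histograms."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match(query_hs, query_tex, db_hs, db_tex, db_labels, corr_thresh=0.7):
    """Stage 1: keep database samples whose hue/saturation histogram
    correlates with the query above a threshold.  Stage 2: among those,
    return the label of the nearest neighbor under the Canberra distance
    on the texture (e.g., DTCWT) feature vectors."""
    candidates = [i for i, h in enumerate(db_hs)
                  if hist_correlation(query_hs, h) > corr_thresh]
    if not candidates:                       # fall back to all samples
        candidates = list(range(len(db_hs)))
    best = min(candidates, key=lambda i: canberra(query_tex, db_tex[i]))
    return db_labels[best]
```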

12.
In flow microfluorometry (FMF) analysis, cells stained with a fluorescent dye that binds specifically to DNA are passed through the instrument. The number of cells in the population having a given fluorescence intensity is recorded in a single channel of a multichannel pulse-height analyzer. The result is a DNA fluorescence histogram for the population. A method is given for decomposing an FMF histogram into its G1, S and G2 + M components, corresponding to the similarly designated phases of the cell cycle. This technique can also be applied to find the parameters in all of the previous approaches. The parameters are calculated by iteration, which eliminates the need for non-linear optimization procedures.
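As an illustration only, the sketch below decomposes a synthetic DNA-content histogram with a generic model (Gaussian G1 and G2 + M peaks plus a crude constant S-phase bridge) fitted by non-linear least squares; it is not the paper's iterative scheme, which specifically avoids non-linear optimization.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def model(x, a1, mu1, s1, a2, s2, a_s):
    mu2 = 2.0 * mu1                          # G2+M peak at twice the G1 DNA content
    g1 = gauss(x, a1, mu1, s1)
    g2m = gauss(x, a2, mu2, s2)
    s_phase = a_s * ((x > mu1) & (x < mu2))  # crude constant S-phase component
    return g1 + g2m + s_phase

channels = np.arange(256, dtype=float)
counts = model(channels, 900, 60, 4, 350, 6, 40) + np.random.poisson(5, 256)

p0 = [800, 58, 5, 300, 6, 30]                # rough initial guesses
popt, _ = curve_fit(model, channels, counts, p0=p0)

g1_fraction = gauss(channels, *popt[:3]).sum() / counts.sum()
print("estimated G1 fraction:", g1_fraction)
```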

13.
A mathematical model for decomposing an FMF histogram into its G1, S and G2 + M components is developed. Under certain restrictions, the model applies to both asynchronous and synchronous populations. Two numerical techniques for estimating the percentage of cells in each component are outlined. Using the assumption of exponential growth, theoretical expressions for the percentage of cells in each state and for the S density are derived. This leads to a rapid method for determining the mean time a cell spends in each state.
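For reference, the standard age-distribution argument under exponential growth yields closed-form phase fractions of this kind; the expressions below are the textbook forms and may differ in detail from the paper's own derivation.

```latex
% Age density of an asynchronous, exponentially growing population with
% cycle time T_c:
\[
  \phi(a) \;=\; \frac{2\ln 2}{T_c}\, 2^{-a/T_c}, \qquad 0 \le a \le T_c .
\]
% Integrating over the ages occupied by a phase gives its expected fraction
% of cells, e.g.
\[
  f_{G_1} \;=\; 2\left(1 - 2^{-t_{G_1}/T_c}\right), \qquad
  f_{G_2+M} \;=\; 2^{\,t_{G_2+M}/T_c} - 1 ,
\]
% so a measured fraction can be inverted to estimate the mean time spent in
% a phase, e.g. $t_{G_2+M} = T_c \log_2\!\left(1 + f_{G_2+M}\right)$.
```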

14.
15.
16.
17.
18.
19.
20.