Similar Documents
19 similar documents found.
1.
[Objective] Butterflies belong to the order Lepidoptera. They are sensitive to their living environment and can serve as indicator species for regional ecological conditions, so automatic identification of butterfly species in natural environments matters for ecosystem stability. Existing studies cover relatively few butterfly species and samples and mostly use specimen images as recognition targets. We therefore constructed an image dataset of butterflies photographed in natural environments and proposed LDResNet, a butterfly species recognition model built on a residual network. [Methods] First, deformable convolution was introduced to strengthen the network's ability to extract features from butterflies of different shapes and to obtain finer-grained features; second, an attention mechanism was embedded after the deformable convolution to increase the weight of butterfly features and suppress interference from redundant information; finally, an improved depthwise separable convolution was used to reduce the number of model parameters. [Results] On the self-built dataset, LDResNet achieved an average recognition accuracy of 87.61%, an improvement of 3.14% over the original model, with a parameter size of only 1.04 MB. [Conclusion] Compared with other models, LDResNet has clear advantages in both average recognition accuracy and parameter count, and can provide technical support for automatic identification of butterfly species in natural environments.
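The abstract names three building blocks: deformable convolution for shape-varying butterflies, an attention step right after it, and a depthwise separable convolution to cut parameters. The paper's exact layer layout is not given, so the PyTorch sketch below is only an illustrative block combining those three ideas; the class name, channel sizes, and SE-style attention are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformAttnBlock(nn.Module):
    """Illustrative block: deformable conv -> channel attention -> depthwise separable conv."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # A plain conv predicts the 2*k*k sampling offsets used by the deformable conv.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        # Squeeze-and-excitation style channel attention (assumed form of "attention mechanism").
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )
        # Depthwise separable convolution: depthwise (groups=out_ch) followed by pointwise 1x1.
        self.dw = nn.Conv2d(out_ch, out_ch, k, padding=k // 2, groups=out_ch)
        self.pw = nn.Conv2d(out_ch, out_ch, 1)

    def forward(self, x):
        y = self.deform(x, self.offset(x))
        y = y * self.attn(y)            # re-weight channels, suppress redundant information
        return self.pw(self.dw(y))

# Smoke test with a random "butterfly image" batch.
block = DeformAttnBlock(3, 32)
print(block(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 32, 224, 224])
```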

2.
To explore the effectiveness of deep neural network models for intelligent recognition of conodont images, eight Ordovician conodont species were selected as study objects. A total of 1,188 conodont images were acquired under a stereomicroscope and 778 conodont images were collected from published literature, and the image dataset was divided into a training set and a test set. Rotating, flipping and filter-based enhancement of the training images solved the problem of insufficient training samples. Five residual neural network models (ResNet-18, ResNet-34, ResNet-50, ResNet-101 and ResNet-152) were trained with transfer learning to obtain model parameters. Their Top-1 test accuracies were 85.37%, 85.85%, 83.90%, 81.95% and 80.00%, and their Top-2 accuracies were 94.63%, 94.63%, 94.15%, 93.17% and 93.66%, respectively, showing that the models recognize conodont images well. The comparison shows that ResNet-34 achieved the highest accuracy, indicating that for conodont taxa with simple features, increasing network depth does not necessarily improve accuracy, whereas a model of appropriate depth can both improve accuracy and save computing resources. Comparing transfer learning against training ResNet-34 from scratch shows that transfer learning not only achieves higher accuracy but also obtains model parameters faster, making it an important method for small-sample fossil image recognition. The study also found that recognition accuracy for conodont images taken under a stereomicroscope is higher than for scanning electron microscope images, and that fossil completeness and similarity, photographing angle, and dataset size are the main factors affecting recognition accuracy.
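The workflow described here (augment a small training set with rotations, flips and filtering, then fine-tune ImageNet-pretrained ResNets) is a standard transfer-learning recipe. The PyTorch sketch below illustrates that recipe for 8 classes; the augmentation parameters and the frozen-backbone choice are illustrative assumptions, not values from the paper.

```python
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 8  # the eight Ordovician conodont species

# Augmentation roughly matching "rotation, flipping, filter enhancement".
train_tf = transforms.Compose([
    transforms.RandomRotation(30),
    transforms.RandomHorizontalFlip(),
    transforms.GaussianBlur(kernel_size=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Transfer learning: load ImageNet weights, replace only the classification head.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optionally freeze the backbone so only the new head is trained at first.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

# Top-k accuracy as reported in the study: a prediction counts as correct
# if the true label is among the k highest-scoring classes.
def topk_correct(logits, labels, k=2):
    return (logits.topk(k, dim=1).indices == labels.unsqueeze(1)).any(dim=1).sum().item()
```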

3.
Convolutional neural networks can automate species identification from feature images of the anatomical structure of tree-ring samples. In this study, we built an image set of tree-ring structural features and used four convolutional neural network models (LeNet, AlexNet, GoogLeNet and VGGNet) to achieve accurate, automated tree species identification from tree-ring cross-sections, determined the identification accuracy of each model, clarified which species were confused during automatic identification, and examined differences among the models' results. The results showed that the trained models were credible for species identification. Among the four models, GoogLeNet achieved the highest identification accuracy (96.7%) and LeNet the lowest (66.4%). The models were consistent in their per-species results: Mongolian oak was identified with the highest accuracy (100% with AlexNet), while Abies nephrolepis was identified with the lowest accuracy. Species with similar anatomical structure were sometimes confused. Identification accuracy at the family and genus levels was higher than at the species level, and broad-leaved species, being easy to distinguish because of their pronounced structural differences, were identified more accurately than conifers. Overall, convolutional neural networks probe deep information in tree-ring features, achieve accurate species identification, and provide a fast and convenient method for preliminary automatic species screening.

4.
Foraminifera are tiny, abundant, geographically widespread and rapidly evolving organisms; they are important recorders of marine depositional environments and play a very important role in the subdivision and correlation of marine biostratigraphy. Because foraminifera comprise numerous genera and species, traditional identification requires experienced specialists and is time-consuming, and manual identification of fossils also faces a shortage of trained personnel and a heavy workload. The application of convolutional neural networks in computer vision can address these problems well. Guided by palaeontologists' annotations of Miocene planktonic foraminifera fossils, and classifying fossil images by viewing direction, we combined convolutional neural network algorithms to develop an image recognition system for foraminifera fossils. The study found that, by classifying images into ventral, edge and dorsal views and adopting a two-stage identification algorithm, genus-level identification of Miocene planktonic foraminifera reached an accuracy of about 82%.

5.
In insect monitoring, manual identification and classification of the fall webworm Hyphantria cunea is time-consuming, labour-intensive and highly subjective. In this paper, an RPN (region proposal network) based neural network model was used to extract features from H. cunea image data; Inception_v2, ResNet50 and ResNet101 were compared, and an improved recognition model, IHCDM (Improved Hyphantria cunea Artificial Neural Network Recognition Model), was designed. The model was trained end-to-end on GPUs and experimentally validated. The results show that the model identifies H. cunea with an accuracy of 99.5%, which is 0.5% and 0.4% higher than ResNet50 and ResNet101, respectively. After hyperparameter fine-tuning, at a confidence threshold of 0.85 the recognition accuracy reached 99.7% with a recognition speed of 0.09 ms per image. The IHCDM model provides a new method for the rapid identification and classification of H. cunea.

6.
Objective: To propose RE-Net, a network structure based on the human visual attention mechanism, so that convolutional neural networks (CNNs) are better suited to intelligent diagnostic assessment of refractive error from fundus images. Methods: RE-Net uses ResNet34 as the backbone and adds a contextual attention module comprising channel attention and spatial attention, so that the relevant channels play the greatest role and the weights of responsive regions are increased. Results: 4,358 fundus images were used as the training set for RE-Net. On a test set of 485 fundus images, the classification accuracies were 93.3% for high myopia, 89.7% for moderate myopia, 83.2% for mild myopia, 82.5% for mild hyperopia, 79.5% for moderate hyperopia and 84.6% for severe hyperopia, giving an average classification accuracy of 85.5%, an area under the curve (AUC) of 0.909, a sensitivity of 0.93, a specificity of 0.89 and a Kappa value of 0.79 (χ²=23.21, P<0.05). Conclusion: The deep learning based RE-Net artificial intelligence diagnostic system performs well in the diagnostic assessment of refractive error and is expected to provide a new screening tool for refractive error.
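RE-Net's attention module is described only as channel attention plus spatial attention on top of a ResNet34 backbone. The original implementation is not given, so the sketch below shows a generic CBAM-style channel-then-spatial attention block of the kind described; the class name, reduction ratio and 7×7 spatial kernel are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic CBAM-style block: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(               # shared MLP for avg- and max-pooled descriptors
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))       # channel descriptor from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))        # ...and from max pooling
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: pool across channels, then a 7x7 conv produces a 2D mask.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

attn = ChannelSpatialAttention(256)
print(attn(torch.randn(2, 256, 28, 28)).shape)   # torch.Size([2, 256, 28, 28])
```

In a RE-Net-like design, a block of this kind would typically be inserted after one or more residual stages of the ResNet34 backbone before the classification head.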

7.
史春妹  谢佳君  顾佳音  刘丹  姜广顺 《生态学报》2021,41(12):4685-4693
Automatic individual identification of Amur tigers is an important basis for population assessment and for formulating effective conservation strategies. Taking 38 tigers in the Siberian Tiger Park and the Guaipo Tiger Park as study objects, this study applied object detection to Amur tiger individual identification for the first time and used several deep convolutional neural network models to achieve automatic identification of individual tigers. First, the 38 Amur tigers were photographed with cameras from different angles to build a sample dataset of 13,579 images. Because the stripe patterns on a tiger's two flanks are not symmetric, the Single Shot MultiBox Detector (SSD) was used to automatically detect and crop images of the left-flank stripes, right-flank stripes and face, greatly reducing the time needed for manual cropping. Based on the cropped left-flank, right-flank and face images, data augmentation by up, down, left and right translation expanded the image set fivefold. Five convolutional neural network models (LeNet, AlexNet, ZFNet, VGG16 and ResNet34) were used for automatic individual identification. To improve accuracy, different combinations of average and max pooling were used to optimise the pooling operation, and dropout with probabilities of 0.1, 0.2, 0.3 and 0.4 was introduced in the fully connected layers to prevent overfitting. Experiments show that the object detection model is fast, cropping the stripe regions of the different body parts at 0.6 s per image, far faster than manual cropping, and reaching 97.4% accuracy on the test set; target parts were correctly detected and segmented across different postures. ResNet34 outperformed the other models, with identification accuracies of 93.75%, 97.01% and 86.28% for left-flank, right-flank and face images respectively; right-flank stripes were identified more accurately than left-flank stripes and faces. This study provides a technical reference for identifying wild tigers in camera-trap images. In future work, the individual image data of Amur tigers will be expanded and more images used for training, so that the network generalises better and achieves more accurate individual identification.
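Two of the generic tweaks mentioned here (combining average and max pooling, and dropout with p = 0.1–0.4 in the fully connected layers) can be sketched as a classifier head. The PyTorch snippet below is one plausible reading of those tweaks, not the authors' code; the mixing weight alpha is a hypothetical parameter.

```python
import torch
import torch.nn as nn

class MixedPoolHead(nn.Module):
    """Classifier head: blend of global average and max pooling, then dropout + FC."""
    def __init__(self, in_ch, num_ids, alpha=0.5, p_drop=0.3):
        super().__init__()
        self.alpha = alpha                      # hypothetical mixing weight between avg and max
        self.avg = nn.AdaptiveAvgPool2d(1)
        self.max = nn.AdaptiveMaxPool2d(1)
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p_drop),                 # the study tried p = 0.1, 0.2, 0.3, 0.4
            nn.Linear(in_ch, num_ids),
        )

    def forward(self, feats):
        pooled = self.alpha * self.avg(feats) + (1 - self.alpha) * self.max(feats)
        return self.fc(pooled)

head = MixedPoolHead(in_ch=512, num_ids=38)     # 38 tigers in the study
print(head(torch.randn(4, 512, 7, 7)).shape)    # torch.Size([4, 38])
```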

8.
Birdsong recognition is an important tool for ecological monitoring. To further improve the accuracy and robustness of birdsong recognition, this paper proposes a new birdsong recognition method based on deep feature fusion. The method first uses a deep feature extraction network to extract deep features from the log-Mel spectrogram of the birdsong and from a supplementary feature set, then fuses the two kinds of deep features, and finally classifies them with a LightGBM (light gradient boosting machine) classifier. By separating feature extraction from classification, the method makes full use of the feature extraction capability of deep neural networks and the classification performance of LightGBM, achieving highly accurate birdsong recognition. Experimental results show that the proposed method achieves the best results reported so far on the 北京百鸟 (Beijing Hundred Birds) dataset, with an average accuracy of 98.70% and an average F1 score of 98.84%. Compared with traditional methods, the fused deep features improve birdsong recognition accuracy by more than 5.62%, and the LightGBM classifier improves classification accuracy by a further 3.02%. On the CLO-43SD and BirdCLEF 2022 competition datasets the method also performs excellently, with average accuracies of 98.32% and 91.12% respectively. Class activation maps are also introduced to provide an interpretability analysis of the recognition results for different types of birdsong, revealing...
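The pipeline separates feature extraction (deep features from log-Mel spectrograms plus a supplementary feature set) from classification (LightGBM). The sketch below illustrates that split with a pretrained CNN as a generic feature extractor and randomly generated arrays standing in for the real spectrograms, supplementary features and labels; every array here is placeholder data and the backbone choice is an assumption.

```python
import numpy as np
import torch
from torchvision import models
from lightgbm import LGBMClassifier

# Generic deep-feature extractor: ResNet18 with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

def deep_features(spectrograms):               # (N, 3, 224, 224) log-Mel "images"
    with torch.no_grad():
        return backbone(spectrograms).numpy()  # (N, 512)

# Placeholder data: 64 "recordings", 20 supplementary (hand-crafted) features each.
spec = torch.randn(64, 3, 224, 224)
supp = np.random.rand(64, 20)
labels = np.random.randint(0, 5, size=64)

# Feature fusion = concatenation of the deep and supplementary feature blocks.
fused = np.concatenate([deep_features(spec), supp], axis=1)

# LightGBM does the actual classification on the fused features.
clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(fused, labels)
print(clf.score(fused, labels))
```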

9.
To address the low accuracy of traditional methods in protein secondary structure classification, this paper presents a convolutional neural network image classification algorithm based on the grey wolf optimizer (GWO). First, the parameters of the convolutional neural network to be optimised are selected, and the GWO's iteration count, number of wolves, search bounds and problem dimensionality are initialised. Next, the fitness of each individual is computed and ranked to determine the historical best, second-best and third-best solutions, and the wolves' positions are updated accordingly. Finally, the parameter-optimised convolutional neural network is used to classify protein secondary structures. 3D models of protein secondary structure were obtained from a protein database and converted into 2D images rendered from multiple viewpoints to form the experimental dataset. A residual network, AlexNet and VGG16 achieved accuracies of 92.6%, 87.3% and 88.9%, respectively; on the same dataset, the traditional support vector machine and Bayesian classifiers achieved 67.0% and 53.0%. The results show that, for protein secondary structure classification, the convolutional neural network optimised with the grey wolf algorithm is more accurate than traditional methods.
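The abstract walks through the grey wolf optimizer steps: initialise the pack, rank wolves by fitness into alpha, beta and delta, and move the remaining wolves toward them. The standard GWO update is sketched below on a toy objective; in the paper the fitness would instead be the validation error of a CNN trained with the candidate hyperparameters, and all names and bounds here are illustrative.

```python
import numpy as np

def gwo(fitness, dim, bounds, n_wolves=10, n_iter=50, seed=0):
    """Minimise `fitness` with the standard grey wolf optimizer update rule."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        order = np.argsort([fitness(w) for w in wolves])
        alpha, beta, delta = wolves[order[:3]]        # the three best solutions lead the pack
        a = 2 - 2 * t / n_iter                        # control parameter decays from 2 to 0
        for i in range(n_wolves):
            moves = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - wolves[i])    # distance to the leader
                moves.append(leader - A * D)
            wolves[i] = np.clip(np.mean(moves, axis=0), lo, hi)
    best = min(wolves, key=fitness)
    return best, fitness(best)

# Toy fitness standing in for "validation error of a CNN trained with these
# hyperparameters" (e.g. log10 learning rate and dropout probability).
toy_fitness = lambda x: (x[0] + 3.0) ** 2 + (x[1] - 0.3) ** 2
print(gwo(toy_fitness, dim=2, bounds=([-5.0, 0.0], [-1.0, 0.9])))
```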

10.
Making full use of the flank texture patterns of the tiger puffer (Takifugu rubripes), this paper proposes a fish individual identification method based on a lightweight convolutional neural network that achieves high-accuracy, non-invasive identification of T. rubripes individuals. First, the SOLOv2 model is used for foreground segmentation, and, taking the body shape of T. rubripes into account, centroid and hash-value calculations are used to generate and filter the dataset. Next, mainstream deep learning image classification backbones and different loss functions are tested from multiple perspectives for T. rubripes identification. Finally, an optimal combination for non-invasive individual identification of T. rubripes is established by coupling a MobileNetV2 backbone with the Softmax loss. The results show that the proposed method reaches an accuracy of 90.2%, outperforming other mainstream methods (73.6%–89.3%), and the findings can provide technical support for non-invasive individual identification and accurate biomass estimation of fish in recirculating aquaculture systems.

11.
Diptera insects spread diseases and damage forests, and the similarities among different species make them difficult to identify. Most traditional convolutional neural networks have large parameter counts and high recognition latency, so they are not suitable for deployment on embedded devices for classification and recognition. This paper proposes an improved neural architecture based on a differentiable search method. First, we designed a network search cell by adding the feature output of the previous layer to each search cell. Second, we added an attention module to the search space to expand the searchable range. At the same time, we used methods such as model quantization and replacing the ReLU function with ReLU6 to reduce computing resource consumption. Finally, the network model was transplanted to the NVIDIA Jetson Xavier NX embedded development platform to verify its performance, so that neural architecture search could be combined organically with the embedded development platform. The experimental results show that the designed neural architecture achieves 98.9% accuracy on the Diptera insect dataset with a latency of 8.4 ms, which is of practical significance for the recognition of Diptera insects on embedded devices.
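Two of the resource-saving steps mentioned here, swapping ReLU for ReLU6 and quantizing the model, can be shown generically in PyTorch. The sketch below replaces activations in an arbitrary module and applies post-training dynamic quantization to its linear layers; it is not the authors' NAS pipeline, just an illustration of those two steps on a toy model.

```python
import torch
import torch.nn as nn

def relu_to_relu6(module: nn.Module) -> None:
    """Recursively replace every nn.ReLU with nn.ReLU6 (a bounded activation,
    friendlier to low-precision / embedded inference)."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.ReLU6(inplace=True))
        else:
            relu_to_relu6(child)

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
relu_to_relu6(model)

# Post-training dynamic quantization: weights of Linear layers stored as int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 128)).shape)   # torch.Size([1, 10])
```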

12.
Detecting and monitoring underwater organisms is very important for sea aquaculture. The human eye struggles to quickly distinguish between aquatic species due to their variety and dense dispersion. In this paper, a deep learning object detection algorithm based on YOLOv7 is used to design a new network, called Underwater-YOLOv7 (U-YOLOv7), for underwater organism detection. This model satisfies the requirements with regard to both speed and accuracy. First, a network combining CrossConv and an efficient squeeze-excitation module is created. This network increases the extraction of channel information while reducing parameters and enhancing the feature fusion of the network. Second, a lightweight Content-Aware ReAssembly of FEatures (CARAFE) operator is used to obtain more semantic information about underwater images before feature fusion. A 3D attention mechanism is incorporated to improve the anti-interference ability of the model in underwater recognition. Finally, a decoupled head using hybrid convolution is designed to accelerate convergence and improve the accuracy of underwater detection. The results show that the proposed network improves accuracy by 3.2%, recall by 2.3% and mean average precision by 2.8%, runs at up to 179 fps, and far outperforms other advanced networks. Moreover, it has higher estimation accuracy than the YOLOv7 network.

13.
An effective practice for monitoring bird communities is the recognition and identification of their acoustic signals, whether simple, complex, fixed or variable. A method is presented for the passive monitoring of the diversity, activity and acoustic phenology of the structural species of a bird community over an annual cycle. The method includes the semi-automatic elaboration of a dataset of 22 vocal and instrumental forms of 16 species. To analyze bioacoustic richness, the UMAP algorithm was run on two parallel feature extraction channels. A convolutional neural network was trained using STFT-Mel spectrograms to perform automatic identification of bird species. The predictive performance was evaluated, obtaining a minimum average precision of 0.79, a maximum of 1.0 and a mAP of 0.97. The model was applied to a large set of passive recordings made in a network of urban wetlands for one year. The acoustic activity results were synchronized with climatological temperature data and sunlight hours. The results confirm that the proposed method allows monitoring of a taxonomically diverse group of birds that make up the annual soundscape of an ecosystem, as well as detecting the presence of cryptic species that often go unnoticed.
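The workflow combines STFT-Mel spectrograms (as CNN input) with UMAP embeddings of extracted features to explore acoustic richness. The snippet below sketches both ingredients on synthetic data: a log-Mel spectrogram via librosa and a 2-D UMAP projection via umap-learn. The signal, the feature matrix and the parameter values are synthetic stand-ins, not the study's data.

```python
import numpy as np
import librosa
import umap

sr = 22050
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
signal = np.sin(2 * np.pi * 2000 * t)           # synthetic 2 kHz "call"

# STFT-Mel spectrogram in decibels, the kind of image later fed to the CNN.
mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_fft=1024, hop_length=256, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)
print(log_mel.shape)                            # (64, n_frames)

# UMAP projection of a (placeholder) feature matrix: one row per vocalisation.
features = np.random.rand(200, 40)
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(features)
print(embedding.shape)                          # (200, 2)
```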

14.
The performance of an artificial neural network for automatic identification of phytoplankton was investigated with data from algal laboratory cultures, analysed on the Optical Plankton Analyser (OPA), a flow cytometer especially developed for the analysis of phytoplankton. Data from monocultures of eight algal species were used to train a neural network. The performance of the trained network was tested with OPA data from mixtures of laboratory cultures. The network could distinguish Cyanobacteria from other algae with 99% accuracy. The identification of species was performed with less accuracy, but was generally >90%. This indicates that a neural network under supervised learning can be used for automatic identification of species in relatively complex mixtures. Incorporation of such a system may also increase the operational size range of a flow cytometer. The combination of the OPA and neural network data analysis offers the elements to build an operational automatic algal identification system.
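The OPA study trains a supervised neural network on per-particle flow-cytometer measurements, essentially a small multilayer perceptron on tabular features. A minimal scikit-learn sketch of that setup is given below; the feature count, class count and data are invented placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder optical features (e.g. scatter and fluorescence channels) for 8 "species".
X = np.random.rand(800, 6)
y = np.random.randint(0, 8, size=800)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                # supervised learning, here on random stand-in data
print(clf.score(X_test, y_test))         # in the study: trained on monocultures, tested on mixtures
```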

15.
The accurate identification of rice varieties using rapid and nondestructive hyperspectral technology is of practical significance for rice cultivation and agricultural production. This paper proposes a convolutional neural network classification model based on a self-attention mechanism (self-attention-1D-CNN) to improve accuracy in distinguishing between crop species in fields using canopy spectral information. After the experimental materials were planted in the research area, portable equipment was used to collect canopy hyperspectral data for rice at the booting stage. Five preprocessing methods and three extraction methods were used to process the data. A comparison of the classification accuracy of different models showed that the proposed self-attention-1D-CNN achieved the best classification, with an accuracy of 99.93%. The research demonstrates the feasibility of using hyperspectral technology for the fine classification of rice varieties and of using the CNN model as a potential classification method for near-ground crop monitoring and classification.
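The name self-attention-1D-CNN suggests 1-D convolutions over the spectral axis combined with a self-attention layer, but the exact architecture is not given in the abstract. The following is therefore only a generic sketch of that combination; the layer sizes, band count and class count are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttention1DCNN(nn.Module):
    """Generic 1-D CNN over spectral bands with a self-attention layer on top."""
    def __init__(self, n_bands=200, n_classes=5, d_model=64):
        super().__init__()
        self.conv = nn.Sequential(                    # local spectral features
            nn.Conv1d(1, d_model, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                             # x: (batch, n_bands) canopy reflectance
        h = self.conv(x.unsqueeze(1))                 # (batch, d_model, n_bands)
        h = h.transpose(1, 2)                         # (batch, n_bands, d_model)
        h, _ = self.attn(h, h, h)                     # self-attention across the bands
        return self.head(h.mean(dim=1))               # pool over bands, classify the variety

model = SelfAttention1DCNN()
print(model(torch.randn(8, 200)).shape)               # torch.Size([8, 5])
```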

16.
The reproductive performance of sows is an important indicator for evaluating the economic efficiency and production level of pig farming. In this paper, we design and propose a lightweight sow oestrus detection method based on acoustic data and deep convolutional neural network (CNN) algorithms by collecting and analysing short- and long-frequency sow oestrus sounds. We use visual log-mel spectrograms, which reflect three-dimensional information, as inputs to the network model to improve overall recognition accuracy. The improved lightweight MobileNetV3_esnet model is used to identify oestrus and non-oestrus sounds and is compared with existing algorithms. The model outperforms the other algorithms, with 97.12% precision, 97.34% recall, 97.59% F1-score and 97.52% accuracy; the model size is 5.94 MB. Compared with traditional oestrus monitoring methods, the proposed method can more accurately capture the vocal characteristics exhibited by sows in latent oestrus, thus providing an efficient and accurate approach for practical oestrus monitoring and early-warning systems on pig farms.

17.
Plant species recognition has become an important research area in image recognition in recent years, but existing methods have low recognition accuracy and do not meet professional requirements. Therefore, ShuffleNetV2 was improved in this study by combining an attention mechanism, convolution kernel size adjustment, convolution tailoring and CSP technology to improve accuracy and reduce the amount of computation. Six convolutional neural network models with sufficient trainable parameters were designed for comparative experiments. The SGD algorithm is used to optimize the training process and avoid overfitting or falling into a local optimum. A plant image dataset, TJAU10, collected with cell phones in a natural context was constructed, containing 3,000 images of 10 plant species on the campus of Tianjin Agricultural University. Finally, the improved model is compared with the baseline version and achieves better results in terms of both accuracy and computational cost: the recognition accuracy tested on the TJAU10 dataset reaches 98.3% and the recognition precision reaches 93.6%, which is 5.1% better than the original model, while the computational effort is reduced by about 31%. In addition, the experimental results were evaluated using metrics such as the confusion matrix, and the model can meet professionals' requirements for the accurate identification of plant species.

18.
Repeat photography is an efficient method for documenting long-term landscape changes. So far, the use of repeat photographs for quantitative analyses has been limited to approaches based on manual classification. In this paper, we demonstrate the application of a convolutional neural network (CNN) for the automatic detection and classification of woody regrowth vegetation in repeat landscape photographs. We also tested whether the classification results from the automatic approach can be used to quantify changes in woody vegetation cover between image pairs. The CNN was trained with 50 × 50 pixel tiles of woody and non-woody vegetation. We then tested the classifier on 17 pairs of repeat photographs to assess model performance on unseen data. Results show that the CNN performed well in differentiating woody from non-woody vegetation (accuracy = 87.7%), but accuracy varied strongly between individual images. The very similar appearance of woody vegetation and herbaceous species in photographs made this a much more challenging task than classifying vegetation as a single class (accuracy = 95.2%). In this regard, image quality was identified as one important factor influencing classification accuracy. Although the automatic classification provided good individual results on most of the 34 test photographs, change statistics based on the automatic approach deviated from the actual changes. Nevertheless, the automatic approach was capable of identifying clear trends of increasing or decreasing woody vegetation in repeat photographs. Generally, the use of repeat photography in landscape monitoring adds significant value to other quantitative data retrieved from remote sensing and field measurements. Moreover, these photographs can raise awareness of landscape change among policy makers and the public, and they provide clear feedback on the effects of land management.
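Because the CNN classifies 50 × 50 pixel tiles as woody vs non-woody, a photograph's woody cover can be estimated from the fraction of tiles labelled woody. The snippet below shows only the tiling and the cover computation; the photograph and the "classifier" (a simple greenness rule) are dummy stand-ins for the trained CNN.

```python
import numpy as np

def tile_image(img, size=50):
    """Cut an H x W x 3 image into non-overlapping size x size tiles (edge remainders dropped)."""
    h, w = img.shape[0] // size * size, img.shape[1] // size * size
    tiles = (img[:h, :w]
             .reshape(h // size, size, w // size, size, 3)
             .swapaxes(1, 2)
             .reshape(-1, size, size, 3))
    return tiles

def woody_cover(img, classify):
    """Fraction of tiles classified as woody vegetation (class 1)."""
    labels = classify(tile_image(img))
    return float(np.mean(labels == 1))

# Dummy stand-ins: a random photo and a rule that flags greenish tiles as "woody".
photo = np.random.randint(0, 256, size=(1000, 1500, 3), dtype=np.uint8)
green_rule = lambda tiles: (tiles[..., 1].mean(axis=(1, 2)) > 128).astype(int)
print(woody_cover(photo, green_rule))
```

Comparing this per-image cover fraction between the two photographs of a repeat pair gives the change statistic discussed in the abstract.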

19.
Biometric identification provides an important tool for precision livestock farming. This study investigates the effect of weight gain and sheep maturation on recognition performance. Sheep facial identification was implemented using two convolutional neural networks (CNNs), Faster R-CNN and ResNet50V2, equipped with the state-of-the-art Additive Angular Margin (ArcFace) loss function. The identification model was tested on 47 young sheep at different stages during a 3-month growth period, when they were between 2 and 5 months old, over which the sheep gained approximately 30 kilograms in weight. Results revealed that when the model was trained and tested on images of sheep aged 2 months, the average accuracy of the group was 95.4%, compared with 91.3% when trained on images of sheep aged 2 months but tested on images of sheep aged 5 months.
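ArcFace (Additive Angular Margin) adds a margin m to the angle between an embedding and its class weight before the softmax, which is what makes it attractive for biometric identification tasks such as sheep faces. A minimal PyTorch sketch of the ArcFace logit computation follows; the margin and scale values are the commonly used defaults, not necessarily those of this study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive Angular Margin logits: cos(theta + m) for the target class, scaled by s."""
    def __init__(self, emb_dim, n_classes, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.s, self.m = s, m

    def forward(self, emb, labels):
        # Cosine similarity between L2-normalised embeddings and class weights.
        cos = F.linear(F.normalize(emb), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        target = F.one_hot(labels, cos.size(1)).bool()
        # Add the angular margin m only on the ground-truth class, then rescale by s.
        logits = torch.where(target, torch.cos(theta + self.m), cos) * self.s
        return logits

head = ArcFaceHead(emb_dim=512, n_classes=47)      # 47 sheep in the study
emb = torch.randn(4, 512)
labels = torch.randint(0, 47, (4,))
loss = nn.CrossEntropyLoss()(head(emb, labels), labels)
print(loss.item())
```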
