Similar Articles
20 similar articles found (search time: 62 ms)
1.
White blood cell (WBC) detection plays a vital role in peripheral blood smear analysis. However, cell detection remains a challenging task due to multi-cell adhesion and different staining and imaging conditions. Owing to the powerful feature extraction capability of deep learning, object detection methods based on convolutional neural networks (CNNs) have been widely applied in medical image analysis. Nevertheless, CNN training is time-consuming and inaccurate, especially for large-scale blood smear images, where most of each image is background. To address this problem, we propose a two-stage approach that treats WBC detection as a small salient object detection task. In the first, saliency detection stage, we use Itti's visual attention model to locate the regions of interest (ROIs), based on the proposed adaptive center-surround difference (ACSD) operator. In the second, WBC detection stage, a modified CenterNet model is applied to the ROI sub-images to obtain a more accurate localization and classification result for each WBC. Experimental results show that our method exceeds the performance of several existing methods on two different data sets and achieves a state-of-the-art mAP of over 98.8%.
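A center-surround saliency step of the kind described above can be illustrated with a few lines of OpenCV. The sketch below is a minimal approximation, not the authors' exact ACSD operator; the channel choice, Gaussian scales, blob-size threshold and ROI padding are assumed values.

```python
# Minimal center-surround saliency sketch (illustrative, not the paper's exact ACSD operator).
import cv2
import numpy as np

def candidate_wbc_rois(bgr_image, center_sigma=3, surround_sigma=25, pad=32):
    """Return padded bounding boxes of salient (likely WBC) regions in a smear image."""
    # WBC nuclei stain intensely, so the saturation channel highlights them well (assumption).
    sat = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.float32)
    center = cv2.GaussianBlur(sat, (0, 0), center_sigma)       # fine (center) estimate
    surround = cv2.GaussianBlur(sat, (0, 0), surround_sigma)   # coarse (surround) estimate
    saliency = np.clip(center - surround, 0, None)             # center-surround difference
    saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(saliency, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for x, y, w, h, area in stats[1:]:                         # skip the background label
        if area > 200:                                         # assumed minimum blob size
            x0, y0 = max(x - pad, 0), max(y - pad, 0)
            boxes.append((x0, y0, x + w + pad, y + h + pad))   # padded ROI for the detector stage
    return boxes
```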

2.
《IRBM》2022,43(5):422-433
Background: Electrocardiography (ECG) is a method of recording the electrical activity of the heart, and it provides a diagnostic means for heart-related diseases. Arrhythmia is any irregularity of the heartbeat that causes an abnormality in the heart rhythm. Early detection of arrhythmia is of great importance for preventing many diseases. Manual analysis of ECG recordings is not practical for quickly identifying arrhythmias that may cause sudden deaths. Hence, many studies have been presented to develop computer-aided diagnosis (CAD) systems to automatically identify arrhythmias. Methods: This paper proposes a novel deep learning approach to identify arrhythmias in ECG signals. The proposed approach identifies arrhythmia classes using a Convolutional Neural Network (CNN) trained on two-dimensional (2D) ECG beat images. Firstly, ECG signals containing 5 different arrhythmias are segmented into heartbeats, which are transformed into 2D grayscale images. Afterward, the images are used as input for training a new CNN architecture to classify heartbeats. Results: The experimental results show that the classification performance of the proposed approach reaches an overall accuracy of 99.7%, sensitivity of 99.7%, and specificity of 99.22% in the classification of five different ECG arrhythmias. Further, the proposed CNN architecture is compared to other popular CNN architectures, such as LeNet and ResNet-50, to evaluate the performance of the study. Conclusions: Test results demonstrate that the deep network trained on ECG images provides outstanding classification performance for arrhythmic ECG signals and outperforms similar network architectures. Moreover, the proposed method has lower computational costs than existing methods and is more suitable for mobile device-based diagnosis systems, as it does not involve any complex preprocessing. Hence, the proposed approach provides a simple and robust automatic cardiac arrhythmia detection scheme for the classification of ECG arrhythmias.
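The beat-to-image conversion described above can be reproduced with plain NumPy. The window length and image size in this sketch are assumptions, not the paper's settings; it cuts a fixed window around an R-peak and rasterizes the 1-D waveform into a square grayscale image suitable for a 2-D CNN.

```python
# Sketch: turn an ECG beat (1-D samples around an R-peak) into a 2-D grayscale image.
import numpy as np

def beat_to_image(signal, r_peak, half_window=96, size=128):
    """Rasterize the window [r_peak - half_window, r_peak + half_window) as a size x size image."""
    beat = signal[max(r_peak - half_window, 0): r_peak + half_window]
    rng = beat.max() - beat.min()
    beat = (beat - beat.min()) / rng if rng > 0 else np.zeros_like(beat)  # amplitude -> [0, 1]
    cols = np.linspace(0, size - 1, num=len(beat)).astype(int)            # time -> x axis
    rows = ((1.0 - beat) * (size - 1)).astype(int)                        # amplitude -> y axis (origin top-left)
    img = np.zeros((size, size), dtype=np.uint8)
    img[rows, cols] = 255                                                 # draw the waveform trace
    return img

# Usage with a synthetic stand-in signal (hypothetical, just to show the call):
sig = np.sin(np.linspace(0, 6 * np.pi, 1000))
image = beat_to_image(sig, r_peak=500)
```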

3.
Visual detection of plant diseases over a large area is time-consuming, and the results are prone to errors due to the subjective nature of human evaluations. Several automatic disease detection techniques exist that improve detection time and accuracy compared to visual methods, yet they are not suitable for immediate detection. In this paper, we propose a hybrid convolutional neural network (CNN) model to speed up the detection of fall armyworm (FAW)-infested maize leaves. Specifically, the proposed system combines unmanned aerial vehicle (UAV) technology, to autonomously capture maize leaves, with a hybrid CNN model based on a parallel structure specifically designed to take advantage of the benefits of two individual models, namely VGG16 and InceptionV3. We compare the performance of the proposed model, in terms of accuracy and training time, to four existing CNN models, namely VGG16, InceptionV3, XceptionNet, and ResNet50. The results show that the proposed hybrid model reduces the training time by 16% to 44% compared to the other models while achieving the highest accuracy of 96.98%.
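A parallel two-backbone design of this kind can be sketched in Keras as follows. This is only one plausible reading of the described architecture; the input size, pooling layers and classification head are assumptions rather than the authors' exact configuration.

```python
# Sketch of a parallel VGG16 + InceptionV3 feature extractor with a shared classification head.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, InceptionV3

def build_hybrid(num_classes=2, input_shape=(224, 224, 3)):
    inputs = layers.Input(shape=input_shape)
    vgg = VGG16(include_top=False, weights="imagenet", input_tensor=inputs)
    inception = InceptionV3(include_top=False, weights="imagenet", input_tensor=inputs)
    # Pool each branch to a fixed-length vector and fuse the two branches.
    v = layers.GlobalAveragePooling2D()(vgg.output)
    i = layers.GlobalAveragePooling2D()(inception.output)
    fused = layers.Concatenate()([v, i])
    x = layers.Dense(256, activation="relu")(fused)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_hybrid()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```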

4.
The present paper introduces a focus stacking-based approach for automated quantitative detection of Plasmodium falciparum malaria from blood smears. For the detection, a custom-designed convolutional neural network (CNN) operating on a focus stack of images is used. The cell counting problem is addressed as a segmentation problem, and we propose a 2-level segmentation strategy. The use of a CNN operating on a focus stack for the detection of malaria is the first of its kind; it not only improved the detection accuracy (both in terms of sensitivity [97.06%] and specificity [98.50%]) but also favored processing on cell patches and avoided the need for hand-engineered features. The slide images are acquired with a custom-built portable slide scanner made from low-cost, off-the-shelf components, which is suitable for point-of-care diagnostics. The proposed approach of employing sophisticated algorithmic processing together with inexpensive instrumentation can potentially benefit clinicians by enabling malaria diagnosis at the point of care.

5.
Evaluation of blood smears is a common clinical test these days. Most of the time, hematologists are interested in white blood cells (WBCs) only. Digital image processing techniques can help them in their analysis and diagnosis. For example, a disease like acute leukemia is detected based on the amount and condition of the WBCs. The main objective of this paper is to segment the WBC into its two dominant elements: nucleus and cytoplasm. The segmentation is conducted using a proposed segmentation framework that consists of an integration of several digital image processing algorithms. Twenty microscopic blood images were tested, and the proposed framework managed to obtain 92% accuracy for nucleus segmentation and 78% for cytoplasm segmentation. The results indicate that the proposed framework is able to extract the nucleus and cytoplasm regions in a WBC image sample.
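A minimal version of the nucleus-segmentation step, assuming (as is common) that nuclei are the most intensely stained, high-saturation structures, can be written with scikit-image; it is an illustration, not the paper's full framework.

```python
# Sketch: rough nucleus segmentation of a stained WBC image with scikit-image.
import numpy as np
from skimage import color, filters, morphology

def segment_nucleus(rgb_image, min_size=200):
    """Return a boolean nucleus mask; the cytoplasm can then be sought around it."""
    hsv = color.rgb2hsv(rgb_image)
    saturation = hsv[:, :, 1]                       # nuclei are intensely stained -> high saturation
    mask = saturation > filters.threshold_otsu(saturation)
    mask = morphology.remove_small_objects(mask, min_size=min_size)
    mask = morphology.binary_closing(mask, morphology.disk(3))
    return mask

# skimage.measure.label(segment_nucleus(img)) then gives per-nucleus regions
# (area, centroid, ...) for downstream measurements.
```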

6.
R.R. Janghel  Y.K. Rathore 《IRBM》2021,42(4):258-267
Objectives: Alzheimer's Disease (AD) is the most common type of dementia. In all leading countries, it is one of the primary causes of death among senior citizens. Currently, it is diagnosed by calculating the MMSE score and by manual study of MRI scans. Different machine learning methods have also been utilized for automatic diagnosis, but existing ones have some limitations in terms of accuracy. So, the main objective of this paper is to include a preprocessing step before the CNN model to increase the classification accuracy. Materials and methods: In this paper, we present a deep learning-based approach for detection of Alzheimer's Disease from the ADNI database of Alzheimer's disease patients; the dataset contains fMRI and PET images of Alzheimer's patients along with images of normal persons. We applied 3D-to-2D conversion and resizing of the images before applying the VGG-16 convolutional neural network architecture for feature extraction. Finally, for classification, SVM, linear discriminant, K-means clustering, and decision tree classifiers are used. Results: The experimental results show that an average accuracy of 99.95% is achieved for the classification of the fMRI dataset, while an average accuracy of 73.46% is achieved with the PET dataset. On comparing results on the basis of accuracy, specificity, sensitivity and some other parameters, we found that these results are better than those of existing methods. Conclusions: This paper suggests a unique way to increase the performance of CNN models by applying some preprocessing to the image dataset before sending it to the CNN architecture for feature extraction. We applied this method to the ADNI database, and on comparing the accuracies with other similar approaches, it shows better results.
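The extract-features-then-classify pipeline described above can be sketched as follows; the input arrays, preprocessing and SVM settings are placeholders, not the study's actual ADNI data or hyperparameters.

```python
# Sketch: pretrained VGG-16 as a fixed feature extractor, followed by an SVM classifier.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), e.g. resized 2-D slices of fMRI/PET volumes."""
    backbone = VGG16(include_top=False, weights="imagenet", pooling="avg")
    return backbone.predict(preprocess_input(images.copy()), verbose=0)   # (n, 512) feature vectors

# Random arrays standing in for preprocessed slices and labels (purely illustrative, not real data).
X_img = np.random.rand(40, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, 2, size=40)

X = extract_features(X_img)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```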

7.
Hemangiopericytoma growth in syngeneic male F1 (CBA X C57BL/6j) mice led to regular hemopoietic changes: an increase in spleen weight, spleen cell count, and the number of colony-forming units (CFU). It also induced augmentation of myelopoiesis in the spleen and leukocytosis with a sharp increase of segmented granulocytes in the circulating blood, characterized as the "leukemoid reaction" syndrome. The latter occurred even when the tumour cells were transplanted into a splenectomized host. Although less marked, this leukemoid reaction was also noted in advanced hepatoma transplanted to syngeneic male F1 (CBA X C57BL/6j) mice. No leukemoid reaction was observed in these mice after grafting a syngeneic strain of urinary bladder carcinoma.

8.
Repeat photography is an efficient method for documenting long-term landscape changes. So far, the use of repeat photographs for quantitative analyses has been limited to approaches based on manual classification. In this paper, we demonstrate the application of a convolutional neural network (CNN) for the automatic detection and classification of woody regrowth vegetation in repeat landscape photographs. We also tested whether the classification results based on the automatic approach can be used for quantifying changes in woody vegetation cover between image pairs. The CNN was trained with 50 × 50 pixel tiles of woody vegetation and non-woody vegetation. We then tested the classifier on 17 pairs of repeat photographs to assess the model performance on unseen data. Results show that the CNN performed well in differentiating woody vegetation from non-woody vegetation (accuracy = 87.7%), but accuracy varied strongly between individual images. The very similar appearance of woody vegetation and herbaceous species in photographs made this a much more challenging task than classifying vegetation as a single class (accuracy = 95.2%). In this regard, image quality was identified as one important factor influencing classification accuracy. Although the automatic classification provided good individual results on most of the 34 test photographs, change statistics based on the automatic approach deviated from actual changes. Nevertheless, the automatic approach was capable of identifying clear trends of increasing or decreasing woody vegetation in repeat photographs. Generally, the use of repeat photography in landscape monitoring represents a significant added value to other quantitative data retrieved from remote sensing and field measurements. Moreover, these photographs are able to raise awareness of landscape change among policy makers and the public, as well as provide clear feedback on the effects of land management.
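The tiling and cover-change computation can be sketched as below, where classify_tile stands for the trained CNN (a hypothetical callable returning 1 for woody vegetation) and the tile size follows the 50 × 50 px setup described above.

```python
# Sketch: estimate woody-vegetation cover of a repeat photograph from 50x50 px tile classifications.
import numpy as np

def woody_cover(image, classify_tile, tile=50):
    """image: HxWx3 array; classify_tile: callable tile -> 1 (woody) or 0 (non-woody)."""
    h, w = image.shape[:2]
    votes = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            votes.append(classify_tile(image[y:y + tile, x:x + tile]))
    return float(np.mean(votes))          # fraction of tiles classified as woody vegetation

# Change between an image pair = cover(new) - cover(old); positive values suggest woody regrowth.
# delta = woody_cover(img_recent, cnn) - woody_cover(img_historic, cnn)
```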

9.
《IRBM》2022,43(5):405-413
Purpose: Leukaemia is conventionally diagnosed by observing the peripheral blood and bone marrow smear under a microscope and with the help of advanced laboratory tests. Image processing-based methods, which are simple, fast, and cheap, can be used to detect and classify leukemic cells by processing and analysing images of microscopic smears. The proposed study aims to classify Acute Lymphoblastic Leukaemia (ALL) by Deep Learning (DL) based techniques. Procedures: The study used deep convolutional neural networks (DNNs) to classify ALL according to the WHO classification scheme without using any image segmentation or feature extraction, which involve intense computations. Images from an online image bank of the American Society of Hematology (ASH) were used for the classification. Findings: A classification accuracy of 94.12% is achieved in isolating the B-cell and T-cell ALL images using a pretrained CNN, AlexNet, as well as LeukNet, a custom-made deep learning network designed in the proposed work. The study also compared the classification performances obtained with three different training algorithms. Conclusions: The paper detailed the use of DNNs to classify ALL without using any image segmentation or feature extraction techniques. Classification of ALL into subtypes according to the WHO classification scheme using image processing techniques is not available in the literature, to the best of the authors' knowledge. The present study considered the classification of ALL only; detection of other types of leukemic images can be attempted in future research.

10.
Current flow-based blood counting devices require expensive and centralized medical infrastructure and are not appropriate for field use. In this article we report a streamlined, easy-to-use method to count red blood cells (RBC), white blood cells (WBC) and platelets (PLT) and to perform a 3-part WBC differential through a cost-effective and automated image-based blood counting system. The approach consists of using a compact, custom-built microscope with a large field of view to record bright-field and fluorescence images of samples that are diluted with a single, stable reagent mixture and counted using automatic algorithms. Sample collection utilizes volume-controlled capillary tubes, which are then dropped into a premixed, shelf-stable solution to stain and dilute in a single step. Sample measurement and analysis are fully automated, requiring no input from the user. The cost of the system is minimized through the use of custom-designed motorized components. We compare the performance of our system, as operated by trained and untrained users, to the clinical gold standard on 120 adult blood samples, demonstrating agreement within Clinical Laboratory Improvement Amendments guidelines, with no statistical difference in performance among operator groups. The system's cost-effectiveness, automation and performance indicate that it can be successfully translated for use in low-resource settings where central hematology laboratories are not accessible.

11.
An automatic method for quantification of images of microvessels by computing area proportions and numbers of objects is presented. The objects are segmented from the background using dynamic thresholding of the average component size histogram. To be able to count the objects, fragmented objects are connected, all objects are filled, and touching objects are separated using a watershed segmentation algorithm. The method is fully automatic and robust with respect to illumination and focus settings. A test set consisting of images grabbed with different focus and illumination for each field of view was used to test the method, and the proposed method showed less variation than the intra-operator variation using manual thresholding. Further, the method showed good correlation with manual object counting (r = 0.80) on another test set.
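The "fill objects and separate touching objects" step corresponds to a standard distance-transform watershed; a generic scikit-image version (not necessarily the paper's exact implementation) looks like this:

```python
# Sketch: split touching binary objects with a distance-transform watershed, then count them.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def count_objects(binary_mask):
    """binary_mask: boolean image of filled microvessel objects (possibly touching)."""
    distance = ndi.distance_transform_edt(binary_mask)
    # Local maxima of the distance map act as one marker per object.
    coords = peak_local_max(distance, footprint=np.ones((5, 5)), labels=binary_mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary_mask)
    return labels.max(), labels            # number of separated objects and the label image
```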

12.
Abstract. Objective: To explore the significance of the nucleated red blood cell count (NRBCs) in the risk stratification of leukemia patients. Methods: 120 patients with acute myeloid leukemia (AML) diagnosed and treated at Chenggong Hospital Affiliated to Xiamen University (our hospital) between February 2016 and July 2019 were selected. Their NRBCs were measured and risk stratification was performed; the patients' clinical data were retrospectively analyzed and correlated with their NRBCs. Results: Among the 120 patients, 40 were stratified as low risk, 60 as intermediate risk, and 20 as high risk. There were no statistically significant differences between the low-risk group and the intermediate/high-risk group in age, sex, nucleophosmin (NPM1) mutation, or bone marrow blasts (P>0.05), whereas the differences in peripheral blood blasts, FMS-like tyrosine kinase 3 (FLT3) mutation, Acute Physiology and Chronic Health Evaluation II (APACHE II) score, white blood cell (WBC) count, hemoglobin (Hb), platelet (PLT) count, albumin (ALB) and alanine aminotransferase (ALT) were statistically significant (P<0.05). The NRBC count of the low-risk group was 3.94±0.29, significantly lower than that of the intermediate/high-risk group (11.87±2.11, P=0.000). Pearson correlation analysis showed that risk stratification was significantly correlated with NRBCs, peripheral blood blasts, APACHE II score, FLT3 mutation and PLT (r=0.823, 0.566, 0.494, 0.578 and 0.781, respectively; P<0.05). Logistic regression analysis showed that NRBCs, peripheral blood blasts, APACHE II score, FLT3 mutation and PLT were the main factors influencing the risk stratification of AML patients (P<0.05). Conclusion: NRBCs differ significantly among leukemia patients with different risk stratifications; they are significantly correlated with the patients' pathological characteristics and are a main factor influencing risk stratification.
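As a generic illustration of the logistic-regression step reported above, the model could be fitted as follows; the file name, column names and data frame are hypothetical placeholders, not the study's data.

```python
# Sketch: logistic regression of risk group on clinical variables with scikit-learn.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical data frame with one row per patient; file and column names are placeholders.
df = pd.read_csv("aml_patients.csv")
features = ["nrbc_count", "peripheral_blasts", "apache2_score", "flt3_mutation", "plt_count"]
X = df[features]
y = df["high_or_intermediate_risk"]    # 1 = intermediate/high risk, 0 = low risk

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
# Coefficients indicate how strongly each factor pushes a patient toward the higher-risk group.
print(dict(zip(features, model.named_steps["logisticregression"].coef_[0].round(3))))
```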

13.
Videos and images from camera traps are increasingly used by ecologists to estimate the populations of species in a territory. This is laborious work, since experts have to analyse massive data sets manually, and it also takes a lot of time to filter the videos when many of them contain no animals or show human presence. Fortunately, deep learning algorithms for object detection can help ecologists identify multiple relevant species in their data and estimate their populations. In this study, we propose to go even further by using an object detection model to detect, classify and count species in camera-trap videos. To this end, we developed a 3-step process: (i) at the first stage, after splitting videos into images, we annotate the images by associating bounding boxes with each label using the MegaDetector algorithm; (ii) we then extend MegaDetector, based on the Faster R-CNN architecture with an Inception-ResNet-v2 backbone, in order to not only detect the 13 relevant classes but also classify them; (iii) finally, we design a method to count individuals based on the maximum number of bounding boxes detected. This final counting stage is evaluated in two different contexts: first including only detection results (i.e. comparing our predictions against the right number of individuals, regardless of their true class), then an extended version including both detection and classification results (i.e. comparing our predictions against the right number in the right class). The results obtained during the evaluation of our model on the test data set are: (i) 73.92% mAP for classification, (ii) 96.88% mAP for detection at an Intersection-over-Union (IoU) ratio of 0.5 (the overlap between the ground-truth bounding box and the detected one), and (iii) 89.24% mAP for detection at IoU = 0.75. Highly represented classes, such as humans, have the highest mAP values, around 81%, whereas classes less represented in the training data set, such as dogs, have the lowest mAP values, around 66%. Regarding the proposed counting method, we predicted a count that was either exact or within ±1 for 87% of our test data set using detection results only, and for 48% using both detection and classification results. Our model is also able to detect empty videos. To the best of our knowledge, this is the first study in France on the use of an object detection model on a French national park to locate, identify and estimate the population of species from camera-trap videos.
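The counting rule, taking the maximum number of boxes detected in any single frame of a video (optionally per class), can be written compactly; detections_per_frame below is a hypothetical per-frame list of (label, score, box) tuples produced by the detector.

```python
# Sketch: estimate how many individuals of each species appear in a camera-trap video
# by taking the maximum simultaneous detection count over all frames.
from collections import Counter

def count_individuals(detections_per_frame, score_threshold=0.5):
    """detections_per_frame: list (one entry per frame) of lists of (label, score, box) tuples."""
    best = Counter()
    for frame_dets in detections_per_frame:
        per_frame = Counter(label for label, score, _ in frame_dets if score >= score_threshold)
        for label, n in per_frame.items():
            best[label] = max(best[label], n)   # keep the largest count seen in any single frame
    return dict(best)                           # e.g. {"roe_deer": 2, "human": 1}; empty dict = empty video
```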

14.
Classification and subsequent diagnosis of cardiac arrhythmias is an important research topic in clinical practice. Confirmation of the type of arrhythmia at an early stage is critical for reducing the risk and occurrence of cardiovascular events. Nevertheless, diagnoses must be confirmed by a combination of specialist experience and electrocardiogram (ECG) examination, which can lead to delays in diagnosis. To overcome such obstacles, this study proposes an automatic ECG classification algorithm based on transfer learning and the continuous wavelet transform (CWT). The transfer learning method is able to transfer domain knowledge and features learned from images to the ECG, which is a one-dimensional signal, when a convolutional neural network (CNN) is used for classification. Meanwhile, the CWT is used to convert the one-dimensional ECG signal into a two-dimensional map of time-frequency components. Considering that morphological features can be helpful in arrhythmia classification, eight features related to the R peak of the ECG signal are proposed. These auxiliary features are integrated with the features extracted by the CNN and then fed into the fully connected arrhythmia classification layer. The CNN developed in this study can also be used for bird activity detection. Classification experiments were performed after converting two types of audio files from the NIPS4Bplus bird song dataset, those containing songbird sounds and those without, into Mel spectrograms. Compared to the most recent methods in the same field, the classification results improved accuracy and recognition by 11.67% and 11.57%, respectively.
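The time-frequency conversion and the R-peak-based auxiliary features can be sketched with PyWavelets and SciPy; the wavelet, scales and the particular features computed below are illustrative assumptions, not the eight features used in the study.

```python
# Sketch: continuous wavelet transform of an ECG segment plus a few R-peak-derived features.
import numpy as np
import pywt
from scipy.signal import find_peaks

def ecg_to_scalogram(ecg, fs=360, scales=np.arange(1, 65), wavelet="morl"):
    """Return |CWT| coefficients as a 2-D time-frequency map usable as CNN input."""
    coeffs, _ = pywt.cwt(ecg, scales, wavelet, sampling_period=1.0 / fs)
    return np.abs(coeffs)                      # shape: (len(scales), len(ecg))

def r_peak_features(ecg, fs=360):
    """A few simple rhythm/morphology features around detected R peaks (illustrative only)."""
    peaks, props = find_peaks(ecg, distance=int(0.25 * fs), height=np.mean(ecg))
    rr = np.diff(peaks) / fs                   # RR intervals in seconds
    if len(rr) == 0:
        return np.zeros(4)
    return np.array([rr.mean(), rr.std(), props["peak_heights"].mean(), len(peaks)])

# The scalogram feeds the CNN branch; the R-peak features are concatenated with the CNN
# features before the fully connected classification layer, as described above.
```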

15.
Plant diseases cause significant food loss, and hence economic loss, around the globe. Therefore, automatic plant disease identification is a primary task for taking proper measures to control the spread of diseases. The large variety of plant species and their dissimilar phytopathological symptoms call for the implementation of supervised machine learning techniques for efficient and reliable disease identification and classification. With the development of deep learning strategies, the convolutional neural network (CNN) has paved its way to the classification of multiple plant diseases by extracting rich features. However, several characteristics of the input images, especially those captured in real-world environments, viz. a complex or indistinguishable background, the presence of multiple leaves alongside the diseased leaf, and small lesion areas, severely affect the robustness and accuracy of CNN modules. Available strategies have usually applied standard CNN architectures to images captured in the laboratory environment, and very few have considered practical in-field leaf images in their studies; moreover, those studies cover only a very limited number of plant species. Therefore, there is a need for a robust CNN module that can successfully recognize and classify the dissimilar leaf health conditions of non-identical plants from in-field RGB images. To achieve the above goal, an attention dense learning (ADL) mechanism is proposed in this article by merging mixed sigmoid attention learning with the basic dense learning process of a deep CNN. The basic dense learning process derives new features at a higher layer considering all lower-layer features, which provides a fast and efficient training process. Further, the attention learning process amplifies the learning ability of the dense block by discriminating the meaningful lesion portions of the images from the background areas. Rather than adding an extra layer for attention learning, in the proposed ADL block the output features from higher-layer dense learning are used as an attention mask for the lower layers. For an effective and fast classification process, five ADL blocks are stacked to build a new CNN architecture named DADCNN-5, providing classification robustness and higher testing accuracy. Initially, the proposed DADCNN-5 module is applied to the publicly available extended PlantVillage dataset to classify 38 different health conditions of 14 plant species from 54,305 images. A classification accuracy of 99.93% proves that the proposed CNN module can be used for successful leaf disease identification. Further, the efficacy of the DADCNN-5 model is checked through stringent experiments on a new real-world plant leaf database created by the authors. The new leaf database contains 10,851 real-world RGB leaf images of 17 plant species, covering 44 distinct health conditions to be classified. Experimental outcomes reveal that the proposed DADCNN-5 outperforms existing machine learning and standard CNN architectures, achieving 97.33% accuracy. The obtained sensitivity, specificity and false positive rate values are 96.57%, 99.94% and 0.063%, respectively. The module takes approximately 3235 min for the training process and achieves 99.86% training accuracy. Visualization of class activation mapping (CAM) shows that DADCNN-5 is able to learn distinguishable features from semantically important regions (i.e. lesion regions) on the leaves. Further, the robustness of DADCNN-5 is established through experiments with augmented and noise-contaminated images from the practical database.

16.
Plant diseases play a significant role in agricultural production, and early detection of plant diseases is deemed an essential task. Current computational intelligence and computer vision methods have shown promise in improving disease diagnosis. Convolutional Neural Network (CNN) models are capable of detecting plant diseases in agricultural field and plantation leaf images. MobileNetV2 is a CNN model appropriate for mobile devices, with fewer parameters and a smaller model file size. However, the effectiveness of MobileNetV2 requires improvement to capture more critical features. Xception is an extension of InceptionV3 with fewer parameters and excellent feature extraction. This research suggests an ensemble of MobileNetV2 and Xception that concatenates their extracted features to improve plant disease detection performance. This study indicates that the MobileNetV2, Xception, and ensemble models achieve 97.32%, 98.30%, and 99.10% accuracy, respectively, when considering the entire PlantVillage dataset. In particular, the accuracy of the MobileNetV2 and Xception models improved by 1.8% and 0.8%, respectively. In addition, our model reaches 99.52% on all metric scores on a user-defined dataset. Our model shows better performance than seven state-of-the-art CNN models, both individually and in ensemble designs. It can be integrated with mobile devices, requiring fewer parameters and a smaller model file size than ensembles of MobileNetV2 with InceptionResNetV2, VGG19, and VGG16.

17.
《IRBM》2021,42(5):378-389
White blood cells play an important role in assessing the health condition of an individual. Diagnosis of blood diseases involves the identification and characterization of a patient's blood sample. Recent approaches employ Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models, as well as merged CNN and RNN models, to enrich the understanding of image content. End-to-end training on large medical image datasets has encouraged the discovery of prominent features from sample images. Single-cell patch extraction from blood samples for blood cell classification has achieved good performance rates. However, these approaches are unable to address the issue of multiple overlapping cells. To address this problem, the Canonical Correlation Analysis (CCA) method is used in this paper. The CCA method accounts for the effects of overlapping nuclei: multiple nuclei patches are extracted, learned and trained at a time. By handling overlapping blood cell images jointly, the classification time is reduced, the dimension of the input images is compressed, and the network converges faster with more accurate weight parameters. Experimental results evaluated on a publicly available database show that the proposed merged CNN and RNN model with canonical correlation analysis achieves higher accuracy than other state-of-the-art blood cell classification techniques.
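One way to read the CCA step is as learning a shared subspace between two feature views of the same overlapping-cell patches (for example CNN features and RNN features); the sketch below uses scikit-learn's CCA under that assumption and is not the authors' exact formulation.

```python
# Sketch: fuse two feature views of the same cell patches with canonical correlation analysis.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_fuse(cnn_features, rnn_features, n_components=32):
    """Both inputs: (n_samples, n_features_*) arrays describing the same patches."""
    cca = CCA(n_components=n_components)
    cca.fit(cnn_features, rnn_features)
    X_c, Y_c = cca.transform(cnn_features, rnn_features)
    # Concatenate the two projected views; this compact vector feeds the final classifier.
    return np.hstack([X_c, Y_c])

# fused = cca_fuse(cnn_feats, rnn_feats)   # shape: (n_samples, 2 * n_components)
```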

18.
[Objective] To explore the feasibility of deep learning for the automatic identification and counting of adult fall armyworms, Spodoptera frugiperda, and to evaluate the identification and counting accuracy of the models, so as to provide an image recognition and counting method for machine-based intelligent pest monitoring. [Methods] A pheromone-lure-based pest image monitoring device was designed to automatically capture images of trapped S. frugiperda adults at regular intervals; together with images of S. frugiperda adults on the sticky boards of boat-shaped traps, these were used to build the data sets. The YOLOv5 deep learning object detection model was applied for feature learning. Models were trained on data sets prepared in different ways: the original S. frugiperda images, images with incomplete edge targets removed, images with an added similar target (adult Spodoptera litura), and negative samples containing no targets. This yielded four models: Yolov5s-A1, Yolov5s-A2, Yolov5s-AB and Yolov5s-ABC. The detection results of the different models were compared on test samples under different degrees of occlusion, and the models were evaluated using precision (P), recall (R), F1 score, average precision (AP) and counting accuracy (CA). [Results] Yolov5s-A1, trained on the original image set, achieved a precision of 87.37%, a recall of 90.24% and an F1 score of 88.78; Yolov5s-A2, trained on the image set with incomplete edge targets removed, achieved a precision of 93.15%, a recall of 84.77% and an F1 score of 88.76; Yolov5s-AB, trained with additional images of S. litura adults, achieved a precision of 96.23%, a recall of 91.85% and an F1 score of 93.99; Yolov5s-ABC, trained with additional S. litura adults and negative samples without targets, achieved a precision of 94.76%, a recall of 88.23% and an F1 score of 91.38. The AP values of the four models, from high to low, ranked as Yolov5s-AB > Yolov5s-ABC > Yolov5s-A2 > Yolov5s-A1, with Yolov5s-AB and Yolov5s-ABC being close; the CA values ranked as Yolov5s-AB > Yolov5s-ABC > Yolov5s-A2 > Yolov5s-A1. [Conclusion] The results show that the proposed method is feasible for identifying and counting S. frugiperda adults in pest image monitoring devices under controlled conditions and on trap sticky boards, and that deep learning is effective for the identification and counting of S. frugiperda adults. The deep learning-based automatic identification and counting method is robust to changes in body posture and interference from debris, can automatically count S. frugiperda adults across various postures and from damaged specimens, and has broad application prospects in pest population monitoring.
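Inference and counting with a trained YOLOv5 model follows the standard Ultralytics PyTorch Hub workflow; the weights path, image file and class name below are assumptions for illustration, not artifacts from the study.

```python
# Sketch: count detected fall armyworm adults in a trap image with a trained YOLOv5 model.
import torch

# 'best.pt' stands for custom-trained weights (e.g. a Yolov5s-AB-style model); the path is hypothetical.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.5                                  # confidence threshold used for counting

results = model("trap_image.jpg")                 # hypothetical sticky-board image
detections = results.pandas().xyxy[0]             # one row per detected bounding box
faw_count = int((detections["name"] == "faw_adult").sum())   # assumed class label
print(f"fall armyworm adults detected: {faw_count}")
```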

19.
The green view ratio (GVR) is an intuitive criterion for evaluating the perception of green space. In traditional studies the GVR is mostly computed from planar images, which cannot fully reflect people's subjective perception of greenery in three-dimensional space. Based on panoramic imagery, this study proposes the concept of a panoramic green view ratio: spherical panoramas are acquired with a panoramic camera, the equirectangular (equidistant cylindrical) projection is converted to an equal-area cylindrical projection, and a convolutional neural network model based on semantic segmentation automatically identifies the area of vegetation regions, enabling automated recognition and measurement of the panoramic GVR. A comparison of five convolutional neural network models showed that the Dilated ResNet-105 model achieved the highest recognition accuracy. Taking Ziyang Park in Wuchang District, Wuhan as an example, the panoramic GVR of park paths of different levels and of squares was calculated and analyzed. Comparing the CNN results with manual interpretation, the mean intersection-over-union (mIoU) of vegetation recognition using the Dilated ResNet-105 network was 62.53%, and the average difference from manual recognition was 9.17%. Automatic recognition and calculation of the panoramic GVR can provide a new approach for related research and enable objective, accurate, fast and convenient GVR measurement and evaluation.
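Once a semantic-segmentation mask is available for an equirectangular panorama, the panoramic green view ratio reduces to an area-weighted pixel count: weighting each image row by the cosine of its latitude plays the same role as the equal-area reprojection described above. The vegetation label id in this sketch is an assumption.

```python
# Sketch: panoramic green view ratio from a semantic-segmentation mask of an equirectangular panorama.
import numpy as np

def panoramic_gvr(seg_mask, vegetation_label=1):
    """seg_mask: (H, W) integer label map from the segmentation network (equirectangular projection)."""
    h, w = seg_mask.shape
    # Row latitude runs from +90 deg (top) to -90 deg (bottom); cos(latitude) gives each
    # row's true share of the sphere's area, i.e. an implicit equal-area reprojection.
    latitudes = np.linspace(np.pi / 2, -np.pi / 2, h)
    row_weights = np.cos(latitudes)[:, None] * np.ones((1, w))
    green = (seg_mask == vegetation_label)
    return float((row_weights * green).sum() / row_weights.sum())

# gvr = panoramic_gvr(mask)   # e.g. 0.37 means 37% of the visual sphere is vegetation
```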

20.
In wheat (Triticum aestivum L.) and other cereals, the number of ears per unit area is one of the main yield-determining components. An automatic evaluation of this parameter may contribute to the advance of wheat phenotyping and monitoring. There is no standard protocol for wheat ear counting in the field, and moreover it is time consuming. An automatic ear-counting system is proposed using machine learning techniques based on RGB (red, green, blue) images acquired from an unmanned aerial vehicle (UAV). Evaluation was performed on a set of 12 winter wheat cultivars with three nitrogen treatments during the 2017–2018 crop season. The automatic system uses a frequency filter, segmentation and feature extraction, with different classification techniques, to discriminate wheat ears in micro-plot images. The relationship between image-based manual counting and the algorithm's counts exhibited high levels of accuracy and efficiency. In addition, manual ear counting was conducted in the field for secondary validation. The correlations of the automatic and the manual in-situ ear counts with grain yield (GY) were also compared. Correlations between the automatic ear counting and GY were stronger than those between manual in-situ counting and GY, particularly for the lower nitrogen treatment. Methodological requirements and limitations are discussed.
