Extraction of soybean planting area based on feature fusion technology of multi-source low altitude unmanned aerial vehicle images
Institution: 1. National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, Hefei 230601, China; 2. School of Spatial Informatics and Geomatics Engineering, Anhui University of Science and Technology, Huainan 232001, China; 3. Key Laboratory of Geospatial Technology for Middle and Lower Yellow River Regions (Henan University), Ministry of Education, Kaifeng 475004, China
Abstract: Soybean is an important food and oil crop worldwide, and accurate statistics on its planting scale are of great significance for optimizing crop planting structure and safeguarding world food security. Technology that accurately extracts soybean planting areas at the field scale from UAV images combined with deep learning algorithms therefore has important practical value. In this study, RGB images and multispectral (RGN) images were acquired simultaneously by a DJI Phantom 4 Pro quad-rotor UAV at a flying height of 200 m, and features were extracted from both image types. Fusion images of RGB + VIs and RGN + VIs were then obtained by concatenating the band reflectance of the original images with the calculated vegetation indices (VIs). The soybean planting area was segmented from the feature-fusion images by U-Net, and the accuracy of the two sensors was compared. The results showed that the Kappa coefficients obtained from the RGB image, the RGN image, CME (the combination of CIVE, MExG, and ExGR), ODR (the combination of OSAVI, DVI, and RDVI), RGB + CME (the combination of RGB and CME), and RGN + ODR (the combination of RGN and ODR) were 0.8806, 0.9327, 0.8437, 0.9330, 0.9420, and 0.9238, respectively. The Kappa coefficients of the combinations of the original images with vegetation indices were higher than those of the original images alone, indicating that the vegetation index features improved the soybean recognition accuracy of the U-Net model. Among them, the soybean planting area extracted from RGB + CME had the highest precision, with a Kappa coefficient of 0.9420. Finally, the soybean recognition accuracy of U-Net was compared with that of DeepLabv3+, Random Forest, and Support Vector Machine, and U-Net performed best. It can be concluded that the proposed method, training U-Net on fusion images that combine the original UAV imagery with vegetation index features, can effectively segment soybean planting areas. This work provides important technical support for farms, family cooperatives, and other business entities to finely manage soybean planting and production at low cost.
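The feature fusion described above is, in essence, a per-pixel concatenation of the original image bands with derived vegetation indices. The following Python sketch is not the authors' code; the variable names and the exact band normalization are assumptions, and the CIVE, MExG, and ExGR formulas are the commonly published definitions. It illustrates how an RGB + CME fusion image could be assembled before being tiled and passed to a U-Net.

# Minimal sketch: build the RGB + CME fusion input (assumed normalization and layout).
import numpy as np

def cme_indices(rgb: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 array of reflectance/DN values; returns H x W x 3 (CIVE, MExG, ExGR)."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True) + 1e-8        # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)              # chromatic (normalized) coordinates

    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745    # Color Index of Vegetation Extraction
    mexg = 1.262 * g - 0.884 * r - 0.311 * b               # Modified Excess Green
    exg = 2.0 * g - r - b                                   # Excess Green
    exr = 1.4 * r - g                                       # Excess Red
    exgr = exg - exr                                        # Excess Green minus Excess Red

    return np.stack([cive, mexg, exgr], axis=-1)

def fuse_rgb_cme(rgb: np.ndarray) -> np.ndarray:
    """Concatenate the original bands with the CME indices into an H x W x 6 array."""
    return np.concatenate([rgb.astype(np.float64), cme_indices(rgb)], axis=-1)

if __name__ == "__main__":
    tile = np.random.randint(0, 256, size=(256, 256, 3))   # stand-in for a UAV image tile
    fused = fuse_rgb_cme(tile)
    print(fused.shape)                                      # (256, 256, 6)

The resulting six-channel array would then be split into training tiles for the segmentation network; according to the abstract, this RGB + CME fusion yielded the highest Kappa coefficient (0.9420) among the tested inputs.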
Keywords:
This article is indexed in ScienceDirect and other databases.