Article

Image Segmentation-Based Oilseed Rape Row Detection for Infield Navigation of Agri-Robot

by Guoxu Li 1,2, Feixiang Le 1, Shuning Si 1,2, Longfei Cui 1,* and Xinyu Xue 1,*

1 Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
2 Graduate School of Chinese Academy of Agricultural Sciences, Beijing 100081, China
* Authors to whom correspondence should be addressed.
Agronomy 2024, 14(9), 1886; https://doi.org/10.3390/agronomy14091886
Submission received: 29 July 2024 / Revised: 22 August 2024 / Accepted: 22 August 2024 / Published: 23 August 2024
(This article belongs to the Collection Advances of Agricultural Robotics in Sustainable Agriculture 4.0)

Abstract

The segmentation and extraction of oilseed rape crop rows are crucial steps in visual navigation line extraction. In field environments, agricultural autonomous navigation robots face challenges in path recognition due to complex crop backgrounds and varying light intensities, which lead to poor segmentation and slow detection of navigation lines in oilseed rape crops. This paper therefore proposes VC-UNet, a lightweight semantic segmentation model that enhances U-Net: VGG16 replaces the original backbone feature extraction network, a Convolutional Block Attention Module (CBAM) is integrated at the upsampling stage to sharpen focus on segmentation targets, and channel pruning of the network's convolutional layers is applied to compress and accelerate the model. Trapezoidal crop-row ROI regions are then delineated using an end-to-end vertical projection method with serialized region thresholds, and the centerline of each oilseed rape crop row is fitted using the least squares method. Experimental results show an average accuracy of 94.11% for the model and an image processing speed of 24.47 fps. After transfer learning on soybean and maize crop rows, the average accuracy reaches 91.57%, indicating strong model robustness. Navigation line extraction yields an average yaw angle deviation of 3.76° and an average pixel offset of 6.13 pixels, with a single-image processing time of 0.009 s, ensuring real-time detection of navigation lines. This study provides upper-level technical support for the deployment of agricultural robots in field trials.
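The post-segmentation steps of the pipeline (vertical projection to locate the crop-row region, then least-squares centerline fitting) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the `col_threshold` parameter, and the per-row centroid collection are hypothetical simplifications of the trapezoidal-ROI and serialized-threshold procedure described in the abstract.

```python
import numpy as np

def detect_row_centerline(mask, col_threshold=0.15):
    """Locate a crop-row region via column-wise vertical projection on a
    binary segmentation mask (e.g. a VC-UNet output), then fit the row
    centerline x = a*y + b by least squares.

    `col_threshold` (an assumed parameter) is the fraction of the peak
    column sum below which columns are treated as background.
    """
    h, w = mask.shape
    # Vertical projection: count crop pixels in each image column.
    col_sums = mask.sum(axis=0)
    # Columns whose projection exceeds the threshold form the row ROI.
    active = np.where(col_sums >= col_threshold * col_sums.max())[0]
    left, right = active.min(), active.max()
    # Collect the horizontal centroid of crop pixels in every image row
    # inside the ROI, then fit a line through the centroids.
    ys, xs = [], []
    for y in range(h):
        row_cols = np.where(mask[y, left:right + 1] > 0)[0]
        if row_cols.size:
            ys.append(y)
            xs.append(row_cols.mean() + left)
    a, b = np.polyfit(ys, xs, 1)  # least-squares fit of x = a*y + b
    return a, b
```

The fitted slope `a` relates directly to the yaw angle deviation reported in the abstract, and the line's horizontal position to the pixel offset.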
Keywords: visual navigation; agricultural robot; path recognition; oilseed rape; image segmentation

Share and Cite

MDPI and ACS Style

Li, G.; Le, F.; Si, S.; Cui, L.; Xue, X. Image Segmentation-Based Oilseed Rape Row Detection for Infield Navigation of Agri-Robot. Agronomy 2024, 14, 1886. https://doi.org/10.3390/agronomy14091886


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
