Article

Vision-Based Localization Method for Picking Points in Tea-Harvesting Robots

Key Laboratory of Agricultural Sensors, Ministry of Agriculture and Rural Affairs, School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei 230036, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(21), 6777; https://doi.org/10.3390/s24216777
Submission received: 24 September 2024 / Revised: 12 October 2024 / Accepted: 17 October 2024 / Published: 22 October 2024

Abstract

To address the problem of accurately recognizing and locating picking points for tea-picking robots in unstructured environments, a visual positioning method based on RGB-D information fusion is proposed. First, an improved T-YOLOv8n model is introduced, which improves detection and segmentation performance across multi-scale scenes through optimizations to the network architecture and loss function. On the far-view test set, the detection accuracy of tea buds reached 80.8%; on the near-view test set, the mAP0.5 values for tea stem detection in bounding boxes and masks reached 93.6% and 93.7%, respectively, improvements of 9.1% and 14.1% over the baseline model. Second, a layered near/far-view visual servoing strategy was designed, integrating the RealSense depth sensor with robotic arm cooperation. This strategy identifies the region of interest (ROI) of the tea bud in the far view and fuses the stem mask information with depth data to calculate the three-dimensional coordinates of the picking point. Experiments show that the method achieved a picking-point localization success rate of 86.4%, with a mean depth measurement error of 1.43 mm. The proposed method improves the accuracy of picking-point recognition and reduces depth information fluctuations, providing technical support for the intelligent and rapid picking of premium tea.
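The final step described in the abstract, fusing the segmented stem mask with an aligned depth frame and converting a picking pixel into three-dimensional camera coordinates, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the centroid-plus-median-depth heuristic, and the placeholder RealSense-like intrinsics are introduced here for clarity only; the paper's actual fusion and filtering details are not given in the abstract.

import numpy as np

def picking_point_from_mask(depth_m, stem_mask, fx, fy, cx, cy):
    """Back-project the stem-mask centroid to a 3-D point in the camera frame.

    depth_m:   HxW depth image in metres, pixel-aligned to the RGB frame.
    stem_mask: HxW boolean mask of the detected tea stem.
    fx, fy, cx, cy: pinhole intrinsics of the aligned depth/colour camera.
    """
    ys, xs = np.nonzero(stem_mask)
    if xs.size == 0:
        return None  # no stem segmented in this frame

    # Use the mask centroid as the candidate picking pixel.
    u, v = int(xs.mean()), int(ys.mean())

    # Median of the valid depths inside the mask suppresses sensor noise
    # and zero-depth holes that are common around thin stems.
    d = depth_m[ys, xs]
    d = d[d > 0]
    if d.size == 0:
        return None
    z = float(np.median(d))

    # Standard pinhole back-projection to camera coordinates (metres).
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example call with placeholder intrinsics (illustrative values only):
# point = picking_point_from_mask(depth, mask, fx=615.0, fy=615.0, cx=320.0, cy=240.0)

Averaging or taking the median over the masked depth pixels, rather than reading a single pixel, is one plausible way to reduce the depth fluctuations the abstract refers to; the resulting point would then be transformed into the robot-arm base frame using the hand-eye calibration.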
Keywords: deep learning; RGB-D; tea; picking point localization

Share and Cite

MDPI and ACS Style

Yang, J.; Li, X.; Wang, X.; Fu, L.; Li, S. Vision-Based Localization Method for Picking Points in Tea-Harvesting Robots. Sensors 2024, 24, 6777. https://doi.org/10.3390/s24216777


