Tree Trunk Recognition in Orchard Autonomous Operations under Different Light Conditions Using a Thermal Camera and Faster R-CNN
Abstract
1. Introduction
2. Materials and Methods
2.1. Calibration of Thermal Camera
2.2. Field Data Collection
2.3. Data Preparation
2.3.1. Image Frames from Videos
2.3.2. Labeling
2.3.3. Data Augmentation
2.3.4. Data Splitting
2.4. Training Model Development
2.4.1. Faster R-CNN
2.4.2. Loss Function
2.5. Training Platform and Validation
2.6. Model Testing
3. Results
3.1. Calibration Performance of Thermal Cameras in Different Lighting
3.2. Model Training and Validation
3.3. Model Testing
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Date | Time | Light Condition |
|---|---|---|
| 24 August 2021 | 19:00–20:00 | No light |
| 26 August 2021 | 13:00–14:00 | Strong light |
| 6 September 2021 | 17:00–18:00 | Low light |
| Iterations | 10,000 | 20,000 | 30,000 | 40,000 |
|---|---|---|---|---|
| Training time | 3.05 h | 6.02 h | 8.98 h | 11.97 h |
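The tabulated training times grow almost linearly with iteration count, at roughly 0.30 h per 1000 iterations. A minimal sanity check of that rate, using only the values in the table above:

```python
# Reported training time (hours) at each iteration count.
training_hours = {10_000: 3.05, 20_000: 6.02, 30_000: 8.98, 40_000: 11.97}

for iterations, hours in training_hours.items():
    rate = hours / (iterations / 1_000)  # hours per 1000 iterations
    print(f"{iterations:>6} iterations: {rate:.3f} h per 1000 iterations")
# The rate stays near 0.30 h throughout, i.e., training time scales linearly.
```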
| Object Proposals | FPS | Detection Time (Images, ms) | Detection Time (Videos, ms) |
|---|---|---|---|
| 100 | 13.9 | 72 | 72 |
| 200 | 12.3 | 81 | 81 |
| 300 | 11.1 | 83 | 90 |
| 400 | 10.6 | 93 | 95 |
| 500 | 9.9 | 95 | 101 |
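In Faster R-CNN, the number of region proposals the RPN passes to the detection head is a test-time setting that trades detection time against recall, which is the parameter swept in the table above. The sketch below is not the paper's implementation; it uses torchvision's ResNet-50-FPN Faster R-CNN as a stand-in, and the `rpn_post_nms_top_n_test` argument, input size, and timing loop are assumptions for illustration:

```python
import time

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Sweep the number of proposals kept after NMS at test time and measure
# per-image latency, mirroring the proposals-vs-detection-time table above.
for n_proposals in (100, 200, 300, 400, 500):
    model = fasterrcnn_resnet50_fpn(
        weights="DEFAULT",
        rpn_post_nms_top_n_test=n_proposals,  # proposals fed to the box head
    )
    model.eval()

    frame = [torch.rand(3, 480, 640)]  # placeholder for one thermal frame
    with torch.no_grad():
        model(frame)  # warm-up pass
        start = time.perf_counter()
        model(frame)
        elapsed_ms = (time.perf_counter() - start) * 1_000
    print(f"{n_proposals} proposals: {elapsed_ms:.0f} ms per image")
```

As in the table, latency should rise with the proposal count, since each extra proposal adds RoI pooling and classification work in the detection head.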