Detection of Residual Film on the Field Surface Based on Faster R-CNN Multiscale Feature Fusion
Abstract
1. Introduction
- (1) Construct a dataset of residual film images captured under different light intensities.
- (2) Improve the Faster R-CNN model by adding a feature pyramid network structure with an attention mechanism to the backbone feature extraction network, and by replacing the NMS algorithm with the Soft-NMS algorithm in the RPN structure.
- (3) Test and verify the improved model, and compare its performance with that of other object detection models in the same experimental environment.
- (4) Design and develop residual film recognition software based on the improved model.
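The Soft-NMS replacement in contribution (2) keeps overlapping proposals but decays their confidence instead of discarding them outright, which helps when residual film fragments overlap. The following is a minimal NumPy sketch of the Gaussian-penalty variant of Soft-NMS, not the paper's implementation; the `sigma` value is illustrative, while the 0.3 score threshold matches the Soft-NMS threshold reported in the parameter table.

```python
import numpy as np

def iou(box, boxes):
    # IoU of one box vs. many; boxes as [x1, y1, x2, y2]
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.3):
    # Gaussian Soft-NMS: decay scores of overlapping boxes instead of
    # removing them; a box is dropped only if its score falls below the
    # threshold after decay.
    boxes = boxes.astype(float).copy()
    scores = scores.astype(float).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        top = idxs[np.argmax(scores[idxs])]
        keep.append(top)
        idxs = idxs[idxs != top]
        if len(idxs) == 0:
            break
        ious = iou(boxes[top], boxes[idxs])
        scores[idxs] *= np.exp(-(ious ** 2) / sigma)   # Gaussian penalty
        idxs = idxs[scores[idxs] > score_thresh]       # prune decayed boxes
    return keep
```

With hard NMS at a typical 0.5 IoU threshold, a heavily overlapping second detection would be deleted; here it is merely down-weighted, which is the mechanism the paper credits with reducing missed detections.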
2. Materials and Methods
2.1. Dataset of Residual Film
2.2. The Detection Model of Residual Film
2.2.1. Convolutional Block Attention Module
2.2.2. Soft Non-Maximum Suppression
2.2.3. Residual Film Detection Model Based on Multiscale Feature Fusion
2.3. Experimental Environment and Evaluation Index
2.3.1. Experimental Environment
2.3.2. Evaluation Index
3. Results and Discussion
3.1. Selection of Feature Extraction Network
3.2. Ablation Experiment Results
3.3. Comparison with Other Object Detection Models
3.4. APP Development
4. Conclusions
- 1. Based on the Faster R-CNN model, FPN was used to enhance the extraction of residual film features at different scales, CBAM was used to focus on the residual film regions in the images, and the Soft-NMS algorithm was used in place of the NMS algorithm in the RPN structure to reduce the model's missed detection rate. The average precision (AP), F1-score, and average detection time of the model were 83.45%, 0.91, and 248.36 ms, respectively.
- 2. To investigate the effect of each improvement to Faster R-CNN on detection accuracy, four sets of ablation experiments were conducted on the homemade residual film image dataset. The results showed that the MFFM Faster R-CNN model achieved a higher AP and F1-score than the original Faster R-CNN, as well as a shorter average detection time.
- 3. To evaluate the detection performance of the proposed model, comparison tests were conducted against SSD and YOLOv5. The results showed that the AP of MFFM Faster R-CNN increased by 8.64 and 17.02 percentage points compared with SSD and YOLOv5, respectively, so it identifies residual film more effectively. However, its average detection time was longer than that of both SSD and YOLOv5.
- 4. The MFFM Faster R-CNN model proposed in this study fuses multiscale residual film features, which improves detection accuracy but does not reduce detection time. Subsequent research will continue to optimize the detection model, focusing on reducing detection time while maintaining the improved accuracy. Embedded development will also be carried out to deploy the detection model on an intelligent residual film picking device, where it will control the picking component to pick up the residual film.
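The multiscale fusion credited in conclusion 1 is FPN's top-down pathway with lateral connections: each backbone level is projected by a 1×1 convolution and summed with the upsampled coarser level. The sketch below illustrates only this fusion logic in NumPy; the feature-map shapes and 1×1 weights are invented for the example and are not the paper's configuration.

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour 2x upsampling of a (C, H, W) feature map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fpn_top_down(features, lateral_weights):
    # features: backbone maps ordered fine -> coarse, e.g. [C3, C4, C5]
    # lateral_weights: per-level 1x1 conv weights of shape (out_c, in_c)
    laterals = [np.einsum('oc,chw->ohw', w, f)
                for w, f in zip(lateral_weights, features)]
    outs = [laterals[-1]]                     # start from the coarsest level
    for lat in reversed(laterals[:-1]):
        outs.append(lat + upsample2x(outs[-1]))  # lateral + upsampled sum
    return outs[::-1]                         # fine -> coarse: [P3, P4, P5]
```

Every output level then carries the same channel count, so a single detection head can slide over all scales — the property that lets small residual film fragments be detected on the finer pyramid levels.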
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
| Parameter | Base Learning Rate | Momentum | Iterations | Weight Decay | Soft-NMS Threshold |
| --- | --- | --- | --- | --- | --- |
| Value | 0.001 | 0.937 | 500 | 0.0005 | 0.3 |
| Backbone | Precision (%) | Recall (%) | AP (%) | F1-score | Ta (ms) |
| --- | --- | --- | --- | --- | --- |
| VGG16 | 68.12 | 67.14 | 65.10 | 0.68 | 252.48 |
| ResNet50 | 83.33 | 67.86 | 65.62 | 0.75 | 261.54 |
| ResNet101 | 86.49 | 68.57 | 69.84 | 0.76 | 326.82 |
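The F1-scores in the table above are the harmonic mean of precision and recall; a small helper (illustrative, not from the paper) reproduces the reported values from the Precision and Recall columns.

```python
def f1_score(precision_pct, recall_pct):
    # F1 = 2PR / (P + R), with precision and recall given in percent
    p, r = precision_pct / 100.0, recall_pct / 100.0
    return 2 * p * r / (p + r)
```

For example, VGG16's 68.12% precision and 67.14% recall give F1 ≈ 0.68, matching the table.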
| Model | NMS | Soft-NMS | FPN | CBAM | AP (%) | F1-score | Ta (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Model-Ⅰ | √ | | | | 65.62 | 0.75 | 261.54 |
| Model-Ⅱ | √ | | √ | | 74.42 | 0.81 | 219.68 |
| Model-Ⅲ | √ | | √ | √ | 78.50 | 0.82 | 237.26 |
| Model-Ⅳ (Our model) | | √ | √ | √ | 83.45 | 0.91 | 248.36 |
| Model | AP (%) | F1-score | Ta (ms) |
| --- | --- | --- | --- |
| YOLOv5 | 66.43 | 0.68 | 192.75 |
| SSD | 74.81 | 0.73 | 176.48 |
| MFFM Faster R-CNN | 83.45 | 0.91 | 248.36 |
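The AP gains quoted in the conclusions are plain differences of the AP column above, in percentage points; a trivial check (an illustrative helper, not part of the paper's code):

```python
def ap_gain(ap_ours, ap_baseline):
    # absolute AP improvement in percentage points
    return round(ap_ours - ap_baseline, 2)
```

MFFM Faster R-CNN vs. SSD is 83.45 − 74.81 = 8.64 points, and vs. YOLOv5 it is 83.45 − 66.43 = 17.02 points.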
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhou, T.; Jiang, Y.; Wang, X.; Xie, J.; Wang, C.; Shi, Q.; Zhang, Y. Detection of Residual Film on the Field Surface Based on Faster R-CNN Multiscale Feature Fusion. Agriculture 2023, 13, 1158. https://doi.org/10.3390/agriculture13061158