Storage Tank Target Detection for Large-Scale Remote Sensing Images Based on YOLOv7-OT
Abstract
1. Introduction
- Scale diversity: The size of similar ground targets varies greatly; for example, a small industrial tank may be only a few meters in diameter, while a large tank can reach tens of meters.
- Perspective specificity: Compared with natural images, remote sensing images have a single imaging perspective, provide less usable information, and target orientation is arbitrary.
- Small target problem: Remote sensing images cover a large area, so targets occupy only a tiny fraction of the image, often just a few dozen pixels.
- High background complexity: Because remote sensing images cover a large area from a single perspective, the field of view contains abundant background information that strongly interferes with target detection.
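A common mitigation for the small-target and large-scene issues above is to slice the scene into fixed-size tiles before detection. The sketch below is illustrative only; the tile size, overlap, and helper name `tile_image` are assumptions, not the authors' pipeline:

```python
def tile_image(width, height, tile=512, overlap=64):
    """Return (x, y) top-left corners of overlapping tiles covering a scene.

    Overlapping tiles ensure that a tank straddling a tile boundary is
    fully contained in at least one tile.
    """
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Make sure the right and bottom edges of the scene are covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

Duplicate detections produced in the overlap regions are then merged, e.g., by non-maximum suppression across tile boundaries.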
2. Data and Model
2.1. Dataset
2.2. Model
3. Method
3.1. Pre-Detection Stage
3.2. Mid-Detection Stage
3.3. Post-Detection Stage
4. Results and Discussion
4.1. Model Evaluation Indicators
- True Positive (TP): The model predicts positive, and the actual situation is also positive, indicating that the algorithm prediction result is correct.
- False Positive (FP): The model predicts positive, and the actual situation is negative, indicating that the algorithm prediction result is incorrect.
- True Negative (TN): The model predicts negative, and the actual situation is also negative, indicating that the algorithm prediction result is correct.
- False Negative (FN): The model predicts negative, and the actual situation is positive, indicating that the algorithm prediction result is incorrect.
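From these four counts the standard metrics follow; a minimal sketch (the helper name is ours, not from the paper). Note that in object detection TN is effectively undefined, since there are unboundedly many "background" boxes, so precision, recall, and F1 use only TP, FP, and FN:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from detection counts."""
    precision = tp / (tp + fp)          # fraction of predictions that are correct
    recall = tp / (tp + fn)             # fraction of true targets that are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```

mAP@0.5 additionally averages precision over recall levels at an IoU threshold of 0.5, so it cannot be computed from these counts alone.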
4.2. Experimental Results
4.3. Model Application
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Dataset | Resolution | Image Size/Pixel | Number of Images Containing Storage Tanks | Number of Storage Tanks |
---|---|---|---|---|
DIOR | 0.5 m–30 m | 800 × 800 | 1244 | 20,361 |
NWPU-RESISC45 | 0.2 m–30 m | 256 × 256 | 688 | 12,405 |
NWPU-VHR-10 | 0.5 m–2 m | (500–1100) × (500–1000) | 165 | 1698 |
TGRS-HRRSD | 0.15 m–1.2 m | (152–10569) × (152–10569) | 897 | 4406 |
Self-built dataset | 0.5 m–3 m | 512 × 512 | 574 | 7205 |
Total | —— | —— | 3568 | 46,075 |
| Confusion Matrix | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | TP | FN |
| Actual Negative | FP | TN |
Models | Precision | Recall | mAP@0.5 | F1-Score | FPS |
---|---|---|---|---|---|
YOLOv7-OT | 0.89 | 0.87 | 0.90 | 0.88 | 67.4 |
Faster RCNN | 0.83 | 0.74 | 0.84 | 0.78 | 6.3 |
YOLOv7 | 0.85 | 0.78 | 0.87 | 0.81 | 60.3 |
YOLOv10 | 0.89 | 0.89 | 0.90 | 0.89 | 63.4 |
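As a quick consistency check, each F1 score in the comparison above is reproducible from its precision and recall via F1 = 2PR/(P + R):

```python
# (precision, recall, reported F1) for each model in the comparison table
rows = {
    "YOLOv7-OT":   (0.89, 0.87, 0.88),
    "Faster RCNN": (0.83, 0.74, 0.78),
    "YOLOv7":      (0.85, 0.78, 0.81),
    "YOLOv10":     (0.89, 0.89, 0.89),
}
for name, (p, r, f1) in rows.items():
    # harmonic mean of precision and recall, rounded to two decimals
    assert round(2 * p * r / (p + r), 2) == f1, name
```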
Area | True Targets | YOLOv7-OT Precision | YOLOv7 Precision | YOLOv10 Precision | Faster RCNN Precision |
---|---|---|---|---|---|
Area I | 3,392 | 96.0% | 82.5% | 78.1% | 80.3%
Area II | 206 | 76.2% | 41.5% | 44.9% | 39.7%
Area III | 1,593 | 92.0% | 80.1% | 74.7% | 72.5%
Area IV | 3,330 | 96.9% | 75.8% | 73.5% | 76.6%
Area V | 692 | 92.2% | 70.3% | 70.1% | 73.5%
Area VI | 3,679 | 98.9% | 88.0% | 82.5% | 80.1%
Total | 12,892 | 95.9% | 79.8% | 76.2% | 76.6%
Area | True Grids | YOLOv7-OT Precision | YOLOv7 Precision | YOLOv10 Precision | Faster RCNN Precision |
---|---|---|---|---|---|
Area I | 443 | 88.0% | 46.4% | 94.7% | 50.2%
Area II | 27 | 64.2% | 11.4% | 32.9% | 10.7%
Area III | 150 | 90.9% | 38.9% | 73.2% | 35.7%
Area IV | 388 | 94.4% | 22.5% | 72.1% | 20.4%
Area V | 83 | 88.2% | 29.6% | 76.9% | 35.7%
Area VI | 412 | 96.7% | 53.0% | 85.1% | 61.9%
Total | 1,503 | 91.8% | 39.3% | 79.7% | 34.5%
Model | Pre-Detection | Mid-Detection | Post-Detection | Precision | mAP@0.5 |
---|---|---|---|---|---|
YOLOv7 | -- | -- | -- | 79.8% | 0.87 |
YOLOv7 + Pre-detection | √ | -- | -- | 82.9% | 0.87
YOLOv7 + Mid-detection | -- | √ | -- | 83.6% | 0.90
YOLOv7 + Post-detection | -- | -- | √ | 92.7% | 0.87
YOLOv7-OT | √ | √ | √ | 95.9% | 0.90 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wan, Y.; Zhan, Z.; Ren, P.; Fan, L.; Liu, Y.; Li, L.; Dai, Y. Storage Tank Target Detection for Large-Scale Remote Sensing Images Based on YOLOv7-OT. Remote Sens. 2024, 16, 4510. https://doi.org/10.3390/rs16234510