UltraHi-PrNet: An Ultra-High Precision Deep Learning Network for Dense Multi-Scale Target Detection in SAR Images
Abstract
1. Introduction
1. First, a novel scale transfer layer is constructed, which transfers target features of different scales from the bottom of the network to the top. It ensures that ultra-small-scale and small-scale target features in SAR images are extracted as effectively as large-scale target features, which avoids missed detections of multi-scale targets in SAR images.
2. Second, a novel scale expansion layer is constructed, which enlarges the receptive field of feature extraction so that the features of large-scale and ultra-large-scale targets can be extracted simultaneously. This solves the problem that large-scale and ultra-large-scale targets cannot be detected at the same time in SAR images.
3. Finally, an ultra-high-precision deep learning network is established on the ResNet101 backbone, the FPN architecture, and Faster R-CNN [31]. It detects ultra-small-scale, small-scale, large-scale, and ultra-large-scale targets simultaneously, whether the scale differences between targets are similar, large, or ultra-large. The experimental results show that the algorithm performs excellently in detecting targets at all of these scales.
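The exact construction of the scale transfer layer is not reproduced in this excerpt; a common realization of the idea is a depth-to-space (pixel-shuffle) rearrangement that trades channels for spatial resolution, so fine detail carried in deep feature maps reaches higher-resolution pyramid levels without interpolation. A minimal NumPy sketch, with the function name and shapes chosen purely for illustration:

```python
import numpy as np

def scale_transfer(feat, r=2):
    """Depth-to-space rearrangement: (C*r^2, H, W) -> (C, H*r, W*r).

    A common building block for passing fine spatial detail between
    pyramid levels without pooling or interpolating it away.
    """
    c2, h, w = feat.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    # split channels into (c, r, r), then interleave the r-blocks spatially
    x = feat.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

feat = np.arange(16, dtype=np.float32).reshape(4, 2, 2)  # 4 channels, 2x2
out = scale_transfer(feat, r=2)
print(out.shape)  # (1, 4, 4)
```

The channel ordering here matches the usual pixel-shuffle convention: channel index `i * r + j` supplies the spatial offset `(i, j)` within each upscaled block.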
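Likewise, the scale expansion layer is described only as enlarging the receptive field. Dilated (atrous) convolution, as in DeepLab, is the standard mechanism for this, and its effect is easy to quantify: stacking 3×3 convolutions with growing dilation rates widens the receptive field far faster than plain stacking. A small helper (illustrative, not the paper's code):

```python
def receptive_field(layers):
    """Receptive field of a stack of convolutions.

    layers: list of (kernel_size, stride, dilation) tuples.
    Each layer adds (k - 1) * dilation * jump to the field, where
    jump is the product of the strides of all preceding layers.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# three 3x3 convs, stride 1: plain vs. dilation rates 1, 2, 4
plain = receptive_field([(3, 1, 1)] * 3)
dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])
print(plain, dilated)  # 7 15
```

With the same parameter count, the dilated stack more than doubles the plain stack's receptive field, which is what lets one branch cover both large-scale and ultra-large-scale targets.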
2. Proposed Method
2.1. Ideas of the Method and Overall Structure
2.1.1. Ideas of the Method
2.1.2. Overall Structure
1. SAR image preprocessing: The size of the SAR image determines whether preprocessing is needed. In a large SAR image, each target occupies only a small proportion of the whole image, so the large image is first segmented into smaller sub-images, and target detection is then carried out on the sub-images.
2. Feature extraction network: The preprocessed images are first fed into the backbone network for feature extraction; the backbone consists of three parts: ResNet101, the scale transfer layer, and the scale expansion layer. The extracted features are then fused by a feature pyramid network (FPN), and the fused features are finally fed into the region proposal network (RPN).
3. Region proposal network: Candidate regions for multi-scale targets in the SAR image are screened.
4. Detection network: The final multi-scale target detection is performed by the detection head of the Faster R-CNN, which outputs confidence scores and bounding boxes.
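Step 1's segmentation of large SAR images can be sketched as a sliding window with overlap, so a target cut by one tile border still appears whole in a neighboring tile; keeping each tile's offset lets detections be mapped back to large-image coordinates. The tile size and overlap below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def tile_image(img, tile=512, overlap=64):
    """Cut a large image into overlapping tiles.

    Returns a list of (y, x, patch) so detections in a patch can be
    shifted by (y, x) back into large-image coordinates.
    """
    step = tile - overlap
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = img[y:y + tile, x:x + tile]  # edge patches may be smaller
            tiles.append((y, x, patch))
    return tiles

sar = np.zeros((1200, 1500), dtype=np.float32)
tiles = tile_image(sar)
print(len(tiles))  # 12 tiles (3 rows x 4 columns)
```

After per-tile detection, boxes from the overlap regions would typically be merged with non-maximum suppression to remove duplicates.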
2.2. Network Architecture
2.2.1. Scale Transfer Layer
2.2.2. Scale Expansion Layer
2.2.3. UltraHi-PrNet
2.2.4. Region Proposal Network
2.2.5. Detection Network
2.3. Loss Function
3. Experiments and Results
3.1. Settings
3.2. Dataset
3.3. Evaluation Metric
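The tables in this section report recall, precision, AP, and mAP. As a reference point, a PASCAL-VOC-style AP computation (a common convention; the paper's exact evaluation protocol is not reproduced in this excerpt) can be sketched as:

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """PASCAL-VOC-style AP: area under the monotone precision-recall envelope.

    scores: confidence of each detection; is_tp: 1 if the detection matched
    a ground-truth box, else 0; num_gt: number of ground-truth targets.
    """
    order = np.argsort(scores)[::-1]              # rank detections by confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / num_gt
    precision = cum_tp / np.arange(1, len(tp) + 1)
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):           # precision envelope
        p[i] = max(p[i], p[i + 1])
    steps = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[steps + 1] - r[steps]) * p[steps + 1]))

# two ground-truth targets, both found before the first false positive
ap = average_precision(scores=[0.9, 0.8, 0.7], is_tp=[1, 1, 0], num_gt=2)
print(ap)  # 1.0
```

mAP is then the mean of the per-class AP values (here, ship and airport).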
3.4. Evaluation of UltraHi-PrNet
3.4.1. Effect of Scale Transfer Layer
3.4.2. Effect of Scale Expansion Layer
3.4.3. Effect of UltraHi-PrNet
4. Discussion
4.1. Comparison with Other Algorithms
4.2. Target Detection in Large-Scale SAR Images
Preprocessing
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Li, Z.; Li, S.; Liu, Z.; Yang, H.; Wu, J.; Yang, J. Bistatic Forward-Looking SAR MP-DPCA Method for Space–Time Extension Clutter Suppression. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6565–6579.
- Li, Z.; Zhang, X.; Yang, Q.; Xiao, Y.; An, H.; Yang, H.; Wu, J.; Yang, J. Hybrid SAR-ISAR Image Formation via Joint FrFT-WVD Processing for BFSAR Ship Target High-Resolution Imaging. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
- Novak, L.M.; Owirka, G.J.; Netishen, C.M. Performance of a high-resolution polarimetric SAR automatic target recognition system. Linc. Lab. J. 1993, 6, 11–24.
- Morgan, D.A. Deep convolutional neural networks for ATR from SAR imagery. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XXII; SPIE: Bellingham, WA, USA, 2015; Volume 9475, pp. 116–128.
- Ao, W.; Xu, F.; Li, Y.; Wang, H. Detection and discrimination of ship targets in complex background from spaceborne ALOS-2 SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 536–550.
- Robey, F.C.; Fuhrmann, D.R.; Kelly, E.J.; Nitzberg, R. A CFAR adaptive matched filter detector. IEEE Trans. Aerosp. Electron. Syst. 1992, 28, 208–216.
- Aytekin, Ö.; Zöngür, U.; Halici, U. Texture-based airport runway detection. IEEE Geosci. Remote Sens. Lett. 2012, 10, 471–475.
- Freund, Y.; Schapire, R.; Abe, N. A short introduction to boosting. J. Jpn. Soc. Artif. Intell. 1999, 14, 1612.
- Sun, Y.; Liu, Z.; Todorovic, S.; Li, J. Adaptive boosting for SAR automatic target recognition. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 112–125.
- Tang, G.; Xiao, Z.; Liu, Q.; Liu, H. A novel airport detection method via line segment classification and texture classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2408–2412.
- Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: Raleigh, NC, USA, 2004.
- Zhao, D.; Ma, Y.; Jiang, Z.; Shi, Z. Multiresolution airport detection via hierarchical reinforcement learning saliency model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2855–2866.
- Hou, B.; Chen, X.; Jiao, L. Multilayer CFAR detection of ship targets in very high resolution SAR images. IEEE Geosci. Remote Sens. Lett. 2014, 12, 811–815.
- He, J.; Wang, Y.; Liu, H.; Wang, N.; Wang, J. A novel automatic PolSAR ship detection method based on superpixel-level local information measurement. IEEE Geosci. Remote Sens. Lett. 2018, 15, 384–388.
- Wang, Y.; Liu, H. PolSAR ship detection based on superpixel-level scattering mechanism distribution features. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1780–1784.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Brunot, A.; Cortes, C.; Drucker, H.; Boser, B.; Henderson, D.; Guyon, I.; Sackinger, E.; et al. LeNet-5, Convolutional Neural Networks. 2015. Available online: http://yann.lecun.com/exdb/lenet (accessed on 14 September 2022).
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 142–158.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Liu, N.; Cao, Z.; Cui, Z.; Pi, Y.; Dang, S. Multi-scale proposal generation for ship detection in SAR images. Remote Sens. 2019, 11, 526.
- Dai, H.; Du, L.; Wang, Y.; Wang, Z. A modified CFAR algorithm based on object proposals for ship target detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1925–1929.
- Li, T.; Liu, Z.; Xie, R.; Ran, L. An improved superpixel-level CFAR detection method for ship targets in high-resolution SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 184–194.
- Zhai, L.; Li, Y.; Su, Y. Inshore ship detection via saliency and context information in high-resolution SAR images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1870–1874.
- Hong, F.; Lu, C.H.; Liu, C.; Liu, R.R.; Wei, J. A traffic surveillance multi-scale vehicle detection object method base on encoder-decoder. IEEE Access 2020, 8, 47664–47674.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Wang, J.; Lu, C.; Jiang, W. Simultaneous ship detection and orientation estimation in SAR images based on attention module and angle regression. Sensors 2018, 18, 2851.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Volume 28.
- Jiao, J.; Zhang, Y.; Sun, H.; Yang, X.; Gao, X.; Hong, W.; Fu, K.; Sun, X. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection. IEEE Access 2018, 6, 20881–20892.
- Qingyun, F.; Lin, Z.; Zhaokui, W. An efficient feature pyramid network for object detection in remote sensing imagery. IEEE Access 2020, 8, 93058–93068.
- Nie, X.; Duan, M.; Ding, H.; Hu, B.; Wong, E.K. Attention mask R-CNN for ship detection and segmentation from remote sensing images. IEEE Access 2020, 8, 9325–9334.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
- Cui, Z.; Tang, C.; Cao, Z.; Dang, S. SAR Unlabeled Target Recognition Based on Updating CNN With Assistant Decision. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1585–1589.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
- Cui, Z.; Li, Q.; Cao, Z.; Liu, N. Dense attention pyramid networks for multi-scale ship detection in SAR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8983–8997.
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Li, J.; Qu, C.; Shao, J. Ship detection in SAR images based on an improved faster R-CNN. In Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 13–14 November 2017; pp. 1–6.
- Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. A SAR dataset of ship detection for deep learning under complex backgrounds. Remote Sens. 2019, 11, 765.
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8759–8768.
- Cui, Z.; Wang, X.; Liu, N.; Cao, Z.; Yang, J. Ship detection in large-scale SAR images via spatial shuffle-group enhance attention. IEEE Trans. Geosci. Remote Sens. 2020, 59, 379–391.
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Zhou, X.; Wang, D.; Krähenbühl, P. Objects as points. arXiv 2019, arXiv:1904.07850.
| Dataset | Sensor | Resolution | Polarization |
|---|---|---|---|
| SSDD | Sentinel-1, RadarSat-2 | 1 m–10 m | Full |
| AIR-SARShip-1.0 | Gaofen-3 | 1 m, 3 m | Single |
| SAR-ship-dataset | Gaofen-3, Sentinel-1 | , , , etc. | Dual, Full |
| Gaofen-3 Airport Dataset | Gaofen-3 | 3 m, 5 m, 8 m, 10 m, etc. | Full |
| Methods | Input Size | Class | Recall | Precision | AP | mAP |
|---|---|---|---|---|---|---|
| The original method | 600 × 800 | ship | 91.1% | 86.2% | 89.5% | 88.4% |
|  |  | airport | 90.5% | 85.3% | 87.3% |  |
| The proposed method | 600 × 800 | ship | 95.8% | 89.5% | 93.1% | 92.9% |
|  |  | airport | 95.2% | 89.6% | 92.7% |  |
| Methods | Input Size | Class | Recall | Precision | AP | mAP |
|---|---|---|---|---|---|---|
| The original method | 600 × 800 | ship | 91.1% | 86.2% | 89.5% | 88.4% |
|  |  | airport | 90.5% | 85.3% | 87.3% |  |
| The proposed method | 600 × 800 | ship | 95.4% | 90.2% | 92.8% | 92.4% |
|  |  | airport | 95.0% | 88.6% | 92.0% |  |
| Methods | Input Size | Class | Recall | Precision | AP | mAP |
|---|---|---|---|---|---|---|
| The original method | 600 × 800 | ship | 91.1% | 86.2% | 89.5% | 88.4% |
|  |  | airport | 90.5% | 85.3% | 87.3% |  |
| The proposed method | 600 × 800 | ship | 99.3% | 94.8% | 97.2% | 96.9% |
|  |  | airport | 99.1% | 93.7% | 96.6% |  |
| Methods | Input Size | Class | Recall | Precision | AP | mAP |
|---|---|---|---|---|---|---|
| YOLOv4 | 600 × 800 | ship | 88.9% | 93.3% | 88.7% | 88.2% |
|  |  | airport | 87.9% | 92.6% | 87.7% |  |
| Improved Faster R-CNN | 600 × 800 | ship | 90.4% | 87.0% | 89.7% | 88.8% |
|  |  | airport | 87.2% | 83.1% | 87.9% |  |
| SSD-512 | 600 × 800 | ship | 89.8% | 94.5% | 89.6% | 89.4% |
|  |  | airport | 88.1% | 93.1% | 89.2% |  |
| The proposed method | 600 × 800 | ship | 99.3% | 94.8% | 97.2% | 96.9% |
|  |  | airport | 99.1% | 93.7% | 96.6% |  |
| Methods | Input Size | Class | Recall | Precision | AP | mAP |
|---|---|---|---|---|---|---|
| DAPN | 600 × 800 | ship | 95.6% | 90.1% | 90.5% | 89.8% |
|  |  | airport | 94.5% | 88.9% | 89.1% |  |
| SSE-CenterNet | 600 × 800 | ship | 84.2% | 97.1% | 95.2% | 94.3% |
|  |  | airport | 82.6% | 94.2% | 93.4% |  |
| The proposed method | 600 × 800 | ship | 99.3% | 94.8% | 97.2% | 96.9% |
|  |  | airport | 99.1% | 93.7% | 96.6% |  |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhou, Z.; Cui, Z.; Zang, Z.; Meng, X.; Cao, Z.; Yang, J. UltraHi-PrNet: An Ultra-High Precision Deep Learning Network for Dense Multi-Scale Target Detection in SAR Images. Remote Sens. 2022, 14, 5596. https://doi.org/10.3390/rs14215596