An Anchor-Free Network for Increasing Attention to Small Objects in High Resolution Remote Sensing Images
Abstract
1. Introduction
- We propose an upsampling method based on depthwise separable deconvolution. Unlike interpolation, this way of expanding the feature map diversifies the pixel values around each central pixel as the resolution grows, so the upsampled feature map stays closer to the original one.
- We replace the conventional convolution in the Contextual Transformer (CoT) module with dilated convolution, which makes better use of local context to extract the features of small objects in remote sensing images, and we introduce EcaNet to improve the network's feature extraction ability while keeping the parameter count of the original model essentially unchanged.
- We studied the impact of the network input size on model performance and optimized the input size accordingly. We also adopted circular smooth labels (CSL) to achieve more accurate rotated object detection in remote sensing images.
- Comparisons against related horizontal and rotated object detection algorithms show that the proposed method achieves the best performance in remote sensing image object detection.
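As a minimal sketch of the first contribution, the upsampling step can be built from a depthwise transposed convolution (one filter per channel, stride 2 doubles the resolution) followed by a pointwise 1×1 convolution that mixes channels. The module name, kernel size, and channel counts below are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class DSDeconvUpsample(nn.Module):
    """Illustrative depthwise-separable deconvolution upsampling:
    a per-channel (groups=in_ch) transposed conv doubles H and W,
    then a 1x1 pointwise conv recombines channel information."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # kernel 4, stride 2, padding 1 gives exactly 2x spatial upsampling:
        # out = (in - 1) * 2 - 2 * 1 + 4 = 2 * in
        self.depthwise = nn.ConvTranspose2d(
            in_ch, in_ch, kernel_size=4, stride=2, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 20, 20)
y = DSDeconvUpsample(64, 32)(x)
print(tuple(y.shape))  # (1, 32, 40, 40)
```

Because the transposed-convolution weights are learned, neighbouring output pixels need not repeat the central value the way nearest-neighbour interpolation does, which is the "diversified pixel values" property the bullet describes.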
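The second contribution combines two lightweight changes. A sketch under our own assumptions (the block name and channel sizes are hypothetical, and this is a simplification of the full CoT attention): a 3×3 convolution with dilation 2 has the same weight count as a plain 3×3 but a 5×5 receptive field, and ECA-style channel attention adds only a tiny 1-D convolution, so the parameter budget is essentially preserved:

```python
import torch
import torch.nn as nn

class DilatedECABlock(nn.Module):
    """Illustrative block: dilated 3x3 conv (padding=dilation=2 keeps the
    spatial size) followed by ECA channel attention -- global average
    pooling, a 1-D conv across channels, and a sigmoid gate. The ECA
    kernel size k is fixed here; the original paper derives it from the
    channel count."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.eca = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        x = self.conv(x)
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3)).unsqueeze(1)             # (b, 1, c) pooled
        w = torch.sigmoid(self.eca(w)).view(b, c, 1, 1)  # per-channel gate
        return x * w                                     # reweighted features

y = DilatedECABlock(64)(torch.randn(1, 64, 32, 32))
print(tuple(y.shape))  # (1, 64, 32, 32)
```

Unlike SE-style attention, the ECA gate involves no channel dimensionality reduction, which is why it can be inserted without noticeably growing the model.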
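For the circular smooth label in the third contribution, the angle regression problem is recast as classification over angle bins, with a window function that wraps around so that 179° and 0° remain close neighbours. The bin count, window radius, and Gaussian window below are common choices, assumed here for illustration:

```python
import numpy as np

def circular_smooth_label(angle_bin, num_bins=180, radius=6):
    """Illustrative circular smooth label: a Gaussian window centred on
    the angle bin, computed over circular (wrap-around) bin distance and
    truncated outside the window radius."""
    bins = np.arange(num_bins)
    d = np.abs(bins - angle_bin)
    d = np.minimum(d, num_bins - d)          # circular distance
    label = np.exp(-(d ** 2) / (2.0 * radius ** 2))
    label[d > radius] = 0.0                  # truncate the window
    return label

lbl = circular_smooth_label(179)
print(lbl.argmax(), lbl[0] > 0.0)  # 179 True: bin 0 is a near neighbour
```

This tolerance to small angular errors, and the removal of the 0°/180° boundary discontinuity, is what makes CSL attractive for rotated boxes in remote sensing images.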
2. Selection of Baseline Model
1. Input
2. Backbone
3. Neck
4. Prediction
3. Materials and Methods
3.1. New Upsampling Method
3.2. New Feature Extraction Method
3.3. Introduction of EcaNet
3.4. Optimized Network Input Size
3.5. Rotationally Invariant Object Detection
4. Experiments, Results, and Discussion
4.1. Dataset and Environment
4.2. Evaluation Indicators
4.3. Experimental Results
4.3.1. Horizontal Object Detection
4.3.2. Rotationally Invariant Object Detection
4.3.3. Ablation Study
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Li, D.; Wang, M.; Jiang, J. China’s high-resolution optical remote sensing satellites and their mapping applications. Geo-Spat. Inf. Sci. 2021, 24, 85–94. [Google Scholar] [CrossRef]
- Li, L.J.; Wu, Y. Application of remote-sensing-image fusion to the monitoring of mining induced subsidence. J. China Univ. Min. Technol. 2008, 18, 531–536. [Google Scholar] [CrossRef]
- Ansith, S.; Bini, A.A. Land use classification of high resolution remote sensing images using an encoder based modified GAN architecture. Displays 2022, 74, 102229. [Google Scholar]
- Gong, M.; O’Donnell, R.; Miller, C.; Scott, M.; Simis, S.; Groom, S.; Tyler, A.; Hunter, P.; Spyrakos, E.; Merchant, C.; et al. Adaptive smoothing to identify spatial structure in global lake ecological processes using satellite remote sensing data. Spat. Stat. 2022, 50, 100615. [Google Scholar] [CrossRef]
- Dong, J.; Li, L.; Li, Y.; Yu, Q. Inter-comparisons of mean, trend and interannual variability of global terrestrial gross primary production retrieved from remote sensing approach. Sci. Total Environ. 2022, 822, 153343. [Google Scholar] [CrossRef] [PubMed]
- Wang, Y.; Bashir, S.M.A.; Khan, M.; Ullah, Q.; Wang, R.; Song, Y.; Guo, Z.; Niu, Y. Remote sensing image super-resolution and object detection: Benchmark and state of the art. Expert Syst. Appl. 2022, 197, 116793. [Google Scholar] [CrossRef]
- Zhao, Z.Q.; Zheng, P.; Xu, S.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
- Xu, C.; Wang, X.; Yang, Y. Attention-YOLO: YOLO Detection Algorithm That Introduces Attention Mechanism. Comput. Eng. Appl. 2019, 55, 13–23. [Google Scholar]
- Wang, Y.; Gao, L.; Hong, D.; Sha, J.; Liu, L.; Zhang, B.; Rong, X.; Zhang, Y. Mask DeepLab: End-to-end image segmentation for change detection in high-resolution remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102582. [Google Scholar] [CrossRef]
- Xuan, S.; Li, S.; Zhao, Z.; Zhou, Z.; Gu, Y. Rotation adaptive correlation filter for moving object tracking in satellite videos. Neurocomputing 2021, 438, 94–106. [Google Scholar] [CrossRef]
- Kumawat, A.; Panda, S. Feature detection and description in remote sensing images using a hybrid feature detector. Procedia Comput. Sci. 2018, 132, 277–287. [Google Scholar] [CrossRef]
- Liu, L.; Li, C.; Sun, X.; Zhao, J. Event alert and detection in smart cities using anomaly information from remote sensing earthquake data. Comput. Commun. 2020, 153, 397–405. [Google Scholar] [CrossRef]
- Qi, X.; Zhu, P.; Wang, Y.; Zhang, L.; Peng, J.; Wu, M.; Chen, J.; Zhao, X.; Zang, N.; Mathiopoulosd, P.T. MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding. ISPRS J. Photogramm. Remote Sens. 2020, 169, 337–350. [Google Scholar] [CrossRef]
- Li, K.; Wan, G.; Cheng, G.; Meng, L.; Han, J. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 159, 296–307. [Google Scholar] [CrossRef]
- Fu, K.; Chang, Z.; Zhang, Y.; Xu, G.; Sun, X. Rotation-aware and multi-scale convolutional neural network for object detection in remote sensing images. ISPRS J. Photogramm. Remote Sens. 2020, 161, 294–308. [Google Scholar] [CrossRef]
- Xiaolin, F.; Fan, H.; Ming, Y.; Tongxin, Z.; Ran, B.; Zenghui, Z.; Zhiyuan, G. Small object detection in remote sensing images based on super-resolution. Pattern Recognit. Lett. 2022, 153, 107–112. [Google Scholar] [CrossRef]
- Tong, K.; Wu, Y. Deep learning-based detection from the perspective of small or tiny objects: A survey. Image Vis. Comput. 2022, 123, 104471. [Google Scholar] [CrossRef]
- Neubeck, A.; Van Gool, L. Efficient non-maximum suppression. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; Volume 3, pp. 850–855. [Google Scholar]
- Law, H.; Deng, J. Cornernet: Detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 734–750. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9627–9636. [Google Scholar]
- Zhou, X.; Wang, D.; Krähenbühl, P. Objects as points. arXiv 2019, arXiv:1904.07850. [Google Scholar]
- Liu, J.; Yang, D.; Hu, F. Multiscale Object Detection in Remote Sensing Images Combined with Multi-Receptive-Field Features and Relation-Connected Attention. Remote Sens. 2022, 14, 427. [Google Scholar] [CrossRef]
- Wei, W.; Ru, Y.; Ye, Z. Improve the remote sensing image target detection of centernet. Comput. Eng. Appl. 2021, 57, 9. [Google Scholar]
- Zheng, Z.; Lei, L.; Sun, H.; Kuang, G. FAGNet: Multi-Scale Object Detection Method in Remote Sensing Images by Combining MAFPN and GVR. J. Comput.-Aided Des. Comput. Graph. 2021, 33, 883–894. [Google Scholar] [CrossRef]
- Shi, P.; Zhao, Z.; Fan, X.; Yan, X.; Yan, W.; Xin, Y. Remote Sensing Image Object Detection Based on Angle Classification. IEEE Access 2021, 9, 118696–118707. [Google Scholar] [CrossRef]
- Lim, J.S.; Astrid, M.; Yoon, H.J.; Lee, S.I. Small object detection using context and attention. In Proceedings of the 2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Jeju Island, Republic of Korea, 13–16 April 2021; pp. 181–186. [Google Scholar]
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. Yolox: Exceeding yolo series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Li, Y.; Yao, T.; Pan, Y.; Mei, T. Contextual transformer networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 1489–1500. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Wang, Q.; Wu, B.; Zhu, P.; Li, B.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 11531–11539. [Google Scholar]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 June 2017; pp. 7263–7271. [Google Scholar]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Qi, L.; Kuen, J.; Gu, J.; Lin, Z.; Wang, Y.; Chen, Y.; Li, Y.; Jia, J. Multi-scale aligned distillation for low-resolution detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14443–14453. [Google Scholar]
- Yang, X.; Yan, J. Arbitrary-oriented object detection with circular smooth label. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 677–694. [Google Scholar]
- Xia, G.S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3974–3983. [Google Scholar]
- Wang, C.; Bai, X.; Wang, S.; Zhou, P. Multiscale visual attention networks for object detection in VHR remote sensing images. IEEE Geosci. Remote Sens. Lett. 2018, 16, 310–314. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99. [Google Scholar] [CrossRef] [PubMed]
- Dai, J.; Li, Y.; He, K.; Sun, J. R-fcn: Object detection via region-based fully convolutional networks. arXiv 2016, arXiv:1605.06409. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Han, J.; Ding, J.; Xue, N.; Xia, G.S. Redet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
- Yang, X.; Liu, Q.; Yan, J.; Li, A.; Zhang, Z.; Yu, G. R3det: Refined single-stage detector with feature refinement for rotating object. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 22 February–1 March 2021; Volume 35. [Google Scholar]
- Zhao, P.; Qu, Z.; Bu, Y.; Tan, W.; Guan, Q. Polardet: A fast, more precise detector for rotated target in aerial images. Int. J. Remote Sens. 2021, 42, 5831–5861. [Google Scholar] [CrossRef]
- Han, J.; Ding, J.; Li, J.; Xia, G.S. Align deep features for oriented object detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5602511. [Google Scholar] [CrossRef]
- Guo, Z.; Liu, C.; Zhang, X.; Jiao, J.; Ji, X.; Ye, Q. Beyond bounding-box: Convex-hull feature adaptation for oriented and densely packed object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8792–8801. [Google Scholar]
- Li, W.; Chen, Y.; Hu, K.; Zhu, J. Oriented reppoints for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1829–1838. [Google Scholar]
- Xie, X.; Cheng, G.; Wang, J.; Yao, X.; Han, J. Oriented R-CNN for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 3520–3529. [Google Scholar]
- Qing, Y.; Liu, W.; Feng, L.; Gao, W. Improved Yolo network for free-angle remote sensing target detection. Remote Sens. 2021, 13, 2171. [Google Scholar] [CrossRef]
- Zhang, Z.; Guo, W.; Zhu, S.; Yu, W. Toward arbitrary-oriented ship detection with rotated region proposal and discrimination networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1745–1749. [Google Scholar] [CrossRef]
- Liao, M.; Zhu, Z.; Shi, B.; Xia, G.S.; Bai, X. Rotation-sensitive regression for oriented scene text detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5909–5918. [Google Scholar]
- Wang, Y.; Zhang, Y.; Zhang, Y.; Zhao, L.; Sun, X.; Guo, Z. SARD: Towards scale-aware rotated object detection in aerial imagery. IEEE Access 2019, 7, 173855–173865. [Google Scholar] [CrossRef]
- Ding, J.; Xue, N.; Long, Y.; Xia, G.S.; Lu, Q. Learning RoI transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 2849–2858. [Google Scholar]
- Sun, P.; Zheng, Y.; Zhou, Z.; Xu, W.; Ren, Q. R4 Det: Refined single-stage detector with feature recursion and refinement for rotating object detection in aerial images. Image Vis. Comput. 2020, 103, 104036. [Google Scholar] [CrossRef]
| Parameters | Configuration |
|---|---|
| Operating System | Ubuntu 20.04.2 LTS |
| CPU | AMD Ryzen 5 3600 |
| GPU | GeForce RTX 3060 Ti |
| Language | Python |
| Platform | CUDA 11.1, cuDNN 8.0 |
| Framework | PyTorch 1.7.0, Torchvision 0.8.1 |
| mAP (per class) | YOLOX-s | YOLOX-m | YOLOX-l | YOLOX-x | DCE_YOLOX-s | DCE_YOLOX-m | DCE_YOLOX-l | DCE_YOLOX-x |
|---|---|---|---|---|---|---|---|---|
| small-vehicle | 0.5320 | 0.4412 | 0.6301 | 0.5458 | 0.7402 | 0.7319 | 0.7853 | 0.7518 |
| large-vehicle | 0.8312 | 0.8186 | 0.8655 | 0.8081 | 0.8732 | 0.8781 | 0.8903 | 0.8729 |
| plane | 0.8953 | 0.9042 | 0.9013 | 0.9005 | 0.9019 | 0.9049 | 0.9024 | 0.9041 |
| storage-tank | 0.7367 | 0.7883 | 0.7698 | 0.7852 | 0.8351 | 0.8715 | 0.8673 | 0.8686 |
| ship | 0.8442 | 0.8929 | 0.8597 | 0.8558 | 0.8964 | 0.9001 | 0.8865 | 0.8975 |
| harbor | 0.8103 | 0.8647 | 0.8418 | 0.8251 | 0.8645 | 0.8803 | 0.8799 | 0.8816 |
| ground-track-field | 0.7224 | 0.7864 | 0.7769 | 0.7747 | 0.7299 | 0.7808 | 0.7801 | 0.8063 |
| soccer-ball-field | 0.7234 | 0.7858 | 0.7621 | 0.7646 | 0.8112 | 0.8348 | 0.8125 | 0.8389 |
| tennis-court | 0.9043 | 0.9064 | 0.9048 | 0.9061 | 0.9064 | 0.9075 | 0.9076 | 0.9069 |
| swimming-pool | 0.7115 | 0.7722 | 0.7525 | 0.7199 | 0.7670 | 0.7425 | 0.8046 | 0.8030 |
| baseball-diamond | 0.7547 | 0.8034 | 0.7573 | 0.7593 | 0.7765 | 0.7902 | 0.8077 | 0.8308 |
| roundabout | 0.7100 | 0.7371 | 0.7594 | 0.7630 | 0.7203 | 0.7712 | 0.7761 | 0.8088 |
| basketball-court | 0.7697 | 0.8823 | 0.8570 | 0.8305 | 0.8741 | 0.8996 | 0.8897 | 0.8989 |
| bridge | 0.5795 | 0.6787 | 0.6380 | 0.6562 | 0.6518 | 0.7212 | 0.6942 | 0.7294 |
| helicopter | 0.7976 | 0.8490 | 0.8422 | 0.8316 | 0.7484 | 0.8236 | 0.8432 | 0.7850 |
| all classes | 0.7549 | 0.7941 | 0.7946 | 0.7818 | 0.8065 | 0.8292 | 0.8352 | 0.8390 |
| Model | mAP 0.5 | mAP 0.5:0.95 | Speed (ms) | FLOPs (G) |
|---|---|---|---|---|
| YOLOX-s | 75.49 | 51.71 | 14.30 | 26.67 |
| YOLOX-m | 79.41 | 56.30 | 26.26 | 73.55 |
| YOLOX-l | 79.46 | 56.83 | 39.98 | 155.37 |
| YOLOX-x | 78.18 | 56.45 | 68.82 | 281.59 |
| DCE_YOLOX-s | 80.65 | 57.01 | 27.08 | 68.27 |
| DCE_YOLOX-m | 82.92 | 59.71 | 54.40 | 188.28 |
| DCE_YOLOX-l | 83.52 | 61.30 | 91.74 | 397.76 |
| DCE_YOLOX-x | 83.90 | 62.35 | 155.82 | 720.86 |
| Method | Backbone | mAP |
|---|---|---|
| SSD300 | VGG16 | 10.9 |
| SSD512 | VGG16 | 20.8 |
| Faster R-CNN | ResNet101 | 60.5 |
| R-FCN | ResNet101 | 47.2 |
| YOLOv2 | Darknet19 | 21.4 |
| YOLOv3 | Darknet53 | 53.7 |
| YOLOv5s | CSPFocus | 71.0 |
| YOLOv5m | CSPFocus | 73.5 |
| YOLOv5l | CSPFocus | 74.4 |
| YOLOv5x | CSPFocus | 73.7 |
| DCE_YOLOX-x | CSPFocus | 83.9 |
| Method | Backbone | mAP | Memory |
|---|---|---|---|
| R3Det [47] | ResNet50 | 70.08 | 143 |
| ReDet | ResNet50 | 76.25 | 125 |
| PolarDet [48] | ResNet50 | 75.02 | 150 |
| S2ANet [49] | ResNet50 | 74.12 | 148 |
| CFA [50] | ResNet50 | 73.45 | 141 |
| Oriented Reppoints [51] | ResNet50 | 75.97 | 230.5 |
| Oriented R-CNN [52] | ResNet50 | 75.87 | 158 |
| RepVGG-YOLO [53] | RepVGG | 74.13 | - |
| DCE-YOLOX | CSPFocus | 76.41 | 68.5 |
| Method | Backbone | Image Size | mAP | FPS |
|---|---|---|---|---|
| R2PN [54] | VGG16 | - | 79.6 | - |
| RRD [55] | VGG16 | 384 × 384 | 84.3 | - |
| SARD [56] | ResNet101 | 800 × 800 | 85.4 | 1.5 |
| ROI Transformer [57] | ResNet101 | 512 × 800 | 86.2 | 5.9 |
| R4Det [58] | ResNet50 | 800 × 800 | 88.17 | 8.6 |
| RepVGG-YOLO | RepVGG | - | 91.54 | 22 |
| DCE-YOLOX | CSPFocus | 768 × 768 | 95.9 | 17.5 |
| DS-DConv | DCoTNet | EcaNet | O-N-I-S | CSL | Precision | Recall | mAP |
|---|---|---|---|---|---|---|---|
| √ | | | | | 0.74904 | 0.68846 | 0.72001 |
| √ | √ | | | | 0.74896 | 0.70741 | 0.73304 |
| √ | √ | √ | | | 0.75729 | 0.70814 | 0.73422 |
| √ | √ | √ | √ | | 0.76333 | 0.70947 | 0.76142 |
| √ | √ | √ | √ | √ | 0.77661 | 0.72814 | 0.76413 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhou, H.; Guo, W.; Zhao, Q. An Anchor-Free Network for Increasing Attention to Small Objects in High Resolution Remote Sensing Images. Appl. Sci. 2023, 13, 2073. https://doi.org/10.3390/app13042073