Selecting Post-Processing Schemes for Accurate Detection of Small Objects in Low-Resolution Wide-Area Aerial Imagery
Abstract
1. Introduction
2. Algorithms Adapted for Vehicle Detection
3. Post-Processing Schemes
4. Experiments and Results
4.1. Experimental Setup
4.2. Datasets
4.3. Classifications and Evaluation Metrics
- True positive (TP): a correct detection. When multiple detections intersect the same ground-truth object, or a single detection intersects multiple ground-truth objects, only one TP (the pairing with the largest overlap, if one exists) is counted.
- Splits (S): when multiple detections touch a single ground-truth object, the detection with the largest overlap is counted as the TP; every other touching detection is counted as a Split.
- Merges (M): when a single detection is associated with multiple ground-truth objects, every object other than the one counted as the TP is counted as a Merge.
- False negative (FN), or Miss: a detection failure, i.e., a ground-truth object that fails to intersect any detection.
- False positive (FP): an incorrect detection, i.e., a detection that fails to intersect any ground-truth object.
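The matching rules above can be sketched in code. The following is an illustrative reimplementation, not the authors' code: the function name, the overlap-map input format, and the insertion-order handling of ground-truth objects are assumptions.

```python
from collections import defaultdict

def count_detection_types(n_det, n_gt, overlaps):
    """Count TP, Splits, Merges, FN, and FP from detection/ground-truth overlaps.

    `overlaps` maps (det_idx, gt_idx) -> overlap area for every intersecting
    pair; indices absent from every pair have no intersection at all.
    """
    gt_hits = defaultdict(list)   # gt object -> [(area, det)] of touching detections
    det_hits = defaultdict(list)  # detection -> [(area, gt)] of touched gt objects
    for (d, g), area in overlaps.items():
        gt_hits[g].append((area, d))
        det_hits[d].append((area, g))

    fn = n_gt - len(gt_hits)      # gt objects intersecting no detection (Misses)
    fp = n_det - len(det_hits)    # detections intersecting no gt object

    tp = splits = merges = 0
    credited = set()              # detections already counted as a TP
    for g, hits in gt_hits.items():
        hits.sort(reverse=True)   # largest overlap first
        _, best_d = hits[0]
        if best_d in credited:
            merges += 1           # this gt shares its best detection with another gt
        else:
            tp += 1
            credited.add(best_d)
        splits += len(hits) - 1   # extra detections touching the same gt object
    return {"TP": tp, "S": splits, "M": merges, "FN": fn, "FP": fp}
```

For example, two detections touching the same ground-truth object yield one TP and one Split, while one detection covering two ground-truth objects yields one TP and one Merge.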
4.4. Results and Analysis
4.4.1. Six Vehicle Detection Algorithms Combined with our Three-Stage Post-Processing Scheme or Filtering by SI
4.4.2. Seven Post-Processing Schemes Combined with Ten Algorithms
4.4.3. Ten Algorithms Combined with the Best Three Post-Processing Schemes
4.4.4. Qualitative Performance Evaluation
4.4.5. FC-DenseNet: Two-Stage Machine Learning for Small Object Detection
4.4.6. Final Tests to Evaluate the Two-Stage Learning Approach with Post-Processing for Online VEDAI
4.4.7. Computational Efficiency
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Chen, G.; Wang, H.-T.; Chen, K.; Li, Z.-Y.; Song, Z.-D.; Liu, Y.-L.; Knoll, A. A survey of the four pillars for small object detection: Multiscale representation, contextual information, super-resolution, and region proposal. IEEE Trans. Syst. Man Cybern. Syst. 2020, 1–18. [Google Scholar] [CrossRef]
- Sivaraman, S.; Trivedi, M.M. Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1773–1795. [Google Scholar] [CrossRef] [Green Version]
- Gao, X.; Ram, S.; Rodríguez, J.J. A Performance Comparison of Automatic Detection Schemes in Wide-Area Aerial Imagery. In Proceedings of the 2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), Santa Fe, NM, USA, 6–8 March 2016; pp. 125–128. [Google Scholar]
- Gao, X. Automatic Detection, Segmentation, and Tracking of Vehicles in Wide-Area Aerial Imagery. Master’s Thesis, Department of Electrical and Computer Engineering, The University of Arizona, Tucson, AZ, USA, October 2016. [Google Scholar]
- Prokaj, J.; Medioni, G. Persistent Tracking for Wide-Area Aerial Surveillance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 1186–1193. [Google Scholar]
- Ram, S.; Rodríguez, J.J. Vehicle Detection in Aerial Images using Multi-scale Structure Enhancement and Symmetry. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3817–3821. [Google Scholar]
- Zhou, H.; Wei, L.; Creighton, D.; Nahavandi, S. Orientation aware vehicle detection in aerial images. Electron. Lett. 2017, 53, 1406–1408. [Google Scholar] [CrossRef]
- Xiang, X.-Z.; Zhai, M.-L.; Lv, N.; Saddik, A.E. Vehicle counting based on vehicle detection and tracking from aerial videos. Sensors 2018, 18, 2560. [Google Scholar] [CrossRef] [Green Version]
- Philip, R.C.; Ram, S.; Gao, X.; Rodríguez, J.J. A Comparison of Tracking Algorithm Performance for Objects in Wide Area Imagery. In Proceedings of the 2014 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), San Diego, CA, USA, 6–8 April 2014; pp. 109–112. [Google Scholar]
- Han, S.-K.; Yoo, J.-S.; Kwon, S.-C. Real-time vehicle detection method in bird-view unmanned aerial vehicle imagery. Sensors 2019, 19, 3958. [Google Scholar] [CrossRef] [Green Version]
- Gao, X. Vehicle detection in wide-area aerial imagery: Cross-association of detection schemes with post-processings. Int. J. Image Min. 2018, 3, 106–116. [Google Scholar] [CrossRef] [Green Version]
- Gao, X. A thresholding scheme of eliminating false detections on vehicles in wide-area aerial imagery. Int. J. Signal Image Syst. Eng. 2018, 11, 217–224. [Google Scholar] [CrossRef]
- Gao, X.; Ram, S.; Rodríguez, J.J. A post-processing scheme for the performance improvement of vehicle detection in wide-area aerial imagery. Signal Image Video Process. 2020, 14, 625–633. [Google Scholar] [CrossRef]
- Gao, X.; Szep, J.; Satam, P.; Hariri, S.; Ram, S.; Rodríguez, J.J. Spatio-temporal processing for automatic vehicle detection in wide-area aerial video. IEEE Access 2020, 8, 199562–199572. [Google Scholar] [CrossRef]
- Porter, R.; Fraser, A.M.; Hush, D. Wide-area motion imagery. IEEE Signal Process. Mag. 2010, 27, 56–65. [Google Scholar] [CrossRef]
- Tong, K.; Wu, Y.-Q.; Zhou, F. Recent advances in small object detection based on deep learning: A review. Image Vis. Comput. 2020, 97, 103910. [Google Scholar] [CrossRef]
- Schilling, H.; Bulatov, D.; Niessner, R.; Middelmann, W.; Soergel, U. Detection of vehicles in multisensor data via multibranch convolutional neural networks. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2018, 11, 4299–4316. [Google Scholar] [CrossRef]
- Shen, Q.; Liu, N.-Z.; Sun, H. Vehicle detection in aerial images based on lightweight deep convolutional network. IET Image Process. 2021, 15, 479–491. [Google Scholar] [CrossRef]
- Wu, X.; Li, W.; Dong, D.-F.; Tian, J.-J.; Tao, R.; Du, Q. Vehicle detection of multi-source remote sensing data using active fine-tuning network. ISPRS J. Photogramm. Remote Sens. 2020, 167, 39–53. [Google Scholar] [CrossRef]
- Gao, X. Performance evaluation of automatic object detection with post-processing schemes under enhanced measures in wide-area aerial imagery. Multimed. Tools Appl. 2020, 79, 30357–30386. [Google Scholar] [CrossRef]
- Zhu, L.; Chen, J.-X.; Hu, X.-W.; Fu, C.-W.; Xu, X.-M.; Qin, J.; Heng, P.-A. Aggregating attentional dilated features for salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 3358–3371. [Google Scholar] [CrossRef]
- Salem, A.; Ghamry, N.; Meffert, B. Daubechies Versus Biorthogonal Wavelets for Moving Object Detection in Traffic Monitoring Systems. Informatik-Berichte. 2009, pp. 1–15. Available online: https://edoc.hu-berlin.de/bitstream/handle/18452/3139/229.pdf (accessed on 18 October 2021).
- Samarabandu, J.; Liu, X.-Q. An edge-based text region extraction algorithm for indoor mobile robot navigation. Int. J. Signal Process. 2007, 3, 273–280. [Google Scholar]
- Sharma, B.; Katiyar, V.K.; Gupta, A.K.; Singh, A. The automated vehicle detection of highway traffic images by differential morphological profile. J. Transp. Technol. 2014, 4, 150–156. [Google Scholar] [CrossRef] [Green Version]
- Zheng, Z.-Z.; Zhou, G.-Q.; Wang, Y.; Liu, Y.-L.; Li, X.-W.; Wang, X.-T.; Jiang, L. A novel vehicle detection method with high resolution highway aerial image. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2013, 6, 2338–2343. [Google Scholar] [CrossRef]
- Türmer, S. Car Detection in Low-Frame Rate Aerial Imagery of Dense Urban Areas. Ph.D. Thesis, Technische Universität München, Munich, Germany, 2014. [Google Scholar]
- Mancas, M.; Gosselin, B.; Macq, B.; Unay, D. Computational Attention for Defect Localization. In Proceedings of the ICVS Workshop on Computational Attention and Applications, Bielefeld, Germany, 21–24 March 2007; pp. 1–10. [Google Scholar]
- Saha, B.N.; Ray, N. Image thresholding by variational minimax optimization. Pattern Recognit. 2009, 42, 843–856. [Google Scholar] [CrossRef]
- Achanta, R.; Hemami, S.; Estrada, F.; Süsstrunk, S. Frequency-Tuned Salient Region Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 1597–1604. [Google Scholar]
- Achanta, R.; Süsstrunk, S. Saliency Detection using Maximum Symmetric Surround. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Hong Kong, China, 26–29 September 2010; pp. 2653–2656. [Google Scholar]
- Wu, Y.-Q.; Hou, W.; Wu, S.-H. Brain MRI segmentation using KFCM and Chan-Vese model. Trans. Tianjin Univ. 2011, 17, 215–219. [Google Scholar] [CrossRef]
- Huang, Z.-H.; Leng, J.-S. Texture extraction in natural scenes using region-based method. J. Digital Inf. Manag. 2014, 12, 246–254. [Google Scholar]
- Ali, F.B.; Powers, D.M.W. Fusion-based fastICA method: Facial expression recognition. J. Image Graph. 2014, 2, 1–7. [Google Scholar] [CrossRef] [Green Version]
- Gleason, J.; Nefian, A.V.; Bouyssounousse, X.; Fong, T.; Bebis, G. Vehicle detection from aerial imagery. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 2065–2070. [Google Scholar]
- Trujillo-Pino, A.; Krissian, K.; Alemán-Flores, M.; Santana-Cedrés, D. Accurate subpixel edge location based on partial area effect. Image Vis. Comput. 2013, 31, 72–90. [Google Scholar] [CrossRef]
- Ray, K.S.; Chakraborty, S. Object detection by spatio-temporal analysis and tracking of the detected objects in a video with variable background. J. Visual Commun. Image Represent. 2019, 58, 662–674. [Google Scholar] [CrossRef]
- Tao, H.-J.; Lu, X.-B. Smoke vehicle detection based on multi-feature fusion and hidden Markov model. J. Real-Time Image Process. 2020, 17, 745–758. [Google Scholar] [CrossRef]
- Tao, H.-J.; Lu, X.-B. Smoke vehicle detection based on spatiotemporal bag-of-features and professional convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 3301–3316. [Google Scholar] [CrossRef]
- Jégou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 11–19. [Google Scholar]
- Shaikh, S.H.; Saeed, K.; Chaki, N. Moving Object Detection Using Background Subtraction; Springer: Berlin/Heidelberg, Germany, 2014; pp. 30–31. [Google Scholar]
- Kasturi, R.; Goldgof, D.; Soundararajan, P.; Manohar, V.; Garofolo, J.; Bowers, R.; Zhang, J. Framework for performance evaluation of face, text, and vehicle detection and tracking in video: Data, metrics, and protocol. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 319–336. [Google Scholar] [CrossRef] [PubMed]
- Kasturi, R.; Goldgof, D.; Soundararajan, P.; Manohar, V.; Boonstra, M.; Korzhova, V. Performance evaluation protocol for face, person and vehicle detection & tracking in video analysis and content extraction (VACE-II). Comput. Sci. Eng. 2006. Available online: https://catalog.ldc.upenn.edu/docs/LDC2011V03/ClearEval_Protocol_v5.pdf (accessed on 18 October 2021).
- Li, Y.-Y.; Huang, Q.; Pei, X.; Chen, Y.-Q.; Jiao, L.-C.; Shang, R.-H. Cross-layer attention network for small object detection in remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2021, 14, 2148–2161. [Google Scholar] [CrossRef]
- Li, C.-J.; Qu, Z.; Wang, S.-Y.; Liu, L. A method of cross-layer fusion multi-object detection and recognition based on improved faster R-CNN model in complex traffic environment. Pattern Recognit. Lett. 2021, 145, 127–134. [Google Scholar] [CrossRef]
- Bosquet, B.; Mucientes, M.; Brea, V.M. STDnet-ST: Spatio-temporal ConvNet for small object detection. Pattern Recognit. 2021, 116, 107929. [Google Scholar] [CrossRef]
- Wang, B.; Gu, Y.-J. An improved FBPN-based detection network for vehicles in aerial images. Sensors 2020, 20, 4709. [Google Scholar] [CrossRef] [PubMed]
- Gao, P.; Tian, T.; Li, L.-F.; Ma, J.-Y.; Tian, J.-W. DE-CycleGAN: An object enhancement network for weak vehicle detection in satellite images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2021, 14, 3403–3414. [Google Scholar] [CrossRef]
- Rabbi, J.; Ray, N.; Schubert, M.; Chowdhury, S.; Chao, D. Small-object detection in remote sensing images with end-to-end edge-enhanced GAN and object detector network. Remote Sens. 2020, 12, 1432. [Google Scholar] [CrossRef]
- Matczak, G.; Mazurek, P. Comparative Monte Carlo analysis of background estimation algorithms for unmanned aerial vehicle detection. Remote Sens. 2021, 13, 870. [Google Scholar] [CrossRef]
- Li, S.; Zhou, G.-Q.; Zheng, Z.-Z.; Liu, Y.-L.; Li, X.-W.; Zhang, Y.; Yue, T. The Relation between Accuracy and Size of Structure Element for Vehicle Detection with High Resolution Highway Aerial Images. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 2645–2648. [Google Scholar]
- Elmikaty, M.; Stathaki, T. Car detection in aerial images of dense urban areas. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 51–63. [Google Scholar] [CrossRef]
- Razakarivony, S.; Jurie, F. Vehicle detection in aerial imagery: A small target detection benchmark. J. Visual Commun. Image Represent. 2016, 34, 187–203. [Google Scholar] [CrossRef] [Green Version]
- Nascimento, J.C.; Marques, J.S. Performance evaluation of object detection algorithms for video surveillance. IEEE Trans. Multimed. 2006, 8, 761–774. [Google Scholar] [CrossRef]
- Karasulu, B.; Korukoglu, S. Performance Evaluation Software: Moving Object Detection and Tracking in Videos; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; pp. 40–48. [Google Scholar]
- Jiao, L.-C.; Zhang, F.; Liu, F.; Yang, S.-Y.; Li, L.-L.; Feng, Z.-X.; Qu, R. A survey of deep learning-based object detection. IEEE Access 2019, 7, 128837–128868. [Google Scholar] [CrossRef]
- Chen, C.; Zhong, J.-D.; Tan, Y. Multiple oriented and small object detection with convolutional neural networks for aerial image. Remote Sens. 2019, 11, 2176. [Google Scholar] [CrossRef] [Green Version]
- Shen, J.-Q.; Liu, N.-Z.; Sun, H.; Zhou, H.-Y. Vehicle detection in aerial images based on lightweight deep convolutional network and generative adversarial network. IEEE Access 2019, 7, 148119–148130. [Google Scholar] [CrossRef]
- Qiu, H.-Q.; Li, H.-L.; Wu, Q.-B.; Meng, F.-M.; Xu, L.-F.; Ngan, K.N.; Shi, H.-C. Hierarchical context features embedding for object detection. IEEE Trans. Multimed. 2020, 22, 3039–3050. [Google Scholar] [CrossRef]
- Boukerche, A.; Hou, Z.-J. Object detection using deep learning methods in traffic scenarios. ACM Comput. Surv. 2021, 54, 1–35. [Google Scholar] [CrossRef]
- Fan, D.-P.; Cheng, M.-M.; Liu, Y.; Li, T.; Borji, A. Structure-measure: A New Way to Evaluate Foreground Maps. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4548–4557. [Google Scholar]
- Chen, K.-Q.; Fu, K.; Yan, M.-L.; Gao, X.; Sun, X.; Wei, X. Semantic segmentation of aerial images with shuffling convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 173–177. [Google Scholar] [CrossRef]
- Tayara, H.; Soo, K.-G.; Chong, K.-T. Vehicle detection and counting in high-resolution aerial images using convolutional regression neural network. IEEE Access 2018, 6, 2220–2230. [Google Scholar] [CrossRef]
- Yang, J.-X.; Xie, X.-M.; Yang, W.-Z. Effective contexts for UAV vehicle detection. IEEE Access 2019, 7, 85042–85054. [Google Scholar] [CrossRef]
- Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3211–3231. [Google Scholar] [CrossRef] [Green Version]
- Fu, Z.-H.; Chen, Y.-W.; Yong, H.-W.; Jiang, R.-X.; Zhang, L.; Hua, X.-S. Foreground gating and background refining network for surveillance object detection. IEEE Trans. Image Process. 2019, 28, 6077–6090. [Google Scholar] [CrossRef]
- Franchi, G.; Fehri, A.; Yao, A. Deep morphological networks. Pattern Recognit. 2020, 102, 107246. [Google Scholar] [CrossRef]
- Zhang, X.-X.; Zhu, X. Moving vehicle detection in aerial infrared image sequences via fast image registration and improved YOLOv3 network. Int. J. Remote Sens. 2020, 41, 4312–4335. [Google Scholar] [CrossRef]
- Zhou, Y.-F.; Maskell, S. Detecting and Tracking Small Moving Objects in Wide Area Motion Imagery using Convolutional Neural Networks. In Proceedings of the 2019 22nd International Conference on Information Fusion, Ottawa, ON, Canada, 2–5 July 2019; pp. 1–8. [Google Scholar]
- Yang, M.-Y.; Liao, W.-T.; Li, X.-B.; Rosenhahn, B. Deep-learning for Vehicle Detection in Aerial Images. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3080–3084. [Google Scholar]
- Liu, C.-Y.; Ding, Y.-L.; Zhu, M.; Xiu, J.-H.; Li, M.-Y.; Li, Q.-H. Vehicle detection in aerial images using a fast-oriented region search and the vector of locally aggregated descriptors. Sensors 2019, 19, 3294. [Google Scholar] [CrossRef] [Green Version]
- Ram, S. Sparse Representations and Nonlinear Image Processing for Inverse Imaging Solutions. Ph.D. Thesis, Department of Electrical and Computer Engineering, The University of Arizona, Tucson, AZ, USA, 20 June 2017. [Google Scholar]
- Huang, X.-H.; He, P.; Rangarajan, A.; Ranka, S. Intelligent intersection: Two-stream convolutional networks for real-time near-accident detection in traffic video. ACM Trans. Spat. Alg. Syst. 2020, 6, 1–28. [Google Scholar] [CrossRef] [Green Version]
- Sommer, L.; Schuchert, T.; Beyerer, J. Comprehensive analysis of deep learning-based vehicle detection using aerial images. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 2733–2747. [Google Scholar] [CrossRef]
- Karim, S.; Zhang, Y.; Yin, S.-L.; Laghari, A.A.; Brohi, A.A. Impact of compressed and down-scaled training images on vehicle detection in remote sensing imagery. Multimed. Tools Appl. 2019, 78, 32565–32583. [Google Scholar] [CrossRef]
- Song, J.-G.; Park, H.-Y. Object recognition in very low-resolution images using deep collaborative learning. IEEE Access 2019, 7, 134071–134082. [Google Scholar]
- Shao, S.-C.; Tunc, C.; Satam, P.; Hariri, S. Real-time IRC Threat Detection Framework. In Proceedings of the 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS* W), Tucson, AZ, USA, 18–22 September 2017; pp. 318–323. [Google Scholar]
- Bernard, J.; Shao, S.-C.; Tunc, C.; Kheddouci, H.; Hariri, S. Quasi-cliques’ Analysis for IRC Channel Thread Detection. In Proceedings of the 7th International Conference on Complex Networks and Their Applications, Cambridge, UK, 11–13 December 2018; pp. 578–589. [Google Scholar]
- Wu, C.-K.; Shao, S.-C.; Tunc, C.; Hariri, S. Video anomaly detection using pre-trained deep convolutional neural nets and context mining. In Proceedings of the IEEE/ACS 17th International Conference on Computer Systems and Applications (AICCSA), Antalya, Turkey, 2–5 November 2020; pp. 1–8. [Google Scholar]
- Wu, C.-K.; Shao, S.-C.; Tunc, C.; Satam, P.; Hariri, S. An explainable and efficient deep learning framework for video anomaly detection. Cluster Comput. 2021, 25, 1–23. [Google Scholar] [CrossRef]
- Kurz, F.; Azimi, S.-M.; Sheu, C.-Y.; D’Angelo, P. Deep learning segmentation and 3D reconstruction of road markings using multi-view aerial imagery. ISPRS Int. J. Geo-Inf. 2019, 8, 47. [Google Scholar] [CrossRef] [Green Version]
- Yang, B.; Zhang, S.; Tian, Y.; Li, B.-J. Front-vehicle detection in video images based on temporal and spatial characteristics. Sensors 2019, 19, 1728. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Mostofa, M.; Ferdous, S.N.; Riggan, B.S.; Nasrabadi, N.M. Joint-Srvdnet: Joint super resolution and vehicle detection network. IEEE Access 2020, 8, 82306–82319. [Google Scholar] [CrossRef]
- Liu, Y.-J.; Yang, F.-B.; Hu, P. Small-object detection in UAV-captured images via multi-branch parallel feature pyramid networks. IEEE Access 2020, 8, 145740–145750. [Google Scholar] [CrossRef]
- Qiu, H.-Q.; Li, H.-L.; Wu, Q.-B.; Shi, H.-C. Offset Bin Classification Network for Accurate Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 13188–13197. [Google Scholar]
- Shao, S.-C.; Tunc, C.; Al-Shawi, A.; Hariri, S. Automated Twitter author clustering with unsupervised learning for social media forensics. In Proceedings of the IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA), Abu Dhabi, United Arab Emirates, 3–7 November 2019; pp. 1–8. [Google Scholar]
- Lei, J.-F.; Dong, Y.-X.; Sui, H.-G. Tiny moving vehicle detection in satellite video with constraints of multiple prior information. Int. J. Remote Sens. 2021, 42, 4110–4125. [Google Scholar] [CrossRef]
- Zhang, J.-P.; Jia, X.-P.; Hu, J.-K. Local region proposing for frame-based vehicle detection in satellite videos. Remote Sens. 2019, 11, 2372. [Google Scholar] [CrossRef] [Green Version]
- Cao, S.; Yu, Y.-T.; Guan, H.-Y.; Deng, D.-F.; Yan, W.-Q. Affine-function transformation-based object matching for vehicle detection from unmanned aerial vehicle imagery. Remote Sens. 2019, 11, 1708. [Google Scholar] [CrossRef] [Green Version]
- Fan, D.-P.; Ji, G.-P.; Cheng, M.-M.; Shao, L. Concealed object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1–17. [Google Scholar] [CrossRef]
- Zhu, B.; Ao, K. Single-stage rotation-decoupled detector for oriented object. Remote Sens. 2020, 12, 3262. [Google Scholar]
- Chen, J.; Wu, X.-X.; Duan, L.-X.; Chen, L. Sequential instance refinement for cross-domain object detection in images. IEEE Trans. Image Process. 2021, 30, 3970–3984. [Google Scholar] [CrossRef]
- Ming, Q.; Miao, L.-J.; Zhou, Z.-Q.; Song, J.-J.; Yang, X. Sparse label assignment for oriented object detection in aerial images. Remote Sens. 2021, 13, 2664. [Google Scholar] [CrossRef]
- Wu, J.-Q.; Xu, S.-B. From point to region: Accurate and efficient hierarchical small object detection in low-resolution remote sensing images. Remote Sens. 2021, 13, 2620. [Google Scholar] [CrossRef]
- Zhang, T.; Tang, H.; Ding, Y.; Li, P.-L.; Ji, C.; Xu, P.-L. FSRSS-Net: High-resolution mapping of buildings from middle-resolution satellite images using a super-resolution semantic segmentation network. Remote Sens. 2021, 13, 2290. [Google Scholar] [CrossRef]
- Zhou, S.-P.; Wang, J.-J.; Wang, L.; Zhang, J.-M.-Y.; Wang, F.; Huang, D.; Zheng, N.-N. Hierarchical and interactive refinement network for edge-preserving salient object detection. IEEE Trans. Image Process. 2021, 30, 1–14. [Google Scholar] [CrossRef] [PubMed]
- Shao, S.-C.; Tunc, C.; Al-Shawi, A.; Hariri, S. An ensemble of ensembles approach to author attribution for internet relay chat forensics. ACM Trans. Manage. Inform. Syst. 2020, 11, 1–25. [Google Scholar] [CrossRef]
- Shao, S.-C.; Tunc, C.; Al-Shawi, A.; Hariri, S. One-class Classification with Deep Autoencoder for Author Verification in Internet Relay Chat. In Proceedings of the IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA), Abu Dhabi, United Arab Emirates, 3–7 November 2019; pp. 1–8. [Google Scholar]
- Zhu, L.-L.; Geng, X.; Li, Z.; Liu, C. Improving YOLOv5 with attention mechanism for detecting boulders from planetary images. Remote Sens. 2021, 13, 3776. [Google Scholar] [CrossRef]
- Vasu, B.-K. Visualizing Resiliency of Deep Convolutional Network Interpretations for Aerial Imagery. Master’s Thesis, Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA, December 2018. [Google Scholar]
- Huang, F.; Qi, J.-Q.; Lu, H.-C.; Zhang, L.-H.; Ruan, X. Salient object detection via multiple instance learning. IEEE Trans. Image Process. 2017, 26, 1911–1922. [Google Scholar] [CrossRef]
- Lu, H.-C.; Li, X.-H.; Zhang, L.-H.; Ruan, X.; Yang, M.-H. Dense and sparse reconstruction error-based saliency descriptor. IEEE Trans. Image Process. 2016, 25, 1592–1603. [Google Scholar] [CrossRef]
- Liu, L.-C.; Chen, C.L.P.; You, X.-G.; Tang, Y.-Y.; Zhang, Y.-S.; Li, S.-T. Mixed noise removal via robust constrained sparse representation. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 2177–2189. [Google Scholar] [CrossRef]
- Wang, H.-R.; Celik, T. Sparse representation based hyper-spectral image classification. Signal Image Video Process. 2018, 12, 1009–1017. [Google Scholar] [CrossRef]
- He, Z.; Huang, L.; Zeng, W.-J.; Zhang, X.-N.; Jiang, Y.-X.; Zou, Q. Elongated small object detection from remote sensing images using hierarchical scale-sensitive networks. Remote Sens. 2021, 13, 3182. [Google Scholar] [CrossRef]
- Wang, Y.; Jia, Y.-N.; Gu, L.-Z. EFM-Net: Feature extraction and filtration with mask improvement network for object detection in remote sensing images. Remote Sens. 2021, 13, 4151. [Google Scholar] [CrossRef]
- Hou, Y.; Chen, M.-D.; Volk, R.; Soibelman, L. An approach to semantically segmenting building components and outdoor scenes based on multichannel aerial imagery datasets. Remote Sens. 2021, 13, 4357. [Google Scholar] [CrossRef]
- Guo, Y.-L.; Wang, H.-Y.; Hu, Q.-Y.; Liu, H.; Liu, L.; Bennamoun, M. Deep learning for 3D point clouds: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 4338–4364. [Google Scholar] [CrossRef] [PubMed]
- Singh, P.S.; Karthikeyan, S. Salient object detection in hyperspectral images using deep background reconstruction based anomaly detection. Remote Sens. Lett. 2022, 13, 184–195. [Google Scholar] [CrossRef]
| Alg. | Tucson: 3Stage Scheme [13] | Tucson: Filtering by SI [24] | Phoenix: 3Stage Scheme [13] | Phoenix: Filtering by SI [24] |
|---|---|---|---|---|
| LC [27] | 0.860 | 0.736 | 0.625 | 0.482 |
| FT [29] | 0.824 | 0.716 | 0.478 | 0.409 |
| MSSS [30] | 0.724 | 0.620 | 0.450 | 0.385 |
| VMO [28] | 0.775 | 0.551 | 0.704 | 0.501 |
| TE [32] | 0.643 | 0.489 | 0.548 | 0.436 |
| CV [31] | 0.584 | 0.523 | 0.401 | 0.257 |
| Alg. | Tucson: 3Stage Scheme [13] | Tucson: S&C [4] | Phoenix: 3Stage Scheme [13] | Phoenix: S&C [4] |
|---|---|---|---|---|
| LC [27] | 0.839 ± 0.006 (Std. = 0.008) | 0.871 ± 0.005 (Std. = 0.007) | 0.589 ± 0.010 (Std. = 0.014) | 0.620 ± 0.009 (Std. = 0.012) |
| VMO [28] | 0.745 ± 0.014 (Std. = 0.019) | 0.802 ± 0.009 (Std. = 0.012) | 0.696 ± 0.007 (Std. = 0.010) | 0.683 ± 0.014 (Std. = 0.020) |
| FDE [34] | 0.871 ± 0.004 (Std. = 0.006) | 0.876 ± 0.005 (Std. = 0.007) | 0.658 ± 0.013 (Std. = 0.018) | 0.696 ± 0.009 (Std. = 0.013) |
| Post-Proc. | Dataset | LC [27] | VMO [28] | FT [29] | MSSS [30] | KFCM-CV [31] | FDE [34] | PAE [35] | MF [25] | TE [32] | FICA [33] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Method 1 (M1): FiltDil [22] | Tucson | 0.892 | 0.760 | 0.848 | 0.782 | 0.725 | 0.879 | 0.625 | 0.681 | 0.640 | 0.743 |
| | Phoenix | 0.549 | 0.714 | 0.608 | 0.631 | 0.306 | 0.643 | 0.313 | 0.454 | 0.621 | 0.392 |
| Method 2 (M2): HeurFilt [23] | Tucson | 0.840 | 0.717 | 0.766 | 0.724 | 0.569 | 0.867 | 0.801 | 0.724 | 0.812 | 0.804 |
| | Phoenix | 0.538 | 0.606 | 0.453 | 0.354 | 0.296 | 0.591 | 0.480 | 0.380 | 0.525 | 0.377 |
| Method 3 (M3): 3Stage [13] | Tucson | 0.860 | 0.775 | 0.824 | 0.724 | 0.584 | 0.866 | 0.793 | 0.618 | 0.643 | 0.720 |
| | Phoenix | 0.625 | 0.704 | 0.478 | 0.450 | 0.401 | 0.673 | 0.646 | 0.341 | 0.548 | 0.337 |
| Method 4 (M4): S&O [25] | Tucson | 0.822 | 0.515 | 0.733 | 0.641 | 0.467 | 0.877 | 0.798 | 0.776 | 0.815 | 0.655 |
| | Phoenix | 0.583 | 0.640 | 0.439 | 0.470 | 0.205 | 0.651 | 0.535 | 0.588 | 0.498 | 0.355 |
| Method 5 (M5): S&C [4] | Tucson | 0.881 | 0.801 | 0.861 | 0.776 | 0.587 | 0.885 | 0.814 | 0.722 | 0.680 | 0.723 |
| | Phoenix | 0.628 | 0.702 | 0.539 | 0.568 | 0.407 | 0.683 | 0.579 | 0.500 | 0.565 | 0.433 |
| Method 6 (M6): Enh3Stage [20] | Tucson | 0.823 | 0.820 | 0.832 | 0.817 | 0.610 | 0.902 | 0.762 | 0.683 | 0.754 | 0.712 |
| | Phoenix | 0.611 | 0.688 | 0.598 | 0.592 | 0.479 | 0.614 | 0.627 | 0.518 | 0.642 | 0.461 |
| Method 7 (M7): SpatialProc [14] | Tucson | 0.842 | 0.691 | 0.809 | 0.808 | 0.694 | 0.857 | 0.723 | 0.764 | 0.784 | 0.738 |
| | Phoenix | 0.646 | 0.544 | 0.547 | 0.572 | 0.346 | 0.632 | 0.534 | 0.482 | 0.557 | 0.426 |
| Alg. | Scheme | TP | S | M | FN | FP |
|---|---|---|---|---|---|---|
| LC [27] | M0 | 7051 | 1631 | 688 | 333 | 7691 |
| | M3 | 5862 | 241 | 410 | 1800 | 3110 |
| | M5 | 6187 | 368 | 345 | 1540 | 3792 |
| | M7 | 5988 | 123 | 415 | 1669 | 2327 |
| VMO [28] | M0 | 7104 | 10,527 | 188 | 584 | 16,633 |
| | M3 | 5585 | 653 | 853 | 1634 | 3065 |
| | M5 | 5920 | 746 | 973 | 1179 | 3619 |
| | M6 | 6247 | 813 | 671 | 1154 | 2890 |
| FT [29] | M0 | 7290 | 5000 | 174 | 608 | 14,888 |
| | M1 | 6357 | 1879 | 669 | 1046 | 5667 |
| | M6 | 6076 | 625 | 285 | 1711 | 3070 |
| | M7 | 5541 | 892 | 286 | 2245 | 2769 |
| MSSS [30] | M0 | 7457 | 5907 | 353 | 262 | 17,286 |
| | M1 | 6394 | 2005 | 1049 | 629 | 4772 |
| | M6 | 6133 | 543 | 264 | 1675 | 3413 |
| | M7 | 5906 | 555 | 323 | 1843 | 3338 |
| KFCM-CV [31] | M0 | 7111 | 12,053 | 428 | 533 | 41,539 |
| | M5 | 5212 | 1577 | 443 | 2417 | 9697 |
| | M6 | 5294 | 1287 | 553 | 2225 | 6950 |
| | M7 | 5145 | 1356 | 425 | 2502 | 8738 |
| FDE [34] | M0 | 6477 | 1677 | 362 | 1233 | 6703 |
| | M3 | 5644 | 267 | 519 | 1909 | 1648 |
| | M5 | 5447 | 150 | 974 | 1651 | 1347 |
| | M6 | 5823 | 146 | 512 | 1689 | 2055 |
| PAE [35] | M0 | 6977 | 6581 | 81 | 1014 | 14,416 |
| | M3 | 5566 | 134 | 627 | 1879 | 2552 |
| | M5 | 5807 | 373 | 497 | 1768 | 3198 |
| | M6 | 5703 | 231 | 467 | 1902 | 3127 |
| MF [25] | M0 | 6753 | 10,100 | 73 | 1246 | 37,742 |
| | M4 | 5056 | 1562 | 102 | 2914 | 2568 |
| | M5 | 5125 | 322 | 1529 | 1418 | 6140 |
| | M6 | 4994 | 240 | 277 | 3001 | 3280 |
| TE [32] | M0 | 6755 | 6264 | 191 | 1126 | 25,879 |
| | M2 | 5631 | 1966 | 183 | 2258 | 3357 |
| | M6 | 5668 | 1607 | 104 | 2300 | 2575 |
| | M7 | 5597 | 1560 | 93 | 2382 | 2401 |
| FICA [33] | M0 | 6766 | 4263 | 98 | 1208 | 32,360 |
| | M2 | 5426 | 880 | 18 | 2628 | 5262 |
| | M6 | 5227 | 419 | 230 | 2615 | 3991 |
| | M7 | 5545 | 189 | 604 | 1923 | 6660 |
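Counts like those above feed the summary metrics PWC and MODA reported in the following tables. Below are minimal sketches of their standard forms — a common PWC variant and the CLEAR-style MODA of Kasturi et al. (cited in the references). The paper's exact accounting of Splits, Merges, true negatives, and normalization may differ, so these need not reproduce the tabulated values.

```python
def pwc(tp, fn, fp, tn=0):
    """Percentage of Wrong Classifications: 100*(FN+FP)/(TP+FP+FN+TN).
    Whether a true-negative count enters the denominator (as in pixel-level
    evaluation) is an assumption here; tn=0 gives the object-level form."""
    return 100.0 * (fn + fp) / (tp + fp + fn + tn)

def moda(tp, fn, fp, c_m=1.0, c_f=1.0):
    """Multiple Object Detection Accuracy in the CLEAR-style form
    MODA = 1 - (c_m*FN + c_f*FP) / N_G, with the number of ground-truth
    objects N_G approximated here as TP + FN and unit miss/false-alarm costs."""
    return 1.0 - (c_m * fn + c_f * fp) / (tp + fn)
```

Note that MODA can go negative when false positives and misses together outnumber the ground-truth objects, which is why several M0 (no post-processing) entries in the tables below are negative.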
PWC (%) on the Tucson (T) and Phoenix (P) datasets; M0 = no post-processing, and blank cells indicate scheme–algorithm combinations not reported:

| Alg. | T: M0 | T: M1 [22] | T: M2 [23] | T: M3 [13] | T: M4 [25] | T: M5 [4] | T: M6 [20] | T: M7 [14] | P: M0 | P: M1 [22] | P: M2 [23] | P: M3 [13] | P: M4 [25] | P: M5 [4] | P: M6 [20] | P: M7 [14] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LC [27] | 41.5 ± 1.1 | | | 27.6 ± 1.2 | | 27.9 ± 1.2 | | 27.3 ± 1.0 | 62.4 ± 0.7 | | | 58.4 ± 1.0 | | 58.9 ± 1.0 | | 52.3 ± 1.1 |
| VMO [28] | 66.3 ± 1.1 | | | 40.8 ± 1.1 | | 46.9 ± 1.0 | 35.0 ± 1.2 | | 74.2 ± 0.8 | | | 46.9 ± 1.0 | | 49.8 ± 1.2 | 47.6 ± 1.1 | |
| FT [29] | 62.7 ± 0.9 | 38.7 ± 1.0 | | | | | 28.8 ± 1.3 | 27.9 ± 1.0 | 72.2 ± 0.7 | 62.2 ± 0.8 | | | | | 57.4 ± 0.9 | 61.7 ± 1.0 |
| MSSS [30] | 65.7 ± 0.8 | 33.6 ± 1.0 | | | | | 30.9 ± 1.1 | 31.0 ± 1.0 | 73.5 ± 0.6 | 56.5 ± 1.0 | | | | | 58.0 ± 0.9 | 57.8 ± 1.0 |
| KFCM-CV [31] | 83.6 ± 0.4 | | | | | 61.0 ± 1.0 | 56.1 ± 1.4 | 46.9 ± 1.2 | 87.1 ± 0.3 | | | | | 76.5 ± 0.7 | 68.5 ± 1.0 | 79.1 ± 0.8 |
| FDE [34] | 50.4 ± 0.9 | | | 22.8 ± 0.8 | | 22.0 ± 0.8 | 18.2 ± 1.1 | | 59.0 ± 1.0 | | | 50.8 ± 1.2 | | 46.4 ± 1.3 | 54.7 ± 1.1 | |
| PAE [35] | 62.3 ± 0.9 | | | 36.2 ± 1.0 | | 38.7 ± 1.0 | 38.5 ± 1.1 | | 73.2 ± 0.8 | | | 50.6 ± 1.5 | | 52.0 ± 1.4 | 53.4 ± 1.3 | |
| MF [25] | 79.1 ± 1.6 | | | | 37.4 ± 2.0 | 41.8 ± 1.2 | 48.2 ± 1.1 | | 88.0 ± 0.6 | | | | 61.8 ± 0.9 | 71.4 ± 1.1 | 65.1 ± 1.2 | |
| TE [32] | 78.2 ± 0.6 | | 31.8 ± 0.8 | | | | 39.5 ± 1.2 | 38.2 ± 1.4 | 80.5 ± 1.0 | | 63.3 ± 1.1 | | | | 52.8 ± 1.3 | 61.5 ± 1.6 |
| FICA [33] | 71.4 ± 1.3 | | 33.5 ± 1.6 | | | | 35.4 ± 1.7 | 41.6 ± 1.3 | 88.5 ± 0.5 | | 74.4 ± 1.2 | | | | 70.1 ± 1.1 | 73.0 ± 0.9 |
Metric | Alg. | Tucson Dataset | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Scheme | LC [27] | VMO [28] | FT [29] | MSSS [30] | KFCM-CV [31] | FDE [34] | PAE [35] | MF [25] | TE [32] | FICA [33] | ||
MODA | M0: No Post-Proc. | 0.295 ± 0.034 | −0.979 ± 0.098 | −0.713 ± 0.066 | −0.944 ± 0.070 | −3.944 ± 0.124 | 0.097 ± 0.031 | −0.531 ± 0.053 | −3.248 ± 0.359 | −2.402 ± 0.113 | −1.538 ± 0.158 | |
M1: FiltDil [22] | 0.401 ± 0.025 | 0.523 ± 0.021 | ||||||||||
M2: HeurFilt [23] | 0.261 ± 0.032 | 0.564 ± 0.036 | ||||||||||
M3: 3Stage [13] | 0.684 ± 0.016 | 0.505 ± 0.018 | 0.769 ± 0.009 | 0.585 ± 0.014 | ||||||||
M4: S&O [25] | 0.602 ± 0.020 | |||||||||||
M5: S&C [4] | 0.664 ± 0.018 | 0.522 ± 0.021 | −0.160 ± 0.035 | 0.790 ± 0.008 | 0.510 ± 0.017 | 0.453 ± 0.023 | ||||||
M6: Enh3Stage [20] | 0.563 ± 0.042 | 0.641 ± 0.075 | 0.602 ± 0.058 | 0.169 ± 0.041 | 0.814 ± 0.010 | 0.522 ± 0.013 | 0.307 ± 0.026 | 0.498 ± 0.017 | 0.582 ± 0.029 | |||
M7: SpatialProc [14] | 0.669 ± 0.021 | 0.652 ± 0.016 | 0.610 ± 0.025 | 0.368 ± 0.042 | 0.511 ± 0.022 | 0.428 ± 0.037 | ||||||
MOC | M0: No Post-Proc. | 0.296 | −0.981 | −0.714 | −0.943 | −3.944 | 0.098 | −0.532 | −3.235 | −2.400 | −1.534 | |
M1: FiltDil [22] | 0.401 | 0.523 | ||||||||||
M2: HeurFilt [23] | 0.262 | 0.565 | ||||||||||
M3: 3Stage [13] | 0.684 | 0.505 | 0.770 | 0.585 | ||||||||
M4: S&O [25] | 0.601 | 0.648 | 0.378 | |||||||||
M5: S&C [4] | 0.664 | 0.522 | −0.160 | 0.790 | 0.510 | 0.453 | ||||||
M6: Enh3Stage [20] | 0.570 | 0.638 | 0.596 | 0.167 | 0.808 | 0.516 | 0.299 | 0.492 | 0.575 | |||
M7: SpatialProc [14] | 0.672 | 0.653 | 0.604 | 0.377 | 0.505 | 0.433 | ||||||
Phoenix Dataset
Metric | Scheme | LC [27] | VMO [28] | FT [29] | MSSS [30] | KFCM-CV [31] | FDE [34] | PAE [35] | MF [25] | TE [32] | FICA [33]
---|---|---|---|---|---|---|---|---|---|---|---
MODA | M0: No Post-Proc. | −0.280 ± 0.033 | −1.353 ± 0.103 | −1.130 ± 0.055 | −1.403 ± 0.060 | −4.473 ± 0.090 | −0.065 ± 0.034 | −1.293 ± 0.082 | −4.427 ± 0.220 | −2.302 ± 0.240 | −4.771 ± 0.225 | |
M1: FiltDil [22] | −0.062 ± 0.026 | 0.140 ± 0.022 | ||||||||||
M2: HeurFilt [23] | 0.092 ± 0.050 | −0.514 ± 0.061 | ||||||||||
M3: 3Stage [13] | 0.105 ± 0.028 | 0.421 ± 0.021 | 0.350 ± 0.023 | 0.316 ± 0.033 | ||||||||
M4: S&O [25] | 0.039 ± 0.026 | |||||||||||
M5: S&C [4] | 0.020 ± 0.032 | 0.292 ± 0.031 | −0.837 ± 0.040 | 0.467 ± 0.022 | 0.258 ± 0.034 | −0.324 ± 0.049 | |
M6: Enh3Stage [20] | 0.382 ± 0.024 | 0.168 ± 0.017 | 0.150 ± 0.021 | −0.829 ± 0.045 | 0.271 ± 0.036 | 0.235 ± 0.028 | 0.044 ± 0.025 | 0.298 ± 0.019 | −0.211 ± 0.045 | |||
M7: SpatialProc [14] | 0.326 ± 0.021 | 0.080 ± 0.026 | 0.113 ± 0.025 | −1.155 ± 0.062 | 0.064 ± 0.037 | −0.453 ± 0.039 | ||||||
MOC | M0: No Post-Proc. | −0.280 | −1.351 | −1.123 | −1.402 | −4.477 | −0.063 | −1.287 | −4.418 | −2.292 | −4.764 | |
M1: FiltDil [22] | −0.061 | 0.141 | ||||||||||
M2: HeurFilt [23] | 0.094 | −0.514 | ||||||||||
M3: 3Stage [13] | 0.103 | 0.420 | 0.352 | 0.319 | ||||||||
M4: S&O [25] | 0.044 | |||||||||||
M5: S&C [4] | 0.019 | 0.291 | −0.838 | 0.468 | 0.261 | −0.321 | ||||||
M6: Enh3Stage [20] | 0.379 | 0.171 | 0.146 | −0.436 | 0.268 | 0.239 | 0.146 | 0.302 | −0.207 | |||
M7: SpatialProc [14] | 0.330 | 0.082 | 0.112 | −1.153 | 0.065 | −0.452 |
Accuracy
Training: Test | No Post-Proc. | FiltDil (M1) [22] | HeurFilt (M2) [23] | 3Stage (M3) [13] | S&O (M4) [25]
---|---|---|---|---|---|
80%:20% | 0.718 | 0.767 | 0.746 | 0.771 | 0.682 |
90%:10% | 0.745 | 0.790 | 0.772 | 0.798 | 0.706 |
95%:5% | 0.801 | 0.853 | 0.829 | 0.856 | 0.754 |
Scheme | 80%:20% | 85%:15% | 90%:10% | 95%:5% | 98%:2%
---|---|---|---|---|---
No Post-Processing | 63.90 ± 1.81 | 64.68 ± 1.65 | 67.14 ± 1.58 | 71.29 ± 1.43 | 74.18 ± 1.32 |
FiltDil (M1) [22] | 65.96 ± 1.74 | 67.85 ± 1.92 | 68.73 ± 1.56 | 72.47 ± 1.69 | 75.33 ± 1.45 |
3Stage (M3) [13] | 71.07 ± 1.25 | 72.98 ± 1.09 | 73.61 ± 1.28 | 77.89 ± 1.31 | 79.54 ± 1.16 |
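The ± half-widths in tables like the one above summarize variability across repeated runs. A minimal sketch of one plausible aggregation, a normal-approximation 95% confidence interval over scores from repeated random train/test splits; the per-run scores below are hypothetical, and the paper's exact interval procedure is not restated in this section:

```python
import random
import statistics

def mean_ci(scores, z=1.96):
    """Mean and normal-approximation 95% CI half-width over repeated runs."""
    m = statistics.mean(scores)
    half = z * statistics.stdev(scores) / len(scores) ** 0.5
    return m, half

# Hypothetical per-run scores, e.g., from 20 random 95%:5% splits
random.seed(0)
runs = [77.9 + random.gauss(0.0, 1.3) for _ in range(20)]
m, half = mean_ci(runs)
print(f"{m:.2f} +/- {half:.2f}")
```

Each (scheme, split-ratio) cell would then report one such mean ± half-width pair.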
mAP (%)
Training: Test | No Post-Proc. | 3Stage (M3) [13] | S&C (M5) [4] | Enh3Stage (M6) [20] | SpatialProc (M7) [14]
---|---|---|---|---|---
80%:20% | 62.58 ± 1.73 | 69.84 ± 1.26 | 71.24 ± 1.68 | 75.32 ± 1.17 | 67.39 ± 1.52 |
85%:15% | 64.29 ± 1.56 | 71.65 ± 1.12 | 73.08 ± 1.52 | 77.89 ± 1.06 | 69.23 ± 1.38 |
90%:10% | 66.57 ± 1.48 | 72.98 ± 1.43 | 74.85 ± 1.36 | 79.24 ± 0.95 | 71.68 ± 1.25 |
95%:5% | 70.92 ± 1.35 | 76.23 ± 0.97 | 75.86 ± 1.03 | 81.36 ± 0.88 | 74.36 ± 1.10 |
98%:2% | 73.16 ± 1.29 | 78.45 ± 1.08 | 77.63 ± 0.94 | 82.80 ± 1.04 | 75.57 ± 0.99 |
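Each mAP entry above averages per-class average precision. A minimal, self-contained sketch of all-point interpolated AP from score-ranked detections; the correctness flags below are hypothetical, and the matching step (overlap test against ground truth) is omitted:

```python
import numpy as np

def average_precision(is_tp, n_gt):
    """All-point interpolated average precision.
    is_tp: 1/0 correctness flags for detections sorted by descending score.
    n_gt:  number of ground-truth objects of this class."""
    is_tp = np.asarray(is_tp)
    tp = np.cumsum(is_tp)
    fp = np.cumsum(1 - is_tp)
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # Precision envelope: at each rank, best precision at any higher recall.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Area under the stepwise precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p
        prev_r = r
    return float(ap)

# Hypothetical ranked detections: 1 = matched a ground-truth vehicle
print(average_precision([1, 1, 0, 1, 0, 0, 1, 0], n_gt=5))
```

mAP is then the mean of these AP values over classes (here, a single vehicle class).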
Accuracy
Training: Test | No Post-Proc. | 3Stage (M3) [13] | S&C (M5) [4] | Enh3Stage (M6) [20] | SpatialProc (M7) [14]
---|---|---|---|---|---
80%:20% | 0.718 | 0.771 | 0.780 | 0.823 | 0.742 |
85%:15% | 0.729 | 0.783 | 0.791 | 0.844 | 0.756 |
90%:10% | 0.745 | 0.798 | 0.817 | 0.865 | 0.773 |
95%:5% | 0.801 | 0.856 | 0.835 | 0.882 | 0.808 |
98%:2% | 0.846 | 0.874 | 0.858 | 0.891 | 0.835 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Gao, X.; Ram, S.; Philip, R.C.; Rodríguez, J.J.; Szep, J.; Shao, S.; Satam, P.; Pacheco, J.; Hariri, S. Selecting Post-Processing Schemes for Accurate Detection of Small Objects in Low-Resolution Wide-Area Aerial Imagery. Remote Sens. 2022, 14, 255. https://doi.org/10.3390/rs14020255