Article

SAR-HUB: Pre-Training, Fine-Tuning, and Explaining

Haodong Yang, Xinyue Kang, Long Liu, Yujiang Liu and Zhongling Huang *
1 The BRain and Artificial INtelligence Lab (BRAIN LAB), School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
2 School of Civil Aviation, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(23), 5534; https://doi.org/10.3390/rs15235534
Submission received: 19 October 2023 / Revised: 24 November 2023 / Accepted: 24 November 2023 / Published: 28 November 2023

Abstract

Because current remote sensing pre-trained models trained on optical images are less effective when applied to SAR image tasks, it is crucial to build sensor-specific SAR models with generalized feature representations and to demonstrate, with evidence, the limitations of optical pre-trained models on downstream SAR tasks. This study focuses on three aspects: pre-training, fine-tuning, and explaining. First, we collect the current large-scale open-source SAR scene image classification datasets to pre-train a series of deep neural networks, including convolutional neural networks (CNNs) and vision transformers (ViTs). A novel dynamic range adaptive enhancement method and a mini-batch class-balanced loss are proposed to address the challenges of SAR scene image classification. Second, the pre-trained models are transferred to various SAR downstream tasks and compared with their optical counterparts. Lastly, we propose a novel knowledge point interpretation method that reveals the benefits of the SAR pre-trained model through comprehensive and quantifiable explanations. This study is reproducible via open-source code and datasets, demonstrates generalization through extensive experiments on a variety of tasks, and is interpretable through qualitative and quantitative analyses. The code and models are open source.
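To make the "mini-batch class-balanced loss" mentioned above concrete, the sketch below shows one plausible reading: a cross-entropy reweighted by the inverse class frequency computed within each mini-batch, so that rare scene classes are not drowned out by common ones. The function name, the eps guard, and the normalization are illustrative assumptions rather than the paper's exact formulation; the released SAR-HUB code contains the authoritative definition.

    import torch
    import torch.nn.functional as F

    def mini_batch_class_balanced_loss(logits, targets, eps=1.0):
        # Hypothetical sketch: reweight the cross-entropy by inverse class
        # frequency within the current mini-batch. The paper's actual loss
        # may differ in detail.
        num_classes = logits.size(1)
        # Count how often each class appears in this batch.
        counts = torch.bincount(targets, minlength=num_classes).float()
        # Inverse-frequency weights; eps guards classes absent from the batch.
        weights = 1.0 / (counts + eps)
        # Rescale so the weights sum to the number of classes.
        weights = weights * num_classes / weights.sum()
        return F.cross_entropy(logits, targets, weight=weights)

    # Example usage during training (shapes are illustrative):
    logits = torch.randn(8, 10)           # batch of 8 images, 10 scene classes
    targets = torch.randint(0, 10, (8,))  # ground-truth class indices
    loss = mini_batch_class_balanced_loss(logits, targets)

In a fine-tuning or pre-training loop, such a loss would simply replace the standard cross-entropy term; everything else in the optimization stays the same.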
Keywords: SAR image interpretation; pre-trained model; transfer learning; explainable artificial intelligence
