Advances in Radar Imaging with Deep Learning Algorithms

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (26 May 2024) | Viewed by 6515

Special Issue Editor


Guest Editor
IMT Atlantique, 44300 Nantes, France
Interests: SAR; target detection; deep learning; radar processing; EM wave propagation and scattering; active sensor image processing; data fusion methods and metrics; explainable AI; non-Gaussian statistics

Special Issue Information

Dear Colleagues,

In recent decades, an unprecedented amount of data has been gathered by the radar remote sensing community, boosting the development of an increasing number of applications in many areas, from remote sensing of terrain and sea to medical imaging. The high efficacy of radar imaging approaches is key to developing advanced data processing strategies able to detect and track targets in challenging scenarios. For example, target detection using multiple-input multiple-output (MIMO) radars has recently gained popularity in radar research. Moreover, a set of new analytical tools has been proposed and applied to convolutional neural networks (CNNs) handling automatic target recognition on SAR datasets.

In this Special Issue, we intend to compile a series of papers that merge the analysis and use of radar images with AI techniques. We expect new research that will address practical problems in radar image applications with the help of advanced AI methods.

Articles may address, but are not limited to, the following topics:

  • Advanced AI-based target detection/recognition/tracking;
  • Radar image intelligent processing;
  • AI-based SAR imaging algorithm updating;
  • Combination of advanced signal processing and artificial intelligence techniques;
  • New datasets for remote sensing image classification with deep learning;
  • New radar systems, such as MIMO radars, distributed radars, dual multi-base radars, and so on.

Dr. Jean-Marc Le Caillec
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • synthetic aperture radar (SAR)
  • target detection/recognition
  • radar imaging
  • non-linearity detection
  • deep learning / machine learning / artificial intelligence
  • signal and image processing
  • passive and active sensors
  • array clustering

Published Papers (7 papers)


Research


20 pages, 9507 KiB  
Article
Sparse SAR Imaging Based on Non-Local Asymmetric Pixel-Shuffle Blind Spot Network
by Yao Zhao, Decheng Xiao, Zhouhao Pan, Bingo Wing-Kuen Ling, Ye Tian and Zhe Zhang
Remote Sens. 2024, 16(13), 2367; https://doi.org/10.3390/rs16132367 - 28 Jun 2024
Viewed by 309
Abstract
The integration of Synthetic Aperture Radar (SAR) imaging technology with deep neural networks has advanced significantly in recent years. Yet the scarcity of high-quality samples and the difficulty of extracting prior information from SAR data have limited progress in this domain. This study introduces an innovative sparse SAR imaging approach using a self-supervised non-local asymmetric pixel-shuffle blind spot network. This strategy enables the network to be trained without labeled samples, thus solving the problem of the scarcity of high-quality samples. Through the asymmetric pixel-shuffle downsampling (AP) operation, the spatial correlation between pixels is broken so that the blind spot network can adapt to the actual scene. The network also incorporates a non-local module (NLM) into its blind spot architecture, enhancing its capability to analyze a broader range of information and extract more comprehensive prior knowledge from SAR data. Subsequently, Plug-and-Play (PnP) technology is used to integrate the trained network into the sparse SAR imaging model to handle the regularization term. The optimization of the inverse problem is achieved through the Alternating Direction Method of Multipliers (ADMM) algorithm. Experimental results on unlabeled samples demonstrate that our method significantly outperforms traditional techniques in reconstructing images across various regions.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
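As a rough, hedged illustration of the Plug-and-Play ADMM scheme this abstract describes, the toy below solves a small sparse inverse problem. The learned blind-spot denoiser is replaced by plain soft-thresholding (an assumption made purely for illustration), and all dimensions and parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(v, t):
    # Stand-in "denoiser": soft-thresholding promotes sparsity and plays
    # the role that the trained blind-spot network plays in PnP.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pnp_admm(y, A, rho=1.0, t=0.05, iters=100):
    # Plug-and-Play ADMM for  min_x 0.5*||y - A x||^2 + R(x),
    # where R is handled implicitly by the denoiser in the z-update.
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    lhs = A.T @ A + rho * np.eye(n)          # normal equations for the x-update
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(lhs, Aty + rho * (z - u))  # data-fidelity step
        z = soft_threshold(x + u, t / rho)             # "denoiser" step
        u = u + x - z                                  # dual ascent
    return z

# Sparse scene observed through a random measurement operator.
n, m = 32, 24
x_true = np.zeros(n)
x_true[[3, 10, 20]] = [1.0, -0.5, 2.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = pnp_admm(y, A)
err = np.linalg.norm(x_hat - x_true)
```

In the paper's setting, the z-update would call the trained non-local blind-spot network rather than soft-thresholding, and A would be the SAR observation model.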

19 pages, 4839 KiB  
Article
Intelligent Reconstruction of Radar Composite Reflectivity Based on Satellite Observations and Deep Learning
by Jianyu Zhao, Jinkai Tan, Sheng Chen, Qiqiao Huang, Liang Gao, Yanping Li and Chunxia Wei
Remote Sens. 2024, 16(2), 275; https://doi.org/10.3390/rs16020275 - 10 Jan 2024
Viewed by 1174
Abstract
Weather radar is a useful tool for monitoring and forecasting severe weather but has limited coverage due to beam blockage from mountainous terrain or other factors. To overcome this issue, an intelligent technology called "Echo Reconstruction UNet (ER-UNet)" is proposed in this study. It reconstructs radar composite reflectivity (CREF) using observations from Fengyun-4A geostationary satellites with broad coverage. In general, ER-UNet outperforms UNet in terms of root mean square error (RMSE), mean absolute error (MAE), structural similarity index (SSIM), probability of detection (POD), false alarm rate (FAR), critical success index (CSI), and Heidke skill score (HSS). Additionally, ER-UNet provides a better reconstruction of CREF than the UNet model in terms of the intensity, location, and details of radar echoes (particularly strong echoes). ER-UNet can effectively reconstruct strong echoes and provide crucial decision-making information for early warning of severe weather.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
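For reference, the categorical verification scores listed above (POD, FAR, CSI, HSS) can be computed from a 2x2 contingency table of reconstructed versus observed echoes exceeding a reflectivity threshold. The sketch below uses standard textbook definitions and made-up illustrative values, not data from the paper:

```python
def contingency(pred, obs, thresh):
    # Count hits, misses, false alarms, and correct negatives for the
    # event "reflectivity >= thresh".
    hits = sum(p >= thresh and o >= thresh for p, o in zip(pred, obs))
    misses = sum(p < thresh and o >= thresh for p, o in zip(pred, obs))
    false_alarms = sum(p >= thresh and o < thresh for p, o in zip(pred, obs))
    correct_neg = sum(p < thresh and o < thresh for p, o in zip(pred, obs))
    return hits, misses, false_alarms, correct_neg

def skill_scores(h, m, f, c):
    pod = h / (h + m)              # probability of detection
    far = f / (h + f)              # false alarm ratio
    csi = h / (h + m + f)          # critical success index
    n = h + m + f + c
    expected = ((h + m) * (h + f) + (c + m) * (c + f)) / n
    hss = (h + c - expected) / (n - expected)   # Heidke skill score
    return pod, far, csi, hss

obs  = [40, 10, 35, 5, 45, 20, 38, 8]   # observed dBZ (illustrative)
pred = [42, 12, 15, 6, 44, 37, 36, 7]   # reconstructed dBZ (illustrative)
h, m, f, c = contingency(pred, obs, thresh=35)
pod, far, csi, hss = skill_scores(h, m, f, c)
```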

19 pages, 23605 KiB  
Article
Above Ground Level Estimation of Airborne Synthetic Aperture Radar Altimeter by a Fully Supervised Altimetry Enhancement Network
by Mengmeng Duan, Yanxi Lu, Yao Wang, Gaozheng Liu, Longlong Tan, Yi Gao, Fang Li and Ge Jiang
Remote Sens. 2023, 15(22), 5404; https://doi.org/10.3390/rs15225404 - 17 Nov 2023
Viewed by 865
Abstract
Due to the lack of accurate labels for the airborne synthetic aperture radar altimeter (SARAL), the use of deep learning methods is limited for estimating the above ground level (AGL) of complicated landforms. In addition, the inherent additive and speckle noise inevitably corrupts the intended delay/Doppler map (DDM), so accurate AGL estimation becomes even more challenging when using a feature extraction approach. In this paper, a generalized AGL estimation algorithm is proposed based on a fully supervised altimetry enhancement network (FuSAE-net), where accurate labels are generated by a novel semi-analytical model. In such a case, there is no need for a fully analytical DDM model, and accurate labels are obtained free of additive noise and speckle. Therefore, deep learning supervision is easy and accurate. Next, to further decrease the computational complexity for various landforms on the airborne platform, the network architecture is designed in a lightweight manner. Knowledge distillation has proven to be an effective and intuitive lightweight paradigm. To significantly improve the performance of the compact student network, both the encoder and decoder of the teacher network are utilized during knowledge distillation under the supervision of labels. In the experiments, airborne raw radar altimeter data were applied to examine the performance of the proposed algorithm. Comparisons with conventional methods in both qualitative and quantitative terms demonstrate the superiority of the proposed algorithm.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
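The label-supervised distillation objective described above can be caricatured as follows. The FuSAE-net encoder/decoder details are not given in this abstract, so both networks are stand-in linear maps and the weighting `alpha` is an assumed hyperparameter:

```python
import numpy as np

rng = np.random.default_rng(1)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def distill_loss(student_out, teacher_out, label, alpha=0.7):
    # The compact student is supervised jointly by the (semi-analytical)
    # label and by imitation of the larger teacher, as in standard
    # knowledge distillation; alpha balances the two terms.
    return alpha * mse(student_out, label) + (1 - alpha) * mse(student_out, teacher_out)

x = rng.standard_normal(16)                 # stand-in noisy altimetry profile
label = np.sort(np.abs(x))                  # stand-in clean label (illustrative)
W_teacher = rng.standard_normal((16, 16)) * 0.1
W_student = rng.standard_normal((16, 16)) * 0.1
loss = distill_loss(W_student @ x, W_teacher @ x, label)
```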

24 pages, 7236 KiB  
Article
Deep Learning-Based Enhanced ISAR-RID Imaging Method
by Xiurong Wang, Yongpeng Dai, Shaoqiu Song, Tian Jin and Xiaotao Huang
Remote Sens. 2023, 15(21), 5166; https://doi.org/10.3390/rs15215166 - 29 Oct 2023
Cited by 1 | Viewed by 1247
Abstract
Inverse synthetic aperture radar (ISAR) imaging can be improved by processing Range-Instantaneous Doppler (RID) images, according to the neural-network-based method proposed in this paper. ISAR is a significant imaging technique for moving targets. However, when imaging a moving target over a large accumulated angle, scatterers migrate across several range bins and Doppler bins, so defocusing occurs in the results produced by the conventional Range Doppler Algorithm (RDA). Defocusing can be mitigated with time-frequency analysis (TFA) methods, but at the cost of resolution. The proposed method provides the neural network with more details by using a sequence of RID image frames as input; as a consequence, it achieves better resolution while avoiding defocusing. Furthermore, we have developed a positional encoding method that precisely represents pixel positions while taking the characteristics of ISAR images into account. To address the imbalance between target and non-target pixel counts in ISAR images, we additionally draw on the idea of Focal Loss to modify the Mean Squared Error (MSE) loss. We conduct experiments on simulated point-target data and full-wave simulated data produced by FEKO to assess the efficacy of the proposed approach. The experimental results demonstrate that our approach can improve resolution while preventing defocusing in ISAR images.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
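The abstract does not give the exact form of the Focal-Loss-inspired MSE, so the sketch below shows one plausible variant (an assumption, not the paper's loss): per-pixel squared errors are re-weighted by |error|^gamma, so the abundant, nearly-correct background pixels of an ISAR image no longer dominate the loss over the few target pixels:

```python
import numpy as np

def focal_mse(pred, target, gamma=2.0):
    # Focal-style weighting: easy pixels (small residual) get a tiny
    # weight, hard pixels (large residual) keep a weight near their error.
    err = np.abs(pred - target)
    return float(np.mean((err ** gamma) * err ** 2))

target = np.zeros((8, 8))
target[4, 4] = 1.0                   # a single strong scatterer

pred_bg = target + 0.01              # small error spread over the background
pred_miss = target.copy()
pred_miss[4, 4] = 0.0                # misses the target pixel entirely

# Focal weighting amplifies the penalty on the hard (missed-target) pixel
# relative to the easy background residuals far more than plain MSE would.
ratio = focal_mse(pred_miss, target) / focal_mse(pred_bg, target)
```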

30 pages, 38046 KiB  
Article
MosReformer: Reconstruction and Separation of Multiple Moving Targets for Staggered SAR Imaging
by Xin Qi, Yun Zhang, Yicheng Jiang, Zitao Liu and Chang Yang
Remote Sens. 2023, 15(20), 4911; https://doi.org/10.3390/rs15204911 - 11 Oct 2023
Viewed by 838
Abstract
Maritime moving target imaging using synthetic aperture radar (SAR) demands high resolution and wide swath (HRWS). Using a variable pulse repetition interval (PRI), staggered SAR can achieve seamless HRWS imaging. Reconstruction must be performed because the variable PRI causes echo pulse loss and nonuniformly sampled signals in azimuth, both of which result in spectrum aliasing. Existing reconstruction methods are designed for stationary scenes and have achieved impressive results; for moving targets, however, they inevitably introduce reconstruction errors. Target motion coupled with non-uniform sampling aggravates the spectral aliasing and degrades reconstruction performance. The problem becomes more severe in scenes containing multiple moving targets, since each target's distinct motion parameters affect the spectrum aliasing differently, causing the various aliasing effects to overlap. Consequently, it becomes difficult to reconstruct and separate the echoes of multiple moving targets with high precision in staggered mode. To this end, motivated by deep learning, this paper proposes a novel Transformer-based algorithm to image multiple moving targets in a staggered SAR system. Reconstruction and separation of the multiple moving targets are achieved through a proposed network named MosReFormer (Multiple moving target separation and reconstruction Transformer). Adopting a gated single-head Transformer network with convolution-augmented joint self-attention, the proposed MosReFormer network can mitigate reconstruction errors and separate the signals of multiple moving targets simultaneously. Simulations and experiments on raw data show that the reconstructed and separated results are close to ideal imaging results sampled uniformly in azimuth with constant PRI, verifying the feasibility and effectiveness of the proposed algorithm.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
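The architectural phrase "gated single-head Transformer with convolution-augmented self-attention" can be illustrated very loosely: below is a generic single-head attention block with a parallel depthwise convolution branch and a sigmoid gate, applied to an azimuth sequence. The true MosReFormer layer sizes, gating, and fusion are not specified in the abstract, so everything here is an invented stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gated_conv_attention(x, Wq, Wk, Wv, Wg, kernel):
    # x: (T, d) azimuth samples. Single-head self-attention captures
    # global (long-range) structure along the sequence.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    att = softmax(q @ k.T / np.sqrt(x.shape[1])) @ v
    # Depthwise 1-D convolution captures local structure per channel.
    conv = np.stack([np.convolve(x[:, c], kernel, mode="same")
                     for c in range(x.shape[1])], axis=1)
    gate = 1.0 / (1.0 + np.exp(-(x @ Wg)))      # sigmoid gate
    return gate * (att + conv)                  # gated fusion of both branches

T, d = 10, 4
x = rng.standard_normal((T, d))
Wq, Wk, Wv, Wg = (rng.standard_normal((d, d)) * 0.5 for _ in range(4))
y = gated_conv_attention(x, Wq, Wk, Wv, Wg, kernel=np.array([0.25, 0.5, 0.25]))
```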

20 pages, 6178 KiB  
Article
Boosting SAR Aircraft Detection Performance with Multi-Stage Domain Adaptation Training
by Wenbo Yu, Jiamu Li, Zijian Wang and Zhongjun Yu
Remote Sens. 2023, 15(18), 4614; https://doi.org/10.3390/rs15184614 - 20 Sep 2023
Viewed by 1111
Abstract
Deep learning has achieved significant success in various synthetic aperture radar (SAR) imagery interpretation tasks. However, automatic aircraft detection is still challenging due to the high labeling cost and limited data quantity. To address this issue, we propose a multi-stage domain adaptation training framework that efficiently transfers knowledge from optical imagery to boost SAR aircraft detection performance. To overcome the significant domain discrepancy between optical and SAR images, the training process is divided into three stages: image translation, domain-adaptive pretraining, and domain-adaptive finetuning. First, CycleGAN is used to translate optical images into SAR-style images and reduce global-level image divergence. Next, we propose multilayer feature alignment to further reduce the local-level feature distribution distance. By applying domain adversarial learning in both the pretraining and finetuning stages, the detector learns to extract domain-invariant features that benefit the learning of generic aircraft characteristics. To evaluate the proposed method, extensive experiments were conducted on a self-built SAR aircraft detection dataset. The results indicate that with the proposed training framework, the average precision of Faster RCNN increased by 2.4 points and that of YOLOv3 by 2.6 points, outperforming other domain adaptation methods. By reducing the optical-to-SAR domain discrepancy in three progressive stages, the proposed method effectively mitigates the domain shift and enhances the efficiency of knowledge transfer. It greatly improves aircraft detection performance and offers an effective approach to the limited-training-data problem of SAR aircraft detection.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
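The domain adversarial step in such pipelines is commonly implemented with a gradient reversal trick. The toy below (stand-in linear features and invented numbers, not the paper's detector) first trains a domain classifier to tell "optical-style" from "SAR-style" features, then takes one reversed-gradient step on the feature extractor, which increases the classifier's loss, i.e. makes the features more domain-confusing:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two feature "domains" separated by a mean offset the classifier can exploit.
opt = rng.standard_normal((64, 2)) + np.array([1.5, 0.0])
sar = rng.standard_normal((64, 2)) - np.array([1.5, 0.0])
X = np.vstack([opt, sar])
dom = np.concatenate([np.ones(64), np.zeros(64)])   # 1 = optical, 0 = SAR

W = np.eye(2)                         # feature extractor (stand-in)
w = rng.standard_normal(2) * 0.1      # domain classifier weights

def bce():
    # Binary cross-entropy of the domain classifier on current features.
    p = sigmoid((X @ W) @ w)
    eps = 1e-9
    return float(-np.mean(dom * np.log(p + eps) + (1 - dom) * np.log(1 - p + eps)))

# Stage A: train the domain classifier on the current features.
for _ in range(500):
    p = sigmoid((X @ W) @ w)
    w -= 0.5 * (X @ W).T @ (p - dom) / len(dom)

loss_before = bce()

# Stage B: one gradient-REVERSED step on the extractor, i.e. gradient
# ASCENT on the classifier's loss: features move toward domain confusion.
p = sigmoid((X @ W) @ w)
grad_F = ((p - dom)[:, None] * w[None, :]) / len(dom)
W += 2.0 * X.T @ grad_F
loss_after = bce()
```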

Review


33 pages, 3043 KiB  
Review
Generative Adversarial Networks for SAR Automatic Target Recognition and Classification Models Enhanced Explainability: Perspectives and Challenges
by Héloïse Remusati, Jean-Marc Le Caillec, Jean-Yves Schneider, Jacques Petit-Frère and Thomas Merlet
Remote Sens. 2024, 16(14), 2569; https://doi.org/10.3390/rs16142569 - 13 Jul 2024
Viewed by 133
Abstract
Generative adversarial networks (GANs) are a specific deep learning architecture used for tasks such as data generation and image-to-image translation. In recent years, this architecture has gained popularity and has been applied in many fields. One use currently in vogue is producing synthetic aperture radar (SAR) data with GANs, especially to expand training datasets for SAR automatic target recognition (ATR). Indeed, the complex SAR image formation process makes this kind of data rich in information, motivating deep-network-based methods; yet deep networks also require sufficient training data. Contrary to optical images, substantial numbers of SAR images are generally unavailable because of their acquisition and labelling cost, which makes GANs an interesting tool. Concurrently, improving the explainability of SAR ATR deep neural networks and making their reasoning more transparent have been increasingly explored, as model opacity erodes users' trust. This paper reviews how GANs are used with SAR images and offers perspectives on how GANs could improve the interpretability and explainability of SAR classifiers.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)