
Exploitation of SAR Data Using Deep Learning Approaches

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (20 October 2023) | Viewed by 9581

Special Issue Editor

Dr. Dinh Ho Tong Minh
Guest Editor
UMR TETIS, INRAE, University of Montpellier, Montpellier, France
Interests: deep learning; InSAR; ComSAR; tomography; GEDI

Special Issue Information

Dear Colleagues,

Synthetic aperture radar (SAR) is a unique technology commonly used to capture an array of Earth-surface parameters at large spatial scales from space. Unlike optical sensors, which deliver their best images under cloud-free daylight, the European Space Agency's Sentinel-1 SAR acquires its snapshots actively with radar, penetrating clouds and operating at night. This yields an unprecedented multitemporal dataset and a great opportunity to exploit SAR images with deep learning techniques. This Special Issue intends to present high-quality scientific research papers describing deep learning methods for the exploitation of SAR in the big data era, including multitemporal analysis, speckle filtering, phase linking, phase unwrapping, data fusion (with optical and GEDI data), parameter estimation, and related big data topics.

Dr. Dinh Ho Tong Minh
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • Sentinel-1 and Sentinel-2
  • GEDI
  • SAR
  • big data
  • data fusion
  • SAR interferometry
  • ComSAR

Published Papers (4 papers)


Research

20 pages, 1260 KiB  
Article
A Multicomponent Linear Frequency Modulation Signal-Separation Network for Multi-Moving-Target Imaging in the SAR-Ground-Moving-Target Indication System
by Chang Ding, Huilin Mu and Yun Zhang
Remote Sens. 2024, 16(4), 605; https://doi.org/10.3390/rs16040605 - 6 Feb 2024
Viewed by 738
Abstract
Multi-moving-target imaging in a synthetic aperture radar (SAR) system poses a significant challenge owing to target defocusing and contamination by strong background clutter. To address this problem, a new deep-convolutional-neural-network (CNN)-assisted method is proposed for multi-moving-target imaging in a SAR-GMTI system. The multi-moving-target signal can be modeled as a multicomponent linear frequency modulation (LFM) signal with additive perturbation. A fully convolutional network named MLFMSS-Net was designed based on an encoder–decoder architecture to extract the most-energetic LFM signal component from the multicomponent LFM signal in the time domain. Without prior knowledge of the target number, an iterative signal-separation framework based on the well-trained MLFMSS-Net is proposed to separate the multi-moving-target signal into multiple LFM signal components while eliminating the residual clutter. The framework exhibits high imaging robustness and low dependence on the system parameters, making it suitable for practical imaging applications. Consequently, a well-focused multi-moving-target image can be obtained by parameter estimation and secondary azimuth compression for each separated LFM signal component. Simulations and experiments on both airborne and spaceborne SAR data showed that the proposed method is superior to traditional imaging methods in both imaging quality and efficiency.
(This article belongs to the Special Issue Exploitation of SAR Data Using Deep Learning Approaches)
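The multicomponent LFM signal model underlying this abstract can be sketched in a few lines; note that the amplitudes, center frequencies, and chirp rates below are illustrative values, not parameters from the paper, and the stub comment stands in for the actual MLFMSS-Net.

```python
import numpy as np

def lfm(t, amplitude, f0, chirp_rate):
    """One linear-frequency-modulation (chirp) component:
    A * exp(j * 2*pi * (f0*t + 0.5*k*t^2))."""
    return amplitude * np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * chirp_rate * t**2))

fs = 1024.0                       # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)  # 1 s observation window

# Two moving targets -> two LFM components with different center
# frequencies and chirp rates, plus additive perturbation (clutter/noise).
rng = np.random.default_rng(0)
signal = (lfm(t, 1.0, 50.0, 80.0)
          + lfm(t, 0.6, -120.0, -40.0)
          + 0.1 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size)))

# A separation network such as MLFMSS-Net would take `signal` and
# iteratively peel off the most-energetic LFM component.
print(signal.shape, signal.dtype)
```

The iterative framework then repeats extraction on the residual until no energetic component remains, which is why no prior target count is needed.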

28 pages, 36012 KiB  
Article
Mix MSTAR: A Synthetic Benchmark Dataset for Multi-Class Rotation Vehicle Detection in Large-Scale SAR Images
by Zhigang Liu, Shengjie Luo and Yiting Wang
Remote Sens. 2023, 15(18), 4558; https://doi.org/10.3390/rs15184558 - 16 Sep 2023
Cited by 1 | Viewed by 2541
Abstract
Because of the counterintuitive imaging and confusing interpretation dilemma in Synthetic Aperture Radar (SAR) images, the application of deep learning in the detection of SAR targets has been primarily limited to large objects in simple backgrounds, such as ships and airplanes, with much less popularity in detecting SAR vehicles. The complexities of SAR imaging make it difficult to distinguish small vehicles from the background clutter, creating a barrier to data interpretation and the development of Automatic Target Recognition (ATR) in SAR vehicles. The scarcity of datasets has inhibited progress in SAR vehicle detection in the data-driven era. To address this, we introduce a new synthetic dataset called Mix MSTAR, which mixes target chips and clutter backgrounds with original radar data at the pixel level. Mix MSTAR contains 5392 objects of 20 fine-grained categories in 100 high-resolution images, predominantly 1478 × 1784 pixels. The dataset includes various landscapes such as woods, grasslands, urban buildings, lakes, and tightly arranged vehicles, each labeled with an Oriented Bounding Box (OBB). Notably, Mix MSTAR presents fine-grained object detection challenges by using the Extended Operating Condition (EOC) as a basis for dividing the dataset. Furthermore, we evaluate nine benchmark rotated detectors on Mix MSTAR and demonstrate the fidelity and effectiveness of the synthetic dataset. To the best of our knowledge, Mix MSTAR represents the first public multi-class SAR vehicle dataset designed for rotated object detection in large-scale scenes with complex backgrounds.
(This article belongs to the Special Issue Exploitation of SAR Data Using Deep Learning Approaches)
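The core pixel-level mixing idea (compositing target chips onto clutter backgrounds) can be sketched as follows; the array names, sizes, and the simple paste operation are illustrative assumptions, and the real Mix MSTAR construction uses original radar data and oriented (not axis-aligned) boxes.

```python
import numpy as np

def paste_chip(background, chip, top, left):
    """Composite a target chip into a clutter background at pixel level
    (illustrative stand-in for the Mix MSTAR synthesis pipeline)."""
    out = background.copy()
    h, w = chip.shape
    out[top:top + h, left:left + w] = chip
    return out, (left, top, w, h)  # image + axis-aligned box annotation

background = np.zeros((128, 128), dtype=np.float32)  # stand-in clutter scene
chip = np.ones((16, 16), dtype=np.float32)           # stand-in target chip
scene, box = paste_chip(background, chip, 40, 60)
print(box)  # (60, 40, 16, 16)
```

Repeating such pastes with many chips per scene is what yields the 5392 annotated objects across 100 large images.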

19 pages, 5483 KiB  
Article
A Deep-Learning-Facilitated, Detection-First Strategy for Operationally Monitoring Localized Deformation with Large-Scale InSAR
by Teng Wang, Qi Zhang and Zhipeng Wu
Remote Sens. 2023, 15(9), 2310; https://doi.org/10.3390/rs15092310 - 27 Apr 2023
Cited by 2 | Viewed by 2328
Abstract
SAR interferometry (InSAR) has emerged in the big-data era, particularly benefitting from the acquisition capability and open-data policy of ESA's Sentinel-1 SAR mission. A large number of Sentinel-1 SAR images have been acquired and archived, allowing for the generation of thousands of interferograms covering millions of square kilometers. In such a large-scale interferometry scenario, many applications aim at monitoring localized deformation sparsely distributed across the interferogram, so it is not effective to apply time-series InSAR analysis to the whole image and identify the deformed targets from the derived velocity map. Here, we present a strategy facilitated by deep learning networks to first detect the localized deformation and then carry out the time-series analysis on small interferogram patches with deformation signals. Specifically, we report follow-up studies of our proposed deep learning networks for masking decorrelation areas, detecting local deformation, and unwrapping high-gradient phases. In applications to mining-induced subsidence monitoring and slow-moving landslide detection, the presented strategy not only reduces the computation time but also avoids the influence of large-scale tropospheric delays and unwrapping errors. The presented detection-first strategy introduces deep learning to the time-series InSAR processing chain and makes operationally monitoring localized deformation feasible and efficient for large-scale InSAR.
(This article belongs to the Special Issue Exploitation of SAR Data Using Deep Learning Approaches)
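The detection-first strategy (flag small patches that contain deformation, then run time-series analysis only on those) can be sketched as below; the patch size, threshold, and the `looks_deformed` stub are illustrative placeholders, where the paper would use its trained detection network instead.

```python
import numpy as np

def iter_patches(image, size):
    """Yield (row, col, patch) tiles of a large interferogram."""
    rows, cols = image.shape
    for r in range(0, rows - size + 1, size):
        for c in range(0, cols - size + 1, size):
            yield r, c, image[r:r + size, c:c + size]

def looks_deformed(patch, threshold=0.5):
    """Stub detector: a trained CNN would score the patch instead."""
    return float(np.abs(patch).mean()) > threshold

# Synthetic "interferogram": mostly flat, one localized signal.
scene = np.zeros((256, 256))
scene[96:128, 96:128] = 1.0

flagged = [(r, c) for r, c, p in iter_patches(scene, 32) if looks_deformed(p)]
# Time-series InSAR analysis then runs only on the flagged patches.
print(flagged)  # [(96, 96)]
```

Processing only the flagged tiles is what cuts computation and sidesteps large-scale tropospheric delays and unwrapping errors affecting the full scene.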

25 pages, 7500 KiB  
Article
Unsupervised SAR Image Change Detection Based on Histogram Fitting Error Minimization and Convolutional Neural Network
by Kaiyu Zhang, Xiaolei Lv, Bin Guo and Huiming Chai
Remote Sens. 2023, 15(2), 470; https://doi.org/10.3390/rs15020470 - 13 Jan 2023
Cited by 1 | Viewed by 2072
Abstract
Synthetic aperture radar (SAR) image change detection is one of the most important applications in remote sensing. Before performing change detection, the original SAR image is often cropped to extract the region of interest (ROI); however, the size of the ROI often affects the change detection results, so it is necessary to detect changes using local information. This paper proposes a novel unsupervised change detection framework based on deep learning. First, histogram fitting error minimization (HFEM) is used to threshold a difference image (DI); the DI is then fed into a convolutional neural network (CNN), so the proposed method is called HFEM-CNN. We test three different CNN architectures for the framework: Unet, PSPNet, and a designed fully convolutional neural network (FCNN). The overall loss function is a weighted average of a pixel loss and a neighborhood loss, with the weight between them determined by the manually set parameter λ. Compared to other recently proposed methods, HFEM-CNN does not need a fragment-removal procedure as post-processing. This paper conducts experiments on water and building change detection using three datasets, divided into two parts: whole-data experiments and randomly cropped data experiments. The whole-data experiments show that the performance of the proposed method is close to that of other methods on complete datasets, and slightly better than that of traditional methods. The randomly cropped data experiments perform local change detection using patches cropped from the whole datasets; there, the average kappa coefficient of our method on 63 patches is over 3.16% higher than that of other methods. Experiments also show that the proposed method is suitable for local change detection and robust to randomness and the choice of hyperparameters.
(This article belongs to the Special Issue Exploitation of SAR Data Using Deep Learning Approaches)
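The weighted loss described in the abstract (a manually set λ balancing a per-pixel term against a neighborhood term) might look roughly like this; the concrete loss definitions below are illustrative stand-ins, not the paper's actual formulas.

```python
import numpy as np

def pixel_loss(pred, target):
    """Per-pixel disagreement (illustrative: mean squared error)."""
    return float(np.mean((pred - target) ** 2))

def neighborhood_loss(pred):
    """Penalize label changes between horizontal/vertical neighbors,
    encouraging spatially smooth change maps (illustrative)."""
    dh = np.mean((pred[:, 1:] - pred[:, :-1]) ** 2)
    dv = np.mean((pred[1:, :] - pred[:-1, :]) ** 2)
    return float(dh + dv)

def total_loss(pred, target, lam=0.5):
    """Weighted average of the two terms; `lam` plays the role of the
    manually set lambda in the HFEM-CNN overall loss."""
    return (1 - lam) * pixel_loss(pred, target) + lam * neighborhood_loss(pred)

pred = np.array([[0.0, 1.0], [0.0, 1.0]])
target = np.array([[0.0, 1.0], [0.0, 1.0]])
print(total_loss(pred, target, lam=0.5))  # pixel term is 0; only smoothness remains
```

Raising λ favors spatially coherent change maps, which is one way a method can avoid a separate fragment-removal post-processing step.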
