Synthetic Aperture Radar (SAR) Meets Deep Learning

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (15 July 2022) | Viewed by 52423

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: computer vision; neural networks; object detection/classification/segmentation; remote sensing processing; synthetic aperture radar; millimeter wave radar technology

Guest Editor
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
Interests: computational imaging; inverse imaging problems; image reconstruction; deep learning; neuroimaging; computer vision in remote sensing

Guest Editor
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: interferometric synthetic aperture radar (InSAR); InSAR remote sensing; remote sensing processing; machine learning and deep learning; detection and classification using SAR images

Special Issue Information

Dear Colleagues,

Synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite (Seasat, in 1978), SAR has received much attention in the remote sensing community for applications such as geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications.

In recent years, deep learning, represented by the well-known convolutional neural networks (CNNs), has driven huge progress in the computer vision community, e.g., face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. Today, scholars are realizing the potential value of deep learning in remote sensing, and many remote sensing techniques now involve deep learning, e.g., target and oil spill detection, traffic surveillance, topographic mapping, AI-based SAR imaging algorithm updating, coastline surveillance, and marine fisheries management.

Interestingly, when SAR meets deep learning, how to use this advanced technology correctly needs to be considered carefully, as does how to draw the best performance out of this “black-box” model. Notably, deep learning uncritically abandons traditional hand-crafted features and relies excessively on the abstract features of deep networks. Is this reasonable? Can the abstract features of deep networks fully represent real SAR data? Should traditional hand-crafted features, backed by mature theories and elaborate techniques, be abandoned completely? These questions are worth pondering when applying deep learning techniques in the SAR remote sensing community. In general, deep learning methods are designed for natural optical images, whose imaging mechanisms differ greatly from SAR's.

When SAR meets deep learning, should SAR accommodate itself to deep learning, or should deep learning accommodate itself to SAR? The relationship between the two needs further exploration and research. Furthermore, is deep learning really suitable for SAR? The number of available SAR samples is far smaller than that of natural optical images; in this case, can we ensure that deep networks learn SAR mechanisms deeply?

This Special Issue provides a platform for researchers to tackle the above significant challenges and present their innovative and cutting-edge research results on applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports. Potential topics include, but are not limited to, the following:

  • Object detection and classification;
  • Ocean remote sensing;
  • Terrain classification;
  • Data analytics in the SAR remote sensing community;
  • Intelligent SAR agriculture monitoring;
  • Interferometric SAR technology;
  • SAR image intelligent processing;
  • AI-based SAR imaging algorithm updating;
  • SAR forest applications;
  • Earth observation;
  • Marine pollution.

We are looking forward to receiving your contribution to this Special Issue entitled “Synthetic Aperture Radar (SAR) Meets Deep Learning”.

Dr. Tianwen Zhang
Dr. Tianjiao Zeng
Prof. Dr. Xiaoling Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • synthetic aperture radar (SAR)
  • deep learning
  • convolutional neural networks
  • computer vision
  • detection and classification
  • marine pollution

Published Papers (15 papers)


Editorial


4 pages, 179 KiB  
Editorial
Synthetic Aperture Radar (SAR) Meets Deep Learning
by Tianwen Zhang, Tianjiao Zeng and Xiaoling Zhang
Remote Sens. 2023, 15(2), 303; https://doi.org/10.3390/rs15020303 - 04 Jan 2023
Cited by 12 | Viewed by 3354
Abstract
Synthetic aperture radar (SAR) is an important active microwave imaging sensor [...] Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)

Research


27 pages, 6476 KiB  
Article
Deep Learning Approach for Object Classification on Raw and Reconstructed GBSAR Data
by Marin Kačan, Filip Turčinović, Dario Bojanjac and Marko Bosiljevac
Remote Sens. 2022, 14(22), 5673; https://doi.org/10.3390/rs14225673 - 10 Nov 2022
Cited by 7 | Viewed by 1896
Abstract
The availability of low-cost microwave components today enables the development of various high-frequency sensors and radars, including Ground-based Synthetic Aperture Radar (GBSAR) systems. Similar to optical images, radar images generated by applying a reconstruction algorithm to raw GBSAR data can also be used in object classification. The reconstruction algorithm provides an interpretable representation of the observed scene, but it may also compromise the integrity of the raw data due to the approximations it applies. In order to quantify this effect, we compare the results of a conventional computer vision architecture, ResNet18, trained on reconstructed images versus one trained on raw data. In this process, we focus on the task of multi-label classification and describe the crucial architectural modifications that are necessary to process raw data successfully. The experiments are performed on RealSAR, a novel multi-object dataset obtained using a newly developed 24 GHz GBSAR system, where the radar images in the dataset are reconstructed by applying the Omega-k algorithm to the raw data. Experimental results show that the model trained on raw data consistently outperforms the image-based model. We provide a thorough analysis of both approaches across hyperparameters related to model pretraining and the size of the training dataset. In conclusion, this shows that processing raw data provides better overall classification accuracy and is inherently faster, since there is no need for image reconstruction, making it a useful tool in industrial GBSAR applications where processing speed is critical. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
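
As a hedged illustration of the kind of architectural modification this abstract alludes to, the sketch below adapts an off-the-shelf ResNet18 so that its first convolution accepts two-channel raw radar input (real and imaginary parts) with a multi-label head. The channel layout, input shape, and num_labels value are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch), assuming raw GBSAR samples are stacked as two
# channels (real/imaginary); NOT the authors' code, only the generic
# ResNet18 surgery such a setup requires.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def raw_gbsar_resnet18(num_labels: int = 5) -> nn.Module:  # num_labels: hypothetical
    model = resnet18(weights=None)  # trained from scratch on radar data
    # rebuild the stem for 2-channel complex-valued input instead of 3-channel RGB
    model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_labels)  # multi-label logits
    return model

raw = torch.randn(4, 2, 128, 128)  # [batch, re/im, range, azimuth] (illustrative shape)
probs = torch.sigmoid(raw_gbsar_resnet18()(raw))  # independent per-label probabilities
```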

37 pages, 40019 KiB  
Article
Triangle Distance IoU Loss, Attention-Weighted Feature Pyramid Network, and Rotated-SARShip Dataset for Arbitrary-Oriented SAR Ship Detection
by Zhijing Xu, Rui Gao, Kan Huang and Qihui Xu
Remote Sens. 2022, 14(18), 4676; https://doi.org/10.3390/rs14184676 - 19 Sep 2022
Cited by 11 | Viewed by 2779
Abstract
In synthetic aperture radar (SAR) images, ship targets are characterized by varying scales, large aspect ratios, dense arrangements, and arbitrary orientations. Current horizontal and rotation detectors fail to accurately recognize and locate ships due to the limitations of loss function, network structure, and training data. To overcome these challenges, we propose a unified framework combining triangle distance IoU loss (TDIoU loss), an attention-weighted feature pyramid network (AW-FPN), and a Rotated-SARShip dataset (RSSD) for arbitrary-oriented SAR ship detection. First, we propose a TDIoU loss as an effective solution to the loss-metric inconsistency and boundary discontinuity in rotated bounding box regression. Unlike recently released approximate rotational IoU losses, we derive a differentiable rotational IoU algorithm to enable back-propagation of the IoU loss layer, and we design a novel penalty term based on triangle distance to generate a more precise bounding box while accelerating convergence. Secondly, considering the shortage of feature fusion networks in connection pathways and fusion methods, AW-FPN combines multiple skip-scale connections and an attention-weighted feature fusion (AWF) mechanism, enabling high-quality semantic interactions and soft feature selections between features of different resolutions and scales. Finally, to address the limitations of existing SAR ship datasets, such as insufficient samples, small image sizes, and improper annotations, we construct a challenging RSSD to facilitate research on rotated ship detection in complex SAR scenes. As a plug-and-play scheme, our TDIoU loss and AW-FPN can be easily embedded into existing rotation detectors with stable performance improvements. Experiments show that our approach achieves 89.18% and 95.16% AP on two SAR image datasets, RSSD and SSDD, respectively, and 90.71% AP on the aerial image dataset HRSC2016, significantly outperforming state-of-the-art methods. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
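
The paper's TDIoU couples a differentiable rotated-IoU term with a triangle-distance penalty. As a hedged sketch of that "IoU loss plus normalized distance penalty" pattern, the snippet below implements the axis-aligned DIoU loss; the actual rotated-box geometry and triangle-distance term in TDIoU are more involved and not reproduced here.

```python
# Axis-aligned DIoU loss (Zheng et al., 2020) as a stand-in illustration of
# the "IoU + distance penalty" family that TDIoU belongs to.
import torch

def diou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: [N, 4] boxes as (x1, y1, x2, y2)."""
    # intersection area
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-7)
    # squared distance between box centers: the penalty term
    c_p = (pred[:, :2] + pred[:, 2:]) / 2
    c_t = (target[:, :2] + target[:, 2:]) / 2
    center_dist = ((c_p - c_t) ** 2).sum(dim=1)
    # diagonal of the smallest enclosing box normalizes the penalty
    enc_lt = torch.min(pred[:, :2], target[:, :2])
    enc_rb = torch.max(pred[:, 2:], target[:, 2:])
    diag = ((enc_rb - enc_lt) ** 2).sum(dim=1) + 1e-7
    return (1 - iou + center_dist / diag).mean()
```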

26 pages, 9687 KiB  
Article
A Lightweight Self-Supervised Representation Learning Algorithm for Scene Classification in Spaceborne SAR and Optical Images
by Xiao Xiao, Changjian Li and Yinjie Lei
Remote Sens. 2022, 14(13), 2956; https://doi.org/10.3390/rs14132956 - 21 Jun 2022
Cited by 4 | Viewed by 2014
Abstract
Despite the increasing amount of spaceborne synthetic aperture radar (SAR) and optical images, only a few annotated data can be used directly for scene classification tasks based on convolutional neural networks (CNNs). In this situation, self-supervised learning methods can improve scene classification accuracy by learning representations from extensive unlabeled data. However, existing self-supervised scene classification algorithms are hard to deploy on satellites due to their high computation consumption. To address this challenge, we propose a simple yet effective self-supervised representation learning (Lite-SRL) algorithm for the scene classification task. First, we design a lightweight contrastive learning structure for Lite-SRL: a stochastic augmentation strategy produces augmented views from unlabeled spaceborne images, and Lite-SRL maximizes the similarity of the augmented views to learn valuable representations. Then, we adopt the stop-gradient operation so that Lite-SRL's training process does not rely on large queues or negative samples, which reduces computation consumption. Furthermore, in order to deploy Lite-SRL on low-power on-board computing platforms, we propose a distributed hybrid parallelism (DHP) framework and a computation workload balancing (CWB) module for Lite-SRL. Experiments on representative datasets including OpenSARUrban, WHU-SAR6, NWPU-Resisc45, and AID demonstrate that Lite-SRL can improve scene classification accuracy under limited annotated data and that it generalizes to both SAR and optical images. Meanwhile, compared with six state-of-the-art self-supervised algorithms, Lite-SRL has clear advantages in overall accuracy, number of parameters, memory consumption, and training latency. Finally, to evaluate the proposed work's on-board operational capability, we transplant Lite-SRL to the low-power computing platform NVIDIA Jetson TX2. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
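
Lite-SRL avoids large queues and negative samples via the stop-gradient operation. A minimal sketch of that mechanism, in the SimSiam style it echoes, is given below; the backbone, embedding size, and predictor shape are placeholders, not the paper's architecture.

```python
# Hedged sketch of stop-gradient contrastive training (SimSiam-style):
# two augmented views, a predictor on one branch, .detach() on the other.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StopGradSSL(nn.Module):
    def __init__(self, backbone: nn.Module, dim: int = 256):
        super().__init__()
        self.backbone = backbone  # any lightweight encoder mapping images -> [N, dim]
        self.predictor = nn.Sequential(
            nn.Linear(dim, dim // 4), nn.ReLU(inplace=True), nn.Linear(dim // 4, dim)
        )

    def forward(self, view1: torch.Tensor, view2: torch.Tensor) -> torch.Tensor:
        z1, z2 = self.backbone(view1), self.backbone(view2)
        p1, p2 = self.predictor(z1), self.predictor(z2)
        # negative cosine similarity; stop-gradient on the target branch means
        # no memory queue or negative pairs are needed
        return -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                 + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2
```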

25 pages, 13742 KiB  
Article
A Low-Grade Road Extraction Method Using SDG-DenseNet Based on the Fusion of Optical and SAR Images at Decision Level
by Jinglin Zhang, Yuxia Li, Yu Si, Bo Peng, Fanghong Xiao, Shiyu Luo and Lei He
Remote Sens. 2022, 14(12), 2870; https://doi.org/10.3390/rs14122870 - 15 Jun 2022
Cited by 7 | Viewed by 1971
Abstract
Low-grade roads have complex features such as geometry, reflection spectrum, and spatial topology in remotely sensed optical images, due to the different materials of those roads and because they are easily obscured by vegetation or buildings, which leads to low accuracy in low-grade road extraction from remote sensing images. To address this problem, this paper proposes a novel deep learning network referred to as SDG-DenseNet, as well as a decision-level fusion method for optical and Synthetic Aperture Radar (SAR) data, to extract low-grade roads. On one hand, in order to enlarge the receptive field and aggregate multi-scale features in commonly used deep learning networks, we develop SDG-DenseNet in terms of three modules: stem block, D-Dense block, and GIRM module, in which the stem block applies two consecutive small-sized convolution kernels instead of one large-sized convolution kernel, the D-Dense block applies three consecutive dilated convolutions after the initial Dense block, and the Global Information Recovery Module (GIRM) combines the ideas of dilated convolution and the attention mechanism. On the other hand, considering the penetrating capacity and oblique observation of SAR, which can obtain information on low-grade roads obscured by vegetation or buildings in optical images, we integrate the road extraction result from SAR images with that from optical images at the decision level to enhance extraction accuracy. The experimental results show that the proposed SDG-DenseNet attains higher IoU and F1 scores than other network models applied to low-grade road extraction from optical images. Furthermore, they verify that the decision-level fusion of road binary maps from SAR and optical images can further significantly improve the F1, COR, and COM scores. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
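
A hedged sketch of the decision-level fusion idea: each sensor's binary road map is produced independently, then the per-pixel decisions are combined. The union rule below is one simple instantiation (it recovers roads occluded in the optical image); the paper's actual fusion rule may differ.

```python
# Decision-level fusion of co-registered binary road maps (illustrative).
import numpy as np

def fuse_decisions(road_opt: np.ndarray, road_sar: np.ndarray) -> np.ndarray:
    """road_opt, road_sar: {0,1} masks of identical shape, already co-registered."""
    # union: a pixel is road if either sensor says so, letting SAR fill in
    # roads hidden by vegetation or buildings in the optical image
    return np.logical_or(road_opt.astype(bool), road_sar.astype(bool)).astype(np.uint8)
```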

19 pages, 5584 KiB  
Article
A Lightweight Position-Enhanced Anchor-Free Algorithm for SAR Ship Detection
by Yun Feng, Jie Chen, Zhixiang Huang, Huiyao Wan, Runfan Xia, Bocai Wu, Long Sun and Mengdao Xing
Remote Sens. 2022, 14(8), 1908; https://doi.org/10.3390/rs14081908 - 15 Apr 2022
Cited by 26 | Viewed by 3291
Abstract
As an active microwave device, synthetic aperture radar (SAR) uses the backscatter of objects for imaging. Ship targets in SAR images are characterized by unclear contour information, complex backgrounds, and strong scattering. Existing deep learning detection algorithms derived from anchor-based methods mostly rely on expert experience to set a series of hyperparameters, and it is difficult for them to characterize the unique features of SAR ship targets, which greatly limits detection accuracy and speed. Therefore, this paper proposes a new lightweight position-enhanced anchor-free SAR ship detection algorithm called LPEDet. First, to resolve unclear SAR target contours and the multiscale performance problem, we used YOLOX as the benchmark framework and redesigned a lightweight multiscale backbone, called NLCNet, which balances detection speed and accuracy. Second, for the strong scattering characteristics of SAR targets, we designed a new position-enhanced attention strategy, which suppresses background clutter by adding position information to the channel attention that highlights the target information, so as to more accurately identify and locate targets. The experimental results for two large-scale SAR target detection datasets, SSDD and HRSID, show that our method achieves higher detection accuracy and faster detection speed than state-of-the-art SAR target detection methods. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
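
A hedged sketch of "adding position information to channel attention": pooling along each spatial axis separately retains coordinate information that plain global pooling (as in SE blocks) discards. This follows the published coordinate-attention pattern (Hou et al., 2021) and is only an approximation of the paper's strategy; channel and reduction sizes are illustrative.

```python
# Coordinate-aware channel attention sketch (PyTorch), illustrative only.
import torch
import torch.nn as nn

class PositionEnhancedAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.shared = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True)
        )
        self.attn_h = nn.Conv2d(mid, channels, 1)
        self.attn_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # direction-aware pooling keeps one spatial coordinate each
        pool_h = x.mean(dim=3, keepdim=True)                  # [n, c, h, 1]
        pool_w = x.mean(dim=2, keepdim=True).transpose(2, 3)  # [n, c, w, 1]
        y = self.shared(torch.cat([pool_h, pool_w], dim=2))   # [n, mid, h+w, 1]
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.attn_h(y_h))                 # [n, c, h, 1]
        a_w = torch.sigmoid(self.attn_w(y_w.transpose(2, 3))) # [n, c, 1, w]
        return x * a_h * a_w  # position-weighted channel reweighting
```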

27 pages, 9659 KiB  
Article
CRTransSar: A Visual Transformer Based on Contextual Joint Representation Learning for SAR Ship Detection
by Runfan Xia, Jie Chen, Zhixiang Huang, Huiyao Wan, Bocai Wu, Long Sun, Baidong Yao, Haibing Xiang and Mengdao Xing
Remote Sens. 2022, 14(6), 1488; https://doi.org/10.3390/rs14061488 - 19 Mar 2022
Cited by 64 | Viewed by 6387
Abstract
Synthetic aperture radar (SAR) image target detection is widely used in military, civilian, and other fields. However, existing detection methods have low accuracy due to the limitations presented by the strong scattering of SAR image targets, unclear edge contour information, multiple scales, strong sparseness, background interference, and other characteristics. In response, for SAR target detection tasks, this paper combines the global contextual information perception of transformers with the local feature representation capabilities of convolutional neural networks (CNNs) to propose a visual transformer framework based on contextual joint-representation learning, referred to as CRTransSar. First, this paper introduces the latest Swin Transformer as the basic architecture. Next, it incorporates the CNN's local information capture and presents the design of a backbone, called CRbackbone, based on contextual joint representation learning, to extract richer contextual feature information while strengthening SAR target feature attributes. Furthermore, a new cross-resolution attention-enhancement neck, called CAENeck, is designed to enhance the characterizability of multiscale SAR targets. Our method attains 97.0% mAP on the SSDD dataset, reaching state-of-the-art levels. In addition, based on the HISEA-1 commercial SAR satellite, which has been launched into orbit and in whose development our research group participated, we release a larger-scale SAR multiclass target detection dataset, called SMCDD, which verifies the effectiveness of our method. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
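
As a hedged sketch of the CNN-plus-transformer pairing described above, the generic hybrid block below applies a convolution for local scattering structure and then self-attention over the flattened feature map for global context. It is far simpler than the actual CRbackbone/CAENeck design; the channel and head counts are illustrative.

```python
# Generic local-convolution + global-attention block (illustrative).
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    def __init__(self, channels: int = 96, heads: int = 4):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)  # local features
        self.global_attn = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)      # global context

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x = torch.relu(self.local(x))
        tokens = x.flatten(2).transpose(1, 2)   # [n, h*w, c] sequence of pixels
        tokens = self.global_attn(tokens)       # self-attention across the whole map
        return tokens.transpose(1, 2).reshape(n, c, h, w)
```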

25 pages, 10939 KiB  
Article
A Transformer-Based Coarse-to-Fine Wide-Swath SAR Image Registration Method under Weak Texture Conditions
by Yibo Fan, Feng Wang and Haipeng Wang
Remote Sens. 2022, 14(5), 1175; https://doi.org/10.3390/rs14051175 - 27 Feb 2022
Cited by 16 | Viewed by 3036
Abstract
As an all-weather and all-day remote sensing image data source, SAR (Synthetic Aperture Radar) images have been widely applied, and their registration accuracy has a direct impact on downstream task effectiveness. Existing registration algorithms mainly focus on small sub-images, and accurate matching methods for large-size images are lacking. This paper proposes a high-precision, rapid, large-size SAR image dense-matching method. The method comprises four steps: down-sampled image pre-registration, sub-image acquisition, dense matching, and transformation solution. First, the ORB (Oriented FAST and Rotated BRIEF) operator and the GMS (Grid-based Motion Statistics) method are combined to perform rough matching on the semantically rich down-sampled image. According to the feature point pairs, a group of clustering centers and corresponding sub-images is obtained. Subsequently, a Transformer-based deep learning method is used to register images under weak texture conditions. Finally, the global transformation relationship is obtained through RANSAC (Random Sample Consensus). Compared with SOTA algorithms, our method increases the number of correct matching points by more than 2.47 times and reduces the root mean squared error (RMSE) by more than 4.16%. The experimental results demonstrate that our proposed method is efficient and accurate, providing a new idea for SAR image registration. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
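
A hedged sketch of the coarse stage with OpenCV: ORB keypoints matched between down-sampled SAR images and a global transform estimated with RANSAC. The GMS filtering and the Transformer-based dense matching stages are omitted; inputs are assumed to be 8-bit grayscale arrays with enough texture for at least four matches.

```python
# Coarse ORB + RANSAC registration sketch (OpenCV), illustrative only.
import cv2
import numpy as np

def coarse_register(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create(nfeatures=5000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # robust global transform via RANSAC, as in the pipeline's final step
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return H
```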

26 pages, 4990 KiB  
Article
TCD-Net: A Novel Deep Learning Framework for Fully Polarimetric Change Detection Using Transfer Learning
by Rezvan Habibollahi, Seyd Teymoor Seydi, Mahdi Hasanlou and Masoud Mahdianpari
Remote Sens. 2022, 14(3), 438; https://doi.org/10.3390/rs14030438 - 18 Jan 2022
Cited by 11 | Viewed by 2739
Abstract
Due to anthropogenic and natural activities, the land surface continuously changes over time. The accurate and timely detection of these changes is greatly important for environmental monitoring, resource management, and planning activities. In this study, a novel deep learning-based change detection algorithm is proposed for bi-temporal polarimetric synthetic aperture radar (PolSAR) imagery using a transfer learning (TL) method. In particular, this method is designed to automatically extract changes by applying three main steps: (1) pre-processing, (2) parallel pseudo-label training sample generation based on a pre-trained model and the fuzzy c-means (FCM) clustering algorithm, and (3) classification. Moreover, a new end-to-end three-channel deep neural network, called TCD-Net, is introduced in this study. TCD-Net can learn stronger and more abstract representations of the spatial information of a given pixel. In addition, by adding an adaptive multi-scale shallow block and an adaptive multi-scale residual block to the TCD-Net architecture, the model, with far fewer parameters, is sensitive to objects of various sizes. Experimental results on two Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) bi-temporal datasets demonstrate the effectiveness of the proposed algorithm compared to other well-known methods, with an overall accuracy of 96.71% and a kappa coefficient of 0.82. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
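
A hedged sketch of pseudo-label generation in the spirit of step (2): a two-cluster fuzzy c-means run on a per-pixel change magnitude, keeping only high-membership pixels as confident changed/unchanged training samples. The membership threshold, initialization, and use of a scalar change magnitude are illustrative assumptions, not the paper's exact procedure.

```python
# Two-cluster FCM on a change-magnitude image for pseudo-labeling (illustrative).
import numpy as np

def fcm_pseudo_labels(diff: np.ndarray, m: float = 2.0, iters: int = 50,
                      conf: float = 0.9) -> np.ndarray:
    """diff: [H, W] per-pixel change magnitude; returns {-1, 0, 1} label map."""
    x = diff.reshape(-1, 1).astype(np.float64)
    centers = np.array([x.min(), x.max()])       # init: unchanged / changed
    for _ in range(iters):
        d = np.abs(x - centers) + 1e-9           # [N, 2] distances to centers
        u = 1.0 / (d ** (2.0 / (m - 1.0)))       # fuzzy memberships (unnormalized)
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x).sum(axis=0) / (u ** m).sum(axis=0)
    labels = np.full(x.shape[0], -1)             # -1 = left unlabeled
    labels[u[:, 0] > conf] = 0                   # confident unchanged
    labels[u[:, 1] > conf] = 1                   # confident changed
    return labels.reshape(diff.shape)
```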

23 pages, 13325 KiB  
Article
A Robust InSAR Phase Unwrapping Method via Phase Gradient Estimation Network
by Liming Pu, Xiaoling Zhang, Zenan Zhou, Liang Li, Liming Zhou, Jun Shi and Shunjun Wei
Remote Sens. 2021, 13(22), 4564; https://doi.org/10.3390/rs13224564 - 13 Nov 2021
Cited by 15 | Viewed by 2835
Abstract
Phase unwrapping is a critical step in synthetic aperture radar interferometry (InSAR) data processing chains. In almost all phase unwrapping methods, estimating the phase gradient according to the phase continuity assumption (PGE-PCA) is an essential step. However, the phase continuity assumption is not always satisfied due to the presence of noise and abrupt terrain changes; therefore, it is difficult to obtain the correct phase gradient. In this paper, we propose a robust least squares phase unwrapping method for InSAR that works via a phase gradient estimation network based on the encoder–decoder architecture (PGENet). In this method, from a large number of wrapped phase images with topography features and different levels of noise, the deep convolutional neural network learns global phase features and the phase gradient between adjacent pixels, so a more accurate and robust phase gradient can be predicted than that obtained by PGE-PCA. To obtain the phase unwrapping result, we use the traditional least squares solver to minimize the difference between the gradient obtained by PGENet and the gradient of the unwrapped phase. Experiments on simulated and real InSAR data demonstrate that the proposed method outperforms five well-established phase unwrapping methods and is robust to noise. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
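
Given an estimated gradient field (produced by PGENet in the paper; here any arrays gx, gy), the least squares step has the classical closed-form DCT solution of Ghiglia and Romero. A minimal sketch, assuming Neumann boundaries and forward-difference gradients:

```python
# Unweighted least squares phase unwrapping via the DCT Poisson solver.
import numpy as np
from scipy.fft import dctn, idctn

def lsq_unwrap(gx: np.ndarray, gy: np.ndarray) -> np.ndarray:
    """gx[i,j] ~ phi[i,j+1]-phi[i,j]; gy[i,j] ~ phi[i+1,j]-phi[i,j]; shape [H, W]."""
    H, W = gx.shape
    dx = np.zeros((H, W)); dx[:, :-1] = gx[:, :-1]  # zero flux across the border
    dy = np.zeros((H, W)); dy[:-1, :] = gy[:-1, :]
    rho = dx.copy(); rho[:, 1:] -= dx[:, :-1]       # divergence of the gradient field
    rho_y = dy.copy(); rho_y[1:, :] -= dy[:-1, :]
    rho += rho_y
    # solve the discrete Poisson equation with Neumann boundaries in DCT space
    rho_hat = dctn(rho, type=2, norm='ortho')
    i = np.arange(H)[:, None]
    j = np.arange(W)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / H) + np.cos(np.pi * j / W) - 2.0)
    denom[0, 0] = 1.0                # DC term is unconstrained
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0              # fix the free constant phase offset
    return idctn(phi_hat, type=2, norm='ortho')
```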

26 pages, 12098 KiB  
Article
A Novel Guided Anchor Siamese Network for Arbitrary Target-of-Interest Tracking in Video-SAR
by Jinyu Bao, Xiaoling Zhang, Tianwen Zhang, Jun Shi and Shunjun Wei
Remote Sens. 2021, 13(22), 4504; https://doi.org/10.3390/rs13224504 - 09 Nov 2021
Cited by 9 | Viewed by 1913
Abstract
Video synthetic aperture radar (Video-SAR) allows continuous and intuitive observation and is widely used for radar moving target tracking. The shadow of a moving target has the characteristics of stable scattering and no location shift, making moving target tracking using shadows a hot topic. However, existing techniques mainly rely on the appearance of targets, which is impractical and costly, especially for tracking targets of interest (TOIs) with high diversity and arbitrariness. To solve this problem, we propose a novel guided anchor Siamese network (GASN) dedicated to arbitrary TOI tracking in Video-SAR. First, GASN searches subsequent frames for areas matching the initial TOI area in the first frame, returning the most similar area using a matching function that is learned through general training without TOI-related data. With the learned matching function, GASN can be used to track arbitrary TOIs. Moreover, we also construct a guided anchor subnetwork, referred to as GA-SubNet, which employs the prior information of the first frame and generates sparse anchors of the same shape as the TOI. The number of unnecessary anchors is thereby reduced to suppress false alarms. Our method was evaluated on simulated and real Video-SAR data. The experimental results demonstrate that GASN outperforms state-of-the-art methods, including two traditional tracking methods (MOSSE and KCF) and two modern deep learning techniques (Siamese-FC and Siamese-RPN). We also conducted an ablation experiment to demonstrate the effectiveness of GA-SubNet. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
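
A hedged sketch of the Siamese matching at the core of such trackers: the first-frame TOI embedding acts as a correlation kernel slid over the search-region embedding of each later frame. This is the Siamese-FC formulation the paper compares against; GASN's guided-anchor subnetwork is not reproduced here, and the feature shapes are illustrative.

```python
# Cross-correlation matching of a template embedding over a search embedding.
import torch
import torch.nn.functional as F

def match_response(tmpl: torch.Tensor, search: torch.Tensor) -> torch.Tensor:
    """tmpl: [1, C, h, w] first-frame TOI embedding; search: [1, C, H, W]."""
    return F.conv2d(search, tmpl)  # [1, 1, H-h+1, W-w+1] similarity map

resp = match_response(torch.randn(1, 64, 8, 8), torch.randn(1, 64, 64, 64))
iy, ix = divmod(int(resp.flatten().argmax()), resp.shape[-1])  # peak = best match
```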

22 pages, 3829 KiB  
Article
Self-Supervised Despeckling Algorithm with an Enhanced U-Net for Synthetic Aperture Radar Images
by Gang Zhang, Zhi Li, Xuewei Li and Sitong Liu
Remote Sens. 2021, 13(21), 4383; https://doi.org/10.3390/rs13214383 - 31 Oct 2021
Cited by 2 | Viewed by 2444
Abstract
Self-supervised methods have proven to be a suitable approach for despeckling synthetic aperture radar (SAR) images. However, most self-supervised despeckling methods are trained on noisy-noisy image pairs constructed from natural images with simulated speckle noise, time-series real-world SAR images, or generative adversarial networks, which limits their practicability on real-world SAR images. Therefore, in this paper, a novel self-supervised despeckling algorithm with an enhanced U-Net is proposed for real-world SAR images. Firstly, unlike previous self-supervised despeckling works, the noisy-noisy image pairs are generated from real-world SAR images through a novel training-pair generation module, which makes it possible to train deep convolutional neural networks using real-world SAR images. Secondly, an enhanced U-Net is designed to improve the feature extraction and fusion capabilities of the network. Thirdly, a self-supervised training loss function with a regularization loss is proposed to address the difference in target pixel values between neighbors in the original SAR images. Finally, visual and quantitative experiments on simulated and real-world SAR images show that the proposed algorithm notably removes speckle noise while better preserving features, exceeding several state-of-the-art despeckling methods. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
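
A hedged sketch of training on noisy-noisy pairs: the network maps one speckled view of a scene to another, so no clean target is needed (the Noise2Noise principle such methods build on). The total-variation term below is a generic stand-in for the paper's regularization loss, and net is any image-to-image network such as an enhanced U-Net.

```python
# Noisy-noisy training objective with a generic smoothness regularizer.
import torch
import torch.nn.functional as F

def despeckle_loss(net, noisy_a: torch.Tensor, noisy_b: torch.Tensor,
                   tv_weight: float = 1e-4) -> torch.Tensor:
    """noisy_a, noisy_b: [N, 1, H, W] two speckled views of the same scene."""
    pred = net(noisy_a)
    data_term = F.mse_loss(pred, noisy_b)          # match the second noisy view
    # total-variation regularizer (stand-in): penalize neighbor differences
    tv = (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean() \
       + (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean()
    return data_term + tv_weight * tv
```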

17 pages, 6279 KiB  
Article
Serial GANs: A Feature-Preserving Heterogeneous Remote Sensing Image Transformation Model
by Daning Tan, Yu Liu, Gang Li, Libo Yao, Shun Sun and You He
Remote Sens. 2021, 13(19), 3968; https://doi.org/10.3390/rs13193968 - 03 Oct 2021
Cited by 6 | Viewed by 2379
Abstract
In recent years, the interpretation of SAR images has been significantly improved by the development of deep learning technology, and using conditional generative adversarial nets (CGANs) for SAR-to-optical transformation, also known as image translation, has become popular. Most existing image translation methods based on conditional generative adversarial nets are modified from CycleGAN and pix2pix and focus on style transformation in practice. In addition, SAR images and optical images are characterized by heterogeneous features and large spectral differences, leading to problems such as incomplete image details and spectral distortion in the heterogeneous transformation of SAR images of urban or semiurban areas and complex terrain. Aiming to solve these problems of SAR-to-optical transformation, this paper proposes Serial GANs, a feature-preserving heterogeneous remote sensing image transformation model, for the first time. The model uses a Despeckling GAN and a Colorization GAN in series to complete the SAR-to-optical transformation. The Despeckling GAN transforms the SAR images into optical gray images, retaining the texture details and semantic information. The Colorization GAN transforms the optical gray images obtained in the first step into optical color images while keeping the structural features unchanged. The model proposed in this paper provides a new idea for heterogeneous image transformation: through the decoupled network design, structural detail information and spectral information remain relatively independent during the heterogeneous transformation, thereby enhancing the detail information of the generated optical images and reducing their spectral distortion. Using SEN-2 satellite images as the reference, this paper compares the degree of similarity between the images generated by different models and the reference, and the results reveal that the proposed model has obvious advantages in feature reconstruction and parameter economy. They also show that Serial GANs have great potential for decoupled image transformation. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
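
A hedged sketch of the serial decoupling at inference time: one generator despeckles SAR into a gray optical-like image, and a second colorizes it, so structural and spectral information are handled in separate stages. The generator modules themselves and the adversarial training are omitted; this only shows the composition.

```python
# Two-stage serial composition of generators (illustrative wrapper).
import torch
import torch.nn as nn

class SerialTranslator(nn.Module):
    def __init__(self, despeckle_g: nn.Module, colorize_g: nn.Module):
        super().__init__()
        self.despeckle_g = despeckle_g   # SAR  [N,1,H,W] -> gray [N,1,H,W]
        self.colorize_g = colorize_g     # gray [N,1,H,W] -> RGB  [N,3,H,W]

    def forward(self, sar: torch.Tensor):
        gray = self.despeckle_g(sar)            # stage 1: structure preserved
        return gray, self.colorize_g(gray)      # stage 2: spectral info added
```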

Review


41 pages, 4732 KiB  
Review
Deep Learning for SAR Ship Detection: Past, Present and Future
by Jianwei Li, Congan Xu, Hang Su, Long Gao and Taoyang Wang
Remote Sens. 2022, 14(11), 2712; https://doi.org/10.3390/rs14112712 - 05 Jun 2022
Cited by 52 | Viewed by 7692
Abstract
After the revival of deep learning in computer vision in 2012, SAR ship detection entered the deep learning era as well. Deep learning-based computer vision algorithms can work in an end-to-end pipeline, without the need to design features manually, and they achieve amazing performance. As a result, deep learning is also used to detect ships in SAR images. This direction began with the paper we published at BIGSARDATA 2017, in which the first dataset, SSDD, was used and shared with peers. Since then, many researchers have focused their attention on this field. In this paper, we analyze the past, present, and future of deep learning-based ship detection algorithms for SAR images. In the past section, we analyze the difference between traditional CFAR (constant false alarm rate)-based and deep learning-based detectors through theory and experiment. The traditional method is unsupervised, while deep learning is strongly supervised, and their performance differs several-fold. In the present section, we analyze the 177 published papers on SAR ship detection, highlighting the datasets, algorithms, performance, deep learning frameworks, countries, timeline, etc. After that, we introduce in detail the use of single-stage, two-stage, anchor-free, train-from-scratch, oriented-bounding-box, multi-scale, and real-time detectors in the 177 papers. The trade-offs between speed and accuracy are also analyzed. In the future section, we list the problems and directions of this field. We find that, over the past five years, the AP50 on SSDD has risen from 78.8% in 2017 to 97.8% in 2022. Additionally, we think that researchers should design algorithms according to the specific characteristics of SAR images. What we should do next is bridge the gap between SAR ship detection and computer vision by merging the small datasets into a large one and formulating corresponding standards and benchmarks. We expect that this survey of 177 papers can help people better understand these algorithms and stimulate more research in this field. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
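
For readers unfamiliar with the traditional baseline the review compares against, below is a minimal cell-averaging CFAR sketch: each pixel is thresholded against the local clutter level estimated from a surrounding training band, with no supervision involved. Window sizes and the scale factor are illustrative.

```python
# Cell-averaging CFAR detector on a SAR intensity image (illustrative).
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity: np.ndarray, guard: int = 2, train: int = 8,
            scale: float = 3.0) -> np.ndarray:
    """intensity: [H, W] SAR intensity image; returns a boolean detection mask."""
    intensity = intensity.astype(np.float64)
    outer = 2 * (guard + train) + 1
    inner = 2 * guard + 1
    # training-band mean = (outer-window sum - inner-window sum) / cell count
    sum_outer = uniform_filter(intensity, outer) * outer ** 2
    sum_inner = uniform_filter(intensity, inner) * inner ** 2
    clutter = (sum_outer - sum_inner) / (outer ** 2 - inner ** 2)
    return intensity > scale * clutter  # unsupervised per-pixel threshold
```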

Other

21 pages, 10909 KiB  
Technical Note
Data Augmentation for Building Footprint Segmentation in SAR Images: An Empirical Study
by Sandhi Wangiyana, Piotr Samczyński and Artur Gromek
Remote Sens. 2022, 14(9), 2012; https://doi.org/10.3390/rs14092012 - 22 Apr 2022
Cited by 5 | Viewed by 2542
Abstract
Building footprints provide essential information for mapping, disaster management, and other large-scale studies. Synthetic Aperture Radar (SAR) provides more consistent data availability than optical imagery owing to its unique properties, which, however, also make it more challenging to interpret. Previous studies have demonstrated the success of automated methods using Convolutional Neural Networks to detect buildings in Very High Resolution (VHR) SAR images. However, the scarcity of such publicly available datasets can limit research progress in this field. We explored the impact of several data augmentation (DA) methods on the performance of building detection on a limited dataset of SAR images. Our results show that geometric transformations are more effective than pixel transformations. The former improve the detection of objects with different scale and rotation variations. The latter create textural changes that help differentiate edges better but amplify non-object patterns, leading to increased false positive predictions. We experimented with applying DA at different stages and concluded that applying similar DA methods in training and inference showed the best performance compared with DA applied only during training. Some DA methods can alter key features of a building's representation in radar images; among them are vertical flips and quarter-circle rotations, which yielded the worst performance. DA methods should be used in moderation to prevent unwanted transformations outside the possible object variations. Error analysis, either through statistical methods or manual inspection, is recommended to understand the bias present in the dataset, which is useful in selecting suitable DA methods. The findings from this study can provide potential guidelines for future research in selecting DA methods for segmentation tasks in radar imagery. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
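
A hedged sketch contrasting the two DA families studied above, in plain NumPy: geometric transforms move structures jointly in image and mask, while pixel transforms alter texture statistics only. Parameter ranges are illustrative, and the transforms the study found harmful are deliberately omitted and noted in comments.

```python
# Geometric vs. pixel-level augmentation for image/mask pairs (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def geometric_da(img: np.ndarray, mask: np.ndarray):
    """Transforms applied identically to the image and its label mask."""
    if rng.random() < 0.5:                        # horizontal flip
        img, mask = img[:, ::-1], mask[:, ::-1]
    # vertical flips and quarter-circle rotations are omitted on purpose:
    # the study above found they break the range-direction geometry of SAR
    # building signatures and yielded the worst performance
    return img.copy(), mask.copy()

def pixel_da(img: np.ndarray) -> np.ndarray:
    """Texture-level changes (img assumed scaled to [0, 1]); these can help
    edge discrimination but may amplify non-object patterns."""
    gamma = rng.uniform(0.7, 1.3)                 # contrast jitter
    noisy = img ** gamma + rng.normal(0.0, 0.02, img.shape)
    return np.clip(noisy, 0.0, 1.0)
```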
