
Self-Supervised Learning in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 8726

Special Issue Editors

Guest Editor
School of Electronic Engineering, Xidian University, Xi’an, China
Interests: remote sensing; computer vision; machine learning; deep learning

Guest Editor
Data Science in Earth Observation, Technical University of Munich (TUM), 80333 Munich, Germany
Interests: computer vision; machine learning; remote sensing

Guest Editor
Data Science in Earth Observation, Technical University of Munich, Munich, Germany
Interests: 3D remote sensing; SAR building detection; uncertainty quantification

Special Issue Information

Dear Colleagues,

The success of supervised learning has led to significant advances in remote sensing. However, traditional supervised approaches, especially deep-learning-based methods, rely heavily on large amounts of annotated training data. With a growing number of satellites in orbit, an increasing volume of remote sensing data from diverse sensors and coverage areas is received every day. In this respect, self-supervised learning is a promising approach for remote sensing: it uses self-defined pretext tasks as supervision and transfers the learned representations to different downstream tasks. Although numerous efforts have been devoted to addressing the lack of data annotations in remote sensing, both real-world applications and theoretical research continue to demand more advanced remote sensing methods. This Special Issue aims to gather the latest work on self-supervised learning in remote sensing and to propose new theories and approaches to existing problems. Specific topics of interest include, but are not limited to, the following:

  • SAR image processing;
  • Self-supervised learning on SAR data;
  • High-resolution remote sensing image processing;
  • Transfer learning on remote sensing data;
  • Self-supervised learning for change detection;
  • Hyperspectral image processing;
  • Self-supervised learning on hyperspectral data;
  • Multispectral image processing;
  • Self-supervised model pre-training on remote sensing data;
  • Self-supervised learning for scene recognition, land use–land cover classification.
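Concretely, a pretext task derives supervision from the data itself. The sketch below is illustrative only (`make_rotation_pretext` and all other names are hypothetical, not drawn from any paper in this issue); it builds a rotation-prediction task in which the rotation applied to each unlabeled patch serves as a free label:

```python
import numpy as np

def make_rotation_pretext(patches, rng):
    """Rotation-prediction pretext task: rotate each unlabeled patch by a
    random multiple of 90 degrees and use the rotation index as the label,
    so no human annotation is required."""
    xs, ys = [], []
    for p in patches:
        k = int(rng.integers(0, 4))      # 0, 90, 180 or 270 degrees
        xs.append(np.rot90(p, k))        # transformed input
        ys.append(k)                     # self-defined supervision signal
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
patches = rng.random((8, 32, 32))        # eight unlabeled image patches
x, y = make_rotation_pretext(patches, rng)
```

A network pre-trained to predict `y` from `x` learns representations that can then be transferred to downstream tasks such as scene classification or change detection.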

Dr. Qiang Li
Dr. Zhitong Xiong
Dr. Muhammad Shahzad
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • artificial intelligence
  • deep learning
  • self-supervised learning

Published Papers (5 papers)


Research

18 pages, 2321 KiB  
Article
Simulation-Based Self-Supervised Line Extraction for LiDAR Odometry in Urban Road Scenes
by Peng Wang, Ruqin Zhou, Chenguang Dai, Hanyun Wang, Wanshou Jiang and Yongsheng Zhang
Remote Sens. 2023, 15(22), 5322; https://doi.org/10.3390/rs15225322 - 11 Nov 2023
Cited by 1 | Viewed by 902
Abstract
LiDAR odometry is a fundamental task for high-precision map construction and for real-time, accurate localization in autonomous driving. However, point clouds in urban road scenes acquired by vehicle-borne lasers are large in volume, exhibit a "near dense, far sparse" density, and contain various dynamic objects, leading to low efficiency and low accuracy in existing LiDAR odometry methods. To address these issues, a simulation-based self-supervised line extraction method for urban road scenes is proposed as a pre-processing step for LiDAR odometry, reducing both the amount of input data and the interference from dynamic objects. A simulated dataset is first constructed according to the characteristics of point clouds in urban road scenes; an EdgeConv-based network, named LO-LineNet, is then used for pre-training; finally, a model-transfer strategy is adopted to move the pre-trained model from the simulated dataset to real-world scenes without ground-truth labels. Experimental results on the KITTI Odometry Dataset and the Apollo SouthBay Dataset indicate that the proposed method can accurately extract reliable lines in urban road scenes in a self-supervised way, and that using the extracted reliable lines as input for odometry significantly improves its accuracy and efficiency in urban road scenes.
(This article belongs to the Special Issue Self-Supervised Learning in Remote Sensing)

21 pages, 64108 KiB  
Article
AA-LMM: Robust Accuracy-Aware Linear Mixture Model for Remote Sensing Image Registration
by Jian Yang, Chen Li and Xuelong Li
Remote Sens. 2023, 15(22), 5314; https://doi.org/10.3390/rs15225314 - 10 Nov 2023
Viewed by 927
Abstract
Remote sensing image registration has been widely applied in military and civilian fields, such as target recognition, visual navigation, and change detection. Dynamic changes in the sensing environment and in the sensors themselves introduce differences in the number and quality of detected feature points, which remains a common and intractable challenge for feature-based registration approaches. Under such perturbations, extracted feature points that represent the same physical location in space may have different location accuracy. Most existing matching methods focus on recovering the optimal feature correspondences while ignoring the positional diversity of individual points, which easily traps the model in a bad local extremum, especially in the presence of outliers and noise. In this paper, we present a novel accuracy-aware registration model for remote sensing. A soft weight is designed for each sample to preferentially select more reliable sample points. To better estimate the transformation between input images, an optimal sparse approximation is applied to approach the transformation over multiple iterations, which effectively reduces the computational complexity and also improves the accuracy of the approximation. Experimental results show that the proposed method outperforms state-of-the-art approaches in both matching accuracy and number of correct matches.
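The per-sample soft weighting described in the abstract can be illustrated with a generic iteratively reweighted fit. This is a stand-in sketch, not the AA-LMM algorithm itself; `soft_weighted_affine` and its Cauchy weighting scheme are assumptions for illustration:

```python
import numpy as np

def soft_weighted_affine(src, dst, iters=20):
    """Iteratively reweighted affine fit: correspondences with large residuals
    receive small soft weights, so unreliable points barely influence the
    estimate.  src, dst: (n, 2) matched point coordinates."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])   # homogeneous source coordinates
    w = np.ones(n)                          # soft weight per correspondence
    T = np.zeros((3, 2))
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]
        T, *_ = np.linalg.lstsq(sw * A, sw * dst, rcond=None)  # weighted fit
        r = np.linalg.norm(A @ T - dst, axis=1)                # residuals
        s = np.median(r) + 1e-9                                # robust scale
        w = 1.0 / (1.0 + (r / s) ** 2)                         # Cauchy weights
    return T, w
```

After a few iterations, outlier correspondences end up with near-zero weights while the transform is recovered from the reliable majority.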
(This article belongs to the Special Issue Self-Supervised Learning in Remote Sensing)

30 pages, 32175 KiB  
Article
SEL-Net: A Self-Supervised Learning-Based Network for PolSAR Image Runway Region Detection
by Ping Han, Yanwen Peng, Zheng Cheng, Dayu Liao and Binbin Han
Remote Sens. 2023, 15(19), 4708; https://doi.org/10.3390/rs15194708 - 26 Sep 2023
Viewed by 905
Abstract
This paper proposes an information enhancement network based on self-supervised learning (SEL-Net) for runway area detection. During the self-supervised learning phase, the distinctive attributes of PolSAR multi-channel data are fully harnessed so that the pretrained model focuses on airport runway areas. For the detection phase, the paper presents an improved U-Net detection network: Edge Feature Extraction Modules (EEM) are integrated into the encoder and skip-connection sections, while Semantic Information Transmission Modules (STM) are embedded in the decoder. Furthermore, the network's upsampling and downsampling architectures are improved. Experimental results demonstrate that the proposed SEL-Net reduces false alarms, preserves runway integrity, and achieves superior detection performance.
(This article belongs to the Special Issue Self-Supervised Learning in Remote Sensing)

18 pages, 26214 KiB  
Article
YOLO-DCTI: Small Object Detection in Remote Sensing Base on Contextual Transformer Enhancement
by Lingtong Min, Ziman Fan, Qinyi Lv, Mohamed Reda, Linghao Shen and Binglu Wang
Remote Sens. 2023, 15(16), 3970; https://doi.org/10.3390/rs15163970 - 10 Aug 2023
Cited by 3 | Viewed by 2922
Abstract
Object detection is a fundamental task in remote sensing image processing, and small or tiny object detection is one of its core components. Despite the considerable advances achieved in small object detection by combining CNN and transformer networks, there remains untapped potential in how information associated with small objects is extracted and used. Within transformer structures in particular, the complex and intertwined interplay between spatial context information and channel information is disregarded during the global modeling of pixel-level information in small objects, so valuable information is easily obfuscated and lost. To mitigate this limitation, we propose an innovative framework, YOLO-DCTI, that builds on the Contextual Transformer (CoT) framework for the detection of small or tiny objects. Specifically, within CoT we incorporate global residual and local fusion mechanisms throughout the entire input-to-output pipeline, which enables a deeper investigation of the network's intrinsic representations and fosters the fusion of spatial contextual attributes with channel characteristics. Moreover, we propose an improved decoupled contextual transformer detection head, denoted DCTI, to resolve the feature conflicts that arise from the concurrent classification and regression tasks. Experimental results on the DOTA, VisDrone, and NWPU VHR-10 datasets show that, on the powerful real-time detection network YOLOv7, the speed and accuracy of tiny-object detection are better balanced.
(This article belongs to the Special Issue Self-Supervised Learning in Remote Sensing)

21 pages, 10793 KiB  
Article
Augmented GBM Nonlinear Model to Address Spectral Variability for Hyperspectral Unmixing
by Linghong Meng, Danfeng Liu, Liguo Wang, Jón Atli Benediktsson, Xiaohan Yue and Yuetao Pan
Remote Sens. 2023, 15(12), 3205; https://doi.org/10.3390/rs15123205 - 20 Jun 2023
Cited by 1 | Viewed by 1570
Abstract
Spectral unmixing (SU) is a significant preprocessing task for handling hyperspectral images (HSI), but it is affected by nonlinearity and spectral variability (SV). Currently, SV is usually considered within the framework of the linear mixing model (LMM), which ignores nonlinear effects in the scene. To address this issue, we consider the effects of SV on SU while also investigating the nonlinear effects in hyperspectral images, and we propose an augmented generalized bilinear model to address spectral variability (abbreviated AGBM-SV). First, AGBM-SV adopts the generalized bilinear model (GBM) as its basic framework to handle the nonlinear effects caused by second-order scattering. Second, scaling factors and spectral variability dictionaries are introduced to model the variability caused by illumination conditions, intrinsic material variability, and other environmental factors. Then, a data-driven learning strategy is employed to set sparse and orthogonal bases for the abundance and spectral variability dictionaries according to the distribution characteristics of real materials. Finally, the alternating direction method of multipliers (ADMM) is used to split and solve the objective function, enabling the AGBM-SV algorithm to estimate abundances and learn the spectral variability dictionary more effectively. Experimental results demonstrate the superiority of the AGBM-SV method from both qualitative and quantitative perspectives: it effectively handles spectral variability in nonlinear mixing scenes and improves unmixing accuracy.
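For reference, the standard GBM forward model that this abstract builds on mixes endmember spectra linearly and adds pairwise second-order scattering terms weighted by coefficients gamma[i, j] in [0, 1]. The sketch below illustrates only the generic GBM, not the AGBM-SV additions; `gbm_mix` is a hypothetical name:

```python
import numpy as np

def gbm_mix(endmembers, abundances, gamma):
    """Generalized bilinear model (GBM) forward mixing: a linear mixture plus
    pairwise second-order scattering terms weighted by gamma[i, j] in [0, 1].
    endmembers: (L, P) spectra; abundances: (P,); gamma: (P, P)."""
    L, P = endmembers.shape
    y = endmembers @ abundances                       # linear (LMM) part
    for i in range(P):
        for j in range(i + 1, P):                     # second-order scattering
            y = y + gamma[i, j] * abundances[i] * abundances[j] \
                  * endmembers[:, i] * endmembers[:, j]
    return y
```

Setting all gamma to zero recovers the plain linear mixing model, which is why the GBM is a natural base framework for adding nonlinear effects.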
(This article belongs to the Special Issue Self-Supervised Learning in Remote Sensing)
