
Computational Intelligence for Remote Sensing Image Analysis and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (20 October 2023) | Viewed by 16329

Special Issue Editors

Guest Editor
School of Electronics and Information, Northwestern Polytechnical University, 127 West Youyi Road, Xi’an 710072, China
Interests: computational intelligence; remote sensing image understanding; change detection; deep learning

Guest Editor
Key Laboratory of Intelligent Perception and Image Understanding, Xidian University, Xi'an 710071, China
Interests: computational intelligence; evolutionary computation; neural networks; multi-objective optimization

Guest Editor
Department of Computer Science and Software Engineering, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
Interests: machine learning; evolutionary computation; computer vision; services computing; pervasive computing

Guest Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
Interests: remote sensing; deep learning; artificial intelligence; image processing; signal processing

Guest Editor
School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China
Interests: hyperspectral remote sensing; neural networks; computational intelligence

Special Issue Information

Dear Colleagues,

Remote sensing systems and technology have been intensively studied and widely applied to earth observation, environmental monitoring, land survey, disaster management, and related tasks. The massive volume of remote sensing imagery generated by the various sensors mounted on satellites, aircraft, and UAVs poses great challenges to data storage, management, and analysis. Meanwhile, the security and privacy of both the data and the techniques that process them are attracting growing attention. Applications built on remote sensing imagery also involve many optimization tasks that conventional mathematical programming approaches handle poorly, either because the problems are non-convex and/or non-differentiable, or because no explicit mathematical formulation is available.

Computational intelligence (CI) is one of the most prosperous sub-fields of AI. It studies computational methodologies and approaches, inspired by the intelligent behaviors observed in nature and biology, for solving complex problems on which traditional approaches are ineffective or infeasible. Its three major research areas are neural networks, evolutionary computation, and fuzzy logic, all of which have been successfully applied in remote sensing. In particular, deep neural networks have achieved great success in numerous remote sensing image analysis tasks, ranging from segmentation and classification to detection and super-resolution.

Topics To Be Covered

This Special Issue intends to provide a forum for disseminating achievements in the research and application of CI-related techniques for analyzing remote sensing images of various modalities, e.g., multispectral/hyperspectral, SAR, and LiDAR images. The topics of this Special Issue include, but are not limited to:

  • CI for remote sensing image denoising, restoration or super-resolution;
  • CI for remote sensing image registration;
  • CI for remote sensing image segmentation, classification, and retrieval;
  • CI for image-based target detection and recognition in remote sensing;
  • CI-based feature selection, extraction and learning techniques for remote sensing image analysis;
  • CI for remote sensing image data fusion or compression;
  • CI for multi-temporal remote sensing image analysis, e.g., change detection;
  • Transfer learning and federated learning in CI-based remote sensing image analysis;
  • Security and privacy in CI-based remote sensing image analysis;
  • CI for large-scale or real-time remote sensing image analysis;
  • Applications: earth observation, land survey, mining, disaster management, navigation, etc. 

Dr. Jiao Shi
Prof. Dr. Maoguo Gong
Prof. Dr. Kai Qin
Dr. Gwanggil Jeon
Dr. Yu Lei
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • deep learning
  • computational intelligence
  • multi-task learning
  • image processing
  • hyperspectral and multispectral imaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

31 pages, 25541 KiB  
Article
Estimation of Small-Stream Water Surface Elevation Using UAV Photogrammetry and Deep Learning
by Radosław Szostak, Marcin Pietroń, Przemysław Wachniew, Mirosław Zimnoch and Paweł Ćwiąkała
Remote Sens. 2024, 16(8), 1458; https://doi.org/10.3390/rs16081458 - 20 Apr 2024
Viewed by 1255
Abstract
Unmanned aerial vehicle (UAV) photogrammetry allows the generation of orthophoto and digital surface model (DSM) rasters of terrain. However, DSMs of water bodies mapped using this technique often reveal distortions in the water surface, thereby impeding the accurate sampling of water surface elevation (WSE) from DSMs. This study investigates the capability of deep neural networks to accommodate the aforementioned perturbations and effectively estimate WSE from photogrammetric rasters. Convolutional neural networks (CNNs) were employed for this purpose. Two regression approaches utilizing CNNs were explored: direct regression employing an encoder, and a solution based on the prediction of a weight mask by an autoencoder architecture, subsequently used to sample values from the photogrammetric DSM. The dataset employed in this study comprises data collected from five case studies of small lowland streams in Poland and Denmark, consisting of 322 DSM and orthophoto raster samples. A grid search was employed to identify the optimal combination of encoder, mask generation architecture, and batch size among multiple candidates. Solutions were evaluated using two cross-validation methods: stratified k-fold cross-validation, where validation subsets maintained the same proportion of samples from all case studies, and leave-one-case-out cross-validation, where the validation dataset originates entirely from a single case study and the training set consists of samples from the other case studies. Depending on the case study and the level of validation strictness, the proposed solution achieved a root mean square error (RMSE) ranging between 2 cm and 16 cm. The proposed method outperforms methods based on the straightforward sampling of the photogrammetric DSM, achieving, on average, an 84% lower RMSE for stratified cross-validation and a 62% lower RMSE for leave-one-case-out cross-validation. Using data from other research, the proposed solution was compared on the same case study with other UAV-based methods. For that benchmark case study, the proposed solution achieved an RMSE of 5.9 cm for leave-one-case-out cross-validation and 3.5 cm for stratified cross-validation, which is close to the result achieved by the radar-based method (RMSE of 3 cm), considered the most accurate method available. The proposed solution is characterized by a high degree of explainability and generalization.
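
The weight-mask variant can be pictured concretely: the autoencoder predicts a per-pixel weight map over the DSM tile, and the WSE estimate is the weighted average of the DSM cells. Below is a minimal NumPy sketch of that sampling step; the function name, the normalization, and the toy data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def wse_from_weight_mask(dsm: np.ndarray, mask: np.ndarray) -> float:
    """Estimate water surface elevation as a weighted average of DSM cells.

    `dsm` is a photogrammetric DSM tile and `mask` is the non-negative
    weight map predicted by the autoencoder branch (same shape as `dsm`).
    """
    w = np.clip(mask, 0.0, None)          # keep weights non-negative
    return float((dsm * w).sum() / (w.sum() + 1e-8))

# Toy usage: a flat 100.0 m water surface with photogrammetric noise.
rng = np.random.default_rng(0)
dsm = 100.0 + rng.normal(0.0, 0.05, size=(64, 64))
mask = np.ones_like(dsm)                  # a real mask would down-weight banks
print(round(wse_from_weight_mask(dsm, mask), 3))
```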

19 pages, 13621 KiB  
Article
MTU2-Net: Extracting Internal Solitary Waves from SAR Images
by Saheya Barintag, Zhijie An, Qiyu Jin, Xu Chen, Maoguo Gong and Tieyong Zeng
Remote Sens. 2023, 15(23), 5441; https://doi.org/10.3390/rs15235441 - 21 Nov 2023
Cited by 1 | Viewed by 1335
Abstract
Internal Solitary Waves (ISWs) play a pivotal role in transporting energy and matter within the ocean, and they also pose substantial risks to ocean engineering, navigation, and underwater communication systems. Consequently, measures need to be adopted to alleviate their negative effects and minimize the associated risks. An effective approach is to extract ISW positions from Synthetic Aperture Radar (SAR) data for precise trajectory prediction and efficient avoidance strategies. However, manual extraction of ISWs from SAR data is time-consuming and prone to inaccuracies, so a high-precision, rapid, and automated ISW-extraction algorithm is needed. In this paper, we introduce the Middle Transformer U2-net (MTU2-net), a model that integrates a distinctive loss function and a Transformer to improve the accuracy of ISW extraction. The novel loss function enhances the model's capacity to extract bow waves, whereas the Transformer ensures coherence in ISW patterns. We established a standardized dataset of 762 SAR image scenes containing ISWs from the South China Sea. The Mean Intersection over Union (MIoU) achieved on this dataset was 71.57%, surpassing the performance of the other compared methods. The experimental outcomes demonstrate the model's strong performance in precisely extracting bow wave attributes from SAR data.
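
To illustrate the "middle Transformer" idea of enforcing coherence along elongated wave signatures, here is a minimal PyTorch sketch that applies a Transformer encoder to the bottleneck feature map of a U-shaped network. The module name, layer sizes, and placement are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MidTransformerBottleneck(nn.Module):
    """Self-attention over the lowest-resolution feature map of a U-shaped
    network, so distant segments of a wave stripe can inform each other."""
    def __init__(self, channels: int = 64, heads: int = 4, layers: int = 2):
        super().__init__()
        block = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

feat = torch.randn(1, 64, 16, 16)               # bottleneck features
print(MidTransformerBottleneck()(feat).shape)   # torch.Size([1, 64, 16, 16])
```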

20 pages, 9181 KiB  
Article
An Efficient Object Detection Algorithm Based on Improved YOLOv5 for High-Spatial-Resolution Remote Sensing Images
by Feng Cao, Bing Xing, Jiancheng Luo, Deyu Li, Yuhua Qian, Chao Zhang, Hexiang Bai and Hu Zhang
Remote Sens. 2023, 15(15), 3755; https://doi.org/10.3390/rs15153755 - 28 Jul 2023
Cited by 5 | Viewed by 2830
Abstract
The field of remote sensing information processing places significant research emphasis on object detection (OD) in high-spatial-resolution remote sensing images (HSRIs). Compared to conventional natural images, the OD task in HSRIs poses additional challenges: variations in object scale, complex backgrounds, dense arrangements, and uncertain orientations. To tackle these challenges, this paper introduces an OD algorithm built on enhancements to the YOLOv5 framework. Incorporating RepConv, Transformer Encoder, and BiFPN modules into the original YOLOv5 network improves detection accuracy, particularly for objects of varying scales. The C3GAM module is designed by introducing the GAM attention mechanism to suppress the interference caused by complex background regions. To achieve precise localization of densely arranged objects, the SIoU loss function is integrated into YOLOv5. The circular smooth label method is used to detect objects with uncertain orientations. The effectiveness of the proposed algorithm is confirmed on two commonly used datasets, HRSC2016 and UCAS-AOD, where it achieves average detection accuracies of 90.29% and 90.06%, respectively, surpassing the other compared OD algorithms for HSRIs.
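
The circular smooth label (CSL) idea mentioned above is easy to sketch: orientation is discretized into angle bins and smoothed with a window that wraps around, so nearly identical orientations on either side of the angular boundary are no longer penalized as maximally different. A minimal NumPy sketch follows; the bin count and window radius are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def circular_smooth_label(angle_deg: float, num_bins: int = 180,
                          radius: int = 6) -> np.ndarray:
    """Encode an orientation as a Gaussian window over angle bins that
    wraps around at the boundary (e.g., 179 deg neighbors 0 deg)."""
    bins = np.arange(num_bins)
    centre = int(round(angle_deg)) % num_bins
    # circular distance between each bin and the centre bin
    d = np.minimum(np.abs(bins - centre), num_bins - np.abs(bins - centre))
    label = np.exp(-(d ** 2) / (2 * (radius / 3) ** 2))
    label[d > radius] = 0.0                   # truncate outside the window
    return label

lab = circular_smooth_label(179.0)
print(lab.argmax(), lab[0])  # peak at bin 179; bin 0 still gets weight
```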

20 pages, 7987 KiB  
Article
Multi-Class Double-Transformation Network for SAR Image Registration
by Xiaozheng Deng, Shasha Mao, Jinyuan Yang, Shiming Lu, Shuiping Gou, Youming Zhou and Licheng Jiao
Remote Sens. 2023, 15(11), 2927; https://doi.org/10.3390/rs15112927 - 4 Jun 2023
Cited by 2 | Viewed by 1847
Abstract
In SAR image registration, most existing methods treat registration as a binary classification problem and construct pairs of training samples for a deep model. However, it is difficult to obtain a large number of given matched points directly from SAR images to serve as training samples. Motivated by this, we propose a multi-class double-transformation network for SAR image registration based on the Swin Transformer. Unlike existing methods, the proposed method treats each key point as an independent category and constructs a multi-classification model for SAR image registration. Then, based on the key points from the reference and sensed images, respectively, a double-transformation network with two branches is designed to search for matched-point pairs. In particular, to weaken the inherent diversity between the two SAR images, key points from one image are transformed to the other image, and the transformed image is used as the basic image from which sub-images corresponding to all key points are captured as training and testing samples. Moreover, a precise-matching module is designed to increase the reliability of the obtained matched points by eliminating inconsistent matched-point pairs given by the two branches. Finally, a series of experiments illustrates that the proposed method achieves higher registration performance than existing methods.
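
The consistency idea behind the precise-matching module can be sketched simply: keep a matched-point pair only when the two branches agree in both directions. The dictionary-based interface below is a hypothetical simplification, not the paper's implementation.

```python
def consistent_matches(ref_to_sen: dict, sen_to_ref: dict) -> list:
    """Keep only pairs on which the two branches agree: branch A maps
    reference key point i to sensed key point j, and branch B maps j
    back to i."""
    return [(i, j) for i, j in ref_to_sen.items()
            if sen_to_ref.get(j) == i]

# Toy usage: pair (0, 2) is mutual, pair (1, 3) is not and is discarded.
print(consistent_matches({0: 2, 1: 3}, {2: 0, 3: 5}))  # [(0, 2)]
```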

22 pages, 4499 KiB  
Article
Attention-Embedded Triple-Fusion Branch CNN for Hyperspectral Image Classification
by Erlei Zhang, Jiayi Zhang, Jiaxin Bai, Jiarong Bian, Shaoyi Fang, Tao Zhan and Mingchen Feng
Remote Sens. 2023, 15(8), 2150; https://doi.org/10.3390/rs15082150 - 19 Apr 2023
Cited by 6 | Viewed by 1667
Abstract
Hyperspectral imaging (HSI) is widely used in various fields owing to its rich spectral information. Nonetheless, the high dimensionality of HSI and the limited amount of labeled data remain significant obstacles to HSI classification. To alleviate these problems, we propose an attention-embedded triple-branch fusion convolutional neural network (AETF-Net) for HSI classification. The network consists of a spectral attention branch, a spatial attention branch, and a multi-attention fusion branch (MAFB). The spectral branch introduces cross-channel attention to alleviate the band redundancy problem in high dimensions, while the spatial branch preserves the location information of features and eliminates interfering image elements through a bi-directional spatial attention module. These pre-extracted spectral and spatial attention features are then embedded into a novel MAFB with a large-kernel decomposition technique. The proposed AETF-Net reuses multi-attention features and extracts more representative and discriminative features. Experimental results on three well-known datasets demonstrate the superiority of AETF-Net.
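
As a rough picture of the spectral branch's cross-channel attention, here is a squeeze-and-excitation-style PyTorch sketch that learns per-band weights from a globally pooled spectrum so redundant bands can be down-weighted. The module and its sizes are generic assumptions, not the AETF-Net definition.

```python
import torch
import torch.nn as nn

class SpectralChannelAttention(nn.Module):
    """Cross-channel attention over spectral bands: global pooling
    summarizes each band, a bottleneck MLP scores it, and the input
    cube is rescaled band by band."""
    def __init__(self, bands: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(bands, bands // reduction), nn.ReLU(inplace=True),
            nn.Linear(bands // reduction, bands), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # (B, C) band weights in (0, 1)
        return x * w.view(b, c, 1, 1)

cube = torch.randn(2, 200, 9, 9)               # batch of 200-band HSI patches
print(SpectralChannelAttention(200)(cube).shape)
```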

27 pages, 1846 KiB  
Article
Nearest Neighboring Self-Supervised Learning for Hyperspectral Image Classification
by Yao Qin, Yuanxin Ye, Yue Zhao, Junzheng Wu, Han Zhang, Kenan Cheng and Kun Li
Remote Sens. 2023, 15(6), 1713; https://doi.org/10.3390/rs15061713 - 22 Mar 2023
Cited by 8 | Viewed by 2318
Abstract
Recently, state-of-the-art classification performance on natural images has been obtained by self-supervised learning (S2L), which generates latent features by learning between different views of the same images. However, the latent semantic information of similar images has hardly been exploited by these S2L-based methods. Consequently, to explore the potential of S2L between similar samples in hyperspectral image classification (HSIC), we propose the nearest neighboring self-supervised learning (N2SSL) method, which interacts between different augmentations of reliable nearest neighboring pairs (RN2Ps) of HSI samples in the framework of bootstrap your own latent (BYOL). Specifically, there are four main steps: pretraining of spectral-spatial residual network (SSRN)-based BYOL, generation of nearest neighboring pairs (N2Ps), training of BYOL based on RN2Ps, and final classification. Experimental results on three benchmark HSIs validated that S2L on similar samples can facilitate the subsequent classification. Moreover, we found that BYOL trained on an unrelated HSI can be fine-tuned for the classification of other HSIs with less computational cost and higher accuracy than training from scratch. Beyond the methodology, we present a comprehensive review of HSI-related data augmentation (DA), which is meaningful for future research on S2L for HSIs.
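
The notion of reliable nearest neighboring pairs can be approximated by a mutual nearest-neighbor test on learned features: sample i is paired with j only if each is the other's nearest neighbor. The sketch below assumes cosine similarity on pre-trained features and simplifies the paper's RN2P selection.

```python
import torch
import torch.nn.functional as F

def reliable_nn_pairs(features: torch.Tensor) -> list:
    """Return index pairs (i, j) that are mutual nearest neighbors
    under cosine similarity; each pair is reported once (i < j)."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t()
    sim.fill_diagonal_(-1.0)                   # exclude self-matches
    nn_idx = sim.argmax(dim=1)
    return [(i, int(j)) for i, j in enumerate(nn_idx)
            if int(nn_idx[int(j)]) == i and i < int(j)]

# Toy usage on random 32-d features for 8 samples.
print(reliable_nn_pairs(torch.randn(8, 32)))
```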

Other

18 pages, 8037 KiB  
Technical Note
A General Self-Supervised Framework for Remote Sensing Image Classification
by Yuan Gao, Xiaojuan Sun and Chao Liu
Remote Sens. 2022, 14(19), 4824; https://doi.org/10.3390/rs14194824 - 27 Sep 2022
Cited by 12 | Viewed by 2939
Abstract
This paper provides insights that go beyond simply combining self-supervised learning (SSL) with remote sensing (RS). Inspired by the improved representation ability that SSL brings to natural image understanding, we explore and analyze the compatibility of SSL with remote sensing. In particular, we propose a self-supervised pre-training framework that, for the first time, applies the masked image modeling (MIM) method to RS image research in order to enhance its efficacy. The completion proxy task used by MIM encourages the model to reconstruct masked patches and thus to correlate the unseen parts with the seen parts in semantics. Second, to figure out how pretext tasks affect downstream performance, we identify the attribution consensus of the pre-trained model and downstream tasks toward the proxy and classification targets, which differs considerably from that in natural image understanding. Moreover, this transferable consensus persists in cross-dataset full or partial fine-tuning, which means that SSL can boost general, model-free representation beyond domain bias and task bias (e.g., classification, segmentation, and detection). Finally, on three publicly accessible RS scene classification datasets, our method outperforms the majority of fully supervised state-of-the-art (SOTA) methods with higher accuracy scores on unlabeled datasets.
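
The MIM proxy task hinges on choosing which patches to hide before reconstruction. A minimal PyTorch sketch of random patch masking follows; the masking ratio and grid size are illustrative assumptions rather than the paper's settings.

```python
import torch

def random_patch_mask(num_patches: int, mask_ratio: float = 0.6,
                      batch: int = 1) -> torch.Tensor:
    """Boolean mask over image patches for masked image modeling: True
    marks patches the model must reconstruct from visible context."""
    scores = torch.rand(batch, num_patches)
    k = int(num_patches * mask_ratio)
    idx = scores.argsort(dim=1)[:, :k]         # the k lowest scores get masked
    mask = torch.zeros(batch, num_patches, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return mask

m = random_patch_mask(196)                     # e.g., a 14x14 grid of patches
print(m.shape, int(m.sum()))                   # torch.Size([1, 196]) 117
```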
