

Deep Neural Networks for Remote Sensing Scene Classification

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (1 December 2022) | Viewed by 12129

Special Issue Editors

The Key Laboratory of Computational Optical Imaging Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
Interests: remote sensing; machine learning; deep learning; image processing
School of Information and Communication Technology, Griffith University, Nathan, QLD 4111, Australia
Interests: pattern recognition; computer vision and spectral imaging with their applications to remote sensing and environmental informatics

Special Issue Information

Dear Colleagues,

With the rapid growth of remote sensing data, processing and understanding remote sensing scene images has become increasingly important. Owing to their powerful learning ability, deep learning techniques have been widely adopted for remote sensing data processing and analysis, and they offer a feasible solution for remote sensing scene classification and interpretation. New deep network architectures deserve particular attention as tools for remote sensing scene classification, segmentation, detection, and higher-level understanding. Advancing the development of deep networks for remote sensing scene classification is therefore both urgent and of vital importance to the field.

This Special Issue aims to present state-of-the-art deep networks for more accurate remote sensing scene classification and recognition.

This Special Issue welcomes work related to remote sensing scene classification, segmentation, detection, and understanding. Topics include, but are not limited to, the following:

  • Deep networks for remote sensing image scene classification;
  • Deep networks for remote sensing image scene segmentation;
  • Deep networks for remote sensing image scene detection;
  • Remote sensing image scene benchmark datasets;
  • Remote sensing image feature extraction and selection;
  • Remote sensing image enhancement and fusion.

Dr. Danfeng Hong
Dr. Jing Yao
Dr. Jun Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • remote sensing
  • feature extraction
  • classification and recognition
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

24 pages, 4581 KiB  
Article
Unsupervised Cross-Scene Aerial Image Segmentation via Spectral Space Transferring and Pseudo-Label Revising
by Wenjie Liu, Wenkai Zhang, Xian Sun and Zhi Guo
Remote Sens. 2023, 15(5), 1207; https://doi.org/10.3390/rs15051207 - 22 Feb 2023
Cited by 2 | Viewed by 1995
Abstract
Unsupervised domain adaptation (UDA) is essential because manually labeling pixel-level annotations is time-consuming and expensive. Because domain discrepancies have not been fully resolved, existing UDA approaches still perform poorly compared with supervised learning approaches. In this paper, we propose a novel sequential learning network (SLNet) for unsupervised cross-scene aerial image segmentation. The whole system is decoupled into two sequential parts: an image translation model and a segmentation adaptation model. Specifically, we introduce a spectral space transferring (SST) approach to narrow the visual discrepancy. High-frequency components can be transferred between the source images and the translated images in the Fourier spectral space, better preserving identity and fine-grained details. To further alleviate the distribution discrepancy, an efficient pseudo-label revising (PLR) approach is developed to guide pseudo-label learning via entropy minimization. Without additional parameters, the entropy map works as an adaptive threshold, constantly revising the pseudo labels for the target domain. Extensive experiments on single-category and multi-category UDA segmentation demonstrate that SLNet achieves state-of-the-art performance.
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Scene Classification)
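The spectral space transferring step resembles Fourier-based style transfer: a band of Fourier amplitudes is swapped between two images while the phase is kept, so the scene structure survives the translation. Below is a minimal NumPy sketch of that general idea; the band selection (a central low-frequency square controlled by an illustrative `beta` parameter) and the swap direction are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def spectral_space_transfer(src, ref, beta=0.1):
    """Replace the low-frequency Fourier amplitudes of `src` with those
    of `ref`, keeping the phase of `src` (and hence its structure).

    src, ref: float arrays of shape (H, W, C) with matching sizes.
    beta: half-width of the swapped central band, as a fraction of H/W
          (an illustrative knob, not the paper's parameter).
    """
    fft_src = np.fft.fft2(src, axes=(0, 1))
    fft_ref = np.fft.fft2(ref, axes=(0, 1))

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_ref = np.abs(fft_ref)

    # Centre the spectra so the low frequencies sit in the middle.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_ref = np.fft.fftshift(amp_ref, axes=(0, 1))

    h, w = src.shape[:2]
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    amp_src[ch - bh:ch + bh, cw - bw:cw + bw] = \
        amp_ref[ch - bh:ch + bh, cw - bw:cw + bw]

    # Recombine swapped amplitude with the original phase.
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(0, 1))
    return np.real(out)
```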

20 pages, 5701 KiB  
Article
Recursive RX with Extended Multi-Attribute Profiles for Hyperspectral Anomaly Detection
by Fang He, Shuai Yan, Yao Ding, Zhensheng Sun, Jianwei Zhao, Haojie Hu and Yujie Zhu
Remote Sens. 2023, 15(3), 589; https://doi.org/10.3390/rs15030589 - 18 Jan 2023
Cited by 7 | Viewed by 1779
Abstract
Hyperspectral anomaly detection (HAD) plays an important role in military and civilian applications and has attracted considerable research attention. The well-known Reed–Xiaoli (RX) algorithm is the benchmark HAD method, and many variants have been developed on top of the RX model. However, most of them ignore the spatial characteristics of hyperspectral images (HSIs). In this paper, we combine extended multi-attribute profiles (EMAP) with the RX algorithm to propose the Recursive RX with Extended Multi-Attribute Profiles (RRXEMAP) algorithm. First, EMAP is utilized to extract the spatial structure information of the HSI. Then, a simple background purification method is proposed: the RX detector is used to remove the pixels most likely to be anomalies, which improves background estimation. A parameter controls the purification level and can be selected experimentally. Finally, the RX detector is applied again, between the EMAP features and the new background distribution, to detect anomalies. Experimental results on six real hyperspectral datasets and a synthetic dataset demonstrate the effectiveness of the proposed RRXEMAP method and the importance of the EMAP features and the background purification step. In particular, on the abu-airport-2 dataset, the proposed method attains an AUC of 0.9858, which is 0.0198 higher than that of the second-best method, CRD.
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Scene Classification)
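For readers unfamiliar with the baseline, the classical global RX detector scores each pixel by its Mahalanobis distance from scene-wide background statistics; the purification idea described in the abstract then discards the highest-scoring pixels before the statistics are re-estimated. A minimal NumPy sketch follows; the `keep_ratio` threshold is an illustrative stand-in for the paper's purification parameter.

```python
import numpy as np

def rx_detector(hsi):
    """Global Reed-Xiaoli detector.

    hsi: array of shape (H, W, B). Returns an (H, W) anomaly score map,
    where each score is the Mahalanobis distance of a pixel spectrum
    from the scene-wide mean and covariance.
    """
    h, w, b = hsi.shape
    pixels = hsi.reshape(-1, b).astype(np.float64)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singularity
    centred = pixels - mu
    # Row-wise quadratic form x^T C^-1 x for every pixel at once.
    scores = np.einsum('ij,jk,ik->i', centred, cov_inv, centred)
    return scores.reshape(h, w)

def purify_background(hsi, keep_ratio=0.9):
    """One purification step: keep only the pixels whose RX score falls
    below a quantile threshold, i.e., those least likely to be anomalies."""
    scores = rx_detector(hsi)
    thresh = np.quantile(scores, keep_ratio)
    return scores < thresh  # boolean background mask
```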

14 pages, 3800 KiB  
Article
Identifying Critical Infrastructure in Imagery Data Using Explainable Convolutional Neural Networks
by Shiloh N. Elliott, Ashley J. B. Shields, Elizabeth M. Klaehn and Iris Tien
Remote Sens. 2022, 14(21), 5331; https://doi.org/10.3390/rs14215331 - 25 Oct 2022
Cited by 5 | Viewed by 2651
Abstract
To date, no method utilizing satellite imagery exists for detailing the locations and functions of critical infrastructure across the United States, making response to natural disasters and other events challenging due to complex infrastructural interdependencies. This paper presents a repeatable, transferable, and explainable method for critical infrastructure analysis and implements a robust model for critical infrastructure detection in satellite imagery. The model consists of a DenseNet-161 convolutional neural network pretrained on the ImageNet database and further trained on a custom dataset containing nine infrastructure classes. The resulting analysis achieved an overall accuracy of 90%, with the highest accuracies for airports (97%), hydroelectric dams (96%), solar farms (94%), substations (91%), potable water tanks (93%), and hospitals (93%). The relatively low accuracies for petroleum terminals (86%), water treatment plants (78%), and natural gas generation (78%) are likely influenced by data commonality between similar infrastructure components. Local interpretable model-agnostic explanations (LIME) were integrated into the modeling pipeline to establish user trust in critical infrastructure applications. The results demonstrate the effectiveness of a convolutional neural network approach for critical infrastructure identification, with higher than 90% accuracy for six of the critical infrastructure facility types.
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Scene Classification)
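The pipeline described above is a standard transfer-learning setup. Below is a hedged PyTorch sketch, assuming the torchvision and `lime` packages; the nine-class head, the dummy input, and the omitted ImageNet normalization are illustrative choices, not the authors' exact configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from lime import lime_image

NUM_CLASSES = 9  # nine infrastructure classes, per the abstract

# ImageNet-pretrained DenseNet-161 with a replacement classifier head
# sized for the infrastructure classes; fine-tuning loop omitted.
model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)
model.eval()

def predict_fn(batch):
    """LIME's classifier_fn contract: a batch of HxWx3 arrays in,
    class probabilities out. Preprocessing is omitted for brevity."""
    x = torch.from_numpy(batch).permute(0, 3, 1, 2).float()
    with torch.no_grad():
        return torch.softmax(model(x), dim=1).numpy()

# Explain one prediction: LIME perturbs superpixels of the image and
# fits a local surrogate model to the resulting probability changes.
image = np.random.rand(224, 224, 3)  # placeholder for a satellite chip
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, num_samples=100)  # small for the sketch
```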

32 pages, 8923 KiB  
Article
Tensor Dictionary Self-Taught Learning Classification Method for Hyperspectral Image
by Fengshuang Liu, Jun Fu, Qiang Wang and Rongqiang Zhao
Remote Sens. 2022, 14(17), 4373; https://doi.org/10.3390/rs14174373 - 2 Sep 2022
Cited by 4 | Viewed by 1946
Abstract
Precise object classification from hyperspectral imagery with limited training data is a challenging task. We propose a tensor-based dictionary self-taught learning (TDSL) classification method to address this challenge. The idea of TDSL is to utilize a small amount of unlabeled data to improve supervised classification. TDSL trains tensor feature extractors on unlabeled data, extracts joint spectral-spatial tensor features, and performs classification on the labeled data set. These two data sets can be gathered over different scenes, even by different sensors; TDSL can therefore complete cross-scene and cross-sensor classification tasks. For training tensor feature extractors on unlabeled data, we propose a sparse tensor-based dictionary learning algorithm for three-dimensional samples, in which dictionaries are initialized using the Tucker decomposition and updated via the K higher-order singular value decomposition. These dictionaries act as feature extractors and are used to extract sparse joint spectral-spatial tensor features from the labeled data set. For classification, a support vector machine is applied to the tensor features. TDSL with majority voting (TDSLMV) reduces misclassified pixels in homogeneous regions and at the edges between them, further refining the classification. The proposed methods are evaluated on the Indian Pines, Pavia University, and Houston2013 datasets, where TDSLMV achieves accuracies as high as 99.13%, 99.28%, and 99.76%, respectively. Compared with several state-of-the-art methods, the proposed methods improve classification accuracy by at least 2.5%.
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Scene Classification)
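The dictionary initialization mentioned in the abstract can be pictured with TensorLy's Tucker decomposition: a 3-D spectral-spatial patch factors into a small core tensor plus one factor matrix per mode, and those factor matrices serve as initial mode-wise dictionaries. A minimal sketch, assuming the `tensorly` package; the patch size and ranks are illustrative, and the K-HOSVD dictionary update from the paper is not shown.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# A toy 3-D spectral-spatial sample: a 9x9 spatial window with 32 bands.
patch = np.random.rand(9, 9, 32)

# Tucker decomposition: core tensor plus one factor matrix per mode.
# The factor matrices act as the initial mode-wise dictionaries.
core, factors = tucker(tl.tensor(patch), rank=[5, 5, 10])
D_h, D_w, D_spec = factors  # spatial-height, spatial-width, spectral modes

print(core.shape, D_h.shape, D_w.shape, D_spec.shape)
# (5, 5, 10) (9, 5) (9, 5) (32, 10)
```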

26 pages, 7519 KiB  
Article
A Lightweight Convolutional Neural Network Based on Hierarchical-Wise Convolution Fusion for Remote-Sensing Scene Image Classification
by Cuiping Shi, Xinlei Zhang, Tianyi Wang and Liguo Wang
Remote Sens. 2022, 14(13), 3184; https://doi.org/10.3390/rs14133184 - 2 Jul 2022
Cited by 8 | Viewed by 2468
Abstract
The large intra-class differences and inter-class similarities of scene images pose great challenges for remote-sensing scene image classification. In recent years, many remote-sensing scene classification methods based on convolutional neural networks have been proposed. To improve classification performance, many studies increase the width and depth of convolutional neural networks to extract richer features, which increases model complexity and reduces running speed. To address this problem, a lightweight convolutional neural network based on hierarchical-wise convolution fusion (LCNN-HWCF) is proposed for remote-sensing scene image classification. First, in the shallow layers of the network (groups 1–3), the proposed lightweight dimension-wise convolution (DWC) is used to extract shallow features of remote-sensing images. Dimension-wise convolution operates along the three dimensions of width, depth, and channel, and the convolved features of the three dimensions are then fused; compared with traditional convolution, it requires fewer parameters and computations. In the deep layers of the network (groups 4–7), running speed usually decreases as the number of filters grows, so a hierarchical-wise convolution fusion module is designed to extract deep features. Finally, a global average pooling layer, a fully connected layer, and the Softmax function are used for classification; applying global average pooling before the fully connected layer better preserves the spatial information of the features. The proposed method achieves good classification results on the UCM, RSSCN7, AID, and NWPU datasets. On the challenging AID (training:test = 2:8) and NWPU (training:test = 1:9) datasets, LCNN-HWCF reaches 95.76% and 94.53% accuracy, respectively. A series of experiments shows that, compared with some state-of-the-art methods, the proposed method greatly reduces the number of network parameters while maintaining classification accuracy, achieving a good trade-off between accuracy and running speed.
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Scene Classification)
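The dimension-wise convolution idea, three cheap convolutions along width, height, and channel whose outputs are fused, can be sketched in PyTorch as follows. The depthwise 1xk and kx1 branches, the pointwise channel branch, and fusion by summation are assumptions for illustration; the paper's exact branch design may differ.

```python
import torch
import torch.nn as nn

class DimensionWiseConv(nn.Module):
    """Sketch of a dimension-wise convolution block: three lightweight
    branches convolve along width, height, and channel respectively,
    and their outputs are fused by summation."""

    def __init__(self, channels, k=3):
        super().__init__()
        p = k // 2
        # Depthwise 1xk: mixes information along the width dimension.
        self.width = nn.Conv2d(channels, channels, (1, k),
                               padding=(0, p), groups=channels)
        # Depthwise kx1: mixes information along the height dimension.
        self.height = nn.Conv2d(channels, channels, (k, 1),
                                padding=(p, 0), groups=channels)
        # Pointwise 1x1: mixes information across channels.
        self.channel = nn.Conv2d(channels, channels, 1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        fused = self.width(x) + self.height(x) + self.channel(x)
        return self.act(self.bn(fused))

x = torch.randn(2, 64, 56, 56)
y = DimensionWiseConv(64)(x)  # same shape as x
```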
