Convolutional Neural Network Applications in Remote Sensing II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (15 September 2023) | Viewed by 18501

Special Issue Editors


Guest Editor
Army Research Lab., Booz Allen Hamilton Inc., 2800 Powder Mill Rd., Adelphi, MD 20783, USA
Interests: computer vision; machine learning; AI; deep learning

Guest Editor
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
Interests: remote sensing image fusion; image registration; deep learning

Guest Editor
Army Research Lab., Booz Allen Hamilton Inc., 2800 Powder Mill Rd., Adelphi, MD 20783, USA
Interests: computer vision; machine learning; deep learning and AI

Special Issue Information

Dear Colleagues,

Today, the field of computer vision and deep learning is progressing rapidly, with applications expanding into many areas, including remote sensing, where a myriad of challenges in data acquisition and annotation have yet to be fully solved. Addressing these challenges calls for a breakthrough in high-performance deep-learning-based models, which typically require large-scale annotated datasets.

This Special Issue focuses on the latest advances in remote sensing using computer vision, deep learning and artificial intelligence. Although the issue is broad in scope, contributions with a specific focus are sought.

We welcome papers on topics of interest including, but not limited to:

  • Deep learning architectures for remote sensing;
  • Deep learning for remote sensing image processing, including image fusion, cloud detection and landslide monitoring;
  • Machine learning for remote sensing;
  • Computer vision methods for remote sensing;
  • Classification/detection/regression;
  • Unsupervised feature learning for remote sensing;
  • Domain adaptation and transfer learning with computer vision and deep learning for remote sensing;
  • Anomaly/novelty detection for remote sensing;
  • New remote sensing datasets and tasks;
  • Remote sensing data analysis;
  • New remote sensing applications;
  • Synthetic remote sensing data generation;
  • Real-time remote sensing;
  • Deep-learning-based image registration.

Dr. Hyungtae Lee
Dr. Qing Guo
Dr. Sungmin Eum
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning architecture
  • machine learning
  • computer vision
  • classification/detection/regression
  • unsupervised feature learning
  • domain adaptation and transfer learning
  • anomaly/novelty detection
  • synthetic data generation
  • real-time remote sensing
  • deep-learning-based image registration
  • remote sensing image processing

Published Papers (13 papers)


Research

25 pages, 12665 KiB  
Article
Adaptive Multi-Feature Fusion Graph Convolutional Network for Hyperspectral Image Classification
by Jie Liu, Renxiang Guan, Zihao Li, Jiaxuan Zhang, Yaowen Hu and Xueyong Wang
Remote Sens. 2023, 15(23), 5483; https://doi.org/10.3390/rs15235483 - 24 Nov 2023
Cited by 5 | Viewed by 1210
Abstract
Graph convolutional networks (GCNs) are a promising approach for addressing the necessity for long-range information in hyperspectral image (HSI) classification. Researchers have attempted to develop classification methods that combine strong generalizations with effective classification. However, the current HSI classification methods based on GCN present two main challenges. First, they overlook the multi-view features inherent in HSIs, whereas multi-view information interacts to facilitate classification tasks. Second, many algorithms perform a rudimentary fusion of extracted features, which can result in information redundancy and conflicts. To address these challenges and exploit the strengths of multiple features, this paper introduces an adaptive multi-feature fusion GCN (AMF-GCN) for HSI classification. Initially, the AMF-GCN algorithm extracts spectral and textural features from the HSIs and combines them to create fusion features. Subsequently, these three features are employed to construct separate images, which are then processed individually using multi-branch GCNs. The AMF-GCN aggregates node information and utilizes an attention-based feature fusion method to selectively incorporate valuable features. We evaluated the model on three widely used HSI datasets, i.e., Pavia University, Salinas, and Houston-2013, and achieved accuracies of 97.45%, 98.03%, and 93.02%, respectively. Extensive experimental results show that the classification performance of the AMF-GCN on benchmark HSI datasets is comparable to that of state-of-the-art methods. Full article
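The attention-based feature fusion step described above can be sketched as softmax-weighted averaging across feature branches. This is a minimal NumPy illustration, not the authors' implementation; the scoring parameters (`w`, `b`) and toy shapes are assumptions:

```python
import numpy as np

def attention_fuse(branches, w, b):
    """Fuse per-branch node features with softmax attention.

    branches: list of (n_nodes, d) arrays, one per feature branch
              (e.g. spectral, textural, and combined features).
    w, b:     parameters of a tiny scoring layer, shapes (d,) and scalar.
    """
    stacked = np.stack(branches, axis=0)             # (k, n, d)
    scores = stacked @ w + b                         # (k, n): one score per branch/node
    scores -= scores.max(axis=0, keepdims=True)      # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    return (alpha[..., None] * stacked).sum(axis=0)  # (n, d) fused features

rng = np.random.default_rng(0)
spectral = rng.normal(size=(5, 8))
textural = rng.normal(size=(5, 8))
combined = (spectral + textural) / 2
out = attention_fuse([spectral, textural, combined], w=rng.normal(size=8), b=0.0)
print(out.shape)  # (5, 8)
```

With a single branch the attention weights collapse to 1 and the input is returned unchanged, which is a quick sanity check on the softmax.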
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

16 pages, 4537 KiB  
Article
Utilizing Hyperspectral Reflectance and Machine Learning Algorithms for Non-Destructive Estimation of Chlorophyll Content in Citrus Leaves
by Dasui Li, Qingqing Hu, Siqi Ruan, Jun Liu, Jinzhi Zhang, Chungen Hu, Yongzhong Liu, Yuanyong Dian and Jingjing Zhou
Remote Sens. 2023, 15(20), 4934; https://doi.org/10.3390/rs15204934 - 12 Oct 2023
Viewed by 845
Abstract
To address the demands of precision agriculture and the measurement of plant photosynthetic response and nitrogen status, it is necessary to employ advanced methods for estimating chlorophyll content quickly and non-destructively at a large scale. Therefore, we explored the use of both linear regression and machine learning methods to improve the prediction of leaf chlorophyll content (LCC) in citrus trees through the analysis of hyperspectral reflectance data in a field experiment. The relationship between phenology and LCC estimation was also tested. The LCC of citrus tree leaves in five growth seasons (May, June, August, October, and December) was measured alongside leaf hyperspectral reflectance. The measured LCC data and spectral parameters were used for estimating LCC using univariate linear regression (ULR), multivariate linear regression (MLR), random forest regression (RFR), K-nearest neighbor regression (KNNR), and support vector regression (SVR). The results revealed the following: the MLR and machine learning models (RFR, KNNR, SVR), in both October and December, performed well in LCC estimation with a coefficient of determination (R2) greater than 0.70. In August, the ULR model performed the best, achieving an R2 of 0.69 and a root mean square error (RMSE) of 8.92. However, the RFR model demonstrated the highest predictive power for estimating LCC in May, June, October, and December. Furthermore, the prediction accuracy was best with the RFR model with parameters VOG2 and Carte4 in October, achieving an R2 of 0.83 and an RMSE of 6.67. Our findings reveal that just a few spectral parameters can efficiently estimate LCC in citrus trees, showing substantial promise for implementation in large-scale orchards. Full article
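As a hedged illustration of the evaluation pipeline above, the sketch below fits a minimal K-nearest-neighbor regressor (the KNNR baseline) on synthetic "spectral parameter" data and reports R2 and RMSE. The data, coefficients, and k value are invented for demonstration only:

```python
import numpy as np

def knn_regress(X_train, y_train, X_test, k=3):
    """Predict each test sample as the mean target of its k nearest
    training samples (Euclidean distance) -- a minimal KNNR."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def r2_rmse(y_true, y_pred):
    """Coefficient of determination and root mean square error."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot, np.sqrt(np.mean((y_true - y_pred) ** 2))

# toy "spectral parameters" vs. chlorophyll content
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(60, 2))                  # e.g. two vegetation indices
y = 40 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 2, 60)
r2, rmse = r2_rmse(y[40:], knn_regress(X[:40], y[:40], X[40:]))
print(round(r2, 2), round(rmse, 2))
```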
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

14 pages, 7395 KiB  
Article
Simplified and High Accessibility Approach for the Rapid Assessment of Deforestation in Developing Countries: A Case of Timor-Leste
by Wonhee Cho and Chul-Hee Lim
Remote Sens. 2023, 15(18), 4636; https://doi.org/10.3390/rs15184636 - 21 Sep 2023
Viewed by 1325
Abstract
Forests are essential for sustaining ecosystems, regulating the climate, and providing economic benefits to human society. However, activities such as commercial practices, fuelwood collection, and land use changes have resulted in severe forest degradation and deforestation. Timor-Leste, a small island nation, faces environmental sustainability challenges due to land use changes, limited infrastructure, and agricultural practices. This study proposes a simplified and highly accessible approach to assess deforestation (SHAD) nationally using limited human and non-human resources such as experts, software, and hardware facilities. To assess deforestation in developing countries, we use open-source software (Dryad), employ the U-Net deep learning algorithm, and draw on open-source data generated from the Google Earth Engine platform to construct a time-series land cover classification model for Timor-Leste. In addition, we utilize the open-source land cover map as label data and satellite imagery as model training inputs, and our model demonstrates satisfactory performance in classifying time-series land cover. Next, we classified the land cover in Timor-Leste for 2016 and 2021 and verified that the forest classification achieved high accuracy, ranging from 0.79 to 0.89. Thereafter, we produced a deforestation map by comparing the two land cover maps. The estimated deforestation rate was 1.9% annually, with a primary concentration in the northwestern municipalities of Timor-Leste with dense population and human activities. This study demonstrates the potential of the SHAD approach to assess deforestation nationwide, particularly in countries with limited scientific experts and infrastructure. We anticipate that our study will support the development of management strategies for ecosystem sustainability, climate adaptation, and the conservation of economic benefits in various fields. Full article
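The deforestation mapping step (comparing two classified land cover maps) reduces to a per-pixel comparison. A toy sketch, with an assumed class coding (1 = forest):

```python
import numpy as np

def annual_deforestation_rate(lc_t0, lc_t1, years, forest_class=1):
    """Percent of the initial forest area lost per year between two maps."""
    forest0 = lc_t0 == forest_class
    lost = forest0 & (lc_t1 != forest_class)
    return 100.0 * lost.sum() / forest0.sum() / years

lc_2016 = np.array([[1, 1, 0], [1, 1, 1]])   # 1 = forest, 0 = non-forest
lc_2021 = np.array([[1, 0, 0], [1, 1, 0]])
print(annual_deforestation_rate(lc_2016, lc_2021, years=5))  # 8.0
```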
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

21 pages, 6239 KiB  
Article
The Classification of Hyperspectral Images: A Double-Branch Multi-Scale Residual Network
by Laiying Fu, Xiaoyong Chen, Saied Pirasteh and Yanan Xu
Remote Sens. 2023, 15(18), 4471; https://doi.org/10.3390/rs15184471 - 12 Sep 2023
Cited by 3 | Viewed by 1030
Abstract
With the continuous advancement of deep learning technology, researchers have made further progress in the hyperspectral image (HSI) classification domain. We propose a double-branch multi-scale residual network (DBMSRN) framework for HSI classification to improve classification accuracy and reduce the number of required training samples. The DBMSRN consists of two branches designed to extract spectral and spatial features from the HSI. Thus, to obtain more comprehensive feature information, we extracted additional local and global features at different scales by expanding the network width. Moreover, we also increased the network depth to capture deeper feature information. Based on this concept, we devise spectral multi-scale residuals and spatial multi-scale residuals within a double-branch architecture. Additionally, skip connections are employed to augment the context information of the network. We demonstrate that the proposed framework effectively enhances classification accuracy in scenarios with limited training samples through experimental analysis. The proposed framework achieves an overall accuracy of 98.67%, 98.09%, and 96.76% on the Pavia University (PU), Kennedy Space Center (KSC), and Indian Pines (IP) datasets, respectively, surpassing the classification accuracy of existing advanced frameworks under identical conditions. Full article
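Overall accuracy, the headline metric above, is simply the fraction of correctly labeled pixels. A minimal sketch on invented label arrays:

```python
import numpy as np

def overall_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the reference map."""
    return (pred == truth).mean()

pred  = np.array([0, 1, 2, 2, 1, 0, 2, 1])
truth = np.array([0, 1, 2, 1, 1, 0, 2, 2])
print(overall_accuracy(pred, truth))  # 0.75
```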
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

18 pages, 6170 KiB  
Article
The Spatiotemporal Distribution of NO2 in China Based on Refined 2DCNN-LSTM Model Retrieval and Factor Interpretability Analysis
by Ruming Chen, Jiashun Hu, Zhihao Song, Yixuan Wang, Xingzhao Zhou, Lin Zhao and Bin Chen
Remote Sens. 2023, 15(17), 4261; https://doi.org/10.3390/rs15174261 - 30 Aug 2023
Cited by 2 | Viewed by 898
Abstract
With the advancement of urbanization in China, effective control of pollutant emissions and air quality have become important goals in current environmental management. Nitrogen dioxide (NO2), as a precursor of tropospheric ozone and fine particulate matter, plays a significant role in atmospheric chemistry research and air pollution control. However, the uneven distribution of ground monitoring stations and the low temporal resolution of polar-orbiting satellites pose challenges for accurately assessing near-surface NO2 concentrations. To address this issue, a spatiotemporally refined NO2 retrieval model was established for China using the geostationary satellite Himawari-8. The spatiotemporal characteristics of NO2 were analyzed and its contributing factors were explored. Firstly, seven Himawari-8 channels sensitive to NO2 were selected by using forward feature selection based on information entropy. Subsequently, a 2DCNN-LSTM network model was constructed, incorporating the selected channels and meteorological variables as retrieval factors to estimate hourly NO2 in China from March 2018 to February 2020 (with a resolution of 0.05°, per hour). The performance evaluation demonstrates that the full-channel 2DCNN-LSTM model has good fitting capability and robustness (R2 = 0.74, RMSE = 10.93), and further improvements were achieved after channel selection (R2 = 0.87, RMSE = 6.84). The 10-fold cross-validation results indicate that the R2 between retrieval and measured values was above 0.85, the MAE was within 5.60, and the RMSE was within 7.90. R2 varied between 0.85 and 0.90, showing better validation at mid-day (R2 = 0.89) and in spring and fall transition seasons (R2 = 0.88 and R2 = 0.90). To investigate the cooperative effect of meteorological factors and other air pollutants on NO2, statistical methods (beta coefficients) were used to test the factor interpretability. Meteorological factors as well as other pollutants were analyzed.
From a statistical perspective, PM2.5, boundary layer height, and O3 were found to have the largest impacts on near-surface NO2 concentrations, with each standard deviation change in these factors leading to 0.28, 0.24, and 0.23 in standard deviations of near-surface NO2, respectively. The findings of this study contribute to a comprehensive understanding of the spatiotemporal distribution of NO2 and provide a scientific basis for formulating targeted air pollution policies. Full article
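The beta (standardized) coefficients used in the interpretability analysis can be computed by regressing z-scored variables on one another: each coefficient is the change in the response, in standard deviations, per one-standard-deviation change in a predictor. Below is a generic sketch on synthetic data; the predictor values and effect sizes are invented, not the paper's:

```python
import numpy as np

def beta_coefficients(X, y):
    """Standardized (beta) regression coefficients: OLS fit on z-scores."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

rng = np.random.default_rng(2)
pm25 = rng.normal(35, 10, 500)                 # hypothetical PM2.5 series
blh = rng.normal(800, 200, 500)                # hypothetical boundary layer height
no2 = 0.3 * pm25 - 0.02 * blh + rng.normal(0, 3, 500)
print(np.round(beta_coefficients(np.column_stack([pm25, blh]), no2), 2))
```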
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

18 pages, 6352 KiB  
Article
Super-Resolution Rural Road Extraction from Sentinel-2 Imagery Using a Spatial Relationship-Informed Network
by Yuanxin Jia, Xining Zhang, Ru Xiang and Yong Ge
Remote Sens. 2023, 15(17), 4193; https://doi.org/10.3390/rs15174193 - 25 Aug 2023
Cited by 1 | Viewed by 1195
Abstract
With the development of agricultural and rural modernization, the informatization of rural roads has been an inevitable requirement for promoting rural revitalization. To date, however, the vast majority of road extraction methods mainly focus on urban areas and rely on very high-resolution satellite or aerial images, whose costs are not yet affordable for large-scale rural areas. Therefore, a deep learning (DL)-based super-resolution mapping (SRM) method has been considered to relieve this dilemma by using freely available Sentinel-2 imagery. However, few DL-based SRM methods are suitable due to these methods only relying on the spectral features derived from remote sensing images, which is insufficient for the complex rural road extraction task. To solve this problem, this paper proposes a spatial relationship-informed super-resolution mapping network (SRSNet) for extracting roads in rural areas which aims to generate 2.5 m fine-scale rural road maps from 10 m Sentinel-2 images. Based on the common sense that rural roads often lead to rural settlements, the method adopts a feature enhancement module to enhance the capture of road features by incorporating the relative position relation between roads and rural settlements into the model. Experimental results show that the SRSNet can effectively extract road information, with significantly better results for elongated rural roads. The intersection over union (IoU) of the mapping results is 68.9%, which is 4.7% higher than that of the method without fusing settlement features. The extracted roads show more details in the areas with strong spatial relationships between the settlements and roads. Full article
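The intersection-over-union (IoU) score reported above can be computed from binary road masks as follows; a minimal sketch on toy arrays:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union for binary masks (e.g. road / non-road)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = (pred & truth).sum()
    union = (pred | truth).sum()
    return inter / union if union else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(pred, truth))  # 0.5: intersection 2 pixels, union 4 pixels
```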
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

17 pages, 6966 KiB  
Article
Siam-EMNet: A Siamese EfficientNet–MANet Network for Building Change Detection in Very High Resolution Images
by Liang Huang, Qiuyuan Tian, Bo-Hui Tang, Weipeng Le, Min Wang and Xianguang Ma
Remote Sens. 2023, 15(16), 3972; https://doi.org/10.3390/rs15163972 - 10 Aug 2023
Cited by 1 | Viewed by 1216
Abstract
With the development of very high resolution (VHR) remote sensing technology and deep learning, methods for detecting changes in buildings have made great progress. Despite this, there are still some problems with the incomplete detection of change regions and rough edges. To this end, a building change detection network for VHR remote sensing images based on Siamese EfficientNet B4-MANet (Siam-EMNet) is proposed. First, a bi-branch pretrained EfficientNet B4 encoder structure is constructed to enhance the performance of feature extraction, and rich shallow and deep information is obtained; then, the semantic information of the building is input into the MANet decoder, integrated with a dual attention mechanism, through the skip connection. The position-wise attention block (PAB) and multi-scale fusion attention block (MFAB) capture spatial relationships between pixels in the global view and channel relationships between layers. The integration of dual attention mechanisms ensures that the building contour is fully detected. The proposed method was evaluated on the LEVIR-CD dataset, and its precision, recall, accuracy, and F1-score were 92.00%, 88.51%, 95.71%, and 90.21%, respectively, representing the best overall performance compared to the BIT, CDNet, DSIFN, L-Unet, P2V-CD, and SNUNet methods. The efficacy of the proposed approach was thus verified. Full article
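Precision, recall, overall accuracy, and F1 all derive from the pixel-level confusion counts; the sketch below uses invented counts, not the LEVIR-CD results:

```python
def change_detection_scores(tp, fp, fn, tn):
    """Precision, recall, overall accuracy, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# hypothetical pixel counts for a 100 x 100 change map
p, r, acc, f1 = change_detection_scores(tp=880, fp=70, fn=120, tn=8930)
print(f"P={p:.2%} R={r:.2%} Acc={acc:.2%} F1={f1:.2%}")
```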
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

25 pages, 10359 KiB  
Article
On-Board Multi-Class Geospatial Object Detection Based on Convolutional Neural Network for High Resolution Remote Sensing Images
by Yanyun Shen, Di Liu, Junyi Chen, Zhipan Wang, Zhe Wang and Qingling Zhang
Remote Sens. 2023, 15(16), 3963; https://doi.org/10.3390/rs15163963 - 10 Aug 2023
Cited by 4 | Viewed by 1541
Abstract
Multi-class geospatial object detection in high-resolution remote sensing images has significant potential in various domains such as industrial production, military warning, disaster monitoring, and urban planning. However, the traditional process of remote sensing object detection involves several time-consuming steps, including image acquisition, image download, ground processing, and object detection. These steps may not be suitable for tasks with shorter timeliness requirements, such as military warning and disaster monitoring. Additionally, the transmission of massive data from satellites to the ground is limited by bandwidth, resulting in time delays and redundant information, such as cloud coverage images. To address these challenges and achieve efficient utilization of information, this paper proposes a comprehensive on-board multi-class geospatial object detection scheme. The proposed scheme consists of several steps. Firstly, the satellite imagery is sliced, and the PID-Net (Proportional-Integral-Derivative Network) method is employed to detect and filter out cloud-covered tiles. Subsequently, our Manhattan Intersection over Union (MIOU) loss-based YOLO (You Only Look Once) v7-Tiny method is used to detect remote-sensing objects in the remaining tiles. Finally, the detection results are mapped back to the original image, and the truncated NMS (Non-Maximum Suppression) method is utilized to filter out repeated and noisy boxes. To validate the reliability of the scheme, this paper creates a new dataset called DOTA-CD (Dataset for Object Detection in Aerial Images-Cloud Detection). Experiments were conducted on both ground and on-board equipment using the AIR-CD dataset, DOTA dataset, and DOTA-CD dataset. The results demonstrate the effectiveness of our method. Full article
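The truncated NMS step builds on standard greedy non-maximum suppression, which can be sketched as follows (plain NMS only; the paper's truncation variant and thresholds are not reproduced here):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    dropping any box that overlaps a kept box above iou_thresh.
    boxes: (n, 4) array of [x1, y1, x2, y2]."""
    area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # IoU between the kept box and every remaining candidate
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (area(boxes[i:i + 1])[0] + area(boxes[order[1:]]) - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # [0, 2]
```

The second box is suppressed because its IoU with the first (about 0.68) exceeds the threshold, while the third is kept as it does not overlap.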
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

21 pages, 5385 KiB  
Article
SMBCNet: A Transformer-Based Approach for Change Detection in Remote Sensing Images through Semantic Segmentation
by Jiangfan Feng, Xinyu Yang, Zhujun Gu, Maimai Zeng and Wei Zheng
Remote Sens. 2023, 15(14), 3566; https://doi.org/10.3390/rs15143566 - 16 Jul 2023
Cited by 1 | Viewed by 1624
Abstract
Remote sensing change detection (RSCD) is crucial for our understanding of the dynamic pattern of the Earth’s surface and human influence. Recently, transformer-based methodologies have advanced RSCD tasks owing to their powerful global modeling capabilities. Nevertheless, they remain excessively parameterized and severely constrained by time and computational resources. Here, we present a transformer-based RSCD model called the Segmentation Multi-Branch Change Detection Network (SMBCNet). Our proposed approach combines a hierarchically structured transformer encoder with a cross-scale enhancement module (CEM) to extract global information with lower complexity. To account for the diverse nature of changes, we introduce a plug-and-play multi-branch change fusion module (MCFM) that integrates temporal features. Within this module, we transform the change detection task into a semantic segmentation problem. Moreover, we introduce a Temporal Feature Aggregation Module (TFAM) to facilitate integrating features from diverse spatial scales. Our results demonstrate that semantic segmentation is an effective solution to change detection (CD) problems in remote sensing images. Full article
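One simple way to recast bi-temporal change detection as segmentation, loosely in the spirit of the multi-branch temporal fusion described above (not the actual MCFM), is to stack the two temporal feature maps together with their difference before a segmentation head:

```python
import numpy as np

def temporal_fusion(f_t1, f_t2):
    """Minimal temporal fusion: concatenate both time steps and their
    absolute difference along the channel axis, so a segmentation head
    can label 'change' pixels like any other class."""
    return np.concatenate([f_t1, f_t2, np.abs(f_t1 - f_t2)], axis=-1)

f1 = np.random.default_rng(3).normal(size=(8, 8, 16))
f2 = f1.copy()
f2[2:4, 2:4] += 5.0          # simulate a changed region
fused = temporal_fusion(f1, f2)
print(fused.shape)  # (8, 8, 48)
```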
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

21 pages, 9593 KiB  
Article
Pixel-Wise Attention Residual Network for Super-Resolution of Optical Remote Sensing Images
by Yali Chang, Gang Chen and Jifa Chen
Remote Sens. 2023, 15(12), 3139; https://doi.org/10.3390/rs15123139 - 15 Jun 2023
Cited by 2 | Viewed by 1406
Abstract
Deep-learning-based image super-resolution opens a new direction for the remote sensing field to reconstruct further information and details from captured images. However, most current SR works try to improve performance by increasing the complexity of the model, which results in significant computational costs and memory consumption. In this paper, we propose a lightweight model named the pixel-wise attention residual network for optical remote sensing images, which can effectively solve the super-resolution task of multi-satellite images. The proposed method consists of three modules: the feature extraction module, feature fusion module, and feature mapping module. First, the feature extraction module is responsible for extracting deep features from the input spatial bands with different spatial resolutions. Second, the feature fusion module with the pixel-wise attention mechanism generates weight coefficients for each pixel on the feature map and fully fuses the deep feature information. Third, the feature mapping module aims to maintain the fidelity of the spectrum by adding the fused residual feature map directly to the up-sampled low-resolution images. Compared with existing deep-learning-based methods, the major advantage of our method is that, for the first time, the pixel-wise attention mechanism is incorporated into the task of super-resolution fusion of remote sensing images, which effectively improves the performance of the fusion network. The accuracy assessment results show that our method achieved superior performance in root mean square error, signal-to-reconstruction ratio error, universal image quality index, and peak signal-to-noise ratio compared to competing approaches. The improvements in the signal-to-reconstruction ratio error and peak signal-to-noise ratio are significant, with respective increases of 0.15 and 0.629 dB for Sentinel-2 data, and 0.196 and 1 dB for Landsat data. Full article
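The peak signal-to-noise ratio used in such accuracy assessments is computed from the mean squared error between the reference and reconstructed images; a minimal sketch:

```python
import numpy as np

def psnr(reference, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

ref = np.linspace(0, 1, 16).reshape(4, 4)     # toy 4x4 reference image
noisy = np.clip(ref + 0.01, 0, 1)             # small constant perturbation
print(round(psnr(ref, noisy), 2))
```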
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)

26 pages, 6793 KiB  
Article
R-MFNet: Analysis of Urban Carbon Stock Change against the Background of Land-Use Change Based on a Residual Multi-Module Fusion Network
by Chunyang Wang, Kui Yang, Wei Yang, Haiyang Qiang, Huiyuan Xue, Bibo Lu and Peng Zhou
Remote Sens. 2023, 15(11), 2823; https://doi.org/10.3390/rs15112823 - 29 May 2023
Cited by 2 | Viewed by 2076
Abstract
Regional land-use change is the leading cause of ecosystem carbon stock change; it is essential to investigate the response of carbon stock to LUCC to achieve the strategic goal of “double carbon” in a region. This paper proposes a residual network algorithm, the Residual Multi-module Fusion Network (R-MFNet), to address the problems of blurred feature boundary information, low classification accuracy, and high noise, which are often encountered in traditional classification methods. The network uses an R-ASPP module to expand the receptive field of the feature map and extract sufficient, multi-scale target features; it uses the attention mechanism to assign weights to the multi-scale information of each channel and space; and it fully preserves the remote sensing image features extracted by the convolutional layers through residual connections. Using this classification network, the classification of three Landsat-TM/OLI images of Zhengzhou City (the capital of Henan Province), taken in 2001, 2009, and 2020, was realized. Compared with SVM, 2D-CNN, and deep residual networks (ResNet), the overall accuracy on the test dataset is increased by 10.07%, 3.96%, and 1.33%, respectively. The classification achieved using this method is closer to the real land surface, and its accuracy is higher than that of the finished-product data obtained using traditional classification methods, providing high-precision land-use classification data for the subsequent carbon storage estimation research. Based on the land-use classification data and the carbon density data corrected by meteorological data (temperature and precipitation), the InVEST model is used to analyze land-use change and its impact on carbon storage in the region. The results showed that, from 2001 to 2020, the carbon stock in the study area showed a downward trend, with a total decrease of 1.48 × 10⁷ t.
Over this 19-year period, the farmland area in Zhengzhou decreased by 1101.72 km², and the built land area increased sharply by 936.16 km². Land transfer accounted for 29.26% of the total area of Zhengzhou City from 2001 to 2009, and 31.20% from 2009 to 2020. The conversion of farmland to built land is the primary type of land transfer and the most important reason for the decrease in carbon stock. The research results can provide scientific data to support land-use management decisions and the protection of carbon storage functions in Zhengzhou and other cities worldwide undergoing rapid urbanization. Full article
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)
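The carbon bookkeeping this abstract describes (InVEST-style: per-class carbon densities multiplied by classified land-use areas) can be sketched in a few lines. The class names, densities, and areas below are hypothetical illustrations, not the paper's data:

```python
# InVEST-style carbon accounting sketch: total stock is the sum over
# land-use classes of (class area × summed per-pool carbon density).
# All numbers here are made up for illustration.

CARBON_DENSITY = {  # t C per km² for the (above, below, soil, dead) pools
    "farmland": (450, 120, 8000, 60),
    "built":    (100,  30, 4000, 10),
    "forest":   (3500, 900, 9000, 300),
}

def carbon_stock(areas_km2):
    """Total carbon stock (t) for a land-use map summarized as class areas."""
    return sum(
        area * sum(CARBON_DENSITY[cls])
        for cls, area in areas_km2.items()
    )

# Converting farmland to built land lowers the regional stock, which is
# the mechanism the study identifies for the observed decrease:
before = carbon_stock({"farmland": 3000.0, "built": 1000.0})
after  = carbon_stock({"farmland": 1900.0, "built": 1936.0})
assert after < before
```

Because the built-land pools store far less carbon than farmland pools in this toy table, any farmland-to-built transfer reduces the total, mirroring the trend reported for Zhengzhou.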
24 pages, 12514 KiB  
Article
A Stage-Adaptive Selective Network with Position Awareness for Semantic Segmentation of LULC Remote Sensing Images
by Wei Zheng, Jiangfan Feng, Zhujun Gu and Maimai Zeng
Remote Sens. 2023, 15(11), 2811; https://doi.org/10.3390/rs15112811 - 29 May 2023
Cited by 2 | Viewed by 1355
Abstract
Deep learning has proven to be highly successful at semantic segmentation of remote sensing images (RSIs); however, it remains challenging due to the significant intraclass variation and interclass similarity, which limit the accuracy and continuity of feature recognition in land use and land cover (LULC) applications. Here, we develop a stage-adaptive selective network that can significantly improve the segmentation accuracy and continuity for multiscale ground objects. Our proposed framework learns multiscale details through a dedicated attention method (SaSPE) and a transformer that work collectively. In addition, we enhance the feature extraction capability of the backbone network at both local and global scales by improving the window attention mechanism of the Swin Transformer. We experimentally demonstrate the success of this framework through quantitative and qualitative results. This study demonstrates the strong potential of incorporating prior knowledge into deep learning-based models for semantic segmentation of RSIs. Full article
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)
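The window attention that Swin-style backbones (as modified in this paper) build on computes self-attention independently inside each non-overlapping window rather than over the whole feature map. A minimal numpy sketch of that mechanism, assuming the map dimensions divide evenly by the window size (this is a didactic illustration of standard windowed scaled dot-product attention, not the paper's SaSPE module):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(feat, window=4):
    """Scaled dot-product self-attention within each non-overlapping
    window of a (H, W, C) feature map; H and W must divide by `window`."""
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(0, H, window):
        for j in range(0, W, window):
            # Flatten the window into (window*window, C) tokens.
            tokens = feat[i:i+window, j:j+window].reshape(-1, C)
            # Attention weights are computed only among tokens in this window.
            attn = softmax(tokens @ tokens.T / np.sqrt(C))
            out[i:i+window, j:j+window] = (attn @ tokens).reshape(window, window, C)
    return out

x = np.random.default_rng(0).normal(size=(8, 8, 16))
y = window_attention(x, window=4)
assert y.shape == x.shape
```

Restricting attention to windows keeps the cost linear in image size, which is why such backbones scale to large RSIs; the paper's contribution lies in how the window mechanism is improved, which this sketch does not reproduce.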
22 pages, 6980 KiB  
Article
Self-Supervised Remote Sensing Image Dehazing Network Based on Zero-Shot Learning
by Jianchong Wei, Yan Cao, Kunping Yang, Liang Chen and Yi Wu
Remote Sens. 2023, 15(11), 2732; https://doi.org/10.3390/rs15112732 - 24 May 2023
Cited by 4 | Viewed by 1591
Abstract
Traditional dehazing approaches that rely on prior knowledge exhibit limited efficacy when confronted with the intricacies of real-world hazy environments. While learning-based dehazing techniques necessitate large-scale datasets for effective model training, the acquisition of these datasets is time-consuming and laborious, and the resulting models may encounter a domain shift when processing real-world hazy images. To overcome the limitations of prior-based and learning-based dehazing methods, we propose a self-supervised remote sensing (RS) image-dehazing network based on zero-shot learning, where the self-supervised process avoids dense dataset requirements and the learning-based structures refine the artifacts in extracted image priors caused by complex real-world environments. The proposed method has three stages. The first stage involves pre-processing the input hazy image by utilizing a prior-based dehazing module; in this study, we employed the widely recognized dark channel prior (DCP) to obtain atmospheric light, a transmission map, and the preliminary dehazed image. In the second stage, we devised two convolutional neural networks, known as RefineNets, dedicated to enhancing the transmission map and the initial dehazed image. In the final stage, we generated a hazy image using the atmospheric light, the refined transmission map, and the refined dehazed image by following the haze imaging model. The meticulously crafted loss function encourages cycle-consistency between the regenerated hazy image and the input hazy image, thereby facilitating a self-supervised dehazing model. During the inference phase, the model undergoes training in a zero-shot manner to yield the haze-free image. Thorough experiments validate the substantial improvement of our method over the prior-based dehazing module and the efficiency of zero-shot training. Furthermore, assessments conducted on both uniform and non-uniform RS hazy images demonstrate the superiority of our proposed dehazing technique. Full article
(This article belongs to the Special Issue Convolutional Neural Network Applications in Remote Sensing II)
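The cycle-consistency in the final stage closes the loop through the classic haze imaging model, I(x) = J(x)·t(x) + A·(1 − t(x)): given a clean image J, transmission map t, and atmospheric light A, the hazy input can be regenerated and compared against the observation. A numpy sketch of that forward model and its DCP-style inversion (the paper's RefineNet CNNs are omitted; function names here are illustrative):

```python
import numpy as np

def regenerate_haze(J, t, A):
    """Haze imaging model I = J*t + A*(1 - t).
    J: (H, W, 3) clean image in [0, 1]; t: (H, W) transmission in (0, 1];
    A: (3,) atmospheric light."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def dehaze(I, t, A, t_min=0.1):
    """Invert the model, clamping t as DCP-based pipelines typically do
    to avoid dividing by near-zero transmission."""
    t = np.clip(t, t_min, 1.0)[..., None]
    return (I - A) / t + A

rng = np.random.default_rng(1)
J = rng.uniform(0.0, 1.0, size=(4, 4, 3))
t = rng.uniform(0.3, 0.9, size=(4, 4))
A = np.array([0.9, 0.9, 0.9])

I = regenerate_haze(J, t, A)
# Cycle-consistency: inverting the forward model recovers the clean image.
assert np.allclose(dehaze(I, t, A), J)
```

The self-supervised loss described in the abstract penalizes the discrepancy between the regenerated hazy image and the observed input, so no paired clean/hazy training data are needed.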