

Recent Advances in High Resolution Remote Sensing Image Processing and Analysis: Methodology and Application

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (15 November 2023) | Viewed by 33975

Special Issue Editors


Guest Editor
1. Helmholtz Institute Freiberg for Resource Technology, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), D-09599 Freiberg, Germany
2. Institute of Advanced Research in Artificial Intelligence (IARAI), 1030 Wien, Austria
Interests: hyperspectral image interpretation; multisensor and multitemporal data fusion
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

With the rapid development of remote sensing sensors and platforms, multi-source high resolution remote sensing images (e.g., optical imagery, SAR, and LiDAR) are widely available, opening new opportunities for geo-related applications. These substantial quantities of data provide both scientific insight and scalability for monitoring natural disasters (e.g., floods, forest fires, and earthquakes) as well as human activities (e.g., land-use/land-cover classification, change detection). To process the enormous data volume and extract meaningful information, deep-learning-based techniques, such as convolutional neural networks, recurrent networks, attention mechanisms, and Transformers, have achieved ground-breaking performance in high resolution remote sensing image interpretation. However, several challenges and open issues remain to be addressed, including efficient solutions, novel technologies for feature representation, and robust large-scale applications. For this Special Issue, we encourage the submission of articles addressing advanced topics related to high resolution remote sensing image processing. Topics of interest include but are not limited to the following:

  • Fine-scale land-use/land-cover classification;
  • Data/knowledge driven methods for remote sensing imagery;
  • Fusion strategy of multi-modal remote sensing data;
  • Novel network structure for high resolution remote sensing image analysis;
  • Small object detection and tracking;
  • Change detection using time series data;
  • Transfer learning in high resolution remote sensing interpretation;
  • Data pre-processing methods for multi-modal remote sensing data;
  • High resolution remote sensing applications (e.g., agriculture, urban planning).

Dr. Qiqi Zhu
Prof. Dr. Danfeng Hong
Dr. Ce Zhang
Prof. Dr. Pedram Ghamisi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • high resolution
  • transfer learning
  • multi-modal data fusion
  • classification and segmentation
  • change detection
  • remote sensing application

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research

16 pages, 2220 KiB  
Article
DMCH: A Deep Metric and Category-Level Semantic Hashing Network for Retrieval in Remote Sensing
by Haiyan Huang, Qimin Cheng, Zhenfeng Shao, Xiao Huang and Liyuan Shao
Remote Sens. 2024, 16(1), 90; https://doi.org/10.3390/rs16010090 - 25 Dec 2023
Viewed by 1281
Abstract
The effectiveness of hashing methods in big-data retrieval has been proven, owing to their computational and storage efficiency. Recently, encouraged by the strong discriminative capability of deep learning in image representation, various deep hashing methodologies have emerged to enhance retrieval performance. However, maintaining the semantic richness inherent in remote sensing images (RSIs), characterized by their scene intricacy and category diversity, remains a significant challenge. In response to this challenge, we propose a novel two-stage deep metric and category-level semantic hashing network termed DMCH. First, it introduces a novel triple-selection strategy during the semantic metric learning process to optimize the utilization of triple-label information. Moreover, it inserts a hidden layer to enhance the latent correlation between similar hash codes via a designed category-level classification loss. In addition, it employs additional constraints to enforce the bit uncorrelation and bit balance of the generated hash codes. Furthermore, a progressive coarse-to-fine hash code sorting scheme is used for superior fine-grained retrieval and more effective hash function learning. Experimental results on three datasets illustrate the effectiveness and superiority of the proposed method. Full article
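The retrieval stage of any such hashing method ultimately ranks database images by the Hamming distance between binary codes. As illustrative background only (a generic sketch, not the DMCH implementation; all names are hypothetical), the ranking step and a bit-balance check can be written as:

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a binary query code."""
    dists = (query_code[None, :] != db_codes).sum(axis=1)
    return np.argsort(dists, kind="stable"), dists

def bit_balance(codes):
    """Per-bit mean activation; values near 0.5 indicate balanced bits."""
    return codes.mean(axis=0)

query = np.array([1, 0, 1, 1], dtype=np.uint8)
db = np.array([[1, 0, 1, 1],
               [0, 1, 0, 0],
               [1, 0, 0, 1]], dtype=np.uint8)
order, dists = hamming_rank(query, db)
print(order.tolist())  # [0, 2, 1]: exact match first, farthest code last
```

Because the distances are small integers, retrieval over millions of codes reduces to bit-wise XOR and popcount operations, which is the storage and speed advantage the abstract refers to.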

18 pages, 2846 KiB  
Article
Enhancing Object Detection in Remote Sensing: A Hybrid YOLOv7 and Transformer Approach with Automatic Model Selection
by Mahmoud Ahmed, Naser El-Sheimy, Henry Leung and Adel Moussa
Remote Sens. 2024, 16(1), 51; https://doi.org/10.3390/rs16010051 - 21 Dec 2023
Cited by 2 | Viewed by 2309
Abstract
In the remote sensing field, object detection holds immense value for applications such as land use classification, disaster monitoring, and infrastructure planning, where accurate and efficient identification of objects within images is essential for informed decision making. However, achieving object localization with high precision can be challenging even if minor errors exist at the pixel level, which can significantly impact the ground distance measurements. To address this critical challenge, our research introduces an innovative hybrid approach that combines the capabilities of the You Only Look Once version 7 (YOLOv7) and DEtection TRansformer (DETR) algorithms. By bridging the gap between local receptive field and global context, our approach not only enhances overall object detection accuracy, but also promotes precise object localization, a key requirement in the field of remote sensing. Furthermore, a key advantage of our approach is the introduction of an automatic selection module which serves as an intelligent decision-making component. This module optimizes the selection process between YOLOv7 and DETR, and further improves object detection accuracy. Finally, we validate the improved performance of our new hybrid approach through empirical experimentation, and thus confirm its contribution to the field of target recognition and detection in remote sensing images. Full article

17 pages, 7735 KiB  
Article
Efficient Wheat Lodging Detection Using UAV Remote Sensing Images and an Innovative Multi-Branch Classification Framework
by Kai Zhang, Rundong Zhang, Ziqian Yang, Jie Deng, Ahsan Abdullah, Congying Zhou, Xuan Lv, Rui Wang and Zhanhong Ma
Remote Sens. 2023, 15(18), 4572; https://doi.org/10.3390/rs15184572 - 17 Sep 2023
Cited by 1 | Viewed by 1475
Abstract
Wheat lodging has a significant impact on yields and quality, necessitating the accurate acquisition of lodging information for effective disaster assessment and damage evaluation. This study presents a novel approach for wheat lodging detection in large and heterogeneous fields using UAV remote sensing images. A comprehensive dataset spanning an area of 2.3117 km² was meticulously collected and labeled, constituting a valuable resource for this study. Through a comprehensive comparison of algorithmic models, remote sensing data types, and model frameworks, this study demonstrates that the Deeplabv3+ model outperforms various other models, including U-net, Bisenetv2, FastSCN, RTFormer, and HRNet, achieving a noteworthy F1 score of 90.22% for detecting wheat lodging. Intriguingly, by leveraging RGB image data alone, the current model achieves high accuracy in wheat lodging detection compared to models trained with multispectral datasets at the same resolution. Moreover, we introduce an innovative multi-branch binary classification framework that surpasses the traditional single-branch multi-classification framework. The proposed framework yielded an outstanding F1 score of 90.30% for detecting wheat lodging and an accuracy of 86.94% for area extraction of wheat lodging, surpassing the single-branch multi-classification framework by an improvement of 7.22%. Significantly, the present comprehensive experimental results showcase the capacity of UAVs and deep learning to detect wheat lodging in expansive areas, demonstrating high efficiency and cost-effectiveness under heterogeneous field conditions. This study offers valuable insights for leveraging UAV remote sensing technology to identify post-disaster damage areas and assess the extent of the damage. Full article

21 pages, 5071 KiB  
Article
C2S-RoadNet: Road Extraction Model with Depth-Wise Separable Convolution and Self-Attention
by Anchao Yin, Chao Ren, Zhiheng Yan, Xiaoqin Xue, Ying Zhou, Yuanyuan Liu, Jiakai Lu and Cong Ding
Remote Sens. 2023, 15(18), 4531; https://doi.org/10.3390/rs15184531 - 14 Sep 2023
Cited by 3 | Viewed by 1347
Abstract
In order to effectively utilize acquired remote sensing imagery and improve the completeness of information extraction, we propose a new road extraction model called C2S-RoadNet. C2S-RoadNet was designed to enhance the feature extraction capability by combining depth-wise separable convolution with lightweight asymmetric self-attention based on encoder and decoder structures. C2S-RoadNet is able to establish long-distance dependencies and fully utilize global information, and it better extracts road information. Based on the lightweight asymmetric self-attention network, a multi-scale adaptive weight module was designed to aggregate information at different scales. The use of adaptive weights can fully harness features at different scales to improve the model’s extraction performance. The strengthening of backbone information plays an important role in the extraction of road main branch information, which can effectively improve the integrity of road information. Compared with existing deep learning algorithms based on encoder–decoder, experimental results on various public road datasets show that the C2S-RoadNet model can produce more complete road extraction, especially when faced with scenarios involving occluded roads or complex lighting conditions. On the Massachusetts road dataset, the PA, F1 score, and IoU reached 98%, 77%, and 72%, respectively. Furthermore, on the DeepGlobe dataset, the PA, F1 score, and IoU reached 98%, 78%, and 64%, respectively. The objective performance evaluation indicators also significantly improved on the LSRV dataset, and the PA, F1 score, and IoU reached 96%, 82%, and 71%, respectively. Full article
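The lightweight character of depth-wise separable convolution, on which C2S-RoadNet builds, comes from factoring a standard convolution into a per-channel (depth-wise) step and a 1×1 (point-wise) step. A minimal sketch of the resulting parameter-count comparison (illustrative only; bias terms ignored, sizes chosen arbitrarily):

```python
def conv_param_count(c_in, c_out, k):
    """Weights (ignoring biases) for a standard k x k convolution versus a
    depth-wise separable one (depth-wise k x k, then point-wise 1 x 1)."""
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out
    return standard, separable

std, sep = conv_param_count(64, 128, 3)
print(std, sep)  # 73728 8768: roughly an 8x reduction at these sizes
```

This reduction is what leaves a parameter budget free for the self-attention and multi-scale weighting modules the abstract describes.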

22 pages, 5055 KiB  
Article
Road Extraction from High-Resolution Remote Sensing Images via Local and Global Context Reasoning
by Jie Chen, Libo Yang, Hao Wang, Jingru Zhu, Geng Sun, Xiaojun Dai, Min Deng and Yan Shi
Remote Sens. 2023, 15(17), 4177; https://doi.org/10.3390/rs15174177 - 25 Aug 2023
Cited by 3 | Viewed by 2471
Abstract
Road extraction from high-resolution remote sensing images is a critical task in image understanding and analysis, yet it poses significant challenges because of road occlusions caused by vegetation, buildings, and shadows. Deep convolutional neural networks have emerged as the leading approach for road extraction because of their exceptional feature representation capabilities. However, existing methods often yield incomplete and disjointed road extraction results. To address this issue, we propose CR-HR-RoadNet, a novel high-resolution road extraction network that incorporates local and global context reasoning. In this work, we introduce a road-adapted high-resolution network as the feature encoder, effectively preserving intricate details of narrow roads and spatial information. To capture multi-scale local context information and model the interplay between roads and background environments, we integrate multi-scale features with residual learning in a specialized multi-scale feature representation module. Moreover, to enable efficient long-range dependencies between different dimensions and reason the correlation between various road segments, we employ a lightweight coordinate attention module as a global context-aware algorithm. Extensive quantitative and qualitative experiments on three datasets demonstrate that CR-HR-RoadNet achieves superior extraction accuracy across various road datasets, delivering road extraction results with enhanced completeness and continuity. The proposed method holds promise for advancing road extraction in challenging remote sensing scenarios and contributes to the broader field of deep-learning-based image analysis for geospatial applications. Full article

19 pages, 5322 KiB  
Article
Learning Adversarially Robust Object Detector with Consistency Regularization in Remote Sensing Images
by Yang Li, Yuqiang Fang, Wanyun Li, Bitao Jiang, Shengjin Wang and Zhi Li
Remote Sens. 2023, 15(16), 3997; https://doi.org/10.3390/rs15163997 - 11 Aug 2023
Cited by 1 | Viewed by 1621
Abstract
Object detection in remote sensing has developed rapidly and has been applied in many fields, but it is known to be vulnerable to adversarial attacks. Improving the robustness of models has become a key issue for reliable application deployment. This paper proposes a robust object detector for remote sensing images (RSIs) to mitigate the performance degradation caused by adversarial attacks. For remote sensing objects, multi-dimensional convolution is utilized to extract both specific features and consistency features from clean images and adversarial images dynamically and efficiently. This enhances the feature extraction ability and thus enriches the context information used for detection. Furthermore, regularization loss is proposed from the perspective of image distribution. This can separate consistent features from the mixed distributions for reconstruction to assure detection accuracy. Experimental results obtained using different datasets (HRSC, UCAS-AOD, and DIOR) demonstrate that the proposed method effectively improves the robustness of detectors against adversarial attacks. Full article

24 pages, 20605 KiB  
Article
Towards Feature Decoupling for Lightweight Oriented Object Detection in Remote Sensing Images
by Chenwei Deng, Donglin Jing, Yuqi Han, Zhiyuan Deng and Hong Zhang
Remote Sens. 2023, 15(15), 3801; https://doi.org/10.3390/rs15153801 - 30 Jul 2023
Cited by 4 | Viewed by 1856
Abstract
Recently, improvements in detection performance have relied on deeper convolutional layers and complex convolutional structures in remote sensing images, which significantly increases the storage space and computational complexity of the detector. Although previous work has designed various novel lightweight convolutions, when these convolutional structures are applied to remote sensing detection tasks, the inconsistency between features and targets as well as between features and tasks in the detection architecture is often ignored: (1) The features extracted by convolution sliding in a fixed direction make it difficult to effectively model targets with arbitrary direction distribution, which leads to the detector needing more parameters to encode direction information and the network parameters being highly redundant; (2) The detector shares features from the backbone, but the classification task requires rotation-invariant features while the regression task requires rotation-sensitive features. This inconsistency between the tasks can lead to inefficient convolutional structures. Therefore, this paper proposes a detector that uses Feature Decoupling for Lightweight Oriented Object Detection (FDLO-Det). Specifically, we constructed a rotational separable convolution that extracts rotationally equivariant features while significantly compressing network parameters and computational complexity through highly shared parameters. Next, we introduced an orthogonal polarization transformation module that decomposes rotationally equivariant features in both horizontal and vertical orthogonal directions, and used polarization functions to filter out the features required for the classification and regression tasks, effectively improving detector performance. Extensive experiments on DOTA, HRSC2016, and UCAS-AOD show that the proposed detector achieves the best performance and an effective balance between computational complexity and detection accuracy. Full article

30 pages, 7722 KiB  
Article
MS-AGAN: Road Extraction via Multi-Scale Information Fusion and Asymmetric Generative Adversarial Networks from High-Resolution Remote Sensing Images under Complex Backgrounds
by Shaofu Lin, Xin Yao, Xiliang Liu, Shaohua Wang, Hua-Min Chen, Lei Ding, Jing Zhang, Guihong Chen and Qiang Mei
Remote Sens. 2023, 15(13), 3367; https://doi.org/10.3390/rs15133367 - 30 Jun 2023
Cited by 7 | Viewed by 2522
Abstract
Extracting roads from remote sensing images is of significant importance for automatic road network updating, urban planning, and construction. However, various factors in complex scenes (e.g., high vegetation coverage occlusions) may lead to fragmentation in the extracted road networks and also affect the robustness of road extraction methods. This study proposes a multi-scale road extraction method with asymmetric generative adversarial learning (MS-AGAN). First, we design an asymmetric GAN with a multi-scale feature encoder to better utilize the context information in high-resolution remote sensing images (HRSIs). Atrous spatial pyramid pooling (ASPP) and feature fusion are integrated into the asymmetric encoder–decoder structure to avoid feature redundancy caused by multi-level cascading operations and enhance the generator network’s ability to extract fine-grained road information at the pixel level. Second, to maintain road connectivity, topologic features are considered in the pixel segmentation process. A linear structural similarity loss (LSSIM) is introduced into the loss function of MS-AGAN, which guides MS-AGAN to generate more accurate segmentation results. Finally, to fairly evaluate the performance of deep models under complex backgrounds, the Bayesian error rate (BER) is introduced into the field of road extraction for the first time. Experiments are conducted via Gaofen-2 (GF-2) high-resolution remote sensing images with high vegetation coverage in the Daxing District of Beijing, China, and the public DeepGlobe dataset. The performance of MS-AGAN is compared with a list of advanced models, including RCFSNet, CoANet, UNet, DeepLabV3+, and DiResNet. The final results show that (1) with respect to road extraction performance, the Recall, F1, and IoU values of MS-AGAN on the Daxing dataset are 2.17%, 0.04%, and 2.63% higher than the baselines. On DeepGlobe, the Recall, F1, and IoU of MS-AGAN improve by 1.12%, 0.42%, and 0.25%, respectively. (2) On road connectivity, the Conn index of MS-AGAN from the Daxing dataset is 46.39%, with an improvement of 0.62% over the baselines, and the Conn index of MS-AGAN on DeepGlobe is 70.08%, holding an improvement of 1.73% over CoANet. The quantitative and qualitative analyses both demonstrate the superiority of MS-AGAN in preserving road connectivity. (3) In particular, the BER of MS-AGAN is 20.86% over the Daxing dataset with a 0.22% decrease compared to the best baselines and 11.77% on DeepGlobe with a 0.85% decrease compared to the best baselines. The proposed MS-AGAN provides an efficient, cost-effective, and reliable method for the dynamic updating of road networks via HRSIs. Full article

18 pages, 2976 KiB  
Article
Modified Dynamic Routing Convolutional Neural Network for Pan-Sharpening
by Kai Sun, Jiangshe Zhang, Junmin Liu, Shuang Xu, Xiangyong Cao and Rongrong Fei
Remote Sens. 2023, 15(11), 2869; https://doi.org/10.3390/rs15112869 - 31 May 2023
Cited by 1 | Viewed by 1558
Abstract
Based on deep learning, various pan-sharpening models have achieved excellent results. However, most of them adopt simple addition or concatenation operations to merge the information of low spatial resolution multi-spectral (LRMS) images and panchromatic (PAN) images, which may cause a loss of detailed information. To tackle this issue, inspired by capsule networks, we propose a plug-and-play layer named modified dynamic routing layer (MDRL), which modifies the information transmission mode of capsules to effectively fuse LRMS images and PAN images. Concretely, the lower-level capsules are generated by applying transform operation to the features of LRMS images and PAN images, which preserve the spatial location information. Then, the dynamic routing algorithm is modified to adaptively select the lower-level capsules to generate the higher-level capsule features to represent the fusion of LRMS images and PAN images, which can effectively avoid the loss of detailed information. In addition, the previous addition and concatenation operations are illustrated as special cases of our MDRL. Based on MIPSM with addition operations and DRPNN with concatenation operations, two modified dynamic routing models named MDR–MIPSM and MDR–DRPNN are further proposed for pan-sharpening. Extensive experimental results demonstrate that the proposed method can achieve remarkable spectral and spatial quality. Full article
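The MDRL modifies the routing step of capsule networks. For context, the standard routing-by-agreement iteration of Sabour et al. (the baseline the authors modify, not their MDRL itself) can be sketched for a single higher-level capsule as:

```python
import numpy as np

def squash(s, eps=1e-8):
    """Shrink a vector to norm < 1 while preserving its direction."""
    n2 = float((s ** 2).sum())
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iter=3):
    """u_hat: (num_lower, dim) prediction vectors from lower-level capsules."""
    b = np.zeros(u_hat.shape[0])               # routing logits
    for _ in range(n_iter):
        c = np.exp(b - b.max())
        c /= c.sum()                           # coupling coefficients (softmax)
        v = squash((c[:, None] * u_hat).sum(axis=0))
        b = b + u_hat @ v                      # reward agreeing predictions
    return v

rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(8, 4)))   # 8 lower capsules, 4-D poses
```

The adaptive selection of lower-level capsules described in the abstract replaces the uniform initialization and update of the logits `b`; addition and concatenation correspond to fixed, non-iterative choices of the coupling coefficients `c`.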

23 pages, 27433 KiB  
Article
Land Subsidence in a Coastal City Based on SBAS-InSAR Monitoring: A Case Study of Zhuhai, China
by Huimin Sun, Hongxia Peng, Min Zeng, Simiao Wang, Yujie Pan, Pengcheng Pi, Zixuan Xue, Xinwen Zhao, Ao Zhang and Fengmei Liu
Remote Sens. 2023, 15(9), 2424; https://doi.org/10.3390/rs15092424 - 5 May 2023
Cited by 8 | Viewed by 2963
Abstract
The superimposed effects of sea level rise caused by global warming and land subsidence seriously threaten the sustainable development of coastal cities. In recent years, an important coastal city in China, Zhuhai, has been suffering from severe and widespread land subsidence; however, the characteristics, triggers, and vulnerability assessment of ground subsidence in Zhuhai are still unclear. Therefore, we used the SBAS-InSAR technique to process 51 Sentinel-1A images to monitor the land subsidence in Zhuhai during the period from August 2016 to June 2019. The results showed that there was extensive land subsidence in the study area, with a maximum rate of −109.75 mm/yr. The surface had sequentially undergone a process of minor uplift and decline fluctuation, sharp settlement, and stable subsidence. The distribution and evolution of land subsidence were controlled by tectonic fractures and triggered by the thickness of soft soil, the intensity of groundwater development, and the seasonal changes of atmospheric precipitation. The comprehensive index method and the analytic hierarchy process were applied to derive extremely high subsidence vulnerability in several village communities and some traffic arteries in Zhuhai. Our research provides a theoretical basis for urban disaster prevention in Zhuhai and the construction planning of coastal cities around the world. Full article

26 pages, 2303 KiB  
Article
A CNN-Transformer Network Combining CBAM for Change Detection in High-Resolution Remote Sensing Images
by Mengmeng Yin, Zhibo Chen and Chengjian Zhang
Remote Sens. 2023, 15(9), 2406; https://doi.org/10.3390/rs15092406 - 4 May 2023
Cited by 16 | Viewed by 4816
Abstract
Current deep learning-based change detection approaches mostly produce convincing results by introducing attention mechanisms to traditional convolutional networks. However, given the limitation of the receptive field, convolution-based methods fall short of fully modelling global context and capturing long-range dependencies, thus insufficient in discriminating pseudo changes. Transformers have an efficient global spatio-temporal modelling capability, which is beneficial for the feature representation of changes of interest. However, the lack of detailed information may cause the transformer to locate the boundaries of changed regions inaccurately. Therefore, in this article, a hybrid CNN-transformer architecture named CTCANet, combining the strengths of convolutional networks, transformer, and attention mechanisms, is proposed for high-resolution bi-temporal remote sensing image change detection. To obtain high-level feature representations that reveal changes of interest, CTCANet utilizes tokenizer to embed the features of each image extracted by convolutional network into a sequence of tokens, and the transformer module to model global spatio-temporal context in token space. The optimal bi-temporal information fusion approach is explored here. Subsequently, the reconstructed features carrying deep abstract information are fed to the cascaded decoder to aggregate with features containing shallow fine-grained information, through skip connections. Such an aggregation empowers our model to maintain the completeness of changes and accurately locate small targets. Moreover, the integration of the convolutional block attention module enables the smoothing of semantic gaps between heterogeneous features and the accentuation of relevant changes in both the channel and spatial domains, resulting in more impressive outcomes. The performance of the proposed CTCANet surpasses that of recent certain state-of-the-art methods, as evidenced by experimental results on two publicly accessible datasets, LEVIR-CD and SYSU-CD. Full article
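The convolutional block attention module (CBAM) referenced in this abstract gates features along the channel and spatial dimensions. A minimal numpy sketch of its channel-attention half (illustrative only; the weight shapes and reduction ratio here are hypothetical, not taken from CTCANet):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w_down, w_up):
    """feat: (C, H, W). A shared two-layer MLP scores the average- and
    max-pooled channel descriptors; their summed score gates each channel."""
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    score = (w_up @ np.maximum(w_down @ avg, 0)
             + w_up @ np.maximum(w_down @ mx, 0))
    gate = sigmoid(score)                # one weight in (0, 1) per channel
    return feat * gate[:, None, None], gate

rng = np.random.default_rng(1)
feat = rng.normal(size=(16, 8, 8))
w_down = rng.normal(size=(4, 16)) * 0.1  # reduction ratio 4
w_up = rng.normal(size=(16, 4)) * 0.1
out, gate = channel_attention(feat, w_down, w_up)
```

The spatial half of CBAM is analogous: channel-wise average- and max-pooled maps are stacked and passed through a convolution to produce a per-pixel gate.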

21 pages, 4310 KiB  
Article
Smooth GIoU Loss for Oriented Object Detection in Remote Sensing Images
by Xiaoliang Qian, Niannian Zhang and Wei Wang
Remote Sens. 2023, 15(5), 1259; https://doi.org/10.3390/rs15051259 - 24 Feb 2023
Cited by 20 | Viewed by 2838
Abstract
Oriented object detection (OOD) can locate objects with arbitrary orientations in remote sensing images (RSIs) more accurately than horizontal object detection. The most commonly used bounding box regression (BBR) loss in OOD is smooth L1 loss, which requires the precondition that the spatial parameters be independent of one another; this independence is an ideal that is not achievable in practice. To avoid this problem, various IoU-based BBR losses have been widely used in OOD; however, their relationships with the IoU are approximately linear. Consequently, the gradient value, i.e., the learning intensity, cannot be dynamically adjusted with the IoU, which restricts the accuracy of object localization. To handle this problem, a novel BBR loss, named smooth generalized intersection over union (GIoU) loss, is proposed. Its contributions are twofold. First, smooth GIoU loss applies more appropriate learning intensities over different ranges of GIoU values to address the above problem, and its design scheme, like its core idea, can be generalized to other IoU-based BBR losses. Second, the existing computational scheme of GIoU loss is modified to fit OOD. An ablation study of smooth GIoU loss validates the effectiveness of its design scheme. Comprehensive comparisons on two RSI datasets demonstrate that the proposed smooth GIoU loss is superior to the BBR losses adopted by existing OOD methods and can be generalized to various kinds of OOD methods.
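The GIoU metric underlying this loss can be sketched for ordinary axis-aligned boxes as follows. This is an illustrative baseline only: the paper targets oriented boxes, and the piecewise smoothing schedule shown here (quadratic near the optimum, linear elsewhere, echoing smooth L1) is a hypothetical stand-in for the paper's actual smooth GIoU formulation.

```python
def giou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2). GIoU = IoU - |C \ (A ∪ B)| / |C|,
    # where C is the smallest box enclosing both A and B.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    cw = max(ax2, bx2) - min(ax1, bx1)   # enclosing box C
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area

def smooth_giou_loss(g, beta=0.5):
    # Hypothetical piecewise form: a gentler quadratic gradient when the
    # prediction is already close (small 1 - GIoU), linear otherwise, so the
    # learning intensity varies with the GIoU instead of staying constant.
    x = 1.0 - g          # zero when the boxes coincide exactly
    if x < beta:
        return 0.5 * x * x / beta
    return x - 0.5 * beta
```

Unlike a plain `1 - GIoU` loss, whose gradient magnitude is constant, this piecewise form tapers the gradient as the boxes converge, which is the dynamic-learning-intensity behavior the abstract motivates.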

25 pages, 11314 KiB  
Article
Application of a Novel Multiscale Global Graph Convolutional Neural Network to Improve the Accuracy of Forest Type Classification Using Aerial Photographs
by Huiqing Pei, Toshiaki Owari, Satoshi Tsuyuki and Yunfang Zhong
Remote Sens. 2023, 15(4), 1001; https://doi.org/10.3390/rs15041001 - 11 Feb 2023
Cited by 13 | Viewed by 3142
Abstract
The accurate classification of forest types is critical for sustainable forest management. In this study, a novel multiscale global graph convolutional neural network (MSG-GCN) was compared with random forest (RF), U-Net, and U-Net++ models for the classification of natural mixed forest (NMX), natural broadleaved forest (NBL), and conifer plantation (CP) using very-high-resolution aerial photographs from the University of Tokyo Chiba Forest in central Japan. The MSG-GCN architecture is novel in the following respects: the convolutional kernel scales of the encoder differ from those of other models; local attention replaces the conventional U-Net++ skip connection; a multiscale graph convolutional block is embedded in the final layer of the encoder module; and multiple decoding layers are spliced together to preserve both high- and low-level feature information and to improve decisions at boundary cells. The MSG-GCN achieved higher classification accuracy than other state-of-the-art (SOTA) methods. Classification accuracy was lower for NMX than for NBL and CP. The RF method produced severe salt-and-pepper noise, while the U-Net and U-Net++ methods frequently produced error patches, with rough, blurred edges between different forest types. In contrast, the MSG-GCN method yielded fewer misclassified patches and clear edges between forest types. Most areas misclassified by MSG-GCN lay on edges, whereas the misclassified patches of U-Net and U-Net++ were randomly distributed in interior areas. We made full use of artificial intelligence and very-high-resolution remote sensing data to create accurate maps that aid forest management and facilitate efficient, accurate forest resource inventory-taking in Japan.
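The basic graph-convolution propagation rule that a block like the one embedded in MSG-GCN builds on can be sketched as follows. This is a minimal single-layer illustration with symmetric normalization; the paper's multiscale, global variant is more elaborate, and the toy graph, identity features, and identity weights below are assumptions for demonstration only.

```python
import numpy as np

def graph_conv(x, adj, weight):
    # One graph convolution layer over node features.
    # x: (N, F) node features, adj: (N, N) adjacency, weight: (F, F_out).
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(norm @ x @ weight, 0.0)   # ReLU activation

# Toy 3-node fully connected graph: each output row averages over all
# neighbors (plus the self-loop), mixing information across the graph.
adj = np.ones((3, 3)) - np.eye(3)
out = graph_conv(np.eye(3), adj, np.eye(3))
```

In a segmentation encoder, the nodes would correspond to feature-map cells or regions, letting the block propagate context beyond the fixed receptive field of ordinary convolutions.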
