Geospatial Artificial Intelligence (GeoAI) in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 1682

Special Issue Editors

Guest Editor
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
Interests: remote sensing image interpretation; artificial intelligence; machine learning; computer vision

Guest Editor
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
Interests: remote sensing image interpretation; artificial intelligence; machine learning; computer vision

Guest Editor
Australian Institute for Machine Learning, University of Adelaide, Adelaide 5005, Australia
Interests: machine learning; computer vision and natural language processing

Guest Editor
Land Satellite Remote Sensing Application Center, Ministry of Natural Resources of the People’s Republic of China, Beijing 100048, China
Interests: remote sensing image interpretation; natural resource survey and monitoring; artificial intelligence

Special Issue Information

Dear Colleagues,

With the development of remote sensing technology, hundreds of millions of square kilometers of the Earth's surface can now be imaged every day, and geospatial analysis benefits greatly from this vast amount of remote sensing data. The goal of GeoAI is to enhance the processing of geospatial information by introducing artificial intelligence technologies. In recent years, an increasing number of researchers have employed GeoAI to accomplish the intelligent interpretation of remote sensing data, effectively lowering the barrier to entry for geospatial applications. In particular, the emergence of foundation models over the past two years has advanced the cognitive capabilities of AI, opening up new avenues for geospatial information processing in remote sensing. While some researchers have applied remote sensing foundation models to various recognition tasks with initial success, several challenging and promising directions remain open for further exploration.

This Special Issue targets studies on the most advanced GeoAI technologies in remote sensing and their diverse application tasks. Topics may range from the development of GeoAI models (especially foundation models) to the efficient generalization of these models in applications. We therefore welcome research on issues such as the embedding of remote sensing properties or geospatial knowledge, the robustness and transferability of foundation models, and forecasting for geospatial tasks.

In conjunction with the 2024 ISPRS TC I Contest on Intelligent Interpretation for Multi-modal Remote Sensing Application, this Special Issue is open to authors presenting papers at the Contest. Note that papers submitted to this Special Issue must not be identical to those presented at the Contest; instead, authors are encouraged to submit extended versions, typically two to three times longer, offering a more comprehensive presentation of their work, with enhanced techniques and methodologies, additional datasets, and expanded experimental sections.

Dr. Xian Sun
Dr. Wanxuan Lu
Dr. Lingqiao Liu
Dr. Shucheng You
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing foundation models with GeoAI
  • deep neural networks combining RS properties or geospatial knowledge
  • robust architectures for multiple geospatial tasks
  • GeoAI for remote sensing forecasting tasks
  • high-quality and large-scale datasets for remote sensing foundation models
  • applications such as smart cities, land-cover and ocean observation, environmental monitoring, and sustainable development

Published Papers (3 papers)


Research

26 pages, 12605 KiB  
Article
Active Bidirectional Self-Training Network for Cross-Domain Segmentation in Remote-Sensing Images
by Zhujun Yang, Zhiyuan Yan, Wenhui Diao, Yihang Ma, Xinming Li and Xian Sun
Remote Sens. 2024, 16(13), 2507; https://doi.org/10.3390/rs16132507 - 8 Jul 2024
Viewed by 407
Abstract
Semantic segmentation with cross-domain adaptation in remote-sensing images (RSIs) is crucial and mitigates the expense of manually labeling target data. However, the performance of existing unsupervised domain adaptation (UDA) methods is still significantly impacted by domain bias, leaving a considerable gap compared to supervised trained models. To address this, our work focuses on semi-supervised domain adaptation, selecting through active learning (AL) a small subset of target annotations that maximizes information to improve domain adaptation. Overall, we propose a novel active bidirectional self-training network (ABSNet) for cross-domain semantic segmentation in RSIs. ABSNet consists of two sub-stages: a multi-prototype active region selection (MARS) stage and a source-weighted class-balanced self-training (SCBS) stage. The MARS approach captures the diversity of the labeled source data by introducing multi-prototype density estimation based on Gaussian mixture models; we then measure inter-domain similarity to select complementary and representative target samples. After fine-tuning with the selected active samples, we apply the enhanced self-training strategy SCBS, which weights the training on source data to avoid the negative effects of interfering samples. We conduct extensive experiments on the LoveDA and ISPRS datasets to validate the superiority of our method over existing state-of-the-art domain-adaptive semantic segmentation methods.
(This article belongs to the Special Issue Geospatial Artificial Intelligence (GeoAI) in Remote Sensing)
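The multi-prototype selection idea in the abstract (model the source domain with several prototypes, then annotate the target samples least explained by that density) can be illustrated with a minimal NumPy sketch. This is our own simplified illustration, not the authors' implementation: the function, variable names, unit-covariance mixture, and synthetic data are all assumptions.

```python
# Hypothetical sketch of multi-prototype density scoring for active
# sample selection. One unit-Gaussian prototype per source class stands
# in for the paper's richer GMM-based density estimation.
import numpy as np

def select_active_samples(src_feats, src_labels, tgt_feats, budget=5):
    """Score target samples by log-likelihood under a mixture of unit
    Gaussians centered at per-class source prototypes, and return the
    least-likely (most complementary) samples to annotate."""
    protos = np.stack([src_feats[src_labels == c].mean(axis=0)
                       for c in np.unique(src_labels)])
    # Squared distance of every target sample to every prototype.
    d2 = ((tgt_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    # Mixture log-likelihood via a log-sum-exp for numerical stability.
    m = (-0.5 * d2).max(axis=1, keepdims=True)
    loglik = (m + np.log(np.exp(-0.5 * d2 - m).sum(axis=1,
                                                   keepdims=True))).ravel()
    return np.argsort(loglik)[:budget]

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 8))
labels = rng.integers(0, 4, size=200)                    # 4 source classes
tgt = np.vstack([rng.normal(0.0, 1.0, size=(50, 8)),     # in-distribution
                 rng.normal(5.0, 1.0, size=(10, 8))])    # shifted cluster
picked = select_active_samples(src, labels, tgt)
print(picked)  # indices fall in the shifted, dissimilar cluster (50-59)
```

Under this toy setup, the shifted cluster has a far lower likelihood under the source density, so the annotation budget is spent exactly where the domains disagree.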

29 pages, 26115 KiB  
Article
DCP-Net: A Distributed Collaborative Perception Network for Remote Sensing Semantic Segmentation
by Zhechao Wang, Peirui Cheng, Shujing Duan, Kaiqiang Chen, Zhirui Wang, Xinming Li and Xian Sun
Remote Sens. 2024, 16(13), 2504; https://doi.org/10.3390/rs16132504 - 8 Jul 2024
Viewed by 328
Abstract
Collaborative perception enhances onboard perceptual capability by integrating features from other platforms, effectively mitigating the loss of accuracy caused by a restricted observational range and vulnerability to interference. However, current implementations of collaborative perception overlook the prevalent issues of limited, low-reliability communication and misaligned observations in remote sensing. To address these problems, this article presents an innovative distributed collaborative perception network (DCP-Net) specifically designed for remote sensing applications. Firstly, a self-mutual information match module is proposed to identify collaboration opportunities and select suitable partners; this module prioritizes critical collaborative features and reduces redundant transmission, better adapting to the weak communication conditions of remote sensing. Secondly, a related feature fusion module is devised to tackle the misalignment between local and collaborative features caused by multi-angle observations, improving the quality of the fused features for the downstream task. We conduct extensive experiments and visualization analyses on three semantic segmentation datasets, namely Potsdam, iSAID, and DFC23. The results demonstrate that DCP-Net comprehensively outperforms existing collaborative perception methods, improving mIoU by 2.61% to 16.89% at the highest collaboration efficiency and achieving state-of-the-art performance.
(This article belongs to the Special Issue Geospatial Artificial Intelligence (GeoAI) in Remote Sensing)
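The partner-matching step described in the abstract (send features only to platforms whose views add information, to save bandwidth) can be sketched with a simple dissimilarity proxy. This is a hedged illustration only; the paper's self-mutual information match module is more sophisticated, and every name and score here is our own assumption.

```python
# Hypothetical sketch of bandwidth-aware partner selection for
# collaborative perception: rank candidate platforms by how different
# their pooled features are from ours, since redundant (near-identical)
# views are not worth transmitting.
import numpy as np

def select_partners(local_feat, partner_feats, k=1):
    """Return the indices of the k candidate partners whose features
    are most dissimilar to the local feature (cosine dissimilarity)."""
    local = local_feat / np.linalg.norm(local_feat)
    gains = []
    for f in partner_feats:
        f = f / np.linalg.norm(f)
        gains.append(1.0 - float(local @ f))  # dissimilarity proxy
    # Highest-gain partners first, truncated to the communication budget.
    return np.argsort(gains)[::-1][:k]

local = np.array([1.0, 0.0, 0.0])
partners = [np.array([1.0, 0.1, 0.0]),   # near-duplicate viewpoint
            np.array([0.0, 1.0, 0.0])]   # complementary viewpoint
print(select_partners(local, partners))  # → [1]
```

The complementary viewpoint wins the single communication slot, which is the intuition behind pruning redundant transmissions under weak links.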

22 pages, 7590 KiB  
Article
Resilient Factor Graph-Based GNSS/IMU/Vision/Odo Integrated Navigation Scheme Enhanced by Noise Approximate Gaussian Estimation in Challenging Environments
by Ziyue Li, Qian Meng, Zuliang Shen, Lihui Wang, Lin Li and Haonan Jia
Remote Sens. 2024, 16(12), 2176; https://doi.org/10.3390/rs16122176 - 15 Jun 2024
Viewed by 521
Abstract
The signal blockage and multipath effects of the Global Navigation Satellite System (GNSS) in urban canyon scenarios pose great technical challenges to the positioning and navigation of autonomous vehicles. In this paper, an improved factor graph optimization algorithm enhanced by a resilient noise model is proposed. The measurement noise is adjusted resiliently based on an approximate-Gaussian-distribution estimation: by estimating and adjusting the noise parameters of the measurement model, the error covariance matrix of the multi-sensor fusion positioning system is dynamically optimized to improve system accuracy. Firstly, according to the approximately Gaussian statistical properties of the GNSS/odometer velocity residual sequence, the measured data are divided into an approximate-Gaussian fitting region and an approximate-Gaussian convergence region. Secondly, for each interval of the measured data, the corresponding variational Bayesian network and Gaussian mixture model are used to estimate the innovation online. The noise covariance matrix of the adaptive factor-graph model is then dynamically optimized using the estimated noise parameters. Finally, the algorithm is implemented and verified on a simulation platform and in real-vehicle road tests using low-cost inertial navigation equipment, GNSS, an odometer, and vision. The experimental results show that, in a complex urban road environment, the proposed algorithm improves accuracy by up to 65.63%, 39.52%, and 42.95% for heading, position, and velocity, respectively, compared with the traditional factor graph fusion localization algorithm.
(This article belongs to the Special Issue Geospatial Artificial Intelligence (GeoAI) in Remote Sensing)
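The core mechanism in the abstract (inflate the measurement-noise covariance when the innovation sequence leaves its approximate-Gaussian region, so suspect GNSS epochs carry less weight in the factor graph) can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration; the paper's variational Bayesian and GMM estimators are far richer, and the window and threshold here are our own assumptions.

```python
# Hypothetical sketch of resilient measurement-noise adaptation from a
# sliding window of innovations (residuals), in the spirit of the
# abstract: nominal R inside the approximate-Gaussian region, inflated
# R when residual statistics indicate multipath or blockage.
import numpy as np

def adapt_noise_cov(residuals, r_nominal, window=20, k_sigma=3.0):
    """Estimate innovation variance over the last `window` residuals;
    keep the nominal variance while it stays within a k-sigma bound,
    otherwise return the inflated empirical variance so outlier epochs
    are down-weighted in the factor-graph optimization."""
    recent = np.asarray(residuals[-window:])
    sigma2 = float(np.var(recent))
    if sigma2 <= k_sigma**2 * r_nominal:
        return r_nominal          # approximate-Gaussian region: trust R
    return sigma2                 # degraded epoch: inflate R resiliently

rng = np.random.default_rng(1)
clean = list(rng.normal(0.0, 0.1, size=30))               # open-sky epochs
multipath = clean + list(rng.normal(0.0, 2.0, size=20))   # urban canyon
print(adapt_noise_cov(clean, r_nominal=0.01),
      adapt_noise_cov(multipath, r_nominal=0.01))
```

In a full pipeline, the returned variance would be written into the noise model of the corresponding GNSS factor before re-optimizing the graph, which is the dynamic covariance optimization the abstract describes.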
