Multi-platform and Multi-modal Remote Sensing Data Fusion with Advanced Deep Learning Techniques (Second Edition)

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: 30 October 2024

Special Issue Editors


Guest Editor
Nanjing University of Information Science & Technology, Nanjing, China
Interests: computer vision; multimedia forensics; digital

Guest Editor
School of Computer Science, Nanjing University of Information Science and Technology, No. 219 Ningliu Road, Nanjing 210044, Jiangsu Province, China
Interests: computer vision; multispectral image processing; person re-identification; deep learning

Guest Editor
School of Computer and Software, Nanjing University of Information Science and Technology, No. 219 Ningliu Road, Nanjing 210044, Jiangsu Province, China
Interests: hyperspectral remote sensing image processing (including unmixing, classification, and fusion); deep learning

Guest Editor
Dean of Information & Communication Engineering College
Interests: hyperspectral imagery; image denoising; spectroscopy

Special Issue Information

Dear Colleagues,

This Special Issue is the second edition of “Multi-Platform and Multi-Modal Remote Sensing Data Fusion with Advanced Deep Learning Techniques”. After the success of the first edition, we are pleased to launch this second edition.

Recent advances in sensor and aircraft technology have enabled us to acquire vast amounts of different types of remote sensing data for Earth observation. These multi-source data make it possible to derive diverse information about the Earth’s surface. For instance, multispectral and hyperspectral images provide rich spectral information about ground objects, panchromatic images offer fine spatial resolution, synthetic aperture radar (SAR) data can be used to map different properties of the terrain, and light detection and ranging (LiDAR) data capture the elevation of land cover. However, a single data source can no longer meet the needs of subsequent processing tasks such as classification, object detection/tracking, super-resolution, and restoration.

Therefore, multi-modal remote sensing data, acquired by sensors on multiple platforms, should be combined and fused. Such fusion makes full use of the complementary information in multi-source remote sensing data, thereby further improving the accuracy of scene analysis tasks such as classification, detection, tracking, and geological mapping.

Recently, deep learning has become one of the most active research fields. Many advanced deep learning techniques have been developed, such as meta learning, self-supervised learning, few-shot learning, evolutionary learning, attention mechanisms, and transformers. The application of these techniques to remote sensing images, and especially to the fusion of multi-platform and multi-modal remote sensing data, remains an open topic. For this Special Issue, we invite original contributions (including high-quality original research articles, reviews, theoretical and critical perspectives, and viewpoint articles) from innovative researchers on the fusion of multi-platform and multi-modal remote sensing data using advanced deep learning techniques to address the aforementioned theoretical and practical problems.

Prof. Dr. Yuhui Zheng
Dr. Guoqing Zhang
Dr. Le Sun
Prof. Dr. Byeungwoo Jeon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multispectral and hyperspectral data fusion
  • hyperspectral and LiDAR data fusion
  • pansharpening or thermal sharpening
  • optical and SAR data fusion
  • optical and LiDAR data fusion
  • novel benchmark
  • multi-platform or multi-modal datasets
  • advanced deep learning algorithms/architectures/theory
  • transfer, multitask, few-shot, and meta learning
  • attention mechanism and transformer
  • convolutional neural networks/graph convolutional networks
  • scene/object classification and segmentation
  • target detection/tracking
  • geological mapping

Published Papers (1 paper)

Research

17 pages, 2648 KiB  
Article
Multi-Feature, Cross Attention-Induced Transformer Network for Hyperspectral and LiDAR Data Classification
by Zirui Li, Runbang Liu, Le Sun and Yuhui Zheng
Remote Sens. 2024, 16(15), 2775; https://doi.org/10.3390/rs16152775 - 29 Jul 2024
Abstract
Transformers have shown remarkable success in modeling sequential data and capturing intricate patterns over long distances. Their self-attention mechanism allows for efficient parallel processing and scalability, making them well-suited for the high-dimensional data in hyperspectral and LiDAR imagery. However, further research is needed on how to more deeply integrate the features of two modalities in attention mechanisms. In this paper, we propose a novel Multi-Feature Cross Attention-Induced Transformer Network (MCAITN) designed to enhance the classification accuracy of hyperspectral and LiDAR data. The MCAITN integrates the strengths of both data modalities by leveraging a cross-attention mechanism that effectively captures the complementary information between hyperspectral and LiDAR features. By utilizing a transformer-based architecture, the network is capable of learning complex spatial-spectral relationships and long-range dependencies. The cross-attention module facilitates the fusion of multi-source data, improving the network’s ability to discriminate between different land cover types. Extensive experiments conducted on benchmark datasets demonstrate that the MCAITN outperforms state-of-the-art methods in terms of classification accuracy and robustness.
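
As a rough illustration of the bidirectional cross-attention fusion described in this abstract, a minimal PyTorch sketch is given below. It is not the authors' MCAITN implementation: the class name CrossAttentionFusion, the token shapes, and the concatenate-and-project fusion head are assumptions made purely to show how hyperspectral tokens can attend to LiDAR tokens and vice versa.

# Minimal sketch of cross-attention fusion between hyperspectral (HSI) and
# LiDAR feature tokens. Illustrative only; not the MCAITN code from the paper.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # HSI tokens attend to LiDAR tokens, and vice versa.
        self.hsi_to_lidar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.lidar_to_hsi = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_hsi = nn.LayerNorm(dim)
        self.norm_lidar = nn.LayerNorm(dim)
        # Simple fusion head: concatenate the pooled streams and project back to dim.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, hsi_tokens, lidar_tokens):
        # hsi_tokens:   (batch, n_hsi_tokens, dim)
        # lidar_tokens: (batch, n_lidar_tokens, dim)
        hsi_attn, _ = self.hsi_to_lidar(hsi_tokens, lidar_tokens, lidar_tokens)
        lidar_attn, _ = self.lidar_to_hsi(lidar_tokens, hsi_tokens, hsi_tokens)
        hsi_out = self.norm_hsi(hsi_tokens + hsi_attn)          # residual + norm
        lidar_out = self.norm_lidar(lidar_tokens + lidar_attn)  # residual + norm
        # Pool each stream over its tokens and fuse into one representation.
        fused = torch.cat([hsi_out.mean(dim=1), lidar_out.mean(dim=1)], dim=-1)
        return self.fuse(fused)  # (batch, dim)

if __name__ == "__main__":
    hsi = torch.randn(2, 49, 64)    # e.g. tokens from a 7x7 hyperspectral patch
    lidar = torch.randn(2, 49, 64)  # matching LiDAR-derived tokens
    print(CrossAttentionFusion()(hsi, lidar).shape)  # torch.Size([2, 64])

In the paper itself this kind of fusion is embedded in a full transformer classification network; the sketch above only isolates the cross-attention and fusion step.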