

Pansharpening and Beyond in the Deep Learning Era

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Earth Observation Data".

Deadline for manuscript submissions: closed (15 November 2023) | Viewed by 8314

Special Issue Editors


Guest Editor
Department of Electrical Engineering and Information Technology, University Federico II, 80125 Naples, Italy
Interests: image segmentation and classification; despeckling; pansharpening; data fusion; deep learning

Guest Editor
Department of Electrical Engineering and Information Technology, University Federico II, 80125 Naples, Italy
Interests: image segmentation; data fusion; despeckling; super-resolution; pansharpening; deep learning

Guest Editor
Department of Science and Technology, University Parthenope, 80143 Naples, Italy
Interests: data fusion; despeckling; super-resolution; pansharpening; deep learning

Special Issue Information

Dear Colleagues,

In recent years, space agencies and private companies have deployed numerous Earth observation satellites with unprecedented temporal, spatial, and spectral capabilities. End users are flooded with images of diverse nature (e.g., optical vs. SAR, high resolution vs. wide coverage, mono- vs. multi-spectral), often acquired as regular time series. Consequently, data fusion is becoming a key asset in remote sensing, enabling cross-sensor/modality, cross-resolution, and cross-temporal analysis and information extraction.

Multiresolution (MR) fusion is a popular task in which two images of the same scene, with different resolutions and complementary features, are merged with the aim of synthesizing a higher-quality image that reproduces all bands of interest at the highest possible resolution. There are many different cases of MR fusion, such as hyper-/multi-spectral fusion, pansharpening, SAR/optical or SAR/SAR fusion, and so forth, and new fusion problems arise each time a new Earth observation satellite is put into orbit. In addition, new (or renewed) challenging questions are raised by the wave of deep learning—for example:

  • What is the right volume of training data to ensure good generalization properties?
  • What is a good loss function for training?
  • How should data be labelled for training purposes?
  • How can a limited computational burden be achieved at inference time?
  • How can the quality of the fused products be objectively assessed?

This Special Issue aims to report the latest advances and trends concerning the solution of MR fusion problems. Papers of both a theoretical and an applied nature are welcome.

Major topics of interest include but are not limited to:

  • Pansharpening.
  • Hyper-spectral/multi-spectral image fusion.
  • Optical or SAR image super-resolution.
  • Multitemporal fusion.
  • Cross-sensor multi-resolution fusion.
  • Pansharpening and super-resolution assessment. 

Dr. Giuseppe Scarpa
Dr. Antonio Mazza
Dr. Sergio Vitale
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • pansharpening
  • data fusion
  • super-resolution
  • image quality assessment
  • multi-resolution image
  • hyper-spectral image
  • multispectral image

Published Papers (4 papers)


Research

24 pages, 14968 KiB  
Article
Landsat-8 to Sentinel-2 Satellite Imagery Super-Resolution-Based Multiscale Dilated Transformer Generative Adversarial Networks
by Chunyang Wang, Xian Zhang, Wei Yang, Gaige Wang, Zongze Zhao, Xuan Liu and Bibo Lu
Remote Sens. 2023, 15(22), 5272; https://doi.org/10.3390/rs15225272 - 07 Nov 2023
Viewed by 1503
Abstract
Image super-resolution (SR) techniques can improve the spatial resolution of remote sensing images to provide more feature details and information, which is important for a wide range of remote sensing applications, including land use/cover classification (LUCC). Convolutional neural networks (CNNs) have achieved impressive results in the field of image SR, but the inherently local nature of convolution limits the performance of CNN-based SR models. Therefore, we propose a new method, namely, the dilated Transformer generative adversarial network (DTGAN), for the SR of multispectral remote sensing images. DTGAN combines the local focus of CNNs with the global perspective of Transformers to better capture both local and global features in remote sensing images. We introduce dilated convolutions into the self-attention computation of Transformers to control the network's focus on different scales of image features. This enhancement improves the network's ability to reconstruct details at various scales. SR imagery provides richer surface information and reduces ambiguity for the LUCC task, thereby enhancing LUCC accuracy. Our work comprises two main stages: remote sensing image SR and LUCC. In the SR stage, we conducted comprehensive experiments on Landsat-8 (L8) and Sentinel-2 (S2) remote sensing datasets. The results indicate that DTGAN generates SR images with minimal computation and outperforms other methods in terms of the spectral angle mapper (SAM) and learned perceptual image patch similarity (LPIPS) metrics, as well as visual quality. In the LUCC stage, DTGAN was used to generate SR images of areas outside the training samples, and the SR imagery was then used in the LUCC task. The results indicated a significant improvement in the accuracy of LUCC based on SR imagery compared to low-resolution (LR) LUCC maps. Specifically, there were enhancements of 0.130 in precision, 0.178 in recall, and 0.157 in the F1-score. Full article
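The spectral angle mapper (SAM) used above as an evaluation metric measures, per pixel, the angle between the reference and fused spectral vectors, so it is insensitive to global intensity scaling. A minimal sketch, not tied to any particular implementation:

```python
import numpy as np

def spectral_angle_mapper(reference, fused):
    """Mean spectral angle (radians) between corresponding pixels.

    reference, fused: arrays of shape (H, W, B) with B spectral bands.
    """
    ref = reference.reshape(-1, reference.shape[-1]).astype(float)
    fus = fused.reshape(-1, fused.shape[-1]).astype(float)
    dot = np.sum(ref * fus, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1)
    # Clip to guard against rounding just outside [-1, 1] before arccos.
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```

Because only the angle between spectral vectors matters, scaling every band of the fused image by the same factor leaves the score unchanged.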
(This article belongs to the Special Issue Pansharpening and Beyond in the Deep Learning Era)

20 pages, 7252 KiB  
Article
Fast Full-Resolution Target-Adaptive CNN-Based Pansharpening Framework
by Matteo Ciotola and Giuseppe Scarpa
Remote Sens. 2023, 15(2), 319; https://doi.org/10.3390/rs15020319 - 05 Jan 2023
Cited by 6 | Viewed by 1726
Abstract
In the last few years, there has been a renewed interest in data fusion techniques and, in particular, in pansharpening, due to a paradigm shift from model-based to data-driven approaches supported by recent advances in deep learning. Although a plethora of convolutional neural networks (CNNs) for pansharpening have been devised, some fundamental issues still await answers. Among these, cross-scale and cross-dataset generalization capabilities are probably the most urgent, since most current networks are trained at a different (reduced-resolution) scale and, in general, are well fitted on some datasets but fail on others. A recent attempt to address both issues leverages a target-adaptive inference scheme operating with a suitable full-resolution loss. On the downside, such an approach incurs an additional computational overhead due to the adaptation phase. In this work, we propose a variant of this method with an effective target-adaptation scheme that reduces inference time by a factor of ten, on average, without accuracy loss. A wide set of experiments carried out on three different datasets, GeoEye-1, WorldView-2 and WorldView-3, demonstrates the computational gain obtained while keeping top accuracy scores compared to state-of-the-art methods, both model-based and deep-learning ones. The generality of the proposed solution has also been validated by applying the new adaptation framework to different CNN models. Full article
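The target-adaptive idea, fine-tuning on the target image itself at inference time under a loss that requires no full-resolution ground truth, can be illustrated schematically. The toy linear model, quadratic loss, learning rate, and step count below are illustrative placeholders, not the authors' network or loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pansharpening model: one linear layer mapping
# per-pixel inputs (4 upsampled MS bands + PAN) to 4 fused bands.
x = rng.normal(size=(256, 5))
target = rng.normal(size=(256, 4))   # full-resolution consistency target
W = np.zeros((5, 4))                 # "pretrained" weights (here: zeros)

def loss(W):
    return np.mean((x @ W - target) ** 2)

# Target adaptation: a handful of gradient steps on the target image
# itself, before producing the final fused product.
lr, history = 0.05, [loss(W)]
for _ in range(20):
    grad = 2 * x.T @ (x @ W - target) / x.shape[0]
    W -= lr * grad
    history.append(loss(W))
```

The speed/accuracy trade-off discussed in the paper lives in this loop: fewer, better-targeted adaptation steps mean proportionally less inference-time overhead.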
(This article belongs to the Special Issue Pansharpening and Beyond in the Deep Learning Era)

23 pages, 14791 KiB  
Article
MPFINet: A Multilevel Parallel Feature Injection Network for Panchromatic and Multispectral Image Fusion
by Yuting Feng, Xin Jin, Qian Jiang, Quanli Wang, Lin Liu and Shaowen Yao
Remote Sens. 2022, 14(23), 6118; https://doi.org/10.3390/rs14236118 - 02 Dec 2022
Cited by 1 | Viewed by 1479
Abstract
The fusion of a high-spatial-resolution panchromatic (PAN) image and a corresponding low-resolution multispectral (MS) image can yield a high-resolution multispectral (HRMS) image, a process also known as pansharpening. Most previous methods based on convolutional neural networks (CNNs) have achieved remarkable results. However, information at different scales has not been fully mined and utilized, and the results still suffer from spectral and spatial distortion. In this work, we propose a multilevel parallel feature injection network (MPFINet) that contains three scale levels and two parallel branches. In the feature extraction branch, a multi-scale perception dynamic convolution dense block is proposed to adaptively extract spatial and spectral information. The resulting multilevel features are then injected into the image reconstruction branch, where an attention fusion module based on the spectral dimension is designed to fuse shallow contextual features and deep semantic features. In the image reconstruction branch, cascaded Transformer blocks are employed to capture the similarities among the spectral bands of the MS image. Extensive experiments conducted on the QuickBird and WorldView-3 datasets demonstrate that MPFINet achieves significant improvement over several state-of-the-art methods on both spatial and spectral quality assessments. Full article
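An attention fusion module operating along the spectral dimension can, in spirit, be illustrated with a generic squeeze-and-excitation-style band weighting. This sketch is an assumption for illustration only, not the paper's actual module:

```python
import numpy as np

def spectral_attention_fuse(shallow, deep):
    """Fuse two feature maps and reweight each spectral band.

    shallow, deep: feature maps of shape (H, W, B).
    A per-band gate is computed from global band statistics (squeeze),
    then applied multiplicatively (excite), so informative bands are
    emphasized and others attenuated.
    """
    merged = shallow + deep
    squeeze = merged.mean(axis=(0, 1))        # one statistic per band
    gate = 1.0 / (1.0 + np.exp(-squeeze))     # sigmoid gate in (0, 1)
    return merged * gate                      # reweight each band
```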
(This article belongs to the Special Issue Pansharpening and Beyond in the Deep Learning Era)

25 pages, 5387 KiB  
Article
Full-Resolution Quality Assessment for Pansharpening
by Giuseppe Scarpa and Matteo Ciotola
Remote Sens. 2022, 14(8), 1808; https://doi.org/10.3390/rs14081808 - 08 Apr 2022
Cited by 16 | Viewed by 2274
Abstract
A reliable quality assessment procedure for pansharpening methods is of critical importance for the development of related solutions. Unfortunately, the lack of ground truths to serve as guidance for an objective evaluation has pushed the community to resort to two approaches, which can also be jointly applied. Hence, two kinds of indexes can be found in the literature: (i) reference-based reduced-resolution indexes aimed at assessing the synthesis ability; (ii) no-reference subjective quality indexes for full-resolution datasets aimed at assessing spectral and spatial consistency. Both reference-based and no-reference indexes present critical shortcomings, which motivates the community to explore new solutions. In this work, we propose an alternative no-reference full-resolution assessment framework. On the one hand, we introduce a protocol, namely the reprojection protocol, to take care of the spectral consistency issue. On the other hand, a new index of the spatial consistency between the pansharpened image and the panchromatic band at full resolution is also proposed. Experimental results carried out on different datasets/sensors demonstrate the effectiveness of the proposed approach. Full article
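A reprojection-style spectral consistency check can be sketched as follows: degrade the pansharpened image back to the MS scale and compare it with the original MS input, which requires no ground truth. The box filter and mean-absolute-error index below are generic placeholders; the paper's actual protocol and index may differ:

```python
import numpy as np

def reproject(fused, scale=4):
    """Degrade a fused image back to the MS scale: a simple box low-pass
    filter plus decimation (a generic stand-in for an MTF-matched filter).

    fused: array of shape (H, W, B) with H, W divisible by `scale`.
    """
    H, W, B = fused.shape
    blocks = fused.reshape(H // scale, scale, W // scale, scale, B)
    return blocks.mean(axis=(1, 3))

def spectral_consistency(fused, ms, scale=4):
    """Mean absolute error between the reprojected fused image and the
    original MS image; lower means more spectrally consistent."""
    return float(np.mean(np.abs(reproject(fused, scale) - ms)))
```

By construction, a fused image that is an exact block-wise expansion of the MS input scores a perfect (zero) inconsistency under this toy index.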
(This article belongs to the Special Issue Pansharpening and Beyond in the Deep Learning Era)
