Article

Super-Resolution of Sentinel-2 Imagery Using Generative Adversarial Networks

by Luis Salgueiro Romero 1, Javier Marcello 2 and Verónica Vilaplana 1,*
1 Signal Theory and Communications Department, Universitat Politècnica de Catalunya (BarcelonaTech), 08034 Barcelona, Spain
2 Institute of Oceanography and Global Change, University of Las Palmas of Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(15), 2424; https://doi.org/10.3390/rs12152424
Submission received: 5 June 2020 / Revised: 1 July 2020 / Accepted: 21 July 2020 / Published: 28 July 2020

Abstract

Sentinel-2 satellites provide multi-spectral optical remote sensing images with four bands at 10 m spatial resolution. Thanks to their open data distribution policy, these images are becoming an important resource for many applications. However, for small-scale studies their spatial detail may not be sufficient. WorldView commercial satellites, on the other hand, offer multi-spectral images with very high spatial resolution, typically below 2 m, but their high cost can make them impractical for large areas or multi-temporal analysis. To exploit the free availability of Sentinel imagery, it is worth considering deep learning techniques for single-image super-resolution, which enhance low-resolution (LR) images by recovering high-frequency details to produce high-resolution (HR) super-resolved images. In this work, we implement and train a model based on the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with pairs of WorldView-Sentinel images to generate super-resolved multi-spectral Sentinel-2 output with a scaling factor of 5. Our model, named RS-ESRGAN, removes the upsampling layers of the network, making it feasible to train with co-registered remote sensing images. The results outperform state-of-the-art models on standard metrics such as PSNR, SSIM, ERGAS, SAM and CC. Moreover, qualitative visual analysis shows spatial improvements as well as preservation of the spectral information, allowing the super-resolved Sentinel-2 imagery to be used in studies requiring very high spatial resolution.
Keywords: super-resolution; generative adversarial network; deep learning; Sentinel-2; WorldView
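As an illustration of how reference-based quality metrics like those named in the abstract are typically computed, the sketch below implements PSNR and SAM for NumPy image arrays. This is a generic implementation of the standard definitions, not the authors' evaluation code; the function names and the assumption of images scaled to [0, max_val] are ours.

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two images in [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def sam(reference, estimate, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra.

    Both arrays have shape (H, W, bands); smaller is better, 0 is perfect.
    """
    ref = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    est = estimate.reshape(-1, estimate.shape[-1]).astype(np.float64)
    dot = np.sum(ref * est, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1)
    # Clip guards against arccos domain errors from floating-point round-off.
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.mean(angles))
```

Note that SAM measures the angle between spectral vectors, so it is insensitive to a uniform per-pixel scaling of intensity; this is why it complements intensity-based metrics such as PSNR when checking spectral preservation.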

