

Deep Learning for Remote Sensing Image Enhancement

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 1102

Special Issue Editors


Guest Editor
Department of Applied Computing, College of Computing, Michigan Technological University, Houghton, MI, USA
Interests: robotics system design; integration of UAV-based remote sensing; AI applications in remote sensing

Special Issue Information

Dear Colleagues,

Enhancing images is a challenging task due to several factors, including color inconsistencies, low contrast, blurriness, low signal-to-noise ratios, and uneven lighting. Recent progress in deep learning has offered solutions to tackle some of these issues. Nevertheless, given the complex nature of imaging, certain technical hurdles remain. These include the scarcity of adequately labeled training data and the ability of deep learning algorithms to not only enhance images but to also provide useful information on downstream tasks (such as object detection and segmentation).

The aim of this Special Issue is to provide a forum for cutting-edge research works that address the ongoing challenges in image enhancement using state-of-the-art deep learning methods. Please note that images/data need to be acquired by remote sensing methods. We welcome topics that include, but are not limited to, the following:

  • Image color balancing;
  • Generative deep learning for image generation;
  • Image quality assessment using reference or no-reference evaluation metrics;
  • Image super-resolution;
  • Image denoising;
  • New image datasets.
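As a small illustration of the reference-based evaluation metrics mentioned above, the widely used peak signal-to-noise ratio (PSNR) can be computed in a few lines. This is a generic sketch of the standard formula, not code from any of the submitted works:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference image and an enhanced image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: an 8-bit image and a mildly noisy version of it
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(ref.astype(np.int16) + rng.integers(-10, 11, size=ref.shape),
                0, 255).astype(np.uint8)
print(round(psnr(ref, noisy), 2))
```

Higher values indicate closer agreement with the reference; no-reference metrics, by contrast, must judge quality from the test image alone.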

Dr. Sidike Paheding
Dr. Ashraf Saleem
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image enhancement
  • deep learning
  • underwater images
  • image restoration
  • image database

Published Papers (2 papers)


Research

21 pages, 18839 KiB  
Article
ESatSR: Enhancing Super-Resolution for Satellite Remote Sensing Images with State Space Model and Spatial Context
by Yinxiao Wang, Wei Yuan, Fang Xie and Baojun Lin
Remote Sens. 2024, 16(11), 1956; https://doi.org/10.3390/rs16111956 - 29 May 2024
Viewed by 153
Abstract
Super-resolution (SR) for satellite remote sensing images has been recognized as crucial and has found widespread applications across various scenarios. Previous SR methods were usually built upon Convolutional Neural Networks and Transformers, which suffer from either limited receptive fields or a lack of prior assumptions. To address these issues, we propose ESatSR, a novel SR method based on state space models. We utilize the 2D Selective Scan to obtain an enhanced capability in modeling long-range dependencies, which contributes to a wide receptive field. A Spatial Context Interaction Module (SCIM) and an Enhanced Image Reconstruction Module (EIRM) are introduced to combine image-related prior knowledge into our model, therefore guiding the process of feature extraction and reconstruction. Tailored for remote sensing images, the interaction of multi-scale spatial context and image features is leveraged to enhance the network’s capability in capturing features of small targets. Comprehensive experiments show that ESatSR demonstrates state-of-the-art performance on both OLI2MSI and RSSCN7 datasets, with the highest PSNRs of 42.11 dB and 31.42 dB, respectively. Extensive ablation studies illustrate the effectiveness of our module design.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
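The abstract above describes modulating extracted features with multi-scale spatial context (the role it assigns to the SCIM). The authors' code is not reproduced here; as a rough, hypothetical numpy sketch of that general idea, one can pool an image at several scales and use the result to gate a feature map (all function names below are illustrative, not ESatSR's API):

```python
import numpy as np

def multi_scale_context(img: np.ndarray, scales=(1, 2, 4)) -> np.ndarray:
    """Average-pool the image at several scales and upsample back (nearest),
    giving a crude multi-scale spatial-context map."""
    h, w = img.shape
    maps = []
    for s in scales:
        pooled = img[:h // s * s, :w // s * s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        maps.append(np.repeat(np.repeat(pooled, s, axis=0), s, axis=1)[:h, :w])
    return np.mean(maps, axis=0)

def context_gated_features(features: np.ndarray, img: np.ndarray) -> np.ndarray:
    """Modulate a feature map with a sigmoid gate derived from spatial context."""
    ctx = multi_scale_context(img)
    gate = 1.0 / (1.0 + np.exp(-(ctx - ctx.mean()) / (ctx.std() + 1e-8)))
    return features * gate

rng = np.random.default_rng(1)
img = rng.random((32, 32))
feat = rng.random((32, 32))
out = context_gated_features(feat, img)
print(out.shape)
```

In a learned model the pooling and gating would of course be trainable layers; the sketch only shows the information flow, not the state-space backbone.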
19 pages, 18847 KiB  
Article
Remote Sensing Image Dehazing via a Local Context-Enriched Transformer
by Jing Nie, Jin Xie and Hanqing Sun
Remote Sens. 2024, 16(8), 1422; https://doi.org/10.3390/rs16081422 - 17 Apr 2024
Cited by 1 | Viewed by 565
Abstract
Remote sensing image dehazing is a well-known remote sensing image processing task focused on restoring clean images from hazy images. The Transformer network, based on the self-attention mechanism, has demonstrated remarkable advantages in various image restoration tasks, due to its capacity to capture long-range dependencies within images. However, it is weak at modeling local context. Conversely, convolutional neural networks (CNNs) are adept at capturing local contextual information. Local contextual information could provide more details, while long-range dependencies capture global structure information. The combination of long-range dependencies and local context modeling is beneficial for remote sensing image dehazing. Therefore, in this paper, we propose a CNN-based adaptive local context enrichment module (ALCEM) to extract contextual information within local regions. Subsequently, we integrate our proposed ALCEM into the multi-head self-attention and feed-forward network of the Transformer, constructing a novel locally enhanced attention (LEA) and a local continuous-enhancement feed-forward network (LCFN). The LEA utilizes the ALCEM to inject local context information that is complementary to the long-range relationship modeled by multi-head self-attention, which is beneficial to removing haze and restoring details. The LCFN extracts multi-scale spatial information and selectively fuses it with the ALCEM, providing richer information than regular feed-forward networks, whose information flow is position-specific. Powered by the LEA and LCFN, a novel Transformer-based dehazing network termed LCEFormer is proposed to restore clear images from hazy remote sensing images, which combines the advantages of CNN and Transformer. Experiments conducted on three distinct datasets, namely DHID, ERICE, and RSID, demonstrate that our proposed LCEFormer achieves state-of-the-art performance in hazy scenes. Specifically, our LCEFormer outperforms DCIL by 0.78 dB and 0.018 for PSNR and SSIM on the DHID dataset.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
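The abstract above pairs global self-attention with a convolutional local-context branch. This is not the authors' implementation; a minimal, hypothetical numpy sketch of the general idea (global attention plus a residual local-context term, with a sliding-window mean standing in for the convolutional branch) might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_context(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Sliding-window mean over the token axis: a crude stand-in for the
    convolutional local-context branch (ALCEM-like role, illustrative only)."""
    n, _ = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[i:i + k].mean(axis=0) for i in range(n)])

def locally_enhanced_attention(x: np.ndarray) -> np.ndarray:
    """Global self-attention plus a residual local-context term."""
    d = x.shape[1]
    attn = softmax(x @ x.T / np.sqrt(d)) @ x  # long-range dependencies
    return attn + local_context(x)            # inject complementary local context

tokens = np.random.default_rng(2).random((16, 8))
out = locally_enhanced_attention(tokens)
print(out.shape)
```

A real implementation would use learned query/key/value projections, multiple heads, and trainable convolutions; the sketch only shows how the two information paths combine.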
