
Deep Learning for Remote Sensing Image Enhancement

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 3712

Special Issue Editors


Guest Editor

Guest Editor
Department of Applied Computing, College of Computing, Michigan Technological University, Houghton, MI, USA
Interests: robotics system design; integration of UAV-based remote sensing; AI applications in remote sensing

Special Issue Information

Dear Colleagues,

Enhancing images is a challenging task due to several factors, including color inconsistencies, low contrast, blurriness, low signal-to-noise ratios, and uneven lighting. Recent progress in deep learning has offered solutions to some of these issues. Nevertheless, given the complex nature of imaging, certain technical hurdles remain. These include the scarcity of adequately labeled training data and the need for deep learning algorithms not only to enhance images but also to provide useful information for downstream tasks (such as object detection and segmentation).

The aim of this Special Issue is to provide a forum for cutting-edge research works that address the ongoing challenges in image enhancement using state-of-the-art deep learning methods. Please note that images/data must be acquired by remote sensing methods. We welcome topics that include, but are not limited to, the following:

  • Image color balancing;
  • Generative deep learning for image generation;
  • Image quality assessment using reference or no-reference evaluation metrics;
  • Image super-resolution;
  • Image denoising;
  • New image datasets.

Dr. Sidike Paheding
Dr. Ashraf Saleem
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image enhancement
  • deep learning
  • underwater images
  • image restoration
  • image database

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

29 pages, 26710 KiB  
Article
A Lightweight CNN Based on Axial Depthwise Convolution and Hybrid Attention for Remote Sensing Image Dehazing
by Yufeng He, Cuili Li, Xu Li and Tiecheng Bai
Remote Sens. 2024, 16(15), 2822; https://doi.org/10.3390/rs16152822 - 31 Jul 2024
Viewed by 769
Abstract
Hazy weather reduces contrast, narrows the dynamic range, and blurs the details of remote sensing images. Color fidelity also deteriorates, causing color shifts and distortion that impair the utility of remote sensing data. In this paper, we propose a lightweight remote sensing image dehazing network, named LRSDN. The network comprises two tailored lightweight modules arranged in cascade. The first module, the axial depthwise convolution and residual learning block (ADRB), performs feature extraction, efficiently expanding the convolutional receptive field with little computational overhead. The second is a feature-calibration module based on the hybrid attention block (HAB), which integrates a simplified yet effective channel attention module with a pixel attention module embedded with an observational prior. This joint attention mechanism effectively enhances the representation of haze features. Furthermore, we introduce a novel method for synthesizing hazy remote sensing images using Perlin noise, facilitating the creation of a large-scale, fine-grained remote sensing haze image dataset (RSHD). Finally, we conduct both quantitative and qualitative comparison experiments on multiple publicly available datasets. The results demonstrate that the LRSDN algorithm achieves superior dehazing performance with fewer than 0.1M parameters. We also validate the positive effects of the LRSDN in road extraction and land cover classification applications.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
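The Perlin-noise haze synthesis described in the abstract can be illustrated with a minimal sketch based on the standard atmospheric scattering model I = J·t + A·(1 − t). This is not the authors' code: simple NumPy fractal value noise stands in for a true Perlin implementation, and the function and parameter names (`fractal_noise`, `synthesize_haze`, `airlight`, `beta`) are illustrative assumptions.

```python
import numpy as np

def fractal_noise(h, w, octaves=4, seed=0):
    """Perlin-style fractal noise in [0, 1) (value noise as a simple stand-in)."""
    rng = np.random.default_rng(seed)
    out = np.zeros((h, w))
    amp, total = 1.0, 0.0
    for o in range(octaves):
        step = 2 ** (octaves - o)            # coarse grid spacing for this octave
        gh, gw = h // step + 2, w // step + 2
        grid = rng.random((gh, gw))
        # bilinear upsampling of the coarse grid to full resolution
        ys = np.linspace(0, gh - 1.001, h)
        xs = np.linspace(0, gw - 1.001, w)
        y0, x0 = ys.astype(int), xs.astype(int)
        fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
        layer = (grid[y0][:, x0] * (1 - fy) * (1 - fx)
                 + grid[y0 + 1][:, x0] * fy * (1 - fx)
                 + grid[y0][:, x0 + 1] * (1 - fy) * fx
                 + grid[y0 + 1][:, x0 + 1] * fy * fx)
        out += amp * layer
        total += amp
        amp *= 0.5                           # halve amplitude each finer octave
    return out / total

def synthesize_haze(clean, airlight=0.9, beta=1.5, seed=0):
    """Apply I = J*t + A*(1 - t), with transmission t = exp(-beta * density)
    driven by a noise-based per-pixel haze density map."""
    h, w = clean.shape[:2]
    density = fractal_noise(h, w, seed=seed)
    t = np.exp(-beta * density)[..., None]   # transmission in (0, 1]
    return clean * t + airlight * (1 - t)
```

Because the density map is spatially correlated rather than i.i.d., the synthesized haze varies smoothly across the scene, which is the property Perlin-type noise is typically chosen for.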

21 pages, 18843 KiB  
Article
ESatSR: Enhancing Super-Resolution for Satellite Remote Sensing Images with State Space Model and Spatial Context
by Yinxiao Wang, Wei Yuan, Fang Xie and Baojun Lin
Remote Sens. 2024, 16(11), 1956; https://doi.org/10.3390/rs16111956 - 29 May 2024
Cited by 1 | Viewed by 863
Abstract
Super-resolution (SR) for satellite remote sensing images is crucial and has found widespread application across various scenarios. Previous SR methods were usually built upon convolutional neural networks or Transformers, which suffer from either limited receptive fields or a lack of prior assumptions. To address these issues, we propose ESatSR, a novel SR method based on state space models. We utilize the 2D Selective Scan to obtain an enhanced capability in modeling long-range dependencies, which contributes to a wide receptive field. A Spatial Context Interaction Module (SCIM) and an Enhanced Image Reconstruction Module (EIRM) are introduced to incorporate image-related prior knowledge into our model, thereby guiding feature extraction and reconstruction. Tailored for remote sensing images, the interaction of multi-scale spatial context and image features is leveraged to enhance the network's capability to capture features of small targets. Comprehensive experiments show that ESatSR achieves state-of-the-art performance on both the OLI2MSI and RSSCN7 datasets, with the highest PSNRs of 42.11 dB and 31.42 dB, respectively. Extensive ablation studies illustrate the effectiveness of our module design.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
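The PSNR figures quoted above follow the standard definition, 10·log10(peak²/MSE). A minimal reference implementation, not tied to the paper's code, is:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images: error is zero, PSNR is unbounded
    return 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit imagery, pass `peak=255.0`; values above roughly 40 dB, like the 42.11 dB reported on OLI2MSI, indicate reconstructions very close to the reference.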

19 pages, 18847 KiB  
Article
Remote Sensing Image Dehazing via a Local Context-Enriched Transformer
by Jing Nie, Jin Xie and Hanqing Sun
Remote Sens. 2024, 16(8), 1422; https://doi.org/10.3390/rs16081422 - 17 Apr 2024
Cited by 3 | Viewed by 1330
Abstract
Remote sensing image dehazing is a well-known remote sensing image processing task focused on restoring clean images from hazy ones. The Transformer network, based on the self-attention mechanism, has demonstrated remarkable advantages in various image restoration tasks due to its capacity to capture long-range dependencies within images; however, it is weak at modeling local context. Conversely, convolutional neural networks (CNNs) are adept at capturing local contextual information. Local context provides finer detail, while long-range dependencies capture global structure, so combining the two is beneficial for remote sensing image dehazing. Therefore, in this paper, we propose a CNN-based adaptive local context enrichment module (ALCEM) to extract contextual information within local regions. We then integrate the ALCEM into the multi-head self-attention and feed-forward network of the Transformer, constructing a novel locally enhanced attention (LEA) and a local continuous-enhancement feed-forward network (LCFN). The LEA uses the ALCEM to inject local context information complementary to the long-range relationships modeled by multi-head self-attention, which is beneficial for removing haze and restoring details. The LCFN extracts multi-scale spatial information and selectively fuses it via the ALCEM, providing richer information than existing feed-forward networks with only position-specific information flow. Powered by the LEA and LCFN, a novel Transformer-based dehazing network termed LCEFormer is proposed to restore clear images from hazy remote sensing images, combining the advantages of CNNs and Transformers. Experiments conducted on three distinct datasets, namely DHID, ERICE, and RSID, demonstrate that LCEFormer achieves state-of-the-art performance in hazy scenes. Specifically, LCEFormer outperforms DCIL by 0.78 dB in PSNR and 0.018 in SSIM on the DHID dataset.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
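The 0.018 SSIM margin cited above refers to the structural similarity index. A deliberately simplified single-window sketch is shown below; it omits the sliding Gaussian window of the standard formulation, so it is an illustration of the formula, not a drop-in replacement for a library SSIM.

```python
import numpy as np

def ssim_global(x, y, peak=1.0):
    """SSIM computed over the whole image as one window (no sliding Gaussian).
    Uses the standard stabilizing constants c1 = (0.01*peak)^2, c2 = (0.03*peak)^2."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0; published results such as those quoted here are computed with the windowed variant, which penalizes local structural errors that a single global window can average away.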
