Advances in Remote Sensing Image Enhancement and Classification

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (15 August 2024) | Viewed by 7434

Special Issue Editor

School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
Interests: image fusion; remote sensing image classification; object tracking

Special Issue Information

Dear Colleagues,

Remote sensing technology plays a crucial role in acquiring information about the Earth's surface using sensors mounted on satellites or aircraft. The acquired images often require enhancement and classification techniques to extract meaningful information. This Special Issue explores recent advancements along these two main streams: enhancement and classification.

Remote sensing image acquisition is generally affected by various kinds of degradation, such as noise, geometric distortions, and blur (motion, atmospheric turbulence, out-of-focus). Image enhancement has therefore become one of the central issues in the development of remote sensing: it draws on fusion, denoising, and imaging-hardware design to improve the quality of images, enabling the extraction of more comprehensive and accurate knowledge.

Turning to image classification, researchers have explored a wide range of techniques to extract meaningful information from enhanced remote sensing images, applying advanced algorithms and machine learning to classify remote sensing data accurately.

This Special Issue aims to present state-of-the-art technologies for remote sensing image enhancement and classification.

Furthermore, with the help of these new data and technologies, the applications of remote sensing data can be improved and expanded. Authors are sincerely invited to contribute their research results, focusing on cutting-edge technologies, novel applications, and evaluation methods for remote sensing classification and enhancement, including, but not limited to, the following topics:

For enhancement:

  1. Fusion-based enhancement (multi-modal, multi-temporal, multi-source, multi-sensor, etc.);
  2. Super-resolution-based enhancement;
  3. Denoising-based enhancement;
  4. Quality assessment for enhanced images;
  5. Design of imaging sensor/system.

For classification, there are even more research lines, such as:

  1. Hyperspectral image classification;
  2. New image classification architectures;
  3. New datasets for remote sensing image classification with deep learning;
  4. Remote sensing image processing and pattern recognition;
  5. Scene classification;
  6. Image or data fusion/fusion classification;
  7. Target detection/change detection.

For further information or advice, please contact the Guest Editor directly via <[email protected]>.

Dr. Xu Li
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

19 pages, 1006 KiB  
Article
Semantic Interaction Meta-Learning Based on Patch Matching Metric
by Baoguo Wei, Xinyu Wang, Yuetong Su, Yue Zhang and Lixin Li
Sensors 2024, 24(17), 5620; https://doi.org/10.3390/s24175620 - 30 Aug 2024
Viewed by 751
Abstract
Metric-based meta-learning methods have demonstrated remarkable success in the domain of few-shot image classification. However, their performance is significantly contingent upon the choice of metric and the feature representation for the support classes. Current approaches, which predominantly rely on holistic image features, may inadvertently disregard critical details necessary for novel tasks, a phenomenon known as “supervision collapse”. Moreover, relying solely on visual features to characterize support classes can prove to be insufficient, particularly in scenarios involving limited sample sizes. In this paper, we introduce an innovative framework named Patch Matching Metric-based Semantic Interaction Meta-Learning (PatSiML), designed to overcome these challenges. To counteract supervision collapse, we have developed a patch matching metric strategy based on the Transformer architecture to transform input images into a set of distinct patch embeddings. This approach dynamically creates task-specific embeddings, facilitated by a graph convolutional network, to formulate precise matching metrics between the support classes and the query image patches. To enhance the integration of semantic knowledge, we have also integrated a label-assisted channel semantic interaction strategy. This strategy merges word embeddings with patch-level visual features across the channel dimension, utilizing a sophisticated language model to combine semantic understanding with visual information. Our empirical findings across four diverse datasets reveal that the PatSiML method achieves a classification accuracy improvement of 0.65% to 21.15% over existing methodologies, underscoring its robustness and efficacy. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Enhancement and Classification)
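To make the patch-level matching described in this abstract concrete, the sketch below shows one plausible form of such a metric: every query patch embedding is compared to the support-class patch embeddings by cosine similarity, and each patch's best match votes toward a class score. This is a minimal illustration under those assumptions, not the authors' PatSiML implementation; the function name, tensor shapes, and the best-match/average aggregation rule are hypothetical.

```python
# Minimal sketch of a patch-matching metric for few-shot classification.
# Illustrative reconstruction only; not the authors' PatSiML code.
import torch
import torch.nn.functional as F

def patch_matching_scores(query_patches, support_patches):
    """Score a query image against each support class by patch-level
    cosine similarity rather than a single holistic embedding.

    query_patches:   (P, D)       P patch embeddings of the query image
    support_patches: (C, K*P, D)  patch embeddings pooled per class
                                  (C classes, K shots, P patches each)
    returns:         (C,)         one matching score per class
    """
    q = F.normalize(query_patches, dim=-1)            # (P, D)
    s = F.normalize(support_patches, dim=-1)          # (C, K*P, D)
    # Cosine similarity between every query patch and every support patch.
    sim = torch.einsum("pd,ckd->cpk", q, s)           # (C, P, K*P)
    # Each query patch votes with its best-matching support patch,
    # then votes are averaged over the query patches.
    return sim.max(dim=-1).values.mean(dim=-1)        # (C,)

# Toy usage: 5-way task, 9 patches per image, 3 shots, 64-dim embeddings.
scores = patch_matching_scores(torch.randn(9, 64), torch.randn(5, 27, 64))
pred = scores.argmax().item()
```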

18 pages, 43906 KiB  
Article
Frequency-Oriented Transformer for Remote Sensing Image Dehazing
by Yaoqing Zhang, Xin He, Chunxia Zhan and Junjie Li
Sensors 2024, 24(12), 3972; https://doi.org/10.3390/s24123972 - 19 Jun 2024
Cited by 1 | Viewed by 841
Abstract
Remote sensing images are inevitably affected by the degradation of haze with complex appearance and non-uniform distribution, which remarkably affects the effectiveness of downstream remote sensing visual tasks. However, most current methods principally operate in the original pixel space of the image, which hinders the exploration of the frequency characteristics of remote sensing images, resulting in these models failing to fully exploit their representation ability to produce high-quality images. This paper proposes a frequency-oriented remote sensing dehazing Transformer named FOTformer, to explore information in the frequency domain to eliminate disturbances caused by haze in remote sensing images. It contains three components. Specifically, we developed a frequency-prompt attention evaluator to estimate the self-correlation of features in the frequency domain rather than the spatial domain, improving the image restoration performance. We propose a content reconstruction feed-forward network that captures information between different scales in features and integrates and processes global frequency domain information and local multi-scale spatial information in Fourier space to reconstruct the global content under the guidance of the amplitude spectrum. We designed a spatial-frequency aggregation block to exchange and fuse features from the frequency domain and spatial domain of the encoder and decoder to facilitate the propagation of features from the encoder stream to the decoder and alleviate the problem of information loss in the network. The experimental results show that the FOTformer achieved a more competitive performance against other remote sensing dehazing methods on commonly used benchmark datasets. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Enhancement and Classification)
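The frequency-domain processing that FOTformer builds on can be sketched with a toy module that filters a feature map in Fourier space using torch.fft instead of operating on raw pixels. The learnable per-frequency mask and its placement here are assumptions for illustration, not the paper's frequency-prompt attention evaluator.

```python
# Toy frequency-domain feature filter: transform a feature map to the
# Fourier domain, reweight its spectrum, and transform back.
# A sketch of the general idea only, not FOTformer's exact module.
import torch
import torch.nn as nn

class FrequencyFilter(nn.Module):
    def __init__(self, channels, height, width):
        super().__init__()
        # One learnable weight per frequency bin of the half-spectrum
        # produced by rfft2 (width // 2 + 1 bins along the last axis).
        self.weight = nn.Parameter(torch.ones(channels, height, width // 2 + 1))

    def forward(self, x):                               # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")          # complex half-spectrum
        spec = spec * self.weight                        # reweight per frequency
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

feat = torch.randn(2, 16, 64, 64)
out = FrequencyFilter(16, 64, 64)(feat)                  # same shape as input
```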

18 pages, 21564 KiB  
Article
Remote Sensing Image Classification Based on Canny Operator Enhanced Edge Features
by Mo Zhou, Yue Zhou, Dawei Yang and Kai Song
Sensors 2024, 24(12), 3912; https://doi.org/10.3390/s24123912 - 17 Jun 2024
Cited by 1 | Viewed by 798
Abstract
Remote sensing image classification plays a crucial role in the field of remote sensing interpretation. With the exponential growth of multi-source remote sensing data, accurately extracting target features and comprehending target attributes from complex images significantly impacts classification accuracy. To address these challenges, we propose a Canny edge-enhanced multi-level attention feature fusion network (CAF) for remote sensing image classification. The original image is specifically inputted into a convolutional network for the extraction of global features, while increasing the depth of the convolutional layer facilitates feature extraction at various levels. Additionally, to emphasize detailed target features, we employ the Canny operator for edge information extraction and utilize a convolution layer to capture deep edge features. Finally, by leveraging the Attentional Feature Fusion (AFF) network, we fuse global and detailed features to obtain more discriminative representations for scene classification tasks. The performance of our proposed method (CAF) is evaluated through experiments conducted across three openly accessible datasets for classifying scenes in remote sensing images: NWPU-RESISC45, UCM, and MSTAR. The experimental findings indicate that our approach based on incorporating edge detail information outperforms methods relying solely on global feature-based classifications. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Enhancement and Classification)
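The two-stream design described in this abstract can be sketched as follows: OpenCV's Canny operator extracts an edge map, one branch encodes the raw image, another encodes the edges, and the two are fused. The tiny convolution layers and the simple additive fusion below stand in for the paper's deeper backbone and its Attentional Feature Fusion (AFF) module; they are placeholders, not CAF itself.

```python
# Sketch of Canny edge-enhanced two-stream feature extraction.
# Tiny conv layers and additive fusion are illustrative stand-ins.
import cv2
import numpy as np
import torch
import torch.nn as nn

image = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)  # stand-in RGB scene
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)                 # binary edge map

global_branch = nn.Conv2d(3, 32, 3, padding=1)    # global-feature stream
edge_branch = nn.Conv2d(1, 32, 3, padding=1)      # edge-detail stream

img_t = torch.from_numpy(image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
edge_t = torch.from_numpy(edges).float().unsqueeze(0).unsqueeze(0) / 255.0

fused = global_branch(img_t) + edge_branch(edge_t)  # naive fusion in place of AFF
```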

19 pages, 10524 KiB  
Article
VELIE: A Vehicle-Based Efficient Low-Light Image Enhancement Method for Intelligent Vehicles
by Linwei Ye, Dong Wang, Dongyi Yang, Zhiyuan Ma and Quan Zhang
Sensors 2024, 24(4), 1345; https://doi.org/10.3390/s24041345 - 19 Feb 2024
Cited by 1 | Viewed by 2325
Abstract
In Advanced Driving Assistance Systems (ADAS), Automated Driving Systems (ADS), and Driver Assistance Systems (DAS), RGB camera sensors are extensively utilized for object detection, semantic segmentation, and object tracking. Despite their popularity due to low costs, RGB cameras exhibit weak robustness in complex environments, particularly underperforming in low-light conditions, which raises a significant concern. To address these challenges, multi-sensor fusion systems or specialized low-light cameras have been proposed, but their high costs render them unsuitable for widespread deployment. On the other hand, improvements in post-processing algorithms offer a more economical and effective solution. However, current research in low-light image enhancement still shows substantial gaps in detail enhancement on nighttime driving datasets and is characterized by high deployment costs, failing to achieve real-time inference and edge deployment. Therefore, this paper leverages the Swin Vision Transformer combined with a gamma transformation integrated U-Net for the decoupled enhancement of initial low-light inputs, proposing a deep learning enhancement network named Vehicle-based Efficient Low-light Image Enhancement (VELIE). VELIE achieves state-of-the-art performance on various driving datasets with a processing time of only 0.19 s, significantly enhancing high-dimensional environmental perception tasks in low-light conditions. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Enhancement and Classification)
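The gamma transformation that VELIE couples with its U-Net is a standard pointwise curve, shown below in isolation. A gamma below 1 lifts dark pixel values and brightens low-light frames; the specific gamma value here is an illustrative choice, not one taken from the paper.

```python
# Standard gamma correction as a low-light pre-enhancement step.
import numpy as np

def gamma_enhance(image, gamma=0.4):
    """Brighten a low-light image: normalize to [0, 1], raise to a
    gamma < 1 (which lifts dark values), and rescale to 8-bit."""
    x = image.astype(np.float32) / 255.0
    return (np.power(x, gamma) * 255.0).astype(np.uint8)

dark = (np.random.rand(256, 256, 3) * 40).astype(np.uint8)  # stand-in dark frame
bright = gamma_enhance(dark)
```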

25 pages, 15056 KiB  
Article
Remote Sensing Retrieval of Cloud Top Height Using Neural Networks and Data from Cloud-Aerosol Lidar with Orthogonal Polarization
by Yinhe Cheng, Hongjian He, Qiangyu Xue, Jiaxuan Yang, Wei Zhong, Xinyu Zhu and Xiangyu Peng
Sensors 2024, 24(2), 541; https://doi.org/10.3390/s24020541 - 15 Jan 2024
Cited by 1 | Viewed by 1207
Abstract
In order to enhance the retrieval accuracy of cloud top height (CTH) from MODIS data, neural network models were employed based on Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Three types of methods were established using MODIS inputs: cloud parameters, calibrated radiance, and a combination of both. From a statistical standpoint, models with combination inputs demonstrated the best performance, followed by models with cloud parameter inputs, while models relying solely on calibrated radiance had poorer applicability. This work found that cloud top pressure (CTP) and cloud top temperature played a crucial role in CTH retrieval from MODIS data. However, within the same type of models, there were slight differences in the retrieved results, and these differences were not dependent on the quantity of input parameters. Therefore, the model with fewer inputs using cloud parameters and calibrated radiance was recommended and employed for individual case studies. This model produced results closest to the actual cloud top structure of the typhoon and exhibited similar cloud distribution patterns when compared with the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) CTHs from a climatic statistical perspective. This suggests that the recommended model has good applicability and credibility in CTH retrieval from MODIS images. This work provides a method to retrieve more accurate CTHs from MODIS data for better utilization. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Enhancement and Classification)
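The regression setup this abstract describes amounts to a small supervised network mapping MODIS-derived inputs to a CALIOP-derived CTH label, as in the minimal sketch below. The layer sizes and the exact list of input features are assumptions for illustration; the paper's models and training details differ.

```python
# Minimal MLP sketch: regress cloud top height from MODIS-derived inputs
# (cloud parameters plus calibrated radiances), supervised by CALIOP CTH.
import torch
import torch.nn as nn

n_inputs = 8   # e.g. CTP, cloud top temperature, and a few band radiances
model = nn.Sequential(
    nn.Linear(n_inputs, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),            # regressed cloud top height
)

x = torch.randn(32, n_inputs)    # a batch of collocated MODIS samples
cth = model(x)                   # predicted CTH, shape (32, 1)
```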
