Advances in High-Resolution Satellite Remote Sensing Image Processing and Classification

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 10 May 2025 | Viewed by 11800

Special Issue Editors


Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: geometric and radiation processing of satellite image; image matching; photogrammetry; 3D reconstruction

Guest Editor
School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China
Interests: image classification; radiometric normalization of satellite image; data fusion

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: satellite image processing; deep learning; change detection; agricultural remote sensing

Guest Editor
Earth Observation and Modelling (EoM), Geographisches Institut, Christian-Albrechts-Universität zu Kiel, Ludewig-Meyn-Straße 8, 24118 Kiel, Schleswig-Holstein, Germany
Interests: nearshore bathymetry observation using satellite-based LiDAR (NASA’s ICESat-2); Earth observation applications (e.g., crop monitoring, monoculture detection, grassland mowing detection, and natural disaster impact assessment) using satellite synthetic aperture radar (SAR), such as CSA’s RADARSAT-2, the RADARSAT Constellation Mission (RCM), and ESA’s Sentinel-1; development of machine-learning-based algorithms for Earth observation and terrestrial applications (e.g., monitoring orchid populations in high-nature-value grassland and tracking beach litter dynamics from time-lapse cameras in Svalbard, Greenland, and Iceland for the ICEBERG project)

Special Issue Information

Dear Colleagues,

Satellite remote sensing is a primary means of obtaining large-scale, rapid Earth observations. In recent years, global satellite Earth observation technology has flourished, with continuous improvements in the spatial, temporal, and spectral resolution of satellite remote sensing images. The types of satellite remote sensing data have also become increasingly diverse: multi-source, multi-modal, multi-temporal, and multi-angle images carry geometric, radiometric, and spectral information that provides a rich basis for scientifically analyzing changes in the Earth's surface. Accurately extracting the various types of information contained in these images is therefore crucial for the application of satellite remote sensing.

To overcome the geometric and radiometric differences among satellite remote sensing images and fully realize their value in fields of interest such as land-use and environmental monitoring, cultural heritage and archaeology, precision farming, and human activity monitoring, new research challenges must be addressed, including those connected to image orientation, enhancement, registration, and fusion; 3D reconstruction; change detection; and trend analysis. In this context, automated and reliable techniques are needed to process and extract information from such large volumes of satellite remote sensing imagery.

Given the reasons above, the processing and classification of high-resolution satellite remote sensing images has become a highly active and attractive research area, enabling increasing degrees of automation and accuracy. Contributions are welcome on the following topics, among others of interest:

  • Geometric and radiometric processing of high-resolution satellite remote sensing images;
  • Registration and fusion of multi-source and multi-modal satellite remote sensing images;
  • High-precision and high-quality 3D reconstruction from satellite remote sensing images;
  • Object extraction and accuracy evaluation in 3D reconstruction;
  • Deep learning methods for satellite image processing and interpretation;
  • Enhancement and super-resolution reconstruction for satellite remote sensing images;
  • Segmentation and clustering of satellite remote sensing images;
  • Automation in thematic map production (e.g., spatial and temporal pattern analysis, change detection);
  • Calibration, orientation, and 3D reconstruction for new satellite cameras;
  • Change detection in complex environments (e.g., farmlands, buildings);
  • Fine extraction of main food crops;
  • Monitoring of phenological periods using satellite remote sensing images.

Dr. Yingdong Pi
Dr. Wenli Huang
Dr. Qingwei Zhuang
Dr. Arnab Muhuri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • geometric/radiometric processing
  • image registration
  • image interpretation
  • 3D reconstruction
  • deep learning
  • multi-source image fusion and information extraction
  • image clustering and segmentation
  • object extraction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research


28 pages, 40407 KiB  
Article
FreeMix: Open-Vocabulary Domain Generalization of Remote-Sensing Images for Semantic Segmentation
by Jingyi Wu, Jingye Shi, Zeyong Zhao, Ziyang Liu and Ruicong Zhi
Remote Sens. 2025, 17(8), 1357; https://doi.org/10.3390/rs17081357 - 11 Apr 2025
Viewed by 294
Abstract
In this study, we present a novel concept termed open-vocabulary domain generalization (OVDG), which we investigate within the context of semantic segmentation. OVDG presents greater difficulty compared to conventional domain generalization, yet it offers greater practicality. It jointly considers (1) recognizing both base and novel classes and (2) generalizing to unseen domains. In OVDG, only the labels of base classes and the images from source domains are available to learn a robust model. Then, the model could be generalized to images from novel classes and target domains directly. In this paper, we propose a dual-branch FreeMix module to implement the OVDG task effectively in a universal framework: the base segmentation branch (BSB) and the entity segmentation branch (ESB). First, the entity mask is introduced as a novel concept for segmentation generalization. Additionally, semantic logits are learned for both the base mask and the entity mask, enhancing the diversity and completeness of masks for both base classes and novel classes. Second, the FreeMix utilizes pretrained self-supervised learning on large-scale remote-sensing data (RS_SSL) to extract domain-agnostic visual features for decoding masks and semantic logits. Third, a training tactic called dataset-aware sampling (DAS) is introduced for multi-source domain learning, aimed at improving the overall performance. In summary, RS_SSL, ESB, and DAS can significantly improve the generalization ability of the model on both a class level and a domain level. Experiments demonstrate that our method produces state-of-the-art results on several remote-sensing semantic-segmentation datasets, including Potsdam, GID5, DeepGlobe, and URUR, for OVDG.

32 pages, 13349 KiB  
Article
Global–Local Feature Fusion of Swin Kansformer Novel Network for Complex Scene Classification in Remote Sensing Images
by Shuangxian An, Leyi Zhang, Xia Li, Guozhuang Zhang, Peizhe Li, Ke Zhao, Hua Ma and Zhiyang Lian
Remote Sens. 2025, 17(7), 1137; https://doi.org/10.3390/rs17071137 - 22 Mar 2025
Viewed by 292
Abstract
The spatial distribution characteristics of remote sensing scene imagery exhibit significant complexity, necessitating the extraction of critical semantic features and effective discrimination of feature information to improve classification accuracy. While the combination of traditional convolutional neural networks (CNNs) and Transformers has proven effective in extracting features from both local and global perspectives, the multilayer perceptron (MLP) within Transformers struggles with nonlinear problems and insufficient feature representation, leading to suboptimal performance in fused models. To address these limitations, we propose a Swin Kansformer network for remote sensing scene classification, which integrates the Kolmogorov–Arnold Network (KAN) and employs a window-based self-attention mechanism for global information extraction. By replacing the traditional MLP layer with the KAN module, the network approximates functions through the decomposition of complex multivariate functions into univariate functions, enhancing the extraction of complex features. Additionally, an asymmetric convolution group module is introduced to replace conventional convolutions, further improving local feature extraction capabilities. Experimental validation on the AID and NWPU-RESISC45 datasets demonstrates that the proposed method achieves classification accuracies of 97.78% and 94.90%, respectively, outperforming state-of-the-art models such as ViT + LCA and ViT + PA by 0.89%, 1.06%, 0.27%, and 0.66%. These results highlight the performance advantages of the Swin Kansformer, while the incorporation of the KAN offers a novel and promising approach for remote sensing scene classification tasks with broad application potential.
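The core idea behind the KAN module mentioned in this abstract can be sketched in a few lines (an illustration of the generic Kolmogorov–Arnold decomposition with hypothetical cubic edge functions, not the Swin Kansformer implementation): instead of a fixed activation, every edge of the layer carries its own learnable univariate function, and each output sums these transforms over the inputs.

```python
import numpy as np

def phi(x, coeffs):
    """Univariate edge function: here a cubic polynomial in x."""
    return coeffs[0] + coeffs[1] * x + coeffs[2] * x**2 + coeffs[3] * x**3

def kan_layer(x, edge_coeffs):
    """x: (n_in,); edge_coeffs: (n_out, n_in, 4).
    Output q is the sum over inputs p of phi(x[p]) using edge (q, p)'s
    own coefficients -- no shared fixed activation, unlike an MLP layer."""
    n_out, n_in, _ = edge_coeffs.shape
    out = np.zeros(n_out)
    for q in range(n_out):
        for p in range(n_in):
            out[q] += phi(x[p], edge_coeffs[q, p])
    return out

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0, 2.0])          # 3 input features
coeffs = rng.normal(size=(2, 3, 4))     # 2 outputs x 3 inputs x 4 poly coeffs
print(kan_layer(x, coeffs).shape)       # (2,)
```

Published KAN variants typically parameterize each univariate function with splines on a grid and learn the coefficients by backpropagation; the cubic here only keeps the sketch short.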

26 pages, 14961 KiB  
Article
A Geometric Calibration Method for Spaceborne Single-Photon Lasers That Integrates Laser Detectors and Corner Cube Retroreflectors
by Ren Liu, Junfeng Xie, Fan Mo, Xiaomeng Yang, Zhiyu Jiang and Liang Hong
Remote Sens. 2025, 17(5), 773; https://doi.org/10.3390/rs17050773 - 23 Feb 2025
Viewed by 248
Abstract
Geometric calibration, as a crucial method for ensuring the precision of spaceborne single-photon laser point cloud data, has garnered significant attention. Nonetheless, prevailing geometric calibration methods are generally limited by inadequate precision or are unable to accommodate spaceborne lasers equipped with multiple payloads on a single platform. To overcome these limitations, a novel geometric calibration method for spaceborne single-photon lasers that integrates laser detectors with corner cube retroreflectors (CCRs) is introduced in this study. The core concept of this method involves the use of triggered detectors to identify the laser footprint centerline (LFC). The geometric relationships between the triggered CCRs and the LFC are subsequently analyzed, and CCR data are incorporated to determine the coordinates of the nearest laser footprint centroids. These laser footprint centroids are then utilized as ground control points to perform the geometric calibration of the spaceborne single-photon laser. Finally, ATLAS observational data are used to simulate the geometric calibration process with detectors and CCRs, followed by conducting geometric calibration experiments with the gt2l and gt2r beams. The results demonstrate that the accuracy of the calibrated laser pointing angle is approximately 1 arcsec, and the ranging precision is better than 2.1 cm, which verifies the superiority and reliability of the proposed method. Furthermore, deployment strategies for detectors and CCRs are explored to provide feasible implementation plans for practical calibration. Notably, as this method only requires the positioning of laser footprint centroids using ground equipment for calibration, it provides exceptional calibration accuracy and is applicable to single-photon lasers across various satellite platforms.

21 pages, 2229 KiB  
Article
LH-YOLO: A Lightweight and High-Precision SAR Ship Detection Model Based on the Improved YOLOv8n
by Qi Cao, Hang Chen, Shang Wang, Yongqiang Wang, Haisheng Fu, Zhenjiao Chen and Feng Liang
Remote Sens. 2024, 16(22), 4340; https://doi.org/10.3390/rs16224340 - 20 Nov 2024
Cited by 2 | Viewed by 1945
Abstract
Synthetic aperture radar is widely applied to ship detection due to generating high-resolution images under diverse weather conditions and its penetration capabilities, making SAR images a valuable data source. However, detecting multi-scale ship targets in complex backgrounds leads to issues of false positives and missed detections, posing challenges for lightweight and high-precision algorithms. There is an urgent need to improve accuracy of algorithms and their deployability. This paper introduces LH-YOLO, a YOLOv8n-based, lightweight, and high-precision SAR ship detection model. We propose a lightweight backbone network, StarNet-nano, and employ element-wise multiplication to construct a lightweight feature extraction module, LFE-C2f, for the neck of LH-YOLO. Additionally, a reused and shared convolutional detection (RSCD) head is designed using a weight sharing mechanism. These enhancements significantly reduce model size and computational demands while maintaining high precision. LH-YOLO features only 1.862 M parameters, representing a 38.1% reduction compared to YOLOv8n. It exhibits a 23.8% reduction in computational load while achieving a mAP50 of 96.6% on the HRSID dataset, which is 1.4% higher than YOLOv8n. Furthermore, it demonstrates strong generalization on the SAR-Ship-Dataset with a mAP50 of 93.8%, surpassing YOLOv8n by 0.7%. LH-YOLO is well-suited for environments with limited resources, such as embedded systems and edge computing platforms.

16 pages, 2602 KiB  
Article
Multi-Scale and Multi-Network Deep Feature Fusion for Discriminative Scene Classification of High-Resolution Remote Sensing Images
by Baohua Yuan, Sukhjit Singh Sehra and Bernard Chiu
Remote Sens. 2024, 16(21), 3961; https://doi.org/10.3390/rs16213961 - 24 Oct 2024
Cited by 2 | Viewed by 1257
Abstract
The advancement in satellite image sensors has enabled the acquisition of high-resolution remote sensing (HRRS) images. However, interpreting these images accurately and obtaining the computational power needed to do so is challenging due to the complexity involved. This manuscript proposed a multi-stream convolutional neural network (CNN) fusion framework that involves multi-scale and multi-CNN integration for HRRS image recognition. The pre-trained CNNs were used to learn and extract semantic features from multi-scale HRRS images. Feature extraction using pre-trained CNNs is more efficient than training a CNN from scratch or fine-tuning a CNN. Discriminative canonical correlation analysis (DCCA) was used to fuse deep features extracted across CNNs and image scales. DCCA reduced the dimension of the features extracted from CNNs while providing a discriminative representation by maximizing the within-class correlation and minimizing the between-class correlation. The proposed model has been evaluated on NWPU-RESISC45 and UC Merced datasets. The accuracy associated with DCCA was 10% and 6% higher than discriminant correlation analysis (DCA) in the NWPU-RESISC45 and UC Merced datasets. The advantage of DCCA was better demonstrated in the NWPU-RESISC45 dataset due to the incorporation of richer within-class variability in this dataset. While both DCA and DCCA minimize between-class correlation, only DCCA maximizes the within-class correlation and, therefore, attains better accuracy. The proposed framework achieved higher accuracy than all state-of-the-art frameworks involving unsupervised learning and pre-trained CNNs and 2–3% higher than the majority of fine-tuned CNNs. The proposed framework offers computational time advantages, requiring only 13 s for training in NWPU-RESISC45, compared to a day for fine-tuning the existing CNNs. Thus, the proposed framework achieves a favourable balance between efficiency and accuracy in HRRS image recognition.
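For readers unfamiliar with correlation-based fusion, here is a hedged NumPy sketch of plain canonical correlation analysis (CCA) used as a two-view feature fusion step; the paper's DCCA adds class-discriminative terms that are omitted here, and all data below are synthetic. Two deep-feature matrices are projected so that corresponding canonical components are maximally correlated, then concatenated.

```python
import numpy as np

def cca_fuse(X, Y, k=2, eps=1e-6):
    """Project views X, Y (samples x dims) onto their top-k canonical
    components and concatenate; returns (fused, canonical correlations)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + eps * np.eye(X.shape[1])  # regularized covariances
    Syy = Y.T @ Y / (n - 1) + eps * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):  # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Whiten each view, then SVD of the whitened cross-covariance.
    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    A = Wx @ U[:, :k]        # projection for view X
    B = Wy @ Vt[:k].T        # projection for view Y
    return np.hstack([X @ A, Y @ B]), s[:k]

rng = np.random.default_rng(1)
Z = rng.normal(size=(100, 2))                   # shared latent signal
X = np.hstack([Z, rng.normal(size=(100, 3))])   # view 1: 5-D features
Y = np.hstack([Z, rng.normal(size=(100, 4))])   # view 2: 6-D features
fused, corrs = cca_fuse(X, Y, k=2)
print(fused.shape)   # (100, 4)
```

Because both synthetic views contain the same latent signal, the top canonical correlations come out close to 1; with real CNN features the correlations reveal how much the views agree.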

25 pages, 34633 KiB  
Article
Identification of Potential Landslides in the Gaizi Valley Section of the Karakorum Highway Coupled with TS-InSAR and Landslide Susceptibility Analysis
by Kaixiong Lin, Guli Jiapaer, Tao Yu, Liancheng Zhang, Hongwu Liang, Bojian Chen and Tongwei Ju
Remote Sens. 2024, 16(19), 3653; https://doi.org/10.3390/rs16193653 - 30 Sep 2024
Cited by 1 | Viewed by 1545
Abstract
Landslides have become a common global concern because of their widespread nature and destructive power. The Gaizi Valley section of the Karakorum Highway is located in an alpine mountainous area with a high degree of geological structure development, steep terrain, and severe regional soil erosion, and landslide disasters occur frequently along this section, which severely affects the smooth flow of traffic through the China-Pakistan Economic Corridor (CPEC). In this study, 118 views of Sentinel-1 ascending- and descending-orbit data of this highway section are collected, and two time-series interferometric synthetic aperture radar (TS-InSAR) methods, distributed scatter InSAR (DS-InSAR) and small baseline subset InSAR (SBAS-InSAR), are used to jointly determine the surface deformation in this section and identify unstable slopes from 2021 to 2023. Combining these data with data on sites of historical landslide hazards in this section from 1970 to 2020, we constructed 13 disaster-inducing factors affecting the occurrence of landslides as evaluation indices of susceptibility, carried out an evaluation of regional landslide susceptibility, and identified high-susceptibility unstable slopes (i.e., potential landslides). The results show that DS-InSAR and SBAS-InSAR have good agreement in terms of deformation distribution and deformation magnitude and that compared with single-orbit data, double-track SAR data can better identify unstable slopes in steep mountainous areas, providing a spatial advantage. The landslide susceptibility results show that the area under the curve (AUC) value of the artificial neural network (ANN) model (0.987) is larger than that of the logistic regression (LR) model (0.883) and that the ANN model has a higher classification accuracy than the LR model. A total of 116 unstable slopes were identified in the study, 14 of which were determined to be potential landslides after the landslide susceptibility results were combined with optical images and field surveys. These 14 potential landslides were mapped in detail, and the effects of regional natural disturbances (e.g., snowmelt) and anthropogenic disturbances (e.g., mining projects) on the identification of potential landslides using only SAR data were assessed. The results of this research can be directly applied to landslide hazard mitigation and prevention in the Gaizi Valley section of the Karakorum Highway. In addition, our proposed method can also be used to map potential landslides in other areas with the same complex topography and harsh environment.
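The AUC values used above to compare the ANN and LR susceptibility models can be made concrete with a small rank-based sketch (synthetic scores, not the paper's data): AUC is the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen stable cell.

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: fraction of (positive, negative) score pairs
    ranked correctly, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.75]   # model scores at historical landslide sites
neg = [0.3, 0.5, 0.8]    # model scores at stable sites
print(auc(pos, neg))     # 7.5 of 9 pairs correct -> 0.8333...
```

An AUC of 0.5 means the model ranks no better than chance, while the paper's 0.987 for the ANN means almost every landslide/stable pair is ordered correctly.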

20 pages, 6608 KiB  
Article
Satellite Remote Sensing Grayscale Image Colorization Based on Denoising Generative Adversarial Network
by Qing Fu, Siyuan Xia, Yifei Kang, Mingwei Sun and Kai Tan
Remote Sens. 2024, 16(19), 3644; https://doi.org/10.3390/rs16193644 - 29 Sep 2024
Cited by 2 | Viewed by 2017
Abstract
Aiming to solve the challenges of difficult training, mode collapse in current generative adversarial networks (GANs), and the efficiency issue of requiring multiple samples for Denoising Diffusion Probabilistic Models (DDPM), this paper proposes a satellite remote sensing grayscale image colorization method using a denoising GAN. Firstly, a denoising optimization method based on U-ViT for the generator network is introduced to further enhance the model’s generation capability, along with two optimization strategies to significantly reduce the computational burden. Secondly, the discriminator network is optimized by proposing a feature statistical discrimination network, which imposes fewer constraints on the generator network. Finally, grayscale image colorization comparative experiments are conducted on three real satellite remote sensing grayscale image datasets. The results compared with existing typical colorization methods demonstrate that the proposed method can generate color images of higher quality, achieving better performance in both subjective human visual perception and objective metric evaluation. Experiments in building object detection show that the generated color images can improve target detection performance compared to the original grayscale images, demonstrating significant practical value.

26 pages, 10864 KiB  
Article
Near-Real-Time Long-Strip Geometric Processing without GCPs for Agile Push-Frame Imaging of LuoJia3-01 Satellite
by Rongfan Dai, Mi Wang and Zhao Ye
Remote Sens. 2024, 16(17), 3281; https://doi.org/10.3390/rs16173281 - 4 Sep 2024
Cited by 1 | Viewed by 1224
Abstract
Long-strip imaging is an important way of improving the coverage and acquisition efficiency of remote sensing satellite data. During the agile maneuver imaging process of the satellite, the LuoJia3-01 satellite can obtain a sequence of array long-strip images with a certain degree of overlap. Limited by the relative accuracy of satellite attitude, there will be relative misalignment between the sequence frame images, requiring high-precision geometric processing to meet the requirements of large-area remote sensing applications. Therefore, this study proposes a new method for the geometric correction of long-strip images without ground control points (GCPs) through GPU acceleration. Firstly, through the relative orientation of sequence images, the relative geometric errors between the images are corrected frame-by-frame. Then, block perspective transformation and image point densified filling (IPDF) direct mapping processing are carried out, mapping the sequence images frame-by-frame onto the stitched image. In this way, the geometric correction and image stitching of the sequence frame images are completed simultaneously. Finally, computationally intensive steps, such as point matching, coordinate transformation, and grayscale interpolation, are processed in parallel using GPU to further enhance the program’s execution efficiency. The experimental results show that the method proposed in this study achieves a stitching accuracy of less than 0.3 pixels for the geometrically corrected long-strip images, an internal geometric accuracy of less than 1.5 pixels, and an average processing time of less than 1.5 s per frame, meeting the requirements for high-precision near-real-time processing applications.
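The perspective transformation at the heart of this kind of frame-to-mosaic mapping can be illustrated with a small homography sketch (hypothetical numbers, not LuoJia3-01 parameters): a 3×3 matrix H maps pixel coordinates of one frame into the stitched-image frame via a perspective divide, after which grayscale values are resampled at the mapped positions.

```python
import numpy as np

def warp_points(H, pts):
    """Apply homography H to an (n, 2) array of (x, y) pixel coordinates:
    lift to homogeneous coordinates, multiply, then perspective-divide."""
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T
    return homog[:, :2] / homog[:, 2:3]

# A pure translation expressed as a homography: shift by (+10, +5) pixels.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [100.0, 50.0]])
print(warp_points(H, pts))  # [[ 10.   5.] [110.  55.]]
```

In a real pipeline H is estimated per block from matched tie points between overlapping frames, and the non-trivial bottom row of H is what models perspective (rather than purely affine) distortion.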

Other


18 pages, 16454 KiB  
Technical Note
Annotated Dataset for Training Cloud Segmentation Neural Networks Using High-Resolution Satellite Remote Sensing Imagery
by Mingyuan He, Jie Zhang, Yang He, Xinjie Zuo and Zebin Gao
Remote Sens. 2024, 16(19), 3682; https://doi.org/10.3390/rs16193682 - 2 Oct 2024
Cited by 1 | Viewed by 1787
Abstract
The integration of satellite data with deep learning has revolutionized various tasks in remote sensing, including classification, object detection, and semantic segmentation. Cloud segmentation in high-resolution satellite imagery is a critical application within this domain, yet progress in developing advanced algorithms for this task has been hindered by the scarcity of specialized datasets and annotation tools. This study addresses this challenge by introducing CloudLabel, a semi-automatic annotation technique leveraging region growing and morphological algorithms including flood fill, connected components, and guided filter. CloudLabel v1.0 streamlines the annotation process for high-resolution satellite images, thereby addressing the limitations of existing annotation platforms which are not specifically adapted to cloud segmentation, and enabling the efficient creation of high-quality cloud segmentation datasets. Notably, we have curated the Annotated Dataset for Training Cloud Segmentation (ADTCS) comprising 32,065 images (512 × 512) for cloud segmentation based on CloudLabel. The ADTCS dataset facilitates algorithmic advancement in cloud segmentation, characterized by uniform cloud coverage distribution and high image entropy (mainly 5–7). These features enable deep learning models to capture comprehensive cloud characteristics, enhancing recognition accuracy and reducing ground object misclassification. This contribution significantly advances remote sensing applications and cloud segmentation algorithms.
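The region-growing/flood-fill machinery that a CloudLabel-style annotator builds on can be sketched in a few lines (an illustration of the generic algorithm, not the released tool): grow a mask from a seed pixel over 4-connected neighbours whose brightness stays within a tolerance of the seed value.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Breadth-first flood fill: return a boolean mask of the 4-connected
    region around `seed` whose intensities are within `tol` of the seed."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = int(img[seed])
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if abs(int(img[r, c]) - ref) > tol:
            continue
        mask[r, c] = True
        q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

img = np.array([[200, 205, 50],
                [198, 210, 55],
                [60, 58, 52]], dtype=np.uint8)
print(region_grow(img, (0, 0)).sum())  # 4 bright 'cloud' pixels selected
```

A semi-automatic annotator would let the operator click a seed inside a cloud and then refine the grown mask with morphological operations (e.g., closing small holes) before saving it as a label.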
