

Advances in Remote Sensing Image Target Detection and Recognition

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 30 December 2025 | Viewed by 2316

Special Issue Editor

Dr. Yin Zhuang
National Key Laboratory of Science and Technology on Space-Born Intelligent Information Processing (SBIIP), Beijing Institute of Technology, Beijing 100081, China
Interests: representation learning; object detection; few-shot learning; semantic segmentation

Special Issue Information

Dear Colleagues,

Remote sensing image target detection and recognition is an active research topic in computer vision. It can effectively extract valuable information from the massive volume of accessible remote sensing imagery, supporting intelligent interpretation systems for Earth observation. However, several difficult challenges severely degrade the performance of remote sensing image target detection and recognition, hindering the deployment of intelligent interpretation algorithms in practical systems. Specifically, under conditions such as long-tail distributions, few-shot learning, domain shifts, and real-time processing requirements, previously designed detection and recognition algorithms deliver inferior performance. New mechanisms and methods therefore need to be explored to improve the learning robustness, processing efficiency, and generalization ability of remote sensing image target detection and recognition, which will be crucial for establishing next-generation remote sensing intelligent interpretation systems.

This Special Issue aims to advance target detection and recognition in the remote sensing domain, working toward next-generation remote sensing detection and recognition algorithms. Topics may involve semi-supervised learning, transfer learning, and few-shot learning for remote sensing object detection or recognition, while considering the specific characteristics of remote sensing targets, e.g., multi-scale appearance, arbitrary orientation, and tiny or weak objects.

Suggested themes and article types include the following:

  1. Few-shot remote sensing object detection and recognition;
  2. Zero-shot remote sensing object detection and recognition;
  3. Cross-domain object detection and recognition in the remote sensing domain;
  4. Pretraining technology for remote sensing object detection and recognition;
  5. Long-tail distribution object detection and recognition;
  6. Open-vocabulary object detection in the remote sensing domain.

Dr. Yin Zhuang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • semi-supervised learning
  • few-shot learning
  • long-tail distribution
  • domain adaptation
  • open-vocabulary
  • zero-shot learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

24 pages, 4385 KB  
Article
HTMNet: Hybrid Transformer–Mamba Network for Hyperspectral Target Detection
by Xiaosong Zheng, Yin Kuang, Yu Huo, Wenbo Zhu, Min Zhang and Hai Wang
Remote Sens. 2025, 17(17), 3015; https://doi.org/10.3390/rs17173015 - 30 Aug 2025
Viewed by 453
Abstract
Hyperspectral target detection (HTD) aims to identify pixel-level targets within complex backgrounds, but existing HTD methods often fail to fully exploit multi-scale features and integrate global–local information, leading to suboptimal detection performance. To address these challenges, a novel hybrid Transformer–Mamba network (HTMNet) is proposed to reconstruct high-fidelity background samples for HTD. HTMNet consists of two parallel modules: the multi-scale feature extraction (MSFE) module and the global–local feature extraction (GLFE) module. Specifically, in the MSFE module, we design a multi-scale Transformer to extract and fuse multi-scale background features. In the GLFE module, a global feature extraction (GFE) module is devised to extract global background features by introducing a spectral–spatial attention module into the Transformer. Meanwhile, a local feature extraction (LFE) module is developed to capture local background features by incorporating the designed circular scanning strategy into LocalMamba. Additionally, a feature interaction fusion (FIF) module is devised to integrate features from multiple perspectives, enhancing the model’s overall representation capability. Experiments show that our method achieves AUC(PF, PD) scores of 99.97%, 99.91%, 99.82%, and 99.64% on four public hyperspectral datasets. These results demonstrate that HTMNet consistently surpasses state-of-the-art HTD methods, delivering superior detection performance in terms of AUC(PF, PD). Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
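The AUC(PF, PD) metric reported in this abstract is the area under the receiver operating characteristic curve, with detection probability (PD) plotted against false-alarm probability (PF). A minimal sketch of how such a score can be computed from per-pixel detection scores is shown below; the function name `auc_pf_pd` and the thresholding scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def auc_pf_pd(scores, labels):
    """Area under the ROC curve (PD vs. PF) via the trapezoidal rule.

    scores: per-pixel detection scores (higher = more target-like).
    labels: binary ground truth (1 = target pixel, 0 = background pixel).
    """
    scores = np.asarray(scores, dtype=float).ravel()
    labels = np.asarray(labels, dtype=int).ravel()
    # Sweep the threshold from high to low; at each cut compute
    # PD (true-positive rate) and PF (false-alarm rate).
    order = np.argsort(-scores)
    labels = labels[order]
    tp = np.cumsum(labels)        # targets detected at each cut
    fp = np.cumsum(1 - labels)    # false alarms at each cut
    pd = np.concatenate(([0.0], tp / max(tp[-1], 1)))
    pf = np.concatenate(([0.0], fp / max(fp[-1], 1)))
    return float(np.trapz(pd, pf))

# Toy example: scores that separate targets from background perfectly.
print(auc_pf_pd([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

A perfect detector reaches 1.0; the near-perfect percentages in the abstract (e.g., 99.97%) correspond to AUC values just below that ceiling.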

23 pages, 16581 KB  
Article
SLD-YOLO: A Lightweight Satellite Component Detection Algorithm Based on Multi-Scale Feature Fusion and Attention Mechanism
by Yonghao Li, Hang Yang, Bo Lü and Xiaotian Wu
Remote Sens. 2025, 17(17), 2950; https://doi.org/10.3390/rs17172950 - 25 Aug 2025
Viewed by 521
Abstract
Space-based on-orbit servicing missions impose stringent requirements for precise identification and localization of satellite components, while existing detection algorithms face the dual challenges of insufficient accuracy and excessive computational resource consumption. This paper proposes SLD-YOLO, a lightweight satellite component detection model based on improved YOLO11, balancing accuracy and efficiency through structural optimization and lightweight design. First, we design RLNet, a lightweight backbone network that employs reparameterization mechanisms and hierarchical feature fusion strategies to reduce model complexity by 19.72% while maintaining detection accuracy. Second, we propose the CSP-HSF multi-scale feature fusion module, used in conjunction with PSConv downsampling, to effectively improve the model’s perception of multi-scale objects. Finally, we introduce SimAM, a parameter-free attention mechanism, in the detection head to further improve feature representation capability. Experiments on the UESD dataset demonstrate that SLD-YOLO achieves measurable improvements over the baseline YOLO11s model across five satellite component detection categories: mAP50 increases by 2.22% to 87.44%, mAP50:95 improves by 1.72% to 63.25%, computational complexity decreases by 19.72%, the parameter count is reduced by 25.93%, the model file size shrinks by 24.59%, and inference speed reaches 90.4 FPS. Validation experiments on the UESD_edition2 dataset further confirm the model’s robustness. This research provides an effective solution for target detection tasks in resource-constrained space environments, demonstrating practical engineering application value. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
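SimAM, the parameter-free attention mechanism this paper adopts, weights each activation by a sigmoid of an inverse energy term derived from its deviation from the channel mean, so it adds no learnable parameters — the property that makes it attractive for lightweight detectors. A numpy sketch of the published SimAM formulation (not the paper's own code) is:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a (C, H, W) feature map.

    Each activation is scaled by sigmoid(e_inv), where e_inv grows with
    the activation's squared distance from its channel mean, normalized
    by the channel variance plus a small regularizer lam.
    """
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2
    # channel-wise variance estimate used in the SimAM energy function
    v = d.sum(axis=(1, 2), keepdims=True) / n
    e_inv = d / (4 * (v + lam)) + 0.5
    return x / (1 + np.exp(-e_inv))   # x * sigmoid(e_inv)

feat = np.random.randn(8, 16, 16).astype(np.float32)
out = simam(feat)
assert out.shape == feat.shape        # attention preserves the shape
```

Because sigmoid(e_inv) lies in (0, 1), the module only attenuates activations, emphasizing those that stand out from their channel's statistics.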

22 pages, 5535 KB  
Article
OFNet: Integrating Deep Optical Flow and Bi-Domain Attention for Enhanced Change Detection
by Liwen Zhang, Quan Zou, Guoqing Li, Wenyang Yu, Yong Yang and Heng Zhang
Remote Sens. 2025, 17(17), 2949; https://doi.org/10.3390/rs17172949 - 25 Aug 2025
Viewed by 484
Abstract
Change detection technology holds significant importance in disciplines such as urban planning, land utilization tracking, and hazard evaluation, as it can efficiently and accurately reveal dynamic regional change processes, providing crucial support for scientific decision-making and refined management. Although deep learning methods based on computer vision have achieved remarkable progress in change detection, they still face challenges including reducing dynamic background interference, capturing subtle changes, and effectively fusing multi-temporal data features. To address these issues, this paper proposes a novel change detection model called OFNet. Building upon existing Siamese network architectures, we introduce an optical flow branch module that supplements pixel-level dynamic information. By incorporating motion features to guide the network’s attention to potential change regions, we enhance the model’s ability to characterize and discriminate genuine changes in cross-temporal remote sensing images. Additionally, we propose a novel dual-domain attention mechanism that simultaneously models discriminative features in both the spatial and frequency domains for change detection tasks. The spatial attention focuses on capturing edge and structural changes, while the frequency-domain attention strengthens responses to key frequency components. The synergistic fusion of these two attention mechanisms effectively improves the model’s sensitivity to detailed changes and enhances the overall robustness of detection. Experimental results demonstrate that OFNet achieves an IoU of 83.03 on the LEVIR-CD dataset and 82.86 on the WHU-CD dataset, outperforming current mainstream approaches and validating its superior detection performance and generalization capability. This provides a novel technical approach for environmental monitoring and urban change analysis tasks. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
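The dual-domain attention described in this abstract combines a spatial gate with a frequency-selective branch. A toy sketch of that idea is below; everything in it is an illustrative assumption (the gate, the hard spectral threshold, and the averaging fusion stand in for OFNet's learned components):

```python
import numpy as np

def bi_domain_attention(x, keep=0.1):
    """Toy dual-domain (spatial + frequency) gating of an (H, W) map.

    Spatial branch: gate each pixel by its deviation from the global
    mean (a crude stand-in for a learned edge/structure attention).
    Frequency branch: keep only the strongest `keep` fraction of FFT
    coefficients, emphasizing dominant frequency components.
    """
    # spatial gate in (0.5, 1): larger deviation -> stronger response
    gate = 1.0 / (1.0 + np.exp(-np.abs(x - x.mean())))
    spatial = x * gate

    # frequency branch: hard-threshold the spectrum by magnitude
    spec = np.fft.fft2(x)
    mag = np.abs(spec)
    thresh = np.quantile(mag, 1.0 - keep)
    freq = np.fft.ifft2(np.where(mag >= thresh, spec, 0)).real

    # fuse the two branches (OFNet fuses them with learned weights;
    # a plain average is used here)
    return 0.5 * (spatial + freq)

x = np.random.randn(32, 32)
y = bi_domain_attention(x)
assert y.shape == x.shape
```

The point of the two branches is complementary sensitivity: the spatial gate reacts to local structure, while the spectral threshold suppresses broadband noise and keeps dominant periodic patterns.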

26 pages, 6806 KB  
Article
Fine Recognition of MEO SAR Ship Targets Based on a Multi-Level Focusing-Classification Strategy
by Zhaohong Li, Wei Yang, Can Su, Hongcheng Zeng, Yamin Wang, Jiayi Guo and Huaping Xu
Remote Sens. 2025, 17(15), 2599; https://doi.org/10.3390/rs17152599 - 26 Jul 2025
Viewed by 453
Abstract
The Medium Earth Orbit (MEO) spaceborne Synthetic Aperture Radar (SAR) has great coverage ability, which can improve maritime ship target surveillance performance significantly. However, due to the huge computational load required for imaging processing and the severe defocusing caused by ship motions, traditional ship recognition conducted in focused image domains cannot process MEO SAR data efficiently. To address this issue, a multi-level focusing-classification strategy for MEO SAR ship recognition is proposed, which is applied to the range-compressed ship data domain. Firstly, global fast coarse-focusing is conducted to compensate for sailing motion errors. Then, a coarse-classification network is designed to realize major target category classification, based on which local region image slices are extracted. Next, fine-focusing is performed to correct high-order motion errors, followed by fine-classification applied to the image slices to realize final ship classification. Equivalent MEO SAR ship images generated from real LEO SAR data are utilized to construct training and testing datasets. Simulated MEO SAR ship data are also used to evaluate the generalization of the whole method. The experimental results demonstrate that the proposed method can achieve high classification precision. Since only local region slices are used during the second-level processing step, the complex computations induced by fine-focusing for the full image can be avoided, thereby significantly improving overall efficiency. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
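The efficiency argument in this abstract — fine-focusing only a local slice rather than the full image — can be sketched as a simple two-level pipeline. All five callables below are hypothetical placeholders for the paper's components, not its actual implementation:

```python
import numpy as np

def multi_level_recognize(rc_data, coarse_focus, coarse_net,
                          fine_focus, fine_net, slice_size=64):
    """Sketch of a two-level focusing-classification strategy.

    coarse_focus / fine_focus stand in for motion-error compensation;
    coarse_net returns (major_class, (row, col) slice centre);
    fine_net returns the final ship class. Only the extracted slice
    is fine-focused, which avoids full-image fine-focusing cost.
    """
    coarse_img = coarse_focus(rc_data)             # global, fast
    major, (r, c) = coarse_net(coarse_img)         # level 1: coarse class + ROI
    half = slice_size // 2
    patch = coarse_img[max(r - half, 0):r + half,  # local slice only
                       max(c - half, 0):c + half]
    refined = fine_focus(patch)                    # high-order error correction
    return major, fine_net(refined)                # level 2: fine class

# Demo with identity/constant stand-ins for the networks.
rc = np.zeros((128, 128))
major, fine = multi_level_recognize(
    rc,
    coarse_focus=lambda d: d,
    coarse_net=lambda img: ("cargo", (64, 64)),
    fine_focus=lambda p: p,
    fine_net=lambda p: "cargo ship",
)
print(major, fine)  # -> cargo cargo ship
```

The design choice mirrors the abstract: the expensive fine-focusing step runs on a `slice_size`-sized patch instead of the full range-compressed scene, so its cost is independent of image size.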
