Article

CPROS: A Multimodal Decision-Level Fusion Detection Method Based on Category Probability Sets

College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(15), 2745; https://doi.org/10.3390/rs16152745
Submission received: 18 June 2024 / Revised: 22 July 2024 / Accepted: 23 July 2024 / Published: 27 July 2024
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing II)

Abstract

Images acquired by different sensors exhibit different characteristics because of the sensors' varied imaging mechanisms. The fusion of visible and infrared images is therefore valuable for many image applications: infrared images provide stronger object features under poor illumination and smoke interference, whereas visible images offer rich texture features and color information about the target. Taking dual-band optical fusion as an example, this study explores fusion detection methods at different levels and proposes a multimodal decision-level fusion detection method based on category probability sets (CPROS). YOLOv8, a single-modality detector with good detection performance, was chosen as the benchmark. We then introduced an improved Yager formula and proposed a simple non-learning fusion strategy based on CPROS, which combines the detection results of multiple modalities and effectively improves target confidence. We validated the proposed algorithm on the public VEDAI dataset, which was captured from a drone perspective. The results showed that the mean average precision (mAP) of YOLOv8 with the CPROS method was 8.6% and 16.4% higher than that of YOLOv8 on the two single-modality datasets. The proposed method significantly reduces the missed detection rate (MR) and the number of false positives per image (FPPI), and it generalizes well.
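The decision-level fusion described above builds on Yager's combination rule from evidence theory. As a rough, hypothetical illustration only (the paper's improved formula redistributes conflict differently, and its exact form is not given in this abstract), the classical rule can be applied to two detectors' per-class probability vectors as follows:

```python
def yager_combine(p1, p2):
    """Classical Yager combination of two category probability vectors.

    p1, p2: per-class scores from two single-modality detectors (e.g.,
    visible and infrared) for the same candidate object, treated as mass
    functions with singleton focal elements. Unlike Dempster's rule,
    Yager's rule does not normalize away conflict; the conflicting mass
    is assigned to the universal set (total uncertainty) instead.
    """
    assert len(p1) == len(p2), "detectors must share one category set"
    # Mass where both detectors support the same category.
    agreement = [a * b for a, b in zip(p1, p2)]
    # Mass from pairs of differing categories (empty intersection);
    # Yager moves it to the universal set rather than renormalizing.
    conflict = 1.0 - sum(agreement)
    return agreement, conflict


# Two detectors scoring the same object over two categories:
masses, uncertainty = yager_combine([0.8, 0.2], [0.7, 0.3])
```

Improved variants of this rule, such as the one the paper proposes, typically redistribute the conflict mass back onto the categories in proportion to the detectors' agreement, so that a confident, consistent detection is not diluted by residual uncertainty.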
Keywords: improved Yager formula; category probability set; fusion strategy; multimodal

Share and Cite

MDPI and ACS Style

Li, C.; Zuo, Z.; Tong, X.; Huang, H.; Yuan, S.; Dang, Z. CPROS: A Multimodal Decision-Level Fusion Detection Method Based on Category Probability Sets. Remote Sens. 2024, 16, 2745. https://doi.org/10.3390/rs16152745


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
