Advanced Object Detection for Maritime Fire Safety
Abstract
1. Introduction
- We propose an enhanced object detection model based on the DETR framework, integrating EfficientNet-B0 as the backbone. The EfficientNet-B0 architecture is known for its balance between computational efficiency and accuracy, making it particularly suitable for real-time detection tasks. By replacing the traditional ResNet backbone used in DETR with EfficientNet-B0, our model achieves improved detection accuracy while maintaining efficiency, especially in detecting small and occluded objects such as distant smoke plumes or flames.
- We introduce a custom, annotated dataset designed for fire and smoke detection in shipboard environments. This dataset includes a diverse range of ship settings, both interior and exterior, and accounts for varying lighting conditions, viewpoints, and environmental factors. It is a valuable resource for training and evaluating models for fire and smoke detection in maritime contexts, where such specialized data have been scarce.
- The proposed model demonstrates superior performance in detecting small and medium-sized objects compared to both the baseline DETR and popular YOLO variants. This enhancement is critical for maritime fire detection, where small flames and smoke plumes are often the earliest signs of fire incidents. Our model’s ability to detect these challenging objects with high precision is a notable improvement over existing models.
- We perform a comprehensive experimental evaluation, comparing our proposed model to state-of-the-art object detection frameworks, including YOLOv5 and the baseline DETR model. Our results show that the proposed model outperforms these alternatives across multiple metrics, including Average Precision (AP) at different Intersection over Union (IoU) thresholds. This thorough evaluation highlights the effectiveness of our model in various detection tasks, demonstrating its robustness and generalization capabilities across different maritime scenarios.
- The integration of EfficientNet-B0 allows the proposed model to strike a balance between computational load and detection performance, making it suitable for deployment in real-time fire and smoke detection systems aboard ships. This real-time capability is essential for early hazard detection and safety monitoring, where a timely response can prevent catastrophic events.
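The contributions above lean on AP and mAP as their headline metrics. The evaluation protocol can be sketched in a few lines of plain Python; this is an illustrative sketch only, not the authors' evaluation code, and the function names and sample precision-recall points are invented for the example.

```python
def ap_from_pr(recalls, precisions):
    """Average Precision as the area under the precision-recall curve:
    precision at each threshold, weighted by the recall step
    (AP = sum_n (R_n - R_{n-1}) * P_n)."""
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def mean_ap(per_class_aps):
    """mAP: the AP values of the N object classes, averaged."""
    return sum(per_class_aps) / len(per_class_aps)

# Hypothetical PR points for one class, sorted by increasing recall.
recalls    = [0.2, 0.5, 0.8, 1.0]
precisions = [1.0, 0.9, 0.7, 0.5]
ap_fire = ap_from_pr(recalls, precisions)
print(round(ap_fire, 2))                      # 0.78
print(round(mean_ap([ap_fire, 0.62]), 2))     # averaged with a second (invented) class
```

In COCO-style reporting, the same AP computation is repeated at several IoU thresholds (e.g., 0.50 and 0.75 for AP50/AP75) and over object-size buckets, then averaged.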
2. Related Works
3. Methodology
3.1. DETR
3.2. EfficientNet-B0
3.3. The Proposed Model
4. Experiment and Results
4.1. The Dataset
4.2. Data Preprocessing
4.3. Metrics and Losses
4.4. Experiment and Results
4.5. Comparison with Baseline and SOTA Models
4.6. Ablation Study
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ergasheva, A.; Akhmedov, F.; Abdusalomov, A.; Kim, W. Advancing Maritime Safety: Early Detection of Ship Fires Through Computer Vision, Deep Learning Approaches, and Histogram Equalization Techniques. Fire 2024, 7, 84.
- Zhang, Z.; Tan, L.; Tiong, R.L.K. Ship-Fire net: An improved YOLOv8 algorithm for ship fire detection. Sensors 2024, 24, 727.
- Kim, D.; Ruy, W. CNN-based fire detection method on autonomous ships using composite channels composed of RGB and IR data. Int. J. Nav. Arch. Ocean Eng. 2022, 14, 100489.
- Lee, H.G.; Pham, T.N.; Nguyen, V.H.; Kwon, K.R.; Lee, J.H.; Huh, J.H. Image-based Outlet Fire Causing Classification using CNN-based Deep Learning Models. IEEE Access 2024, 12, 135104–135116.
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2020; pp. 213–229.
- Tan, M.; Le, Q.V. EfficientNet: Rethinking model scaling for convolutional neural networks. arXiv 2019, arXiv:1905.11946.
- Cheknane, M.; Bendouma, T.; Boudouh, S.S. Advancing fire detection: Two-stage deep learning with hybrid feature extraction using faster R-CNN approach. Signal Image Video Process. 2024, 18, 5503–5510.
- Han, H. A novel single shot-multibox detector based on multiple Gaussian mixture model for urban fire smoke detection. Comput. Sci. Inf. Syst. 2023, 20, 32.
- Liang, T.; Zeng, G. FSH-DETR: An Efficient End-to-End Fire Smoke and Human Detection Based on a Deformable DEtection TRansformer (DETR). Sensors 2024, 24, 4077.
- Abdusalomov, A.; Umirzakova, S.; Safarov, F.; Mirzakhalilov, S.; Egamberdiev, N.; Cho, Y.I. A Multi-Scale Approach to Early Fire Detection in Smart Homes. Electronics 2024, 13, 4354.
- Titu, F.S.; Pavel, M.A.; Michael, G.K.O.; Babar, H.; Aman, U.; Khan, R. Real-Time Fire Detection: Integrating Lightweight Deep Learning Models on Drones with Edge Computing. Drones 2024, 8, 483.
- Yu, R.; Kim, K. A Study of Novel Initial Fire Detection Algorithm Based on Deep Learning Method. J. Electr. Eng. Technol. 2024, 19, 3675–3686.
- Ifeoma, N.; Ekene, A.; Obinna, O.; Kingsely, I.; Chrysantus, O. Development of a CNN-Based Smoke/Fire Detection System for High-Risk Environments. Eur. J. Sci. Innov. Technol. 2024, 4, 241–248.
- Fernandes, A.; Utkin, A.; Chaves, P. Enhanced Automatic Wildfire Detection System Using Big Data and EfficientNets. Fire 2024, 7, 286.
- Chitram, S.; Kumar, S.; Thenmalar, S. Enhancing Fire and Smoke Detection Using Deep Learning Techniques. Eng. Proc. 2024, 62, 7.
- Yang, F.; Xue, Q.; Cao, Y.; Li, X.; Zhang, W.; Li, G. Multi-temporal dependency handling in video smoke recognition: A holistic approach spanning spatial, short-term, and long-term perspectives. Expert Syst. Appl. 2024, 245, 123081.
- Khan, T.; Khan, Z.A.; Choi, C. Enhancing real-time fire detection: An effective multi-attention network and a fire benchmark. Neural Comput. Appl. 2023, 1–15.
- Catargiu, C.; Cleju, N.; Ciocoiu, I.B. A Comparative Performance Evaluation of YOLO-Type Detectors on a New Open Fire and Smoke Dataset. Sensors 2024, 24, 5597.
- Zhang, D. A Yolo-based Approach for Fire and Smoke Detection in IoT Surveillance Systems. Int. J. Adv. Comput. Sci. Appl. 2024, 15.
- Li, G.; Cheng, P.; Li, Y.; Huang, Y. Lightweight wildfire smoke monitoring algorithm based on unmanned aerial vehicle vision. Signal Image Video Process. 2024, 18, 7079–7091.
- Avazov, K.; Jamil, M.K.; Muminov, B.; Abdusalomov, A.B.; Cho, Y.-I. Fire detection and notification method in ship areas using deep learning and computer vision approaches. Sensors 2023, 23, 7078.
- Ricci, S.; Ravikumar, B.S.S.K.; Rizzetto, L. Fire Management on Container Ships: New Strategies and Technologies. TransNav Int. J. Mar. Navig. Saf. Sea Transp. 2023, 17, 415–421.
- Cheng, G.; Chen, X.; Wang, C.; Li, X.; Xian, B.; Yu, H. Visual fire detection using deep learning: A survey. Neurocomputing 2024, 596, 127975.
- Hosain, M.T.; Zaman, A.; Abir, M.R.; Akter, S.; Mursalin, S.; Khan, S.S. Synchronizing Object Detection: Applications, Advancements and Existing Challenges. IEEE Access 2024, 12, 54129–54167.
- Zia, E.; Vahdat-Nejad, H.; Zeraatkar, M.A.; Joloudari, J.H.; Hoseini, S.A. 3ENB2: End-to-end EfficientNetB2 model with online data augmentation for fire detection. Signal Image Video Process. 2024, 18, 7183–7197.
Number | Name | Equation | Description |
---|---|---|---|
1 | Mean Average Precision (mAP) | $\mathrm{mAP} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{AP}(i)$ | N is the number of object classes; AP(i) is the Average Precision for class i. |
2 | Average Precision (AP) | $\mathrm{AP} = \sum_{n} (R_n - R_{n-1})\, P_n$ | $P_n$ is the precision at the n-th threshold and $R_n$ is the corresponding recall. |
3 | Bounding box loss (L1 loss) | $\mathcal{L}_{L1} = \frac{1}{N}\sum_{i=1}^{N} \lVert b_i - \hat{b}_i \rVert_1$ | N is the number of bounding boxes; $b_i$ represents the ground truth bounding box coordinates; $\hat{b}_i$ denotes the predicted bounding box coordinates. |
4 | Generalized Intersection over Union (GIoU) loss | $\mathcal{L}_{GIoU} = 1 - \left(\mathrm{IoU} - \frac{\lvert C \setminus (A \cup B)\rvert}{\lvert C\rvert}\right)$ | A and B are the ground truth and predicted bounding boxes, C is the smallest enclosing box covering both A and B, and $\lvert C \setminus (A \cup B)\rvert$ is the area outside the union of the two boxes. |
5 | Intersection over Union (IoU) | $\mathrm{IoU} = \frac{\lvert A \cap B\rvert}{\lvert A \cup B\rvert}$ | $\lvert A \cap B\rvert$ is the area of overlap between the predicted and ground truth boxes, and $\lvert A \cup B\rvert$ is the area of their union. |
6 | Classification loss (cross-entropy loss) | $\mathcal{L}_{cls} = -\sum_{i=1}^{C} y_i \log \hat{y}_i$ | C is the total number of classes; $y_i$ is the ground truth probability; $\hat{y}_i$ is the predicted probability for the i-th class. |
7 | Hungarian matching loss | $\mathcal{L}_{match} = \lambda_{cls}\mathcal{L}_{cls} + \lambda_{L1}\mathcal{L}_{L1} + \lambda_{GIoU}\mathcal{L}_{GIoU}$ | $\mathcal{L}_{cls}$ is the cross-entropy classification loss, $\mathcal{L}_{L1}$ is the L1 loss for bounding box regression, $\mathcal{L}_{GIoU}$ is the GIoU loss for bounding boxes, and $\lambda_{cls}$, $\lambda_{L1}$, $\lambda_{GIoU}$ are hyperparameters that balance the contribution of each component. |
8 | Total loss (training loss) | $\mathcal{L}_{total} = \frac{1}{N}\sum_{i=1}^{N}\left(\lambda_{cls}\mathcal{L}_{cls}^{(i)} + \lambda_{L1}\mathcal{L}_{L1}^{(i)} + \lambda_{GIoU}\mathcal{L}_{GIoU}^{(i)}\right)$ | N is the number of matched pairs of ground truth and predicted objects; $\mathcal{L}_{cls}^{(i)}$, $\mathcal{L}_{L1}^{(i)}$, $\mathcal{L}_{GIoU}^{(i)}$ are the losses for the i-th object pair, and $\lambda_{cls}$, $\lambda_{L1}$, $\lambda_{GIoU}$ are the balancing hyperparameters. |
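Equations 4, 5, and 7 can be illustrated with a minimal pure-Python sketch. Boxes are (x1, y1, x2, y2) tuples; the function names are ours, and the λ weights shown follow the common DETR defaults rather than settings reported in this paper.

```python
def box_area(b):
    """Area of an axis-aligned box given as (x1, y1, x2, y2)."""
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def iou(a, b):
    """IoU = |A ∩ B| / |A ∪ B|  (Equation 5)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union > 0 else 0.0

def giou_loss(a, b):
    """L_GIoU = 1 - (IoU - |C \\ (A ∪ B)| / |C|), with C the smallest
    enclosing box of A and B  (Equation 4)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    # Smallest enclosing box C.
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = inter / union - (c_area - union) / c_area
    return 1.0 - giou

def match_cost(l_cls, l_l1, l_giou, lam_cls=1.0, lam_l1=5.0, lam_giou=2.0):
    """Weighted matching cost of Equation 7; the λ defaults here are the
    usual DETR values, assumed for illustration."""
    return lam_cls * l_cls + lam_l1 * l_l1 + lam_giou * l_giou

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))        # partial overlap: 1/7
print(giou_loss((0, 0, 1, 1), (0, 0, 1, 1)))  # identical boxes: 0.0
```

Note that for disjoint boxes IoU is 0 regardless of distance, while the GIoU term still penalizes separation via the enclosing box C, which is why DETR-style models use it for box regression.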
Model | Parameters (Millions) | Epochs | AP | AP50 | AP75 | AP (Small Objects) | AP (Medium Objects) | AP (Large Objects) |
---|---|---|---|---|---|---|---|---|
Baseline DETR | 41.4 | 300 | 34.7 | 45.8 | 30.3 | 14.0 | 30.6 | 50.7 |
YOLOv5s | 7.5 | 300 | 36.0 | 53.6 | 33.1 | 12.6 | 30.0 | 52.5 |
YOLOv5m | 21.2 | 300 | 37.2 | 55.0 | 34.5 | 13.7 | 31.3 | 54.6 |
YOLOv7 [21] | 37.3 | 300 | 36.0 | 53.6 | 33.1 | 12.2 | 30.0 | 52.5 |
YOLOv8 [1] | 44.7 | 300 | 37.2 | 55.0 | 34.5 | 15.2 | 31.3 | 54.6 |
FSH-DETR [9] | 60.3 | 300 | 66.7 | 84.2 | 33.9 | 15.5 | 33.5 | 52.6 |
Proposed Model | 18.7 | 300 | 38.7 | 55.6 | 35.0 | 16.0 | 34.7 | 55.9 |
Model | AP (%) | Precision (%) | Recall (%) | F1 Score (%) | Small Object Detection (AP) | Inference Speed (ms/Frame) |
---|---|---|---|---|---|---|
YOLOv7 [21] | 36.0 | 72.5 | 65.1 | 68.6 | 17.6 | 15 |
YOLOv8 [1] | 37.2 | 74.3 | 66.7 | 70.3 | 12.7 | 14 |
FSH-DETR [9] | 33.7 | 69.1 | 61.3 | 65.0 | 12.5 | 25 |
3ENB2 [25] | 55.1 | 69.2 | 74.5 | 76.8 | 16.9 | 33 |
MITI-DETR [10] | 38.6 | 75.4 | 67.2 | 71.1 | 15.1 | 14 |
IoT Yolo [19] | 38.2 | 74.8 | 68.5 | 71.5 | 15.6 | 15 |
Proposed Model | 38.7 | 97.2 | 97.0 | 97.4 | 16.0 | 13 |
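The F1 column in the table above combines each model's precision and recall as their harmonic mean; a short sketch (the function name is illustrative):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (both given in %)."""
    return 2 * precision * recall / (precision + recall)

# YOLOv7 row: precision 72.5%, recall 65.1%.
print(round(f1_score(72.5, 65.1), 1))  # 68.6
```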
Component | Configuration | Average Precision (AP) | Change in AP (%) |
---|---|---|---|
Baseline (Original DETR with ResNet) | No Augmentation | 33.5% | - |
EfficientNet-B0 Backbone | No Augmentation | 36.7% | +3.2% |
ResNet Backbone | With Rotation | 34.8% | +1.3% |
ResNet Backbone | With Color Adjustment | 34.3% | +0.8% |
EfficientNet-B0 Backbone | With All Augmentations | 38.7% | +5.2% |
Experiment Setup | mAP (%) | Small Object Detection (AP) | Precision (%) | Recall (%) | F1 Score (%) | Inference Speed (ms/Frame) |
---|---|---|---|---|---|---|
Baseline DETR (ResNet Backbone) | 33.5 | 12.0 | 70.2 | 64.5 | 67.2 | 16 |
EfficientNet-B0 Backbone Only | 36.7 | 14.2 | 73.8 | 66.3 | 69.9 | 14 |
Backbone + Data Augmentation | 37.9 | 15.4 | 75.9 | 67.8 | 71.6 | 14 |
Full Model (Proposed) | 38.7 | 16.0 | 97.2 | 97.0 | 97.4 | 13 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Makhmudov, F.; Umirzakova, S.; Kutlimuratov, A.; Abdusalomov, A.; Cho, Y.-I. Advanced Object Detection for Maritime Fire Safety. Fire 2024, 7, 430. https://doi.org/10.3390/fire7120430