Article

Advancing Maritime Safety: Early Detection of Ship Fires through Computer Vision, Deep Learning Approaches, and Histogram Equalization Techniques

1 Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
2 Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
* Author to whom correspondence should be addressed.
Submission received: 23 January 2024 / Revised: 26 February 2024 / Accepted: 7 March 2024 / Published: 8 March 2024
(This article belongs to the Special Issue Protection of Ships against Fire and Personnel Evacuation)

Abstract
The maritime sector confronts an escalating challenge with the emergence of fires aboard ships, evidenced by a pronounced uptick in incidents in recent years. The ramifications of such fires transcend immediate safety apprehensions, precipitating repercussions that resonate on a global scale. This study underscores the paramount importance of ship fire detection as a proactive measure to mitigate risks and fortify maritime safety comprehensively. Initially, we created and labeled a custom ship dataset. The collected images vary in size, containing both high- and low-resolution images. Then, by leveraging the YOLO (You Only Look Once) object detection algorithm, we developed an efficacious and accurate ship fire detection model for discerning the presence of fires aboard vessels navigating marine routes. The ship fire detection model was trained for 50 epochs on more than 25,000 images. The histogram equalization (HE) technique was also applied to counteract degradation from water vapor and to improve object detection. After training, ship images were passed through HE and then input into the inference model to be categorized into two classes. Empirical findings gleaned from the proposed methodology attest to the model's exceptional efficacy, with the highest detection accuracy attaining a noteworthy 0.99 (99%) across both fire-afflicted and non-fire scenarios.

1. Introduction

Despite witnessing a notable reduction of 50% in shipping losses over the course of the past decade, it is imperative to underscore that fires onboard vessels persist as one of the most substantial safety concerns within the maritime industry. This enduring challenge necessitates a continued focus on comprehensive safety measures and innovative solutions to effectively address and mitigate the risks associated with onboard fires. A recently published report by the international insurance conglomerate Allianz sheds light on the alarming statistics surrounding fires on large shipping vessels. According to the report, a staggering 200 fire incidents were documented in 2022 alone, marking the highest annual total in a decade. Notably, of these incidents, 43 were specifically identified as occurring on cargo or container ships, underscoring the heightened risk within this sector of the maritime industry. Implementing computer vision algorithms for advanced fire detection, monitoring, and response systems in the context of fire and smoke detection offers a comprehensive approach to enhancing ship safety. Computer vision algorithms can be trained to recognize specific patterns associated with smoke. Through the meticulous examination of video streams captured by cameras installed on board, these sophisticated algorithms are adept at rapidly and precisely detecting the manifestation of smoke across different sections of the vessel. This capability extends to recognizing smoke emanations resulting from a multitude of sources, including, but not limited to, combustion of fuel, leakage of lubricating oils, and the malfunctioning of pipes along with their associated fittings [1]. The algorithms leverage advanced image processing and machine learning techniques to analyze the visual data, enabling them to discern the subtle nuances of smoke appearance under varying lighting and environmental conditions. This capability is crucial to detect ship fires accurately, allowing the system to alert the crew or trigger automated responses before a fire escalates. The integration of smoke detection algorithms enhances the overall effectiveness of fire prevention measures.
Computer vision algorithms, as part of an alert system, play a critical role in providing timely notifications about potential fire incidents. These alerts can be sent to relevant personnel, both onboard and onshore, through various communication channels. The system can differentiate between normal activities and emergency situations, ensuring that alarms are triggered only in response to genuine threats. The rapid dissemination of alerts enables quick decision-making and response coordination, contributing to effective firefighting efforts. Computer vision algorithms can be integrated with the ship’s fire suppression systems. In the event of smoke or fire detection, the system can automatically activate fire extinguishing mechanisms, such as sprinklers or suppressant agents. This seamless integration ensures a swift and targeted response, minimizing the potential damage caused by fires. Such automation is crucial for situations where immediate human intervention might be challenging or delayed. The machine learning component of computer vision systems enables continuous improvement over time. As the algorithms process more data and encounter various scenarios, they can adapt and refine their capabilities. This self-learning aspect contributes to the system’s accuracy in detecting smoke patterns and anomalies, reducing false alarms and enhancing overall reliability. The potential for fire danger zones to manifest onboard ships is heightened due to various factors. The intricate machinery and systems within a vessel, coupled with the presence of combustible materials, create an environment susceptible to the initiation and rapid spread of fires. The engine room, being the nucleus of a ship’s power generation, is particularly prone to fire incidents due to the intricate network of components and the inherent combustibility of certain materials present. Additionally, electrical systems, machinery malfunction, and human error can act as catalysts for the emergence of fire danger zones, further emphasizing the need for robust detection and prevention mechanisms within the maritime setting. The pivotal compartment of a vessel responsible for powering and ensuring its seamless operation is the engine room. Nevertheless, owing to its intricate structure and the presence of flammable materials, 75% of all ship fires originate in the engine room and nearly two-thirds of these engine room fires specifically occur in the primary and auxiliary engines, as well as in closely associated components such as turbochargers [2]. Given this context, the detection of engine room fires holds paramount significance. The swift and precise identification of fires within the engine room is crucial to mitigating potential harm to individuals and property resulting from maritime accidents. Moreover, it can contribute positively to the ongoing enhancement of ship damage control systems, as well as the advancement of technology in ship fire prevention and control.
An alternative to traditional fire alarm systems is the adoption of AI-driven fire detection. In recent times, there has been a notable integration of deep learning algorithms in the identification of fires through visual data. Current research substantiates the efficacy of methods rooted in computer vision and deep learning for the purpose of fire detection [3,4,5,6,7]. Deep learning technology possesses the capability to autonomously extract object features from images, facilitating the acquisition of generalized information. These methodologies exhibit exceptional learning capacities and adaptability. Prominent among the common deep learning algorithms is YOLO [8,9]. Deep-learning-based target detection offers an automated process for extracting intricate details and features from images. This approach proves particularly effective in overcoming the challenges of redundancy and interference associated with the manual extraction of image features in the context of fire detection [10]. Traditional methods of fire detection technology involve the amalgamation of data from various indoor sensors. Alerts are generated when the parameter values detected by these sensors exceed predefined thresholds. In the initial stages of sensor technology, emphasis was placed on the concept of a "point sensor", primarily relying on particle activation related to essential fire characteristics, including heat, gas, flames, smoke, and other pertinent factors [11]. In recent years, propelled by the rapid advancement of computer vision and image processing technologies, the continuous enhancement of hardware computing capabilities, and the widespread adoption of video surveillance networks, there has been a discernible shift in attention towards the evolution of fire detection technologies. Notably, video-based fire detection, underpinned by deep learning principles, has emerged as a prominent research area characterized by its swift response times and heightened accuracy. This transition is underscored by the increasing intelligence and automation of modern ships, coupled with the maturation of video surveillance systems. This confluence presents a viable prospect for leveraging monitoring and deep learning technologies in the detection of fires within engine rooms.
Moreover, the successful application of video-oriented fire detection in diverse settings, ranging from indoor office spaces to outdoor environments like forested areas, lays a robust foundation for its potential adaptation in the maritime domain. In alignment with these advancements, this paper contributes to the discourse by proposing the application of the YOLO algorithm for ship fire detection. By harnessing the capabilities of YOLO, which excels in real-time object detection, this research aims to enhance the efficacy of fire detection in engine rooms through the analysis of real-time video surveillance feeds. The YOLO algorithm stands out as a highly efficient real-time object detection method. It operates by dividing an image into a grid system, with each grid autonomously responsible for detecting objects within its designated area. What distinguishes YOLO is its capacity for real-time inference and, notably, it achieves this feat while demanding minimal computational resources. The persistent threat of onboard fires remains a significant concern in the maritime industry, despite a commendable reduction in overall shipping losses over the past decade. Engine rooms, being vital components of vessels, are particularly susceptible to fire incidents, emphasizing the critical need for robust detection and prevention measures. The integration of computer vision algorithms, particularly those rooted in deep learning and exemplified by the YOLO algorithm, presents a transformative approach to enhancing fire detection and response systems on ships. The evolution of fire detection technology from traditional sensor-based methods to advanced computer vision algorithms signifies a paradigm shift in maritime safety. Leveraging the capabilities of YOLO and other deep learning models enables real-time, accurate detection of fire and smoke in complex environments like engine rooms, as shown in Figure 1. This transition is aligned with the increasing intelligence and automation of modern ships and the maturity of video surveillance systems, creating a conducive environment for the adoption of cutting-edge technologies.
The main contributions of this paper are as follows:
  • Introduction of YOLO Algorithm for Ship Fire Detection: This paper significantly contributes to the field by introducing the application of the YOLO algorithm for ship fire detection. The utilization of YOLO’s real-time object detection capabilities marks a pivotal step in enhancing the methodology employed for fire detection on ships.
  • Enhancing Effectiveness through Real-time Analysis: By leveraging YOLO’s capabilities, the research aims to elevate the overall effectiveness of fire detection systems. The emphasis on real-time analysis of video surveillance data signifies a critical advancement, allowing for swift and accurate detection of fire incidents as they unfold on ships.
  • Promise of YOLO-Based Algorithms in Maritime Safety: The application of YOLO-based algorithms holds immense promise for advancing safety measures on ships. This contribution introduces a novel and sophisticated approach to maritime fire detection, highlighting the potential for YOLO algorithms to redefine safety protocols within the maritime industry.
  • Utilization of Custom Datasets for Robust Algorithm Performance: The incorporation of custom datasets in the research is a strategic move to further contribute to the robustness and adaptability of the proposed YOLO algorithm in real-world scenarios.
  • Pioneering Further Exploration of Computer Vision Techniques with the Application of the Histogram Equalization Technique: By equalizing the histogram, subtle details and features in 2D images of sea transports become more pronounced, enabling better detection of smoke and fire under the high water-vapor content of the oceanic atmosphere.
In this study, we propose to enhance maritime safety by utilizing the YOLO algorithm for detecting fires on ships. By harnessing the real-time detection strengths of YOLO after image equalization, this research seeks to significantly improve the efficiency of fire detection. The promise held by YOLO-based algorithms represents a significant leap forward in enhancing safety measures, paving the way for the future integration of advanced computer vision techniques in maritime security. Furthermore, the strategic use of custom datasets underscores the commitment to robustness and adaptability, offering valuable insights for ongoing improvements in maritime safety protocols. As the maritime industry embraces these technological advancements, this paper serves as a pivotal contribution to the evolution of safety standards, ensuring a safer and more resilient maritime environment.

2. Related Work

Over the past decade, there has been a notable shift in fire detection technology, with the emergence of deep learning techniques, particularly the YOLO algorithms, proving instrumental in addressing significant challenges in object detection. The development of YOLO's framework, evolving from the initial YOLOv1 [12] to the latest YOLOv8 algorithms, reflects key innovations and differences that enhance its proficiency in executing detection tasks. These advancements are intricately tied to the paradigm shift in the realm of fire detection technology. Object detection and recognition algorithms primarily rely on specific types of deep neural networks (DNNs) and convolutional neural networks (CNNs). Learnable neural networks comprise multiple layers, each assigned distinct functions such as area analysis, feature extraction, data identification, and anomaly detection to achieve precise object detection. Noteworthy advancements in this field include the early fire warning mechanism proposed by Chen et al. [13], utilizing video processing to detect fire and smoke pixels through chromaticity and disorder measurements within the RGB model. A typical ship's fire detection system incorporates sensors for fire and smoke, heat detectors, and gas detectors, in conjunction with an alarm panel [14]. Engineered to provide both visible and audible alerts, these fire detectors play a crucial role in indicating the precise location of a fire on the vessel. The network of detectors spanning the entire ship is intricately linked to a fire control panel. This central control unit not only issues visual and auditory alarms but also has the capacity to trigger alarms in various other sections of the vessel for comprehensive alerting and response. The progression towards advanced fire detection methodologies is exemplified by Foggia et al.'s [15] work, presenting a method that analyzes surveillance camera videos. Their approach integrates information from color, shape alterations, and motion analysis through multiple expert systems, showcasing the convergence of various technological elements. Furthermore, the research conducted by Wong and Fong [16] on video flame segmentation and recognition underscores the industry's dedication to exploring innovative techniques in fire detection. In a parallel development, Wu et al. [17] introduced a dynamic fire detection algorithm for surveillance videos, incorporating radiation domain feature models.
In the realm of fire detection through image segmentation, the fundamental task entails the allocation of individual pixels within an image to distinct categories, distinguishing between those constituting the fire region and those comprising the background. This segmentation objective is systematically addressed by employing semantic segmentation networks, which are trained end-to-end to predict segmentation masks directly from the original image. Noteworthy instances of this approach include the utilization of frameworks such as GAN [18]. Instance segmentation not only classifies each pixel into specific categories but also distinguishes individual instances of those categories. The original U-Net architecture was introduced for biomedical image segmentation, particularly for the segmentation of neuronal structures in electron microscopy images. Consequently, this methodological paradigm empowers the network to discern and classify elements related to fire at the granular level of individual pixels, thereby facilitating the meticulous identification of fire regions against the backdrop of the overall visual data. It is important to note that, while U-Net provides a strong foundation, there are other dedicated architectures for instance segmentation tasks, such as Mask R-CNN [19] (Region-based Convolutional Neural Network), which explicitly addresses the challenges of segmenting and distinguishing individual instances within a given class. Guan et al. [20] proposed an innovative approach to instance segmentation, denoted MaskSU R-CNN, designed specifically for the early detection and segmentation of forest fires. Their research addressed critical challenges in computer vision related to forest fire detection using UAV-captured video frames from the FLAME dataset, proposing solutions for binary image classification (fire vs. no fire) and fire instance segmentation. The semantic segmentation method for fire smoke, leveraging global information and the U-Net network, is designed to accurately delineate and identify regions associated with fire smoke within images. Semantic segmentation involves classifying each pixel in an image into distinct categories, in this case differentiating between fire smoke and the background. The integration of global information and the U-Net architecture enhances the model's ability to capture contextual details and spatial relationships crucial for effective segmentation [21]. As exemplified by Zheng et al. [22], a sophisticated approach to semantic segmentation in the context of fire smoke has been introduced. Their method intricately integrates global contextual information and leverages the U-Net network architecture. The algorithm, characterized by its utilization of Multi-Scale Residual Group Attention (MRGA), is adept at concurrently exploiting contextual understanding and intricate spatial relationships within the image data. By synergistically incorporating MRGA with the U-Net framework, the model adeptly captures multi-scale smoke features, thereby augmenting its capacity to discern subtle nuances within small-scale smoke instances. This amalgamation of methodologies significantly enhances the model's perceptual acuity, particularly when confronted with the challenges associated with detecting and segmenting small-scale smoke regions. The scholars referenced in [23] have introduced an innovative algorithm denoted "fire-YOLO".
This algorithm constitutes an augmentation to YOLOv4, integrating depthwise-separable convolution techniques. This augmentation serves the dual purpose of mitigating the computational cost of the model while enhancing the perceptual field of the feature layer. Notably, the inclusion of a dilated (atrous) convolution method further refines the model's efficiency.
Areas near the ocean often experience higher humidity levels due to the large amount of water evaporation, which makes object detection less accurate. Humidity can affect the performance of the sensors used in imaging systems, degrading the quality of images captured by cameras. Contrast is therefore instrumental for the visual processing and understanding of the information content of images in various environmental settings [24]. Ooi and Isa [25] introduced a histogram equalization (HE)-based approach, called quadrants dynamic histogram equalization (QDHE), for images captured by such devices. This method was mainly applied where images were captured in low-light environments; the QDHE algorithm enhances images without intensity saturation, noise amplification, or over-enhancement.

3. Proposed Methods and Model Architecture

Our primary goal is to effectively detect fire on ships by training a model that detects ships, smoke, and fire, focusing mainly on ships that are on fire or emitting smoke without visible fire. We created a custom dataset of various sea transports with fire, without fire, and with smoke, and fine-tuned the state-of-the-art YOLOv8 single-shot detector model.
Moreover, real-time object detection is challenging because of variance in object sizes and aspect ratios, and because inference speed and noise significantly affect detection quality. In particular, high humidity levels in the marine environment can cause haziness and reduce atmospheric visibility. Objects in real-world scenarios often exhibit diverse aspect ratios, meaning they can be elongated or compressed in various ways. Humidity mainly results in images with reduced contrast and clarity, making it challenging to distinguish objects such as sea transports. Object detection algorithms often rely on well-defined features and patterns. Therefore, for better detection we apply the histogram equalization technique for image enhancement, avoiding a narrow range of intensity values in ship images, as sketched below.
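The following is a minimal preprocessing sketch, not the authors' published code: it applies OpenCV's per-channel histogram equalization to a ship image before the image is handed to the detector. The file names are illustrative.

```python
import cv2

def equalize_rgb(image_bgr):
    """Equalize each color channel independently.

    This is one common way to spread a narrow intensity range over the
    full 0-255 scale; equalizing only the luma channel of a YCrCb
    conversion is a frequent alternative that avoids color shifts.
    """
    b, g, r = cv2.split(image_bgr)
    return cv2.merge([cv2.equalizeHist(b),
                      cv2.equalizeHist(g),
                      cv2.equalizeHist(r)])

frame = cv2.imread("ship.jpg")        # illustrative input path
enhanced = equalize_rgb(frame)
cv2.imwrite("ship_he.jpg", enhanced)  # image passed on to the detector
```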
Traditional object detection models might struggle when confronted with such variability. Moreover, real-time object detection demands swift processing to keep pace with the continuous stream of input data, such as video frames. Slower inference speeds can lead to latency issues, causing a lag between the occurrence of an event and the model's response. Considering the above-mentioned obstacles, YOLO is a suitable approach.

3.1. YOLO Architecture

YOLOv1 was introduced in 2016; the initial real-time object detection architecture of the YOLO algorithm, consisting of 24 convolutional layers, is shown in Figure 2.
YOLOv1 takes an input image of fixed size (e.g., 448 × 448 pixels). The input image is then divided into an A × A grid, where each grid cell is responsible for predicting objects that fall within it. Each grid cell predicts B bounding boxes and confidence scores for those boxes. The final output is a tensor of dimensions (A, A, B × 5 + C), where B is the number of bounding boxes predicted per grid cell, 5 corresponds to the four bounding box coordinates plus a confidence score, and C is the number of classes. YOLOv1 used stochastic gradient descent as its optimizer, with a loss function designed to penalize both localization and classification errors. The regularization coefficients λcoord and λnoobj, which regulate the magnitude of the different parts of the loss, were set to 5 and 0.5, respectively, as shown in the equation below.
$$
\begin{aligned}
\mathcal{L} ={}& \lambda_{coord} \sum_{i=1}^{A^{2}} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} \left[ \left(x_i-\hat{x}_i\right)^{2} + \left(y_i-\hat{y}_i\right)^{2} \right] \\
&+ \lambda_{coord} \sum_{i=1}^{A^{2}} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} \left[ \left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^{2} + \left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^{2} \right] \\
&+ \sum_{i=1}^{A^{2}} \sum_{j=1}^{B} \mathbb{1}_{ij}^{obj} \left(C_i-\hat{C}_i\right)^{2} + \lambda_{noobj} \sum_{i=1}^{A^{2}} \sum_{j=1}^{B} \mathbb{1}_{ij}^{noobj} \left(C_i-\hat{C}_i\right)^{2} \\
&+ \sum_{i=1}^{A^{2}} \mathbb{1}_{i}^{obj} \sum_{c \in classes} \left(p_i(c)-\hat{p}_i(c)\right)^{2}
\end{aligned}
$$

where $\mathbb{1}_{i}^{obj}$ denotes that an object appears in cell $i$ and $\mathbb{1}_{ij}^{obj}$ denotes that the $j$th bounding box predictor in cell $i$ is responsible for that prediction.
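As a quick illustration of the output dimensions (not taken from the paper), the following sketch computes the tensor shape for a two-class fire/not-fire detector using the grid and box counts of the original YOLOv1 configuration:

```python
# YOLOv1 output tensor: A x A grid, B boxes per cell (5 numbers each:
# x, y, w, h, confidence), plus C class probabilities per cell.
A, B = 7, 2          # grid size and boxes per cell from the YOLOv1 paper
C = 2                # illustrative: fire / not-fire
output_shape = (A, A, B * 5 + C)
print(output_shape)  # (7, 7, 12)
```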

3.2. The Model Structure of the YOLOv8 Network

YOLOv8 stands as the latest iteration in the YOLO object detection model series, retaining the foundational architecture of its predecessors while introducing a myriad of enhancements. In the context of ship fire detection, the importance of YOLOv8 in real-time applications becomes evident. Utilizing a custom collection of ship images depicting both fire and non-fire scenarios, YOLOv8 proves instrumental in swiftly and accurately identifying instances of ship fires. This capability is particularly crucial for maritime safety, where timely detection of ship fires can significantly contribute to effective emergency response and disaster mitigation.
Figure 3 represents the YOLOv8 architecture, which is built on the PyTorch open-source machine learning library. The backbone of the model stacks 2D convolution (Conv2d) layers with batch normalization, followed by a leaky Rectified Linear Unit (ReLU) activation to handle negative inputs, allowing small non-zero gradients to propagate through the network. C denotes concatenation, and P3, P4, and P5 denote the detection heads at different scales.
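For context, a minimal fine-tuning sketch using the Ultralytics YOLOv8 API is shown below. It is an assumption of how such training could be set up, not the authors' released code; the dataset config name is hypothetical.

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 checkpoint on the custom ship-fire data.
model = YOLO("yolov8n.pt")      # pretrained weights
model.train(
    data="ship_fire.yaml",      # hypothetical dataset config (see Section 3.4)
    epochs=50,                  # the paper trains for 50 epochs
    imgsz=640,                  # assumed input resolution
)
metrics = model.val()           # evaluate on the validation split
```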

3.3. Histogram Equalization Technique Application for Detection Enhancement

The application of the histogram equalization (HE) technique for the enhancement of ship fire images is a crucial component of our study. As mentioned, the marine environment is highly likely to be humid most of the time. Our objective is to enhance precision in ship fire detection and categorize images based on the presence or absence of fire. Adverse weather conditions associated with high humidity, such as fog, mist, or heavy rainfall, can significantly impact the quality of images and compromise the effectiveness of object detection. The presence of moisture in the air can cause reduced visibility, image distortion, and altered surface characteristics, making it difficult for algorithms to accurately identify and locate objects. Therefore, we combined the trained ship fire detection model with HE. The proposed HE technique adjusts the brightness of ship images by distributing intensity values evenly over the RGB channels. However, HE can produce unrealistic effects in photographs. To mitigate this, we also apply an image upscaling technique that preserves image identity. The brightness distribution can be inspected through the cumulative distribution function (CDF), plotted as a curve over the 0 to 255 intensity range.
$$ h(v) = \mathrm{round}\!\left( \frac{\mathrm{cdf}(v) - \mathrm{cdf}_{min}}{M \times N - \mathrm{cdf}_{min}} \times (L-1) \right) $$

where $h(v)$ represents the output value after applying HE to intensity $v$, $\mathrm{round}$ rounds the result of the expression inside the parentheses, $\mathrm{cdf}(v)$ is the cumulative distribution value at intensity $v$, and $\mathrm{cdf}_{min}$ is the minimum non-zero value of the cumulative distribution. $M \times N$ are the dimensions of the input image, and $L$ is the number of intensity levels (256), because pixel intensity is generally expressed between 0 and 255.
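A direct NumPy transcription of this mapping for a single-channel 8-bit image might look as follows (a sketch for illustration, not the authors' code):

```python
import numpy as np

def equalize_gray(img, L=256):
    """Apply h(v) = round((cdf(v) - cdf_min) / (M*N - cdf_min) * (L - 1))
    to an 8-bit grayscale image."""
    M, N = img.shape
    hist = np.bincount(img.ravel(), minlength=L)   # per-intensity counts
    cdf = hist.cumsum()                            # cumulative distribution
    cdf_min = cdf[cdf > 0].min()                   # smallest non-zero cdf value
    lut = np.round((cdf - cdf_min) / (M * N - cdf_min) * (L - 1))
    lut = np.clip(lut, 0, L - 1).astype(np.uint8)
    return lut[img]                                # pixel-wise lookup

# Example with random data standing in for a low-contrast ship image.
img = np.random.randint(80, 120, size=(64, 64), dtype=np.uint8)
out = equalize_gray(img)
print(out.min(), out.max())  # intensities spread toward the full 0..255 range
```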

3.4. Data Distribution

In maritime safety and emergency response, the swift and accurate detection of ship fires plays a pivotal role in mitigating potential disasters. Leveraging advancements in computer vision and deep learning, this methodology outlines a comprehensive approach to training a model specifically designed for the detection of ship fires. The process involves the meticulous collection of a diverse dataset.
To augment the dataset’s size and variability, video frames depicting ship fires are extracted, enriching the training material. The chosen model architecture, a variant of the widely used YOLO family, is tailored to facilitate real-time detection capabilities. Pre-trained on a large dataset, the model is fine-tuned to discern between two crucial classes: ships on fire and ships not on fire. The methodology encompasses critical steps, including dataset organization, data augmentation, model selection, and training parameter optimization, ensuring the development of a robust and reliable ship fire detection system.
This methodology not only emphasizes the technical intricacies of model training but also underscores the importance of continuous improvement. Regular updates, fine-tuning, and adaptation to evolving scenarios contribute to the model’s ongoing effectiveness in safeguarding maritime environments. Through a systematic and well-documented approach, this method serves as a valuable resource for those aiming to deploy advanced technologies for ship fire detection, ultimately enhancing safety measures within maritime operations.
Figure 4 depicts our dataset's ship images in different scenarios. Panel (a) shows widely visible burning ships, where fire is clearly shown. Panel (b) is an example of ships on fire where smoke is the dominant feature for classification. Panel (c) is an example of the no-fire class. Overall, if ship images contained smoke, we labeled them as the "fire" class. "Not-fire" class images are clear, containing no smoke and only water evaporation.
Table 1 presents the comprehensive dataset used to train our model, encompassing images representative of both fire and non-fire scenarios. The dataset comprises 25,516 images in total, with meticulous attention given to balancing the representation of fire-related instances alongside non-fire instances, a critical consideration for ensuring model robustness and generalization.
Within the dataset, the "Fire" category encompasses a substantial count of 19,781 images, indicative of the emphasis placed on capturing the diverse manifestations of fire-related occurrences. Correspondingly, "Non-Fire" images were meticulously curated to provide a complementary representation, totaling 5735 instances, thereby facilitating a comprehensive assessment of the model's discriminative capabilities across varied scenarios.
For the purpose of model training and evaluation, the dataset was partitioned into distinct subsets, namely “Training Images” and “Validation Images”. The “Training Images” subset, consisting of both fire and non-fire instances, comprised 19,372 images, serving as the foundational corpus upon which the model’s learning process was predicated. In parallel, the “Validation Images” subset, comprising 6144 images, was employed to gauge the model’s performance and generalization ability on unseen data, thereby ensuring a rigorous evaluation framework.
The dataset comprises over 25,000 images of varying resolution, capturing a wide array of maritime environments. Each image is meticulously labeled to indicate the presence or absence of a fire on the ship. The inclusion of both positive and negative instances aims to challenge object detection models to discern subtle details amidst the complex maritime backdrop. A notable challenge in this dataset is the variability in ship sizes and orientations and the dynamic nature of fire occurrences. The aspect ratio of ships, combined with the unpredictable nature of fires, necessitates robust algorithms capable of handling these variations for accurate detection. The dataset is partitioned into training and validation sets to facilitate the development and evaluation of our ship fire detection model. We employ state-of-the-art object detection architectures and fine-tune them on our dataset. The training process involves optimizing for both accuracy and inference speed, balancing the need for precision with the demand for real-time performance.
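For a YOLO-style training pipeline, the train/validation split in Table 1 would typically be described by a small dataset config; the sketch below is a hypothetical example (the file name and paths are assumptions, not published by the authors):

```yaml
# ship_fire.yaml -- hypothetical dataset config for YOLOv8 training
path: datasets/ship_fire   # assumed dataset root
train: images/train        # 19,372 training images (fire + non-fire)
val: images/val            # 6144 validation images
names:
  0: fire
  1: not_fire
```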

4. Experimental Results and Discussion

This research not only establishes a robust foundation for effective ship fire detection systems but also underscores the potential of computer vision and deep learning methodologies in addressing critical safety challenges in the maritime domain. The meticulous evaluation of our approach, utilizing precision, recall, and F1 score metrics, provides a comprehensive understanding of the model’s performance, affirming its viability for practical implementation in enhancing maritime safety protocols. Furthermore, the success of our preliminary experiments encourages the further exploration and refinement of these methodologies to continually advance the capabilities of ship fire detection systems for broader industry applications.

4.1. Model Evaluation

As shown in Table 2, the computational setup utilized for this research comprises a high-performance hardware configuration. The motherboard employed is the ASRock X399 Taichi, renowned for its robustness and compatibility with high-end components. The system boasts a substantial memory capacity of 32.0 GiB, ensuring ample resources for data processing and model training tasks. At the heart of the system lies the AMD Ryzen™ Threadripper™ 1950X processor, a formidable workstation-grade CPU with 16 cores and 32 threads. This processor provides exceptional parallel computing power, enabling the efficient execution of complex machine learning algorithms. Complementing the CPU is the NVIDIA GeForce GTX 1080 Ti graphics card, renowned for its prowess in accelerating deep learning tasks through GPU parallelism.

4.2. Evaluation Metrics

The performance of the developed ship fire detection model is meticulously assessed through the examination of the confusion matrix, a fundamental tool in evaluating the classification results. The matrix encapsulates the model’s predictions against ground truth labels, offering insights into its proficiency across distinct classes. The selection of performance metrics is contingent upon distinct factors, including the inherent characteristics of the data and the objectives of the analysis. These metrics serve as pivotal tools for gauging the efficacy of a proposed approach or model, offering a nuanced evaluation of its performance. The assessment of model predictions against ground truth involves fundamental metrics such as true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). These metrics, extensively employed in our prior scholarly works [27,28,29,30,31,32], encapsulate the model’s proficiency in accurately categorizing instances, shedding light on its discriminative capabilities. The quantitative derivation of these metrics contributes to a comprehensive understanding of the model’s performance across diverse datasets and analytical contexts, delineating its efficacy and reliability in distinct scenarios.
$$ \mathrm{Precision} = \frac{TP}{TP + FP} $$
$$ \mathrm{Recall} = \frac{TP}{TP + FN} $$
$$ F1\ \mathrm{score} = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} $$
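A direct transcription of these formulas into a small helper function (illustrative, not from the paper's codebase) might look like:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 score from raw detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * recall * precision / (recall + precision)
          if (recall + precision) else 0.0)
    return precision, recall, f1

# Illustrative counts only; the paper reports the resulting scores,
# not the underlying TP/FP/FN tallies.
print(precision_recall_f1(tp=990, fp=10, fn=10))  # (0.99, 0.99, 0.99)
```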

4.3. Model Training Results

The matrix, presented in Figure 5, encapsulates the model’s classification outcomes for the “Ship on Fire” and “Ship Not on Fire” classes. The model exhibits an exceptional proficiency in detecting ships engulfed in flames, achieving a commendable accuracy of 99%. This result underscores the robustness of the model in discerning and accurately classifying instances of fire aboard ships. The high precision in this class is particularly noteworthy, indicating a low rate of false positives in the identification of fire incidents. In the complementary class of “Ship Not on Fire”, the model attains a flawless accuracy rate of 100%. This implies that the model, when confronted with ships devoid of fire, consistently makes correct predictions without any instances of misclassification. The perfect accuracy in this category signifies a robust ability to differentiate non-fire scenarios with utmost precision.
From Figure 6a it can be seen that the decline in training loss from an initial value of 1 to the remarkably low level of 0.001 signifies the model's capacity to capture intricate patterns and dependencies within the training data. This progressive learning is indicative of the model's adaptability, as it adjusts its parameters to align more closely with the underlying structure of the ship dataset; the same adaptability appears in case (b), with relatively higher fluctuation until epoch 20. In case (c), the model's accuracy increases significantly, reaching its highest point of 0.97 (97%) over 50 epochs. This metric, representing the proportion of correctly classified instances, stands as a testament to the ship fire detection model's proficiency in discerning intricate patterns within the data. While the achieved accuracy is remarkable, careful consideration must be given to the generalization capabilities of the model.
Figure 7 and Figure 8 illustrate the outcomes of ship fire detection concerning validation examples from our custom dataset. These visual representations showcase the effectiveness of our developed model in accurately identifying and delineating instances of ship fires. In Figure 7, the detection results provide a clear visualization of the algorithm’s performance, offering insights into its ability to discern and highlight areas indicative of fire incidents. Similarly, Figure 8 further elucidates the proficiency of our ship fire detection system by presenting additional validation examples from the custom dataset.
The comparative results of ship-on-fire images are presented in Figure 9. Panel (a) displays images without the application of HE, providing a baseline representation. In contrast, Panel (b) exhibits images after the application of histogram equalization, showcasing the enhanced quality and contrast achieved through this preprocessing technique. These comparative visualizations offer a side-by-side assessment of the impact of histogram equalization on ship fire images. The images in Panel (b) demonstrate the improved visibility of critical features, aiding in the discernment of fire-related patterns and enhancing the overall quality of the dataset for more effective analysis and detection. Figure 10 and Figure 11 present single-image and multiple-image examples, respectively, as visual outputs of the proposed method.
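As an illustration of how such single- and multi-image outputs could be produced with the Ultralytics API (a hedged sketch; the weights path and sample folder are hypothetical):

```python
import cv2
from ultralytics import YOLO

# Load fine-tuned weights and run inference, mirroring Figures 10 and 11.
model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical path

single = model.predict("samples/ship_on_fire.jpg", conf=0.5)  # one image
batch = model.predict("samples/", conf=0.5)                   # folder of images

for i, result in enumerate(batch):
    annotated = result.plot()                 # BGR array with boxes drawn
    cv2.imwrite(f"detection_{i}.jpg", annotated)
```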

5. Conclusions

In conclusion, this research has addressed the escalating challenge posed by onboard fires on container ships within the maritime industry. The discernible surge in incidents over recent years necessitates a proactive approach to enhance maritime safety and mitigate the associated risks. Leveraging the YOLO object detection algorithm, our study successfully developed an efficient and reliable system to detect ship fires. The model, trained on a comprehensive dataset comprising over 25,000 ship images, achieved an impressive accuracy exceeding 99%, underscoring its robust performance.
The multifaceted nature of ship fires, stemming from diverse causes such as electrical faults and human error, emphasizes the significance of advanced detection systems. Beyond immediate safety concerns, the implications of ship fires extend to environmental impacts, including the release of pollutants and greenhouse gases, contributing to global warming. Recognizing the urgency of addressing these challenges, our research advocates for the integration of state-of-the-art fire detection technologies into maritime safety systems. By implementing advanced detection systems, not only can we safeguard human lives and protect valuable cargo, but we also contribute to minimizing the ecological footprint associated with maritime disasters. This paper highlights the critical importance of proactive measures in preventing and responding to ship fires effectively. To enhance the accuracy of our ship fire detection model, we combined the trained model with HE algorithms to preprocess ship images, because the presence of moisture in the air, such as fog, mist, or heavy rainfall, can significantly influence the quality of images. The application of HE is set to increase object detection accuracy. As the maritime industry faces evolving challenges, embracing cutting-edge technologies becomes imperative to ensure the resilience and sustainability of maritime operations. Our research serves as a foundation for further advancements in ship fire detection and underscores the pivotal role of technology in shaping the future of maritime safety.

Author Contributions

A.E. and F.A. conceived the idea, performed the experiments, and developed the algorithm for ship fire detection. A.A. and W.K. provided suggestions to analyze data and improve the manuscript. The article was written by A.E. and F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Gachon University Research Fund (GCU-202300740001), the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2022S1A5C2A07090938).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Avazov, K.; Jamil, M.K.; Muminov, B.; Abdusalomov, A.B.; Cho, Y.-I. Fire Detection and Notification Method in Ship Areas Using Deep Learning and Computer Vision Approaches. Sensors 2023, 23, 7078. [Google Scholar] [CrossRef]
  2. Zhu, J.; Zhang, J.; Wang, Y.; Ge, Y.; Zhang, Z.; Zhang, S. Fire Detection in Ship Engine Rooms Based on Deep Learning. Sensors 2023, 23, 6552. [Google Scholar] [CrossRef]
  3. Norkobil Saydirasulovich, S.; Abdusalomov, A.; Jamil, M.K.; Nasimov, R.; Kozhamzharova, D.; Cho, Y.-I. A YOLOv6-Based Improved Fire Detection Approach for Smart City Environments. Sensors 2023, 23, 3161. [Google Scholar] [CrossRef]
  4. Sadewa, R.P.; Irawan, B.; Setianingsih, C. Fire Detection Using Image Processing Techniques with Convolutional Neural Networks. In Proceedings of the 2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 5–6 December 2019; pp. 290–295. [Google Scholar]
  5. Valikhujaev, Y.; Abdusalomov, A.; Cho, Y.I. Automatic Fire and Smoke Detection Method for Surveillance Systems Based on Dilated CNNs. Atmosphere 2020, 11, 1241. [Google Scholar] [CrossRef]
  6. Avazov, K.; Hyun, A.E.; Sami S, A.A.; Khaitov, A.; Abdusalomov, A.B.; Cho, Y.I. Forest Fire Detection and Notification Method Based on AI and IoT Approaches. Future Internet 2023, 15, 61. [Google Scholar] [CrossRef]
  7. Abdusalomov, A.B.; Islam, B.M.S.; Nasimov, R.; Mukhiddinov, M.; Whangbo, T.K. An Improved Forest Fire Detection Method Based on the Detectron2 Model and a Deep Learning Approach. Sensors 2023, 23, 1512. [Google Scholar] [CrossRef]
  8. Wu, H.; Hu, Y.; Wang, W.; Mei, X.; Xian, J. Ship fire detection based on an improved YOLO algorithm with a lightweight convolutional neural network model. Sensors 2022, 22, 7420. [Google Scholar] [CrossRef]
  9. Xu, K.; Xu, Y.; Xing, Y.; Liu, Z. YOLO-F: YOLO for flame detection. Int. J. Pattern Recognit. Artif. Intell. 2023, 37, 2250043. [Google Scholar] [CrossRef]
  10. Muhammad, K.; Ahmad, J.; Mehmood, I.; Rho, S.; Baik, S.W. Convolutional neural networks-based fire detection in surveillance videos. IEEE Access 2018, 6, 18174–18183. [Google Scholar] [CrossRef]
  11. Gaur, A.; Singh, A.; Kumar, A.; Kulkarni, K.S.; Lala, S.; Kapoor, K.; Srivastava, V.; Kumar, A.; Mukhopadhyay, S.C. Fire Sensing Technologies: A Review. IEEE Sens. J. 2019, 19, 3191–3202. [Google Scholar] [CrossRef]
  12. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. 2016. Available online: https://arxiv.org/pdf/1506.02640.pdf (accessed on 12 January 2024).
  13. Chen, T.H.; Wu, P.H.; Chiou, Y.C. An early fire-detection method based on image processing. In Proceedings of the 2004 International Conference on Image Processing, ICIP ‘04, Singapore, 24–27 October 2004. [Google Scholar]
  14. Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures. Remote Sens. 2020, 12, 3177. [Google Scholar] [CrossRef]
  15. Foggia, P.; Saggese, A.; Vento, M. Real-Time Fire Detection for Video-Surveillance Applications Using a Combination of Experts Based on Color, Shape, and Motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
  16. Wong, A.K.K.; Fong, N.K. Experimental Study of Video Fire Detection and its Applications. Procedia Eng. 2014, 71, 316–327. [Google Scholar] [CrossRef]
  17. Wu, Z.; Song, T.; Wu, X.; Shao, X.; Liu, Y. Spectral Spatio-Temporal Fire Model for Video Fire Detection. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1850013. [Google Scholar] [CrossRef]
  18. Abdusalomov, A.B.; Nasimov, R.; Nasimova, N.; Muminov, B.; Whangbo, T.K. Evaluating Synthetic Medical Images Using Artificial Intelligence with the GAN Algorithm. Sensors 2023, 23, 3440. [Google Scholar] [CrossRef] [PubMed]
  19. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
  20. Guan, Z.; Miao, X.; Mu, Y.; Sun, Q.; Ye, Q.; Gao, D. Forest Fire Segmentation from Aerial Imagery Data Using an Improved Instance Segmentation Model. Remote Sens. 2022, 14, 3159. [Google Scholar] [CrossRef]
  21. Nodirov, J.; Abdusalomov, A.B.; Whangbo, T.K. Attention 3D U-Net with Multiple Skip Connections for Segmentation of Brain Tumor Images. Sensors 2022, 22, 6501. [Google Scholar] [CrossRef]
  22. Zheng, Y.; Wang, Z.; Xu, B.; Niu, Y. Multi-scale semantic segmentation for fire smoke image based on global information and U-Net. Electronics 2022, 11, 2718. [Google Scholar] [CrossRef]
  23. Avazov, K.; Mukhiddinov, M.; Makhmudov, F.; Cho, Y.I. Fire Detection Method in Smart City Environments Using a Deep Learning-Based Approach. Electronics 2022, 11, 73. [Google Scholar] [CrossRef]
  24. Olshausen, B.A.; Field, D.J. Vision and the coding of natural images: The human brain may hold the secrets to the best image-compression algorithms. Am. Sci. 2000, 88, 238. Available online: http://www.americanscientist.org/issues/feature/2000/3/vision-and-the-coding-of-natural-images (accessed on 12 January 2024). [CrossRef]
  25. Ooi, C.H.; Isa, N.A.M. Quadrants dynamic histogram equalization for contrast enhancement. IEEE Trans. Consum. Electron. 2010, 56, 2552–2559. [Google Scholar] [CrossRef]
  26. Saydirasulovich, S.N.; Mukhiddinov, M.; Djuraev, O.; Abdusalomov, A.; Cho, Y.-I. An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images. Sensors 2023, 23, 8374. [Google Scholar] [CrossRef]
  27. Azim, T.; Jaffar, M.; Mirza, A. Automatic Fatigue Detection of Drivers through Pupil Detection and Yawning Analysis. In Proceedings of the Fourth International Conference on Innovative Computing, Information and Control, Kaohsiung, Taiwan, 7–9 December 2009. [Google Scholar]
  28. Raudonis, V.; Simutis, R.; Narvydas, G. Discrete eye tracking for medical applications. In Proceedings of the 2nd ISABEL, Bratislava, Slovakia, 24–27 November 2009; pp. 1–6. [Google Scholar]
  29. Farkhod, A.; Abdusalomov, A.; Makhmudov, F.; Cho, Y.I. LDA-Based Topic Modeling Sentiment Analysis Using Topic/Document/Sentence (TDS). Model. Appl. Sci. 2021, 11, 11091. [Google Scholar] [CrossRef]
  30. Liu, H.; Liu, Q. Robust real-time eye detection and tracking for rotated facial images under complex conditions. In Proceedings of the 6th ICNC, Yantai, China, 10–12 August 2010; Volume 4, pp. 2028–2034. [Google Scholar]
  31. Li, X.; Wee, W.G. An efficient method for eye tracking and eye-gazed FOV estimation. In Proceedings of the 16th IEEE International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 2597–2600. [Google Scholar]
  32. Farkhod, A.; Abdusalomov, A.B.; Mukhiddinov, M.; Cho, Y.-I. Development of Real-Time Landmark-Based Emotion Recognition CNN for Masked Faces. Sensors 2022, 22, 8704. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow chart depicting our proposed method.
Figure 2. YOLOv1 architecture [12].
Figure 3. YOLOv8 architecture [26].
Figure 4. Ship data representation, (a) ship on fire example, (b) ship on fire, mostly smoke visible, (c) ship images with no fire or smoke.
Figure 5. Confusion matrix distribution results of trained model.
Figure 6. Model training representation in graph lines. (a) shows the training loss value, (b) the validation loss over 50 epochs, and (c) the detection accuracy of both classes.
Figure 7. Frame extracted examples for "Ship on fire" class.
Figure 8. Google images for both classes.
Figure 9. Comparative results of ship on fire images, (a) representing images without HE, (b) representing images after HE.
Figure 10. Testing results on a single image for both classes.
Figure 11. Testing results with multiple input image examples.
Table 1. Distribution of fire images in the dataset.

Dataset     Training Images     Validation Images     Total Images
Fire        16,261              3520                  19,781
Non-Fire    3111                2624                  5735
Table 2. The configuration information of the experimental platform.

Configuration            Versions
Hardware model           ASRock X399 Taichi
Memory                   32.0 GiB
Processor                AMD Ryzen™ Threadripper™ 1950X × 32
Graphics                 NVIDIA GeForce GTX 1080 Ti
Operating system         Ubuntu 23.04
Operating system type    64-bit
Toolkit                  CUDA 12.0
Kernel version           Linux 6.2.0-37-generic