Article

Unsupervised Flame Segmentation Method Based on GK-RGB in Complex Background

College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
* Author to whom correspondence should be addressed.
Fire 2023, 6(10), 384; https://doi.org/10.3390/fire6100384
Submission received: 24 August 2023 / Revised: 2 October 2023 / Accepted: 3 October 2023 / Published: 7 October 2023
(This article belongs to the Special Issue Geospatial Data in Wildfire Management)

Abstract
Fires are disastrous events with significant negative impacts on both people and the environment. Timely and accurate fire detection and firefighting operations are therefore crucial for social development and ecological protection. To segment flames accurately, this paper proposes the GK-RGB unsupervised flame segmentation method. In this method, RGB segmentation is used as the central algorithm to extract flame features, a Gaussian filter is applied to remove noise interference from the image, and K-means mean clustering is employed to address the incomplete flame segmentation caused by flame colours falling outside the fixed threshold. The experimental results show that the proposed method achieves excellent results on flame images with different backgrounds at different time periods: Accuracy 97.71%, IOU 81.34%, and F1-score 89.61%. Compared with other methods, GK-RGB has higher segmentation accuracy and is more suitable for fire detection. The method therefore supports firefighting applications and provides a new reference for the detection and identification of fires.

1. Introduction

Fire is a natural disaster that poses a serious threat to human life and property [1,2]. In recent years, safety incidents caused by fires have been frequent. For example, there was a series of very large fires in Australia between September 2019 and January 2020, which exceeded 100,000 hectares [3], and in the western United States, where up to 2.5 million hectares of land were burned due to fires between January and September 2020 [4]. Failure to detect a fire in time will inevitably lead to its expansion and spread, which can have a significant negative impact on property, buildings, and ecosystems and can even pose a threat to human safety [5,6,7]. Thus, prompt and efficient identification of flames and extinguishing fires is crucial to mitigate economic losses and ensure human safety.
To recognise flames proficiently, we employ flame segmentation to isolate the flame from its background. Flame segmentation is a specialised technique in machine learning that plays a crucial role in the detection of wildland fires. In simple terms, it breaks an image or video into distinct parts, with the specific aim of identifying and isolating the regions that correspond to flames or fire. The method harnesses artificial intelligence to automatically recognise the characteristic patterns and shapes associated with fire in visual data.
In order to meet the real-time accuracy of fire detection, researchers have started to use machine learning methods for flame detection [8,9,10]. Damir et al. [11] used a histogram-based approach to segment smoke in order to segment forest fires at the pixel level. Chen et al. [12] proposed an algorithm to fuse fire segmentation with multi-feature recognition. Firstly, an improved Ycbcr model is established for determining fire segmentation under reflective and non-reflective conditions, followed by an improved growing region algorithm for fine segmentation of fires, and finally, a quantitative index for final fire recognition is given. Pratik et al. [13] used the CIE L*a*b algorithm to achieve accurate flame segmentation by detecting fire pixels in each image and using blur values ranging from 0.1 to 0.9.
Traditional flame segmentation, however, has some restrictions and frequently can only distinguish flames in particular settings. For this reason, numerous academics have suggested cutting-edge flame detection techniques. To assist users in iteratively choosing a number of frames for annotation that will inform video segmentation for contemporary flames, Yin [14] and colleagues created an interactive video segmentation system called VOS. Akmalbek et al. [15] implemented an improved implementation of fire recognition based on the Detectron2 model. By leveraging dark channels to perform picture defogging and using GhostNet, depth separable convolution, and SENet for lightweight development of the model, Huang et al. [16] suggested a lightweight forest fire detection approach based on YOLOX-L and defogging methods. Considering that flame colour features are more prominent in RGB images, flame regions can be identified based on the colour features of the pixels. Therefore, this paper is based on RGB segmentation, using the three channels R, G, and B to extract the flame region. However, this method does not solve the following two problems in the flame segmentation process: (1) Some of the flame colours are outside the fixed threshold, resulting in incomplete flame segmentation; as shown in Table 1 (a), RGB segmentation is not sensitive enough to flames other than red, i.e., these flames are outside the fixed threshold, ultimately resulting in incomplete segmentation results. (2) The presence of noise in the image affects the segmentation result; as shown in Table 1 (b), the presence of high-frequency noise in the image will reduce the segmentation effect.
To solve the problem of incomplete flame segmentation, Zhen et al. [17] proposed a multi-scale residual attention group attention combined with U-Net to extract multi-scale flame features. Chen et al. [18] extracted fire pixels based on RGB processing and subjected the extracted fire pixels to growth and disorder dynamics for flame extraction. Manish et al. [19] classified the colour pixels of the L*a*b spatial model by using K-means mean clustering to separate the smoke. Inspired by the K-means mean clustering approach, we used K-means mean clustering to adaptively determine cluster centres based on the distribution of colours, clustering pixels of similar colours together with the aim of solving the problem of incomplete flame segmentation.
To address the problem of noise in the image during segmentation, Wang et al. [20] proposed a method based on FBM and region-growing smoke segmentation by selecting an appropriate Hurst threshold to obtain a binary image and using region growing to obtain an effective segmented region. Buades et al. [21] proposed the method of non-local averaging, which calculates the non-local average of all pixels in an image aimed at removing image noise. Chen et al. [22] proposed a novel non-linear filter that effectively suppresses noise while preserving image detail. The RBiFPN network was created by Li et al. [23] and included Yolov5’s framework as the backbone network to help differentiate the subtle differences between clouds and cloudiness. In this study, we use Gaussian filtering to reduce the effect of noise by weighting each pixel and its surrounding pixels by averaging.
Specifically, in order to solve the problem of under-segmentation due to some of the flame colours being outside a fixed threshold and the problem of noise present in the image affecting the segmentation results, we propose an unsupervised flame segmentation method for GK-RGB, and the specific contributions regarding this work are as follows:
  • K-means mean clustering is invoked to determine the cluster centres adaptively by the colour distribution and to cluster similar pixel points into the same cluster, aiming to eliminate the problem of unsegmented parts of the flame colour beyond the threshold;
  • A Gaussian filtering method is invoked, where the image is smoothed and blurred to remove noise interference from the image by weighting each pixel and its surrounding pixels on average;
  • The experimental results show that the proposed method in this paper achieves good results in the segmentation of flames, in which the average values of Acc, IOU, and F1 reach 97.71%, 81.34%, and 89.61%, respectively. Compared with other methods, the GK-RGB flame segmentation method has higher recognition accuracy and provides a reference value for modern fire recognition and detection.

2. Materials and Methods

2.1. Data Set Preparation

The experimental dataset comes from the Bitbucket platform and is named the BowFire Dataset, comprising 119 images. To ensure the integrity of the experiment, six images with different backgrounds at different times were selected as the main subjects. As shown in Table 2, image (a) is a night-time flame scene with people and clutter in the background, image (b) shows a daytime street with people, trees, and a fire being fought, image (c) is an outdoor night-time flame scene, image (d) shows a daytime airport with people, aircraft, trees, and a fire being fought, image (e) shows a flame with a red wine bottle and a red vase as obstructing elements, and image (f) shows a small street fire with red street signs.
During the experiment, all images were sized at 640 × 480 with a resolution of 96 dpi. In the subsequent image presentations, the following six images will all be shown under the names (a), (b), (c), (d), (e) and (f).

2.2. GK-RGB

During the task of flame segmentation, the flames have distinct colour characteristics. That is, flames usually appear in bright red, orange, and yellow colours, which have high brightness values in the RGB colour space. By detecting and extracting these high-luminance pixels, we can effectively segment the flame area. Therefore, RGB colour features are chosen for flame segmentation in this paper. Specifically, this is achieved by setting the threshold range of the RGB colour channel. Equations (1) and (2) demonstrate the conditions that the flame pixel points (x, y) should satisfy:
R(x, y) > G(x, y) > B(x, y)        (1)
R(x, y) ≥ R_T        (2)
where R(x, y), G(x, y), and B(x, y) are the components of the pixel (x, y) on the red, green, and blue colour channels, respectively, and R_T is the minimum threshold for the red component. After several experiments, the value of R_T in this paper was set to 190.
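Equations (1) and (2) amount to a simple per-pixel test. A minimal numpy sketch (the 1 × 2 demo image is our own; R_T = 190 follows the paper):

```python
import numpy as np

def rgb_flame_mask(img, r_t=190):
    """Candidate flame pixels under Equations (1) and (2).

    img is an H x W x 3 array in RGB channel order; a pixel (x, y) is
    kept when R(x, y) > G(x, y) > B(x, y) and R(x, y) >= r_t.
    """
    r = img[..., 0].astype(np.int32)
    g = img[..., 1].astype(np.int32)
    b = img[..., 2].astype(np.int32)
    return (r > g) & (g > b) & (r >= r_t)

# A 1 x 2 toy image: one flame-like orange pixel, one blue pixel.
demo = np.array([[[230, 120, 30], [30, 60, 200]]], dtype=np.uint8)
mask = rgb_flame_mask(demo)
```

Casting to a signed integer type before comparing avoids any surprises from unsigned arithmetic on uint8 images.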
However, in the task of flame colour segmentation on complex backgrounds, using only RGB colour segmentation methods faces two problems: (1) The presence of noise interference in the image affects the segmentation effect. (2) RGB segmentation is not sensitive enough to flames other than red, resulting in incomplete segmentation results. To solve these two problems, Gaussian filtering is used to pre-process the images to filter image noise; in addition, K-means mean clustering is used to optimise the RGB method to solve the problem of incomplete RGB segmentation of flames other than red. Section 2.3 and Section 2.4 will provide more detailed information.
The following is the general procedure for this experiment:
  • Input image: reads the folder for which flame splitting is required;
  • Gaussian filtering: Gaussian filtering is applied to the input image to remove the noise present in the image;
  • K-means mean clustering: the RGB value of each pixel is extracted; after determining the K value, the cluster centres are selected and iteratively updated to find the best cluster centres;
  • RGB colour segmentation: based on the K-means clustering result, the cluster centre representing the flame is selected, and based on the selected flame colour, the pixel points similar to it are extracted to form the flame region.

2.3. K-Means Mean Clustering

To solve the problem of incomplete RGB colour segmentation for flames other than red, we used K-means mean clustering to optimise RGB segmentation. The method is capable of adaptively determining clustering centres based on the distribution of colours, clustering similarly coloured pixels together and, thus, segmenting the different colour regions of the flame more accurately. This allows better reproduction of the details and colour variations of the flame and compensates for the colour limitations of the RGB colour model. The steps regarding its operation are as follows:
  • First, the K-means algorithm identifies the different coloured regions of a flame by clustering the pixels in the flame image. Setting the number of clusters (K) to 4, K-means can group pixels of similar colours together and separate out regions of flame of different colours;
  • Subsequently, the position of the cluster centres is automatically adjusted to suit the different colours of the flames. The cluster centres are dynamically determined based on the distribution of the data, thus capturing subtle differences in flame colour. This adaptive nature gives K-means an advantage over fixed-threshold RGB segmentation when dealing with flames of multiple colour variations.
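As a sketch of the clustering step (K = 4 as above), here is a minimal pure-numpy K-means applied to synthetic pixel colours; the colour values and the deterministic farthest-point seeding are our own illustrative choices, and a production system would typically use cv2.kmeans or scikit-learn instead:

```python
import numpy as np

def kmeans(points, k=4, iters=20):
    """Minimal Lloyd's K-means with deterministic farthest-point seeding."""
    points = points.astype(float)
    # Seeding: start from the first point, then repeatedly take the point
    # farthest from all centres chosen so far.
    centres = [points[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centres], axis=0)
        centres.append(points[d.argmax()])
    centres = np.array(centres)
    for _ in range(iters):
        # Assign each colour to its nearest centre...
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each centre to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return centres, labels

# Synthetic pixel colours: red flame, yellow flame, grey smoke, green background.
rng = np.random.default_rng(1)
base = np.array([[220, 40, 20], [230, 200, 60], [120, 120, 120], [30, 140, 40]], float)
pixels = np.vstack([c + rng.normal(0, 5, size=(100, 3)) for c in base])
centres, labels = kmeans(pixels, k=4)
```

Because the centres settle on the actual colour distribution, the yellow flame cluster is recovered even though its red component behaves differently from the fixed-threshold rule.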

2.4. Gaussian Filtering

In the process of flame segmentation, the presence of noise in the image can cause noise to appear in the segmentation results, thus affecting the results. In order to solve the noise problem, we use a Gaussian filtering method that smoothes and blurs the effect of the values of the surrounding pixels on the current pixel, mainly by means of a weighted average. The detailed steps regarding this method are as follows:
1. First, determine the size of the Gaussian filter, which in this paper is 5 × 5.
2. Calculate the weight values of the Gaussian kernel based on the filter size and the standard deviation σ. The Gaussian kernel is a two-dimensional matrix in which each element corresponds to a pixel position in the filter; the value of each element, computed from the Gaussian function, gives the weight of that position in the filtering process. The Gaussian function is as follows:
G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
where x and y are the offsets of the current pixel position relative to the centre of the filter, and σ is the standard deviation of the Gaussian function.
3. Filter each pixel in the image: for each pixel, take the neighbouring pixels within the filter window around it and perform element-wise multiplication and summation with the Gaussian kernel at the corresponding positions; that is, replace the current pixel value with the weighted average of the surrounding pixels.
4. Apply the same filtering operation to every pixel of the image, thus removing the noise.
With Gaussian filtering, each pixel in the image is affected by a weighted average of its surrounding pixels, where the weights are determined by a Gaussian kernel. Since the Gaussian kernel is designed so that pixels closer to the central pixel have a higher weight and pixels further away from the central pixel have a lower weight, Gaussian filtering can be effective in removing noise in real flame segmentation.
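The four steps above can be sketched directly from the Gaussian function. In this minimal numpy version, σ = 1.0 is our illustrative choice (the paper fixes only the 5 × 5 size), and the single-channel filter would be applied per colour channel:

```python
import numpy as np

def gaussian_kernel(ksize=5, sigma=1.0):
    """Steps 1-2: build a ksize x ksize kernel from the Gaussian function."""
    ax = np.arange(ksize) - ksize // 2            # offsets x, y from the centre tap
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()                            # normalise so the weights sum to 1

def gaussian_filter(img, ksize=5, sigma=1.0):
    """Steps 3-4: replace each pixel with the weighted average of its window."""
    k = gaussian_kernel(ksize, sigma)
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode="edge")  # replicate borders
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + ksize, j:j + ksize] * k).sum()
    return out

kernel = gaussian_kernel()
smoothed = gaussian_filter(np.full((6, 6), 7.0))
```

Normalising the kernel to sum to 1 is what makes the operation a weighted average: a constant image passes through unchanged, while high-frequency noise is averaged away.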

3. Results

3.1. Experimental Evaluation Indicators

The main evaluation indicators used in this experiment were F1-score, IOU, and Accuracy. In the following equations, TP is true positive (predicted flame, actual flame); FP is false positive (predicted flame, actual non-flame); FN is false negative (predicted non-flame, actual flame); and TN is true negative (predicted non-flame, actual non-flame).
IOU is the intersection-over-union ratio, which measures the degree of overlap between two regions. It is calculated as follows:
IOU = |A ∩ B| / |A ∪ B|
The F1-score is the harmonic mean of Precision and Recall, which is expressed as follows:
F1-score = (2 × Precision × Recall) / (Precision + Recall) = 2TP / (2TP + FN + FP)
Accuracy is expressed as the percentage of the total sample size that is predicted correctly, and it is given by the following formula:
Accuracy = (TP + TN) / (TP + FP + FN + TN)
In later sections, due to limited space in the table, Acc will be used to denote Accuracy and F1 to denote the F1-score.
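The three formulas can be computed directly from binary prediction and ground-truth masks. A minimal numpy sketch (the toy masks are hypothetical):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Accuracy, IOU, and F1-score for two boolean masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # predicted flame, actual flame
    fp = np.sum(pred & ~gt)     # predicted flame, actual non-flame
    fn = np.sum(~pred & gt)     # predicted non-flame, actual flame
    tn = np.sum(~pred & ~gt)    # predicted non-flame, actual non-flame
    acc = (tp + tn) / (tp + fp + fn + tn)
    iou = tp / (tp + fp + fn)             # |A ∩ B| / |A ∪ B|
    f1 = 2 * tp / (2 * tp + fn + fp)
    return acc, iou, f1

# Toy 1 x 4 masks containing exactly one TP, one FP, one FN, and one TN.
pred = np.array([[1, 1, 0, 0]], dtype=bool)
gt = np.array([[1, 0, 1, 0]], dtype=bool)
acc, iou, f1 = segmentation_metrics(pred, gt)
```

With one of each outcome, Accuracy is 2/4, IOU is 1/3, and F1 is 2/4, which matches the formulas above term by term.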

3.2. Ablation Experiments

To evaluate the performance of the method used in this paper, ablation experiments were carried out. Using RGB colour segmentation as the baseline, Gaussian filtering, K-means mean clustering, and the combination of the two were added for comparison, in order to demonstrate the effectiveness of Gaussian filtering and K-means mean clustering for optimising RGB segmentation. Here, GK-RGB is the method proposed in this paper, K-means-RGB denotes RGB colour segmentation optimised with K-means mean clustering, and Gaussian–RGB denotes the combination of Gaussian filtering with RGB colour segmentation. The visualisation results of the ablation experiments and the mean experimental results are given in Table 3 and Table 4, respectively.
As can be seen in Table 3, RGB colour segmentation alone is not ideal. Images (a)–(f) all show varying degrees of missed flame regions, with (a), (d), and (e) the most serious. This is because large parts of these flames are yellow, which falls outside the RGB thresholds and is therefore missed; in addition, the RGB results contain more noise, degrading them further. When Gaussian filtering is added, i.e., with the Gaussian–RGB method, the noise content in images (a)–(f) is reduced. The most significant improvement is in image (a), where Accuracy rises by 4.18%, IOU by 14.19%, and F1-score by 13.13%, because the Gaussian filter removes high-frequency noise from the image and thereby enhances the segmentation. When RGB segmentation is optimised using K-means mean clustering, flame segmentation improves greatly in all six images, most markedly in (a), (d), and (e), where the large missed regions of plain RGB segmentation are recovered by the K-means-RGB method. This is because K-means mean clustering can extract the colour distribution of a flame from a complex flame image and cluster similarly coloured flame pixels together, achieving a complete segmentation of the flame. In addition, although the segmentation of the right-hand image in (f) misses a small amount of flame, it still shows the efficacy of the proposed method, and images (d) and (e) contain flame-like distractors that are nevertheless correctly distinguished, demonstrating the stability of the proposed technique. Compared with the other three methods, GK-RGB gives the best flame segmentation results.
The method uses Gaussian filtering as pre-processing, followed by K-means mean clustering to optimise the RGB segmentation, producing flame segmentation results that largely overlap with the labelled map.
As shown in Table 4, which gives the average metrics over the segmented images, the GK-RGB method achieves the best overall Acc, IOU, and F1 values, all higher than those of the K-means-RGB, Gaussian–RGB, and RGB methods. This further illustrates the effectiveness of the Gaussian filtering and K-means mean clustering modules in this experiment.
To assess the runtime of the proposed method, we computed the average per-image processing time of each ablation variant over the six images used above. The CPU employed in this experiment was an Intel(R) Core(TM) i7-10750H @ 2.60 GHz. Table 5 presents the results of this set of experiments. It shows that GK-RGB requires a longer processing time for flame segmentation than the other three methods; however, this extra time is accompanied by higher segmentation accuracy, and the processing time remains within practical bounds for real-world inspection applications.

3.3. Comparative Experiments

To further analyse the performance of the GK-RGB method, we compare it with other strong flame segmentation methods: Backprojection-Hsv [24], Backprojection-Ycrcb [24], Chen [25], and Phillips [26]. Table 6 gives the experimental results of these methods alongside GK-RGB. Because the images used are the same as those in Table 3, and for reasons of space, the original images are not repeated in this table.
From Table 6, it can be seen that for image (b) all five methods achieved results largely consistent with the label image; on the metrics, the four compared methods were even slightly better than GK-RGB. However, for images (a), (c), (d), (e), and (f), GK-RGB segments better than the four compared methods, and for (a), (d), and (e) it is far superior. This is because in image (a), large areas of the flame appear in colours other than red, leading to under-segmentation by the Backprojection-Hsv, Backprojection-Ycrcb, and Chen methods, and the flame's glow cast on the ground further aggravates the misjudgement of the ground region by the compared methods. In images (d) and (e), the similarity of the firefighter's clothing, wine bottles, and vases to the flame colour causes errors in the segmentation results of the four compared methods. In contrast, GK-RGB preserves the integrity of the flame segmentation in images (a) and (d) because it uses K-means mean clustering to optimise the RGB colour segmentation by grouping similarly coloured pixels together, and by dynamically determining the cluster centres it eliminates the residual interference from flame-coloured objects such as firefighter clothing, wine bottles, and vases.
As shown in Table 7, which gives the average metric values for the GK-RGB, Backprojection-Hsv, Backprojection-Ycrcb, Chen, and Phillips segmentations, the GK-RGB method has the highest mean score on the flame segmentation metrics, outperforming all four comparison methods. The GK-RGB method proposed in this paper therefore surpasses these strong comparison methods and is suitable for flame segmentation tasks with different complex backgrounds.

4. Conclusions

In this paper, we propose the GK-RGB method, which aims to achieve accurate segmentation of fire in complex backgrounds. First, a flame dataset named the BowFire Dataset was obtained from the Bitbucket platform, and to ensure the integrity of the experiment, flame images with different backgrounds at different time periods were selected from its 119 images as the main subjects. In the GK-RGB method, we selected RGB as the main algorithm for flame segmentation and extracted the main features of the flame by setting thresholds for the three channels R, G, and B. At the same time, we used a Gaussian filtering method to eliminate the noise interference present in the image. In addition, to solve the under-segmentation that occurs in RGB segmentation because some flame colours are outside the threshold, we used K-means mean clustering to cluster the flame colour pixels together and thus segment the flames completely.
The experimental results show that our GK-RGB method achieves excellent results with 97.71% Accuracy, 81.34% IOU, and 89.61% F1-score. The ablation experiments further demonstrated the effectiveness of Gaussian filtering (Accuracy +0.22%, IOU +1.42%, F1-score +0.95%) and K-means mean clustering (Accuracy +2.57%, IOU +14.61%, F1-score +9.69%). Compared with other methods, GK-RGB segments the flame region more accurately and completely and is more suitable for flame detection. This provides a new reference for modern fire-protection construction as well as for the detection and identification of fires.
To make the work of fire identification more complete, we will pursue further improvements in two areas: (1) Real-time flame segmentation: combining fast real-time target detection and tracking techniques with the GK-RGB segmentation method to improve the speed and real-time performance of flame segmentation. (2) Multi-sensor data fusion: in addition to RGB colour, flames also produce heat and light radiation, so fusing RGB images with thermal images or other sensor data could improve the accuracy and robustness of flame segmentation. Overall, we are committed to improving flame segmentation technology to better support fire-protection work and reduce the incidence of fire.

Author Contributions

X.S.: Methodology, writing—original draft, conceptualisation. Z.L.: Visualisation, data acquisition, software. Z.X.: Validation, project administration, investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation in China (Grant No. 61703441), in part by the Key Project of Education Department of Hunan Province (Grant No. 21A0179), the Changsha Municipal Natural Science Foundation (Grant No. kq2014160), and Hunan Key Laboratory of Intelligent Logistics Technology (2019TP1015).

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflict of interest in the publication of this article.

References

  1. Galizia, L.F.; Curt, T.; Barbero, R.; Rodrigues, M.J. Understanding fire regimes in Europe. Int. J. Wildland Fire 2021, 31, 56–66. [Google Scholar] [CrossRef]
  2. Agbeshie, A.A.; Abugre, S.; Atta-Darkwa, T.; Awuah, R.J. A review of the effects of forest fire on soil properties. J. For. Res. 2022, 33, 1419–1441. [Google Scholar] [CrossRef]
  3. Boer, M.M.; Resco de Dios, V.; Bradstock, R.A. Unprecedented burn area of Australian mega forest fires. Nat. Clim. Chang. 2020, 10, 171–172. [Google Scholar] [CrossRef]
  4. Higuera, P.E.; Abatzoglou, J.T. Record-setting climate enabled the extraordinary 2020 fire season in the western United States. Glob. Change Biol. 2021, 27, 1–2. [Google Scholar] [CrossRef] [PubMed]
  5. Pyne, S.J. World Fire: The Culture of Fire on Earth; University of Washington Press: Washington, DC, USA, 1997. [Google Scholar]
  6. Brushlinsky, N.; Ahrens, M.; Sokolov, S.; Wagner, P.J. World fire statistics. Cent. Fire Stat. 2016, 10. [Google Scholar]
  7. Switzer, J.M.; Hope, G.D.; Grayston, S.J.; Prescott, C.E. Changes in soil chemical and biological properties after thinning and prescribed fire for ecosystem restoration in a Rocky Mountain Douglas-fir forest. For. Ecol. Manag. 2012, 275, 1–13. [Google Scholar] [CrossRef]
  8. Jeon, M.; Choi, H.-S.; Lee, J.; Kang, M.J. Multi-scale prediction for fire detection using convolutional neural network. Fire Technol. 2021, 57, 2533–2551. [Google Scholar] [CrossRef]
  9. Toulouse, T.; Rossi, L.; Akhloufi, M.; Celik, T.; Maldague, X.J. Benchmarking of wildland fire colour segmentation algorithms. IET Image Process. 2015, 9, 1064–1072. [Google Scholar] [CrossRef]
  10. Rudz, S.; Chetehouna, K.; Hafiane, A.; Laurent, H.; Séro-Guillaume, O.J. Investigation of a novel image segmentation method dedicated to forest fire applications. Meas. Sci. Technol. 2013, 24, 075403. [Google Scholar] [CrossRef]
  11. Krstinić, D.; Stipaničev, D.; Jakovčević, T.J. Histogram-based smoke segmentation in forest fire detection system. Inf. Technol. Control 2009, 38. [Google Scholar]
  12. Qiang, X.; Zhou, G.; Chen, A.; Zhang, X.; Zhang, W.J. Forest fire smoke detection under complex backgrounds using TRPCA and TSVB. Int. J. Wildland Fire 2021, 30, 329–350. [Google Scholar] [CrossRef]
  13. Chhetri, P.; Banu, P.N. Fuzzy based Manu’s fire segmentation algorithm. Int. J. Intell. Eng. Inform. 2020, 8, 221–238. [Google Scholar] [CrossRef]
  14. Yin, Z.; Zheng, J.; Luo, W.; Qian, S.; Zhang, H.; Gao, S. Learning to recommend frame for interactive video object segmentation in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15445–15454. [Google Scholar]
  15. Abdusalomov, A.B.; Islam, B.M.S.; Nasimov, R.; Mukhiddinov, M.; Whangbo, T.K.J. An improved forest fire detection method based on the detectron2 model and a deep learning approach. Sensors 2023, 23, 1512. [Google Scholar] [CrossRef] [PubMed]
  16. Huang, J.; He, Z.; Guan, Y.; Zhang, H.J. Real-time forest fire detection by ensemble lightweight YOLOX-L and defogging method. Sensors 2023, 23, 1894. [Google Scholar] [CrossRef] [PubMed]
  17. Zheng, Y.; Wang, Z.; Xu, B.; Niu, Y.J. Multi-Scale Semantic Segmentation for Fire Smoke Image Based on Global Information and U-Net. Electronics 2022, 11, 2718. [Google Scholar] [CrossRef]
  18. Chen, T.-H.; Wu, P.-H.; Chiou, Y.-C. An early fire-detection method based on image processing. In Proceedings of the 2004 International Conference on Image Processing, ICIP’04, Singapore, 24–27 October 2004; pp. 1707–1710. [Google Scholar]
  19. Shrivastava, M.; Matlani, P. A smoke detection algorithm based on K-means segmentation. In Proceedings of the 2016 International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 11–12 July 2016; pp. 301–305. [Google Scholar]
  20. Wang, X.; Jiang, A.; Wang, Y. A segmentation method of smoke in forest-fire image based on fbm and region growing. In Proceedings of the 2011 Fourth International Workshop on Chaos-Fractals Theories and Applications, Hangzhou, China, 19–22 October 2011; pp. 390–393. [Google Scholar]
  21. Buades, A.; Coll, B.; Morel, J.-M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; pp. 60–65. [Google Scholar]
  22. Chen, T.; Ma, K.-K.; Chen, L.-H. Tri-state median filter for image denoising. IEEE Trans. Image Process. 1999, 8, 1834–1838. [Google Scholar] [CrossRef] [PubMed]
  23. Li, A.; Zhao, Y.; Zheng, Z.J. Novel Recursive BiFPN Combining with Swin Transformer for Wildland Fire Smoke Detection. Forests 2022, 13, 2032. [Google Scholar] [CrossRef]
  24. Wirth, M.; Zaremba, R. Flame region detection based on histogram backprojection. In Proceedings of the 2010 Canadian Conference on Computer and Robot Vision, Washington, DC, USA, 31 May–2 June 2010; pp. 167–174. [Google Scholar]
25. Chen, T.-H.; Kao, C.-L.; Chang, S.-M. An intelligent real-time fire-detection method based on video processing. In Proceedings of the IEEE 37th Annual 2003 International Carnahan Conference on Security Technology, Taipei, Taiwan, 14–16 October 2003; pp. 104–111. [Google Scholar]
26. Phillips III, W.; Shah, M.; da Vitoria Lobo, N. Flame recognition in video. Pattern Recognit. Lett. 2002, 23, 319–327. [Google Scholar] [CrossRef]
Table 1. Shortcomings of the RGB method of segmenting flames.
[Image panels omitted: for two sample images (a) and (b), the table shows the original image, the ground truth, and the RGB segmentation result.]
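The failure cases in Table 1 stem from fixed colour thresholds: flame pixels whose colours fall outside the fixed range are missed. A minimal numpy sketch of a fixed-threshold RGB rule of this kind (in the spirit of Chen et al. [18]; the function name and threshold value are illustrative, not taken from the paper):

```python
import numpy as np

def rgb_flame_mask(img, r_thresh=150):
    """Fixed-threshold RGB flame rule: a pixel is flame if R > r_thresh
    and the channels are ordered R >= G > B (typical of flame colours)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > r_thresh) & (r >= g) & (g > b)
```

Any flame pixel whose red channel falls below the fixed `r_thresh` is rejected outright, which is exactly the incomplete-segmentation behaviour Table 1 illustrates.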
Table 2. Pictures of different time periods and backgrounds.
[Image panels omitted: six sample images (a)–(f).]
Table 3. Visual comparison of results of ablation experiments (original-image, ground-truth and segmentation panels omitted; metrics per method).

| Image | Metric | GK-RGB | K-Means-RGB | Gaussian-RGB | RGB |
|-------|--------|--------|-------------|--------------|-----|
| (a) | Acc | 95.37% | 94.73% | 87.84% | 83.66% |
|     | IOU | 81.77% | 79.80% | 55.19% | 41.00% |
|     | F1  | 89.97% | 88.77% | 71.28% | 58.15% |
| (b) | Acc | 96.76% | 96.55% | 93.03% | 92.99% |
|     | IOU | 83.98% | 83.00% | 65.45% | 65.26% |
|     | F1  | 91.29% | 90.71% | 79.12% | 78.98% |
| (c) | Acc | 98.50% | 98.44% | 97.82% | 97.53% |
|     | IOU | 80.73% | 79.81% | 70.59% | 66.94% |
|     | F1  | 89.39% | 88.77% | 82.76% | 80.20% |
| (d) | Acc | 98.96% | 99.03% | 96.39% | 95.34% |
|     | IOU | 90.35% | 91.04% | 70.96% | 59.62% |
|     | F1  | 94.93% | 95.31% | 83.01% | 74.70% |
| (e) | Acc | 97.68% | 97.30% | 97.20% | 95.67% |
|     | IOU | 70.89% | 66.75% | 65.29% | 46.35% |
|     | F1  | 82.97% | 80.06% | 79.00% | 63.34% |
| (f) | Acc | 98.96% | 98.90% | 98.56% | 98.14% |
|     | IOU | 80.33% | 79.13% | 72.91% | 65.43% |
|     | F1  | 89.09% | 88.35% | 84.33% | 79.10% |
Table 4. Mean values of ablation experiment results.

| Metric | GK-RGB | K-Means-RGB | Gaussian-RGB | RGB |
|--------|--------|-------------|--------------|-----|
| Accuracy | 97.71% | 97.49% | 95.14% | 93.89% |
| IOU | 81.34% | 79.92% | 66.73% | 57.43% |
| F1-score | 89.61% | 88.66% | 79.92% | 72.41% |
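The ablation compares three components: Gaussian filtering for denoising, K-means clustering of pixel colours, and an RGB rule to pick the flame class. A sketch of a pipeline of that shape, assuming scipy for the Gaussian step; the cluster count, sigma, threshold and the brightness-quantile seeding of K-means are illustrative choices, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kmeans_colours(pixels, k, iters=20):
    """Plain k-means over N x 3 RGB pixels. Centres are seeded at brightness
    quantiles so the result is deterministic (an illustrative choice)."""
    order = pixels.sum(axis=1).argsort()
    seeds = order[np.linspace(0, len(order) - 1, k).astype(int)]
    centres = pixels[seeds].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels, centres

def gk_rgb_segment(img, k=2, sigma=1.0, r_thresh=120):
    """GK-RGB-style pipeline: (1) Gaussian denoising per channel,
    (2) k-means colour clustering, (3) an RGB flame rule
    (R > r_thresh and R >= G > B) applied to the cluster centres."""
    smooth = np.stack([gaussian_filter(img[..., c].astype(float), sigma)
                       for c in range(3)], axis=-1)
    h, w, _ = smooth.shape
    labels, centres = kmeans_colours(smooth.reshape(-1, 3), k)
    flame = [j for j, (r, g, b) in enumerate(centres) if r > r_thresh and r >= g > b]
    return np.isin(labels, flame).reshape(h, w)
```

Because the RGB rule is evaluated on cluster centres rather than raw pixels, darker flame pixels grouped into a flame-coloured cluster are still kept, which is consistent with the higher IOU of the K-means variants in Table 4.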
Table 5. Processing time.

| | GK-RGB | K-Means-RGB | Gaussian-RGB | RGB |
|---|--------|-------------|--------------|-----|
| Time (s) | 0.284 | 0.264 | 0.153 | 0.145 |
Table 6. Visualisation results compared to other methods (ground-truth and segmentation panels omitted; metrics per method).

| Image | Metric | GK-RGB | Backprojection-HSV [24] | Backprojection-YCrCb [24] | Chen [25] | Phillips [26] |
|-------|--------|--------|-------------------------|---------------------------|-----------|---------------|
| (a) | Acc | 95.37% | 76.04% | 76.11% | 77.76% | 70.71% |
|     | IOU | 81.77% | 46.75% | 45.32% | 47.24% | 42.36% |
|     | F1  | 89.97% | 63.71% | 62.37% | 64.17% | 59.52% |
| (b) | Acc | 96.76% | 98.10% | 97.00% | 97.61% | 96.73% |
|     | IOU | 83.98% | 91.09% | 86.68% | 88.44% | 91.52% |
|     | F1  | 91.29% | 95.34% | 92.87% | 93.87% | 95.57% |
| (c) | Acc | 98.50% | 95.30% | 93.28% | 96.87% | 98.18% |
|     | IOU | 80.73% | 60.26% | 51.63% | 69.32% | 78.69% |
|     | F1  | 89.39% | 75.20% | 68.10% | 81.88% | 88.07% |
| (d) | Acc | 98.96% | 94.37% | 92.34% | 96.93% | 96.68% |
|     | IOU | 90.35% | 63.85% | 56.80% | 87.43% | 74.96% |
|     | F1  | 94.93% | 77.93% | 72.45% | 77.67% | 85.69% |
| (e) | Acc | 97.68% | 97.01% | 96.51% | 97.42% | 96.01% |
|     | IOU | 70.89% | 71.97% | 69.18% | 70.12% | 60.47% |
|     | F1  | 82.97% | 83.70% | 81.78% | 82.44% | 75.37% |
| (f) | Acc | 98.96% | 96.43% | 94.28% | 98.62% | 98.31% |
|     | IOU | 80.33% | 59.08% | 47.30% | 78.44% | 75.04% |
|     | F1  | 89.09% | 74.28% | 64.22% | 87.92% | 85.74% |
Table 7. Average segmentation compared to other methods.

| Metric | GK-RGB | Backprojection-HSV | Backprojection-YCrCb | Chen | Phillips |
|--------|--------|--------------------|----------------------|------|----------|
| Accuracy | 97.71% | 92.86% | 91.59% | 94.20% | 92.77% |
| IOU | 81.34% | 65.50% | 59.49% | 74.50% | 70.51% |
| F1-score | 89.61% | 78.36% | 73.63% | 81.33% | 81.66% |
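The Accuracy, IOU and F1 figures in Tables 3–7 are standard pixel-wise metrics over binary masks. A minimal sketch of how they can be computed from a predicted mask and a ground-truth mask (not the authors' evaluation code; the function name is illustrative):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Pixel-wise Accuracy, IoU and F1 for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # flame predicted as flame
    fp = np.logical_and(pred, ~gt).sum()      # background predicted as flame
    fn = np.logical_and(~pred, gt).sum()      # flame predicted as background
    tn = np.logical_and(~pred, ~gt).sum()     # background predicted as background
    acc = (tp + tn) / pred.size
    iou = tp / (tp + fp + fn) if (tp + fp + fn) > 0 else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) > 0 else 1.0
    return float(acc), float(iou), float(f1)
```

For a single mask pair, F1 = 2·IoU / (1 + IoU), which is why the F1 and IOU columns track each other in the tables above (the per-image identity does not hold exactly for the averaged values).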

Shen, X.; Liu, Z.; Xu, Z. Unsupervised Flame Segmentation Method Based on GK-RGB in Complex Background. Fire 2023, 6, 384. https://doi.org/10.3390/fire6100384