Article

Real-Time Fire Classification Models Based on Deep Learning for Building an Intelligent Multi-Sensor System

1 Division of Electrical and Computer Engineering, Chonnam National University, Yeosu 59626, Republic of Korea
2 Misotech Research Institute, R1107, 43, Digital-ro 26-gil, Guro-gu, Seoul 08389, Republic of Korea
3 Division of Data Analysis, Busan-Ulsan-Gyeongnam Branch, Korea Institute of Science and Technology Information, 41, Centum dong-ro, Haeundae-gu, Busan 48059, Republic of Korea
* Author to whom correspondence should be addressed.
Fire 2024, 7(9), 329; https://doi.org/10.3390/fire7090329
Submission received: 9 August 2024 / Revised: 10 September 2024 / Accepted: 19 September 2024 / Published: 21 September 2024
(This article belongs to the Special Issue Advances in Building Fire Safety Engineering)

Abstract

Fire detection systems are critical for mitigating the damage caused by fires, which can result in significant annual property losses and fatalities. This paper presents a deep learning-based fire classification model for an intelligent multi-sensor system aimed at early and reliable fire detection. The model processes data from multiple sensors that detect various parameters, such as temperature, humidity, and gas concentrations. Several deep learning architectures were evaluated, including LSTM, GRU, Bi-LSTM, LSTM-FCN, InceptionTime, and Transformer. The models were trained on data collected from controlled fire scenarios and validated for classification accuracy, loss, and real-time performance. The results indicated that the LSTM-based models (particularly Bi-LSTM and LSTM) could achieve high classification accuracy and low false alarm rates, demonstrating their effectiveness for real-time fire detection. The findings highlight the potential of advanced deep-learning models to enhance the reliability of sensor-based fire detection systems.

1. Introduction

Fire is considered a highly detrimental catastrophe in contemporary society that can result in substantial destruction of property and loss of life. According to statistical data from the National Fire Protection Association (NFPA) in the United States, there were 1,504,500 reported fire incidents in 2022, marking a 12.2% increase from 2013. Moreover, these incidents resulted in 3790 fatalities and 13,250 injuries, with an estimated property damage cost of USD 18.1 billion [1].
Various policies, technologies, and systems are being implemented to mitigate fire damage, casualties, and economic consequences. However, incidents of fire-related harm persist. As indicated in a 2023 report by the National Fire Agency of Korea, although the incidence of fires and resulting fatalities has decreased, the scale of destruction caused by fires remains substantial. In the Republic of Korea, 40,113 fire incidents were recorded in 2022, resulting in 341 fatalities and 2327 injuries, with property damage estimated at KRW 1210.4 billion [2]. Negligence emerged as a primary factor in these fires, with a significant proportion originating in industrial settings, such as those involving electricity, machinery, and chemicals. Moreover, the statistical data indicate that although the total number of fires has declined, the incidence of fires in apartments has increased, underscoring the need for enhanced preventive measures and strengthened individual safety management in residential settings [3].
The previously mentioned statistical data clearly demonstrate that fire is a preventable disaster if detection and monitoring are conducted promptly. Moreover, given that the damage and casualties from fires have escalated exponentially over time, early fire detection has become imperative rather than optional [4]. To minimize fire damage, extensive research has been conducted on various algorithms for early fire detection that have subsequently been successfully developed and commercialized. However, enhancing sensor sensitivity for real-time detection often results in increased false alarms, which can trigger unnecessary emergency evacuations and entail significant economic and social costs. For example, a false alarm on an aircraft is estimated to cost between USD 30,000 and 50,000 [5]. Therefore, fire detection algorithms must achieve high accuracy to ensure both real-time detection and sufficient reliability.
Fire detection systems can be broadly categorized into two types based on the type of data measured by their sensors during a fire event. The first comprises video-based systems, which employ cameras to detect visual signs of fire (such as flames and smoke). The second type relies on chemical sensors to detect the concentration of key chemicals, gases, and particulates generated during a fire [6,7]. With the rapid advancement of artificial intelligence technology, there has been significant progress in image-based fire detection research and development. However, despite the numerous advantages and potential of chemical sensor-based fire detection, commercially available fire detection systems predominantly use traditional methods, and progress in this area remains limited. Consequently, enhancing fire detection performance and reliability through a deep learning model based on multidimensional chemical sensor data presents a highly effective approach to improving sensor-based fire detection capabilities [7,8,9].
In this study, a comprehensive analysis of the existing scientific literature was conducted to identify significant studies in this domain, which facilitated the recognition of algorithmic research patterns and relevant architectures associated with early fire prediction based on fire detection sensors. The motivation behind this study is to develop a system that can detect fires early and accurately by leveraging deep learning models that analyze sensor data in real time. Early fire detection is not just about saving property; it also reduces fatalities and enables rapid emergency responses. The methodology presented in this paper entails the intelligent processing of data gathered from fire detection sensors through a classification model based on deep learning. Within this research, the primary issue of fire detection was segmented into the following three distinct subcomponents:
  • The first stage entails preprocessing the data collected from multiple sensors, during which feature engineering is conducted to prepare the data for input to the model.
  • Subsequently, several deep learning models are applied to classify the time series data, facilitating the evaluation of model performance and identification of the most effective model.
  • Finally, performance metrics such as the fire-starting time accuracy (FSTA) and false alarm rate (FAR) are examined to assess the model’s capability to accurately classify fires in real time.
The subsequent sections of this paper are structured as follows:
  • Section 2 provides an overview of previous methodologies and approaches to the current research.
  • All proposed methodologies are delineated in Section 3.
  • The results of the aforementioned methodologies are described in Section 4.
  • The study’s constraints and future research directions are deliberated in Section 5.
  • The conclusions and general results of this study are presented in Section 6.

2. Related Studies

2.1. Advantages and Disadvantages of Fire Detection Based on Chemical Sensor Systems

One of the primary advantages of fire detection based on chemical sensor systems is the ability to detect fires more quickly than conventional smoke detectors. This is because chemical volatiles often appear before smoke particles, providing faster alarm responses and potentially saving more lives by alerting occupants earlier [6]. Image-based fire detection is particularly advantageous in large areas (such as forests) because it can capture visuals of flames and smoke. However, this method typically identifies fires only after they have reached a significant intensity, limiting its effectiveness for early detection [7]. In contrast, chemical sensor-based fire detection systems can quickly identify fire-related elements (such as carbon monoxide (CO), smoke (dust) particles, and changes in oxygen (O2) levels) in the initial stages, which are not easily detected by image-based technologies. Furthermore, chemical sensors can detect toxic emissions, which are responsible for most fire-related casualties. Hence, they offer an extra layer of safety [6]. The use of sensors such as those that detect CO and total volatile organic compounds (t-VOC) has been demonstrated to be effective in terms of concentration increase and response time, making them reliable for early fire detection [10]. Moreover, integrating these sensors into IoT-based systems can enhance real-time monitoring and precision in temperature and gas measurements, which is crucial for outdoor fire detection and firefighting efforts [11]. Owing to their straightforward installation, low equipment costs, and real-time monitoring capabilities, chemical sensor-based fire detection systems are widely employed in sectors such as aviation, naval and maritime operations, mining, and indoor buildings [5,6,12,13,14,15,16].
Despite these advantages, chemical sensor-based fire detection systems have some notable disadvantages. For example, chemical sensors can respond to a wide variety of volatiles beyond combustion products, necessitating complex multivariate data processing techniques to maintain high sensitivity and minimize false alarms [6]. Although low-accuracy sensors could be feasible, they might not be as effective in diverse fire scenarios, requiring further research and validation under different conditions [10]. Additionally, the effectiveness of chemical sensors can be limited by their detection range, as observed in fire-fighting robots where sensors may fail to detect fires beyond a certain distance [17]. Traditional fire detectors based on early chemical sensor systems have primarily processed a single parameter. For example, current commercialized products predominantly use single-variable sensing methods, such as detecting smoke via carbon monoxide or optical sensors [18,19,20]. However, these single-parameter detection systems frequently encounter malfunctions due to their operational simplicity, with a notably high false alarm rate being a significant drawback [21,22]. Consequently, research and development have focused on fire detection methods that employ multiple sensors rather than relying on single-parameter sensing methods. This is achieved either by incorporating several independent sensors in complex configurations or by establishing sensor networks [6,8,9,15,16].
Although chemical sensor-based fire detectors offer significant usability advantages and are widely commercialized, challenges remain in terms of achieving accurate fire detection and assessment. The primary issue that complicates chemical detection for sensors is the diverse nature of fire types, which can vary depending on the ignition material. According to Bukowski et al. [23], chemical sensor-based fire detectors must classify three main types of fires: flaming fires, smoldering fires, and fires associated with heating or cooking activities. Flaming fires produce visible high-temperature sparks and cause sudden temperature changes. In contrast, smoldering fires and those associated with heating or cooking generate substantial smoke and soot but produce less heat, resulting in a more gradual temperature increase. Additionally, while the combustion products, thermal decomposition substances, and fine particles from these fire types are generally similar, there are some distinct differences. This variability in ignition materials (including combustion substances and thermal decomposition products) complicates the fine-tuning of chemical sensor reactions or sensitivity, increasing the complexity of fire detection.
Despite the advantages of existing fire detection methods, such as their ability to detect visible signs of fire, they are limited by their sensitivity and delay in detection. Smoke-based detection systems often trigger alarms only after the fire has escalated significantly, and image-based methods rely heavily on environmental factors like visibility. These methods also struggle with a high rate of false positives in certain settings, especially in industrial or kitchen environments where smoke or heat can be present without an actual fire. This study aims to address these limitations by employing deep learning models that can process multidimensional sensor data to detect fire scenarios with higher accuracy and fewer false alarms.

2.2. Fire Detection Algorithm Based on Chemical Sensor System

To determine a fire situation accurately using sensor data, a robust fire detection algorithm is essential [24]. The most commonly used approach is threshold-based fire detection, which relies on predefined sensor values [5,13,15]. In this algorithm, a fire is detected when the indicator value for a specific combustible material or fire situation exceeds a set threshold. Although this method is straightforward to implement when fire scenarios and engineering knowledge are established, it has significant limitations. For example, it primarily considers recent sensor measurements and disregards long-term sensor data, resulting in higher false alarm rates due to signal variations or noise. Additionally, it is challenging to establish predefined sensor thresholds for all potential fire scenarios, meaning this approach is unsuitable for detecting various types of fire. Other studies have proposed multivariate statistical analysis methods for fire surveillance that consider extended periods of historical sensor data [25,26]. However, the effectiveness of these multivariate approaches has been questioned due to challenges such as dimensionality reduction and real-time data processing that have arisen during data analysis [24,27,28].
With recent advances in machine learning and deep learning, a significant amount of research has been focused on developing chemical sensor-based fire detection algorithms that employ artificial intelligence (AI) technology. For instance, Kim et al. [24] employed locally weighted projection regression (LWPR), which is a machine learning technique within locally weighted learning, to enhance the detection accuracy of temperature, carbon dioxide, and smoke changes identified by multiple sensors. Wang et al. [29] developed a fire monitoring model that uses a combination of probabilistic neural network (PNN), radial basis function (RBF), and backpropagation (BP) techniques. Similarly, Zheng et al. [30] employed a fuzzy radial basis function NN (Fuzzy RBFNN) to train a fire detection classification model.
Baek et al. [8] introduced a novel approach for fire detection that leverages machine learning and optimization technologies to monitor various types of fires using data from multi-channel fire sensor signals. The primary algorithm employed in this study is the support vector machine-dynamic time warping kernel (SVM-DTWK) function. This research focused on enhancing existing fire detectors by developing a new fire monitoring framework that was capable of identifying fires accurately. Another study by Baek et al. [31] introduced an algorithm that could enhance the mapping of target points in sensor signals to closer points by capturing complex temporal information and using nonlinear temporal alignment to improve similarity within the same class of sensor signals. The experimental results using real fire sensor data demonstrated that this proposed system is highly effective for early fire detection, achieving promising performance in terms of early detection accuracy and a low false alarm rate. In their latest study, Baek et al. [9] introduced a real-time automatic fire detection algorithm that uses the multi-resolution properties of the wavelet transform. This algorithm employs a novel feature selection technique to identify optimal features for distinguishing between normal and fire conditions and constructs multiple detectors sensitive to various fire types without prior knowledge. The experimental results indicated that the wavelet-based algorithm could achieve significantly lower mean and median false alarm rates compared to other methods.
Zhang et al. [32] conducted a series of small-scale fire experiments using model compartments fabricated from galvanized steel and fire-resistant glass to observe fire behavior and smoke movement under various ventilation conditions and fuel loads. They subsequently compiled an experimental database from these observations. Using these data, the researchers then employed a long short-term memory (LSTM) NN to predict fire occurrence and flashover based on historical temperature data. The study demonstrated that the LSTM model could maintain high accuracy in predicting temperature rise and flashover probability, even with fuel loads smaller than those used in the training set. Li [3] introduced a novel approach for fire monitoring by employing a multi-sensor data fusion algorithm based on artificial NNs (ANNs). The aim of this method was to enhance the accuracy and reliability of fire detection systems by integrating data from various sensors, including smoke, temperature, and gas detectors. The goal was to provide a more comprehensive and precise assessment of fire conditions to reduce false alarms and missed detections. Özyurt [33] burned a range of materials, including di-ethyl-hexyl-sebacate (DEHS), polyalphaolefin (PAO), paraffin, wood, cotton, n-heptane, polyurethane, tobacco, printed circuit boards (PCBs), sunflower oil, cement powder, gypsum powder, and textiles, and collected fire sensor data for each. Using these data, various deep learning architectures were applied, including DENSE (dense NN), convolutional NN (CNN), LSTM, CNN–LSTM, and CNN–LSTM–DENSE, to classify fire event types and particle types. The results demonstrated that the CNN–LSTM–DENSE model exhibited the best performance in terms of classification accuracy.
The reviewed literature indicated that most chemical sensor-based fire detection algorithms rely on machine learning, with relatively few studies exploring the application of recently developed time series-based deep learning models. This highlights a research gap that necessitates the investigation of fire detection algorithms that employ advanced deep learning techniques to determine the extent of improvement in reliability and accuracy. Given these considerations, the aim of this study was to fill this gap by proposing a robust fire detection algorithm based on a time series-based deep learning model, developing a fire situation classification framework, and thoroughly evaluating its performance.
In summary, traditional fire detection methods face challenges in terms of delayed detection, high false alarm rates, and the inability to detect certain types of fires, such as smoldering or chemical fires. These limitations make the need for advanced methods like deep learning crucial, as they can process large amounts of data in real time and make more accurate classifications across different fire scenarios.

3. Materials and Methods

3.1. Data Generation and Collection

As mentioned previously, the objective of this study was to develop an effective and reliable fire detection model through deep learning-based fire situation classification. To achieve this, it is essential to differentiate between blazing fires, fires that burn with soot and smoke, and fires associated with heating and cooking, as outlined in the study by Bukowski et al. [23]. Accordingly, the core of this research was to construct a deep learning model that was capable of classifying these three fire scenarios. However, these fire situations are somewhat abstract and pose challenges for use as labels in supervised deep learning. Therefore, to establish clearer classification standards, the guidelines from the NFPA 10 standard [34] by the U.S. NFPA were incorporated in this study (Table 1). These standards differentiate fire suppression methods and types of fire extinguishers based on the substance causing ignition, with fire grades managed accordingly. It should be noted that although there are slight international variations, most standards are consistent. This study focused on building a deep learning model for early fire detection, excluding flammable or explosive fire situations. Consequently, the study target classes were A and K fires from the NFPA classification, excluding classes B, C, and D, which represent special fire situations. The fire simulations were designed by drawing on the three fire-related situations described by Bukowski et al. [23] and the flame and nuisance labels used in Özyurt’s study [33]. The scenarios and corresponding fire experiments are detailed in Table 2.
As illustrated in Table 2, authentic combustible substances were employed as specimens for creating a deep learning-driven fire detection algorithm in this study, with the aim of augmenting its pragmatic relevance and functionality. Instances of fires accompanied by flames were denoted as “flaming”, whereas circumstances prone to be misidentified by fire sensors were classified as “nuisance” and “heating”. Furthermore, data obtained at ambient temperatures were designated as “normal” for comparative purposes.
Fire scenario experiments were conducted using a room corner tester at the Advanced Convergence Materials Center in Busan Technopark, which is a medium-sized apparatus designed for evaluating the heat release rate (Figure 1). The room corner tester is equipped with an exhaust system that extracts smoke through a collector when the smoke concentration is high. Since smoke is one of the key data points collected by the sensors, the experiments were carefully designed to prevent experimental errors that could arise from controlling smoke generation. Data were gathered using a manufactured multidimensional integrated sensor board that could detect 11 parameters, including temperature, humidity, O2, CO2, CO, dust (PM 10, 2.5, and 1.0), and smoke (red, green, and infrared). Comprehensive specifications for each sensor are provided in Figure 2. However, all the infrared smoke sensor values were consistently recorded as zero. Hence, they were excluded from data collection and analysis.
Before initiating the fire occurrence experiments for each scenario, steady-state data were collected for 1 min. Subsequently, data simulating a fire situation through the combustion or heating of a specific material were gathered for 10 min. The sensor data collection interval was 1 s, with 10 parameter values simultaneously recorded every 1 s. This procedure was repeated 20 times, and the data collection method is detailed in Table 3.

3.2. Feature Engineering to Build Deep Learning Models for Fire Scenario Classification

Before developing a deep learning model, if the scales of the input data differ significantly, the model could preferentially learn specific features while the contributions of other features might diminish to zero. This issue can undermine the effectiveness of training multidimensional data. Therefore, it is crucial to perform data scaling or normalization to eliminate these relative scale differences when features are numerically represented. In this study, four types of scaler packages (Standard scaler, MinMax scaler, Robust scaler, MaxAbs scaler) from Python’s Scikit-learn library [35] were applied directly to the sensing data to train the model and determine the optimal scaler. Despite observing no significant differences in data values or model performance with each scaler, the Robust scaler was selected to minimize the impact of outliers [36]. The Robust scaler standardizes each component $x_i$ by transforming it into $x_i'$, as described in Equation (1). In other words, this transformation centers the data around the median (second quartile, $Q_2(x)$) and then scales it by the interquartile range, the difference between the third quartile $Q_3(x)$ and the first quartile $Q_1(x)$ of $x$.

$$x_i' = \frac{x_i - Q_2(x)}{Q_3(x) - Q_1(x)} \qquad (1)$$
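As a concrete illustration, the following minimal Python sketch applies Scikit-learn’s RobustScaler, which implements exactly the median/IQR transformation of Equation (1); the array shape is a placeholder standing in for the actual sensor data.

```python
# Minimal sketch of the robust scaling step, assuming the 10 sensor
# channels are the columns of a NumPy array (placeholder data).
import numpy as np
from sklearn.preprocessing import RobustScaler

X_train = np.random.rand(5664, 10)  # placeholder for the training sensor data

# RobustScaler centers on the median (Q2) and scales by the IQR (Q3 - Q1),
# as in Equation (1), limiting the influence of outliers.
scaler = RobustScaler()
X_train_scaled = scaler.fit_transform(X_train)
```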
To input fire detection sensing data into a deep learning model, it is crucial to determine the window size (the time range of multidimensional sensing data collected in a time series) and the slide size, which indicates the interval at which the window moves. The window size defines the data input at each instance, while the slide size specifies the frequency of data inputs (Figure 3). In this study, we initially evaluated the performance of a simple LSTM model with various window and slide sizes and found that a window size of 10 and a slide size of 5 were most effective (Figure 4). By applying the specified window size and slide, the total dataset for learning comprised 9440 entries. The data were then shuffled and partitioned into training (60%), validation (24%), and test (16%) datasets to achieve comprehensive training and evaluation. Each dataset was split by maintaining the ratio of classes. Accordingly, the number of training, validation, and test datasets were 5664, 2265, and 1511, respectively.
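The windowing step can be sketched as follows; this helper is a hypothetical reconstruction of the procedure in Figure 3 (window size 10, slide 5), not the authors’ code, and the stream length and labels are illustrative.

```python
import numpy as np

def make_windows(data, labels, window_size=10, slide=5):
    """Slice a (T, n_features) sensor stream into overlapping windows.

    Returns arrays of shape (n_windows, window_size, n_features) and
    (n_windows,), labeling each window with the label of its last step.
    """
    X, y = [], []
    for start in range(0, len(data) - window_size + 1, slide):
        X.append(data[start:start + window_size])
        y.append(labels[start + window_size - 1])
    return np.array(X), np.array(y)

# Example: 660 s of one run (60 s steady state + 600 s scenario), 10 sensors.
stream = np.random.rand(660, 10)
stream_labels = np.array([0] * 60 + [1] * 600)  # 0 = normal, 1 = flaming (illustrative)
X, y = make_windows(stream, stream_labels)      # X.shape == (131, 10, 10)
```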

3.3. Building Diverse Deep Learning Models for Fire Scenario Classification

The fire sensing data collected in this study were organized in chronological order. Consequently, to build a model for classifying fire scenarios, various deep learning models that are effective in learning sequential data and performing classification tasks based on time series data were constructed. The performance of each model was then compared using the same dataset.

3.3.1. LSTM

LSTM is engineered to mitigate the limitations of basic recurrent NN (RNN) in managing long-term dependencies [37]. Unlike the relatively simple structure of RNN, which uses a tanh (hyperbolic tangent) activation function to process previous stage information and inputs, LSTMs employ a more complex architecture consisting of four layers for state transitions.
Central to the LSTM architecture is the cell state, which is regulated by three distinct gates: forget, input, and output. The forget gate uses a sigmoid function to generate a value between 0 and 1, then determines whether to retain or discard information from the cell state based on the previous state value $C_{t-1}$. The input gate then decides which new information should be stored in the cell state, using a tanh function to generate candidate information vectors and a sigmoid function to select relevant information. The cell state is updated by combining the outputs of the forget and input gates. Finally, the output gate applies a sigmoid function to the input value to determine the final output from the cell state, which is modulated by the tanh-transformed cell state value to produce the necessary output (Figure 5).
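As a minimal sketch, a Keras LSTM classifier for the four fire classes could be assembled as follows; the layer width and optimizer here are illustrative assumptions rather than the exact configuration in Table 4.

```python
# Minimal Keras sketch of an LSTM classifier over 10 s windows of
# 10 sensor channels; sizes are assumptions, not the paper's settings.
from tensorflow.keras import layers, models

def build_lstm(window_size=10, n_features=10, n_classes=4):
    model = models.Sequential([
        layers.Input(shape=(window_size, n_features)),
        layers.LSTM(64),  # gated cell state as described above (Figure 5)
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```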

3.3.2. Gated Recurrent Unit

The gated recurrent unit (GRU) reduces the computational complexity of updating the hidden state while addressing the same long-term dependency issues as LSTM [38]. Essentially, the GRU offers a simplified structure compared to the complex architecture of LSTM while achieving comparable performance. Unlike LSTM, which employs three gates (output, input, and forget), the GRU employs only two gates: update and reset (Figure 6). The update gate functions similarly to both the forget and input gates of LSTM. Specifically, $z_t$ determines the extent to which previous information is retained, while $1 - z_t$ determines the amount of new information incorporated, effectively balancing past and current information. Consequently, $z_t$ acts as the forget gate, and $1 - z_t$ acts as the input gate. In contrast, the reset gate selects the portion of previous information to be used in generating the next state. Although it resembles the output gate of LSTM, the reset gate in the GRU processes information differently by pre-selecting from previous data rather than from the final output.
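The gate logic can be made concrete with a single NumPy GRU step, sketched below; the weight matrices are random placeholders, and the state update follows the convention used in this section ($z_t$ retains past information, $1 - z_t$ admits new information).

```python
# NumPy sketch of one GRU step with explicit update (z_t) and reset (r_t)
# gates; all weights are random placeholders for illustration.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)             # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)             # reset gate
    h_cand = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev))  # candidate state
    # z_t keeps previous information; (1 - z_t) admits new information
    return z_t * h_prev + (1.0 - z_t) * h_cand

rng = np.random.default_rng(0)
x_t, h_prev = rng.standard_normal(10), rng.standard_normal(8)
W = {k: rng.standard_normal((8, 10)) for k in "zrh"}
U = {k: rng.standard_normal((8, 8)) for k in "zrh"}
h_t = gru_step(x_t, h_prev, W["z"], U["z"], W["r"], U["r"], W["h"], U["h"])
```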

3.3.3. Bidirectional LSTM (Bi-LSTM)

Unlike traditional RNN or LSTM, which processes input sequences in a single direction (either forward or backward), Bi-LSTM processes sequences simultaneously in both directions. Bi-LSTM comprises two LSTM layers: one that handles the sequence in the forward direction and another in the backward direction, with each maintaining its own hidden state and memory cells [39]. During the forward pass, the input sequence is fed into the forward LSTM layer from the first to the last time step. Here, the forward LSTM computes and updates the hidden state and memory cell based on the current input and the previously hidden state and memory cell. Concurrently, the input sequence is fed into the backward LSTM layer in reverse order (from the last to the first time step), with the backward LSTM similarly computing and updating its hidden state and memory cell. Once both passes are completed, the hidden states of the forward and backward LSTM layers at each time step are combined, typically by concatenation or through another transformation method (Figure 7).
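In Keras, this forward/backward pairing is obtained with the Bidirectional wrapper, as in the sketch below; the layer size is an illustrative assumption, and concatenation is used to merge the two directions as described above.

```python
# Minimal Keras sketch of a Bi-LSTM classifier: one LSTM runs forward,
# one backward, and their hidden states are concatenated per time step.
from tensorflow.keras import layers, models

def build_bilstm(window_size=10, n_features=10, n_classes=4):
    model = models.Sequential([
        layers.Input(shape=(window_size, n_features)),
        layers.Bidirectional(layers.LSTM(64), merge_mode="concat"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```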

3.3.4. LSTM-Fully Convolutional Network (LSTM-FCN)

The LSTM-FCN technique integrates an LSTM network with a fully convolutional network (FCN) built from temporal convolutions for time series classification. As illustrated in Figure 8, the FCN acts as a feature extractor, followed by a global average pooling layer to reduce the number of parameters in the classification model [40].

The LSTM block incorporates dropout and augments the fully convolutional block. Each convolutional layer is followed by batch normalization and ReLU activation layers, with a dimension shuffle layer preceding the LSTM. Both the FCN and LSTM components of the LSTM-FCN receive the same input data, although they process the data differently. The fully convolutional block treats the input as a univariate time series of length $T$, that is, a sequence of $T$ time steps. In contrast, the LSTM block treats the same data as a multivariate time series with a single time step and $T$ variables. This dimension shuffle process is essential for preventing rapid overfitting in the LSTM block, which would otherwise hinder the learning of long-term dependencies and reduce performance. Additionally, the dimension shuffle enhances the network’s learning speed.
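A sketch of this topology is given below; the filter counts (128-256-128), kernel sizes, LSTM width, and dropout rate follow the original LSTM-FCN paper and are assumptions here, with a Permute layer providing the dimension shuffle.

```python
# Sketch of an LSTM-FCN: the FCN branch reads the window as a sequence,
# while the LSTM branch sees it (via Permute) as one time step with
# n_features-many "variables". Hyperparameters are assumptions.
from tensorflow.keras import layers, models

def build_lstm_fcn(window_size=10, n_features=10, n_classes=4):
    inp = layers.Input(shape=(window_size, n_features))

    # FCN branch: Conv1D -> BatchNorm -> ReLU blocks, then global pooling
    x = inp
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        x = layers.Conv1D(filters, kernel, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling1D()(x)

    # LSTM branch: dimension shuffle, then a small LSTM with heavy dropout
    y = layers.Permute((2, 1))(inp)   # (features, time): the dimension shuffle
    y = layers.LSTM(8)(y)
    y = layers.Dropout(0.8)(y)

    out = layers.Dense(n_classes, activation="softmax")(
        layers.concatenate([x, y]))
    return models.Model(inp, out)
```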

3.3.5. InceptionTime

InceptionTime is a deep learning model designed specifically for time series classification [41]. It was inspired by the inception architecture [42] used in computer vision and then adapted to manage temporal data.
The architecture of the InceptionTime model is displayed in Figure 9, and its main features and components are as follows:
  • Inception Modules: Drawing from the original inception architecture, InceptionTime employs modules that perform multiple convolutional operations with varying kernel sizes in parallel, enabling the model to detect patterns at multiple scales and resolutions in time series data.
  • Parallel Convolutions: Each inception module applies several parallel convolutional filters with different kernel sizes (e.g., 1, 3, and 5) to the input data, capturing features at various granularities, which is essential for understanding complex temporal dependencies.
  • MaxPooling and Bottleneck Layers: Inception modules often include max-pooling layers to reduce spatial dimensions and bottleneck layers (1 × 1 convolutions) to decrease feature map dimensionality, reducing computational costs and preventing overfitting.
  • Residual Connections: To enhance deep network training, InceptionTime integrates residual connections, which alleviate the vanishing gradient problem by allowing gradients to propagate more effectively through the network.
  • Stacking of Inception Modules: Multiple inception modules are stacked to build a deep network, facilitating the learning of hierarchical representations of time series data.
  • Global Average Pooling: Following the inception modules, a global average pooling layer is applied to average each feature map into a single value, aiding in feature generalization and parameter reduction.
  • Fully Connected Layer: The output of the global average pooling layer is fed into a fully connected layer followed by a softmax activation function for classification.
  • Training and Optimization: The model utilizes backpropagation for training and optimization techniques, such as Adam or RMSprop. Data augmentation and regularization methods (e.g., dropout) are also employed to enhance generalization.
In Figure 9, the different filter sizes were distinguished by color. The InceptionTime architecture is adept at capturing complex patterns in time series data, making it well-suited for diverse time series classification tasks. Moreover, its capacity to manage different temporal resolutions simultaneously and the incorporation of residual connections contribute to its robust performance on various benchmark datasets.
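As a sketch, one inception module could be written as follows; the kernel sizes and filter counts are illustrative assumptions, not the exact configuration used in this study.

```python
# Sketch of one InceptionTime module: a 1x1 bottleneck, parallel
# convolutions with different kernel sizes, and a max-pooling path,
# concatenated along the channel axis. Sizes are assumptions.
from tensorflow.keras import layers

def inception_module(inp, n_filters=32, kernel_sizes=(9, 19, 39)):
    bottleneck = layers.Conv1D(n_filters, 1, padding="same",
                               use_bias=False)(inp)
    # Parallel convolutions capture patterns at several temporal scales
    branches = [layers.Conv1D(n_filters, k, padding="same",
                              use_bias=False)(bottleneck)
                for k in kernel_sizes]
    # Max-pooling path followed by a 1x1 convolution
    pool = layers.MaxPooling1D(3, strides=1, padding="same")(inp)
    branches.append(layers.Conv1D(n_filters, 1, padding="same",
                                  use_bias=False)(pool))
    x = layers.concatenate(branches)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)
```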

3.3.6. Transformer

The transformer model is a deep learning architecture that is very effective for sequence data, such as natural language processing. It was first introduced by Vaswani et al. [43] in 2017 and is based entirely on the attention mechanism. It is known to be easier to parallelize and can learn faster than previous RNN or LSTM-based models.
The architecture of the transformer model is displayed in Figure 10, and its main components and features are as follows:
  • Attention Mechanism: The core of the transformer model is the attention mechanism, particularly the self-attention mechanism. This allows each word in the input sequence to learn a richer representation by considering its relationships with all other words in the sequence, improving the model’s understanding of context.
  • Multi-Head Attention: By applying multiple attention mechanisms in parallel, the transformer processes information in various representation spaces, enabling the model to learn different contextual information simultaneously.
  • Positional Encoding: Since the transformer lacks inherent order information for the input sequence, positional encoding is used to add positional information to each word, allowing the model to recognize the sequence order.
  • Encoder: This processes the input sequence and transforms it into a high-dimensional vector representation through multiple layers of self-attention and feed-forward NNs.
  • Decoder: This generates the target sequence by using the encoder’s output along with its own self-attention to predict the next word in the sequence.
  • Feed-Forward Network: Each encoder and decoder layer includes a position-wise feed-forward NN, which nonlinearly transforms the attention output at each position.
  • Residual Connections and Layer Normalization: Applied after each sublayer (attention and feed-forward networks), residual connections and layer normalization stabilize the learning process and support deeper network structures.
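These components can be combined into a single encoder block, as sketched below for the sensor-window input; the head count, key dimension, and feed-forward width are illustrative assumptions rather than the study’s exact configuration.

```python
# Sketch of one transformer encoder block: multi-head self-attention and
# a position-wise feed-forward network, each wrapped with a residual
# connection and layer normalization. Dimensions are assumptions.
from tensorflow.keras import layers

def transformer_encoder_block(inp, n_heads=4, key_dim=16, ff_dim=128,
                              dropout=0.1):
    # Multi-head self-attention with residual connection + LayerNorm
    attn = layers.MultiHeadAttention(num_heads=n_heads,
                                     key_dim=key_dim)(inp, inp)
    attn = layers.Dropout(dropout)(attn)
    x = layers.LayerNormalization()(inp + attn)

    # Position-wise feed-forward network with residual + LayerNorm
    ff = layers.Dense(ff_dim, activation="relu")(x)
    ff = layers.Dense(inp.shape[-1])(ff)
    ff = layers.Dropout(dropout)(ff)
    return layers.LayerNormalization()(x + ff)
```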

3.3.7. Model Configurations

As discussed previously, six models were used for training, validation, and testing. To ensure consistency across all models, identical hyperparameter values were applied uniformly, as detailed in Table 4. Scaling was conducted using the training data, and the parameters obtained during this process were subsequently applied to both the validation and test datasets.
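The scaling protocol described here, fitting the scaler on the training split only and reusing its parameters elsewhere, can be sketched as follows; the split sizes mirror Section 3.2, and the arrays are placeholders.

```python
# Sketch: fit the scaler on the training split only, then reuse its
# learned median/IQR parameters for the validation and test splits.
import numpy as np
from sklearn.preprocessing import RobustScaler

X_train = np.random.rand(5664, 10)   # placeholder splits
X_val   = np.random.rand(2265, 10)
X_test  = np.random.rand(1511, 10)

scaler = RobustScaler().fit(X_train)  # parameters come from training data only
X_train_s = scaler.transform(X_train)
X_val_s   = scaler.transform(X_val)   # same parameters reused, no refitting
X_test_s  = scaler.transform(X_test)
```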

4. Results

4.1. Fire Scenario Classification Performance Evaluation for Each Model

4.1.1. Loss and Accuracy for Each Model

Based on the six models constructed, training with a batch size of 32 over 100 epochs resulted in a clear downward trend in both train and validation losses. Correspondingly, train accuracy and validation accuracy for each model exhibited an upward trend, indicating proper training, as illustrated in Figure 11. Table 5 presents the highest train and validation accuracies, in addition to the lowest train and validation losses for each model.
Based solely on loss values and accuracy, all six models exhibited excellent performance. However, given the simplicity of the input data structure and the limited number of features, simpler models (such as LSTM and GRU) outperformed more complex models (such as InceptionTime and Transformer). Among the six models, the Bi-LSTM model demonstrated the best performance, achieving a validation loss of 0.0020 and a validation accuracy of 0.9995. The LSTM model performed comparably, matching this validation loss of 0.0020 and validation accuracy of 0.9995.
Numerous studies have corroborated the hypothesis that theoretically simple models often exhibit superior generalization capabilities compared to complex models when managing relatively straightforward data. For example, Hasanpour et al. [46] empirically demonstrated that a simple architecture (SimpleNet) outperformed more complex models (including VGGNet, ResNet, and GoogleNet), achieving high accuracy while maintaining efficiency in terms of memory and computation. This superior performance was evident on datasets such as CIFAR-10 and MNIST. Additionally, Grinsztajn et al. [47] examined why tree-based models continue to surpass deep learning models on tabular data, noting that simple multilayer perceptrons (MLPs) or ResNet architectures can outperform more intricate models. Bargagli Stoffi et al. [48] investigated the relationship between model simplicity and generalization from the perspective of statistical learning theory, suggesting that simpler models are advantageous in terms of avoiding overfitting and capturing the signal rather than the noise in the data more effectively.

4.1.2. Verification of Fire Scenario Classification Performance for Each Model through Evaluation Indicators

The confusion matrix results for evaluating the classification performance of each model are presented in Figure 12 and reveal that all six models exhibited high classification accuracy across four classes. These results allowed us to derive several performance metrics that quantify the classification efficacy of each model, as displayed in Equations (2)–(5).
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (2)$$

$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (3)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (4)$$

$$F_1\ \mathrm{score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (5)$$
True positive (TP) indicates the number of positive examples classified accurately. In contrast, true negative (TN) displays the number of negative examples classified accurately. False positive (FP) is the number of actual negative examples classified as positive. Finally, false negative (FN) is the number of actual positive examples classified as negative. Accuracy serves as an overarching metric of a model’s ability to correctly classify instances across all classes. Precision measures the alignment between the model’s predictions and the actual classes, reflecting the model’s reliability. Recall assesses the model’s capacity to identify true instances of each class correctly, indicating the model’s practical utility. The F1 score is calculated as the harmonic mean of precision and recall and provides a balanced measure of both metrics. Table 6 presents these classification performance indicators derived from the evaluation of each model used in this study.
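These indicators can be computed directly with Scikit-learn, as in the following sketch; the label arrays are placeholders, and macro averaging over the four classes is an assumption.

```python
# Sketch of the Equation (2)-(5) metrics from model predictions;
# macro averaging over the four classes is an assumption here.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = np.array([0, 1, 2, 3, 1, 0])   # placeholder labels
y_pred = np.array([0, 1, 2, 3, 0, 0])   # placeholder predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1 score :", f1_score(y_true, y_pred, average="macro"))
print(confusion_matrix(y_true, y_pred))  # matrix as visualized in Figure 12
```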
High values were observed across all performance indicators for nearly all the models. However, despite having the lowest loss value and highest accuracy during training and validation, the Bi-LSTM model exhibited relatively lower performance in terms of classification indicators compared to the other models. Conversely, the simplest model (LSTM) demonstrated exceptionally high values for all classification performance indicators.
Recall measures the rate at which the actual class is correctly identified and is also referred to as the sensitivity or true positive rate (TPR). In contrast, specificity denotes the rate at which actual negative instances are correctly identified as negative. The false positive rate (FPR), also known as fall-out, is calculated by subtracting the specificity from 1, as in Equations (6) and (7).
$$\mathrm{Specificity} = \frac{TN}{TN + FP} \qquad (6)$$

$$\mathrm{FPR}\ (\mathrm{Fall\text{-}out}) = 1 - \mathrm{Specificity} = \frac{FP}{FP + TN} \qquad (7)$$
The performance of a classification model can be evaluated using a receiver operating characteristic (ROC) curve, which plots the true and false positive rates on the y- and the x-axis, respectively. The area under the curve (AUC) value indicates the model’s classification performance, with values closer to 1 signifying superior performance. ROC-AUC is primarily used for binary classification models. However, since the LSTM-based fire situation classification model developed in this study involves four classes, separate ROC curves were generated, and the AUC values were calculated for each class.
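The per-class (one-vs-rest) ROC-AUC computation can be sketched as follows; the class names are those of this study, while the label and softmax-score arrays are placeholders.

```python
# Sketch of one-vs-rest ROC curves and AUC values for the four classes.
import numpy as np
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

classes = [0, 1, 2, 3]                  # normal, flaming, heating, nuisance
y_true = np.array([0, 1, 2, 3, 1, 0])   # placeholder labels
y_score = np.random.rand(6, 4)          # placeholder softmax outputs
y_score /= y_score.sum(axis=1, keepdims=True)

y_bin = label_binarize(y_true, classes=classes)
for i, name in enumerate(["normal", "flaming", "heating", "nuisance"]):
    fpr, tpr, _ = roc_curve(y_bin[:, i], y_score[:, i])  # one-vs-rest curve
    print(f"{name}: AUC = {auc(fpr, tpr):.2f}")
```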
As illustrated in Figure 13, the AUC values for each class across all models were 1.00 (or very close to 1.00), which was consistent with the other classification performance indices. This result confirmed that the various time-series classification models constructed in this study exhibit excellent classification performance.

4.2. Real-Time Classification Performance Evaluation of Fire Scenarios for Each Model

A key objective of this study was to verify the accuracy and speed of the constructed deep learning-based fire scenario classifier in the real-time classification of fire situations. To evaluate this aspect, the model was adapted to return a class for the current situation every second based on the weights of the six constructed models. Specifically, the real-time fire scenario classification model outputs one of four classes (normal, flaming, heating, or nuisance) every 1 s based on the sensor data collected for the 10 parameters over the preceding 10 s. This model tracks the changes in the parameter values over time and the probabilities associated with each class decision. The real-time classification performance was assessed through a new experiment conducted in a separate test environment, using Equations (8) and (9), which are adaptations of the metrics used by Baek et al. [8] to suit the model developed in this study.
$$\mathrm{FSTA} = \left|\,\mathrm{TFST} - \mathrm{EFST}\,\right| - 10 \qquad (8)$$

$$\mathrm{FAR}\ (\%) = \frac{\sum_{t=10}^{\mathrm{TFST}+9} I\!\left(\hat{y}_t \neq y_t\right)}{\mathrm{TFST} - 1} \times 100 \qquad (9)$$
The following section provides a detailed explanation of each index value used in the study. The first index is fire-starting time accuracy (FSTA), which is determined by calculating the absolute difference between the true fire-starting time (TFST), the actual start time of the fire simulation experiment, and the estimated fire-starting time (EFST) predicted by the model. It should be noted that there is an additional subtraction of 10 s to account for the model’s data collection lag. A smaller FSTA value indicates higher real-time classification performance.
The second index ensures that the model consistently recognizes the current situation as the default state (“normal”) until the fire experiment begins, without mistakenly identifying it as another state. In other words, the model must avoid false alarms, which occur when the model returns a class other than normal during an actual normal state. From the point of accumulating 10 s of data until TFST + 9 s, the model must continuously return only the normal value. The false alarm rate (FAR) is defined as the proportion of times the model predicts a class other than normal during this period. A lower FAR value indicates better real-time classification performance. Here, $I(\cdot)$ represents the indicator function, $y_t$ denotes the actual class at time $t$, and $\hat{y}_t$ represents the class predicted by the model at time $t$.
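Equations (8) and (9) can be computed as in the following sketch; the prediction stream, fire-starting times, and class codes are illustrative values, not experimental data.

```python
# Sketch of the FSTA and FAR indices (Equations (8) and (9)) applied to a
# stream of per-second class predictions; all values are illustrative.
import numpy as np

def fsta(tfst, efst, lag=10):
    """Absolute gap between true (TFST) and estimated (EFST) fire-starting
    times, minus the 10 s data-collection lag of the model."""
    return abs(tfst - efst) - lag

def far(y_pred, tfst, normal=0):
    """Share of non-'normal' predictions from t = 10 s to TFST + 9 s,
    i.e., false alarms during the pre-fire period, per Equation (9)."""
    window = y_pred[10:tfst + 10]                      # t = 10 ... TFST + 9
    return np.sum(window != normal) / (tfst - 1) * 100

preds = np.zeros(120, dtype=int)   # 120 s of predictions, 0 = normal
preds[72:] = 1                     # model flags "flaming" from t = 72 s
print(fsta(tfst=60, efst=72))      # -> 2
print(far(preds, tfst=60))         # -> 0.0 (no false alarms before the fire)
```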
The experimental design and results for the real-time classification performance test are detailed in Table 7. Most models successfully classified fire and other situations within 1 min, resulting in FAR values of 0.00%. In a related study, Baek et al. [8] reported an average FSTA value of 251.30 for fire, heating, and disturbance situations using SVM-DTWK, which is an independent real-time fire detection model based on machine learning. This result was a significant improvement over FSTA values obtained from traditional threshold sensor methods or probabilistic NNs. Compared to previous studies, the FSTA values presented in this study indicate superior performance for the real-time fire detection model.

5. Discussion

The findings from this research underscore the significant potential of advanced deep learning models in enhancing the accuracy and reliability of sensor-based fire detection systems. The comparative analysis of various deep learning architectures revealed that simpler models (such as LSTM and GRU) often outperformed more complex architectures (such as InceptionTime and Transformer). This improved performance can be attributed to the relatively straightforward nature of the sensor data, which benefits from the capacity of LSTM and GRU to effectively capture temporal dependencies without the need for extensive parameter tuning.
The high accuracy and low false alarm rates achieved by the LSTM and Bi-LSTM models are particularly noteworthy. These models demonstrated superior performance in both the training and validation phases, with a validation accuracy exceeding 99%. The ability to maintain such high accuracy while processing real-time data highlights their robustness and applicability in practical scenarios. Furthermore, the real-time classification performance, which was evaluated through indices such as FSTA and FAR, confirmed that these models could reliably detect fire events with minimal delay and no false alarms. This would enhance their utility in emergency response systems. Despite these promising results, several challenges and limitations warrant further investigation. The robustness of the models under diverse fire scenarios and varying environmental conditions remains an area for future research. Additionally, the integration of these models into existing fire detection infrastructures requires consideration of computational efficiency and scalability, particularly in large-scale deployments. Addressing these challenges will be crucial for the widespread adoption of deep learning-based fire detection systems. Moreover, although the study’s reliance on controlled fire scenarios was necessary for initial validation, this might not fully capture the complexity of real-world fire events. Future work should aim to include a broader range of fire types and environmental conditions to improve the generalizability of the models. Additionally, exploring the integration of other sensor types (such as visual and thermal cameras) could further enhance the detection capabilities and robustness of the system.

6. Conclusions

This study demonstrated the efficacy of deep learning models, particularly LSTM and Bi-LSTM architectures, in the real-time classification of fire scenarios using multidimensional sensor data. The models achieved high classification accuracy and low false alarm rates, confirming their potential for practical application in intelligent fire detection systems. The findings suggest that deep learning can significantly improve the performance of sensor-based fire detection, providing early and reliable alerts that are crucial for mitigating fire damage and enhancing safety.
The superior performance of the LSTM and Bi-LSTM models was attributed to their ability to capture temporal dependencies in the sensor data effectively. This capability allows for the accurate classification of fire events and non-fire events, minimizing false alarms that can result in unnecessary evacuations and economic costs. The success of these models underscores the importance of selecting appropriate architectures that balance complexity and performance for specific application contexts.
The real-time classification capabilities demonstrated in this study are particularly relevant for critical applications where timely detection of fire events is essential. The models’ ability to detect fires within seconds of their occurrence, as evidenced by the low FSTA values, positions them as valuable tools for enhancing fire response efforts and reducing the impact of fires on life and property.
Furthermore, the potential of deep learning models to be integrated with Internet of Things (IoT) frameworks opens up new avenues for enhancing fire detection systems. IoT integration can facilitate real-time data collection and processing, enabling more responsive and adaptive fire detection mechanisms. The use of edge computing to process sensor data locally before sending them to a centralized system can also help in reducing response times and improving the efficiency of fire detection systems.
The deployment of deep learning models for fire detection represents a significant advancement in the field of fire safety. By leveraging the strengths of LSTM and Bi-LSTM architectures, the results of this study provide a foundation for developing robust, reliable, and efficient fire detection systems that can operate in real time. Future research should continue to explore and address the identified challenges, ensuring that these systems can be effectively implemented in diverse environments and integrated with existing fire safety infrastructures. With continued development and validation, deep learning-based fire detection systems hold promise for significantly advancing fire safety technology, ultimately contributing to safer living and working environments worldwide.
Moving forward, the integration of deep learning models in fire detection systems presents several promising research directions. One key area for future work is the incorporation of multi-modal data, such as visual, thermal, and chemical sensor inputs, to further enhance the accuracy and reliability of fire detection. Additionally, exploring the potential of federated learning could allow fire detection systems to be trained across multiple sites while maintaining data privacy, thus improving the system’s robustness without compromising security. Another important direction is improving the computational efficiency of these models so they can be implemented on edge devices, enabling faster real-time processing in low-power environments.
Furthermore, the application of deep learning models can be extended to a broader range of real-world scenarios, such as forest fire detection, where early detection could help prevent large-scale damage. These models could also be used in smart city infrastructures, integrated with IoT networks to provide continuous, real-time monitoring across large urban areas. By leveraging cloud computing and AI-driven automation, future intelligent fire detection systems could revolutionize emergency response strategies, enabling faster and more accurate interventions.
Expanding the application of these deep learning models beyond controlled environments opens up possibilities for real-world scenarios such as large industrial complexes, where early detection of smoldering fires could significantly reduce the risk of explosions. In addition, the models can be tailored for use in transportation systems, like airplanes or ships, where fire detection must be fast and highly reliable to ensure passenger safety. By continuously improving the adaptability of these models to diverse settings, future research can address the unique challenges posed by different environments, ensuring that fire detection systems become more versatile and universally applicable.

Author Contributions

Conceptualization, Y.K., B.J. and Y.H.; formal analysis, Y.K. and Y.H.; methodology, Y.K. and Y.B.; supervision, Y.B.; validation, Y.B.; writing—original draft, Y.K., B.J. and Y.H.; writing—review and editing, Y.B.; funding acquisition, Y.H. and B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Safety Technology Commercialization Platform Construction Project (No. P0003951), funded by the Ministry of Trade, Industry and Energy of the Republic of Korea. This work was also supported by the Korea Planning & Evaluation Institute of Industrial Technology and funded by the Korean government (Ministry of Trade, Industry and Energy, Ministry of Science and ICT, Ministry of the Interior and Safety, National Fire Agency, Project Number: 1761002860).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. U.S. Fire Administration. Statistics—Fires, Deaths, Injuries and Dollar Loss. Available online: https://www.usfa.fema.gov/statistics/ (accessed on 23 July 2024).
  2. National Fire Agency of Korea. National Fire Agency Statistical Yearbook 2023; National Fire Agency of Korea: Sejong, Republic of Korea, 2023; pp. 89–102.
  3. Li, N. The construction of a fire monitoring system based on multi-sensor and neural network. Int. J. Inf. Technol. Syst. Approach 2023, 16, 1–12. [Google Scholar] [CrossRef]
  4. Lin, G.; Zhang, Y.; Xu, G.; Zhang, Q. Smoke detection on video sequences using 3D convolutional neural networks. Fire Technol. 2019, 55, 1827–1847. [Google Scholar] [CrossRef]
  5. Chen, S.J.; Hovde, D.C.; Peterson, K.A.; Marshall, A.W. Fire detection using smoke and gas sensors. Fire Saf. J. 2007, 42, 507–515. [Google Scholar] [CrossRef]
  6. Fonollosa, J.; Solórzano, A.; Marco, S. Chemical sensor systems and associated algorithms for fire detection: A review. Sensors 2018, 18, 553. [Google Scholar] [CrossRef]
  7. Gaur, A.; Singh, A.; Kumar, A.; Kulkarni, K.S.; Lala, S.; Kapoor, K.; Srivastava, V. Fire sensing technologies: A review. IEEE Sens. J. 2019, 19, 3191–3202. [Google Scholar] [CrossRef]
  8. Baek, J.; Alhindi, T.J.; Jeong, Y.S.; Jeong, M.K.; Seo, S.; Kang, J.; Choi, J.; Chung, H. Real-time fire detection algorithm based on support vector machine with dynamic time warping kernel function. Fire Technol. 2021, 57, 2929–2953. [Google Scholar] [CrossRef]
  9. Baek, J.; Alhindi, T.J.; Jeong, Y.S.; Jeong, M.K.; Seo, S.; Kang, J.; Shim, W.; Heo, Y. A wavelet-based real-time fire detection algorithm with multi-modeling framework. Expert Syst. Appl. 2023, 233, 120940. [Google Scholar] [CrossRef]
  10. Choi, J.M.; Park, K.W.; Jeong, J.G.; Lee, Y.K.; Kim, G.N.; Choi, D.C.; Ko, M.H. Experimental study on the availability of fire detection using gas sensors for air quality measurement. Fire Sci. Eng. 2021, 35, 41–47. [Google Scholar]
  11. Tripathi, N.; Obulesu, D.; Murugan, A.S.S.; Mittal, V.; Babu, B.R.; Sharma, S. IOT based surveillance system for fire and smoke detection. In Proceedings of the 2022 5th International Conference on Contemporary Computing and Informatics (IC3I 2022), Uttar Pradesh, India, 14–16 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1557–1563. [Google Scholar]
  12. Milke, J.A.; Hulcher, M.E.; Worrell, C.L.; Gottuk, D.T.; Williams, F.W. Investigation of multi-sensor algorithms for fire detection. Fire Technol. 2003, 29, 363–382. [Google Scholar] [CrossRef]
  13. Cestari, L.A.; Worrell, C.; Milke, J.A. Advanced fire detection algorithms using data from the home smoke detector project. Fire Saf. J. 2005, 40, 1–28. [Google Scholar] [CrossRef]
  14. Muduli, L.; Mishra, D.P.; Jana, P.K. Optimized fuzzy logic-based fire monitoring in underground coal mines: Binary particle swarm optimization approach. IEEE Syst. J. 2019, 14, 3039–3046. [Google Scholar] [CrossRef]
  15. Gottuk, D.T.; Peatross, M.J.; Roby, R.J.; Beyler, C.L. Advanced fire detection using multi-signature alarm algorithms. Fire Saf. J. 2002, 37, 381–394. [Google Scholar] [CrossRef]
  16. Elmas, C.; Sönmez, Y. A data fusion framework with novel hybrid algorithm for multi-agent Decision Support System for Forest Fire. Expert Syst. Appl. 2011, 38, 9225–9236. [Google Scholar] [CrossRef]
  17. Ramasubramanian, S.; Muthukumaraswamy, S.A.; Sasikala, A. Fire detection using artificial intelligence for fire-fighting robots. In Proceedings of the 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS 2020), Madurai, India, 13–15 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 180–185. [Google Scholar]
  18. Sivathanu, Y.R.; Tseng, L.K. Fire detection using time series analysis of source temperatures. Fire Saf. J. 1997, 29, 301–315. [Google Scholar] [CrossRef]
  19. Adib, M.; Eckstein, R.; Hernandez-Sosa, G.; Sommer, M.; Lemmer, U. SnO2 nanowire-based aerosol jet printed electronic nose as fire detector. IEEE Sens. J. 2017, 18, 494–500. [Google Scholar] [CrossRef]
  20. Li, J.; Yan, B.; Zhang, M.; Zhang, J.; Jin, B.; Wang, Y.; Wang, D. Long-range Raman distributed fiber temperature sensor with early warning model for fire detection and prevention. IEEE Sens. J. 2019, 19, 3711–3717. [Google Scholar] [CrossRef]
  21. Rütimann, L. Reducing False Alarms (A Study of selected European Countries); Technical Report; Siemens Switzerland Ltd.: Zug, Switzerland, 2014. [Google Scholar]
  22. Park, J.K.; Nam, K. Implementation of multiple sensor data fusion algorithm for fire detection system. J. Korea Soc. Comp. Inf. 2020, 25, 9–16. [Google Scholar]
  23. Bukowski, R.; Peacock, R.; Averill, J.; Cleary, T.; Bryner, N.; Reneke, P. Performance of Home Smoke Alarms, Analysis of the Response of Several Available Technologies in Residential Fire Settings; Technical Note; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2008.
  24. Kim, W.J.; Kim, B.J.; Chung, K.S. Machine learning based fire detection method in sensor in wireless sensor networks. In Proceedings of the Symposium of the Korean Institute of Communications and Information Sciences, Busan, Republic of Korea, 29–30 July 2011; pp. 998–999. [Google Scholar]
  25. McAvoy, T.J.; Milke, J.; Kunt, T.A. Using multivariate statistical methods to detect fires. Fire Technol. 1996, 32, 6–24. [Google Scholar] [CrossRef]
  26. JiJi, R.D.; Hammond, M.H.; Williams, F.W.; Rose-Pehrsson, S.L. Multivariate statistical process control for continuous monitoring of networked early warning fire detection (EWFD) systems. Sens. Actuators B-Chem. 2003, 93, 107–116. [Google Scholar] [CrossRef]
  27. Croux, C.; Ruiz-Gazen, A. High breakdown estimators for principal components: The projection-pursuit approach revisited. J. Multivar. Anal. 2005, 95, 206–226. [Google Scholar] [CrossRef]
  28. Ferraty, F.; Vieu, P. Nonparametric Functional Data Analysis: Theory and Practice; Springer Science & Business Media: New York, NY, USA, 2006. [Google Scholar]
  29. Wang, X.G.; Lo, S.M.; Zhang, H.P. Influence of feature extraction duration and step size on ANN based multisensor fire detection performance. Procedia Eng. 2013, 52, 413–421. [Google Scholar] [CrossRef]
  30. Zheng, D.; Wang, Y.; Wang, Y. Intelligent monitoring system for home based on FRBF neural network. Int. J. Smart Home 2015, 9, 207–218. [Google Scholar] [CrossRef]
  31. Baek, J.; Alhindi, T.J.; Jeong, Y.S.; Jeong, M.K.; Seo, S.; Kang, J.; Heo, Y. Intelligent multi-sensor detection system for monitoring indoor building fires. IEEE Sens. J. 2021, 21, 27982–27992. [Google Scholar] [CrossRef]
  32. Zhang, T.; Wang, Z.; Wong, H.Y.; Tam, W.C.; Huang, X.; Xiao, F. Real-time forecast of compartment fire and flashover based on deep learning. Fire Saf. J. 2022, 130, 103579. [Google Scholar] [CrossRef]
  33. Özyurt, O. Efficient detection of different fire scenarios or nuisance incidents using deep learning methods. J. Build. Eng. 2024, 94, 109898. [Google Scholar] [CrossRef]
  34. NFPA. NFPA 10 Standard for Portable Fire Extinguishers 2022; NFPA (National Fire Protection Association): Quincy, MA, USA, 2022. [Google Scholar]
  35. Scikit-Learn. Available online: https://scikit-learn.org/stable/index.html (accessed on 23 July 2024).
  36. de Amorim, L.B.; Cavalcanti, G.D.; Cruz, R.M. The choice of scaling technique matters for classification performance. Appl. Soft Comput. 2023, 133, 109924. [Google Scholar] [CrossRef]
  37. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  38. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar]
  39. Schuster, M.; Paliwal, K.K. Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681. [Google Scholar] [CrossRef]
  40. Karim, F.; Majumdar, S.; Darabi, H.; Chen, S. LSTM fully convolutional networks for time series classification. IEEE Access 2017, 6, 1662–1669. [Google Scholar] [CrossRef]
  41. Ismail Fawaz, H.; Lucas, B.; Forestier, G.; Pelletier, C.; Schmidt, D.F.; Weber, J.; Webb, G.I.; Idoumghar, L.; Muller, P.; Petitjean, F. InceptionTime: Finding AlexNet for time series classification. Data Min. Knowl. Discov. 2020, 34, 1936–1962. [Google Scholar] [CrossRef]
  42. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (2015 CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  43. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  44. Transformer Architecture Explained (Medium). Available online: https://medium.com/@amanatulla1606/transformer-architecture-explained-2c49e2257b4c (accessed on 30 July 2024).
  45. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  46. Hasanpour, S.H.; Rouhani, M.; Fayyaz, M.; Sabokrou, M. Let’s keep it simple, using simple architectures to outperform deeper and more complex architectures. arXiv 2016, arXiv:1608.06037. [Google Scholar]
  47. Grinsztajn, L.; Oyallon, E.; Varoquaux, G. Why do tree-based models still outperform deep learning on typical tabular data? Adv. Neural Inf. Process. Syst. 2022, 35, 507–520. [Google Scholar]
  48. Bargagli Stoffi, F.J.; Cevolani, G.; Gnecco, G. Simple models in complex worlds: Occam’s razor and statistical learning theory. Mind. Mach. 2022, 32, 13–42. [Google Scholar] [CrossRef]
Figure 1. Experiments to collect data for each fire scenario: (a) room corner tester for fire test; (b) fire experiment and sensing data collection process.
Figure 2. Product specifications of sensors employed in the fire experiments.
Figure 3. Time series window size and slide intervals for building deep learning models to analyze fire sensing data.
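The windowing scheme in Figure 3 slices each time-ordered sensing stream into fixed-length, overlapping segments that become the model inputs. A minimal sketch follows; the array layout, the window size of 30 samples, and the slide of 5 are illustrative assumptions, not the study's tuned values.

```python
import numpy as np

def make_windows(series: np.ndarray, labels: np.ndarray,
                 window_size: int, slide: int):
    """Segment a multivariate sensor stream into overlapping windows.

    series: (T, n_sensors) time-ordered readings; labels: (T,) per-step
    scenario labels. Returns windowed inputs and one label per window
    (here taken from the window's final timestep).
    """
    X, y = [], []
    for start in range(0, len(series) - window_size + 1, slide):
        end = start + window_size
        X.append(series[start:end])
        y.append(labels[end - 1])
    return np.stack(X), np.array(y)

# Example: a 600-step stream from 5 sensors, 30-sample windows sliding by 5
stream = np.random.rand(600, 5)
scenario = np.zeros(600, dtype=int)
X, y = make_windows(stream, scenario, window_size=30, slide=5)
print(X.shape)  # (115, 30, 5)
```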
Figure 4. Hyperparameter tuning: (a) model accuracy changes with window size; (b) model accuracy changes with slide interval.
Figure 5. Architecture of LSTM cell.
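For reference, the LSTM cell of Figure 5 gates its cell state \(c_t\) and hidden state \(h_t\) as follows (the standard formulation of [37]; \(\sigma\) is the logistic sigmoid and \(\odot\) the element-wise product):

\[
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \qquad i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i),\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), \qquad \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad h_t = o_t \odot \tanh(c_t).
\end{aligned}
\]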
Figure 6. Architecture of GRU cell.
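The GRU cell of Figure 6 merges the forget and input gates into a single update gate \(z_t\) (formulation following [38]; gate-naming conventions vary slightly across references):

\[
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z), \qquad r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r),\\
\tilde{h}_t &= \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h), \qquad h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t.
\end{aligned}
\]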
Figure 7. The architecture of Bi-LSTM.
Figure 8. The architecture of LSTM-FCN.
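The LSTM-FCN of Figure 8 runs a recurrent branch and a fully convolutional branch in parallel and concatenates them before the softmax layer. Below is a hedged Keras sketch in the spirit of [40]; the layer widths are the reference defaults, not the values tuned in this study, and the original univariate design also inserts a dimension shuffle before the LSTM.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_lstm_fcn(window_size: int, n_sensors: int, n_classes: int) -> Model:
    inp = layers.Input(shape=(window_size, n_sensors))

    # Recurrent branch: a small LSTM with heavy dropout, as in [40]
    x = layers.LSTM(8)(inp)
    x = layers.Dropout(0.8)(x)

    # Fully convolutional branch: three Conv1D blocks plus global pooling
    y = inp
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        y = layers.Conv1D(filters, kernel, padding="same")(y)
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
    y = layers.GlobalAveragePooling1D()(y)

    # Concatenate both branches and classify the fire scenario
    out = layers.Dense(n_classes, activation="softmax")(
        layers.concatenate([x, y]))
    return Model(inp, out)
```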
Figure 9. Architecture of InceptionTime [41].
Figure 10. Architecture of Transformer [44].
Figure 11. Learning results according to model: (a) train loss, (b) validation loss, (c) train accuracy, and (d) validation accuracy.
Figure 12. Confusion matrix plot of each model for fire scenario prediction: (a) LSTM, (b) GRU, (c) Bi-LSTM, (d) LSTM-FCN, (e) InceptionTime, and (f) Transformer.
Figure 13. ROC curve and AUC score for each fire scenario of each model: (a) LSTM, (b) GRU, (c) Bi-LSTM, (d) LSTM-FCN, (e) InceptionTime, and (f) Transformer.
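Figure 13's per-class curves follow the usual one-vs-rest construction: each class is binarized in turn and its ROC/AUC computed from the softmax scores. A minimal sketch with scikit-learn [35] follows; the function and variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

def per_class_auc(y_true, y_score, n_classes=4):
    """One-vs-rest AUC per class; y_score is (n_samples, n_classes)
    softmax probabilities, y_true holds integer class labels."""
    y_bin = label_binarize(y_true, classes=list(range(n_classes)))
    scores = {}
    for c in range(n_classes):
        fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
        scores[c] = auc(fpr, tpr)
    return scores
```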
Table 1. Classes of fire as per NFPA 10 Standard [34].

| Class | Materials | Examples |
| A | Common combustible materials | wood, paper, cloth, rubber, and many plastics |
| B | Flammable liquids and gases | gasoline, tars, petroleum gases, oils, solvents, alcohols, propane, and butane |
| C | Energized electrical equipment | computers, servers, motors, and appliances |
| D | Combustible metals | magnesium, titanium, zirconium, sodium, lithium, and potassium |
| K | Cooking appliance fires involving combustible cooking media | animal and vegetable fats/oils |
Table 2. Combustible substances and labels by fire scenario classifications.

| NFPA 10 Class | Fire Situation Classes [23,34] | Combustible Substances | Label |
| A | Flaming | wood, paper, tissue, cotton, rubber, sponge, plastic, resin | Flaming |
| K | Flaming | animal and vegetable fats/oils | Flaming |
| - | Smoldering | paper (smoldering), food (burning) | Nuisance |
| - | Heating (Cooking) | food (cooking), gas burner, water (boiling) | Heating |
| - | Nuisance | candle, cigarette, mosquito repellent | Nuisance |
| - | Normal | - | Normal |
Table 3. Design of experiments for collecting fire-sensing data.

| Burned Material | Label | Data Collection Time | Number of Repetitions |
| - | Normal | 600 | 20 |
| Wood | Flaming | 600 | 2 |
| Paper | Flaming | 600 | 2 |
| Tissue | Flaming | 600 | 2 |
| Cotton | Flaming | 600 | 2 |
| Rubber | Flaming | 600 | 2 |
| Sponge | Flaming | 600 | 2 |
| Plastic | Flaming | 600 | 2 |
| Resin | Flaming | 600 | 3 |
| Fats/Oils | Flaming | 600 | 3 |
| Food (cooking) | Heating | 600 | 7 |
| Gas burner | Heating | 600 | 7 |
| Water (boiling) | Heating | 600 | 6 |
| Candle | Nuisance | 600 | 4 |
| Cigarette | Nuisance | 600 | 4 |
| Mosquito repellent | Nuisance | 600 | 4 |
| Paper (smoldering) | Nuisance | 600 | 4 |
| Food (burning) | Nuisance | 600 | 4 |
Table 4. Model configurations.

| Parameters | Values |
| Loss function | Categorical cross-entropy ¹ |
| Optimizer | Adam ² |
| Evaluation metric | Validation accuracy |
| Epoch number | 100 |
| Batch size | 32 |
| Model checkpoint | Included |
| Save best only | True |
| Learning rate | 10⁻⁴ |

¹ A loss function for multi-class/multi-label classification. ² The Adaptive Moment Estimation optimizer [45].
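The Table 4 configuration maps directly onto a Keras training loop. A minimal sketch under the stated settings is given below; the checkpoint filename and the monitored metric are illustrative assumptions.

```python
import tensorflow as tf

def train(model, X_train, y_train, X_val, y_val):
    # Categorical cross-entropy loss and Adam at learning rate 10^-4 (Table 4)
    model.compile(
        loss="categorical_crossentropy",
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        metrics=["accuracy"],
    )
    # Model checkpoint with save-best-only, keyed to validation accuracy
    checkpoint = tf.keras.callbacks.ModelCheckpoint(
        "best_model.keras", monitor="val_accuracy", save_best_only=True
    )
    # 100 epochs at batch size 32, as listed in Table 4
    return model.fit(
        X_train, y_train,
        validation_data=(X_val, y_val),
        epochs=100, batch_size=32,
        callbacks=[checkpoint],
    )
```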
Table 5. Optimal learning results by model.

| Model | Train Loss | Validation Loss | Train Accuracy | Validation Accuracy |
| LSTM | 0.0045 | 0.0020 | 0.9986 | 0.9995 |
| GRU | 0.0166 | 0.0134 | 0.9940 | 0.9955 |
| Bi-LSTM | 0.0038 | 0.0020 | 0.9989 | 0.9995 |
| LSTM-FCN | 0.0115 | 0.0126 | 0.9962 | 0.9969 |
| InceptionTime | 0.1743 | 0.0999 | 0.9475 | 0.9805 |
| Transformer | 0.0610 | 0.0338 | 0.9832 | 0.9894 |
Table 6. Results of classification performance indices of each model for fire scenario classification. Accuracy is reported once per model, across all four classes.

| Model | Class | Precision | Recall | F1 Score | Accuracy |
| LSTM | Normal | 1.000 | 1.000 | 1.000 | 0.999 |
| | Flaming | 0.997 | 1.000 | 0.998 | |
| | Heating | 1.000 | 0.997 | 0.998 | |
| | Nuisance | 1.000 | 1.000 | 1.000 | |
| GRU | Normal | 1.000 | 1.000 | 1.000 | 0.997 |
| | Flaming | 1.000 | 0.994 | 0.997 | |
| | Heating | 0.992 | 0.997 | 0.994 | |
| | Nuisance | 0.997 | 0.997 | 0.997 | |
| Bi-LSTM | Normal | 1.000 | 0.921 | 0.958 | 0.979 |
| | Flaming | 0.992 | 1.000 | 0.996 | |
| | Heating | 0.925 | 0.992 | 0.957 | |
| | Nuisance | 1.000 | 1.000 | 1.000 | |
| LSTM-FCN | Normal | 1.000 | 1.000 | 1.000 | 0.995 |
| | Flaming | 0.986 | 1.000 | 0.993 | |
| | Heating | 1.000 | 0.981 | 0.991 | |
| | Nuisance | 0.994 | 1.000 | 0.997 | |
| InceptionTime | Normal | 0.958 | 0.978 | 0.968 | 0.970 |
| | Flaming | 0.991 | 0.957 | 0.974 | |
| | Heating | 0.983 | 0.957 | 0.971 | |
| | Nuisance | 0.948 | 0.986 | 0.967 | |
| Transformer | Normal | 0.961 | 1.000 | 0.980 | 0.982 |
| | Flaming | 1.000 | 0.981 | 0.991 | |
| | Heating | 0.991 | 0.962 | 0.977 | |
| | Nuisance | 0.978 | 0.986 | 0.982 | |
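The per-class indices in Table 6 are the standard precision, recall, and F1 definitions, and can be reproduced with scikit-learn [35]. A brief sketch follows; the integer label order for the four scenarios is an assumption.

```python
from sklearn.metrics import accuracy_score, classification_report

CLASS_NAMES = ["Normal", "Flaming", "Heating", "Nuisance"]

def report(y_true, y_pred):
    """Print overall accuracy plus per-class precision, recall, and F1."""
    print("Accuracy:", round(accuracy_score(y_true, y_pred), 3))
    print(classification_report(y_true, y_pred,
                                target_names=CLASS_NAMES, digits=3))

# Toy usage with integer labels 0-3 for the four scenarios
report([0, 1, 2, 3, 1, 2], [0, 1, 2, 3, 1, 1])
```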
Table 7. Real-time fire situation classification and performance evaluation of each model.

| Model | Class | TFST (s) | EFST (s) | FSTA (s) | FAR (%) |
| LSTM | Flaming | 300 | 329 | 29 | 0.00 |
| | Heating | 300 | 332 | 32 | 0.00 |
| | Nuisance | 300 | 321 | 21 | 0.00 |
| GRU | Flaming | 300 | 333 | 33 | 0.00 |
| | Heating | 300 | 342 | 42 | 0.00 |
| | Nuisance | 300 | 335 | 35 | 0.00 |
| Bi-LSTM | Flaming | 300 | 345 | 45 | 0.00 |
| | Heating | 300 | 360 | 60 | 0.00 |
| | Nuisance | 300 | 342 | 42 | 0.00 |
| LSTM-FCN | Flaming | 300 | 333 | 33 | 0.00 |
| | Heating | 300 | 331 | 31 | 0.00 |
| | Nuisance | 300 | 324 | 24 | 0.00 |
| InceptionTime | Flaming | 300 | 346 | 46 | 0.00 |
| | Heating | 300 | 365 | 65 | 0.00 |
| | Nuisance | 300 | 351 | 51 | 0.00 |
| Transformer | Flaming | 300 | 341 | 41 | 0.00 |
| | Heating | 300 | 332 | 32 | 0.00 |
| | Nuisance | 300 | 339 | 39 | 0.00 |
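Throughout Table 7, the FSTA column equals the difference between the estimated and true fire start times,

\[
\mathrm{FSTA} = \mathrm{EFST} - \mathrm{TFST},
\]

e.g., 329 s − 300 s = 29 s for the LSTM flaming case, while the uniformly zero FAR values indicate that no false alarms were raised in any scenario.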
