Article

A Fire Detection Algorithm Based on Tchebichef Moment Invariants and PSO-SVM

School of Mechanical Engineering, Tongji University, Shanghai 201804, China
* Author to whom correspondence should be addressed.
Algorithms 2018, 11(6), 79; https://doi.org/10.3390/a11060079
Submission received: 7 March 2018 / Revised: 16 May 2018 / Accepted: 17 May 2018 / Published: 25 May 2018

Abstract

Automatic fire detection, which can detect fire and raise the alarm early, is expected to reduce the loss of life and property as much as possible. Owing to its advantages over traditional methods, image processing technology has gradually been applied to fire detection. In this paper, a novel algorithm is proposed for fire image detection, combining Tchebichef (sometimes referred to as Chebyshev) moment invariants (TMIs) and a particle swarm optimization-support vector machine (PSO-SVM). Based on the correlation between geometric moments and Tchebichef moments, the translation, rotation, and scaling (TRS) invariants of Tchebichef moments are obtained first. Then, the TMIs of candidate images are calculated to construct feature vectors. To obtain the best detection performance, a PSO-SVM model is proposed, in which the kernel parameter and penalty factor of the support vector machine (SVM) are optimized by particle swarm optimization (PSO). The PSO-SVM model is then used to identify fire images. Compared with algorithms based on Hu moment invariants (HMIs) and Zernike moment invariants (ZMIs), the experimental results show that the proposed algorithm improves detection accuracy, achieving the highest detection rate of 98.18%. Moreover, it exhibits the best performance even when the training sample set is small and the images are transformed by TRS.

1. Introduction

Fire has played a significant role in promoting the progress of human civilization. However, without proper management, it is also one of the major disasters causing huge loss of human lives and property all over the world. Therefore, it is essential to propose a reliable and effective algorithm to detect and raise the alarm for fire as soon as possible.
Traditional detection algorithms usually use heat sensors, thermocouples, or ionization sensors to detect fire by sensing temperature and smoke particles [1]. In the past few years, Siemens launched the FDT221 fire detector, equipped with two redundant heat sensors, which monitors rooms in which a temperature rise is expected in case of fire. Mircom also launched the MIX-200 series of intelligent sensors for residential applications, equipped with photoelectric smoke detectors and electronic thermistors. Although these sensors may work efficiently in particular cases, they suffer from the large propagation delays of smoke and heat, which increase fire detection latency. Other algorithms, based on beam or aspirated smoke detectors, attempt to reduce this latency, but they do not solve the practical problem completely [2]. Moreover, all of the abovementioned sensors must be in close proximity to the fire. Optical sensors have the advantages of long detection distance and fast response; however, as point detectors, their detection area is limited. Because these traditional algorithms do not detect the fire itself directly, they are not always reliable.
Fire image detection is a relatively new technology based on video cameras, detecting fire through the intelligent analysis of images with advanced algorithms. Compared with traditional sensors, video cameras have the following advantages. First of all, they can avoid detection latency to a great extent. They can also monitor a larger area as volume detectors, and are adaptable to outdoor locations where traditional sensors are difficult to place. Finally, with the increasing concern for security, more and more surveillance cameras have been installed for various security applications, providing a convenient way to embed fire image detection into existing systems.
Research on fire image detection mainly focuses on two tasks: feature extraction and state detection. For the former, several studies have highlighted a set of features for fire detection, such as color, flickering, motion, and edge blurring. Among them, color is considered one of the most important clues. Li et al. [3] developed computer algorithms to segment the fire zone in the HSV (hue, saturation, value) color space and discover the corresponding points of image pairs, determining the location, size, and propagation of the fire. A method based on the RGB (red, green, blue) color model and the difference between two consecutive potential fire masks was proposed in Reference [4], which indicated whether the potential region was a fire or a fire-like object. Other color models, such as YCbCr and CIE Lab, have also been used for fire pixel classification [5,6]. The analysis of fire-colored moving pixels in consecutive frames contributes to raising the alarm. Furthermore, some extra clues, such as spatial differences and temporal changes, are also involved. Habiboğlu et al. [7] used color, spatial, and temporal information and divided the video into spatiotemporal blocks, extracting covariance-based features from these blocks to detect fire. Borges et al. [8] presented a probabilistic model for color-based fire detection to extract the features of candidate fire regions, including area size, boundary roughness, skewness, and surface coarseness. Unfortunately, color information and its derived descriptors are not reliable enough for fire detection. Owing to their poor robustness against false alarm sources, they are easily affected by fire-colored objects, such as red flags, the setting sun, and lighting. In addition, some color spaces have the disadvantage of illumination dependence; in other words, changes of illumination in the image have a negative effect on the fire pixel classification rules. Xu et al. [9] proposed a method for automatic fire smoke detection based on visual image features, achieved by extracting the static and dynamic features of fire smoke, including growth, disorder, frequent flicker at boundaries, self-similarity, and local wavelet energy. In Reference [10], a statistical analysis was carried out to detect smoke, using the idea that smoke is characterized by a grayish color under different illuminations. However, these smoke-based fire detection methods can give false alarms if proper precautions are not taken. Great challenges remain, such as dealing with smoke produced by cigarettes or other benign sources. Several researchers have investigated the Gaussian mixture model [11,12] and the optical flow method [13,14], which are suitable for fire image detection. However, these methods also have drawbacks, such as relying on a background frame and requiring large computational resources. From the above discussion, there is no doubt that a novel and effective algorithm for the feature extraction of fire images is needed.
As for state detection, the artificial neural network (ANN), a special form of neural computation, has been widely used for fire image detection [15,16,17]. It learns from historical data and offers strong robustness in data analysis, achieving a better detection rate than conventional algorithms. Nevertheless, its drawbacks of over-fitting, slow convergence, and easily becoming trapped in local optima are also obvious. The support vector machine (SVM) is a machine learning method based on statistical theory, first put forward by Vapnik [18]. It can overcome the curse of dimensionality and guarantees that the local and global optimal solutions coincide. SVM minimizes the upper bound of the generalization error based on structural risk minimization (SRM) while simultaneously considering empirical risk minimization (ERM), giving it better generalization performance than ANN [19,20,21,22]. Meanwhile, it requires fewer samples [23,24].
Many studies have been published concerning the state classification of fire with SVM [25,26,27,28], but little attention has been paid to how to select its parameters. It is well known that the performance of SVM depends greatly on its parameters. However, due to the lack of theoretical guidance, the choice of parameters mostly relies on experience and manual selection in practical applications, which is troublesome and time-consuming [29,30,31]. To improve fire detection performance and work efficiency, it is necessary to apply an effective and intelligent method to obtain the optimal parameters of SVM. In this paper, a novel fire image detection algorithm is proposed by combining Tchebichef moment invariants (TMIs) and a particle swarm optimization-support vector machine (PSO-SVM). Firstly, the translation, rotation, and scaling (TRS) invariants of Tchebichef moments are obtained in terms of the geometric moments. Then, the features of candidate images are extracted by calculating their TMIs to form feature vectors. To obtain the best fire detection performance, a PSO-SVM model is built with the feature vectors of the training sample set, in which PSO is applied to search for the best kernel parameter and penalty factor of SVM. Afterwards, the PSO-SVM model is used to detect the fire images of the testing sample set. To evaluate the performance of the proposed algorithm, a comparison with algorithms based on Hu moment invariants (HMIs) and Zernike moment invariants (ZMIs) is also performed. The experimental results show that the proposed algorithm improves detection accuracy, achieving the highest detection rate of 98.18%. Moreover, it continues to exhibit the best performance even when the training sample set is small and the images are transformed by TRS, proving its effectiveness and feasibility.
The remainder of this paper is structured as follows: Section 2 provides a brief introduction to Tchebichef moments. In Section 3, the translation, rotation, and scaling invariants of Tchebichef moments are obtained, which contribute to forming the feature vectors. Section 4 describes how to construct an SVM, introduces the basic principle of particle swarm optimization and the optimization procedure for the SVM parameters, and presents the process of fire image detection based on a PSO-SVM model. Section 5 describes the experiments conducted to demonstrate the performance of the proposed algorithm, and the results are discussed in detail. Finally, conclusions are stated in Section 6.

2. Tchebichef Moments

Moments are scalar quantities used to characterize a function and to capture its significant features. They have been widely used to describe the shape of a probability density function and, in classical rigid-body mechanics, to measure the mass distribution of a body [32]. Similarly, an image can be considered a piece-wise continuous real function $f(x,y)$ of two variables defined on a compact support $D \subset \mathbb{R} \times \mathbb{R}$ with a finite nonzero integral. From the mathematical point of view, image moments are the projection of an image onto a polynomial basis. Therefore, the general moment $G_{pq}$ of an image is defined as:
$$G_{pq} = \iint_{D} \psi_{pq}(x,y)\, f(x,y)\, dx\, dy \tag{1}$$
where $p, q$ are non-negative integers, $p+q$ is called the order of the moment, and $\psi_{pq}(x,y)$ is a polynomial basis function defined on $D$.
For a digital image, the integral in Equation (1) is usually approximated by a discrete summation. On an $N \times N$ image coordinate space, the moment can be expressed in the discrete domain as:
$$G_{pq} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \psi_{pq}(x,y)\, f(x,y) \tag{2}$$
where $0 \le p, q, x, y \le N-1$.
Image moments are region-based descriptors; depending on the polynomial basis used, various image moments can be defined. The Tchebichef moments of order $p+q$ of an $N \times N$ image are defined using the following polynomial basis function:
$$\psi_{pq}(x,y) = t_p(x)\, t_q(y) \tag{3}$$
where $t_p(x)$ and $t_q(y)$ are discrete orthogonal Tchebichef polynomials.
The discrete orthogonal Tchebichef polynomial $t_p(x)$ of order $p$ and length $N$ is defined as in Reference [33]:
$$t_p(x) = p! \sum_{k=0}^{p} (-1)^{p-k} \binom{N-1-k}{p-k} \binom{p+k}{p} \binom{x}{k} \tag{4}$$
It can be observed that the value of $t_p(x)$ given by Equation (4) grows as $N^p$, which is not suitable for defining moments [34]. To ensure the numerical stability of higher-order functions, a scaled version of the Tchebichef polynomials is used [35]:
$$\tilde{t}_p(x) = \frac{t_p(x)}{\sqrt{\rho(p,N)}} \tag{5}$$
where the squared norm $\rho(p,N)$ is given by:
$$\rho(p,N) = (2p)!\, \binom{N+p}{2p+1} \tag{6}$$
Thus, for an $N \times N$ image $f(x,y)$, the Tchebichef moments $T_{pq}$ of order $p+q$ are defined as:
$$T_{pq} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \tilde{t}_p(x)\, \tilde{t}_q(y)\, f(x,y) \tag{7}$$
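To make the construction concrete, the following minimal Python sketch (an illustration of Equations (4)-(7), not the authors' MATLAB code) evaluates the scaled Tchebichef polynomials and the moments; the normalization by the square root of the squared norm $\rho(p,N)$ is our reading of Equation (5).

```python
import math
import numpy as np

def tchebichef_poly(p, N):
    """Values of the scaled Tchebichef polynomial t~_p(x) for x = 0..N-1 (Eqs 4-6)."""
    rho = math.factorial(2 * p) * math.comb(N + p, 2 * p + 1)   # squared norm, Eq (6)
    t = np.empty(N)
    for x in range(N):
        s = sum((-1) ** (p - k)
                * math.comb(N - 1 - k, p - k)
                * math.comb(p + k, p)
                * math.comb(x, k)
                for k in range(p + 1))
        t[x] = math.factorial(p) * s / math.sqrt(rho)           # Eq (5), sqrt assumed
    return t

def tchebichef_moment(f, p, q):
    """Tchebichef moment T_pq of a square grayscale image f (Eq 7)."""
    tp = tchebichef_poly(p, f.shape[0])
    tq = tchebichef_poly(q, f.shape[0])
    return tp @ f @ tq          # sum over x, y of t~_p(x) t~_q(y) f(x, y)
```

As a quick check, `tchebichef_poly(1, 4)` returns $(2x-3)/\sqrt{20}$ for $x = 0, \ldots, 3$, a unit-norm vector, since $\sum_x \tilde{t}_p(x)^2 = \rho(p,N)/\rho(p,N) = 1$.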

3. Feature Extraction Based on TMIs

Translation, rotation, and scaling are important transformations of spatial coordinates. To guarantee that fire images can be recognized correctly and consistently, regardless of their position and orientation in the scene and the fire-to-camera distance, Tchebichef moment invariants with respect to TRS are proposed in this paper to extract image features.

3.1. Tchebichef Polynomials Expansion

From Equations (4) and (5), the scaled discrete orthogonal Tchebichef polynomials $\tilde{t}_p(x)$ can be expanded into a power series as [35]:
$$\tilde{t}_p(x) = \sum_{k=0}^{p} C(p,k)\, (x)_k \tag{8}$$
where the coefficient $C(p,k)$ is given by:
$$C(p,k) = \frac{(-1)^{p}}{\sqrt{\rho(p,N)}} \cdot \frac{(p+k)!}{(p-k)!\,(k!)^2} \cdot \frac{(N-k-1)!}{(N-p-1)!} \tag{9}$$
Furthermore, using the Stirling numbers of the first kind $s_1(k,i)$, the Pochhammer symbol $(x)_k$ can be expressed as:
$$(x)_k = \sum_{i=0}^{k} (-1)^{k-i}\, s_1(k,i)\, x^i \tag{10}$$
where $s_1(k,i)$ satisfies the recurrence relations:
$$s_1(0,0) = 1, \quad s_1(0,i) = s_1(k,0) = 0, \quad s_1(k,i) = s_1(k-1,i-1) - (k-1)\, s_1(k-1,i) \tag{11}$$
with $k \ge 1$, $i \ge 1$.
Substituting Equation (10) into Equation (8), the scaled Tchebichef polynomials $\tilde{t}_p(x)$ can be rewritten as:
$$\tilde{t}_p(x) = \sum_{k=0}^{p} \sum_{i=0}^{k} (-1)^{k}\, C(p,k)\, s_1(k,i)\, x^i = \sum_{i=0}^{p} B(i)\, x^i \tag{12}$$
where $B(i) = \sum_{k=i}^{p} (-1)^{k}\, C(p,k)\, s_1(k,i)$.
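The recurrence of Equation (11) and the coefficients $B(i)$ translate directly into code. The sketch below (continuing the Python illustration above; the function names are ours) builds the Stirling-number table and the expansion coefficients; for small $p$ it reproduces the direct evaluation of Equation (5) up to floating-point error.

```python
import math

def stirling_first(kmax):
    """Table s1[k][i] of Stirling numbers of the first kind via the recurrence of Eq (11)."""
    s1 = [[0] * (kmax + 1) for _ in range(kmax + 1)]
    s1[0][0] = 1
    for k in range(1, kmax + 1):
        for i in range(1, k + 1):
            s1[k][i] = s1[k - 1][i - 1] - (k - 1) * s1[k - 1][i]
    return s1

def expansion_coeffs(p, N):
    """Coefficients B(i) of Eq (12), so that t~_p(x) = sum_i B(i) x^i."""
    s1 = stirling_first(p)
    rho = math.factorial(2 * p) * math.comb(N + p, 2 * p + 1)   # Eq (6)
    def C(k):                                                   # Eq (9)
        return ((-1) ** p / math.sqrt(rho)
                * math.factorial(p + k) * math.factorial(N - k - 1)
                / (math.factorial(p - k) * math.factorial(k) ** 2
                   * math.factorial(N - p - 1)))
    return [sum((-1) ** k * C(k) * s1[k][i] for k in range(i, p + 1))
            for i in range(p + 1)]
```

For example, `expansion_coeffs(1, N)` yields $[-(N-1), 2]/\sqrt{\rho(1,N)}$, matching $t_1(x) = 2x - (N-1)$ from Equation (4).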

3.2. Tchebichef Moment Invariants

Equation (12) expresses the scaled Tchebichef polynomials as a power series, which makes it possible to write the Tchebichef moments as a linear combination of geometric moments. Substituting Equation (12) into Equation (7), the Tchebichef moments $T_{pq}$ can be expressed as:
$$T_{pq} = \sum_{i=0}^{p} \sum_{j=0}^{q} B(i)\, B(j)\, m_{ij} \tag{13}$$
where $m_{ij}$ is the geometric moment of order $i+j$, defined as:
$$m_{ij} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} x^i\, y^j\, f(x,y) \tag{14}$$
The translation, rotation, and scaling invariants of Tchebichef moments can now be obtained from the corresponding invariants of geometric moments. Invariance to translation is achieved by shifting the coordinate origin to the image centroid. The central geometric moment $\mu_{ij}$ of order $i+j$ is defined as [36]:
$$\mu_{ij} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} (x-x_0)^i\, (y-y_0)^j\, f(x,y) \tag{15}$$
where $(x_0, y_0)$ denotes the image centroid, given by:
$$x_0 = m_{10}/m_{00}, \qquad y_0 = m_{01}/m_{00} \tag{16}$$
Rotation invariants are then obtained by transforming the Cartesian coordinate system into the principal-axes system. By analogy with the geometric properties of cross-sections in the mechanics of materials, the transformation is:
$$x' = x\cos\alpha + y\sin\alpha, \qquad y' = y\cos\alpha - x\sin\alpha \tag{17}$$
where $\alpha$ is the rotation angle of the original image, given by:
$$\alpha = \frac{1}{2}\tan^{-1}\!\left(\frac{2\mu_{11}}{\mu_{20}-\mu_{02}}\right) \tag{18}$$
It can easily be verified that the value of $\alpha$ given by Equation (18) is limited to $[-45^{\circ}, 45^{\circ}]$. In this paper, the modified version proposed in Reference [36] is used to obtain the exact angle $\alpha$ in the range of $0^{\circ}$ to $360^{\circ}$.
Under the transformation given by Equation (17), the central moment of order $i+j$ becomes:
$$\tilde{\mu}_{ij} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \big((x-x_0)\cos\alpha + (y-y_0)\sin\alpha\big)^i \big((y-y_0)\cos\alpha - (x-x_0)\sin\alpha\big)^j f(x,y) \tag{19}$$
which is invariant to image translation and rotation.
Furthermore, scaling invariance of $\tilde{\mu}_{ij}$ is achieved by proper normalization of each moment. In theory, any nonzero moment can serve as the normalizing factor; generally, a suitable power of $\tilde{\mu}_{00}$ is used. Hence, the normalized central geometric moment $\nu_{ij}$ of order $i+j$ is defined as:
$$\nu_{ij} = \frac{\tilde{\mu}_{ij}}{\tilde{\mu}_{00}^{\,r}} \tag{20}$$
where $r = (i+j+2)/2$.
By replacing the geometric moments $m_{ij}$ on the right-hand side of Equation (13) with $\nu_{ij}$, the translation and rotation invariants of the Tchebichef moments $T_{pq}$ are obtained:
$$T_{pq} = \sum_{i=0}^{p} \sum_{j=0}^{q} B(i)\, B(j)\, \nu_{ij} \tag{21}$$
Observe that, when an image is scaled, the coefficients $B(i)$ and $B(j)$ are also scaled according to the scaling factor. It is therefore reasonable to normalize $T_{pq}$ using the size of the image. For an $N \times N$ image, the zeroth-order moment obtained from Equation (21) is:
$$T_{00} = \sum_{i=0}^{0} \sum_{j=0}^{0} B(i)\, B(j)\, \nu_{00} = \frac{1}{N} \tag{22}$$
From Equation (22), $T_{00}$ captures exactly the dependence on the image size, so dividing the moments of Equation (21) by $T_{00}$ cancels the effect of image scaling. Finally, the proposed translation, rotation, and scale invariants of the Tchebichef moments are:
$$\tilde{T}_{pq} = \frac{T_{pq}}{T_{00}} \tag{23}$$
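A compact Python sketch of the whole derivation, Equations (13)-(23), is given below; it reuses `expansion_coeffs` from the previous sketch. Note that it resolves the quadrant of $\alpha$ with `atan2`, which only approximates the modified 0-360° method of Reference [36] used in the paper.

```python
import math
import numpy as np

def tmi(f, p, q):
    """TRS-invariant Tchebichef moment T~_pq of a square grayscale image f (Eqs 13-23)."""
    N = f.shape[0]
    X, Y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    m00 = f.sum()
    x0, y0 = (X * f).sum() / m00, (Y * f).sum() / m00            # centroid, Eq (16)
    mu = lambda i, j: (((X - x0) ** i) * ((Y - y0) ** j) * f).sum()   # Eq (15)
    alpha = 0.5 * math.atan2(2 * mu(1, 1), mu(2, 0) - mu(0, 2))  # Eq (18), quadrant-aware
    Xr = (X - x0) * math.cos(alpha) + (Y - y0) * math.sin(alpha) # Eq (17)
    Yr = (Y - y0) * math.cos(alpha) - (X - x0) * math.sin(alpha)
    def nu(i, j):                                                # Eqs (19)-(20)
        return ((Xr ** i) * (Yr ** j) * f).sum() / m00 ** ((i + j + 2) / 2)
    Bp, Bq = expansion_coeffs(p, N), expansion_coeffs(q, N)
    T = sum(Bp[i] * Bq[j] * nu(i, j)
            for i in range(p + 1) for j in range(q + 1))         # Eq (21)
    T00 = expansion_coeffs(0, N)[0] ** 2                         # Eq (22): nu(0,0) = 1
    return T / T00                                               # Eq (23)
```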

3.3. Feature Extraction

Using Equation (23), TMIs of any order can be calculated for the candidate images. In practical applications, a finite number of moment invariants is selected, such as $\tilde{T}_{01}, \tilde{T}_{10}, \tilde{T}_{11}, \tilde{T}_{20}, \ldots, \tilde{T}_{0p}, \tilde{T}_{q0}$.
Since $\tilde{T}_{01}$ and $\tilde{T}_{10}$ carry no information about the image content (the first-order central moments vanish after shifting the origin to the centroid), the feature extraction of every candidate image starts from the second-order TMIs, forming a feature vector of the form:
$$V = [\tilde{T}_{11}, \tilde{T}_{02}, \tilde{T}_{20}, \ldots, \tilde{T}_{0p}, \tilde{T}_{q0}] \tag{24}$$
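As an illustration (the order selection here is arbitrary; the specific six orders actually used in the experiments are given by Equation (37) in Section 5), a feature vector can be assembled from the `tmi` helper above:

```python
import numpy as np

ORDERS = [(1, 1), (0, 2), (2, 0)]   # illustrative only; see Eq (37) for the experimental choice

def feature_vector(f):
    """Feature vector V of Eq (24), built from second- and higher-order TMIs."""
    return np.array([tmi(f, p, q) for p, q in ORDERS])
```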

4. Particle Swarm Optimization-Support Vector Machine (PSO-SVM)

4.1. Support Vector Machine (SVM)

SVM is a supervised learning method based on statistical learning theory and the principle of structural risk minimization. Compared with neural network learning, SVM avoids the problems of over-fitting and local optima. It can learn in high-dimensional feature spaces and achieve excellent performance with limited training samples. Therefore, SVM is particularly suitable for detecting whether a candidate area is on fire.
For the SVM, the training samples comprise a number of images of the candidate area. The training sample set $D_t$ can be expressed as:
$$D_t = \{(V_i, y_i) \mid V_i \in \mathbb{R}^d,\ y_i \in \{-1, +1\},\ i = 1, 2, \ldots, n\} \tag{25}$$
where $V_i$ is the input feature vector of an image used to train the SVM classifier; $y_i$ is either $-1$ or $+1$, indicating that the training sample is a non-fire or fire image, respectively; and $n$ is the size of the training sample set.
In the $d$-dimensional space, the essential problem of SVM is to find a hyper-plane that divides the training samples into two classes. In the case of non-linear classification, the input vector $V_i$ is first mapped into a high-dimensional feature space by a non-linear mapping $\theta(V_i)$, where the samples become linearly separable, as shown in Figure 1. The hyper-plane can be written as:
$$w^{T}\theta(V) + b = 0 \tag{26}$$
in which every sample satisfies:
$$\begin{cases} w^{T}\theta(V_i) + b \ge +1, & y_i = +1 \\ w^{T}\theta(V_i) + b \le -1, & y_i = -1 \end{cases} \tag{27}$$
where $w$ is the weight vector and $b$ is the offset.
As shown in Figure 1, the margin between the two classes is $2/\|w\|$. The optimal separating hyper-plane is the one that separates the classes with the maximal margin, and maximizing $2/\|w\|$ is equivalent to minimizing $\|w\|^{2}/2$. Considering the case in which the training sample set $D_t$ cannot be separated without error, the SVM problem can be formulated as:
$$\min\ \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n}\xi_i \quad \text{s.t.}\ \ y_i\big(w^{T}\theta(V_i) + b\big) \ge 1 - \xi_i \tag{28}$$
where $C$ is the penalty factor for classification errors and $\xi_i \ge 0$ are slack variables.
Equation (28) is a convex optimization problem. For convenience of calculation, it can be converted into its Lagrangian dual formulation [18,37]:
$$\max\ \sum_{i=1}^{n}\beta_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} y_i y_j \beta_i \beta_j\, \theta(V_i)^{T}\theta(V_j) \quad \text{s.t.}\ \ 0 \le \beta_i \le C,\ \ \sum_{i=1}^{n}\beta_i y_i = 0 \tag{29}$$
where $\beta_i \ge 0$ is the Lagrange multiplier corresponding to each training sample. The samples with $\beta_i > 0$ are the support vectors, which lie on one of the two hyper-planes $w^{T}\theta(V) + b = +1$ and $w^{T}\theta(V) + b = -1$.
To handle the nonlinear SVM problem conveniently, a kernel function satisfying Mercer's condition is introduced to map the data into a high-dimensional space. Let $K(V_i, V_j)$ denote the kernel function, with $K(V_i, V_j) = \theta(V_i)^{T}\theta(V_j)$ in the feature space [37]. The objective in Equation (29) then becomes:
$$\max\ \sum_{i=1}^{n}\beta_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} y_i y_j \beta_i \beta_j\, K(V_i, V_j) \tag{30}$$
If $\beta_i^{*}$ denotes the optimal solution of the above problem, the optimal coefficients $w^{*}$ and $b^{*}$ of the hyper-plane are:
$$w^{*} = \sum_{i=1}^{n}\beta_i^{*} y_i\, \theta(V_i), \qquad b^{*} = -\frac{1}{2}\, w^{*T}\big[\theta(V_h) + \theta(V_l)\big] \tag{31}$$
where $\theta(V_h)$ and $\theta(V_l)$ are a pair of support vectors belonging to different classes.
Hence, the corresponding decision function $f(V)$ is [38]:
$$f(V) = \operatorname{sgn}\left(\sum_{i=1}^{n}\beta_i^{*} y_i\, \theta(V_i)^{T}\theta(V) + b^{*}\right) = \operatorname{sgn}\left(\sum_{i=1}^{n}\beta_i^{*} y_i\, K(V_i, V) + b^{*}\right) \tag{32}$$
whose output is the result of fire image detection.
In addition, since the kernel function is a key component of SVM, its selection directly affects the learning and generalization ability of the model. Several kernel functions are available: the polynomial, sigmoid, linear, and radial basis function (RBF) kernels. Among them, the RBF kernel has the advantage of low computational complexity; therefore, it is adopted as the kernel function in this paper and is denoted by:
$$K(x, x') = \exp\left(-\frac{\|x - x'\|^{2}}{\tau}\right) \tag{33}$$
where $\tau > 0$ is the kernel parameter measuring the width of the RBF, and $x, x' \in \mathbb{R}^{d}$.
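For illustration, Equations (32) and (33) can be evaluated directly; the sketch below is ours, and the tie-breaking sign at zero is an arbitrary choice. In scikit-learn terms, the same kernel corresponds to `SVC(kernel="rbf", gamma=1/tau, C=C)`, since scikit-learn writes the RBF kernel as $\exp(-\gamma\|x-x'\|^2)$.

```python
import math
import numpy as np

def rbf_kernel(x1, x2, tau):
    """RBF kernel of Eq (33); tau > 0 controls the kernel width."""
    return math.exp(-float(np.sum((x1 - x2) ** 2)) / tau)

def decide(V, support_vectors, labels, betas, b, tau):
    """Decision function of Eq (32): +1 for a fire image, -1 otherwise."""
    s = sum(beta * y * rbf_kernel(Vi, V, tau)
            for beta, y, Vi in zip(betas, labels, support_vectors))
    return 1 if s + b >= 0 else -1
```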

4.2. Particle Swarm Optimization for SVM Parameters

From Equations (29), (32), and (33), it is clear that the values chosen for the kernel parameter $\tau$ and the penalty factor $C$ affect the detection result. If these parameters are excessively large, the detection accuracy will be very high on the training sample set but very low on the testing sample set; conversely, if they are too small, the detection accuracy will be too low to be acceptable, making the model useless.
PSO is a population-based global optimization algorithm inspired by the foraging behavior of bird flocks, proposed by Kennedy and Eberhart [39,40,41,42,43,44]. Owing to its high efficiency, robustness, and ease of implementation, the PSO algorithm is applied in this paper to optimize the parameters of SVM, combined with K-fold cross-validation.
In the PSO algorithm, the swarm is initialized as a population of particles with random positions and velocities. The fitness value of each particle is calculated with an objective function. Every particle then updates its velocity and position dynamically according to its personal-best value and the global-best value. As a result, the global-best position is found by simply adjusting the position of every particle according to its own experience and that of neighboring particles.
In the $s$-dimensional space, the $i$-th ($i = 1, 2, \ldots, s$) particle's position and velocity vectors can be written as $X_i = (X_{i1}, X_{i2}, \ldots, X_{is})$ and $v_i = (v_{i1}, v_{i2}, \ldots, v_{is})$, respectively. The PSO algorithm is then described by the following equations:
$$v_{is}(t+1) = \omega\, v_{is}(t) + c_1 r_1(t)\big[P_{is}(t) - X_{is}(t)\big] + c_2 r_2(t)\big[P_{gs}(t) - X_{is}(t)\big] \tag{34}$$
$$X_{is}(t+1) = X_{is}(t) + v_{is}(t+1) \tag{35}$$
where $v_{is}(t)$ is the previous velocity, limited to the range $[-v_{\max}, v_{\max}]$; $P_{is}(t)$ is the individual-best position found so far; $P_{gs}(t)$ is the global-best position found so far; $\omega$ is the inertia weight controlling the effect of the previous velocity on the current one; $c_1$ and $c_2$ are the acceleration coefficients; and $r_1(t)$ and $r_2(t)$ are two independent pseudo-random numbers uniformly distributed in $[0, 1]$.
The flow diagram of PSO-SVM model construction is illustrated in Figure 2, and the process of optimizing the SVM parameters with PSO is described in the following steps (a minimal code sketch follows the list):
Step 1.
Initialize particles and set PSO parameters. Generate the initial particles at random; each particle encodes the SVM parameters $\tau$ and $C$. Then, set the PSO parameters, including the population size $s$, the maximum number of iterations $t_{\max}$, the inertia weight $\omega$, and the acceleration coefficients $c_1, c_2$.
Step 2.
Fitness calculation. Evaluate the fitness value of every particle, defined as the average classification accuracy of K-fold cross-validation. Take each particle's current position as its individual-best point and the particle with the maximal fitness value as the global-best point.
Step 3.
Update. Every particle’s velocity and position are updated by Equations (34) and (35), respectively. Evaluate the fitness of every current particle and compare the fitness value with that of the individual-best point and that of the global-best point. If the current value is better, update the particle with the current value as its individual-best point or global-best point.
Step 4.
Check stopping criteria. If the maximum number of iterations is attained, the evolutionary process will stop. Otherwise, proceed to Step 3.
Step 5.
Determine parameters. When the maximum number of iterations is attained and the stopping criterion is satisfied, the optimal values of the parameters $\tau$ and $C$ are obtained. The training and verification procedure then ends, and the PSO-SVM model is constructed.
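The following Python sketch of Steps 1-5 is illustrative rather than the authors' MATLAB implementation; the velocity clamp, the clipping of positions to the search range, and the mapping `gamma = 1/tau` onto scikit-learn's RBF parameterization are our assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_svm(V, y, s=20, t_max=100, w=1.0, c1=1.49618, c2=1.49618,
            lo=0.1, hi=100.0, K=3, seed=0):
    """PSO search over (tau, C) with K-fold cross-validation accuracy as fitness."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (s, 2))          # particle positions: columns = (tau, C)
    vel = np.zeros((s, 2))
    vmax = (hi - lo) / 2.0                   # assumed velocity clamp

    def fitness(p):
        tau, C = p
        clf = SVC(kernel="rbf", gamma=1.0 / tau, C=C)
        return cross_val_score(clf, V, y, cv=K).mean()   # Step 2 fitness

    fit = np.array([fitness(p) for p in X])
    pbest, pbest_fit = X.copy(), fit.copy()              # individual-best points
    gbest = X[np.argmax(fit)].copy()                     # global-best point
    gbest_fit = fit.max()
    for _ in range(t_max):                               # Steps 3-4
        r1, r2 = rng.random((s, 1)), rng.random((s, 1))
        vel = w * vel + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq (34)
        vel = np.clip(vel, -vmax, vmax)
        X = np.clip(X + vel, lo, hi)                                    # Eq (35)
        fit = np.array([fitness(p) for p in X])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
        if fit.max() > gbest_fit:
            gbest, gbest_fit = X[np.argmax(fit)].copy(), fit.max()
    tau, C = gbest                                       # Step 5
    return tau, C, SVC(kernel="rbf", gamma=1.0 / tau, C=C).fit(V, y)
```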

4.3. Process of Fire Image Detection Based on the PSO-SVM Model

Combining the abovementioned contents, the flow diagram of fire image detection based on the PSO-SVM model is shown in Figure 3.
The process is as follows. First, a series of candidate images is collected with a video camera and divided into a training sample set and a testing sample set. Then, the Tchebichef moment invariants $\tilde{T}_{pq}$ of every candidate image are calculated, and some of them are selected to construct its feature vector $V_i$. Next, the SVM is trained with the feature vectors of the training sample set, and the SVM parameters are optimized using PSO. Afterwards, the training sample set is input into the SVM with the optimal parameters once again, yielding the PSO-SVM model. Finally, the feature vectors of the testing sample set are input into the PSO-SVM model to predict their class labels, achieving fire image detection.
To examine the performance of the PSO-SVM model fully, the detection rate $Acc$ is used as the evaluation criterion:
$$Acc = \frac{TP + TN}{TP + FP + TN + FN} \times 100\% \tag{36}$$
where $TP$ is the number of correctly predicted fire images; $TN$ is the number of correctly predicted non-fire images; $FP$ is the number of non-fire images incorrectly predicted as fire images; and $FN$ is the number of fire images incorrectly predicted as non-fire images.
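Since the four counts in Equation (36) sum to the size of the testing sample set, $Acc$ reduces to plain classification accuracy, as the short sketch below makes explicit.

```python
def detection_rate(y_true, y_pred):
    """Detection rate Acc of Eq (36): percentage of test images labeled correctly."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)
```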

5. Experiments

5.1. Experimental Setup

To collect a series of candidate images, a video camera was used to monitor a burning fire for a period of time. Normal heptane (n-heptane) was used as the fuel, and a cauldron about 700 mm in diameter and 22 mm deep served as the burner. The fire-to-camera distance was about 3 m. There were 100 candidate images with dimensions of 388 × 320 pixels. Among them, the 1st-60th images exhibited the stable burning of fire, as shown in Figure 4a-c; the 61st-93rd images showed the initial extinguishing of fire, as shown in Figure 4d-f; and the 94th-100th images contained the final extinguishing of fire, as shown in Figure 4g-i. Some images of initial extinguishing still contained information about the unstable burning of fire and were similar to those of stable burning. In contrast, the images of final extinguishing were quite different from those of stable burning and should be easy to detect. To better evaluate the algorithm's performance, however, the difficulty of detection should be increased. Thus, in our experiments, the first 60 candidate images were regarded as fire images and the rest as non-fire images. Moreover, according to the different experimental requirements, the candidate images could be divided into various training and testing sample sets.
First, the candidate images were converted into grayscale images. In the construction of the feature vectors, selecting Tchebichef moment invariants of proper orders is very important, as it affects the accuracy of the detection results. Considering the detection performance and the running time confirmed experimentally, the feature vector $V_i$ used in this paper is:
$$V_i = [\tilde{T}_{21}, \tilde{T}_{22}, \tilde{T}_{30}, \tilde{T}_{40}, \tilde{T}_{41}, \tilde{T}_{50}] \tag{37}$$
During the optimization process, the initialization parameters of PSO were set as follows: population size $s = 20$, maximum number of iterations $t_{\max} = 100$, inertia weight $\omega = 1$, and acceleration coefficients $c_1 = c_2 = 1.49618$. For the cross-validation, the K-fold method was performed with $K = 3$. The parameters $\tau$ and $C$ of the RBF kernel function were searched within the fixed range $[0.1, 100]$. These initialization parameters are summarized in Table 1. All of the experiments were implemented in MATLAB 2013b and performed on a Windows 7 Ultimate operating system with a 2.4 GHz Intel Core i5 processor and 8 GB RAM.

5.2. Results and Discussion

In the first experiment, 25 fire images and 20 non-fire images were randomly selected as the training sample set used to construct the PSO-SVM model, while the remaining 55 images were taken as the testing sample set and used to verify the model's accuracy. Then, the TMIs of the candidate images were calculated to form the feature vectors according to Equation (37). The Tchebichef moment invariants of the candidate images in Figure 4 are given in Table 2.
Next, the feature vectors of the training sample set were input into the SVM, and the best values of the SVM parameters were sought by the PSO algorithm. As shown in Figure 5, the optimal parameters obtained were $\tau = 1.6620$ and $C = 60.5028$. With these optimal parameters, the training sample set was input into the SVM to obtain the PSO-SVM model. Finally, the feature vectors of the testing sample set were input into the PSO-SVM model to detect fire images. To evaluate the performance of the proposed algorithm, a comparison with the algorithms based on HMIs and ZMIs was also carried out. The results of the first experiment are given in Table 3.
It can be seen from Table 3 that the detection rate of HMIs, which are derived from non-orthogonal polynomials, is the lowest (89.09%). In contrast, the detection rate of TMIs, derived from discrete orthogonal polynomials, is the highest (98.18%). Although there is one false identification with the proposed algorithm, in practical applications the result of fire detection does not depend on a single image but on continuous images. In addition, combining Figure 5 and Table 3, the effectiveness of the PSO-SVM model is confirmed by the fact that it obtained the best performance and the highest detection rate.
In the second experiment, the size of the training sample set was varied to observe its effect on the detection rate. Sets of 10, 15, 20, and 25 fire images and 5, 10, 15, and 20 non-fire images were randomly selected, forming four training sample sets of sizes 15, 25, 35, and 45, respectively. The remaining candidate images formed the four corresponding testing sample sets. In the same way, the algorithms based on HMIs, ZMIs, and TMIs were compared, and the PSO-SVM model was used to predict the class labels of the testing sample sets. The results of the second experiment are shown in Figure 6.
When the training sample set is small, the description of the candidate images is not accurate enough, so the detection rates are all at their lowest points. As the size of the training sample set increases, the description and the classification boundary become more and more accurate, which explains the increasing trend of the detection rates; eventually, all of the detection rates reach their respective maximums. According to Figure 6, when the size of the training sample set is 15, the detection rate of the algorithm based on HMIs is only 58.82%, and that of the algorithm based on ZMIs is only 75.29%. In contrast, the detection rate of the proposed algorithm based on TMIs is 82.35% at this point. Furthermore, as the size increases from 15 to 45, the proposed algorithm maintains the highest detection rate and the smallest fluctuation across the different experiments. In conclusion, the results show that the proposed algorithm is more reliable, and maintains the highest recognition rate even when the training sample set is small.
In the third experiment, the performance of the proposed method on a TRS sample set was examined. First, the images shown in Figure 4 were selected as the temporary sample set. Then, each image of the temporary sample set was translated with translation vectors [−70, −50] and [50, 70], rotated by ±15°, and scaled with scaling factors from 0.8 to 1.2 in increments of 0.2. Finally, a TRS sample set of 63 (9 × 7) images was obtained. The TRS-transformed versions of Figure 4a are shown in Figure 7.
The training sample set consisted of nine fire images and 18 non-fire images randomly selected from the TRS sample set; the remaining 36 images were taken as the testing sample set. Table 4 gives the values of the Tchebichef moment invariants of the images in Figure 7. It can be seen from Table 4 that the feature vector is invariant to translation, rotation, and scaling.
Similarly, the methods based on HMIs, ZMIs, and TMIs were compared in the third experiment; the results are given in Table 5. Compared with the first experiment, all of the detection rates declined somewhat. The detection rate of HMIs declined the most (by 8.53 percentage points), and that of TMIs declined the least (by 0.96 percentage points). However, the method based on TMIs still achieved the highest recognition rate (97.22%) among the three methods, with only one false identification; moreover, it was the only method whose detection rate exceeded 95%.
It is worth noting here that the polynomial basis functions of the Hu moments have the simplest expression, but they are not orthogonal. Owing to the resulting redundant information, feature extraction based on HMIs has a poor capability of describing the target, resulting in a low recognition rate. On the other hand, although the Zernike radial polynomials are orthogonal, they are defined in the continuous domain of the image coordinate space; their implementation involves numerical approximations, which degrade the orthogonality. Besides, the Zernike radial polynomials are defined inside the unit circle, so their application also requires an appropriate transformation of the image coordinate space, increasing the computational complexity. In contrast, the Tchebichef polynomials are orthogonal in the discrete domain of the image coordinate space, defining the corresponding moments directly on the image coordinate space and involving no numerical approximation. These advantages make them superior to the Hu and Zernike moments, and thus they give the best performance in image description.
In summary, the proposed algorithm based on TMIs and the PSO-SVM model can achieve accurate fire image detection and has advantages over the algorithms based on HMIs and ZMIs. The effectiveness and feasibility of the proposed algorithm are thus confirmed.

6. Conclusions

In this paper, a novel algorithm for fire image detection based on TMIs and the PSO-SVM model is proposed. The TRS invariants of Tchebichef moments are obtained via the correlation between geometric moments and Tchebichef moments. Then, the TMIs of candidate images are used to form the feature vectors, which effectively avoids the discretization error of continuous orthogonal moments and redundant information. To improve the performance and detection rate, a PSO-SVM model is also proposed, in which PSO is applied to optimize the SVM parameters. The experimental results show that the detection rate of TMIs is higher than those of HMIs and ZMIs, reaching 98.18%. Even when the training sample set is small and the images are transformed by TRS, the algorithm based on TMIs maintains the best detection performance. Therefore, the proposed algorithm is expected to find wide application in fire image detection.

Author Contributions

Y.B. and M.Y. proposed the algorithm in the paper; M.Y. conceived and performed the experiments, analyzed the data, and wrote the paper; X.F. contributed program code; Y.L. retrieved related literature.

Acknowledgments

The authors would like to acknowledge the support of the National Key R&D Program of China (Grant No. 2016YFC0802900).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Verstockt, S.; Lambert, P.; Walle, R.V.D.; Merci, B.; Sette, B. State of the art in vision-based fire and smoke detection. In Proceedings of the 14th International Conference on Automatic Fire Detection (AUBE), Duisburg, Germany, 8–10 September 2009. [Google Scholar]
  2. Wang, H.; Finn, A.; Erdinc, O.; Vincitore, A. Spatial-temporal structural and dynamics features for video fire detection. In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), Sarasota, FL, USA, 15–17 January 2013. [Google Scholar]
  3. Li, G.; Lu, G.; Yan, Y. Fire detection using stereoscopic imaging and image processing techniques. In Proceedings of the IEEE International Conference on Imaging Systems and Techniques (IST), Santorini, Greece, 14–17 October 2014. [Google Scholar]
  4. Duong, H.D.; Tinh, D.T. An efficient method for vision-based fire detection using SVM classification. In Proceedings of the International Conference on Soft Computing and Pattern Recognition (SoCPaR), Hanoi, Vietnam, 15–18 December 2013. [Google Scholar]
  5. Vipin, V. Image processing based forest fire detection. Int. J. Emerg. Technol. Adv. Eng. 2012, 2, 87–95. [Google Scholar]
  6. Celik, T. Fast and efficient method for fire detection using image processing. ETRI J. 2010, 32, 881–890. [Google Scholar] [CrossRef]
  7. Habiboğlu, Y.H.; Günay, O.; Çetin, A.E. Covariance matrix-based fire and flame detection method in video. Mach. Vis. Appl. 2012, 23, 1103–1113. [Google Scholar] [CrossRef] [Green Version]
  8. Borges, P.V.K.; Izquierdo, E. A probabilistic approach for vision-based fire detection in videos. IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 721–731. [Google Scholar] [CrossRef]
  9. Xu, Z.; Xu, J. Automatic fire smoke detection based on image visual features. In Proceedings of the International Conference on Computational Intelligence and Security Workshops (CISW), Harbin, China, 15–19 September 2007. [Google Scholar]
  10. Çelik, T.; Özkaramanlı, H.; Demirel, H. Fire and smoke detection without sensors: Image processing based approach. In Proceedings of the 15th European Signal Processing Conference (EUSIPCO), Poznan, Poland, 3–7 September 2007. [Google Scholar]
  11. Chen, J.; He, Y.; Wang, J. Multi-feature fusion based fast video flame detection. Build. Environ. 2010, 45, 1113–1122. [Google Scholar] [CrossRef]
  12. Yuan, F. An integrated fire detection and suppression system based on widely available video surveillance. Mach. Vis. Appl. 2010, 21, 941–948. [Google Scholar] [CrossRef]
  13. Ha, C.; Hwang, U.; Jeon, G.; Cho, J.; Jeong, J. Vision-based fire detection algorithm using optical flow. In Proceedings of the Sixth International Conference on Complex, Intelligent and Software Intensive Systems (CISIS), Palermo, Italy, 4–6 July 2012. [Google Scholar]
  14. Yu, C.; Zhang, Y.; Fang, J.; Wang, J. Video smoke recognition based on optical flow. In Proceedings of the 2nd International Conference on Advanced Computer Control (ICACC), Shenyang, China, 27–29 March 2010. [Google Scholar]
  15. Wang, C.; Sun, L.; Yuan, T.; Sun, X. Wind turbine fire image detection based on LVQ neural network. In Proceedings of the IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), Chengdu, China, 19–22 June 2016. [Google Scholar]
  16. Rong, J.; Zhou, D.; Yao, W.; Gao, W.; Chen, J.; Wang, J. Fire flame detection based on GICA and target tracking. Opt. Laser Technol. 2013, 47, 283–291. [Google Scholar] [CrossRef]
  17. Wang, Y.; Ma, X. Fire detection based on image processing in coal mine. In Proceedings of the International Conference on Internet Computing and Information Services (ICICIS), Hong Kong, China, 17–18 September 2011. [Google Scholar]
  18. Vapnik, V. The Nature of Statistical Learning Theory, 2nd ed.; Springer: New York, NY, USA, 1999. [Google Scholar]
  19. Niu, X.; Yang, C.; Wang, H.; Wang, Y. Investigation of ANN and SVM based on limited samples for performance and emissions prediction of CRDI-assisted marine diesel engine. Appl. Therm. Eng. 2016, 111, 1353–1364. [Google Scholar] [CrossRef]
  20. Ren, J. ANN vs. SVM: Which one performs better in classification of MCCs in mammogram imaging. Knowl. Based Syst. 2012, 26, 144–153. [Google Scholar] [CrossRef] [Green Version]
  21. Widodo, A.; Yang, B.S. Support vector machine in machine condition monitoring and fault diagnosis. Mech. Syst. Signal Process. 2007, 21, 2560–2574. [Google Scholar] [CrossRef]
  22. Giorgi, M.G.D.; Campilongo, S.; Ficarella, A.; Congedo, P.M. Comparison between wind power prediction models based on wavelet decomposition with least-squares support vector machine (LS-SVM) and artificial neural network (ANN). Energies 2014, 7, 5251–5272. [Google Scholar] [CrossRef]
  23. Zhu, K.; Song, X.; Xue, D. A roller bearing fault diagnosis method based on hierarchical entropy and support vector machine with particle swarm optimization algorithm. Measurement 2014, 47, 669–675. [Google Scholar] [CrossRef]
  24. Pal, M.; Mather, P.M. Support vector machines for classification in remote sensing. Int. J. Remote Sens. 2015, 26, 1007–1011. [Google Scholar] [CrossRef]
  25. Jia, Y.; Wang, H. Image fire-detection system based on hierarchical cluster and support vector machine. Futur. Commun. Technol. 2014, 51, 1405–1414. [Google Scholar]
  26. Truong, T.X.; Kim, J.M. Fire flame detection in video sequences using multi-stage pattern recognition techniques. Eng. Appl. Artif. Intell. 2012, 25, 1365–1372. [Google Scholar] [CrossRef]
  27. Li, T.; Mao, Y.; Feng, P.; Wang, H.; Jian, D. An efficient fire detection method based on orientation feature. Int. J. Control Autom. Syst. 2013, 11, 1038–1045. [Google Scholar] [CrossRef]
  28. Zhao, J.; Zhang, Z.; Han, S.; Qu, C.; Yuan, Z.; Zhang, D. SVM based forest fire detection using static and dynamic features. Comput. Sci. Inf. Syst. 2011, 8, 821–841. [Google Scholar] [CrossRef]
  29. Du, J.; Liu, Y.; Yu, Y.; Yan, W. A prediction of precipitation data based on support vector machine and particle swarm optimization (PSO-SVM) algorithms. Algorithms 2017, 10, 57. [Google Scholar] [CrossRef]
  30. Wang, Y.M.; Cui, T.; Zhang, F.J.; Dong, T.P.; Li, S. Fault diagnosis of diesel engine lubrication system based on PSO-SVM and centroid location algorithm. In Proceedings of the International Conference on Control, Automation and Information Sciences (ICCAIS), Ansan, Korea, 27–29 October 2016. [Google Scholar]
  31. Ye, F.; Han, M. Simultaneous feature with support vector selection and parameters optimization using GA-based SVM solve the binary classification. In Proceedings of the IEEE International Conference on Computer Communication and the Internet, Wuhan, China, 13–15 October 2016. [Google Scholar]
  32. Flusser, J.; Zitova, B.; Suk, T. Moments and Moment Invariants in Pattern Recognition, 1st ed.; Wiley & Sons Ltd.: West Sussex, UK, 2009. [Google Scholar]
  33. Erdélyi, A.; Magnus, W.; Oberhettinger, F.; Tricomi, F.G. Higher Transcendental Functions, II; McGraw-Hill: New York, NY, USA, 1953. [Google Scholar]
  34. Wu, H.; Yan, S. Computing invariants of Tchebichef moments for shape based image retrieval. Neurocomputing 2016, 215, 110–117. [Google Scholar] [CrossRef]
  35. Mukundan, R.; Ong, S.H.; Lee, P.A. Image analysis by Tchebichef moments. IEEE Trans. Image Process. 2001, 10, 1357–1364. [Google Scholar] [CrossRef] [PubMed]
  36. Teague, M.R. Image analysis via the general theory of moments. J. Opt. Soc. Am. 1980, 70, 920–930. [Google Scholar] [CrossRef]
  37. Burges, C.J.C. A Tutorial on Support Vector Machines for Pattern Recognition; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1998. [Google Scholar]
  38. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  39. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  40. Tu, C.J.; Chuang, L.Y.; Chang, J.Y.; Yang, C.H. Feature selection using PSO-SVM. IAENG Int. J. Comput. Sci. 2007, 33, 111–116. [Google Scholar]
  41. Zhang, J.; Tittel, F.K.; Gong, L.; Lewicki, R.; Griffin, R.J.; Jiang, W.; Jiang, B.; Li, M. Support vector machine modeling using particle swarm optimization approach for the retrieval of atmospheric ammonia concentrations. Environ. Model. Assess. 2016, 21, 531–546. [Google Scholar] [CrossRef]
  42. Vashishtha, N.J. Particle swarm optimization based feature selection. Int. J. Comput. Appl. 2016, 146, 11–17. [Google Scholar]
  43. Saxena, A.; Shrivas, M.M. Filter—PSO based approach for feature selection. Int. J. Adv. Res. Comput. Sci. 2017, 8, 2063–2073. [Google Scholar]
  44. Pandey, A.; Jayant Deen, A.; Pandey, R. Content based structural recognition for image classification using PSO technique and SVM. Int. J. Comput. Appl. 2014, 87, 6–11. [Google Scholar] [CrossRef]
Figure 1. The schematic diagram of support vector machine (SVM) classification.
Figure 2. The flow diagram of particle swarm optimization-support vector machine (PSO-SVM) model construction.
Figure 3. The flow diagram of fire image detection based on the PSO-SVM model.
Figure 4. Candidate images: (a-c) stable burning of fire; (d-f) initial extinguishing of fire; (g-i) final extinguishing of fire.
Figure 5. The fitness curve of particle swarm optimization.
Figure 6. Effect of training dataset size on recognition rate.
Figure 7. Images transformed by translation, rotation, and scaling (TRS). (a) Original image; (b) image translated with [−70, −50]; (c) image translated with [50, 70]; (d,e) images rotated by ±15°; (f) image scaled with 0.8; (g) image scaled with 1.2.
Table 1. The initialization parameters of PSO.

| Parameter | Value | Range | Description |
|---|---|---|---|
| $s$ | 20 | - | Size of population |
| $t_{\max}$ | 100 | - | Maximum number of iterations |
| $\omega$ | 1 | - | Inertia weight |
| $c_1$ | 1.49618 | - | Acceleration coefficient 1 |
| $c_2$ | 1.49618 | - | Acceleration coefficient 2 |
| $K$ | 3 | - | Fold number of cross-validation |
| $\tau$ | - | [0.1, 100] | Kernel parameter of SVM |
| $C$ | - | [0.1, 100] | Penalty parameter of SVM |
Table 2. Tchebichef moment invariants of the candidate images in Figure 4.

| Image | $\tilde{T}_{21}$ | $\tilde{T}_{22}$ | $\tilde{T}_{30}$ | $\tilde{T}_{40}$ | $\tilde{T}_{41}$ | $\tilde{T}_{50}$ |
|---|---|---|---|---|---|---|
| (a) | −5.9312 | 3.4247 | −4.1861 | 9.1684 | −9.2667 | −4.4579 |
| (b) | −5.9295 | 3.4353 | −4.1804 | 9.1665 | −9.2634 | −4.4530 |
| (c) | −5.9059 | 3.3894 | −4.0997 | 9.1391 | −9.2161 | −4.3828 |
| (d) | −5.3757 | 2.5663 | −2.2898 | 8.5260 | −8.1568 | −2.8091 |
| (e) | −5.5311 | 2.9393 | −2.8203 | 8.7057 | −8.4674 | −3.2705 |
| (f) | −5.7740 | 3.3193 | −3.6496 | 8.9866 | −8.9526 | −3.9913 |
| (g) | −5.5260 | 2.7327 | −2.8028 | 8.6997 | −8.4570 | −3.2550 |
| (h) | −5.1724 | 2.1677 | −1.5957 | 8.2908 | −7.7506 | −2.2056 |
| (i) | −5.5726 | 2.7335 | −2.9621 | 8.7537 | −8.5503 | −3.3936 |
Table 3. Detection rate of the testing sample set in the first experiment.

| Parameter | Hu | Zernike | Tchebichef |
|---|---|---|---|
| Detection amount | 49 | 52 | 54 |
| Detection rate | 89.09% | 94.55% | 98.18% |
Table 4. Tchebichef moment invariants of the images in Figure 7.

| Image | $\tilde{T}_{21}$ | $\tilde{T}_{22}$ | $\tilde{T}_{30}$ | $\tilde{T}_{40}$ | $\tilde{T}_{41}$ | $\tilde{T}_{50}$ |
|---|---|---|---|---|---|---|
| (a) | −5.9312 | 3.4247 | −4.1861 | 9.1684 | −9.2667 | −4.4579 |
| (b) | −5.9645 | 3.4536 | −4.1563 | 9.0566 | −9.1736 | −4.3712 |
| (c) | −5.8993 | 3.4438 | −4.2905 | 9.0983 | −9.0935 | −4.5017 |
| (d) | −5.9399 | 3.3933 | −4.1257 | 8.9619 | −9.0458 | −4.4266 |
| (e) | −6.0124 | 3.4201 | −4.1036 | 9.1818 | −9.1457 | −4.4338 |
| (f) | −5.9461 | 3.4323 | −4.1398 | 9.2082 | −9.2214 | −4.3910 |
| (g) | −6.0049 | 3.4671 | −4.2001 | 9.1286 | −9.3101 | −4.4073 |
| Standard deviation | 0.0405 | 0.0242 | 0.0621 | 0.0850 | 0.0939 | 0.0435 |
Table 5. Detection rate of the testing sample set in the third experiment.

| Parameter | Hu | Zernike | Tchebichef |
|---|---|---|---|
| Detection amount | 29 | 31 | 35 |
| Detection rate | 80.56% | 86.11% | 97.22% |
