Review

A Review on Early Forest Fire Detection Systems Using Optical Remote Sensing

by Panagiotis Barmpoutis, Periklis Papaioannou, Kosmas Dimitropoulos and Nikos Grammalidis *

Centre for Research and Technology Hellas, Information Technologies Institute, 57001 Thessaloniki, Greece

* Author to whom correspondence should be addressed.
Sensors 2020, 20(22), 6442; https://doi.org/10.3390/s20226442
Submission received: 15 October 2020 / Revised: 7 November 2020 / Accepted: 10 November 2020 / Published: 11 November 2020
(This article belongs to the Special Issue Sensors for Fire and Smoke Monitoring)

Abstract
The environmental challenges the world faces today have never been greater or more complex. Global areas covered by forests and urban woodlands are threatened by natural disasters that have increased dramatically during the last decades, in terms of both frequency and magnitude. Large-scale forest fires are among the most harmful natural hazards, affecting climate and life around the world. Thus, to minimize their impacts on people and nature, the adoption of well-planned, closely coordinated, and effective prevention, early warning, and response approaches is necessary. This paper presents an overview of the optical remote sensing technologies used in early fire warning systems and provides an extensive survey of both flame and smoke detection algorithms employed by each technology. Three types of systems are identified, namely terrestrial, airborne, and spaceborne systems, and various models aiming to detect fire occurrences with high accuracy in challenging environments are studied. Finally, the strengths and weaknesses of fire detection systems based on optical remote sensing are discussed, aiming to contribute to future research projects for the development of early fire warning systems.

1. Introduction

Over the last few years, climate change and human-caused factors have had a significant impact on the environment, manifesting in extreme events such as heat waves, droughts, dust storms, floods, hurricanes, and wildfires. Wildfires have severe consequences for local and global ecosystems and cause serious damage to infrastructure, injuries, and losses of human lives; therefore, fire detection and the accurate monitoring of the disturbance type, size, and impact over large areas are becoming increasingly important [1]. To this end, strong efforts have been made to avoid or mitigate such consequences through early fire detection and fire risk mapping [2]. Traditionally, forest fires were mainly detected by human observation from fire lookout towers, using only primitive tools such as the Osborne Firefinder [3]; however, this approach is inefficient, as it is prone to human error and fatigue. Conventional sensors for the detection of heat, smoke, flame, and gas, on the other hand, require time for particles to reach them and trigger an alarm. In addition, the range of such sensors is relatively small; hence, a large number of sensors must be installed to cover large areas [4].
Recent advances in computer vision, machine learning, and remote sensing technologies offer new tools for detecting and monitoring forest fires, while the development of new materials and microelectronics has allowed sensors to become more efficient in identifying active forest fires. Unlike other fire detection reviews that have focused on various sensing technologies [5], on video flame and/or smoke methodologies in the visible and/or InfraRed (IR) range [6,7,8,9], on various environments [10], or on airborne systems [11,12], in this paper we provide a comprehensive study of the most representative forest fire detection systems, focusing on those that use optical remote sensing together with digital image processing [13] and classification techniques [14]. Depending on the acquisition level, three broad categories of widely used systems that can detect or monitor active fire or smoke incidents in real or near-real time are identified and discussed, namely terrestrial, aerial, and satellite systems. These systems are usually equipped with visible, IR, or multispectral sensors whose data are processed by machine learning methods. These methods rely either on the extraction of handcrafted features or on powerful deep learning networks (Figure 1) for the early detection of forest fires (Figure 2), as well as for modeling fire or smoke behavior. Finally, we present the strengths and weaknesses of the aforementioned methods and sensors, as well as future trends in the field of early fire detection.
This paper is organized as follows: Section 2 covers different optical remote sensing systems for early fire detection, organized into three subsections on terrestrial, aerial, and satellite systems, respectively; Section 3 includes the discussion and the future scope of research.

2. Early Fire Detection Systems

2.1. Terrestrial Systems

Terrestrial-based early detection systems consist of either individual sensors (fixed, PTZ, or 360° cameras) or networks of ground sensors. These sensors need to be carefully placed to ensure adequate visibility; thus, they are usually located in watchtowers, i.e., structures built on high vantage points for monitoring high-risk areas, which can be used not only for detection but also for the verification and localization of reported fires. Two types of cameras are used for early fire detection, namely optical cameras and IR cameras, which can capture data ranging from low to ultra-high resolution for different fire detection scenarios [15]. Optical cameras provide color information, whereas IR imaging sensors provide a measure of the thermal radiation emitted by objects in the scene [16]. More recently, early detection systems that combine both types have also been introduced. Computer-based methods can process large volumes of data, aiming to achieve a consistent level of accuracy while maintaining a low false alarm rate. In the following, we first present traditional approaches based on handcrafted features, followed by more recent methods that use deep learning for automated feature extraction.

2.1.1. Traditional Methods

Detection methods that use optical sensors or RGB cameras combine features related to the physical properties of flame and smoke, such as color, motion, spectral, spatial, temporal, and texture characteristics. The following color spaces have been used for the task of early fire detection: RGB [17,18,19], YCbCr [20], CIELAB [21], YUV [22,23], and HSV [24]; however, a drawback of color-based fire detection models is their high false alarm rate, since color information alone is in most cases insufficient for early and robust fire detection. Thus, many of the developed methodologies combine color and motion information in images and videos [25]. Zhang et al. [26] used a probabilistic color-based model for the detection of fire regions and motion features for the final decision on fire occurrence. Avgerinakis et al. [27] identified candidate smoke blocks and then constructed histograms of oriented gradients (HOGs) and histograms of optical flow (HOFs), thus taking into account both appearance and motion information. Likewise, Mueller et al. [28] used two optical flow schemes, namely optimal mass transport models and data-driven optical flow models.
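To make the color-rule family of methods concrete, the following minimal Python sketch filters candidate fire pixels using red-dominance and saturation rules in the spirit of [18,24]; the threshold values are illustrative assumptions and not constants taken from any cited paper.

```python
# Hypothetical rule-based fire-pixel filter; thresholds are illustrative.
import cv2
import numpy as np

def fire_color_mask(bgr_image, r_threshold=190, s_threshold=65):
    """Return a binary mask (255 = candidate fire pixel) for a BGR frame."""
    channels = bgr_image.astype(np.int32)
    b, g, r = channels[:, :, 0], channels[:, :, 1], channels[:, :, 2]
    s = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.int32)

    rule_order = (r > g) & (g > b)   # fire pixels are red-dominated: R > G > B
    rule_red = r > r_threshold       # red channel above an absolute threshold
    rule_sat = s > s_threshold       # reject grayish, low-saturation regions

    return ((rule_order & rule_red & rule_sat) * 255).astype(np.uint8)
```

As the surrounding text notes, such a filter alone produces many false alarms (e.g., sunsets or red rooftops), which is why it is typically combined with motion cues.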
Other researchers have focused on the flickering effect of fire, which is observed in flame contours at a frequency of around 10 Hz, independently of the burning material and the burner [29]. To this end, Gunay et al. [30] distinguished flame flicker from the motion of fire-colored moving objects in videos using hidden Markov models (HMMs). Training HMMs reduces data redundancy and improves reliability, while real-time detection is also achieved [31]. Furthermore, HMM-based methods for fire detection in the compressed domain have been proposed for MJPEG2000 [32] and H.264 [33] compressed video. The use of multiple features can offer more accurate fire detection. Chen et al. [34] combined motion detection using a Gaussian mixture model, color analysis using RGB color filtering, and temporal flickering analysis; the algorithm was applied to a video dataset covering different daytime and nighttime environments. At night, however, color analysis is less useful and smoke is less visible, so nighttime wildfire detection typically relies on motion analysis. Töreyin et al. [35] proposed a system equipped with an optical camera and a methodology that combines feature extraction (moving pixel/region extraction, color-based segmentation, and wavelet analysis in the temporal and spatial domains), followed by a voting-based classifier. In [36], Barmpoutis et al. extracted an additional feature to estimate a spatio–temporal consistency energy and then used a support vector machine (SVM) classifier to increase the robustness of fire detection.
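The flicker cue can be illustrated with a short spectral test: the intensity of a candidate pixel is tracked over time and the share of temporal energy around the ~10 Hz band reported in [29] is measured. The band limits and energy-ratio threshold below are illustrative assumptions.

```python
# Hypothetical flicker test on one pixel's intensity series; the band and
# ratio thresholds are illustrative assumptions, not values from cited works.
import numpy as np

def has_flame_flicker(intensity_series, fps, band=(5.0, 13.0), ratio=0.25):
    """True if a large share of temporal energy lies in the flicker band."""
    series = np.asarray(intensity_series, dtype=float)
    series -= series.mean()                        # remove the DC component
    power = np.abs(np.fft.rfft(series)) ** 2       # one-sided power spectrum
    freqs = np.fft.rfftfreq(series.size, d=1.0 / fps)

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = power.sum()
    return total > 0 and power[in_band].sum() / total > ratio
```

Note that resolving a 10 Hz flicker requires sampling above the 20 Hz Nyquist limit, which standard 25–30 fps video satisfies.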
Many other researchers have used infrared cameras to reduce the false alarm rates of optical-based terrestrial systems. IR cameras measure the thermal radiation emitted by objects within the spectral range from either 3 to 5 μm (middle wavelength infrared, MWIR) or 8 to 14 μm (long wavelength infrared, LWIR). Although MWIR detectors are optimal for fire detection, they are expensive due to the cooling system they require, so LWIR cameras are typically used. In IR videos, the existence of rapidly time-varying contours is an important sign of the presence of fire in the scene. Arrue et al. [37] observed that fire detection systems fail for distant fires and proposed a system for false alarm reduction in infrared forest fire detection; more specifically, they used an adaptive infrared threshold, a segmentation method, and a neural network for early fire detection. Furthermore, Töreyin et al. [16] used IR video to overcome the limitations of optical cameras in detecting fires with little radiance in the visible spectrum: they first estimated the boundaries of moving bright regions in each frame and then performed spatio–temporal analysis in the wavelet domain using HMMs.
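As a simple illustration of thresholding on thermal imagery, the sketch below flags pixels that are statistical outliers within the current IR frame; the mean-plus-k-sigma rule and the value of k are assumptions made here for illustration and do not reproduce the adaptive scheme of [37].

```python
# Hypothetical hot-spot test on a radiometric IR frame (2D NumPy array).
import numpy as np

def hot_spot_mask(ir_frame, k=4.0):
    """Flag pixels far above the frame's mean apparent temperature."""
    mean, std = float(ir_frame.mean()), float(ir_frame.std())
    return ir_frame > mean + k * std   # boolean mask of candidate hot spots
```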
In contrast to single-sensor systems, multisensor systems typically cover wider areas and can achieve higher accuracies by fusing data from different sensors. The idea of integrated early fire detection and monitoring by combining data from optical and infrared cameras as well as a wireless sensor network (WSN) of temperature/humidity sensors was proposed by Grammalidis et al. [38]. Sensor data were processed and transmitted to a monitoring center employing computer vision and pattern recognition algorithms for automated fire detection and localization. The algorithm took into account color, spatial, and temporal information for flame detection, while for smoke detection, an online adaptive decision fusion (ADF) framework was developed. This framework consisted of several algorithms aiming to detect slow-moving objects, smoke-colored regions, and smoke region smoothness. Furthermore, improved early wildfire detection was achieved by fusing smoke detection from visual cameras and flame detection from infrared (LWIR) cameras. Similarly, Bosch et al. [39] proposed a system consisting of a wireless sensor network with a central monitoring station. In this, each sensor consists of an optical and a thermal camera and an integrated system for the processing of data and communication.
More recently, Barmpoutis et al. [40] and Dimitropoulos et al. [41] introduced fire detection systems based on dynamic texture analysis of smoke and flame through linear dynamical systems (LDSs). Their modeling, combining color, motion, and spatio–temporal features, led to higher detection rates and a significant reduction of false alarms. Temporal and spatial dynamic texture analysis of flame for forest fire detection was performed in [42], where dynamic texture features were derived using two-dimensional (2D) spatial wavelet decomposition in the temporal domain and three-dimensional (3D) volumetric wavelet decomposition. In [43], the authors improved the smoke modeling of fire incidents through dynamic textures by solving higher-order LDSs (h-LDSs). Finally, in [44], the authors took advantage of the geometric properties of the stabilized h-LDS (sh-LDS) space and proposed a novel descriptor, namely histograms of Grassmannian points (HoGP), to improve the classification of both flame and smoke sequences.

2.1.2. Deep Learning Methods

In contrast to the previously discussed methods that rely on handcrafted features, deep learning (DL) methods [45] can automatically extract and learn complex feature representations. Since the seminal work of Krizhevsky et al. [46], which achieved exceptional image classification performance by training a convolutional neural network (CNN) on efficient computation resources, deep learning has become one of the most rapidly evolving fields and has been successfully applied to numerous computer vision problems. To this end, Luo et al. [47] developed a smoke detection algorithm based on the motion characteristics of smoke and a CNN: they first identified candidate regions based on a dynamically updated background model and the dark channel prior method [48], and the features of each candidate region were then extracted automatically by a CNN consisting of five convolutional layers and three fully connected layers. In [49], the authors combined deep learning and handcrafted features to recognize fire and smoke areas: for static features, the AlexNet architecture was adapted, while for dynamic features an adaptive weighted direction algorithm was used. Moreover, Sharma et al. [50] used optical images and re-tuned two pre-trained CNNs, based on VGG16 and ResNet50 backbones, to distinguish images that contain fire from images that do not; it is worth mentioning that for training they created an unbalanced dataset including more non-fire images. Zhang et al. [51] proposed deep CNNs for forest fire detection in a cascaded fashion: they first trained a full-image fire classifier to decide whether an image contains fire and then applied a fine-grained patch classifier to localize the fire patches within the image. The full-image classifier is a deep CNN fine-tuned from AlexNet, and the fine-grained patch classifier is a two-layer fully connected neural network trained on the upsampled Pool-5 features. Muhammad et al. [52], inspired by the GoogLeNet architecture, proposed a fine-tuned fire detection CNN model for surveillance videos, while Shen et al. [53] used an optimized YOLO model for flame detection from video frames. Frizzi et al. [54] built a simple nine-layer CNN with an architecture similar to LeNet-5, including dropout layers and a leaky rectified linear unit (ReLU) activation function. Muhammad et al. [55] proposed a fine-tuned CNN architecture based on the SqueezeNet model and developed a feature map selection algorithm for fire segmentation and background analysis. In [56], the authors combined AlexNet as a baseline architecture with the internet of multimedia things (IoMT) for fire detection and disaster management; the developed system introduced an adaptive prioritization mechanism for cameras in the surveillance system, allowing high-resolution cameras to be activated to confirm the fire and analyze the data in real time. Furthermore, Dunnings and Breckon [57] used low-complexity CNN architectural variants and applied a superpixel localization approach to reduce the computational cost, achieving processing rates of up to 17 fps.
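Several of the works above share the same transfer-learning recipe: take an ImageNet-pretrained backbone, replace the classification head with a binary fire/non-fire head, and fine-tune. The following PyTorch sketch illustrates this recipe in the spirit of [50]; the frozen-backbone choice and the hyperparameters are illustrative assumptions, not the cited authors' exact configuration.

```python
# Hypothetical fire/non-fire transfer-learning setup; hyperparameters are
# illustrative and do not reproduce any cited paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

for param in model.parameters():
    param.requires_grad = False          # freeze the pretrained backbone

model.fc = nn.Linear(model.fc.in_features, 2)   # new fire/non-fire head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of (N, 3, 224, 224) images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```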
Since the number of publicly available wildfire datasets is still limited, Sousa et al. [58] proposed a fire detection method based on data augmentation and transfer learning: an Inception-v3 model pre-trained on ImageNet was retrained and evaluated using ten-fold cross-validation on the Corsican Fire Database [59]. Extending deep learning approaches, Barmpoutis et al. [60] combined the power of the faster region-based convolutional neural network (R-CNN) with multidimensional dynamic texture analysis based on higher-order LDSs, aiming at early forest fire detection. A modified faster R-CNN and a 3D CNN were combined in [61]: the faster R-CNN with non-maximum suppression was utilized to localize the smoke target based on static spatial information, and a 3D CNN was then used for smoke recognition by incorporating dynamic spatio–temporal information. Jadon et al. [62] developed the FireNet convolutional neural network using a standard fire dataset and a self-collected dataset, achieving encouraging performance across a series of evaluation metrics. Zhang et al. [63] trained a faster R-CNN for forest smoke detection by creating synthetic smoke images and achieved strong performance when tested on real smoke images. Moreover, in [64] the authors extracted spatial features through a faster R-CNN to detect suspected regions of fire (SroFs) and non-fire regions; the features of the detected SroFs in successive frames were then fed to a long short-term memory (LSTM) network to identify whether a fire is present over a short time period, and a majority voting scheme exploiting fire dynamics was used for the final decision. Shi et al. [65], inspired by the idea of R-CNN, combined image saliency detection and convolutional neural networks: they utilized the pixel-wise image saliency aggregating (PISA) method [66] to identify candidate regions and then classified them into fire and non-fire regions.
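For the detection-based pipelines above, the first stage is a region detector that proposes fire/smoke bounding boxes. The sketch below shows inference with torchvision's Faster R-CNN; the COCO-pretrained weights stand in for the fire/smoke-tuned models that the cited authors trained themselves.

```python
# Hypothetical detection stage; COCO weights are a stand-in for a model
# fine-tuned on fire/smoke data as in [60,63,64].
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights, fasterrcnn_resnet50_fpn)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

@torch.no_grad()
def detect(image, score_threshold=0.5):
    """Boxes and scores for one (3, H, W) float tensor scaled to [0, 1]."""
    output = model([image])[0]           # dict with boxes, labels, scores
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["scores"][keep]
```

In a pipeline such as [64], the per-frame detections produced by this stage are then aggregated temporally (e.g., by an LSTM) before the final decision.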
Instead of extracting bounding boxes, Yuan et al. [67] used a two-path encoder–decoder fully convolutional network (FCN) for visual smoke segmentation; FCNs can achieve end-to-end pixel-wise segmentation, so the precise location of smoke can be identified in images. They also created synthetic smoke images instead of manually labeling real smoke images for training and then tested the network on both synthetic and real videos. Cheng et al. [68] proposed a smoke detection model using DeepLabv3+ and a generative adversarial network (GAN): smoke pixels were first identified by fusing the output of DeepLabv3+ with a smoke heatmap based on HSV features, and a GAN was then employed to predict the smoke trend heatmap based on a space–time analysis of the smoke videos. Finally, in [69] the authors used a two-stage training of deep convolutional GANs for smoke detection, comprising a regular training step of a deep convolutional (DC)-GAN with real images and noise vectors, followed by a separate training step of the discriminator using smoke images without the generator.

2.2. Unmanned Aerial Vehicles

Terrestrial imaging systems can detect both flame and smoke, but in many cases it is almost impossible to view the flames of a wildfire in a timely manner from a ground-based camera or a camera mounted on a forest watchtower. To this end, autonomous unmanned aerial vehicles (UAVs) can provide a broader and more accurate perception of the fire from above, even in areas that are inaccessible or considered too dangerous for firefighting crews. Both fixed-wing and rotary-wing UAVs cover wide areas and are flexible, allowing the monitored area to be changed, but they are affected by weather conditions and have limited flight time. UAVs mostly use ultra-high-resolution optical or infrared charge-coupled device (CCD) cameras to capture images, as well as other sensors for navigation, such as global positioning system (GPS) receivers and inertial measurement units (IMUs).

2.2.1. Traditional Methods

The first attempts at aerial fire detection date back to around 1920, when planes were used for forest fire detection following their unsuccessful deployment for fire extinguishing [70]. In 1986, Stearns et al. [71] captured IR images with the Flying Infrared Signatures Technology Aircraft of the U.S. Air Force Geophysics Laboratory and described a spatial and spectral analysis methodology for wildfire detection and monitoring for fire control. Similarly, Den Breejen et al. [72] used a single manned aerial vehicle for forest fire detection and tracking; however, although the operation of manned aerial vehicles is safer in the busy airspace around a fire [12], these vehicles are typically large and entail high operational costs, making them a less useful tool for fire detection.
More recently, the deployment of UAVs has come to be considered a better option for forest fire detection. To achieve forest fire detection and tracking, Yuan et al. [73] used median filtering for noise reduction, color analysis based on the CIELAB color space, Otsu thresholding for fire segmentation, and blob counting. In [74], the same authors used visual images captured by the optical camera of a UAV: two color spaces, namely RGB and HIS, were chosen as inputs to a fuzzy logic rule, and an extended Kalman filter was employed to adapt to varying environmental conditions and perform smoke detection. Extending these color-based methods, in [75,76] fire flame and smoke pixels are segmented using both color and motion characteristics: for the estimation of color features, three color spaces (RGB, YCbCr, and HIS) were utilized, whereas for the extraction of motion characteristics the authors noted that flames exhibit turbulent, disordered movement; thus, an optical flow algorithm was used to examine the motion characteristics of forest fires and extract fire motion pixels using dynamic background analysis.
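The segmentation chain described for [73] can be sketched in a few lines of OpenCV: median filtering, CIELAB conversion, and Otsu thresholding. The choice of the a* channel (large for red regions) as the thresholded band is an assumption made here for illustration.

```python
# Hypothetical CIELAB/Otsu fire segmentation; the channel choice is assumed.
import cv2

def segment_fire(bgr_image):
    """Binary fire mask via median filter + Otsu threshold on the a* channel."""
    denoised = cv2.medianBlur(bgr_image, 5)          # noise reduction
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    a_channel = lab[:, :, 1]                         # green-red opponent axis
    _, mask = cv2.threshold(a_channel, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# Blob counting can then be performed on the mask, e.g.:
# n_labels, _ = cv2.connectedComponents(segment_fire(frame))
```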
Aiming to identify the fire location in terms of latitude, longitude, and altitude, the authors in [77] used a DJI F550 hexacopter and applied two coordinate system transformations, between the body-fixed frame, the north-east-down (NED) frame, and the Earth-centered Earth-fixed (ECEF) frame. A rule-based color model combining the RGB and YCbCr color spaces was then used to identify fire pixels. A fire simulation platform based on the Unity game engine and the robot operating system (ROS) was developed by Esfahlani [78]: video data were collected through a monocular camera, navigation relied on a simultaneous localization and mapping (SLAM) system, and a fire detection algorithm based on color, movement attributes, the temporal variation of fire intensity, and its accumulation around a point was deployed. Finally, a mixed reality (MR) system incorporating physical and virtual elements was adopted to visualize and test the developed system.
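The geolocation step in such pipelines ultimately reduces to standard reference-frame conversions. As one concrete link, the sketch below implements the widely used geodetic-to-ECEF conversion with WGS84 constants; the full chain in [77] also involves body-fixed-to-NED rotations, which are omitted here.

```python
# Standard WGS84 geodetic-to-ECEF conversion (one link of the chain in [77]).
import math

WGS84_A = 6378137.0            # semi-major axis (m)
WGS84_E2 = 6.69437999014e-3    # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude/altitude to ECEF x, y, z in meters."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z
```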
Sudhakar et al. [79] proposed a method for forest fire detection with UAVs equipped with an optical and an infrared camera: they used a LAB color model and a motion-based algorithm, followed by a maximally stable extremal regions (MSER) extraction module, and the extracted fire detections were combined with landscape information and meteorological data for improved presentation. In [80], two types of UAVs were used, a fixed-wing drone and a rotary-wing drone, both equipped with optical and thermal cameras. As soon as the fixed-wing drone detects a fire, the rotary-wing drone flies at a much lower altitude (10 to 350 m, compared to 350 to 5500 m for the fixed-wing UAV), thus obtaining better and more detailed visibility of the area and reducing false alarms through a neural network. Chen et al. [81] used optical and infrared sensors and their data to train a CNN, first for smoke detection and then for flame detection. In [82], the authors developed a system consisting of a central station and several aerial vehicles equipped with infrared or visual cameras, aiming to increase the coverage area; for fire detection, they applied a threshold for fire segmentation and then performed color and fire contour analysis.

2.2.2. Deep Learning Methods

Zhao et al. [83] used a UAV equipped with GPS and deployed a saliency detection algorithm for the localization and segmentation of the fire area in aerial images; a 15-layer deep convolutional neural network was then employed for both low- and high-level fire feature extraction and classification. Tang et al. [84] captured 4K data using a ZenMuse XT2 dual vision sensor and applied an adaptive sub-region select block to detect fire candidate areas in 4K-resolution images, followed by a YOLOv3 backbone architecture for fire detection. Jiao et al. [85] used a UAV to capture and transmit images to the ground station in real time, deploying a YOLOv3 network for fire detection. They also deployed a UAV equipped with a visible and an infrared camera for image acquisition [86]: the onboard computer carried by the UAV performs local image processing and mission planning through a YOLOv3-tiny architecture, while a ground station receives images and location information of fire spots, contributing to the detection of forest fires and providing operational commands to the UAV for path planning and re-planning. Integrating fog computing and CNNs, Srinivas and Dua [87] employed a UAV to reduce false alarm rates in early-stage forest fire detection. More recently, Barmpoutis et al. [88,89] mounted an optical 360-degree complementary metal-oxide-semiconductor (CMOS) camera on a UAV to capture images with an unlimited field of view: they converted the equirectangular raw data to cubemap and stereographic projections, respectively, and then used deep neural networks exploiting fire dynamic textures to reduce false alarms caused by clouds, sunlight reflections, and fire/smoke-colored objects. Experimental results demonstrate the great potential of the proposed system for both flame and smoke detection.

2.3. Spaceborne (Satellite) Systems

Recently, mainly due to the large number of satellites launched and the decrease in associated costs, many research efforts have targeted forest wildfire detection from satellite images. A range of satellites has been designed specifically for Earth observation (EO, e.g., environmental monitoring or meteorology). Depending on their orbit, satellites can be broadly classified into various categories, each having its advantages and disadvantages. The most important categories include: (a) the geostationary orbit (GEO), a circular orbit with an altitude of 35,786 km and zero inclination, so that the satellite does not move relative to the ground and provides a constant view of the same surface area; (b) the low Earth orbit (LEO), with an altitude of 2000 km or less, which requires the lowest amount of energy for satellite placement and provides high bandwidth and low communication latency; and (c) the polar sun-synchronous orbit (SSO), a nearly polar orbit that passes over the equator at the same local time on every pass. Most EO satellites are in specific low Earth polar SSO orbits, whose altitude and inclination are precisely calculated so that the satellite always observes the same scene with the same angle of solar illumination, so shadows appear the same on every pass.
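The quoted geostationary altitude follows directly from Kepler's third law: a circular orbit is geostationary when its period equals one sidereal day, giving an orbital radius a = (μT²/4π²)^(1/3). A short check:

```python
# Verifying the 35,786 km geostationary altitude from Kepler's third law.
import math

MU_EARTH = 3.986004418e14      # Earth's gravitational parameter (m^3/s^2)
SIDEREAL_DAY = 86164.0905      # seconds
EARTH_RADIUS = 6378137.0       # equatorial radius (m)

radius = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
print(f"{(radius - EARTH_RADIUS) / 1000:.0f} km")   # ~35786 km
```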
Data from sun-synchronous satellites have high spatial resolution but low temporal resolution, whereas geostationary satellites have high temporal resolution but low spatial resolution. Some satellites of the first category, like the Landsat or Sentinel satellites, have a long revisit time (an eight-day repeat cycle for Landsat-7/8 and approximately 2–3 days at mid-latitudes for Sentinel-2A/2B). Hence, they are unsuitable for real-time active forest fire detection and are useful only for less time-sensitive tasks, e.g., burnt area estimation; studies based on them therefore fall outside the scope of this paper.

2.3.1. Fire and Smoke Detection from Sun-Synchronous Satellites

Imaging sensors in sun-synchronous satellites include three multispectral imagers whose data have been widely used for wildfire detection, namely the advanced very-high-resolution radiometer (AVHRR) [90], the moderate resolution imaging spectroradiometer (MODIS) [91], and the visible infrared imaging radiometer suite (VIIRS) [92,93]. The AVHRR/3 is a multipurpose imaging instrument that measures the reflectance of the Earth and has been used for global monitoring of cloud cover, sea surface temperature, ice, snow, and vegetation cover characteristics [90]. AVHRR instruments are or have been carried by the National Oceanic and Atmospheric Administration (NOAA) family of polar-orbiting platforms (polar-orbiting operational environmental satellites, POES) and the European Meteorological Operational (MetOp) satellites. The instrument provides six channels, three in the visible/near-infrared region and three thermal infrared channels, with 1 km spatial resolution. MODIS, onboard the National Aeronautics and Space Administration (NASA) EO Terra and Aqua satellites, which have a revisit time of 1–2 days, captures data in 36 spectral bands ranging in wavelength from 0.4 to 14.4 μm and at varying spatial resolutions (2 bands at 250 m, 5 bands at 500 m, and 29 bands at 1 km). MODIS was succeeded by the VIIRS instrument onboard the Suomi National Polar-orbiting Partnership (NPP) and NOAA-20 weather satellites, which provides 22 spectral bands, i.e., 16 moderate-resolution bands (M-bands, 750 m), 5 imaging-resolution bands (I-bands, 375 m), and 1 day/night panchromatic band (750 m).

Traditional Methods

These imaging sensors have been extensively applied to near-real-time wildfire detection. For instance, in [94], Sayad et al. combined big data, remote sensing, and data mining algorithms (an artificial neural network and an SVM) to process large volumes of MODIS imagery and extract insights for predicting the occurrence of wildfires. More specifically, they used pre-processed MODIS data to create a dataset based on three parameters related to the state of the crops: the normalized difference vegetation index (NDVI), land surface temperature (LST), and thermal anomalies. For wildfire prediction, they used two supervised classification approaches based on neural networks and SVMs, achieving good prediction accuracies of 98.32% and 97.48%, respectively. Results were assessed using several validation strategies (e.g., classification metrics, cross-validation, and regularization) and comparisons with other wildfire prediction models, demonstrating the efficiency of the model in predicting the occurrence of wildfires.
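Of the three inputs used in [94], NDVI is the simplest to reproduce: it is a normalized ratio of near-infrared and red reflectance (MODIS bands 2 and 1, respectively). A minimal sketch follows; the epsilon guard against division by zero is an implementation detail added here.

```python
# NDVI from red and near-infrared reflectance arrays (MODIS bands 1 and 2).
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```

Low NDVI (dry or sparse vegetation) combined with high LST and thermal anomalies forms the feature vector fed to the classifiers in [94].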
Several papers deal with the problem of smoke detection based on MODIS data, which is very challenging given the strong similarity of smoke to clouds, haze, and other such phenomena. Shukla et al. [95] proposed an algorithm for the automatic detection of smoke using MODIS data, based on a multiband thresholding technique for discriminating between smoke plumes and clouds; results suggested that the algorithm was able to isolate smoke pixels in the presence of other scene types, such as clouds, although it performed better in identifying fresh dense smoke than highly diffused smoke. Similarly, Li et al. [96] proposed an approach to automatically separate smoke plumes from clouds and background by analyzing MODIS data, improving on an earlier approach proposed by Li et al. [97] for the AVHRR sensor through spectral analysis among smoke, cloud, and the underlying surface. Specifically, a multi-threshold method was used to extract training sample sets for a back-propagation neural network (BPNN) discriminating between three classes: smoke, cloud, and underlying surface. Results using MODIS data from several forest fires that occurred in different places and on different dates were satisfactory. The advantages of the algorithm include its ability to detect smoke plumes in different seasons using seasonal training data sets, as well as its quantitative and continuous outputs for smoke and other objects.
Many researchers have used active fire products derived from these sensors to assess other proposed algorithms. Hally et al. [98] examined the performance of a threshold algorithm against commonly used products, such as the VIIRS active fire product, to determine the completeness of anomaly capture. Specifically, the study considered two commonly used active fire products: the MODIS Collection 6 (MOD/MYD14) 1 km active fire product, as outlined in Giglio et al. [99], and the VIIRS 375 m (VNP14IMG) active fire product described in Schroeder et al. [93]. In both cases, the geographic position of the detected hotspots, as well as the time of satellite overpass, were used. Also, Wickramasinghe et al. [100] compared the advanced Himawari imager fire surveillance algorithm (AHI-FSA) across the Northern Territory of Australia (1.4 million km2) over ten days with the well-established MODIS and VIIRS active fire products.
Finally, the Chinese HuanJing sun-synchronous satellites ("HuanJing" means "environment" in Chinese) are designed for disaster and environmental monitoring and are capable of visible, infrared, multi-spectral, and synthetic aperture radar imaging. Lin et al. [101] presented a spatio–temporal model (STM)-based forest fire detection method that uses multiple images of the inspected scene from the Huanjing-1B satellite. A comparison of detection results demonstrated that the proposed algorithm usefully represents the spatio–temporal information contained in multi-temporal remotely sensed data.

Deep Learning Methods

Deep learning methods have also recently been applied to fire and smoke detection from multispectral satellite images. Ba et al. [102] presented a new large-scale satellite imagery dataset based on MODIS data, namely USTC_SmokeRS, consisting of 6225 satellite images from six classes (i.e., cloud, dust, haze, land, seaside, and smoke) and covering various regions of the world. Using this dataset, they evaluated several state-of-the-art deep learning based image classification models for smoke detection and proposed SmokeNet, a new CNN model that incorporates spatial- and channel-wise attention to enhance feature representation for scene classification. Also, Priya et al. [103] used a dataset of 534 RGB satellite images from different sources, including MODIS images from the NASA Worldview platform and Google; an Inception-v3 CNN with transfer learning was used for fire/non-fire image classification, and the fire regions were then extracted based on thresholding and local binary patterns.

2.3.2. Fire and Smoke Detection from Geostationary Satellites

Regarding satellite imagery from geostationary satellites, important work on fire and smoke detection has already been performed using the advanced Himawari imager (AHI) sensor of the Himawari-8 weather satellite. Himawari-8 belongs to a new generation of Japanese geostationary weather satellites operated by the Japan Meteorological Agency, and its AHI sensor has significantly higher radiometric, spectral, and spatial resolution than its predecessors.
Regarding Europe and the US, two additional sensors installed on geostationary satellites are the European Space Agency (ESA) spinning enhanced visible and infrared imager (SEVIRI) onboard the Meteosat second generation (MSG) satellite series, and the NASA geostationary operational environmental satellite (GOES)-16 advanced baseline imager (ABI). The MSG-SEVIRI geostationary sensor is a 50 cm diameter aperture, line-by-line scanning radiometer that provides image data in 12 spectral channels (four visible and near-infrared (NIR) channels, including a broadband high-resolution (1 km) visible channel, and eight thermal IR channels with a resolution of 3 km) with a baseline repeat cycle of 15 min. GOES-16 is the first of the GOES-R series of geostationary operational environmental satellites operated by NASA and NOAA; its primary instrument, the advanced baseline imager (ABI), provides high spatial and temporal resolution imagery of the Earth through 16 spectral bands at visible and infrared wavelengths.

Traditional Methods

Hally et al. [98] extended previous work by the same authors [104] using AHI sensor data from the Himawari geostationary satellite in a multi-temporal method of background temperature estimation known as broad area training (BAT). This method involves a two-step process for geostationary data: a preprocessing step, in which AHI Band 7 images are aggregated, and a fitting step, in which the spatially aggregated data are used for individual pixel fitting through a singular value decomposition (SVD) process. The per-pixel fittings can then be compared to the raw brightness temperature measured by the satellite sensor to identify thermal anomalies such as those caused by an active fire; results compared favorably to active fire products produced from low Earth orbit satellite data during the period of study. Fatkhuroyan et al. [105] performed a study of data from fires in the Sumatera and Kalimantan regions in August, September, and October 2015 and concluded that smoke detection and monitoring is feasible using pseudo-RGB images consisting of one visual channel and two near-infrared channels; a limitation revealed by this study is that Himawari-8/AHI is a passive sensor that is highly dependent on the reflection of solar radiance, so it can only monitor forest fires during the daytime. Xu et al. [106] investigated the feasibility of extracting real-time information about the spatial extent of wildfires using the Himawari-8 satellite. Their algorithm is based on previous work using the MODIS sensor: it first identifies possible hotspots and then eliminates false alarms by applying certain thresholds, similar to Giglio et al. [99]; further false alarms are rejected by cloud, water, and coast tests based on additional bands and comparison with neighboring pixels. Results demonstrated that the fire detection is robust to smoke and moderate cloud obscuration and sensitive enough for the early detection of wildfires.
Typically, only temporal fire detection algorithms, which detect fire by analyzing multi-temporal changes of brightness temperature (BT), are used with geostationary orbital sensors. Polar-orbiting platforms, on the other hand, use spatial fire detection algorithms, commonly classified as either "fixed-threshold" or "contextual". Aiming to combine the merits of both approaches, Xie et al. [107] presented a spatio–temporal contextual model (STCM) that fully exploits the spatial and temporal dimensions of geostationary data, using data from the Himawari-8 satellite. They applied an improved robust fitting algorithm to model each pixel's diurnal temperature cycle (DTC) in the middle and long infrared bands, and, for each pixel, a Kalman filter was used to blend the DTC and estimate the true background brightness temperature.
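The background-estimation idea can be illustrated with a scalar Kalman filter in which the prediction follows the modeled DTC and each new observation updates the estimate; the noise variances below are illustrative assumptions, and the robust exclusion of fire-contaminated observations used in [107] is omitted.

```python
# Hypothetical scalar Kalman tracker of background brightness temperature
# (BT); q, r, and the DTC model values are illustrative assumptions.
import numpy as np

def background_bt(observed_bt, dtc_bt, q=0.05, r=0.5):
    """Blend a modeled diurnal cycle with observations of one pixel's BT."""
    estimate, p = float(dtc_bt[0]), 1.0
    prev_dtc, filtered = float(dtc_bt[0]), []
    for z, dtc in zip(observed_bt, dtc_bt):
        estimate += dtc - prev_dtc          # predict: follow the DTC's shape
        p += q                              # prediction uncertainty grows
        gain = p / (p + r)                  # Kalman gain
        estimate += gain * (z - estimate)   # update with the observation
        p *= 1.0 - gain
        filtered.append(estimate)
        prev_dtc = dtc
    return np.array(filtered)
```

A pixel whose observed BT exceeds the filtered background by a sufficient margin is then flagged as a thermal anomaly, i.e., a candidate fire.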
Significant research results have also been produced using data from the GOES ABI and MSG SEVIRI instruments. A multi-temporal change-detection technique using data from the MSG-SEVIRI sensor, namely robust satellite techniques for fire detection and monitoring (RST-FIRES), proposed by Filizzola et al. [108], was seen to be very efficient for the timely detection of even small or short-lived fire incidents. Furthermore, Di Biase et al. [109] updated the satellite fire detection (SFIDE) algorithm, previously proposed by the same authors [110], to reduce the false alarm rate; specifically, they improved the estimation of the reference temperature used to define a fire pixel and the accuracy of the cloud mask, and exploited the high refresh rate of the images to implement several tests for more accurate detection of forest fires. In Hall et al. [111], both satellite imaging sensors (GOES ABI and MSG SEVIRI) were found to be very efficient in detecting active fire incidents; additional sensing systems providing such broad spatial coverage and favoring improved consistency of geostationary satellite fire data across regions could further improve performance. Reference fire data were derived from Landsat-8 Operational Land Imager (OLI) 30 m imagery using the Schroeder et al. [112] Landsat-8 OLI automated fire detection algorithm.

Deep Learning Methods

Very recently, Larsen et al. [113] presented a deep FCN for predicting fire smoke in satellite imagery in near-real-time (NRT) using data from the Himawari-8/AHI sensor. Also, Phan et al. [114] proposed a novel remote wildfire detection framework based on GOES-16 sensor imagery, comprising three distinct stages: (i) a streaming data processing component to purify raw image data and examine it for regions of interest, (ii) an early wildfire prediction component using deep learning architectures to capture spatial and spectral patterns for more accurate and robust detection, and (iii) a streaming data visualization dashboard for potential wildfire incidents.

2.3.3. Fire and Smoke Detection Using CubeSats

A recent trend in remote sensing from satellites is the use of "CubeSats", i.e., miniaturized satellites that typically weigh between 1 and 10 kg and follow the popular CubeSat standard [115], which defines the outer dimensions of the satellite as multiples of a cubic unit of 10 × 10 × 10 cm and can accommodate small technology payloads for scientific research, commercial functions, or the exploration of new space technologies. Technically, it is easier for CubeSats to operate in the LEO zone due to their unique characteristics; more than 1100 CubeSats had been successfully launched by universities and companies worldwide as of January 2020.
The latest advances in satellite imagery have allowed CubeSats to rapidly discover wildfires. Barschke et al. [116] described a nanosatellite called TUBIN (Technische Universität Berlin Infrared Nanosatellite), designed to validate an infrared microbolometer for wildfire remote sensing. Another constellation of four 6U CubeSats for monitoring forest fires and other natural disasters was proposed for Africa and Asia [117]; its payload is an optical sensor with three spectral bands (green, red, and near-infrared) and a revisit time of 72 h. Shah et al. [118] proposed a system consisting of a constellation of nanosatellites equipped with multi-spectral visible-to-infrared (IR) cameras and a ground station, which will allow all surface points on the planet to be revisited at least once an hour. Capturing a surface location with high resolution in MWIR and LWIR allows for the precise estimation of the thermal output of the surface; simulations indicated that a fire of about four hundred square meters can be detected using a payload of a multispectral IR camera measuring incident power in two thermal infrared bands (mid-wave and long-wave). The system will use onboard data processing, enabling an early wildfire warning within 30 min and minimizing bandwidth requirements; additionally, compressed raw images can be transmitted to the ground station to provide global thermal data updated every 90 min. The first satellite was planned for launch in late 2020, with the data to be made available for research purposes.

3. Discussion and Conclusions

Three categories of early fire and smoke detection systems have been thoroughly analyzed and compared (Table 1) in this literature review, namely terrestrial, unmanned aerial vehicle, and satellite-based systems. In general, terrestrial systems tend to be the most efficient in terms of accuracy and response time to wildfire incidents. Furthermore, these systems offer high spatial resolution, depending on the camera resolution and the viewing angle/distance; however, their coverage is limited compared to the other two solutions, as the cameras are placed in fixed positions and additional limitations may apply (e.g., occlusions).
On the other hand, aerial-based systems have recently gained considerable attention due to the rapid development of UAV technology. Such systems provide a broader and more accurate perception of the fire, even in regions that are inaccessible or considered too dangerous for fire-fighting crews. In addition, UAVs can cover wide areas and are flexible, in the sense that they can monitor different areas as needed. Recent technological achievements have led to better cameras offering high spatial resolution and a wider field of view, as well as better battery autonomy. The latest UAVs are equipped with both visible and infrared cameras, improving detection accuracy and allowing night operation; however, they are affected by weather conditions and, in many cases, their flight time is limited.
Finally, Earth observation satellite systems have been used successfully for wildfire detection, mainly due to their large-scale coverage. The majority of satellites providing Earth imagery are either geostationary or in near-polar sun-synchronous orbits and carry multispectral imaging sensors. Sun-synchronous satellites provide data with high spatial resolution but low temporal resolution, while geostationary satellites offer high temporal resolution but low spatial resolution. More recently, advances in nanomaterials and microelectronics have allowed the use of tiny low-Earth-orbiting satellites, known as CubeSats. CubeSats have significant advantages over traditional satellites for smoke and fire detection, since they are more effective in terms of cost, temporal resolution/response time, and coverage. In addition, they are smaller than traditional satellites and need less time to be put into orbit; however, one issue that needs to be tackled is their poor ability to transmit large amounts of data to the ground. From the above analysis, it is clear that each category has its advantages and disadvantages. To this end, recent research efforts on wildfire detection [127,128,129] focus either on the combination of these technologies or on the use of additional input sources, such as crowdsourcing, social media, and weather forecasting.
It is worth mentioning that most of the institutions and agencies aiming to support wildfire management at the national and regional level use either satellites or combine them with a small fleet of planes to detect and map the extent, spread, and impact of forest fires [130,131]. Furthermore, various organizations have installed network-connected optical cameras in or near wildland areas sharing live images on the web to assist early forest fire detection [132].
A detailed comparison between the three categories of early fire detection systems, in terms of performance (Accuracy), number of research papers (Volume of works), potential for future improvement (Future potential), minimum fire size that can be detected (Minimum fire size), monitoring area covered by the system (Coverage area), and time needed for early fire detection (Response time), on a scale from 0 (low) to 5 (high), is shown in Figure 3. Most of the literature shows that terrestrial systems have been extensively studied, achieving high accuracy rates and fast response times, despite the limited coverage area that they offer. Large networks of ground sensors can be deployed to increase the coverage area; however, in this case, a trade-off between the number of sensors, cost, and complexity is required. On the other hand, aerial and satellite-based systems provide better coverage and have already shown great potential in terms of accuracy rates and response times.
Terrestrial and aerial-based systems can also detect fires at a very early stage, depending on their distance from the fire and their spatial resolution, with short latency times [132]. In contrast, for satellite-based systems, the latency time and minimum detectable fire size are expected to improve in the coming years. Currently, imaging sensors on sun-synchronous satellites, such as MODIS, can detect very small fires (as small as 50 m2) under near-ideal observation conditions and fires of average size 30 × 30 m (900 m2) under a variety of conditions [133]. Regarding latency, MODIS fire products are produced and delivered to fire management partners in near-real-time (within 2–4 h of when MODIS collected the observations) [133]. On the other hand, geostationary sensors, like the Himawari-8 AHI, can provide observations every 10–30 min, making them ideal for near-real-time fire surveillance, at the cost of lower spatial resolution (2 km pixel size, which can be reduced to 500 m) [100]. Furthermore, the creation of new datasets for training novel deep learning algorithms (e.g., super-resolution), as well as advances in transmission technologies, will further contribute in this direction.
Similarly, Figure 4 presents selected attributes for each of the three sensor types: visible, infrared, and multispectral. Although the research community has thoroughly exploited optical-based systems, multispectral approaches seem to achieve better accuracy rates due to the complementary information provided by different spectral bands; however, the use of multispectral technology significantly increases the overall cost of the system, which explains the extensive use of low-cost optical sensors in the literature. Nevertheless, the wider use of multispectral sensors in different systems is expected to further improve the performance of early wildfire detection systems. To this end, extensive research is still needed on systems that integrate multimodal sensing technologies with advanced deep learning algorithms.
Furthermore, to explore the evolution of the forest fire detection research domain, we carried out a bibliometric analysis. The initial search yielded over 2024 papers related to forest fire detection published in the Web of Science (WoS) database [134]. Figure 5 shows the trend in the number of articles published between 1 January 1990 and 31 October 2020. Narrowing the results to the imaging research area alone, the search yielded 697 published articles (Figure 6). These results show a steady growth in publications over the last 30 years. Of these articles, 378 concern forest fire detection based on terrestrial systems, 59 on aerial systems, and 260 on satellite systems (Figure 7). The results of Figure 7 indicate that the field of forest fire detection in the imaging research area is still evolving for terrestrial, aerial, and satellite-based systems alike.
Subsequently, to identify the papers and corresponding systems that receive the most citations, we performed a quantitative citation analysis. In this analysis (Figure 8), we found that papers related to forest fire detection in the imaging satellite-based research area receive the most citations, followed by terrestrial and aerial-based papers. Finally, an analysis of the funding agencies of the aforementioned papers was performed. Figure 9 shows ten of the organizations and agencies that were the major sponsors of these papers. More specifically, the National Natural Science Foundation of China (NSFC) has funded 42 papers related to forest fire detection in the imaging research area, while the National Aeronautics and Space Administration (NASA) has funded 37 papers. In addition, organizations and agencies from Canada, the European Union, France, the USA, and the UK have funded more than 79 papers in the field of imaging-based forest fire detection. Similarly, in a second analysis, the authors' countries of affiliation were mapped (Figure 10): over 25 percent and 15 percent of the authors are affiliated with organizations based in the USA and China, respectively.

Author Contributions

Conceptualization, P.B., P.P., K.D., and N.G.; formal analysis, P.B., P.P., K.D., and N.G.; writing—original draft preparation, P.B. and P.P.; writing—review and editing, K.D. and N.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Greece and the European Union, projects i-FORESTER (“Reinforcement of Postdoctoral Researchers—2nd Cycle”) and eOUTLAND (“INTERREG V-A COOPERATION PROGRAMME Greece-Bulgaria 2014–2020”, grant number 1672).

Acknowledgments

This research is co-financed by Greece and the European Union (European Social Fund—ESF) through the Operational Programme “Human Resources Development, Education and Lifelong Learning” in the context of the call “Reinforcement of Postdoctoral Researchers—2nd Cycle” (MIS 5033021) for the project "i-FORESTER: Intelligent system for FOREST firE suRveillance", implemented by the State Scholarships Foundation (ΙΚΥ). Periklis Papaioannou, Kosmas Dimitropoulos, and Nikos Grammalidis have received funding from INTERREG V-A COOPERATION PROGRAMME Greece-Bulgaria 2014–2020 project “eOUTLAND: Protecting biodiversity at NATURA 2000 sites and other protected areas from natural hazards through a certified framework for cross-border education, training, and support of civil protection volunteers based on innovation and new technologies”.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ABI: Advanced Baseline Imager
ADF: Adaptive Decision Fusion
AHI: Advanced Himawari Imager
AVHRR: Advanced Very-High-Resolution Radiometer
BAT: Broad Area Training
BPNN: Back-Propagation Neural Network
BT: Brightness Temperature
CCD: Charge-Coupled Device
CMOS: Complementary Metal-Oxide-Semiconductor
CNN: Convolutional Neural Network
DC: Deep Convolutional
DL: Deep Learning
DTC: Diurnal Temperature Cycle
ECEF: Earth-Centered Earth-Fixed
EO: Earth Observation
ESA: European Space Agency
FCN: Fully Convolutional Network
GAN: Generative Adversarial Network
GEO: Geostationary Orbit
GOES: Geostationary Operational Environmental Satellite
GPS: Global Positioning System
h-LDS: higher-order LDS
HMM: Hidden Markov Model
HOF: Histograms of Optical Flow
HOG: Histograms of Oriented Gradients
HoGP: Histograms of Grassmannian Points
IMU: Inertial Measurement Unit
IoMT: Internet of Multimedia Things
IR: InfraRed
LDS: Linear Dynamical System
LEO: Low Earth Orbit
LST: Land Surface Temperature
LSTM: Long Short-Term Memory
LWIR: Long Wavelength InfraRed
MetOp: Meteorological Operational
MODIS: Moderate Resolution Imaging Spectroradiometer
MR: Mixed Reality
MSER: Maximally Stable Extremal Regions
MSG: Meteosat Second Generation
MWIR: Middle Wavelength InfraRed
NDVI: Normalized Difference Vegetation Index
NED: North-East-Down
NIR: Near-InfraRed
NOAA: National Oceanic and Atmospheric Administration
NPP: National Polar-orbiting Partnership
NRT: Near-Real-Time
OLI: Operational Land Imager
PISA: Pixel-wise Image Saliency Aggregating
POES: Polar-orbiting Operational Environmental Satellite
R-CNN: Region-Based Convolutional Neural Network
ReLU: Rectified Linear Unit
RST: Robust Satellite Techniques
SEVIRI: Spinning Enhanced Visible and Infrared Imager
SFIDE: Satellite Fire Detection
sh-LDS: stabilized h-LDS
SLAM: Simultaneous Localization and Mapping
SroFs: Suspected Regions of Fire
SSO: Sun-Synchronous Orbit
STCM: Spatio–Temporal Contextual Model
STM: Spatio–Temporal Model
SVD: Singular Value Decomposition
SVM: Support Vector Machine
UAV: Unmanned Aerial Vehicle
VIIRS: Visible Infrared Imaging Radiometer Suite
WoS: Web of Science
WSN: Wireless Sensor Network

References

  1. Tanase, M.A.; Aponte, C.; Mermoz, S.; Bouvet, A.; Le Toan, T.; Heurich, M. Detection of windthrows and insect outbreaks by L-band SAR: A case study in the Bavarian Forest National Park. Remote Sens. Environ. 2018, 209, 700–711. [Google Scholar] [CrossRef]
  2. Pradhan, B.; Suliman, M.D.H.B.; Awang, M.A.B. Forest fire susceptibility and risk mapping using remote sensing and geographical information systems (GIS). Disaster Prev. Manag. Int. J. 2007, 16. [Google Scholar] [CrossRef]
  3. Kresek, R. History of the Osborne Firefinder. 2007. Available online: http://nysforestrangers.com/archives/osborne%20firefinder%20by%20kresek.pdf (accessed on 7 September 2020).
  4. Bouabdellah, K.; Noureddine, H.; Larbi, S. Using wireless sensor networks for reliable forest fires detection. Procedia Comput. Sci. 2013, 19, 794–801.
  5. Gaur, A.; Singh, A.; Kumar, A.; Kulkarni, K.S.; Lala, S.; Kapoor, K.; Srivastava, V.; Kumar, A.; Mukhopadhyay, S.C. Fire sensing technologies: A review. IEEE Sens. J. 2019, 19, 3191–3202.
  6. Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video Flame and Smoke Based Fire Detection Algorithms: A Literature Review. Fire Technol. 2020, 56, 1943–1980.
  7. Kaabi, R.; Frizzi, S.; Bouchouicha, M.; Fnaiech, F.; Moreau, E. Video smoke detection review: State of the art of smoke detection in visible and IR range. In Proceedings of the 2017 International Conference on Smart, Monitored and Controlled Cities (SM2C), Kerkennah-Sfax, Tunisia, 17 February 2017; pp. 81–86.
  8. Garg, S.; Verma, A.A. Review Survey on Smoke Detection. Imp. J. Interdiscip. Res. 2016, 2, 935–939.
  9. Memane, S.E.; Kulkarni, V.S. A review on flame and smoke detection techniques in video’s. Int. J. Adv. Res. Electr. Electr. Instrum. Eng. 2015, 4, 885–889.
  10. Bu, F.; Gharajeh, M.S. Intelligent and vision-based fire detection systems: A survey. Image Vis. Comput. 2019, 91, 103803.
  11. Yuan, C.; Zhang, Y.; Liu, Z. A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques. Can. J. For. Res. 2015, 45, 783–792.
  12. Allison, R.S.; Johnston, J.M.; Craig, G.; Jennings, S. Airborne optical and thermal remote sensing for wildfire detection and monitoring. Sensors 2016, 16, 1310.
  13. Nixon, M.; Aguado, A. Feature Extraction and Image Processing for Computer Vision; Academic Press: Cambridge, MA, USA, 2019.
  14. Mather, P.; Tso, B. Classification Methods for Remotely Sensed Data; CRC Press: Boca Raton, FL, USA, 2016.
  15. Çetin, A.E.; Dimitropoulos, K.; Gouverneur, B.; Grammalidis, N.; Günay, O.; Habiboǧlu, Y.H.; Töreyin, B.U.; Verstockt, S. Video fire detection–review. Dig. Signal Process. 2013, 23, 1827–1843.
  16. Töreyin, B.U.; Cinbis, R.G.; Dedeoglu, Y.; Cetin, A.E. Fire detection in infrared video using wavelet analysis. Opt. Eng. 2007, 46, 067204.
  17. Cappellini, Y.; Mattii, L.; Mecocci, A. An Intelligent System for Automatic Fire Detection in Forests. In Recent Issues in Pattern Analysis and Recognition; Springer: Berlin/Heidelberg, Germany, 1989; pp. 563–570.
  18. Chen, T.H.; Wu, P.H.; Chiou, Y.C. An early fire-detection method based on image processing. In Proceedings of the 2004 International Conference on Image Processing (ICIP 04), Singapore, 24–27 October 2004; Volume 3, pp. 1707–1710.
  19. Dimitropoulos, K.; Gunay, O.; Kose, K.; Erden, F.; Chaabene, F.; Tsalakanidou, F.; Grammalidis, N.; Çetin, E. Flame detection for video-based early fire warning for the protection of cultural heritage. In Proceedings of the Euro-Mediterranean Conference, Limassol, Cyprus, 29 October–3 November 2012; Springer: Berlin/Heidelberg, Germany; pp. 378–387.
  20. Celik, T.; Demirel, H. Fire detection in video sequences using a generic color model. Fire Saf. J. 2009, 44, 147–158.
  21. Celik, T. Fast and efficient method for fire detection using image processing. ETRI J. 2010, 32, 881–890.
  22. Marbach, G.; Loepfe, M.; Brupbacher, T. An image processing technique for fire detection in video images. Fire Saf. J. 2006, 41, 285–289.
  23. Kim, D.; Wang, Y.F. Smoke detection in video. In Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering, Los Angeles, CA, USA, 31 March–2 April 2009; Volume 5, pp. 759–763.
  24. Yamagishi, H.; Yamaguchi, J. Fire flame detection algorithm using a color camera. In Proceedings of the MHS’99, 1999 International Symposium on Micromechatronics and Human Science (Cat. No. 99TH8478), Nagoya, Japan, 23–26 November 1999; pp. 255–260.
  25. Dimitropoulos, K.; Tsalakanidou, F.; Grammalidis, N. Flame detection for video-based early fire warning systems and 3D visualization of fire propagation. In Proceedings of the 13th IASTED International Conference on Computer Graphics and Imaging (CGIM 2012), Crete, Greece, 18–20 June 2012; Available online: https://zenodo.org/record/1218#.X6qSVmj7Sbg (accessed on 10 November 2020).
  26. Zhang, Z.; Shen, T.; Zou, J. An improved probabilistic approach for fire detection in videos. Fire Technol. 2014, 50, 745–752.
  27. Avgerinakis, K.; Briassouli, A.; Kompatsiaris, I. Smoke detection using temporal HOGHOF descriptors and energy colour statistics from video. In Proceedings of the International Workshop on Multi-Sensor Systems and Networks for Fire Detection and Management, Antalya, Turkey, 8–9 November 2012.
  28. Mueller, M.; Karasev, P.; Kolesov, I.; Tannenbaum, A. Optical flow estimation for flame detection in videos. IEEE Trans. Image Process. 2013, 22, 2786–2797.
  29. Chamberlin, D.S.; Rose, A. The First Symposium (International) on Combustion. Combust. Inst. Pittsburgh 1965, 1965, 27–32.
  30. Günay, O.; Taşdemir, K.; Töreyin, B.U.; Çetin, A.E. Fire detection in video using LMS based active learning. Fire Technol. 2010, 46, 551–577.
  31. Teng, Z.; Kim, J.H.; Kang, D.J. Fire detection based on hidden Markov models. Int. J. Control Autom. Syst. 2010, 8, 822–830.
  32. Töreyin, B.U. Smoke detection in compressed video. In Applications of Digital Image Processing XLI; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10752, p. 1075232.
  33. Savcı, M.M.; Yıldırım, Y.; Saygılı, G.; Töreyin, B.U. Fire detection in H. 264 compressed video. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8310–8314.
  34. Chen, J.; He, Y.; Wang, J. Multi-feature fusion based fast video flame detection. Build. Environ. 2010, 45, 1113–1122.
  35. Töreyin, B.U.; Dedeoğlu, Y.; Güdükbay, U.; Cetin, A.E. Computer vision based method for real-time fire and flame detection. Pattern Recognit. Lett. 2006, 27, 49–58.
  36. Barmpoutis, P.; Dimitropoulos, K.; Grammalidis, N. Real time video fire detection using spatio-temporal consistency energy. In Proceedings of the 10th IEEE International Conference on Advanced Video and Signal Based Surveillance, Krakow, Poland, 27–30 August 2013; pp. 365–370.
  37. Arrue, B.C.; Ollero, A.; De Dios, J.M. An intelligent system for false alarm reduction in infrared forest-fire detection. IEEE Intell. Syst. Their Appl. 2000, 15, 64–73.
  38. Grammalidis, N.; Cetin, E.; Dimitropoulos, K.; Tsalakanidou, F.; Kose, K.; Gunay, O.; Gouverneur, B.; Torri, D.; Kuruoglu, E.; Tozzi, S.; et al. A Multi-Sensor Network for the Protection of Cultural Heritage. In Proceedings of the 19th European Signal Processing Conference, Barcelona, Spain, 29 August–2 September 2011.
  39. Bosch, I.; Serrano, A.; Vergara, L. Multisensor network system for wildfire detection using infrared image processing. Sci. World J. 2013, 2013, 402196.
  40. Barmpoutis, P.; Dimitropoulos, K.; Grammalidis, N. Smoke detection using spatio-temporal analysis, motion modeling and dynamic texture recognition. In Proceedings of the 22nd European Signal Processing Conference, Lisbon, Portugal, 1–5 September 2014; pp. 1078–1082.
  41. Dimitropoulos, K.; Barmpoutis, P.; Grammalidis, N. Spatio-temporal flame modeling and dynamic texture analysis for automatic video-based fire detection. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 339–351.
  42. Prema, C.E.; Vinsley, S.S.; Suresh, S. Efficient flame detection based on static and dynamic texture analysis in forest fire detection. Fire Technol. 2018, 54, 255–288.
  43. Dimitropoulos, K.; Barmpoutis, P.; Grammalidis, N. Higher order linear dynamical systems for smoke detection in video surveillance applications. IEEE Trans. Circuits Syst. Video Technol. 2016, 27, 1143–1154.
  44. Dimitropoulos, K.; Barmpoutis, P.; Kitsikidis, A.; Grammalidis, N. Classification of multidimensional time-evolving data using histograms of grassmannian points. IEEE Trans. Circuits Syst. Video Technol. 2016, 28, 892–905.
  45. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
  46. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2012; pp. 1097–1105.
  47. Luo, Y.; Zhao, L.; Liu, P.; Huang, D. Fire smoke detection algorithm based on motion characteristic and convolutional neural networks. Multimed. Tools Appl. 2018, 77, 15075–15092.
  48. Zhao, L.; Luo, Y.M.; Luo, X.Y. Based on dynamic background update and dark channel prior of fire smoke detection algorithm. Appl. Res. Comput. 2017, 34, 957–960.
  49. Wu, X.; Lu, X.; Leung, H. An adaptive threshold deep learning method for fire and smoke detection. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 1954–1959.
  50. Sharma, J.; Granmo, O.C.; Goodwin, M.; Fidje, J.T. Deep convolutional neural networks for fire detection in images. In Proceedings of the International Conference on Engineering Applications of Neural Networks, Athens, Greece, 25–27 August 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 183–193.
  51. Zhang, Q.; Xu, J.; Xu, L.; Guo, H. Deep convolutional neural networks for forest fire detection. In 2016 International Forum on Management, Education and Information Technology Application; Atlantis Press: Paris, France, 2016.
  52. Muhammad, K.; Ahmad, J.; Mehmood, I.; Rho, S.; Baik, S.W. Convolutional neural networks based fire detection in surveillance videos. IEEE Access 2018, 6, 18174–18183.
  53. Shen, D.; Chen, X.; Nguyen, M.; Yan, W.Q. Flame detection using deep learning. In Proceedings of the 2018 4th International Conference on Control, Automation and Robotics (ICCAR), Auckland, New Zealand, 20–23 April 2018; pp. 416–420.
  54. Frizzi, S.; Kaabi, R.; Bouchouicha, M.; Ginoux, J.M.; Moreau, E.; Fnaiech, F. Convolutional neural network for video fire and smoke detection. In Proceedings of the IECON 2016-42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, Italy, 23–26 October 2016; pp. 877–882.
  55. Muhammad, K.; Ahmad, J.; Lv, Z.; Bellavista, P.; Yang, P.; Baik, S.W. Efficient deep CNN-based fire detection and localization in video surveillance applications. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 1419–1434.
  56. Muhammad, K.; Ahmad, J.; Baik, S.W. Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing 2018, 288, 30–42.
  57. Dunnings, A.J.; Breckon, T.P. Experimentally defined convolutional neural network architecture variants for non-temporal real-time fire detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 1558–1562.
  58. Sousa, M.J.; Moutinho, A.; Almeida, M. Wildfire detection using transfer learning on augmented datasets. Expert Syst. Appl. 2020, 142, 112975.
  59. Toulouse, T.; Rossi, L.; Campana, A.; Celik, T.; Akhloufi, M.A. Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf. J. 2017, 92, 188–194.
  60. Barmpoutis, P.; Dimitropoulos, K.; Kaza, K.; Grammalidis, N. Fire Detection from Images Using Faster R-CNN and Multidimensional Texture Analysis. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, 12–17 May 2019; pp. 8301–8305.
  61. Lin, G.; Zhang, Y.; Xu, G.; Zhang, Q. Smoke detection on video sequences using 3D convolutional neural networks. Fire Technol. 2019, 55, 1827–1847.
  62. Jadon, A.; Omama, M.; Varshney, A.; Ansari, M.S.; Sharma, R. FireNet: A specialized lightweight fire & smoke detection model for real-time IoT applications. arXiv 2019, arXiv:1905.11922.
  63. Zhang, Q.X.; Lin, G.H.; Zhang, Y.M.; Xu, G.; Wang, J.J. Wildland forest fire smoke detection based on faster R-CNN using synthetic smoke images. Procedia Eng. 2018, 211, 441–446.
  64. Kim, B.; Lee, J. A video-based fire detection using deep learning models. Appl. Sci. 2019, 9, 2862.
  65. Shi, L.; Long, F.; Lin, C.; Zhao, Y. Video-based fire detection with saliency detection and convolutional neural networks. In International Symposium on Neural Networks; Springer: Cham, Switzerland, 2017; pp. 299–309.
  66. Wang, K.; Lin, L.; Lu, J.; Li, C.; Shi, K. PISA: Pixelwise image saliency by aggregating complementary appearance contrast measures with edge-preserving coherence. IEEE Trans. Image Process. 2015, 9, 2115–2122.
  67. Yuan, F.; Zhang, L.; Xia, X.; Wan, B.; Huang, Q.; Li, X. Deep smoke segmentation. Neurocomputing 2019, 357, 248–260.
  68. Cheng, S.; Ma, J.; Zhang, S. Smoke detection and trend prediction method based on Deeplabv3+ and generative adversarial network. J. Electron. Imaging 2019, 28, 033006.
  69. Aslan, S.; Güdükbay, U.; Töreyin, B.U.; Çetin, A.E. Early wildfire smoke detection based on motion-based geometric image transformation and deep convolutional generative adversarial networks. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8315–8319.
  70. Hristov, G.; Raychev, J.; Kinaneva, D.; Zahariev, P. Emerging methods for early detection of forest fires using unmanned aerial vehicles and lorawan sensor networks. In Proceedings of the IEEE 28th EAEEIE Annual Conference, Hafnarfjordur, Iceland, 26–28 September 2018; pp. 1–9.
  71. Stearns, J.R.; Zahniser, M.S.; Kolb, C.E.; Sandford, B.P. Airborne infrared observations and analyses of a large forest fire. Appl. Opt. 1986, 25, 2554–2562.
  72. Den Breejen, E.; Breuers, M.; Cremer, F.; Kemp, R.; Roos, M.; Schutte, K.; De Vries, J.S. Autonomous Forest Fire Detection; ADAI-Associacao para o Desenvolvimento da Aerodinamica Industrial: Coimbra, Portugal, 1998; pp. 2003–2012.
  73. Yuan, C.; Liu, Z.; Zhang, Y. UAV-based forest fire detection and tracking using image processing techniques. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 639–643.
  74. Yuan, C.; Liu, Z.; Zhang, Y. Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance. J. Intell. Robot. Syst. 2019, 93, 337–349.
  75. Dang-Ngoc, H.; Nguyen-Trung, H. Aerial Forest Fire Surveillance-Evaluation of Forest Fire Detection Model using Aerial Videos. In Proceedings of the 2019 International Conference on Advanced Technologies for Communications (ATC), Hanoi, Vietnam, 17–19 October 2019; pp. 142–148.
  76. Yuan, C.; Liu, Z.; Zhang, Y. Aerial images-based forest fire detection for firefighting using optical remote sensing techniques and unmanned aerial vehicles. J. Intell. Robot. Syst. 2017, 88, 635–654.
  77. De Sousa, J.V.R.; Gamboa, P.V. Aerial Forest Fire Detection and Monitoring Using a Small UAV. KnE Eng. 2020, 242–256.
  78. Esfahlani, S.S. Mixed reality and remote sensing application of unmanned aerial vehicle in fire and smoke detection. J. Ind. Inf. Integr. 2019, 15, 42–49.
  79. Sudhakar, S.; Vijayakumar, V.; Kumar, C.S.; Priya, V.; Ravi, L.; Subramaniyaswamy, V. Unmanned Aerial Vehicle (UAV) based Forest Fire Detection and monitoring for reducing false alarms in forest-fires. Comput. Commun. 2020, 149, 1–16.
  80. Kinaneva, D.; Hristov, G.; Raychev, J.; Zahariev, P. Early forest fire detection using drones and artificial intelligence. In Proceedings of the 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 20–24 May 2019; pp. 1060–1065.
  81. Chen, Y.; Zhang, Y.; Xin, J.; Yi, Y.; Liu, D.; Liu, H. A UAV-based Forest Fire Detection Algorithm Using Convolutional Neural Network. In Proceedings of the IEEE 37th Chinese Control Conference, Wuhan, China, 25–27 July 2018; pp. 10305–10310.
  82. Merino, L.; Caballero, F.; Martínez-De-Dios, J.R.; Maza, I.; Ollero, A. An unmanned aircraft system for automatic forest fire monitoring and measurement. J. Intell. Robot. Syst. 2012, 65, 533–548.
  83. Zhao, Y.; Ma, J.; Li, X.; Zhang, J. Saliency detection and deep learning-based wildfire identification in UAV imagery. Sensors 2018, 18, 712.
  84. Tang, Z.; Liu, X.; Chen, H.; Hupy, J.; Yang, B. Deep Learning Based Wildfire Event Object Detection from 4K Aerial Images Acquired by UAS. AI 2020, 1, 166–179.
  85. Jiao, Z.; Zhang, Y.; Mu, L.; Xin, J.; Jiao, S.; Liu, H.; Liu, D. A YOLOv3-based Learning Strategy for Real-time UAV-based Forest Fire Detection. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 4963–4967.
  86. Jiao, Z.; Zhang, Y.; Xin, J.; Mu, L.; Yi, Y.; Liu, H.; Liu, D. A Deep Learning Based Forest Fire Detection Approach Using UAV and YOLOv3. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 23–27 July 2019; pp. 1–5.
  87. Srinivas, K.; Dua, M. Fog Computing and Deep CNN Based Efficient Approach to Early Forest Fire Detection with Unmanned Aerial Vehicles. In Proceedings of the International Conference on Inventive Computation Technologies, Coimbatore, India, 29–30 August 2019; Springer: Cham, Switzerland, 2020; pp. 646–652.
  88. Barmpoutis, P.; Stathaki, T. A Novel Framework for Early Fire Detection Using Terrestrial and Aerial 360-Degree Images. In Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems, Auckland, New Zealand, 10–14 February 2020; Springer: Cham, Switzerland, 2020; pp. 63–74.
  89. Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures. Remote Sens. 2020, 12, 3177.
  90. He, L.; Li, Z. Enhancement of a fire detection algorithm by eliminating solar reflection in the mid-IR band: Application to AVHRR data. Int. J. Remote Sens. 2012, 33, 7047–7059.
  91. He, L.; Li, Z. Enhancement of fire detection algorithm by eliminating solar contamination effect and atmospheric path radiance: Application to MODIS data. Int. J. Remote Sens. 2011, 32, 6273–6293.
  92. Csiszar, I.; Schroeder, W.; Giglio, L.; Ellicott, E.; Vadrevu, K.P.; Justice, C.O.; Wind, B. Active fires from the Suomi NPP Visible Infrared Imaging Radiometer Suite: Product status and first evaluation results. J. Geophys. Res. Atmos. 2014, 119, 803–816.
  93. Schroeder, W.; Oliva, P.; Giglio, L.; Csiszar, I.A. The New VIIRS 375m Active Fire Detection Data Product: Algorithm Description and Initial Assessment. Remote Sens. Environ. 2014, 143, 85–96.
  94. Sayad, Y.O.; Mousannif, H.; Al Moatassime, H. Predictive modeling of wildfires: A new dataset and machine learning approach. Fire Saf. J. 2019, 104, 130–146.
  95. Shukla, B.P.; Pal, P.K. Automatic smoke detection using satellite imagery: Preparatory to smoke detection from Insat-3D. Int. J. Remote Sens. 2009, 30, 9–22.
  96. Li, X.; Song, W.; Lian, L.; Wei, X. Forest fire smoke detection using back-propagation neural network based on MODIS data. Remote Sens. 2015, 7, 4473–4498.
  97. Li, Z.; Khananian, A.; Fraser, R.H.; Cihlar, J. Automatic detection of fire smoke using artificial neural networks and threshold approaches applied to AVHRR imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1859–1870.
  98. Hally, B.; Wallace, L.; Reinke, K.; Jones, S.; Skidmore, A. Advances in active fire detection using a multi-temporal method for next-generation geostationary satellite data. Int. J. Dig. Earth 2019, 12, 1030–1045.
  99. Giglio, L.; Schroeder, W.; Justice, C.O. The Collection 6 MODIS Active Fire Detection Algorithm and Fire Products. Remote Sens. Environ. 2016, 178, 31–41.
  100. Wickramasinghe, C.; Wallace, L.; Reinke, K.; Jones, S. Intercomparison of Himawari-8 AHI-FSA with MODIS and VIIRS active fire products. Int. J. Dig. Earth 2018.
  101. Lin, L.; Meng, Y.; Yue, A.; Yuan, Y.; Liu, X.; Chen, J.; Zhang, M.; Chen, J. A spatio-temporal model for forest fire detection using HJ-IRS satellite data. Remote Sens. 2016, 8, 403.
  102. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention. Remote Sens. 2019, 11, 1702.
  103. Vani, K. Deep Learning Based Forest Fire Classification and Detection in Satellite Images. In Proceedings of the 2019 11th International Conference on Advanced Computing (ICoAC), Chennai, India, 18–20 December 2019; pp. 61–65.
  104. Hally, B.; Wallace, L.; Reinke, K.; Jones, S. A Broad-Area Method for the Diurnal Characterisation of Upwelling Medium Wave Infrared Radiation. Remote Sens. 2017, 9, 167.
  105. Fatkhuroyan, T.W.; Andersen, P. Forest fires detection in Indonesia using satellite Himawari-8 (case study: Sumatera and Kalimantan on august-october 2015). In IOP Conference Series: Earth and Environmental Science; IOP Publishing Ltd.: Bristol, UK, 2017; Volume 54, pp. 1315–1755.
  106. Xu, G.; Zhong, X. Real-time wildfire detection and tracking in Australia using geostationary satellite: Himawari-8. Remote Sens. Lett. 2017, 8, 1052–1061.
  107. Xie, Z.; Song, W.; Ba, R.; Li, X.; Xia, L. A spatiotemporal contextual model for forest fire detection using Himawari-8 satellite data. Remote Sens. 2018, 10, 1992.
  108. Filizzola, C.; Corrado, R.; Marchese, F.; Mazzeo, G.; Paciello, R.; Pergola, N.; Tramutoli, V. RST-FIRES, an exportable algorithm for early-fire detection and monitoring: Description, implementation, and field validation in the case of the MSG-SEVIRI sensor. Remote Sens. Environ. 2017, 192, e2–e25.
  109. Di Biase, V.; Laneve, G. Geostationary sensor based forest fire detection and monitoring: An improved version of the SFIDE algorithm. Remote Sens. 2018, 10, 741.
  110. Laneve, G.; Castronuovo, M.M.; Cadau, E.G. Continuous monitoring of forest fires in the Mediterranean area using MSG. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2761–2768.
  111. Hall, J.V.; Zhang, R.; Schroeder, W.; Huang, C.; Giglio, L. Validation of GOES-16 ABI and MSG SEVIRI active fire products. Int. J. Appl. Earth Obs. Geoinf. 2019, 83, 101928.
  112. Schroeder, W.; Oliva, P.; Giglio, L.; Quayle, B.; Lorenz, E.; Morelli, F. Active fire detection using Landsat-8/OLI data. Remote Sens. Environ. 2016, 185, 210–220.
  113. Larsen, A.; Hanigan, I.; Reich, B.J.; Qin, Y.; Cope, M.; Morgan, G.; Rappold, A.G. A deep learning approach to identify smoke plumes in satellite imagery in near-real time for health risk communication. J. Expo. Sci. Environ. Epidemiol. 2020, 1–7.
  114. Phan, T.C.; Nguyen, T.T. Remote Sensing Meets Deep Learning: Exploiting Spatio-Temporal-Spectral Satellite Images for Early Wildfire Detection. No. REP_WORK. 2019. Available online: https://infoscience.epfl.ch/record/270339 (accessed on 7 September 2020).
  115. Cal Poly, S.L.O. The CubeSat Program, CubeSat Design Specification Rev. 13. 2014. Available online: http://blogs.esa.int/philab/files/2019/11/RD-02_CubeSat_Design_Specification_Rev._13_The.pdf (accessed on 7 September 2020).
  116. Barschke, M.F.; Bartholomäus, J.; Gordon, K.; Lehmann, M.; Brieß, K. The TUBIN nanosatellite mission for wildfire detection in thermal infrared. CEAS Space J. 2017, 9, 183–194.
  117. Kameche, M.; Benzeniar, H.; Benbouzid, A.B.; Amri, R.; Bouanani, N. Disaster monitoring constellation using nanosatellites. J. Aerosp. Technol. Manag. 2014, 6, 93–100.
  118. MODIS—Moderate Resolution Imaging Spectroradiometer, Specifications. Available online: https://modis.gsfc.nasa.gov/about/specifications.php (accessed on 15 September 2020).
  119. Himawari-8 and 9, Specifications. Available online: https://earth.esa.int/web/eoportal/satellite-missions/h/himawari-8-9 (accessed on 15 September 2020).
  120. The SEVIRI Instrument. Available online: https://www.eumetsat.int/website/wcm/idc/groups/ops/documents/document/mday/mde1/~edisp/pdf_ten_msg_seviri_instrument.pdf (accessed on 15 September 2020).
  121. GOES-16ABI, Specifications. Available online: https://www.goes-r.gov/spacesegment/abi.html (accessed on 15 September 2020).
  122. Huan Jing-1: Environmental Protection & Disaster Monitoring Constellation. Available online: https://earth.esa.int/web/eoportal/satellite-missions/h/hj-1 (accessed on 15 September 2020).
  123. POES Series, Specifications. Available online: https://directory.eoportal.org/web/eoportal/satellite-missions/n/noaa-poes-series-5th-generation (accessed on 15 September 2020).
  124. VIIRS-375m. Available online: https://earthdata.nasa.gov/earth-observation-data/near-real-time/firms/viirs-i-band-active-fire-data (accessed on 15 September 2020).
  125. Visible Infrared Imaging Radiometer Suite (VIIRS) 375m Active Fire Detection and Characterization Algorithm Theoretical Basis Document 1.0. Available online: https://viirsland.gsfc.nasa.gov/PDF/VIIRS_activefire_375m_ATBD.pdf (accessed on 15 September 2020).
  126. Shah, S.B.; Grübler, T.; Krempel, L.; Ernst, S.; Mauracher, F.; Contractor, S. Real-time wildfire detection from space—A trade-off between sensor quality, physical limitations and payload size. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019.
  127. Pérez-Lissi, F.; Aguado-Agelet, F.; Vázquez, A.; Yañez, P.; Izquierdo, P.; Lacroix, S.; Bailon-Ruiz, R.; Tasso, J.; Guerra, A.; Costa, M. FIRE-RS: Integrating land sensors, cubesat communications, unmanned aerial vehicles and a situation assessment software for wildland fire characterization and mapping. In Proceedings of the 69th International Astronautical Congress, Bremen, Germany, 1–5 October 2018.
  128. Escrig, A.; Liz, J.L.; Català, J.; Verda, V.; Kanterakis, G.; Carvajal, F.; Pérez, I.; Lewinski, S.; Wozniak, E.; Aleksandrowicz, S.; et al. Advanced Forest Fire Fighting (AF3) European Project, preparedness for and management of large scale forest fires. In Proceedings of the XIV World Forestry Congress 2015, Durban, South Africa, 7–11 September 2015.
  129. Bielski, C.; O’Brien, V.; Whitmore, C.; Ylinen, K.; Juga, I.; Nurmi, P.; Kilpinen, J.; Porras, I.; Sole, J.M.; Gamez, P.; et al. Coupling early warning services, crowdsourcing, and modelling for improved decision support and wildfire emergency management. In Proceedings of the IEEE International Conference on Big Data (Big Data), Boston, MA, USA, 11–14 December 2017; pp. 3705–3712.
  130. European Forest Fire Information System (EFFIS). Available online: https://effis.jrc.ec.europa.eu/ (accessed on 2 November 2020).
  131. NASA Tracks Wildfires From Above to Aid Firefighters Below. Available online: https://www.nasa.gov/feature/goddard/2019/nasa-tracks-wildfires-from-above-to-aid-firefighters-below (accessed on 2 November 2020).
  132. Govil, K.; Welch, M.L.; Ball, J.T.; Pennypacker, C.R. Preliminary Results from a Wildfire Detection System Using Deep Learning on Remote Camera Images. Remote Sens. 2020, 12, 166.
  133. MODIS Data Product Non-Technical Description—MOD 14. Available online: https://modis.gsfc.nasa.gov/data/dataprod/nontech/MOD14.php (accessed on 1 November 2020).
  134. Web of Science. Available online: http://apps.webofknowledge.com/ (accessed on 1 November 2020).
Figure 1. Generalized multispectral imaging systems for early fire detection.
Figure 2. Systems discussed in this review target the detection of fire in the early stages of the fire cycle.
Figure 3. Radar chart summarizing the findings of this review for different early forest fire detection systems with regard to accuracy, response time, coverage area, future potential, and volume of works, on a scale of 0 (low) to 5 (high).
Figure 4. Radar chart summarizing the findings of this review for different sensor types with regard to accuracy, response time, cost, future potential, and volume of works, on a scale of 0 (low) to 5 (high).
Figure 5. The number of published articles per year related to forest fire detection. Data retrieved from Web of Science [134] for the period between 1990 and October 2020.
Figure 6. The number of published articles per year related to forest fire detection in the imaging research area. Data retrieved from Web of Science [134] for the period between 1990 and October 2020.
Figure 7. The number of published articles per year on terrestrial, aerial, and satellite-based systems. The analysis was performed for forest fire detection in the imaging research area. Data retrieved from Web of Science [134] for the period between 1990 and October 2020.
Figure 8. The number of citations per year of the published articles on terrestrial, aerial, and satellite-based systems. The analysis was performed for forest fire detection in the imaging research area. Data retrieved from Web of Science [134] for the period between 1990 and October 2020.
Figure 9. Organizations and agencies that funded most of the published articles on forest fire detection in the imaging research area. Data retrieved from Web of Science [134] for the period between 1990 and October 2020.
Figure 10. Authors’ affiliation by country (%) for forest fire detection in the imaging research area. Data retrieved from Web of Science [134] for the period between 1990 and October 2020.
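
Radar charts such as those in Figures 3 and 4 can be reproduced with a few lines of matplotlib. The Python sketch below shows one straightforward way to draw such a chart; the scores it plots are placeholders for illustration only, not the values reported in this review.

    # Minimal radar-chart sketch (matplotlib); the scores are placeholders,
    # not the values reported in this review.
    import numpy as np
    import matplotlib.pyplot as plt

    criteria = ["Accuracy", "Response time", "Coverage area",
                "Future potential", "Volume of works"]
    systems = {  # hypothetical 0-5 scores, for illustration only
        "Terrestrial": [4, 4, 2, 3, 4],
        "Aerial": [4, 3, 3, 4, 3],
        "Spaceborne": [3, 2, 5, 4, 3],
    }

    # One spoke per criterion; repeat the first point to close each polygon.
    angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
    angles += angles[:1]

    fig, ax = plt.subplots(subplot_kw={"polar": True})
    for name, scores in systems.items():
        values = scores + scores[:1]
        ax.plot(angles, values, label=name)
        ax.fill(angles, values, alpha=0.1)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(criteria)
    ax.set_ylim(0, 5)
    ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
    plt.show()
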
Table 1. Multispectral imaging systems and their characteristics.

Terrestrial (optical: visible spectrum; IR: infrared spectrum; multimodal: multispectral)
Access to the data: Both web cameras and image and video datasets are available.
Specs/advantages/limitations: Easy to operate; limited field of view; needs to be carefully placed in order to ensure adequate visibility.
Spatial scale: Local.
Spatial resolution: Very high (centimeters), depending on camera resolution and the distance between the camera and the event.
Data coverage: Limited, depending on the specific task of each system.
Accuracy range: 85%–100% [35,40,58,60].

Aerial (optical: visible spectrum; IR: infrared spectrum; multimodal: multispectral)
Access to the data: Limited number of accessible published datasets.
Specs/advantages/limitations: Broader and more accurate perception of the fire; covers wider areas; flexible; affected by weather conditions; limited flight time.
Spatial scale: Local to regional.
Spatial resolution: High, depending on flight altitude, camera resolution, and the distance between the camera and the event.
Data coverage: Hundreds of hectares, depending on battery capacity.
Accuracy range: 70%–94.6% [75,86,89].

Satellite: Terra/Aqua-MODIS [118]
Spectral bands: 36 (0.4–14.4 μm).
Access to the data: Registration required (NASA).
Specs/advantages/limitations: Easily accessible; limited spatial resolution; revisit time: 1–2 days.
Spatial scale: Global.
Spatial resolution: 0.25 km (bands 1–2), 0.5 km (bands 3–7), 1 km (bands 8–36).
Data coverage: Earth.
Accuracy range: 92.75%–98.32% [94,95,96,99,102].

Satellite: Himawari-8/9—AHI-8 [119]
Spectral bands: 16 (0.4–13.4 μm).
Access to the data: Registration required (Himawari Cloud).
Specs/advantages/limitations: Imaging sensors with high radiometric, spectral, and temporal resolution; revisit time: 10 min (full disk), 5 min for areas in Japan/Australia.
Spatial scale: Regional.
Spatial resolution: 0.5 km or 1 km for visible and near-infrared bands, 2 km for infrared bands.
Data coverage: East Asia and Western Pacific.
Accuracy range: 75%–99.5% [98,100,104,105,106,107,113].

Satellite: MSG—SEVIRI [120]
Spectral bands: 12 (0.4–13.4 μm).
Access to the data: Registration required (EUMETSAT).
Specs/advantages/limitations: Low noise in the long-wave IR channels; tracking of dust storms in near-real-time; susceptibility of the larger field of view to contamination by cloud; lack of dual-view capability; revisit time: 5–15 min.
Spatial scale: Regional.
Spatial resolution: 1 km for the high-resolution visible channel, 3 km for the infrared and the three other visible channels.
Data coverage: Atlantic Ocean, Europe, and Africa.
Accuracy range: 71.1%–98% [108,109,110,111].

Satellite: GOES-16—ABI [121]
Spectral bands: 16 (0.4–13.4 μm).
Access to the data: Registration required (NOAA).
Specs/advantages/limitations: Infrared resolutions allow the detection of much smaller wildland fires with high temporal resolution but relatively low spatial resolution; delays in data delivery; revisit time: 5–15 min.
Spatial scale: Regional.
Spatial resolution: 0.5 km for the 0.64 μm visible channel, 1 km for the other visible/near-IR channels, 2 km for bands > 2 μm.
Data coverage: Western Hemisphere (North and South America).
Accuracy range: 94%–98% [111,114].

Satellite: HuanJing (HJ)-1B—WVC (Wide View CCD Camera)/IRMSS (Infrared Multispectral Scanner) [122]
Spectral bands: WVC: 4 (0.43–0.9 μm); IRMSS: 4 (0.75–12.5 μm).
Access to the data: Registration required.
Specs/advantages/limitations: Lack of an onboard calibration system to track the HJ-1 sensors’ on-orbit behavior throughout the life of the mission; revisit time: 4 days.
Spatial scale: Regional.
Spatial resolution: WVC: 30 m; IRMSS: 150–300 m.
Data coverage: Asian and Pacific region.
Accuracy range: 94.45% [101].

Satellite: POES/MetOp—AVHRR [123]
Spectral bands: 6 (0.58–12.5 μm).
Access to the data: Registration required (NOAA).
Specs/advantages/limitations: Coarse spatial resolution; revisit time: 6 h.
Spatial scale: Global.
Spatial resolution: 1.1 km by 4 km at nadir.
Data coverage: Earth.
Accuracy range: 99.6% [97].

Satellite: S-NPP/NOAA-20—VIIRS-375 m [124,125]
Spectral bands: 16 M-bands (0.4–12.5 μm); 5 I-bands (0.6–12.4 μm); 1 DNB (0.5–0.9 μm).
Access to the data: Registration required (NASA).
Specs/advantages/limitations: Increased spatial resolution; improved mapping of large fire perimeters; revisit time: 12 h.
Spatial scale: Global.
Spatial resolution: 0.75 km (M-bands); 0.375 km (I-bands); 0.75 km (DNB).
Data coverage: Earth.
Accuracy range: 89%–98.8% [93].

Satellite: CubeSats (data refer to a specific design from [126])
Spectral bands: 2: MWIR (3–5 μm) and LWIR (8–12 μm).
Access to the data: Commercial access planned.
Specs/advantages/limitations: Small physical size; reduced cost; improved temporal resolution/response time; revisit time: less than 1 h.
Spatial scale: Global.
Spatial resolution: 0.2 km.
Data coverage: Wide coverage in orbit.
Accuracy range: Not yet available; the first satellite is planned for launch in late 2020.
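
Many of the satellite fire products listed in Table 1 rest on fixed and contextual brightness-temperature tests in the mid-infrared (~4 μm) and thermal-infrared (~11 μm) channels (see, e.g., the MODIS Collection 6 algorithm [99] and the VIIRS 375 m product [93]). The short Python sketch below illustrates the flavor of a simplified daytime absolute-threshold test only; the threshold values and the synthetic scene are illustrative assumptions, not the operational MOD14 or VIIRS settings, which add contextual, cloud-masking, and false-alarm rejection stages on top of such tests.

    # Illustrative sketch of a simplified daytime brightness-temperature fire
    # test; thresholds and data are assumptions for demonstration purposes.
    import numpy as np

    def daytime_fire_candidates(bt4, bt11, bt4_abs=360.0, bt4_warm=320.0, dt_min=20.0):
        # A pixel is flagged if it is extremely hot at 4 um, or if it is
        # moderately hot and much warmer at 4 um than at 11 um (fires emit
        # strongly in the mid-IR, so BT4 - BT11 rises sharply over flames).
        return (bt4 > bt4_abs) | ((bt4 > bt4_warm) & ((bt4 - bt11) > dt_min))

    # Synthetic 3 x 3 scene in kelvin; the center pixel mimics a small fire:
    # hot at 4 um, only slightly elevated at 11 um.
    bt4 = np.array([[300.0, 301.0, 299.0],
                    [302.0, 345.0, 300.0],
                    [299.0, 300.0, 301.0]])
    bt11 = np.array([[295.0, 296.0, 294.0],
                     [296.0, 305.0, 295.0],
                     [294.0, 295.0, 296.0]])
    print(daytime_fire_candidates(bt4, bt11))  # True only at the center pixel
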
