Review

Review of Modern Forest Fire Detection Techniques: Innovations in Image Processing and Deep Learning

by Berk Özel 1,†, Muhammad Shahab Alam 2 and Muhammad Umer Khan 1,*,†
1 Department of Mechatronics Engineering, Atilim University, Ankara 06830, Turkey
2 Defense Technologies Institute, Gebze Technical University, Gebze 41400, Turkey
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Information 2024, 15(9), 538; https://doi.org/10.3390/info15090538
Submission received: 3 July 2024 / Revised: 28 August 2024 / Accepted: 29 August 2024 / Published: 3 September 2024
(This article belongs to the Section Review)

Abstract:
Fire detection and extinguishing systems are critical for safeguarding lives and minimizing property damage. These systems are especially vital in combating forest fires. In recent years, several forest fires have set records for their size, duration, and level of destruction. Traditional fire detection methods, such as smoke and heat sensors, have limitations, prompting the development of innovative approaches using advanced technologies. Utilizing image processing, computer vision, and deep learning algorithms, we can now detect fires with exceptional accuracy and respond promptly to mitigate their impact. In this article, we conduct a comprehensive review of articles from 2013 to 2023, exploring how these technologies are applied in fire detection and extinguishing. We delve into modern techniques enabling real-time analysis of the visual data captured by cameras or satellites, facilitating the detection of smoke, flames, and other fire-related cues. Furthermore, we explore the utilization of deep learning and machine learning in training intelligent algorithms to recognize fire patterns and features. Through a comprehensive examination of current research and development, this review aims to provide insights into the potential and future directions of fire detection and extinguishing using image processing, computer vision, and deep learning.

1. Introduction

Forests cover approximately 4 billion hectares of the world’s landmass, roughly equivalent to 30% of the total land [1]. The preservation of forests is essential for maintaining biodiversity on a global scale. Wildfires are destructive events that can adversely change the balance of our planet and threaten our future [2]. Wildfires have long-term devastating effects on ecosystems, such as disrupted vegetation dynamics, increased greenhouse gas emissions, loss of wildlife habitat, and destruction of land cover. The early detection and rapid extinguishing of fires are crucial in minimizing the loss of life and property [3]. Traditional fire detection systems that rely on smoke or heat detectors suffer from low accuracy and long response times [4]. However, advancements in image processing (IP), computer vision (CV), and deep learning (DL) have opened up new possibilities for more effective and efficient fire detection and extinguishing systems [5]. These systems utilize cameras and sophisticated algorithms to analyze visual data in real time, enabling early fire detection and efficient fire suppression strategies.
In most of the literature, researchers have posed their problem under the paradigm of fire detection [6,7,8]. However, some researchers have also explored different aspects of the phenomenon of combustion, i.e., smoke [9,10], flame [11], and fire [12], with the intent to effectively determine the threats posed by fire. In summary, fire is the overall phenomenon of combustion involving the rapid oxidation of a fuel source, while flame represents the visible, gaseous part of a fire that emits light and heat. Smoke, on the other hand, is the collection of particles and gases released during a fire, which can be toxic and pose health hazards [13]. In this paper, we review automatic fire, flame, and smoke detection over the last eleven years, i.e., from 2013 to 2023, using deep learning and image processing.
Image processing techniques enable the extraction of relevant features from images or video streams that are captured by cameras [14]. This includes analyzing color, texture, and spatial information to identify potentially fire-related patterns [15]. By applying algorithms such as edge detection, segmentation, and object recognition, fire can be detected and differentiated from non-fire elements with a high degree of accuracy [16,17].
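To make the color-analysis step concrete, the following is a minimal, illustrative sketch (not any specific method from the reviewed papers) of rule-based color thresholding, a common first stage in image-processing fire detectors: fire-like pixels tend to satisfy R > G > B with a bright red channel. The threshold value of 180 is an assumption chosen for illustration.

```python
import numpy as np

def fire_color_mask(rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose color is fire-like.

    Simplified RGB rule often used as a first stage in rule-based
    fire detection: fire pixels tend to satisfy R > G > B with a
    sufficiently bright red channel.
    """
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    return (r > 180) & (r > g) & (g > b)

# A 2x2 test image: one flame-orange pixel, three background pixels.
img = np.array([[[255, 140, 20], [30, 90, 40]],
                [[10, 10, 10],   [200, 220, 255]]], dtype=np.uint8)
mask = fire_color_mask(img)
print(mask.sum())  # -> 1 (only the flame-orange pixel)
```

In practice, such a color mask would be combined with the texture and spatial cues mentioned above to reduce false positives from sun glare or autumn foliage.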
Computer vision can play a crucial role in early fire detection by utilizing image and video processing techniques to analyze visual data and identify signs of fire [18]. CV algorithms can identify patterns based on features such as color, shape, and motion [19,20]. CV with thermal imaging technology can detect fires based on temperature variations [21,22]. It is important to note that CV conjugated with other fire safety measures, such as smoke detectors, heat sensors, and human intervention, enhances early fire detection. DL combined with CV can also effectively recognize various fire characteristics, including flames, smoke patterns, and heat signatures [23]. It enables more precise and reliable fire detection, even in challenging environments with variable lighting conditions or occlusions.
Deep learning, a subset of machine learning (ML), has revolutionized the field of CV by enabling the training of highly complex and accurate models [24]. Deep learning models, such as convolutional neural networks (CNNs), can be trained on vast amounts of labeled fire-related images and videos, learning to automatically extract relevant features and classify fire instances with remarkable precision [25,26]. These models can continuously improve their performance through iterative training, enhancing their ability to detect fires and reducing false alarms [27].
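The feature extraction that CNNs perform rests on the convolution operation. As a toy sketch (assuming nothing about any particular reviewed model), the hand-written kernel below responds at the boundary between a bright, flame-like region and a dark background; in a trained CNN, many such kernels are learned automatically from labeled fire images.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-gradient kernel fires at the edge between a bright
# (flame-like) region and a dark background.
image = np.zeros((5, 6))
image[:, :3] = 1.0                  # bright left half
kernel = np.array([[1.0, -1.0]])    # gradient filter
fmap = conv2d(image, kernel)
print(fmap.max())  # -> 1.0, peak response at the bright/dark boundary
```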
This work provides a systematic review of the most representative fire and/or smoke detection and extinguishing systems, highlighting the potential of image processing, computer vision, and deep learning. Based on three types of inputs, i.e., camera images, videos, and satellite images, the widely used methods for identifying active fire, flame, and smoke are discussed. As research and development continue to advance these technologies, future fire extinguishing systems promise to provide robust protection against the devastating effects of fires, ultimately saving lives and minimizing property damage.
The remainder of this paper is structured as follows: Section 2 presents the search strategy and selection criteria. Section 3 details the broadly defined classes for fire and smoke detection. Section 4 presents an analysis of the selected topic areas, discussing representative publications from each area in detail. In Section 5, we provide the discussion related to the factors critical for forest fire, followed by the recommendations for future research in Section 6. Lastly, Section 7 concludes this study with some concluding thoughts.

2. Methodology: Search Strategy and Selection Criteria

The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) [28] framework defined the methodology for this systematic review. PRISMA provides a standardized approach for conducting and reporting systematic reviews, ensuring that all relevant studies are identified and assessed comprehensively and transparently. This review aims to understand the approaches used to detect or extinguish forest fires. The required data for this systematic review were gathered from two renowned sources, Web of Science and IEEE Xplore®, and the review was limited to peer-reviewed journal articles published from 2013 to 2023. Web of Science is a research database that offers a wide range of scholarly articles across many disciplines. It includes citation indexing, which helps track the impact of research. IEEE Xplore® is a digital library focused on electrical engineering, electronics, computer science, and other related fields. It provides access to technical literature like journal articles, conference proceedings, and technical standards. We used the EndNote 20.6 reference manager, a software tool by Clarivate, to organize and manage the references collected during the review process. EndNote helped us to classify the references, filter relevant studies, and screen for duplicates, as well as ensure a comprehensive and systematic review of the literature. This tool is widely used in academic research to streamline the process of citation management and bibliography creation. “Fire Detection” was used in conjunction with “Computer Vision”, “Machine Learning”, “Image Processing”, and “Deep Learning” to define the primary search string. To identify the applications of fire detection, “Fire Extinguishing” conjugated with “UAV” and “UGV” was used to define the secondary search string. The pictorial view of the selected areas of the research along with their distribution is depicted in Figure 1.
Figure 2 illustrates the PRISMA framework used to identify and select the most relevant literature. The search using the primary keywords retrieved 1872 records in Web of Science and 288 records in IEEE Xplore®. Data from both sources were merged, and after duplicate removal, 1823 records were left. By excluding all records published before 2013 and after 2023, and by applying the search string (“Forest Fire” || “Wildfire”) && (“detection” || “recognition” || “extinguish”) in the abstract, title, and keyword fields, only 270 records were retained. A further screening was applied to obtain the data most relevant to our interest, and after excluding publications for which the full text was not accessible, a total of 155 journal papers from the most relevant journals were retained for detailed review.
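The Boolean screening query can be sketched as a small filter; this is an illustrative reimplementation of the search logic described above, and the record titles are hypothetical examples, not entries from the actual corpus.

```python
def matches_screen(text: str) -> bool:
    """Apply the review's screening query:
    ("Forest Fire" OR "Wildfire") AND
    ("detection" OR "recognition" OR "extinguish")."""
    t = text.lower()
    topic = ("forest fire" in t) or ("wildfire" in t)
    task = any(k in t for k in ("detection", "recognition", "extinguish"))
    return topic and task

# Hypothetical record titles, for illustration only.
records = [
    "Wildfire detection from satellite imagery",
    "Forest fire risk mapping with GIS",    # no task keyword -> excluded
    "Indoor smoke recognition with CNNs",   # no topic keyword -> excluded
]
kept = [r for r in records if matches_screen(r)]
print(len(kept))  # -> 1
```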
To analyze these publications, Figure 3 illustrates the number of journal publications from 2013–2023. The increasing trend after 2018 is an indicator of growing interest in this area of study. The top five journals publishing the most papers on this topic are Fire Technology (9), Forests (14), IEEE Access (9), Remote Sensing (21), and Sensors (13). These journals account for almost 43% of all publications.

3. Research Topics

While conducting our literature search, we tried to cover all aspects contributing to the overall topic. Although the following can be considered distinct research topics, from the perspective of deep learning they complement one another.
  • Image Processing: Research that focuses on fire detection based on the features extracted after processing the image [29,30].
  • Computer Vision: Research focusing on the algorithms to understand and interpret the visual data to identify fire [31].
  • Deep Learning: Research associated with the models that can continuously enhance their ability to detect fires [32].
Based on the literature search, four main groups were formulated to classify the publication results. This classification is mainly based on the research topic, theme, title, practical implication, and keywords. Each publication in our search fell broadly into one of these categories:
  • Fire: Research that addresses the methods capable of identifying the forest fire in real-time or based on datasets [33,34].
  • Smoke: Research focusing on the methods to identify smoke with its different color variations [35,36].
  • Fire and Flame: Research associated with the methods that can identify fire and flame [37].
  • Fire and Smoke: Research that explores the methods focusing on the accurate determination of fire and smoke [38].
    A further category has been introduced that overlaps with those defined above but is application-oriented, involving robots.
  • Applications: Research that addresses a robot’s ability not only to detect fire but also to extinguish it [39,40,41].

4. Analysis

The distribution of various publications in selected categories is illustrated in Figure 4. From the defined categories, fire detection was the most dominant class, containing 68 (44%) of the 155 total publications, followed by smoke detection with 33 (21%), fire and smoke with 23 (15%), applications with 18 (12%), and fire and flame with 13 (8%). The data highlight that fire detection and monitoring are foundational areas in the field, while practical applications for fire extinguishing, particularly those involving unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), remain less developed. Only seven articles focused on UGVs and eleven on UAVs for fire extinguishing, indicating that in-field utilization in this area is still in its early stages.
Deep learning has been successfully applied to fire, flame, and smoke detection tasks, where its ability has been utilized to learn complex patterns and features from large amounts of data [42,43]. The primary task in fire detection is dataset collection, which consists of a large dataset of images or videos containing both fire and non-fire scenes [44]. The collected data need to be preprocessed to ensure consistency and quality. This may involve resizing images, normalizing pixel values, removing noise, and augmenting the dataset by applying transformations like rotation, scaling, or flipping [45]. Afterward, a deep learning model needs to be designed and trained to perform fire, smoke, or flame detection. CNNs are commonly used for this purpose due to their effectiveness in image-processing tasks [46]. The architecture can be customized based on the specific requirements and complexity of the detection task [47].
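The preprocessing and augmentation steps described above can be sketched with a few array operations; this is a generic illustration of normalization, flipping, and rotation, not the pipeline of any specific reviewed paper.

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Normalize 8-bit pixel values to the [0, 1] range."""
    return img.astype(np.float32) / 255.0

def augment(img: np.ndarray) -> list:
    """Simple augmentations: horizontal/vertical flips and a 90° rotation."""
    return [img,
            np.fliplr(img),    # horizontal flip
            np.flipud(img),    # vertical flip
            np.rot90(img)]     # 90-degree rotation

# A tiny 2x2 RGB "image" stands in for a fire-scene photo.
raw = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
batch = [preprocess(a) for a in augment(raw)]
print(len(batch))  # -> 4 training samples from one source image
```

Augmentation of this kind is what lets a comparatively small fire dataset cover more of the variation (orientation, mirroring) a deployed detector will encounter.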
For each publication, we extracted key information such as the dataset, data type, method, objective, and achievement. One or two representative publications were picked from each category based on the annual citation count (ACC), a metric indicating the average number of citations per year since publication. Citation counts were retrieved from the Web of Science up to July 2024. To qualify as a representative publication, a paper’s ACC had to lie above the category mean, i.e., have a positive standardized score, Std(ACC).
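The selection criterion can be made concrete with a short sketch. The papers and citation counts below are hypothetical, and the interpretation of Std(ACC) as a standardized (z) score above the category mean is our reading of the criterion stated above.

```python
from statistics import mean, stdev

def acc(citations: int, pub_year: int, current_year: int = 2024) -> float:
    """Annual citation count: citations averaged over years since publication."""
    return citations / max(current_year - pub_year, 1)

# Hypothetical papers: (total citations, publication year).
papers = {"A": (120, 2021), "B": (30, 2019), "C": (10, 2022)}
scores = {k: acc(c, y) for k, (c, y) in papers.items()}

mu, sigma = mean(scores.values()), stdev(scores.values())
# A paper qualifies when its ACC lies above the category mean,
# i.e., its standardized score (ACC - mean) / std is positive.
representative = [k for k, s in scores.items() if (s - mu) / sigma > 0]
print(representative)  # -> ['A']
```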

4.1. Fire

It is important to note that deep learning models for fire detection rely heavily on the quality and diversity of the training data. Obtaining a comprehensive and representative dataset is crucial for achieving accurate and robust fire detection performance. Past research efforts related to fire detection are listed in Table 1 in terms of the dataset, method, objectives, and achievements.
  • Representative Publications:
The annual citation count for all the papers listed in this category was calculated and is illustrated in Figure 5. The paper entitled “A Forest Fire Detection System Based on Ensemble Learning”, published in 2021, was selected from this category as a representative publication due to its highest ACC score [80]. In this work, the authors developed a forest fire detection system based on ensemble learning. First, two individual learners, YOLOv5 and EfficientDet, were integrated to accomplish fire detection. Second, another individual learner, EfficientNet, was introduced to learn global information and avoid false positives. The dataset used contains 2976 forest fire images and 7605 non-fire images. Sufficient training data enabled EfficientNet to show good discriminability between fire objects and fire-like objects, with 99.6% accuracy on 476 fire images and 99.7% accuracy on 676 fire-like images.

4.2. Smoke

Deep learning models learn to extract relevant features from input data automatically. During training, the model can learn discriminative features from smoke images that are independent of color. By focusing on shape, texture, and spatial patterns rather than color-specific cues, the model becomes less sensitive to color variations and can detect smoke effectively. Table 2 highlights the research focused on smoke detection.
  • Representative Publications:
The ACC score for all the publications in this category was determined and is illustrated in Figure 6. Based on the plot, the two best performers were chosen from this category. A notable publication [143], titled ‘Learning Discriminative Feature Representation with Pixel-Level Supervision for Forest Smoke Recognition’, focuses on forest smoke recognition using a Pixel-Level Supervision Neural Network. The research employed non-binary pixel-level supervision to enhance model training, introducing a dataset of 77,910 images. To improve the accuracy of smoke detection, the study integrated the Detail-Difference-Aware Module to differentiate between smoke and smoke-like targets, the Attention-based Feature Separation Module to amplify smoke-relevant features, and the Multi-Connection Aggregation Method to enhance feature representation. The proposed model achieved a remarkable detection rate of 96.95%.
The second representative publication, titled ‘SmokeNet: Satellite Smoke Scene Detection Using Convolutional Neural Network with Spatial and Channel-Wise Attention’ [145] and published in 2019, aimed to detect wildfire smoke using a large-scale satellite imagery dataset. It proposed a new CNN model, SmokeNet, which incorporates spatial and channel-wise attention for enhanced feature representation. The USTC_SmokeRS dataset, consisting of 6225 images across six classes (cloud, dust, haze, land, seaside, and smoke), served as the benchmark. The SmokeNet model achieved the best accuracy rate of 92.75% and a Kappa coefficient of 0.9130, outperforming other state-of-the-art models.

4.3. Fire and Flame

Deep learning models can integrate multiple data sources to improve fire and flame detection. In addition to visual data, other sources such as thermal imaging, infrared sensors, or gas sensors can be used to provide complementary information. By fusing these multi-modal inputs, the model can enhance its ability to detect fire and flame accurately, even in challenging conditions. The existing work related to fire and flame detection is presented in Table 3.
  • Representative Publications:
From the ACC graph for this category, shown in Figure 7, only the best performer was chosen. The representative publication [160], entitled ‘The fire recognition algorithm using dynamic feature fusion and IV-SVM classifier’ and published in 2019, aimed to identify flame areas using a flame recognition model based on an incremental vector SVM (IV-SVM) classifier. It introduces flame characteristics in color space and employs dynamic feature fusion to remove image noise from SIFT features, enhancing feature extraction accuracy. The SIFT feature extraction method incorporates flame-specific color spatial characteristics, achieving a testing accuracy of 97.16%.

4.4. Fire and Smoke

Deep learning models excel at learning hierarchical representations of data. They can learn features at different levels of abstraction, enabling them to capture both local and global patterns associated with fire and smoke. This enhances their ability to detect fire and smoke under various environmental conditions and appearances. A total of twenty-three publications have been identified in this category, as listed in Table 4.
  • Representative Publications:
Based on the ACC graph shown in Figure 8, the top performer in this category was the paper titled ’Forest fire and smoke detection using deep learning-based learning without forgetting’ [179]. The authors utilized transfer learning to enhance the analysis of forest smoke in satellite images. Their study introduced a dataset of 999 satellite images and employed learning without forgetting (LwF) to train the network on a new task while preserving its pre-existing capabilities. Using the Xception model with LwF, the research achieved an accuracy of 91.41% on the BowFire dataset and 96.89% on the original dataset, demonstrating significant improvements in forest fire and smoke detection accuracy.
Based on the plot, Ref. [168], entitled ‘Fast forest fire smoke detection using MVMNet’ and published in 2022, was the second-best performer with an ACC score of almost thirty-five. The paper proposed multi-oriented detection based on a value-conversion attention mechanism module and mixed-NMS for smoke detection. The authors compiled a forest fire multi-oriented detection dataset of 15,909 images. The mAP and mAP50 achieved were 78.92% and 88.05%, respectively.

4.5. Applications of Robots in Fire Detection and Extinguishing

Robots equipped with cameras or vision sensors can capture images or video footage of their surroundings. Deep learning models trained on fire datasets can analyze this visual input, enabling the robot to detect the presence of fire. CNNs are commonly used for image-based fire detection in robot systems.
Deep learning models can be employed to enhance the robot’s decision-making capabilities during fire extinguishing operations. By training the model on datasets that include fire dynamics, robot behavior, and firefighting strategies, the robot can learn to make informed decisions on approaches such as selecting the appropriate firefighting equipment, assessing the fire’s intensity, or planning extinguishing maneuvers. There exist very few examples where robots are utilized in actual fields for forest fire detection. To highlight the potential of robots in fire detection and extinguishing, indoor and outdoor scenarios, in addition to wildfires, are also included. Past research efforts related to fire detection and extinguishing with the help of robots are listed in Table 5.
  • Representative Publications:
The ACC for papers in this category is illustrated in Figure 9. Two papers were chosen as representative publications from this category. One of the selected papers is entitled ‘The Role of UAV-IoT Networks in Future Wildfire Detection’. In this paper, a novel wildfire detection solution based on unmanned aerial vehicle-assisted Internet of Things (UAV-IoT) networks was proposed [192]. The main objectives were to study the performance and reliability of the UAV-IoT networks for wildfire detection and to present a guideline to optimize the UAV-IoT network to improve fire detection probability under limited system cost budgets. Discrete-time Markov chain analysis was utilized to compute the fire detection and false-alarm probabilities. Numerical results suggested that, given enough system cost, UAV-IoT-based fire detection can offer a faster and more reliable wildfire detection solution than state-of-the-art satellite imaging techniques.
The second paper that was chosen is titled ’A Survey on Robotic Technologies for Forest Firefighting: Applying Drone Swarms to Improve Firefighters’ Efficiency and Safety’ [201]. In this paper, a concept for deploying drone swarms in fire prevention, surveillance, and extinguishing tasks was proposed. The objectives included evaluating the effectiveness of drone swarms in enhancing operational efficiency and safety in firefighting missions, as well as in addressing the challenges reported by firefighters. The system utilizes a fleet of homogeneous quad-copters equipped for tasks such as surveillance, mapping, and monitoring. Moreover, the paper discussed the potential of this drone swarm system to improve firefighting operations and outlined challenges related to scalability, operator training, and drone autonomy.

5. Discussion

Fire, smoke, and flame detection and their extinguishing are considered challenging problems due to the complex behavior and dynamics of fire, which makes them difficult to predict and control. Based on the literature, we identified the following important factors.

5.1. Variability in Fire, Smoke, and Flame Types and Appearances

In our analysis, almost all articles were found to have utilized modern resources and technologies to make the proposed approaches as effective as possible. We found several articles in the literature that focused on variation based on the type, color, size, and intensity (Table 6).
Our analysis of forest fire detection and extinguishing systems underscores the significant advancements made in this field, particularly in leveraging modern resources and technologies such as deep neural networks (DNNs). These technologies have proven essential in addressing the variability in fire, smoke, and flame types; appearances; and intensities, enabling more accurate detection and response.

5.2. Response Time

The ability to detect fires early is crucial for prompt intervention and minimizing potential damage. Many studies have emphasized early detection, but there is often a lack of concrete evidence regarding the computational efficiency and real-world effectiveness of these methods, particularly in forest fire scenarios. A common issue is the lack of practical testing and transparency. For instance, [62] tested a GMM to detect smoke signatures in plumes, achieving a processing rate of 18–20 fps, but they did not test it in real forest fire scenarios, limiting practical evidence. Similarly, [78] conducted tests with a controlled small fire but did not provide time metrics for real-time applicability. The authors in [164] utilized a dataset collected over 274 days from nine real surveillance cameras, mentioning “early detection” without specific metrics, making its practical effectiveness difficult to assess. In [78], the authors claimed to detect 78% of wildfires earlier than the VIIRS active fire product, but they did not include explicit time measurements, hindering a thorough evaluation of its early-detection capabilities.
Some studies provided more concrete data on the speed and efficiency of their detection methods. For example, [73] used aerial image analysis with ensemble learning to achieve an inference time of 0.018 s, showcasing rapid detection potential. The multi-oriented detection method in [168] achieved a frame rate of 122 fps, lower than YOLOv5 (156 fps), though with a higher mean average precision (mAP). Another study used a dataset of 1135 images, reporting an inference time of 2 s for forest fire segmentation using vision transformers [70]. The deep neural network-based approach (AddNet) saved 12.4% time compared to a regular CNN, and it was tested on a dataset of 4000 images [81]. The performance of EfficientDet, YOLOv3, SSD, and Faster R-CNN was evaluated on a dataset of 17,840 images, with YOLOv3 being the fastest at 27 fps [162]. The method in [174], evaluated with a dataset of 16,140 images, achieved a processing time per image of 0.42 s, which was claimed to be four times faster than the compared models.
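Throughput figures like those above are easier to compare on a common per-frame latency scale; a quick conversion sketch (the dictionary simply restates the fps values quoted in this section):

```python
def per_frame_ms(fps: float) -> float:
    """Convert a throughput in frames per second to per-frame latency in ms."""
    return 1000.0 / fps

# Throughputs reported in the studies discussed above.
reported = {"MVMNet [168]": 122, "YOLOv5 [168]": 156, "YOLOv3 [162]": 27}
latencies = {k: round(per_frame_ms(v), 1) for k, v in reported.items()}
print(latencies["YOLOv3 [162]"])  # -> 37.0 (ms per frame)
```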
Although “early detection” is a frequently used term, specific, quantifiable metrics to support these claims are often lacking. The reviewed studies highlight various methods and technologies, but the need for comprehensive, real-world testing and transparent reporting remains.

5.3. Environmental Context and Adaptability

The effectiveness of fire detection systems under various environmental conditions is critical for their accuracy and reliability. Environmental factors such as weather, terrain, and other influences can significantly impact performance, leading to false positives or missed detection.
Environmental factors like cloud cover and weather conditions pose significant challenges for fire detection systems. For example, [75] achieved a 92% detection rate in clear weather but only 56% in cloudy conditions using multi-sensor satellite imagery from Sentinel-1 and Sentinel-2. Similarly, [78] utilized geostationary weather satellite data and proposed max aggregation to reduce cloud and smoke interference, enhancing detection accuracy. Not all studies addressed varying weather conditions comprehensively. Ref. [150] used an unsupervised method without specific solutions for different weather conditions, demonstrating a lack of robustness in dynamic environments. Additionally, [115] highlighted that wildfire detection probability by MODIS is significantly influenced by factors such as daily relative humidity, wind speed, and altitude, underscoring the need for adaptable detection systems.
False positives are a critical issue in fire detection systems, as they can lead to unnecessary alarms and resource deployment. Various strategies have been employed to mitigate this issue. For instance, [72] proposed dividing detected regions into blocks and using multidimensional texture features with a clustering approach to filter out false positives accurately. This method focuses on improving the specificity of the detection system. Other approaches include threshold optimization, as seen in [57], where fires with more than a 30% confidence level were selected to reduce false alarms in the MODIS14 dataset. Ref. [62] attempted to discriminate between smoke, fog, and clouds by converting the RGB color space to hue, saturation, and luminance, though the study lacked a thorough evaluation and comparison of results.
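The color-space conversion used in [62] can be sketched with the standard library; the sample pixel values and the saturation thresholds below are illustrative assumptions, not figures from that study. The intuition is that grayish smoke is bright but weakly saturated, unlike a saturated blue sky.

```python
import colorsys

def rgb_to_hsl(r: int, g: int, b: int):
    """Convert 8-bit RGB to hue, saturation, luminance (each in [0, 1]).

    colorsys returns (hue, lightness, saturation); we reorder to H, S, L.
    """
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return h, s, l

# Grayish smoke pixel vs. a saturated blue-sky pixel (illustrative values).
smoke_h, smoke_s, smoke_l = rgb_to_hsl(180, 180, 185)
sky_h, sky_s, sky_l = rgb_to_hsl(80, 140, 220)
print(smoke_s < 0.2 and sky_s > 0.5)  # -> True: saturation separates them
```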
Combining traditional and deep learning methods has shown promise in improving detection accuracy. Ref. [121] integrated a hand-designed smoke detection model with a deep learning model, successfully reducing the false negative and false positive rates, thereby enhancing smoke recognition accuracy. The authors in [147] addressed the challenge of non-smoke images containing features similar to smoke, such as colors, shapes, and textures, by proposing multiscale convolutional layers for scale-invariant smoke recognition.
Detection in fog or dust conditions presents additional challenges. The authors in [151] compared their approach with other methods, including SVM, Bayes classifier, fuzzy c-means, and Back Propagation Neural Network, and they demonstrated the lowest false alarm rate in wildfire smoke detection under heavy fog. Further advancements include the use of quasi-dynamic features and dual tree-complex wavelet transform with elastic net processing, as proposed by [177], to handle disturbances like fog and haze. Similarly, [148] developed a deep convolutional neural network to address variations in image datasets, such as clouds, fog, and sandstorms, achieving an average accuracy of 97%. However, they noted a performance degradation when testing on wildfire smoke compared to nearby smoke, indicating the need for more specialized training datasets.

5.4. Extinguishing Efficiency

Most of the development of firefighting robots has mainly focused on indoor and smooth outdoor environments, limiting their use in rugged terrains like forests. These robots are designed to assist in firefighting, but their effectiveness in actual forest environments is largely untested. Most existing firefighting UGVs are suited for smooth surfaces and controlled conditions, such as urban areas, and are equipped with fire suppression systems and sensors. However, they are not optimized for the unpredictable conditions of forests.
Some pioneering efforts are being made to develop technologies specifically for forest environments. For instance, a UAV platform with a 600-L payload capacity, equipped with thermographic cameras and navigation systems, has been proposed, but it has not been fully tested in real-world conditions [198]. Another study explored the use of fire extinguishing balls deployed from unmanned aircraft systems, though their practical effectiveness remained uncertain due to limited integration evidence [199,200]. Research has also focused on robotized surveillance with conventional, multi-spectral, and thermal cameras, primarily for situational awareness and early detection [201]. However, there is a gap in integrating autonomous systems for direct fire suppression, with most efforts centered on surveillance rather than active firefighting.
While there are promising developments, forest firefighting robots are still in the early stages of research and development. Most current technologies are designed for controlled environments and have not been extensively tested in forest conditions. Therefore, their efficiency and practical effectiveness cannot be validated due to a lack of evidence and comprehensive testing.

5.5. Compliance and Standards

The use of UAVs for forest fire detection and extinguishing offers advantages like rapid deployment, real-time data acquisition, and access to hard-to-reach areas. However, integrating UAVs into these applications presents challenges, particularly regarding compliance with regional regulations and safety standards. For instance, in Canada, UAV operators must obtain a pilot license, maintain a line of sight with the UAV, and avoid flying near forest fires [130]. These regulations, while essential for safety, can limit the effectiveness and operational scope of UAV-based systems. Our review found a lack of focus on developing UAV hardware that complies with these regulatory frameworks, highlighting the need for compliant technologies that can operate safely and legally across different regions.

6. Recommendations for Future Research

A review of the current literature on forest fire detection and extinguishing systems revealed several key areas where further research and development are needed. Addressing these gaps will not only enhance the effectiveness of these systems but also ensure their safe and compliant integration into existing fire management frameworks. Below are three primary gaps that were identified, along with corresponding recommendations for future research.

6.1. Recommendation 1: Integration of Real-Time Data Processing and Decision-Making Algorithms

Gap: Current research often focuses on the data collection capabilities of UAV systems, but there is little emphasis on integrating real-time data processing and decision-making algorithms [82,130]. This integration is crucial for enabling UAVs to respond promptly and accurately to detected fires, especially in rapidly changing environments.
Recommendation: Future research should concentrate on developing and integrating advanced algorithms capable of real-time data processing [174] and decision making [202]. These include machine learning and AI techniques that can analyze sensor data on the fly, identify potential fire hazards, and make autonomous decisions regarding navigation and intervention. Researchers should explore how such algorithms can be implemented efficiently on UAV platforms, given constraints such as computational power and energy consumption [110,169].
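One inexpensive on-board decision rule that respects these computational constraints is a temporal persistence filter: an alert is raised only when the detector's per-frame confidence stays above a threshold for several consecutive frames, suppressing single-frame false alarms. The sketch below is a hypothetical illustration, not a method from the surveyed literature:

```python
from collections import deque

class FireAlertFilter:
    """Hypothetical on-board decision rule: raise an alert only when the
    per-frame detector confidence exceeds `threshold` for `persist`
    consecutive frames. The check is O(1) per frame, which matters on
    power-limited UAV hardware."""

    def __init__(self, threshold=0.8, persist=5):
        self.threshold = threshold
        self.window = deque(maxlen=persist)

    def update(self, confidence):
        # Record whether this frame cleared the threshold; alert only
        # when the window is full and every recent frame cleared it.
        self.window.append(confidence >= self.threshold)
        return len(self.window) == self.window.maxlen and all(self.window)

f = FireAlertFilter(threshold=0.8, persist=3)
stream = [0.9, 0.2, 0.85, 0.9, 0.95, 0.9]  # simulated per-frame confidences
alerts = [f.update(c) for c in stream]
print(alerts)  # [False, False, False, False, True, True]
```

The isolated spike at frame 0 is suppressed, while the sustained run of high-confidence frames triggers an alert — a trade of a few frames of latency for a lower false alarm rate.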

6.2. Recommendation 2: Effectiveness and Autonomy in Real-World Conditions

Gap: Although numerous UAV systems have been proposed for forest fire detection and extinguishing, many have not been extensively tested or validated in real-world conditions [65,73,198]. This lack of field testing raises concerns about the practical effectiveness, functionality, and autonomy of these systems in the diverse and challenging environments typical of forest fires.
Recommendation: There is a need for comprehensive field trials and simulations that replicate the conditions of actual forest fires. Future research should focus on developing and testing UAV systems in varied and dynamic environments to assess their performance in detecting and responding to fires. This includes testing the systems’ navigation capabilities, sensor accuracy, and overall operational reliability.

6.3. Recommendation 3: Human–Robot Interactions and Collaboration

Gap: While UAVs offer advanced surveillance and early detection capabilities, there is limited research on how these systems can effectively interact and collaborate with human firefighters. Our analysis found no article that discusses human–robot interaction (HRI) in the context of forest fires. Ensuring seamless HRI is crucial for optimizing the use of UAVs in firefighting, including coordinating actions with ground teams and ensuring the safety and efficiency of operations.
Recommendation: Future research should explore the development of systems and protocols that facilitate effective HRI in the context of forest firefighting. This includes designing intuitive interfaces and communication systems that allow human operators to easily control and monitor UAVs. Additionally, research should focus on developing collaborative frameworks where UAVs and human firefighters can work together, leveraging each other’s strengths. For example, UAVs can provide real-time aerial data to ground teams, enhancing situational awareness and guiding decision-making processes [58]. Studies should also address the psychological and ergonomic aspects of HRI, ensuring that the introduction of UAVs does not overwhelm or distract human operators but rather complements their efforts.
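A concrete building block for such collaboration is a machine-readable alert that a UAV pushes to ground crews. The schema below is purely illustrative (field names are our own, not from any surveyed system); in a deployment the payload might be published over a lightweight transport such as MQTT so that ground teams receive it with minimal latency:

```python
import json
from datetime import datetime, timezone

def make_alert(lat, lon, confidence, frame_id):
    """Build a hypothetical UAV-to-ground fire alert as a JSON string.
    Field names are illustrative, not a standardized schema."""
    return json.dumps({
        "type": "fire_detection",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "location": {"lat": lat, "lon": lon},   # detection geolocation
        "confidence": round(confidence, 3),     # detector confidence [0, 1]
        "frame_id": frame_id,                   # source video frame
    })

msg = make_alert(39.92, 32.85, 0.914, 1208)
print(msg)
```

Keeping the payload small and structured lets the same message drive both a map overlay for incident commanders and an audible alert for crews in the field.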

7. Conclusions

Automatic fire detection in forests is a critical aspect of modern wildfire management and prevention. In this paper, using the PRISMA framework, we surveyed a total of 155 journal papers concentrating on fire detection with image processing, computer vision, deep learning, and machine learning over the span 2013–2023. The literature was mainly classified into four categories: fire, smoke, fire and flame, and fire and smoke. We also categorized the literature by field application: fire detection, fire extinguishing, or a combination of both. We observed an exponential increase in the number of publications from 2018 onward; however, very limited research has been conducted on the utilization of robots for detecting and extinguishing fires in hazardous environments. We predict that, with the increasing number of forest fire incidents and the growing popularity of robots, the trend toward autonomous systems for fire detection and extinguishing will thrive. We hope that this work can serve as a guidebook for researchers seeking recent developments in forest fire detection using deep learning and image processing as a basis for further research in this domain.

Author Contributions

B.Ö.: conceptualization, methodology, formal analysis, and writing—original draft preparation; M.S.A.: methodology, investigation, visualization, and writing—review and editing; M.U.K.: conceptualization, methodology, investigation, visualization, writing—review and editing, and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this study:
AAFLM: Attention-Based Adaptive Fusion Residual Module
AAPF: Auto-Organization, Adaptive Frame Periods
ADE-Net: Attention-based Dual-Encoding Network
AERNet: An Effective Real-time Fire Detection Network
AFM: Attention Fusion Module
AFSM: Attention-Based Feature Separation Module
AGE: Attention-Guided Enhancement
AMP: Automatic Mixed Precision
ANN: Artificial Neural Network
ASFF: Adaptively Spatial Feature Fusion
AUROC: Area Under the Receiver Operating Characteristic
BNN: Bayesian Neural Network
BiFPN: Bidirectional Feature Pyramid Network
BPNN: Back Propagation Neural Network
CA: Coordinate Attention
CARAFE: Content-Aware Reassembly of Features
CBAM: Convolutional Block Attention Module
CCDC: Continuous Change Detection and Classification
CEP: Complex Event Processing
CIoU: Complete Intersection over Union
CoLBP: Co-Occurrence of Local Binary Pattern
DARA: Dual Fusion Attention Residual Feature Attention
DBN: Deep Belief Network
DCNN: Deep Convolutional Neural Network
DDAM: Detail-Difference-Aware Module
DETR: Detection Transformer
DPPM: Dense Pyramid Pooling Module
DTMC: Discrete-Time Markov Chain
ECA: Efficient Channel Attention
ELM: Extreme Learning Machine
ESRGAN: Enhanced Super-Resolution Generative Adversarial Network
FCN: Fully Convolutional Network
FCOS: Fully Convolutional One-Stage
FFDI: Forest Fire Detection Index
FFDSM: Forest Fire Detection and Segmentation Model
FILDA: Firelight Detection Algorithm
FL: Federated Learning
FLAME: Fire Luminosity Airborne-based Machine Learning Evaluation
FSCN: Fully Symmetric Convolutional–Deconvolutional Neural Network
GCF: Global Context Fusion
GIS: Geographic Information System
GLCM: Gray Level Co-Occurrence Matrix
GMM: Gaussian Mixture Model
GRU: Gated Recurrent Unit
GSConv: Ghost Shuffle Convolution
HRI: Human–Robot Interaction
HDLBP: Hamming Distance Based Local Binary Pattern
ISSA: Improved Sparrow Search Algorithm
KNN: K-Nearest Neighbor
K-SVD: K-Singular Value Decomposition
LBP: Local Binary Pattern
LMINet: Label-Relevance Multi-Direction Interaction Network
LSTM: Long Short-Term Memory Networks
LwF: Learning without Forgetting
MAE-Net: Multi-Attention Fusion
MCCL: Multi-scale Context Contrasted Local Feature Module
MCAM: Multi-Connection Aggregation Method
MQTT: Message Queuing Telemetry Transport
MSD: Multi-Scale Detection
MTL: Multi-Task Learning
MWIR: Middle Wavelength Infrared
NBR: Normalized Burned Ratio
NDVI: Normalized Difference Vegetation Index
PANet: Path Aggregation Network
PConv: Partial Convolution
POD: Probability of Detection
POFD: Probability of False Detection
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PSNet: Pixel-level Supervision Neural Network
PSO: Particle Swarm Optimization
R-CNN: Region-Based Convolutional Neural Network
RECAB: Residual Efficient Channel Attention Block
RFB: Receptive Field Block
ROI: Region of Interest
RNN: Recurrent Neural Network
RS: Remote Sensing
SE-GhostNet: Squeeze and Excitation–GhostNet
SHAP: Shapley Additive Explanations
SIFT: Scale Invariant Feature Transform
SIoU: SCYLLA–Intersection Over Union
SPPF: Spatial Pyramid Pooling Fast
SPPF+: Spatial Pyramid Pooling Fast+
SVM: Support Vector Machine
TECNN: Transformer-Enhanced Convolutional Neural Network
TWSVM: Twin Support Vector Machine
USGS: United States Geological Survey
ViT: Vision Transformer
VHR: Very High Resolution
VIIRS: Visible Infrared Imaging Radiometer Suite
VSU: Video Surveillance Unit
WIoU: Wise–IoU
YOLO: You Only Look Once

References

  1. Brunner, I.; Godbold, D.L. Tree roots in a changing world. J. For. Res. 2007, 12, 78–82. [Google Scholar] [CrossRef]
  2. Ball, G.; Regier, P.; González-Pinzón, R.; Reale, J.; Van Horn, D. Wildfires increasingly impact western US fluvial networks. Nat. Commun. 2021, 12, 2484. [Google Scholar] [CrossRef] [PubMed]
  3. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A review on early forest fire detection systems using optical remote sensing. Sensors 2020, 20, 6442. [Google Scholar] [CrossRef] [PubMed]
  4. Truong, C.T.; Nguyen, T.H.; Vu, V.Q.; Do, V.H.; Nguyen, D.T. Enhancing fire detection technology: A UV-based system utilizing fourier spectrum analysis for reliable and accurate fire detection. Appl. Sci. 2023, 13, 7845. [Google Scholar] [CrossRef]
  5. Geetha, S.; Abhishek, C.; Akshayanat, C. Machine vision based fire detection techniques: A survey. Fire Technol. 2021, 57, 591–623. [Google Scholar] [CrossRef]
  6. Alkhatib, A.A. A review on forest fire detection techniques. Int. J. Distrib. Sens. Netw. 2014, 10, 597368. [Google Scholar] [CrossRef]
  7. Yuan, C.; Liu, Z.; Zhang, Y. Fire detection using infrared images for UAV-based forest fire surveillance. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 567–572. [Google Scholar]
  8. Yang, X.; Hua, Z.; Zhang, L.; Fan, X.; Zhang, F.; Ye, Q.; Fu, L. Preferred vector machine for forest fire detection. Pattern Recognit. 2023, 143, 109722. [Google Scholar] [CrossRef]
  9. Yuan, C.; Liu, Z.; Zhang, Y. Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance. J. Intell. Robot. Syst. 2019, 93, 337–349. [Google Scholar] [CrossRef]
  10. Li, X.; Song, W.; Lian, L.; Wei, X. Forest fire smoke detection using back-propagation neural network based on MODIS data. Remote Sens. 2015, 7, 4473–4498. [Google Scholar] [CrossRef]
  11. Mahmoud, M.A.; Ren, H. Forest Fire Detection Using a Rule-Based Image Processing Algorithm and Temporal Variation. Math. Probl. Eng. 2018, 2018, 7612487. [Google Scholar] [CrossRef]
  12. Khan, A.; Hassan, B.; Khan, S.; Ahmed, R.; Abuassba, A. DeepFire: A novel dataset and deep transfer learning benchmark for forest fire detection. Mob. Inf. Syst. 2022, 2022, 5358359. [Google Scholar] [CrossRef]
  13. Rangwala, A.S.; Raghavan, V. Mechanism of Fires: Chemistry and Physical Aspects; Springer Nature: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  14. Wu, D.; Zhang, C.; Ji, L.; Ran, R.; Wu, H.; Xu, Y. Forest fire recognition based on feature extraction from multi-view images. Trait. Signal 2021, 38, 775–783. [Google Scholar] [CrossRef]
  15. Qiu, X.; Xi, T.; Sun, D.; Zhang, E.; Li, C.; Peng, Y.; Wei, J.; Wang, G. Fire detection algorithm combined with image processing and flame emission spectroscopy. Fire Technol. 2018, 54, 1249–1263. [Google Scholar] [CrossRef]
  16. Dzigal, D.; Akagic, A.; Buza, E.; Brdjanin, A.; Dardagan, N. Forest fire detection based on color spaces combination. In Proceedings of the 2019 11th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 28–30 November 2019; pp. 595–599. [Google Scholar] [CrossRef]
  17. Khalil, A.; Rahman, S.U.; Alam, F.; Ahmad, I.; Khalil, I. Fire detection using multi color space and background modeling. Fire Technol. 2021, 57, 1221–1239. [Google Scholar] [CrossRef]
  18. Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video flame and smoke based fire detection algorithms: A literature review. Fire Technol. 2020, 56, 1943–1980. [Google Scholar] [CrossRef]
  19. Wu, H.; Wu, D.; Zhao, J. An intelligent fire detection approach through cameras based on computer vision methods. Process. Saf. Environ. Prot. 2019, 127, 245–256. [Google Scholar] [CrossRef]
  20. Khondaker, A.; Khandaker, A.; Uddin, J. Computer Vision-based Early Fire Detection Using Enhanced Chromatic Segmentation and Optical Flow Analysis Technique. Int. Arab. J. Inf. Technol. 2020, 17, 947–953. [Google Scholar] [CrossRef]
  21. He, Y. Smart detection of indoor occupant thermal state via infrared thermography, computer vision, and machine learning. Build. Environ. 2023, 228, 109811. [Google Scholar] [CrossRef]
  22. Mazur-Milecka, M.; Głowacka, N.; Kaczmarek, M.; Bujnowski, A.; Kaszyński, M.; Rumiński, J. Smart city and fire detection using thermal imaging. In Proceedings of the 2021 14th International Conference on Human System Interaction (HSI), Gdańsk, Poland, 8–10 July 2021; pp. 1–7. [Google Scholar] [CrossRef]
  23. Bouguettaya, A.; Zarzour, H.; Taberkit, A.M.; Kechida, A. A review on early wildfire detection from unmanned aerial vehicles using deep learning-based computer vision algorithms. Signal Process. 2022, 190, 108309. [Google Scholar] [CrossRef]
  24. Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379. [Google Scholar] [CrossRef]
  25. Saponara, S.; Elhanashi, A.; Gagliardi, A. Real-time video fire/smoke detection based on CNN in antifire surveillance systems. J. Real-Time Image Process. 2021, 18, 889–900. [Google Scholar] [CrossRef]
  26. Florath, J.; Keller, S. Supervised Machine Learning Approaches on Multispectral Remote Sensing Data for a Combined Detection of Fire and Burned Area. Remote Sens. 2022, 14, 657. [Google Scholar] [CrossRef]
  27. Mohammed, R. A real-time forest fire and smoke detection system using deep learning. Int. J. Nonlinear Anal. Appl. 2022, 13, 2053–2063. [Google Scholar] [CrossRef]
  28. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [PubMed]
  29. Mahmoud, M.A.I.; Ren, H. Forest fire detection and identification using image processing and SVM. J. Inf. Process. Syst. 2019, 15, 159–168. [Google Scholar] [CrossRef]
  30. Yuan, C.; Ghamry, K.A.; Liu, Z.; Zhang, Y. Unmanned aerial vehicle based forest fire monitoring and detection using image processing technique. In Proceedings of the 2016 IEEE Chinese Guidance, Navigation and Control Conference (CGNCC), Miami, FL, USA, 13–16 June 2016; pp. 1870–1875. [Google Scholar] [CrossRef]
  31. Rahman, E.U.; Khan, M.A.; Algarni, F.; Zhang, Y.; Irfan Uddin, M.; Ullah, I.; Ahmad, H.I. Computer Vision-Based Wildfire Smoke Detection Using UAVs. Math. Probl. Eng. 2021, 2021, 9977939. [Google Scholar] [CrossRef]
  32. Almasoud, A.S. Intelligent Deep Learning Enabled Wild Forest Fire Detection System. Comput. Syst. Sci. Eng. 2023, 44. [Google Scholar] [CrossRef]
  33. Chen, X.; Hopkins, B.; Wang, H.; O’Neill, L.; Afghah, F.; Razi, A.; Fulé, P.; Coen, J.; Rowell, E.; Watts, A. Wildland fire detection and monitoring using a drone-collected RGB/IR image dataset. IEEE Access 2022, 10, 121301–121317. [Google Scholar] [CrossRef]
  34. Dewangan, A.; Pande, Y.; Braun, H.W.; Vernon, F.; Perez, I.; Altintas, I.; Cottrell, G.W.; Nguyen, M.H. FIgLib & SmokeyNet: Dataset and deep learning model for real-time wildland fire smoke detection. Remote Sens. 2022, 14, 1007. [Google Scholar] [CrossRef]
  35. Zhou, Z.; Shi, Y.; Gao, Z.; Li, S. Wildfire smoke detection based on local extremal region segmentation and surveillance. Fire Saf. J. 2016, 85, 50–58. [Google Scholar] [CrossRef]
  36. Zhang, Q.X.; Lin, G.H.; Zhang, Y.M.; Xu, G.; Wang, J.J. Wildland forest fire smoke detection based on faster R-CNN using synthetic smoke images. Procedia Eng. 2018, 211, 441–446. [Google Scholar] [CrossRef]
  37. Sudhakar, S.; Vijayakumar, V.; Kumar, C.S.; Priya, V.; Ravi, L.; Subramaniyaswamy, V. Unmanned Aerial Vehicle (UAV) based Forest Fire Detection and monitoring for reducing false alarms in forest-fires. Comput. Commun. 2020, 149, 1–16. [Google Scholar] [CrossRef]
  38. Hossain, F.A.; Zhang, Y.; Yuan, C.; Su, C.Y. Wildfire flame and smoke detection using static image features and artificial neural network. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 23–27 July 2019. [Google Scholar] [CrossRef]
  39. Ghamry, K.A.; Kamel, M.A.; Zhang, Y. Cooperative forest monitoring and fire detection using a team of UAVs-UGVs. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016; pp. 1206–1211. [Google Scholar] [CrossRef]
  40. Akhloufi, M.A.; Couturier, A.; Castro, N.A. Unmanned aerial vehicles for wildland fires: Sensing, perception, cooperation and assistance. Drones 2021, 5, 15. [Google Scholar] [CrossRef]
  41. Battistoni, P.; Cantone, A.A.; Martino, G.; Passamano, V.; Romano, M.; Sebillo, M.; Vitiello, G. A cyber-physical system for wildfire detection and firefighting. Future Internet 2023, 15, 237. [Google Scholar] [CrossRef]
  42. Jiao, Z.; Zhang, Y.; Xin, J.; Mu, L.; Yi, Y.; Liu, H.; Liu, D. A deep learning based forest fire detection approach using UAV and YOLOv3. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 23–27 July 2019. [Google Scholar] [CrossRef]
  43. Ghali, R.; Akhloufi, M.A. Deep learning approaches for wildland fires using satellite remote sensing data: Detection, mapping, and prediction. Fire 2023, 6, 192. [Google Scholar] [CrossRef]
  44. Artés, T.; Oom, D.; De Rigo, D.; Durrant, T.H.; Maianti, P.; Libertà, G.; San-Miguel-Ayanz, J. A global wildfire dataset for the analysis of fire regimes and fire behaviour. Sci. Data 2019, 6, 296. [Google Scholar] [CrossRef] [PubMed]
  45. Sayad, Y.O.; Mousannif, H.; Al Moatassime, H. Predictive modeling of wildfires: A new dataset and machine learning approach. Fire Saf. J. 2019, 104, 130–146. [Google Scholar] [CrossRef]
  46. Zhang, G.; Wang, M.; Liu, K. Deep neural networks for global wildfire susceptibility modelling. Ecol. Indic. 2021, 127, 107735. [Google Scholar] [CrossRef]
  47. Zheng, S.; Zou, X.; Gao, P.; Zhang, Q.; Hu, F.; Zhou, Y.; Wu, Z.; Wang, W.; Chen, S. A forest fire recognition method based on modified deep CNN model. Forests 2024, 15, 111. [Google Scholar] [CrossRef]
  48. Zhang, L.; Wang, M.; Fu, Y.; Ding, Y. A Forest Fire Recognition Method Using UAV Images Based on Transfer Learning. Forests 2022, 13, 975. [Google Scholar] [CrossRef]
  49. Qian, J.; Lin, H. A Forest Fire Identification System Based on Weighted Fusion Algorithm. Forests 2022, 13, 1301. [Google Scholar] [CrossRef]
  50. Anh, N. Efficient Forest Fire Detection using Rule-Based Multi-color Space and Correlation Coefficient for Application in Unmanned Aerial Vehicles. KSII Trans. Internet Inf. Syst. 2022, 16, 381–404. [Google Scholar] [CrossRef]
  51. Zhang, J.; Zhu, H.; Wang, P.; Ling, X. ATT Squeeze U-Net: A lightweight Network for Forest Fire Detection and Recognition. IEEE Access 2021, 9, 10858–10870. [Google Scholar] [CrossRef]
  52. Qi, R.; Liu, Z. Extraction and Classification of Image Features for Fire Recognition Based on Convolutional Neural Network. Trait. Signal 2021, 38, 895–902. [Google Scholar] [CrossRef]
  53. Chanthiya, P.; Kalaivani, V. Forest fire detection on LANDSAT images using support vector machine. Concurr. Comput. Pract. Exp. 2021, 33, e6280. [Google Scholar] [CrossRef]
  54. Sousa, M.; Moutinho, A.; Almeida, M. Thermal Infrared Sensing for Near Real-Time Data-Driven Fire Detection and Monitoring Systems. Sensors 2020, 20, 6803. [Google Scholar] [CrossRef] [PubMed]
  55. Chung, M.; Han, Y.; Kim, Y. A Framework for Unsupervised Wildfire Damage Assessment Using VHR Satellite Images with PlanetScope Data. Remote Sens. 2020, 12, 3835. [Google Scholar] [CrossRef]
  56. Wang, Y.; Dang, L.; Ren, J. Forest fire image recognition based on convolutional neural network. J. Algorithm. Comput. Technol. 2019, 13, 1748302619887689. [Google Scholar] [CrossRef]
  57. Park, W.; Park, S.; Jung, H.; Won, J. An Extraction of Solar-contaminated Energy Part from MODIS Middle Infrared Channel Measurement to Detect Forest Fires. Korean J. Remote Sens. 2019, 35, 39–55. [Google Scholar] [CrossRef]
  58. Yuan, C.; Liu, Z.; Zhang, Y. Aerial Images-Based Forest Fire Detection for Firefighting Using Optical Remote Sensing Techniques and Unmanned Aerial Vehicles. J. Intell. Robot. Syst. 2017, 88, 635–654. [Google Scholar] [CrossRef]
  59. Prema, C.; Vinsley, S.; Suresh, S. Multi Feature Analysis of Smoke in YUV Color Space for Early Forest Fire Detection. Fire Technol. 2016, 52, 1319–1342. [Google Scholar] [CrossRef]
  60. Polivka, T.; Wang, J.; Ellison, L.; Hyer, E.; Ichoku, C. Improving Nocturnal Fire Detection With the VIIRS Day-Night Band. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5503–5519. [Google Scholar] [CrossRef]
  61. Lin, L. A Spatio-Temporal Model for Forest Fire Detection Using HJ-IRS Satellite Data. Remote Sens. 2016, 8, 403. [Google Scholar] [CrossRef]
  62. Yoon, S.; Min, J. An Intelligent Automatic Early Detection System of Forest Fire Smoke Signatures using Gaussian Mixture Model. J. Inf. Process. Syst. 2013, 9, 621–632. [Google Scholar] [CrossRef]
  63. Xue, Z.; Lin, H.; Wang, F. A Small Target Forest Fire Detection Model Based on YOLOv5 Improvement. Forests 2022, 13, 1332. [Google Scholar] [CrossRef]
  64. Seydi, S.T.; Saeidi, V.; Kalantar, B.; Ueda, N.; Halin, A.A. Fire-Net: A Deep Learning Framework for Active Forest Fire Detection. J. Sensors 2022, 2022, 8044390. [Google Scholar] [CrossRef]
  65. Lu, K.; Xu, R.; Li, J.; Lv, Y.; Lin, H.; Li, Y. A Vision-Based Detection and Spatial Localization Scheme for Forest Fire Inspection from UAV. Forests 2022, 13, 383. [Google Scholar] [CrossRef]
  66. Lu, K.; Huang, J.; Li, J.; Zhou, J.; Chen, X.; Liu, Y. MTL-FFDET: A Multi-Task Learning-Based Model for Forest Fire Detection. Forests 2022, 13, 1448. [Google Scholar] [CrossRef]
  67. Guan, Z.; Miao, X.; Mu, Y.; Sun, Q.; Ye, Q.; Gao, D. Forest Fire Segmentation from Aerial Imagery Data Using an Improved Instance Segmentation Model. Remote Sens. 2022, 14, 3159. [Google Scholar] [CrossRef]
  68. Li, W.; Lin, Q.; Wang, K.; Cai, K. Machine vision-based network monitoring system for solar-blind ultraviolet signal. Comput. Commun. 2021, 171, 157–162. [Google Scholar] [CrossRef]
  69. Kim, B.; Lee, J. A Bayesian Network-Based Information Fusion Combined with DNNs for Robust Video Fire Detection. Appl. Sci. 2021, 11, 7624. [Google Scholar] [CrossRef]
  70. Ghali, R.; Akhloufi, M.; Jmal, M.; Mseddi, W.; Attia, R. Wildfire Segmentation Using Deep Vision Transformers. Remote Sens. 2021, 13, 3527. [Google Scholar] [CrossRef]
  71. Toptas, B.; Hanbay, D. A new artificial bee colony algorithm-based color space for fire/flame detection. Soft Comput. 2020, 24, 10481–10492. [Google Scholar] [CrossRef]
  72. Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early fire detection based on aerial 360-degree sensors, deep convolution neural networks and exploitation of fire dynamic textures. Remote Sens. 2020, 12, 3177. [Google Scholar] [CrossRef]
  73. Ghali, R.; Akhloufi, M.; Mseddi, W. Deep Learning and Transformer Approaches for UAV-Based Wildfire Detection and Segmentation. Sensors 2022, 22, 1977. [Google Scholar] [CrossRef]
  74. Zhang, Q.; Ge, L.; Zhang, R.; Metternicht, G.; Liu, C.; Du, Z. Towards a Deep-Learning-Based Framework of Sentinel-2 Imagery for Automated Active Fire Detection. Remote Sens. 2021, 13, 4790. [Google Scholar] [CrossRef]
  75. Rashkovetsky, D.; Mauracher, F.; Langer, M.; Schmitt, M. Wildfire Detection From Multisensor Satellite Imagery Using Deep Semantic Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7001–7016. [Google Scholar] [CrossRef]
  76. Pereira, G.; Fusioka, A.; Nassu, B.; Minetto, R. Active fire detection in Landsat-8 imagery: A large-scale dataset and a deep-learning study. ISPRS J. Photogramm. Remote Sens. 2021, 178, 171–186. [Google Scholar] [CrossRef]
  77. Benzekri, W.; Moussati, A.; Moussaoui, O.; Berrajaa, M. Early Forest Fire Detection System using Wireless Sensor Network and Deep Learning. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 496–503. [Google Scholar] [CrossRef]
  78. Zhao, Y.; Ban, Y. GOES-R Time Series for Early Detection of Wildfires with Deep GRU-Network. Remote Sens. 2022, 14, 4347. [Google Scholar] [CrossRef]
  79. Hong, Z. Active Fire Detection Using a Novel Convolutional Neural Network Based on Himawari-8 Satellite Images. Front. Environ. Sci. 2022, 10, 794028. [Google Scholar] [CrossRef]
  80. Xu, R.; Lin, H.; Lu, K.; Cao, L.; Liu, Y. A Forest Fire Detection System Based on Ensemble Learning. Forests 2021, 12, 217. [Google Scholar] [CrossRef]
  81. Pan, H.; Badawi, D.; Zhang, X.; Cetin, A. Additive neural network for forest fire detection. Signal Image Video Process. 2020, 14, 675–682. [Google Scholar] [CrossRef]
  82. Zhao, Y.; Ma, J.; Li, X.; Zhang, J. Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery. Sensors 2018, 18, 712. [Google Scholar] [CrossRef] [PubMed]
  83. Zhang, A.; Zhang, A. Real-Time Wildfire Detection and Alerting with a Novel Machine Learning Approach: A New Systematic Approach of Using Convolutional Neural Network (CNN) to Achieve Higher Accuracy in Automation. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 1–6. [Google Scholar]
  84. Wahyono; Harjoko, A.; Dharmawan, A.; Adhinata, F.D.; Kosala, G.; Jo, K.H. Real-time forest fire detection framework based on artificial intelligence using color probability model and motion feature analysis. Fire 2022, 5, 23. [Google Scholar] [CrossRef]
  85. Phan, T.; Quach, N.; Nguyen, T.; Nguyen, T.; Jo, J.; Nguyen, Q. Real-time wildfire detection with semantic explanations. Expert Syst. Appl. 2022, 201, 117007. [Google Scholar] [CrossRef]
  86. Yang, X. Pixel-level automatic annotation for forest fire image. Eng. Appl. Artif. Intell. 2021, 104, 104353. [Google Scholar] [CrossRef]
  87. Shamsoshoara, A.; Afghah, F.; Razi, A.; Zheng, L.; Fule, P.; Blasch, E. Aerial imagery pile burn detection using deep learning: The FLAME dataset. Comput. Netw. 2021, 193, 108001. [Google Scholar] [CrossRef]
  88. Liu, Z.; Zhang, K.; Wang, C.; Huang, S. Research on the identification method for the forest fire based on deep learning. Optik 2020, 223, 165491. [Google Scholar] [CrossRef]
  89. Khurana, M.; Saxena, V. A Unified Approach to Change Detection Using an Adaptive Ensemble of Extreme Learning Machines. IEEE Geosci. Remote Sens. Lett. 2020, 17, 794–798. [Google Scholar] [CrossRef]
  90. Huang, X.; Du, L. Fire Detection and Recognition Optimization Based on Virtual Reality Video Image. IEEE Access 2020, 8, 77951–77961. [Google Scholar] [CrossRef]
  91. Govil, K.; Welch, M.L.; Ball, J.T.; Pennypacker, C.R. Preliminary results from a wildfire detection system using deep learning on remote camera images. Remote Sens. 2020, 12, 166. [Google Scholar] [CrossRef]
  92. Ouni, S.; Ayoub, Z.; Kamoun, F. Auto-organization approach with adaptive frame periods for IEEE 802.15.4/zigbee forest fire detection system. Wirel. Netw. 2019, 25, 4059–4076. [Google Scholar] [CrossRef]
  93. Jang, E.; Kang, Y.; Im, J.; Lee, D.; Yoon, J.; Kim, S. Detection and Monitoring of Forest Fires Using Himawari-8 Geostationary Satellite Data in South Korea. Remote Sens. 2019, 11, 271. [Google Scholar] [CrossRef]
  94. Mao, W.; Wang, W.; Dou, Z.; Li, Y. Fire Recognition Based On Multi-Channel Convolutional Neural Network. Fire Technol. 2018, 54, 531–554. [Google Scholar] [CrossRef]
  95. Zheng, S.; Gao, P.; Zhou, Y.; Wu, Z.; Wan, L.; Hu, F.; Wang, W.; Zou, X.; Chen, S. An accurate forest fire recognition method based on improved BPNN and IoT. Remote Sens. 2023, 15, 2365. [Google Scholar] [CrossRef]
  96. Liu, T.; Chen, W.; Lin, X.; Mu, Y.; Huang, J.; Gao, D.; Xu, J. Defogging Learning Based on an Improved DeepLabV3+ Model for Accurate Foggy Forest Fire Segmentation. Forests 2023, 14, 1859. [Google Scholar] [CrossRef]
  97. Reis, H.C.; Turk, V. Detection of forest fire using deep convolutional neural networks with transfer learning approach. Appl. Soft Comput. 2023, 143, 110362. [Google Scholar] [CrossRef]
  98. Pang, Y.; Wu, Y.; Yuan, Y. FuF-Det: An Early Forest Fire Detection Method under Fog. Remote Sens. 2023, 15, 5435. [Google Scholar] [CrossRef]
  99. Lin, J.; Lin, H.; Wang, F. A semi-supervised method for real-time forest fire detection algorithm based on adaptively spatial feature fusion. Forests 2023, 14, 361. [Google Scholar] [CrossRef]
  100. Akyol, K. Robust stacking-based ensemble learning model for forest fire detection. Int. J. Environ. Sci. Technol. 2023, 20, 13245–13258. [Google Scholar] [CrossRef]
  101. Niu, K.; Wang, C.; Xu, J.; Yang, C.; Zhou, X.; Yang, X. An Improved YOLOv5s-Seg Detection and Segmentation Model for the Accurate Identification of Forest Fires Based on UAV Infrared Image. Remote Sens. 2023, 15, 4694. [Google Scholar] [CrossRef]
  102. Sarikaya Basturk, N. Forest fire detection in aerial vehicle videos using a deep ensemble neural network model. Aircr. Eng. Aerosp. Technol. 2023, 95, 1257–1267. [Google Scholar] [CrossRef]
  103. Rahman, A.; Sakif, S.; Sikder, N.; Masud, M.; Aljuaid, H.; Bairagi, A.K. Unmanned aerial vehicle assisted forest fire detection using deep convolutional neural network. Intell. Autom. Soft Comput 2023, 35, 3259–3277. [Google Scholar] [CrossRef]
  104. Ghali, R.; Akhloufi, M.A. CT-Fire: A CNN-Transformer for wildfire classification on ground and aerial images. Int. J. Remote Sens. 2023, 44, 7390–7415. [Google Scholar] [CrossRef]
  105. Abdusalomov, A.B.; Islam, B.M.S.; Nasimov, R.; Mukhiddinov, M.; Whangbo, T.K. An improved forest fire detection method based on the detectron2 model and a deep learning approach. Sensors 2023, 23, 1512. [Google Scholar] [CrossRef]
  106. Supriya, Y.; Gadekallu, T.R. Particle swarm-based federated learning approach for early detection of forest fires. Sustainability 2023, 15, 964. [Google Scholar] [CrossRef]
  107. Khennou, F.; Akhloufi, M.A. Improving wildland fire spread prediction using deep U-Nets. Sci. Remote Sens. 2023, 8, 100101. [Google Scholar] [CrossRef]
  108. Peruzzi, G.; Pozzebon, A.; Van Der Meer, M. Fight fire with fire: Detecting forest fires with embedded machine learning models dealing with audio and images on low power iot devices. Sensors 2023, 23, 783. [Google Scholar] [CrossRef]
  109. Barmpoutis, P.; Kastridis, A.; Stathaki, T.; Yuan, J.; Shi, M.; Grammalidis, N. Suburban Forest Fire Risk Assessment and Forest Surveillance Using 360-Degree Cameras and a Multiscale Deformable Transformer. Remote Sens. 2023, 15, 1995. [Google Scholar] [CrossRef]
  110. Almeida, J.S.; Jagatheesaperumal, S.K.; Nogueira, F.G.; de Albuquerque, V.H.C. EdgeFireSmoke++: A novel lightweight algorithm for real-time forest fire detection and visualization using internet of things-human machine interface. Expert Syst. Appl. 2023, 221, 119747. [Google Scholar] [CrossRef]
  111. Zheng, H.; Dembele, S.; Wu, Y.; Liu, Y.; Chen, H.; Zhang, Q. A lightweight algorithm capable of accurately identifying forest fires from UAV remote sensing imagery. Front. For. Glob. Chang. 2023, 6, 1134942. [Google Scholar] [CrossRef]
  112. Shahid, M.; Chen, S.F.; Hsu, Y.L.; Chen, Y.Y.; Chen, Y.L.; Hua, K.L. Forest fire segmentation via temporal transformer from aerial images. Forests 2023, 14, 563. [Google Scholar] [CrossRef]
  113. Ahmad, K.; Khan, M.S.; Ahmed, F.; Driss, M.; Boulila, W.; Alazeb, A.; Alsulami, M.; Alshehri, M.S.; Ghadi, Y.Y.; Ahmad, J. FireXnet: An explainable AI-based tailored deep learning model for wildfire detection on resource-constrained devices. Fire Ecol. 2023, 19, 54. [Google Scholar] [CrossRef]
114. Wang, X.; Pan, Z.; Gao, H.; He, N.; Gao, T. An efficient model for real-time wildfire detection in complex scenarios based on multi-head attention mechanism. J. Real-Time Image Process. 2023, 20, 66. [Google Scholar] [CrossRef]
  115. Ying, L.X.; Shen, Z.H.; Yang, M.Z.; Piao, S.L. Wildfire Detection Probability of MODIS Fire Products under the Constraint of Environmental Factors: A Study Based on Confirmed Ground Wildfire Records. Remote Sens. 2019, 11, 31. [Google Scholar] [CrossRef]
  116. Liu, T. Video Smoke Detection Method Based on Change-Cumulative Image and Fusion Deep Network. Sensors 2019, 19, 5060. [Google Scholar] [CrossRef] [PubMed]
  117. Bugaric, M.; Jakovcevic, T.; Stipanicev, D. Adaptive estimation of visual smoke detection parameters based on spatial data and fire risk index. Comput. Vis. Image Underst. 2014, 118, 184–196. [Google Scholar] [CrossRef]
  118. Xie, J.; Yu, F.; Wang, H.; Zheng, H. Class Activation Map-Based Data Augmentation for Satellite Smoke Scene Detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6510905. [Google Scholar] [CrossRef]
  119. Zhu, G.; Chen, Z.; Liu, C.; Rong, X.; He, W. 3D video semantic segmentation for wildfire smoke. Mach. Vis. Appl. 2020, 31, 50. [Google Scholar] [CrossRef]
  120. Li, X.; Chen, Z.; Wu, Q.; Liu, C. 3D Parallel Fully Convolutional Networks for Real-Time Video Wildfire Smoke Detection. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 89–103. [Google Scholar] [CrossRef]
  121. Peng, Y.; Wang, Y. Real-time forest smoke detection using hand-designed features and deep learning. Comput. Electron. Agric. 2019, 167, 105029. [Google Scholar] [CrossRef]
  122. Lin, G.; Zhang, Y.; Xu, G.; Zhang, Q. Smoke detection on video sequences using 3D convolutional neural networks. Fire Technol. 2019, 55, 1827–1847. [Google Scholar] [CrossRef]
  123. Gao, Y.; Cheng, P. Forest Fire Smoke Detection Based on Visual Smoke Root and Diffusion Model. Fire Technol. 2019, 55, 1801–1826. [Google Scholar] [CrossRef]
  124. Jakovcevic, T.; Bugaric, M.; Stipanicev, D. A Stereo Approach to Wildfire Smoke Detection: The Improvement of the Existing Methods by Adding a New Dimension. Comput. Inform. 2018, 37, 476–508. [Google Scholar] [CrossRef]
  125. Jia, Y.; Yuan, J.; Wang, J.; Fang, J.; Zhang, Y.; Zhang, Q. A Saliency-Based Method for Early Smoke Detection in Video Sequences. Fire Technol. 2016, 52, 1271–1292. [Google Scholar] [CrossRef]
  126. Chen, S.; Li, W.; Cao, Y.; Lu, X. Combining the Convolution and Transformer for Classification of Smoke-Like Scenes in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4512519. [Google Scholar] [CrossRef]
  127. Guede-Fernandez, F.; Martins, L.; Almeida, R.; Gamboa, H.; Vieira, P. A Deep Learning Based Object Identification System for Forest Fire Detection. Fire 2021, 4, 75. [Google Scholar] [CrossRef]
  128. Yazdi, A.; Qin, H.; Jordan, C.; Yang, L.; Yan, F. Nemo: An Open-Source Transformer-Supercharged Benchmark for Fine-Grained Wildfire Smoke Detection. Remote Sens. 2022, 14, 3979. [Google Scholar] [CrossRef]
  129. Shi, J.; Wang, W.; Gao, Y.; Yu, N. Optimal Placement and Intelligent Smoke Detection Algorithm for Wildfire-Monitoring Cameras. IEEE Access 2020, 8, 72326–72339. [Google Scholar] [CrossRef]
  130. Hossain, F.A.; Zhang, Y.M.; Tonima, M.A. Forest fire flame and smoke detection from UAV-captured images using fire-specific color features and multi-color space local binary pattern. J. Unmanned Veh. Syst. 2020, 8, 285–309. [Google Scholar] [CrossRef]
  131. Li, T.; Zhao, E.; Zhang, J.; Hu, C. Detection of Wildfire Smoke Images Based on a Densely Dilated Convolutional Network. Electronics 2019, 8, 1131. [Google Scholar] [CrossRef]
  132. Cao, Y.; Yang, F.; Tang, Q.; Lu, X. An Attention Enhanced Bidirectional LSTM for Early Forest Fire Smoke Recognition. IEEE Access 2019, 7, 154732–154742. [Google Scholar] [CrossRef]
  133. Prema, C.; Suresh, S.; Krishnan, M.; Leema, N. A Novel Efficient Video Smoke Detection Algorithm Using Co-occurrence of Local Binary Pattern Variants. Fire Technol. 2022, 58, 3139–3165. [Google Scholar] [CrossRef]
  134. Toulouse, T.; Rossi, L.; Campana, A.; Celik, T.; Akhloufi, M.A. Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf. J. 2017, 92, 188–194. [Google Scholar] [CrossRef]
  135. Kim, S.Y.; Muminov, A. Forest fire smoke detection based on deep learning approaches and unmanned aerial vehicle images. Sensors 2023, 23, 5702. [Google Scholar] [CrossRef]
  136. Yang, H.; Wang, J.; Wang, J. Efficient Detection of Forest Fire Smoke in UAV Aerial Imagery Based on an Improved Yolov5 Model and Transfer Learning. Remote Sens. 2023, 15, 5527. [Google Scholar] [CrossRef]
  137. Huang, J.; Zhou, J.; Yang, H.; Liu, Y.; Liu, H. A small-target forest fire smoke detection model based on deformable transformer for end-to-end object detection. Forests 2023, 14, 162. [Google Scholar] [CrossRef]
  138. Saydirasulovich, S.N.; Mukhiddinov, M.; Djuraev, O.; Abdusalomov, A.; Cho, Y.I. An improved wildfire smoke detection based on YOLOv8 and UAV images. Sensors 2023, 23, 8374. [Google Scholar] [CrossRef]
  139. Chen, G.; Cheng, R.; Lin, X.; Jiao, W.; Bai, D.; Lin, H. LMDFS: A lightweight model for detecting forest fire smoke in UAV images based on YOLOv7. Remote Sens. 2023, 15, 3790. [Google Scholar] [CrossRef]
  140. Qiao, Y.; Jiang, W.; Wang, F.; Su, G.; Li, X.; Jiang, J. FireFormer: An efficient Transformer to identify forest fire from surveillance cameras. Int. J. Wildland Fire 2023, 32, 1364–1380. [Google Scholar] [CrossRef]
  141. Fernandes, A.M.; Utkin, A.B.; Chaves, P. Automatic early detection of wildfire smoke with visible-light cameras and EfficientDet. J. Fire Sci. 2023, 41, 122–135. [Google Scholar] [CrossRef]
  142. Tao, H. A label-relevance multi-direction interaction network with enhanced deformable convolution for forest smoke recognition. Expert Syst. Appl. 2024, 236, 121383. [Google Scholar] [CrossRef]
  143. Tao, H.; Duan, Q.; Lu, M.; Hu, Z. Learning discriminative feature representation with pixel-level supervision for forest smoke recognition. Pattern Recognit. 2023, 143, 109761. [Google Scholar] [CrossRef]
  144. James, G.L.; Ansaf, R.B.; Al Samahi, S.S.; Parker, R.D.; Cutler, J.M.; Gachette, R.V.; Ansaf, B.I. An Efficient Wildfire Detection System for AI-Embedded Applications Using Satellite Imagery. Fire 2023, 6, 169. [Google Scholar] [CrossRef]
  145. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention. Remote Sens. 2019, 11, 1702. [Google Scholar] [CrossRef]
  146. Larsen, A.; Hanigan, I.; Reich, B.J.; Qin, Y.; Cope, M.; Morgan, G.; Rappold, A.G. A deep learning approach to identify smoke plumes in satellite imagery in near-real time for health risk communication. J. Expo. Sci. Environ. Epidemiol. 2021, 31, 170–176. [Google Scholar] [CrossRef]
  147. Yuan, F.; Zhang, L.; Wan, B.; Xia, X.; Shi, J. Convolutional neural networks based on multi-scale additive merging layers for visual smoke recognition. Mach. Vis. Appl. 2019, 30, 345–358. [Google Scholar] [CrossRef]
  148. Pundir, A.S.; Raman, B. Dual deep learning model for image based smoke detection. Fire Technol. 2019, 55, 2419–2442. [Google Scholar] [CrossRef]
  149. Wu, H.; Zhang, A.; Han, Y.; Nan, J.; Li, K. Fast stochastic configuration network based on an improved sparrow search algorithm for fire flame recognition. Knowl.-Based Syst. 2022, 245, 108626. [Google Scholar] [CrossRef]
  150. Buza, E.; Akagic, A. Unsupervised method for wildfire flame segmentation and detection. IEEE Access 2022, 10, 55213–55225. [Google Scholar] [CrossRef]
  151. Zhao, Y.; Tang, G.; Xu, M. Hierarchical detection of wildfire flame video from pixel level to semantic level. Expert Syst. Appl. 2015, 42, 4097–4104. [Google Scholar] [CrossRef]
  152. Prema, C.; Vinsley, S.; Suresh, S. Efficient Flame Detection Based on Static and Dynamic Texture Analysis in Forest Fire Detection. Fire Technol. 2018, 54, 255–288. [Google Scholar] [CrossRef]
  153. Zhang, H.; Zhang, N.; Xiao, N. Fire detection and identification method based on visual attention mechanism. Optik 2015, 126, 5011–5018. [Google Scholar] [CrossRef]
  154. Liu, H.; Hu, H.; Zhou, F.; Yuan, H. Forest flame detection in unmanned aerial vehicle imagery based on YOLOv5. Fire 2023, 6, 279. [Google Scholar] [CrossRef]
  155. Wang, L.; Zhang, H.; Zhang, Y.; Hu, K.; An, K. A deep learning-based experiment on forest wildfire detection in machine vision course. IEEE Access 2023, 11, 32671–32681. [Google Scholar] [CrossRef]
  156. Kong, S.; Deng, J.; Yang, L.; Liu, Y. An attention-based dual-encoding network for fire flame detection using optical remote sensing. Eng. Appl. Artif. Intell. 2024, 127, 107238. [Google Scholar] [CrossRef]
  157. Kaliyev, D.; Shvets, O.; Györök, G. Computer Vision-based Fire Detection using Enhanced Chromatic Segmentation and Optical Flow Model. Acta Polytech. Hung. 2023, 20, 27–45. [Google Scholar] [CrossRef]
  158. Chen, B.; Bai, D.; Lin, H.; Jiao, W. Flametransnet: Advancing forest flame segmentation with fusion and augmentation techniques. Forests 2023, 14, 1887. [Google Scholar] [CrossRef]
  159. Morandini, F.; Toulouse, T.; Silvani, X.; Pieri, A.; Rossi, L. Image-based diagnostic system for the measurement of flame properties and radiation. Fire Technol. 2019, 55, 2443–2463. [Google Scholar] [CrossRef]
  160. Chen, Y.; Xu, W.; Zuo, J.; Yang, K. The fire recognition algorithm using dynamic feature fusion and IV-SVM classifier. Clust. Comput. 2019, 22, 7665–7675. [Google Scholar] [CrossRef]
  161. Dimitropoulos, K.; Barmpoutis, P.; Grammalidis, N. Spatio-temporal flame modeling and dynamic texture analysis for automatic video-based fire detection. IEEE Trans. Circuits Syst. Video Technol. 2014, 25, 339–351. [Google Scholar] [CrossRef]
  162. Zheng, X.; Chen, F.; Lou, L.; Cheng, P.; Huang, Y. Real-time detection of full-scale forest fire smoke based on deep convolution neural network. Remote Sens. 2022, 14, 536. [Google Scholar] [CrossRef]
  163. Martins, L.; Guede-Fernandez, F.; Almeida, R.; Gamboa, H.; Vieira, P. Real-Time Integration of Segmentation Techniques for Reduction of False Positive Rates in Fire Plume Detection Systems during Forest Fires. Remote Sens. 2022, 14, 2701. [Google Scholar] [CrossRef]
164. Fernandes, A.; Utkin, A.; Chaves, P. Automatic Early Detection of Wildfire Smoke With Visible-Light Cameras Using Deep Learning and Visual Explanation. IEEE Access 2022, 10, 12814–12828. [Google Scholar] [CrossRef]
165. Jiang, Y.; Wei, R.; Chen, J.; Wang, G. Deep Learning of Qinling Forest Fire Anomaly Detection Based on Genetic Algorithm Optimization. Univ. Politeh. Buchar. Sci. Bull. Ser. Electr. Eng. Comput. Sci. 2021, 83, 75–84. [Google Scholar]
  166. Perrolas, G.; Niknejad, M.; Ribeiro, R.; Bernardino, A. Scalable Fire and Smoke Segmentation from Aerial Images Using Convolutional Neural Networks and Quad-Tree Search. Sensors 2022, 22, 1701. [Google Scholar] [CrossRef]
  167. Li, J. Adaptive linear feature-reuse network for rapid forest fire smoke detection model. Ecol. Inform. 2022, 68, 101584. [Google Scholar] [CrossRef]
  168. Hu, Y.; Zhan, J.; Zhou, G.; Chen, A.; Cai, W.; Guo, K.; Hu, Y.; Li, L. Fast forest fire smoke detection using MVMNet. Knowl.-Based Syst. 2022, 241, 108219. [Google Scholar] [CrossRef]
  169. Almeida, J.; Huang, C.; Nogueira, F.; Bhatia, S.; Albuquerque, V. EdgeFireSmoke: A Novel lightweight CNN Model for Real-Time Video Fire-Smoke Detection. IEEE Trans. Ind. Inform. 2022, 18, 7889–7898. [Google Scholar] [CrossRef]
  170. Zhao, E.; Liu, Y.; Zhang, J.; Tian, Y. Forest Fire Smoke Recognition Based on Anchor Box Adaptive Generation Method. Electronics 2021, 10, 566. [Google Scholar] [CrossRef]
  171. Pan, J.; Ou, X.; Xu, L. A Collaborative Region Detection and Grading Framework for Forest Fire Smoke Using Weakly Supervised Fine Segmentation and lightweight Faster-RCNN. Forests 2021, 12, 768. [Google Scholar] [CrossRef]
  172. Tran, D.; Park, M.; Jeon, Y.; Bak, J.; Park, S. Forest-Fire Response System Using Deep-Learning-Based Approaches With CCTV Images and Weather Data. IEEE Access 2022, 10, 66061–66071. [Google Scholar] [CrossRef]
  173. Ghosh, R.; Kumar, A. A hybrid deep learning model by combining convolutional neural network and recurrent neural network to detect forest fire. Multimed. Tools Appl. 2022, 81, 38643–38660. [Google Scholar] [CrossRef]
  174. Ayala, A.; Fernandes, B.; Cruz, F.; Macedo, D.; Zanchettin, C. Convolution Optimization in Fire Classification. IEEE Access 2022, 10, 23642–23658. [Google Scholar] [CrossRef]
  175. Lee, Y.; Shim, J. False Positive Decremented Research for Fire and Smoke Detection in Surveillance Camera using Spatial and Temporal Features Based on Deep Learning. Electronics 2019, 8, 1167. [Google Scholar] [CrossRef]
  176. Higa, L. Active Fire Mapping on Brazilian Pantanal Based on Deep Learning and CBERS 04A Imagery. Remote Sens. 2022, 14, 688. [Google Scholar] [CrossRef]
  177. Wu, X.; Cao, Y.; Lu, X.; Leung, H. Patchwise dictionary learning for video forest fire smoke detection in wavelet domain. Neural Comput. Appl. 2021, 33, 7965–7977. [Google Scholar] [CrossRef]
178. Wang, S.; Zhao, J.; Ta, N.; Zhao, X.; Xiao, M.; Wei, H. A real-time deep learning forest fire monitoring algorithm based on an improved Pruned plus KD model. J. Real-Time Image Process. 2021, 18, 2319–2329. [Google Scholar] [CrossRef]
  179. Sathishkumar, V.E.; Cho, J.; Subramanian, M.; Naren, O.S. Forest fire and smoke detection using deep learning-based learning without forgetting. Fire Ecol. 2023, 19, 9. [Google Scholar] [CrossRef]
  180. Chen, Y.; Li, J.; Sun, K.; Zhang, Y. A lightweight early forest fire and smoke detection method. J. Supercomput. 2024, 80, 9870–9893. [Google Scholar] [CrossRef]
  181. Wang, A.; Liang, G.; Wang, X.; Song, Y. Application of the YOLOv6 Combining CBAM and CIoU in Forest Fire and Smoke Detection. Forests 2023, 14, 2261. [Google Scholar] [CrossRef]
  182. Li, J.; Xu, R.; Liu, Y. An improved forest fire and smoke detection model based on yolov5. Forests 2023, 14, 833. [Google Scholar] [CrossRef]
183. Sun, B.; Wang, Y.; Wu, S. An efficient lightweight CNN model for real-time fire smoke detection. J. Real-Time Image Process. 2023, 20, 74. [Google Scholar] [CrossRef]
  184. Bahhar, C.; Ksibi, A.; Ayadi, M.; Jamjoom, M.M.; Ullah, Z.; Soufiene, B.O.; Sakli, H. Wildfire and smoke detection using staged YOLO model and ensemble CNN. Electronics 2023, 12, 228. [Google Scholar] [CrossRef]
  185. Zhao, J.; Zhang, Z.; Liu, S.; Tao, Y.; Liu, Y. Design and Research of an Articulated Tracked Firefighting Robot. Sensors 2022, 22, 5086. [Google Scholar] [CrossRef]
  186. Rodriguez-Sanchez, M.; Fernandez-Jimenez, L.; Jimenez, A.; Vaquero, J.; Borromeo, S.; Lazaro-Galilea, J. HelpResponder-System for the Security of First Responder Interventions. Sensors 2021, 21, 2614. [Google Scholar] [CrossRef]
  187. Radha, D.; Kumar, M.; Telagam, N.; Sabarimuthu, M. Smart Sensor Network-Based Autonomous Fire Extinguish Robot Using IoT. Int. J. Online Biomed. Eng. 2021, 17, 101–110. [Google Scholar] [CrossRef]
  188. Guo, A.; Jiang, T.; Li, J.; Cui, Y.; Li, J.; Chen, Z. Design of a small wheel-foot hybrid firefighting robot for infrared visual fire recognition. Mech. Based Des. Struct. Mach. 2021, 51, 4432–4451. [Google Scholar] [CrossRef]
  189. Yahaya, I.; Yeong, G.; Zhang, L.; Raghavan, V.; Mahyuddin, M. Autonomous Safety Mechanism for Building: Fire Fighter Robot with Localized Fire Extinguisher. Int. J. Integr. Eng. 2020, 12, 304–313. [Google Scholar]
  190. Ferreira, L.; Coimbra, A.; Almeida, A. Autonomous System for Wildfire and Forest Fire Early Detection and Control. Inventions 2020, 5, 41. [Google Scholar] [CrossRef]
  191. Aliff, M.; Yusof, M.; Sani, N.; Zainal, A. Development of Fire Fighting Robot (QRob). Int. J. Adv. Comput. Sci. Appl. 2019, 10, 142–147. [Google Scholar] [CrossRef]
  192. Bushnaq, O.; Chaaban, A.; Al-Naffouri, T. The Role of UAV-IoT Networks in Future Wildfire Detection. IEEE Internet Things J. 2021, 8, 16984–16999. [Google Scholar] [CrossRef]
  193. Cruz, H.; Eckert, M.; Meneses, J.; Martinez, J. Efficient Forest Fire Detection Index for Application in Unmanned Aerial Systems (UASs). Sensors 2016, 16, 893. [Google Scholar] [CrossRef]
194. Yandouzi, M.; Grari, M.; Berrahal, M.; Idrissi, I.; Moussaoui, O.; Azizi, M.; Ghoumid, K.; Elmiad, A.K. Investigation of combining deep learning object recognition with drones for forest fire detection and monitoring. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 377–384. [Google Scholar] [CrossRef]
  195. Namburu, A.; Selvaraj, P.; Mohan, S.; Ragavanantham, S.; Eldin, E.T. Forest fire identification in uav imagery using x-mobilenet. Electronics 2023, 12, 733. [Google Scholar] [CrossRef]
  196. Rui, X.; Li, Z.; Zhang, X.; Li, Z.; Song, W. A RGB-Thermal based adaptive modality learning network for day–night wildfire identification. Int. J. Appl. Earth Obs. Geoinf. 2023, 125, 103554. [Google Scholar] [CrossRef]
  197. Choutri, K.; Lagha, M.; Meshoul, S.; Batouche, M.; Bouzidi, F.; Charef, W. Fire Detection and Geo-Localization Using UAV’s Aerial Images and Yolo-Based Models. Appl. Sci. 2023, 13, 11548. [Google Scholar] [CrossRef]
  198. Pena, P.F.; Ragab, A.R.; Luna, M.A.; Ale Isaac, M.S.; Campoy, P. WILD HOPPER: A heavy-duty UAV for day and night firefighting operations. Heliyon 2022, 8, e09588. [Google Scholar] [CrossRef]
  199. Aydin, B.; Selvi, E.; Tao, J.; Starek, M.J. Use of fire-extinguishing balls for a conceptual system of drone-assisted wildfire fighting. Drones 2019, 3, 17. [Google Scholar] [CrossRef]
200. Soliman, A.M.S.; Cagan, S.C.; Buldum, B.B. The design of a rotary-wing unmanned aerial vehicles-payload drop mechanism for fire-fighting services using fire-extinguishing balls. Appl. Sci. 2019, 9, 1259. [Google Scholar] [CrossRef]
  201. Roldán-Gómez, J.J.; González-Gironda, E.; Barrientos, A. A survey on robotic technologies for forest firefighting: Applying drone swarms to improve firefighters’ efficiency and safety. Appl. Sci. 2021, 11, 363. [Google Scholar] [CrossRef]
  202. Zhu, J.; Pan, L.; Zhao, G. An Improved Near-Field Computer Vision for Jet Trajectory Falling Position Prediction of Intelligent Fire Robot. Sensors 2020, 20, 7029. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Selected areas for research.
Figure 2. PRISMA framework.
Figure 3. Distribution of the number of publications over the period of 2013 to 2023.
Figure 4. Publications in selected categories.
Figure 5. ACC and its standard deviation (- - -) for fire.
Figure 6. ACC and its standard deviation (- - -) for smoke.
Figure 7. ACC and its standard deviation (- - -) for fire and flame.
Figure 8. ACC and its standard deviation (- - -) for fire and smoke.
Figure 9. ACC and its standard deviation (- - -) for applications of robots in fire detection and extinguishing.
Table 1. List of the past work related to fire detection.
| Ref | Dataset | Data Type | Method | Objective | Achievement |
|---|---|---|---|---|---|
| [48] | 47,992 images | Images | Transfer learning | Achieving early prevention and control of large-scale forest fires. | Recognition accuracy of 79.48% through the FTResNet50 model. |
| [49] | 2976 images | Images | YOLOv5 and EfficientDet | Overcoming the shortcomings of manual feature extraction and achieving higher accuracy in forest fire recognition by weighted fusion. | The average accuracy of the proposed model for forest fire identification reached 87%. |
| [50] | 11 videos | Videos | YCbCr and correlation coefficient | Achieving efficient forest fire detection using a rule-based multi-color space and a correlation coefficient. | Achieved an F-score of 95.87% and an accuracy of 97.89% on fire detection. |
| [51] | 11,456 images | Images | SqueezeNet | Identifying the existence of fire by first segmenting all fire-like areas and then processing them through the classification module. | Attained 93% accuracy. |
| [52] | 2100 images | Images | CNN | Extracting and classifying image features for fire recognition based on a CNN. | Achieved a classification accuracy of around 95%. |
| [53] | * data obtained from the USGS website | Satellite images | SVM | Performing forest fire detection on LANDSAT images using SVM. | Obtained 99.21% accuracy and a high precision of 98.41% on fire detection. |
| [54] | 12,000 frames | Thermal images | Automatic gain control algorithm | Utilizing thermal infrared sensing for near real-time, data-driven fire detection and monitoring. | The proposed approach achieved better situational awareness than existing methods. |
| [55] | 37 images | Satellite images | Simple linear iterative clustering | Building an unsupervised change detection framework that uses post-fire VHR images with pre-fire PS data to facilitate the assessment of wildfire damage. | Achieved an overall accuracy of over 99% on wildfire damage assessment. |
| [56] | 500 images | Images | YCbCr color space and CNN | Introducing conventional image processing techniques, CNNs, and an adaptive pooling approach. | Achieved an accuracy of 90.7% on fire detection. |
| [57] | 52 images | Images | MWIR | Detecting forest fires by middle-infrared channel measurement. | Achieved 77.63% accuracy on fire detection. |
| [58] | * | Images | Horn and Schunck optical flow | Performing aerial image-based forest fire detection for firefighting using optical remote-sensing techniques. | Experimental results verified that the proposed forest fire detection method achieves good performance. |
| [59] | 175 videos | Videos | SVM | Performing multi-feature analysis in YUV color space for early forest fire detection. | Attained an average detection rate of 96.29%. |
| [60] | VIIRS | Satellite images | FILDA | Developing FILDA, which characterizes fire pixels based on both visible-light and IR signatures at night. | Produced much more accurate fire detection than the existing algorithms. |
| [61] | 13 images | Images | Spatio-temporal model | Developing a spatio-temporal model for forest fire detection using HJ-IRS satellite data. | Achieved a 94.45% detection rate on fire detection. |
| [62] | 5 images | Images | GMM | Building an early detection system for forest fire smoke signatures using GMM. | The developed system detected fire in all of the test videos in less than 2 min. |
| [63] | 3320 images | Images | YOLOv5 | Performing small-target forest fire detection. | Achieved a mAP@0.5 of 82.1% in forest fire detection and 70.3% in small-target forest fire detection. |
| [64] | 22 tiles of Landsat-8 images | Satellite images | Deep CNN | Determining the starting point of the fire for the early detection of forest fires. | Achieved 97.35% overall accuracy under different scenarios. |
| [65] | 11,681 images | Images | FCOS | Detecting forest fires in real time and providing firefighting assistance. | Attained 89.34% accuracy in forest fire detection. |
| [66] | 6595 images | Images | MTL | Solving the problems of poor small-target recognition and many missed and false detections in complex forest scenes. | Achieved 98.3% accuracy through segmentation and classification. |
| [67] | 8000 images | Images | R-CNN | Classifying video frames into two classes (fire, no-fire) according to the presence or absence of fire, with a segmentation method for incipient forest-fire detection and segmentation. | An accuracy of 93.65% and a precision of 91.85% were achieved on forest-fire detection and segmentation. |
| [68] | * | Images | Non-subsampled contourlet transform and visual saliency | Building a machine vision-based network monitoring system for solar-blind ultraviolet signals. | The fusion results of the proposed method were claimed to have higher clarity and contrast and to retain more image features. |
| [69] | 81,810 images | Images | R-CNN, Bayesian network, and LSTM | Improving fire detection accuracy compared with other video-based methods. | Achieved an accuracy of 97.68% for affected areas. |
| [70] | 500 images | RGB and NIR images | Vision transformer | Achieving early detection and segmentation to predict fire spread and help with firefighting. | Obtained a 97.7% F1-score on wildfire segmentation. |
| [71] | 2000 images | Images | Artificial bee colony algorithm-based color space | Detecting forest fires using color space. | Obtained a mean Jaccard index of 0.76 and a mean Dice index of 0.85. |
| [72] | 4000 images | Images | Deep CNN | Detecting fire as early as possible. | Achieved a 94.6% F-score fire detection rate. |
| [73] | 48,010 images | Images | CNN and vision transformers | Detecting wildfire at early stages. | Obtained 85.12% accuracy on wildfire classification and a 99.9% F1-score on semantic segmentation. |
| [74] | 37,016 images | Satellite images | CNN | Building an automated active fire detection framework using Sentinel-2 imagery. | Obtained an average IoU higher than 70% on active fire detection. |
| [75] | 38,897 images | Satellite images | CNN | Accurately detecting the fire-affected areas from satellite imagery. | Achieved a 92% detection rate under cloud-free weather conditions. |
| [76] | 8194 images | Satellite images | CNN | Performing active fire detection using deep learning techniques. | Achieved a precision of 87.2% and a recall of 92.4% on active fire detection. |
| [77] | 10,000 images | Images | RNN, LSTM, and GRU | Performing early detection of forest fires with higher accuracy. | An accuracy of 99.89% and a loss value of 0.0088 were achieved on fire detection. |
| [78] | * | Satellite images | GRU network | Building an early fire detection system. | The GRU-based approach detected the wildfire earlier than the VIIRS active fire products in most of the study area. |
| [79] | 5469 images | Satellite images | CNN | Building an accurate monitoring system for wildfires. | Achieved an accuracy of 99.9% on fire detection. |
| [80] | 10,581 images | Images | EfficientDet and YOLOv5 | Detecting forest fires in different scenarios by an ensemble learning method. | Obtained 99.6% accuracy on fire detection. |
| [81] | 4000 images | Images | CNN | Introducing an additive neural network for forest fire detection. | Attained 96% accuracy on fire detection. |
| [82] | 1500 images | Images | DCNN | Performing saliency detection and DL-based wildfire identification in UAV imagery. | Achieved an overall accuracy of 98% on fire classification. |
| [83] | 6137 images | Images | CNN | Building a system that can spot wildfire in real time with high accuracy. | Achieved a detection precision of 98% for fire detection. |
| [84] | 2425 images | Images | GMM-EM | Detecting fire by combining color-motion-shape features with machine learning. | A TPR of 89.97% and an FNR of 10.03% were achieved for detection. |
| [85] | * | Images | CEP | Performing real-time wildfire detection with semantic explanations. | Experimental results on four real datasets and one synthetic dataset established the superiority of the proposed method. |
| [86] | 12 images and 7 videos | Images and videos | kNN | Performing pixel-level automatic annotation for forest fire images. | Achieved a higher fire detection rate and a lower false alarm rate than existing algorithms. |
| [87] | 39,375 frames | Videos | ANN | Developing a dataset of aerial fire images and performing fire detection and segmentation on it. | Achieved a precision of 92% and a recall of 84% for detection. |
| [88] | 2000 images | Images | CNN and SVM | Developing a robust algorithm to deal with complex backgrounds, the weak generalization ability of image recognition, and low accuracy. | Accomplished fire detection with a recognition rate of 97.6%, a false alarm rate of 1.4%, and a missed alarm rate of 1%. |
| [89] | 2 Landsat-7 images | Satellite images | ELM | Utilizing an adaptive ensemble of ELMs to classify remote-sensing images into change/no-change classes. | Achieved an accuracy of 90.5% in detecting the change. |
| [90] | 30 images | Videos and images | SVM | Identifying fires and providing fire warnings with excellent noise suppression. | Obtained a 97% TPR on classification. |
| [91] | 8500 images | Images | Data fusion | Detecting smoke from fires, usually within 15 min of ignition. | Achieved an accuracy of 91% and an F1-score of 89% on the test set. |
| [92] | WSN | Transmission data | AAPF | Utilizing auto-organization and adaptive frame periods for forest fire detection. | Developed a comprehensive model to evaluate communication delay and energy consumption. |
| [93] | 20,250 pixels | Satellite images | Random forest | Building a three-step forest fire detection algorithm using Himawari-8 geostationary satellite data. | Achieved an overall accuracy of 99.16%, a POD of 93.08%, and a POFD of 0.07%. |
| [94] | 1194 images | Images | Multi-channel CNN | Performing fire detection using a multi-channel CNN. | Obtained 98% or higher classification accuracy, a claimed improvement of 2% over traditional feature-based methods. |
| [95] | 7690 images | Images | DCNN and BPNN | Developing an improved DCNN model for forest fire risk prediction, and implementing the BPNN fire algorithm to calculate video image processing speed and delay rate. | Achieved 84.37% accuracy in real-time forest fire recognition. |
| [96] | * | Images | DeepLabV3+ | Presenting Defog DeepLabV3+ for collaborative defogging and precise flame segmentation, and proposing DARA to enhance flame-related feature extraction. | Achieved 94.26% accuracy, 94.04% recall, and 89.51% mIoU. |
| [97] | 1452 images | Images | Transfer learning | Exploring several CNN models, applying transfer learning, using SVM and RF for detection, and training/testing networks with random and ImageNet weights on a forest fire dataset. | Achieved 99.32% accuracy. |
| [98] | 14,094 images | Images | FuF-Det (encoder–decoder transformer) | Designing AAFRM to preserve positional features, constructing RECAB to retain fine-grained fire-point details, and introducing CA in the detection head to improve localization accuracy. | Achieved a mAP@0.5 of 86.52% and a fire-spot detection rate of 78.69%. |
| [99] | 3000 images | Images | YOLOv5 | Integrating the transformer module into YOLOv5's feature extraction network, inserting the CA mechanism before the YOLOv5 head, and using ASFF in the model's head to enhance multi-scale feature fusion. | Achieved a mAP@0.5 of 84.56%. |
| [100] | 1900 images | Images | Ensemble learning | Proposing a stacking ensemble model that uses pre-trained models as base learners for feature extraction and initial classification, followed by a Bi-LSTM network as a meta-learner for final classification. | Achieved 97.37%, 95.79%, and 95.79% accuracy with hold-out validation, five-fold cross-validation, and ten-fold cross-validation, respectively. |
| [101] | 5250 infrared images | Images | YOLOv5s | Proposing FFDSM based on YOLOv5s-seg, incorporating ECA and SPPFCSPC modules to enhance fire detection accuracy and feature extraction. | Achieved a mAP@0.5 of 0.907. |
| [102] | 204,300 images | Images | Deep ensemble learning | Presenting a deep ensemble neural network model using Faster R-CNN, RetinaNet, YOLOv2, and YOLOv3. | The proposed approach significantly improved detection accuracy for potential fire incidents in the input data. |
| [103] | 1900 images | Images | CNN | Proposing a forest fire detection method using a CNN architecture, employing separable convolution layers for immediate fire detection, reducing computational resources, and enabling real-time applications. | Achieved an accuracy of 97.63% and an F1-score of 98.00%. |
| [104] | 51,906 images | Images | Ensemble learning | Proposing CT-Fire, which combines the deep CNN RegNetY and the vision transformer EfficientFormer v2 to detect forest fires in ground and aerial images. | Attained accuracy rates of 99.62% for ground images and 87.77% for aerial images. |
| [105] | 348,600 images | Images | Detectron2 | Detecting forest fires using different deep learning models, preparing a dataset, comparing the proposed method with existing ones, and implementing it on a Raspberry Pi for CPU and GPU utilization. | Achieved a precision of 99.3%. |
| [106] | 1900 images | Images | FL and PSO | Integrating PSO with FL to optimize communication time, and developing a CNN model incorporating FL and PSO that sets basic parameters based on local client data, enhancing FL performance and reducing latency in disaster response. | Achieved a prediction accuracy of 94.47%. |
| [107] | * data obtained from Landsat-8 | Satellite images | U-Net | Introducing FU-NetCastV2: collecting historic GeoMac fire perimeters, elevation, and satellite maps; retrieving 24 h weather data; implementing and optimizing U-Nets; and generating a burned-area map. | Achieved an accuracy rate of 94.6% and an AUC score of 97.7%. |
[108]5060 images and 14,320 s audioImages and audioCNNProposing a VSU prototype with embedded ML algorithms for timely forest fire detection. Collecting and utilizing two datasets and audio and picture data for training the ML algorithm.Achieved a 96.15% accuracy.
[109]210 images360-degree imagesMulti-scale vision transformerIntroducing a FIRE-mDT model combining ResNet-50 and multiscale deformable transformer for early fire detection, location, and propagation estimation. Creating a dataset from real fire events in Seich Sou Forest.Achieved an F-score of 91.6%.
[110]55,746 imagesImagesANN and CNNProposing EdgeFireSmoke++, based on EdgeFireSmoke, using ANN in the first level and CNN in the second level.Achieved over 95% accuracy.
[111]23,982 imagesImagesFireYOLO and Real-ESRGANProposing a two-step recognition method combining FireYOLO and ESRGAN Net. Using GhostNet with dynamic convolution in FireYOLO’s backbone to eliminate redundant features. Enhance suspected small fire images with Real-ESRGAN before re-identifying them with FireYOLO.Achieved a 94.22% average precision when implemented on embedded devices.
[112]48 videosVideosVision transformers (ViTs) and CNNsProposing FFS-UNet, a spatio-temporal architecture combining a transformer with a modified lightweight UNet. Extracting keyframe and reference frames using three encoder paths for feature fusion, and then using a transformer for deep temporal-feature extraction. Finally, segmenting the fire using shallow keyframe features with skip connections in the decoder path.Achieved a 95.1% F1-score and 86.8% IoU on the UAV-collected videos, as well as a 91.4% F1-score and 84.8% IoU on the Corsican Fire dataset.
[113]3800 imagesImagesCNNProposing FireXnet, a lightweight model for wildfire detection that is suitable for resource-constrained devices. Incorporating SHAP to make the model’s decisions interpretable. Compare FireXnet’s performance against five pre-trained models.Achieved an accuracy of 98.42%.
[114]4674 imagesImagesYOLOv5Utilizing four detection heads in FireDetn. Integrating transformer encoder blocks with multi-head attention. Fusing the spatial pyramid pooling fast structure in detecting multi-scale flame objects at a lower computational cost.Achieved an AP50 of 82.6%.
[115]2 active fire products and 1 burned area productSatellite imagesTemporal patterns and kernel density estimation (KDE)Comparing various MODIS fire products with ground wildfire investigation records in southwest China to identify differences in the spatio-temporal patterns of regional wildfires detected and exploring the influence of instantaneous and local environmental factors on MODIS wildfire detection probability.Detected at least twice as many wildfire events as that in the ground records.
* Information not available.
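Many of the detection results above are reported as IoU or mAP@0.5, both of which rest on the intersection-over-union of a predicted and a ground-truth bounding box. As a reference for how that overlap score is computed, here is a minimal sketch (the function name and box format `(x1, y1, x2, y2)` are our own conventions, not taken from any surveyed paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A metric such as mAP@0.5 then counts a detection as correct when its IoU with a ground-truth fire region is at least 0.5, and averages the resulting precision over recall levels and classes.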
Table 2. List of the past works related to smoke detection.
Ref | Dataset | Data Type | Method | Objective | Achievement
[116] | 6 videos | Videos | Fusion deep network | Enhancing the detection accuracy of smoke objects through video sequences. | Achieved a 94.57% accuracy on smoke detection.
[117] | 2977 images | Images | GIS and augmented reality | Improving the detection range and the rate of correct detection while reducing false alarm rates. | Managed to reduce the false alarm rate to 0.001.
[118] | 6225 images | Images | Class activation map and ResNet-50 | Building a class activation map-based data augmentation system for smoke scene detection. | Achieved a best accuracy of 94.95%.
[119] | 90 videos | Videos | 3D convolution-based encoder–decoder network | Building a 3D convolution-based encoder–decoder network architecture for video semantic segmentation. | Achieved a 99.31% accuracy on wildfire smoke segmentation.
[120] | 90 videos | Videos | CNN | Building a 3D fully convolutional network for segmenting smoke regions. | Achieved a 0.7618 mAP on smoke detection.
[121] | 50,000 images | Images | CNN | Performing real-time forest smoke detection using hand-designed features and DL. | The detection model achieved a 97.124% accuracy on the test set.
[122] | 38 smoke videos and 20 non-smoke videos | Videos | CNN | Detecting wildfire smoke based on Faster R-CNN and a 3D CNN. | Achieved a 95.23% accuracy on smoke detection.
[123] | 22 videos | Videos | ViBe algorithm | Detecting forest fire smoke based on a visual smoke root and diffusion model. | Achieved an accuracy higher than 90% on smoke detection.
[124] | 37,712 images | Images | Stereo vision triangulation | Achieving wildfire smoke detection using stereo vision. | Obtained results with an over 0.95 TPR on smoke detection.
[125] | 11 videos | Videos | Saliency maps | Building a saliency-based method for early smoke detection through video sequences. | Achieved an average smoke segmentation precision of 93.0% and a precision as high as 99.0% for forest fires.
[126] | 3225 images | Images | TECNN | Classifying smoke-like scenes in remote sensing images. | Obtained a 98.39% accuracy on smoke classification.
[127] | 3645 images | Images | R-CNN | Detecting smoke columns that are visible below or above the horizon. | Produced an F1-score of 80%, a G-mean of 80%, and a detection rate of 90%.
[128] | 1073 videos | Videos | DETR | Developing an open-source, transformer-supercharged benchmark for fine-grained wildfire smoke detection. | Detected 97.9% of the fires in the incipient stage and 80% within 5 min of the start.
[129] | 240 videos | Videos | CNN | Developing an intelligent smoke detection algorithm for wildfire monitoring cameras. | The overall fire risk of the test region was reduced to just 36.28% of its original value.
[130] | 460 custom images | Images | GLCM, LBP, and ANN | Achieving forest fire flame and smoke detection from UAV-captured images using fire-specific color features and multi-color-space local binary patterns. | Achieved an F1-score of 90% for smoke detection.
[131] | 4595 images | Images | CNN | Detecting wildfire smoke images based on a densely dilated CNN. | Achieved a 99.2% accuracy on smoke detection.
[132] | 2000 images | Images | LSTM | Utilizing an enhanced bidirectional LSTM for early forest fire smoke recognition. | Obtained an accuracy of 97.8% on smoke detection.
[133] | 240 videos | Videos | HDLBP, CoLBP, and ELM | Achieving a lower rate of incorrect alarms by identifying smoke and examining its distinctive texture attributes. | Obtained a 95% F1-score on fire detection.
[134] | 500 images | Images | Multi-spectral fusion algorithm | Developing a wildfire image dataset and performing analysis on that dataset. | Built a tool through which researchers and professionals can access the dataset and also contribute to it.
[135] | 6500 images | Images | YOLOv7 | Collecting forest fire smoke photos, utilizing YOLOv7, incorporating the CBAM attention mechanism, and applying SPPF+ and BiFPN modules to focus on small-scale forest fire smoke. | Achieved an AP50 of 86.4% and an APL of 91.5%.
[136] | 2554 images | Images | YOLOv5 and transfer learning | Improving YOLOv5s using K-means++ for anchor box clustering, adding a prediction head for small-scale smoke detection, replacing the backbone with PConv for efficiency, and incorporating coordinate attention for region focus. | Achieved an AP50 of 96% and an AP50:95 of 57.3%.
[137] | 10,250 images | Images | Deformable DETR | Proposing an improved deformable DETR model with MCCL and DPPM modules to enhance low-contrast smoke detection. Implementing an iterative bounding box combination method for precise localization and bounding of semi-transparent smoke. | Achieved an improvement in mAP (mean average precision) of 4.2% and in APS (AP for small objects) of 5.1%.
[138] | 6000 images | Images | YOLOv8 | Incorporating WIoUv3 into the bounding box regression loss, integrating BiFormer into the backbone network, and using GSConv as a substitute for conventional convolution within the neck layer. | Achieved an average precision (AP) of 79.4%, an average precision small (APS) of 71.3%, and an average precision large (APL) of 92.6%.
[139] | 5311 images | Images | YOLOv7 | Proposing a lightweight model. Using GSConv in the neck layer, embedding multilayer coordinate attention in the backbone, utilizing the CARAFE up-sampling operator, and applying the SIoU loss function. | Achieved an accuracy of 80.2%.
[140] | 1664 images | Images | Transformer | Proposing the FireFormer model. Using a shifted-window self-attention module to extract patch similarities in images. Applying GradCAM to analyze and visualize the contribution of image patches. | Achieved an OA, recall, and F1-score of 82.21%, 86.635%, and 74.68%, respectively.
[141] | 35,328 images | Images | EfficientDet | Detecting distant smoke plumes several kilometers away using EfficientDet. | Achieved an 80.4% true detection rate and a 1.13% false-positive rate.
[142] | 43,060 images | Images | LMINet | Proposing a deformable convolution module. Introducing a multi-direction feature interaction module. Implementing an adversarial learning-based loss term. | Achieved an mIoU and pixel-level F-measure of 79.31% and 84.61%, respectively.
[143] | 77,910 images | Images | PSNet | Utilizing non-binary pixel-level supervision to guide model training. Introducing DDAM to distinguish smoke from smoke-like targets, AFSM to enhance smoke-relevant features, and MCAM for enhanced feature representation. | Achieved a detection rate of 96.95%.
[144] | 614 images | Images | CNN | Optimizing a CNN model. Training MobileNet to classify satellite images using a cloud-based development studio and transfer learning. Assessing the effects of input image resolution, depth multiplier, dense-layer neurons, and dropout rate. | Achieved a 95% accuracy.
[145] | 6225 images | Satellite images | CNN | Introducing SmokeNet, a new model using spatial and channel-wise attention for smoke scene detection, including a unique bottleneck gating mechanism for spatial attention. | Achieved a 92.75% accuracy.
[146] | 975 images | Satellite images | FCN | Presenting a deep FCN for near-real-time prediction of fire smoke in satellite imagery. | Achieved a 99.5% classification accuracy.
[147] | 24,217 images | Images | Deep multi-scale CNN | Designing a multi-scale basic block with parallel convolutional layers of different kernel sizes, merging outputs via addition to reduce dimension. Proposing a deep multi-scale CNN using a cascade of these basic blocks. | Achieved a 95% accuracy.
[148] | 20,000 images | Images | DCNN | Presenting a smoke detection method using a dual DCNN. The first framework extracts image-based features such as smoke color, texture, and edges. The second framework extracts motion-based features, such as moving, growing, and rising smoke regions. | Achieved an average accuracy of 97.49%.
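Several of the classical pipelines above ([121,148]) first extract hand-designed smoke cues before any learning step; a common cue is that smoke pixels are grayish (near-equal RGB channels) at moderate intensity. The following is a minimal illustrative sketch of such a color-based candidate mask; the function name and all thresholds are our own assumptions, since the surveyed systems tune or learn these values per dataset:

```python
import numpy as np

def smoke_candidate_mask(rgb, gray_tol=20, i_min=80, i_max=220):
    """Flag grayish, mid-intensity pixels as candidate smoke regions.

    rgb: uint8 array of shape (H, W, 3). Thresholds are illustrative only;
    published detectors combine such color cues with texture and motion
    features before classification.
    """
    rgb = rgb.astype(np.int16)  # avoid uint8 wrap-around in differences
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) // 3
    # Grayish: all channel pairs close to each other.
    grayish = (
        (np.abs(r - g) < gray_tol)
        & (np.abs(g - b) < gray_tol)
        & (np.abs(r - b) < gray_tol)
    )
    # Exclude very dark (shadow) and near-white (cloud/sky) pixels.
    return grayish & (intensity > i_min) & (intensity < i_max)
```

In a full system, the resulting mask only proposes regions; a CNN or texture classifier then decides whether each region is smoke or a smoke-like distractor such as fog or cloud.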
Table 3. List of the past works related to fire and flame detection.
Ref | Dataset | Data Type | Method | Objective | Achievement
[149] | 338 images | Images | FSCN and ISSA | Improving the accuracy of fire recognition with a fast stochastic configuration network. | Achieved a 94.87% accuracy on fire detection.
[150] | 5 videos | Videos | Unsupervised method | Achieving early detection of wildfires and flames from still images by a new unsupervised method based on the RGB color space. | Achieved a 93% accuracy on flame detection.
[151] | 14 videos | Videos | K-SVD | Detecting wildfire flame using videos, from the pixel to the semantic level. | Obtained a 94.1% accuracy on flame detection.
[152] | 85 videos | Videos | ELM | Performing a static and dynamic texture analysis of flame in forest fire detection. | Attained an average detection rate of 95.65%.
[153] | 101 images | Images | SVM | Devising a new fire detection and identification method using a visual attention mechanism. | Accomplished an accuracy of 82% for flame recognition.
[154] | 51,998 images and 6 videos | Images and videos | YOLOv5n | Applying YOLOv5 to detect forest fires from images captured by UAV and analyzing the flame detection performance of YOLOv5. | Achieved a detection speed of 1.4 ms/frame and an average accuracy of 91.4%.
[155] | 1900 images | Images | CNN | Proposing wildfire image classification with Reduce-VGGnet and region detection using an optimized CNN, combining spatial and temporal features. | Achieved an accuracy of 97.35%.
[156] | 2603 images | Images | ADE-Net | Introducing a dual-encoding path with semantic and spatial units, integrating AFM, using an MAF module, proposing an AGE module, and finally employing a GCF module. | Achieved Dice coefficients of 90.69% and 80.25%, and mIoUs of 91.42% and 83.80%, on the FLAME and Fire_Seg datasets, respectively.
[157] | 20 videos | Videos | Optical flow | Proposing a four-step algorithm: preprocessing input data, detecting flame regions using the HSV color space, modeling motion information with optimal mass transport optical flow vectors, and measuring the area of detected regions. | Achieved a 96.6% accuracy.
[158] | 1000 images | Images | Encoder–decoder architecture | Proposing FlameTransNet. Implementing an encoder–decoder architecture. Selecting MobileNetV2 for the encoder and DeepLabV3+ for the decoder. | Achieved an IoU, precision, and recall of 83.72%, 91.88%, and 90.41%, respectively.
[159] | Live data from cameras, thermopile-type sensors, and anemometers | Images, infrared, and ultrasonic | Segmentation and reconstruction | Developing an image-based diagnostic system to enhance the understanding of wildfire spread and providing tools for fire management through 3D reconstruction of turbulent flames. | Demonstrated that the flame volume measured through image processing can reliably substitute fire thermal property measurements.
[160] | * | Images | SVM | Proposing a fire image recognition method that integrates color space information into the SIFT algorithm. Extracting fire feature descriptors with SIFT, filtering noisy features using a fire color space, and transforming descriptors into feature vectors. Using an incremental vector SVM classifier to develop the recognition model. | Achieved a 97.16% testing accuracy.
[161] | 37 videos | Videos | SVM | Proposing a fire-flame detection model by defining candidate fire regions through background subtraction and color analysis. Modeling fire behavior using spatio-temporal features and dynamic texture analysis. Classifying candidate regions using a two-class SVM classifier. | Achieved detection rates of approximately 99%.
* Information not available.
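Color-space rules recur throughout the flame-detection entries above ([150] in RGB, [157] in HSV, [161] for candidate-region selection): flame pixels are typically red-dominant, with red exceeding green and green exceeding blue. A minimal illustrative version of such a rule is sketched below; the threshold value is our own assumption, not one reported by the surveyed papers:

```python
import numpy as np

def flame_candidate_mask(rgb, r_threshold=190):
    """Classic RGB ordering rule for flame-colored pixels: R dominates G, G dominates B.

    rgb: uint8 array of shape (H, W, 3). A simplified stand-in for the
    color rules used by RGB/HSV-based flame detectors; r_threshold is
    illustrative and would normally be tuned on a labeled dataset.
    """
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    # Bright red-to-yellow pixels: strong red, and a red >= green > blue ordering.
    return (r > r_threshold) & (r >= g) & (g > b)
```

Such masks produce many false positives on sunsets or red objects, which is why the surveyed methods pair them with motion analysis, texture features, or a learned classifier before raising an alarm.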
Table 4. List of the past works related to fire and smoke detection.
Ref | Dataset | Data Type | Method | Objective | Achievement
[162] | 17,840 images | Images | CNN | Detecting forest fire smoke in real time using deep convolutional neural networks. | Achieved an accuracy of 95.7% on real-time forest fire smoke detection.
[163] | 3000 images | Images | R-CNN | Classifying smoke columns with object detection and a DL-based approach. | Dropped the FPR to 88.7% (from 93.0%).
[164] | 35,328 images | Images | Transfer learning | Improving fire and smoke recognition in still images by utilizing advanced convolutional techniques to balance accuracy and complexity. | Obtained an AUROC of 0.949 on the test set, corresponding to a TPR of 85.3% and an FPR of 3.1%.
[165] | 1900 images | Images | GA-CNN | Detecting fire occurrences in the environment with high accuracy. | Achieved a 95% accuracy and a 92% TPR.
[166] | 3630 images | Images | CNN | Segmenting fire and smoke regions in high-resolution images based on a multi-resolution iterative quad-tree search algorithm. | Obtained a 95.9% accuracy on fire and smoke segmentation.
[167] | 4326 images | Images | CNN | Building an adaptive linear feature-reuse network for rapid forest fire smoke detection. | Achieved an 87.26% mAP50 on fire and smoke detection.
[168] | 15,909 images | Images | MVMNet | Detecting fire based on a value-conversion attention mechanism module. | Obtained an mAP50 of 88.05% on fire detection.
[169] | 14,402 images | Videos | CNN | Wildfire detection from RGB images using a CNN model. | Achieved an accuracy of 98.97% and an F1-score of 95.77% on fire and smoke detection.
[170] | 7652 images | Images | R-CNN | Forest fire and smoke recognition based on an anchor-box adaptive generation method. | Achieved an accuracy rate of 96.72% and an IoU of 78.96%.
[171] | 1323 fire or smoke images and 3533 non-fire images | Images | R-CNN | Performing collaborative region detection and developing a grading framework for forest fire smoke using weakly supervised fine segmentation and a lightweight Faster R-CNN. | Achieved a 99.6% detection accuracy and a 70.2% segmentation accuracy.
[172] | 400,000 images | Images | BNN and RCNN | Constructing a model for early fire detection and damage-area estimation for response systems. | Achieved an mAP of 27.9 for smoke and fire.
[173] | 23,500 images | Images | CNN and RNN | Detecting forest fire using a hybrid DL model. | Accomplished fire detection with 99.62% accuracy.
[174] | 16,140 images | Images | CNN | Enhancing fire and smoke detection in still images through advanced convolutional methods to optimize accuracy and complexity. | Achieved 84.36% and 81.53% mean test accuracy for the fire and the fire-and-smoke recognition tasks, respectively.
[175] | 14 fire and 17 non-fire videos | Videos | R-CNN | Reducing false-positive detections of a smoke detection algorithm. | Attained a 99.9% accuracy in smoke and fire detection.
[176] | 49 large images | Images | CNN | Performing active fire mapping using a CNN. | Achieved a 0.84 F1-score on fire detection.
[177] | 5682 images | Images | Wavelet decomposition | Detecting forest fire smoke using videos in the wavelet domain. | Achieved a 94.04% accuracy on fire detection.
[178] | 1844 images | Images | MobileNetV3 | Building a lightweight deep learning fire recognition algorithm that can be deployed on embedded hardware. | Experimental results showed a significant reduction in the number of model parameters and in inference time compared to YOLOv4.
[179] | 999 images | Satellite images | Transfer learning | Using learning without forgetting (LwF) to train the network on a new task while keeping the network’s preexisting abilities intact. | Xception with LwF achieved an accuracy of 91.41% on the BowFire dataset and 96.89% on the original dataset.
[180] | * | Images and videos | GS-YOLOv5 | Replacing the convolutional blocks in Super-SPPF with GhostConv and using the C3Ghost module instead of the C3 module in YOLOv5 to increase speed and reduce computational complexity. | Achieved a detection accuracy of 95.9%.
[181] | 3000 images | Images | YOLOv6 | Enhancing model performance by integrating the Convolutional Block Attention Module (CBAM), employing the CIoU loss function, and utilizing AMP automatic mixed-precision training. | Achieved an mAP of 0.619.
[182] | 450 images | Images | YOLOv5s | Integrating CA into YOLOv5, replacing YOLOv5’s SPPF module with an RFB module, and enhancing the neck structure by upgrading PANet to Bi-FPN. | Improved the forest fire and smoke detection model by 5.1% in terms of mAP@0.5 compared with YOLOv5.
[183] | 18,217 images | Images | YOLOv4 | Proposing AERNet, a real-time fire detection network optimized for both accuracy and speed. Utilizing SE-GhostNet for lightweight feature extraction and an MSD module for enhanced feature emphasis. Employing decoupled heads for class and location prediction. | Achieved a 69.42% mAP50, an 18.75 ms inference time, and 48 fps.
[184] | 39,375 images | Images | Ensemble CNN | Using an ensemble of XceptionNet, MobileNetV2, and ResNet-50 CNN architectures for early fire prediction. Implementing fire and smoke detection using the YOLO architecture, known for low latency and high fps. | The smoke detection model achieved an mAP@0.5 of 0.85, while the combined model achieved an mAP@0.5 of 0.76.
* Information not available.
Table 5. List of the past works related to the utilization of robots in fire detection and extinguishing.
Ref | Environment | Robot Type | Objectives | Achievements
[185] | Outdoor | UGV | Building a four-drive articulated tracked fire-extinguishing robot that can flexibly perform fire detection and extinguishing. | Designed a firefighting robot that can be operated remotely to control its movements and can spray through its cannon.
[186] | Indoor/outdoor | UGV | Building a firefighter intervention architecture consisting of several sensing devices, a navigation platform (an autonomous ground wheeled robot), and a communication/localization network. | Achieved an accuracy of 73% and a precision of 99% in detecting fire points.
[187] | Indoor/outdoor | UGV | Building a smart sensor network-based autonomous fire-extinguishing robot using IoT. | Successfully demonstrated the robot working on nine different occasions.
[188] | Indoor/outdoor | UGV | Developing a small wheel-foot hybrid firefighting robot for infrared visual fire recognition. | Achieved an average recognition rate of 97.8% with the help of a flame recognition algorithm.
[189] | Buildings | UGV | Building an autonomous firefighter robot with a localized fire extinguisher. | The robot, equipped with six flame sensors, can detect flame instantly and extinguish fire with the help of sand.
[190] | Outdoor | UGV | Building an autonomous system for wildfire and forest fire early detection and control. | The autonomous firefighting robot, equipped with a far-infrared sensor and turret, can detect and extinguish small fires within range.
[191] | Indoor/outdoor | UGV | Performing fire extinguishing without the need for firefighters. | Extinguished fire at a maximum distance of 40 cm.
[192] | Forest | UAV | Building a wildfire detection solution based on unmanned aerial vehicle-assisted Internet of Things (UAV-IoT) networks. | The rate of detecting a 2.5 km² fire was more than 90%.
[193] | Forest | UAV | Detecting forest fires through the use of a new color index. | A detection precision of 96.82% was achieved.
[194] | Outdoor | UAV | Exploring the potential of DL models, such as YOLO and R-CNN, for forest fire detection using drones. | mAP@0.5 values of 90.57% and 89.45% were achieved by Faster R-CNN and YOLOv8n, respectively.
[195] | Outdoor | UAV | Proposing a low-cost UAV with extended MobileNet deep learning for classifying forest fires. Sharing fire detections and GPS locations with state forest departments for a timely response. | Achieved an accuracy of 97.26%.
[196] | Outdoor | UAV | Proposing a novel wildfire identification framework that adaptively learns modality-specific and shared features. Utilizing parallel encoders to extract multi-scale RGB and TIR features and integrating them in a fusion feature layer. | The proposed method achieved an average improvement of 6.41% in IoU and 3.39% in F1-score compared to the second-best RGB-T semantic segmentation method.
[197] | Outdoor | UAV | Proposing a two-stage framework for fire detection and geo-localization. Compiling a large dataset from several sources to capture the various visual contexts related to fire scenes. Investigating YOLO models. | Achieved an mAP50 of 0.71 and an F1-score of 0.68.
[198] | Outdoor | UAV | Introducing the UAV platform “WILD HOPPER”, a 600-liter-capacity system designed specifically for forest firefighting. | Achieved a payload capacity that addresses the common limitations of electrically powered drones, which are typically restricted to fire monitoring due to insufficient lifting power.
[199] | Outdoor | UAV | Exploring the integration of fire-extinguishing balls with drone and remote-sensing technologies as a complementary system to traditional firefighting methods. | Conducted controlled experiments to assess the effectiveness and efficiency of fire-extinguishing balls.
[200] | Outdoor | UAV | Promoting the use of UAVs in firefighting by introducing a metal-alloy rotary-wing UAV equipped with a payload drop mechanism for delivering fire-extinguishing balls to inaccessible areas. | Examined the potential of UAVs equipped with a payload drop mechanism for firefighting operations.
[201] | Outdoor | UAV | Proposing a concept of deploying drone swarms in fire prevention, surveillance, and extinguishing tasks. | Developed a concept for utilizing drone swarms in firefighting, addressing issues reported by firefighters and enhancing both operational efficiency and safety.
[202] | Outdoor | UAV | Improving the near-field computer vision system of an intelligent fire robot to accurately predict the falling position of jet trajectories during fire extinguishing. | Reduced the average prediction error of jet-trajectory falling positions from 1.36 m to 0.1 m and the error variance from 1.58 m to 0.13 m.
Table 6. Methods of handling variations in fire, flame, and smoke.
Nature | Methods
Fire | Infrared [57,188], convex hulls [86], deep learning [67,76,83,94,175], color probabilities and motion features [84], multi-task learning [66], ensemble learning [73], semantic [85], optimization [165], Markov chain [192], support vector machine [53,59], visible infrared imaging [60], visible-NIR [159]
Flame | Deep learning [49,94], support vector machine [160], spatio-temporal features and SVM [161], infrared [190], visible-NIR [159], spatio-temporal features and deep learning [175]
Smoke | Deep learning [147,148,172], stereo camera [124], transformer [128]
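Several of the methods listed in Table 6 and throughout the survey combine multiple detectors or modalities through data fusion or ensemble learning ([73,91,102,184]). A common late-fusion scheme averages the confidence scores of the individual detectors before thresholding. The sketch below illustrates this idea only; the function name, weights, and threshold are our own assumptions rather than values from any surveyed system:

```python
import numpy as np

def fuse_detector_scores(scores, weights=None, threshold=0.5):
    """Weighted-average (late) fusion of per-frame fire confidence scores.

    scores: one confidence in [0, 1] per detector (e.g. RGB CNN, IR rule,
    smoke-color model). Returns the fused score and a fire/no-fire flag.
    Equal weights are used unless per-detector weights are supplied.
    """
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones(len(scores))
    weights = np.asarray(weights, dtype=float)
    # Normalized weighted mean of the individual confidences.
    fused = float(np.dot(weights, scores) / weights.sum())
    return fused, fused >= threshold
```

In practice, the weights would be tuned on a validation set so that reliable modalities (e.g. infrared at night) dominate, which is one reason fusion-based entries in the tables report lower false-alarm rates than single-modality detectors.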
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Özel, B.; Alam, M.S.; Khan, M.U. Review of Modern Forest Fire Detection Techniques: Innovations in Image Processing and Deep Learning. Information 2024, 15, 538. https://doi.org/10.3390/info15090538
