Article

FireDA: A Domain Adaptation-Based Method for Forest Fire Recognition with Limited Labeled Scenarios

1 Ordnance NCO Academy, Army Engineering University of PLA, Wuhan 430075, China
2 School of Electrical Engineering, Naval University of Engineering, Wuhan 430033, China
* Authors to whom correspondence should be addressed.
Forests 2024, 15(10), 1684; https://doi.org/10.3390/f15101684
Submission received: 4 August 2024 / Revised: 22 September 2024 / Accepted: 23 September 2024 / Published: 24 September 2024
(This article belongs to the Section Natural Hazards and Risk Management)

Abstract

Vision-based forest fire detection systems have advanced significantly through Deep Learning (DL) applications. However, DL-based models typically require large-scale labeled datasets for effective training, and the quality of data annotation is crucial to their performance. To address challenges related to the quality and quantity of labeling, a domain adaptation-based approach called FireDA is proposed for forest fire recognition in scenarios with limited labels. Domain adaptation, a subfield of transfer learning, facilitates the transfer of knowledge from a labeled source domain to an unlabeled target domain. First, a source domain dataset, FBD, is constructed from publicly available labeled data; it covers three common fire scenarios: forest (F), brightness (B), and darkness (D). Subsequently, a novel algorithm called Neighborhood Aggregation-based 2-Stage Domain Adaptation (NA2SDA) is proposed. This method integrates feature distribution alignment with a target domain Proxy Classification Loss (PCL), leveraging a neighborhood aggregation mechanism and a memory bank designed for the unlabeled samples in the target domain. This mechanism calibrates the source classifier and generates more accurate pseudo-labels for the unlabeled samples. Based on these pseudo-labels, the Local Maximum Mean Discrepancy (LMMD) and the Proxy Classification Loss (PCL) are computed. To validate the efficacy of the proposed method, the publicly available forest fire dataset FLAME is employed as the target domain of a transfer learning task. The results demonstrate that our method achieves performance comparable to the supervised Convolutional Neural Network (CNN)-based state-of-the-art (SOTA) method, without requiring access to labels from the FLAME training set. Therefore, our study presents a viable solution for forest fire recognition in scenarios with limited labeling and establishes a high-accuracy benchmark for future research.

1. Introduction

Fire disasters pose significant threats to modern society, particularly due to the rapid increase in population density and the prevalence of combustible materials in factories, warehouses, residential areas, and forests. Both natural and man-made fires are highly destructive; their rapid spread and the difficulty of controlling them often result in casualties, property damage, environmental pollution, and social impacts. Wildfires, in particular, have caused severe damage to forests, wildlife habitats, farms, and ecosystems. Global warming and climate change have heightened the risk of forest fires through longer dry seasons, decreased precipitation, and more scorching days [1]. The frequency and intensity of forest fires in the western USA have increased due to climate change and rising temperatures, and regions in California affected by recent forest fires have experienced prolonged droughts [2]. By March 2020, Australia's Black Summer fires had burned nearly 19 million hectares, destroyed more than 3000 houses, and claimed 33 lives [3]. In 2020, fires in the Brazilian Amazon destroyed nearly 800,000 km² of forest area [4]. Between 2001 and 2022, over 77,000 forest fire incidents occurred in the central region of India [5]. In 2021, 19 severe wildfires worldwide caused 90 deaths and approximately $3.3 million in damage; meanwhile, forest fires in Europe affected 1,113,464 hectares across 39 countries [6].
The high frequency and destructive power of fires necessitate timely and accurate fire detection to mitigate social and economic losses, and to prevent large-scale disasters. In recent years, significant research efforts have been made to improve fire detection and disaster management strategies, emphasizing the importance of these measures in safeguarding human lives, infrastructure, and the environment. Recently, the development of fire detection technology has branched into two main areas: detection methods and detection algorithms. Detection methods comprise observation towers, aerial and satellite systems, particle and temperature sensors [7], and visual sensors. Observation towers heavily rely on the personal experience of observers. Aerial and satellite systems are effective for detecting large-scale fires but have limited capabilities for early detection. Particle and temperature sensors are cost-effective and easy to deploy; however, they have a short detection range, limited information-gathering capabilities, high false alarm rates, and suffer from time delays. Visual sensors, known for their wide detection range, quick response, robust performance in various environments, and low deployment costs, have gradually become mainstream. Fire detection technologies that use visual sensors can be broadly categorized into three types: heuristic rule-based methods, data-driven methods, and hybrid models.
Heuristic rule-based methods rely on complex feature engineering such as color [8], texture [9], shape [10], edge [11], motion [12], shape variation [13], flicker [14], and dynamic texture [15] of flames, necessitating domain expertise in fire detection and robust image processing skills. Color-based methods analyze the color spectrum in which flame or smoke may be detected in different color spaces. The RGB [16], HSI [17], HSV [18], YUC [13,19], YUV [20], and YCbCr [21] color spaces have been utilized for fire feature analysis, as reported in the literature. Motion-based methods usually account for the irregular and chaotic motion of flame and smoke, utilizing features like motion orientation and optical flow for detection. Several motion-based models, including background subtraction algorithms [22,23], optical flow [24], chaotic movement [13], and dynamic texture analysis [15], have been explored for fire detection. To more accurately capture moving regions, researchers have proposed or applied various algorithms, including the Gaussian Mixture Model (GMM) [25,26], the Markov model [27], and the wavelet transform [16]. In these methods, the quality of feature engineering significantly influenced the performance of the detection techniques. They frequently suffered from high false alarm rates and poor robustness to environmental changes, due to variations in fire color, shape, and ambient lighting. Heuristic rule-based methods also face limitations in threshold setting. Consequently, trainable methods capable of automatically learning features were introduced to minimize the influence of subjective factors on classification. Traditional machine learning algorithms such as Back-propagation Networks (BP) [28], Support Vector Machine (SVM) [29], Random Forest (RF) [30], Logistic Regression (LR) [29,31], K-Nearest Neighbors (KNN) [32], Multi-Layer Perceptron (MLP) [30], Stochastic Configuration Network (SCN) [33], Radial Basis Function (RBF) [34], and Naïve Bayes (NB) [32] have been employed in fire detection. These methods utilized machine learning algorithms to enhance fire detection performance, yet they still necessitated complex manual feature extraction.
Data-driven methods capitalize on the rapid advancements in DL algorithms, enabling end-to-end automatic learning. These methods strike a balance between detection accuracy and environmental robustness, significantly reducing false alarm rates [35]. Deep Neural Networks (DNNs) have been applied to fire detection, achieving better results than traditional CV methods [36]. For example, AlexNet [37,38,39,40], VGG [41,42], GoogleNet [39,41,43], ResNet [41,44,45], EfficientNet [41], SqueezeNet [46], MobileNet [44,47], Transformer [48], and other backbones have been used for fire feature extraction. R-CNN [49], Faster-RCNN [50], SSD [50], YOLO [50,51], EfficientDet [52], and other object detection networks have been employed for fire detection. DeepLab [53], FusionNet [54], UNet [55], and other segmentation networks have been used for flame segmentation. LSTM [38,52], GRU [56], and other Recurrent Neural Networks (RNNs) have been utilized for video fire recognition. Generative Adversarial Networks (GANs) [55,57] and Stable Diffusion (SD) [58] have been used for fire data augmentation. However, these methods are limited in practical fire detection applications due to two significant drawbacks: first, their substantial computational complexity and slower inference speeds; and second, the need to annotate large-scale, high-quality datasets for DNN training, which is both time-consuming and labor-intensive. For the former, a significant number of studies have emerged focusing on lightweight fire detection models. For example, K. Muhammad et al. proposed more compact approaches using architectures like GoogleNet [43], SqueezeNet [46], MobileNet [47], and ShuffleNetV1 [59] for fire detection. These models enhanced performance while reducing both the number of learning parameters and inference time. Regarding the latter, research is sparse. The authors in [60] proposed a semi-supervised fire detection model called FireMatch, which used consistency regularization and anti-distribution alignment. However, this model still relied on some labeled training samples to complete the semi-supervised learning process. In scenarios where the target scene is almost unlabeled, our previous research [61] introduced a fire recognition method based on domain adaptation. This method transferred domain knowledge from labeled public fire data to almost unlabeled target scenes, achieving performance comparable to supervised CNN-based methods.
Hybrid models integrate heuristic rule-based and DL techniques. These models preprocess images using fire-specific domain knowledge before utilizing DL models for fire detection. For instance, X. Wu et al. [62] combined DL and a manual feature extraction process to analyze both static and dynamic features. F. Saeed et al. [63] developed a hybrid model that includes an Adaboost-MLP and a CNN. Moreover, M. Abedi et al. [64] utilized DL and genetic algorithms to classify bridge fires. L. Huang et al. [44] developed a hybrid approach by integrating CNN and wavelet analysis, achieving promising performance. Research has demonstrated that these approaches enhance fire detection performance. However, they sacrifice the advantage of end-to-end automatic learning and require additional manual feature extraction.
Based on the preceding analysis, this paper explores fire recognition using visual sensors and DL models. To address the challenge of extensive data annotation required by DL methods, a domain adaptation-based fire recognition model that focuses on scenarios with limited labeled data is proposed. The main contributions of this paper can be summarized as follows:
  • A fire recognition model for scenarios with limited labels is proposed, which significantly reduces the data annotation workload required by traditional DL-based models.
  • This study introduces domain adaptation methods for fire recognition, utilizing publicly available labeled fire datasets to train the network and transferring knowledge to scenarios with limited labels.
  • A novel domain adaptation method that combines feature distribution alignment and target domain proxy classification loss is proposed.
  • The proposed domain adaptation method incorporates a neighborhood aggregation mechanism that exploits the structural information of unlabeled samples to provide more accurate pseudo-labels for calculating LMMD and PCL.
  • Extensive interpretability analyses are conducted to demystify the “black box” of the proposed fire recognition method, revealing its internal operational mechanisms.
This paper is structured as follows. Given that domain adaptation is a subset of transfer learning and considering the significant research gap in fire detection methods based on domain adaptation, Section 2 introduces related work in domain adaptation and transfer learning-based fire recognition methods. Section 3 details the proposed domain adaptation algorithm, NA2SDA. Section 4 describes the experimental setup in detail, including the dataset, transfer experiments, baselines, and evaluation metrics. Section 5 presents the performance of NA2SDA, assesses the role of each module of the algorithm, performs visualizations, compares it with the SOTA approach, and discusses future research directions and key points. Section 6 provides the conclusions of this study.

2. Related Work

Considering domain adaptation as a subfield of transfer learning and noting the scarcity of research on fire recognition based on domain adaptation, this Section introduces methods employing pre-training and fine-tuning techniques and methods employing domain adaptation techniques, both categorized under transfer learning. The distinction between these approaches lies in their requirements: the former requires labels from target domain samples for training, whereas the latter functions without such labels. Additionally, a brief overview of current developments in domain adaptation is provided.

2.1. Pre-Training Fine-Tuning Techniques for Fire Recognition

To prevent the small size of the fire dataset from leading to the problem of overfitting in large DNNs, the networks are pre-trained on ImageNet to obtain a better generalization performance. Pre-training and fine-tuning techniques have been proposed for fire detection to address the issue of insufficient data in the target scene [41]. For example, J. Sharma et al. [45] proposed a flame detection algorithm using VGG16 and ResNet50 networks, pre-trained on ImageNet, to address data category imbalances by collecting varied numbers of positive and negative samples and introducing strong interference images. K. Muhammad et al. [40] proposed an early video detection algorithm based on AlexNet, pre-trained on ImageNet, and reliant on surveillance cameras to establish a disaster management system. K. Muhammad et al. [43] proposed a lightweight fire recognition network based on GoogleNet, balancing computational efficiency and accuracy through the pre-training and fine-tuning technique. K. Muhammad et al. [46] fused feature layers in SqueezeNet sensitive to the flame region to obtain location information and achieve a semantic-level understanding of fire scenes through pre-training on ImageNet. S. Majid et al. [41] compared the detection performance of mainstream CNN networks like VGG and ResNet, pre-trained on ImageNet, using a comprehensive fire dataset collected from platforms such as Kaggle. M. Shahid et al. [65] proposed a two-stage spatiotemporal neural network for video flame segmentation, with its classifier pre-trained on ImageNet. L. Zhang et al. [66] proposed a forest fire recognition method using the pre-training and fine-tuning technique for Unmanned Aerial Vehicle (UAV) aerial imagery and explored the effects of various backbones, network depths, activation functions, and data augmentation methods on forest fire recognition. A. Khan et al. [32] established a new dataset, DeepFire, for UAV forest fire detection. Based on this dataset, the performance of VGG19 pre-trained on ImageNet was compared with the performance of non-deep methods such as KNN, RF, NB, SVM, LR, etc. M. Lee et al. [67] proposed a method to classify the tail flame of jet engines using ResNet, pre-trained on ImageNet. J. Pincott et al. [68] developed a real-time fire and smoke detection system using the pre-trained Faster R-CNN and Inception V2. This system can provide essential fire information, including location, size, and spread trends. H. Yang et al. [69] addressed the challenges of small target detection and sparse data in forest fire smoke detection by proposing an improved method based on YOLOv5 and applying a transfer learning strategy. To address the degradation of model performance on source data during the transfer learning process, V.E. Sathishkumar et al. [70] proposed a fire and smoke recognition model based on non-forgetting learning. This model was implemented using pre-trained models like VGG16, InceptionV3, and Xception. S. Dalal et al. [71] proposed a hybrid model for intelligent urban fire detection in foggy weather, employing a two-step feature extraction method based on transfer learning to extract standard and additional useful features. These pre-trained models were employed in transfer learning to enhance convergence speed and prevent overfitting on small datasets. However, they remain supervised as they access target domain labels during fine-tuning. Therefore, these methods do not apply to situations where the target scene is almost unlabeled.

2.2. Domain Adaptation for Fire Recognition

For fire detection with limited labels, X. Liu et al. [72] proposed a domain adaptation-based fire detection method for grassland fire risk assessment based on Transfer Component Analysis (TCA). It constructs a transfer learning task using the Xilingol grassland dataset as the source domain and the Hulunbeier grassland dataset as the target domain, addressing the challenge of obtaining samples for target scenarios in natural disaster studies. Z.-G. Liu et al. [73] also proposed a method for fire warnings in public places by mining urban communication data using TCA. This method transfers knowledge from non-local rich data to local tasks with insufficient samples, effectively bridging data gaps. P. Vorwerk et al. [74] applied feature representation transfer and instance transfer methods, based on Linear Discriminant Analysis (LDA) and the TrAdaBoost algorithm, to multi-sensor node indoor early fire detection. This approach addresses the scarcity of real-world data for model training, a common issue due to the infrequent occurrence of fires over a building's life cycle. Although these methods incorporate domain adaptation techniques for fire detection, they primarily focus on non-visual data.
In this study, deep domain adaptation is introduced to vision-based fire recognition, and an improved algorithm that combines subdomain feature distribution alignment with target domain proxy classification loss is proposed. This approach leverages transfer learning to reduce the workload associated with annotating fire datasets in target scenarios.

2.3. Basic Theory of Domain Adaptation

Recently, domain adaptation techniques have been widely applied in CV tasks such as image classification [75], object detection [76], image/video segmentation [77], and image/video retrieval [78,79].
Given a source domain $D_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ and a target domain $D_t = \{(x_i^t, y_i^t)\}_{i=1}^{n_t}$, where $n_s$ and $n_t$ represent the number of samples in the source and target domains, respectively. In this paper, domain adaptation assumes that the sample and label spaces of $D_s$ and $D_t$ are identical ($\mathcal{X}_s = \mathcal{X}_t$, $\mathcal{Y}_s = \mathcal{Y}_t$), yet their joint probability distributions are different ($P_s(x, y) \neq P_t(x, y)$), and the labels $y^t$ of the target domain are inaccessible. The optimization objective is to learn an optimal model $f: x^t \mapsto y^t$ using the labeled $D_s$ and the unlabeled $D_t$, aiming to minimize the prediction error $\epsilon = \mathbb{E}_{(x, y) \in D_t}[\,l(f(x), y)\,]$ on $D_t$. The loss function can be expressed as
$$L = L_{task}(D_s) + \lambda L_{adapt}(D_s, D_t)$$
where $L_{task}(\cdot)$ represents the task-related loss, such as the cross-entropy loss in image classification tasks, $L_{adapt}(\cdot, \cdot)$ is the adaptation loss that improves the generalization performance of the model over $D_t$, and $\lambda$ is a weighting factor.
$L_{adapt}(D_s, D_t)$ can be seen as a regularization term, and the methods for constructing this regularization term mainly fall into two types: one learns invariant features between the source and target domains to achieve knowledge transfer, explicitly or implicitly measuring the distribution distance between the two domains, and is further divided into distance- and adversarial-based methods; the other designs a proxy for the expected classification error on the target domain, and is further divided into information maximization-, Singular Value Decomposition (SVD)-, and pseudo-label-based methods. Distance-based methods generally measure the discrepancy between the source and target domains by proposing an explicit distributional distance [80,81,82,83]. Adversarial-based methods achieve feature alignment through a minimax two-player game strategy, where one player, the domain discriminator, is trained to distinguish the source from the target domain, and the other, the feature extractor, is trained to confuse the domain discriminator by learning domain-invariant features [84,85,86]. Information maximization-based methods utilize the entropy or mutual information of prediction vectors [87,88,89]. SVD-based methods apply singular value decomposition to either source or target features [90,91]. Pseudo-label-based methods generate labels for the unlabeled samples in the target domain, effectively transforming the learning approach from unsupervised to supervised. This process is commonly referred to as self-supervised learning.
Inspired by references [83,92], this paper proposes the Neighborhood Aggregation-based 2-Stage Domain Adaptation (NA2SDA) algorithm, which integrates the learning of domain-invariant features with a proxy to the expected classification error on the target domain. NA2SDA constructs an auxiliary classifier in the target domain using a neighborhood aggregation mechanism. It utilizes the structural information embedded in the target domain’s unlabeled samples to generate higher-quality pseudo-labels. This method facilitates a two-stage domain adaptation: the first stage involves LMMD-based conditional distribution alignment, and the second stage focuses on minimizing the proxy classification loss in the target domain. Since both stages depend on the pseudo-labels from the target domain samples, higher-quality pseudo-labels inevitably enhance the model’s generalization performance in the target domain.

3. Methodology of Proposed FireDA

This Section introduces the proposed FireDA and its core domain adaptation algorithm, NA2SDA, providing detailed descriptions of the methodology. In the framework of FireDA, as shown in Figure 1, the inputs are labeled source samples and unlabeled target samples. $G_f$ is a feature extractor with residual connectivity, which extracts fire image features from both the source and target domains. $G_y$ is a 3-layer MLP designed for the domain adaptation-based fire recognition task; its final output layer contains two neurons that output the probabilities of the image being categorized as fire or non-fire. In traditional domain adaptation, pseudo-labels generated by the source domain classifier are inevitably biased toward the source domain distribution. Since the source and target domains follow different distributions, it is essential to calibrate the pseudo-labels toward the target domain distribution. NA2SDA utilizes the source domain classifier and structural information from the target domain's unlabeled samples. It employs source domain-like samples with high soft prediction confidence to construct a classifier that is suitable for predicting unlabeled samples in the target domain. To avoid the frequent sample selection and alternate training typical of semi-supervised learning (SSL), NA2SDA builds a memory bank. This bank stores and updates the features and soft predictions of all unlabeled target domain samples, allowing the algorithm to access the historical information of neighboring samples in each training batch and thus obtain more accurate pseudo-labels.

3.1. Memory Bank

To obtain the global structural features of unlabeled target domain samples, a memory bank is constructed to store the information of these samples. At the beginning of training, the features of the unlabeled samples are initialized randomly, and the soft predictions are initialized with a uniform distribution. Subsequently, they are updated in each training epoch. The features and soft predictions of the unlabeled samples are denoted as
$$fea_j^n = G_f^n(x_j^t)$$
$$pred_j^n = \mathrm{Softmax}\big(G_y^n(G_f^n(x_j^t))\big)$$
where $G_f$ and $G_y$ are the feature extractor and classifier of the network, respectively, and $fea_j^n$ and $pred_j^n$ are the features and soft predictions extracted by the network at the $n$-th training epoch, respectively.
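To make the memory bank concrete, below is a minimal PyTorch-style sketch of such a storage module. The class name, constructor arguments, and update interface are illustrative assumptions rather than the released FireDA implementation; the sketch only mirrors the initialization and per-epoch update described above.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Stores features and soft predictions for all unlabeled target samples.

    Hypothetical sketch: feat_dim and num_classes are assumed to match the FL3
    feature size and the two-class (fire/non-fire) output of G_y.
    """
    def __init__(self, num_samples, feat_dim, num_classes=2, device="cuda"):
        # Features are initialized randomly; soft predictions start uniform.
        self.features = F.normalize(torch.randn(num_samples, feat_dim, device=device), dim=1)
        self.preds = torch.full((num_samples, num_classes), 1.0 / num_classes, device=device)

    @torch.no_grad()
    def update(self, indices, new_features, new_preds):
        # Overwrite the stored entries for the given target-sample indices.
        self.features[indices] = F.normalize(new_features, dim=1)
        self.preds[indices] = new_preds
```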

3.2. Neighborhood Aggregation

In the $(n+1)$-th epoch of training, the process begins by calculating the cosine similarities between the features of each unlabeled sample and all samples in the memory bank. Next, the $K$ nearest neighbors are queried to obtain the corresponding soft predictions from the memory bank. These soft predictions are then aggregated by calculating their mean value, ultimately resulting in pseudo-labels calibrated toward the target domain. The distance is obtained as
$$dis(fea_i^{n+1}, fea_j^n) = 1 - \cos(fea_i^{n+1}, fea_j^n) = 1 - \frac{fea_i^{n+1} \cdot fea_j^n}{\|fea_i^{n+1}\|\,\|fea_j^n\|}$$
where $1 \le i, j \le n_t$ and $j \ne i$. Then, the distances are sorted and the indices of the $K$ nearest neighbors are obtained as
$$\{j_1, j_2, \ldots, j_K\} = \mathrm{argsort}\big(dis(fea_i^{n+1}, fea_1^n), \ldots, dis(fea_i^{n+1}, fea_{n_t}^n)\big)[:K]$$
where $\mathrm{argsort}(\cdot)$ sorts the distances in ascending order and returns the corresponding indices, and $j_i$ represents an index value.
The corresponding pseudo-label $\tilde{y}_i$ is then obtained as
$$\tilde{y}_i = \mathop{\mathrm{argmax}}_{c}\; \overline{pred}_i = \mathop{\mathrm{argmax}}_{c}\; \frac{1}{K} \sum_{j \in \{j_1, \ldots, j_K\}} pred_j^n$$
where $\mathrm{argmax}_c(\cdot)$ returns the category with the largest aggregated prediction and $c \in \{1, 2, \ldots, C\}$ represents the category label.
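The neighborhood aggregation step can be sketched in PyTorch as follows, reusing the MemoryBank sketch above. Function and argument names are assumptions for illustration; excluding each sample from its own neighborhood follows the $j \ne i$ condition in the distance definition.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def aggregate_pseudo_labels(batch_features, batch_indices, bank, k=5):
    """Average the memory-bank soft predictions of the K nearest neighbors
    (cosine distance) and take the argmax as the calibrated pseudo-label."""
    q = F.normalize(batch_features, dim=1)            # (B, d) query features
    dist = 1.0 - q @ bank.features.t()                # cosine distance to all stored features
    dist[torch.arange(q.size(0)), batch_indices] = float("inf")  # enforce j != i
    _, nn_idx = dist.topk(k, dim=1, largest=False)    # indices of the K nearest neighbors
    mean_pred = bank.preds[nn_idx].mean(dim=1)        # (B, C) aggregated soft prediction
    return mean_pred.argmax(dim=1), mean_pred         # pseudo-labels and their soft scores
```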

3.3. Updating Mechanisms

In the $(n+1)$-th epoch, the features and soft predictions of the unlabeled samples are extracted and then updated in the memory bank. To prevent the network from producing ambiguous results, a temperature coefficient is used to sharpen the network predictions stored in the memory bank.
$$fea_j^{n+1} = G_f^{n+1}(x_j^t)$$
$$pred_j = \mathrm{Softmax}\big(G_y^{n+1}(G_f^{n+1}(x_j^t))\big)$$
$$pred_{j,c}^{n+1} = \frac{(pred_{j,c})^{1/T}}{\sum_{c'=1}^{C} (pred_{j,c'})^{1/T}}$$
where $pred_{j,c}$ is the probability that sample $x_j^t$ belongs to class $c$, and $T$ is the temperature coefficient: the smaller its value, the sharper the output probability distribution, and the larger its value, the smoother the distribution. When $T \to 0$, $pred_{j,c}^{n+1}$ collapses to a one-hot code, in which the category with value 1 is the label predicted by the model and all other entries are 0.
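A one-function sketch of this sharpening operation is shown below; the default temperature is only a placeholder, since the tuned value of $T$ is reported in the hyperparameter study (Section 5.3).

```python
def sharpen(pred, T=0.5):
    """Temperature sharpening of soft predictions (PyTorch tensor of shape (B, C));
    smaller T gives a sharper distribution, and as T -> 0 the output approaches one-hot."""
    p = pred.clamp_min(1e-8) ** (1.0 / T)   # element-wise pred^(1/T), guarded against zeros
    return p / p.sum(dim=1, keepdim=True)   # renormalize over the C classes
```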

3.4. Loss Functions

3.4.1. LMMD

Among the domain adaptation methods, the MMD is a classical measure of distributional distance, and its empirical approximation is usually expressed as
$$L_{MMD}(D_s, D_t) = \left\| \frac{1}{n_s} \sum_{x_i \in D_s} G_f(x_i) - \frac{1}{n_t} \sum_{x_j \in D_t} G_f(x_j) \right\|_{\mathcal{H}}^2$$
where $\mathcal{H}$ represents the Reproducing Kernel Hilbert Space (RKHS), $\|\cdot\|_{\mathcal{H}}^2$ denotes the squared RKHS norm, and $n_s$ and $n_t$ denote the number of samples in the source and target domains, respectively.
It can be seen that the MMD-based domain adaptation method measures the overall distributions of the two domains, and thus can only achieve marginal distribution alignment. Y. Zhu et al. [83] proposed LMMD, which further considers conditional distribution alignment by dividing $D_s$ and $D_t$ into $C$ pairs of sub-domains ($C$ is the number of categories). The LMMD is then calculated as
$$L_{LMMD}(D_s, D_t) = \frac{1}{C} \sum_{c=1}^{C} \left\| \sum_{x_i \in D_s} \omega_i^{sc} G_f(x_i) - \sum_{x_j \in D_t} \omega_j^{tc} G_f(x_j) \right\|_{\mathcal{H}}^2$$
where $\omega_i^{sc}$ and $\omega_j^{tc}$ are the probabilities that the samples $x_i$ and $x_j$ belong to category $c$, respectively, and $\sum_{i=1}^{n_s} \omega_i^{sc} = \sum_{j=1}^{n_t} \omega_j^{tc} = 1$. $\omega_i^{sc}$ and $\omega_j^{tc}$ are derived by the following equations.
$$\omega_i^{sc} = \frac{y_{ic}}{\sum_{(x, y) \in D_s} y_c}$$
$$\omega_j^{tc} = \frac{\tilde{y}_{jc}}{\sum_{(x, \tilde{y}) \in D_t} \tilde{y}_c}$$
To compute $\omega_i^{sc}$, the true label $y_{ic}$ in the source domain is used; to compute $\omega_j^{tc}$, since there are no labels in the target domain, the pseudo-label $\tilde{y}$ is used as an estimate.
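For illustration, the sketch below evaluates this class-wise alignment with an explicit (linear) feature map, i.e., it directly compares the weighted per-class feature means. The original LMMD [83] evaluates the same quantity with multiple Gaussian kernels in an RKHS, so this is a simplified stand-in rather than the exact loss used in FireDA.

```python
import torch
import torch.nn.functional as F

def lmmd_linear(src_feat, src_labels, tgt_feat, tgt_soft_pred, num_classes=2):
    """Sub-domain (class-wise) alignment: weighted source means use true labels,
    weighted target means use soft pseudo-labels; each weight column sums to 1."""
    src_w = F.one_hot(src_labels, num_classes).float()
    src_w = src_w / src_w.sum(dim=0, keepdim=True).clamp_min(1e-8)
    tgt_w = tgt_soft_pred / tgt_soft_pred.sum(dim=0, keepdim=True).clamp_min(1e-8)

    loss = src_feat.new_zeros(())
    for c in range(num_classes):
        mean_s = (src_w[:, c:c + 1] * src_feat).sum(dim=0)   # weighted class-c source mean
        mean_t = (tgt_w[:, c:c + 1] * tgt_feat).sum(dim=0)   # weighted class-c target mean
        loss = loss + ((mean_s - mean_t) ** 2).sum()
    return loss / num_classes
```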

3.4.2. Pseudo-Label-Based Proxy Classification Loss in the Target Domain

Pseudo-label training is an important approach in SSL, where the network gradually treats its high-confidence predictions on unlabeled data as true labels and then calculates the cross-entropy loss based on these pseudo-labels during retraining. The pseudo-labeling mechanism encourages the model to produce low-entropy predictions for unlabeled samples, thereby improving clustering results. When calculating the cross-entropy loss between the pseudo-label $\tilde{y}_i$ and the model's prediction, the average soft prediction corresponding to the pseudo-label is used as a weight. This weight is multiplied by the standard cross-entropy loss, allowing pseudo-labels with higher confidence to have a greater impact on the loss function. The pseudo-label-based Proxy Classification Loss (PCL) in the target domain is denoted as
$$L_{PCL}(D_t) = \frac{1}{n_t} \sum_{(x_i, \tilde{y}_i) \in D_t} \overline{pred}_i \cdot H\big(\mathrm{Softmax}(G_y(G_f(x_i))), \tilde{y}_i\big)$$
where $\mathrm{Softmax}(G_y(G_f(x_i)))$ is the $C$-dimensional prediction of the network for unlabeled samples in a new training epoch, and $H(\cdot, \cdot)$ is the cross-entropy loss.
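A sketch of this confidence-weighted cross-entropy is given below, assuming the aggregated soft predictions from the neighborhood-aggregation sketch above are available; the names are illustrative.

```python
import torch.nn.functional as F

def proxy_classification_loss(tgt_logits, pseudo_labels, mean_pred):
    """Per-sample cross-entropy against the pseudo-label, weighted by the aggregated
    soft prediction of that pseudo-label, so confident pseudo-labels count more."""
    ce = F.cross_entropy(tgt_logits, pseudo_labels, reduction="none")
    weight = mean_pred.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)
    return (weight * ce).mean()
```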

3.4.3. Supervised Classification Loss in the Source Domain

For fire recognition, the task-related loss is the Supervised Classification Loss (SCL) in the source domain. The SCL is denoted as
$$L_{SCL}(D_s) = \frac{1}{n_s} \sum_{(x_i, y_i) \in D_s} H\big(\mathrm{Softmax}(G_y(G_f(x_i))), y_i\big)$$
In summary, the loss function of NA2SDA includes the LMMD loss, the SCL in the source domain, and the PCL in the target domain. The total loss function can be expressed as
$$L_{NA2SDA} = L_{SCL}(D_s) + \lambda_1 L_{LMMD}(D_s, D_t) + \lambda_2 L_{PCL}(D_t)$$
where $\lambda_1$ and $\lambda_2$ are hyperparameters to be optimized.
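Putting the pieces together, one NA2SDA training step could be wired up as in the sketch below, reusing the helper sketches above (MemoryBank, aggregate_pseudo_labels, sharpen, lmmd_linear, proxy_classification_loss). The composition and the $\lambda$ weighting follow the total loss just defined, but the function signature and the exact placement of the memory-bank update are assumptions.

```python
import torch.nn.functional as F

def na2sda_step(G_f, G_y, bank, src_x, src_y, tgt_x, tgt_idx,
                lambda1, lambda2, k=5, T=0.5):
    src_feat, tgt_feat = G_f(src_x), G_f(tgt_x)
    src_logits, tgt_logits = G_y(src_feat), G_y(tgt_feat)

    # Calibrated pseudo-labels via the memory bank (Section 3.2).
    pseudo, mean_pred = aggregate_pseudo_labels(tgt_feat.detach(), tgt_idx, bank, k)

    loss_scl = F.cross_entropy(src_logits, src_y)                        # supervised loss on the source
    loss_lmmd = lmmd_linear(src_feat, src_y, tgt_feat, mean_pred)        # sub-domain alignment
    loss_pcl = proxy_classification_loss(tgt_logits, pseudo, mean_pred)  # target proxy loss
    total = loss_scl + lambda1 * loss_lmmd + lambda2 * loss_pcl

    # Refresh the memory bank with sharpened predictions (Section 3.3).
    bank.update(tgt_idx, tgt_feat.detach(), sharpen(tgt_logits.softmax(dim=1).detach(), T))
    return total
```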

4. Experiment Settings

In this Section, information about the experimental setup, dataset, evaluation metrics, and the SOTA methods being compared will be briefly described.

4.1. Datasets

To construct source and target domain data for transfer learning, currently available public video and image fire data are collected as follows:
BowFire [44,47,61]: An image dataset collected by the Institute of Mathematics and Computer Science at the University of São Paulo, Brazil, featuring urban emergency fire scenarios such as building fires, industrial fires, car accidents, and riots. It comprises 119 fire images and 107 non-fire images and is frequently used as a testing set for fire recognition methods due to its large number of strongly disturbing negative samples.
FIRESENCE [60,61]: A video fire dataset open-sourced by the cultural heritage preservation organization FIRESENCE, containing 11 fire videos and 16 non-fire videos.
KMU [60,61]: Collected by the Computer Vision and Pattern Recognition Laboratory at Keimyung University, South Korea, this dataset includes 22 fire videos and 16 non-fire videos. They are categorized into four scenarios: indoor/outdoor fires, indoor/outdoor smoke, wildfire smoke, as well as smoke-like and fire-like moving objects. The majority of fire videos depict pool fires generated by the combustion of gasoline and heptane.
MIVIA [44,47,60,61,65]: A video fire and smoke dataset from the MIVIA lab at the University of Salerno, Italy, captured in real environments. It includes 14 fire videos and 17 non-fire videos, with 27 videos sourced from VisiFire. This dataset has been employed as both training and testing sets by various DNN-based fire recognition methods.
VisiFire [61]: A surveillance video-based fire/smoke dataset from Bilkent University, Turkey, consisting of 15 fire videos and 24 non-fire videos. It is primarily used to validate the performance of traditional fire recognition methods that rely on manual feature extraction.
FLAME [61,66,93]: A large-scale forest fire dataset compiled by Shamsoshoara et al. [93], captured by UAVs in a pine forest in Arizona, USA. The dataset includes overhead views in both video and infrared formats. For training and evaluating fire recognition models, the authors segmented the dataset into 31,501 training samples, 7874 validation samples, and 8617 testing samples extracted from the visible videos.
As shown in Figure 2, five datasets (BowFire, FIRESENCE, KMU, MIVIA, and VisiFire) are integrated to create the source domain dataset FBD. FBD comprises three typical scenes: forest (F), brightness (B), and darkness (D), each containing 500 fire and 500 non-fire images. The target domain dataset, FLAME, is selected to facilitate comparisons. Furthermore, FLAME offers an overhead drone view of a forest fire featuring smaller fire targets, significantly differing from the FBD dataset. This difference is crucial for verifying the robustness of NA2SDA against large domain shifts.

4.2. Experimental Setup and Baselines

To validate the effectiveness of the domain adaptation approach in fire recognition and to facilitate the transfer of fire knowledge from labeled to unlabeled data, the transfer task FBD→FLAME is established, as shown in Table 1. To compare with traditional supervised CNN-based methods, FireDA is trained using the labeled FBD dataset and the unlabeled FLAME training set. Model selection is conducted using the FLAME validation set, and performance is evaluated using the FLAME testing set. It is important to note that FireDA does not access the labels of the FLAME training set during its training phase.
In addition, the recognition performance of FireDA is compared with that of supervised CNN-based methods based on Xception [93] and ResNet50 with sample augmentation (ResNet50+SA) [66], respectively. The quantitative results of these methods are taken from [61]. As for the transfer task for fire recognition, the only existing domain adaptation-based fire recognition method, DSAN+ResNet50 [61], is included in the comparison.

4.3. Evaluation Metrics

To evaluate model performance, several evaluation metrics commonly used in classification tasks are employed. These include Recall, Precision, Accuracy, and F1-score, along with the False Negative Rate (FNR) and False Positive Rate (FPR), which are particularly critical in fire recognition tasks. The evaluation metrics are defined as follows in the subsequent equations.
$$Recall = \frac{TP}{TP + FN}$$
$$Precision = \frac{TP}{TP + FP}$$
$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$
$$F1\text{-}score = \frac{2 \times Recall \times Precision}{Recall + Precision}$$
$$FNR = \frac{FN}{TP + FN} = 1 - Recall$$
$$FPR = \frac{FP}{FP + TN}$$
In the fire recognition task, $TP$, $FP$, $FN$, and $TN$ represent the number of true positives (fire images correctly identified as containing fire), false positives (non-fire images incorrectly identified as containing fire, i.e., false alarms), false negatives (fire images incorrectly identified as not containing fire, i.e., missed alarms), and true negatives (non-fire images correctly identified as not containing fire), respectively.
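The metrics above follow directly from the confusion-matrix counts; a small helper such as the one below (illustrative only) reproduces them.

```python
def fire_metrics(tp, fp, fn, tn):
    """Classification metrics from confusion-matrix counts for the fire task."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * recall * precision / (recall + precision)
    return {"Recall": recall, "Precision": precision, "Accuracy": accuracy,
            "F1-score": f1, "FNR": fn / (tp + fn), "FPR": fp / (fp + tn)}
```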

4.4. Implementation Details

All domain adaptation methods are uniformly implemented using the PyTorch framework, and the network structure is detailed in Table 2. The feature extraction network $G_f$ is a ResNet18 pre-trained on ImageNet, while the classifier $G_y$ is a 3-layer MLP. The FL3 features of $G_y$ are utilized for domain adaptation. During training, all convolutional and pooling layers are fine-tuned. The backbone and classifier networks share the same learning rate $lr$, with Stochastic Gradient Descent (SGD) as the optimizer. The learning rate follows a cosine annealing strategy: it is initially set to $lr_{init} = lr/100$, warms up from $lr_{init}$ to $lr$ at the start of the iterations, and subsequently varies according to the cosine function, decreasing progressively until it reaches the final learning rate $lr_{final} \approx 0$. The batch size is set to 128, with each batch containing 64 images from the source domain and 64 from the target domain. A total of 10,000 iterations are performed, and the model parameters are saved every 250 iterations. During training, images are first resized to 256 × 256 pixels and then randomly cropped to 224 × 224 pixels to increase data diversity, followed by random horizontal flipping to further enrich the data. Finally, images are normalized to prevent the gradients from vanishing or exploding. During validation, images are also resized to 256 × 256 pixels and then centrally cropped to 224 × 224 pixels, matching the training input size. Images are standardized to ensure a consistent data distribution between the training and validation phases, with specific details provided in Table 3. The experimental hardware platform is a server running 64-bit Ubuntu 20.04 with two NVIDIA RTX 3090 Ti graphics cards, each with 24 GB of memory.
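The preprocessing pipeline described above maps onto standard torchvision transforms as sketched below. The normalization statistics are the usual ImageNet values, which Table 3 presumably specifies but the text does not state, so they are an assumption here.

```python
from torchvision import transforms

IMAGENET_MEAN, IMAGENET_STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]  # assumed values

train_tf = transforms.Compose([
    transforms.Resize((256, 256)),        # resize to 256 x 256
    transforms.RandomCrop(224),           # random 224 x 224 crop for diversity
    transforms.RandomHorizontalFlip(),    # random horizontal flip
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
val_tf = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),           # central 224 x 224 crop, matching training size
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```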

5. Results and Analysis

In this Section, the standard machine learning workflow is adhered to for the experiments. Model training is performed on the labeled FBD and unlabeled FLAME training sets. Model selection, hyperparameter tuning, and ablation studies are conducted on the FLAME validation set. Model evaluation is carried out on the FLAME testing set. Specifically, Section 5.1 conducts extensive ablation experiments to validate the effectiveness of each NA2SDA module, Section 5.2 provides an interpretability analysis to elucidate the inner workings of NA2SDA, Section 5.3 undertakes hyperparameter selection and sensitivity analysis, Section 5.4 validates the robustness of FireDA in complex scenes, Section 5.5 compares the performance of the optimal model against the SOTA methods, and Section 5.6 discusses the limitations and future improvements of FireDA.

5.1. Ablation Study

An ablation study is conducted to assess the contributions of individual NA2SDA modules to model performance and to determine their optimal configurations. In Table 4, the first row presents the Baseline (Model 1), which involves supervised training solely on the FBD dataset. Its performance is subsequently evaluated on the target testing set, serving as the lower bound of the domain adaptation-based methods' performance. Subsequent rows display experimental results obtained by incorporating various modules into the Baseline. Compared to the Baseline, Model 2 achieves positive transfer through $L_{LMMD}$, and Model 3 through $L_{PCL}$. Model 4 achieves two-stage domain adaptation by combining $L_{LMMD}$ and $L_{PCL}$, improving the model's accuracy to 96.30%.
The evaluation metric results are primarily illustrated by the spider diagram shown in Figure 3. Considering that FNR can be represented by Recall, and that lower FNR and FPR indicate better model performance, the spider diagram displays five metrics: Accuracy, F1-score, Recall, Precision, and 1-FPR. In the graph, larger values for each metric indicate better corresponding model performance, and a larger area enclosed by all the metrics suggests better overall performance. Because the fire targets in the FLAME dataset are smaller than those in the FBD dataset, the Baseline trained solely on the FBD dataset fails to recognize these smaller targets, resulting in a high FNR. Additionally, Model 2, which relies solely on $L_{LMMD}$, fails to utilize the structural features of the target domain samples. Consequently, the trained classifier is biased towards the source domain and inadequately addresses the issue of high FNR. Model 3 employs neighborhood aggregation of the target domain's features to calibrate the source domain classifier, yielding more accurate pseudo-labels. The $L_{PCL}$ calculated from these calibrated pseudo-labels effectively reduces the FNR. Furthermore, $L_{LMMD}$ is combined with $L_{PCL}$ to formulate the proposed NA2SDA, which elevates the fire recognition accuracy to an optimal 96.30% while maintaining low FNR and FPR, achieving the best overall performance.

5.2. Visualization

In this Section, an interpretability analysis of the FBD→FLAME transfer task is performed using t-distributed Stochastic Neighbor Embedding (t-SNE) and Gradient-weighted Class Activation Mapping (GradCAM). Figure 4 illustrates the distributions of the FBD training samples and FLAME validation samples after dimensionality reduction in the feature space. A comparison of Figure 4a–d reveals the domain shift in the FBD→FLAME transfer task. Domain adaptation-based methods effectively learn the common features of negative samples (non-fires) and thus achieve distributional alignment. However, aligning the positive samples (fires) in the feature space proves difficult. The Baseline shows poor alignment of positive samples on both the horizontal and vertical axes of the feature space, due to differences in flame patterns caused by varying shooting angles and distances. Model 2 improves alignment, particularly on the horizontal axis, through conditional distribution alignment, yet it still exhibits poor alignment on the vertical axis. Model 3, leveraging neighborhood aggregation, learns the structural information of unlabeled target domain samples and substantially improves the alignment of fire samples on the vertical axis. The NA2SDA algorithm combines the strengths of Models 2 and 3 to achieve two-stage domain adaptation, enhancing the alignment of positive samples on both the horizontal and vertical axes. This results in optimal transfer effects and the highest recognition accuracy on the FLAME validation set.
Figure 5 displays the network's activation regions on images using the GradCAM technique. Column 1 contains the original images from the FLAME dataset, with red boxes indicating the location of the fire. Columns 2–5 show the class activation heat maps of different networks superimposed onto the original images. Darker red regions indicate higher network activation, which helps assess whether the network accurately identifies fire features. Through comparative analysis, the Baseline demonstrates poor generalization for unlabeled target fires due to the lack of knowledge transfer. For example, in Figure 5(2–5), there are inactivated areas across all figures, with misactivation noted in the smoke region at the top right of Figure 5(2). Compared to the Baseline, Model 2 shows improved feature learning ability, reducing inactivation in Figure 5(4) but still experiencing misactivation in Figure 5(2). Model 3 also displays enhanced feature learning ability, particularly effective in multi-target fire scenarios like Figure 5(3,4), achieving more complete activation at fire locations compared to previous methods. However, it fails to detect the fire in Figure 5(5), resulting in misclassification. The proposed NA2SDA achieves the best results by resolving the misactivation issues seen in Figure 5(2) and the incomplete activation in Figure 5(3,4). It is also the only model that successfully detects the fire at both locations in Figure 5(5).

5.3. Parameter Sensitivity Analysis

In this Section, the effects of changes in four key hyperparameters on the model's performance are examined using the FLAME validation set. These hyperparameters are the LMMD weight $\lambda_1$, the PCL weight $\lambda_2$, the temperature coefficient $T$, and the number of nearest neighbors $K$. The search space is depicted in Table 5, where the underlined parameters indicate the optimal hyperparameter combination. The experimental results, shown in Figure 6, reveal that the recognition accuracy of NA2SDA is particularly sensitive to the value of $\lambda_2$. As $\lambda_2$ increases, accuracy first increases and then decreases, peaking at an optimal value of 40. The other parameters keep the model's performance within a stable range that significantly surpasses the Baseline.

5.4. Robustness Evaluation

In this Section, both positive and negative samples from complex scenes within the source domain FBD are selected to validate the robustness of FireDA. GradCAM technology is utilized to present class activation heatmaps, depicted in Figure 7. Images (1)–(3) display positive samples, where Images (1) and (2) include significant distractions such as firefighters in red clothing and red traffic lights in addition to the fire sources. Image (3) portrays a complex scene with multiple fire targets. Images (4) and (5) represent negative samples, featuring strong distractors like the rising sun and cartoonish flames. The class activation maps overlaid on the original images demonstrate that our method effectively ignores strong distractors and correctly focuses on the actual fire locations in Images (1), (2), (4), and (5). In Image (3), our method successfully identifies and focuses on multiple fire targets. This capability leads to accurate predictions across all samples, validating the robustness of our approach in complex fire detection scenarios.

5.5. Comparison to the SOTA Methods

The experimental results and the corresponding spider diagram for the FLAME testing set are displayed in Table 6 and Figure 8, where Figure 8 serves as a visual representation of Table 6. A. Shamsoshoara et al. [93] achieved an accuracy of 76.23% using Xception as the backbone. L. Zhang et al. [66] utilized the ResNet50 network as the backbone, combined with data augmentation and pre-training and fine-tuning techniques, to improve accuracy to 79.48%. Because references [66,93] have not made their code publicly available and reported only recognition accuracy, our comparison primarily focuses on this metric. Reference [61], however, provides detailed evaluation metrics, allowing for a more comprehensive comparison. Compared to supervised CNN-based methods, the domain adaptation-based fire recognition algorithm achieves slightly lower accuracy, primarily due to the domain shift and the lack of training set labels during the training phase. However, our model surpasses the performance of Xception [93] by 2.79% and achieves 98.59% of the performance of the optimal ResNet50+SA [66]. Compared to DSAN+ResNet50 [61], our model significantly boosts accuracy to 78.36% and shows large improvements across all evaluation metrics. Considering both recognition accuracy and the reduction in data annotation workload, our method achieves better results.

5.6. Limitations and Future Research

The effectiveness of domain adaptation-based methods for fire recognition has been substantiated through comparison with other SOTA methods. However, there are still aspects that require attention and improvement in our future research.
  • The FireDA model, utilizing ResNet18 as its backbone, has a size of approximately 45 MB, which is suboptimal for deployment on edge computing platforms with limited processing capabilities. This is in contrast to the more lightweight fire recognition networks described in [37,94]. Consequently, it is crucial to reduce the size of FireDA.
  • Due to the inherent requirements of the domain adaptation approach, NA2SDA still necessitates a few labeled samples from the target domain for model selection during training. This limitation hinders its application in scenarios where the target domain is nearly unlabeled. Consequently, further research is needed into a model evaluation method that does not rely on labeled target domain samples for hyperparameter tuning and model checkpoint selection.

6. Conclusions

To reduce the reliance of CNN-based models on label quality and quantity for forest fire recognition, this paper introduces a novel domain adaptation-based framework, FireDA. This framework transfers domain knowledge from publicly available labeled fire data to target scenarios with limited labels. Within this framework, NA2SDA, a novel method that combines two domain adaptation strategies, feature distribution alignment and a target domain proxy classification loss, is presented. NA2SDA generates calibrated pseudo-labels for the target domain using a neighborhood aggregation mechanism and a memory bank module, effectively addressing significant domain shifts and enhancing pseudo-label accuracy. Utilizing these pseudo-labels, the LMMD and PCL are computed to improve the model's generalization performance on the unlabeled target domain. First, the FBD dataset is constructed from publicly available labeled data across multiple scenarios to serve as the source domain. Subsequently, the widely recognized FLAME dataset is employed as the target domain to validate each module of NA2SDA and to conduct comparisons with other SOTA methods. The results demonstrate that although FireDA does not access labels from the FLAME training set during training, it achieves performance comparable to the SOTA supervised CNN-based method. To further enhance the practical applicability of the proposed algorithm, future efforts will focus on reducing the size of the backbone and on conducting model selection under unlabeled conditions in the target domain.

Author Contributions

Conceptualization, Z.Y.; Methodology, Z.Y.; Software, Z.Y.; Validation, P.D. and X.W.; Formal analysis, L.Z. and M.Y.; Investigation, Z.Y.; Resources, X.Z. and W.L.; Writing—original draft, Z.Y.; Writing—review and editing, Z.Y., W.L. and L.W.; Visualization, Z.Y.; Supervision, W.L.; Project administration, X.Z.; Funding acquisition, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (Key Special Project for Marine Environmental Security and Sustainable Development of Coral Reefs 2022-3.1).

Data Availability Statement

Publicly available datasets were analyzed in this study. The FLAME can be found in [93]. The proposed codes in this paper and the FBD dataset are available from the authors upon request.

Acknowledgments

This paper is a result of Ph.D. research conducted in the School of Electrical Engineering, Naval University of Engineering.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pourghasemi, H.R.; Gayen, A.; Lasaponara, R.; Tiefenbacher, J.P. Application of Learning Vector Quantization and Different Machine Learning Techniques to Assessing Forest Fire Influence Factors and Spatial Modelling. Environ. Res. 2020, 184, 109321. [Google Scholar] [CrossRef] [PubMed]
  2. Shen, P.; Crippa, P.; Castruccio, S. Assessing Urban Mortality from Wildfires with a Citizen Science Network. Air Qual. Atmos. Health 2021, 14, 2015–2027. [Google Scholar] [CrossRef]
  3. Filkov, A.I.; Ngo, T.; Matthews, S.; Telfer, S.; Penman, T.D. Impact of Australia’s Catastrophic 2019/20 Bushfire Season on Communities and Environment. Retrospective Analysis and Current Trends. J. Saf. Sci. Resil. 2020, 1, 44–56. [Google Scholar] [CrossRef]
  4. Brando, P.; Macedo, M.; Silvério, D.; Rattis, L.; Paolucci, L.; Alencar, A.; Coe, M.; Amorim, C. Amazon Wildfires: Scenes from a Foreseeable Disaster. Flora 2020, 268, 151609. [Google Scholar] [CrossRef]
  5. Jain, M.; Saxena, P.; Sharma, S.; Sonwani, S. Investigation of Forest Fire Activity Changes Over the Central India Domain Using Satellite Observations During 2001–2020. GeoHealth 2021, 5, e2021GH000528. [Google Scholar] [CrossRef]
  6. Nguyen, D.-L.; Putro, M.D.; Jo, K.-H. Lightweight Convolutional Neural Network for Fire Classification in Surveillance System. IEEE Access 2023, 11, 101604–101615. [Google Scholar] [CrossRef]
  7. Mahbub, M.; Hossain, M.M.; Gazi, M.S.A. Cloud-Enabled IoT-Based Embedded System and Software for Intelligent Indoor Lighting, Ventilation, Early Stage Fire Detection and Prevention. Comput. Netw. 2021, 184, 107673. [Google Scholar] [CrossRef]
  8. Calderara, S.; Piccinini, P.; Cucchiara, R. Vision Based Smoke Detection System Using Image Energy and Color Information. Mach. Vis. Appl. 2011, 22, 705–719. [Google Scholar] [CrossRef]
  9. Yuan, F. A Double Mapping Framework for Extraction of Shape-Invariant Features Based on Multi-Scale Partitions with AdaBoost for Video Smoke Detection. Pattern Recognit. 2012, 45, 4326–4336. [Google Scholar] [CrossRef]
  10. Chen, J.; Wang, Y.; Tian, Y.; Huang, T. Wavelet Based Smoke Detection Method with RGB Contrast-Image and Shape Constrain. In Proceedings of the 2013 Visual Communications and Image Processing (VCIP), Kuching, Malaysia, 17–20 November 2013; pp. 1–6. [Google Scholar]
  11. Qiu, T.; Yan, Y.; Lu, G. An Autoadaptive Edge-Detection Algorithm for Flame and Fire Image Processing. IEEE Trans. Instrum. Meas. 2012, 61, 1486–1493. [Google Scholar] [CrossRef]
  12. Chunyu, Y.; Jun, F.; Jinjun, W.; Yongming, Z. Video Fire Smoke Detection Using Motion and Color Features. Fire Technol. 2010, 46, 651–663. [Google Scholar] [CrossRef]
  13. Foggia, P.; Saggese, A.; Vento, M. Real-Time Fire Detection for Video-Surveillance Applications Using a Combination of Experts Based on Color, Shape, and Motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
  14. Stadler, A.; Windisch, T.; Diepold, K. Comparison of Intensity Flickering Features for Video Based Flame Detection Algorithms. Fire Saf. J. 2014, 66, 1–7. [Google Scholar] [CrossRef]
  15. Dimitropoulos, K.; Barmpoutis, P.; Grammalidis, N. Spatio-Temporal Flame Modeling and Dynamic Texture Analysis for Automatic Video-Based Fire Detection. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 339–351. [Google Scholar] [CrossRef]
  16. Rafiee, A.; Dianat, R.; Jamshidi, M.; Tavakoli, R.; Abbaspour, S. Fire and Smoke Detection Using Wavelet Analysis and Disorder Characteristics. In Proceedings of the 2011 3rd International Conference on Computer Research and Development, Shanghai, China, 11–13 March 2011; pp. 262–265. [Google Scholar]
  17. Horng, W.-B.; Peng, J.-W.; Chen, C.-Y. A New Image-Based Real-Time Flame Detection Method Using Color Analysis. In Proceedings of the 2005 IEEE Networking, Sensing and Control, 2005, Tucson, AZ, USA, 19–22 March 2005; pp. 100–105. [Google Scholar]
  18. Chen, T.-H.; Wu, P.-H.; Chiou, Y.-C. An Early Fire-Detection Method Based on Image Processing. In Proceedings of the 2004 International Conference on Image Processing 2004 ICIP’04, Singapore, 24–27 October 2004; Volume 3, pp. 1707–1710. [Google Scholar]
  19. Habiboğlu, Y.H.; Günay, O.; Çetin, A.E. Covariance Matrix-Based Fire and Flame Detection Method in Video. Mach. Vis. Appl. 2012, 23, 1103–1113. [Google Scholar] [CrossRef]
  20. Marbach, G.; Loepfe, M.; Brupbacher, T. An Image Processing Technique for Fire Detection in Video Images. Fire Saf. J. 2006, 41, 285–289. [Google Scholar] [CrossRef]
  21. Çelik, T.; Demirel, H. Fire Detection in Video Sequences Using a Generic Color Model. Fire Saf. J. 2009, 44, 147–158. [Google Scholar] [CrossRef]
  22. Celik, T.; Demirel, H.; Ozkaramanli, H.; Uyguroglu, M. Fire Detection Using Statistical Color Model in Video Sequences. J. Vis. Commun. Image Represent. 2007, 18, 176–185. [Google Scholar] [CrossRef]
  23. Celik, T.; Ozkaramanlt, H.; Demirel, H. Fire Pixel Classification Using Fuzzy Logic and Statistical Color Model. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing—ICASSP ’07, Honolulu, HI, USA, 15–20 April 2007; pp. I-1205–I-1208. [Google Scholar]
  24. Mueller, M.; Karasev, P.; Kolesov, I.; Tannenbaum, A. Optical Flow Estimation for Flame Detection in Videos. IEEE Trans. Image Process. 2013, 22, 2786–2797. [Google Scholar] [CrossRef]
  25. Chen, J.; He, Y.; Wang, J. Multi-Feature Fusion Based Fast Video Flame Detection. Build. Environ. 2010, 45, 1113–1122. [Google Scholar] [CrossRef]
  26. Han, X.-F.; Jin, J.S.; Wang, M.-J.; Jiang, W.; Gao, L.; Xiao, L.-P. Video Fire Detection Based on Gaussian Mixture Model and Multi-Color Features. SIViP 2017, 11, 1419–1425. [Google Scholar] [CrossRef]
  27. Toreyin, B.U.; Dedeoglu, Y.; Cetin, A.E. Flame Detection in Video Using Hidden Markov Models. In Proceedings of the IEEE International Conference on Image Processing 2005, Genova, Italy, 14 September 2005; p. II-1230. [Google Scholar]
  28. Zhang, D.; Han, S.; Zhao, J.; Zhang, Z.; Qu, C.; Ke, Y.; Chen, X. Image Based Forest Fire Detection Using Dynamic Characteristics with Artificial Neural Networks. In Proceedings of the 2009 International Joint Conference on Artificial Intelligence, Hainan, China, 25–26 April 2009; pp. 290–293. [Google Scholar]
  29. Ko, B.C.; Cheong, K.-H.; Nam, J.-Y. Fire Detection Based on Vision Sensor and Support Vector Machines. Fire Saf. J. 2009, 44, 322–329. [Google Scholar] [CrossRef]
  30. Pérez-Porras, F.-J.; Triviño-Tarradas, P.; Cima-Rodríguez, C.; Meroño-de-Larriva, J.-E.; García-Ferrer, A.; Mesas-Carrascosa, F.-J. Machine Learning Methods and Synthetic Data Generation to Predict Large Wildfires. Sensors 2021, 21, 3694. [Google Scholar] [CrossRef] [PubMed]
  31. Dampage, U.; Bandaranayake, L.; Wanasinghe, R.; Kottahachchi, K.; Jayasanka, B. Forest Fire Detection System Using Wireless Sensor Networks and Machine Learning. Sci. Rep. 2022, 12, 46. [Google Scholar] [CrossRef] [PubMed]
  32. Khan, A.; Hassan, B.; Khan, S.; Ahmed, R.; Abuassba, A. DeepFire: A Novel Dataset and Deep Transfer Learning Benchmark for Forest Fire Detection. Mob. Inf. Syst. 2022, 2022, 1–14. [Google Scholar] [CrossRef]
  33. Wu, H.; Zhang, A.; Han, Y.; Nan, J.; Li, K. Fast Stochastic Configuration Network Based on an Improved Sparrow Search Algorithm for Fire Flame Recognition. Knowl.-Based Syst. 2022, 245, 108626. [Google Scholar] [CrossRef]
  34. Wen, Z.; Xie, L.; Feng, H.; Tan, Y. Infrared Flame Detection Based on a Self-Organizing TS-Type Fuzzy Neural Network. Neurocomputing 2019, 337, 67–79. [Google Scholar] [CrossRef]
  35. Ullah, F.U.M.; Obaidat, M.S.; Ullah, A.; Muhammad, K.; Hijji, M.; Baik, S.W. A Comprehensive Review on Vision-Based Violence Detection in Surveillance Videos. ACM Comput. Surv. 2023, 55, 1–44. [Google Scholar] [CrossRef]
  36. Cheng, G.; Chen, X.; Wang, C.; Li, X.; Xian, B.; Yu, H. Visual Fire Detection Using Deep Learning: A Survey. Neurocomputing 2024, 596, 127975. [Google Scholar] [CrossRef]
  37. Li, S.; Yan, Q.; Liu, P. An Efficient Fire Detection Method Based on Multiscale Feature Extraction, Implicit Deep Supervision and Channel Attention Mechanism. IEEE Trans. Image Process. 2020, 29, 8467–8475. [Google Scholar] [CrossRef]
  38. Hu, C.; Tang, P.; Jin, W.; He, Z.; Li, W. Real-Time Fire Detection Based on Deep Convolutional Long-Recurrent Networks and Optical Flow Method. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 9061–9066. [Google Scholar]
  39. Dunnings, A.J.; Breckon, T.P. Experimentally Defined Convolutional Neural Network Architecture Variants for Non-Temporal Real-Time Fire Detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 1558–1562. [Google Scholar]
  40. Muhammad, K.; Ahmad, J.; Baik, S.W. Early Fire Detection Using Convolutional Neural Networks during Surveillance for Effective Disaster Management. Neurocomputing 2018, 288, 30–42. [Google Scholar] [CrossRef]
  41. Majid, S.; Alenezi, F.; Masood, S.; Ahmad, M.; Gündüz, E.S.; Polat, K. Attention Based CNN Model for Fire Detection and Localization in Real-World Images. Expert Syst. Appl. 2022, 189, 116114. [Google Scholar] [CrossRef]
  42. He, L.; Gong, X.; Zhang, S.; Wang, L.; Li, F. Efficient Attention Based Deep Fusion CNN for Smoke Detection in Fog Environment. Neurocomputing 2021, 434, 224–238. [Google Scholar] [CrossRef]
  43. Muhammad, K.; Ahmad, J.; Mehmood, I.; Rho, S.; Baik, S.W. Convolutional Neural Networks Based Fire Detection in Surveillance Videos. IEEE Access 2018, 6, 18174–18183. [Google Scholar] [CrossRef]
  44. Huang, L.; Liu, G.; Wang, Y.; Yuan, H.; Chen, T. Fire Detection in Video Surveillances Using Convolutional Neural Networks and Wavelet Transform. Eng. Appl. Artif. Intell. 2022, 110, 104737. [Google Scholar] [CrossRef]
  45. Sharma, J.; Granmo, O.-C.; Goodwin, M.; Fidje, J.T. Deep Convolutional Neural Networks for Fire Detection in Images. In Engineering Applications of Neural Networks; Boracchi, G., Iliadis, L., Jayne, C., Likas, A., Eds.; Communications in Computer and Information Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 744, pp. 183–193. ISBN 978-3-319-65171-2. [Google Scholar]
  46. Muhammad, K.; Ahmad, J.; Lv, Z.; Bellavista, P.; Yang, P.; Baik, S.W. Efficient Deep CNN-Based Fire Detection and Localization in Video Surveillance Applications. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1419–1434. [Google Scholar] [CrossRef]
  47. Muhammad, K.; Khan, S.; Elhoseny, M.; Hassan Ahmed, S.; Wook Baik, S. Efficient Fire Detection for Uncertain Surveillance Environment. IEEE Trans. Ind. Inf. 2019, 15, 3113–3122. [Google Scholar] [CrossRef]
  48. Li, R.; Hu, Y.; Li, L.; Guan, R.; Yang, R.; Zhan, J.; Cai, W.; Wang, Y.; Xu, H.; Li, L. SMWE-GFPNNet: A High-Precision and Robust Method for Forest Fire Smoke Detection. Knowl.-Based Syst. 2024, 289, 111528. [Google Scholar] [CrossRef]
  49. Li, Z.; Mihaylova, L.; Yang, L. A Deep Learning Framework for Autonomous Flame Detection. Neurocomputing 2021, 448, 205–216. [Google Scholar] [CrossRef]
  50. Li, P.; Zhao, W. Image Fire Detection Algorithms Based on Convolutional Neural Networks. Case Stud. Therm. Eng. 2020, 19, 100625. [Google Scholar] [CrossRef]
  51. Hu, Y.; Zhan, J.; Zhou, G.; Chen, A.; Cai, W.; Guo, K.; Hu, Y.; Li, L. Fast Forest Fire Smoke Detection Using MVMNet. Knowl.-Based Syst. 2022, 241, 108219. [Google Scholar] [CrossRef]
  52. Mahaveerakannan, R.; Anitha, C.; Thomas, A.K.; Rajan, S.; Muthukumar, T.; Govinda Rajulu, G. An IoT Based Forest Fire Detection System Using Integration of Cat Swarm with LSTM Model. Comput. Commun. 2023, 211, 37–45. [Google Scholar] [CrossRef]
  53. Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures. Remote Sens. 2020, 12, 3177. [Google Scholar] [CrossRef]
  54. Choi, H.-S.; Jeon, M.; Song, K.; Kang, M. Semantic Fire Segmentation Model Based on Convolutional Neural Network for Outdoor Image. Fire Technol. 2021, 57, 3005–3019. [Google Scholar] [CrossRef]
  55. Yang, Z.; Wang, T.; Bu, L.; Ouyang, J. Training with Augmented Data: GAN-Based Flame-Burning Image Synthesis for Fire Segmentation in Warehouse. Fire Technol. 2022, 58, 183–215. [Google Scholar] [CrossRef]
  56. Kou, L.; Wang, X.; Guo, X.; Zhu, J.; Zhang, H. Deep Learning Based Inverse Model for Building Fire Source Location and Intensity Estimation. Fire Saf. J. 2021, 121, 103310. [Google Scholar] [CrossRef]
  57. Qin, K.; Hou, X.; Yan, Z.; Zhou, F.; Bu, L. FGL-GAN: Global-Local Mask Generative Adversarial Network for Flame Image Composition. Sensors 2022, 22, 6332. [Google Scholar] [CrossRef]
  58. Zheng, H.; Wang, M.; Wang, Z.; Huang, X. FireDM: A Weakly-Supervised Approach for Massive Generation of Multi-Scale and Multi-Scene Fire Segmentation Datasets. Knowl.-Based Syst. 2024, 290, 111547. [Google Scholar] [CrossRef]
  59. Muhammad, K.; Ullah, H.; Khan, S.; Hijji, M.; Lloret, J. Efficient Fire Segmentation for Internet-of-Things-Assisted Intelligent Transportation Systems. IEEE Trans. Intell. Transport. Syst. 2023, 24, 13141–13150. [Google Scholar] [CrossRef]
  60. Lin, Q.; Li, Z.; Zeng, K.; Fan, H.; Li, W.; Zhou, X. FireMatch: A Semi-Supervised Video Fire Detection Network Based on Consistency and Distribution Alignment. Expert Syst. Appl. 2024, 248, 123409. [Google Scholar] [CrossRef]
  61. Yan, Z.; Wang, L.; Qin, K.; Zhou, F.; Ouyang, J.; Wang, T.; Hou, X.; Bu, L. Unsupervised Domain Adaptation for Forest Fire Recognition Using Transferable Knowledge from Public Datasets. Forests 2022, 14, 52. [Google Scholar] [CrossRef]
  62. Wu, X.; Lu, X.; Leung, H. An Adaptive Threshold Deep Learning Method for Fire and Smoke Detection. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 1954–1959. [Google Scholar]
  63. Saeed, F.; Paul, A.; Karthigaikumar, P.; Nayyar, A. Convolutional Neural Network Based Early Fire Detection. Multimed. Tools Appl. 2020, 79, 9083–9099. [Google Scholar] [CrossRef]
  64. Abedi, M.; Naser, M.Z. RAI: Rapid, Autonomous and Intelligent Machine Learning Approach to Identify Fire-Vulnerable Bridges. Appl. Soft Comput. 2021, 113, 107896. [Google Scholar] [CrossRef]
  65. Shahid, M.; Virtusio, J.J.; Wu, Y.-H.; Chen, Y.-Y.; Tanveer, M.; Muhammad, K.; Hua, K.-L. Spatio-Temporal Self-Attention Network for Fire Detection and Segmentation in Video Surveillance. IEEE Access 2022, 10, 1259–1275. [Google Scholar] [CrossRef]
  66. Zhang, L.; Wang, M.; Fu, Y.; Ding, Y. A Forest Fire Recognition Method Using UAV Images Based on Transfer Learning. Forests 2022, 13, 975. [Google Scholar] [CrossRef]
  67. Lee, M.; Yoon, S.; Kim, J.; Wang, Y.; Lee, K.; Park, F.C.; Sohn, C.H. Classification of Impinging Jet Flames Using Convolutional Neural Network with Transfer Learning. J. Mech. Sci. Technol. 2022, 36, 1547–1556. [Google Scholar] [CrossRef]
  68. Pincott, J.; Tien, P.W.; Wei, S.; Calautit, J.K. Development and Evaluation of a Vision-Based Transfer Learning Approach for Indoor Fire and Smoke Detection. Build. Serv. Eng. Res. Technol. 2022, 43, 319–332. [Google Scholar] [CrossRef]
  69. Yang, H.; Wang, J.; Wang, J. Efficient Detection of Forest Fire Smoke in UAV Aerial Imagery Based on an Improved Yolov5 Model and Transfer Learning. Remote Sens. 2023, 15, 5527. [Google Scholar] [CrossRef]
  70. Sathishkumar, V.E.; Cho, J.; Subramanian, M.; Naren, O.S. Forest Fire and Smoke Detection Using Deep Learning-Based Learning without Forgetting. Fire Ecol. 2023, 19, 9. [Google Scholar] [CrossRef]
  71. Dalal, S.; Lilhore, U.K.; Radulescu, M.; Simaiya, S.; Jaglan, V.; Sharma, A. A Hybrid LBP-CNN with YOLO-v5-Based Fire and Smoke Detection Model in Various Environmental Conditions for Environmental Sustainability in Smart City. Environ. Sci. Pollut. Res. 2024. [Google Scholar] [CrossRef]
  72. Liu, X.; Zhang, G.; Lu, J.; Zhang, J. Risk Assessment Using Transfer Learning for Grassland Fires. Agric. For. Meteorol. 2019, 269–270, 102–111. [Google Scholar] [CrossRef]
  73. Liu, Z.-G.; Li, X.-Y.; Jomaas, G. Identifying Community Fire Hazards from Citizen Communication by Applying Transfer Learning and Machine Learning Techniques. Fire Technol. 2021, 57, 2809–2838. [Google Scholar] [CrossRef]
  74. Vorwerk, P.; Kelleter, J.; Müller, S.; Krause, U. Classification in Early Fire Detection Using Multi-Sensor Nodes—A Transfer Learning Approach. Sensors 2024, 24, 1428. [Google Scholar] [CrossRef]
  75. Wang, M.; Deng, W.; Liu, C.-L. Unsupervised Structure-Texture Separation Network for Oracle Character Recognition. IEEE Trans. Image Process. 2022, 31, 3137–3150. [Google Scholar] [CrossRef] [PubMed]
  76. Zhao, T.; Shen, Z.; Zou, H.; Zhong, P.; Chen, Y. Unsupervised Adversarial Domain Adaptation Based on Interpolation Image for Fish Detection in Aquaculture. Comput. Electron. Agric. 2022, 198, 107004. [Google Scholar] [CrossRef]
  77. Liu, W.; Luo, Z.; Cai, Y.; Yu, Y.; Ke, Y.; Junior, J.M.; Gonçalves, W.N.; Li, J. Adversarial Unsupervised Domain Adaptation for 3D Semantic Segmentation with Multi-Modal Learning. ISPRS J. Photogramm. Remote Sens. 2021, 176, 211–221. [Google Scholar] [CrossRef]
  78. Wang, W.; Zhao, F.; Liao, S.; Shao, L. Attentive WaveBlock: Complementarity-Enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-Identification and Beyond. IEEE Trans. Image Process. 2022, 31, 1532–1544. [Google Scholar] [CrossRef]
  79. Ainam, J.-P.; Qin, K.; Owusu, J.W.; Lu, G. Unsupervised Domain Adaptation for Person Re-Identification with Iterative Soft Clustering. Knowl.-Based Syst. 2021, 212, 106644. [Google Scholar] [CrossRef]
  80. Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; Darrell, T. Deep Domain Confusion: Maximizing for Domain Invariance. arXiv 2014, arXiv:1412.3474. [Google Scholar]
  81. Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning Transferable Features with Deep Adaptation Networks. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 97–105. [Google Scholar]
  82. Sun, B.; Saenko, K. Deep CORAL: Correlation Alignment for Deep Domain Adaptation. In Computer Vision—ECCV 2016 Workshops; Hua, G., Jégou, H., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2016; Volume 9915, pp. 443–450. ISBN 978-3-319-49408-1. [Google Scholar]
  83. Zhu, Y.; Zhuang, F.; Wang, J.; Ke, G.; Chen, J.; Bian, J.; Xiong, H.; He, Q. Deep Subdomain Adaptation Network for Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 1713–1722. [Google Scholar] [CrossRef]
  84. Ganin, Y.; Lempitsky, V. Unsupervised Domain Adaptation by Backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 1180–1189. [Google Scholar]
  85. Pei, Z.; Cao, Z.; Long, M.; Wang, J. Multi-Adversarial Domain Adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  86. Yu, C.; Wang, J.; Chen, Y.; Huang, M. Transfer Learning with Dynamic Adversarial Adaptation Network. In Proceedings of the 2019 IEEE International Conference on Data Mining (ICDM), Beijing, China, 8–11 November 2019; pp. 778–786. [Google Scholar]
  87. Shi, Y.; Sha, F. Information-Theoretical Learning of Discriminative Clusters for Unsupervised Domain Adaptation. In Proceedings of the 29th International Conference on Machine Learning (ICML 2012), Edinburgh, UK, 26 June–1 July 2012. [Google Scholar]
  88. Jin, Y.; Wang, X.; Long, M.; Wang, J. Minimum Class Confusion for Versatile Domain Adaptation. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12366, pp. 464–480. ISBN 978-3-030-58588-4. [Google Scholar]
  89. Prabhu, V.; Khare, S.; Kartik, D.; Hoffman, J. SENTRY: Selective Entropy Optimization via Committee Consistency for Unsupervised Domain Adaptation. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 8538–8547. [Google Scholar]
  90. Chen, X.; Wang, S.; Long, M.; Wang, J. Transferability vs. Discriminability: Batch Spectral Penalization for Adversarial Domain Adaptation. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 1081–1090. [Google Scholar]
  91. Cui, S.; Wang, S.; Zhuo, J.; Li, L.; Huang, Q.; Tian, Q. Towards Discriminability and Diversity: Batch Nuclear-Norm Maximization Under Label Insufficient Situations. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3940–3949. [Google Scholar]
  92. Liang, J.; Hu, D.; Feng, J. Domain Adaptation with Auxiliary Target Domain-Oriented Classifier. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 16627–16637. [Google Scholar]
  93. Shamsoshoara, A.; Afghah, F.; Razi, A.; Zheng, L.; Fulé, P.Z.; Blasch, E. Aerial Imagery Pile Burn Detection Using Deep Learning: The FLAME Dataset. Comput. Netw. 2021, 193, 108001. [Google Scholar] [CrossRef]
  94. Yar, H.; Khan, Z.A.; Rida, I.; Ullah, W.; Kim, M.J.; Baik, S.W. An Efficient Deep Learning Architecture for Effective Fire Detection in Smart Surveillance. Image Vis. Comput. 2024, 145, 104989. [Google Scholar] [CrossRef]
Figure 1. Framework of the proposed FireDA.
Figure 2. Sample images of FBD and FLAME.
Figure 3. Spider diagram of the ablation study on the FLAME validation set.
Figure 4. Feature visualization of FBD training and FLAME validation samples based on t-SNE. The red and yellow points represent the FBD and FLAME fire samples, respectively. The blue and green points represent the FBD and FLAME non-fire samples, respectively.
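A visualization such as Figure 4 can be reproduced with an off-the-shelf t-SNE projection of the pooled backbone features from both domains. The sketch below is a minimal, assumed implementation: the feature arrays, the convention that label 1 denotes fire, and the specific t-SNE settings are not specified by the paper, while the colour coding follows the caption above.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(source_feats, source_labels, target_feats, target_labels):
    """Project source (FBD) and target (FLAME) backbone features into 2D and scatter them."""
    feats = np.concatenate([source_feats, target_feats], axis=0)
    emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(feats)
    n_src = len(source_feats)
    src_emb, tgt_emb = emb[:n_src], emb[n_src:]
    groups = [  # colour coding as in the Figure 4 caption; label 1 = fire is an assumption
        (src_emb[source_labels == 1], "red", "FBD fire"),
        (tgt_emb[target_labels == 1], "yellow", "FLAME fire"),
        (src_emb[source_labels == 0], "blue", "FBD non-fire"),
        (tgt_emb[target_labels == 0], "green", "FLAME non-fire"),
    ]
    for points, color, label in groups:
        plt.scatter(points[:, 0], points[:, 1], s=4, c=color, label=label)
    plt.legend()
    plt.show()
```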
Figure 5. GradCAM-based heatmaps for samples in FLAME. (1–5) are fire samples from the target domain FLAME, with red boxes indicating the location of the fire.
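Heatmaps of this kind can be generated with a plain Grad-CAM pass over the last convolutional block. The following sketch assumes a model that returns class logits directly (for example a torchvision ResNet18, hooking its `layer4`); it is an illustrative implementation of the standard Grad-CAM recipe, not the exact visualization code used for Figure 5.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, last_conv, image, target_class):
    """Minimal Grad-CAM: weight the last conv feature maps by their pooled gradients."""
    feats, grads = {}, {}
    h1 = last_conv.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = last_conv.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    model.eval()
    logits = model(image.unsqueeze(0))          # image: tensor of shape (3, H, W)
    model.zero_grad()
    logits[0, target_class].backward()          # gradient of the chosen class score
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)           # GAP over spatial dims
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))  # weighted feature maps
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]                            # (H, W) heatmap normalised to [0, 1]

# Example (assumed setup): heatmap = grad_cam(resnet, resnet.layer4, img_tensor, target_class=1)
```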
Figure 6. Grid search results of the hyperparameters.
Figure 7. GradCAM-based heatmaps for samples of complex scenarios. (1–5) are fire and non-fire samples from complex scenes within the source domain FBD.
Figure 8. Spider diagram of the evaluation metrics on the FLAME testing set.
Table 1. Fire recognition transfer task experimental setup.
Dataset | Fire Images | Non-Fire Images | Labels
FBD (source) | 1500 | 1500 | used for training
FLAME training set (target) | 20,015 | 11,486 | not used
FLAME validation set | 5003 | 2871 | used for evaluation only
FLAME testing set | 5137 | 3480 | used for evaluation only
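For readers reproducing the transfer task in Table 1, the sketch below shows one way to expose FBD as the labelled source domain and the FLAME training frames as the unlabelled target domain. The directory layout and the use of torchvision's `ImageFolder` are assumptions rather than part of the original pipeline; the per-domain batch size of 64 follows Table 3.

```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

# Minimal sketch of the Table 1 transfer task. Paths are hypothetical; each folder is
# assumed to contain "fire" and "non_fire" subfolders.
to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

source_set = ImageFolder("data/FBD", transform=to_tensor)          # labelled source: 1500 fire / 1500 non-fire
target_set = ImageFolder("data/FLAME/train", transform=to_tensor)  # target: directory labels exist but the adaptation losses never use them
val_set    = ImageFolder("data/FLAME/val", transform=to_tensor)    # labels used for evaluation only

source_loader = DataLoader(source_set, batch_size=64, shuffle=True, drop_last=True)
target_loader = DataLoader(target_set, batch_size=64, shuffle=True, drop_last=True)
val_loader    = DataLoader(val_set, batch_size=64, shuffle=False)
```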
Table 2. Model structure of domain adaptation methods.
Model | Layer | Feature
Backbone | ResNet18 |
Classifier | Linear(256) |
 | ReLU() |
 | Dropout(0.5) | FL3
 | Linear(256) |
 | ReLU() |
 | Dropout(0.5) | FL6
 | Linear(2) | Logits
 | Softmax() | Preds
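As a reading aid for Table 2, the following PyTorch sketch lays out one plausible realization of the backbone and classifier. The 512-dimensional pooled ResNet18 feature and the use of ImageNet-pretrained weights are assumptions, since the table lists only layer types and widths.

```python
import torch
import torch.nn as nn
from torchvision import models

class FireClassifier(nn.Module):
    """Sketch of the Table 2 layout: ResNet18 backbone plus a small MLP head."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])                   # 512-d pooled feature (assumed)
        self.block3 = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.5))   # -> FL3
        self.block6 = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Dropout(0.5))   # -> FL6
        self.head = nn.Linear(256, num_classes)                                        # -> Logits

    def forward(self, x: torch.Tensor):
        feat = torch.flatten(self.backbone(x), 1)
        fl3 = self.block3(feat)
        fl6 = self.block6(fl3)
        logits = self.head(fl6)
        preds = torch.softmax(logits, dim=1)                                            # -> Preds
        return fl3, fl6, logits, preds
```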
Table 3. Hyperparameter Settings.
Catalog | Settings
Optimizer | SGD
lr | One Cycle: 5% warmup period; lr_init = lr/100; lr_final → 0; cosine annealing
Batch size | 64 (source) + 64 (target)
Iterations/checkpoints | 10,000/250
Training image transforms | Resize(256), RandomCrop(224), RandomHorizontalFlip(), Normalize()
Validation/testing image transforms | Resize(256), CenterCrop(224), Normalize()
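The schedule and transforms in Table 3 map directly onto standard PyTorch components. The sketch below is one possible configuration; the peak learning rate and the ImageNet normalization statistics are assumptions that the table does not state, and the stand-in model is only there to make the snippet runnable.

```python
import torch.optim as optim
from torchvision import models, transforms

# Image transforms from Table 3 (ImageNet mean/std assumed for Normalize()).
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])
eval_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

# SGD with a One Cycle schedule: 5% warm-up, lr_init = lr/100, cosine decay towards 0.
model = models.resnet18(num_classes=2)      # stand-in model for illustration only
peak_lr = 0.01                              # assumed; the peak lr is not reported in Table 3
optimizer = optim.SGD(model.parameters(), lr=peak_lr, momentum=0.9)
scheduler = optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=peak_lr,
    total_steps=10_000,        # iterations from Table 3
    pct_start=0.05,            # 5% warm-up period
    anneal_strategy="cos",     # cosine annealing
    div_factor=100,            # lr_init = lr / 100
    final_div_factor=1e4,      # lr_final ≈ 0
)
```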
Table 4. Ablation study of the effectiveness of each module on the FLAME validation set.
Model | Accuracy | F1-Score | Recall | Precision | FNR | FPR
L_SCL (Baseline, Model 1) | 70.84 | 70.27 | 54.23 | 99.78 | 45.77 | 0.21
+ L_LMMD (Model 2) | 78.59 | 79.94 | 67.16 | 98.74 | 32.84 | 1.50
+ L_PCL (Model 3) | 96.05 | 96.88 | 96.46 | 97.30 | 3.54 | 4.67
+ L_LMMD + L_PCL (FireDA, Model 4) | 96.30 | 97.05 | 95.68 | 98.46 | 4.32 | 2.61
Bold indicates the best result for each metric.
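Table 4 (and Table 6 below) report six metrics derived from the binary confusion matrix with fire as the positive class. Assuming the standard definitions, which the paper does not restate, they can be computed as in the small helper below.

```python
def fire_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Confusion-matrix metrics (in %) with 'fire' as the positive class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                     # true positive rate
    return {
        "Accuracy":  100 * (tp + tn) / (tp + fp + tn + fn),
        "F1-Score":  100 * 2 * precision * recall / (precision + recall),
        "Recall":    100 * recall,
        "Precision": 100 * precision,
        "FNR":       100 * fn / (fn + tp),      # missed fire frames
        "FPR":       100 * fp / (fp + tn),      # false alarms on non-fire frames
    }
```

Note that Recall and FNR are complementary (they sum to 100%), which is consistent with every row of Table 4.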
Table 5. Hyperparameter search space. These hyperparameters are the LMMD weight λ1, the PCL weight λ2, the temperature coefficient T, and the number of nearest neighbors K.
Hyperparameter | Search Space
λ1 | [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
λ2 | [0.0, 0.1, 1.0, 10.0, 40.0, 50.0, 100.0]
T | [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]
K | [10, 20, 40, 60, 80, 100]
The underlined parameters indicate the optimal hyperparameter combinations.
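Table 5 describes an exhaustive grid over four hyperparameters. A minimal search loop in that spirit is sketched below; `train_and_validate` is a hypothetical callback that is assumed to run one NA2SDA training job with the given configuration and return validation accuracy.

```python
from itertools import product

SEARCH_SPACE = {
    "lambda_1": [0.0, 0.2, 0.4, 0.6, 0.8, 1.0],             # LMMD weight
    "lambda_2": [0.0, 0.1, 1.0, 10.0, 40.0, 50.0, 100.0],   # PCL weight
    "T": [0.1, 0.3, 0.5, 0.7, 0.9, 1.0],                    # temperature coefficient
    "K": [10, 20, 40, 60, 80, 100],                         # number of nearest neighbors
}

def grid_search(train_and_validate):
    """Return the configuration with the best validation accuracy."""
    best_acc, best_cfg = float("-inf"), None
    keys = list(SEARCH_SPACE)
    for values in product(*(SEARCH_SPACE[k] for k in keys)):
        cfg = dict(zip(keys, values))
        acc = train_and_validate(**cfg)     # hypothetical: one full training run per configuration
        if acc > best_acc:
            best_acc, best_cfg = acc, cfg
    return best_cfg, best_acc
```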
Table 6. Comparison results with the SOTA methods on the FLAME testing set.
Category | Method | Accuracy | F1-Score | Recall | Precision | FNR | FPR
Supervised CNN-based | Xception [93] | 76.23 | - | - | - | - | -
 | ResNet50+SA [66] | 79.48 | - | - | - | - | -
 | ToT+ResNet50 [61] | 67.50 | 76.10 | 86.50 | 68.30 | 13.50 | 60.55
Domain adaptation-based | DSAN+ResNet50 [61] | 63.90 | 66.20 | 60.70 | 78.20 | 39.30 | 31.38
 | Our model | 78.36 | 81.39 | 79.40 | 83.48 | 20.60 | 23.19
Bold indicates the best result for each metric.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
