Article

MTL-FSFDet: An Effective Forest Smoke and Fire Detection Model Based on Multi-Task Learning

The College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
*
Author to whom correspondence should be addressed.
Forests 2025, 16(5), 719; https://doi.org/10.3390/f16050719
Submission received: 28 March 2025 / Revised: 18 April 2025 / Accepted: 21 April 2025 / Published: 23 April 2025
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract

Forest fires cause devastating damage to the natural environment, making prompt and precise detection of smoke and fires in forests crucial. When processing forest fire images based on ground and aerial perspectives, current object detection methods still encounter issues such as inadequate detection precision, elevated false detection and omission rates, and difficulties in detecting small targets in complex forest environments. Multi-task learning represents a framework in machine learning where a model can handle detection and segmentation tasks concurrently, enhancing the accuracy and generalization capacity of object detection. Therefore, this study proposes a Multi-Task Learning-based Forest Smoke and Fire Detection model (MTL-FSFDet). Firstly, an improved Bilateral Filtering-Multi-Scale Retinex (BF-MSR) image enhancement method was proposed to lessen the effect of lighting on smoke images and improve the quality of the dataset. Secondly, a Hybrid Feature Extraction module, which integrates local and global information, was introduced to distinguish between targets and backgrounds, addressing smoke and fire detection in complex backgrounds. Furthermore, DySample, a method utilizing point sampling, was introduced to capture richer feature information when dealing with small targets. In addition, a feature fusion approach based on Content-Guided Attention (CGA) was proposed to weightedly fuse low-level and high-level features, boosting the precision in detecting small targets. Finally, multi-task learning improves the capability to detect small targets and tackle complex scenarios by sharing the feature extraction module and leveraging refined supervision from the segmentation task. The findings from the experiments show that, in comparison to the baseline model, MTL-FSFDet improved the mAP@0.5 by 5.3%.

1. Introduction

Forests constitute the fundamental environment for the survival of Earth’s biota and hold a crucial position in maintaining the balance of natural ecosystems [1]. In recent years, forest fires have occurred frequently, and once a fire spreads out of control, it causes devastating harm to the natural environment and endangers human life and property [2]. Hence, prompt and precise detection of forest fires is particularly important.
Traditional detection methods mainly include manual inspection, sensor monitoring, observation tower surveillance, and satellite remote sensing technology. However, these methods have certain limitations for early detection. Although manual inspection can intuitively detect signs of fire, real-time monitoring of large forest areas is challenging due to limitations in manpower, material resources, and time [3]. Sensors can monitor fires in real time, but their deployment and maintenance costs are high and their monitoring range is limited [4]. Observation towers can observe fires at greater distances, but they are susceptible to obstructions and have limited visibility due to blind spots [5]. Satellite remote sensing technology offers wide coverage, but its imaging distance is too great to capture early signs of fires. In this context, the emergence of unmanned aerial vehicles (UAVs) has opened new avenues for forest fire monitoring [6]. Equipping UAVs with sensors for aerial patrol monitoring overcomes the restricted field of view of observation towers and the excessive imaging distance of satellite remote sensing, eliminates the need for the substantial manpower and material resources required by traditional methods, and achieves effective monitoring of forest fires [7].
With the advent of computer vision, early forest fire detection methods primarily relied on manual feature extraction. Chen et al. [8] proposed a method based on color models, employing color gamut segmentation and the K-Means clustering algorithm for forest fire detection, which exhibits high stability and versatility. Çelik et al. [9] investigated images and videos of different fires, introduced a statistical analysis approach, and proposed a rule-based general color model for flame pixel classification. Compared with the RGB color space, this algorithm more effectively achieves the separation of luminance and chrominance by utilizing the YCbCr color space. Wang et al. [10] proposed a hybrid method that combines a hidden Markov model based on spatiotemporal transform features with the variance in the luminance map of visual features, which could significantly reduce false detections and demonstrated strong robustness.
As deep learning technology has advanced, the application of deep learning for forest fire detection has increasingly demonstrated its advantages compared to traditional methods that rely on manual prior features. Compared to basic image processing techniques, the use of deep learning is highly advisable for extracting and identifying complex semantic information [11], making it more adaptable to changing forest environments. Lee et al. [12] calculated global frame similarity and mean squared error and combined them with Faster-RCNN [13] to preliminarily delineate smoke and flame regions. Subsequently, based on local spatiotemporal information, the final scope of smoke and flames was determined, achieving a false positive detection rate of 99.9%. Yuan et al. [14] improved YOLOv8 [15] by utilizing a Channel Prior Dilated Attention module, which allows the convolution kernel to more closely fit the target for efficient feature extraction. At the same time, they replaced standard convolution with Generalized-Sparse Convolution to alleviate the parameter increment issue introduced by the attention module, achieving an mAP of 88.8%. Qian et al. [16] proposed the OBDS model based on YOLOv5, which merges a CNN and Transformer to extract global and local feature information from images, enhancing the model’s capacity to perceive contextual information, with an mAP of 92.1%.
Although much progress has been achieved in detecting forest fires domestically and internationally, there are still some issues to be addressed. Existing deep learning models still suffer from poor detection precision, with high rates of false and missed detections within intricate forest backgrounds. Especially for small or irregularly shaped smoke and flames, the recognition capabilities of existing algorithms are relatively limited. To address these challenges, this study proposes a Multi-Task Learning-based Forest Smoke and Fire Detection model (MTL-FSFDet), to achieve real-time monitoring of forest areas and promptly detect fire situations and anomalies, which has significant practical application implications. This study introduces the following goals and tasks:
(1)
To mitigate the impact of natural environmental lighting conditions on smoke images, this study proposes an improved Bilateral Filtering-Multi-Scale Retinex (BF-MSR) method for enhancing images. This algorithm enhances the color contrast of low-light smoke images, making the contours and textures of the smoke clearer, and effectively reducing the miss rate of detection methods.
(2)
To overcome the limitations of Convolutional Neural Networks (CNNs) in lacking global perception ability and in distinguishing between the target and background, this study proposes the Hybrid Feature Extraction module. This module integrates the local feature extraction ability provided by convolution with the global perception ability offered by the self-attention mechanism, enabling the simultaneous capture of fine textures and contextual information of smoke and fire, and better addressing the detection of forest smoke and fires against complex backgrounds.
(3)
To improve the precision of detecting small smoke and fire targets, this study proposes DySample and a feature fusion approach based on Content-Guided Attention (CGA). DySample achieves upsampling by dividing a single point into multiple sub-points through point sampling, allowing the model to obtain richer feature information when dealing with small targets. CGAFusion synergizes spatial attention and channel attention, weightedly fusing low-level and high-level features to capture the subtle features of small target smoke and fires more effectively.
(4)
To enhance the identification of the edges, shapes, and details of objects in detection tasks, this study proposes a multi-task learning approach comprising both detection and segmentation tasks. The efficiency and accuracy of feature extraction are enhanced by multi-task learning, which utilizes a shared feature extraction module and a multi-scale fusion network. The main emphasis is on the detection task, with the segmentation task serving as an auxiliary task. We leverage the fine-grained supervision provided by the segmentation task to enhance detection performance for small objects and within complicated scenarios.

2. Materials and Methods

2.1. Dataset

The data utilized in this study were sourced from public datasets such as VisiFire [17], BowFire [18], FireSense [19], and HPWREN Fire [20], encompassing forest smoke and fire images of various types and scenarios, along with interfering images of cloud, sun, etc. Table 1 presents the distribution of each category in the dataset, and Figure 1 presents some representative images.

2.2. The Preprocessing of the Image

Given the wide range of data sources for forest smoke images and the inevitable influence of complex natural environmental factors, such as diverse lighting conditions, throughout the process of image collection, images often exhibit issues of uneven illumination and significant color deviations. If these smoke images are directly input into a detection network, the network may struggle to accurately capture smoke features due to overly bright areas or shadowed parts in the images, leading to an increase in false detection rates. Therefore, this study proposes an improved Bilateral Filtering-Multi-Scale Retinex (BF-MSR) method, enhancing smoke images to improve image quality.
Based on the theory of Retinex [21], the input image I(x,y) can be split into two components: the incident component L(x,y), and the reflection component R(x,y), which is expressed as
I(x, y) = L(x, y) · R(x, y)
Since the human eye’s response to changes in brightness approximates a logarithmic pattern, taking the logarithm of both sides gives
log I(x, y) = log L(x, y) + log R(x, y)
The image decomposition process for Multi-Scale Retinex (MSR) is as follows:
Multi-scale Gaussian filtering is applied to the image. Specifically, N different scales σ are selected; for each scale, a Gaussian kernel G_n(x, y) is computed and convolved with the image I(x, y) to obtain the incident component L(x, y):
L(x, y) = I(x, y) * G_n(x, y)
For each scale n, the reflection component R_n(x, y) is computed:
R_n(x, y) = log I(x, y) − log(I(x, y) * G_n(x, y))
Combine the reflection components of each scale through a weighted linear fusion to obtain the final MSR:
MSR(x, y) = Σ_{n=1}^{N} w_n [log I(x, y) − log(I(x, y) * G_n(x, y))]
G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))
In this context, N signifies the number of scales, G denotes the Gaussian surround function, σ represents the standard deviation of the Gaussian function, w_n represents the weight of the nth scale, and w_1 + … + w_N = 1.
The Retinex algorithm is based on a theoretical assumption when estimating illumination for an image; namely, that the incident illumination layer varies slowly. However, in complex forest environments, due to direct sunlight or shading from trees, the actual incident illumination layer can vary dramatically. This results in the loss of image edge information after Multi-Scale Retinex (MSR) processing and may lead to noticeable halos, making overexposure likely. Compared to Gaussian filtering, bilateral filtering, as a nonlinear filter, can maintain image edge details effectively and smooth image noise. Therefore, we improve MSR by using bilateral filtering instead of Gaussian filtering, with the formula as follows:
MSR(x, y) = Σ_{n=1}^{N} w_n [log I(x, y) − log(I(x, y) * B_n(x, y))]
B(x, y) = exp(−((x − x_c)² + (y − y_c)²) / (2σ_s²)) · exp(−(f(x, y) − f(x_c, y_c))² / (2σ_r²))
Here, (x_c, y_c) indicates the coordinates of the center pixel of the filter window, and f(x_c, y_c) signifies the grayscale value of that central pixel. The spatial Gaussian function has a standard deviation denoted by σ_s, and the range Gaussian function has a standard deviation denoted by σ_r.
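For reference, the BF-MSR pipeline described above can be sketched in a few lines of Python using OpenCV’s bilateral filter. This is a minimal illustration rather than the authors’ released implementation; the scale set, the per-scale weights, and the bilateral-filter parameters (d, sigmaColor) are assumed values.

```python
import cv2
import numpy as np

def bf_msr(image_bgr, sigmas=(15, 80, 250), weights=None, eps=1.0):
    """Bilateral-Filtering Multi-Scale Retinex (illustrative sketch).

    image_bgr: uint8 BGR image; sigmas: spatial standard deviations of the
    bilateral filter at each scale (assumed values); weights: per-scale
    weights summing to 1 (uniform by default).
    """
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    img = image_bgr.astype(np.float32) + eps               # avoid log(0)
    msr = np.zeros_like(img)
    for w, sigma in zip(weights, sigmas):
        # The bilateral filter estimates the illumination layer while
        # preserving edges (sigmaColor is an assumed value).
        illum = cv2.bilateralFilter(img, d=0, sigmaColor=75,
                                    sigmaSpace=sigma) + eps
        msr += w * (np.log(img) - np.log(illum))            # reflection component
    # Stretch the log-domain result back to a displayable 0-255 range.
    msr = cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX)
    return msr.astype(np.uint8)

# Usage: enhanced = bf_msr(cv2.imread("smoke.jpg"))
```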
To understand this intuitively, we chose three sets of low-illumination images for testing, and the test results can be observed in Figure 2.
Compared to the original smoke images, BF-MSR significantly enhances the color contrast of low-illumination and nighttime images, making the contours and textures of the smoke much clearer. Compared to MSR, BF-MSR effectively avoids image overexposure and reduces the likelihood of edge halos.
For an unbiased assessment of the performance of the algorithm, Information Entropy (IE) [22] and Peak Signal-to-Noise Ratio (PSNR) [23] were selected as evaluation metrics. A higher IE indicates that the image contains a richer amount of information, and a higher PSNR indicates stronger noise suppression. Taking Figure 2a as an example, the results of a comparison among diverse image enhancement approaches are displayed in Table 2.
As can be seen from the table above, BF-MSR had the highest IE and PSNR, indicating that the processed images are rich in detail information and have good noise reduction effects, making it more suitable for enhancing forest smoke images.
In the preprocessing step for flame images, current methods mainly rely on color and edge information to segment the flame area from the background and fill in incomplete areas within the flame center [24]. However, in this study, the flame images underwent meticulous segmentation and annotation processing, with the flame areas accurately defined and the annotated regions continuous and without holes, which ensured the integrity of the flame areas. Therefore, there was no need to repeat this process using color and edge-based segmentation methods. As a result, this study did not perform preprocessing on the flame images.

2.3. Methods

2.3.1. YOLOv8

The basic framework of MTL-FSFDet is based on YOLOv8, and its overall structure is illustrated in Figure 3.
YOLOv8 mainly consists of three components: the backbone, neck, and head. In the backbone, the input image undergoes initial feature extraction through convolutional layers (Conv), followed by multiple stacked C2f modules to extract multi-scale features, and the SPPF module is utilized to enhance the receptive field. In the neck part, after upsampling, the deep features are concatenated with shallow features to achieve multi-scale feature fusion. In the head part, the fused multi-scale feature maps are used to generate classification and regression predictions.

2.3.2. MTL-FSFDet

In multitask learning, the network architecture and parameters are primarily shared among the layers. Specifically, when multiple tasks perform hard parameter sharing in the shared layers, they use the same network architecture, and the weights and biases of these network layers are identical. This sharing mechanism means that for each neuron in the shared layers, the input it receives, the activation function it applies, and the feature representation it outputs are the same for all tasks. During the training process, these shared parameters are updated synchronously based on the loss functions of all tasks, thereby learning a general representation or features that can adapt to multiple tasks simultaneously. By sharing these parameters and network architecture, the multitask learning model can capture common information or patterns among different tasks, which helps improve the performance of each task. Simultaneously, since the shared layers decrease the count of parameters that need to be learned, this also aids in alleviating the issue of overfitting and improves the model’s capacity for generalization.
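As an illustration of hard parameter sharing, the following PyTorch sketch builds one shared trunk whose weights receive gradients from both a detection branch and a segmentation branch. The layer widths, head shapes, and the loss weighting in the closing comment are assumptions, not the actual MTL-FSFDet architecture.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Illustrative hard parameter sharing: one shared trunk, two task heads."""
    def __init__(self, num_det_outputs=6, num_seg_classes=3):
        super().__init__()
        # Shared layers: identical weights serve both tasks.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
        )
        self.det_head = nn.Conv2d(64, num_det_outputs, 1)   # detection branch
        self.seg_head = nn.Conv2d(64, num_seg_classes, 1)   # segmentation branch

    def forward(self, x):
        feats = self.shared(x)                # common representation for all tasks
        return self.det_head(feats), self.seg_head(feats)

model = HardSharingMTL()
det_out, seg_out = model(torch.randn(1, 3, 640, 640))
# A joint loss (the 0.5 weighting is an assumption) updates the shared
# parameters with gradients from both tasks:
# loss = det_loss + 0.5 * seg_loss; loss.backward()
```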
This study introduces MTL-FSFDet, a model for detecting forest smoke and fires utilizing multi-task learning. It plays a vital role in addressing challenges such as low detection precision and difficulties in spotting small targets in complex forests. As shown in Figure 4, the image data preprocessed by BF-MSR are fed into MTL-FSFDet as input. In the backbone, C2f_Hybrid extracts local and global features of the image. Subsequently, these feature data are transmitted to the neck. In the neck, DySample captures tiny features, and CGAFusion performs weighted fusion of low-level and high-level features. Finally, the fused feature maps are sent to the segmentation head and the detection head, respectively, which output the boundaries of the fire regions and the detection results.
The entire model includes two tasks: a task for detecting smoke and fire, and another task for segmenting smoke and fire. Among them, the object detection task is the primary task, and the segmentation task serves as an auxiliary task to assist in improving the detection performance. These two tasks share the feature extraction module and the multi-scale feature fusion network, which can enhance the performance of their respective task-guiding networks.

2.3.3. Hybrid Feature Extraction Block

In the backbone of YOLOv8, the original C2f module primarily consists of simple convolutional stacks, which excel at extracting local features but lack the ability to perceive global features. When dealing with forest smoke and fire images with complex backgrounds, where smoke and flames exhibit varied shapes and are influenced by factors like illumination and occlusion, this emphasis on local feature extraction often leads to difficulties for the model in distinguishing between targets and backgrounds, causing erroneous detections and omissions. In response to this challenge, this study introduces a Hybrid Feature Extraction Block [25], a convolution-self-attention module, to serve as an alternative to the bottleneck in C2f, forming a new C2f_Hybrid. This design combines the ability of convolutions to learn local relative positional information, with the ability of the self-attention mechanism to capture global context, which permits the model to better detect forest smoke and fires within complex backgrounds.
In the Hybrid module, depicted in Figure 5, the input feature map X, sized C × H × W, is split evenly into two sub-feature maps along its channel dimension: X1 and X2. X1 is processed by IDConv, which performs local feature aggregation by injecting inductive biases to generate a local feature map X1′, while X2 is processed by OSRA, which extracts global features by expanding the receptive field, yielding a global feature map X2′. The dimensions of both feature maps after these two steps are C/2 × H × W: the height and width stay unchanged, while the channel count is halved. Subsequently, X1′ and X2′ are concatenated along the channel dimension, forming an output feature map X′ that regains the original dimensions. Lastly, to further enhance the efficiency of feature fusion, a lightweight STE is introduced to acquire the ultimate features Y. The process of the Hybrid module can be represented as
X1, X2 = Split(X)
X′ = Concat(IDConv(X1), OSRA(X2))
Y = STE(X′)
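The split–parallel–concatenate flow of the Hybrid module can be summarized by the following PyTorch sketch; the three sub-modules are passed in as simple placeholders here, standing in for the IDConv, OSRA, and STE blocks described next, rather than reproducing their exact implementations.

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Sketch of the split -> (local, global) -> concat -> STE flow.
    The branch modules are placeholders, not the paper's exact layers."""
    def __init__(self, local_branch, global_branch, ste):
        super().__init__()
        self.local_branch = local_branch    # e.g. an IDConv-style module on C/2 channels
        self.global_branch = global_branch  # e.g. an OSRA-style attention on C/2 channels
        self.ste = ste                      # lightweight fusion module

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)   # split along the channel dimension
        x1 = self.local_branch(x1)          # local feature map X1'
        x2 = self.global_branch(x2)         # global feature map X2'
        x_cat = torch.cat([x1, x2], dim=1)  # restore the original C channels
        return self.ste(x_cat)              # final features Y

# Example with identity placeholders for the three sub-modules:
blk = HybridBlock(nn.Identity(), nn.Identity(), nn.Identity())
y = blk(torch.randn(1, 64, 32, 32))
```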
(1)
Input-dependent Depthwise Convolution (IDConv)
Within the IDConv block, shown in Figure 6a, given a feature map X of size C × H × W, spatial context information is first aggregated through an adaptive average pooling layer, which reduces the spatial size to K × K, where K denotes the size of the dynamic convolution kernel to be generated. Following this, the pooled feature map is fed into two 1 × 1 convolutions to obtain multiple groups of spatial attention maps S′, with dimensions G × C × K × K, where G indicates the number of groups of spatial attention maps and C stands for the channel count of the initial feature map. To endow the spatial attention maps with adaptive selection properties, a softmax function is applied over the G dimension, yielding attention weights S. Ultimately, the attention weights S are multiplied element-by-element with a set of learnable parameters P of the same dimensions to generate the dynamic convolution kernels W.
IDConv has the capability to adaptively modify the weights of the convolution kernels in response to various input feature maps, thereby dynamically capturing local information and endowing the network with strong inductive biases. The operation of IDConv can be represented as follows:
S′ = Conv_{1×1}^{C/r → G×C}(Conv_{1×1}^{C → C/r}(AdaptivePool(X)))
S = Softmax(Reshape(S′))
W = Σ_{i=1}^{G} P_i ⊗ S_i
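A simplified sketch of this dynamic-kernel generation is given below; the reduction ratio, kernel size, and group count are assumed values, and the final step applies the generated kernels as a per-sample depthwise convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IDConvSketch(nn.Module):
    """Simplified Input-Dependent Depthwise Convolution (illustrative).
    Generates a C x K x K depthwise kernel per input image and applies it."""
    def __init__(self, channels, kernel_size=3, groups=4, reduction=4):
        super().__init__()
        self.c, self.k, self.g = channels, kernel_size, groups
        self.pool = nn.AdaptiveAvgPool2d(kernel_size)            # spatial context -> K x K
        self.fc1 = nn.Conv2d(channels, channels // reduction, 1)
        self.fc2 = nn.Conv2d(channels // reduction, groups * channels, 1)
        # Learnable kernel bank P: G groups of C x K x K parameters.
        self.P = nn.Parameter(torch.randn(groups, channels, kernel_size, kernel_size))

    def forward(self, x):
        b = x.size(0)
        s = self.fc2(F.relu(self.fc1(self.pool(x))))              # B x (G*C) x K x K
        s = s.view(b, self.g, self.c, self.k, self.k).softmax(dim=1)  # attention over G
        w = (s * self.P.unsqueeze(0)).sum(dim=1)                  # dynamic kernels: B x C x K x K
        # Apply a per-sample depthwise convolution by folding the batch into groups.
        x = x.reshape(1, b * self.c, *x.shape[2:])
        w = w.reshape(b * self.c, 1, self.k, self.k)
        out = F.conv2d(x, w, padding=self.k // 2, groups=b * self.c)
        return out.reshape(b, self.c, *out.shape[2:])

out = IDConvSketch(32)(torch.randn(2, 32, 16, 16))
```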
(2)
Overlapping Spatial Reduction Attention (OSRA)
In the OSRA module, as shown in Figure 6b, an Overlapping Spatial Reduction (OSR) technique is introduced to optimize the representation of spatial structures by the Multi-Head Self-Attention (MHSA) mechanism. Through leveraging expanded and overlapping patch units, this approach can better grasp spatial details in the edge regions of patches, thereby significantly enhancing the MHSA mechanism’s ability to discern spatial structures. This method accomplishes fusion of global information via overlapping regions between patches. The operation of OSR can be represented as follows:
Y = OSR(X)
Q = Linear(X)
K, V = Split(Linear(Y + LR(Y)))
Z = Softmax(QKᵀ / √d + B) V
where LR( ) represents a localized enhancement block initialized through a 3 × 3 depthwise convolution, B denotes the matrix of relative positional biases used to encode spatial relationships within the attention map, and d represents the count of channels contained in each individual attention head.
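The following sketch illustrates the OSRA computation, with keys and values taken from a feature map downsampled by an overlapping (kernel larger than stride) convolution and refined by a depthwise local-enhancement branch. The head count and reduction ratio are assumptions, and the relative positional bias B is omitted for brevity.

```python
import torch
import torch.nn as nn

class OSRASketch(nn.Module):
    """Illustrative Overlapping Spatial Reduction Attention.
    Keys/values come from a map produced by an overlapping (kernel > stride) conv."""
    def __init__(self, dim, num_heads=4, sr_ratio=2):
        super().__init__()
        self.heads, self.dh = num_heads, dim // num_heads
        # Overlapping spatial reduction: kernel = stride + 1 makes patches overlap.
        self.osr = nn.Conv2d(dim, dim, kernel_size=sr_ratio + 1,
                             stride=sr_ratio, padding=1)
        self.lr = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)   # local refinement LR
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                         # x: B x C x H x W
        b, c, h, w = x.shape
        y = self.osr(x)
        y = y + self.lr(y)                                        # Y + LR(Y)
        q = self.q(x.flatten(2).transpose(1, 2))                  # queries from full map
        k, v = self.kv(y.flatten(2).transpose(1, 2)).chunk(2, dim=-1)
        def split(t):                                             # B x heads x N x dh
            return t.view(b, -1, self.heads, self.dh).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) / self.dh ** 0.5         # scaled dot-product
        z = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, h * w, c)
        return self.proj(z).transpose(1, 2).reshape(b, c, h, w)

out = OSRASketch(64)(torch.randn(1, 64, 32, 32))
```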
(3)
Squeezed Token Enhancer (STE)
Within the STE module, shown in Figure 6c, the input feature map X is processed by a 3 × 3 depthwise convolution to enhance local feature correlations. Subsequently, a 1 × 1 convolution reduces the channel count, lessening the model’s computational burden. Additionally, a residual connection ensures the feature representation capability. Compared to traditional methods that directly use a single 1 × 1 convolution layer for feature fusion, the STE adopted in this study exhibits better performance and more favorable computational complexity. The process of the STE can be represented as
STE(X) = Conv_{1×1}^{C/r → C}(Conv_{1×1}^{C → C/r}(DWConv_{3×3}(X))) + X
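This corresponds to a very small module; an illustrative PyTorch version (with an assumed reduction ratio r) is:

```python
import torch.nn as nn

class STESketch(nn.Module):
    """Squeezed Token Enhancer (illustrative): DWConv -> 1x1 squeeze -> 1x1 expand + residual."""
    def __init__(self, channels, r=4):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.squeeze = nn.Conv2d(channels, channels // r, 1)   # reduce channels to C/r
        self.expand = nn.Conv2d(channels // r, channels, 1)    # restore to C

    def forward(self, x):
        # Residual connection keeps the original representation intact.
        return self.expand(self.squeeze(self.dw(x))) + x
```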

2.3.4. DySample

In the neck of YOLOv8, the original UpSample achieves upsampling by interpolating the entire feature map, a process that not only consumes substantial computational resources, limiting the model’s ability to stay lightweight when detecting forest fires, but also leads to the omission of certain details in the upsampling, thus increasing the likelihood of overlooking small fire and smoke objects. In response to this problem, this study proposes a lightweight and effective upsampling method, DySample [26], as an alternative to UpSample.
DySample achieves the upsampling process through dynamic sampling, a method that does not rely on additional CUDA packages, significantly reducing computational complexity. Furthermore, utilizing the concept of point sampling, DySample divides a single point into several smaller points. This design enhances edge clarity, helping the model retain feature details more completely and elevating its precision when detecting small targets within forest fires. Figure 7 illustrates the structure of DySample.
Figure 7a depicts dynamic-upsampling-based sampling. When provided with a feature map X (sized C × H × W) and a sampling set S (with dimensions 2 × sH × sW), the grid_sample function utilizes the coordinate information in the sampling set S to carry out bilinear interpolation resampling on the input feature map X, generating a new feature map X′ of size C × sH × sW. This step is defined as follows:
X′ = grid_sample(X, S)
Figure 7b illustrates the point sampling generator process. Set the upsampling scale factor to s. Given a feature map X of size C × H × W, offsets O of size 2s² × H × W are generated by a linear layer with input channels C and output channels 2s². Subsequently, the offsets O are reshaped into size 2 × sH × sW through pixel shuffling. The sampling set S is obtained by adding the offsets O to the initial sampling grid G. The entire process can be depicted as follows:
O = Linear(X)
S = O + G
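A compressed sketch of this offset-then-resample flow (not the reference DySample implementation from [26]) is shown below; the offset scaling factor of 0.25 is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DySampleSketch(nn.Module):
    """Illustrative dynamic upsampling via point sampling (cf. DySample [26])."""
    def __init__(self, channels, scale=2):
        super().__init__()
        self.scale = scale
        # Predict 2*s^2 offset channels from the input features.
        self.offset = nn.Conv2d(channels, 2 * scale * scale, 1)

    def forward(self, x):                                   # x: B x C x H x W
        b, _, h, w = x.shape
        s = self.scale
        # Offsets O: B x 2s^2 x H x W  ->  B x 2 x sH x sW via pixel shuffle.
        o = F.pixel_shuffle(self.offset(x) * 0.25, s)
        # Base sampling grid G in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, s * h, device=x.device)
        xs = torch.linspace(-1, 1, s * w, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=0).unsqueeze(0).expand(b, -1, -1, -1)
        coords = (grid + o).permute(0, 2, 3, 1)             # S = G + O, as B x sH x sW x 2
        return F.grid_sample(x, coords, mode="bilinear", align_corners=True)

up = DySampleSketch(64)(torch.randn(1, 64, 16, 16))         # -> 1 x 64 x 32 x 32
```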

2.3.5. Content-Guided Attention Fusion (CGAFusion)

In the demanding context of detecting smoke and fire in forests, the attention mechanism refines the input features through weighted processing, to emphasize key details and diminish non-essential or redundant information, thus significantly improving the efficacy of feature fusion. Traditional attention modules for features generally comprise two parts: channel attention, and spatial attention, which exert their functions by computing attention weights. Specifically, channel attention [27] is responsible for recalibrating features, while spatial attention [28] produces a spatial importance map (SIM) to depict the salience of various areas, which is crucial for precisely locating forest smoke or fire areas.
However, when facing complex forest environments and varied smoke and fire patterns, the inherent limitations of traditional attention become apparent. Spatial attention needs to overcome feature-level inhomogeneity to accurately capture the faint information of smoke, while channel attention, due to its lack of context analysis capability, struggles to fully understand the complex scenarios of fire sites. Furthermore, these two types of attention operate independently, lacking effective interaction, which prevents them from synergistically enhancing feature representations.
To tackle this problem, this study proposes a Content-Guided Attention (CGA) [29] module. CGA resolves the problem of insufficient information correlation between features by generating specific SIMs for each channel. This design promotes a deep interaction between spatial and channel attention, enhances the understanding of context information, and allows the model to precisely identify small smoke and fire objects in complex scenarios. The operation of CGA is illustrated in Figure 8.
Considering an input feature X of size C × H × W, the objective of CGA is to produce a channel-specific SIM (denoted as W, which has the same dimensions as X). First, Ws and Wc are calculated separately using the following formulas:
W_s = C_{7×7}([X_{GAP}^s, X_{GMP}^s])
W_c = C_{1×1}(max(0, C_{1×1}(X_{GAP}^c)))
where X_{GMP}^s, X_{GAP}^s, and X_{GAP}^c represent the features after channel-wise global maximum pooling, channel-wise global average pooling, and spatial global average pooling, respectively, and max(0, x) denotes the ReLU activation function. Subsequently, W_s and W_c are fused through simple addition to obtain a coarse SIM W_cos:
W_cos = W_c + W_s
W_cos and each channel of X are rearranged alternately through channel shuffling to obtain the final channel-specific SIM W:
W = σ(GC_{7×7}(CS([X, W_cos])))
In this context, σ signifies the sigmoid activation function, CS refers to the channel shuffle operation, and GC_{7×7} denotes a 7 × 7 group convolution. Therefore, CGA allocates a distinct SIM to every channel, steering the model’s attention towards important regions of smoke and flame within each channel.
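An illustrative PyTorch rendering of the CGA computation follows; it is a sketch of the steps above rather than the DEA-Net reference code, and the channel-attention reduction ratio is an assumed value.

```python
import torch
import torch.nn as nn

class CGASketch(nn.Module):
    """Illustrative Content-Guided Attention: one SIM per channel."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.spatial = nn.Conv2d(2, 1, 7, padding=3)                 # W_s from [avg, max] over channels
        self.channel = nn.Sequential(                                # W_c from spatial GAP
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1))
        # Grouped 7x7 conv over the shuffled [X, W_cos] stack -> channel-specific SIM.
        self.refine = nn.Conv2d(2 * channels, channels, 7, padding=3, groups=channels)

    def forward(self, x):
        b, c, h, w = x.shape
        w_s = self.spatial(torch.cat([x.mean(1, keepdim=True),
                                      x.max(1, keepdim=True).values], dim=1))
        w_c = self.channel(x)
        w_cos = w_c + w_s                      # coarse SIM, broadcast to B x C x H x W
        # Channel shuffle: interleave each feature channel with its coarse SIM.
        mixed = torch.stack([x, w_cos], dim=2).reshape(b, 2 * c, h, w)
        return torch.sigmoid(self.refine(mixed))                     # final SIM W

sim = CGASketch(64)(torch.randn(1, 64, 32, 32))                      # same shape as the input
```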
YOLOv8 adopts a feature fusion strategy that combines upsampling and concatenation. During the upsampling process, the receptive fields of low-level features and high-level feature maps may not align, leading to spatial misalignment in the fused features. Low-level features capture edge details and texture information, which are used to distinguish targets from objects with similar visual characteristics (such as smoke and clouds, and flames and the sun). High-level features emphasize the extraction of semantic information (such as the spread of smoke and the shape of flames), which is used to distinguish targets from non-targets in complex environments. Owing to the substantial disparities in encoded information [30] and receptive fields between low-level and high-level features, merely concatenating them may fail to achieve effective complementarity between the two, and may even lead to the loss of key information.
To resolve this challenge, this study presents CGAFusion, a feature fusion approach based on CGA. It takes low-level features from the backbone and corresponding high-level features from the neck as inputs into CGA for weighted fusion. This retains fine-grained details from low-level features and semantic info from high-level features, enhancing the feature expression capabilities and boosting detection accuracy for small objects.
Figure 9 shows how low-level and high-level features are input into the CGA to compute weights for each feature location. Subsequently, a method involving weighted summation is used to merge these features. A skip connection is added to mitigate gradient vanishing and ensure the integrity of information transmission. Lastly, a 1 × 1 conv layer is used to map the fused features, resulting in the ultimate feature. The process of CGAFusion can be represented as
F_fuse = C_{1×1}(F_low ⊙ W + F_high ⊙ (1 − W) + F_low + F_high)
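Building on the CGASketch above, the fusion step itself reduces to a weighted sum plus skip connections and a 1 × 1 projection, as in this sketch (module composition is an assumption, not the authors’ exact code).

```python
import torch
import torch.nn as nn

class CGAFusionSketch(nn.Module):
    """Illustrative weighted fusion of low-level and high-level features via CGA."""
    def __init__(self, channels, cga):
        super().__init__()
        self.cga = cga                                # e.g. CGASketch(channels)
        self.proj = nn.Conv2d(channels, channels, 1)  # final 1x1 mapping

    def forward(self, f_low, f_high):
        w = self.cga(f_low + f_high)                  # channel-specific importance map W
        fused = f_low * w + f_high * (1.0 - w)        # weighted complementarity
        return self.proj(fused + f_low + f_high)      # skip connection preserves information

fuse = CGAFusionSketch(64, CGASketch(64))
out = fuse(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```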

2.3.6. Multi-Task Head

For detection tasks and segmentation tasks, specific task heads are designed for each task. The design of the task heads is illustrated in Figure 10: For the semantic segmentation task head [31], the fused feature map downsampled by eight times from the neck network is used as input. Through a sequence of upsampling and convolutional operations, the feature map is restored to the size of the original input, and the count of feature channels corresponds to the count of semantic segmentation categories.
For the object detection task, this study adopts an anchor-free mechanism. By simplifying the object localization process, the anchor-free approach eliminates the need for calculating and matching multiple anchor boxes, effectively reducing the computational cost and memory footprint and allowing the model to operate efficiently under limited hardware resources. At the same time, it can better handle the diversity of objects in training and test data without relying on predefined anchor box shapes and sizes, giving it better generalization capability. The input to the object detection task head consists of features from three different scales in the feature fusion layer: the larger-scale feature maps are more attentive to information about small objects, while the small-scale feature maps focus more on information about large objects. The head regresses directly on these feature maps, predicting the center point and bounding box of each object.
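As an example of the segmentation branch, the sketch below restores a stride-8 feature map to the input resolution with three successive ×2 upsampling stages; the channel widths and the class count are assumptions rather than the paper’s exact configuration.

```python
import torch
import torch.nn as nn

class SegHeadSketch(nn.Module):
    """Illustrative segmentation head: stride-8 neck features -> full-resolution class map."""
    def __init__(self, in_channels=128, num_classes=3):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(3):   # three x2 upsamplings recover the x8 downsampling
            layers += [nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                       nn.Conv2d(c, c // 2, 3, padding=1), nn.BatchNorm2d(c // 2), nn.SiLU()]
            c //= 2
        layers.append(nn.Conv2d(c, num_classes, 1))   # one channel per segmentation class
        self.head = nn.Sequential(*layers)

    def forward(self, x):
        return self.head(x)

mask_logits = SegHeadSketch()(torch.randn(1, 128, 80, 80))   # -> 1 x 3 x 640 x 640
```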

2.4. Experiment

2.4.1. Experimental Setup

This study was carried out in PyCharm (PyTorch 2.2.1, Python 3.8, and CUDA 12.1). The CPU was an Intel(R) Core(TM) i7-12700H, and the GPU was an NVIDIA GeForce RTX 4060. The dataset was divided into training, validation, and testing sets in a 7:2:1 ratio. The model was trained from scratch, with the training parameters shown in Table 3.

2.4.2. Evaluation Indicators

To analyze the model performance, this study used two types of evaluation metrics: accuracy metrics, and resource metrics. In terms of accuracy metrics, average precision (AP), mean average precision (mAP), precision (P), and recall (R) [32] were selected as evaluation criteria to measure the model’s detection capability. In terms of resource metrics, model parameters (Param) and frames per second (FPS) were chosen as evaluation criteria to assess the feasibility of the model’s practical application. The formulas are as follows:
P = TP / (TP + FP)
R = TP / (TP + FN)
AP = ∫₀¹ P(R) dR
mAP = (1/n) Σ_{t=1}^{n} AP_t
where TP, FP, and FN represent true positives, false positives, and false negatives, respectively, and n denotes the number of object classes.
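The sketch below computes these quantities directly from TP/FP/FN counts and a sampled precision–recall curve; it is a toy illustration of the formulas, not the COCO evaluation protocol used for the reported numbers.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """P = TP / (TP + FP), R = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(precisions, recalls):
    """Numerically integrate P over R (trapezoidal rule on a recall-sorted PR curve)."""
    order = np.argsort(recalls)
    p = np.asarray(precisions)[order]
    r = np.asarray(recalls)[order]
    return float(np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2.0))

def mean_average_precision(ap_per_class):
    """mAP = mean of the per-class APs."""
    return float(np.mean(ap_per_class))

# Example: mAP over two classes (smoke, fire) with toy PR curves.
ap_smoke = average_precision([1.0, 0.9, 0.7], [0.1, 0.5, 0.9])
ap_fire = average_precision([1.0, 0.8, 0.6], [0.2, 0.4, 0.8])
print(mean_average_precision([ap_smoke, ap_fire]))
```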

2.4.3. Ablation Experiment

To verify the effectiveness of each improved module in MTL-FSFDet, five groups of ablation experiments were conducted, with each group using the same environment and training parameters. The results are shown in Table 4.
From Table 4, it can be seen that the introduction of BF-MSR improved the model’s mAP@0.5 from 80.2% to 80.8%. This is because BF-MSR enhanced the quality of low-illumination smoke images, making the smoke features clearer and thus improving the accuracy of feature extraction.
After the introduction of C2f_Hybrid, the model’s mAP@0.5 increased from 80.8% to 82.8%. This is because C2f_Hybrid can capture both global and local features, providing richer semantic information and detailed representations, which makes the model more accurate and robust in detecting smoke and fires in complex forest scenarios.
The introduction of DySample further increased the model’s mAP@0.5 from 82.8% to 83.3%. This is because DySample, which performs upsampling through point sampling, improved the feature resolution and detail-capturing ability, thereby enhancing the recognition accuracy for smoke and fires.
After the introduction of CGAFusion, the model’s mAP@0.5 rose from 83.3% to 84.2%. This is because CGAFusion synergizes spatial and channel attention, weightedly fusing low-level edge information with high-level semantic information, enabling it to more effectively capture subtle features of small smoke and fire targets in complex backgrounds.
Finally, the introduction of the multi-task head increased the model’s mAP@0.5 from 84.2% to 85.5%. This is because multi-task learning shares the feature extraction module and enhances the understanding of target edges, shapes, and details in the detection task by introducing supervision of the segmentation task, thereby improving the detection performance for small targets and complex scenarios.
The accuracy for smoke and fire is shown in Table 5.

2.4.4. Comparative Experiment

To additionally confirm the advantages of MTL-FSFDet for detecting forest smoke and fire, it was compared with several mainstream detection methods. The comparative results are presented in Table 6.
The table indicates that MTL-FSFDet outperformed the majority of the other models in the object detection task, especially in terms of mAP@0.5, precision, and recall, achieving 85.5%, 83.1%, and 77.9%, respectively. Compared to YOLOv8, it was higher by 5.3, 4.4, and 4.5 percentage points, respectively, and it outperformed the other models to varying degrees. There was a minor rise in parameter count and a slight drop in FPS, but these did not affect the model’s real-time detection performance.

2.4.5. Result Analysis

To visually showcase the effectiveness of our MTL-FSFDet model, this section selects different scenarios including tiny objects, tree obstructions, interfering images, and complex backgrounds to assess the model’s performance.
As shown in Figure 11, the original model struggled to identify the small fire targets in image (a) and the small smoke targets in image (d), and its perception range for smoke in image (a) was very limited, making it difficult to achieve comprehensive coverage.
However, the MTL-FSFDet model demonstrated excellent performance, not only accurately identifying these small fire and smoke targets but also significantly expanding the perception range for smoke. Compared to the original model, MTL-FSFDet achieved a more extensive and comprehensive smoke detection.
As shown in Figure 12, the initial model had difficulty precisely identifying the tiny fire obscured by trees in image (a), as well as the faint and sparse smoke partially hidden in image (c). These smoke and fire regions appeared blurry and indistinguishable due to the obstruction of trees, posing significant challenges for the recognition task.
However, the MTL-FSFDet model proposed in this study demonstrated remarkable recognition capabilities. It could accurately capture the subtle features of forest smoke and fire in complex environments with tree obstructions, and its recognition confidence significantly exceeded that of the initial model.
This section selects four groups of interfering images for study: one group is smoke images under cloud interference, one group is smoke images under haze interference, another group is images with sun interference, and the last group has images with maple leaf interference, with the results presented in Figure 13. In the cloud interference scenario shown in Figure 13a, because of the striking resemblance in visual attributes between clouds and smoke, the original model had difficulty accurately distinguishing between them, leading to missed detections of smoke. In the haze interference scenario shown in Figure 13d, although the original model could identify smoke, it produced duplicate detection boxes, with low confidence levels. In the sun interference image shown in Figure 13g, the sun’s rays share some visual similarities with flames, causing the original model to mistakenly identify the sun as a flame, resulting in false positives. In the maple leaf interference image shown in Figure 13j, the original model did not produce false positives.
In contrast, the MTL-FSFDet model proposed in this study demonstrated significant advantages. This model could accurately identify smoke under cloud and haze interference, and effectively distinguished between the sun, maple leaves, and flames, avoiding missed detections and false positives. These results indicate that MTL-FSFDet had a higher accuracy and robustness in handling smoke and flame recognition tasks in confusing scenarios.
Given the high complexity and variability of forest smoke and fire scenarios, where numerous hardly noticeable fire spots are often scattered and accompanied by diffuse smoke, it is particularly important to accurately identify and distinguish every instance of smoke and fire in these scenarios. As shown in Figure 14, the original model had obvious limitations when dealing with such complex environments, missing many obscured and barely visible small fire spots, and its smoke perception range was relatively limited.
In contrast, the MTL-FSFDet model demonstrated significant advantages. Even with such complex backgrounds, it could still accurately detect the obscured and inconspicuous small fire spots, and its smoke perception range was more comprehensive.

3. Results and Discussion

Existing object detection models still suffer from issues, including poor detection precision, high false positives, missed detections, and difficulties in detecting small targets in complex forest environments. In this study, we proposed a model for detecting forest smoke and fire that utilizes multi-task learning. In terms of data preprocessing, the introduction of BF-MSR led to an improvement in mAP by 0.6%. In the backbone section, the incorporation of C2f_Hybrid enhanced the mAP by 2%. In the neck part, the inclusion of Dysample and CGAFusion resulted in an increase in mAP by 1.4%. Additionally, the introduction of supervision from the segmentation task through multi-task learning contributed to a 1.3% increase in mAP. The experimental results demonstrated that the model struck an effective balance between precision and rapidity.
However, the model may make errors in certain situations. For small target detection, due to limited receptive fields or insufficient feature representation, the model may miss extremely small fire source areas. In occluded scenes, if the occlusion is severe, the model may fail to accurately identify fire sources hidden behind obstacles, because key features are concealed. In complex backgrounds, which often involve a large number of scattered fires and smoke, there is a risk of missed detections.
Moreover, since our goal is to achieve real-time forest fire detection, we need to strike a balance in the complexity of the model architecture. A more complex model may achieve higher accuracy but will increase computational requirements, which may not be suitable for resource-constrained environments such as remote forest monitoring stations.
There are still certain limitations in our research. Under extreme weather conditions, the model struggles to accurately identify target features and its performance deteriorates. In contrast, long-range thermal images can capture the thermal radiation information of objects in adverse conditions such as low light and foggy weather. Therefore, subsequent research could integrate regular images with long-range thermal images to compensate for the deficiencies of single imaging technologies, thereby improving the robustness of the model under harsh conditions.
Overall, the proposed YOLO-based forest fire detection system has a wide range of application scenarios. Compared with watchtowers, UAVs offer broader perspectives, providing an efficient and flexible application platform for the model. UAVs can be equipped with high-resolution cameras to capture clear forest images, and the model can quickly and accurately analyze these images, identifying subtle fire characteristics. Even in the early stages of a fire, when smoke and flames are not very obvious, it can still detect fire hazards in a timely manner. By fully leveraging the advantages of the model and the characteristics of UAVs, the efficiency and accuracy of forest fire monitoring can be significantly improved, providing strong support for the protection of forest resources and the stability of the ecological environment.

4. Conclusions

In summary, this study effectively addressed the limitations of traditional forest fire detection. Firstly, by proposing an improved BF-MSR method, we enhanced the color contrast and clarity of smoke contours and textures, thereby mitigating the influence of natural lighting conditions on smoke images and reducing the miss rate of detection algorithms. Secondly, through introducing a hybrid feature extraction module, we combined the local feature extraction capabilities of convolution with the global perception abilities of the self-attention mechanism, effectively overcoming the limitations of CNNs in lacking global context awareness and improving detection accuracy with complex backgrounds. Thirdly, by developing Dysample and a feature fusion approach based on CGA, we enriched feature information through point sampling, and achieved weighted fusion of low-level and high-level features, significantly enhancing the precision of detecting small smoke and fire targets. Lastly, by adopting a multi-task learning strategy that integrates detection and segmentation tasks, we leveraged the fine-grained supervision provided by the segmentation task to refine feature extraction, ultimately improving the model’s ability to discern object edges, shapes, and details, boosting detection performance in scenarios involving small objects and complex environments.

Author Contributions

Software, J.L.; Data curation, C.C.; Writing—original draft, C.Z.; Supervision, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2017YFD0600904.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Abid, F. A survey of machine learning algorithms based forest fires prediction and detection systems. Fire Technol. 2021, 57, 559–590.
2. Zheng, B.; Ciais, P.; Chevallier, F.; Chuvieco, E.; Chen, Y.; Yang, H. Increasing forest fire emissions despite the decline in global burned area. Sci. Adv. 2021, 7, eabh2646.
3. Lin, Z.; Yun, B.; Zheng, Y. LD-YOLO: A Lightweight Dynamic Forest Fire and Smoke Detection Model with Dysample and Spatial Context Awareness Module. Forests 2024, 15, 1630.
4. Benzekri, W.; El Moussati, A.; Moussaoui, O.; Berrajaa, M. Early Forest Fire Detection System Using Wireless Sensor Network and Deep Learning. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 496–503.
5. Zhang, F.; Zhao, P.; Xu, S.; Wu, Y.; Yang, X.; Zhang, Y. Integrating multiple factors to optimize watchtower deployment for wildfire detection. Sci. Total Environ. 2020, 737, 139561.
6. Lu, K.; Xu, R.; Li, J.; Lv, Y.; Lin, H.; Liu, Y. A Vision-Based Detection and Spatial Localization Scheme for Forest Fire Inspection from UAV. Forests 2022, 13, 383.
7. Zhao, Y.; Ma, J.; Li, X.; Zhang, J. Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery. Sensors 2018, 18, 712.
8. Chen, R.; Luo, Y.; Alsharif, M.R. Forest Fire Detection Algorithm Based on Digital Image. J. Softw. 2013, 8, 1897–1905.
9. Celik, T.; Demirel, H. Fire Detection in Video Sequences Using a Generic Color Model. Fire Saf. J. 2009, 44, 147–158.
10. Wang, L.; Ye, M.; Ding, J.; Zhu, Y. Hybrid Fire Detection Using Hidden Markov Model and Luminance Map. Comput. Electr. Eng. 2011, 37, 905–915.
11. Sallom, A.; Alabboud, M. Evaluating Image Segmentation as a Valid Method to Estimate Walnut Anthracnose and Blight Severity. DYSONA—Appl. Sci. 2023, 4, 1–5.
12. Lee, Y.; Shim, J. False Positive Decremented Research for Fire and Smoke Detection in Surveillance Camera Using Spatial and Temporal Features Based on Deep Learning. Electronics 2019, 8, 1167.
13. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149.
14. Yuan, F.; Zhang, L.; Wan, B.; Xia, X.; Shi, J. Convolutional Neural Networks Based on Multi-Scale Additive Merging Layers for Visual Smoke Recognition. Mach. Vis. Appl. 2019, 30, 345–358.
15. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
16. Qian, J.; Lin, J.; Bai, D.; Xu, R.; Lin, H. Omni-Dimensional Dynamic Convolution Meets Bottleneck Transformer: A Novel Improved High Accuracy Forest Fire Smoke Detection Model. Forests 2023, 14, 838.
17. VisiFire. Available online: http://signal.ee.bilkent.edu.tr/VisiFire/ (accessed on 1 July 2024).
18. BoWFire. Available online: https://bitbucket.org/gbdi/bowfire-dataset/downloads/ (accessed on 1 July 2024).
19. FireSense. Available online: https://www.kaggle.com/chrisfilo/firesense (accessed on 1 July 2024).
20. HPWREN Fire. Available online: https://cdn.hpwren.ucsd.edu/HPWREN-FIgLib-Data/index.html (accessed on 1 July 2024).
21. Bansal, S.; Bansal, R.K.; Bhardwaj, R. A Novel Low Complexity Retinex-Based Algorithm for Enhancing Low-Light Images. Multimed. Tools Appl. 2024, 83, 29485–29504.
22. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
23. Huynh-Thu, Q.; Ghanbari, M. Scope of Validity of PSNR in Image/Video Quality Assessment. Electron. Lett. 2008, 44, 800–801.
24. Zhao, Y.; Wu, S.; Wang, Y.; Chen, H.; Zhang, X.; Zhao, H. Fire Detection Algorithm Based on an Improved Strategy of YOLOv5 and Flame Threshold Segmentation. Comput. Mater. Contin. 2023, 75, 5639–5657.
25. Lou, M.; Zhou, H.Y.; Yang, S.; Yu, Y. TransXNet: Learning both global and local dynamics with a dual dynamic token mixer for visual recognition. arXiv 2023, arXiv:2310.19380.
26. Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to Upsample by Learning to Sample. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 19–25 October 2023; pp. 6004–6014.
27. Wang, Q.; Wu, B.; Zhu, P.; Li, P.H.; Zuo, W.M.; Hu, Q.H. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11531–11539.
28. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
29. Chen, Z.; He, Z.; Lu, Z.-M. DEA-Net: Single Image Dehazing Based on Detail-Enhanced Convolution and Content-Guided Attention. IEEE Trans. Image Process. 2024, 33, 1002–1015.
30. El-gayar, M.M.; Soliman, H.; Meky, N. A Comparative Study of Image Low Level Feature Extraction Algorithms. Egypt. Inform. J. 2013, 14, 175–181.
31. Lu, K.; Huang, J.; Li, J.; Zhou, J.; Chen, X.; Liu, Y. MTL-FFDET: A Multi-Task Learning-Based Model for Forest Fire Detection. Forests 2022, 13, 1448.
32. COCO Dataset. Evaluation Metrics. Available online: https://cocodataset.org/#detection-eval (accessed on 15 April 2025).
Figure 1. Representative images of the dataset: (a) smoke images only; (b) fire images only; (c) images with smoke and fire; (d) negative samples.
Figure 2. Smoke image enhancement results. (a) original image; (b) the image after MSR processing; (c) the image after BF-MSR processing.
Figure 3. The structure of YOLOv8. Note: red square represents the detection box.
Figure 4. The structure of MTL-FSFDet. Note: red square represents the detection box.
Figure 5. The structure of the hybrid feature extraction block.
Figure 6. (a) The structure of IDConv. (b) The structure of OSRA. (c) The structure of STE.
Figure 7. The structure of DySample.
Figure 8. The structure of CGA.
Figure 9. The structure of CGAFusion.
Figure 10. The structure of the multi-task head.
Figure 11. The detection results for small object scenes: (a,d) are the detection results of YOLOv8; (b,e) are the detection results of MTL-FSFDet; (c,f) are the segmentation results of MTL-FSFDet (green represents smoke, red represents fire).
Figure 12. The detection results of scenes obscured by trees: (a,d) are the detection results of YOLOv8; (b,e) are the detection results of MTL-FSFDet; (c,f) are the segmentation results of MTL-FSFDet (green represents smoke, red represents fire).
Figure 13. The detection results for interfering images: (a,d,g,j) are the detection results of YOLOv8; (b,e,h,k) are the detection results of MTL-FSFDet; (c,f,i,l) are the segmentation results of MTL-FSFDet (green represents smoke).
Figure 14. The detection results for complex backgrounds: (a,d) are the detection results of YOLOv8; (b,e) are the detection results of MTL-FSFDet; (c,f) are the segmentation results of MTL-FSFDet (green represents smoke, red represents fire).
Table 1. Dataset composition.

Type | Number of images
Smoke images only | 5121
Fire images only | 5236
Images with smoke and fire | 5076
Negative samples | 10,000
Total | 25,433
Table 2. Comparison of image enhancement methods.

Method | IE | PSNR (dB)
Original Image | 6.72 | /
MSR | 6.09 | 5.02
BF-MSR | 7.00 | 10.14
Table 3. Training parameters.

Parameter Name | Parameter Value
Epoch | 300
Batch Size | 8
Momentum | 0.937
Learning Rate | 0.01
Weight Decay | 0.0005
Optimizer | SGD
Table 4. The results of the ablation experiments.

Model | mAP@0.5 (%) | P (%) | R (%) | Param (M) | FPS
YOLOv8 | 80.2 | 78.7 | 73.4 | 3.0 | 31
+BF-MSR | 80.8 | 79.2 | 73.8 | 3.0 | 31
+C2f_Hybrid | 82.8 | 80.9 | 75.9 | 3.2 | 29
+DySample | 83.3 | 81.5 | 76.3 | 3.2 | 30
+CGAFusion | 84.2 | 82.2 | 77.2 | 3.3 | 29
+Multi-task Head | 85.5 | 83.1 | 77.9 | 4.6 | 28
Table 5. The respective accuracies for flame and smoke.

Model | Type | mAP@0.5 (%)
YOLOv8 | Smoke | 84.6
YOLOv8 | Fire | 75.7
YOLOv8 | All | 80.2
Ours | Smoke | 90.1
Ours | Fire | 80.8
Ours | All | 85.5
Table 6. The results of comparative experiments.

Model | mAP@0.5 (%) | P (%) | R (%) | Param (M) | FPS
SSD | 67.1 | 69.2 | 63.5 | 26.3 | 34
Faster R-CNN | 71.3 | 73.1 | 64.7 | 137.1 | 11
EfficientDet | 72.7 | 73.4 | 70.1 | 6.6 | 18
YOLOv5 | 77.6 | 75.9 | 72.3 | 11.7 | 29
YOLOv7-tiny | 78.4 | 76.8 | 72.6 | 7.6 | 23
YOLOv8 | 80.2 | 78.7 | 73.4 | 3.0 | 31
Ours | 85.5 | 83.1 | 77.9 | 4.6 | 28

