Article

A Lightweight Wildfire Detection Method for Transmission Line Perimeters

School of Electrical Engineering and Electronic Information, Xihua University, Chengdu 610039, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(16), 3170; https://doi.org/10.3390/electronics13163170
Submission received: 5 July 2024 / Revised: 31 July 2024 / Accepted: 7 August 2024 / Published: 11 August 2024
(This article belongs to the Section Artificial Intelligence)

Abstract

Due to extreme weather conditions and complex geographical features, the environments around power lines in forest areas have a high risk of wildfires. Once a wildfire occurs, it causes severe damage to the forest ecosystem. Monitoring wildfires around power lines in forested regions through deep learning can reduce the harm of wildfires to natural environments. To address the challenges of wildfire detection around power lines in forested areas, such as interference from complex environments, difficulty detecting small target objects, and high model complexity, a lightweight wildfire detection model based on the improved YOLOv8 is proposed. Firstly, we enhanced the image-feature-extraction capability using a novel feature-extraction network, Ghost-HGNetV2, and replaced the conventional convolutions with Ghost Convolutions (GhostConv) to reduce the model parameters. Secondly, the use of the RepViTBlock to replace the original Bottleneck in C2f enhanced the model's feature-fusion capability, thereby improving the recognition accuracy for small target objects. Lastly, we designed a Resource-friendly Convolutional Detection Head (RCDetection), which reduces the model complexity while maintaining accuracy by sharing the parameters. The model's performance was validated using a dataset of 11,280 images created by merging a custom dataset with the D-Fire dataset for monitoring wildfires near power lines. In comparison to YOLOv8, our model saw an improvement of 3.1% in the recall rate and 1.1% in the average precision. Simultaneously, the number of parameters and computational complexity decreased by 54.86% and 39.16%, respectively. The model is more appropriate for deployment on edge devices with limited computational power.

1. Introduction

The “Camp Fire” in Northern California, USA, in 2018 lasted more than two weeks and caused significant economic damage. The fire was ignited by power lines, setting the surrounding vegetation ablaze. In 2020, a major forest fire broke out in Xichang City, Sichuan Province, China. The direct cause of the fire was electric poles discharging due to the force of the wind in a specific direction, sparking nearby weeds and shrubs. This incident affected 3047 hectares of land, including 791 hectares of forest area, and led to serious economic losses. Therefore, real-time monitoring of areas around power lines can quickly detect fire sources at the initial stage of wildfires, reducing the damage to natural environments. At the onset of a wildfire, smoke typically appears first, usually presenting a low temperature, rendering traditional temperature sensors potentially ineffective. In vast outdoor areas, manual patrols and monitoring with various sensors can be challenging to implement. Image-processing methods have shown essential application prospects in monitoring wildfires around transmission lines due to their strong feature-extraction capabilities, wide deployment range, minimal environmental interference, and higher accuracy [1].
Early image-processing methods typically employed image-processing algorithms and motion-based smoke-estimation techniques for detecting flames and smoke. In [2], Gao et al. proposed a forest-fire-smoke-detection method based on a diffusion model, recognizing the shape of the smoke in its generation phase using color-based, dynamic-region, and simulated-smoke-matching algorithms. In [3], Wu et al. proposed an improved robust AdaBoost (RAB) classifier. This improvement involved extracting the features of the flames and smoke from different videos for training and testing. The objective was to enhance the training and classification accuracy for smoke and fire detection.
However, early image-processing methods relied on manual feature extraction, lacked robustness, and struggled to maintain high accuracy in complex forest environments. Deep learning possesses the capability to automatically extract features, and recent studies have employed deep learning methods in wildfire detection [4]. In [5], Zhao et al. investigated the application of EfficientNet [6] for extracting features from the input images. They integrated Squeeze-and-Excitation (SE) modules [7] to enhance the model's focus on texture features, consequently improving the detection accuracy of flames and smoke. However, the detection accuracy was low in complex environments with occlusions. In [8], Zhou et al. studied improving the YOLOv5 model by employing MobileNetV3 as its foundational structure. They further trained this enhanced model using Semi-Supervised Learning with Knowledge Distillation (SSLD). This approach achieved a lightweight model with reduced parameters, but there remained room for improvement in detecting occluded objects and increasing the overall detection accuracy of the model. In [9], Dou et al. explored ways to improve the YOLOv5s network. They achieved this by integrating the Convolutional Block Attention Module (CBAM) [10] and substituting PANet [11] with a Bidirectional Feature Pyramid Network (BiFPN) [12]. These enhancements led to better detection accuracy. However, the model still did not detect small target fires and smoke optimally. In [13], Yu et al. combined the ConvNeXt [14] and CSPNet [15] structures to design a new backbone that used large kernel convolution to capture surrounding features. This method improved the network precision without affecting the inference speed and reduced the network parameters, but small-object detection was not thoroughly explored. In [16], Chen et al. investigated enhancements to PP-YOLO by incorporating FPN and the CBAM, aiming to boost the detection accuracy. This reduced the model's false positives and missed detections, but increased its complexity and inference time. In [17], Jin et al. proposed a method combining self-attention mechanisms and radial multi-scale feature connections. This method enhances the semantic and positional information in feature fusion by integrating multi-scale feature information extracted from the network. The algorithm can more effectively handle scale variations and feature-extraction challenges in smoke detection through this approach. However, the algorithm's complexity requires considerable computational resources, especially on resource-limited devices.
Therefore, effectively identifying smoke and flames that are small in size and lacking distinctive features and distinguishing them from similar natural phenomena remains challenging in wildfire-monitoring tasks. A deep learning model with lower complexity and higher accuracy is proposed to address this. The model can accurately detect flames and smoke in complex forest environments, while significantly reducing the computational complexity of the model. The main contributions of this paper are as follows:
  • We propose Ghost-HGNetV2, a feature-extraction network based on HGNetV2, incorporating Ghost Convolution (GhostConv) to replace the standard convolution blocks. This modification retains robust feature-extraction capabilities, while reducing the model parameters and computational load. The Ghost-HGBlock facilitates cross-scale fusion for feature processing, enhancing the network's ability to capture and fuse information across different scales, thereby improving feature representation.
  • We replaced the Bottleneck structure in the C2f module with the RepViTBlock. This updated structure enhances the model’s global representation learning capability by introducing a lightweight attention mechanism, which is beneficial for detecting small target objects.
  • We propose a lightweight detection head called the Resource-friendly Convolutional Detection Head (RCDetection). By sharing the parameters, this detection head reduces the model complexity and computational load, while maintaining accuracy.

2. Methods and Materials

In this section, we elaborate on the proposed deep learning model, which includes a backbone with more robust feature-extraction capabilities named Ghost-HGNetV2. The model also integrates a new feature extraction module, C2f-RVT, and is complemented by a shared parameter detection head. Finally, we describe the datasets used for model training.

2.1. The Proposed Network Architecture

To enhance detection accuracy and reduce model complexity, this paper improves upon YOLOv8. Figure 1 illustrates our network model used for monitoring wildfires around power lines, with the red-bordered section indicating the improved parts. The improved model utilizes Ghost-HGNetV2 as the backbone network. To mitigate the dilution of small target object features during model feature fusion, the improved model utilizes the C2f-RVT structure in the neck. This structure incorporates the RepViTBlock [18], which possesses global-information-processing capabilities, to boost the model’s performance in feature fusion and the recognition of small objects. Finally, we designed a lightweight detection head, which reduces the model’s parameter count and computational complexity through parameter sharing. Additionally, this forces the model to learn more universal feature representations, thereby improving its generalization ability.

2.2. Ghost-HGNetV2

The YOLOv8 backbone network extracts features at different scales through convolution and the C2f structure. However, the traditional stacking of convolutions and C2f incurs a significant computational and parameter burden. We used Ghost-HGNetV2 as the backbone network of our model to reduce the number of parameters and the complexity of the backbone network and enhance the network’s feature-extraction capability.
Ghost-HGNetV2 is an improvement over HGNetV2 [19]. The model structure is illustrated in Figure 2. The Stem performs the initial downsampling of the image; the Ghost-HGBlock performs the feature extraction; and Depthwise Convolution (DWConv) [20] is employed for downsampling within the network.
Ordinary convolution uses multi-channel convolution kernels to extract the spatial features of images, which increases the parameter count and the computational complexity when the input feature dimension is large. In a depthwise convolution, each output channel is produced by a single convolution kernel applied to one input channel. Therefore, downsampling with DWConv reduces both the number of parameters and the complexity, as illustrated below.
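To make the parameter saving concrete, the following sketch (a minimal illustration, not taken from the authors' code) compares the parameter counts of a standard 3 × 3 convolution and a depthwise 3 × 3 convolution in PyTorch; the channel sizes are arbitrary examples.

```python
import torch.nn as nn

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

c_in, c_out = 256, 256
# Standard 3x3 convolution: every output channel mixes all input channels.
standard = nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1)
# Depthwise 3x3 convolution: each channel is filtered independently (groups = c_in).
depthwise = nn.Conv2d(c_in, c_in, kernel_size=3, stride=2, padding=1, groups=c_in)

print(count_params(standard))   # 256*256*3*3 + 256 = 590,080
print(count_params(depthwise))  # 256*1*3*3   + 256 =   2,560
```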
The Ghost-HGBlock employs multiple GhostConvs [21] to extract feature information at different dimensions. This multi-dimensional feature information is concatenated and compressed by a 1 × 1 convolution layer. This squeeze-and-excitation-style operation recalibrates the weight of each channel in the feature map, effectively integrating features from lower and higher levels, which simultaneously enhances the model's capacity for feature extraction and strengthens its understanding of contextual information. The specific structure of the Ghost-HGBlock is shown in Figure 3, and a simplified sketch of GhostConv is given below.
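As a reference for how GhostConv trims parameters, the sketch below follows the GhostNet idea of generating half of the output channels with a cheap depthwise operation; it is a simplified illustration under assumed kernel sizes and activations, not the exact module used in the Ghost-HGBlock.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Simplified GhostConv: a primary 1x1 conv produces half the output channels,
    and a cheap depthwise conv generates the remaining 'ghost' features."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        c_mid = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_mid, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_mid), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_mid, c_mid, 5, 1, 2, groups=c_mid, bias=False),
            nn.BatchNorm2d(c_mid), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        # Concatenate primary features with their cheaply generated counterparts.
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 64, 80, 80)
print(GhostConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```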
The improved Ghost-HGNetV2 utilizes multiple distinct Ghost-HGBlocks to capture diverse features. This architecture effectively integrates features from both lower and higher levels, enhancing the understanding of contextual information and improving detection accuracy through a more sophisticated feature-extraction mechanism. Incorporating this structure into YOLOv8 allows for an increase in model accuracy while reducing model complexity.

2.3. C2f-RVT

In object-detection models, low-level features often contain abundant information about smaller objects. However, during the feature-fusion process, the fusion of low-level and high-level features can result in the dilution of small object information. This effect is especially noticeable when the high-level features have a larger weight. To address this issue, the C2f-RVT structure has been proposed, incorporating the RepViTBlock [18] into the C2f architecture to enhance the model’s overall perception capability. This allows for more accurate recognition of small target objects with limited information. The structural comparison of C2f and C2f-RVT is shown in Figure 4.
The input feature map undergoes initial processing through a Conv layer. Subsequently, the features are split into two branches: one branch directly enters the next RepViTBlock for processing, while the other branch is connected to the subsequent layer. This branching design aids the model in capturing information from different perspectives. Features processed by the RepViTBlock are then merged with features from preceding layers, enhancing feature correlation through residual connections to improve accuracy and efficiency. The merged feature map undergoes further refinement through a second Conv layer, producing higher order feature representations. The RepViTBlock exhibits high performance in feature extraction and global-information-processing capabilities.
After the input feature map enters the RepViTBlock, spatial information extraction is performed through a 3 × 3 depthwise convolution. Then, the SE module is applied to adjust the feature response between channels and enhance the network's focus on valuable features. Next, the output features pass through two 1 × 1 residual structures to increase the depth of the feature map while maintaining consistency with the input size, ultimately producing the output feature map. By using a single 3 × 3 depthwise convolution and two 1 × 1 convolutions, the RepViTBlock effectively simplifies the computational complexity compared to the dual 3 × 3 convolutions in the Bottleneck structure of C2f. Meanwhile, the SE attention mechanism combines local feature extraction and global feature abstraction, enhancing feature-extraction performance. Therefore, the RepViTBlock achieves excellent performance under lightweight conditions. A structural sketch is given below.
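The following is a minimal sketch of the block described above (3 × 3 depthwise convolution, SE recalibration, and a 1 × 1 channel-mixing residual branch). It illustrates the structure only; the expansion ratio and activations are assumptions, and the re-parameterization tricks of the original RepViT are omitted.

```python
import torch
import torch.nn as nn

class SE(nn.Module):
    """Squeeze-and-excitation channel recalibration."""
    def __init__(self, c, r=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // r, 1), nn.ReLU(),
            nn.Conv2d(c // r, c, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)

class RepViTLikeBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        # Token mixer: 3x3 depthwise conv + SE, wrapped in a residual connection.
        self.dw = nn.Sequential(
            nn.Conv2d(c, c, 3, 1, 1, groups=c, bias=False),
            nn.BatchNorm2d(c))
        self.se = SE(c)
        # Channel mixer: two 1x1 convolutions (pointwise FFN), also residual.
        self.ffn = nn.Sequential(
            nn.Conv2d(c, 2 * c, 1, bias=False), nn.BatchNorm2d(2 * c), nn.GELU(),
            nn.Conv2d(2 * c, c, 1, bias=False), nn.BatchNorm2d(c))

    def forward(self, x):
        x = x + self.se(self.dw(x))
        return x + self.ffn(x)

print(RepViTLikeBlock(128)(torch.randn(1, 128, 40, 40)).shape)  # (1, 128, 40, 40)
```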
The global abstraction capability of C2f-RVT significantly enhances the feature fusion efficiency in the model’s neck region, thereby improving the perception capability for small target objects. Its lightweight design also reduces the complexity of the model.

2.4. RCDetection

In YOLOv8s, the detection head processes the target position and class information in two separate branches, which learn and then predict the position and class, respectively. However, the detection head accounts for 19.24% of the entire model's parameters and 29.09% of its floating-point operations. Each feature map output from the neck is fed into the detection head and passed through two 3 × 3 convolutions, which quickly accumulates parameters and computational load. With three such detection heads, the parameter volume and computational cost of the detection heads are substantial.
In the YOLOv8 model, the two branches of the detection head have the same structure; therefore, constructing a parameter-sharing method can effectively reduce the model’s parameter count. We propose a lightweight detection head, the Resource-friendly Convolutional Detection Head (RCDetection), which reduces the model’s parameter count and computational complexity through parameter sharing.
In RCDetection, Conv_GN [22] is utilized for feature extraction, Conv_Reg is used to extract bounding box information, and Conv_Cls is used to extract classification information. P3, P4, and P5 share Conv_GN, Conv_Reg, and Conv_Cls for their respective feature-extraction tasks. A zoom layer is employed to scale the extracted bounding box information to address the issue of different predicted bounding box sizes from the various output layers. After passing through the zoom layer, the feature information is predicted as small, medium, and large bounding boxes of different scales. Parameter sharing also aids in improving the model's generalization ability by forcing it to learn more universal feature representations. The detection head network structure is shown in Figure 5.
Conv_GN performs normalization by dividing the channels into groups and normalizing each group using its own mean and variance. This resolves the issue of rapidly increasing errors in batch normalization when the batch size decreases, ensuring stable performance even with smaller batch sizes. Introducing this convolution into the detection head effectively enhances detection accuracy, enabling RCDetection to achieve stable accuracy while reducing the parameters and complexity of the detection head. A minimal sketch of this shared-head design follows.
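To make the parameter-sharing idea concrete, the sketch below shows a shared head in which one GroupNorm convolution stack and one pair of regression/classification convolutions are reused across the P3-P5 levels, with a per-level learnable scale (the "zoom layer") applied to the box branch. The channel sizes, number of classes, and DFL-style box encoding are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

def conv_gn(c_in, c_out, k=3):
    # Convolution followed by GroupNorm: stable even with small batch sizes.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, 1, k // 2, bias=False),
        nn.GroupNorm(16, c_out),
        nn.SiLU())

class SharedHead(nn.Module):
    def __init__(self, channels=(128, 256, 512), c_hidden=128, num_classes=2, reg_max=16):
        super().__init__()
        # 1x1 convs align each level (P3, P4, P5) to a common width.
        self.align = nn.ModuleList(nn.Conv2d(c, c_hidden, 1) for c in channels)
        # These layers are shared by all three levels.
        self.stem = nn.Sequential(conv_gn(c_hidden, c_hidden), conv_gn(c_hidden, c_hidden))
        self.conv_reg = nn.Conv2d(c_hidden, 4 * reg_max, 1)   # bounding-box branch
        self.conv_cls = nn.Conv2d(c_hidden, num_classes, 1)   # classification branch
        # Per-level learnable scale ("zoom layer") for the box predictions.
        self.scales = nn.ParameterList(nn.Parameter(torch.ones(1)) for _ in channels)

    def forward(self, feats):
        outputs = []
        for x, align, scale in zip(feats, self.align, self.scales):
            f = self.stem(align(x))
            outputs.append((scale * self.conv_reg(f), self.conv_cls(f)))
        return outputs

feats = [torch.randn(1, c, s, s) for c, s in zip((128, 256, 512), (80, 40, 20))]
for box, cls in SharedHead()(feats):
    print(box.shape, cls.shape)
```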

2.5. Dataset

Due to the relatively limited research materials on forest fires near power lines and the lack of readily available datasets for researchers to use, this study employed web crawling techniques to collect images of power lines scattered across forested areas. These images were then merged with the D-Fire dataset [23] created by Pedro Vinícius Almeida Borges de Venâncio and his team to construct the new DW-Fire dataset.
Utilizing images extracted from videos as the data source can enhance the capture of the dynamic characteristics of flames and smoke. Additionally, video images encompass variations in lighting, weather, and background conditions. These variations aid the model in adapting to detection tasks under various environmental circumstances. The D-Fire dataset includes surveillance videos and simulated fire scenes of different resolutions. These data are primarily sourced from authentic surveillance images, enabling more precise simulations and comprehension of forest fires. However, the D-Fire dataset contains many similar surveillance images following the same naming convention, which could lead to model overfitting and a lack of diversity. Consequently, the evaluation metrics of the model might be artificially inflated, rendering them meaningless. To mitigate the risk of overfitting, we leveraged the naming convention to group similar images and randomly preserved only one image out of every five in each group, thereby filtering out near-duplicate images (a sketch of this filtering step follows). Combining the filtered images with our collected data created a more practical dataset called DW-Fire. DW-Fire comprises a total of 11,280 images, including 1897 surveillance-scene images, of which 1558 are labeled. Surveillance-scene images amount to only 16.8% of the total dataset. This approach eliminates the risk of poor model generalization due to an excess of similar surveillance samples and the potential for the model to develop a preference for specific features. It also avoids the issue of inflated detection performance metrics caused by sample similarity.
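The similar-image filtering described above can be reproduced with a short script along the following lines. This is only a sketch: the grouping key and directory layout are assumptions, since the D-Fire naming convention is described only informally here. Images are grouped by a shared name prefix and one image in every five is kept per group.

```python
import random
import re
from collections import defaultdict
from pathlib import Path

random.seed(0)
src = Path("d_fire/images")            # assumed directory layout
groups = defaultdict(list)

for img in sorted(src.glob("*.jpg")):
    # Assumed grouping key: the file name without its trailing frame index,
    # e.g. "cam03_0017.jpg" -> "cam03".
    key = re.sub(r"_\d+$", "", img.stem)
    groups[key].append(img)

kept = []
for key, imgs in groups.items():
    # Keep roughly one image out of every five similar frames in each group.
    for chunk_start in range(0, len(imgs), 5):
        chunk = imgs[chunk_start:chunk_start + 5]
        kept.append(random.choice(chunk))

print(f"kept {len(kept)} of {sum(len(v) for v in groups.values())} images")
```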
The DW-Fire dataset contains two types of labels: flame and smoke. Additionally, some unlabeled samples have been included in the data to enhance the model’s generalization capability. The distribution of the labels in the dataset is shown in Table 1.
The sun and clouds are common natural phenomena in forested areas. Thus, the sun, which has similar characteristics to flames, and clouds, resembling smoke, were added to the dataset as disturbance samples to enhance the model’s generalization ability. Additionally, some images from the early stages of the fires were included in the dataset. Figure 6 displays typical scenes from the dataset.
To effectively evaluate the performance of the dataset, data visualization techniques were used to analyze the label information in the dataset statistically. The results of the data analysis are shown in Figure 7. The visualization results indicate a higher concentration of labels in the center region of the images, and most of these labels are smaller than 1% of the original image size. The label sizes in the dataset correspond to the actual fire observations, demonstrating that the dataset has good performance and practical value for power line fire detection.
In this study, we labeled the images in the dataset using LabelImg 1.8.6, and the labeled data were stored in .txt files. Subsequently, we split the entire dataset into training, validation, and test sets at an 8:1:1 ratio for the various experimental verifications (a minimal split script is sketched below).
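A minimal sketch of the 8:1:1 split into training, validation, and test subsets is shown below; the directory names and list-file format are assumptions, chosen to match the YOLO convention of one image path per line.

```python
import random
from pathlib import Path

random.seed(0)
images = sorted(Path("dw_fire/images").glob("*.jpg"))  # assumed location
random.shuffle(images)

n = len(images)
n_train, n_val = int(0.8 * n), int(0.1 * n)
splits = {
    "train": images[:n_train],
    "val":   images[n_train:n_train + n_val],
    "test":  images[n_train + n_val:],
}

for name, files in splits.items():
    # YOLO-style split lists: one image path per line.
    Path(f"dw_fire/{name}.txt").write_text("\n".join(str(p) for p in files))
    print(name, len(files))
```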

3. Experiment and Result Analysis

This section elaborates on the deep learning experimental environment, training parameters, and evaluation metrics. We also compare the experimental results of the proposed method with mainstream object detection algorithms. The superiority of the proposed method is explained from multiple perspectives, including the recall rate, precision rate, and time efficiency. Finally, by comparing the computational complexity with the original model YOLOv8 and visualizing the detection effects, we demonstrate that the proposed method can better detect wildfires around power lines.

3.1. Experimental Environment

The experimental setup includes a system equipped with an Intel Core i9-10900X 3.70 GHz CPU, NVIDIA GeForce RTX 3090 24 GB GPU, and 64 GB DDR4 2933 MHz RAM, running on the Windows 10 Professional operating system. We utilized the PyTorch 1.12.1 framework for building and training the neural network models, with CUDA version 11.6 and Python version 3.8.15. The experimental training parameters are listed in Table 2.
The experiment was conducted over 300 epochs to ensure the model could adequately learn the critical features in the data and achieve a more stable convergence state. Additionally, the batch size was configured to 32, allowing the model to average the noise of individual samples in each training round through a larger batch size, thereby obtaining more stable gradient updates. This setting effectively reduced fluctuations during training and encouraged the model to focus more on patterns commonly found in the data rather than the specific characteristics of individual samples, thus reducing the risk of overfitting. The input images were set to a high resolution of 640 × 640 to preserve more detail in the images. This enables the network to extract richer and more complex features at deeper layers, which is particularly important for detecting smaller objects or identifying targets in complex scenes. Simultaneously, an input size of 640 × 640 pixels assists the model in detecting large and small objects, enhancing its adaptability to objects of different scales. In deep learning, the setting of the learning rate is crucial as it determines the magnitude of parameter updates. An appropriate learning rate balances optimization speed and the stability of the training process. Consequently, the learning rate was set to 0.01 to achieve this balance. The Stochastic Gradient Descent (SGD) optimization algorithm was chosen as the optimizer due to its low computational cost, fast convergence speed, ability to avoid local minima, and suitability for large-scale datasets. The hyperparameters mentioned above were selected in the experiment to ensure that the model achieved optimal performance and generalization capability.
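Assuming the Ultralytics YOLOv8 training interface, the settings in Table 2 correspond to a call along the following lines; the model YAML and dataset YAML names are placeholders for our modified architecture and the DW-Fire split, not files shipped with the library.

```python
from ultralytics import YOLO

# Placeholder YAMLs: the improved architecture and the DW-Fire dataset description.
model = YOLO("yolov8s-ghost-hgnetv2-rcd.yaml")

model.train(
    data="dw_fire.yaml",   # paths to the 8:1:1 train/val/test split and class names
    epochs=300,
    batch=32,
    imgsz=640,
    optimizer="SGD",
    lr0=0.01,
    pretrained=False,      # no pre-trained weights, as in Table 2
)
metrics = model.val()       # P, R, and mAP@50 on the validation split
```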

3.2. Model Evaluation and Experimental Steps

The detection accuracy of the model was evaluated by the recall (R), precision (P), and mean average precision (mAP). We also evaluated the model's complexity and detection speed using the number of parameters, floating-point operations (FLOPS), and frames per second (FPS). In the context of this experiment, TP represents the number of samples labeled as fire and correctly predicted by the model as fire. FP refers to the number of samples labeled as non-fire, but incorrectly predicted by the model as fire, i.e., the number of false detections. FN refers to the number of samples labeled as fire, but predicted by the model as non-fire, i.e., the number of missed detections.
Precision, which reflects the accuracy of a fire-detection system, indicates that a higher P-value corresponds to a higher correctness of the model in detecting fires. The specific calculation is as shown in Equation (1).
P = \frac{TP}{TP + FP}  (1)
Recall (R): This reflects the model’s ability to correctly detect actual fire occurrences. A higher R implies a lower probability of false alarms and missed detections, as expressed by Formula (2).
R = \frac{TP}{TP + FN}  (2)
AP refers to the area under the precision–recall curve, describing the model’s overall accuracy in detecting fires. mAP is the average of the AP values across multiple categories, as defined by Formula (3).
mAP = \frac{1}{N} \sum_{i=1}^{N} \int_{0}^{1} P_i(R) \, dR  (3)
The mAP@50 is the average precision over all categories at an Intersection over Union (IoU) threshold of 0.5.
The complexity of the model is measured by the number of parameters and the FLOPs, where FLOPs denotes the number of floating-point operations and 1 GFLOP = 10^9 FLOPs. The FPS indicates the number of frames the model can process per second and is used to evaluate the model's speed.
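For reference, the sketch below computes the precision and recall defined in Equations (1) and (2) from raw TP/FP/FN counts and estimates FPS by timing the forward pass; it is a generic illustration with made-up counts, not the evaluation code used in the experiments.

```python
import time
import torch

def precision_recall(tp: int, fp: int, fn: int):
    p = tp / (tp + fp) if tp + fp else 0.0   # Equation (1)
    r = tp / (tp + fn) if tp + fn else 0.0   # Equation (2)
    return p, r

def measure_fps(model, imgsz=640, n_warmup=10, n_runs=100, device="cuda"):
    model.eval().to(device)
    x = torch.randn(1, 3, imgsz, imgsz, device=device)
    with torch.no_grad():
        for _ in range(n_warmup):            # warm-up passes are excluded from timing
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_runs / (time.perf_counter() - start)

print(precision_recall(tp=778, fp=115, fn=222))  # illustrative counts only
```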
We evaluated the trained model using the metrics described above. The specific training and validation steps of the deep learning model are shown in Algorithm 1.
Algorithm 1 Steps for training and validation of the object-detection model
1: Initialization: ϵ = detection model with random weights; hyper-parameters = {image size = (640, 640, 3), epochs = 300, batch size = 32, pre-training weights = None, optimizer = SGD with learning rate = 0.01, validation = per epoch}
2: Create data loaders for D_train, D_val, and D_test
3: Evaluation metrics: [P, R, mAP@50, Parameters (M), FLOPS (G), FPS]
4: Initialize detection model training
5: for iteration ∈ [1, epoch] do
6:    Feed D_train and D_val to the detection model
7:    Split the batch into images X and labels Y
8:    Pass X through the detection model to obtain the output tensor Z
9:    Compute the loss L by applying the loss function to the output tensor Z and the label tensor Y
10:   Apply dropout with rate D to the gradients
11:   Update the model parameters using the SGD optimizer
12:   Save the detection model as β
13: end for
14: Load β for model testing
15: Initialize empty lists for predictions P and ground-truth labels T
16: for batch ∈ D_test do
17:   Split the batch into images X and labels Y
18:   Pass X through β to obtain the output tensor Z
19:   Append the output tensor Z and the label tensor Y to the lists P and T, respectively
20: end for
21: Calculate [P, R, mAP@50, Parameters (M), FLOPS (G), FPS] with respect to P and T

3.3. Experiment Result Analysis

In this experiment, we compared our method with mainstream object-detection algorithms, as well as with two recently published wildfire-detection methods from the literature [24,25]. All methods were trained on the DW-Fire dataset with the experimental parameter settings shown in Table 2. Precision, recall, and the mAP@50 were used to evaluate the detection accuracy of the models, while the parameters, FLOPS, and FPS were used to assess the model complexity and speed. The results of all methods on the validation set are shown in Table 3.
Faster R-CNN [26] and SSD [27] both utilize ResNet-50 [28] as the backbone for feature extraction. Faster R-CNN has the highest number of model parameters and computational complexity, resulting in the slowest inference speed. SSD performs moderately in terms of model parameters and computational complexity, but its inference speed and accuracy leave room for improvement. The detection precision, recall, and mAP@50 of YOLOv7-tiny [29] are relatively low. The precision of YOLOv10 is lower than that of YOLOv8, which implies that YOLOv10 generates more false positives. YOLOv5 uses a coupled head to predict the class and location of objects simultaneously, leading to lower detection accuracy than YOLOv8s. YOLOv8s uses a decoupled head strategy to process location and classification information independently, improving detection accuracy. This design is more suitable for complex scenarios such as wildfire detection. Therefore, choosing YOLOv8s as the base model to maintain high detection accuracy while reducing model complexity is justified.
The method in [24] uses YOLOv8s as the base model and improves the object-detection accuracy by employing a parameter-free attention mechanism (SimAM) [30]. Although the model parameters decreased, the accuracy and recall rate were lower than those of the original model. The method in [25] improves the YOLOv8s model by incorporating Partial Convolution (PConv) [31] and the Exponential Moving Average (EMA) [32] attention mechanism into the C2f module of the backbone. It utilizes the Slim-Neck [33] module to reduce neck complexity and employs a dynamic head (dy-head) [34] in the detection head to enhance precision. This method reduces the model complexity and improves the recall rate, but the dy-head increases the inference time, leading to slower inference. Moreover, this model has lower detection accuracy and higher complexity compared to our model.
The experimental results from Table 3 indicate that, except for Faster R-CNN, the recall rate is lower than the precision rate in the other experiment groups. This suggests that most models misclassify a higher proportion of positive samples as negatives. This means the number of false negatives exceeds the number of false positives. The higher rate of false negatives is primarily due to the dataset containing a variety of complex scenes with a more significant proportion of tiny target positive samples, making it challenging for the model to identify all positive cases. Furthermore, the irregular shapes of smoke and flames cause significant variations in the appearance of positive samples, making it difficult for the model to adapt to this diversity fully. As a result, the model struggles to identify all positive samples, leading to increased false negatives.
Regarding time efficiency, Faster R-CNN (ResNet-50) has the slowest inference speed due to its much higher model complexity compared to the other models. YOLOv7-tiny, with its lower computational demand and fewer model layers, performs best in inference speed. YOLOv10 adopts an NMS-free strategy, which also results in a faster inference speed. YOLOv5s and YOLOv8s have similar inference speeds because they share a similar structure, differing only in the detection head and feature-extraction modules. The model in [24] experiences a decrease in inference speed relative to YOLOv8s due to the introduction of the SimAM attention mechanism, which complicates the inference process. Meanwhile, the model in [25] sees a significant increase in inference time due to the dynamic design of its detection head, lowering its frame rate. The inference speed of our proposed model is slightly lower than that of YOLOv8s because the Ghost-HGNetV2 backbone increases the depth of the network. However, the model still achieves 111.3 FPS, well above the 30 FPS required for real-time monitoring, and thus demonstrates good time efficiency.
Our model demonstrates excellent recall rate performance, indicating fewer missed and false positives. It also achieves the highest scores in precision and the mAP@50, showcasing its exceptional detection effectiveness. Additionally, the model has the lowest parameter count and performs superbly in computational complexity and FPS.

3.4. Model Complexity Analysis

To investigate the improvement in computational complexity, we compared the computational cost of the original YOLOv8s model and our proposed model. Figure 8 presents the corresponding statistics. We tallied the computation cost of each layer in both models and categorized it by the output scale of each layer, with C1, C2, C3, C4, and C5 corresponding to the input image downsampled 1, 2, 3, 4, and 5 times, respectively. The output feature map sizes for P3, P4, and P5 are 80 × 80, 40 × 40, and 20 × 20, respectively. The backbone network adopts GhostConv for feature extraction and DWConv for downsampling, reducing the computational complexity of the backbone by 5.5 G. The use of the RepViTBlock in the feature-fusion part reduces the computational complexity by 3 G. Finally, using shared parameters in the detection head reduces it by a further 2.8 G. The improved model therefore has lower computational complexity, enabling it to operate on resource-constrained edge devices. A rough per-layer accounting of this kind can be reproduced as sketched below.
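The per-layer accounting in Figure 8 can be approximated with forward hooks that tally the multiply-accumulate cost of each convolution. The sketch below is a rough illustration, not the exact profiling used for the figure: it records approximate GFLOPs per top-level module of a model whose forward pass accepts a single image tensor.

```python
import torch
import torch.nn as nn
from collections import defaultdict

def profile_conv_gflops(model: nn.Module, imgsz=640):
    """Rough per-module GFLOPs tally: 2 * MACs of every Conv2d, grouped by
    the top-level module name (e.g., stages of the backbone, neck, and head)."""
    costs, hooks = defaultdict(float), []

    def make_hook(top_name):
        def hook(module, inputs, output):
            k_h, k_w = module.kernel_size
            c_in = module.in_channels // module.groups
            out_elems = output.numel()                    # N * C_out * H_out * W_out
            macs = out_elems * c_in * k_h * k_w
            costs[top_name] += 2 * macs / 1e9             # GFLOPs
        return hook

    for top_name, top_module in model.named_children():
        for m in top_module.modules():
            if isinstance(m, nn.Conv2d):
                hooks.append(m.register_forward_hook(make_hook(top_name)))

    with torch.no_grad():
        model(torch.randn(1, 3, imgsz, imgsz))
    for h in hooks:
        h.remove()
    return dict(costs)
```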

3.5. Analysis of the Model’s Detection Capabilities

In order to compare the detection performance differences between the method proposed in this paper and other advanced wildfire-detection models, we selected some pictures in the test set for prediction. These images were input into the trained model, and the predicted results are shown in Figure 9.
In Figure 9a, the methods of References [24,25] incorrectly detected smoke as flames. The reason for the false detection is that, during different stages of a fire, flames and smoke exhibit significantly different characteristics, causing a large variation in the appearance of positive samples. These models find it difficult to adapt to this diversity, leading to the misidentification of smoke as flames. In Figure 9b, References [24,25] experienced missed detections. In Figure 9c, our model successfully detected more small target flames with higher confidence. Therefore, compared to advanced wildfire-detection algorithms, the method proposed in this paper exhibits superior detection capabilities.
To thoroughly assess the effectiveness of our proposed detection method, we chose a number of challenging scenarios. In these scenarios, we compared our method with the baseline YOLOv8. The detection results in complex environmental backgrounds are shown in Figure 10 and Figure 11, while the detection results for small target objects are shown in Figure 12.
The comparison of the experimental results shows that the improved model demonstrates a higher discriminative ability in distinguishing objects such as the Sun and smoke, which have features similar to fire and smoke. In conditions with insufficient lighting, the original model exhibits cases of both false positives and false negatives. In the detection results for small target objects, the original model mistakenly detects light sources with features similar to fire as small targets. The improved model demonstrates better detection capabilities in complex environments with low lighting and interference. It also shows higher confidence in detecting small target objects.
It is difficult to demonstrate the superior detection capability of the proposed method over YOLOv8 with only a partial set of detection images. To further showcase the detection abilities of the improved model, we conducted a heatmap analysis using the XGradCAM method proposed in [35]. In the heatmap, areas highlighted in red indicate that the corresponding regions have higher importance when determining specific categories. The model outputs objects of different scales through three detection heads of varying sizes; hence, a heatmap analysis was performed for each detection head.
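For reference, the sketch below shows how the XGradCAM implementation from the open-source pytorch-grad-cam package is typically invoked. The target layer is a placeholder; in practice it would be the layer feeding one of the 80 × 80, 40 × 40, or 20 × 20 heads, and applying CAM to a detector additionally requires wrapping the model so its output reduces to a scalar score, which is omitted here.

```python
import cv2
import numpy as np
import torch
from pytorch_grad_cam import XGradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

# `model` is a trained network and `target_layer` is a placeholder for the
# feature layer that feeds one of the three detection heads.
def head_heatmap(model, target_layer, image_path, imgsz=640):
    bgr = cv2.imread(image_path)
    rgb = cv2.cvtColor(cv2.resize(bgr, (imgsz, imgsz)), cv2.COLOR_BGR2RGB)
    rgb_float = np.float32(rgb) / 255.0
    input_tensor = torch.from_numpy(rgb_float).permute(2, 0, 1).unsqueeze(0)

    # NOTE: for a detection model, the raw outputs usually need a wrapper so the
    # CAM target reduces them to a scalar; this is the plain classification-style call.
    cam = XGradCAM(model=model, target_layers=[target_layer])
    grayscale_cam = cam(input_tensor=input_tensor)[0]      # HxW importance map
    return show_cam_on_image(rgb_float, grayscale_cam, use_rgb=True)
```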
Figure 13a shows that in the 20 × 20 detection head heatmap, the red areas cover a wider range. This indicates that the improved model has a larger receptive field when detecting small target objects, thus possessing stronger small-target-detection capabilities. Observing Figure 13b,c, it is evident that the improved model pays more attention to mid-sized and large targets. Therefore, the improved model has better detection capabilities and is more suitable for wildfire-detection tasks.

3.6. Ablation Experiments

We conducted eight groups of ablation experiments using the DW-Fire dataset to verify the performance improvement of YOLOv8 by the proposed Ghost-HGNetV2, C2f-RVT, and RCDetection. All experimental groups used the same training parameters. The grouping situation and results of the experiment are shown in Table 4. The variation curves of the precision, recall, and mAP@50 with the number of training rounds are shown in Figure 14.
Integrating the Ghost-HGNetV2 structure into the YOLOv8 model resulted in a 2% improvement in precision and a 0.8% increase in the mAP, with the computational complexity reduced by 5.5 GFLOPS. Ghost-HGNetV2 enhanced the network's feature-extraction capability. Integrating the C2f-RVT module into the neck part of the model led to a significant improvement in recall. The improvement is ascribed to the RepViTBlock's global-feature-extraction capacity, which raises the neck region's feature-fusion efficiency. The RepViTBlock effectively reduces the occurrence of false negatives, especially when identifying small target objects. Additionally, the RCDetection technique, through parameter sharing, facilitates the learning of more generic feature representations, enhancing both the detection accuracy and recall while reducing the overall computational load of the model.
Compared to YOLOv8, the model we propose has achieved improved detection accuracy. Regarding model complexity, the proposed model demonstrated significant simplification, with a reduction of 5.96 M parameters and a decrease in computational complexity by 11.2 GFLOPS. Due to the reduced model complexity, the improved model is more suitable for deployment on resource-constrained embedded devices.

4. Discussion

In forested areas, regions around power transmission lines are more susceptible to wildfires due to electrical failures, including issues with the transmission lines, fallen power poles, and poor insulation. Therefore, monitoring wildfires around power transmission lines in forested areas is particularly crucial. This study proposes a lightweight target detection model to address the challenges of detecting fires near power lines in forested areas and optimize the detection of small targets.
The method proposed in this manuscript is designed for wildfires located around power lines in forested areas. Pre-trained weights were not used in the experiments to ensure that the model focuses more on detecting wildfires around power lines while improving its interpretability. However, pre-trained weights can reduce model training time and enhance the model’s performance in other fire-detection scenarios. The appropriate use of data augmentation in wildfire-detection tasks can significantly enhance model performance and generalization capabilities. Therefore, this paper chooses the online data augmentation methods provided by the YOLOv8 model for dataset enhancement, including flipping, scaling, translation, erasing, and color adjustment. Researchers can explore more data augmentation methods, such as generating datasets using CycleGAN, to provide additional feature information for the model, ensuring its generalization capabilities. When constructing the dataset for wildfire detection, we introduced 1171 unlabeled images to enhance the model’s generalization capability. These images provide some interference samples for the model and help prevent overfitting. When introducing unlabeled images, it is necessary to control the image quality to ensure that they are closely related to the theme of wildfire detection or have practical significance as interference samples. Moreover, the number of unlabeled images should be moderate. If the image quality is poor, irrelevant noise can be introduced, misleading the model’s learning. On the other hand, if too many unlabeled images are introduced, the model may become too conservative in prediction, thereby increasing the risk of missed detections.
The experimental results indicate that the proposed model is suitable for deployment on edge devices mounted on power transmission towers. By collecting image data in real time through cameras, the input to the deep learning model can automate wildfire detection. Once the device detects flames or smoke, it sends an alert to the relevant personnel, who then confirm whether a fire has actually occurred. Although the algorithm proposed in this study performs well in forest fire detection, it has some limitations. The algorithm is primarily optimized for visible light environments, so it may struggle to detect smoke generated during the initial stages of a fire, especially in low-light conditions at night. Additionally, mist generated between trees during dawn and dusk may affect the camera’s ability to capture clear images, leading to decreased detection performance.
We propose two possible directions for future research to overcome these limitations and further enhance system reliability. Firstly, integrating methods that combine visible and infrared light features can help improve fire detection accuracy in low-light conditions at night. Secondly, developing and integrating dehazing algorithms can improve the quality of images captured by cameras during misty dawns and dusks, thereby enhancing detection accuracy under these conditions. Through these studies, we aim to enhance the algorithm’s applicability and robustness under different environmental conditions.

5. Conclusions

This study proposes a method for detecting wildfires around power transmission lines. The method first constructs a wildfire dataset around power lines, DW-Fire, and then introduces a lightweight detection model based on YOLOv8. Compared to the original YOLOv8 model, the proposed model shows a 3.1% increase in the recall rate and a 1.1% increase in the average precision (mAP@50) in the validation results on the DW-Fire dataset, with a 54.86% reduction in the model parameters and a 39.16% decrease in the computational complexity. It demonstrates better small-target-detection capabilities and high confidence in complex scene detection.
Considering the variability and complexity of the forested power line environment, the model’s detection performance may be affected under extreme conditions. Therefore, future research will further optimize the detection model’s structure and expand the dataset used for model training.

Author Contributions

Conceptualization, X.H. and Q.Z.; methodology, X.H.; software, X.H.; validation, X.H., H.H., and J.X.; formal analysis, Y.L.; investigation, H.H. and Y.L.; data curation, X.H. and Q.Z.; writing—original draft preparation, X.H.; writing—review and editing, W.X.; visualization, Q.Z. and J.X.; supervision, W.X.; funding acquisition, W.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Achievements Transfer and Transformation Demonstration project of Sichuan province in China grant number 2022ZHCG0099 and the Chunhui Project of Ministry of Education of China, grant number Z2022087.

Data Availability Statement

All data generated or presented in this study are available upon request from the corresponding author. The dataset and code cannot be shared due to specific reasons.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; nor in the decision to publish the results.

References

  1. Abid, F. A survey of machine learning algorithms based forest fires prediction and detection systems. Fire Technol. 2021, 57, 559–590. [Google Scholar] [CrossRef]
  2. Gao, Y.; Cheng, P. Forest fire smoke detection based on visual smoke root and diffusion model. Fire Technol. 2019, 55, 1801–1826. [Google Scholar] [CrossRef]
  3. Wu, X.; Lu, X.; Leung, H. A video based fire smoke detection using robust AdaBoost. Sensors 2018, 18, 3780. [Google Scholar] [CrossRef] [PubMed]
  4. Geetha, S.; Abhishek, C.; Akshayanat, C. Machine vision based fire detection techniques: A survey. Fire Technol. 2021, 57, 591–623. [Google Scholar] [CrossRef]
  5. Zhao, L.; Zhi, L.; Zhao, C.; Zheng, W. Fire-YOLO: A small target object detection method for fire inspection. Sustainability 2022, 14, 4930. [Google Scholar] [CrossRef]
  6. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning. PMLR 2019, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  7. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  8. Zhou, M.; Wu, L.; Liu, S.; Li, J. UAV forest fire detection based on lightweight YOLOv5 model. Multimed. Tools Appl. 2023, 83, 61777–61788. [Google Scholar] [CrossRef]
  9. Dou, Z.; Zhou, H.; Liu, Z.; Hu, Y.; Wang, P.; Zhang, J.; Wang, Q.; Chen, L.; Diao, X.; Li, J. An improved yolov5s fire detection model. Fire Technol. 2024, 60, 135–166. [Google Scholar] [CrossRef]
  10. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV) 2018, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  11. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
  12. Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
  13. Yu, P.; Wei, W.; Li, J.; Du, Q.; Wang, F.; Zhang, L.; Li, H.; Yang, K.; Yang, X.; Zhang, N.; et al. Fire-PPYOLOE: An Efficient Forest Fire Detector for Real-Time Wild Forest Fire Monitoring. J. Sens. 2024, 2024, 2831905. [Google Scholar] [CrossRef]
  14. Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022, New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986. [Google Scholar]
  15. Wang, C.Y.; Liao, H.Y.M.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops 2020, Seattle, WA, USA, 14–19 June 2020; pp. 390–391. [Google Scholar]
  16. Chen, C.; Yu, J.; Lin, Y.; Lai, F.; Zheng, G.; Lin, Y. Fire detection based on improved PP-YOLO. Signal Image Video Process. 2023, 17, 1061–1067. [Google Scholar] [CrossRef]
  17. Jin, C.; Zheng, A.; Wu, Z.; Tong, C. Real-time fire smoke detection method combining a self-attention mechanism and radial multi-scale feature connection. Sensors 2023, 23, 3358. [Google Scholar] [CrossRef] [PubMed]
  18. Wang, A.; Chen, H.; Lin, Z.; Han, J.; Ding, G. Repvit: Revisiting mobile cnn from vit perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024, Waikoloa, HI, USA, 3–8 January 2024; pp. 15909–15920. [Google Scholar]
  19. Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs Beat YOLOs on Real-time Object Detection. arXiv 2023, arXiv:2304.08069v3. [Google Scholar]
  20. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  21. Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. Ghostnet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589. [Google Scholar]
  22. Wu, Y.; He, K. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV) 2018, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  23. de Venâncio, P.V.A.; Lisboa, A.C.; Barbosa, A.V. An automatic fire detection system based on deep convolutional neural networks for low-power, resource-constrained devices. Neural Comput. Appl. 2022, 34, 15349–15368. [Google Scholar] [CrossRef]
  24. Guo, X.; Cao, Y.; Hu, T. An Efficient and Lightweight Detection Model for Forest Smoke Recognition. Forests 2024, 15, 210. [Google Scholar] [CrossRef]
  25. Kong, D.; Li, Y.; Duan, M. Fire and smoke real-time detection algorithm for coal mines based on improved YOLOv8s. PLoS ONE 2024, 19, e0300502. [Google Scholar] [CrossRef]
  26. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99. [Google Scholar] [CrossRef]
  27. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2016, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  29. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
  30. Yang, L.; Zhang, R.Y.; Li, L.; Xie, X. Simam: A simple, parameter-free attention module for convolutional neural networks. In Proceedings of the International Conference on Machine Learning. PMLR 2021, Virtual, 18–24 July 2021; pp. 11863–11874. [Google Scholar]
  31. Chen, J.; Kao, S.h.; He, H.; Zhuo, W.; Wen, S.; Lee, C.H.; Chan, S.H.G. Run, Don’t walk: Chasing higher FLOPS for faster neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023, Vancouver, BC, Canada, 17–24 June 2023; pp. 12021–12031. [Google Scholar]
  32. Ouyang, D.; He, S.; Zhang, G.; Luo, M.; Guo, H.; Zhan, J.; Huang, Z. Efficient multi-scale attention module with cross-spatial learning. In Proceedings of the ICASSP 2023–2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
  33. Li, H.; Li, J.; Wei, H.; Liu, Z.; Zhan, Z.; Ren, Q. Slim-neck by GSConv: A better design paradigm of detector architectures for autonomous vehicles. arXiv 2022, arXiv:2206.02424. [Google Scholar]
  34. Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; Zhang, L. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2021, Virtual, 19–25 June 2021; pp. 7373–7382. [Google Scholar]
  35. Fu, R.; Hu, Q.; Dong, X.; Guo, Y.; Gao, Y.; Li, B. Axiom-based grad-cam: Towards accurate visualization and explanation of cnns. arXiv 2020, arXiv:2008.02312. [Google Scholar]
Figure 1. Overall structure diagram of the improved YOLO model.
Figure 2. Ghost-HGNetV2 network architecture diagram.
Figure 3. Ghost-HGBlock network architecture diagram.
Figure 4. Comparison of C2f and C2f-RVT structures: (a) C2f; (b) C2f-RVT.
Figure 5. Resource-friendly Convolutional Detection Head network structure.
Figure 6. Typical scenarios of DW-Fire: (a) normal fire, (b) early fire, (c) fire disturbance, and (d) smoke disturbance.
Figure 7. Visualization statistics of dataset labels. (a) Statistics of label positions relative to images. (b) Statistics of label sizes relative to images.
Figure 8. Comparison chart of computational complexity for each layer: (a) original model; (b) improved model.
Figure 9. Comparison of detection results between advanced wildfire algorithms and our method: (a) wildfire scene 1, (b) wildfire scene 2, and (c) wildfire scene 3 [24,25].
Figure 10. Detection results under interference environment.
Figure 11. Detection results in low-light conditions.
Figure 12. Comparison diagram of the detection effect of the model on small target objects.
Figure 13. YOLOv8 and the improved model in heatmaps of different sizes of detection heads: (a) 20 × 20 detection head, (b) 40 × 40 detection head, and (c) 80 × 80 detection head.
Figure 14. The curves of precision, recall, and mAP@50.
Table 1. DW-Fire label statistics.

Categories        Numbers
Only smoke        4292
Only fire         1016
Smoke and fire    4801
None              1171
Total             11,280
Table 2. Experimental parameter settings.

Training Parameters       Details
Epochs                    300
Batch size                32
Image size (pixels)       640 × 640
Pre-training weights      None
Initial learning rate     0.01
Optimization algorithm    SGD
Table 3. Comparison of results for different object-detection algorithms.

Models                      Precision   Recall   mAP@50   Parameters (M)   FLOPS (G)   FPS
Faster R-CNN (ResNet-50)    0.392       0.785    0.669    28.48            914.16      17.08
SSD (ResNet-50)             0.803       0.451    0.598    14.34            15.09       49.21
YOLOv5s                     0.845       0.734    0.812    7.2              16.5        109.8
YOLOv7-tiny                 0.758       0.677    0.75     6.02             13.2        123.4
YOLOv8s                     0.846       0.747    0.825    11.15            28.6        118
YOLOv10s                    0.828       0.755    0.825    7.21             21.4        123.1
Reference [24]              0.841       0.741    0.821    7.77             25.3        97.2
Reference [25]              0.854       0.755    0.826    8.57             20.9        66.3
Ours                        0.871       0.778    0.836    5.19             17.4        111.3
Table 4. Comparison results of ablation experiments.

Base   Ghost-HGNetV2   C2f-RVT   RCDetection   Precision   Recall   mAP@50   Parameters (M)   FLOPS (G)
✓      –               –         –             0.846       0.747    0.825    11.15            28.6
✓      ✓               –         –             0.866       0.753    0.833    8.33             23.1
✓      –               ✓         –             0.859       0.764    0.829    9.7              25.6
✓      –               –         ✓             0.847       0.769    0.834    9.43             25.8
✓      ✓               ✓         –             0.872       0.764    0.831    6.88             20.1
✓      ✓               –         ✓             0.862       0.766    0.831    6.61             20.9
✓      –               ✓         ✓             0.858       0.773    0.826    8.01             22.9
✓      ✓               ✓         ✓             0.871       0.778    0.836    5.19             17.4
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
