Peer-Review Record

A Lightweight Fire Detection Algorithm Based on the Improved YOLOv8 Model

Appl. Sci. 2024, 14(16), 6878; https://doi.org/10.3390/app14166878
by Shuangbao Ma 1,2, Wennan Li 2, Li Wan 3 and Guoqin Zhang 4,*
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 9 July 2024 / Revised: 29 July 2024 / Accepted: 29 July 2024 / Published: 6 August 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

My additional comments regarding the reviewed work follow.

1.     After reading the article, I consider Chapter 2 (Model and Methods) to be original and important for this field.

2.     This article contributes new knowledge in the field of confident decision-making in security and fire protection systems supervision.

3.     Compared to other published works, this article brings novelty to the thematic area of building algorithms for security systems. The paper improves the feature extraction capability of the model by introducing Mixed Local Channel Attention (MLCA) in the neck. MLCA integrates channel, spatial, local, and global information to balance detection accuracy, speed, and the number of model parameters. The input feature maps are first processed by local average pooling and global average pooling, generating feature maps with different spatial resolutions and information representations. At the local level, MLCA collects detailed information using a sliding-window method and computes local spatial relationships to generate the corresponding attention weights. Global attention acquires global information by computing the relationships between channels across the entire feature map.

4.      In my opinion, an important direction for further research is for the team to investigate the reliability (security) of the developed algorithm and of the entire modernized security system.

5.     The conclusions are too general and should be rewritten so that they are consistent with the evidence and arguments presented.
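The MLCA mechanism described in comment 3 can be illustrated with a minimal NumPy sketch. Everything here is an editorial assumption for illustration only: the fixed averaging kernel standing in for MLCA's learned 1-D channel convolution, the pooling sizes, and the simple averaging of the two branches are not taken from the paper under review.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlca_sketch(x, local_size=5, k=3):
    """Hypothetical sketch of Mixed Local Channel Attention (MLCA).

    x: feature map of shape (channels, H, W); H and W are assumed to be
    divisible by `local_size`. The learned ECA-style 1-D convolution is
    replaced here by a fixed averaging kernel purely for illustration.
    """
    c, h, w = x.shape
    s = local_size
    kern = np.ones(k) / k

    # Global branch: one descriptor per channel, smoothed across channels
    g = x.mean(axis=(1, 2))                                       # (c,)
    g = np.convolve(g, kern, mode="same")                         # (c,)

    # Local branch: average-pool to a coarse s x s grid of channel vectors
    local = x.reshape(c, s, h // s, s, w // s).mean(axis=(2, 4))  # (c, s, s)
    local = np.apply_along_axis(
        lambda v: np.convolve(v, kern, mode="same"), 0, local)
    # Upsample the coarse local attention map back to full resolution
    local = np.repeat(np.repeat(local, h // s, axis=1), w // s, axis=2)

    # Mix local and global information, then gate the input features
    attn = sigmoid(0.5 * (g[:, None, None] + local))
    return x * attn
```

The design point the comment highlights is visible here: the global branch contributes one weight per channel, while the local branch lets that weight vary across coarse spatial regions before gating the input.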

 

 

Author Response

Thank you very much for taking the time to review this manuscript. We highly value the feedback received, as it serves as a valuable guide for enhancing and refining our paper, while also offering significant direction for our future research endeavors. Please find the detailed responses below and the corresponding revisions.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper is well written and the model seems to work well, but it would be good to extend the experiments with some special-case photos. For example:

How will the model work on a photo containing artificial light, or one in which the Sun is visible?

How will the model react to a photo containing a picture of fire, a realistic case for a home with pictures on the wall?

As the dataset used is not public, the authors should provide more information on it, or provide a link to it.

Author Response

Thank you very much for taking the time to review this manuscript. We highly value the feedback received, as it serves as a valuable guide for enhancing and refining our paper, while also offering significant direction for our future research endeavors. Please find the detailed responses below and the corresponding revisions.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This paper aims to improve existing fire detection systems, which are often affected by environmental factors.

To this end, the authors use the YOLOv8 as a base model, and propose the so-called GCM-YOLO lightweight fire detection algorithm. The backbone network is enhanced and higher precision is achieved by integrating GhostNet, introducing the CARAFE upsampling module, and incorporating the MLCA attention mechanism. 

The models and methods are well explained in Section 2: 
2.1. Improved YOLOv8 Model,
2.2. GhostNet,
2.3. The Content-Aware ReAssembly of Features Module,
2.4. Mixed Local Channel Attention,
and experimental results are nicely presented and discussed in Section 3: 
3.1. Materials, 
3.2. Training Equipment and Parameter Setting, 
3.3. Evaluation Indicators, 
3.4. Comparison of Ablation Experiments, 
3.5. Comparison Experiment, 
3.6. Results and Analysis. 

Overall, the results look interesting and meaningful, so I recommend its publication in Applied Sciences after a minor revision.

 

Comments:

1. There are lots of abbreviations. 

GCM-YOLO, YOLOv8, GhostNet, CARAFE, MLCA, mAP@0.5, HGNetv2, SEAttention, SimAM, CBAM...

The authors should spell out the full name of each abbreviation at least once, in the introduction or where the term first appears.
Alternatively, a list of abbreviations with their descriptions could be included at the end of the paper.

2. On page 5, lines 146-147: 

It is stated that the output channel number is \sigma^2 k_{up}^2 and the upsampling kernel size is \sigma H \times \sigma W \times k_{up}^2. However, if I understand correctly, the numbers shown in Figure 4 are \sigma^2 k_{up}^4 and \sigma H \times \sigma W \times k_{up}^4, respectively. That is, the power of k_{up} is different: k_{up}^2 and k_{up}^4. These numbers should be corrected, either in the text or in the figure. 

3. Figure 4 needs to be improved. What is the \chi' on the right side of the cube? 

4. On page 2, line 47: Bahhar C et al. [12] → Bahhar et al. [12]

5. On page 2, line 51: Kim S. et al. [13] → Kim et al. [13]

6. On page 2, line 53: Jie Y et al. [14] → Yang et al. [14]

7. On page 2, line 56: Sangwon K et al. [15] → Kim et al. [15]
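For context on comment 2, the reassembly step of CARAFE can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes, not the authors' implementation: the kernel-prediction module is taken as given, and its output is assumed to have already been pixel-shuffled into one k_up × k_up kernel per output location (which is where the σ²·k_up² channel count arises).

```python
import numpy as np

def carafe_sketch(x, kernels, sigma=2, k_up=3):
    """Illustrative sketch of CARAFE feature reassembly (not the paper's code).

    x:       input features, shape (c, h, w)
    kernels: predicted reassembly kernels, shape
             (sigma*h, sigma*w, k_up*k_up), i.e. one k_up x k_up kernel per
             output location after pixel-shuffling the kernel-prediction
             output, whose channel count is sigma^2 * k_up^2.
    """
    c, h, w = x.shape
    p = k_up // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), mode="edge")
    out = np.empty((c, sigma * h, sigma * w))
    for i in range(sigma * h):
        for j in range(sigma * w):
            # Each output pixel reassembles the k_up x k_up neighbourhood of
            # its source location with a softmax-normalised kernel.
            kern = kernels[i, j]
            kern = np.exp(kern - kern.max())
            kern /= kern.sum()
            si, sj = i // sigma, j // sigma
            patch = xp[:, si:si + k_up, sj:sj + k_up]    # (c, k_up, k_up)
            out[:, i, j] = (patch * kern.reshape(k_up, k_up)).sum(axis=(1, 2))
    return out
```

In the original CARAFE design, the kernel-prediction output carries σ²·k_up² channels at H × W (36 for σ = 2, k_up = 3) and pixel-shuffles to σH × σW × k_up², which is the quantity the comment asks the authors to reconcile between the text and Figure 4.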

 

Author Response

Thank you very much for taking the time to review this manuscript. We highly value the feedback received, as it serves as a valuable guide for enhancing and refining our paper, while also offering significant direction for our future research endeavors. Please find the detailed responses below and the corresponding revisions.

Author Response File: Author Response.pdf
