**4. Conclusions**

With the development of remote camera sensing technology, many researchers have attempted to improve existing wildfire-detection systems using CNN-based deep learning. In the damage-detection field, it is difficult to obtain a sufficient amount of training data; data imbalance and overfitting problems have therefore degraded model performance. To mitigate these problems, traditional image-transformation methods such as image rotation were primarily used. A method of increasing the training data was also adopted, wherein flame images were artificially cut and pasted over forest backgrounds. However, these two methods have respective weaknesses: the former fails to increase the diversity of the images, while the latter requires manual labor and produces unnatural images. The results of this study address these issues.

Our study had several advantages. First, an artificial-intelligence-based data augmentation method was used, which can generate data while requiring minimal manpower. Using adversarial, cycle-consistency, and identity losses, the optimized model could produce various flame scenarios. The model could also be pre-trained for various wildfire scenarios in new environments, prior to the management of the forest; higher detection accuracy could thus be expected. Second, we improved the detection accuracy by applying a dense block based on DenseNet in the model. The training history and test results showed that the proposed methods facilitated good model performance. Third, we proposed applying the model to high-resolution images to overcome the limitation of depending primarily on small images as model inputs. This allows the approximate location of a wildfire to be identified from wide-area photographs.
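The three losses mentioned above are combined, in the standard CycleGAN formulation (Zhu et al., 2017), roughly as follows; here $G$ and $F$ are the two generators, $D_X$ and $D_Y$ the discriminators, and the weights $\lambda_{\text{cyc}}$ and $\lambda_{\text{id}}$ are hyperparameters not specified in this section:

$$
\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\text{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\text{GAN}}(F, D_X, Y, X) + \lambda_{\text{cyc}}\,\mathcal{L}_{\text{cyc}}(G, F) + \lambda_{\text{id}}\,\mathcal{L}_{\text{id}}(G, F)
$$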
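The dense-block idea can be illustrated with a minimal, framework-free sketch: each layer receives the concatenation of all preceding feature maps and adds a fixed number of new channels (the growth rate), as in DenseNet. The 1x1 random "convolution" below is a stand-in for the real conv–BN–ReLU layer and is purely illustrative, not the model used in this paper.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Toy dense block: every layer's input is the channel-wise
    concatenation of all earlier outputs (DenseNet-style)."""
    features = [x]  # channels-first tensors of shape (C, H, W)
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)
        # stand-in for conv + activation: random 1x1 projection + ReLU
        w = rng.standard_normal((growth_rate, inp.shape[0]))
        out = np.maximum(np.einsum("oc,chw->ohw", w, inp), 0.0)
        features.append(out)
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8, 8))              # 16 input channels
y = dense_block(x, num_layers=4, growth_rate=12, rng=rng)
print(y.shape)  # channels grow to 16 + 4*12 = 64 -> (64, 8, 8)
```

The linear channel growth (16 + 4 × 12 = 64) is what gives dense blocks their feature-reuse property: later layers see every earlier feature map directly.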

There were also several limitations to our study. Model training was conducted using a limited set of forest classes. Although the model identified the cloud and wildfire areas well in the experiment with drone images (the upper part of the cropped photographs in Figure 11), the smoke in the sky portion of the image was not captured as a feature when the test data were inspected using CAM. This could be addressed by increasing the class range or by training additional models with images that are likely to confuse the model. Another potential problem is that model performance for nighttime wildfire detection was not considered. This temporal variable was excluded because the purpose of this study was to verify the efficiency of the artificial-intelligence-based data augmentation method and of the dense block in wildfire-detection models. However, nighttime detection should be considered in further studies because its characteristics differ from those of daytime detection.
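The CAM inspection mentioned above can be sketched as follows: a class activation map is the weighted sum of the final convolutional feature maps, using the fully-connected weights of the chosen class (Zhou et al., 2016). The shapes and the two-class layout (background vs. wildfire) below are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM: weight each final-conv channel by the FC weight for the
    target class, sum over channels, then normalize to [0, 1]."""
    # feature_maps: (C, H, W); fc_weights: (num_classes, C)
    cam = np.einsum("c,chw->hw", fc_weights[class_idx], feature_maps)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize for heatmap visualization
    return cam

rng = np.random.default_rng(1)
fmaps = rng.random((32, 7, 7))           # final conv feature maps
weights = rng.standard_normal((2, 32))   # classes: [background, wildfire]
cam = class_activation_map(fmaps, weights, class_idx=1)
print(cam.shape)  # (7, 7) heatmap, upsampled to image size in practice
```

In practice the low-resolution map is upsampled to the input image size and overlaid as a heatmap, which is how the missed sky-region smoke feature discussed above would be observed.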

By building on the achievements and addressing the limitations of this study, we intend, in a future study, to implement a forest-fire detection model in the field by installing real-time surveillance cameras in Gangwon-do, Korea, which is exposed to the risk of wildfires every year.

In addition, by developing a technology that calculates the location of a fire using image processing to measure the distance of the fire area from the camera, and displays it on a map-based user interface, we intend to provide disaster-response support information that enables decision makers to respond quickly in the event of a wildfire.

**Author Contributions:** Idea development and original draft writing, M.P.; draft review and editing, D.Q.T.; project administration, D.J.; supervision and funding acquisition, S.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by a grant [2019-MOIS31-011] from the Fundamental Technology Development Program for Extreme Disaster Response funded by the Ministry of Interior and Safety, Korea.

**Conflicts of Interest:** The authors declare no conflict of interest.
