Article

Burned-Area Mapping Using Post-Fire PlanetScope Images and a Convolutional Neural Network

by
Byeongcheol Kim
1,
Kyungil Lee
2 and
Seonyoung Park
1,*
1
Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, 232 Gongneung-ro, Nowon-gu, Seoul 01811, Republic of Korea
2
AI Semiconductor Research Center, Seoul National University of Science and Technology, 232 Gongneung-ro, Nowon-gu, Seoul 01811, Republic of Korea
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2629; https://doi.org/10.3390/rs16142629
Submission received: 17 May 2024 / Revised: 5 July 2024 / Accepted: 12 July 2024 / Published: 18 July 2024

Abstract

Forest fires result in significant damage, including the loss of critical ecosystems and of individuals that depend on forests. Remote sensing provides efficient and reliable information for forest fire detection on various scales. The purposes of this study were to produce burned-area maps and to identify the applicability of transfer learning. We produced burned-area (BA) maps using single post-fire PlanetScope images and a deep learning (DL)-based algorithm for three cases in the Republic of Korea and Greece. Publicly accessible Copernicus Emergency Management Service and land cover maps were used as reference data for classification and validation. The DL model was trained using six schemes, including three vegetation indices, and the data were split into training, evaluation, and validation sets based on a specified ratio. In addition, the model was applied to another site and assessed for transferability. The performance of the model was assessed using its overall accuracy. The U-Net model used in this study produced an F1-score of 0.964–0.965 and an intersection-over-union score of 0.938–0.942 for BAs. When compared with other satellite images, unburned and non-forested areas were accurately identified using PlanetScope imagery with a spatial resolution of approximately 3 m. The structure and seasonality of the vegetation in each target area were also more accurately reflected because of the higher resolution, potentially lowering the transferability. These results indicate the possibility of efficiently identifying BAs using a DL-based method with single satellite images.

1. Introduction

Numerous environmental changes induced by natural occurrences or anthropogenic activity have resulted in forests being more vulnerable to a variety of natural disturbances, including forest fires. Forest fires have a significant and rapid influence on both abiotic and biotic conditions in forest ecosystems, as well as on human society. Fires constitute a long-term threat, contributing to habitat destruction and soil erosion. They also alter the atmosphere and global climate by emitting greenhouse gases [1,2,3]. Accurately analyzing the scale of burned areas (BAs) caused by forest fires is important for proper restoration planning and budgeting [4].
Remote sensing techniques have advantages over in situ measurements [5,6,7]. BA detection is frequently dependent on the effects of fire on forest-area reflectance, whereas active fire identification is mostly based on the thermal difference between actively burning areas and their surroundings [8,9]. Satellite imagery provides regular and objective information about the Earth’s landscape. Remote sensing techniques using satellite imagery can extract and analyze reliable information to detect BAs [10,11,12].
The use of differential temporal imaging to distinguish burned from unburned areas is effective. However, it can be difficult to acquire pre-fire images if the revisit period is too long, which may also cause evaluation difficulties [13,14]. Previous studies have primarily used pre- and post-fire imagery to predict the areas damaged by forest fires. Obtaining pre-fire imagery can be difficult if there are unusable pixels, such as those affected by clouds and smoke. If recent data prior to the forest fire are unavailable, significant noise may arise when detecting forest fire damage because of the phenological impact on vegetation. Previous research observed that BA prediction maps based on pre- and post-fire imagery often resulted in high computational costs and did not always effectively improve the identification of burned areas [15]. Previous studies have used the normalized burn ratio [16,17] and the normalized difference vegetation index (NDVI) [18]; these values changed significantly between images acquired before and after a forest fire, potentially lowering the prediction performance of the models [19]. If there is a significant difference in the duration between images acquired before and after a forest fire, the detection accuracy may be poor because of spectral changes from various climate-change effects [20,21,22]. The accuracy and efficiency of BA detection can be enhanced by the use of imagery with a high spatial resolution, even if only post-fire satellite images are used. Images with a high spatial resolution can effectively discriminate BAs in mixed forests by compositing multi-spectral bandwidths [23,24,25]. Previous studies have also demonstrated that single-temporal indicators can outperform other methods in analyzing the extent of forest-fire damage, depending on the characteristics of the forest and the fire damage [26,27].
There are several conventional BA estimation methods that use single-temporal imagery. These involve the determination of a specific threshold for relevant spectral indices that appear in BAs. There may be significant variations in the results, depending on the spectral characteristics of the data used [28]. The threshold depends on various environmental factors, which should be considered in light of conditions such as fuel dryness, rain, biomass, and spatial resolution [29]. When using machine learning, including random forest (RF), unstable results can be produced if the ratio of the classes in the training dataset is imbalanced [30]. K-nearest neighbors and RF models cannot efficiently differentiate water bodies and BAs in all sensors [31]. It is also difficult to reflect the forest characteristics of areas where forest fires have occurred, especially if high-resolution imagery is not used [31,32]. In a previous study, the convolutional neural network (CNN) approach outperformed traditional machine learning approaches, including RF and LightGBM [33]. Other previous studies have used high-resolution images such as PlanetScope, but the models become confused when attempting to discriminate between certain wetlands and severe BAs using the basic bands of PlanetScope [34]. Predicting binary classes based only on burned and unburned areas is limited because areas without damage are likely to be included in BAs from images with a high resolution [35,36,37,38,39]. There may also be limits in terms of verification because of a lack of, or inconsistent, reference data.
The objectives of this study were to produce burned-area maps and verify the accuracy using only post-fire images from PlanetScope. We classified burned areas from various sites located in Republic of Korea and Greece, and identified the applicability of transfer learning. The extent of the damaged areas from the forest fires was estimated using a CNN approach solely based on post-fire imagery. The CNN approach with segmentation enabled us to estimate pixel-based label values from an image dataset without the need to specify a separate threshold. BA construction was performed using a single image and various vegetation indices. Performance was evaluated based on various metrics for the model assessment. Our method improved the mapping accuracy using high-resolution images and CNN techniques, and reduced confusion errors using various vegetation indices. We created a lightweight model using only post-fire images with three classes, and analyzed the performance of our method as well as its transferability to another BA.

2. Study Area and Data

Three areas were considered in this study: Andong and Uljin in the Republic of Korea, and Corinth in Greece. Forest fires occurred in Andong from 24 April to 26 April 2020, and in Uljin from 4 March to 13 March 2022. The damaged area in Andong was calculated to be 1944 hectares (ha), and the damaged area in Uljin was 18,463 ha. Corinth experienced a forest fire from 22 July to 23 July 2020, with an estimated damaged area of 3282 ha. Figure 1 presents a summary of the study areas used in this study. The acquisition dates differed for each study area: the site A image was from 1 May 2020, the site B image was from 26 July 2020, and the site C image was from 29 March 2022.
Figure 1 presents the true-color maps of the burned areas and the land cover maps of each study area. Figure 1A depicts Andong in the Republic of Korea, characterized by inland water bodies and extensive coniferous forests. Figure 1B illustrates Corinth in Greece, with a land cover consisting of natural-material surfaces, sclerophyllous vegetation, herbaceous vegetation, and cultivated areas. Figure 1C represents Uljin in the Republic of Korea, which consists of extensive forest areas, small water bodies flowing between the forests, and artificial areas. In the Republic of Korea, the land cover consists of forests, agricultural areas, inland water bodies, grasslands, and barren lands; the country is usually dry from November to early April, and vegetation begins to grow from April. In contrast, Greece features a land cover with abundant grasslands, agricultural areas, and barren lands surrounding forests. The eastern part of Greece is relatively dry because of the influence of the Pindus Mountains, and the Mediterranean climate causes the eastern Peloponnesus region, where Corinth is located, to be hot and dry in the summer.

2.1. PlanetScope Data

PlanetScope was developed and is provided by Planet Labs in the United States. PlanetScope includes Dove Classic, Dove-R, and SuperDove. PlanetScope was launched as Dove Classic in July 2016. PlanetScope satellites operate as a constellation comprising over 130 Dove missions, and they can capture 200 million square kilometers of Earth imagery per day. PlanetScope has a 3 m spatial resolution for general scenes, and the PlanetScope Ortho tile has a 3.125 m spatial resolution. The revisit time for the PlanetScope mission is 1 day.
PlanetScope Dove Classic and Dove-R have four default spectral bands: red, green, blue, and near-infrared (NIR). These are included in the PlanetScope Ortho tiles. We selected the surface reflectance products from the PlanetScope scenes based on the analytical data from the Ortho tiles. PlanetScope image data were selected to produce forest fire burned-area maps, and PlanetScope Ortho tiles acquired after the forest fires were used in this study. Although PlanetScope imagery is not freely available, researchers and students can access PlanetScope data by applying through the Education and Research (E&R) program [41].
As PlanetScope does not have a short-wave infrared (SWIR) band, the four available bands were used to calculate several spectral indices, including the vegetation indices. We used the NDVI [42,43,44], the green normalized difference vegetation index (GNDVI) [45,46,47,48], and the blue normalized difference vegetation index (BNDVI) [49,50]:
$$\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{RED}}{\mathrm{NIR} + \mathrm{RED}} \quad (1)$$
$$\mathrm{GNDVI} = \frac{\mathrm{NIR} - \mathrm{GREEN}}{\mathrm{NIR} + \mathrm{GREEN}} \quad (2)$$
$$\mathrm{BNDVI} = \frac{\mathrm{NIR} - \mathrm{BLUE}}{\mathrm{NIR} + \mathrm{BLUE}} \quad (3)$$
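The following is a minimal sketch of how these three indices can be derived from the four PlanetScope surface-reflectance bands, assuming the bands have already been read into NumPy arrays; the helper names are illustrative and not part of the original processing chain.

```python
import numpy as np

def normalized_difference(nir: np.ndarray, band: np.ndarray) -> np.ndarray:
    """Generic (NIR - band) / (NIR + band) ratio, guarding against division by zero."""
    out = np.zeros(nir.shape, dtype=float)
    denom = nir + band
    np.divide(nir - band, denom, out=out, where=denom != 0)
    return out

def compute_indices(blue, green, red, nir):
    """Return the NDVI, GNDVI, and BNDVI of Equations (1)-(3) as arrays."""
    ndvi = normalized_difference(nir, red)     # Equation (1)
    gndvi = normalized_difference(nir, green)  # Equation (2)
    bndvi = normalized_difference(nir, blue)   # Equation (3)
    return ndvi, gndvi, bndvi
```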

2.2. Reference Data

The Copernicus Emergency Management Service (CEMS) Rapid Mapping Activations provide the perimeter of the burned area of site B. The Korea Forest Service and the Ministry of Environment have not publicly released the boundaries of the forest-fire cases for site A and site C. We therefore manually generated the reference maps for site A and site C using pre- and post-fire true-color images, vegetation indices, and Google Earth Pro visuals. The vegetation index was calculated as the NDVI using post-fire imagery from the PlanetScope data. The vegetation index dataset was then converted to a vectorized file with burned or non-burned labels. The boundaries of the labels were then re-labeled by hand, based on pre- and post-fire imagery and past images from Google Earth Pro.
The original spatial resolutions of the land cover map data were resampled to 3.125 m to match the resolution of the Planet imagery. Both the coniferous and broadleaf forests represented on the land cover maps were reclassified as forest areas (FAs). The remaining areas, which did not include FAs and BAs, were classified as non-forest areas (NFAs) on the land cover map. A value of 2 was allocated to the areas corresponding to the borders and boundary ranges of the existing BAs. The reference maps for each study area were thus classified into three classes with values of 0, 1, and 2, where 0 represented the NFA, 1 represented the FA, and 2 represented the BA.
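As a rough illustration of this reclassification step, the sketch below builds the three-class reference map from a land cover array that is assumed to be already resampled to the 3.125 m grid and from a Boolean burned-area mask derived from the reference polygons; the forest class codes are placeholders, not the official land cover codes.

```python
import numpy as np

FOREST_CODES = [310, 320]  # hypothetical codes for coniferous and broadleaf forest

def build_reference_map(landcover: np.ndarray, burn_mask: np.ndarray) -> np.ndarray:
    """Reclassify a resampled land cover raster into NFA (0), FA (1), and BA (2)."""
    reference = np.zeros_like(landcover, dtype=np.uint8)  # 0 = non-forest area (NFA)
    reference[np.isin(landcover, FOREST_CODES)] = 1       # 1 = forest area (FA)
    reference[burn_mask] = 2                               # 2 = burned area (BA)
    return reference
```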

3. Methodology

We tested six schemes to determine the most efficient input data for the deep learning model to detect burned areas. Table 1 lists the six schemes. Schemes 2 to 5 were each based on S1 and incorporated various vegetation indices, namely the NDVI, GNDVI, and BNDVI. As the NIR band is more useful than the visible bands for identifying burned areas among the default PlanetScope bands [51,52,53], we also constructed S6, which included only the NIR band and the three vegetation indices.
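For illustration, the input stacks of Table 1 could be assembled as in the sketch below; the channel ordering and function name are assumptions made for this example rather than the exact preprocessing used in the study.

```python
import numpy as np

def build_scheme_stack(scheme, blue, green, red, nir, ndvi, gndvi, bndvi):
    """Stack the channels listed in Table 1 for one of the six schemes (S1-S6)."""
    channels = {
        "S1": [red, green, blue, nir],
        "S2": [red, green, blue, nir, ndvi],
        "S3": [red, green, blue, nir, gndvi],
        "S4": [red, green, blue, nir, bndvi],
        "S5": [red, green, blue, nir, ndvi, gndvi, bndvi],
        "S6": [nir, ndvi, gndvi, bndvi],
    }
    return np.stack(channels[scheme], axis=-1)  # shape: (rows, cols, n_channels)
```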
Figure 2 displays a flowchart of the overall experiment. All datasets obtained from each database were preprocessed and used as input datasets to train several models. The input datasets were implemented using various channel configurations according to the six schemes. The input datasets for each study area were trained using U-Net models. Every image dataset was extracted at random using a chosen grid in each study area. The pixel size and spatial resolution were used to calculate the length of one side of a square to construct the grid. The study area formed a square grid that was larger than the burned area. Image patches from the initial square grid were then divided using a GeoPackage file of randomly selected grid cells (60%). Each raster was then collected with the randomly chosen grid using the open-source software QGIS 3.28.10. The selected input image patches were extracted and randomly divided into training, test, and validation data at ratios of 60%, 20%, and 20%, respectively, to train the U-Net models. The patches were created on a grid so that each study-area image was 31 × 31 pixels and included the burned areas. The number of input patches differed for each study area. Input patches were created by including forest and non-forest areas, with the burned area in the center. The input datasets of site A comprised 3609, 1204, and 1203 patches for training, testing, and validation, respectively. Site B had an input dataset comprising 6797, 2266, and 2266 patches, respectively. The performance of the model used for each scheme was assessed using several metrics, averaged over 10 runs. The results were then applied to another area to evaluate the transferability.
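A simplified sketch of the random 31 × 31 patch extraction and the 60/20/20 split is given below; the number of patches, the random seed, and the sampling strategy are illustrative assumptions rather than the exact QGIS-based workflow described above.

```python
import numpy as np

def extract_patches(image, labels, patch_size=31, n_patches=6000, seed=0):
    """Randomly cut matching image/label patches from a study-area raster."""
    rng = np.random.default_rng(seed)
    rows, cols = labels.shape
    x, y = [], []
    for _ in range(n_patches):
        r = rng.integers(0, rows - patch_size)
        c = rng.integers(0, cols - patch_size)
        x.append(image[r:r + patch_size, c:c + patch_size, :])
        y.append(labels[r:r + patch_size, c:c + patch_size])
    return np.stack(x), np.stack(y)

def split_60_20_20(x, y, seed=0):
    """Randomly split patches into training (60%), test (20%), and validation (20%) sets."""
    idx = np.random.default_rng(seed).permutation(len(x))
    n_train, n_test = int(0.6 * len(x)), int(0.2 * len(x))
    train, test, val = np.split(idx, [n_train, n_train + n_test])
    return (x[train], y[train]), (x[test], y[test]), (x[val], y[val])
```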

3.1. U-Net

A U-Net [54] model is a convolutional neural network with two paths connected through a bottleneck and linked by skip connections. The two paths of U-Net consist of a contracting path and an expanding path. The contracting path extracts context from the input image patch by downsampling. The expanding path constructs feature maps from the shallow layers by upsampling. U-Net can thus obtain dense prediction maps from coarse maps using a fully convolutional construction. In this study, we trained the U-Net models with 40 epochs, a batch size of 12, and a learning rate of 1 × 10−4 (0.0001). These values were chosen as the optimal parameters for model training after multiple tests.
The activation functions used were Rectified Linear Unit (ReLU) [55] for nine Conv2D layers and SoftMax for one Conv2D layer in the output layer. The ReLU activation function outputs the input value if it is positive and outputs 0 if it is negative. This mechanism improves cost efficiency and prevents vanishing gradient and saturation problems. The probability that an image belongs to a label is calculated using the SoftMax function in the output layer. The label with the highest probability is selected as the final predicted value. The U-Net model was trained using the adaptive moment estimation (Adam) optimizer. The Adam optimizer provides inertia in the training speed and an adaptive learning rate based on recent changes in the optimization path. The loss function used was categorical cross-entropy (CCE), which is represented by the following equations:
$$f(s_i) = \frac{e^{s_i}}{\sum_{j}^{C} e^{s_j}}$$
$$\mathrm{CCE} = -\sum_{i}^{C} t_i \log f(s_i) \quad (4)$$
where $t_i$ is the ground truth and $f(s_i)$ is the SoftMax output for class $i$ among the $C$ classes. CCE is applied with the SoftMax function to predict a certain class in a multi-class problem, and it converges faster than the mean-square-error loss function in classification tasks. The loss function above calculates the categorical loss value for a multi-class classification.
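The sketch below shows one way to reproduce this training configuration (Adam, learning rate 1 × 10−4, categorical cross-entropy, 40 epochs, batch size 12) in Keras. The layer widths and depth are illustrative rather than the exact architecture, and the 31 × 31 patches are assumed to be zero-padded to 32 × 32 so that the pooling and upsampling sizes match.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_unet(n_channels: int, n_classes: int = 3) -> tf.keras.Model:
    inputs = layers.Input(shape=(32, 32, n_channels))
    # Contracting path
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(c1)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
    c2 = layers.Conv2D(64, 3, activation="relu", padding="same")(c2)
    p2 = layers.MaxPooling2D(2)(c2)
    # Bottleneck
    b = layers.Conv2D(128, 3, activation="relu", padding="same")(p2)
    # Expanding path with skip connections
    u2 = layers.UpSampling2D(2)(b)
    u2 = layers.concatenate([u2, c2])
    c3 = layers.Conv2D(64, 3, activation="relu", padding="same")(u2)
    u1 = layers.UpSampling2D(2)(c3)
    u1 = layers.concatenate([u1, c1])
    c4 = layers.Conv2D(32, 3, activation="relu", padding="same")(u1)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
    return models.Model(inputs, outputs)

model = build_small_unet(n_channels=4)  # 4 channels corresponds to Scheme S1
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# Hypothetical training call, assuming x_train/y_train and x_val/y_val exist:
# model.fit(x_train, tf.keras.utils.to_categorical(y_train, 3),
#           validation_data=(x_val, tf.keras.utils.to_categorical(y_val, 3)),
#           epochs=40, batch_size=12)
```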

3.2. Performance Assessment

The predicted results were assessed using several evaluation metrics associated with the classification. The results of every model trained under each scheme were averaged over 10 runs. The classification metrics were assessed for three classes: BA, FA, and NFA. Precision, recall, and the F1-score were used in the classification of burned areas. Precision is the percentage of pixels the model classified as true that were actually true, and recall is the percentage of actually true pixels that the model predicted as true. In Equations (5) and (6), a true positive (TP) is a pixel predicted as true that is actually true, a false positive (FP) is a pixel predicted as true that is actually false, and a false negative (FN) is a pixel predicted as false that is actually true. The F1-score is the harmonic mean of the precision and recall values, as shown in Equation (7). The intersection-over-union (IoU), shown in Equation (8), was used to assess the performance of the CNN models.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (5)$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (6)$$
$$\mathrm{F1\text{-}Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (7)$$
$$\mathrm{IoU} = \frac{|\mathrm{Ground\ truth} \cap \mathrm{Prediction\ map}|}{|\mathrm{Ground\ truth} \cup \mathrm{Prediction\ map}|} \quad (8)$$
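A minimal per-class implementation of Equations (5)–(8), assuming the ground truth and prediction are integer label arrays (0 = NFA, 1 = FA, 2 = BA), could look as follows; the function name and return format are chosen for this example.

```python
import numpy as np

def class_metrics(y_true: np.ndarray, y_pred: np.ndarray, cls: int) -> dict:
    """Precision, recall, F1-score, and IoU for one class, per Equations (5)-(8)."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0  # |A ∩ B| / |A ∪ B|
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}
```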

3.3. Transferability Experiments

A case study was conducted to evaluate the transferability of the method. The case study was performed using site C (Uljin, Republic of Korea). Site C had a forest fire approximately 4–5 times greater than those at site A and site B. We analyzed the transferability of the U-Net model developed for site A and site B to site C (Table 1). We tested the same parameters and schemes used for the U-Net model with an image patch size of 31 × 31. The assessment was performed using the overall accuracy.
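Under the assumption that a model trained on site A or site B and the site C patch arrays are already available from the steps above, the transferability check reduces to predicting site C and computing the overall accuracy, as sketched here; the variable names are hypothetical.

```python
import numpy as np

def overall_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of pixels whose predicted class matches the reference class."""
    return float(np.mean(y_true.ravel() == y_pred.ravel()))

# Hypothetical usage with a trained Keras model and site C arrays:
# pred_c = np.argmax(model.predict(x_site_c), axis=-1)  # per-pixel class labels
# print("Overall accuracy on site C:", overall_accuracy(y_site_c, pred_c))
```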

4. Results

4.1. The Performance of Each Scheme

Figure 3 presents the prediction maps for each scheme from site A. Schemes 1 to 6 are the BA prediction maps that classified the data into three classes. Scheme 1, Scheme 2, and Scheme 4 revealed false alarms in the NFAs; Scheme 2, which consisted of the visible bands and the NDVI, had fewer noisy pixels than Scheme 1 and Scheme 4. Scheme 2, Scheme 3, and Scheme 4 did not present any false alarms for FAs compared with Scheme 1, but noise was observed around the BAs and FAs. Scheme 5 and Scheme 6, which included the three vegetation indices, presented fewer noisy detections for BAs. Table 2 lists the performance of the U-Net models for site A assessed using the four metrics. Scheme 5 had a higher recall (0.976) and a higher F1-score (0.967) than the other schemes. Scheme 5 also had a higher IoU (0.938) for the BAs, followed by Scheme 1 and Scheme 6. Scheme 4 had a high precision of 0.971 but a low recall of 0.932, indicating a trade-off between the recall and precision values in Scheme 4 and suggesting that the model was highly confused. In contrast to Scheme 1, the F1-score for FAs was higher in Schemes 2–6, which included other vegetation indices. For NFAs, Scheme 2, which used the NDVI, presented the highest IoU and F1-score values compared with the other schemes using vegetation indices. Although a few incorrect detections were identified in the water body of site A, most schemes correctly predicted burned areas when only post-wildfire imagery was used.
Figure 4 presents the prediction maps from site B. All schemes exhibited confusion between the FAs and NFAs on the prediction map, probably because of the lack of sufficient data to accurately classify the FAs and NFAs. Scheme 1, Scheme 3, and Scheme 6 effectively detected burned areas, with fewer noisy pixels. Table 3 presents a summary of the performance metrics of site B. The precision value for the burned area (BA) was the highest in Scheme 2 at 0.972, and Scheme 3 had the highest recall value (0.974). Scheme 6 had the highest F1-score for BAs (0.964), followed by Scheme 1 and Scheme 5. The IoU rating for BAs reached its maximum value in Scheme 6 (0.942). The FA class had the greatest IoU value in Scheme 4 (0.845), and the NFA class had the highest IoU value in Scheme 1 (0.901). Although both Scheme 2 and Scheme 3 demonstrated high recall and precision, the trade-off between these metrics resulted in lower IoU and F1-score values than in the other schemes. Site B performed comparably to site A, with good accuracy in all schemes. Despite minor variations, using the vegetation indices resulted in a greater classification accuracy for the detection of burned areas.

4.2. Spatial Examination of Burned Area Detection Errors

Figure 5 presents the predicted error maps for site A. These maps were produced by integrating ground truths with predicted values and categorizing them as ‘area predicted correctly’, ‘NBA was incorrectly predicted as BA’, and ‘BA was incorrectly predicted as NBA’. A light green color was used in Figure 5 to depict the regions that were actual BAs and correctly predicted as BAs. Blue areas represent the regions that were incorrectly predicted as BAs and were actually NBAs. Red patches indicate regions that were incorrectly predicted as NBAs and were actually BAs.
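These error categories can be derived by a simple pixel-wise comparison of the reference and prediction maps, as in the sketch below; the category codes are arbitrary labels chosen for this illustration.

```python
import numpy as np

def burned_area_error_map(y_true: np.ndarray, y_pred: np.ndarray, ba_class: int = 2) -> np.ndarray:
    """Categorize pixels as correct BA, false BA (NBA predicted as BA), or missed BA."""
    true_ba = y_true == ba_class
    pred_ba = y_pred == ba_class
    error = np.zeros(y_true.shape, dtype=np.uint8)
    error[true_ba & pred_ba] = 1   # area predicted correctly (light green)
    error[~true_ba & pred_ba] = 2  # NBA incorrectly predicted as BA (blue)
    error[true_ba & ~pred_ba] = 3  # BA incorrectly predicted as NBA (red)
    return error
```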
We identified several misclassifications, especially around the borders of burned areas, because the classification was conducted using a single post-fire image. Among the different maps for site A, Scheme 3 and Scheme 4 exhibited higher ratios of areas incorrectly predicted as NBAs compared with actual BAs. Conversely, Scheme 1 and Scheme 5 showed higher ratios of areas incorrectly predicted as BAs compared with NBAs. In all schemes except Schemes 5 and 6, there was some noise around the borders of the BA regions, suggesting confusion between certain NBA and BA areas. The confused pixels in Scheme 2 (which used the default bands and the NDVI) were more evenly distributed compared with the other schemes. Schemes 5 and 6 showed the least noise, but they had a higher number of BA predictions in NBA areas compared with the other schemes. Although using all the vegetation indices resulted in fewer misclassifications, the schemes did not demonstrate a significant difference overall.
Figure 6 depicts the predicted error maps for site B. Misclassified regions (BAs as NBAs) were prevalent in Schemes 1 and 2. Scheme 3 had a greater proportion of inaccurate classifications (NBAs as BAs), reflecting its high recall value (Table 3); Scheme 3 had a high recall but lower scores in the other metrics because recall only assesses the model’s ability to correctly identify true positives. In Schemes 4–6, the misclassified pixels were evenly distributed. In contrast to site A, which exhibited misclassifications near the borders of burned regions, false alarms occurred within the burned areas, and there were cases where non-burned areas were classified as BAs. This was because we used a single post-fire image, which provided insufficient input information and caused confusion with other types of land cover, such as bare soil. There were also cases where BAs were incorrectly predicted in areas where vegetation vitality had sharply dropped, such as agricultural land around the fire-affected areas during periods when farming had ceased. There was a significant amount of grassland in the land cover, and S3 included the GNDVI, which is sensitive to chlorophyll; therefore, a greater number of NBAs were misclassified as BAs compared with the other indices. Figures 5 and 6 also illustrate a limitation of this study: problems arose at class boundaries because the convolution window contained pixels from different classes, producing a set of features that the model had not really seen.

4.3. Evaluation of Model Transferability: A Case Study

Figure 7 shows the actual binary map and the Scheme 1 prediction for the case study area (site C), with the resulting transferability assessment accuracy. For each scheme, the U-Net models trained on site A and site B data were used to predict site C. Interestingly, Scheme 1 showed noticeable performance (Figure 7). The overall accuracy for BAs and NFAs in Scheme 1 was higher than in the other schemes. Scheme 1 tended to predict BAs over a wider area compared with the actual BA and incorrectly predicted certain BA pixels as NBAs. Scheme 1 achieved high overall accuracy in all three classes. Although Scheme 4 showed a higher accuracy than Scheme 1 for FAs, it misclassified BAs as rivers. Scheme 6, which used all the vegetation indices instead of the visible bands, had the highest overall accuracy for NFAs but the lowest for FAs and BAs. Table 4 shows the precision of each scheme under transfer learning, and its results support the findings in Figure 7. Scheme 1 had the highest precision (44%) for BA classification among the six schemes. Scheme 4, which included the BNDVI, had the highest precision (94.3%) for NFAs, and Scheme 6 had the highest precision (95.6%) for FAs. The range of the vegetation indices varies depending on topographical characteristics such as land cover in each region, resulting in low transferability when vegetation indices are used as the main feature. Sufficient input information for various areas with different land cover types must be obtained to improve transferability for the detection of burned areas.

5. Discussion

The U-Net models exhibited robust F1-scores and IoU performances, accurately identifying burned areas. Scheme 1 (which used only the default bands) performed well for both sites A and B. Schemes 5 and 6, which used the vegetation indices, could decrease false alarms. There were many differences in the topographical characteristics (such as land cover) of sites A and B. Site A consisted of many rivers, buildings, fields, and roads around the forest. The land cover of site B included vineyards, cultivated areas, sclerophyllous vegetation, and marshes. We concluded that the complexity of the land cover and topographical factors influenced the classification performance in each region, even though there were differences in the timings of the forest fires between sites A and B. In our study, the U-Net model frequently misidentified agricultural regions as BAs because of their limited vegetation, which interfered with the BA prediction.

Although this study examined the most appropriate patch size (31 × 31) and used valid padding for upconvolution (the process of combining high-resolution features with low-resolution feature maps), certain feature details at the edges may have been missed. To effectively obtain contour information from a dataset, it is necessary to increase the amount of trainable information by increasing the patch size, or to improve the model architecture to extract feature-map information from the image dataset more effectively. We obtained the training, test, and validation datasets using random sampling, which may have led to an overestimation of the accuracy. When processing spatial data, it is important that validation data are located far enough from the training data to ensure statistical independence and to avoid overestimating accuracy. Spatial cross-validation will be employed in future studies for robust validations, as discussed in [56,57]; a minimal sketch of this idea is given at the end of this section.

In the assessment of transferability for site C (Figure 7), the predictions revealed poorer results compared with sites A and B. Schemes 2 to 6 misclassified data more often when a vegetation index was used, as previously discussed in [58]. This was because the vegetation at sites A and B had already grown above a certain level, whereas the forest fire at site C occurred in early March, before the vegetation had matured, which confused the model. Scheme 1, which did not include vegetation indices, was relatively less affected by phenological patterns and thus produced more robust results. Vegetation indices are crucial in the detection of burned areas, but their characteristics must be carefully assessed when applying them as input features. As discussed above, a burned area can be classified as an NBA because of seasonal factors [59]. Schemes 2 and 4, which included the NDVI and BNDVI, were confused and predicted BAs as NBAs. The NDVI increases with vegetation growth and, as the vegetation at sites A and B had already developed, it overestimated BAs [60,61]. It is also possible that the forest fire caused low NDVI values, which is why some BAs were classed as NBAs [62]. Previous research has demonstrated that the BNDVI is particularly sensitive to non-vascular plants [63,64,65]. The GNDVI tended to predict NBAs as BAs compared with the other vegetation indices (Figure 5 and Figure 6), because the GNDVI is sensitive to phenology and to the chlorophyll in the tree canopy [66,67,68].
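As an illustrative sketch of the spatial-blocking idea mentioned above, patches could be assigned to coarse spatial blocks by their center coordinates and whole blocks held out for validation; the block size, held-out fraction, and function name are assumptions made for this example and are not part of the study.

```python
import numpy as np

def spatial_block_split(centers: np.ndarray, block_size: float = 5000.0,
                        val_fraction: float = 0.2, seed: int = 0):
    """Split patches into train/validation indices by holding out whole spatial blocks.

    centers: (N, 2) array of patch-center map coordinates (x, y), e.g., in meters.
    """
    blocks = np.floor(centers / block_size).astype(int)  # block index per patch
    block_ids = np.unique(blocks, axis=0)
    rng = np.random.default_rng(seed)
    rng.shuffle(block_ids)
    n_val = max(1, int(val_fraction * len(block_ids)))
    val_blocks = {tuple(b) for b in block_ids[:n_val]}
    is_val = np.array([tuple(b) in val_blocks for b in blocks])
    return np.where(~is_val)[0], np.where(is_val)[0]  # train indices, validation indices
```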

6. Conclusions

In this study, we used single post-fire imagery with a 3 m resolution to adapt and assess a U-Net model based on data from three forest-fire events. The model’s confused pixels were identified and analyzed to investigate the BA prediction accuracy. S5, which used the visible bands, NIR, and the vegetation indices, was demonstrated to have a small probability of model confusion. BAs and harvested agricultural land could be confused if there were many croplands near the forest, as in the case of the Greek site. In terms of transferability, the level of vegetation affected the detection of burned areas because of the high resolution of the satellite imagery. We observed that the vegetation indices could introduce noise into the assessment of forest-fire damage when the vegetation was already actively growing. It is crucial to consider the structure and seasonality of the vegetation in a target area when detecting burned areas using satellite images, particularly as the resolution increases. Recognizable bands such as red, green, and blue are crucial when identifying FAs, NFAs, and BAs in BA mapping.
Transfer learning posed a challenge when predicting different areas using a model trained on two distinct regions. The topographical and environmental differences between the regions made it difficult to generalize the model to entirely new areas. Site A exhibited relatively uniform vegetation, whereas Site B had a mixture of vegetation types. Although all vegetation indices were considered and applied to Site C, the basic band ultimately yielded the best results. Seasonal variations in the growing seasons might have contributed to this outcome, highlighting the limitations of using data from both the growth and non-growth of vegetation for predictions.
The detection performance could be improved if PlanetScope data were further normalized or post-processed to reduce noise. Accuracy could also be increased by choosing the vegetation index that optimally reflects forest fires and by adjusting the model network to more effectively incorporate the characteristics of each channel. A possible disadvantage of PlanetScope is its lack of a SWIR band, which is commonly used in forest-fire analyses. Future work includes using and fusing other bands, including SWIR bands carried on other satellites, for more precise and effective forest-fire damage monitoring.

Author Contributions

Writing—original draft, B.K.; Writing—review & editing, K.L.; Supervision, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grants from the National Research Foundation of Korea (NRF), funded by the Korea Ministry of Science and ICT (MSIT) (2022R1C1C1013225), and the R&D Program for Forest Science Technology (RS-2024-00404017), provided by the Korea Forest Service (Korea Forestry Promotion Institute).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lee, K.; Kim, B.; Park, S. Evaluating the potential of burn severity mapping and transferability of Copernicus EMS data using Sentinel-2 imagery and machine learning approaches. GISci. Remote Sens. 2023, 60, 2192157. [Google Scholar] [CrossRef]
  2. De Luca, G.; Silva, J.M.; Modica, G. A workflow based on Sentinel-1 SAR data and open-source algorithms for unsupervised burned area detection in Mediterranean ecosystems. GISci. Remote Sens. 2021, 58, 516–541. [Google Scholar] [CrossRef]
  3. Yarragunta, Y.; Srivastava, S.; Mitra, D.; Chandola, H.C. Influence of Forest Fire Episodes on the Distribution of Gaseous Air Pollutants over Uttarakhand, India. GISci. Remote Sens. 2020, 57, 190–206. [Google Scholar] [CrossRef]
  4. Bonazountas, M.; Kallidromitou, D.; Kassomenos, P.A.; Passas, N. Forest fire risk analysis. Hum. Ecol. Risk Assess. 2005, 11, 617–626. [Google Scholar] [CrossRef]
  5. Tian, X.R.; Mcrae, D.J.; Shu, L.F.; Wang, M.Y.; Hong, L. Satellite remote-sensing technologies used in forest fire management. J. For. Res. 2005, 16, 7. [Google Scholar] [CrossRef]
  6. Gillespie, T.W.; Chu, J.; Frankenberg, E.; Thomas, D. Assessment and prediction of natural hazards from satellite imagery. Prog. Phys. Geogr. 2007, 31, 459–470. [Google Scholar] [CrossRef]
  7. Rashkovetsky, D.; Mauracher, F.; Langer, M.; Schmitt, M. Wildfire detection from multisensor satellite imagery using deep semantic segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7001–7016. [Google Scholar] [CrossRef]
  8. Giglio, L.; Boschetti, L.; Roy, D.P.; Humber, M.L.; Justice, C.O. The collection 6 MODIS burned area mapping algorithm and product. Remote Sens. Environ. 2018, 217, 72–85. [Google Scholar] [CrossRef]
  9. Kang, Y.; Jang, E.; Im, J.; Kwon, C. A deep learning model using geostationary satellite data for forest fire detection with reduced detection latency. GISci. Remote Sens. 2022, 59, 2019–2035. [Google Scholar] [CrossRef]
  10. Rauste, Y.; Herland, E.; Frelander, H.; Soini, K.; Kuoremaki, T.; Ruokari, A. Satellite-based forest fire detection for fire control in boreal forests. Int. J. Remote Sens. 1997, 18, 2641–2656. [Google Scholar] [CrossRef]
  11. Lasaponara, R. Inter-comparison of AVHRR-based fire danger estimation methods. Int. J. Remote Sens. 2005, 26, 853–870. [Google Scholar] [CrossRef]
  12. Lasaponara, R. Estimating spectral separability of satellite derived parameters for burned areas mapping in the Calabria region by using SPOT-Vegetation data. Ecol. Model. 2006, 196, 265–270. [Google Scholar] [CrossRef]
  13. Pontes-Lopes, A.; Dalagnol, R.; Dutra, A.C.; de Jesus Silva, C.V.; de Alencastro Graça, P.M.L.; de Oliveira e Cruz de Aragão, L.E. Quantifying Post-Fire Changes in the Aboveground Biomass of an Amazonian Forest Based on Field and Remote Sensing Data. Remote Sens. 2022, 14, 1545. [Google Scholar] [CrossRef]
  14. Jiao, L.; Bo, Y. Near Real-Time Mapping of Burned Area by Synergizing Multiple Satellites Remote-Sensing Data. GISci. Remote Sens. 2022, 59, 1956–1977. [Google Scholar] [CrossRef]
  15. Weber, K.T.; Seefeldt, S.; Moffet, C.; Norton, J. Comparing fire severity models from post-fire and pre/post-fire differenced imagery. GISci. Remote Sens. 2008, 45, 392–405. [Google Scholar] [CrossRef]
  16. Key, C.H.; Benson, N.C. Landscape Assessment (LA). In FIREMON Fire Effects Monitoring and Inventory System; Gen Tech Rep RMRS-GTR-164-CD; Lutes, D.C., Keane, R.E., Caratti, J.F., Key, C.H., Benson, N.C., Sutherl, S., Gangi, L.J., Eds.; Department of Agriculture, Forest Service, Rocky Mountain Research Station: Fort Collins, CO, USA, 2006; Volume 164, p. LA-1-55. [Google Scholar]
  17. Ramayanti, S.; Kim, B.; Park, S.; Lee, C.W. Wildfire susceptibility mapping by incorporating damage proxy maps, differenced normalized burn Ratio, and deep learning algorithms based on sentinel-1/2 data: A case study on Maui Island, Hawaii. GISci. Remote Sens. 2024, 61, 2353982. [Google Scholar] [CrossRef]
  18. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  19. Afira, N.; Wijayanto, A.W. Mono-temporal and multi-temporal approaches for burnt area detection using Sentinel-2 satellite imagery (a case study of Rokan Hilir Regency, Indonesia). Ecol Inform. 2022, 69, 101677. [Google Scholar] [CrossRef]
  20. Hawbaker, T.J.; Vanderhoof, M.K.; Schmidt, G.L.; Beal, Y.J.; Picotte, J.J.; Takacs, J.D.; Falgout, J.T.; Dwyer, J.L. The Landsat Burned Area algorithm and products for the conterminous United States. Remote Sens. Environ. 2020, 244, 111801. [Google Scholar] [CrossRef]
  21. Syifa, M.; Panahi, M.; Lee, C.W. Mapping of post-wildfire burned area using a hybrid algorithm and satellite data: The case of the camp fire wildfire in California, USA. Remote Sens. 2020, 12, 623. [Google Scholar] [CrossRef]
  22. Gaveau, D.L.; Descals, A.; Salim, M.A.; Sheil, D.; Sloan, S. Refined burned-area mapping protocol using Sentinel-2 data increases estimate of 2019 Indonesian burning. Earth Syst. Sci. Data 2021, 13, 5353–5368. [Google Scholar] [CrossRef]
  23. Liu, C.C.; Kuo, Y.C.; Chen, C.W. Emergency responses to natural disasters using Formosat-2 high-spatiotemporal-resolution imagery: Forest fires. Nat. Hazards. 2013, 66, 1037–1057. [Google Scholar] [CrossRef]
  24. Meng, R.; Wu, J.; Schwager, K.L.; Zhao, F.; Dennison, P.E.; Cook, B.C.; Brewster, K.; Green, T.M.; Serbin, S.P. Using high spatial resolution satellite imagery to map forest burn severity across spatial scales in a Pine Barrens ecosystem. Remote Sens. Environ. 2017, 191, 95–109. [Google Scholar] [CrossRef]
  25. Humber, M.L.; Boschetti, L.; Giglio, L. Assessing the shape accuracy of coarse resolution burned area identifications. IEEE Trans. Geosci. Remote Sens. 2019, 58, 1516–1526. [Google Scholar] [CrossRef]
  26. Escuin, S.; Navarro, R.; Fernández, P. Fire severity assessment by using NBR (Normalized Burn Ratio) and NDVI (Normalized Difference Vegetation Index) derived from LANDSAT TM/ETM images. Int. J. Remote Sens. 2008, 29, 1053–1073. [Google Scholar] [CrossRef]
  27. Lee, D.; Son, S.; Bae, J.; Park, S.; Seo, J.; Seo, D.; Lee, Y.; Kim, J. Single-Temporal Sentinel-2 for Analyzing Burned Area Detection Methods: A Study of 14 Cases in Republic of Korea Considering Land Cover. Remote Sens. 2024, 16, 884. [Google Scholar] [CrossRef]
  28. Lozano, F.J.; Suárez-Seoane, S.; Kelly, M.; Luis, E. A multi-scale approach for modeling fire occurrence probability using satellite data and classification trees: A case study in a mountainous Mediterranean region. Remote Sens. Environ. 2008, 112, 708–719. [Google Scholar] [CrossRef]
  29. Loepfe, L.; Rodrigo, A.; Lloret, F. Two thresholds determine climatic control of forest fire size in Europe and northern Africa. Reg. Environ. Chang. 2014, 14, 1395–1404. [Google Scholar] [CrossRef]
  30. Collins, L.; McCarthy, G.; Mellor, A.; Newell, G.; Smith, L. Training data requirements for fire severity mapping using Landsat imagery and random forest. Remote Sens. Environ. 2020, 245, 111839. [Google Scholar] [CrossRef]
  31. Pacheco, A.d.P.; Junior, J.A.d.S.; Ruiz-Armenteros, A.M.; Henriques, R.F.F. Assessment of k-Nearest Neighbor and Random Forest Classifiers for Mapping Forest Fire Areas in Central Portugal Using Landsat-8, Sentinel-2, and Terra Imagery. Remote Sens. 2021, 13, 1345. [Google Scholar] [CrossRef]
  32. Hu, X.; Ban, Y.; Nascetti, A. Uni-temporal multispectral imagery for burned area mapping with deep learning. Remote Sens. 2021, 13, 1509. [Google Scholar] [CrossRef]
  33. Lee, C.; Park, S.; Kim, T.; Liu, S.; Md Reba, M.N.; Oh, J.; Han, Y. Machine Learning-Based Forest Burned Area Detection with Various Input Variables: A Case Study of South Korea. Appl. Sci. 2022, 12, 10077. [Google Scholar] [CrossRef]
  34. Gonçalves, D.N.; Junior, J.M.; Carrilho, A.C.; Acosta, P.R.; Ramos, A.P.M.; Gomes, F.D.G.; Osco, A.P.R.; Oliveira, M.D.R.; Martins, J.A.C.; Júnior, G.A.D.; et al. Transformers for mapping burned areas in Brazilian Pantanal and Amazon with PlanetScope imagery. Int. J. Appl. Earth Obs. Geoinf. 2023, 116, 103151. [Google Scholar] [CrossRef]
  35. Farasin, A.; Colomba, L.; Garza, P. Double-step u-net: A deep learning-based approach for the estimation of wildfire damage severity through sentinel-2 satellite data. Appl. Sci. 2020, 10, 4332. [Google Scholar] [CrossRef]
  36. Burrows, N.; Stephens, C.; Wills, A.; Densmore, V. Fire mosaics in south-west Australian forest landscapes. Int. J. Wildland Fire 2021, 30, 933–945. [Google Scholar] [CrossRef]
  37. Al-Dabbagh, A.M.; Ilyas, M. Uni-temporal Sentinel-2 imagery for wildfire detection using deep learning semantic segmentation models. Geomat. Nat. Hazards Risk. 2023, 14, 2196370. [Google Scholar] [CrossRef]
  38. Shirvani, Z.; Abdi, O.; Goodman, R.C. High-resolution semantic segmentation of woodland fires using residual attention UNet and time series of Sentinel-2. Remote Sens. 2023, 15, 1342. [Google Scholar] [CrossRef]
  39. Russell-Smith, J.; Yates, C.; Vernooij, R.; Eames, T.; Lucas, D.; Mbindo, K.; Banda, S.; Mukoma, K.; Kaluka, A.; Liseli, A.; et al. Framework for a savanna burning emissions abatement methodology applicable to fire-prone miombo woodlands in southern Africa. Int. J. Wildland Fire 2024, 33. [Google Scholar] [CrossRef]
  40. Malinowski, R.; Lewiński, S.; Rybicki, M.; Gromny, E.; Jenerowicz, M.; Krupiński, M.; Nowakowski, A.; Wojtkowski, C.; Krupiński, M.; Krätzschmar, E.; et al. Automated Production of a Land Cover/Use Map of Europe Based on Sentinel-2 Imagery. Remote Sens. 2020, 12, 3523. [Google Scholar] [CrossRef]
  41. Planet’s Education and Research (E&R) Program; Planet Team. Planet Application Program Interface: In Space for Life on Earth. San Francisco, CA. 2017. Available online: https://api.planet.com (accessed on 3 September 2023).
  42. Illera, P.; Fernandez, A.; Delgado, J.A. Temporal evolution of the NDVI as an indicator of forest fire danger. Int. J. Remote Sens. 1996, 17, 1093–1105. [Google Scholar] [CrossRef]
  43. Novo, A.; Fariñas-Álvarez, N.; Martínez-Sánchez, J.; González-Jorge, H.; Fernández-Alonso, J.M.; Lorenzo, H. Mapping forest fire risk—A case study in Galicia (Spain). Remote Sens. 2020, 12, 3705. [Google Scholar] [CrossRef]
  44. Jodhani, K.H.; Patel, H.; Soni, U.; Patel, R.; Valodara, B.; Gupta, N.; Patel, A.; Omar, P.J. Assessment of forest fire severity and land surface temperature using Google Earth Engine: A case study of Gujarat State, India. Fire Ecol. 2024, 20, 23. [Google Scholar] [CrossRef]
  45. Buschmann, C.; Nagel, E. In vivo spectroscopy and internal optics of leaves as basis for remote sensing of vegetation. Int. J. Remote Sens. 1993, 14, 711–722. [Google Scholar] [CrossRef]
  46. Gitelson, A.; Merzlyak, M.N. Quantitative estimation of chlorophyll-a using reflectance spectra: Experiments with autumn chestnut and maple leaves. J. Photochem. Photobiol. B Biol. 1994, 22, 247–252. [Google Scholar] [CrossRef]
  47. Navarro, G.; Caballero, I.; Silva, G.; Parra, P.C.; Vázquez, Á.; Caldeira, R. Evaluation of forest fire on Madeira Island using Sentinel-2A MSI imagery. Int. J. Appl. Earth Obs. Geoinf. 2017, 58, 97–106. [Google Scholar] [CrossRef]
  48. Pompa-García, M.; Martínez-Rivas, J.A.; Valdez-Cepeda, R.D.; Aguirre-Salado, C.A.; Rodríguez-Trejo, D.A.; Miranda-Aragón, L.; Rodríguez-Flores, F.D.; Vega-Nieva, D.J. NDVI Values Suggest Immediate Responses to Fire in an Uneven-Aged Mixed Forest Stand. Forests 2022, 13, 1901. [Google Scholar] [CrossRef]
  49. Yang, C.; Everitt, J.H.; Bradford, J.M.; Murden, D. Airborne hyperspectral imagery and yield monitor data for mapping cotton yield variability. Precis. Agric. 2004, 5, 445–461. [Google Scholar] [CrossRef]
  50. Hancock, D.W.; Dougherty, C.T. Relationships between blue-and red-based vegetation indices and leaf area and yield of alfalfa. Crop. Sci. 2007, 47, 2547–2556. [Google Scholar] [CrossRef]
  51. Michael, Y.; Lensky, I.M.; Brenner, S.; Tchetchik, A.; Tessler, N.; Helman, D. Economic assessment of fire damage to urban forest in the wildland–urban interface using planet satellites constellation images. Remote Sens. 2018, 10, 1479. [Google Scholar] [CrossRef]
  52. Tran, B.N.; Tanase, M.A.; Bennett, L.T.; Aponte, C. Evaluation of spectral indices for assessing fire severity in Australian temperate forests. Remote Sens. 2018, 10, 1680. [Google Scholar] [CrossRef]
  53. Gerrevink, M.J.V.; Veraverbeke, S. Evaluating the near and mid infrared bi-spectral space for assessing fire severity and comparison with the differenced normalized burn ratio. Remote Sens. 2021, 13, 695. [Google Scholar] [CrossRef]
  54. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar] [CrossRef]
  55. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. IEEE Int. Conf. Comput. Vis. (ICCV 2015) 2015, 1502, 1026–1034. [Google Scholar] [CrossRef]
  56. Roberts, D.R.; Bahn, V.; Ciuti, S.; Boyce, M.S.; Elith, J.; Guillera-Arroita, G.; Hauenstein, S.; Lahoz-Monfort, J.J.; Schröder, B.; Thuiller, W.; et al. Cross-validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure. Ecography 2017, 40, 913–929. [Google Scholar] [CrossRef]
  57. Meyer, H.; Reudenbach, C.; Hengl, T.; Katurji, M.; Nauss, T. Improving performance of spatio-temporal machine learning models using forward feature selection and target-oriented validation. Environ. Model. Softw. 2018, 101, 1–9. [Google Scholar] [CrossRef]
  58. Yang, J.; Weisberg, P.J.; Bristow, N.A. Landsat remote sensing approaches for monitoring long-term tree cover dynamics in semi-arid woodlands: Comparison of vegetation indices and spectral mixture analysis. Remote Sens. Environ. 2012, 119, 62–71. [Google Scholar] [CrossRef]
  59. Löw, M.; Koukal, T. Phenology modelling and forest disturbance mapping with Sentinel-2 time series in Austria. Remote Sens. 2020, 12, 4191. [Google Scholar] [CrossRef]
  60. Chéret, V.; Denux, J.P. Analysis of MODIS NDVI time series to calculate indicators of Mediterranean forest fire susceptibility. GISci. Remote Sens. 2011, 48, 171–194. [Google Scholar] [CrossRef]
  61. Lacouture, D.L.; Broadbent, E.N.; Crandall, R.M. Detecting vegetation recovery after fire in a fire-frequented habitat using normalized difference vegetation index (NDVI). Forests 2020, 11, 749. [Google Scholar] [CrossRef]
  62. Karamesouti, M.; Kairis, O.; Gasparatos, D.; Lakes, T. Map-based soil crusting susceptibility assessment using pedotransfer Rules, CORINE and NDVI: A preliminary study in Greece. Ecol. Indic. 2023, 154, 110668. [Google Scholar] [CrossRef]
  63. Gypser, S.; Herppich, W.B.; Fischer, T.; Lange, P.; Veste, M. Photosynthetic characteristics and their spatial variance on biological soil crusts covering initial soils of post-mining sites in Lower Lusatia, NE Germany. Flora Morphol. Distrib. Funct. Ecol. Plants 2016, 220, 103–116. [Google Scholar] [CrossRef]
  64. Kleefeld, A.; Gypser, S.; Herppich, W.B.; Bader, G.; Veste, M. Identification of spatial pattern of photosynthesis hotspots in moss-and lichen-dominated biological soil crusts by combining chlorophyll fluorescence imaging and multispectral BNDVI images. Pedobiologia 2018, 68, 1–11. [Google Scholar] [CrossRef]
  65. Morales-Gallegos, L.M.; Martínez-Trinidad, T.; Hernández-de la Rosa, P.; Gómez-Guerrero, A.; Alvarado-Rosales, D.; Saavedra-Romero, L.D.L. Tree Health Condition in Urban Green Areas Assessed through Crown Indicators and Vegetation Indices. Forests 2023, 14, 1673. [Google Scholar] [CrossRef]
  66. Bhavsar, D.; Kumar, A.; Roy, A. Applicability of NDVI temporal database for western Himalaya forest mapping using fuzzy-based PCM classifier. Eur. J. Remote Sens. 2017, 50, 614–625. [Google Scholar] [CrossRef]
  67. Maleki, M.; Arriga, N.; Barrios, J.; Wieneke, S.; Liu, Q.; Penuelas, J.; Janssens, I.; Balzarolo, M. Estimation of Gross Primary Productivity (GPP) Phenology of a Short-Rotation Plantation Using Remotely Sensed Indices Derived from Sentinel-2 Images. Remote Sens. 2020, 12, 2104. [Google Scholar] [CrossRef]
  68. Torgbor, B.A.; Rahman, M.M.; Robson, A.; Brinkhoff, J.; Khan, A. Assessing the potential of sentinel-2 derived vegetation indices to retrieve phenological stages of mango in Ghana. Horticulturae 2021, 8, 11. [Google Scholar] [CrossRef]
Figure 1. Locations of the areas used in this study. The land cover maps of site (A) and site (C) were obtained from the Ministry of Environment of the Republic of Korea. The land cover map of site (B) was obtained from the Centrum Badań Kosmicznych Polskiej Akademii Nauk (CBK PAN). Reprinted/adapted with permission from [40]; 2024, Malinowski, R., funded by ESA.
Figure 2. Flowchart of this study.
Figure 3. Burned area prediction maps of site A, including Scheme 1 to Scheme 6.
Figure 4. Burned area prediction maps of site B, including Scheme 1 to Scheme 6.
Figure 5. Spatial distributions of predicted errors for site A.
Figure 6. Spatial distributions of predicted errors for site B.
Figure 7. Actual binary maps and Scheme 1 maps, with transferability assessment accuracy for site C (Uljin) based on each model trained by site A and B.
Table 1. Summary of the schemes used in this study.
Schemes | Input Channel
S1 | R, G, B, and NIR
S2 | R, G, B, NIR, and NDVI
S3 | R, G, B, NIR, and GNDVI
S4 | R, G, B, NIR, and BNDVI
S5 | R, G, B, NIR, NDVI, GNDVI, and BNDVI
S6 | NIR, NDVI, GNDVI, and BNDVI
Table 2. Summary of the performance results for site A. The highest accuracy values for all metrics are indicated in bold.
Scheme | Class | Precision | Recall | F1-Score | IoU
S1 | NFA | 0.969 | 0.921 | 0.945 | 0.903
S1 | FA | 0.947 | 0.970 | 0.958 | 0.925
S1 | BA | 0.959 | 0.971 | 0.965 | 0.936
S2 | NFA | 0.958 | 0.943 | 0.951 | 0.920
S2 | FA | 0.955 | 0.966 | 0.960 | 0.934
S2 | BA | 0.966 | 0.961 | 0.964 | 0.936
S3 | NFA | 0.954 | 0.943 | 0.949 | 0.914
S3 | FA | 0.955 | 0.959 | 0.957 | 0.928
S3 | BA | 0.957 | 0.963 | 0.960 | 0.930
S4 | NFA | 0.953 | 0.947 | 0.950 | 0.917
S4 | FA | 0.945 | 0.964 | 0.955 | 0.927
S4 | BA | 0.971 | 0.932 | 0.951 | 0.923
S5 | NFA | 0.946 | 0.952 | 0.949 | 0.915
S5 | FA | 0.965 | 0.954 | 0.960 | 0.929
S5 | BA | 0.958 | 0.976 | 0.967 | 0.938
S6 | NFA | 0.946 | 0.952 | 0.949 | 0.914
S6 | FA | 0.964 | 0.956 | 0.960 | 0.930
S6 | BA | 0.961 | 0.970 | 0.965 | 0.937
Table 3. Summary of the performance results of site B. The highest accuracy values for all metrics are indicated in bold.
Scheme | Class | Precision | Recall | F1-Score | IoU
S1 | NFA | 0.944 | 0.957 | 0.951 | 0.901
S1 | FA | 0.915 | 0.886 | 0.900 | 0.841
S1 | BA | 0.963 | 0.961 | 0.962 | 0.940
S2 | NFA | 0.932 | 0.968 | 0.949 | 0.900
S2 | FA | 0.929 | 0.870 | 0.899 | 0.835
S2 | BA | 0.972 | 0.943 | 0.957 | 0.934
S3 | NFA | 0.957 | 0.929 | 0.943 | 0.883
S3 | FA | 0.877 | 0.909 | 0.893 | 0.823
S3 | BA | 0.941 | 0.974 | 0.957 | 0.927
S4 | NFA | 0.951 | 0.949 | 0.950 | 0.902
S4 | FA | 0.896 | 0.909 | 0.902 | 0.845
S4 | BA | 0.964 | 0.955 | 0.960 | 0.935
S5 | NFA | 0.962 | 0.935 | 0.948 | 0.893
S5 | FA | 0.872 | 0.935 | 0.902 | 0.838
S5 | BA | 0.963 | 0.960 | 0.962 | 0.939
S6 | NFA | 0.937 | 0.965 | 0.951 | 0.896
S6 | FA | 0.926 | 0.867 | 0.896 | 0.823
S6 | BA | 0.968 | 0.961 | 0.964 | 0.942
Table 4. Precision values of transfer learning for site C (Uljin) based on each model trained by sites A and B.
Schemes | NFA Precision (%) | FA Precision (%) | BA Precision (%)
S1 | 82.8 | 93.8 | 44.0
S2 | 86.2 | 94.2 | 34.6
S3 | 91.1 | 93.1 | 40.6
S4 | 94.3 | 87.6 | 43.2
S5 | 89.9 | 94.8 | 38.5
S6 | 91.2 | 95.6 | 16.7
