Article

Comparative Analysis of dNBR, dNDVI, SVM Kernels, and ISODATA for Wildfire-Burned Area Mapping Using Sentinel-2 Imagery

1 Disaster & Risk Management Laboratory, Interdisciplinary Program in Crisis & Disaster and Risk Management, Sungkyunkwan University (SKKU), Suwon 16419, Gyeonggi-do, Republic of Korea
2 Geodesy Laboratory, Civil & Architectural and Environmental System Engineering, Sungkyunkwan University (SKKU), Suwon 16419, Gyeonggi-do, Republic of Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(13), 2196; https://doi.org/10.3390/rs17132196
Submission received: 19 May 2025 / Revised: 17 June 2025 / Accepted: 24 June 2025 / Published: 25 June 2025
(This article belongs to the Section Forest Remote Sensing)

Abstract

Accurate and rapid delineation of wildfire-affected areas is essential in an era of climate-driven increases in fire frequency. This study compares four techniques for identifying wildfire-affected areas using Sentinel-2 satellite imagery: (1) the calibrated differenced Normalized Burn Ratio (dNBR); (2) the differenced NDVI (dNDVI) with empirically defined thresholds (0.04–0.18); (3) supervised SVM classifiers with linear, polynomial, and RBF kernels; and (4) unsupervised ISODATA clustering. In particular, this study proposes an SVM-based classification method that goes beyond conventional index- and threshold-based approaches by directly using the SWIR, NIR, and Red band values of Sentinel-2 as input variables. It also examines the potential of the ISODATA method, which can rapidly classify affected areas without a training process and can further assess burn severity through a two-step clustering procedure. The experimental results showed that the SVM effectively identified affected areas using only post-fire imagery, and that ISODATA enabled fast classification and severity analysis without training data. By comparing these techniques, this study presents a data-driven framework that can support future wildfire response and policy-oriented recovery planning.

1. Introduction

Wildfires represent one of the most severe natural hazards, and their frequency, duration, and intensity have increased markedly in recent decades due to climate change and anthropogenic influences [1,2]. Rising global temperatures, prolonged droughts, and shifting precipitation regimes—hallmarks of a changing climate—have created increasingly favorable conditions for the ignition and spread of large-scale wildfires [3]. These fires not only result in immediate ecological damage, such as forest degradation and biodiversity loss, but also pose long-term risks to human life, public health, and regional infrastructure.
In this context, an accurate and timely post-fire damage assessment has emerged as a critical component of wildfire response and recovery planning. Quantifying the spatial extent and severity of burn areas enables the efficient deployment of restoration resources, supports evidence-based policymaking, and minimizes the socio-economic impacts of post-disaster delays [4]. Moreover, as fire regimes continue to shift in frequency, intensity, and spatial extent due to complex interactions between climate change, vegetation dynamics, and human activity, the development of standardized and scalable assessment tools—particularly those leveraging remote sensing—is becoming essential for enabling climate-resilient forest management and timely emergency response strategies [5].
Satellite-based remote sensing techniques have become essential for post-fire assessment, offering timely and consistent information across large and inaccessible areas. Among these, the differenced Normalized Burn Ratio (dNBR) is widely used for mapping burn severity and estimating fire-affected areas [6,7]. The dNBR index is derived by comparing pre- and post-fire reflectance in the near-infrared (NIR) and shortwave infrared (SWIR) bands, capturing spectral changes associated with vegetation loss and soil exposure [8,9].
Despite its widespread adoption, the operational accuracy of dNBR remains limited by the lack of regionally calibrated threshold values. This often results in significant discrepancies between satellite-derived classifications and field-based damage reports [10]. Threshold values are frequently applied in a generic or heuristic manner, ignoring the heterogeneity of vegetation types, terrain conditions, and fire regimes across different ecosystems. These limitations highlight the importance of developing localized or statistically optimized thresholds to improve the consistency and reliability of dNBR-based fire assessments [11,12,13].
Recent research efforts have focused on overcoming the limitations of dNBR by incorporating multi-source data and advanced analytics. For instance, ref. [14] developed a global forest burn severity dataset using Landsat imagery, enhancing the accuracy of quantitative assessments based on the dNBR and RdNBR. Similarly, ref. [15] proposed a rapid post-fire vegetation damage assessment framework that integrates Sentinel-2 imagery and machine learning models, combining spectral indices such as the dNBR, NBR, and MIRBI. These approaches offer improved precision in burn severity estimation across diverse ecosystems. Others have improved the dNBR-based interpretations by incorporating complementary spectral indices, such as the Relativized Burn Ratio (RBR) and the Normalized Burn Ratio Plus (NBR+). In particular, NBR+, developed specifically for Sentinel-2 data, enhances sensitivity to subtle vegetation changes by leveraging additional shortwave infrared bands, thereby improving discrimination between low- and moderate-severity burn areas [16].
Moreover, the increasing availability of advanced Earth observation platforms—such as Landsat-9, Sentinel-2, and UAV-derived imagery—has enabled the application of multi-resolution data fusion techniques, significantly enhancing the spatial precision of burn severity assessments. For instance, ref. [17] demonstrated that monotemporal Landsat-9 imagery combined with a convolutional shift-transformer network yields improved mapping accuracy for burned areas. Similarly, ref. [18] showed that integrating Sentinel-2 and UAV data enhances the estimation of post-fire soil organic carbon, improving the overall characterization of fire effects. Furthermore, ref. [19] used UAV imagery in combination with random forest algorithms to evaluate the burn severity and vegetation recovery in semi-arid ecosystems. These integrated approaches allow researchers to mitigate temporal discontinuities, atmospheric distortions, and spatial resolution constraints inherent in single-sensor observations.
Together, these advancements demonstrate a clear trajectory toward more reliable and location-aware post-fire damage mapping, but further research is needed to standardize the threshold-setting practices across diverse biomes and satellite systems.
In the case of large-scale wildfires, a rapid and accurate assessment of the damage extent using only ground-based observations remains challenging due to logistical constraints and limited accessibility. To overcome these limitations, satellite-based classification techniques have gained prominence, especially those integrating Sentinel-2 imagery with machine learning algorithms such as support vector machines (SVMs). SVMs have demonstrated robust performance in high-dimensional spectral domains and are capable of modeling both linear and nonlinear separability through kernel-based transformations [20,21].
Recent studies have demonstrated the effectiveness of SVMs in post-wildfire land cover classification, particularly with multispectral Sentinel-2 data, reporting higher accuracy than traditional classifiers. For example, ref. [22] conducted a comparative study using Sentinel-2A imagery and reported that SVM classifiers, especially those applied through pixel-based image analysis, achieved superior accuracy in classifying post-fire land cover.
These findings underscore the utility of SVM methods combined with Sentinel-2 imagery for rapid and reliable post-fire mapping. However, the classification accuracy may vary depending on local vegetation structure, topography, and spectral response characteristics, suggesting the need for regionally adapted model calibration.
To evaluate the effectiveness of both traditional and data-driven post-fire assessment techniques, this study compared the performance of a calibrated dNBR threshold method with supervised and unsupervised classification models. In particular, classification accuracy was assessed by applying support vector machines (SVMs) with multiple kernel functions (linear, polynomial, and RBF) and the ISODATA algorithm as an unsupervised clustering approach, using multispectral Sentinel-2 imagery. While the dNBR index is widely adopted for post-fire assessments and effectively captures general fire-induced spectral changes, its application often exhibits limitations in delineating fine-scale burn boundaries, particularly in heterogeneous landscapes [11].
Recent studies have highlighted the superior performance of machine learning models in wildfire damage mapping. For example, ref. [23] demonstrated that machine learning models utilizing vegetation productivity and site characteristics significantly improved burn severity prediction across diverse forest types. Similarly, ref. [24] conducted a comparative analysis between random forest and logistic regression models and found that machine learning approaches provided more accurate and spatially coherent wildfire susceptibility maps in complex terrains.
These findings suggest that machine learning-based classification frameworks, when integrated with multispectral satellite data, offer a scalable and accurate alternative for automated burn area estimation, particularly in large and complex wildfire events. The comparative evaluation conducted in this study reinforces the practical value of calibrated, data-driven methodologies in supporting reliable post-fire assessments.

2. Materials and Methods

2.1. Study Area

This study focuses on the large-scale wildfire event that occurred in Gyeongbuk Province, South Korea, between March 22 and 25, 2025. During this period, a dry weather advisory was in effect for the region, and meteorological conditions were highly conducive to wildfire ignition and spread. The dominant wind pattern was westerly, and the maximum instantaneous wind speed exceeded 25.1 m/s, creating extreme fire-prone conditions. The wildfire resulted in approximately 100,000 hectares of forest damage, making it one of the most extensive fire events in recent years on the Korean Peninsula.
The study area was selected due to its vast burn extent, heterogeneous vegetation composition, complex topography, and the availability of high-resolution remote sensing data. These characteristics render the region suitable for evaluating and comparing diverse post-fire classification techniques, including threshold-based dNBR mapping, unsupervised clustering, and machine learning–based approaches.
The Gyeongbuk wildfire occurred in the southeastern part of the Korean Peninsula (approximately 36.2°N, 128.8°E), primarily affecting mixed deciduous and coniferous forests across multiple administrative boundaries. The severity of the event was significantly intensified by prolonged drought conditions and strong winds, which facilitated rapid fire spread and extensive ecological disruption. Given its environmental complexity and large-scale impact, the Gyeongbuk wildfire serves as a representative case study for assessing the applicability and performance of remote sensing-based post-fire damage mapping methodologies. The outcomes of this analysis are intended to support accurate post-fire severity assessment and to inform science-based strategies for ecological restoration and disaster recovery planning.
To ensure consistent and reliable analysis, the Gyeongbuk wildfire was examined using Sentinel-2 Level-1C imagery with less than 10% cloud cover. Imagery was selected to represent both pre-fire and immediate post-fire conditions, enabling accurate spectral differencing and detection of vegetation change. Prior to analysis, the imagery underwent atmospheric correction and geometric alignment.
Figure 1 presents the wildfire ignition points and fire-affected areas in the Gyeongbuk region. The left panel shows Sentinel-3 fire detections from March 22 to 26, while the right panel displays a Sentinel-2-based false-color composite emphasizing burn severity. This overview provides visual context for subsequent severity classification.
Figure 1 illustrates the spatial distribution of fire occurrences and the extent of fire-damaged areas within the study region, encompassing Uiseong-gun, Yeongdeok-gun, Andong-si, and surrounding counties in Gyeongsangbuk-do, Republic of Korea. Fire detection points from Sentinel-3 imagery between 22 March and 26 March are shown in the left panel, revealing a linear west-to-east spread pattern across the central part of the study area. The temporal sequence of these fire points suggests a progressive propagation of the wildfire over multiple days, with dense clusters observed particularly in Uiseong-gun and Yeongdeok-gun, implying continuous fire activity along this corridor.
To assess the extent of vegetation damage, a false-color composite image derived from Sentinel-2 MSI Level-1C data is shown in the right panel. This composite was generated using imagery acquired on 8 April and 18 April 2025, processed through Google Earth Engine (GEE). The processing workflow included top-of-atmosphere (TOA) correction and cloud masking, followed by temporal gap-filling for cloud-covered regions using the mean of the cloud-free pixels across multiple acquisition dates. The composite assigns Band 12 (SWIR), Band 8 (NIR), and Band 4 (Red) to the RGB channels to enhance the spectral response of burned vegetation.
The fire-damaged area is prominently visible as a continuous swath of dark red tones, corresponding to vegetation stress and surface charring, consistent with the fire detection line observed in the Sentinel-3 data. This spatial agreement validates the reliability of the multi-sensor approach for both active fire monitoring and post-fire assessment. Notably, the affected zone spans multiple administrative districts, with severe damage concentrated in central Uiseong-gun and extending eastward toward Yeongdeok-gun. The surrounding green-toned regions indicate healthy vegetation, while the grayish areas correspond to the urban and non-vegetated surfaces, ensuring a clear visual distinction of burned areas.

2.2. Sentinel-2 Imagery Acquisition and Preprocessing

To achieve an accurate estimation of the burned areas and support robust model validation, this study employed Sentinel-2 satellite imagery that was provided by the European Space Agency (ESA). The Sentinel-2 platform offers 13 spectral bands spanning the visible, near-infrared (NIR), and shortwave infrared (SWIR) regions, with spatial resolutions ranging from 10 to 60 m. These characteristics render it highly suitable for vegetation monitoring and post-fire damage assessment [25]. The onboard MultiSpectral Instrument (MSI) enables the precise detection of fire-induced vegetation changes by capturing spectral responses associated with biomass loss. Specifically, decreases in NIR reflectance and concurrent increases in SWIR reflectance following fire events provide a reliable basis for delineating burned areas [26].
The short revisit cycle of Sentinel-2—approximately five days—enables near-real-time monitoring of wildfire-affected areas, thereby enhancing the timeliness and accuracy of post-fire damage assessments. These temporal and spatial capabilities are particularly valuable for informing post-fire recovery planning and forest management strategies. To minimize the influence of vegetation regrowth, pre- and post-fire images were selected within a ±10-day temporal window. Level-1C images with less than 10% cloud cover were prioritized and atmospherically corrected to Level-2A bottom-of-atmosphere (BOA) reflectance using the Sen2Cor processor. This correction step enhances the radiometric integrity of the reflectance values and improves the reliability of the vegetation indices used in the analysis [27].
The differenced Normalized Burn Ratio (dNBR) was calculated using Sentinel-2’s Band 8 (NIR, 10 m) and Band 12 (SWIR, 20 m). To enable consistent spatial comparison, SWIR data were resampled to a common 10 m resolution via bilinear interpolation, following standard procedures in post-fire remote sensing analysis [4]. All images underwent geometric alignment through co-registration techniques to ensure pixel-level correspondence and were subsequently clipped to the official wildfire perimeters, which were obtained from validated disaster reports and authoritative geospatial datasets.
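As a concrete illustration of this resampling step, the sketch below upsamples a toy 20 m SWIR array onto the 10 m grid with bilinear interpolation via `scipy.ndimage.zoom`; the function choice and sample values are ours, and a production pipeline would resample against the actual raster geotransform (e.g., with rasterio/GDAL):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_swir_to_10m(swir_20m: np.ndarray) -> np.ndarray:
    """Upsample a 20 m SWIR array to the 10 m NIR grid; order=1 is bilinear."""
    return zoom(swir_20m, zoom=2, order=1)

# toy 2x2 tile of 20 m SWIR reflectance (illustrative values)
swir_20m = np.array([[0.30, 0.40],
                     [0.50, 0.60]])
swir_10m = resample_swir_to_10m(swir_20m)
print(swir_10m.shape)  # (4, 4): each 20 m pixel now covers a 2x2 block of 10 m pixels
```

Bilinear (order=1) interpolation keeps all output values within the input range, which avoids the reflectance overshoot that higher-order splines can introduce at sharp burn boundaries.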
Clouds and cast shadows were masked using the Sentinel-2 Scene Classification Layer (SCL), and manual inspection was performed to reduce misclassification errors, particularly in areas with complex terrain. These preprocessing procedures adhered to recommended practices from large-scale wildfire studies [28], supporting consistency across classification techniques, including thresholding, unsupervised clustering, and support vector machines (SVMs).
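The SCL-based masking step can be sketched as follows; the class codes used (3 = cloud shadow, 8/9 = cloud medium/high probability, 10 = thin cirrus) follow the standard Sen2Cor scene classification, and the toy arrays are purely illustrative:

```python
import numpy as np

# Sen2Cor SCL classes treated as invalid for burn mapping:
# 3 = cloud shadow, 8 = cloud (medium prob.), 9 = cloud (high prob.), 10 = thin cirrus
INVALID_SCL = {3, 8, 9, 10}

def mask_clouds(band: np.ndarray, scl: np.ndarray) -> np.ndarray:
    """Return a copy of `band` with cloud/shadow pixels set to NaN."""
    masked = band.astype(float).copy()
    masked[np.isin(scl, list(INVALID_SCL))] = np.nan
    return masked

band = np.array([[0.2, 0.3], [0.4, 0.5]])   # toy reflectance
scl  = np.array([[4,   9],   [3,   5]])      # vegetation, cloud, shadow, bare soil
masked = mask_clouds(band, scl)
print(masked)
```

Keeping masked pixels as NaN (rather than zero) lets downstream index calculations and cluster statistics skip them explicitly with `np.nanmean`-style reductions.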

2.3. Index Calculation and Threshold Calibration

To quantify the surface changes induced by wildfire, this study employed the differenced Normalized Burn Ratio (dNBR), a widely accepted spectral index for post-fire assessment. The dNBR is derived from the Normalized Burn Ratio (NBR), which utilizes the near-infrared (NIR, Band 8) and shortwave infrared (SWIR, Band 12) reflectance from Sentinel-2 imagery. The NBR is calculated using Equations (1) and (2), where a higher contrast between NIR and SWIR values typically indicates a stronger fire impact. The dNBR is then computed by subtracting the post-fire NBR from the pre-fire NBR, as shown below [29].
To minimize phenological variability, satellite images acquired within a ±20-day window of the fire event were used. Cloud masking was conducted based on the Sentinel-2 Scene Classification Layer (SCL), and missing values in cloud-contaminated regions were corrected using mean imputation techniques to preserve spatial consistency.
The NBR was calculated using the reflectance values from Sentinel-2 Band 8 (NIR) and Band 12 (SWIR), as shown in Equation (1), while the dNBR was computed by differencing pre- and post-fire NBR values, as presented in Equation (2).
NBR = (NIR_B8 − SWIR_B12) / (NIR_B8 + SWIR_B12)        (1)
dNBR = NBR_pre − NBR_post        (2)
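Equations (1) and (2) reduce to a few lines of array arithmetic; the reflectance values below are invented to show the expected sign convention (positive dNBR for vegetation loss):

```python
import numpy as np

def nbr(nir, swir):
    """Eq. (1): NBR = (NIR - SWIR) / (NIR + SWIR), from Sentinel-2 Bands 8 and 12."""
    return (nir - swir) / (nir + swir)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Eq. (2): dNBR = NBR_pre - NBR_post; positive values indicate vegetation loss."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# invented reflectances: healthy vegetation (high NIR, low SWIR) burning to the reverse
result = dnbr(np.array([0.45]), np.array([0.15]),
              np.array([0.20]), np.array([0.30]))
print(result)  # ~[0.7]
```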
The differenced Normalized Burn Ratio (dNBR) is widely used for the standardized quantification of vegetation loss and fire severity across diverse landscapes [30,31]. However, its classification performance depends strongly on the calibration of threshold values, which define the boundaries between severity classes such as unburned, low, moderate, and high. Although some studies have proposed globally fixed thresholds [14], these may not generalize well across biomes due to differences in vegetation structure, topography, and climate conditions [32].
In this study, the standard burn severity classification scheme proposed by the United States Geological Survey (USGS) and the Monitoring Trends in Burn Severity (MTBS) program was applied to delineate the spatial extent and severity of wildfire-affected areas using dNBR values. This approach is consistent with the method demonstrated by [33], who utilized Sentinel-2 imagery for dNBR-based burn severity mapping, and which reflects the broader research emphasis on integrating local vegetation and landscape characteristics into post-fire assessment frameworks.
The dNBR index, derived from the differencing of pre- and post-fire Normalized Burn Ratio (NBR) values, provides a standardized means to quantify vegetation loss and assess fire severity. The severity levels are determined based on the established dNBR thresholds defined by the USGS and MTBS programs, as summarized in Table 1.
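Assuming the commonly cited USGS/MTBS breakpoints (Table 1 of the paper is authoritative; the values below are the widely reproduced defaults), the class assignment can be implemented with `np.digitize`:

```python
import numpy as np

# indicative dNBR breakpoints commonly attributed to USGS/MTBS
BINS   = [-0.10, 0.10, 0.27, 0.44, 0.66]
LABELS = ["regrowth", "unburned", "low", "moderate-low", "moderate-high", "high"]

def classify_dnbr(dnbr: np.ndarray) -> np.ndarray:
    """Map dNBR values to severity class indices (bin i covers BINS[i-1] <= x < BINS[i])."""
    return np.digitize(dnbr, BINS)

vals  = np.array([-0.2, 0.05, 0.2, 0.5, 0.8])
names = [LABELS[i] for i in classify_dnbr(vals)]
print(names)  # ['regrowth', 'unburned', 'low', 'moderate-high', 'high']
```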
In addition to dNBR, the differenced Normalized Difference Vegetation Index (dNDVI) is a widely used remote sensing metric for assessing vegetation changes before and after wildfire events. It quantifies the degree of vegetation loss by subtracting post-fire NDVI values from pre-fire NDVI values, providing a direct and interpretable measure of fire-induced disturbance. This index is particularly effective for identifying areas of high burn severity and for monitoring post-fire vegetation regrowth across various forest ecosystems.
According to [34], the dNDVI shows strong reliability in distinguishing between high and low burn severity areas, making it suitable for operational wildfire severity mapping. Similarly, ref. [35] highlighted the usefulness of the dNDVI for capturing spatial variability in burn intensity, especially when applied to Sentinel-2 imagery. Ref. [36] further demonstrated that the dNDVI offers advantages in assessing post-fire vegetation dynamics across diverse forest environments. Moreover, several studies have shown that the dNDVI outperforms other commonly used indices, such as the dNBR, in tracking vegetation regrowth over time [36,37].
To complement the detection of vegetation changes caused by wildfire, this study additionally calculated the differenced Normalized Difference Vegetation Index (dNDVI) using Sentinel-2 imagery. The NDVI values were derived from pre- and post-fire Level-2A surface reflectance data at a spatial resolution of 10 m, using Band 8 (NIR) and Band 4 (Red), according to the following formulas:
NDVI = (NIR − RED) / (NIR + RED)
dNDVI = NDVI_pre − NDVI_post
Here, NDVI_pre reflects the vegetation conditions before the fire, while NDVI_post captures the post-fire state.
As there are no universally established thresholds for a dNDVI-based burn severity classification in domestic or international standards, this study adopted an interval-based thresholding scheme. Specifically, eight thresholds were defined at 0.02 intervals, ranging from 0.04 to 0.18. This strategy enabled a more refined categorization of wildfire-impacted zones and was employed to enhance the precision and granularity of post-fire damage interpretation.
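A minimal sketch of this interval-based scheme, sweeping the eight thresholds named above over invented dNDVI values to show how the flagged burn area shrinks as the threshold tightens:

```python
import numpy as np

# eight candidate thresholds at 0.02 intervals, 0.04 through 0.18 (as in the text)
THRESHOLDS = np.round(np.linspace(0.04, 0.18, 8), 2)

def burned_fraction(dndvi: np.ndarray) -> dict:
    """For each candidate threshold t, the fraction of pixels flagged burned (dNDVI >= t)."""
    return {float(t): float(np.mean(dndvi >= t)) for t in THRESHOLDS}

# toy dNDVI values spanning unburned to severely burned pixels
dndvi = np.array([0.01, 0.05, 0.07, 0.12, 0.20])
frac = burned_fraction(dndvi)
print(frac[0.04], frac[0.18])  # loosest vs. strictest threshold
```

Comparing these fractions against reference burn perimeters is one straightforward way to select the best-performing threshold from the candidate set.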
Given the high spatial (10 m) and temporal resolution of Sentinel-2 imagery, dNDVI-based assessments enable detailed mapping of heterogeneous burn patterns at the landscape level [35]. Although no universally accepted thresholds exist for classifying the burn severity using the dNDVI, many studies have adopted locally calibrated or interval-based thresholds to improve classification accuracy under specific vegetation and environmental conditions [34,35].

2.4. Supervised and Unsupervised Learning-Based Severity Classification

To complement traditional threshold-based classification of burn severity, this study applied both supervised and unsupervised machine learning algorithms—specifically support vector machines (SVMs) and the Iterative Self-Organizing Data Analysis Technique (ISODATA)—in an independent and comparative framework. This dual approach facilitates a robust evaluation of classification performance and methodological flexibility across heterogeneous post-fire landscapes.
Machine learning techniques offer data-driven alternatives that can model complex, nonlinear relationships by leveraging spectral indices and auxiliary datasets. Such methods are particularly advantageous in post-fire environments characterized by spectral variability and diverse vegetation structures, where traditional threshold-based classifications may be insufficient.
Recent studies support the utility of machine learning in this context. For example, ref. [23] demonstrated that models incorporating vegetation productivity and site characteristics significantly enhanced burn severity classification accuracy across various forest types, reinforcing the value of flexible, data-adaptive methodologies in wildfire assessment. Refs. [38,39] highlighted the effectiveness of machine learning algorithms, such as support vector machines, in classifying forest vegetation using Sentinel-2 imagery, underscoring the broader applicability of data-driven approaches in complex environmental settings.
Support vector machine (SVM), a widely used supervised classification algorithm, was employed to classify wildfire-affected areas using manually labeled training samples derived from Sentinel-2 spectral reflectance data, specifically Bands 12 (SWIR), 8 (NIR), and 4 (Red). SVMs are effective in high-dimensional feature spaces and are widely recognized for their strong generalization capabilities in remote sensing applications [20].
In parallel, the ISODATA (Iterative Self-Organizing Data Analysis Technique) algorithm was applied as an unsupervised classification method to identify spectrally homogeneous clusters without the need for labeled data. This technique adaptively splits and merges clusters based on spectral distance and intra-cluster variance, making it particularly suitable for post-fire landscapes where ground truth data are limited or unavailable [40].
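ISODATA is not part of the common Python ML libraries, so the following is a deliberately minimal sketch of its split/merge logic; all parameter names, values, and the toy data are illustrative, not the paper's configuration:

```python
import numpy as np

def isodata(X, init_idx=(0, -1), max_iter=10, split_std=0.12, merge_dist=0.05):
    """Minimal ISODATA sketch: repeat nearest-centroid assignment, then split
    clusters whose per-feature spread exceeds `split_std` and merge centroids
    closer than `merge_dist` (heuristic thresholds, illustrative only)."""
    centers = X[list(init_idx)].astype(float)
    for _ in range(max_iter):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        groups = [X[labels == c] for c in range(len(centers)) if np.any(labels == c)]
        centers = np.array([g.mean(axis=0) for g in groups])
        # split step: divide clusters with a large internal spread
        split = []
        for g, mu in zip(groups, centers):
            s = g.std(axis=0)
            if s.max() > split_std and len(g) >= 4:
                offset = np.where(s == s.max(), s.max(), 0.0)
                split += [mu + offset, mu - offset]
            else:
                split.append(mu)
        # merge step: drop centroids that nearly coincide with an earlier one
        centers = []
        for mu in split:
            if all(np.linalg.norm(mu - k) > merge_dist for k in centers):
                centers.append(mu)
        centers = np.array(centers)
    labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
    return labels, centers

# toy data: two spectrally distinct pixel groups (e.g. burned vs. unburned)
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.2, 0.01, (50, 3)), rng.normal(0.6, 0.01, (50, 3))])
labels, centers = isodata(X)
print(len(centers))  # 2
```

The paper's two-step severity procedure could then re-run such clustering within the burned cluster alone to subdivide it by severity.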
By implementing both supervised and unsupervised approaches, this study leverages the respective strengths of each method—SVM’s ability to learn from labeled samples and ISODATA’s capability to capture intrinsic spectral structures autonomously. This dual-strategy framework enables a more comprehensive and flexible assessment of burn patterns across heterogeneous wildfire-affected regions.

2.4.1. Supervised Learning with SVM

To evaluate the post-fire burn severity using a supervised machine learning approach, this study employed the support vector machine (SVM), a well-established classification algorithm known for its robustness in high-dimensional feature spaces. In this context, SVM was trained on manually labeled data derived from Sentinel-2 spectral reflectance values, specifically Bands 12 (SWIR), 8 (NIR), and 4 (Red), which are known to be sensitive to fire-induced changes in vegetation and soil conditions. SVM has been extensively applied in remote sensing due to its ability to construct optimal decision boundaries and to generalize well across heterogeneous landscapes [41]. Furthermore, as noted by ref. [20], SVM is recognized for delivering stable and accurate classification results under a wide range of environmental conditions, making it particularly suitable for wildfire damage assessment.
To construct a suitable training dataset for supervised classification, this study utilized wildfire events that occurred across various regions in South Korea between 2019 and 2023. Based on a comprehensive visual interpretation of pre- and post-fire Sentinel-2 imagery, burn severity zones were manually delineated into approximately 240 polygons, each corresponding to a distinct severity level. Subsequently, 100 random point samples were generated from each polygon, yielding a total of 24,000 labeled training samples. Z-score normalization was applied, and the SVM classifier was trained to distinguish fire-affected from unaffected areas based on spectral patterns.
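The described pipeline (Z-score normalization followed by SVM training on SWIR/NIR/Red triples) might look like the scikit-learn sketch below; the synthetic reflectance values merely stand in for the 24,000 labeled samples and are not the paper's data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for the training set: (SWIR, NIR, Red) reflectance triples
# for burned vs. unburned pixels; band means are illustrative assumptions
rng = np.random.default_rng(0)
burned   = rng.normal([0.30, 0.18, 0.10], 0.02, (200, 3))
unburned = rng.normal([0.12, 0.40, 0.06], 0.02, (200, 3))
X = np.vstack([burned, unburned])
y = np.array([1] * 200 + [0] * 200)

# Z-score normalization + RBF-kernel SVM, as described in the text
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
clf.fit(X, y)
print(clf.score(X, y))  # well-separated toy classes -> accuracy near 1.0
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` reproduces the three kernel configurations compared in this study.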
Support vector machines (SVMs) are supervised learning algorithms designed to find the optimal decision boundary, or hyperplane, that best separates different classes by maximizing the margin between them. According to [42], this margin maximization not only enhances classification accuracy but also reduces the risk of overfitting, thereby improving the model’s generalization performance. In the case of linearly separable data, the decision boundary is a flat hyperplane in the original feature space. For nonlinearly separable cases, kernel functions, such as polynomial or radial basis function (RBF) kernels, are employed to project the data into a higher-dimensional space, where a linear separation becomes feasible [42,43].
A key advantage of SVMs lies in their reliance on support vectors—data points nearest to the decision boundary, which exclusively determine the orientation and position of the hyperplane. This feature enhances robustness while minimizing the influence of irrelevant training samples [44,45]. Several studies have emphasized that selectively focusing on these boundary-critical instances can significantly improve computational efficiency and accuracy, especially in large-scale remote sensing datasets [46].
In this study, multiple SVM configurations were implemented to evaluate the algorithm’s adaptability to heterogeneous spectral patterns observed in post-fire landscapes. Specifically, we tested three kernel types: linear, polynomial, and RBF, to capture both simple and complex decision boundaries across varying burn severity levels. The formulation of the linear SVM model is presented as follows:
min_{w,b} (1/2)‖w‖²   subject to   yᵢ(w · xᵢ + b) ≥ 1,  ∀i
where w is the weight vector normal to the hyperplane, b is the bias term, and yᵢ ∈ {−1, +1} are the class labels. The solution to this optimization problem yields the decision boundary.
f(x) = w · x + b = 0
This linear formulation performs well when classes are linearly separable. However, many real-world datasets, such as spectral reflectance data from post-fire environments, exhibit nonlinear boundaries that cannot be captured in the original feature space. To address this, SVM utilizes the kernel trick, which allows the model to implicitly project the input data into a higher-dimensional feature space H via a nonlinear mapping ϕ: ℝⁿ → H, where a linear decision boundary becomes feasible.
Here, ϕ: ℝⁿ → H denotes a nonlinear mapping from the original input space ℝⁿ to a higher-dimensional feature space H, where H is a Hilbert space. The purpose of this transformation is to enable linear separation of data that are not linearly separable in the original space. By mapping the input vectors into H, the SVM can construct a linear decision boundary in this transformed space, which corresponds to a nonlinear boundary in the original space [47,48,49,50,51,52,53,54].
Rather than computing ϕ(x) explicitly, the inner product xᵢ · xⱼ is replaced with a kernel function K(xᵢ, xⱼ), which satisfies the following condition:
K(xᵢ, xⱼ) = ϕ(xᵢ) · ϕ(xⱼ)
This substitution enables the SVM to perform computations as if operating in a high-dimensional space, without incurring the computational cost of explicitly mapping the data. The use of kernel functions, therefore, makes it possible to construct complex, nonlinear decision boundaries in the original input space.
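The identity K(xᵢ, xⱼ) = ϕ(xᵢ) · ϕ(xⱼ) can be checked numerically for the degree-2 polynomial kernel with γ = 1 and r = 0, whose explicit feature map for two-dimensional input is known in closed form:

```python
import numpy as np

def poly_kernel(x, y, gamma=1.0, r=0.0, d=2):
    """Polynomial kernel K(x, y) = (gamma * x.y + r)^d."""
    return (gamma * np.dot(x, y) + r) ** d

def phi(x):
    """Explicit degree-2 feature map for 2-D input (gamma=1, r=0):
    phi(x) = (x1^2, x2^2, sqrt(2)*x1*x2), so phi(x).phi(y) = (x.y)^2."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x, y = np.array([0.3, 0.7]), np.array([0.5, 0.2])
print(poly_kernel(x, y), np.dot(phi(x), phi(y)))  # identical up to float rounding
```

The kernel evaluation touches only a 2-D dot product, while the explicit map works in 3 dimensions; for higher degrees and input sizes this gap grows combinatorially, which is precisely the saving the kernel trick provides.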
In this study, two commonly used kernel functions were adopted to explore the separability of post-fire spectral data. The polynomial kernel enables the SVM to model complex feature interactions by expanding the dot product into a polynomial form, allowing for nonlinear decision boundaries in the transformed feature space. It is defined as:
K(xᵢ, xⱼ) = (γ xᵢ · xⱼ + r)^d
where
  • xᵢ, xⱼ ∈ ℝⁿ are the input feature vectors;
  • γ > 0 is a scaling parameter that controls the influence of the dot product;
  • r is a constant term that adjusts the bias of the polynomial;
  • d ∈ ℕ is the degree of the polynomial.
This kernel is especially useful when the class boundaries depend on combinations of features (e.g., quadratic or cubic interactions). As the degree d increases, the model gains the capacity to fit more complex decision boundaries at the cost of potentially increased sensitivity to noise. The degree d determines the highest-order interaction between the input features considered by the kernel; the symbol ℕ denotes the set of natural numbers, i.e., the positive integers {1, 2, 3, ...}.
In scenarios where the data distribution exhibits complex, nonlinear patterns—as commonly observed in post-fire remote sensing datasets—nonlinear kernels are essential for capturing subtle variations. Among these, the radial basis function (RBF) kernel, also known as the Gaussian kernel, is one of the most widely used due to its flexibility and locality. It is defined as [55,56,57]:
K(xᵢ, xⱼ) = exp(−γ ‖xᵢ − xⱼ‖²)
where
  • ‖xᵢ − xⱼ‖² is the squared Euclidean distance between samples;
  • γ > 0 is a shape parameter that controls the width of the Gaussian function.
Smaller values of γ result in broader influence regions, leading to smoother and more generalized decision boundaries. Conversely, larger values of γ allow the model to capture fine-grained distinctions between classes but may increase the risk of overfitting. The RBF kernel is particularly well-suited for high-dimensional data and cases where class boundaries are highly nonlinear and irregular, as is often the case in heterogeneous post-fire landscapes.
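The effect of γ on the RBF kernel can be illustrated with a short sketch (the reflectance values below are hypothetical, chosen only to show the decay behavior): for a fixed pair of spectrally different pixels, increasing γ shrinks the kernel similarity, i.e., narrows each sample's region of influence.

```python
import numpy as np

def rbf_kernel(x, z, gamma):
    # K(x, z) = exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2))

x = np.array([0.30, 0.55, 0.20])  # hypothetical Red/NIR/SWIR of one pixel
z = np.array([0.10, 0.35, 0.45])  # a spectrally different pixel

for gamma in (0.1, 1.0, 10.0):
    print(gamma, rbf_kernel(x, z, gamma))
# small gamma -> similarity stays high (broad influence, smoother boundary)
# large gamma -> similarity decays quickly (local, more flexible boundary)
```

Identical inputs always yield K(x, x) = 1 regardless of γ; the parameter only governs how fast similarity falls off with spectral distance, which is exactly the smoothness/overfitting trade-off described above.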
The radial basis function (RBF) kernel is widely used in remote sensing applications, particularly for post-fire datasets where data distributions are complex and highly nonlinear. The RBF kernel effectively captures subtle variations in high-dimensional, heterogeneous landscapes, often outperforming traditional linear methods in classification accuracy and robustness.
Several studies have demonstrated the superior classification performance of RBF-based classifiers in remote sensing. For instance, refs. [58,59] showed that support vector machines using the RBF kernel (SVM-RBF) consistently achieved higher overall and class-specific accuracies compared to conventional methods such as maximum likelihood and minimum distance classifiers. Ref. [60] further validated these findings, highlighting the improved generalization capability of SVM-RBF models in land cover and post-fire image classification.
The robustness of RBF-based approaches is also notable. Unlike parametric classifiers, RBF models do not rely on assumptions about the statistical distribution of data. This makes them particularly suitable for complex, irregular environments frequently encountered in post-fire remote sensing datasets [59,60].
A critical factor in the performance of the RBF kernel is the kernel width parameter (γ). Ref. [61] emphasized the importance of appropriate tuning, as excessively large or small γ values can lead to overfitting or underfitting, respectively. Cross-validation and improved SVM variants have been employed to address these issues, helping to optimize generalization performance [60].
Beyond classification, the RBF kernel has proven beneficial in advanced remote sensing tasks. For sub-pixel mapping, Wang et al. [62] demonstrated that RBF interpolation methods yield more accurate representations of spatial heterogeneity than conventional interpolation techniques. In feature extraction, methods such as kernel principal component analysis (kernel PCA) and kernel partial least squares (kernel PLS) combined with RBF kernels have shown significant improvements in separating subtle class distinctions in multi- and hyperspectral imagery [61,63]. Additionally, RBF kernels have been applied effectively in smoke and fire detection, achieving high precision and recall in complex data environments [64].
Overall, nonlinear kernels such as the RBF and polynomial kernels, as well as the linear kernel, each demonstrate distinct strengths depending on the data characteristics and classification objectives. The RBF kernel offers strong performance in modeling localized, highly nonlinear patterns typical of post-fire remote sensing data. Polynomial kernels are effective when class boundaries involve complex feature interactions, particularly of higher orders. Linear kernels, while simpler, remain useful for linearly separable datasets and provide computational efficiency. Therefore, the choice of kernel should be guided by the underlying data structure, task complexity, and the desired balance between accuracy and generalization.
Different kernel functions offer varying trade-offs between complexity, interpretability, and classification power. As summarized in Table 2, the linear SVM provides a fast and simple solution but is limited to linearly separable data. The polynomial kernel can model curved decision boundaries, though it may overfit if the polynomial degree is too high. The radial basis function (RBF) kernel, widely used in remote sensing, is capable of capturing complex, nonlinear patterns but requires careful tuning of hyperparameters such as γ and C.
To ensure consistent and reliable comparison of the kernel functions, this study employed a 5-fold cross-validation strategy during the training and validation process. In this approach, the entire labeled dataset was divided into five equal-sized subsets. For each iteration, one subset was used as the validation set, while the remaining four subsets were used for training. This procedure was repeated five times, allowing each subset to serve as the validation set once. This approach helps to mitigate the risk of overfitting and reduces bias associated with arbitrary data partitioning, thereby enhancing the credibility of performance evaluation.
Within this validation framework, the classification performance of the three SVM kernel types—linear, polynomial, and radial basis function (RBF)—was quantitatively assessed. Four standard performance metrics were used: accuracy, precision, recall, and F1-score. These metrics were chosen to evaluate various aspects of model effectiveness and to support a comprehensive comparison of kernel-specific classification behavior in identifying wildfire-affected areas.
These results align with the theoretical properties of each kernel function and emphasize the importance of appropriate kernel selection in SVM-based classification tasks, particularly in the context of remote sensing applications involving heterogeneous and spectrally diverse datasets.
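The 5-fold cross-validation and four-metric evaluation described above can be sketched with scikit-learn; the spectral samples below are synthetic stand-ins for labeled (Red, NIR, SWIR) reflectances, not the study's actual training data.

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins: unburned pixels keep high NIR / low SWIR,
# burned pixels show the opposite pattern.
unburned = rng.normal([0.05, 0.35, 0.15], 0.03, size=(300, 3))
burned   = rng.normal([0.08, 0.15, 0.30], 0.03, size=(300, 3))
X = np.vstack([unburned, burned])
y = np.array([0] * 300 + [1] * 300)

scoring = ["accuracy", "precision", "recall", "f1"]
for kernel in ("linear", "poly", "rbf"):
    # Standardize within each training fold, then fit the SVM
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel, gamma="scale"))
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    print(kernel, {m: round(cv[f"test_{m}"].mean(), 3) for m in scoring})
```

Wrapping the scaler and classifier in a pipeline ensures standardization statistics are computed only on each fold's training subset, avoiding the data leakage that fitting the scaler on the full dataset would introduce.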

2.4.2. Unsupervised Learning with ISODATA Clustering

In parallel with the supervised SVM classification, the Iterative Self-Organizing Data Analysis Technique (ISODATA) algorithm was employed as an unsupervised learning approach to classify burn severity based solely on spectral characteristics. ISODATA is a refinement of the traditional k-means clustering algorithm, designed to partition multidimensional image data into spectrally homogeneous clusters without requiring labeled samples. Unlike standard k-means, ISODATA does not assume a fixed number of clusters; instead, it adaptively modifies the cluster count through iterative splitting and merging procedures. These operations are guided by spectral distance metrics and intra-cluster variance thresholds, making the method particularly suitable for complex and heterogeneous post-fire landscapes where the ground truth is limited or unavailable.
The typical ISODATA workflow includes the following steps:
  • Randomly initialize k cluster centroids.
  • Assign each pixel to the nearest centroid based on spectral similarity.
  • Recalculate the mean vector for each cluster.
  • Merge clusters whose centroids are closer than a specified distance, or split a cluster whose internal variance exceeds a threshold.
  • Repeat the process until convergence is reached (i.e., minimal change in clusters between iterations).
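The steps above can be sketched as a minimal ISODATA-style loop. This is an illustrative simplification, not the ArcGIS Pro ISO Cluster implementation, and all thresholds are placeholder values:

```python
import numpy as np

def isodata(X, k_init=3, max_iter=20, min_size=20,
            merge_dist=0.5, split_std=0.6, rng=None):
    """Minimal ISODATA-style clustering sketch (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), k_init, replace=False)].astype(float)
    for _ in range(max_iter):
        # Steps 1-2: assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: recompute means; drop clusters below the minimum size
        keep = [c for c in range(len(centers)) if (labels == c).sum() >= min_size]
        centers = np.array([X[labels == c].mean(axis=0) for c in keep])
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 4a: split a cluster whose internal spread exceeds the threshold
        new_centers = []
        for c in range(len(centers)):
            pts = X[labels == c]
            std = pts.std(axis=0)
            if std.max() > split_std and len(pts) >= 2 * min_size:
                axis = std.argmax()
                offset = np.zeros(X.shape[1])
                offset[axis] = std[axis]
                new_centers += [centers[c] + offset, centers[c] - offset]
            else:
                new_centers.append(centers[c])
        centers = np.array(new_centers)
        # Step 4b: merge centroid pairs closer than the merge distance
        merged, used = [], set()
        for i in range(len(centers)):
            if i in used:
                continue
            for j in range(i + 1, len(centers)):
                if j not in used and np.linalg.norm(centers[i] - centers[j]) < merge_dist:
                    merged.append((centers[i] + centers[j]) / 2)
                    used |= {i, j}
                    break
            else:
                merged.append(centers[i])
        centers = np.array(merged)
    # Step 5 in practice also checks inter-iteration change; here the
    # iteration cap alone bounds the run, mirroring the 20-iteration limit.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1), centers
```

Unlike k-means, the final number of clusters emerges from the split and merge thresholds rather than being fixed by `k_init`, which is the property that makes the method attractive when the number of spectral classes is unknown.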
This flexibility makes ISODATA particularly suitable for post-fire classification in heterogeneous landscapes where ground truth data are limited or unavailable [46]. The combined use of supervised and unsupervised approaches allowed for a comprehensive evaluation of methodological performance and robustness under varying vegetation and terrain conditions.
In our study, ISODATA clustering was implemented using the ISO Cluster tool in ArcGIS Pro 3.5, applying reflectance values from Sentinel-2 Bands 12 (SWIR), 8 (NIR), and 4 (Red) as input variables. Parameter settings followed established theoretical procedures to ensure methodological consistency. The maximum number of classes was initially set to 10, providing an upper limit for spectral cluster formation. Euclidean distance was used as the spectral similarity metric for assigning pixels to clusters. During each iteration, cluster centroids were recalculated to reflect the updates in class membership, while the merging and splitting of clusters were guided by specified thresholds—namely, a maximum merge distance of 0.5, a minimum of 20 pixels per cluster, and a maximum of 5 merges per iteration. The clustering process was limited to 20 iterations to maintain computational efficiency.
Following the initial ISODATA classification, which distinguished between burned and unburned regions, a second round of clustering was conducted exclusively within the burned areas. This secondary process subdivided the fire-affected regions into four spectral classes, allowing for a more detailed assessment of burn severity. By incorporating this two-stage clustering framework based on post-fire spectral heterogeneity in Bands 12, 8, and 4, this study enabled a structured, data-driven evaluation of fire damage intensity. This approach is particularly valuable in the absence of extensive ground truth data and contributes to operational strategies for prioritizing post-fire restoration and managing forest recovery under varying ecological conditions.

2.5. Extent Delineation and Workflow Integration

To quantitatively delineate wildfire-affected areas, this study applied a range of classification methods, each representing a distinct analytical approach to burn extent mapping. Index-based methods included the differenced Normalized Burn Ratio (dNBR) and the differenced Normalized Difference Vegetation Index (dNDVI), both of which empirically estimate burn areas by measuring spectral changes between pre- and post-fire satellite imagery.
For supervised classification, support vector machines (SVMs) were employed using three kernel types—linear, polynomial, and radial basis function (RBF)—to compare classification performance and delineation accuracy. Each kernel defines a different decision boundary based on the distribution of training data, and the SVM models were trained using labeled samples to detect burned regions.
As an unsupervised approach, the Iterative Self-Organizing Data Analysis Technique (ISODATA) was used to cluster spectrally similar pixels without requiring prior training data, enabling autonomous segmentation of the burn-affected areas based solely on the spectral characteristics.
Burned area calculations were conducted for five administrative regions (Andong, Uiseong, Cheongsong, Yeongyang, and Yeongdeok). For each classification output, a burned area was quantified using the Calculate Geometry function in ArcGIS Pro, providing hectare-based measurements for comparative analysis across the methods.
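For a raster classification output, the hectare conversion reduces to a pixel count; the sketch below assumes a 10 m Sentinel-2 grid and a hypothetical binary burn mask (the study itself used the Calculate Geometry function in ArcGIS Pro, but the arithmetic is equivalent).

```python
import numpy as np

PIXEL_AREA_HA = 0.01  # one 10 m x 10 m Sentinel-2 pixel = 100 m^2 = 0.01 ha

# Hypothetical binary burn mask (1 = burned), e.g. one per classifier output
burn_mask = np.zeros((500, 500), dtype=np.uint8)
burn_mask[100:300, 150:400] = 1  # a 200 x 250 pixel burned patch

burned_ha = burn_mask.sum() * PIXEL_AREA_HA
print(burned_ha)  # 200 * 250 pixels * 0.01 ha = 500.0 ha
```

Repeating the count per administrative-boundary mask yields the per-region totals used in the comparative analysis.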
Notably, for the dNBR and ISODATA-derived results, the analysis was extended beyond binary classification to include a burn severity assessment through class stratification. Severity levels were inferred based on the magnitude of spectral change or inter-cluster variance, allowing for a more nuanced understanding of not only the spatial extent but also the intensity distribution of wildfire impacts, highlighting the practical utility of these methods in operational fire management and ecological restoration planning.
Figure 2 illustrates the overall workflow developed in this study for wildfire burn area delineation. The process begins with the acquisition of Sentinel-2 imagery (Bands 12, 8, and 4), followed by two parallel analytical pathways: index-based and machine learning-based approaches. In the index-based approach, the burn areas were estimated using both dNBR and dNDVI metrics.
For the supervised learning pathway, SVM classification was performed using training data, incorporating 5-fold cross-validation to ensure generalization. Three kernel types—linear, polynomial, and Gaussian (RBF)—were employed to compare classification performance. In parallel, an unsupervised ISODATA clustering algorithm was applied, with dynamic splitting and merging based on intra-cluster variance and inter-cluster distance criteria.
All classification outputs were integrated into a unified burn area extraction stage, allowing for comparative analysis and the generation of an optimized burn area map. This workflow was designed to enhance both the reliability and scalability of wildfire impact assessment.
As shown in Figure 2, the proposed workflow integrates index-based and machine learning-based classification approaches to quantitatively delineate wildfire-affected areas. The index-based methods, using the dNBR and dNDVI, estimate burned regions based on the spectral changes between pre- and post-fire satellite imagery. In parallel, supervised learning (SVM) and unsupervised clustering (ISODATA) enable automated classification, driven by either labeled training data or intrinsic spectral structures.
The analytical workflow developed in this study was designed to provide flexibility and scalability across diverse wildfire scenarios and satellite datasets. Initially, manually labeled training samples were generated using spectral reflectance values from Sentinel-2 Bands 12 (SWIR), 8 (NIR), and 4 (Red). These samples were used to construct kernel-based support vector machine (SVM) classifiers. To evaluate the generalization performance and mitigate overfitting, a 5-fold cross-validation procedure was employed during model training.
Following the training phase, burn area classification was conducted using SVM models with linear, polynomial, and radial basis function (RBF) kernels. Independently, the ISODATA clustering algorithm was applied as an unsupervised classification method. Through iterative centroid refinement and dynamic cluster splitting and merging, ISODATA enabled spectral segmentation without requiring labeled data.
This modular workflow structure allows for the complementary use of both supervised and unsupervised methods, supporting robust and detailed burn area mapping under varying spectral conditions and data availability.

3. Results

3.1. Burn Area Mapping with dNBR

The differenced Normalized Burn Ratio (dNBR) was derived using the near-infrared (NIR; Band 8) and shortwave infrared (SWIR; Band 12) bands of Sentinel-2 imagery to delineate wildfire-affected areas. This spectral index effectively captures the reduction in NIR reflectance and the concurrent increase in SWIR reflectance caused by fire-induced vegetation loss, thereby enabling precise identification of burn perimeters.
Based on established methodologies, the dNBR values were computed and subsequently classified into burn severity categories using threshold ranges defined by the United States Geological Survey (USGS) and the Monitoring Trends in Burn Severity (MTBS) program. This approach resulted in the generation of a burn severity map that quantitatively illustrates the spatial extent and intensity of fire damage across the Gyeongbuk region.
The resulting map was then used as a reference layer to evaluate the classification accuracy of both supervised and unsupervised machine learning methods applied in this study.
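The dNBR computation and severity binning described above can be sketched as follows; the reflectance values are hypothetical, and the bin edges are the commonly cited USGS/MTBS-style thresholds for unscaled dNBR, which in practice are often recalibrated to local conditions.

```python
import numpy as np

def nbr(nir, swir):
    # Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)
    return (nir - swir) / (nir + swir + 1e-10)

# Hypothetical pre-/post-fire reflectances (Sentinel-2 B8 = NIR, B12 = SWIR):
# fire lowers NIR and raises SWIR reflectance.
shape = (4, 4)
nir_pre, swir_pre = np.full(shape, 0.40), np.full(shape, 0.15)
nir_post, swir_post = np.full(shape, 0.15), np.full(shape, 0.35)

dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# USGS/MTBS-style severity bins (unscaled dNBR; cut-offs vary by study)
bins = [0.1, 0.27, 0.44, 0.66]
labels = ["unburned/low", "low", "moderate-low", "moderate-high", "high"]
severity = np.digitize(dnbr, bins)
print(labels[severity[0, 0]])  # high
```

`np.digitize` maps each dNBR value to its severity class in one vectorized call, which scales directly to full-scene rasters.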
Figure 3 presents the spatial distribution of wildfire severity levels across the study area, categorized into four distinct classes. The map highlights the extent and intensity of burn severity, primarily within the forested regions of Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun, offering a comprehensive visual reference for subsequent analysis of classification accuracy under diverse topographic and vegetative conditions.
Figure 3 illustrates the spatial distribution of burn severity classified into four levels: low, moderate-low, moderate-high, and high. The most severely affected areas are predominantly concentrated in the central and eastern parts of the study region, particularly within forested zones of Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun. These regions exhibit extensive high and moderate-high severity zones, indicative of intense vegetation combustion and prolonged fire activity. In contrast, low-severity burn areas are more sporadically distributed along the periphery, reflecting reduced fire intensity or shorter burn durations. This spatial heterogeneity serves as a critical reference for assessing classification accuracy and model robustness across varying landscape conditions, including differences in vegetation cover, elevation, and slope.

3.2. Burn Area Mapping with dNDVI

The integration of the dNDVI with the dNBR, supported by these empirical threshold tests, improved the delineation of heterogeneous burn severity patterns, particularly in fragmented landscapes such as those observed in southern Cheongsong-gun and northern Yeongyang-gun. The combined approach provided a robust reference for validating the classification models under diverse ecological and terrain conditions.
Figure 4 presents a comparative visualization of the burned area delineation results using a series of dNDVI thresholds ranging from 0.04 to 0.18. Each panel illustrates the spatial extent of areas classified as burned under progressively stricter threshold conditions. This multi-threshold approach enables a detailed examination of the sensitivity of dNDVI-based methods to varying degrees of vegetation loss, offering insights into optimal cutoff values for accurate fire scar mapping.
Figure 4 illustrates the delineation of the burned areas using a series of dNDVI threshold values ranging from 0.04 to 0.18. As the threshold increases, the mapped burn extent becomes progressively more conservative, with higher values primarily capturing only the most severely impacted core zones. In contrast, lower thresholds (e.g., dNDVI > 0.04) produce broader burn extents, encompassing both central and peripheral regions with moderate to low vegetation loss. Thresholds exceeding 0.16 result in spatially fragmented patterns that isolate only high-severity scars.
This progressive reduction in mapped extent highlights a fundamental trade-off between commission and omission errors. Lower thresholds risk overestimating the burned area by including mildly affected vegetation, while higher thresholds may underestimate total disturbance by omitting lower-severity zones. Such a variation underscores the importance of threshold calibration being tailored to local vegetation characteristics and fire behavior.
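The threshold sweep described above can be sketched directly; the dNDVI values below are random placeholders, but the monotone shrinkage of the mapped extent with increasing threshold holds for any input raster.

```python
import numpy as np

# Hypothetical dNDVI raster (pre-fire NDVI minus post-fire NDVI), flattened
rng = np.random.default_rng(0)
dndvi = rng.uniform(-0.05, 0.30, size=(1000,))

PIXEL_AREA_HA = 0.01  # 10 m Sentinel-2 pixel
for thr in np.arange(0.04, 0.20, 0.02):
    burned = dndvi > thr                       # stricter cut -> fewer pixels
    print(f"dNDVI > {thr:.2f}: {burned.sum() * PIXEL_AREA_HA:.2f} ha")
# Higher thresholds retain fewer pixels, i.e., more conservative burn extents
```

Plotting mapped area against threshold in this way makes the commission/omission trade-off explicit and supports selecting a locally calibrated cut-off.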
The stepwise evaluation of threshold values proved effective in identifying an optimal cut-off that balances sensitivity and specificity in burn area delineation. Notably, the dNDVI demonstrated strong adaptability in capturing a continuous gradient of burn severity, reinforcing its utility as a robust indicator for post-fire assessment, particularly in spectrally heterogeneous and topographically complex landscapes.

3.3. Burn Area Mapping via Supervised SVM

To complement the spectral index-based classification, a supervised machine learning approach was employed using the support vector machine (SVM) algorithm. SVM is a robust and widely used classifier, particularly effective in handling high-dimensional remote sensing data. Training samples were selected based on visually interpreted reference areas, and the model was trained to distinguish between burn severity classes using spectral information from pre- and post-fire imagery. The following results illustrate the spatial distribution of fire severity generated through SVM classification and serve as a basis for performance comparison with other methods.
To train the supervised SVM classifier, a set of reference fire events was selected from various regions and years across South Korea. These training samples were manually interpreted using post-fire Sentinel-2 imagery with false-color composites (SWIR–NIR–RED) to distinguish burn severity patterns. The selected cases include both recent and historical wildfires that occurred in diverse landscapes, encompassing coastal, mountainous, and agricultural–urban interface zones. This diversity in training data was intended to enhance the generalizability and robustness of the classifier across different vegetation types and terrain conditions.
Figure 5 presents a collection of representative wildfire cases across South Korea from 2019 to 2023, visualized using post-fire false-color composite satellite imagery.
The selected scenes encompass a diverse range of ecological and topographic settings—including inland and coastal regions, various vegetation types, and differing fire intensities—carefully chosen to ensure comprehensive environmental representation. Each panel highlights spatially distinct burn patterns, offering qualitative insight into the spectral characteristics of fire-affected landscapes.
These images served as the foundation for constructing the training dataset for SVM-based classification. A total of 240 polygons were manually delineated to distinguish between fire-affected and unaffected areas; from each polygon, 100 pixels were randomly sampled and their spectral values extracted to train three SVM kernel models: linear, polynomial, and radial basis function (RBF). This carefully curated training dataset was designed to ensure class balance and spatial variability, thus enhancing model generalizability across heterogeneous landscapes. Table 3 lists the Sentinel-2 image acquisition dates corresponding to each wildfire event, along with the geographic coordinates and event periods. These images were used to generate training data for supervised classification.
By incorporating fire events from diverse geographic settings and land cover types, this training approach supports the development of classification models with strong scalability and applicability across broader regional contexts. The inclusion of multiple fire regimes and vegetation structures enhances the robustness of the model and provides a reliable foundation for operational implementation in large-scale wildfire monitoring and damage assessment.
To further illustrate the impact of kernel selection on classification behavior, Figure 6 provides a comparative visualization of the decision boundaries formed by three different SVM kernels—linear, polynomial, and radial basis function (RBF)—trained on standardized Sentinel-2 spectral reflectance values (Bands 4, 8, and 12). Each subplot displays the classifier’s response to the training samples, where the blue and red points denote unburned and burned pixels, respectively.
Figure 6 presents a three-dimensional visualization of the decision boundaries formed by three SVM classifiers—linear, polynomial, and radial basis function (RBF)—trained on standardized reflectance values from Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR). Each classifier was trained on the same dataset comprising burned and unburned pixels, with the resulting decision surfaces shown in green.
The linear SVM (left panel) generates a planar decision boundary, which performs well when the spectral separation between classes is distinct and approximately linear. However, its capacity to generalize is limited in cases where the data exhibit overlapping or curved class distributions, as often encountered in post-fire environments.
The polynomial kernel (center panel) introduces moderate nonlinearity, producing a curved decision surface that adapts better to transitional zones where burned and unburned signatures partially overlap. This kernel captures higher-order feature interactions, offering improved flexibility compared to the linear model while maintaining interpretability.
The RBF kernel (right panel) constructs a highly flexible, nonlinear boundary that conforms closely to the complex geometry of the training data. Its ability to localize decision boundaries in high-dimensional feature space enables superior performance in heterogeneous or fragmented landscapes, where spectral patterns may vary subtly across space.
Overall, this figure underscores how increasing kernel complexity enhances the model’s ability to resolve subtle spectral variations, which is critical for accurately delineating burn-affected areas with continuous gradients in severity. The comparative visualization highlights the trade-offs between model complexity and boundary flexibility, informing kernel selection for operational wildfire classification tasks.
To evaluate how effectively the three SVM classifiers distinguished burned and unburned samples within the training dataset, we conducted 5-fold cross-validation using standardized spectral reflectance values derived from Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR). The analysis employed four performance metrics—accuracy, precision, recall, and F1-score—to assess the models’ ability to learn from the labeled samples and generalize across folds.
Figure 7 presents the comparative performance of three SVM classifiers—linear, polynomial, and radial basis function (RBF)—based on 5-fold cross-validation using standardized spectral reflectance data from Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR). Four evaluation metrics—accuracy, precision, recall, and F1-score—were used to assess the classifiers’ ability to distinguish between burned and unburned samples. This figure provides a quantitative summary of each kernel’s classification performance and highlights the impact of kernel complexity on model generalization and predictive capability.
Figure 7 quantitatively compares the classification performance of the three SVM kernels using four commonly adopted evaluation metrics: accuracy, precision, recall, and F1-score. The results are derived from 5-fold cross-validation on standardized Sentinel-2 spectral reflectance data.
The linear kernel yielded the lowest performance across all metrics, with accuracy at 95.0%, precision at 89.2%, recall at 96.9%, and an F1-score of 92.9%. These values indicate that while the linear model maintained reasonable accuracy and sensitivity, it struggled with precision, suggesting misclassification of unburned samples in transitional regions.
In contrast, the polynomial kernel achieved notably higher performance, with an accuracy of 99.2%, precision of 98.3%, recall of 99.4%, and F1-score of 98.9%. This reflects the model’s improved capacity to capture moderate nonlinear patterns in the data, offering a balanced trade-off between sensitivity and specificity.
The RBF kernel slightly outperformed the polynomial kernel across all metrics, achieving 99.3% accuracy, 98.5% precision, 99.5% recall, and an F1-score of 99.0%. This marginal yet consistent improvement highlights the RBF kernel’s superior flexibility in modeling complex and irregular spectral distributions, particularly in heterogeneous post-fire landscapes.
Overall, the results confirm that kernel complexity plays a critical role in enhancing classification performance. While the linear SVM may be computationally efficient, nonlinear kernels—particularly the RBF—provide significant advantages in terms of both accuracy and reliability when applied to high-dimensional, spectrally diverse wildfire data.
To complement the visual comparison shown in Figure 7, the quantitative evaluation metrics for all three kernel types are summarized in Table 4. These results reinforce the importance of kernel selection in SVM-based burned area classification. While the linear kernel may be suitable for linearly separable cases or real-time applications with strict computational constraints, the RBF kernel’s superior adaptability makes it a more appropriate choice for operational deployment in spectrally complex wildfire environments.
To further assess how each kernel performs in a practical classification scenario, Figure 8 visualizes the SVM outputs by applying each trained model to a full-scene satellite image and projecting all pixels into a standardized three-dimensional spectral feature space. The visualization enables intuitive comparison of kernel-specific classification behavior and generalization performance. The color of each point represents its classification status: the blue points indicate unburned training data, the red points represent both burned training samples and pixels classified as burned in the test area, and the gray points correspond to the unburned pixels in the test area.
Figure 8 illustrates how each SVM kernel model classifies burned areas when applied to full-scene spectral data projected into a standardized feature space composed of Sentinel-2 Bands 12 (SWIR), 8 (NIR), and 4 (Red). In this visualization, the distribution of TIF-derived image pixels (gray) and predicted burned areas (red) provides insight into the classification behavior of each kernel relative to its learned decision boundary.
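The full-scene inference step described here, scaling the three-band pixel values, predicting, and reshaping back into a burn mask, can be sketched as follows; both the training samples and the scene are synthetic stand-ins, not the study's Sentinel-2 data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# --- training (synthetic stand-ins for labeled SWIR/NIR/Red samples) ---
rng = np.random.default_rng(0)
unburned = rng.normal([0.15, 0.35, 0.05], 0.03, (200, 3))
burned   = rng.normal([0.30, 0.15, 0.08], 0.03, (200, 3))
X_train = np.vstack([unburned, burned])
y_train = np.array([0] * 200 + [1] * 200)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", gamma="scale").fit(scaler.transform(X_train), y_train)

# --- inference on a full scene: (H, W, 3) -> (N, 3) -> predict -> (H, W) ---
scene = rng.normal([0.15, 0.35, 0.05], 0.05, (64, 64, 3))  # mostly unburned
scene[20:40, 20:40] = rng.normal([0.30, 0.15, 0.08], 0.03, (20, 20, 3))

flat = scaler.transform(scene.reshape(-1, 3))   # same scaling as training
burn_mask = clf.predict(flat).reshape(scene.shape[:2])
print(burn_mask.sum(), "pixels classified as burned")
```

Reusing the training-time scaler at inference keeps the scene pixels in the same standardized feature space as the decision boundary, which is what the feature-space visualizations in this section depict.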
The linear SVM classified a relatively broad swath of pixels as burned, particularly in regions exhibiting strong spectral gradients. However, it also produced a significant number of misclassifications near the class boundary, reflecting its limited ability to separate overlapping or nonlinearly distributed spectral clusters.
The polynomial SVM demonstrated more adaptive behavior by forming a curved decision surface that better conforms to transitional zones. It successfully captured many of the burned pixels in the core area but also included some false positives beyond the immediate boundary, indicating moderate generalization with slight overreach.
The RBF SVM, in contrast, classified a more focused subset of pixels with high spectral similarity to the training samples. Its conservative boundary produced the smallest burned area footprint among the three models, indicating a strong preference for precision. This behavior aligns with the kernel’s inherent ability to model complex and localized spectral relationships while avoiding overgeneralization.
Overall, the visual patterns observed in Figure 8 reinforce the notion that kernel complexity directly affects classification selectivity. The RBF kernel, in particular, shows strong alignment between its learned boundary and the actual spectral distribution of the burned areas, confirming its effectiveness in complex post-fire environments.
Building on the spectral feature space analysis shown in Figure 8, Figure 9 displays the spatial distribution of the burned areas as classified by the linear, polynomial, and RBF SVM models. Each map was generated by applying the trained classifiers to Sentinel-2 surface reflectance data from Bands 4 (Red), 8 (NIR), and 12 (SWIR). The burned pixels are highlighted in red, offering a geographic perspective on model performance. Fire-affected areas are predominantly concentrated along mountainous ridgelines and upper slopes in central Gyeongsangbuk-do, including Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun, where dense forest cover and steep topography facilitated fire spread.
Figure 9 presents a comparative visualization of the burned area classification results produced by the linear, polynomial, and radial basis function (RBF) SVM models applied to full-scene Sentinel-2 imagery. The spatial distribution of the predicted burned pixels reveals distinct classification behaviors, shaped by each kernel’s underlying decision mechanism.
The linear SVM effectively detected fire-affected areas with strong spectral contrast, but it frequently overclassified the burned regions in transitional or spectrally ambiguous zones. This reflects the model’s limited flexibility, as its planar decision boundary cannot adequately capture the complex spectral variability often present in heterogeneous post-fire landscapes.
The polynomial SVM exhibited greater adaptability through curved decision boundaries, enabling better discrimination in the mid-severity and transitional zones. While it classified a more nuanced and spatially extensive burn footprint compared to the linear model, some overprediction persisted, indicating moderate sensitivity with partial misclassification along the boundary regions.
The RBF SVM, by contrast, produced the most conservative classification. Its outputs were highly localized and closely aligned with the distribution of the training samples, indicating strong dependence on spectral similarity and localized boundary structures. This precision-driven behavior minimized false positives and proved particularly effective in areas with fragmented or high-complexity terrain.
These spatial patterns are consistent with the cross-validation results summarized in Table 3 and visualized in Figure 7 and Figure 8. Together, they demonstrate that the superior performance of the RBF kernel stems from its ability to model highly nonlinear and localized spectral structures while preserving classification precision. Overall, Figure 9 underscores the critical role of kernel selection in determining the spatial extent, reliability, and generalizability of SVM-based burned area mapping in complex post-fire environments.
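The kernel comparison above can be sketched in Python with scikit-learn. The reflectance samples below are synthetic stand-ins rather than the training pixels of the study, so the printed accuracies illustrate the workflow, not the reported results:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-ins for Sentinel-2 reflectance training pixels:
# burned surfaces tend toward low NIR (B8) and high SWIR (B12).
n = 300
burned = np.column_stack([rng.normal(0.08, 0.02, n),    # B4  (Red)
                          rng.normal(0.15, 0.04, n),    # B8  (NIR)
                          rng.normal(0.30, 0.05, n)])   # B12 (SWIR)
unburned = np.column_stack([rng.normal(0.05, 0.02, n),
                            rng.normal(0.35, 0.06, n),
                            rng.normal(0.15, 0.04, n)])
X = np.vstack([burned, unburned])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Five-fold cross-validation for each of the three kernels compared
# in the text; standardization matters for distance-based kernels.
for kernel in ("linear", "poly", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{kernel:>6}: mean CV accuracy = {scores.mean():.3f}")
```

On real imagery, `X` would be built by stacking the Band 4/8/12 reflectance of labeled burned and unburned pixels.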

3.4. Burned Area Mapping via Unsupervised ISODATA

In this section, the Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA), an unsupervised clustering method, was applied to Sentinel-2 imagery to delineate the burned areas without the use of labeled training data. The ISODATA algorithm automatically groups pixels into spectrally similar clusters based on statistical distance and iteratively refines cluster centroids and membership. This approach is particularly useful for rapid assessment in data-scarce regions or when training samples are unavailable. Post-clustering, specific clusters associated with burn signatures (e.g., low NIR, high SWIR reflectance) were manually identified to extract the burned area map.
For the unsupervised clustering procedure, the ISODATA algorithm was configured to generate ten spectral clusters (k = 10), a number determined from the statistical distribution and variance of the reflectance values across Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR). The clustering process allows for iterative refinement, including the automatic merging and splitting of clusters, which enhances its ability to capture underlying spectral structures.
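ISODATA is not available in common Python libraries, but its core loop (k-means-style assignment plus merging of close centroids and removal of undersized clusters) can be sketched as follows. This is a simplified illustration, not the ArcGIS Pro ISO Cluster implementation used in the study; the split step is omitted and the parameter values are illustrative:

```python
import numpy as np

def isodata_sketch(X, k_init=10, max_iter=20, merge_dist=0.5, min_size=20, seed=0):
    """Minimal ISODATA-style clustering sketch: k-means assignment steps
    plus merging of close centroids and removal of tiny clusters."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k_init, replace=False)]
    for _ in range(max_iter):
        # Assign each pixel to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Drop clusters that fall below the minimum membership size.
        keep = [i for i in range(len(centroids)) if (labels == i).sum() >= min_size]
        centroids = np.array([X[labels == i].mean(axis=0) for i in keep])
        # Merge centroid groups closer than the merge threshold.
        merged, used = [], set()
        for i in range(len(centroids)):
            if i in used:
                continue
            group = [i]
            for j in range(i + 1, len(centroids)):
                if j not in used and np.linalg.norm(centroids[i] - centroids[j]) < merge_dist:
                    group.append(j)
                    used.add(j)
            merged.append(centroids[group].mean(axis=0))
        centroids = np.array(merged)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), centroids

# Demo on synthetic 3-band reflectance with three spectral groups.
rng = np.random.default_rng(1)
demo = np.vstack([rng.normal(c, 0.05, (100, 3)) for c in (0.2, 0.5, 0.8)])
labels, cents = isodata_sketch(demo, k_init=6, merge_dist=0.3, min_size=10)
```

Applied to a real scene, `X` would be the standardized Band 4/8/12 reflectance of every pixel, reshaped to an (n_pixels, 3) array.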
Among the ten generated clusters, several—particularly those concentrated along the central mountainous ridge of the region—were characterized by low NIR (Band 8) and high SWIR (Band 12) reflectance, a spectral signature commonly associated with burned vegetation. These clusters (e.g., Cluster 3, represented in dark tones) were manually interpreted as burned surfaces, and they showed strong spatial alignment with the independently verified fire-affected zones identified through reference datasets and with burned area delineation maps derived from spectral indices such as the dNBR and dNDVI.
The clustering output illustrates the ISODATA algorithm’s strength in partitioning spectrally complex landscapes without the need for predefined class labels. This capability is especially useful in scenarios where ground truth data are limited or unavailable, making the approach a practical and efficient preliminary step for burned area mapping. Although manual interpretation of fire-related clusters is still required to link spectral groupings with real-world phenomena, this method significantly simplifies the classification task by reducing data dimensionality and highlighting candidate regions for further analysis. As such, it facilitates rapid and large-scale post-fire assessments while maintaining sufficient sensitivity to spectral variability across diverse terrain and vegetation types.
Figure 10 presents the spatial distribution of ten spectral clusters generated by the ISODATA algorithm, applied to the standardized reflectance values from Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR). The clustering process was unsupervised and allowed for the dynamic merging and splitting of clusters, enabling more accurate delineation of the spectral variability across the landscape. The resulting clusters capture distinct patterns that correspond to the differences in land cover type, vegetation density, and surface moisture conditions throughout the study area. This segmentation provides a valuable basis for post-classification refinement and context-aware burned area analysis.
Figure 10 provides a spatial representation of the ISODATA clustering results, offering insight into the spectral segmentation of the post-fire landscape. The clustering was performed in a three-dimensional spectral space composed of Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR), using the ISO Cluster tool in ArcGIS Pro. The algorithm dynamically adjusted cluster boundaries by iteratively merging spectrally similar clusters and eliminating or reassigning small, spectrally unstable ones.
The application of adaptive parameters—such as a merge threshold of 0.5 and a minimum cluster size of 20 pixels—enabled effective handling of both dominant and subtle spectral patterns. As a result, the final output captured meaningful variations in the land cover and post-fire surface conditions despite the absence of labeled training data. This is particularly evident in the concentration of distinct clusters along mountainous areas, where topography, vegetation density, and burn severity interact to create complex spectral signatures.
Clusters corresponding to heavily burned regions were spatially coherent and largely aligned with known fire-affected zones, indicating the algorithm’s sensitivity to post-fire spectral degradation. Conversely, clusters distributed across lowland agricultural fields, urban areas, and intact vegetation zones exhibited more spectral diversity, suggesting that ISODATA effectively distinguished between the fire-related and background variabilities.
Overall, the unsupervised clustering approach shown in Figure 10 demonstrates strong potential for pre-classification stratification and rapid burned area assessment in complex, data-scarce environments. It also lays a foundation for subsequent refinement steps, such as rule-based masking or integration with supervised classification outputs.
Figure 11 visualizes the ISODATA clustering results in a three-dimensional spectral feature space that is defined by Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR). Each point represents a pixel sample assigned to one of the ten spectral clusters (k = 10), while the semi-transparent volumetric surfaces illustrate the spatial extent and overlap of cluster boundaries in the spectral domain. This representation offers an intuitive understanding of how the algorithm partitions the multidimensional reflectance space, revealing the separability and internal cohesion of each cluster. It also highlights the spectral complexity of the post-fire landscape and provides evidence of both well-defined and transitional spectral zones.
Figure 11 offers a comprehensive view of the ISODATA clustering results, combining both spectral domain representation and spatially explicit burned area delineation. In the three-dimensional spectral feature space defined by Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR), each pixel is assigned to one of ten spectral clusters. The colored points represent cluster membership, while the semi-transparent surfaces indicate the volumetric boundaries derived from the spectral distribution of each cluster. This visualization provides valuable insights into the internal structure, separability, and spectral overlap among clusters, facilitating the interpretation of complex land surface conditions.
Several clusters, particularly Clusters 1, 5, and 6, appear compact and well-defined, reflecting homogeneous spectral characteristics likely associated with stable land cover types such as dense forest or bare ground. In contrast, Clusters 3 and 9 exhibit broader and more elongated distributions, suggesting internal variability, potentially due to partial burning or mixed vegetation conditions. Notably, Cluster 3 is spectrally distinct with elevated Band 4 values and reduced Band 8 reflectance, characteristics that are typical of burned surfaces. Overlaps between Clusters 4 and 7 reveal transitional spectral zones that may correspond to areas of vegetation stress, degradation, or heterogeneous terrain.
Following this spectral analysis, specific clusters, particularly those aligned with known post-fire spectral responses, were manually selected and merged to produce the final burned area map. The resulting spatial output captures extensive and continuous burn scars concentrated along the mountainous spine of the study area, with pronounced damage in Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun. The mapped burn extent shows strong correspondence with the topographic gradients and historical fire perimeters, despite the absence of supervised training data.
Figure 11 highlights the effectiveness of the ISODATA clustering algorithm in extracting meaningful spectral and spatial patterns from post-fire landscapes. By integrating spectral feature space visualization with spatial classification results, the figure enhances the interpretability of unsupervised outputs and facilitates validation of burned area mapping. The derived burned extent shows strong agreement with known fire perimeters and topographic gradients, indicating the algorithm’s ability to delineate complex land cover structures without the need for labeled training data.
The method proved particularly effective in capturing continuous fire scars across coniferous forest zones and upper slopes—areas typically prone to high-intensity fire behavior. Despite its unsupervised nature, the ISODATA-based classification results closely aligned with the field observations and reference fire maps, demonstrating the practical utility of this approach for rapid burn area assessment in both emergency response and retrospective analysis contexts, especially in data-scarce environments.
Figure 12 provides a spatial visualization of the final burned area map derived from the ISODATA clustering results. The map highlights the distribution of fire-affected regions across the study area, with burned pixels prominently concentrated along the mountainous central spine encompassing Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun. This spatial pattern reflects the topographic and vegetation-driven propagation of high-intensity wildfires and further supports the effectiveness of the unsupervised classification in delineating continuous burn scars in complex terrain.
Figure 12 illustrates the spatial distribution of the unsupervised classification results generated by the ISODATA algorithm using 10 clusters, based on Sentinel-2 imagery incorporating Bands 4 (Red), 8 (NIR), and 12 (SWIR). The classification successfully differentiated distinct spectral patterns within the heterogeneous landscape of the study area. Notably, Cluster 3—spatially concentrated along the central mountainous zone—exhibits spectral characteristics that are typically associated with fire-affected surfaces, including reduced reflectance in the near-infrared band and elevated reflectance in the shortwave infrared band. These spectral signatures are consistent with post-fire vegetation stress and charred surfaces, corroborating the presence of the burned areas previously reported in the ancillary fire incident records.
The spatial coherence and thematic relevance of the classified clusters underscore the utility of the ISODATA algorithm in delineating ecologically meaningful land surface patterns without prior labeling. By relying solely on spectral separability, the method demonstrates robust performance in segmenting complex, post-disturbance environments. This capability is particularly advantageous in regions where labeled data are scarce or unavailable, supporting its applicability in rapid environmental assessment and disaster response contexts.
To further assess the severity of fire damage within the previously identified burned areas, a secondary unsupervised classification was conducted by applying the ISODATA algorithm with the number of clusters set to four. This reclassification was limited to pixels that were initially assigned to fire-related clusters, enabling a more nuanced stratification of burn severity based on the subtle variations in spectral response. The resulting sub-clusters exhibit gradational differences in reflectance, particularly within the NIR and SWIR bands, which are known to correlate with vegetation degradation and soil exposure levels. This hierarchical classification approach enhances the interpretability of fire impacts and supports more detailed post-fire assessments for ecological monitoring and land management planning.
Figure 13 presents the spatial distribution of the fire severity levels derived from the secondary ISODATA classification, which was applied exclusively to the pixels previously identified as burned areas. By reducing the number of clusters to four, the classification enables a more detailed stratification of burn severity—ranging from low to high—based on the spectral variations within the fire-affected zones. Each severity class reflects distinct spectral characteristics, particularly in the NIR and SWIR bands, which are known to be sensitive to vegetation loss, soil exposure, and combustion intensity. This refined classification offers enhanced insight into the spatial heterogeneity of fire impacts across the central mountainous regions and supports more targeted post-fire recovery planning.
The results shown in Figure 13 reflect the second-stage unsupervised classification applied exclusively to previously delineated burned areas. The ISODATA algorithm was used to reclassify the extracted fire perimeter into four spectral clusters, with the following parameter configuration: a maximum of four classes, 20 iterations, up to five cluster merges per iteration, a merge distance threshold of 0.5, a minimum of 20 samples per cluster, and a skip factor of 10. Euclidean distance was employed as the similarity metric to group spectrally homogeneous pixels.
The resulting clusters were interpreted through a visual comparison with false-color composite imagery generated from Sentinel-2 Bands 12 (SWIR), 8 (NIR), and 4 (Red). Clusters exhibiting spectral characteristics similar to unburned vegetated areas, particularly those with higher NIR reflectance and lower SWIR response, were assigned to the low-severity class. Conversely, clusters characterized by strong SWIR reflectance and reddish tones in the composite imagery were identified as high-severity burns. Intermediate spectral responses were assigned to moderate-severity classes.
This approach enabled a more nuanced delineation of intra-burn heterogeneity by capturing subtle spectral variations within the burned landscape. The stratification of fire severity into four discrete levels facilitates a more detailed assessment of fire impact across the affected region. Moreover, this method demonstrates the operational potential of unsupervised classification techniques in translating satellite-derived spectral patterns into ecologically meaningful burn severity categories, particularly in the absence of labeled ground reference data.
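The two-stage procedure (cluster the full scene, mask the burn-signature cluster, then re-cluster only those pixels into four severity levels) can be approximated with k-means standing in for ISODATA. This is a hedged sketch on synthetic reflectance; the merge and split refinements are omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic Band 4/8/12 reflectance: unburned vegetation plus a burned
# patch whose NIR falls and SWIR rises with increasing severity.
veg = np.column_stack([rng.normal(0.04, 0.01, 500),
                       rng.normal(0.35, 0.05, 500),
                       rng.normal(0.12, 0.03, 500)])
severity = rng.uniform(0, 1, 300)                  # latent burn severity
burned = np.column_stack([0.06 + 0.04 * severity,
                          0.25 - 0.15 * severity,  # NIR drops
                          0.20 + 0.20 * severity]) # SWIR rises
burned += rng.normal(0, 0.01, (300, 3))
X = np.vstack([veg, burned])

# Stage 1: cluster the full scene, then flag the cluster whose mean
# spectrum best matches the burn signature (low NIR, high SWIR).
stage1 = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
centers = stage1.cluster_centers_
burn_cluster = int(np.argmax(centers[:, 2] - centers[:, 1]))  # SWIR - NIR
burn_mask = stage1.labels_ == burn_cluster

# Stage 2: re-cluster only the flagged pixels into four severity levels.
stage2 = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X[burn_mask])

# Rank the four classes by SWIR-NIR contrast: higher rank = more severe.
order = np.argsort(stage2.cluster_centers_[:, 2] - stage2.cluster_centers_[:, 1])
severity_level = {int(c): rank for rank, c in enumerate(order)}
```

In the study itself, the interpretation step was visual (against the SWIR/NIR/Red false-color composite) rather than the automatic SWIR-NIR ranking used here for brevity.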

3.5. Comparative Analysis of Burned Area Estimation Across Classification Methods

To evaluate the spatial variability and classification reliability of wildfire mapping techniques, a comprehensive comparative analysis was conducted using six distinct classification methods. These methods encompassed three major categories: spectral index-based approaches, supervised machine learning models, and unsupervised clustering techniques. The first category included two widely adopted spectral indices—the dNBR (differenced Normalized Burn Ratio) and the dNDVI (differenced Normalized Difference Vegetation Index)—which measure changes in vegetation and burn severity based on pre- and post-fire satellite reflectance values. The second category comprised three supervised classification models using support vector machines (SVMs), each employing a different kernel function: linear, polynomial, and radial basis function (RBF). These SVM models were trained on labeled fire and non-fire samples to evaluate their capacity for accurate and generalizable classification. The third approach involved an unsupervised classification method, ISODATA (Iterative Self-Organizing Data Analysis Technique Algorithm), which automatically identifies clusters based on spectral similarity without the need for labeled training data.
These six classification techniques were applied to five wildfire-affected administrative districts located in northern Gyeongsangbuk-do, South Korea—namely Andong, Uiseong, Cheongsong, Yeongyang, and Yeongdeok—where large-scale forest fires occurred in 2025. These regions exhibit diverse topographic and environmental conditions, making them suitable for assessing the effectiveness and adaptability of various classification strategies. By comparing these methods across spatially and temporally heterogeneous wildfire events, this study aims to identify the relative strengths, limitations, and operational applicability of each approach, thereby contributing to more robust wildfire damage monitoring and post-fire recovery planning frameworks.
Among the various spectral index calculations performed in this study, eight dNDVI images were generated across different wildfire events and acquisition dates to capture vegetation changes before and after fire incidents. To determine an appropriate threshold for delineating burned areas, we referred to the established literature and selected a cutoff value of approximately 0.1, which has been widely validated in previous remote sensing studies. This threshold was adopted based on its demonstrated robustness across different ecosystems, sensor platforms, and methodological contexts.
Several studies support the practical utility of this ~0.1 threshold for identifying vegetation loss and burn severity. For instance, ref. [65] applied a dNDVI threshold near 0.1 to UAV-derived imagery and found it effective in delineating moderate- to high-severity burn zones, showing strong agreement with ground truth data. Similarly, ref. [66] utilized a comparable threshold in Mediterranean forest environments, reporting substantial spectral reductions consistent with satellite-based severity metrics. Ref. [37] confirmed the applicability of the threshold across multiple forest types, highlighting its capability to detect significant post-fire spectral change, and ref. [67] employed a similar value in Turkish forest regions, observing strong alignment with both ecological assessments and machine learning-based classifications. This consistent use and validation across diverse ecosystems, sensors, and methodological frameworks underscore the robustness of the 0.1 cutoff as a baseline for post-fire vegetation damage mapping.
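As a minimal illustration, the dNDVI thresholding step can be expressed as follows; the reflectance values are hypothetical:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), guarded against zero division."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

def dndvi_burn_mask(red_pre, nir_pre, red_post, nir_post, threshold=0.1):
    """Flag pixels whose dNDVI (pre-fire NDVI minus post-fire NDVI)
    exceeds the literature-derived ~0.1 threshold."""
    dndvi = ndvi(nir_pre, red_pre) - ndvi(nir_post, red_post)
    return dndvi > threshold

# Toy example: pixel 0 burns (Red rises, NIR collapses); pixel 1 does not.
red_pre = np.array([0.05, 0.05])
nir_pre = np.array([0.40, 0.40])
red_post = np.array([0.10, 0.05])
nir_post = np.array([0.15, 0.39])
print(dndvi_burn_mask(red_pre, nir_pre, red_post, nir_post))  # [ True False]
```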
Figure 14 presents the comparative results of different classification methods, including dNDVI-based thresholding at 0.1, for estimating wildfire-affected areas across multiple regions.
Figure 14 presents the estimated burned areas (in hectares) derived from each method across the five regions. The histogram reveals notable differences in the predicted extent of the burned areas depending on the classification technique used. In particular, the dNBR exhibited a tendency to overestimate fire-affected areas compared to the other approaches, while the RBF-based SVM model and dNDVI yielded more conservative estimates. ISODATA clustering produced comparable results to the dNBR in terms of the total area but differed in spatial distribution across the administrative boundaries. The dNDVI-based results using the 0.1 threshold demonstrated high consistency with the SVM-RBF classification, supporting its effectiveness in mapping wildfire impacts in complex forested landscapes.
This inter-method comparison highlights the sensitivity of burned area estimation to the choice of classification algorithm and underscores the importance of method selection in operational fire damage assessment.
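For reference, converting a classified burn mask into the per-method hectare figures compared above is a single step, assuming all bands are resampled to the 10 m Sentinel-2 grid (Band 12 is natively 20 m):

```python
import numpy as np

# Sentinel-2 Bands 4 and 8 have 10 m pixels, so one pixel covers 0.01 ha.
PIXEL_AREA_HA = 10 * 10 / 10_000

def burned_area_ha(burn_mask):
    """Convert a boolean burned-pixel mask to an area in hectares."""
    return float(np.count_nonzero(burn_mask)) * PIXEL_AREA_HA

mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 30:70] = True              # 40 x 40 = 1,600 burned pixels
print(round(burned_area_ha(mask), 2))  # 16.0
```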

4. Discussion

This study conducted a comprehensive comparative assessment of three key methodologies for post-fire burn area mapping using Sentinel-2 imagery: spectral index differencing (dNBR and dNDVI), supervised classification using support vector machines (SVMs), and unsupervised ISODATA clustering. The heterogeneous and topographically complex wildfire landscape of Gyeongbuk Province provided a robust test bed for evaluating each method’s applicability and limitations.
Spectral index-based approaches, particularly the differenced Normalized Burn Ratio (dNBR), have long been favored for their simplicity, computational efficiency, and operational relevance in post-fire assessments [6,29]. In this study, we adopted the burn severity thresholds recommended by the U.S. Geological Survey (USGS), which facilitated consistent classification performance across varied land cover types. However, such thresholds remain static and are sensitive to scene-dependent spectral variability, limiting their effectiveness in transitional or fragmented landscapes. While the dNBR was most effective in detecting high-severity burn areas with clear NIR and SWIR contrasts, its sensitivity declined in areas with mixed vegetation or partial canopy loss. Complementarily, the differenced NDVI (dNDVI) captured moderate and low-severity burn signals more effectively, especially in vegetated areas with sub-canopy degradation. When combined, these indices yielded a more balanced depiction of fire severity, yet their performance was still constrained by threshold dependency and regional calibration challenges [13,68].
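The dNBR computation and USGS-style severity breakpoints can be sketched as follows; the breakpoints shown are the commonly cited Key and Benson values and may differ slightly from the exact thresholds applied in the study:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / np.maximum(nir + swir, 1e-6)

# Approximate Key & Benson (USGS FIREMON) dNBR severity breakpoints.
BINS = np.array([-0.25, -0.10, 0.10, 0.27, 0.44, 0.66])
LABELS = ["enhanced regrowth (high)", "enhanced regrowth (low)", "unburned",
          "low severity", "moderate-low severity",
          "moderate-high severity", "high severity"]

def dnbr_severity(nir_pre, swir_pre, nir_post, swir_post):
    """Bin each pixel's dNBR into one of the seven severity classes."""
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    return np.digitize(dnbr, BINS), dnbr

# One severely burned pixel: NIR collapses, SWIR rises after the fire.
classes, dnbr = dnbr_severity(np.array([0.40]), np.array([0.15]),
                              np.array([0.12]), np.array([0.30]))
print(LABELS[int(classes[0])], round(float(dnbr[0]), 3))  # high severity 0.883
```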
To address these limitations, supervised machine learning classification using SVM was employed, which provided greater adaptability to spectral complexity. Among the tested kernels, the radial basis function (RBF) kernel achieved the highest overall accuracy, which was consistent with prior findings that underscore its strength in modeling nonlinear class boundaries in high-dimensional spectral spaces [59,60]. The RBF kernel effectively delineated the burn extents in ambiguous zones, such as forest edges, water-adjacent vegetation, and shaded topography, thereby reducing omission and commission errors. The polynomial kernel also performed well, particularly in transitional zones with curved spectral gradients, although it showed a higher risk of overfitting. In contrast, the linear kernel, while computationally efficient and easily interpretable, lacked the flexibility to handle nonlinear spectral variation, often misclassifying edge pixels and producing lower precision.
Following the initial model development, further adaptations were incorporated to extend the SVM framework to complex classification scenarios. While SVMs are fundamentally binary classifiers, they were successfully extended to multi-class problems using one-vs.-one and one-vs.-all schemes. Recent advances in margin-based formulations have also enhanced multi-class separation by optimizing inter-class decision boundaries [69]. In addition, class imbalance—a prevalent issue in post-disaster remote sensing datasets—was mitigated through cost-sensitive learning and multi-scale feature fusion strategies, which improved model stability and the detection of minority classes [70]. These enhancements contributed to the classifier’s robustness across varied burn severity levels and land cover types.
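A sketch of the cost-sensitive remedy described above, using class weighting in scikit-learn on a synthetic imbalanced sample (an illustration, not the configuration used in the study):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Imbalanced toy sample: many unburned pixels, few burned ones
# (Band 4 / Band 8 / Band 12 reflectance stand-ins).
unburned = rng.normal([0.05, 0.35, 0.15], 0.03, size=(950, 3))
burned = rng.normal([0.08, 0.15, 0.30], 0.03, size=(50, 3))
X = np.vstack([unburned, burned])
y = np.array([0] * 950 + [1] * 50)

# class_weight="balanced" rescales the penalty C inversely to class
# frequency, a standard cost-sensitive remedy for minority-class
# omission. For multi-class severity labels, SVC applies one-vs-one
# voting internally; sklearn.multiclass.OneVsRestClassifier provides
# the one-vs-rest alternative mentioned in the text.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```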
The ISODATA clustering algorithm, employed as an unsupervised approach, provided a valuable alternative in data-scarce or time-sensitive contexts. Though inherently limited by its lack of semantic class labels and reliance on post hoc interpretation, ISODATA proved effective in identifying major spectral groupings across the burn scar. A two-stage clustering strategy—the initial isolation of burned areas followed by intra-class stratification—enabled meaningful segmentation of fire-affected zones. Its rapid implementation and minimal dependence on ground truth data make it suitable for preliminary assessments and emergency response mapping [40].
However, due to the absence of verified ground truth data from field surveys, pixel-level performance metrics (e.g., accuracy, precision, recall) could not be computed for any of the six classification methods. This limited the scope of quantitative validation and highlights the need for future work incorporating reference data.
Rather than identifying a universally optimal method, the results of this study emphasize the importance of selecting classification strategies tailored to data availability, ecological context, and application-specific demands. Spectral indices offer rapid, interpretable outputs, but are constrained by fixed thresholding. In contrast, machine learning classifiers such as SVM offer superior adaptability, especially when integrated with advanced modeling techniques to address spectral ambiguity and class imbalance. Unsupervised methods, while less precise, complement supervised workflows by enabling rapid segmentation when labeled data are lacking.
The integration of these approaches into a hybrid classification framework—using index-based pre-screening followed by data-driven refinement—presents a promising direction for operational wildfire mapping. Such integration leverages the strengths of each method, mitigates their individual limitations, and enhances the spatial resolution and thematic accuracy of burn severity maps. Moreover, the demonstrated utility of Sentinel-2 imagery, with its high spatial resolution and temporal frequency, reinforces its role as a primary data source for near-real-time fire monitoring. Continued refinement of kernel-based learning models and the incorporation of contextual variables such as topography, vegetation structure, and temporal dynamics will further improve classification transferability and resilience. These advancements are essential for developing scalable, robust frameworks to support ecosystem recovery, forest management, and climate-adaptive decision-making in an era of increasing wildfire activity.

5. Conclusions

This study presents a comprehensive comparative evaluation of burned area mapping techniques using Sentinel-2 imagery, incorporating spectral index-based methods (dNBR and dNDVI), unsupervised clustering (ISODATA), and supervised machine learning (SVM with multiple kernel types). The results highlight that no single method is universally optimal; rather, each exhibits distinct advantages depending on the data conditions, landscape heterogeneity, and operational objectives.
The dNBR and dNDVI indices provided efficient, interpretable tools for rapid post-fire assessment. The dNBR proved effective in identifying areas of high burn severity, while the dNDVI demonstrated greater sensitivity to moderate and low-intensity fires. However, both indices exhibited reduced performance in spectrally complex or transitional environments—especially in mixed vegetation zones—underscoring the limitations of threshold-based methods when applied to heterogeneous post-fire landscapes.
Support vector machines, particularly those using the radial basis function (RBF) kernel, significantly outperformed traditional indices in terms of classification accuracy, precision, and recall. The RBF kernel effectively captured nonlinear class boundaries and demonstrated robustness across varying spectral contexts, including challenging zones such as water–vegetation interfaces. These findings are consistent with recent research emphasizing the superior adaptability of kernel-based SVM models in wildfire mapping and post-fire land cover classification [71,72,73,74,75,76].
Unsupervised ISODATA clustering introduced a practical, label-free alternative for burned area delineation. While its precision was lower than that of the supervised models, ISODATA proved particularly useful in data-scarce or time-sensitive scenarios, rapidly segmenting fire-affected areas into spectral groupings without requiring labeled inputs. The two-stage ISODATA process—initial burned area identification followed by severity stratification—reflects emerging interest in flexible, hybrid approaches that balance speed with spatial detail.
A key contribution of this study is the development of a hybrid classification framework that integrates spectral index-based thresholding with supervised machine learning. This modular workflow harnesses the interpretability of indices and the adaptability of SVMs, improving classification robustness and spatial precision. Such integration advances the scalability and transferability of burn mapping systems across diverse ecological and topographic settings, which is in line with recent efforts to operationalize remote sensing-based fire assessment [77,78].
Looking forward, further improvements in classification and prediction accuracy can be achieved by incorporating multi-temporal satellite observations, topographic features, and climate-related variables into ensemble classification systems. Recent studies have demonstrated that leveraging multi-temporal data significantly enhances model performance by capturing seasonal and phenological dynamics [79,80,81]. Moreover, the integration of topographic and climatic variables—such as elevation and precipitation—further improves classification robustness in heterogeneous landscapes [82,83]. Deep learning frameworks, including temporal convolutional networks and hybrid ensemble models, have shown notable advantages in modeling complex spatio-temporal relationships [84,85]. These approaches are particularly effective when combined with transfer learning and data fusion strategies, enabling generalized applications across diverse regions and environmental contexts [86].
Building on these advancements, future research should explore the fusion of spatio-temporal deep learning models with traditional classification schemes to support both pre- and post-fire analysis. This would enable the development of highly adaptable, automated systems for large-scale wildfire monitoring, supporting real-time situational awareness and long-term ecosystem recovery planning in the context of intensifying fire regimes and global environmental change.

Author Contributions

Conceptualization, S.-H.L., M.-H.L., T.-H.K., H.-R.C., H.-S.Y. and S.-J.L.; methodology, S.-H.L. and S.-J.L.; software, S.-H.L. and S.-J.L.; validation, S.-H.L., S.-J.L. and H.-S.Y.; formal analysis, S.-H.L. and S.-J.L.; investigation, S.-H.L. and S.-J.L.; resources, S.-H.L. and S.-J.L.; data curation, S.-H.L. and S.-J.L.; writing—original draft preparation, S.-H.L. and S.-J.L.; writing—review and editing, M.-H.L., T.-H.K., H.-R.C. and H.-S.Y.; visualization, S.-H.L. and S.-J.L.; supervision, H.-S.Y.; project administration, H.-S.Y.; funding acquisition, H.-R.C. and H.-S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant, funded by the Korea government (MSIT) (RS-2023-00248092).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Flannigan, M.D.; Stocks, B.J.; Turetsky, M.R.; Wotton, B.M. Impacts of climate change on fire activity and fire management in the circumboreal forest. Glob. Chang. Biol. 2009, 15, 549–560. [Google Scholar] [CrossRef]
  2. Jolly, W.M.; Cochrane, M.A.; Freeborn, P.H.; Holden, Z.A.; Brown, T.J.; Williamson, G.J.; Bowman, D.M.J.S. Climate-induced variations in global wildfire danger from 1979 to 2013. Nat. Commun. 2015, 6, 7537. [Google Scholar] [CrossRef] [PubMed]
  3. Masson-Delmotte, V.; Zhai, P.; Pirani, A.; Connors, S.L.; Péan, C.; Berger, S.; Zhou, B. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. In Climate Change 2021: The Physical Science Basis; Cambridge University Press: Cambridge, UK, 2021. [Google Scholar] [CrossRef]
  4. Chuvieco, E.; Mouillot, F.; van der Werf, G.R.; San Miguel, J.; Tanase, M.; Koutsias, N.; Padilla, M. Historical background and current developments for mapping burned area from satellite Earth observation. Remote Sens. Environ. 2019, 225, 45–64. [Google Scholar] [CrossRef]
  5. Rogers, B.M.; Balch, J.K.; Goetz, S.J.; Lehmann, C.E.R.; Turetsky, M. Focus on changing fire regimes: Interactions with climate, ecosystems, and society. Environ. Res. Lett. 2020, 15, 030201. [Google Scholar] [CrossRef]
  6. Saulino, L.; Rita, A.; Migliozzi, A.; Maffei, C.; Allevato, E.; Garonna, A.P.; Saracino, A. Detecting burn severity across Mediterranean forest types by coupling medium-spatial resolution satellite imagery and field data. Remote Sens. 2020, 12, 741. [Google Scholar] [CrossRef]
  7. Pelletier, F.; Eskelson, B.N.I.; Monleon, V.J.; Tseng, Y. Using Landsat imagery to assess burn severity of national forest inventory plots. Remote Sens. 2021, 13, 1935. [Google Scholar] [CrossRef]
  8. Stambaugh, M.C.; Hammer, L.D.; Godfrey, R. Performance of burn-severity metrics and classification in oak woodlands and grasslands. Remote Sens. 2015, 7, 10501–10522. [Google Scholar] [CrossRef]
  9. Kurbanov, E.; Vorobev, O.; Lezhnin, S.; Sha, J.; Wang, J.; Li, X.; Cole, J.; Dergunov, D.; Wang, Y. Remote sensing of forest burnt area, burn severity, and post-fire recovery: A review. Remote Sens. 2022, 14, 4714. [Google Scholar] [CrossRef]
  10. Escuin, S.; Navarro, R.; Fernández, P. Fire severity assessment by using NBR (Normalized Burn Ratio) and NDVI (Normalized Difference Vegetation Index) derived from LANDSAT TM/ETM images. Int. J. Remote Sens. 2008, 29, 1053–1073. [Google Scholar] [CrossRef]
  11. Miller, J.D.; Thode, A.E. Quantifying burn severity in a heterogeneous landscape with a relative version of the delta Normalized Burn Ratio (dNBR). Remote Sens. Environ. 2007, 109, 66–80. [Google Scholar] [CrossRef]
  12. Cardil, A.; Mola-Yudego, B.; Blázquez-Casado, Á.; González-Olabarria, J.R. Fire and burn severity assessment: Calibration of Relative Differenced Normalized Burn Ratio (RdNBR) with field data. Remote Sens. 2019, 11, 760. [Google Scholar] [CrossRef] [PubMed]
  13. Parks, S.A.; Dillon, G.K.; Miller, C. A new metric for quantifying burn severity: The Relativized Burn Ratio. Remote Sens. 2014, 6, 1827–1844. [Google Scholar] [CrossRef]
  14. He, K.; Shen, X.; Anagnostou, E.N. A global forest burn severity dataset from Landsat imagery (2003–2016). Earth Syst. Sci. Data 2024, 16, 3061–3081. [Google Scholar] [CrossRef]
  15. Ebadati, B.; Attarzadeh, R.; Alikhani, M.; Youssefi, F.; Pirasteh, S. Rapid Post-Wildfire Burned Vegetation Assessment with Google Earth Engine (Case Study: 2023 Canada Wildfires). ISPRS Arch. 2024, XLVIII-3/W3, 45–52. [Google Scholar] [CrossRef]
  16. Alcaras, E.; Costantino, D.; Guastaferro, F.; Parente, C.; Pepe, M. Normalized Burn Ratio Plus (NBR+): A new index for Sentinel-2 imagery. Remote Sens. 2022, 14, 1727. [Google Scholar] [CrossRef]
  17. Seydi, S.T.; Sadegh, M. Improved burned area mapping using monotemporal Landsat-9 imagery and convolutional shift-transformer. Measurement 2023, 216, 112961. [Google Scholar] [CrossRef]
  18. Beltrán-Marcos, D.; Suárez-Seoane, S.; Fernández-Guisuraga, J.M.; Fernández-García, V.; Marcos, E.; Calvo, L. Relevance of UAV and Sentinel-2 Data Fusion for Estimating Topsoil Organic Carbon after Forest Fire. Geoderma 2023, 430, 116290. [Google Scholar] [CrossRef]
  19. Gillespie, M.; Okin, G.S.; Meyer, T.; Ochoa, F. Evaluating burn severity and post-fire woody vegetation regrowth in the Kalahari using UAV imagery and random forest algorithms. Remote Sens. 2024, 16, 3943. [Google Scholar] [CrossRef]
  20. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  21. Khatami, R.; Mountrakis, G.; Stehman, S.V. A meta-analysis of remote sensing research on supervised pixel-based land-cover image classification processes: General guidelines for practitioners and future research. Remote Sens. Environ. 2016, 177, 89–100. [Google Scholar] [CrossRef]
  22. Tan, Y.-C.; Duarte, L.; Teodoro, A.C. Comparative study of random forest and support vector machine for land cover classification and post-wildfire change detection. Land 2024, 13, 1878. [Google Scholar] [CrossRef]
  23. Klimas, K.B.; Yocom, L.L.; Murphy, B.P.; David, S.R.; Belmont, P.; Lutz, J.A.; DeRose, R.J.; Wall, S.A. A machine learning model to predict wildfire burn severity for pre-fire risk assessments, Utah, USA. Fire Ecol. 2025, 21, 8. [Google Scholar] [CrossRef]
  24. Moghim, S.; Mehrabi, N. Comparing random forest and logistic regression models for wildfire susceptibility mapping in the Okanogan region, USA and James Bay, Canada. Fire Ecol. 2024, 20, 21. [Google Scholar] [CrossRef]
  25. Seydi, S.T.; Akhoondzadeh, M.; Amani, M.; Mahdavi, S. Wildfire Damage Assessment over Australia Using Sentinel-2 Imagery and MODIS Land Cover Product within the Google Earth Engine Cloud Platform. Remote Sens. 2021, 13, 220. [Google Scholar] [CrossRef]
  26. Lee, D.; Son, S.; Bae, J.; Park, S.; Seo, J.; Seo, D.; Lee, Y.; Kim, J. Single-Temporal Sentinel-2 for Analyzing Burned Area Detection Methods: A Study of 14 Cases in Republic of Korea Considering Land Cover. Remote Sens. 2024, 16, 884. [Google Scholar] [CrossRef]
  27. Louis, J.; Debaecker, V.; Pflug, B.; Main-Knorn, M.; Bieniarz, J.; Mueller-Wilm, U.; Cadau, E.; Gascon, F. Sentinel-2 Sen2Cor: L2A Processor for Users. In Proceedings of the Living Planet Symposium 2016, Prague, Czech Republic, 9–13 May 2016; ESA SP-740. Available online: https://elib.dlr.de/107381 (accessed on 17 May 2025).
  28. Howe, A.A.; Parks, S.A.; Harvey, B.J.; Saberi, S.J.; Lutz, J.A.; Yocom, L.L. Comparing Sentinel-2 and Landsat 8 for Burn Severity Mapping in Western North America. Remote Sens. 2022, 14, 5249. [Google Scholar] [CrossRef]
  29. Key, C.H.; Benson, N.C. Landscape Assessment: Ground Measure of Severity, the Composite Burn Index; and Remote Sensing of Severity, the Normalized Burn Ratio. In FIREMON: Fire Effects Monitoring and Inventory System; USDA Forest Service, Rocky Mountain Research Station: Ogden, UT, USA, 2006. Available online: https://www.usgs.gov/publications/landscape-assessment-ground-measure-severity-composite-burn-index-and-remote-sensing (accessed on 10 May 2025).
  30. Henry, M.C.; Maingi, J.K. Evaluating Landsat- and Sentinel-2-Derived Burn Indices to Map Burn Scars in Chyulu Hills, Kenya. Fire 2024, 7, 472. [Google Scholar] [CrossRef]
  31. Giddey, B.L.; Baard, J.A.; Kraaij, T. Verification of the Differenced Normalised Burn Ratio (dNBR) as an Index of Fire Severity in Afrotemperate Forest. S. Afr. J. Bot. 2022, 146, 348–353. [Google Scholar] [CrossRef]
  32. Gholinejad, S.; Khesali, E. An Automatic Procedure for Generating Burn Severity Maps from the Satellite Images-Derived Spectral Indices. Int. J. Digit. Earth 2021, 14, 1659–1673. [Google Scholar] [CrossRef]
  33. Ghazali, N.N.; Mohamed Saraf, N.; Abdul Rasam, A.R.; Othman, A.N.; Salleh, S.A.; Md Saad, N. Forest Fire Severity Level Using dNBR Spectral Index. Rev. Int. Géomatique 2025, 34, 89–101. [Google Scholar] [CrossRef]
  34. Franco, M.; Mundo, I.; Veblen, T. Field-Validated Burn-Severity Mapping in North Patagonian Forests. Remote Sens. 2020, 12, 214. [Google Scholar] [CrossRef]
  35. Dindaroglu, T.; Babur, E.; Yakupoğlu, T.; Rodrigo-Comino, J.; Cerdà, A. Evaluation of geomorphometric characteristics and soil properties after a wildfire using Sentinel-2 MSI imagery for future fire-safe forest. Fire Saf. J. 2021, 122, 103318. [Google Scholar] [CrossRef]
  36. Avetisyan, D.; Stankova, N.; Dimitrov, Z. Assessment of Spectral Vegetation Indices Performance for Post-Fire Monitoring of Different Forest Environments. Fire 2023, 6, 290. [Google Scholar] [CrossRef]
  37. Avetisyan, D.; Stankova, N. Observation of spectral indices performance for post-fire forest monitoring. Aerospace Res. Bulg. 2024, 36, e06. [Google Scholar] [CrossRef]
  38. Furuya, D.E.G.; Aguiar, J.A.F.; Estrabis, N.V.; Pinheiro, M.M.F.; Furuya, M.T.G.; Pereira, D.R.; Ramos, A.P.M. A Machine Learning Approach for Mapping Forest Vegetation in Riparian Zones in an Atlantic Biome Environment Using Sentinel-2 Imagery. Remote Sens. 2020, 12, 4086. [Google Scholar] [CrossRef]
  39. De Luca, G.; Silva, J.M.N.; Di Fazio, S.; Modica, G. Integrated Use of Sentinel-1 and Sentinel-2 Data and Open-Source Machine Learning Algorithms for Land Cover Mapping in a Mediterranean Region. Eur. J. Remote Sens. 2022, 55, 52–70. [Google Scholar] [CrossRef]
  40. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870. [Google Scholar] [CrossRef]
  41. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  42. Xu, Y.; Zomer, S.; Brereton, R. Support vector machines: A recent method for classification in chemometrics. Crit. Rev. Anal. Chem. 2006, 36, 177–188. [Google Scholar] [CrossRef]
  43. Somvanshi, M.; Chavan, P. A review of machine learning techniques using decision tree and support vector machine. In Proceedings of the 2016 International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 12–13 August 2016; pp. 1–7. [Google Scholar] [CrossRef]
  44. Aslani, M.; Seipel, S. Efficient and decision boundary aware instance selection for support vector machines. Inf. Sci. 2021, 577, 579–598. [Google Scholar] [CrossRef]
  45. Maindonald, J. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2007; pp. 1–3. [Google Scholar] [CrossRef]
  46. Song, Y.; Liang, J.; Wang, F. An accelerator for support vector machines based on the local geometrical information and data partition. Int. J. Mach. Learn. Cybern. 2018, 10, 2389–2400. [Google Scholar] [CrossRef]
  47. Honeine, P.; Richard, C. Preimage problem in kernel-based machine learning. IEEE Signal Process. Mag. 2011, 28, 77–88. [Google Scholar] [CrossRef]
  48. Wang, Y.; Wang, C.; Deng, T.; Li, W. Multi-label feature selection based on nonlinear mapping. Inf. Sci. 2024, 680, 121168. [Google Scholar] [CrossRef]
  49. Wang, R.; Ying, X.; Xing, B.; Tong, X.; Chen, T.; Yang, J.; Shi, Y. Improving point cloud classification and segmentation via parametric veronese mapping. Pattern Recognit. 2023, 144, 109784. [Google Scholar] [CrossRef]
  50. Trzcinski, T.; Christoudias, M.; Lepetit, V.; Fua, P. Learning image descriptors with the boosting-trick. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 278–286. Available online: https://proceedings.neurips.cc/paper_files/paper/2012/hash/0a09c8844ba8f0936c20bd791130d6b6-Abstract.html (accessed on 17 May 2025).
  51. Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar] [CrossRef]
  52. Schölkopf, B.; Mika, S.; Burges, C.; Knirsch, P.; Müller, K.; Rätsch, G.; Smola, A. Input space versus feature space in kernel-based methods. IEEE Trans. Neural Netw. 1999, 10, 1000–1017. [Google Scholar] [CrossRef]
  53. Jampour, M.; Lepetit, V.; Mauthner, T.; Bischof, H. Pose-specific non-linear mappings in feature space towards multiview facial expression recognition. Image Vis. Comput. 2017, 58, 38–46. [Google Scholar] [CrossRef]
  54. Tiwari, P.; Dehdashti, S.; Obeid, A.; Marttinen, P.; Bruza, P. Kernel method based on non-linear coherent states in quantum feature space. J. Phys. A: Math. Theor. 2022, 55, 245301. [Google Scholar] [CrossRef]
  55. Hsu, C.-W.; Chang, C.-C.; Lin, C.-J. A Practical Guide to Support Vector Classification; Department of Computer Science, National Taiwan University: Taipei, Taiwan, 2010; Available online: https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf (accessed on 17 May 2025).
  56. Scikit-learn. Support Vector Machines. 2024. Available online: https://scikit-learn.org/stable/modules/svm.html (accessed on 17 May 2025).
  57. Scikit-learn. Hyperparameter Optimization with Grid Search. 2024. Available online: https://scikit-learn.org/stable/modules/grid_search.html (accessed on 17 May 2025).
  58. Bruzzone, L.; Fernández-Prieto, D. Classification of remote sensing images using radial-basis-function neural networks: A supervised training technique. Proc. SPIE 1998, 3500, 320–327. [Google Scholar] [CrossRef]
  59. Kavzoglu, T.; Colkesen, I. A kernel functions analysis for support vector machines for land cover classification. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 352–359. [Google Scholar] [CrossRef]
  60. Razaque, A.; Frej, M.; Almi’ani, M.; Alotaibi, M.; Alotaibi, B. Improved Support Vector Machine Enabled Radial Basis Function and Linear Variants for Remote Sensing Image Classification. Sensors 2021, 21, 4431. [Google Scholar] [CrossRef]
  61. Tan, R.; Ottewill, J.; Thornhill, N. Monitoring Statistics and Tuning of Kernel Principal Component Analysis with Radial Basis Function Kernels. IEEE Access 2020, 8, 198328–198342. [Google Scholar] [CrossRef]
  62. Wang, Q.; Shi, W.; Atkinson, P. Sub-pixel mapping of remote sensing images based on radial basis function interpolation. ISPRS J. Photogramm. Remote Sens. 2014, 92, 1–15. [Google Scholar] [CrossRef]
  63. Izquierdo-Verdiguier, E.; Gómez-Chova, L.; Bruzzone, L.; Camps-Valls, G. Semisupervised Kernel Feature Extraction for Remote Sensing Image Analysis. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5567–5578. [Google Scholar] [CrossRef]
  64. Ordiyasa, W.; Diqi, M.; Lustiyati, E.; Hiswati, M.; Salsabela, M. Smart Fire Safety: Analyzing Radial Basis Function Kernel in SVM for IoT-driven Smoke Detection. semanTIK 2024, 10, 159–166. [Google Scholar] [CrossRef]
  65. Carvajal-Ramírez, F.; Da Silva, J.; Agüera-Vega, F.; Martínez-Carricondo, P.; Serrano, J.; Moral, F. Evaluation of Fire Severity Indices Based on Pre- and Post-Fire Multispectral Imagery Sensed from UAV. Remote Sens. 2019, 11, 993. [Google Scholar] [CrossRef]
  66. Teodoro, A.; Amaral, A. A Statistical and Spatial Analysis of Portuguese Forest Fires in Summer 2016 Considering Landsat 8 and Sentinel 2A Data. Environments 2019, 6, 36. [Google Scholar] [CrossRef]
  67. Ibrahim, S.; Kose, M.; Adamu, B.; Jega, I. Remote Sensing for Assessing the Impact of Forest Fire Severity on Ecological and Socio-Economic Activities in Kozan District, Turkey. J. Environ. Stud. Sci. 2024, 15, 342–354. [Google Scholar] [CrossRef]
  68. Zahabnazouri, S.; Belmont, P.; David, S.; Wigand, P.E.; Elia, M.; Capolongo, D. Detecting Burn Severity and Vegetation Recovery After Fire Using dNBR and dNDVI Indices: Insight from the Bosco Difesa Grande, Gravina in Southern Italy. Sensors 2025, 25, 3097. [Google Scholar] [CrossRef]
  69. Nie, F.; Hao, Z.; Wang, R. Multi-class Support Vector Machine with Maximizing Minimum Margin. arXiv 2023, arXiv:2312.06578. [Google Scholar] [CrossRef]
  70. Ren, Y.; Zhang, X.; Yang, Y.; Yang, Q.; Wang, C.; Liu, H.; Qi, Q. Full Convolutional Neural Network Based on Multi-Scale Feature Fusion for the Class Imbalance Remote Sensing Image Classification. Remote Sens. 2020, 12, 3547. [Google Scholar] [CrossRef]
  71. Quan, Z.; Pu, L. An Improved Accurate Classification Method for Online Education Resources Based on Support Vector Machine (SVM): Algorithm and Experiment. Educ. Inf. Technol. 2022, 28, 8097–8111. [Google Scholar] [CrossRef]
  72. Ganapathy, K.; Karthikeyan, P.; Harshitha, L. Detection of Arrhythmia Using Ensemble Classifier in Comparison with Support Vector Machine Classifier to Measure the Accuracy, Sensitivity, Specificity and Precision. In Proceedings of the 2022 4th International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), Greater Noida, India, 16–17 December 2022. [Google Scholar] [CrossRef]
  73. Lee, C.; Wang, W.; Huang, J. Clustering and Classification for Dry Bean Feature Imbalanced Data. Sci. Rep. 2024, 14, 31058. [Google Scholar] [CrossRef] [PubMed]
  74. Li, X.; Dong, S.; Guo, S.; Zheng, C. Applying Support Vector Machines to a Diagnostic Classification Model for Polytomous Attributes in Small-Sample Contexts. Br. J. Math. Stat. Psychol. 2024, 78, 167–189. [Google Scholar] [CrossRef] [PubMed]
  75. Zou, R.; Xie, H.; Zhong, J.; Zheng, S. Optimization of Support Vector Machines Based on Sparrow Search Algorithm. In Proceedings of the 4th International Conference on Artificial Intelligence and Computer Engineering (ICAICE 2023), Beijing, China, 17–19 November 2023. [Google Scholar] [CrossRef]
  76. Widyawati, D.; Faradibah, A. Comparison Analysis of Classification Model Performance in Lung Cancer Prediction Using Decision Tree, Naive Bayes, and Support Vector Machine. Indones. J. Data Sci. 2023, 4, 76. [Google Scholar] [CrossRef]
  77. Tran, N.; Tanase, M.; Bennett, L.; Aponte, C. Fire-Severity Classification across Temperate Australian Forests: Random Forests versus Spectral Index Thresholding. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XXI, Strasbourg, France, 9 October 2019; Volume 11149. [Google Scholar] [CrossRef]
  78. Lasko, K.; Maloney, M.; Becker, S.; Griffin, A.; Lyon, S.; Griffin, S. Automated Training Data Generation from Spectral Indexes for Mapping Surface Water Extent with Sentinel-2 Satellite Imagery at 10 m and 20 m Resolutions. Remote Sens. 2021, 13, 4531. [Google Scholar] [CrossRef]
  79. Man, C.; Nguyen, T.; Bui, H.; Lasko, K.; Nguyen, T. Improvement of Land-Cover Classification over Frequently Cloud-Covered Areas Using Landsat 8 Time-Series Composites and an Ensemble of Supervised Classifiers. Int. J. Remote Sens. 2018, 39, 3610–3631. [Google Scholar] [CrossRef]
  80. Shi, F.; Gao, X.; Li, R.; Zhang, H. Ensemble Learning for the Land Cover Classification of Loess Hills in the Eastern Qinghai-Tibet Plateau Using GF-7 Multitemporal Imagery. Remote Sens. 2024, 16, 2556. [Google Scholar] [CrossRef]
  81. Waske, B.; Braun, M. Classifier Ensembles for Land Cover Mapping Using Multitemporal SAR Imagery. ISPRS J. Photogramm. Remote Sens. 2009, 64, 450–457. [Google Scholar] [CrossRef]
  82. Du, H.; Li, M.; Xu, Y.; Zhou, C. An Ensemble Learning Approach for Land Use/Land Cover Classification of Arid Regions for Climate Simulation: A Case Study of Xinjiang, Northwest China. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3939–3950. [Google Scholar] [CrossRef]
  83. Ma, Z.; Li, W.; Warner, T.A.; He, C.; Wang, X.; Zhang, Y.; Guo, C.; Cheng, T.; Zhu, Y.; Cao, W.; et al. A Framework Combined Stacking Ensemble Algorithm to Classify Crop in Complex Agricultural Landscape of High Altitude Regions with Gaofen-6 Imagery and Elevation Data. Int. J. Appl. Earth Obs. Geoinf. 2023, 122, 103386. [Google Scholar] [CrossRef]
  84. Pelletier, C.; Webb, G.; Petitjean, F. Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series. Remote Sens. 2019, 11, 523. [Google Scholar] [CrossRef]
  85. Navnath, N.; Chandrasekaran, K.; Stateczny, A.; Sundaram, V.; Panneer, P. Spatiotemporal Assessment of Satellite Image Time Series for Land Cover Classification Using Deep Learning Techniques: A Case Study of Reunion Island, France. Remote Sens. 2022, 14, 5232. [Google Scholar] [CrossRef]
  86. Zhang, G.; Ghamisi, P.; Zhu, X. Fusion of Heterogeneous Earth Observation Data for the Classification of Local Climate Zones. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9893–9906. [Google Scholar] [CrossRef]
Figure 1. Spatial distribution of wildfire ignition points and burned areas in the Gyeongbuk region, South Korea. (Left) Fire-identified points detected by Sentinel-3 imagery from March 22 to 26, 2025. Points are color-coded by detection date and are primarily concentrated in forested regions dominated by mixed deciduous and coniferous vegetation. Administrative boundaries are overlaid to indicate key municipalities such as Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun. (Right) Burned area visualization derived from Sentinel-2 imagery using a false-color composite (Band 12: SWIR; Band 8: NIR; Band 4: Red). Areas with high burn severity appear in shades of red, while green indicates healthy vegetation, effectively illustrating the spatial extent and severity of the fire.
Figure 2. Workflow for burn area extraction using spectral indices (dNBR, dNDVI), supervised classification (SVM with cross-validation), and unsupervised clustering (ISODATA). The integration of these methods supports optimized post-fire burn area mapping.
Figure 3. Burn severity map derived from dNBR values. The dNBR values were classified into four severity categories—low, moderate-low, moderate-high, and high—based on threshold ranges defined by the USGS and the MTBS program. The resulting map quantitatively illustrates the spatial distribution of fire impact across the Gyeongbuk region and serves as a reference layer for validating classification performance.
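The severity mapping in Figure 3 reduces to a simple banded lookup on dNBR. The sketch below assumes the commonly cited USGS/MTBS breakpoints (0.10 / 0.27 / 0.44 / 0.66); the study's calibrated thresholds may differ, and the pre-/post-fire reflectance values are toy numbers for illustration only.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-10)

def classify_dnbr(dnbr):
    """Map dNBR to severity classes as in Figure 3.

    Breakpoints follow commonly cited USGS/MTBS ranges (low 0.10-0.27,
    moderate-low 0.27-0.44, moderate-high 0.44-0.66, high > 0.66);
    the study's exact calibrated values may differ.
    """
    bins = [0.10, 0.27, 0.44, 0.66]
    labels = np.array(["unburned", "low", "moderate-low",
                       "moderate-high", "high"])
    return labels[np.digitize(dnbr, bins)]

# Toy pre-/post-fire reflectances for Sentinel-2 B8 (NIR) and B12 (SWIR)
pre_nir, pre_swir = np.array([0.45, 0.40]), np.array([0.15, 0.12])
post_nir, post_swir = np.array([0.20, 0.12]), np.array([0.30, 0.35])
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
print(classify_dnbr(dnbr))  # both toy pixels fall in the "high" class
```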
Figure 4. Burned area delineation results based on eight dNDVI thresholds ranging from 0.04 to 0.18. Lower thresholds result in broader coverage of burned areas, while higher thresholds isolate only the most severely impacted vegetation.
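The threshold sweep in Figure 4 can be sketched as follows. The band values here are synthetic stand-ins, not the study's imagery; the point is only the mechanics of sweeping the eight dNDVI thresholds (0.04–0.18 in steps of 0.02) and observing how the flagged area shrinks as the threshold rises.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-10)

rng = np.random.default_rng(0)
shape = (100, 100)

# Synthetic pre-fire (healthy) and post-fire (partly degraded) band stacks
pre_nir, pre_red = rng.normal(0.35, 0.05, shape), rng.normal(0.05, 0.02, shape)
post_nir = pre_nir - rng.uniform(0.0, 0.25, shape)   # NIR loss after fire
post_red = pre_red + rng.normal(0.02, 0.01, shape)   # slight red increase

dndvi = ndvi(pre_nir, pre_red) - ndvi(post_nir, post_red)

# Sweep the eight thresholds from Figure 4: lower values flag more pixels,
# higher values isolate only the most severe vegetation loss.
for t in np.arange(0.04, 0.19, 0.02):
    print(f"threshold {t:.2f}: {(dndvi >= t).mean():.1%} flagged burned")
```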
Figure 5. False-color composite (SWIR–NIR–RED) satellite images of twelve fire-affected areas used as training data for supervised SVM classification. The selected cases span various years (2019–2023) and represent diverse landscape types across South Korea, including coastal, mountainous, and urban–agricultural interface zones. These reference samples were manually interpreted to extract spectral patterns associated with varying burn severity levels.
Figure 6. Visualization of decision boundaries learned by linear, polynomial, and radial basis function (RBF) SVM classifiers in a standardized three-dimensional spectral feature space constructed from Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR). The blue and red points represent the unburned and burned training samples, respectively, while the green surfaces indicate the decision boundaries formed based on the kernel functions defined within Equations (5)–(7). The left, center, and right panels correspond to the decision boundaries generated by the linear, polynomial, and RBF kernels, respectively. This visualization illustrates how the complexity and nonlinearity of each kernel function influence the transformation of the input space and the separation between classes.
Figure 7. Performance comparison of three SVM kernels—linear, polynomial, and RBF—based on 5-fold cross-validation using standardized spectral reflectance values (Bands 4, 8, and 12) of the training samples extracted from Sentinel-2 imagery. Evaluation metrics include accuracy (blue), precision (orange), recall (green), and F1-score (yellow). The RBF kernel achieved the highest scores across all metrics.
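The kernel comparison in Figure 7 can be reproduced in outline with scikit-learn (cited in refs. [56,57]): standardize three-band reflectance features, then run 5-fold cross-validation over the linear, polynomial, and RBF kernels with accuracy, precision, recall, and F1 scoring. The class means and spreads below are synthetic assumptions, not the study's training samples.

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 300

# Synthetic (Red, NIR, SWIR) reflectances: burned pixels tend to show
# low NIR and high SWIR, unburned the reverse (illustrative values).
unburned = rng.normal([0.05, 0.35, 0.15], 0.04, size=(n, 3))
burned = rng.normal([0.10, 0.15, 0.30], 0.04, size=(n, 3))
X = np.vstack([unburned, burned])
y = np.array([0] * n + [1] * n)

# 5-fold CV over the three kernels compared in Figure 7
for kernel in ("linear", "poly", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_validate(clf, X, y, cv=5,
                            scoring=("accuracy", "precision", "recall", "f1"))
    print(f"{kernel:6s} accuracy={scores['test_accuracy'].mean():.3f} "
          f"f1={scores['test_f1'].mean():.3f}")
```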
Figure 8. Classification outcomes of linear, polynomial, and RBF SVM models projected into a standardized three-dimensional spectral feature space constructed from Sentinel-2 Bands 12 (SWIR), 8 (NIR), and 4 (Red). Blue points indicate unburned training data; red points represent both burned training data and pixels classified as burned in the test area; gray points correspond to unburned pixels in the test area; green surfaces denote the decision boundaries formed by each kernel. The left, center, and right panels correspond to the linear, polynomial, and RBF kernels, respectively. This figure provides a visual comparison of kernel-specific classification behavior and generalization performance within a high-dimensional spectral space.
Figure 9. Burned area maps generated by applying linear, polynomial, and radial basis function (RBF) SVM classifiers to Sentinel-2 surface reflectance data from Bands 4 (Red), 8 (NIR), and 12 (SWIR). Red regions indicate pixels classified as burned by each kernel. The left, center, and right panels correspond to the results of the linear, polynomial, and RBF models, respectively. Burned areas are primarily concentrated along mountainous ridgelines and upper slopes in central Gyeongsangbuk-do, including Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun. This figure provides a spatial comparison of kernel-dependent classification behavior in a real-world post-fire landscape.
Figure 10. Spatial distribution of unsupervised classification results using the ISODATA algorithm with k = 10 clusters, applied to Sentinel-2 imagery of the study area. Each color represents a distinct spectral cluster derived from the combination of Bands 4 (Red), 8 (NIR), and 12 (SWIR). Clusters corresponding to burned areas (e.g., Cluster 3) are predominantly located along the central mountainous region, exhibiting typical fire-related spectral responses such as low NIR and high SWIR reflectance. The results demonstrate the effectiveness of ISODATA in segmenting complex spectral landscapes without the use of labeled training data.
Figure 11. Three-dimensional visualization of the ISODATA clustering results in the spectral feature space defined by Sentinel-2 Bands 4 (Red), 8 (NIR), and 12 (SWIR), with k = 10 clusters. Each point represents a pixel assigned to one of the ten spectral clusters, and each colored surface denotes the decision boundary for a specific cluster in the 3D reflectance space. The clustering structure reveals clear spectral separation between groups, while some overlap remains due to transitional land cover types. Clusters located in the low B8 and high B12 regions are associated with burned surfaces, illustrating the ISODATA algorithm’s capability to delineate fire-affected areas without labeled training data.
Figure 12. Final burned area map derived from the ISODATA unsupervised classification. Clusters exhibiting typical fire-related spectral characteristics—low near-infrared (Band 8) and high shortwave infrared (Band 12) reflectance—were manually identified and merged to extract the burned area extent. The mapped burn scars are primarily concentrated along the central mountainous region of Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun in Gyeongsangbuk-do Province. The spatial distribution corresponds closely with known fire perimeters and terrain features, demonstrating the effectiveness of the ISODATA algorithm in delineating fire-affected zones without the use of labeled training data.
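The cluster-then-select workflow shown in Figures 10–12 (cluster the Band 4/8/12 reflectance, then merge clusters whose mean spectra show low NIR and high SWIR) can be sketched as follows. ISODATA itself is not provided by scikit-learn, so plain k-means stands in for it here; the synthetic reflectance values, thresholds, and cluster count are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic pixels: columns are B4 (Red), B8 (NIR), B12 (SWIR) reflectance.
healthy = rng.normal([0.05, 0.45, 0.10], 0.02, size=(500, 3))  # high NIR
burned = rng.normal([0.10, 0.12, 0.35], 0.02, size=(200, 3))   # low NIR, high SWIR
pixels = np.vstack([healthy, burned])

# k-means as a simplified stand-in for ISODATA clustering.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)

# Select "burned" clusters by their mean spectra: low NIR, high SWIR
# (the 0.25 cutoffs are arbitrary for this synthetic data).
centers = km.cluster_centers_
burned_clusters = np.where((centers[:, 1] < 0.25) & (centers[:, 2] > 0.25))[0]
burn_mask = np.isin(km.labels_, burned_clusters)
print(f"{burn_mask.sum()} of {len(pixels)} pixels flagged as burned")
```

In the actual workflow the selected clusters are merged into a single burned-area mask and compared against the false-color composite, as the figure captions describe.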
Figure 13. Final spectral classification map generated using the ISODATA unsupervised clustering algorithm applied to previously delineated burned areas. Based on a comparison with a false-color composite of Sentinel-2 Bands 12 (SWIR), 8 (NIR), and 4 (Red), the burn severity levels were manually assigned and visually distinguished for each cluster. The resulting map effectively illustrates varying degrees of fire impact, primarily concentrated in the mountainous regions of Andong-si, Uiseong-gun, Cheongsong-gun, and Yeongdeok-gun in northern Gyeongsangbuk-do Province.
Figure 14. Burned area estimates (in hectares) across five wildfire-affected regions in northern Gyeongsangbuk-do, South Korea—Andong, Uiseong, Cheongsong, Yeongyang, and Yeongdeok—using six classification methods: dNBR (blue), dNDVI (orange), SVM with linear kernel (yellow), polynomial kernel (purple), RBF kernel (green), and ISODATA clustering (cyan). The bar chart highlights inter-method variability in estimated fire-affected extents, with a notable overestimation by dNBR and more conservative outputs from dNDVI and SVM (RBF).
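Hectare figures like those compared in Figure 14 follow from pixel counts once a classification mask exists. A minimal conversion, assuming the 10 m Sentinel-2 grid (native for Bands 4 and 8; Band 12 is 20 m and is typically resampled to 10 m):

```python
def pixels_to_hectares(n_pixels, pixel_size_m=10.0):
    """Convert a classified pixel count to hectares (1 ha = 10,000 m^2)."""
    return n_pixels * pixel_size_m**2 / 10_000.0

# Each 10 m x 10 m pixel covers 100 m^2 = 0.01 ha.
print(pixels_to_hectares(1_234_567))
```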
Table 1. Standard dNBR threshold ranges and corresponding burn severity levels used by the USGS and the Monitoring Trends in Burn Severity (MTBS) program.
| dNBR Range | Severity Level | Description |
|---|---|---|
| ≥0.66 | Very High Severity | Extensive vegetation loss and exposed soil |
| 0.44–0.66 | High Severity | Significant vegetation loss |
| 0.27–0.44 | Moderate Severity | Partial vegetation loss |
| 0.10–0.27 | Low Severity | Minor vegetation loss or early signs of damage |
| <0.10 | Unburned or Regrowth | Little to no damage or recovering vegetation |
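The thresholds in Table 1 can be applied directly to an array of dNBR values. The sketch below uses NumPy's `digitize` for the binning; assigning an exact boundary value (e.g., 0.66 itself) to the higher class follows the "≥0.66" row of the table.

```python
import numpy as np

def dnbr_severity(dnbr):
    """Map dNBR values to the USGS/MTBS severity labels of Table 1."""
    bins = [0.10, 0.27, 0.44, 0.66]  # class boundaries, ascending
    labels = np.array(["Unburned or Regrowth", "Low Severity",
                       "Moderate Severity", "High Severity",
                       "Very High Severity"])
    # np.digitize returns the index of the bin each value falls into.
    return labels[np.digitize(dnbr, bins)]

print(dnbr_severity([0.05, 0.2, 0.3, 0.5, 0.7]))
```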
Table 2. Summary of commonly used SVM kernels, their mathematical definitions, strengths, and limitations.
| Type | Mathematical Definition | Strengths | Weaknesses |
|---|---|---|---|
| Linear SVM | f(x) = w·x + b = 0 | Fast and simple | Limited to linear separability |
| Polynomial | K(x_i, x_j) = (γ x_i·x_j + r)^d | Can model curved boundaries | Overfitting with high-degree polynomials |
| RBF Kernel | K(x_i, x_j) = exp(−γ‖x_i − x_j‖²) | Can model complex patterns | Requires tuning; low interpretability |
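The three kernels of Table 2 can be compared side by side with scikit-learn's `SVC`. The sketch below uses synthetic three-band "spectra" and 5-fold cross-validation; the data and the `C`, `degree`, and `coef0` settings are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Two synthetic classes in (Red, NIR, SWIR) reflectance space.
X = np.vstack([rng.normal([0.10, 0.40, 0.10], 0.05, size=(300, 3)),   # unburned
               rng.normal([0.10, 0.10, 0.35], 0.05, size=(300, 3))])  # burned
y = np.r_[np.zeros(300), np.ones(300)]

results = {}
for kernel in ("linear", "poly", "rbf"):
    # degree (d) and coef0 (r) apply to the polynomial kernel only;
    # gamma (γ) applies to poly and RBF.
    clf = SVC(kernel=kernel, degree=3, coef0=1.0, gamma="scale")
    results[kernel] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:6s} CV accuracy = {results[kernel]:.3f}")
```

On well-separated classes like these, all three kernels perform similarly; differences emerge mainly on spectrally mixed boundaries, which is where the kernel choice in Table 2 matters.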
Table 3. Sentinel-2 images used for training data.
| Year | Latitude | Longitude | Event Date | Image Date(s) |
|---|---|---|---|---|
| 2023 | 37.7519 | 128.8761 | 11 April | 12, 19, 27 April |
| 2023 | 35.065 | 126.5202 | 3–4 April | 22, 27 April; 2 May |
| 2023 | 36.1081 | 127.4881 | 2–4 April | 9, 12, 22 April |
| 2023 | 36.6 | 126.6675 | 2–4 April | 12, 22, 27 April |
| 2023 | 35.4936 | 128.7481 | 31 May–3 June | 3, 18 June |
| 2022 | 36.2425 | 128.5725 | 10–12 April | 19, 24 April; 4 May |
| 2022 | 35.5661 | 128.1653 | 28 Feb–1 March | 3, 15 March; 4 April |
| 2022 | 36.9897 | 129.4003 | 4–13 March | 15 March; 4, 9 April |
| 2022 | 38.1053 | 128.0022 | 10–12 April | 17 April, 17 May |
| 2020 | 36.5683 | 128.7294 | 24–26 April | 29 April, 12 May |
| 2019 | 38.3808 | 128.4675 | 4–5 April | 20 April |
| 2019 | 37.7519 | 128.8761 | 4–5 April | 20 April |
Table 4. Performance comparison of SVM kernels based on 5-fold cross-validation (Unit: %).
| Kernel Type | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Linear | 95.0 | 89.2 | 96.9 | 92.9 |
| Polynomial | 99.2 | 98.3 | 99.4 | 98.9 |
| RBF | 99.3 | 98.5 | 99.5 | 99.0 |
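As a quick consistency check, the F1 scores in Table 4 can be recomputed from the reported precision and recall via F1 = 2PR/(P + R); agreement is within the rounding of the reported inputs.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# (kernel, precision, recall, reported F1) from Table 4, in percent.
for kernel, p, r, f_reported in [("Linear", 89.2, 96.9, 92.9),
                                 ("Polynomial", 98.3, 99.4, 98.9),
                                 ("RBF", 98.5, 99.5, 99.0)]:
    print(f"{kernel:10s} F1 = {f1(p, r):.1f} (reported {f_reported})")
```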
Share and Cite

MDPI and ACS Style

Lee, S.-H.; Lee, M.-H.; Kang, T.-H.; Cho, H.-R.; Yun, H.-S.; Lee, S.-J. Comparative Analysis of dNBR, dNDVI, SVM Kernels, and ISODATA for Wildfire-Burned Area Mapping Using Sentinel-2 Imagery. Remote Sens. 2025, 17, 2196. https://doi.org/10.3390/rs17132196
