Article

Object- Versus Pixel-Based Unsupervised Fire Burn Scar Mapping under Different Biogeographical Conditions in Europe

by Marta Milczarek 1,*, Sebastian Aleksandrowicz 1, Afroditi Kita 2, Rizos-Theodoros Chadoulis 2, Ioannis Manakos 2,* and Edyta Woźniak 1,*

1 Centrum Badań Kosmicznych Polskiej Akademii Nauk (CBK PAN), 01-716 Warsaw, Poland
2 Centre for Research and Technology Hellas, Information Technologies Institute, 57001 Thessaloniki, Greece
* Authors to whom correspondence should be addressed.
Land 2023, 12(5), 1087; https://doi.org/10.3390/land12051087
Submission received: 31 March 2023 / Revised: 13 May 2023 / Accepted: 16 May 2023 / Published: 18 May 2023

Abstract

Wildfire detection and mapping are crucial for managing natural resources and preventing further environmental damage. In this study, we compared two methods of mapping burn scars using Sentinel-2 satellite imagery, a pixel-based approach and an object-based approach, at test sites located in various climatic zones with diverse land cover composition. The study aimed to determine the advantages and limitations of each method in terms of accuracy and precision in detecting burn scars. The results showed that both methods could detect burn scars with high accuracy, but with some limitations. The F1 score was in the range of 0.64–0.89 for the object-based approach, and 0.58–0.90 for the pixel-based approach. The pixel-based method produced a more precise delineation of the burnt area, but it was only suitable for detecting burn scars in a limited area of interest. The object-based method, on the other hand, was able to detect burn scars over a larger area accurately, but with some commission errors. The results of both methods were also compared to the Copernicus Emergency Management Service (CEMS) rapid mapping product.

1. Introduction

Forest fires, regardless of their causes, have a negative impact on society [1,2,3], the environment [4,5,6], and the economy [7,8] worldwide. They also alter the carbon cycle and speed up climate change [9]. On the other hand, global warming intensifies fire regimes and fire severity [10,11]. This results in a self-feeding, autocatalytic process that accelerates the pace of change. Therefore, the monitoring of burnt areas is crucial for the proper management of forest fires and the reduction of their negative impact.
Mapping of burnt areas is performed using satellite images with different spatial and temporal resolutions. In [12], a comprehensive review of burn scar mapping methods using Earth Observation techniques is presented. Later, new approaches were developed [13,14,15,16,17,18]. The detection of burnt areas is based on the interaction of an electromagnetic wave with objects on the Earth’s surface, described by several indices derived from the spectral bands of optical satellite images: the Normalised Difference Vegetation Index (NDVI) [19,20], the Global Environmental Monitoring Index (GEMI) [21], the Burn Area Index (BAI) [22], the Normalised Burn Ratio (NBR) [23], and their modifications: the differenced Normalised Burn Ratio (dNBR) [24], the multi-temporal differenced Normalised Burn Ratio (dNBRMT) [25], and the Burned Area Index for Sentinel-2 (BAIS2) [26].
From the point of view of burnt area monitoring, the methods which do not require a set of training data are particularly valuable. Automated approaches usually consist of two-phase algorithms: first, they detect “core” (also known as “seed”) burnt segments or pixels, and then they apply procedures to improve the delimitation of burn scars. Several methods for determining “core” burn scars have been developed. The first group of methods uses information on historical burnt and unburnt areas to iteratively define the decision criterion [27], to calculate membership functions of fuzzy sets [28,29], and to make predictions with a gradient-boosted regression model [30] for the “core” mapping. The second group looks for negative outliers in a time series [31]. The third group of methods uses adaptive thresholding based on image statistics [13,18] or active fires products [16]. Procedures for improving the delimitation of burn scars are based on the logistic regression approach [27], watershed region growing [31], spectral distance of neighbouring objects to the core burnt area [13], and mean-shift segmentation [18].
The majority of burnt area mapping methods use pixel-based classification approaches [14,15,16,17,18,27,28,29]. However, there are some works in which object-based classification approaches are applied [13,32,33]. The advantages and disadvantages of both approaches have been evaluated for, among others, land cover [34,35], urban environments [36,37], and landslides [38]. However, to the best of our knowledge, they have not been analysed in the context of automatic burnt area mapping.
The main aim of this work was to perform a remotely sensed detection of burnt and non-burnt areas within selected test sites with the use of two different approaches—pixel- and object-based—and then to compare the resulting maps with each other and with Copernicus EMS rapid mapping products, as well as to assess their accuracy against reference data extracted from high-resolution satellite images. For this purpose, we selected and applied two-step methods based on adaptive thresholding [13,18] on four test sites. The second objective was to perform an analysis of the applicability of both methods in various conditions, such as vegetation type, land cover composition, or climate zone, in order to highlight the advantages and disadvantages of each approach.

2. Test Sites and Data

2.1. Satellite Imagery

The pairs of pre- and post-event Sentinel-2 L2A (surface reflectance) images acquired for each test site (Table 1) were used in the processing chain of both algorithms. A total of eight satellite tiles were downloaded. All tiles were clipped to the boundaries of the test sites (see Section 2.3) and then further processed.
Four-band surface reflectance PlanetScope images from © Planet Labs PBC (San Francisco, CA, USA) [39], acquired after the fire events, served as the reference data source. Their better spatial resolution (3 m vs. 10 m) allowed for detailed visual interpretation of the burnt areas [40].

2.2. Auxiliary Data

Auxiliary data from various sources and databases (see Table 2) were used at different stages of the processing chain and for the comparison of the test sites with each other. The elevation data from the Shuttle Radar Topography Mission (SRTM) by NASA [41] served as the basis for the slope layer calculation performed in the object-based method. Digital elevation models (DEMs) were also used to assess the altitude diversity of the studied areas. For this purpose, three different DEMs were used: the 10 m LiDAR DEM [42] from the Head Office of Geodesy and Cartography in Poland (TS1), the 30 m ALOS World 3D [43] from JAXA (TS2), and the 30 m SRTM (TS3 and TS4).
In order to exclude clouds and shadows during the classification process and the validation of the results, the Scene Classification Layer (SCL) attached to the L2A dataset of each image tile was used [44]. The following classes were masked out: cloud shadows, cloud medium probability, cloud high probability, thin cirrus, snow, and ice. For the analysis of the land cover distribution inside each test site, the Sentinel-2 Global Land Cover 2017 (S2GLC) dataset [45] was used. It has a spatial resolution of 10 m and contains 13 classes.
Products of the Rapid Mapping activations of the Copernicus Emergency Management Service (EMS) [46] were used to extract the information on the extent of the fire, its area, and the type of land cover that was directly burnt. Table 2 lists all the auxiliary data used in this study.

2.3. Test Sites

Four regions were selected from the Copernicus Emergency Management Service's Rapid Mapping activations database of wildfire and forest fire products. Each region had to contain at least one fire burn scar visible in a given period of time; together, the regions had to be distributed across different climatic zones in Europe and represent different types of land cover affected by fire (Figure 1). In addition, the burn scars had to occupy at least 1000 ha of land.
The size of the test sites was set at 50 × 50 km, which is a quarter of the area of one Sentinel-2 tile. Although the burn scars covered only between 0.4% and 1.9% of each site, such a large extent allowed different land covers and landscape characteristics to be captured. The size of the selected test sites provided an opportunity to see how the algorithms cope with potential commission errors.

2.3.1. Poland

The Polish test site is located in north-eastern Poland, east of Goniądz, on the north bank of the Biebrza River (Figure 2). The fire started in the afternoon of 19 April 2020, and the extent of the burnt area did not expand after 26 April 2020 (53°33′03.6″ N, 22°48′50.4″ E). As stated in the Copernicus EMS product, the burnt area covered about 5327 ha and comprised marshes, peatbogs, meadows, reeds, forest, and shrub located on relatively flat terrain. According to the S2GLC database, the entire test site is mainly cultivated (32%) or covered by coniferous trees (25%), herbaceous vegetation (20%), broadleaf trees (14%), marshes (4%), peatbogs (2%), and other (3%). It is situated in a warm temperate transitional climate zone.

2.3.2. Sweden

The Swedish test site is located in central Sweden, in Jämtland county. In this area of interest, two burn scars are noted: one situated north of Strandasmyrvallen (61°48′39.6″ N, 14°25′55.2″ E), and the other further north-west, north-west of Lillhardal (61°54′25.2″ N, 13°57′28.8″ E). The fire started on 16 July 2018 and covered about 4849 ha of land in total, as stated in the Copernicus EMS maps. Those products also provide information on the type of land cover that burnt: mainly forests, shrub or herbaceous vegetation, and wetlands. According to the S2GLC database (Figure 3), the entire test site consists mainly of coniferous tree cover (54%), peatbogs (15%), moors and heathland (8%), broadleaf trees (7%), marshes (6%), herbaceous vegetation (6%), water bodies (2%), and other (2%). It is situated in a cold temperate transitional climate zone.

2.3.3. United Kingdom

The test site in Great Britain is located east of the town of Mossley, east of Manchester, and includes Saddleworth Moor (53°30′28.8″ N, 1°59′20.4″ W) (Figure 4). The peat moor burnt for three weeks, from 24 June 2018 to 18 July 2018, and as reported in the map published under the Copernicus EMS activation, the total burnt area was 1020 ha. The wildfire mainly covered wetlands and moors spreading over the hills of the Pennines. According to S2GLC 2017, the entire test site consists mainly of herbaceous vegetation (30%), broadleaf trees (18%), artificial surfaces including built-up areas and roads (15%), peatbogs (14%), moors and heathland (13%), and other (10%). It is situated in a warm maritime temperate climate zone.

2.3.4. Greece

The Greek test site is located north-east of the town of Diabolitsi, in Messinia in the Peloponnese Region (37°19′08.4″ N, 22°00′43.2″ E). The fire broke out on 4 August 2021. According to the Copernicus EMS activation, the total burnt area was 5107 ha. The burnt terrain contained mainly pine forests and cultivated land in a mountainous landscape with steep slopes. The entire test site consists mainly of broadleaf trees (38%), sclerophyllous vegetation (27%), herbaceous vegetation (13%), vineyards (9%), cultivated areas (3%), and other (10%), according to the S2GLC 2017 dataset (Figure 5). It is situated in a maritime subtropical climate zone.

3. Methods

This section presents a brief description of the applied algorithms—object-based and pixel-based—as well as the validation process and the comparison procedure. Both algorithms first detect the main bulk of the burnt area, i.e., the union of all regions with spectral characteristics indicating a high probability of fire damage, and then refine this area by incorporating units (pixels or objects) that have been less evidently affected by the fire. The main bulk of the burnt area initially detected in both approaches is referred to as the core burnt area. Detailed descriptions of the algorithms, their implementation, and their performance can be found in [13,18].

3.1. Object-Based Algorithm

The algorithm makes use of bi-temporal reflectance images. An extended description of the algorithm and equations is provided in [13]. It consists of five main steps: band arithmetic; multi-level segmentation and masking; detection of core burnt areas; region growing; and post-processing. In the first step, spectral indices are computed: the Normalised Burn Ratio (NBR) [23],
NBR = (NIR − SWIR)/(NIR + SWIR)
and the Normalised Difference Vegetation Index (NDVI) [19],
NDVI = (NIR − R)/(NIR + R)
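For readers who prefer code, a minimal sketch of the two indices follows, assuming Sentinel-2 L2A surface reflectance already loaded as NumPy arrays; using B8 for NIR, B12 for SWIR, and B4 for red is our assumption here for illustration, not a prescription from the algorithm.

```python
import numpy as np

def normalised_difference(a, b):
    """(a - b) / (a + b), with zero denominators mapped to 0."""
    a, b = a.astype(float), b.astype(float)
    denom = a + b
    return np.divide(a - b, denom, out=np.zeros_like(denom), where=denom != 0)

# Synthetic reflectance arrays stand in for Sentinel-2 bands; in practice
# they would be read from the L2A product, e.g. with rasterio.
rng = np.random.default_rng(0)
nir, swir, red = (rng.uniform(0.0, 0.6, (100, 100)) for _ in range(3))
nbr = normalised_difference(nir, swir)    # Normalised Burn Ratio
ndvi = normalised_difference(nir, red)    # NDVI
```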
In the first stage of the segmentation process, the area of interest (AOI) is divided into homogeneous square objects using the cloud mask layers of images t1 (pre-event) and t2 (post-event). Objects which contain clouds, shadows, snow, or water are classified and excluded from further analysis. The remaining unclassified objects are then merged and re-segmented to obtain the objects that are directly used for burnt area mapping. This step uses the multiresolution segmentation method [47]. Trial and error showed that the best layers for segmentation are the NBR at t2 and the difference of the NIR spectral channels at t1 and t2. The scale parameter is set at 100 and does not need to be changed between scenes. The next step, core burnt area classification, has two stages. Firstly, potential burnt areas are classified on the basis of the NBR layer of the post-fire image. Objects that fulfil the following condition are considered for further analysis:
μO < (μS − σS)
where μO is the mean of NBR_t2 for an object, μS is the scene mean of the NBR_t2 layer, and σS is the standard deviation of NBR_t2. Secondly, the differences in each spectral parameter are calculated between the pre- and post-fire images of the scene. We consider the following differences of spectral parameters: indices—dNBR and dNDVI; spectral bands—near infrared (dNIR) and shortwave infrared (dSWIR1, dSWIR2). These values are used to calculate the thresholds for a specified pair of images. They are expressed as relative values and calculated as follows:
dμS = 100 − (μS,t1 · 100)/μS,t2
where dμS is the difference of a spectral parameter at scene level, μS,t1 is the mean of a spectral parameter of the pre-fire image, and μS,t2 is the mean of a spectral parameter of the post-fire image. Objects of potential burnt areas are reclassified into the core using a set of variable thresholds (T), calculated as linear or polynomial functions of the difference between the pre- and post-fire images or as functions of the post-fire (t2) image. The threshold functions, and an explanation of how they were established, are provided in [13]. After the classification of objects into core burnt areas, the objects are merged, and the relations between them and neighbouring objects are analysed. Neighbouring objects are classified as burnt if the spectral distance of their dNBR layer (the difference of NBR between t1 and t2) from the core is lower than the standard deviation of all areas classified as burnt:
(μBA − σBA) < μO < (μBA + σBA)
where μBA is the mean of the dNBR values of the total burnt area detected on the scene, σBA is the standard deviation of the dNBR values in the total burnt area detected on the scene, and μO is the mean dNBR value of each object adjacent to the core burnt area. The process is iterative—after an object is classified as burnt area, it is merged into the core object, and the procedure is repeated.
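As an illustration of this object-level logic, the sketch below implements the core-selection condition and the iterative dNBR region growing on per-object statistics. The dictionary-based data structures, the omission of the variable-threshold reclassification step, and the toy numbers are all simplifications of ours; the full procedure is given in [13].

```python
import numpy as np

def grow_burnt_area(mean_nbr_t2, mean_dnbr, neighbours):
    """Select core objects and iteratively grow them along dNBR."""
    vals = np.array(list(mean_nbr_t2.values()))
    mu_s, sigma_s = vals.mean(), vals.std()
    # Core: objects darker than one sigma below the scene mean of NBR_t2.
    burnt = {o for o, m in mean_nbr_t2.items() if m < mu_s - sigma_s}
    changed = True
    while changed:
        changed = False
        dnbr_burnt = np.array([mean_dnbr[o] for o in burnt])
        mu_ba, sigma_ba = dnbr_burnt.mean(), dnbr_burnt.std()
        frontier = {n for o in burnt for n in neighbours[o]} - burnt
        for o in frontier:
            if mu_ba - sigma_ba < mean_dnbr[o] < mu_ba + sigma_ba:
                burnt.add(o)      # merge the neighbour into the core
                changed = True    # statistics are recomputed next pass
    return burnt

# Toy example: objects 1 and 4 are dark enough to seed the core, and
# object 6 is pulled in by region growing.
mean_nbr_t2 = {1: -0.60, 2: 0.40, 3: 0.45, 4: -0.55, 5: 0.50, 6: 0.42}
mean_dnbr = {1: 0.55, 2: 0.05, 3: 0.05, 4: 0.50, 5: 0.05, 6: 0.52}
neighbours = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 6}, 5: {6}, 6: {4, 5}}
print(grow_burnt_area(mean_nbr_t2, mean_dnbr, neighbours))  # {1, 4, 6}
```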
The last step of the classification is post-processing. The classes of objects—those classified as burnt areas, the unclassified ones, and those excluded in the first step—are merged, and minimum mapping rules are applied; the minimum mapping unit is set to 1 ha. Moreover, an enclosure analysis is carried out, and small unclassified areas or clouds enclosed within burnt areas are reclassified as burnt.

3.2. Pixel-Based Algorithm

The pixel-based methodology is realised through a two-phase approach. The first phase includes the detection of core burnt areas, while in the second phase, a localised adaptive thresholding approach is applied to optimise the pixel-based discrimination into burnt and unburnt classes. The NBR index is used to detect the core burnt areas. It is calculated for the extent of the image following the fire event.
The first phase starts with masking out water bodies, non-vegetated areas, clouds, and smoke, because according to preliminary experiments, these elements render the core burnt area detection output prone to noise. First, the water bodies are excluded by utilising the Normalised Difference Water Index (NDWI) [48]. Then, in order to identify the non-vegetated areas, the Normalised Difference Vegetation Index (NDVI) [49] is estimated for both images, prior to and after the fire event, and their difference dNDVI is employed. Eventually, a pixel is considered non-vegetated when the NDVI index for both images—before and after the event—is less than 0.17 and its state has not changed by more than 0.04 (|dNDVI| < 0.04) [13] (thresholds may be adapted to the landscape composition of an area). Finally, clouds, smoke, and bad pixels are identified using the Scene Classification Layer (SCL) provided by Sentinel-2 Level 2A products.
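A compact sketch of this masking phase is given below. The NDVI and dNDVI thresholds are those quoted above; the NDWI > 0 water cut-off is an assumption of ours, and the SCL codes follow the Sentinel-2 L2A convention (3: cloud shadows, 8/9: cloud medium/high probability, 10: thin cirrus, 11: snow or ice).

```python
import numpy as np

def build_exclusion_mask(ndwi, ndvi_pre, ndvi_post, scl):
    """True where a pixel should be excluded from burn scar detection."""
    water = ndwi > 0.0                                   # assumed cut-off
    non_vegetated = ((ndvi_pre < 0.17) & (ndvi_post < 0.17)
                     & (np.abs(ndvi_post - ndvi_pre) < 0.04))
    bad = np.isin(scl, [3, 8, 9, 10, 11])                # SCL classes
    return water | non_vegetated | bad
```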
In the second phase, the first step is to estimate an initial threshold (Th_init) for the discrimination between core burnt and unburnt areas. This is carried out by thoroughly reviewing the spectral range of the NBR histogram of the image while juxtaposing it with the known situation on the ground from the reference data (see Section 3.3). Th_init is defined as the value at which the first deep valley of the histogram is detected. Then, a first categorisation is performed on the basis of this initial threshold as follows: if the NBR value of a pixel (p) is lower than Th_init (NBR(p) < Th_init), the pixel is considered burnt; otherwise, it is labelled as unburnt (similar to [18]). This is in line with the expected spectral behaviour, i.e., lower NIR and higher SWIR values for burnt areas and, thus, lower NBR values. This first clustering is based upon the assumption that the histogram of the NBR index is the sum of two partially overlapping distributions: the distribution of the unburnt pixels and the distribution of the burnt ones. A related example of this procedure comes from [50], who determined the threshold from the sequential criteria assessment of the frequency distribution of dNBR values over a sample area with known fire activity, based on the presence of MODIS Active Fire detections and the success rate of burnt area masks mapped per ecosystem of interest. At this point, an initial map has been produced, which depicts core burnt and unburnt areas.
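One plausible way to automate the valley search is sketched below: smooth the NBR histogram and take the first sufficiently deep local minimum. The bin count, smoothing width, and prominence value are illustrative choices of ours; in the study, the valley was additionally cross-checked against the reference data.

```python
import numpy as np
from scipy.signal import find_peaks

def initial_threshold(nbr_values, bins=256, prominence=0.01):
    """Return Th_init as the first deep valley of the NBR histogram."""
    hist, edges = np.histogram(nbr_values, bins=bins, density=True)
    smooth = np.convolve(hist, np.ones(5) / 5, mode="same")  # light smoothing
    valleys, _ = find_peaks(-smooth, prominence=prominence)  # histogram minima
    if len(valleys) == 0:
        raise ValueError("no deep valley found; inspect the histogram manually")
    centres = (edges[:-1] + edges[1:]) / 2
    return centres[valleys[0]]    # first valley along increasing NBR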
As a second step, the post-fire RGB image (derived from the Sentinel-2 Red (B4), Green (B3), and Blue (B2) bands) is segmented using the mean-shift segmentation algorithm [51]. The segmentation output consists of segments with pixels of similar spectral behaviour, leveraging the highest spatial resolution provided by the aforementioned bands. Segments are then overlaid with the initial map from the previous step; the ones with a high percentage of burnt pixels (>70%) are selected, and their centroids (C_m) are calculated. Around each C_m, square patches p_m^k of expanding size are centred, with the size of each patch in pixels calculated as (20·k) × (20·k), k = 1, 2, …, 20. Minimum Cross Entropy Thresholding [52] is then utilised to estimate the threshold for the binary classification for a given patch size. The optimal threshold f_m^opt for each segment with centroid C_m is calculated as the median over all the expanding windows. The optimal splitting threshold for the scene is finally estimated as the median of all the segments' optimal thresholds, Th_final = median_{m=1…M}(f_m^opt). Pixels (p) with NBR(p) < Th_final are considered burnt. The final map contains pixels that, on the basis of the localised approach, have a high probability of being labelled as burnt.
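The localised thresholding step can be sketched as follows. skimage's threshold_li implements Li's Minimum Cross Entropy Thresholding [52]; using connected components of the core map in place of the mean-shift segments (which makes the >70% selection trivial and so it is skipped) is a simplification of ours, not the published procedure.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_li

def final_threshold(nbr, core_mask, n_windows=20, step=20):
    """Median of per-segment expanding-window Li thresholds on NBR."""
    labels, n = ndimage.label(core_mask)               # stand-in for segments
    centroids = ndimage.center_of_mass(core_mask, labels, range(1, n + 1))
    per_segment = []
    for cy, cx in centroids:
        window_thresholds = []
        for k in range(1, n_windows + 1):              # (20k) x (20k) patches
            half = (step * k) // 2
            r0, c0 = max(int(cy) - half, 0), max(int(cx) - half, 0)
            patch = nbr[r0:int(cy) + half, c0:int(cx) + half]
            if patch.size > 1 and patch.min() < patch.max():
                window_thresholds.append(threshold_li(patch))
        per_segment.append(np.median(window_thresholds))   # f_m^opt
    return np.median(per_segment)                          # Th_final
```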

3.3. Validation

For each test site, the reference polygons of burn scars were prepared on the basis of manual interpretation of the PlanetScope satellite images. The first step was to set a threshold on the NIR band of the PlanetScope image so that it indicated the burnt area in a general way and, after vectorisation, could serve as a basis for correcting the boundary between the burn scar and the remaining land (layer 1). Then, the NDVI was calculated, and a threshold was set to delineate green vegetation from other land cover (layer 2). This layer was helpful in excluding unburnt patches inside the burnt area. Finally, the main work was conducted by means of visual interpretation of the images in various colour compositions, additionally taking these two layers (layer 1 and layer 2) into consideration. Burnt areas were manually digitised on screen to delineate the damaged area exactly.
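The two support layers can be reproduced with a few lines of array code, as sketched below; the threshold values are placeholders of ours, since in the study they were tuned interactively per image.

```python
import numpy as np

def support_layers(nir, red, nir_threshold=0.15, ndvi_threshold=0.4):
    """Layer 1: rough burnt-area mask; layer 2: green vegetation mask."""
    layer1 = nir < nir_threshold                       # low NIR over burns
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    layer2 = ndvi > ndvi_threshold                     # green vegetation
    return layer1, layer2
```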
In the case of test sites 1 and 2 (Poland and Sweden), clouds and shadows present in the post-event Sentinel-2 images were removed with the use of the mask included in the Scene Classification Layer (SCL) band attached to the Level 2A tiles. The same mask was used in both detection algorithms. In order to maintain the uniformity of the materials needed for later comparison, the following classes were masked out in the reference polygons: cloud shadows, cloud medium probability, cloud high probability, thin cirrus, snow, and ice.
The reference polygons were used to prepare four sets of reference points, one for each test site. Each set contained 25,000 points randomly spread inside the entire test site with a stratification into burnt and unburnt areas. The number of points corresponding to the burn scar reference polygon depended on the area it occupied in relation to the entire test area. It ranged from 94 to 446 points (Table 3), as burn scars accounted for 0.38% to 1.78% of each test site.
The reference points were used to validate the results of burn scar detection with the object- and pixel-based methods, as well as the extent of fire-damaged areas from the Copernicus EMS activations. The accuracy was assessed with the use of a confusion matrix indicating true positives (TP), false positives (FP), and false negatives (FN), on the basis of which the following measures were calculated: precision, recall, and overall accuracy (OA), as well as the F1 score [53]. Precision is the ratio of correctly classified "burnt" points to all points classified as burnt; the lower the precision, the greater the commission error. Recall is the ratio of correctly classified "burnt" points to all points that are actually burnt; the lower the recall, the greater the omission error. The F1 score is the harmonic mean of precision and recall; the higher the F1 score, the closer precision and recall both are to 1. Overall accuracy is the ratio of correctly classified points to the total number of reference points.
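These measures translate directly into code; the snippet below transcribes the stated definitions and reproduces one row of Table 3 as a check.

```python
def precision(tp, fp):   # lower precision = greater commission error
    return tp / (tp + fp)

def recall(tp, fn):      # lower recall = greater omission error
    return tp / (tp + fn)

def f1(tp, fp, fn):      # harmonic mean of precision and recall
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def overall_accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# TS4 GR, pixel-based row of Table 3: TP = 362, FP = 45, FN = 35.
print(round(f1(362, 45, 35), 2))   # 0.9
```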

3.4. Comparison

By overlaying the reference polygons on the burnt areas determined by the pixel- and object-based methods, and then comparing these results with the satellite image, the elevation map, and the land cover map, it was possible to determine, by means of visual interpretation, the types of land cover or landscape for which each algorithm produced misclassifications.
Core areas resulting from both algorithms were also examined. For this purpose, the raster results—both core and final burnt areas—were converted into vector form. Then, the core burnt areas were intersected with the corresponding reference burnt areas, and the final burnt areas were intersected with the reference burnt areas in the same way. For each polygon created this way, its area was calculated, and then the percentage share of the core and final areas in relation to the reference polygon area was computed. On this basis, conclusions could be drawn about the size of the omission error generated by each algorithm and the extent to which the applied post-classification and optimisation methods influenced the final result.
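A hedged sketch of this comparison using GeoPandas is shown below; the file names are hypothetical, and a projected CRS is assumed so that polygon areas are meaningful.

```python
import geopandas as gpd

def coverage_share(detected_path, reference_path):
    """Percentage of the reference burnt area covered by a detection layer."""
    detected = gpd.read_file(detected_path)
    reference = gpd.read_file(reference_path)
    inter = gpd.overlay(detected, reference, how="intersection")
    return 100.0 * inter.geometry.area.sum() / reference.geometry.area.sum()

# e.g. coverage_share("core_burnt.gpkg", "reference_polygons.gpkg")
```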

4. Results

Figure 6 shows the reference polygons and the burn scars detected by each method. The results of the quantitative validation are presented in Table 3 and described in Section 4.1. Table 4 and Table 5 collect the insights from the visual validation, which are described, together with the results of the comparison, in Section 4.2.

4.1. Validation of Burn Scars Maps

Validation of the EMS rapid mapping burn scar delineation maps showed that, in general, none of them exhibited omission errors in relation to the reference extent of the fire (Table 3, Figure 7). The burn scar in the UK showed a slight commission error, while those in Greece and Sweden had larger commission errors. In the case of Poland, the F1 score of the EMS product was much better than the results given by the object- or pixel-based methods.
However, it must be noted that both tested methods always used Sentinel-2 images as the basis for classification, which is not the case in the Copernicus EMS mapping procedure. EMS used different sources of input data: for Poland, Sweden, and Greece, it was SPOT 6/7, and for the UK, GeoEye-1 and SPOT 7. These satellites carry sensors with better spatial resolution, and the images are sometimes acquired on a date closer to the event, so vegetation changes are not as large as when only Sentinel-2 images are used. There are also differences in cloud cover during the acquisitions. Both conditions applied to test site 1 and explain the very good result of the Copernicus EMS map compared with the tested algorithms.
Table 3. Validation results.

| Test Site | Method | No. of Burnt Reference Points | No. of Non-burnt Reference Points | TP | FP | FN | Recall | Precision | F1 | OA (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| TS1 PL | pixel-based | 446 | 24,554 | 293 | 212 | 153 | 0.66 | 0.58 | 0.62 | 98.54 |
| TS1 PL | object-based | 446 | 24,554 | 222 | 24 | 224 | 0.50 | 0.90 | 0.64 | 99.01 |
| TS1 PL | EMS Copernicus | 446 | 24,554 | 445 | 14 | 1 | 1.00 | 0.97 | 0.98 | 99.94 |
| TS2 SE | pixel-based | 368 | 24,632 | 268 | 82 | 100 | 0.73 | 0.77 | 0.75 | 99.27 |
| TS2 SE | object-based | 368 | 24,632 | 232 | 116 | 136 | 0.63 | 0.67 | 0.65 | 98.99 |
| TS2 SE | EMS Copernicus | 368 | 24,632 | 361 | 264 | 7 | 0.98 | 0.58 | 0.73 | 98.92 |
| TS3 UK | pixel-based | 94 | 24,906 | 90 | 125 | 4 | 0.96 | 0.42 | 0.58 | 99.48 |
| TS3 UK | object-based | 94 | 24,906 | 91 | 20 | 3 | 0.97 | 0.82 | 0.89 | 99.91 |
| TS3 UK | EMS Copernicus | 94 | 24,906 | 94 | 20 | 0 | 1.00 | 0.83 | 0.90 | 99.92 |
| TS4 GR | pixel-based | 397 | 24,603 | 362 | 45 | 35 | 0.91 | 0.89 | 0.90 | 99.68 |
| TS4 GR | object-based | 397 | 24,603 | 324 | 25 | 73 | 0.82 | 0.93 | 0.87 | 99.61 |
| TS4 GR | EMS Copernicus | 397 | 24,603 | 395 | 122 | 2 | 1.00 | 0.76 | 0.86 | 99.50 |
The confusion matrices showed that, considering the two tested methods, the detection of the burnt area was best for the Greek test site. The F1 score was 0.87 for the object-based method and 0.90 for the pixel-based method. The worst results were obtained for the Polish site, with F1 equal to 0.62 for the pixel-based method (caused by both commission and omission errors) and 0.64 for the object-based method (caused by omission error). The results for the Swedish and UK areas varied. The largest difference in F1 score between the two methods was recorded for the test site in the UK, where the object-based method achieved 0.89 (small commission error) and the pixel-based method achieved 0.58 (large commission error; however, recall was very good, meaning almost no omission errors). The results in Sweden were the opposite—the pixel-based method achieved an F1 of 0.75 (small omission and commission errors), and the object-based method achieved 0.65 (both omission and commission errors).
Overall accuracy was very high. For the object-based method, accuracy ranged from 98.99% to 99.91%; for the pixel-based method, from 98.54% to 99.68%; and for EMS rapid mapping, from 98.92% to 99.94%. However, overall accuracy values have to be analysed with the awareness that they are not entirely reliable, due to the large disproportion between the number of burnt and non-burnt reference points.

4.2. Comparison of the Pixel- Versus Object-Based Approach

4.2.1. Visual Analysis of the Results

Table 4 presents the main classification errors detected for each test site by visual analysis of the results. As far as possible, the errors are ordered starting from the most frequent or those occupying the largest areas.
Table 4. The results of visual analysis of misclassified places within the respective test sites.

TS 1 Poland
Commission errors
- Object-based method: water in the river Biebrza over a distance of several kilometres; a large cluster of pixels in a forest on wetland; strips of green vegetation inside the main burn scar; one field which is covered with green vegetation on the pre-event image and is open ground on the post-event image; two small clusters of pixels on cloud shadows
- Pixel-based method: large clusters of pixels in the meandering river valley of the Biebrza; some fields which are covered with green vegetation on the pre-event image and are open ground on the post-event image; single pixels and small pixel clusters on farmland, cut forest, shadowed edges of forest, and shores of water reservoirs; some edges of clouds and cloud shadows; some mowed meadows
Omission errors
- Object-based method: forest; meadows
- Pixel-based method: forest

TS 2 Sweden
Commission errors
- Object-based method: cloud shadows; clouds; patches of green vegetation, lakes, and strips of roads inside the burn scar
- Pixel-based method: single pixels and small pixel clusters on water reservoirs (especially close to shores) and forest; some edges of clouds and cloud shadows
Omission errors
- Object-based method: forest
- Pixel-based method: forest

TS 3 United Kingdom
Commission errors
- Object-based method: patches of green vegetation inside the burn scar; parts of three water reservoirs and a few small lakes; the edges of some clouds and small parts of shadows
- Pixel-based method: single pixels and small pixel clusters on various land cover: built-up areas, farmland, meadows, roads, water; some shores of water reservoirs; some mowed meadows
Omission errors
- Object-based method: small areas inside and one bigger area at the edge of the main burn scar
- Pixel-based method: none

TS 4 Greece
Commission errors
- Object-based method: roads, patches of green vegetation, and open ground inside the main burn scar; parts of a water reservoir
- Pixel-based method: open ground inside the burn scar; single pixels and pixel clusters on various forms of land cover: cloud shadows, meadows, the boundary between forest and non-forest areas, an open pit mine, open ground along a railroad, some farmland; shores of water reservoirs; the edge of one cloud
Omission errors
- Object-based method: some vineyards; burnt grass between single trees and bushes
- Pixel-based method: burnt trees
As a result of the visual analysis of the algorithms' outputs, a summary was created (Table 5). It lists the places that were incorrectly classified. The pixel-based method resulted in more commission errors: some unburnt places were classified as burnt. However, there were also errors characteristic of the object-based approach. The misclassification of small unburnt areas surrounded by burnt areas resulted from the applied minimum object size and, in particular, from the post-classification tools used, which generalise the extent of the burnt area.
Table 5. Summary of the visual analysis of misclassified places within all test sites. The more pluses, the more test sites where the problem occurred (pluses are counted separately for the object- and pixel-based methods).

| Misclassified place | Commission errors | Omission errors |
|---|---|---|
| Cloud or its rim | +++++ | |
| Cloud shadow or its rim | +++ | |
| Water reservoir | + | |
| Shore or part of water reservoir | +++ | |
| Part of river or river's valley (Figure 8) | +++ | |
| Mowed meadow | ++ | |
| Single pixels or small clusters of pixels with various land cover types | +++++ | |
| Green vegetation inside main burn scar | +++ | |
| Other land cover inside main burn scar | ++++ | |
| Trees or forest | + | +++++ |
| Meadow | | + |
| Vineyard | | + |
| Burnt grass between single trees or bushes | | + |
Some errors were present on water bodies as well (Figure 9). This may indicate that the water index was not fully effective in detecting water. In the case of the pixel-based approach, a large number of individual pixels or small clusters of pixels occurred in the unburnt area. They appeared on different types of land cover, and it was difficult to determine any prevailing one.
Omission errors occurred primarily in areas covered with groups of trees or forest. This applies to both methods. If only the litter and undergrowth are burnt and show through between the unburnt treetops, the spectral response of such an area is not classified as burnt ground by either algorithm. Some parts of a forest are counted as burnt as a result of post-classification in the case of the object-based method, but this is quite random. Most burnt areas covered with trees are not detected. This is also confirmed by the low compliance values presented in Figure 11 for the test sites in Poland and Sweden.
The applied cloud and shadow mask did not eliminate all areas that should be excluded from the analysis (Figure 10). This is a problem that influences both approaches because the same mask has been used.

4.2.2. Comparison of Accuracies of Object- and Pixel-Based Approaches

Both detection algorithms generate intermediate products called core burnt areas. The results of comparing the extent of the core areas with the reference polygons (see Section 3.3) are presented in Figure 11. It can be seen that in the case of the third test site (UK), the core areas had a very high percentage of coverage of the reference polygons, and thus little could be improved in the post-processing and optimisation of the initial detection. This is consistent with the absence of omission errors shown by the validation.
The greatest difference between the core burnt area and the final burn scar was revealed in the case of the test site in Greece (both methods) and in Sweden (pixel-based method only). This could be related to the strong heterogeneity of the burnt areas, connected both to the composition and vertical structure of the burnt ecosystem and to the varied intensity of the fire.
Relatively low values of compliance occurred for the Polish and Swedish test sites. This was due to problems with cloud and shadow masking and the presence of forest in burnt areas.

5. Discussion

The polar and temperate zones [54] in Europe are characterised by more frequent cloudiness than the subtropical zone [55]. Thus, it is often impossible to acquire a cloudless image shortly after a fire. Therefore, the masking of clouds and their shadows is an integral part of the analysis of the extent of the burnt area. The mask used did not fully remove cloud- and shadow-affected areas [56], so these places were susceptible to misclassification. Both the pixel- and object-based methods are affected by errors related to inefficient cloud and shadow masking. This kind of error is not intrinsic to the methods and can be considered external; however, object-based classification is somewhat less affected.
The pixel-based method studied here does not avoid the inherent problem of this approach, i.e., commission errors in the form of individual misclassified pixels, known as the salt-and-pepper effect, which has been discussed in the literature [33,34]. This effect was reduced by masking potentially problematic areas such as water bodies, non-vegetated areas, clouds, and smoke, but compared with the results of the object-based method, it is still visible.
The use of the pixel-based approach to detect a relatively small burnt area in relation to the size of the test site was also a major challenge for this method. In many studies [17,18], the analysed area is limited to the immediate vicinity of the burnt area, which usually reduces the commission error. On the other hand, the pixel-based approach detects green vegetation and other land cover inside the main burn scars better than the object-based approach, where the precision of detection of small areas is directly connected to the defined minimum mapping unit [35]. Moreover, the object-based approach detects other forms of land cover (meadows, vineyards) as burn scars.
Neither the object- nor pixel-based approach reduced the omission errors in burnt forest, especially when the fire mainly affected forest undergrowth. This is a common error, which was noted previously [57] and can be overcome by using hyperspatial imagery [58].
The period of data acquisition prior to and after a fire seems to play a role for both methods. The acceptable delay of scene acquisition after a fire varies depending on the land cover and the timing of the fire itself. In boreal coniferous forest, it is not crucial: the image can be obtained even a few weeks after the event if the fire affected the tree crowns. However, in the case of ecosystems that change more dynamically during the year, such as peatbogs, swamp vegetation, and reeds, the window in which a satellite image should be obtained is quite narrow. This is because the vegetation regrows dynamically after the winter period: it turns green, which changes the spectral response of the surroundings of the burnt area, and the burnt area itself can regenerate quite quickly. This was the case for the analysed test site 1.
It seems that a hybrid classification, which first detects burn scars at a general scale using object-based classification and then applies a pixel-based approach for the detailed delimitation of the burnt area, would reduce omission and commission errors. Similar conclusions were reached for land cover classification in [35].

6. Conclusions

The pixel-based method tested in our study allows for a more precise delineation of the border between burnt and unburnt areas, causing fewer commission errors in burn scar detection, but only in the immediate vicinity of the burnt surface. It is best suited for detecting burn scars in a limited area of interest. Otherwise, in the remaining unburnt area, commission errors are significant and show up as single misclassified pixels or small clusters of pixels covering different forms of land cover.
The object-based method tested in our study can and should be applied to large areas of interest because, unlike the pixel-based approach, it calculates statistics on the basis of the entire satellite scene, and commission errors outside the burn scar are rare. On the other hand, commission errors involving relatively small patches of unburnt land surrounded by burnt areas are present, unlike with the pixel-based method. However, they could be minimised by setting less rigorous parameters for the minimum object size and the generalisation in the post-classification process.
Both methods are quite sensitive to phenological changes in vegetation, and thus it is very important, if not crucial, to choose the pair of satellite images acquired as close to the start and end of the fire as possible. Both are also dependent on the accuracy of the Scene Classification Layer included in Sentinel-2 products, which seems not to be high enough. Cloud and shadow masks dedicated to Sentinel-2 images need further improvement.
Finally, future research may focus on the combination of the two methods, which could mitigate their inefficiencies. For example, one may consider applying both methods in a sequential manner where they are adapted to each other, e.g., first the object-based method for a general recognition of burn scars in the entire satellite scene, and then the pixel-based method for a more precise delimitation of the edges of burnt areas.

Author Contributions

Conceptualisation, I.M., E.W. and M.M.; methodology, M.M., I.M., E.W., A.K., R.-T.C. and S.A.; software, S.A., A.K. and R.-T.C.; validation, M.M.; formal analysis, M.M.; investigation, M.M., I.M., E.W., S.A., A.K. and R.-T.C.; data curation, M.M.; writing—original draft preparation, M.M., A.K., R.-T.C., I.M., E.W. and S.A.; writing—review and editing, E.W., M.M. and I.M.; visualisation, M.M. and I.M.; supervision, E.W. and I.M.; project administration, E.W. and I.M.; funding acquisition, E.W. and I.M. All authors have read and agreed to the published version of the manuscript.

Funding

The project has received funding from the European Union’s Horizon 2020 Research and Innovation programme under grant agreement no. 952111 (EOTiST).

Data Availability Statement

The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to their use for ongoing research and intended publications on the topic by the authorship working teams.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brushlinsky, N.N.; Ahrens, M.; Sokolov, S.V.; Wagner, P. World Fire Statistics 22; Center of Fire Statistics of International Association of Fire and Rescue Services: Ljubljana, Slovenia, 2017. [Google Scholar]
  2. Molina-Terrén, D.M.; Xanthopoulos, G.; Diakakis, M.; Ribeiro, L.; Caballero, D.; Delogu, G.M.; Viegas, D.X.; Silva, C.A.; Cardil, A. Analysis of forest fire fatalities in Southern Europe: Spain, Portugal, Greece and Sardinia (Italy). Int. J. Wildland Fire 2019, 28, 85–98. [Google Scholar] [CrossRef]
  3. Zong, X.; Tian, X.; Yao, Q.; Brown, P.M. An analysis of fatalities from forest fires in China, 1951–2018. Int. J. Wildland Fire 2022, 31, 507–517. [Google Scholar] [CrossRef]
  4. Andersen, A.N.; Cook, G.D.; Corbett, L.K.; Douglas, M.M.; Eager, R.W.; Russell-Smith, J.; Setterfield, S.A.; Williams, R.J.; Woinarski, J.C. Fire frequency and biodiversity conservation in Australian tropical savannas: Implications from the Kapalga fire experiment. Austral Ecol. 2005, 30, 155–167. [Google Scholar] [CrossRef]
  5. Certini, G. Effects of fire on properties of forest soils: A review. Oecologia 2005, 143, 1–10. [Google Scholar] [CrossRef] [PubMed]
  6. Kumar, G.; Kumar, A.; Saikia, P.; Roy, P.S.; Khan, M.L. Ecological impacts of forest fire on composition and structure of tropical deciduous forests of central India. Phys. Chem. Earth 2022, 128, 103240. [Google Scholar] [CrossRef]
  7. Alcasena, F.J.; Salis, M.; Nauslar, N.J.; Aguinaga, E.A.; Vega-García, C. Quantifying economic losses from wildfires in black pine afforestations of northern Spain. For. Policy Econ. 2016, 73, 153–167. [Google Scholar] [CrossRef]
  8. Stougiannidou, D.; Zafeiriou, E.; Raftoyannis, Y. Forest Fires in Greece and Their Economic Impacts on Agriculture in Economies of the Balkan and Eastern European Countries. KnE Soc. Sci. 2020, 4, 54–70. [Google Scholar] [CrossRef]
  9. Bowman, D.M.J.S.; Balch, J.K.; Artaxo, P.; Bond, W.J.; Carlson, J.M.; Cochrane, M.A.; D’Antonio, C.M.; DeFries, R.S.; Doyle, J.C.; Harrison, S.P.; et al. Fire in the Earth System. Science 2009, 324, 481–484. [Google Scholar] [CrossRef]
  10. Flannigan, M.D.; Stocks, B.J.; Wotton, B.M. Climate change and forest fires. Sci. Total Environ. 2000, 262, 221–229. [Google Scholar] [CrossRef]
  11. McColl-Gausden, S.C.; Bennett, L.T.; Clarke, H.G.; Ababei, D.A.; Penman, T.D. The fuel–climate–fire conundrum: How will fire regimes change in temperate eucalypt forests under climate change? Glob. Chang. Biol. 2022, 28, 5211–5226. [Google Scholar] [CrossRef]
  12. Chuvieco, E.; Mouillot, F.; van der Werf, G.R.; San Miguel, J.; Tanase, M.; Koutsias, N.; García, M.; Yebra, M.; Padilla, M.; Gitas, I.; et al. Historical background and current developments for mapping burned area from satellite Earth observation. Remote Sens. Environ. 2019, 225, 45–64. [Google Scholar] [CrossRef]
  13. Woźniak, E.; Aleksandrowicz, S. Self-Adjusting Thresholding for Burnt Area Detection Based on Optical Images. Remote Sens. 2019, 11, 2669. [Google Scholar] [CrossRef]
  14. Malambo, L.; Heatwole, C.D. Automated training sample definition for seasonal burned area mapping. ISPRS J. Photogramm. Remote Sens. 2020, 160, 107–123. [Google Scholar] [CrossRef]
  15. Hu, X.; Ban, Y.; Nascetti, A. Uni-Temporal Multispectral Imagery for Burned Area Mapping with Deep Learning. Remote Sens. 2021, 13, 1509. [Google Scholar] [CrossRef]
  16. Lizundia-Loiola, J.; Franquesa, M.; Khairoun, A.; Chuvieco, E. Global burned area mapping from Sentinel-3 Synergy and VIIRS active fires. Remote Sens. Environ. 2022, 282, 113298. [Google Scholar] [CrossRef]
  17. Cho, A.Y.; Park, S.-E.; Kim, D.-J.; Kim, J.; Li, C.; Song, J. Burned Area Mapping Using Unitemporal PlanetScope Imagery With a Deep Learning Based Approach. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 242–253. [Google Scholar] [CrossRef]
  18. Sismanis, M.; Chadoulis, R.-T.; Manakos, I.; Drosou, A. An Unsupervised Burned Area Mapping Approach Using Sentinel-2 Images. Land 2023, 12, 379. [Google Scholar] [CrossRef]
  19. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring vegetation systems in the Great Plains with ERTS. In Proceedings of the 3rd Earth Resources Technology Satellite-1 Symposium (NASA), Washington, DC, USA, 10–14 December 1974; pp. 309–317. [Google Scholar]
  20. Chuvieco, E.; Martín, M.P.; Palacios, A. Assessment of different spectral indices in the red-near-infrared spectral domain for burned land discrimination. Int. J. Remote Sens. 2002, 23, 5103–5110. [Google Scholar] [CrossRef]
  21. Pinty, B.; Verstraete, M.M. GEMI: A non-linear index to monitor global vegetation from satellites. Vegetation 1992, 101, 15–20. [Google Scholar] [CrossRef]
  22. Martín, M.P. Cartografía e Inventario de Incendios Forestales en la Península Ibérica a Partir de Imágenes NOAA–AVHRR. Ph.D. Thesis, Departamento de Geografía, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain, 1998. [Google Scholar]
  23. Key, C.H.; Benson, N.C. Measuring and remote sensing of burn severity. In Proceedings of the Joint Fire Science Conference and Workshop: Crossing the Millennium: Integrating Spatial Technologies and Ecological Principles for a New Age in Fire Management, Boise, Idaho, 15–17 June 1999; Volume 2, p. 284. [Google Scholar]
  24. Miller, J.D.; Thode, A.E. Quantifying burn severity in a heterogeneous landscape with a relative version of the delta Normalized Burn Ratio (dNBR). Remote Sens. Environ. 2007, 109, 66–80. [Google Scholar] [CrossRef]
  25. Veraverbeke, S.; Lhermitte, S.; Verstraeten, W.W.; Goossens, R. A time-integrated MODIS burn severity assessment using the multi-temporal differenced normalized burn ratio (dNBRMT). Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 52–58. [Google Scholar] [CrossRef]
  26. Filipponi, F. BAIS2: Burned Area Index for Sentinel-2. Proceedings 2018, 2, 364. [Google Scholar] [CrossRef]
  27. Bastarrika, A.; Chuvieco, E.; Martín, M.P. Mapping burned areas from Landsat TM/ETM+ data with a two-phase algorithm: Balancing omission and commission errors. Remote Sens. Environ. 2011, 115, 1003–1012. [Google Scholar] [CrossRef]
  28. Stroppiana, D.; Bordogna, G.; Carrara, P.; Boschetti, M.; Boschetti, L.; Brivio, P.A. A method for extracting burned areas from Landsat TM/ETM+ images by soft aggregation of multiple Spectral Indices and a region growing algorithm. ISPRS J. Photogramm. Remote Sens. 2012, 69, 88–102. [Google Scholar] [CrossRef]
  29. Stroppiana, D.; Azar, R.; Calò, F.; Pepe, A.; Imperatore, P.; Boschetti, M.; Silva, J.M.N.; Brivio, P.A.; Lanari, R. Integration of Optical and SAR Data for Burned Area Mapping in Mediterranean Regions. Remote Sens. 2015, 7, 1320–1345. [Google Scholar] [CrossRef]
  30. Hawbaker, T.J.; Vanderhoof, M.K.; Beal, Y.-J.; Takacs, J.D.; Schmidt, G.L.; Falgout, J.T.; Williams, B.; Fairaux, N.M.; Caldwell, M.K.; Picotte, J.J.; et al. Mapping burned areas using dense time-series of Landsat data. Remote Sens. Environ. 2017, 198, 504–522. [Google Scholar] [CrossRef]
  31. Goodwin, N.R.; Collett, L.J. Development of an automated method for mapping fire history captured in Landsat TM and ETM + time series across Queensland, Australia. Remote Sens. Environ. 2014, 148, 206–221. [Google Scholar] [CrossRef]
  32. Gitas, I.Z.; Mitri, G.H.; Ventura, G. Object-based image classification for burned area mapping of Creus Cape, Spain, using NOAA-AVHRR imagery. Remote Sens. Environ. 2004, 92, 409–413. [Google Scholar] [CrossRef]
  33. Mitri, G.H.; Gitas, I.Z. A semi-automated object-oriented model for burned area mapping in the Mediterranean region using Landsat-TM imagery. Int. J. Wildland Fire 2004, 13, 367–376. [Google Scholar] [CrossRef]
  34. Chen, M.; Su, W.; Li, L.; Zhang, C.; Anzhi, Y.; Li, H. A Comparison of Pixel-based and Object-oriented Classification Using SPOT5 Imagery. Appl. Comput. Math. 2008, 6, 321–326. [Google Scholar]
  35. Bernardini, A.; Frontoni, E.; Malinverni, E.S.; Mancini, A.; Tassetti, A.; Zingaretti, P. Pixel, object and hybrid classification comparisons. J. Spat. Sci. 2010, 55, 43–54. [Google Scholar] [CrossRef]
  36. Bhaskaran, S.; Paramananda, S.; Ramnarayan, M. Per-pixel and object-oriented classification methods for mapping urban features using Ikonos satellite data. Appl. Geogr. 2010, 30, 650–665. [Google Scholar] [CrossRef]
  37. Zoleikani, R.; Zoej, M.J.V.; Mokhtarzadeh, M. Comparison of Pixel and Object Oriented Based Classification of Hyperspectral Pansharpened Images. J. Indian Soc. Remote Sens. 2017, 45, 25–33. [Google Scholar] [CrossRef]
  38. Moosavi, V.; Talebi, A.; Shirmohammadi, B. Producing a landslide inventory map using pixel-based and object-oriented approaches optimized by Taguchi method. Geomorphology 2014, 204, 646–656. [Google Scholar] [CrossRef]
  39. Planet, Planet Imagery Product Specifications. 2022. Available online: https://assets.planet.com/docs/Planet_Combined_Imagery_Product_Specs_letter_screen.pdf (accessed on 10 May 2023).
  40. Burned Area Satellite Product Validation Protocol Page. Available online: https://lpvs.gsfc.nasa.gov/PDF/BurnedAreaValidationProtocol.pdf (accessed on 20 April 2023).
  41. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L.; et al. The Shuttle Radar Topography Mission. Rev. Geophys. 2007, 45, RG2004. [Google Scholar] [CrossRef]
  42. Kurczyński, Z. Airborne Laser Scanning in Poland—Between Science and Practice. Arch. Photogramm. Cartogr. Remote Sens. 2019, 31, 105–133. [Google Scholar] [CrossRef]
  43. Tadono, T.; Nagai, H.; Ishida, H.; Oda, F.; Naito, S.; Minakawa, K.; Iwamoto, H. Initial Validation of the 30 m-mesh Global Digital Surface Model Generated by ALOS PRISM. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXIII ISPRS Congress, Prague, Czech Republic, 12–19 July 2016; Volume XLI-B4, pp. 157–162. [Google Scholar]
  44. Level-2A Algorithm Overview, ESA Page. Available online: https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm-overview (accessed on 20 April 2023).
  45. Malinowski, R.; Lewiński, S.; Rybicki, M.; Gromny, E.; Jenerowicz, M.; Krupiński, M.; Nowakowski, A.; Wojtkowski, C.; Krupiński, M.; Krätzschmar, E.; et al. Automated Production of a Land Cover/Use Map of Europe Based on Sentinel-2 Imagery. Remote Sens. 2020, 12, 3523. [Google Scholar] [CrossRef]
  46. Rapid Mapping, Copernicus Emergency Management Service Page. Available online: https://emergency.copernicus.eu/mapping/list-of-activations-risk-and-recovery (accessed on 11 May 2023).
  47. Baatz, M.; Schäpe, A. Object-oriented and multi-scale image analysis in semantic networks. In Proceedings of the 2nd International Symposium: Operationalization of Remote Sensing, Enschede, The Netherlands, 16–20 August 1999; pp. 7–13. [Google Scholar]
  48. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  49. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  50. Loboda, T.; O’Neal, K.; Csiszar, I. Regionally adaptable dNBR-based algorithm for burned area mapping from MODIS data. Remote Sens. Environ. 2007, 109, 429–442. [Google Scholar] [CrossRef]
  51. Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619. [Google Scholar] [CrossRef]
  52. Li, C.H.; Lee, C.K. Minimum cross entropy thresholding. Pattern Recognit. 1993, 26, 617–625. [Google Scholar] [CrossRef]
  53. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
  54. Map of Main Climates in Europe, European Environment Agency Page. Available online: https://www.eea.europa.eu/data-and-maps/figures/climate (accessed on 20 April 2023).
  55. Cloudy Earth, NASA Earth Observatory Page. Available online: https://earthobservatory.nasa.gov/images/85843/cloudy-earth (accessed on 26 April 2023).
  56. Coluzzi, R.; Imbrenda, V.; Lanfredi, M.; Simoniello, T. A first assessment of the Sentinel-2 Level 1-C cloud mask product to support informed surface analyses. Remote Sens. Environ. 2018, 217, 426–443. [Google Scholar] [CrossRef]
  57. Stroppiana, D.; Grégoire, J.-M.; Pereira, J.M.C. The use of SPOT VEGETATION data in a classification tree approach for burnt area mapping in Australian savanna. Int. J. Remote Sens. 2003, 24, 2131–2151. [Google Scholar] [CrossRef]
  58. Hamilton, D.; Brothers, K.; McCall, C.; Gautier, B.; Shea, T. Mapping Forest Burn Extent from Hyperspatial Imagery Using Machine Learning. Remote Sens. 2021, 13, 3843. [Google Scholar] [CrossRef]
Figure 1. Location of test sites in Europe (red rectangles).
Figure 2. (left) Test site 1 in Poland; background—Sentinel-2 Global Land Cover 2017. (right) Close-up on the burn scar; background—shaded DEM 10 m (LiDAR/GUGiK). Black polygon marks the extent of the reference polygon.
Figure 3. (left) Test site 2 in Sweden; background—Sentinel-2 Global Land Cover 2017. (middle) and (right) Close-up on the burn scars; background—DEM 30 m (ALOS World 3D/JAXA). Black polygons mark the extent of the reference polygons.
Figure 4. (left) Test site 3 in the United Kingdom; background—Sentinel-2 Global Land Cover 2017. (right) Close-up on the burn scar; background—shaded DEM 30 m (SRTM/NASA). Black polygon marks the extent of the reference polygon.
Figure 5. (left) Test site 4 in Greece; background—Sentinel-2 Global Land Cover 2017. (right) Close-up on the burn scar; background—shaded DEM 30 m (SRTM/NASA). Black polygon marks the extent of the reference polygon.
Figure 6. A close-up of burnt areas in (a) TS1 Poland, (b) TS2 Sweden, (c) TS3 United Kingdom, and (d) TS4 Greece. From left to right: Sentinel-2 satellite image (bands 12-8-4), reference polygon (in pink), object-based method (light blue—core burnt area, dark blue—final burnt area), pixel-based method (light violet—core burnt area, dark violet—final burnt area), Copernicus EMS (in yellow). The background image is Sentinel-2.
Figure 7. Intercomparison of accuracies of the maps produced with the three (3) different methods.
Figure 8. Snapshot example of misclassification in the river valley. (left) Sentinel-2 image; (right) Sentinel-2 image with burn scar classification: blue—object-based method, orange—pixel-based method.
Figure 9. Snapshot example of misclassification in the case of water reservoirs. Sentinel-2 images with burn scar classification: blue—object-based method, orange—pixel-based method.
Figure 10. Snapshot example of misclassifications in the case of clouds and shadows. Sentinel-2 images with burn scar classification: blue—object-based method, orange—pixel-based method.
Figure 11. Comparison of core and final burnt areas versus reference burnt areas.
Table 1. Satellite imagery used for mapping of burn scars and validation.

| Test Site (TS) | TS 1 Poland | TS 2 Sweden | TS 3 United Kingdom | TS 4 Greece |
|---|---|---|---|---|
| Pre-event image | Sentinel-2 L2A, 25 March 2020 | Sentinel-2 L2A, 4 July 2018 | Sentinel-2 L2A, 24 June 2018 | Sentinel-2 L2A, 1 August 2021 |
| Post-event image | Sentinel-2 L2A, 14 May 2020 | Sentinel-2 L2A, 27 July 2018 | Sentinel-2 L2A, 14 July 2018 | Sentinel-2 L2A, 16 August 2021 |
| Reference image | PlanetScope, 8 May 2020 | PlanetScope, 27 July 2018 | PlanetScope, 14 July 2018 | PlanetScope, 15 August 2021 |
Table 2. Auxiliary data used for mapping of burn scars and validation.

| Test Site (TS) | TS 1 Poland | TS 2 Sweden | TS 3 United Kingdom | TS 4 Greece |
|---|---|---|---|---|
| Activation no. of Copernicus EMS Rapid Mapping | EMSR 436—Goniadz | EMSR 436—Lillhardal and Strandasmyrvallen | EMSR 436—Mossley | EMSR 436—Diabolitsi |
| Elevation | DEM SRTM v.3.0 (NASA), DEM LiDAR (GUGiK) | DEM ALOS World 3D (JAXA) | DEM SRTM v.3.0 (NASA) | DEM SRTM v.3.0 (NASA) |
| Clouds and shadows | Sentinel-2 Scene Classification (SCL) band, 15 May 2020 | Sentinel-2 Scene Classification (SCL) band, 27 July 2018 | n/a | n/a |
| Land cover | S2GLC 2017 | S2GLC 2017 | S2GLC 2017 | S2GLC 2017 |
