Article

High-Resolution Semantic Segmentation of Woodland Fires Using Residual Attention UNet and Time Series of Sentinel-2

1 Department of Forest Ecology and Management, Swedish University of Agricultural Sciences (SLU), Skogsmarksgränd 17, 901 83 Umeå, Sweden
2 Department of Forest Sciences, University of Helsinki, Latokartanonkaari 7, 00014 Helsinki, Finland
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(5), 1342; https://doi.org/10.3390/rs15051342
Submission received: 10 January 2023 / Revised: 18 February 2023 / Accepted: 23 February 2023 / Published: 28 February 2023

Abstract
Southern Africa experiences a great number of wildfires, but the dependence on low-resolution products to detect and quantify fires means both that there is a time lag and that many small fire events are never identified. This is particularly relevant in miombo woodlands, where fires are frequent and predominantly small. We developed a cutting-edge deep-learning-based approach that uses freely available Sentinel-2 data for near-real-time, high-resolution fire detection in Mozambique. The importance of Sentinel-2 main bands and their derivatives was evaluated using TreeNet, and the top five variables were selected to create three training datasets. We designed a UNet architecture comprising contraction and expansion paths connected by a bridge, each with several layers and functions. We then extended the UNet architecture with attention gate units (AUNet) and with residual blocks plus attention gate units (RAUNet), and trained all three models with the three datasets. The efficiency of all three models was high (intersection over union (IoU) > 0.85) and increased with more variables. This is the first time an RAUNet architecture has been used to detect fire events, and it performed better than the UNet and AUNet models, especially for detecting small fires. The RAUNet model with five variables had IoU = 0.9238 and overall accuracy = 0.985. We suggest that others test the RAUNet model with large datasets from different regions and other satellites so that it may be applied more broadly to improve the detection of wildfires.


1. Introduction

Africa has been called the “fire continent” [1], but the fire detection methods typically used across the continent are based on low-resolution satellite images (MODIS and Sentinel-3), so small fires and burned areas cannot be detected precisely. Furthermore, these methods rely on burned area products (e.g., Sentinel-3/OLCI, SLSTR V1, and MCD64A1 Version 6.1) [2,3] rather than raw satellite data, creating a time lag. Together, these limitations mean that fires cannot be detected at their onset, in near real time, while they are still small. In the aftermath of fires, burned areas often go unquantified, and the effects on forest ecology and carbon dynamics remain largely unknown.
Nowhere are these issues more acute than in miombo woodlands, where fires occur ca. 50% more frequently than in other ecoregions of the world [4] and are dominated by small burn patches [5,6]. The miombo ecoregion covers 360 Mha across eastern and southern Africa [7] and is vitally important to, and interconnected with, rural livelihoods [8,9]. Miombo is one of the richest ecoregions in the world and among the top five global wilderness areas for prioritizing conservation [10]. Unfortunately, miombo ecosystems are increasingly threatened by natural and anthropogenic forces [11], among which fire is a major driver of degradation [4]. Miombo woodlands have co-developed with people and are well adapted to fire [12,13]. However, frequent and intense fires, especially late in the dry season, can reduce regeneration and growth, spur substantial carbon emissions, and alter species composition, thereby threatening livelihoods, biodiversity, and climate-related programs [14].
Although several attempts have been made to detect large fires in miombo woodlands [5,15,16,17], identifying small fires remains difficult. Integrating cutting-edge deep-learning-based approaches with high-resolution Sentinel-1 and -2 data may solve this problem, allowing us both to detect the onset of fires in near real time and to quantify the effects of fires of all sizes in this dynamic ecosystem. Some dimensions of wildfires in miombo woodlands, such as frequency and severity, have been explored through coarse- and medium-resolution satellite time series from SPOT Vegetation, MODIS, and Landsat [16,18]. However, knowledge is still lacking on the exact spatial extent, duration, and timing of fires, as well as their relationship with driving forces. High-resolution Sentinel satellite data can overcome this limitation [19]. Using Sentinel-2 data, it was possible to identify over 45% of fires smaller than 100 ha that were not visible in the lower-resolution MODIS products [20].
In order to map burned areas and active fires from satellite data, many studies have derived indices such as the Normalized Burn Ratio (NBR) [21,22,23], Burn Area Index (BAI) [22,23,24], Normalized Difference Vegetation Index (NDVI) [20,23,24], and Normalized Difference Water Index (NDWI) [21]. Unfortunately, these indices are adversely affected by temporal changes in vegetation and the heterogeneity of the landscape [25,26]. Burn indices tailored to Sentinel-2, such as the Burned Area Index for Sentinel-2 (BAIS2) [27], and approaches that prioritize spectral indices [20,22,28,29,30] or the main bands [28,29] show promise for improving fire mapping. However, a combination of different Sentinel-2 derivatives and main bands has yet to be fully explored. Therefore, one of the objectives of the present study was to identify the most important variables for detecting burned areas using Sentinel-2.
In recent years, various convolutional neural network (CNN)-based architectures have been developed for classification, segmentation, and object detection from high-resolution satellite data [31,32]. A CNN architecture consists of a set of layers and functions with a strong ability to learn features from low to high spatial scales via modules and classifiers [33]. Various CNN architectures have been tested for fire detection, such as UNet [21,33,34,35], BA-Net [36], FireNet [37], Smoke-UNet [38], ResNet [39], DSMNN-Net [40], and unsupervised deep learning [41]. For example, FireCNN, applied to multispectral data from the Himawari-8 satellite, outperformed the traditional thresholding method for active fire detection [42].
Several deep-learning-based approaches that use Landsat-8 datasets have been developed to detect forest fires [37,38]. Seydi et al. [37] developed the FireNet approach to detect active fires using a combination of optical and thermal Landsat-8 images. They compared FireNet with classical machine learning approaches (e.g., k-nearest neighbor, support-vector machine, and random forest) and reported its superiority in detecting both active fires and recently burned areas. Wang et al. [38] compared the efficiency of Smoke-UNet with the standard UNet for early detection of forest fires using a composite band of Landsat-8 imagery within different forest biomes and found that Smoke-UNet slightly outperformed the UNet model. In many of these studies, the deep learning architectures were trained on small datasets or on medium- or coarse-resolution images. Some studies also trained the architectures on an RGB composite of the optical bands or a fusion of optical and SAR data [21,38,43,44]. Meanwhile, little is known about the efficiency of datasets that include derivatives of Sentinel data, such as spectral or burn indices, for fire detection. The majority of these studies concentrated on dense forest biomes, but the detection of fires in savanna woodlands (i.e., grassy woodlands) may require special architectures or datasets. Hence, we constructed a novel deep-learning-based architecture and trained it on a large Sentinel-2 dataset representing different locations and times for near-real-time segmentation of fires in the miombo woodlands.
Adding attention gate units to the UNet architecture for image segmentation (attention UNet; AUNet) has further improved the efficiency of UNet-based models. An attention mechanism concentrates on the target features within an image and ignores irrelevant backgrounds [45]. AUNet models have proven effective for mapping burned areas [43], wildfire severity [46], and deforestation [47]. In addition to attention gates, adding residual connections may further improve the efficiency of the UNet (residual attention UNet; RAUNet) [48], particularly for detecting small features within an image [49,50,51,52,53]. However, the efficiency of RAUNet has not yet been investigated for the detection of forest fires. Therefore, this research tests the possibility of improving the detection of fires and burned areas, particularly small fire events, using RAUNet and Sentinel-2 datasets.

2. Materials and Methods

2.1. Description of the Study Area

We focused our study on the Mozambican part of the miombo ecoregion and selected four areas throughout it in which to create the label and image datasets, collect ground-truth samples of fires, and train the deep learning models. Of Mozambique’s 824,000 km² land area, roughly 41% is covered by natural forests and woodlands, the majority of which fall within the miombo ecoregion (Figure 1a).
Miombo is dominated by trees of the genera Brachystegia, Julbernardia, and Isoberlinia, while the mopane woodlands are dominated by Colophospermum mopane (Benth.) Leonard [54]. The woodlands are located at elevations between 200 and 300 m above sea level. The vast majority of the region falls into tropical savanna and dry climates [55]. The average annual precipitation varies between 650 and 1400 mm, with an average temperature of 24–27 °C [7]. Mozambique hosts more than 5500 flora and 4200 fauna species [56]. However, this ecoregion is threatened by high deforestation (ca. 0.80% per year) [57]. Mozambique lost more than 5 Mha of forest from 1990 to 2015 due to urban sprawl, agricultural expansion, logging, intense fuelwood extraction, and uncontrolled and destructive fires [58,59], resulting in ca. 5 Tg C of emissions [57].

2.2. Data

We used Sentinel-1 and -2 data, MODIS fire products, Google Earth images, and field observation data to establish the labelling and training datasets used to develop and assess the deep learning models. We obtained 324 Sentinel-2 images from the Copernicus database, acquired between January 2014 and August 2022. Fires detected by the MCD64A1 and Sentinel-3/OLCI Version 1 products were obtained from the online data pool of the NASA Land Processes Distributed Active Archive Center (LP DAAC; https://lpdaac.usgs.gov/data_access/data_pool, accessed on 10 October 2022) and the Copernicus Global Land Service (CGLS; https://land.copernicus.eu/global/products, accessed on 15 September 2022), respectively. We used these fire products to extract the approximate locations and exact dates of fires when creating the label datasets. We also carried out field observations of burned areas and active fires in central Mozambique (see Section 2.5).

2.3. The Datasets

2.3.1. Pre-Processing of Sentinel-2 Images

We randomly selected 48 multispectral Sentinel-2 images to create the label and training datasets. To derive reflectance data, we applied atmospheric, radiometric, and topographic corrections to the Sentinel-2 images via the open-source Sentinel Application Platform (SNAP; https://step.esa.int/main/toolboxes/snap/, accessed on 15 November 2021). All bands were resampled to a common 10 m pixel size using nearest-neighbor resampling to simplify data processing.
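As an illustration only, the sketch below shows this 20 m to 10 m resampling step in Python with rasterio; the study itself performed the corrections and resampling in SNAP, and the file names here are hypothetical.

```python
# A minimal sketch of the 20 m -> 10 m resampling step using rasterio.
# The study performed this in SNAP; file names here are hypothetical.
import rasterio
from rasterio.enums import Resampling

with rasterio.open("S2_B11_20m.jp2") as src:
    scale = 2  # 20 m / 10 m
    data = src.read(
        out_shape=(src.count, src.height * scale, src.width * scale),
        resampling=Resampling.nearest,  # nearest-neighbor, as in the study
    )
    # Adjust the affine transform so the georeferencing matches the new grid
    transform = src.transform * src.transform.scale(1 / scale, 1 / scale)
    profile = src.profile
    profile.update(driver="GTiff", height=data.shape[1],
                   width=data.shape[2], transform=transform)

with rasterio.open("S2_B11_10m.tif", "w", **profile) as dst:
    dst.write(data)
```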
In addition to the surface reflectance bands, we derived a variety of indices from the Sentinel-2 data in four categories: burn, vegetation, soil, and water indices. All indices, their definitions, and their formulae are provided in Table 1. After evaluating these in TreeNet, we fed a combination of the prominent reflectance bands and their derivatives into the models as datasets.
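Table 1 gives the authoritative definitions; as a sketch, the indices that later ranked highest (Section 3.1) can be computed from surface reflectance arrays roughly as follows, using the formulae from the cited index papers [27,60,61,63,79]:

```python
# Sketch of the top-ranked Sentinel-2 indices computed from 10 m surface
# reflectance arrays (floats); Table 1 remains the authoritative source.
import numpy as np

def nbr2(b11, b12):
    """Normalized Burn Ratio 2: contrast between the two SWIR bands [60,61]."""
    return (b11 - b12) / (b11 + b12)

def mirbi(b11, b12):
    """Mid-Infrared Burn Index [63]."""
    return 10.0 * b12 - 9.8 * b11 + 2.0

def bais2(b04, b06, b07, b8a, b12):
    """Burned Area Index for Sentinel-2 [27]."""
    return (1.0 - np.sqrt((b06 * b07 * b8a) / b04)) * (
        (b12 - b8a) / np.sqrt(b12 + b8a) + 1.0
    )

def mndwi(b03, b11):
    """Modified Normalized Difference Water Index [79]."""
    return (b03 - b11) / (b03 + b11)
```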

2.3.2. Fire Label Dataset

The labelling dataset of woodland fires was generated via visual interpretation of an RGB composite of Sentinel-2 consisting of the shortwave infrared 1 (SWIR1), near-infrared (NIR), and green bands (Figure 1b,c). We created a large geodatabase that includes the exact spatial extent of the fires and the date of the image for each year (Figure 2; Section 2.3.4). Approximately 6050 fire events (active fires or recently burned areas) were delineated, ranging from 0.5 to 71,534 ha and covering ca. 423,500 ha in total. The fire polygons were converted into a binary image in which pixels are labelled 0 (not affected) or 1 (affected by a fire event).
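A hedged sketch of the polygon-to-binary-label conversion is shown below, using geopandas and rasterio to burn the delineated fire polygons onto the 10 m Sentinel-2 grid; the file names are hypothetical assumptions.

```python
# Sketch: convert fire polygons into a binary label image aligned with a
# Sentinel-2 scene. File names are hypothetical.
import geopandas as gpd
import numpy as np
import rasterio
from rasterio import features

fires = gpd.read_file("fire_polygons_2021.gpkg")  # delineated fire events

with rasterio.open("S2_scene_10m.tif") as ref:    # reference 10 m grid
    label = features.rasterize(
        ((geom, 1) for geom in fires.geometry),   # 1 = affected by fire
        out_shape=(ref.height, ref.width),
        transform=ref.transform,
        fill=0,                                   # 0 = not affected
        dtype="uint8",
    )

np.save("fire_label.npy", label)
```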

2.3.3. Determining the Top Variables

We generated a square tessellation with a cell size equal to the minimum size of a delineated feature (i.e., 160 × 160 m), matching the structure used during the training of each deep-learning-based model. The values of all reflectance bands, the derivatives from Sentinel-2, and the areas affected by fire were summarized within each cell. We randomly selected 15,524 cells from areas affected and unaffected by fire. This database was used to select influential variables for enhancing the segmentation of fire events.
The TreeNet regression approach was used to determine the importance values of the variables. TreeNet begins with a small, weak tree and builds each subsequent tree on the residuals of the previous one. Performance also improves gradually as tree depth increases [81]. In our model, we set the number of trees to 2000 and the maximum nodes per tree to 6. The learning rate was set to auto, and least absolute deviation was used as the loss function. The TreeNet model was trained with 75% of the data and tested with the remaining 25%. We used R² to assess the accuracy of the optimal TreeNet model used to identify the most important variables.
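TreeNet is a commercial implementation of Friedman's stochastic gradient boosting [81,82]; a minimal open-source approximation of the setup described above, using scikit-learn, might look like the following. The arrays X and y are placeholders standing in for the 15,524 cell summaries.

```python
# Sketch approximating the TreeNet configuration with scikit-learn gradient
# boosting; X and y are random placeholders for the per-cell database.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((15524, 30))  # per-cell band/derivative summaries (placeholder)
y = rng.random(15524)        # per-cell area affected by fire (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

model = GradientBoostingRegressor(
    loss="absolute_error",  # least absolute deviation
    n_estimators=2000,
    max_leaf_nodes=6,       # maximum nodes per tree
    learning_rate=0.086,    # the rate reported for the optimal model
)
model.fit(X_train, y_train)
print("R2 on the 25% test split:", r2_score(y_test, model.predict(X_test)))

# Relative importance, scaled so the top variable reads 100%
importance = 100 * model.feature_importances_ / model.feature_importances_.max()
```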
The influence of each variable was determined using relative importance values. We also generated partial dependence plots [82] to analyze the response of the target variable (area affected by fire) to each predictor. The top variables were used to create the datasets for training UNet, AUNet, and RAUNet.
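Continuing the sketch above, partial dependence plots [82] can be generated directly from the fitted model; the feature indices here are placeholders.

```python
# Partial dependence of the fire response on three predictors (placeholders)
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

PartialDependenceDisplay.from_estimator(model, X_train, features=[0, 1, 2])
plt.show()
```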

2.3.4. Training Datasets

We established three training datasets including the top three, four, and five variables of Sentinel-2. The values of each variable were normalized between 0 and 255 using a fuzzy linear transformation, with the maximum values assigned to cells indicating areas affected by fires and the minimum values assigned to areas unaffected by fire. We converted the images and their corresponding labels into small patches of 256 × 256 pixels (Figure 1c,d). Each dataset consisted of 6000 image patches and their 6000 corresponding fire labels. We divided the patches into three groups for training (70%), validation (20%), and testing (10%) of UNet, AUNet, and RAUNet. The validation dataset was used to monitor how well the training of our model was progressing, and the testing dataset was used to assess how well the trained model performed [83].
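The sketch below illustrates this preparation step under stated assumptions: `image` and `label` are random placeholders for a stacked variable image and its binary fire label, and the fuzzy linear transformation is read as min-max rescaling.

```python
# Sketch of the dataset preparation: 0-255 normalization, 256x256 patching,
# and the 70/20/10 split. `image` and `label` are placeholders.
import random
import numpy as np

image = np.random.rand(2560, 2560, 5)                        # placeholder stack
label = (np.random.rand(2560, 2560) > 0.9).astype(np.uint8)  # placeholder label

def fuzzy_linear(band):
    """Fuzzy linear (min-max) rescaling of one variable to 0-255."""
    lo, hi = np.nanmin(band), np.nanmax(band)
    return (255.0 * (band - lo) / (hi - lo)).astype(np.uint8)

image = np.dstack([fuzzy_linear(image[..., k]) for k in range(image.shape[-1])])

def to_patches(img, lab, size=256):
    """Cut an (H, W, n) image and (H, W) label into size x size patch pairs."""
    h, w = lab.shape
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            yield img[i:i + size, j:j + size], lab[i:i + size, j:j + size]

pairs = list(to_patches(image, label))
random.shuffle(pairs)
n = len(pairs)
train = pairs[: int(0.7 * n)]              # 70% training
val = pairs[int(0.7 * n): int(0.9 * n)]    # 20% validation
test = pairs[int(0.9 * n):]                # 10% testing
```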

2.4. Residual Attention UNet (RAUNet) Architecture

RAUNet integrates residual blocks and attention gates into the UNet structure [48]. UNet is a fully convolutional neural network with an encoder (a contraction path) and decoder (an expansion path) architecture [84]. Attention gates can highlight the information of target features and reduce noisy and irrelevant backgrounds in an image [85]. Attention gates use an attention mechanism to retrieve spatial details in images, which may help detect small fires. The residual blocks further boost UNet's ability to extract higher-level features in every convolutional layer.
The architecture of RAUNet consists of the contraction and expansion paths of UNet, as well as a bridge that connects the two paths (Figure 3b). The contraction path takes an input image patch (256 × 256 × n, where n is the number of variables in the dataset) and passes it through blocks, helping the network learn the details of a fire object. The contraction path includes four layers, each consisting of a residual block. The spatial dimensions of the features are reduced by a 2 × 2 max-pooling layer. Each residual block includes two dilated convolutional layers (3 × 3), each followed by batch normalization and a rectified linear unit (ReLU) activation function. While the spatial dimensions of the features are halved at each layer, the number of features is doubled during downsampling.
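One plausible Keras reading of the residual block described above is sketched here; the exact block design of the study may differ in detail (e.g., the shortcut projection is an assumption).

```python
# Sketch of a residual block: two dilated 3x3 convolutions with batch
# normalization and ReLU, plus a 1x1-projected identity shortcut.
from tensorflow.keras import layers

def residual_block(x, filters, dilation=1):
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)  # match channels
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)
```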
The bridge layer includes a residual block layer with 512 feature maps of size 16 × 16. This layer connects the contraction and expansion paths. The output of the bridge layer enters the convolutional transpose layer on one side and becomes a gating signal for the attention gate on the other side.
The expansion path also includes four layers. Each layer consists of a residual block followed by a convolutional transpose layer (2 × 2). A skip connection and an attention gate connect each layer in the expansion path with its corresponding layer in the contraction path. The convolutional transpose layer (2 × 2) is used to upsample each layer. The output of the upsampling is concatenated with the output of the attention gate and then passes through the residual blocks. In contrast to the contraction path, the spatial dimensions of the feature maps increase in the expansion path, while their number is halved.
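A simplified additive attention gate in the spirit of [45] is sketched below; the gating signal from the deeper layer re-weights the skip-connection features before concatenation. The upsampling of the gating signal and the 1 × 1 convolutions are assumptions of this sketch.

```python
# Sketch of an additive attention gate (cf. Attention U-Net [45]).
from tensorflow.keras import layers

def attention_gate(skip, gating, inter_channels):
    g = layers.UpSampling2D(size=2, interpolation="bilinear")(gating)
    theta = layers.Conv2D(inter_channels, 1)(skip)   # skip features
    phi = layers.Conv2D(inter_channels, 1)(g)        # gating signal
    att = layers.Activation("relu")(layers.Add()([theta, phi]))
    att = layers.Conv2D(1, 1, activation="sigmoid")(att)  # attention map
    # Broadcast the single-channel attention map over the skip features
    return layers.Lambda(lambda t: t[0] * t[1])([skip, att])
```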
In the output layer, the feature maps are upsampled to the original size of the input image patch. They then pass through a classification layer that produces feature maps matching the number of channels of the input image. The fire probability layers are created by passing the feature maps through a sigmoid activation function. The output is a 1 × 1 convolutional layer with a single channel and minimal loss (Figure 3c).
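The sketch below assembles a RAUNet from the residual_block and attention_gate sketches above. The filter counts are assumptions chosen so that the bridge holds 512 feature maps at 16 × 16, as described in the text; the study's exact configuration may differ.

```python
# Sketch assembling RAUNet: 4-layer contraction path, bridge, 4-layer
# expansion path with attention-gated skips, and a sigmoid 1x1 head.
from tensorflow.keras import Model, layers

def build_raunet(n_vars, base=32):
    inputs = layers.Input((256, 256, n_vars))
    skips, x = [], inputs
    for i in range(4):                                # contraction path
        x = residual_block(x, base * 2 ** i)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)                 # halve spatial dims
    x = residual_block(x, base * 16)                  # bridge: 512 maps, 16x16
    for i in reversed(range(4)):                      # expansion path
        att = attention_gate(skips[i], x, base * 2 ** i)
        x = layers.Conv2DTranspose(base * 2 ** i, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, att])
        x = residual_block(x, base * 2 ** i)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # fire probability
    return Model(inputs, outputs)

model = build_raunet(n_vars=5)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```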
We used the TensorFlow and Keras libraries [86] to construct the UNet, AUNet, and RAUNet architectures and train them in Python. The models were trained on an NVIDIA RTX A2000 GPU. The optimal set of hyperparameters (e.g., learning rate, optimization algorithm, batch size, loss function, and dropout rate) was determined via KerasTuner [83,87]. We set the minimum number of epochs to 100. The performance of the models was monitored using the cross-entropy loss function.
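A minimal sketch of such a KerasTuner search is shown below; the search space and the placeholder arrays are assumptions, not the study's exact setup.

```python
# Sketch of a KerasTuner hyperparameter search over the RAUNet sketch above.
import keras_tuner as kt
import numpy as np
import tensorflow as tf

def model_builder(hp):
    m = build_raunet(n_vars=5)
    lr = hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
    m.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
        loss="binary_crossentropy",   # cross-entropy, as in the study
        metrics=["accuracy"],
    )
    return m

tuner = kt.Hyperband(model_builder, objective="val_accuracy", max_epochs=100)

x_train = np.random.rand(16, 256, 256, 5)              # placeholder patches
y_train = np.random.randint(0, 2, (16, 256, 256, 1))   # placeholder labels
x_val = np.random.rand(4, 256, 256, 5)
y_val = np.random.randint(0, 2, (4, 256, 256, 1))

tuner.search(x_train, y_train, validation_data=(x_val, y_val), batch_size=4)
best_model = tuner.get_best_models(num_models=1)[0]
```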
Figure 4 shows an example of the feature maps produced in a block of the contraction path and a block of the expansion path. Figure 4b presents feature maps from convolution, max pooling, batch normalization, and ReLU activation, with a size of 128 × 128 pixels and 32 channels; at this stage, RAUNet learned the spatial details of burned areas. Figure 4c shows the expansion path (convolution, upsampling, concatenation, and the attention gate, with a size of 256 × 256 and 16 channels), where RAUNet learned the high-level features to map the burned areas. The output of the sigmoid activation is shown in Figure 4d, and the final burned area layer is shown in Figure 4e.
In addition to RAUNet, we trained AUNet (UNet with integrated attention gates) and UNet on our three datasets to better contextualize the performance of RAUNet.

2.5. Accuracy Assessment

2.5.1. Collecting Testing Data from Forest Fires

To assess the performance of the trained RAUNet, AUNet, and UNet with the three datasets, we pre-selected fire patches for collecting ground-truth samples from the LevasFlor concession in central Mozambique. We used the trained models to predict the spatial extent of fires that occurred in 2021 and 2022. The fire polygons were imported into GPS devices and located in the field using GPS navigation. We labelled the polygons as true (fire) or false (non-fire) objects. These samples were used to create a reliable database (ca. 10%; 630 label patches) for assessing the performance of the trained algorithms through the evaluation metrics discussed in the next section.

2.5.2. Accuracy Assessment of UNet, AUNet, and RAUNet

We classified the collected samples into four categories and organized them within a confusion matrix: the sum of ground-truth samples that were labelled as woodland fires on the ground and predicted as woodland fires through each deep learning algorithm (true positive; TP); the sum of samples that were labelled as non-fires and predicted as non-fires (true negative; TN); the sum of samples that were labelled as fires but predicted as non-fires (false negative; FN); and the sum of samples that were labelled as non-fires but predicted as fires (false positive; FP). Then, the area under the receiver operating characteristic (ROC) curve (AUC), overall accuracy (OA), and intersection over union (IoU) metrics were derived from the confusion matrix to quantify the performance of the deep learning algorithms in detecting fires.
AUC represents the degree to which a model is capable of distinguishing between two classes. Higher values of AUC indicate better performance of the model.
The OA indicates the ratio of correct predictions for both fire and non-fire classes (Equation (1)).
OA = (TP + TN)/(TP + TN + FP + FN)
The IoU expresses the similarity ratio between the predicted fires and the corresponding segments of ground-truth samples (Equation (2)).
IoU = TP/(TP + FP + FN)
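A short sketch of these metrics computed from predicted fire probabilities and ground-truth labels follows; the thresholding at 0.5 and the use of scikit-learn for the AUC are assumptions of this illustration.

```python
# Sketch: confusion-matrix counts and the three metrics used in this study.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(pred_prob, truth, threshold=0.5):
    pred = (pred_prob >= threshold).astype(np.uint8)
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    oa = (tp + tn) / (tp + tn + fp + fn)   # Equation (1)
    iou = tp / (tp + fp + fn)              # Equation (2)
    auc = roc_auc_score(truth.ravel(), pred_prob.ravel())
    return oa, iou, auc
```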

3. Results

3.1. Top Derivatives of Sentinel-2

The optimal TreeNet model was obtained after building 1983 trees with a learning rate of 0.086 and a tree size of 6 (R² = 0.89 for the optimal model). All predictors remained in the TreeNet model except for NDPI (Figure 5).
Our TreeNet analyses indicate that the derivatives of Sentinel-2 are markedly more important than the original bands for detecting forest fires. NBR2 was the most important variable (set to 100%), and BAIS2 and MIRBI had ca. 73% and 62% importance relative to NBR2, respectively. Although the importance values of the remaining variables steadily decreased, the majority of them stayed in the model (Figure 5). We used the datasets including the top three, four, and five variables to train the three algorithms.

3.2. Marginal Effects of Top Derivatives

The analysis of the marginal effects of the top five derivatives of Sentinel-2 shows that the probability of detecting burned areas increases when NBR2 values are less than −0.30 (Figure 6a), BAIS2 values are over 0.40 (Figure 6b), MIRBI values are more than 1.60 (Figure 6c), MNDWI values are less than −0.43 (Figure 6d), and B11 values are less than 2300 (Figure 6e).

3.3. Performance of Trained Models Using Three Datasets

3.3.1. Dataset with the Top Three Variables

A comparison of the three models trained with the top three variables (NBR2, BAIS2, and MIRBI) shows that UNet (IoU = 0.881) slightly outperformed the others for fire segmentation by all metrics (IoU, AUC, and OA; Table 2). This is shown for an image patch in Figure 7, where the fire predicted by UNet is more similar to the fire label than the fires predicted by AUNet and RAUNet (Figure 7c–e).

3.3.2. Dataset with the Top Four Variables

When the three models were trained with the top four variables (NBR2, BAIS2, MIRBI, and MNDWI), RAUNet (IoU = 0.912) showed the best performance, followed by UNet (IoU = 0.895) and AUNet (IoU = 0.873) (Table 2). The fire predicted by RAUNet closely matched its labelled fire in the tested image patch (Figure 7h).

3.3.3. Dataset with the Top Five Variables

Adding the fifth-best variable (B11) enhanced the performance of AUNet (IoU = 0.897) and RAUNet (IoU = 0.9238). However, adding a fifth variable slightly lowered the performance of the UNet model (Table 2). RAUNet with five variables (RAUNet5) had the best overall performance and very high overall accuracy (98.53%).

4. Discussion

4.1. The Selected Variables and Datasets

We carried out an extensive investigation using TreeNet to select the top Sentinel-2 derivatives and bands for establishing a reliable dataset before training our models. Earlier studies that tested different combinations of the main bands of Sentinel-2 concluded that feeding the majority of the bands (visible, NIR, and SWIR) into the UNet model improves performance [34]. However, the top three variables in our study were burn indices derived from a combination of the SWIR bands (NBR2 and MIRBI) or the red, red-edge, and NIR bands (BAIS2). Thus, our work is more aligned with other studies that ranked these Sentinel-2 derivatives highest: NBR2 [20,28,29], MIRBI [20,28,29], BAIS2 [22,30], and the NIR, red, red-edge, or SWIR bands [36,88]. The main band B11 nonetheless ranked fifth. Thus, our research adds that a combination of Sentinel-2 derivatives and main bands is more effective for detecting fires in woodland biomes.
Our results show that increasing the number of top variables from three to five in training datasets substantially improves the performance of AUNet and RAUNet for segmentation of fires (Table 2), but including more variables will likely not increase performance indefinitely [34]. Many studies used all of the main bands of Sentinel-2 [40,89], selected main bands of Sentinel-2 [34,90,91], or single derivatives (e.g., burned area indices) [44,92] for training their deep-learning-based approaches to detect forest fires. However, increasing the number of channels (i.e., variables in a dataset) increases the parameters and training time of the model. Thus, we sought to find appropriate datasets that included a combination of the main bands of Sentinel-2 and their derivatives—constrained to only those with the highest importance—to both maximize performance and reduce the training time of the deep-learning-based models.

4.2. The Performance of the Trained RAUNet

Our research confirms that integrating attention gate layers [93] and residual blocks [48] into the UNet structure improves its accuracy for detecting fires of varying sizes and shapes. Integrating the attention gate layers into the structure enhances the weight of important features with varying sizes and shapes [45]. Adding the residual blocks to UNet allows the model to extract further features in every layer [48]. These characteristics are crucial when segmenting images (such as remote sensing data) that have high spatial information.
Among all of our models, the trained RAUNet with four or five variables outperformed AUNet and UNet in detecting fire events. Other studies also found higher efficiency when adding attention gate units (i.e., AUNet) for mapping forest fires [43], fire severity [46], and deforestation [47], or when adding residual connections and attention gate units to the UNet architecture (i.e., RAUNet) to enhance image segmentation [49,50,51,52,53], forest classification [94], and building detection [52]. The current research adds that RAUNet significantly improved the accuracy of mapping burned areas compared to UNet and AUNet.
However, adding the attention gate units and residual connections that RAUNet requires to learn more complex features increases computation time. In our research, training RAUNet took 82% and 29% longer than training the UNet and AUNet models, respectively. Although RAUNet took longer to train than AUNet, adding residual connections [95] improved the performance of the trained RAUNet for detecting burned areas in our research.

4.3. The Efficiency of Trained Models in Demonstrating the Properties of the Fires

We used Sentinel-2 images from different dates (June–October) to train our models and boost their efficiency in detecting fires regardless of timing. Our trained models could predict, with high certainty, both active fires (e.g., Figure 8k) and recently burned areas for the selected image patches in three consecutive months. Moreover, our models detected not only medium and large fires but also small fires. Figure 8a shows a small fire (red box) that was correctly detected by our trained RAUNet5 model (Figure 8e). In contrast, the efficiency of a UNet model for detecting small fires was criticized by previous research [96], even after more bands were added to the dataset (F1-score = 82.4%). Therefore, our trained RAUNet may tackle the problem of small burned patches going undetected by NASA burned area products in the miombo ecoregion [20,97] (e.g., Figure 8c,h,m).
In our example image patch, the sizes of the burned areas were 102 ha, 147 ha, and 355 ha based on the fire labels in August (Figure 8b), September (Figure 8g), and October (Figure 8l), respectively. We compared the accuracy of the burned areas predicted by our RAUNet5 model with the two most commonly used burned area products. Using MCD64A1 Version 6.1 with 500 m resolution [2], no fires were predicted in August or September (Figure 8c,h), and only one-third of the fires were detected in October (105 ha; Figure 8m). The Sentinel-3/OLCI and SLSTR Version 1 product with 300 m resolution [3] was slightly better than the MODIS product at detecting burned areas, but its estimates were still low: it predicted fire events of ca. 10, 96, and 285 ha in August, September, and October, respectively (Figure 8d,i,n). The RAUNet5 model drastically improved the quantification of fire events in these three months, predicting burned areas of 108, 150, and 351 ha in August, September, and October, respectively (Figure 8e,j,o). Our approach outperformed these two products not only in the accuracy of burned area sizes, but also in detecting fire edges, small fires, and active fires.
Our trained models could detect the shape and extent of the fires even better than their fire labels. For example, the expansion of fire (blue box in Figure 8a) was predicted by RAUNet5 more accurately than the fire label. We highlighted a burned area in September (red box in Figure 8f) that was missing in its label (Figure 8g) but was detected by RAUNet5 (Figure 8j). This shows that a well-trained model can perform better than visual interpretation in the detection of fires.

4.4. Application and Outlook

The RAUNet5 model developed in this study is an end-to-end model that uses the top five variables of Sentinel-2 (i.e., NBR2, BAIS2, MIRBI, MNDWI, and B11) to predict fires of all sizes at high resolution (10 × 10 m). Sentinel-2 has a five-day revisit time, and our model can be run as soon as a new image is available; it can therefore detect active fires in near real time. This could be applied to detect fires at their onset and perhaps manage them before they spread. As many of the fires in the miombo ecoregion are small [5,20,97], our model could further be used on a time series to gain a more holistic understanding of fire (e.g., frequency, duration, timing, and spread patterns). Such maps could be applied to improve fire management, such as planning firebreaks, extinguishing existing fires, and restoring burned areas. The ability to quantify burned areas of all sizes will also improve estimates of forest carbon dynamics in the miombo ecoregion.
We trained RAUNet using datasets from the miombo ecoregion. We suggest that RAUNet5 could become a more versatile model for mapping burned areas if it were also trained with large fire-label and Sentinel-derived datasets from other regions. Although datasets from Sentinel-2 showed great improvement over existing models (Figure 8), including Sentinel-1 derivatives could further improve efficiency, particularly under high cloud cover [33] and in real-time fire detection [98]. Combining Sentinel-1 and -2 could also be used to map fire severity, which would have important implications for carbon emissions and forest dynamics (e.g., mortality, growth, regeneration, and species composition).
The structure of UNet has been improved by adding residual blocks, attention gate units, or a combination of the two (i.e., the RAUNet in our research). We encourage future studies to evaluate the efficiency of new integrations, such as RAUNet with a guided decoder [50] and the hierarchical attention residual nested UNet [99], for detecting and mapping forest and woodland fires. It would also be beneficial for future studies to test the efficiency of our developed model in separating active fires from burned areas using our tested datasets.

5. Conclusions

This research is the first to use an RAUNet architecture to detect active fires and burn events anywhere in the world, and it brings high-resolution fire detection to southern Africa. We developed an end-to-end deep-learning-based approach that uses freely available datasets from Sentinel-2 for detecting burned areas with a spatial resolution of 10 m across the miombo ecoregion in Mozambique. We tested a set of derivatives, along with the main bands of Sentinel-2, using TreeNet to determine the top five variables. Including up to five variables in the dataset improved the performance of all three models. Adding attention gate units (AUNet)—and especially residual blocks and attention gate units (RAUNet)—to the UNet architecture improved the performance of the models when using the top five variables. All three models were able to detect both active fires and burned areas, and they showed a great improvement over the existing models that are typically used in Africa. Their ability to detect even small fires and burned patches in near real time means that fires can be detected, and perhaps even managed, before serious spreading. Further application of these models will improve our understanding of fires, as well as their influence on forest ecology and carbon dynamics, across the miombo ecoregion and the African continent.

Author Contributions

Conceptualization, Z.S., R.C.G., and O.A.; methodology, Z.S. and O.A.; data preparation, Z.S.; software and programming, Z.S. and O.A.; field investigation and sampling, Z.S.; visualization, Z.S., R.C.G., and O.A.; writing—original draft preparation, Z.S.; writing—review and editing, R.C.G. and O.A.; supervision, R.C.G.; project administration, R.C.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Formas, grant number 2020-01722, and the Swedish Research Council (Vetenskapsrådet), grant number 2019-04669.

Data Availability Statement

Data sharing is not applicable.

Acknowledgments

The authors would like to thank Osvaldo Meneses for his contribution to collecting fire events in the LevasFlor concession in central Mozambique. We are also grateful to the anonymous reviewers and editors for their constructive feedback and valuable contributions, which have played an instrumental role in elevating the quality of this article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Archibald, S.; Nickless, A.; Govender, N.; Scholes, R.J.; Lehsten, V. Climate and the inter-annual variability of fire in southern Africa: A meta-analysis using long-term field data and satellite-derived burnt area data. Glob. Ecol. Biogeogr. 2010, 19, 794–809.
2. Giglio, L.; Justice, C.; Boschetti, L.; Roy, D. MODIS/Terra+Aqua Burned Area Monthly L3 Global 500m SIN Grid V061; NASA EOSDIS Land Processes DAAC: Washington, DC, USA, 2021.
3. Tansey, K.; Grégoire, J.-M.; Defourny, P.; Leigh, R.; Pekel, J.-F.; van Bogaert, E.; Bartholomé, E. A new, global, multi-annual (2000–2007) burnt area product at 1 km resolution. Geophys. Res. Lett. 2008, 35, 1–6.
4. Saito, M.; Luyssaert, S.; Poulter, B.; Williams, M.; Ciais, P.; Bellassen, V.; Ryan, C.M.; Yue, C.; Cadule, P.; Peylin, P. Fire regimes and variability in aboveground woody biomass in miombo woodland. J. Geophys. Res. Biogeosci. 2014, 119, 1014–1029.
5. Tarimo, B.; Dick, Ø.B.; Gobakken, T.; Totland, Ø. Spatial distribution of temporal dynamics in anthropogenic fires in miombo savanna woodlands of Tanzania. Carbon Balance Manag. 2015, 10, 18.
6. Hantson, S.; Pueyo, S.; Chuvieco, E. Global fire size distribution is driven by human impact and climate. Glob. Ecol. Biogeogr. 2015, 24, 77–86.
7. Timberlake, J.; Chidumayo, E. Miombo Ecoregion: Vision Report: Report for World Wide Fund for Nature, Harare, Zimbabwe. Occasional Publications in Biodiversity No. 20. Biodiversity Foundation for Africa, Bulawayo. Available online: https://www.readkong.com/page/miombo-ecoregion-vision-report-jonathan-timberlake-8228894 (accessed on 12 September 2021).
8. Ryan, C.M.; Pritchard, R.; McNicol, I.; Owen, M.; Fisher, J.A.; Lehmann, C. Ecosystem services from southern African woodlands and their future under global change. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2016, 371, 1–16.
9. Fisher, M. Household welfare and forest dependence in Southern Malawi. Environ. Dev. Econ. 2004, 9, 135–154.
10. Mittermeier, R.A.; Mittermeier, C.G.; Brooks, T.M.; Pilgrim, J.D.; Konstant, W.R.; Da Fonseca, G.A.B.; Kormos, C. Wilderness and biodiversity conservation. Proc. Natl. Acad. Sci. USA 2003, 100, 10309–10313.
11. Campbell, B.; Frost, P.; Byron, N. Miombo woodlands and their use: Overview and key issues. In The Miombo in Transition: Woodlands and Welfare in Africa; Campbell, B.M., Ed.; Center for International Forestry Research: Bogor, Indonesia, 1996; ISBN 9798764072.
12. Ribeiro, N.S.; Katerere, Y.; Chirwa, P.W.; Grundy, I.M. Miombo Woodlands in a Changing Environment: Securing the Resilience and Sustainability of People and Woodlands; Springer International Publishing: Cham, Switzerland, 2020; ISBN 978-3-030-50103-7.
13. Whitlock, C.; Higuera, P.E.; McWethy, D.B.; Briles, C.E. Paleoecological perspectives on fire ecology: Revisiting the fire-regime concept. Open Ecol. J. 2010, 3, 6–23.
14. Ryan, C.M.; Williams, M. How does fire intensity and frequency affect miombo woodland tree populations and biomass? Ecol. Appl. 2011, 21, 48–60.
15. Sá, A.C.L.; Pereira, J.M.C.; Vasconcelos, M.J.P.; Silva, J.M.N.; Ribeiro, N.; Awasse, A. Assessing the feasibility of sub-pixel burned area mapping in miombo woodlands of northern Mozambique using MODIS imagery. Int. J. Remote Sens. 2003, 24, 1783–1796.
16. Ribeiro, N.S.; Cangela, A.; Chauque, A.; Bandeira, R.R.; Ribeiro-Barros, A.I. Characterisation of spatial and temporal distribution of the fire regime in Niassa National Reserve, northern Mozambique. Int. J. Wildland Fire 2017, 26, 1021.
17. van Wilgen, B.W.; de Klerk, H.M.; Stellmes, M.; Archibald, S. An analysis of the recent fire regimes in the Angolan catchment of the Okavango Delta, Central Africa. Fire Ecol. 2022, 18, 13.
18. Mganga, N.D.; Lyaruu, H.V.; Banyikwa, F. Above-ground carbon stock in a forest subjected to decadal frequent fires in western Tanzania. J. Biodivers. Environ. Sci. 2017, 10, 25–34.
19. Engelbrecht, J.; Theron, A.; Vhengani, L.; Kemp, J. A Simple Normalized Difference Approach to Burnt Area Mapping Using Multi-Polarisation C-Band SAR. Remote Sens. 2017, 9, 764.
20. Roteta, E.; Bastarrika, A.; Padilla, M.; Storm, T.; Chuvieco, E. Development of a Sentinel-2 burned area algorithm: Generation of a small fire database for sub-Saharan Africa. Remote Sens. Environ. 2019, 222, 1–17.
21. de Bem, P.P.; de Carvalho Júnior, O.A.; de Carvalho, O.L.F.; Gomes, R.A.T.; Fontes Guimarães, R. Performance Analysis of Deep Convolutional Autoencoders with Different Patch Sizes for Change Detection from Burnt Areas. Remote Sens. 2020, 12, 2576.
22. Deshpande, M.V.; Pillai, D.; Jain, M. Agricultural burned area detection using an integrated approach utilizing multi spectral instrument based fire and vegetation indices from Sentinel-2 satellite. MethodsX 2022, 9, 101741.
23. Mpakairi, K.S.; Kadzunge, S.L.; Ndaimani, H. Testing the utility of the blue spectral region in burned area mapping: Insights from savanna wildfires. Remote Sens. Appl. Soc. Environ. 2020, 20, 100365.
24. Vanderhoof, M.K.; Hawbaker, T.J.; Teske, C.; Ku, A.; Noble, J.; Picotte, J. Mapping Wetland Burned Area from Sentinel-2 across the Southeastern United States and Its Contributions Relative to Landsat-8 (2016–2019). Fire 2021, 4, 52.
25. Addison, P.; Oommen, T. Utilizing satellite radar remote sensing for burn severity estimation. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 292–299.
26. Cardil, A.; Mola-Yudego, B.; Blázquez-Casado, Á.; González-Olabarria, J.R. Fire and burn severity assessment: Calibration of Relative Differenced Normalized Burn Ratio (RdNBR) with field data. J. Environ. Manag. 2019, 235, 342–349.
27. Filipponi, F. BAIS2: Burned Area Index for Sentinel-2. In The 2nd International Electronic Conference on Remote Sensing; MDPI: Basel, Switzerland, 2018; p. 364.
28. Long, T.; Zhang, Z.; He, G.; Jiao, W.; Tang, C.; Wu, B.; Zhang, X.; Wang, G.; Yin, R. 30m Resolution Global Annual Burned Area Mapping Based on Landsat Images and Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2018, 1–35.
29. Tanase, M.A.; Belenguer-Plomer, M.A.; Roteta, E.; Bastarrika, A.; Wheeler, J.; Fernández-Carrillo, Á.; Tansey, K.; Wiedemann, W.; Navratil, P.; Lohberger, S.; et al. Burned Area Detection and Mapping: Intercomparison of Sentinel-1 and Sentinel-2 Based Algorithms over Tropical Africa. Remote Sens. 2020, 12, 334.
30. Alcaras, E.; Costantino, D.; Guastaferro, F.; Parente, C.; Pepe, M. Normalized Burn Ratio Plus (NBR+): A New Index for Sentinel-2 Imagery. Remote Sens. 2022, 14, 1727.
31. Hoeser, T.; Kuenzer, C. Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review—Part I: Evolution and Recent Trends. Remote Sens. 2020, 12, 1667.
32. Hoeser, T.; Bachofer, F.; Kuenzer, C. Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review—Part II: Applications. Remote Sens. 2020, 12, 3053.
33. Rashkovetsky, D.; Mauracher, F.; Langer, M.; Schmitt, M. Wildfire Detection From Multisensor Satellite Imagery Using Deep Semantic Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7001–7016.
34. Knopp, L.; Wieland, M.; Rättich, M.; Martinis, S. A Deep Learning Approach for Burned Area Segmentation with Sentinel-2 Data. Remote Sens. 2020, 12, 2422.
35. de Almeida Pereira, G.H.; Fusioka, A.M.; Nassu, B.T.; Minetto, R. Active fire detection in Landsat-8 imagery: A large-scale dataset and a deep-learning study. ISPRS J. Photogramm. Remote Sens. 2021, 178, 171–186.
36. Pinto, M.M.; Libonati, R.; Trigo, R.M.; Trigo, I.F.; DaCamara, C.C. A deep learning approach for mapping and dating burned areas using temporal sequences of satellite images. ISPRS J. Photogramm. Remote Sens. 2020, 160, 260–274.
37. Seydi, S.T.; Saeidi, V.; Kalantar, B.; Ueda, N.; Halin, A.A. Fire-Net: A Deep Learning Framework for Active Forest Fire Detection. J. Sens. 2022, 2022, 8044390.
38. Wang, Z.; Yang, P.; Liang, H.; Zheng, C.; Yin, J.; Tian, Y.; Cui, W. Semantic Segmentation and Analysis on Sensitive Parameters of Forest Fire Smoke Using Smoke-Unet and Landsat-8 Imagery. Remote Sens. 2022, 14, 45.
39. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite Smoke Scene Detection Using Convolutional Neural Network with Spatial and Channel-Wise Attention. Remote Sens. 2019, 11, 1702.
40. Seydi, S.T.; Hasanlou, M.; Chanussot, J. DSMNN-Net: A Deep Siamese Morphological Neural Network Model for Burned Area Mapping Using Multispectral Sentinel-2 and Hyperspectral PRISMA Images. Remote Sens. 2021, 13, 5138.
41. Abid, N.; Malik, M.I.; Shahzad, M.; Shafait, F.; Ali, H.; Ghaffar, M.M.; Weis, C.; Wehn, N.; Liwicki, M. Burnt Forest Estimation from Sentinel-2 Imagery of Australia using Unsupervised Deep Learning. In 2021 Digital Image Computing: Techniques and Applications (DICTA); IEEE: Piscataway, NJ, USA, 2021; pp. 1–8.
42. Hong, Z.; Tang, Z.; Pan, H.; Zhang, Y.; Zheng, Z.; Zhou, R.; Ma, Z.; Zhang, Y.; Han, Y.; Wang, J.; et al. Active Fire Detection Using a Novel Convolutional Neural Network Based on Himawari-8 Satellite Images. Front. Environ. Sci. 2022, 10, 102.
43. Zhang, Q.; Ge, L.; Zhang, R.; Metternicht, G.I.; Du, Z.; Kuang, J.; Xu, M. Deep-learning-based burned area mapping using the synergy of Sentinel-1&2 data. Remote Sens. Environ. 2021, 264, 112575.
44. Belenguer-Plomer, M.A.; Tanase, M.A.; Chuvieco, E.; Bovolo, F. CNN-based burned area mapping using radar and optical data. Remote Sens. Environ. 2021, 260, 112468.
45. Oktay, O.; Schlemper, J.; Le Folgoc, L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. In Proceedings of the 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands, 4–6 July 2018.
46. Monaco, S.; Greco, S.; Farasin, A.; Colomba, L.; Apiletti, D.; Garza, P.; Cerquitelli, T.; Baralis, E. Attention to Fires: Multi-Channel Deep Learning Models for Wildfire Severity Prediction. Appl. Sci. 2021, 11, 11060.
47. Tovar, P.; Adarme, M.O.; Feitosa, R.Q. Deforestation Detection in the Amazon Rainforest with Spatial and Channel Attention Mechanisms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B3-2021, 851–858.
48. Yang, C.; Guo, X.; Wang, T.; Yang, Y.; Ji, N.; Li, D.; Lv, H.; Ma, T. Automatic Brain Tumor Segmentation Method Based on Modified Convolutional Neural Network. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2019, 2019, 998–1001.
49. Zhang, J.; Jiang, Z.; Dong, J.; Hou, Y.; Liu, B. Attention Gate ResU-Net for Automatic MRI Brain Tumor Segmentation. IEEE Access 2020, 8, 58533–58545.
50. Maji, D.; Sigedar, P.; Singh, M. Attention Res-UNet with Guided Decoder for semantic segmentation of brain tumors. Biomed. Signal Process. Control. 2022, 71, 103077.
51. Cha, J.; Jeong, J. Improved U-Net with Residual Attention Block for Mixed-Defect Wafer Maps. Appl. Sci. 2022, 12, 2209.
52. Li, C.; Liu, Y.; Yin, H.; Li, Y.; Guo, Q.; Zhang, L.; Du, P. Attention Residual U-Net for Building Segmentation in Aerial Images. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2021), Brussels, Belgium, 11–16 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 4047–4050, ISBN 978-1-6654-0369-6.
53. Men, G.; He, G.; Wang, G. Concatenated Residual Attention UNet for Semantic Segmentation of Urban Green Space. Forests 2021, 12, 1441.
54. Maquia, I.; Catarino, S.; Pena, A.R.; Brito, D.R.A.; Ribeiro, N.S.; Romeiras, M.M.; Ribeiro-Barros, A.I. Diversification of African Tree Legumes in Miombo-Mopane Woodlands. Plants 2019, 8, 182.
55. Beck, H.E.; Zimmermann, N.E.; McVicar, T.R.; Vergopolan, N.; Berg, A.; Wood, E.F. Present and future Köppen-Geiger climate classification maps at 1-km resolution. Sci. Data 2018, 5, 180214.
56. Republic of Mozambique, Ministry for the Coordination of Environmental Affairs. The 4th National Report on Implementation of the Convention on Biological Diversity in Mozambique; Republic of Mozambique, Ministry for the Coordination of Environmental Affairs: Maputo, Mozambique, 2009; Available online: https://www.cbd.int/doc/world/mz/mz-nr-04-en.pdf (accessed on 14 September 2021).
57. FAO. Global Forest Resources Assessment 2015: How Are the World’s Forests Changing? 2nd ed.; Food & Agriculture Organization of the United Nations: Rome, Italy, 2017; ISBN 978-92-5-109283-5.
58. Chidumayo, E.; Gumbo, D. The Dry Forests and Woodlands of Africa: Managing for Products and Services, 1st ed.; Earthscan: London, UK; Washington, DC, USA, 2010; ISBN 978-1-84971-131-9.
59. Manyanda, B.J.; Nzunda, E.F.; Mugasha, W.A.; Malimbwi, R.E. Effects of drivers and their variations on the number of stems and aboveground carbon removals in miombo woodlands of mainland Tanzania. Carbon Balance Manag. 2021, 16, 16.
60. Key, C.; Benson, N. Measuring and remote sensing of burn severity. In Proceedings Joint Fire Science Conference and Workshop; University of Idaho and International Association of Wildland Fire Moscow: Moscow, Russia, 1999; Volume 2, p. 284.
61. Lutes, D.C.; Keane, R.E.; Caratti, J.F.; Key, C.H.; Benson, N.C.; Sutherland, S.; Gangi, L.J. FIREMON: Fire Effects Monitoring and Inventory System; General Technical Report RMRS-GTR-164-CD; USDA: Washington, DC, USA, 2006; Volume 164.
62. Chuvieco, E.; Martín, M.P.; Palacios, A. Assessment of different spectral indices in the red-near-infrared spectral domain for burned land discrimination. Int. J. Remote Sens. 2002, 23, 5103–5110.
63. Trigg, S.; Flasse, S. An evaluation of different bi-spectral spaces for discriminating burned shrub-savannah. Int. J. Remote Sens. 2001, 22, 2641–2647.
64. Welikhe, P.; Quansah, J.E.; Fall, S.; McElhenney, W. Estimation of Soil Moisture Percentage Using LANDSAT-based Moisture Stress Index. J. Remote Sens. GIS 2017, 6, 1–5.
65. Smith, A.M.; Wooster, M.J.; Drake, N.A.; Dipotso, F.M.; Falkowski, M.J.; Hudak, A.T. Testing the potential of multi-spectral remote sensing for retrospectively estimating fire severity in African Savannahs. Remote Sens. Environ. 2005, 97, 92–115.
66. Karnieli, A.; Kaufman, Y.J.; Remer, L.; Wald, A. AFRI—aerosol free vegetation index. Remote Sens. Environ. 2001, 77, 10–21.
67. Kaufman, Y.J.; Tanre, D. Atmospherically resistant vegetation index (ARVI) for EOS-MODIS. IEEE Trans. Geosci. Remote Sens. 1992, 30, 261–270.
68. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150.
69. Jiang, Z.; Huete, A.R.; Didan, K.; Miura, T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens. Environ. 2008, 112, 3833–3845.
70. Clevers, J. Application of the WDVI in estimating LAI at the generative stage of barley. ISPRS J. Photogramm. Remote Sens. 1991, 46, 37–47.
71. Clevers, J. The derivation of a simplified reflectance model for the estimation of leaf area index. Remote Sens. Environ. 1988, 25, 53–69.
72. Pinty, B.; Verstraete, M.M. GEMI: A non-linear index to monitor global vegetation from satellites. Vegetatio 1992, 101, 15–20.
73. Delegido, J.; Verrelst, J.; Alonso, L.; Moreno, J. Evaluation of Sentinel-2 red-edge bands for empirical estimation of green LAI and chlorophyll content. Sensors 2011, 11, 7063–7081.
74. Blackburn, G.A. Quantifying chlorophylls and caroteniods at leaf and canopy scales. Remote Sens. Environ. 1998, 66, 273–285.
75. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126.
76. Qi, J.; Kerr, Y.; Chehbouni, A. External factor consideration in vegetation index development. In Proceedings of the 6th International Symposium on Physical Measurements and Signatures in Remote Sensing, Val D’Isere, France, 17–22 January 1994; pp. 723–730. Available online: https://ntrs.nasa.gov/citations/19950010656 (accessed on 11 February 2022).
77. Gitelson, A.A. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, 1–4.
78. Gao, B. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
79. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033.
80. Lacaux, J.P.; Tourre, Y.M.; Vignolles, C.; Ndione, J.A.; Lafaye, M. Classification of ponds from high-spatial resolution remote sensing: Application to Rift Valley Fever epidemics in Senegal. Remote Sens. Environ. 2007, 106, 66–74.
81. Friedman, J.H. Stochastic gradient boosting. Comput. Stat. Data Anal. 2002, 38, 367–378.
82. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Statist. 2001, 29, 1189–1232.
83. Kneusel, R.T. Practical Deep Learning: A Python-Based Introduction, 1st ed.; No Starch Press Inc.: San Francisco, CA, USA, 2021; ISBN 978-1-7185-0075-4.
84. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. ISBN 978-3-319-24573-7.
85. Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; Available online: https://dblp.org/db/conf/iclr/iclr2015.html (accessed on 11 May 2022).
86. Chollet, F. Deep Learning with Python; Manning Publications: New York, NY, USA, 2017; ISBN 1617294438.
87. TensorFlow. Introduction to the Keras Tuner. Available online: https://www.tensorflow.org/tutorials/keras/keras_tuner (accessed on 7 October 2022).
88. Ngadze, F.; Mpakairi, K.S.; Kavhu, B.; Ndaimani, H.; Maremba, M.S. Exploring the utility of Sentinel-2 MSI and Landsat 8 OLI in burned area mapping for a heterogenous savannah landscape. PLoS ONE 2020, 15, e0232962.
89. Seydi, S.T.; Hasanlou, M.; Chanussot, J. Burnt-Net: Wildfire burned area mapping with single post-fire Sentinel-2 data and deep learning morphological neural network. Ecol. Indic. 2022, 140, 108999.
90. Pinto, M.M.; Trigo, R.M.; Trigo, I.F.; DaCamara, C.C. A Practical Method for High-Resolution Burned Area Monitoring Using Sentinel-2 and VIIRS. Remote Sens. 2021, 13, 1608.
91. Zhang, Q.; Ge, L.; Zhang, R.; Metternicht, G.I.; Liu, C.; Du, Z. Towards a Deep-Learning-Based Framework of Sentinel-2 Imagery for Automated Active Fire Detection. Remote Sens. 2021, 13, 4790.
92. Zhang, Y.; Ling, F.; Wang, X.; Foody, G.M.; Boyd, D.S.; Li, X.; Du, Y.; Atkinson, P.M. Tracking small-scale tropical forest disturbances: Fusing the Landsat and Sentinel-2 data record. Remote Sens. Environ. 2021, 261, 112470.
93. Masolele, R.N.; de Sy, V.; Marcos, D.; Verbesselt, J.; Gieseke, F.; Mulatu, K.A.; Moges, Y.; Sebrala, H.; Martius, C.; Herold, M. Using high-resolution imagery and deep learning to classify land-use following deforestation: A case study in Ethiopia. GIScience Remote Sens. 2022, 59, 1446–1472.
94. Yu, T.; Wu, W.; Gong, C.; Li, X. Residual Multi-Attention Classification Network for A Forest Dominated Tropical Landscape Using High-Resolution Remote Sensing Imagery. IJGI 2021, 10, 22.
95. John, D.; Zhang, C. An attention-based U-Net for detecting deforestation within satellite sensor imagery. Int. J. Appl. Earth Obs. Geoinf. 2022, 107, 102685.
96. Farasin, A.; Colomba, L.; Palomba, G.; Nini, G. Supervised Burned Areas Delineation by Means of Sentinel-2 Imagery and Convolutional Neural Networks. In CoRe Paper—Using Artificial Intelligence to Exploit Satellite Data in Risk and Crisis Management, Proceedings of the ISCRAM 2020 Conference Proceedings—17th International Conference on Information Systems for Crisis Response and Management, Blacksburg, VA, USA, 24–27 May 2020; Hughes, A.L., McNeill, F., Zobel, C.W., Eds.; Virginia Tech: Blacksburg, VA, USA, 2020; pp. 1060–1071. ISBN 2411-3482.
97. Ramo, R.; Roteta, E.; Bistinas, I.; van Wees, D.; Bastarrika, A.; Chuvieco, E.; van der Werf, G.R. African burned area and fire carbon emissions are strongly impacted by small fires undetected by coarse resolution satellite data. Proc. Natl. Acad. Sci. USA 2021, 118, e2011160118.
98. Zhang, P.; Ban, Y.; Nascetti, A. Learning U-Net without forgetting for near real-time wildfire monitoring by the fusion of SAR and optical time series. Remote Sens. Environ. 2021, 261, 112467.
99. Li, H.; Wang, L.; Cheng, S. HARNU-Net: Hierarchical Attention Residual Nested U-Net for Change Detection in Remote Sensing Images. Sensors 2022, 22, 4626.
Figure 1. Selected areas in Mozambique: (a) the location of the miombo ecoregion (green) and sampling sites for woodland fires (boxes); (b) an example of a Sentinel-2 image with fire objects from 13 September 2017; (c) examples of 256 × 256 image patches and (d) their corresponding fire labels for training the deep learning algorithms.
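To make the patch-generation step in Figure 1c,d concrete, the minimal sketch below tiles a multi-band scene and its fire mask into non-overlapping 256 × 256 training pairs. It assumes the scene and mask are already loaded as NumPy arrays (e.g., read with rasterio); the function name and the simple non-overlapping tiling are illustrative choices, not necessarily the exact pipeline used in this study.

```python
import numpy as np

def extract_patches(image, mask, patch_size=256):
    """Tile a (H, W, C) image and its (H, W) fire mask into
    patch_size x patch_size training pairs, discarding incomplete
    edge tiles."""
    h, w = image.shape[:2]
    patches, labels = [], []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size])
            labels.append(mask[i:i + patch_size, j:j + patch_size])
    return np.stack(patches), np.stack(labels)
```

For a full 10,980 × 10,980 pixel Sentinel-2 tile at 10 m resolution, this yields 42 × 42 = 1764 complete patches.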
Figure 2. Examples of the spatiotemporal patterns of fires derived from Sentinel-2 for a part of the study area in 2021. Fire labels: not affected (black) and affected (light grey).
Figure 3. (b) Architecture of the proposed residual attention UNet (RAUNet), consisting of a contraction path (encoder) and an expansion path (decoder), using (a) the top variables of Sentinel-2 for (c) fire segmentation.
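As an illustration of how residual blocks and attention gates combine in such an encoder–decoder, the sketch below gives one possible Keras formulation. It is a simplified reading of the architecture in Figure 3, not the authors' exact implementation; the layer widths, the absence of normalization layers, and the gating arrangement are all assumptions.

```python
from tensorflow.keras import layers

def residual_block(x, filters):
    """Two 3 x 3 convolutions with a projected identity shortcut."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))

def attention_gate(skip, gating, inter_filters):
    """Additive attention: weight the encoder skip features by a map
    computed against the coarser decoder (gating) signal, assumed here
    to sit at half the spatial resolution of the skip connection."""
    theta = layers.Conv2D(inter_filters, 1)(skip)
    phi = layers.UpSampling2D()(layers.Conv2D(inter_filters, 1)(gating))
    att = layers.Activation("relu")(layers.Add()([theta, phi]))
    att = layers.Conv2D(1, 1, activation="sigmoid")(att)
    return layers.Multiply()([skip, att])  # broadcast over channels

def decoder_step(x, skip, filters):
    """One expansion-path step: gate the skip connection, upsample the
    decoder features, concatenate, and refine with a residual block."""
    gated = attention_gate(skip, x, filters // 2)
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, gated])
    return residual_block(x, filters)
```

A full model would stack residual blocks with max pooling on the contraction path, apply decoder_step at each level on the expansion path, and finish with a 1 × 1 sigmoid convolution producing the binary fire mask. For example, with encoder features of shape 256 × 256 × 64 and decoder features of shape 128 × 128 × 128, decoder_step(x, skip, 64) returns a 256 × 256 × 64 tensor.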
Figure 4. The feature maps produced in (b) a block of the contraction path and (c) a block of the expansion path when the trained RAUNet is applied to (a) an input image patch (256 × 256 × 5) containing the top derivatives of Sentinel-2. (d,e) The outputs of the model.
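Feature maps such as those in Figure 4b,c can be read out of a trained Keras model by wrapping one of its internal layers in a second model. In the sketch below, model, patch, and layer_name are placeholders: the trained RAUNet, a 256 × 256 × 5 input array, and a block name taken from model.summary(), respectively.

```python
import numpy as np
from tensorflow.keras import Model

def layer_activations(model, patch, layer_name):
    """Return the feature maps of one named block for a single patch."""
    extractor = Model(inputs=model.input,
                      outputs=model.get_layer(layer_name).output)
    # Add a batch axis, predict, then drop it again: (H, W, n_filters).
    return extractor.predict(patch[np.newaxis, ...])[0]
```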
Figure 5. Importance values of the variables derived from Sentinel-2 for detecting fires. The most influential variable is the Normalized Burn Ratio2 (NBR2); the other variables are ranked by their importance relative to NBR2.
Figure 6. Univariate partial dependence plots for the top five variables derived from Sentinel-2 for enhancing burned areas: (a) the Normalized Burn Ratio2 (NBR2), (b) the Burned Area Index for Sentinel-2 (BAIS2), (c) the Mid-Infrared Burn Index (MIRBI), (d) the Modified Normalized Difference Water Index (MNDWI), and (e) the shortwave infrared 1 band (SWIR1; B11).
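The rankings in Figure 5 and the curves in Figure 6 come from a stochastic-gradient-boosting analysis [81,82], which can be approximated with the open-source implementation in scikit-learn. The sketch below uses synthetic placeholder data; in practice, X and y would hold per-pixel variable values and burned/unburned labels.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

feature_names = ["NBR2", "BAIS2", "MIRBI", "MNDWI", "B11"]

# Synthetic stand-in for per-pixel variables and fire labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, len(feature_names)))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank variables relative to the most important one, as in Figure 5.
rel = 100 * model.feature_importances_ / model.feature_importances_.max()
for name, score in sorted(zip(feature_names, rel), key=lambda p: -p[1]):
    print(f"{name}: {score:.1f}")

# Univariate partial dependence for each variable, as in Figure 6.
PartialDependenceDisplay.from_estimator(
    model, X, features=range(len(feature_names)),
    feature_names=feature_names)
```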
Figure 7. The burned areas predicted using three datasets and three trained deep-learning-based models (UNet, attention UNet (AUNet), and residual attention UNet (RAUNet)) within (a) an image patch with (b) fire labels. Fires predicted using (c–e) the top three Sentinel-2-derived variables, (f–h) the top four variables, and (i–k) the top five variables with each of the three models.
Figure 8. The performance of RAUNet5 in detecting burned areas at different times within a specific image patch of the miombo ecoregion: fires predicted by the MCD64A1 Version 6.1 product (500 m), the Sentinel-3/OLCI Version 1 product (300 m), and our approach (10 m) in (a–e) August 2021, (f–j) September 2021, and (k–o) October 2021. The (a) very small fire in August and (k) active fire in October were detected only by (e,o) RAUNet5. Our model also performed well in depicting (e) the expansion of fires and (j) fires missing from the fire label.
Table 1. The derivatives of Sentinel-2, including burn, vegetation, soil, and water indices, used for training deep-learning-based models to detect burned areas.

Burn Indices
Normalized Burn Ratio (NBR) [60]: highlights burned areas in large fire zones (>500 acres) while mitigating illumination and atmospheric effects. NBR = (B8A − B12)/(B8A + B12).
Normalized Burn Ratio2 (NBR2) [61]: modifies the NBR to highlight water sensitivity in vegetation and may be useful in post-fire recovery studies. NBR2 = (B11 − B12)/(B11 + B12).
Burned Area Index (BAI) [62]: highlights burned land by emphasizing the charcoal signal in post-fire images; brighter pixels indicate burned areas. BAI = 1/((0.1 − B4)² + (0.06 − B8A)²).
Burned Area Index for Sentinel-2 (BAIS2) [27]: applied for the detection of both burned areas and active fires. BAIS2 = (1 − √((B6 × B7 × B8A)/B4)) × ((B12 − B8A)/√(B12 + B8A) + 1).
Mid-Infrared Burn Index (MIRBI) [63]: calculated from the two SWIR reflectance bands (B11 and B12); sensitive to fire-induced spectral change regardless of noise. MIRBI = 10 × B12 − 9.8 × B11 + 2.
Moisture Stress Index (MSI) [64]: used for canopy stress analysis, productivity prediction, and biophysical modeling. MSI = B11/B8.
Char Soil Index2 (CSI2) [65]: in savanna and other fires, two ash endmembers occur: white mineral ash, where fuel has undergone complete combustion, and darker black ash or char, where an unburned fuel component remains. CSI2 = B8A/B12.

Vegetation Indices
Aerosol-Free Vegetation Index (AFRI) [66]: AFRI 1600 nm (AFRI1) and AFRI 2100 nm (AFRI2) penetrate the atmospheric column even when aerosols such as smoke or sulfates are present. AFRI1 = (B8A − 0.66 × B11)/(B8A + 0.66 × B11); AFRI2 = (B8A − 0.5 × B12)/(B8A + 0.5 × B12).
Atmospherically Resistant Vegetation Index (ARVI) [67]: retrieves information on atmospheric opacity and is, on average, four times less sensitive to atmospheric effects than the NDVI. ARVI = (B8 − (B4 − γ(B2 − B4)))/(B8 + (B4 − γ(B2 − B4))), where γ is a weighting function for the difference in reflectance of the two bands that depends on aerosol type (in this study, γ = 1).
Normalized Difference Vegetation Index (NDVI) [68]: measures photosynthetic activity and is strongly correlated with the density and vitality of vegetation on the Earth’s surface. NDVI = (B8A − B4)/(B8A + B4).
Two-band Enhanced Vegetation Index (EVI2) [69]: similar to the NDVI as a measure of vegetation cover but less susceptible to biomass saturation. EVI2 = 2.5 × (B8 − B4)/(B8 + 2.4 × B4 + 1).
Weighted Difference Vegetation Index (WDVI) [70,71]: corrects near-infrared reflectance for the soil background. WDVI = B8 − g × B4, where g is the slope of the soil line.
Global Environmental Monitoring Index (GEMI) [72]: developed to minimize contamination of the vegetation signal by extraneous factors; vital for the remote sensing of dark surfaces, such as recently burned areas. GEMI = γ(1 − 0.25γ) − (B4 − 0.125)/(1 − B4), where γ = (2((B8)² − (B4)²) + 1.5 × B8 + 0.5 × B4)/(B8 + B4 + 0.5).
Normalized Difference Index 4 and 5 (NDI45) [73]: linear, with less saturation at higher values than the NDVI; correlates well with the green LAI owing to the red-edge band. NDI45 = (B5 − B4)/(B5 + B4).
Pigment-Specific Simple Ratio (PSSRa) [74]: sensitive to high concentrations of chlorophyll a; developed to quantify pigments at the scale of the whole plant canopy. PSSRa = B8A/B4.

Soil Indices
Modified Soil-Adjusted Vegetation Index (MSAVI) [75]: determines the density of greenness by reducing the soil background influence, based on the product of NDVI and WDVI. MSAVI = (2 × B8 + 1 − √((2 × B8 + 1)² − 8(B8 − B5)))/2.
Second Modified Soil-Adjusted Vegetation Index (MSAVI2) [76]: a good index for areas that are not completely covered with vegetation and have exposed soil surfaces, though it remains quite susceptible to atmospheric conditions. MSAVI2 = (2 × B8A + 1 − √((2 × B8A + 1)² − 8(B8A − B4)))/2.
Red-Edge Chlorophyll Index (CIRE) [77]: estimates the chlorophyll content of leaves, a good indicator of a plant’s production potential. CIRE = B7/B5 − 1.

Water Indices
Normalized Difference Water Index2 (NDWI2) [78]: developed to detect surface water in wetland environments and to measure surface water extent. NDWI2 = (B8 − B12)/(B8 + B12).
Modified Normalized Difference Water Index (MNDWI) [79]: enhances open water features while efficiently suppressing, and even removing, noise from built-up land, vegetation, and soil. MNDWI = (B3 − B11)/(B3 + B11).
Normalized Difference Pond Index (NDPI) [80]: distinguishes small ponds and water bodies (<0.01 ha) and differentiates vegetation inside ponds from that in their surroundings. NDPI = (B11 − B3)/(B11 + B3).

Band designations: B2: blue; B3: green; B4: red; B5–B7: red-edge 1–3 (visible and near-infrared); B8: wide near-infrared; B8A: narrow near-infrared; B9: water vapour; B10: cirrus (SWIR); B11: shortwave infrared 1 (SWIR1); B12: shortwave infrared 2 (SWIR2).
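As a worked example, the top five variables carried forward to Table 2 can be computed directly from resampled surface-reflectance bands using the Table 1 formulas. The sketch below is illustrative: the bands are assumed to be float arrays on a common 10 m grid, and the small eps guard against division by zero and the function name are our additions.

```python
import numpy as np

def top_five_variables(b3, b4, b6, b7, b8a, b11, b12, eps=1e-6):
    """Stack NBR2, BAIS2, MIRBI, MNDWI, and B11 (Table 1 formulas)
    into one (H, W, 5) array suitable as model input."""
    nbr2 = (b11 - b12) / (b11 + b12 + eps)
    bais2 = (1 - np.sqrt((b6 * b7 * b8a) / (b4 + eps))) * (
        (b12 - b8a) / (np.sqrt(b12 + b8a) + eps) + 1)
    mirbi = 10 * b12 - 9.8 * b11 + 2
    mndwi = (b3 - b11) / (b3 + b11 + eps)
    return np.stack([nbr2, bais2, mirbi, mndwi, b11], axis=-1)
```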
Table 2. The accuracy of the trained UNet, attention UNet (AUNet), and residual attention UNet (RAUNet) using three datasets generated from the top five derivatives of Sentinel-2 data (i.e., NBR2, BAIS2, MIRBI, MNDWI, and B11) to distinguish burned from non-burned areas in miombo woodlands.

Dataset | UNet (IoU 1 / AUC 2 / OA 3) | AUNet (IoU / AUC / OA) | RAUNet (IoU / AUC / OA)
NBR2 4, BAIS2 5, MIRBI 6 | 0.8809 / 0.8868 / 0.9758 | 0.8703 / 0.8726 / 0.9710 | 0.8562 / 0.8609 / 0.9659
NBR2, BAIS2, MIRBI, MNDWI 7 | 0.8946 / 0.8906 / 0.9793 | 0.8730 / 0.8805 / 0.9713 | 0.9117 / 0.8976 / 0.9830
NBR2, BAIS2, MIRBI, MNDWI, B11 8 | 0.8915 / 0.8976 / 0.9779 | 0.8974 / 0.9005 / 0.9798 | 0.9238 / 0.9088 / 0.9853

1 Intersection over union. 2 Area under the receiver operating characteristic curve. 3 Overall accuracy. 4 Normalized Burn Ratio2. 5 Burned Area Index for Sentinel-2. 6 Mid-Infrared Burn Index. 7 Modified Normalized Difference Water Index. 8 Band 11; shortwave infrared 1.
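The three accuracy measures in Table 2 follow standard definitions and can be reproduced as below. This is a minimal sketch assuming a binary ground-truth mask and a per-pixel fire probability map with both classes present; the 0.5 threshold is an assumption, not a value reported in the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def segmentation_scores(y_true, y_prob, threshold=0.5):
    """IoU, AUC, and overall accuracy for a binary fire mask."""
    y_pred = (y_prob >= threshold).astype(int)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    iou = intersection / union if union else 1.0
    # AUC requires both burned and unburned pixels in y_true.
    auc = roc_auc_score(y_true.ravel(), y_prob.ravel())
    oa = (y_true == y_pred).mean()
    return iou, auc, oa
```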