Article

The Change Detection of Mangrove Forests Using Deep Learning with Medium-Resolution Satellite Imagery: A Case Study of Wunbaik Mangrove Forest in Myanmar

by
Kyaw Soe Win
1,2,* and
Jun Sasaki
1
1
Department of Socio-Cultural Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa 277-8561, Japan
2
Environmental Conservation Department, Ministry of Natural Resources and Environmental Conservation, Naypyitaw 15011, Myanmar
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(21), 4077; https://doi.org/10.3390/rs16214077
Submission received: 10 September 2024 / Revised: 19 October 2024 / Accepted: 28 October 2024 / Published: 31 October 2024
(This article belongs to the Special Issue Remote Sensing in Mangroves III)

Abstract

This paper presents the development of a U-Net model using four optical bands and SRTM data to analyze changes in mangrove forests from 1990 to 2024, with an emphasis on the impact of restoration programs. The model, which employed supervised learning for binary classification by fusing multi-temporal Landsat 8 and Sentinel-2 imagery, achieved an accuracy of 99.73% for the 2020 image classification. It was applied to predict long-term mangrove maps in Wunbaik Mangrove Forest (WMF) and to detect changes at five-year intervals. The change detection results revealed significant changes in the mangrove forests, with 29.3% deforestation, 5.75% reforestation, and an annual rate of change of −224.52 ha/yr over 34 years. Mangrove areas have increased since 2010, primarily due to naturally recovered and artificially planted mangroves. Approximately 30% of the increased mangroves from 2015 to 2024 were attributed to mangrove plantations implemented by the government. This study contributes to long-term mangrove monitoring by developing a deep learning model based on multi-temporal and multi-source imagery, providing accurate maps and valuable information for effective conservation strategies and restoration programs.

1. Introduction

Mangroves are intertidal and salt-tolerant evergreen forests that grow in the tropical and subtropical regions of the world [1,2]. They are critical in providing valuable ecosystem services, blue carbon conservation, and nature-based solutions for climate change adaptation and mitigation [1,2,3,4,5]. Despite their invaluable roles, approximately 3.4% of global mangroves have disappeared over the past 24 years [6] due to both natural drivers, such as climate change-induced sea level rise and extreme weather events, and anthropogenic factors, such as increasing human populations, industrialization, aquaculture expansion, natural retraction, and the expansion of paddy fields [1,7,8,9]. Therefore, accurate monitoring of mangrove ecosystems is imperative for understanding their changing extent and for supporting sustainable management and conservation practices.
Mangrove forests have attracted global attention for continuous conservation and management. Despite their value, their remoteness, extent, and tidal mudflats make them difficult to access, which complicates the collection of reliable data and makes regular field monitoring labor-intensive and costly [7,10,11,12]. Currently, advances in machine learning algorithms and the accessibility of satellite imagery facilitate the comprehensive mapping of mangrove ecosystems, enabling effective long-term monitoring [10,13]. Among freely available satellite imagery, Landsat has been more widely used than Sentinel-2 for the historical mapping of mangrove forests [14], although Sentinel-2 provides higher spatial resolution and more accurate mapping of mangrove species and extent [15,16]. The global distribution of mangrove forests in 2020 was mapped with Sentinel-2 at 10 m resolution to allow better detection of mangrove forests [17]. Accordingly, applying multi-source data that fuses Landsat and Sentinel-2 imagery is a current trend to provide consistent spatial resolution for historical and continuous monitoring in cloudy regions [18]. However, to our knowledge, no study has focused on a classification task fusing Landsat and Sentinel-2 imagery for mangrove mapping or other forest classifications.
Satellite image classification has been attempted using the multispectral bands of optical images, the polarizations of radar images, vegetation indices, and elevation data. Many studies have documented the usefulness of the Normalized Difference Vegetation Index (NDVI) in mangrove studies because of its significance in detecting healthy mangroves [13,19,20,21,22]. The Normalized Difference Water Index (NDWI) and the Soil-Adjusted Vegetation Index (SAVI) are widely used indices for the change detection of mangrove forests, and some studies have combined them with mangrove recognition indices [13]. Elevation data have been applied to improve the discrimination of mangrove forests, stacked with the Canopy Height Model (CHM) and slope [21,22,23], or used to mask high-elevation and coastal water areas in the input images before classification [24,25]. However, recent studies have not required numerous input features when training deep learning models, because convolutional layers can extract the distinct features of the input images [26,27]. The model can also be trained more efficiently, as fewer input features require less computational power. Accordingly, this study aims to explore the development of a deep learning model with fewer input features, considering its applicability to multi-source satellite imagery.
Remote sensing has been utilized to produce mangrove maps combined with a series of machine learning classification techniques, including random forests [28,29,30], support vector machines (SVM) [31], decision trees [19], and iterative self-organizing data analysis (ISODATA) [32]. With the recent development of computational power, deep learning algorithms have become popular in remote sensing image analysis [33]. Deep learning models have proven to outperform traditional models, improving environmental monitoring with remote sensing data [34]. Relatively few studies have applied deep learning to mangrove extent mapping with satellite imagery ranging from high to medium resolution [26,35,36], and these have mostly classified mangroves using short-term, single-source satellite datasets with multiple input features [23,37]. Although multi-source models fusing Sentinel-1 and Sentinel-2 have been applied to short-term mangrove mapping [29,38], there remains a need for a multi-source deep learning model based on Landsat imagery to monitor mangrove forests over the long term.
Recently, Guo et al. (2021) [26] proposed a Capsules–Unet model for large-scale and long-term mangrove mapping from 1990 to 2015 using Landsat imagery, aiming at the precise extraction of mangroves. They achieved a comparatively low accuracy of 85.7–88.7% using the large datasets available for 1990, 2000, 2010, and 2015 [26]; although the model can be applied to large-scale and long-term mangrove mapping, its accuracy remained limited. Studies using Landsat 8 or Sentinel-2 imagery with temporal datasets for small-scale areas achieved higher accuracies of 97.64% and 97.48% with the deep learning models MSNet and ME-Net [37,39]. According to Ghorbanian et al. (2022), multi-source datasets of Sentinel-1 and Sentinel-2 improved an Artificial Neural Network (ANN) model and provided more accurate mangrove maps than single-source Sentinel-2 datasets [38]. Combining Landsat 8 and Sentinel-2 can improve image quality by reducing the spatial resolution gap between them and providing temporally denser observations for environmental applications [40]. Therefore, the present study explores the performance of a deep learning model using multi-temporal and multi-source Landsat and Sentinel-2 datasets for long-term mangrove mapping.
Myanmar ranked as the country with the highest annual rate of mangrove loss from 2000 to 2012 [41], despite the decreasing net loss of global mangrove area [1]. The Wunbaik Mangrove Forest (WMF) in Myanmar is one of the largest remnant mangrove ecosystems, endowed with ecologically important and endangered species and providing invaluable ecosystem services to local communities [42]. It experienced large losses of mangroves due to land use changes to paddy fields and shrimp ponds in the past [43] and has been recorded as one of the hotspots of mangrove change worldwide [6]. However, existing studies on long-term mangrove distribution in WMF are outdated [42,43], and recent information is lacking. These gaps highlight the need for continuous monitoring using accurate techniques to understand the historical changes and current conditions of WMF, especially following extensive restoration efforts by the government.
Rahman et al. (2024) observed an improving trend of mangroves in Southeast Asia, including Myanmar, from 2015 to 2020 when analyzing the Global Mangrove Watch (GMW) dataset, although most existing studies report a high rate of deforestation in Myanmar [20,21,32,43,44,45]. A recent study by Maung and Sasaki (2021) observed a slight decline in WMF based on change detection from 2015 to 2020, while detecting mangrove gains in the plantation sites and natural recovery at approximately 50% of the abandoned sites [23]. Meanwhile, the government of Myanmar has implemented restoration programs, such as the Myanmar Rehabilitation and Restoration Programme (MRRP), across the country to meet the national climate change and biodiversity targets [46,47]. Therefore, there is an urgent need to assess the contributions of restoration programs to changes in mangrove forests. This study investigates the long-term distribution of mangrove forests, emphasizing the drivers of mangrove losses and gains in WMF.
Considering the advantages of multi-source data, this study aimed to develop a multi-temporal and multi-source deep learning model fusing medium-resolution Landsat 8 and Sentinel-2 imagery to predict the long-term distribution of mangrove forests and to investigate the patterns of change in WMF, discussing the drivers behind mangrove losses and the role of mangrove plantations in increasing mangrove coverage. The proposed deep learning model provided accurate and reliable performance in predicting long-term mangrove maps of WMF. This approach helps to better understand the changing patterns of mangrove forests, filling a critical gap of outdated information on historical mangrove distribution in WMF and identifying the losses and gains of mangrove forests due to anthropogenic factors over long periods.

2. Materials and Methods

2.1. Study Site

The study area is WMF, located between 19°07′02″N–19°23′30″N and 93°51′00″E–94°02′30″E in Yambye township, Rakhine State, Myanmar. WMF consists of Wunbaik Reserved Mangrove Forest (WRMF), Mingyaung Public Mangrove Forest, and Extended Mingyaung Mangrove Forest (see Figure 1), protected by the Forest Department of Myanmar. It plays a vital role in providing numerous ecosystem benefits to residents living in and near the forest and harbors important and endangered mangrove species [48]. The topography of the study area is extensively flat and connects to hills in the western part. A 20-mile road crossing WRMF was constructed, occupying 0.4% of the total mangrove area in WMF [42].

2.2. Ground Truth Image

This study utilized a ground truth (label) image (Figure 2) created by Maung and Sasaki (2021) [23], who manually delineated it based on ground truth points collected from September to October 2019. As the original ground truth image was not available from the researchers, the current image was created by copying it from the original manuscript [49], upscaling it in OpenCV, georeferencing it in QGIS v3.32.0, validating it against high-resolution Google Earth images, setting the spatial resolution to 10 m and 30 m, and reclassifying it to 255 for mangroves and 0 for non-mangroves.
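As an illustration of the final steps of this label preparation, the following is a minimal sketch that resamples a georeferenced copy of the label image to a target resolution and binarizes it with rasterio and NumPy; the file names and the 10 m target are placeholders, and the georeferencing itself was done interactively in QGIS.

```python
# Minimal sketch (assumed file names): resample a georeferenced label raster to a
# target resolution and binarize it (255 = mangrove, 0 = non-mangrove).
import numpy as np
import rasterio
from rasterio.enums import Resampling

target_res = 10.0  # metres; repeat with 30.0 for the Landsat-resolution label

with rasterio.open("wmf_label_georeferenced.tif") as src:
    scale = src.res[0] / target_res
    out_h, out_w = int(src.height * scale), int(src.width * scale)
    # Nearest-neighbour resampling preserves the categorical label values
    data = src.read(1, out_shape=(out_h, out_w), resampling=Resampling.nearest)
    transform = src.transform * src.transform.scale(src.width / out_w,
                                                    src.height / out_h)
    profile = src.profile.copy()

label = np.where(data > 127, 255, 0).astype("uint8")  # reclassify to 255/0

profile.update(height=out_h, width=out_w, transform=transform,
               dtype="uint8", count=1)
with rasterio.open("wmf_label_10m.tif", "w", **profile) as dst:
    dst.write(label, 1)
```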

2.3. Earth Observation Data

Considering the historical changes, Landsat series imagery was used to investigate the important input features, to develop deep learning models, and to predict long-term mangrove maps despite its medium spatial resolution of 30 m. Additionally, Sentinel-2 imagery with a higher resolution of 10 m was utilized to build a multi-source model with greater precision, and Harmonized Landsat and Sentinel-2 (HLS) imagery was employed to compare the area differences provided by different sources.
Multiple Landsat series images, including Landsat 5 for 1990, 1995, 2000, 2005, and 2010; Landsat 8 for 2015, 2019, and 2020; Landsat 9 for 2024; and Sentinel-2 images for each year from 2015 to 2024, were downloaded as Level-2 products with the least cloud coverage (Appendix A). Atmospheric correction was conducted using Remotior Sensus [50] in Google Colab; Landsat 7 images were not collected because of scan line errors in the study area. HLS images of 2017, 2020, 2022, and 2024 [51] were obtained from EARTHDATA (https://search.earthdata.nasa.gov/, accessed on 28 June 2024). Landsat images resampled to 10 m resolution were used to compare deep learning models, build multi-temporal and multi-source models, and predict long-term mangrove maps from 1990 to 2024.
For topographic and canopy height information, the Shuttle Radar Topography Mission (SRTM) 1 arc-second global Digital Elevation Model (DEM) [52] was downloaded from the United States Geological Survey (USGS) EarthExplorer website (https://earthexplorer.usgs.gov/, accessed on 2 September 2023). The Multi-Error-Removed Improved-Terrain DEM (MERIT) [53] was obtained through the MERIT DEM website (https://hydro.iis.u-tokyo.ac.jp/~yamadai/MERIT_DEM/, accessed on 2 September 2023). Preprocessing steps such as band combination, image clipping, resampling, and calculation of vegetation indices and canopy heights were conducted in QGIS v3.32.0.

2.4. Input Feature Selections

Properly selecting important features is critical to building an accurate machine learning model, as irrelevant features can reduce the model’s performance [54]. For model training based on 2020 data, the important input features were selected among the seven multispectral bands of Landsat 8; four vegetation indices, namely NDVI [55], NDWI [56], SAVI [57], and the Combined Mangrove Recognition Index (CMRI) [58]; and topographic and height information, including SRTM, MERIT, the Canopy Height Model (CHM), and slope, by conducting multiple experiments with different input feature combinations using ANN and Convolutional Neural Network (CNN) models. The Red, Green, Blue, Near-infrared (NIR), and Short-wave infrared (SWIR) bands were mainly used in the input feature selection because not all bands are available across all Landsat series imagery. CHM was obtained as the difference between SRTM and MERIT, where SRTM was treated as the Digital Surface Model (DSM) and MERIT as the Digital Terrain Model (DTM) [23,59,60,61]. The vegetation indices and CHM were calculated using the following equations, with the default value of 0.5 used as the soil adjustment factor (L) in SAVI.
NDVI = (NIR − Red) / (NIR + Red)
NDWI = (Green − NIR) / (Green + NIR)
SAVI = [(NIR − Red) / (NIR + Red + L)] × (1 + L)
CMRI = NDVI − NDWI
CHM = SRTM DEM − MERIT DEM
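For reference, a minimal NumPy sketch of these index calculations is given below; the array names are illustrative, and the small epsilon guarding against division by zero is an assumption not stated in the text.

```python
# Minimal sketch of the index calculations above, with reflectance bands and
# DEMs given as NumPy arrays of the same shape (names are illustrative).
import numpy as np

def compute_indices(red, green, nir, srtm_dem, merit_dem, L=0.5):
    eps = 1e-6  # guard against division by zero over water/no-data pixels
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    savi = (nir - red) / (nir + red + L + eps) * (1.0 + L)
    cmri = ndvi - ndwi
    chm = srtm_dem - merit_dem  # SRTM as DSM minus MERIT as DTM
    return ndvi, ndwi, savi, cmri, chm
```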

2.5. CNN

CNN, a specialized form of ANN, is uniquely designed for tasks involving grid-structured data, such as images. Its ability to learn spatial information, including textures, edges, and distinct patterns, makes it particularly useful for remote sensing image analysis, such as scene-based classification and pixel-wise segmentation [33,62]. The basic CNN architecture consists of three types of layers: convolutional, pooling, and fully connected layers. The convolutional layer, operating on a three-dimensional structure with height, width, and depth (channels), includes numerous optimizable filters that transform the input features into the important features that describe the targets. Pooling layers reduce the number of parameters by down-sampling while preserving discriminant information. The fully connected layer interprets the features extracted by the convolutional and pooling layers, facilitating the final classification.
Multiple CNN architectures have been published, mainly for scene-based image classification [62], and these are not appropriate for pixel-wise satellite image classification. Therefore, the CNN architecture (Figure 3) was designed with three convolutional layers, one pooling layer, and one flattening layer, and parameters were selected through hyperparameter tuning. The model was trained with 96, 64, and 64 filters in the three convolutional layers, 96 nodes in the dense layer, four dropout layers with a rate of 0.25, Rectified Linear Unit (ReLU) activation functions for the hidden layers, a sigmoid activation function for the output layer, the Adaptive Moment Estimation (Adam) optimizer, and binary cross-entropy as the loss function. The results were then compared with the U-Net model to identify the strengths and weaknesses of each model. To train the CNN model, the input features were clipped into patches of 7 × 7 using the Pyrsgis library.
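A minimal Keras sketch approximating the CNN described above is given below; the filter counts, dropout rate, dense width, and 7 × 7 patch size follow the text, while the kernel size, padding, and exact placement of the dropout and pooling layers are assumptions.

```python
# Minimal Keras sketch of the patch-based CNN (architectural details that the
# text does not specify, such as kernel size and padding, are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(patch_size=7, n_channels=5):  # 4 optical bands + SRTM
    inputs = layers.Input(shape=(patch_size, patch_size, n_channels))
    x = layers.Conv2D(96, 3, padding="same", activation="relu")(inputs)
    x = layers.Dropout(0.25)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Dropout(0.25)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Dropout(0.25)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(96, activation="relu")(x)
    x = layers.Dropout(0.25)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # mangrove vs non-mangrove
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```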

2.6. U-Net

The U-Net, a CNN designed originally for medical image segmentation, stands out with its U-shaped architecture. This design includes a contracting path (encoder) for feature extraction and an expanding path (decoder) for precise localization, facilitated by skip connections. Similar to a conventional convolutional network, the contracting path comprises convolutional and pooling layers for down-sampling. The expanding path, in turn, up-samples the features learned in the contracting path using up-sampling operations, followed by convolutional layers with activation functions [63].
In this study, the architecture of the U-Net model (Figure 4) was designed using convolutional layers, max-pooling layers, convolutional transpose layers, concatenate layers, and dropout layers. For training the model, the images were clipped into patches of 128 × 128 using the Geotile library. On the encoder side, the model used 32 filters in the first two convolutional layers, 64 in the second two, and 128 in the third two, with three dropout layers added between the paired convolutional layers, each followed by a max-pooling layer, and 256 filters in the fourth pair of convolutional layers with a dropout layer. On the decoder side, 256 filters were applied in the first two convolutional layers, 128 in the second two, and 64 in the third two, with three dropout layers added between the paired convolutional layers, followed by a concatenate layer, and 32 filters in the fourth pair of convolutional layers with a dropout layer. The loss function, optimizer, and learning rate were the same as for the CNN model: binary cross-entropy for the loss function and the Adam optimizer with a learning rate of 0.0001.
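The following Keras sketch approximates this architecture under stated assumptions: the 128 × 128 patch size, the 32–64–128 encoder filters with a 256-filter bottleneck, skip connections, dropout, the Adam optimizer with a 0.0001 learning rate, and binary cross-entropy follow the text, while the kernel sizes and the exact number and placement of dropout and concatenate layers are simplifications.

```python
# Minimal Keras sketch of a U-Net close to the description above (exact layer
# counts and dropout placement are assumptions, not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters, dropout=0.25):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(patch_size=128, n_channels=5):  # 4 optical bands + SRTM
    inputs = layers.Input(shape=(patch_size, patch_size, n_channels))

    # Encoder (contracting path)
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D(2)(c2)
    c3 = conv_block(p2, 128);    p3 = layers.MaxPooling2D(2)(c3)
    c4 = conv_block(p3, 256)     # bottleneck

    # Decoder (expanding path) with skip connections
    u3 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.Concatenate()([u3, c3]), 128)
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c6)
    c7 = conv_block(layers.Concatenate()([u1, c1]), 32)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c7)  # pixel-wise binary mask
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```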

2.7. Model Training and Evaluation

The input dataset for each model was split into three parts: 60% training data, 20% testing data, and 20% validation data. The training and validation data were used in the training process, and model performance was evaluated on the testing data after training. Model training and evaluation were conducted using TensorFlow 2.4.1 with one GPU on the Wisteria/BDEC-01 supercomputer system of the University of Tokyo. The results of each model were evaluated using several metrics calculated with the following equations: overall accuracy, precision, recall, F1-score, and intersection over union (IoU). The predicted results were summarized in a confusion matrix describing true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).
Accuracy = Number of Correct Predictions / Number of Total Predictions
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-Score = 2 × (Recall × Precision) / (Recall + Precision)
Intersection over Union (IoU) = TP / (TP + FP + FN)
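These metrics can be computed directly from a binary prediction and label array; the following is a minimal NumPy sketch with illustrative variable names.

```python
# Minimal sketch: evaluation metrics from binary label/prediction arrays (0/1).
import numpy as np

def evaluate(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    iou       = tp / (tp + fp + fn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "iou": iou}
```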

2.8. Change Detection

Change detection identifies the differences in an area by observing images acquired on different dates and is widely applied in remote sensing [64,65]. The present study used the image differencing method among the multiple post-classification change detection techniques. The change detection results were presented as mangrove gains, mangrove losses, and unchanged areas and were visually checked against high-resolution Google Earth images. The numbers of pixels showing gains, losses, and no change were extracted and used to calculate the long-term changes in mangrove forests and the annual rate of change.
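A minimal sketch of this post-classification image differencing is shown below, assuming two co-registered binary maps (1 = mangrove, 0 = non-mangrove); the pixel size and the interval in years are parameters, and file handling is omitted.

```python
# Minimal sketch: post-classification differencing of two binary mangrove maps.
import numpy as np

def detect_change(map_t1, map_t2, pixel_size_m=10.0, years=5):
    diff = map_t2.astype(int) - map_t1.astype(int)
    px_area_ha = (pixel_size_m ** 2) / 10_000.0   # pixel area in hectares
    gain_ha = np.sum(diff == 1) * px_area_ha      # non-mangrove -> mangrove
    loss_ha = np.sum(diff == -1) * px_area_ha     # mangrove -> non-mangrove
    unchanged_ha = np.sum(diff == 0) * px_area_ha
    annual_rate = (gain_ha - loss_ha) / years     # ha/yr; negative = net loss
    return gain_ha, loss_ha, unchanged_ha, annual_rate
```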

3. Results

3.1. Important Input Features

Important input features were first identified using feature importance scores derived from the Random Forest Classifier and correlation values between input features and targets. However, these techniques did not yield a proper combination of input features. Subsequently, proper combinations were investigated through multiple experiments using ANN and CNN models.
The feature importance scores of the Random Forest Classifier indicated that NDVI, CMRI, NDWI, and SAVI were the most important features, followed by MERIT, SRTM, slope, and CHM. In the experiments based on the ANN model, the highest accuracy of 93.80% was achieved with the combination of NDVI, NDWI, SAVI, CMRI, SRTM, and CHM. The CNN model, on the other hand, achieved the highest accuracy of 95.99% with the combination of four Landsat 8 bands (Blue, Green, Red, and NIR) and SRTM (Appendix B). Therefore, the four bands and SRTM were used to compare the CNN and U-Net models and to develop a multi-temporal and multi-source model using the U-Net architecture.
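For reference, a minimal scikit-learn sketch of the feature-ranking step is given below, assuming the candidate layers have been flattened into a (pixels × features) matrix; the function name, feature names, and hyperparameters are illustrative.

```python
# Minimal sketch: rank candidate input features with a Random Forest Classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_features(X, y, feature_names, n_trees=100, seed=0):
    """X: (n_pixels, n_features) stacked layers; y: binary mangrove labels."""
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed, n_jobs=-1)
    rf.fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]  # most important first
    return [(feature_names[i], float(rf.feature_importances_[i])) for i in order]
```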

3.2. Multi-Temporal and Multi-Source Model

3.2.1. Comparison of CNN and U-Net Using Landsat 8 and Sentinel-2

The CNN and U-Net models were compared using Landsat 8 and Sentinel-2 imagery to observe the strengths and weaknesses of each model and to select an optimal model for the long-term distribution of mangrove forests.
The comparison results (Table 1) revealed that the U-Net model using Sentinel-2 outperformed the CNN model, while both models produced similar accuracy when trained with Landsat 8. The U-Net model using Sentinel-2 achieved an accuracy of 98.25%, compared with 97.24% for the CNN. The most significant difference was training time: the U-Net model required about 250–260 s, while the CNN model took about 10,000–12,000 s. Although the CNN architecture was simple, with 152,193 trainable parameters, it required far more training time than the U-Net model, which used many convolutional layers with many filters, totaling 3,133,921 parameters. This comparison suggests that the U-Net model is more efficient and effective than the CNN model for this application.

3.2.2. Performance of Multi-Temporal and Multi-Source Model

A multi-temporal and multi-source deep learning model was built on the U-Net architecture using the multi-temporal Landsat 8 and Sentinel-2 images available in November 2019, December 2019, and January 2020. The model was trained for 500 epochs using twelve images obtained by duplicating the six temporal images of Landsat 8 and Sentinel-2. As a result, it achieved the highest accuracy of 99.73%, with an IoU of 0.99 and precision, recall, and F1-score of 1 (Table 2). It was then applied to produce mangrove maps from the 2020 Landsat 8 and Sentinel-2 images to draw a confusion matrix (Table 3) describing TP, TN, FP, and FN and to visualize them (Figure 5). The confusion matrix showed few false positives and false negatives, with more of both in the Landsat 8 results than in the Sentinel-2 results. After model evaluation, mangrove distribution maps from 1990 to 2024 were predicted using the model. The resulting maps (Figure 6) were visually checked against Google Earth images; the results were promising, and the changes in mangrove forests were clearly identified through visual assessment with high-resolution Google Earth images.

3.3. Long-Term Mangrove Changes in WMF

Monitoring the distribution of and changes in mangrove forests over time is crucial for understanding the health of these ecosystems and human interventions in them, and for informing conservation efforts. The present study used image differencing between maps of two dates provided by the model to detect mangrove changes in WMF. The mangrove and non-mangrove areas were then extracted from the mangrove maps for each year, and the changes in mangrove areas were visualized over 34 years. The results demonstrated considerable fluctuation in mangrove distribution in WMF (Figure 7). Mangrove area in WMF was 32,402.40 ha in 1990 and decreased steadily to 23,561.64 ha by 2010. Afterward, it started increasing slowly in 2015 and has continued to increase, reaching 24,768.77 ha in 2024.
Change detection results highlighted that significant areas of mangroves have changed in WMF over the 34-year period (Figure 8). A substantial portion of the mangrove forests has been lost since 1990, with only a minor recovery by 2024. Specifically, 29.3% of the mangrove extent in WMF has been lost, while only 5.75% has been gained. The most significant deforestation occurred during 2005–2010, while the most substantial reforestation was observed during 2015–2020. Notably, the expansion of mangrove forests has been consistent since 2010. The annual rate of change is −224.52 ha/yr, equivalent to a decrease of 0.69% per year over the 34 years. The highest negative rate of change was −703.4 ha/yr, a severe decrease of 2.27% per year over the five years between 1995 and 2000, while a positive rate of change has been observed since 2010, peaking between 2020 and 2024 (Figure 9).

4. Discussion

4.1. Selection of Key Input Features

Input feature selection is an important preprocessing step before training a model. Properly selecting important features influences the model’s performance by reducing data complexity and removing irrelevant features among multiple input features [54]. To select the optimal combination of input features for model training, the present study utilized the Random Forest Classifier and conducted manual experiments using ANN and CNN models. Although the Random Forest Classifier identified NDVI, CMRI, NDWI, and SAVI as the most important features, followed by MERIT, SRTM, slope, and CHM, it did not identify the optimal combination for model training. Therefore, multiple experiments were conducted using the ANN model for 30 epochs, designed with two hidden layers of 50 and 40 nodes. These experiments provided the highest accuracy of 93.80% with the combination of NDVI, NDWI, SAVI, CMRI, SRTM, and CHM.
These findings suggest that the four vegetation indices NDVI, NDWI, SAVI, and CMRI were sufficient for mangrove classification with the ANN model, while SRTM and CHM contributed to higher accuracy by discriminating terrestrial forests in high-elevation areas. However, these findings differ from those of Maung and Sasaki (2021), who reported the significance of MERIT and CHM for model improvement by integrating ten bands of Sentinel-2, NDVI, and NDWI [23]. In the present study, SRTM emerged as considerably more important than MERIT in almost every combination of input features (Appendix B). Although CHM was an important feature of the ANN model based on 2020 data, it does not represent 2020 conditions because it was estimated from SRTM and MERIT, which were derived from elevation data collected in 2000. For historical change detection, CHM does not represent different years and could bias the historical map predictions. Moreover, the above input combination showed overfitting when used to train the CNN model for 50 epochs (Figure 10). Therefore, feature selection experiments were conducted using the CNN model with fewer input features. These experiments provided the highest accuracy of 95.99% with the combination of four bands and SRTM when training for 100 epochs.
Existing studies demonstrated successful deep learning-based mangrove classification using three bands (Red, NIR, and SWIR); four bands (Blue, Green, Red, and NIR) combined with the VH and VV polarizations of Sentinel-1; seven bands; and multiple bands combined with vegetation indices and elevation data [23,26,37,38,39]. The present study found that four bands with SRTM were useful for mangrove classification, where SRTM effectively removed high-elevation areas such as terrestrial forests. Moreover, three bands (Green, Red, and NIR) with SRTM could be utilized in long-term mangrove monitoring, except for the 1995 Landsat 5 image, in which open forests with less mangrove cover were misclassified as non-mangroves (Figure 11). We could not identify the causes of misclassification in the 1995 images, which may relate to the low quality of the imagery available for 1995 and its original preprocessing. Therefore, model training was conducted using four multispectral bands and SRTM as input features.

4.2. Application of Multi-Temporal and Multi-Source Imagery to U-Net

The application of multi-temporal and multi-source imagery is a pivotal aspect of mangrove mapping. Numerous studies have demonstrated the benefits of multi-temporal images in developing machine learning and deep learning models [19,26,38], and some studies have utilized multi-source imagery of radar and optical satellites to enhance the model’s accuracy by reducing the effects of weather conditions [21,29,38]. While some studies have explored multi-source imagery, our study is unique in its focus on fusing Landsat and Sentinel-2 imagery.
Given the availability of a single ground truth image for 2020, we initially developed the model using a temporal dataset from January 2020. However, the model’s performance was not consistent across different temporal images, producing misclassified results when predicting images from December 2019, when the condition of the paddy fields differed (Figure 12). Farmers in WMF grow paddy from June to August and harvest it from December to March [43]. The paddy fields appeared green in true-color images in November and December, before harvesting, and white in January, after harvesting.
The fluctuation of NDVI values across months confirmed the significant differences in the paddy fields. The NDVI values of paddy fields were lower than those of mangroves in January, higher in December than in January, and nearly identical to those of mangroves in November. Images from February to May showed conditions similar to January, and those before November contained a high percentage of clouds because of the monsoon rainy season. Therefore, three temporal images from November, December, and January were used to train the multi-temporal and multi-source model.
The comparison of CNN and U-Net highlighted the strengths and weaknesses of each model when using Landsat 8 and Sentinel-2 images. The strength of the CNN model was that it could be trained using a single satellite image, but it took longer to train than the U-Net. The CNN model is best suited to studies of small areas using medium-resolution satellite images. The U-Net model, on the other hand, achieved higher accuracy when trained for many epochs and needed less training time than the CNN model. However, the weakness of the U-Net model is its requirement for large training samples; data augmentation, such as duplicating the same images, was employed during training to address this. Moreover, we encountered overfitting when conducting multiple experiments to obtain an optimal model and resolved it by changing the number of filters in each layer and adding or removing layers in the U-Net architecture.
The comparison of different satellite imagery demonstrated that Sentinel-2 outperformed Landsat 8: the U-Net model achieved higher accuracy with Sentinel-2 than with Landsat 8. Sentinel-2 imagery could be the best choice for mangrove mapping when focusing on data since 2015. However, this study required Landsat imagery for the historical changes in mangrove forests. Therefore, the fusion of resampled Landsat 8 and Sentinel-2 images was used to train the U-Net model on the multi-temporal datasets.

4.3. Model Performance and Limitations

The U-Net model trained with multi-temporal and multi-source imagery achieved the highest accuracy of 99.73%, outperforming the ANN model trained on the same ground truth image [23] with Sentinel-2 at the same site, as well as existing models such as Capsules-Unet [26], MSNet [39], and ANN [38], which were studied in different locations with different satellite imagery. Moreover, it can be applied to multi-temporal datasets from medium-resolution optical satellite imagery, including Landsat, Sentinel-2, and HLS. Its highest accuracy was achieved when training the model for 500 epochs.
The U-Net model achieved lower accuracy when trained with single-source datasets for 200 epochs: 98.25% with Sentinel-2 and 96.64% with Landsat 8 (Table 1). The accuracy might have reached about 98% had the model been trained with Landsat imagery for 500 epochs. This highlights the usefulness of fusing Landsat 8 and Sentinel-2 imagery for mangrove mapping. The combination of Landsat and Sentinel-2 optical imagery enhanced performance by reducing the spatial resolution gap between them. This approach allows the model to leverage the higher spatial resolution of Sentinel-2 while benefiting from the broader spectral bands of Landsat 8.
However, the model has limitations in generalization. When the model was applied to different locations in Myanmar, the results were evaluated against the mangrove maps of the High-Resolution Global Mangrove Forest (HGMF); it performed well in the mangrove areas near Rakhine State but failed to accurately classify mangrove forests in the Ayeyarwady Delta and the Tanintharyi Region. To address this, we conducted multiple experiments using different input features and larger training samples from these three regions. A test model trained using four bands, NDVI, and NDWI classified mangrove forests in the different regions, though it had difficulty distinguishing mangroves from terrestrial forests. A model trained using four bands and SRTM with larger training datasets from the three regions performed well for all regions and accurately distinguished mangroves from terrestrial forests. Therefore, more ground truth images representing different locations should be included to train a generalized model for nationwide maps. Additionally, the model should be tested with data from different countries to ensure broader generalization and reliability.
The present model trained on multi-source imagery yielded slight discrepancies in mangrove extent when predicting from specific images and different spatial resolutions. The resampled Landsat 8 images of 2020 yielded an estimated extent of 2430.13 ha, while the Sentinel-2 images yielded 2431.45 ha. A larger difference was identified when predicting from the same Landsat 8 images without resampling, which yielded 2405.74 ha (Figure 13). The resulting map showed a coarser distribution of mangrove forests without distinct features of the river network (Figure 14).
Similar discrepancies were observed when comparing the mangrove extents of Landsat 8 at 30 m resolution in 2024, resampled Landsat 8 (10 m), and Sentinel-2 (10 m). The largest extent was obtained from resampled Landsat 8, about 30 ha more than from Sentinel-2, and the smallest from Landsat 8 without resampling. Therefore, the U-Net model of this study should not be applied to Landsat images without resampling, as this can produce a less accurate map. Instead, Landsat images should be resampled to 10 m to match the spatial resolution of Sentinel-2, ensuring consistent and reliable predictions.
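The resampling step can be reproduced as in the minimal rasterio sketch below; the bilinear resampling method and file paths are assumptions (the study's own resampling was carried out in QGIS), and nearest-neighbour resampling would be an equally defensible choice.

```python
# Minimal sketch (assumed paths and resampling method): resample a 30 m Landsat
# band stack to 10 m before prediction, matching the Sentinel-2 grid spacing.
import rasterio
from rasterio.enums import Resampling

def resample_to_10m(src_path, dst_path, target_res=10.0):
    with rasterio.open(src_path) as src:
        scale = src.res[0] / target_res            # e.g. 30 / 10 = 3
        out_h, out_w = int(src.height * scale), int(src.width * scale)
        data = src.read(out_shape=(src.count, out_h, out_w),
                        resampling=Resampling.bilinear)
        transform = src.transform * src.transform.scale(src.width / out_w,
                                                        src.height / out_h)
        profile = src.profile.copy()
        profile.update(height=out_h, width=out_w, transform=transform)
    with rasterio.open(dst_path, "w", **profile) as dst:
        dst.write(data)

# Example usage with hypothetical file names:
# resample_to_10m("landsat8_2020_30m.tif", "landsat8_2020_10m.tif")
```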

4.4. Extent Changes in WMF

The change detection analysis revealed that 29.3% of the mangrove extent in the study area was deforested from 1990 to 2024, although mangroves have increased steadily since 2010. However, Saw and Kanzaki (2015) reported that 40% of the mangrove extent in WRMF was lost from 1990 to 2011 [43], and Maung and Sasaki (2021) observed a slight decrease in WMF from 2015 to 2020 [23]. The difference between Saw and Kanzaki (2015) and the present study reflects the extent of the study area: their research focused on WRMF, which covers 22,919 ha, while the total area of this study is 53,859.95 ha. When change detection was analyzed for the period from 1990 to 2010, 30.75% of the mangrove forests were found to have been deforested.
Our findings from 2015 to 2020 differ from those of Maung and Sasaki (2021), who observed a slight decline from 254.30 to 249.83 km2 in WMF, while our study observed a slight increase. To validate the results of the present study, we downloaded the available mangrove maps of the Global Mangrove Watch (GMW) and analyzed their long-term distribution. The GMW data showed a pattern of mangrove change similar to the present study, indicating deforestation from 1996 to 2015 and reforestation from 2015 to 2020 (Figure 15). The decline reported by Maung and Sasaki (2021) could be due to the use of different Sentinel-2 preprocessing levels (Level-1C images for 2015 and Level-2A for 2020) and the application of transfer learning on a small dataset representing mainly agricultural areas.

4.5. Extent Comparison with Different Datasets

Evaluating the resulting mangrove map against available data is a crucial step in demonstrating the reliability of the proposed model. We compared the mangrove map predicted by the U-Net model with existing global datasets for 2020, the Global Mangrove Watch (GMW) [6] and the High-Resolution Global Mangrove Forest (HGMF) [17], by referencing the ground truth image used for model training. The comparison highlighted large discrepancies: the area difference between the ground truth image (24,268.16 ha) and the GMW data (28,785.78 ha) was substantial, whereas the gap between the ground truth image and the HGMF data was much smaller, around 1000 ha (Table 4). The GMW data overestimated the distribution of mangrove forests in the study area by approximately 4000 ha. The mangrove map produced by the U-Net model aligned closely with the ground truth image, indicating the reliability of the deep learning approach.
The long-term changes in mangrove forests were also examined using Sentinel-2 and HLS images to compare the results of the Landsat images with other medium-resolution satellite imagery. The mangrove distribution maps of Sentinel-2 predicted by the U-Net model indicated a relatively stable trend with slight fluctuations (Figure 16). The mangrove extent of the study area was 24,541.09 ha in 2015, reached a peak around 2017, decreased slightly to 24,314.47 ha in 2020, and gradually increased to 24,469.76 ha in 2024. The Sentinel-2 data thus showed a slight overall decrease in WMF from 2015 to 2020, followed by a marginal increase by 2024.
To verify the extent of changes in WMF, we applied the model to both HLS L30 and HLS S30. HLS L30 was created from Landsat surface reflectance adjusted to the Sentinel-2 tiling system, while HLS S30 was created from Sentinel-2 imagery resampled to 30 m and adjusted to the Landsat 8 and 9 spectral functions. The mangrove distributions of HLS L30 showed an abnormal increase from 2015 to 2024, with the least coverage in 2017 (Figure 17a), whereas HLS S30 showed a marginal decrease from 2015 to 2020, followed by a slight increase by 2024 (Figure 17b). These results also differ from those of Sentinel-2; however, they confirm an increase in mangrove forests by 2024.
Different satellite images produced different results, possibly due to spatial resolutions, sensors, and preprocessing techniques, although they were all processed with the Remotior Sensus library. This study relied on the Landsat series because of the availability of historical data. Moreover, our results align with those from the global GMW dataset. Therefore, the results produced by the Landsat images are considered reliable.

4.6. Drivers of Mangrove Losses and Gains

Existing studies reported that the drivers of mangrove change in WMF were anthropogenic factors, such as the expansion of shrimp ponds and paddy fields [42,43]. Although the present study did not directly detect the drivers of mangrove change, it identified the drivers of mangrove losses and gains through high-resolution Google Earth images (Figure 18). According to the change detection results, mangrove forests were steadily lost from 1990 to 2010. These losses were caused by the construction of a crossing road through the reserved forest in 1994, the significantly increased number of farmers and shrimp-pond operators from 1990 to 2000, the expansion of paddy fields between 1994 and 2003, and illegal logging under weak regulations [43].
Mangrove forests have increased since 2010. The drivers of mangrove gains were identified as artificially planted and naturally recovered mangroves in abandoned sites, in sedimentation areas, and along roadsides (Figure 19). Mangrove cover has increased along roadsides; the road crossing the study area was clearly visible in the mangrove maps from 1995 to 2015, but only in parts of the 2020 and 2024 maps. The Forest Department of Myanmar implemented 855 acres (346 ha) of mangrove plantations from 2017 to 2023 within WRMF. Moreover, Maung and Sasaki (2021) confirmed that mangrove gains were due to mangrove plantations and the natural recovery ability of mangroves [23].
The present study identified that some areas with recently recovered mangroves are at risk of being lost again due to anthropogenic disturbances. Maung and Sasaki (2021) found that mangroves naturally recovered in approximately 50% of the three abandoned sites; however, mangroves in two of these sites had been lost again by 2024 (Figure 20). Accordingly, natural recovery alone is insufficient to ensure the long-term stability of these ecosystems, and anthropogenic disturbances pose a significant threat to the sustainability of naturally recovered mangroves. Protecting naturally recovered mangroves can increase mangrove coverage at low cost. Therefore, sustainable management and conservation strategies are essential to protect these vital ecosystems, ensuring their resilience in the face of environmental and human-induced challenges.
Monitoring the present conditions of mangrove forests is vital for detecting changes, pinpointing their locations, and identifying the drivers behind them. Saw and Kanzaki (2015) reported that illegal woodcutting occurred in the reserved forests for charcoal and firewood [43]. The present study observed that mangrove losses have occurred from 2020 onward, both within and outside the reserved area. Over half of the areas lost between 2020 and 2024 were lost within a single year, after 2023. As shown in Figure 21, some areas covered with mangroves in 2020 had disappeared by 2024; however, over half of those mangroves were still present in Google Earth images from January 2023, indicating rapid and recent cutting.
According to the change detection from 2020 to 2024, most losses occurred outside the reserved area, and approximately half of them were detected close to local villages. The proximity of these losses to villages suggests that local communities may contribute to deforestation, underscoring the need for community-based mangrove conservation. The deep learning model effectively identified these changes and can be applied to real-time mangrove mapping.

4.7. Mangrove Gains After Restoration

Mangrove restoration is a nature-based solution (NbS) that supports biodiversity and climate change mitigation. Mangrove reforestation also benefits blue carbon storage. Therefore, mangrove restoration should be set as a priority when designing NbS [4]. Despite the numerous benefits of restoration, the systematic assessment and documentation of its achievements are still limited, especially in Myanmar [66].
Mangroves have been increasing in the plantation sites within WMF initiated by the Forest Department [23]. The present study also identified numerous patches of mangrove plantations in WMF by referencing Google Earth images. Additionally, change detection summarized the increased areas of mangrove forests in WMF since 2010. Therefore, the increased areas were compared with the Forest Department’s records of mangrove plantation implementation.
The Forest Department planted 855 acres (346 ha) of mangroves in WRMF from 2017 to 2023, while the model predicted 991.59 ha of mangrove gains from 2015 to 2024; the 346 ha of plantations thus account for 34.89% of the observed gains. This percentage does not fully represent all the increased mangroves identified in the change detection results, because the model failed to predict newly planted mangroves accurately, classifying them as non-mangroves due to high soil reflectance and low vegetation reflectance. Therefore, restoration efforts could account for approximately 30% of the total increase in mangrove forests from 2015 to 2024, with the remaining areas likely consisting of naturally recovered mangroves.
Numerous restoration programs have been implemented by the Forest Department, not only in WMF but also across the country. The present model detected the increased extent of mangrove forests introduced by restoration programs, including relatively small areas for a 10 m spatial resolution, despite its limitations for newly planted mangroves. Therefore, our research could be useful for conservationists and policymakers designing mangrove restoration strategies, both as a methodology for developing an advanced model and through the results of the change detection analysis.
One possible explanation for the increase in mangrove areas is the development of a forest restoration program. The Myanmar Rehabilitation and Restoration Programme (MRRP), developed by the Ministry of Natural Resources and Environmental Conservation (MONREC), is a 10-year restoration program (2017/2018 to 2026/2027) intended to increase forest land to 30% of the total country area by 2030 and to reduce deforestation across the country. The program aims to achieve the targets of the Nationally Determined Contributions (NDC) under the guidance of the Myanmar Forest Policy (1995) [46,47]. The starting year of the MRRP, 2017, coincides with the year the Forest Department initiated mangrove plantations in WMF, highlighting the restoration policy’s contribution to increasing mangrove areas in WMF.

5. Conclusions

This study developed a U-Net model using multi-temporal and multi-source imagery of Landsat 8 and Sentinel-2 to predict mangrove maps over the long term, from 1990 to 2024. The model used four optical bands and SRTM as input features and achieved an accuracy of 99.73%, outperforming existing models for mangrove classification. To our knowledge, this is the first attempt to develop a deep learning model fusing Landsat 8 and Sentinel-2 imagery for mangrove mapping, although deep learning models fusing Sentinel-1 and Sentinel-2 have been reported. The present model still needs additional data to improve generalization and to support national-scale mangrove mapping. While fusing Landsat 8 and Sentinel-2 images slightly enhanced the model by reducing the need for huge datasets, discrepancies in the estimated mangrove extents persisted across different satellite imagery (Landsat series, Sentinel-2, HLS L30, and HLS S30) and different spatial resolutions (Landsat 8 at 30 m and Landsat 8 resampled to 10 m). These variations highlight the need for further investigation in future studies.
Change detection highlighted significant losses of mangrove forests from 1990 to 2010 and steady gains from 2010 to 2024. In WMF, 29.3% of the mangrove extent has been deforested, while only 5.75% has been reforested, with an annual rate of change of −224.52 ha/yr over 34 years. Anthropogenic activities, such as shrimp ponds and paddy fields under weak regulations, were drivers of mangrove losses. By contrast, mangrove gains consisted of planted mangroves and naturally recovered mangroves in abandoned sites, sedimentation areas, and along roadsides. Despite the gains, of which approximately 30% were attributed to mangrove plantations and the remainder to natural regeneration, mangrove losses have continued in WMF. The naturally recovered mangroves in abandoned sites near villages and outside the reserved area can potentially be lost again to human encroachment. Therefore, mangrove forests should be monitored continuously using the improved model, which helps reduce deforestation, monitor the condition of planted mangroves, provide accurate information, and support blue carbon conservation strategies. Further research should develop a model to classify natural and planted mangroves to better understand the contribution of restoration programs to mangrove gains.

Author Contributions

Conceptualization, methodology, modeling, analysis, and visualization, K.S.W.; writing—original draft preparation, K.S.W.; writing—review and editing, J.S.; supervision and conceptualization, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This study received no external funding.

Data Availability Statement

The data presented in this study are available from the first author on request.

Acknowledgments

The first author is indebted to the Asian Development Bank for financial assistance during the study in Japan. The authors would like to express sincere thanks to Win Sithu Maung for permission to use his ground truth image, the Forest Department of Myanmar for plantation data, and the anonymous reviewers for their careful reading of the manuscript and insightful comments. The computation was carried out using the computer resource offered under the category of General Projects by the Research Institute for Information Technology, Kyushu University, and the FUJITSU Server PRIMERGY GX2570 (Wisteria/BDEC-01) at the Information Technology Center, the University of Tokyo.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Specifications of the satellite imagery used in the study.
Satellite Name | Acquired Date | Data Type/Product Type | Cloud Coverage
Landsat 5 | 2 January 1990 | TM_L2SP | 0
Landsat 5 | 5 March 1995 | TM_L2SP | 0
Landsat 5 | 14 January 2000 | TM_L2SP | 0
Landsat 5 | 27 January 2005 | TM_L2SP | 0
Landsat 5 | 10 February 2010 | TM_L2SP | 0
Landsat 8 | 23 January 2015 | OLI_TIRS_L2SP | 2
Landsat 8 * | 18 November 2019 | OLI_TIRS_L2SP | 0
Landsat 8 * | 4 December 2019 | OLI_TIRS_L2SP | 2
Landsat 8 * | 21 January 2020 | OLI_TIRS_L2SP | 0
Landsat 9 | 8 January 2024 | OLI_TIRS_L2SP | 0
Sentinel-2 | 23 November 2015 | S2MSI2A | 5
Sentinel-2 | 27 December 2016 | S2MSI2A | 0
Sentinel-2 | 2 December 2017 | S2MSI2A | 0
Sentinel-2 | 31 January 2018 | S2MSI2A | 0
Sentinel-2 * | 22 November 2019 | S2MSI2A | 0
Sentinel-2 * | 22 December 2019 | S2MSI2A | 1
Sentinel-2 * | 21 January 2020 | S2MSI2A | 0
Sentinel-2 | 25 January 2021 | S2MSI2A | 0
Sentinel-2 | 9 February 2022 | S2MSI2A | 0
Sentinel-2 | 4 February 2023 | S2MSI2A | 0
Sentinel-2 | 20 January 2024 | S2MSI2A | 0
HLS L30 | 25 December 2015 | HLS Landsat OLI | 0
HLS L30 | 1 March 2017 | HLS Landsat OLI | 0
HLS L30 | 21 January 2020 | HLS Landsat OLI | 0
HLS L30 | 10 January 2022 | HLS Landsat OLI | 0
HLS L30 | 8 January 2024 | HLS Landsat OLI | 0
HLS S30 | 23 December 2015 | HLS Sentinel-2 MSI | 0
HLS S30 | 16 January 2017 | HLS Sentinel-2 MSI | 0
HLS S30 | 5 February 2020 | HLS Sentinel-2 MSI | 0
HLS S30 | 25 January 2022 | HLS Sentinel-2 MSI | 0
HLS S30 | 20 January 2024 | HLS Sentinel-2 MSI | 0
Satellite imagery with * was used for training the multi-source model.

Appendix B

Table A2. Experimental results of input feature selection using the ANN model with 30 epochs.
No. | Groups of Input Features | Accuracy (%)
1 | 7 Bands (B1, B2, B3, B4, B5, B6, B7) | 57.06
2 | 5 Bands (B2, B3, B4, B5, B6) (B, G, R, NIR, SWIR) | 54.77
3 | 4 Bands (B2, B3, B4, B5) | 54.77
4 | NDVI, NDWI, SAVI, CMRI | 92.04
5 | NDVI, NDWI, SAVI, CMRI, MERIT | 93.41
6 | NDVI, NDWI, SAVI, CMRI, SRTM | 93.67
7 | NDVI, SAVI, CMRI, NDWI, Slope | 92.85
8 | NDVI, NDWI, SAVI, CMRI, CHM | 92.42
9 | NDVI, NDWI, SAVI, CMRI, SRTM, CHM | 93.80
10 | NDVI, NDWI, SAVI, CMRI, MERIT, CHM | 93.37
11 | NDVI, NDWI, SAVI, CMRI, SRTM, MERIT, CHM | 93.63
12 | NDVI, NDWI, SAVI, CMRI, SRTM, MERIT, CHM, Slope | 93.34
13 | NDVI, NDWI, SAVI, SRTM | 93.43
14 | NDVI, SAVI, CMRI, SRTM | 93.71
15 | NDVI, NDWI, CMRI, SRTM | 93.65
16 | 4 Bands, NDVI, NDWI, SAVI, CMRI, SRTM | 93.53
17 | 5 Bands, NDVI, NDWI, SAVI, CMRI, SRTM | 93.72
18 | 4 Bands, NDVI, NDWI, SAVI, CMRI, MERIT | 93.47
19 | 5 Bands, NDVI, NDWI, SAVI, CMRI, MERIT | 93.59
20 | 4 Bands, NDVI, NDWI, SAVI, CMRI, SRTM, CHM | 93.74
21 | 5 Bands, NDVI, NDWI, SAVI, CMRI, SRTM, CHM | 93.58
22 | 4 Bands, NDVI, SAVI, CMRI, SRTM | 93.64
23 | 5 Bands, NDVI, SAVI, CMRI, SRTM | 93.59
24 | 4 Bands, NDVI, NDWI, CMRI, SRTM | 93.25
25 | 5 Bands, NDVI, NDWI, CMRI, SRTM | 93.44
26 | NDVI, SAVI, CMRI, MERIT | 93.43
27 | NDVI, SAVI, CMRI, CHM | 92.81
28 | 4 Bands, SRTM | 69.79
29 | 3 Bands (B3, B4, B5), SRTM | 68.90
30 | 7 Bands, NDVI, NDWI, SAVI, CMRI, SRTM, CHM | 93.47
Table A3. Experimental results of input feature selection using the CNN model with 100 epochs.
No. | Groups of Input Features | Accuracy (%)
1 | NDVI, NDWI, SAVI, CMRI, SRTM, CHM | 95.93
2 | 4 Bands | 54.91
3 | 4 Bands, SRTM | 95.99
4 | 3 Bands, SRTM | 95.85

References

  1. FAO. The World’s Mangroves 2000–2020; FAO: Rome, Italy, 2023; ISBN 978-92-5-138004-8. [Google Scholar]
  2. Mitra, A. Mangroves: A Unique Gift of Nature. In Sensitivity of Mangrove Ecosystem to Changing Climate; Mitra, A., Ed.; Springer: New Delhi, India, 2013; pp. 33–105. ISBN 978-81-322-1509-7. [Google Scholar]
  3. Spalding, M.; Kainuma, M.; Collins, L. World Atlas of Mangroves; Routledge: London, UK, 2011; ISBN 978-1-84977-660-8. [Google Scholar]
  4. Song, S.; Ding, Y.; Li, W.; Meng, Y.; Zhou, J.; Gou, R.; Zhang, C.; Ye, S.; Saintilan, N.; Krauss, K.W.; et al. Mangrove Reforestation Provides Greater Blue Carbon Benefit than Afforestation for Mitigating Global Climate Change. Nat. Commun. 2023, 14, 756. [Google Scholar] [CrossRef] [PubMed]
  5. van Hespen, R.; Hu, Z.; Borsje, B.; De Dominicis, M.; Friess, D.A.; Jevrejeva, S.; Kleinhans, M.G.; Maza, M.; van Bijsterveldt, C.E.J.; Van der Stocken, T.; et al. Mangrove Forests as a Nature-Based Solution for Coastal Flood Protection: Biophysical and Ecological Considerations. Water Sci. Eng. 2023, 16, 1–13. [Google Scholar] [CrossRef]
  6. Bunting, P.; Rosenqvist, A.; Hilarides, L.; Lucas, R.M.; Thomas, N.; Tadono, T.; Worthington, T.A.; Spalding, M.; Murray, N.J.; Rebelo, L.-M. Global Mangrove Extent Change 1996–2020: Global Mangrove Watch Version 3.0. Remote Sens. 2022, 14, 3657. [Google Scholar] [CrossRef]
  7. Kuenzer, C.; Bluemel, A.; Gebhardt, S.; Quoc, T.V.; Dech, S. Remote Sensing of Mangrove Ecosystems: A Review. Remote Sens. 2011, 3, 878–928. [Google Scholar] [CrossRef]
  8. Veettil, B.K.; Pereira, S.F.R.; Quang, N.X. Rapidly Diminishing Mangrove Forests in Myanmar (Burma): A Review. Hydrobiologia 2018, 822, 19–35. [Google Scholar] [CrossRef]
  9. Giri, C.; Ochieng, E.; Tieszen, L.L.; Zhu, Z.; Singh, A.; Loveland, T.; Masek, J.; Duke, N. Status and Distribution of Mangrove Forests of the World Using Earth Observation Satellite Data. Glob. Ecol. Biogeogr. 2011, 20, 154–159. [Google Scholar] [CrossRef]
  10. Maurya, K.; Mahajan, S.; Chaube, N. Remote Sensing Techniques: Mapping and Monitoring of Mangrove Ecosystem—A Review. Complex Intell. Syst. 2021, 7, 2797–2818. [Google Scholar] [CrossRef]
  11. Lu, Y.; Wang, L. The Current Status, Potential and Challenges of Remote Sensing for Large-Scale Mangrove Studies. Int. J. Remote Sens. 2022, 43, 6824–6855. [Google Scholar] [CrossRef]
  12. Wang, L.; Jia, M.; Yin, D.; Tian, J. A Review of Remote Sensing for Mangrove Forests: 1956–2018. Remote Sens. Environ. 2019, 231, 111223. [Google Scholar] [CrossRef]
  13. Vasquez, J.; Acevedo-Barrios, R.; Miranda-Castro, W.; Guerrero, M.; Meneses-Ospina, L. Determining Changes in Mangrove Cover Using Remote Sensing with Landsat Images: A Review. Water. Air. Soil Pollut. 2023, 235, 18. [Google Scholar] [CrossRef]
  14. Wang, D.; Wan, B.; Qiu, P.; Su, Y.; Guo, Q.; Wang, R.; Sun, F.; Wu, X. Evaluating the Performance of Sentinel-2, Landsat 8 and Pléiades-1 in Mapping Mangrove Extent and Species. Remote Sens. 2018, 10, 1468. [Google Scholar] [CrossRef]
  15. Nasiri, V.; Deljouei, A.; Moradi, F.; Sadeghi, S.M.M.; Borz, S.A. Land Use and Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Comparison of Two Composition Methods. Remote Sens. 2022, 14, 1977. [Google Scholar] [CrossRef]
  16. Bouslihim, Y.; Kharrou, M.H.; Miftah, A.; Attou, T.; Bouchaou, L.; Chehbouni, A. Comparing Pan-Sharpened Landsat-9 and Sentinel-2 for Land-Use Classification Using Machine Learning Classifiers. J. Geovisualization Spat. Anal. 2022, 6, 35. [Google Scholar] [CrossRef]
  17. Jia, M.; Wang, Z.; Mao, D.; Ren, C.; Song, K.; Zhao, C.; Wang, C.; Xiao, X.; Wang, Y. Mapping Global Distribution of Mangrove Forests at 10-m Resolution. Sci. Bull. 2023, 68, 1306–1316. [Google Scholar] [CrossRef]
  18. Chaves, M.E.D.; Picoli, M.C.A.; Sanches, I.D. Recent Applications of Landsat 8/OLI and Sentinel-2/MSI for Land Use and Land Cover Mapping: A Systematic Review. Remote Sens. 2020, 12, 3062. [Google Scholar] [CrossRef]
  19. Ma, C.; Ai, B.; Zhao, J.; Xu, X.; Huang, W. Change Detection of Mangrove Forests in Coastal Guangdong during the Past Three Decades Based on Remote Sensing Data. Remote Sens. 2019, 11, 921. [Google Scholar] [CrossRef]
  20. Baltezar, P.; Murillo-Sandoval, P.J.; Cavanaugh, K.C.; Doughty, C.; Lagomasino, D.; Tieng, T.; Simard, M.; Fatoyinbo, T. A Regional Map of Mangrove Extent for Myanmar, Thailand, and Cambodia Shows Losses of 44% by 1996. Front. Mar. Sci. 2023, 10, 1127720. [Google Scholar] [CrossRef]
  21. De Alban, J.D.T.; Jamaludin, J.; Wong De Wen, D.; Than, M.M.; Webb, E.L. Improved Estimates of Mangrove Cover and Change Reveal Catastrophic Deforestation in Myanmar. Environ. Res. Lett. 2020, 15, 034034. [Google Scholar] [CrossRef]
  22. Tran, T.V.; Reef, R.; Zhu, X. A Review of Spectral Indices for Mangrove Remote Sensing. Remote Sens. 2022, 14, 4868. [Google Scholar] [CrossRef]
  23. Maung, W.S.; Sasaki, J. Assessing the Natural Recovery of Mangroves after Human Disturbance Using Neural Network Classification and Sentinel-2 Imagery in Wunbaik Mangrove Forest, Myanmar. Remote Sens. 2021, 13, 52. [Google Scholar] [CrossRef]
  24. Bunting, P.; Rosenqvist, A.; Lucas, R.M.; Rebelo, L.-M.; Hilarides, L.; Thomas, N.; Hardy, A.; Itoh, T.; Shimada, M.; Finlayson, C.M. The Global Mangrove Watch—A New 2010 Global Baseline of Mangrove Extent. Remote Sens. 2018, 10, 1669. [Google Scholar] [CrossRef]
  25. Weber, S.; Keddell, L.; Kemal, M.S. Myanmar Ecological Forecasting: Utilizing NASA Earth Observations to Monitor, Map, and Analyze Mangrove Forests in Myanmar for Enhanced Conservation; NASA: Greenbelt, MD, USA, 2014. [Google Scholar]
  26. Guo, Y.; Liao, J.; Shen, G. Mapping Large-Scale Mangroves along the Maritime Silk Road from 1990 to 2015 Using a Novel Deep Learning Model and Landsat Data. Remote Sens. 2021, 13, 245. [Google Scholar] [CrossRef]
  27. Dong, Y.; Yu, K.; Hu, W. GC-UNet: An Improved UNet Model for Mangrove Segmentation Using Landsat8. In Proceedings of the 2021 3rd International Conference on Big Data Engineering, Cox’s Bazar, Bangladesh, 23–25 September 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 58–63. [Google Scholar]
  28. Toosi, N.B.; Soffianian, A.R.; Fakheran, S.; Pourmanafi, S.; Ginzler, C.; Waser, L.T. Comparing Different Classification Algorithms for Monitoring Mangrove Cover Changes in Southern Iran. Glob. Ecol. Conserv. 2019, 19, e00662. [Google Scholar] [CrossRef]
  29. Sharifi, A.; Felegari, S.; Tariq, A. Mangrove Forests Mapping Using Sentinel-1 and Sentinel-2 Satellite Images. Arab. J. Geosci. 2022, 15, 1593. [Google Scholar] [CrossRef]
  30. Elmahdy, S.I.; Ali, T.A.; Mohamed, M.M.; Howari, F.M.; Abouleish, M.; Simonet, D. Spatiotemporal Mapping and Monitoring of Mangrove Forests Changes from 1990 to 2019 in the Northern Emirates, UAE Using Random Forest, Kernel Logistic Regression and Naive Bayes Tree Models. Front. Environ. Sci. 2020, 8, 102. [Google Scholar] [CrossRef]
  31. Rosmasita; Siregar, V.P.; Agus, S.B.; Jhonnerie, R. An Object-Based Classification of Mangrove Land Cover Using Support Vector Machine Algorithm. IOP Conf. Ser. Earth Environ. Sci. 2019, 284, 012024. [Google Scholar] [CrossRef]
  32. Estoque, R.C.; Myint, S.W.; Wang, C.; Ishtiaque, A.; Aung, T.T.; Emerton, L.; Ooba, M.; Hijioka, Y.; Mon, M.S.; Wang, Z.; et al. Assessing Environmental Impacts and Change in Myanmar’s Mangrove Ecosystem Service Value Due to Deforestation (2000–2014). Glob. Change Biol. 2018, 24, 5391–5410. [Google Scholar] [CrossRef]
  33. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in Vegetation Remote Sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  34. Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; Prabhat. Deep Learning and Process Understanding for Data-Driven Earth System Science. Nature 2019, 566, 195–204. [Google Scholar] [CrossRef] [PubMed]
  35. Wei, Y.; Cheng, Y.; Yin, X.; Xu, Q.; Ke, J.; Li, X. Deep Learning-Based Classification of High-Resolution Satellite Images for Mangrove Mapping. Appl. Sci. 2023, 13, 8526. [Google Scholar] [CrossRef]
  36. Liu, Y.; Zhang, Y.; Cheng, Q.; Feng, J.; Chun Chao, M.; Yeu Tsou, J. Mangrove Monitoring and Change Analysis with Landsat Images: A Case Study in Pearl River Estuary (China). Ecol. Indic. 2024, 160, 111763. [Google Scholar] [CrossRef]
  37. Guo, M.; Yu, Z.; Xu, Y.; Huang, Y.; Li, C. ME-Net: A Deep Convolutional Neural Network for Extracting Mangrove Using Sentinel-2A Data. Remote Sens. 2021, 13, 1292. [Google Scholar] [CrossRef]
  38. Ghorbanian, A.; Ahmadi, S.A.; Amani, M.; Mohammadzadeh, A.; Jamali, S. Application of Artificial Neural Networks for Mangrove Mapping Using Multi-Temporal and Multi-Source Remote Sensing Imagery. Water 2022, 14, 244. [Google Scholar] [CrossRef]
  39. Xu, C.; Wang, J.; Sang, Y.; Li, K.; Liu, J.; Yang, G. An Effective Deep Learning Model for Monitoring Mangroves: A Case Study of the Indus Delta. Remote Sens. 2023, 15, 2220. [Google Scholar] [CrossRef]
  40. Shao, Z.; Cai, J.; Fu, P.; Hu, L.; Liu, T. Deep Learning-Based Fusion of Landsat-8 and Sentinel-2 Images for a Harmonized Surface Reflectance Product. Remote Sens. Environ. 2019, 235, 111425. [Google Scholar] [CrossRef]
  41. Friess, D.A.; Rogers, K.; Lovelock, C.E.; Krauss, K.W.; Hamilton, S.E.; Lee, S.Y.; Lucas, R.; Primavera, J.; Rajkaran, A.; Shi, S. The State of the World’s Mangrove Forests: Past, Present, and Future. Annu. Rev. Environ. Resour. 2019, 44, 89–115. [Google Scholar] [CrossRef]
  42. Stanley, O.; Broadhead, J.; Myint, A.A. The Atlas and Guidelines for Mangrove Management in Wunbaik Reserved Forest: Sustainable Community Based Mangrove Management in Wunbaik Forest Reserve TCP/MYA/3204 (2009–2011); FAO-UN Myanmar: FAO Representation Office, Seed Division Compound: Yangon, Myanmar, 2011. [Google Scholar]
  43. Saw, A.A.; Kanzaki, M. Local Livelihoods and Encroachment into a Mangrove Forest Reserve: A Case Study of the Wunbaik Reserved Mangrove Forest, Myanmar. Procedia Environ. Sci. 2015, 28, 483–492. [Google Scholar] [CrossRef]
  44. Gandhi, S.; Jones, T.G. Identifying Mangrove Deforestation Hotspots in South Asia, Southeast Asia and Asia-Pacific. Remote Sens. 2019, 11, 728. [Google Scholar] [CrossRef]
  45. Gaw, L.Y.F.; Linkie, M.; Friess, D.A. Mangrove Forest Dynamics in Tanintharyi, Myanmar from 1989–2014, and the Role of Future Economic and Political Developments. Singap. J. Trop. Geogr. 2018, 39, 224–243. [Google Scholar] [CrossRef]
  46. Sann, B.; Brunner, J.; Brander, L. Report on Cost-Benefit Analysis of Forest Restoration Interventions in Sagaing Region, Myanmar; IUCN: Viet Nam Country Office: Hanoi, Vietnam, 2021; p. 23. [Google Scholar]
  47. MONREC. Myanmar Updated Nationally Determined Contributions-NDC; Ministry of Natural Resources and Environmental Conservation (MONREC): Naypyitaw, Myanmar, 2021. Available online: https://unfccc.int/sites/default/files/NDC/2022-06/Myanmar%20Updated%20%20NDC%20July%202021.pdf (accessed on 8 September 2024).
  48. Maung, W.S.; Tsuyuki, S.; Guo, Z. Improving Land Use and Land Cover Information of Wunbaik Mangrove Area in Myanmar Using U-Net Model with Multisource Remote Sensing Datasets. Remote Sens. 2024, 16, 76. [Google Scholar] [CrossRef]
  49. Maung, W.S. Assessment of Natural Recovery of Mangrove from Anthropogenic Disturbance Using Neural Network-Based Classification of Satellite Images; The University of Tokyo: Tokyo, Japan, 2020. [Google Scholar]
  50. Congedo, L.; Barsukov, I. Semiautomaticgit/Remotior_sensus: v0.3.5, 2024. Zenodo. Available online: https://zenodo.org/records/10456752.
  51. Claverie, M.; Ju, J.; Masek, J.G.; Dungan, J.L.; Vermote, E.F.; Roger, J.-C.; Skakun, S.V.; Justice, C. The Harmonized Landsat and Sentinel-2 Surface Reflectance Data Set. Remote Sens. Environ. 2018, 219, 145–161. [Google Scholar] [CrossRef]
  52. Farr, T.G.; Kobrick, M. Shuttle Radar Topography Mission Produces a Wealth of Data. Eos Trans. Am. Geophys. Union 2000, 81, 583–585. [Google Scholar] [CrossRef]
  53. Yamazaki, D.; Ikeshima, D.; Tawatari, R.; Yamaguchi, T.; O’Loughlin, F.; Neal, J.C.; Sampson, C.C.; Kanae, S.; Bates, P.D. A High-Accuracy Map of Global Terrain Elevations. Geophys. Res. Lett. 2017, 44, 5844–5853. [Google Scholar] [CrossRef]
  54. Bharathi, N.; Rishiikeshwer, B.S.; Shriram, T.A.; Santhi, B.; Brindha, G.R. The Significance of Feature Selection Techniques in Machine Learning. In Fundamentals and Methods of Machine and Deep Learning; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2022; pp. 121–134. ISBN 978-1-119-82190-8. [Google Scholar]
  55. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS; NASA: Greenbelt, MD, USA, 1974. [Google Scholar]
  56. McFeeters, S.K. The Use of the Normalized Difference Water Index (NDWI) in the Delineation of Open Water Features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  57. Huete, A.R. A Soil-Adjusted Vegetation Index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  58. Gupta, K.; Mukhopadhyay, A.; Giri, S.; Chanda, A.; Datta Majumdar, S.; Samanta, S.; Mitra, D.; Samal, R.N.; Pattnaik, A.K.; Hazra, S. An Index for Discrimination of Mangroves from Non-Mangroves Using LANDSAT 8 OLI Imagery. MethodsX 2018, 5, 1129–1139. [Google Scholar] [CrossRef]
  59. Hawker, L.; Uhe, P.; Paulo, L.; Sosa, J.; Savage, J.; Sampson, C.; Neal, J. A 30 m Global Map of Elevation with Forests and Buildings Removed. Environ. Res. Lett. 2022, 17, 024016. [Google Scholar] [CrossRef]
  60. Simard, M.; Rivera-Monroy, V.H.; Mancera-Pineda, J.E.; Castañeda-Moya, E.; Twilley, R.R. A Systematic Method for 3D Mapping of Mangrove Forests Based on Shuttle Radar Topography Mission Elevation Data, ICEsat/GLAS Waveforms and Field Data: Application to Ciénaga Grande de Santa Marta, Colombia. Remote Sens. Environ. 2008, 112, 2131–2144. [Google Scholar] [CrossRef]
  61. Aslan, A.; Aljahdali, M.O. Characterizing Global Patterns of Mangrove Canopy Height and Aboveground Biomass Derived from SRTM Data. Forests 2022, 13, 1545. [Google Scholar] [CrossRef]
  62. Chen, L.; Li, S.; Bai, Q.; Yang, J.; Jiang, S.; Miao, Y. Review of Image Classification Algorithms Based on Convolutional Neural Networks. Remote Sens. 2021, 13, 4712. [Google Scholar] [CrossRef]
  63. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015. [Google Scholar]
  64. Singh, A. Review Article Digital Change Detection Techniques Using Remotely-Sensed Data. Int. J. Remote Sens. 1989, 10, 989–1003. [Google Scholar] [CrossRef]
  65. Wu, C.; Du, B.; Cui, X.; Zhang, L. A Post-Classification Change Detection Method Based on Iterative Slow Feature Analysis and Bayesian Soft Fusion. Remote Sens. Environ. 2017, 199, 241–255. [Google Scholar] [CrossRef]
  66. Gerona-Daga, M.E.B.; Salmo, S.G.I. A Systematic Review of Mangrove Restoration Studies in Southeast Asia: Challenges and Opportunities for the United Nation’s Decade on Ecosystem Restoration. Front. Mar. Sci. 2022, 9, 987737. [Google Scholar] [CrossRef]
Figure 1. The location of the study area: (a) Wunbaik Mangrove Forest (Landsat 8 composite image of 4 bands); (b) Myanmar’s States and Regions (Myanmar Information Management Unit—MIMU); and (c,d) Example patches used in U-Net model training (Landsat 8 composite image).
Figure 2. Ground truth image used for model training.
Figure 3. The architecture of the CNN model.
Figure 4. The architecture of the U-Net model (visualized using visualkeras).
Figure 5. Evaluation results of example patches shown in Figure 1c,d, comparing Landsat 8 and Sentinel-2 images with ground truth: (a) ground truth images; (b) TP, TN, FP, and FN of Landsat 8 images; and (c) TP, TN, FP, and FN of Sentinel-2 images.
Figure 6. Mangrove maps predicted by the multi-temporal and multi-source model and Google Earth images from 1990 to 2024: (a) the mangrove map of 1990; (b) the Google Earth image of 1990; (c) the mangrove map of 1995; (d) the Google Earth image of 1995; (e) the mangrove map of 2000; (f) the Google Earth image of 2000; (g) the mangrove map of 2005; (h) the Google Earth image of 2005; (i) the mangrove map of 2010; (j) the Google Earth image of 2010; (k) the mangrove map of 2015; (l) the Google Earth image of 2015; (m) the mangrove map of 2020; (n) the Google Earth image of 2020; (o) the mangrove map of 2024; and (p) the Google Earth image of 2024. (Dark green indicates mangroves; light and dark brown indicate agricultural and aquacultural areas; light green indicates water bodies.)
Figure 7. The annual changes of mangrove forests in WMF from 1990 to 2024.
Figure 8. Change detection maps for each period: (a) 1990–1995; (b) 1995–2000; (c) 2000–2005; (d) 2005–2010; (e) 2010–2015; (f) 2015–2020; (g) 2020–2024; and (h) 1990–2024.
Figure 9. Annual rate of mangrove changes in WMF.
Figure 10. Training history of the CNN model with different input features: (a) accuracy curve for training with NDVI, NDWI, SAVI, CMRI, SRTM, and CHM; (b) accuracy curve for training with 4 bands and SRTM.
Figure 11. Mangrove maps predicted by the CNN model for 1995: (a) mangrove map predicted using 3 bands and SRTM; (b) mangrove map predicted using 4 bands and SRTM; and (c) Google Earth image for 1995. (Dark green indicates mangroves; light brown indicates agricultural and aquacultural areas; light green indicates water bodies.)
Figure 12. Different temporal conditions of paddy fields: (a) Landsat composite image of January 2020; (b) Landsat composite image of December 2019; (c) Landsat composite image of November 2019; (d) CNN result for January 2020; (e) CNN result for December 2019; and (f) CNN result for November 2019. (Dark brown indicates mangroves; light green and white indicate agricultural and aquacultural areas; yellow-green indicates water bodies.)
Figure 13. Area differences between Landsat 8 (30 m), Landsat 8 (10 m), and Sentinel-2 (10 m).
Figure 14. Model results on satellite images with different spatial resolutions: (a) U-Net result on Landsat 8 (30 m); (b) U-Net result on Landsat 8 (10 m); and (c) U-Net result on Sentinel-2 (10 m).
Figure 15. Annual area changes of WMF derived from GMW data.
Figure 16. Annual area changes of WMF derived from Sentinel-2 data.
Figure 17. Annual area changes of WMF derived from HLS data: (a) HLS L30 data; (b) HLS S30 data.
Figure 18. Mangrove losses: (a) changes from 2010 to 2024; (b) land use in 2009; (c) land use in 2024; (d) land use in 2009; and (e) land use in 2024. The drivers of losses from (b) to (c) are shrimp ponds and from (d) to (e) are paddy fields, identified from Google Earth images and the study by Maung et al. (2024) [48].
Figure 19. Mangrove gains: (a) changes from 2010 to 2024; (b) land use in 2009; (c) land use in 2023; (d) land use in 2009; and (e) land use in 2023. The drivers of gains from (b) to (c) are mangrove plantations and from (d) to (e) are natural mangroves, identified from Google Earth images and Maung and Sasaki (2021) [23].
Figure 20. Mangrove losses after recovery: (a) changes from 2010 to 2024; (b) land use in 2015; (c) land use in 2018; (d) land use in 2024; (e) changes from 2010 to 2015; (f) changes from 2015 to 2020; and (g) changes from 2020 to 2024 (Google Earth images for historical land uses).
Figure 21. Mangrove losses after 2020: (a) changes from 2020 to 2024; (b) Google Earth image of 2023; (c) Sentinel-2 composite image of 2020; (d) Sentinel-2 composite image of 2024; (e) Google Earth image of 2023; (f) Sentinel-2 composite image of 2020; and (g) Sentinel-2 composite image of 2024. (Red lines delineate the areas of change; dark green indicates mangroves; light and dark brown indicate newly cleared areas for aquaculture.)
Table 1. The comparison of CNN and U-Net models using Landsat 8 and Sentinel-2.
Satellites | Models | Accuracy | IoU | Precision | Recall | F1-score | Training Time (s) | Inputs | Epochs
Landsat 8 (10 m) | CNN | 96.65% | 0.93 | 0.96 | 0.97 | 0.96 | 10,988.99 | 4 images | 200
Landsat 8 (10 m) | U-Net | 96.64% | 0.93 | 0.96 | 0.97 | 0.96 | 260.59 | 4 images | 200
Sentinel-2 (10 m) | CNN | 97.34% | 0.94 | 0.96 | 0.98 | 0.98 | 11,231.63 | 4 images | 200
Sentinel-2 (10 m) | U-Net | 98.25% | 0.96 | 0.98 | 0.98 | 0.98 | 257.36 | 4 images | 200
The four input images consist of two pairs of identical images, with each pair captured on a different date.
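As an illustration of the U-Net architecture compared in Table 1, the Keras sketch below builds a small encoder-decoder with skip connections for binary (mangrove/non-mangrove) segmentation. It is a minimal sketch under stated assumptions, not the exact configuration used in this study: the 64 x 64 patch size, five input channels (e.g., four bands plus SRTM), filter counts, depth, and optimizer are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the standard U-Net building block [63].
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(64, 64, 5), base_filters=32):
    # input_shape and base_filters are illustrative assumptions.
    inputs = layers.Input(shape=input_shape)
    c1 = conv_block(inputs, base_filters)               # encoder level 1
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, base_filters * 2)                # encoder level 2
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, base_filters * 4)                 # bottleneck
    u2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), base_filters * 2)  # skip connection
    u1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), base_filters)      # skip connection
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)          # binary mask
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```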
Table 2. The classification results of the multi-temporal and multi-source U-Net model.
Accuracy | IoU (MG) | IoU (nMG) | Precision (MG) | Precision (nMG) | Recall (MG) | Recall (nMG) | F1-Score (MG) | F1-Score (nMG)
99.73% | 0.99 | 1 | 1 | 1 | 1 | 1 | 1 | 1
MG = Mangrove, nMG = Non-Mangrove.
Table 3. Confusion matrix of multi-temporal and multi-source U-Net model on Landsat 8 and Sentinel-2 images.
Landsat 8 (21 January 2020) | Predicted nMG | Predicted MG | Total Pixels
Actual nMG | 2,951,560 | 4304 | 2,955,864
Actual MG | 7620 | 2,422,512 | 2,430,132
Sentinel-2 (21 January 2020) | Predicted nMG | Predicted MG
Actual nMG | 2,952,392 | 2157
Actual MG | 6788 | 2,424,659
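For clarity, the per-class metrics reported in Tables 1 and 2 follow directly from confusion-matrix counts such as those in Table 3. The short Python sketch below uses the Landsat 8 (21 January 2020) counts with MG as the positive class; it is a minimal worked example of the standard metric definitions, not the authors' evaluation code.

```python
# Counts taken from Table 3 (Landsat 8, 21 January 2020), MG = positive class.
tp = 2_422_512   # actual MG predicted as MG
fn = 7_620       # actual MG predicted as nMG
fp = 4_304       # actual nMG predicted as MG
tn = 2_951_560   # actual nMG predicted as nMG

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
iou = tp / (tp + fp + fn)   # intersection over union for the MG class

print(f"accuracy={accuracy:.4f}  precision={precision:.4f}  "
      f"recall={recall:.4f}  f1={f1:.4f}  IoU={iou:.4f}")
```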
Table 4. The extent comparison with global datasets in 2020 for the study area (ha).
Ground Truth Image | U-Net Result | GMW Data | HGMF Data
24,268.16 | 24,301.32 | 28,785.78 | 25,882.29
