1. Introduction
Flooding is one of the most frequent and destructive natural hazards, often causing loss of property and life [1,2]. Satellite remote sensing plays an important role in all phases of hazard monitoring and management [3]. Traditional remote sensing of floods remains difficult because data with sufficient acquisition frequency and timeliness are lacking [4], whereas synthetic aperture radar (SAR) systems can operate day and night [5]. Optical satellite images contain rich spectral information in their bands [6] and therefore perform well in land cover classification [7]. However, optical images are vulnerable to weather conditions and are often of poor quality during disasters, when it is typically cloudy and rainy. SAR images are independent of weather conditions, making them the best choice in cloudy and rainy weather. The digital number (DN) values of SAR images reflect the backscatter characteristics of ground objects, but they perform poorly in land cover classification because of their limited information content [8]. Nevertheless, changes in the backscatter coefficient sensitively reveal changes in the backscatter properties and state of ground objects. In particular, the backscatter coefficient increases due to flooding in all SAR bands over vegetated areas [9].
Previous studies on flood mapping using SAR images fall into three categories: (1) thresholding of SAR backscattering values; (2) RGB composition; and (3) classification techniques. Threshold selection for SAR backscattering values in flood mapping can be divided into two categories. (a) A single threshold value is used to separate flood and non-flood regions, normally through change detection of water bodies between the pre-flood and flooding periods. Li et al. [7] proposed a two-step automatic change detection chain for rapid flood mapping based on Sentinel-1 SAR images, which dealt only with the negative change caused by open water in rural areas. This approach can detect completely inundated areas but cannot identify slightly inundated areas. A harmonic-model-based approach and an alternative change detection method were proposed to derive the flood extent, with the Otsu thresholding method applied to the change image to determine a threshold value [10]. However, because the backscatter values changed significantly, a single threshold distinguishing flood from non-flood regions was not comprehensive enough to extract the changes caused by flooding. (b) Multiple threshold values are used to separate flood and non-flood regions, with comprehensive models proposed to select different thresholds for different land covers. Long et al. [11] proposed an approach for flood change detection and threshold selection to delineate the extent of flooding for the Chobe floodplain in the Caprivi region of Namibia; it can identify flooding in vegetation, but only two thresholds were used to extract the inundated area and inundated vegetation. Pulvirenti et al. [12] used SAR data to map floods in agricultural and urban environments, with interferometric coherence, a digital elevation model (DEM), and a land cover map as auxiliary data. Threshold selection is usually complex and accompanied by algorithmic innovation, which requires a good knowledge of mathematics.
A few researchers have used classifiers with SAR images to extract inundated areas and thus avoid the critical thresholding step. Amitrano et al. [8] exploited Sentinel-1 ground range detected (GRD) products with an unsupervised method for rapid flood mapping, in which classic co-occurrence measurements combined with amplitude information fed a fuzzy classification system without threshold selection. Benoudjit et al. [13] proposed an approach to rapidly map the flood extent using a supervised classifier, in which a pre-event SAR image and an optical Sentinel-2 image were used to train the classifier to identify inundation in the flooded SAR image. It can be run automatically, but it identified only the completely inundated regions. Change detection using RGB composition is also a common approach. Dual-polarized Sentinel-1 SAR images were used to map and monitor flooding in Kaziranga National Park [14]. Francisco et al. [15] used RGB composition and thresholding to monitor flooding with Sentinel-1 data in the Ebro River. This approach is based on the RGB color model and requires processing the data to suit the model to obtain the best result, a process that is always subjective.
Although numerous efforts on flood mapping using SAR images have been made, it remains challenging to use Sentinel-1 SAR images as a powerful tool for flood mapping, especially for evaluating the flood degree, which is an unsolved problem in current studies. SAR images show the backscatter characteristics of different ground objects. Each object has different backscatter characteristics in different inundation states, and the backscatter characteristics of one object in a certain flooding state may equal those of another object in the normal (non-inundated) state. If the SAR images of the pre-flood and flooding periods are both classified according to the same rule, the objects' normal backscatter characteristics and the change of classes at the same position between the two SAR images reveal the change in backscatter. Flood information can then be extracted from this change using the variation rules of different ground objects; in other words, the change detection of ground objects' backscatter can be converted into change detection of land cover classification. In this paper, a novel approach is proposed to extract flooded regions and evaluate the flooding extent from Sentinel-1 SAR images with the help of optical Sentinel-2 images, converting threshold-based change detection into backscatter classification. The approach relies on pixel-based change detection and a supervised classifier. The supervised classifier ensures that the SAR image classifications follow the same rule. Instead of other information, such as texture, the optical Sentinel-2 images are used to improve the accuracy of the land cover classification. To compare our approach with traditional approaches, the Otsu thresholding method based on SAR images and the NDWI index method based on optical images are also applied in some cases. In Section 2, the study area and the data are described; the detailed methodology is described in Section 3; a case study and the results are presented in Section 4; and, finally, conclusions are given in Section 5.
4. Results and Discussion
4.1. Flood Extraction and Evaluation
There was no rain during the 24 h before the imaging on 21 July 2018 and 2 August 2018, and moderate rain during the 24 h before the imaging on 27 July 2018. The flood from 20 August 2018 to 27 August 2018 was caused by a rainstorm accompanying Typhoon Rumbia, together with the erroneous flood-relief discharge of upstream reservoirs. The daily precipitation was around 120 mm on 20 August 2018 in Shouguang City. Although there was no rain during the 48 h before the imaging on 26 August 2018, the upstream reservoirs had discharged continuously on 20 and 21 August 2018.
All GRD products were processed into Gamma bands and S_Gamma bands, which are shown in Figure 3. Sentinel-2 images acquired on 10 August 2018 were downloaded as the optical data (Figure 1). Both Sentinel-1 and Sentinel-2 products were processed to a 10 m resolution.
The normal optical classification of the study area using the Sentinel-2 images of 10 August 2018 is shown in Figure 4a, and the random forest classification results using the S_Gamma bands and the normal optical classification are shown in Figure 4b–f. There are some differences between Figure 4a,b, which arise from the different characteristics of optical and SAR images. The most intuitive difference between Figure 4e,f is that the area of water bodies is larger in the flooding period; the residential areas also show evident changes, but accurate changes must be measured by pixel-based change detection in the next step.
To validate the accuracy of the RFC results, 4563 random points were created and their categories were determined by visual interpretation. In the training model, 3422 points were used as sample points and 1141 points as verification points. User accuracy, producer accuracy, overall accuracy, and the kappa index were used to evaluate the accuracy; the detailed results are shown in Table 6. The overall accuracy is 80.95% and the kappa index is 74.1%, so the classification shows good accuracy.
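As a minimal illustration of this accuracy-assessment step, the sketch below trains a random forest and computes overall accuracy and the kappa index with scikit-learn. The feature matrix and labels are randomly generated stand-ins, not the actual S_Gamma bands or interpreted sample points; only the point counts follow the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per sample point, columns standing in
# for SAR (S_Gamma) bands plus optical reflectance bands.
X = rng.random((4563, 6))
y = rng.integers(0, 5, size=4563)  # hypothetical land cover class labels

# 3422 training points and 1141 verification points, as in the text.
X_train, y_train = X[:3422], y[:3422]
X_test, y_test = X[3422:], y[3422:]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

overall_accuracy = accuracy_score(y_test, pred)  # fraction of correct points
kappa = cohen_kappa_score(y_test, pred)          # agreement beyond chance
print(overall_accuracy, kappa)
```

With random labels the scores will sit near chance level; with the real training data, the text reports an overall accuracy of 80.95% and a kappa index of 74.1%.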
The maps of the D-value (the difference between category numbers) were derived from the pixel-based change detection. The random forest classification result of 2 August 2018 was taken as the initial map, and the other classification results were taken as changed maps. The change detection results are shown in Figure 5.
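The pixel-based change detection of this step amounts to a per-pixel difference of category numbers between two classified maps. A minimal sketch, in which the two class rasters are hypothetical toy arrays rather than the actual classifications:

```python
import numpy as np

# Hypothetical category rasters on the same grid: the classification of the
# initial (pre-flood) image and of a flooding-period image.
initial = np.array([[1, 2, 3],
                    [3, 1, 2]])
changed = np.array([[3, 2, 4],
                    [4, 1, 2]])

# D-value map: per-pixel difference between category numbers. Non-zero
# values mark pixels whose class changed between the two dates.
d_value = changed - initial
print(d_value)  # [[2 0 1]
                #  [1 0 0]]
```

The sign and magnitude of each D-value can then be interpreted against per-class variation rules (as in the flood evaluation criteria of Table 5) to grade the inundation.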
According to the flood evaluation criteria in Table 5, the final flood extent maps are shown in Figure 6a–d. The detailed flood conditions can be summarized as follows. On 21 July 2018, there were many scrappy, small flooded fragments and no obvious flood in general; the flooded fragments may be due to image quality, which is analyzed in Section 4.2. On 27 July 2018, most of the farmland without plastic sheds was mildly inundated, and most of the farmland with plastic sheds was moderately inundated. On 20 August 2018, the flood was in its most serious state (shown in Figure 6a), and the flood in the west region was more serious than that in the east, probably because the west region is surrounded by rivers. The completely inundated areas were along the Mihe River, especially in the river bends. The farmland without plastic sheds in the west of the study area was moderately inundated, whereas the farmland with plastic sheds was completely inundated. There were many moderately inundated, discrete small areas along the constructions. On 26 August 2018, the flood had receded considerably in general (shown in Figure 6d), and the flood in the west region was again more serious than that in the east. The completely inundated areas had almost completely disappeared except for some minor areas along the Mihe River, and the farmland without plastic sheds was still moderately inundated in a few areas.
4.2. Flood Extraction with Otsu Thresholding
The Otsu thresholding relies on the image histogram, and a bi-modal histogram is essential for proper separation. Because water bodies are more easily detected under VH polarization, only the GRD products of VH were used. The histograms of the five images are shown in Figure 7.

An initial binary map was generated using Otsu thresholding based on the global image features. Only the histograms of 27 July 2018 and 20 August 2018 are bi-modal, and only in these cases is the contrast in gray values between flooded and non-flooded pixels distinct. Therefore, the initial results obtained by Otsu thresholding were not good. Because Otsu thresholding relies on a bi-modal histogram for proper separation, new thresholds were generated based on partial images whose histograms were bi-modal. The inundated areas were extracted from the binary maps by removing the standing water bodies, and the new results are shown in Figure 8. Except for 20 August 2018, all binary maps were regenerated. Although the results improved considerably, the errors were still large. Except for the image of 21 July 2018, the river was extracted well, but there was a lot of noise. The farmland was barely identified as inundated on 27 July, 20 August, and 26 August 2018.
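A minimal sketch of the global Otsu step on a VH backscatter image, using scikit-image's `threshold_otsu`. The image here is synthetic, constructed to have a bi-modal histogram with a dark open-water mode (around −20 dB) and a brighter land mode (around −8 dB); it stands in for, but is not, the actual GRD data.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)

# Synthetic VH backscatter (dB): a dark water mode and a brighter land mode,
# giving the bi-modal histogram that Otsu thresholding requires.
water = rng.normal(-20.0, 1.5, size=5000)
land = rng.normal(-8.0, 2.0, size=15000)
vh = np.concatenate([water, land]).reshape(200, 100)

t = threshold_otsu(vh)  # global threshold from the bi-modal histogram
binary = vh < t         # candidate water/flood pixels (low backscatter)
print(t, binary.mean())
```

In the workflow above, the standing water bodies are subsequently removed from the binary map to leave only the inundated areas.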
4.3. Flood Extraction with NDWI
Optical images from Planet are used here for comparison. According to the dates of the SAR images, several available Planet images were used. In terms of time, the Planet image of 21 August 2018 is closer to the SAR image of 20 August 2018 than the Planet image of 20 August 2018, so the Planet image of 21 August 2018 is used to evaluate the flood result of 20 August 2018. The western part of the study area was seriously inundated by the flood on 20 August 2018, the most severely affected area being the regional center on the west side of the main road. After about 24 h, the flood had receded in most areas (Figure 9). There is no obvious flood in the image of 27 August 2018. The binary water maps based on the water index images were derived using Otsu thresholding (Figure 10).
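For reference, the water index step can be sketched as follows, assuming the McFeeters NDWI formulation, NDWI = (Green − NIR) / (Green + NIR); the reflectance arrays are hypothetical values, not the actual Planet bands.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: water appears bright in green and dark in NIR,
    so positive values indicate open water."""
    return (green - nir) / (green + nir + 1e-12)  # epsilon avoids 0/0

# Hypothetical surface reflectance patches (open water vs. vegetated land).
green = np.array([[0.10, 0.08],
                  [0.09, 0.12]])
nir = np.array([[0.02, 0.30],
                [0.25, 0.03]])

index = ndwi(green, nir)
water_mask = index > 0  # a fixed cut for illustration; the text applies
                        # Otsu thresholding to the index image instead
print(water_mask)       # [[ True False]
                        #  [False  True]]
```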
Because optical sensors cannot detect standing water beneath vegetation [22] and are not exactly consistent with the SAR images in terms of imaging time, we did not find an appropriate way to validate the flood results, and only analyzed their spatial distribution. From Figure 7 and Figure 10, we can see that the flooding along the river and in farmland with plastic sheds can be identified by both methods, but flooding in farmland without plastic sheds cannot be identified from the optical images, probably because the water was beneath vegetation. According to statistics and news reports, farmland in the western part of the study area was heavily damaged by the flood, so our flood mapping result is more reasonable.
4.4. Discussion
The area extracted by our approach was much larger than that of the other two methods in a flood case, and much smaller in a non-flood case. The flood area and degree can be evaluated rapidly with our approach, whereas only the flooded regions can be extracted with the other two methods. However, the completely inundated areas were almost the same on 20 August 2018; they were presented as water bodies by the Otsu thresholding and NDWI index methods and as completely inundated areas by our approach.
Although our method can extract the flood area and degree, there are some limitations: (1) it is vital to select appropriate typical land covers whose backscatter can reveal the backscatter variations precisely. If the differences between all categories are small, the classification change can reveal slight changes in backscatter, but excessive classes are meaningless; if the differences between all categories are large, the change cannot reveal slight changes in backscatter; (2) as important prior knowledge, each object's backscatter characteristics in different flooding states and its change rules determine the accuracy of the results. We assumed that the backscatter coefficient followed a single trend when flooded, which raises the question of whether there is a turning point during the change process; and (3) the approach detects the change caused by a "flood", where "flood" actually refers to an increase in surface water. It cannot identify whether the "flood" was caused by a natural disaster, such as a rainstorm, or by artificial works, such as agricultural irrigation; artificial works may therefore result in false alarms.
5. Conclusions
In this paper, a methodology using Sentinel-1 and Sentinel-2 images is proposed to map flooded regions and estimate the flood degree rapidly. The backscatter characteristics and variation rules of different ground objects are essential prior knowledge for flood analysis. The backscatter of some ground objects in the normal state is treated as a certain flooding state of other objects. A supervised classifier was used to obtain the optical and SAR image classifications. Our conclusions are summarized as follows:
- (1)
The accuracy of the RFC results based on Sentinel-1 and Sentinel-2 images reaches 80.95%, which avoids the inaccuracy caused by a single threshold. Furthermore, the optical images from Planet are used to validate the results.
- (2)
The final accuracies of the rapid extent estimation using Sentinel-1 and Sentinel-2 images on 20 and 26 August 2018 are 85.22% and 95.45%, respectively. Moreover, all the required data and data processing are simple, so the approach can be popularized for rapid flood mapping in early disaster relief.
- (3)
The flood area and degree can be evaluated rapidly by our approach, whereas only the flooded regions can be extracted with the other two methods. The completely inundated areas were almost the same across the three methods.
In the future, we will further study the backscatter characteristics of different objects to identify typical objects whose backscatter characteristics cover all land cover types and sensitively reveal the changes due to flooding. We will also focus on a quantitative study of the change rules of different objects' backscatter, especially the turning points during the change process.