Article

A Near-Real-Time Flood Detection Method Based on Deep Learning and SAR Images

1
Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2
International Research Center of Big Data for Sustainable Development Goals, Beijing 100094, China
3
University of Chinese Academy of Sciences, Beijing 100049, China
4
School of Geography, Development and Environment, The University of Arizona, Tucson, AZ 85719, USA
5
Natural Resources Aerogeophysical and Remote Sensing Center of China Geological Survey, Beijing 100083, China
6
College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
7
Yanshan Earth Key Zone and Surface Flux Observation and Research Station, University of Chinese Academy of Sciences, Beijing 101408, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(8), 2046; https://doi.org/10.3390/rs15082046
Submission received: 24 February 2023 / Revised: 5 April 2023 / Accepted: 7 April 2023 / Published: 12 April 2023

Abstract:
Owing to the nature of flood events, near-real-time flood detection and mapping is essential for disaster prevention, relief, and mitigation. In recent years, the rapid advancement of deep learning has brought new possibilities to the field of flood detection. However, deep learning relies heavily on training samples, and the availability of high-quality flood datasets is rather limited. The present study collected 16 flood events in the Yangtze River Basin and divided them into three categories for different purposes: training, testing, and application. An efficient methodology for generating training, testing, and application datasets was proposed. Eight flood events were used to generate strong label datasets with 5296 tiles as flood training samples, along with two testing datasets. The performances of several classic convolutional neural network (CNN) models were evaluated with the obtained datasets, and the results suggested that the efficiency and accuracy of CNN models were markedly higher than those of the threshold method. The effects of VH polarization, VV polarization, and the involvement of an auxiliary DEM on flood detection were investigated; the results indicated that VH polarization was more conducive to flood detection, while the DEM had a limited effect on flood detection in the Yangtze River Basin. CNNs trained on the strong label datasets were used for near-real-time flood detection and mapping of the remaining eight flood events, and weak label datasets were generated to expand the flood training samples and to evaluate their possible effects on deep learning models in terms of flood detection and mapping. These experiments yielded conclusions consistent with those previously drawn from the strong label datasets.

1. Introduction

Flooding is one of the most devastating natural hazards, causing economic losses of about USD 25.5 billion and 6570 fatalities worldwide annually on average between 1970 and 2020 [1]. The property and life losses related to flooding have accelerated at rates of 6.3% and 1.5% per year, respectively, over the past five decades [2], and the global economic losses caused by flooding are projected to increase by 17% over the next 20 years [3]. China is severely and frequently affected by flood disasters, with huge economic losses and high fatalities [4,5]. For example, the flooding that took place in 2020 over southern China affected 30.2 million people, with an economic loss of about CNY 61.79 billion. With the advancement of satellite-based Earth observation technologies, near-real-time flood mapping has become a necessary means of supporting flood rescue and disaster assessment.
Satellite-based flood mapping provides an effective means for near-real-time flood detection, as it can accurately describe the dynamic processes of flooding at both temporal and spatial scales [6]. Compared with ground observations, satellite-based observations have unique advantages in flood detection and mapping, as they are quick, accurate, and cover an extensive area. Flood detection using optical remote sensing is mainly based on spectral information to detect waterbodies caused by inundation, through the normalized difference water index (NDWI) [7,8,9] or other segmentation algorithms [10,11,12]. Although some satisfactory achievements have been made with these methods, optical remote sensing-based flood detection has inherent limitations due to its daytime-only operating mode and weak cloud-penetrating capability. Synthetic aperture radar (SAR) can work under all-day and all-weather conditions, providing data support for near-real-time flood detection [13,14]. The global threshold method is an efficient and convenient solution for flood mapping using SAR images [15,16]. However, due to the complex characteristics of SAR images, the accurate detection of floods by image segmentation with a single threshold is very difficult [17]. Threshold algorithms based on regional differences have therefore been proposed [11,18,19]. In addition, some automatic threshold algorithms, such as Otsu [20,21], entropy threshold [22], and bimodal histogram [23] algorithms, are widely used for flood detection. The inundated area can also be detected very effectively by applying change detection to flood and non-flood images. As an efficient and convenient image segmentation approach, the threshold method is well suited for large-scale, near-real-time flood detection. However, the threshold method cannot deal with complex nonlinear problems, lacks spatial consistency, and is vulnerable to noise interference [24,25,26].
Therefore, many studies combine the change detection approach with the threshold method: change detection is first used to obtain image-specific difference information, and the threshold method is then used to extract the changed part [27,28,29]. Nevertheless, both the threshold method and the change detection approach rely heavily on expert knowledge and require tedious satellite image preprocessing [30]. Additionally, most flood detection methods are aimed at a single flood event and cannot be transferred and reused on other flood events.
Traditional flood detection methods are labor-intensive and time-consuming, depend on expert knowledge, and in most cases lack portability and scalability. In recent decades, deep learning, especially convolutional neural networks (CNNs), has made great achievements in remote sensing applications [31]. A convolutional neural network is an end-to-end, efficient self-learning model and has been widely used in automatic flood detection [32]. Bonafilia et al. [33] released a flood detection dataset for deep learning based on Sentinel-1 and Sentinel-2 images, which has since been used to evaluate the performances of various CNNs [26,34,35]. Dong et al. compared the performances of various CNNs with those of traditional threshold methods for flood detection in Poyang Lake, and the results suggested that CNNs can effectively suppress the speckle noise of SAR images [36]. Subsequently, Li et al. proposed an effective self-learning CNN model and applied it to the urban area of Houston, USA for flood detection [37]. Although great achievements have been made in flood detection using remotely sensed data with CNNs, several challenges remain in near-real-time flood detection, especially at large scales:
(1)
As a data-driven algorithm, deep learning for flood detection lacks the support of big data;
(2)
Generation of training data for deep learning is currently a labor-intensive and time-consuming task. Discovering a method to efficiently generate representative training datasets for deep learning is an issue worth studying;
(3)
Most flood detection methods developed in the past are aimed at a single flood event, but they are difficult to transfer and reuse for other flood events.
The performance of satellite-based flood detection and mapping for an individual flood event may be affected by sensors, satellite attitudes, atmospheric conditions, etc.; flood training samples obtained from multiple flood events can be used with a deep learning model to minimize the errors introduced by these effects. To address these issues, the present study took the Yangtze River Basin (YRB) as an experimental study region to investigate the applicability of SAR images with deep learning models for developing a near-real-time flood detection and automatic mapping approach. The main contents and highlights of the present study can be summarized as follows:
(1)
An efficient and fast approach for generating a standard flood training dataset for flood detection with deep learning was proposed;
(2)
Two kinds of standard flood training datasets generated by the proposed approach, namely a strong and weak labeled dataset, were used to evaluate the performances of several CNNs;
(3)
Large-scale flood detection in the YRB was attempted with deep learning models.
The paper is structured as follows: Section 2 introduces the study area, satellite data, and dataset production along with the method for dataset generation. Section 3 presents the models proposed or adopted, the performance of each model trained with the strong label dataset, and the flood detection results for the Yangtze River Basin. The performances of the models trained with the weak label dataset, a perspective on change detection methods, and some limitations of the present study are discussed in Section 4. The conclusions drawn from this study are given at the end of the paper.

2. Materials and Methods

2.1. Study Area and Data

Rising in the Tanggula Mountains in west-central China, the Yangtze River is about 3964 miles (6380 km) long and flows from its source in a glacier in Qinghai Province eastwards into the East China Sea at Shanghai, receiving water from over 700 tributaries along the way, with a catchment area of about 1.8 million km2 in China. Under the influence of the monsoon climate, the YRB has long been subject to an uneven temporal–spatial distribution of precipitation and temperature, with great inter-annual variation and a concentrated intra-annual distribution, one of the most important factors with respect to frequent flooding. Floods occur almost every year in the Yangtze River Basin. Many obvious anomalous changes in the spatial–temporal distribution have been observed in recent decades compared with the past, which may very likely upset the established balance between the existing river runoff and the flood control system and result in unexpected major disasters. In the present study, to develop a near-real-time flood detection and automatic mapping approach using remote sensing with deep learning models, 16 flood events that took place in the past decade in the YRB were systematically investigated, and a total of 32 Sentinel-1 SAR images acquired in flood and non-flood periods were used as the main satellite data sources. Since only backscattering intensity data are needed for SAR image processing, Ground Range Detected (GRD) data of Sentinel-1 were used in this study. GRD data include VH and VV polarization data, hereafter referred to as VH and VV. The 12.5 m DEM generated from ALOS-PALSAR was used as auxiliary data for CNN model training. To meet the requirements of near-real-time flood detection, 16 images acquired in 8 flood events were used for the training and testing of the deep learning models, and the remaining 16 images derived from the other 8 flood events were used for flood detection.
Table 1 lists the locations and flooding durations of the 16 selected flood events, as well as the IDs of Sentinel-1 SAR images acquired in corresponding flood events along with their usages in present study.
An overview of the YRB with the geo-location of the Sentinel-1 SAR images used in the present study is shown in Figure 1.

2.2. Method

The flowchart of the present study, as presented in Figure 2, consists of three parts. The first part mainly involves satellite data preprocessing. Six preprocessing steps, i.e., orbit correction, thermal noise removal, radiometric calibration, speckle filtering, terrain correction, and conversion to decibels (dB), were applied to the Sentinel-1 images acquired in each of the 8 selected flood events in the YRB. Meanwhile, the DEM corresponding to the coverage of each Sentinel-1 image was spliced, clipped, and resampled to ensure the same spatial resolution for the DEM and the Sentinel-1 images. The second part mainly dealt with the generation of the standard flood training dataset. First, a radar-based water index was used to segment a rough inundated waterbody boundary for the studied flood event. Then, a regional threshold method was adopted to refine the segmentation of the inundated extent of the flood, in association with manual annotation and auxiliary DEM data, to generate the strong label dataset by clipping the VH polarization, VV polarization, DEM, and label into tiles. The last part concerns flood detection and mapping. The CNN model was first trained and evaluated using the previously obtained strong label datasets for flood detection and mapping; the results were further processed to produce the weak label flood dataset, which can quickly expand the dynamic flooding database for near-real-time flood detection and mapping.
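The decibel conversion in the preprocessing chain above can be illustrated with a minimal sketch (a hypothetical Python/NumPy example; the function name and the small epsilon guard against log(0) are our own assumptions, not part of the SNAP workflow):

```python
import numpy as np

def to_db(sigma0, eps=1e-10):
    # convert linear backscatter intensity to decibels: 10 * log10(sigma0)
    return 10.0 * np.log10(np.maximum(sigma0, eps))

# linear intensities of 0.01, 0.1, and 1.0 map to -20, -10, and 0 dB
print(to_db(np.array([0.01, 0.1, 1.0])))
```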

2.2.1. Dataset Production Method

The Sentinel-1 images acquired in each of the 8 selected flood events in the YRB were preprocessed with SNAP software. Meanwhile, the DEM corresponding to the coverage of each Sentinel-1 image was spliced, clipped, and resampled to ensure the same spatial resolution for the DEM and the Sentinel-1 images. Additionally, a water index method used in previous studies [38], with a threshold of 0.3–0.4, was applied for the rough segmentation of the flood waterbody to extract the extent of the inundated area. Formula (1) gives the definition of the water index, where VH and VV represent the polarized bands of the SAR images:
WI = ln(10 × VH × VV) − 8   (1)
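Formula (1) and the rough segmentation step can be sketched as follows (a hypothetical NumPy example under our reading of the formula; the default threshold of 0.35 is simply the midpoint of the reported 0.3–0.4 range, and the example values are illustrative):

```python
import numpy as np

def water_index(vh, vv):
    # Formula (1): WI = ln(10 * VH * VV) - 8
    return np.log(10.0 * vh * vv) - 8.0

def rough_water_mask(vh, vv, threshold=0.35):
    # rough segmentation: flag pixels whose water index exceeds the threshold,
    # following the segmentation described in the text
    return water_index(vh, vv) > threshold
```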
It should be noted that the results derived in this way are only rough segmentation results with many errors. To obtain accurate deep learning labels for the flood training samples, it was necessary to carefully select as many regions of interest (ROIs) as possible, covering various ground objects, and to divide the generated datasets into training and test samples. As shown in Figure 3, among the 8 selected flood events, 7 were used for training and testing, and 1 was used only for testing. Two test datasets were obtained to test the robustness and generalization of the models. As can be seen from Figure 3, the training and test datasets include various land cover types to ensure the balance of positive and negative samples for deep learning.
Once the selection of the ROIs and the generation of the training and testing datasets were completed, the regional threshold method was adopted to refine the segmentations previously derived. In the present study, we found that for hilly areas with many terrain shadows, a single segmentation threshold of 0.4 could correct most of the misclassifications in the rough segmentations, while for areas such as farmland and aquaculture, a threshold of 0.15–0.2 was more appropriate. For mountainous areas with steep terrain, a mask of slopes greater than 10 degrees was used to further correct the effects of terrain shadows. Up to this point, almost all processes were implemented programmatically in batches. The remaining small portion that was difficult to solve by the threshold method was completed by manual annotation. Figure 4 exhibits the results of the fine segmentation. It can be seen from the figure that most of the areas mis-segmented by the global threshold method are well corrected.
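The regional refinement described above can be sketched as follows (a hypothetical NumPy example; the function name and the toy arrays are our own, while the thresholds and the 10-degree slope mask follow the values reported in the text):

```python
import numpy as np

def refine_water_mask(wi, slope, threshold=0.4, slope_limit=10.0):
    # re-segment the water index with a region-specific threshold
    # (0.4 for shadow-prone hilly areas, 0.15-0.2 for farmland/aquaculture)
    # and mask out steep terrain, where shadows mimic water
    return (wi > threshold) & (slope <= slope_limit)

wi = np.array([[0.5, 0.3], [0.6, 0.8]])
slope = np.array([[2.0, 2.0], [15.0, 5.0]])
print(refine_water_mask(wi, slope).astype(int))  # [[1 0] [0 1]]
```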
In the present study, 32 Sentinel-1 SAR images corresponding to the 16 selected flood events that took place in the last 2–6 years in the YRB were utilized for the development of a near-real-time flood detection method. Among these, 7 flood events were used for training and testing the proposed deep learning models, and 1 flood event was used only for testing. In this way, one strong label training dataset and two testing datasets were obtained. Training the deep learning models requires the training data to be cropped into 256 × 256 tiles. The two testing datasets contain 13 and 14 images of 3000–5000 pixels, respectively, and were used for testing and for near-real-time flood detection and mapping. A sliding-window cutting strategy was used for testing and application, so there was no need for image cropping. The remaining 8 flood events were used for testing the proposed near-real-time flood detection method. The detection results of those floods can be turned into weak label datasets to improve the productivity of deep learning dataset generation. Some examples of the strong label datasets are shown in Figure 5.
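The 256 × 256 tiling step can be sketched as follows (a hypothetical NumPy example; the non-overlapping grid and the edge-dropping strategy are our own simplifying assumptions):

```python
import numpy as np

def tile_image(arr, size=256):
    # crop an (H, W, ...) array into non-overlapping size x size tiles,
    # discarding incomplete tiles at the right and bottom edges
    h, w = arr.shape[:2]
    return [arr[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

tiles = tile_image(np.zeros((600, 520, 2)))  # e.g., stacked VH and VV bands
print(len(tiles))  # 2 rows x 2 columns of full tiles -> 4
```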

2.2.2. Deep Learning Models Adopted for Experimental Studies

In this study, four popular deep learning models, FCN-8 [39], SegNet [40], UNet [41], and DeepResUNet [42], were adopted in order to evaluate their performances in flood detection. The fully convolutional network (FCN) was the first deep learning model used for semantic segmentation. In an FCN, deconvolution is used to replace the fully connected layers. According to the deconvolution scale, FCNs can be divided into FCN-8, FCN-16, and FCN-32. In this study, FCN-8, which preserves the most detailed features, was used for flood detection. UNet is the most classic and widely used segmentation network. Since many subsequent deep learning networks have been proposed based on UNet, its structure is introduced here in detail. UNet is a typical encoding (down-sampling) and decoding (up-sampling) model. As shown in Figure 6, the encoder and decoder have a symmetrical structure, including 4 down-sampling and 4 up-sampling layers, respectively. Each sampling layer is composed of 2–3 stacked convolutional layers, and the numbers of convolutional channels are 64, 128, 256, 512, and 1024. The feature maps of the up-sampling and down-sampling layers are connected by a concatenation function to recover the details lost during max pooling. Similar to UNet in structure, SegNet has no concatenation operation but retains the indices of max pooling in the down-sampling layers, so that detailed features can be reconstructed more accurately. DeepResUNet takes UNet as its basic framework, but adds the residual structure of ResNet [43] and reduces the number of convolutional channels to 128, making the model more efficient.
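The symmetric UNet structure described above can be sketched in Keras as follows (a hypothetical minimal implementation assuming a single-band 256 × 256 input and two stacked convolutions per level; the layer counts and channel widths follow the text, everything else is our own simplification):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # two stacked 3x3 convolutions per sampling level
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1), base=64):
    inputs = tf.keras.Input(shape=input_shape)
    x, skips = inputs, []
    for depth in range(4):                          # 4 down-sampling levels: 64..512
        x = conv_block(x, base * 2 ** depth)
        skips.append(x)                             # kept for the skip connection
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base * 16)                    # bottleneck with 1024 channels
    for depth in reversed(range(4)):                # 4 up-sampling levels
        x = layers.Conv2DTranspose(base * 2 ** depth, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[depth]]) # recover details lost in pooling
        x = conv_block(x, base * 2 ** depth)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # binary flood mask
    return tf.keras.Model(inputs, outputs)
```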

2.2.3. Evaluation Metrics and Experimental Parameters

In this study, the evaluation metrics of overall accuracy (OA), precision, recall, and F1-score were used to evaluate the flood detection results. OA refers to the proportion of correct predictions in the total number of predictions; however, when the assessed categories are imbalanced, OA may be misleading. Therefore, precision, recall, and F1-score were also used for a more objective model evaluation. For binary classification problems, the confusion matrix intuitively shows the classification of each category by the classifier. The formulas of the confusion matrix and the 4 evaluation metrics are given in Table 2.
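The four metrics can be computed directly from the confusion matrix counts (a minimal sketch; the example counts are purely illustrative and not taken from Table 2):

```python
def metrics(tp, fp, fn, tn):
    # OA: correct predictions over all predictions
    oa = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)        # how many predicted flood pixels are real
    recall = tp / (tp + fn)           # how many real flood pixels are found
    f1 = 2 * precision * recall / (precision + recall)
    return oa, precision, recall, f1

# illustrative counts: 80 true positives, 10 false positives,
# 20 false negatives, 890 true negatives
print(metrics(80, 10, 20, 890))
```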
SNAP 8.0 software was used to preprocess the Sentinel-1 images. ArcPy and Python were used for the coarse and fine segmentation and for dataset generation. The experiments were implemented under the TensorFlow framework on an NVIDIA GeForce RTX 2080Ti GPU. The Adam optimizer was used, the training batch size was set to 10, and the number of iterations was 60,000. An exponential decay strategy was also used, with a decay factor of 0.8 every 10,000 iterations and an initial learning rate of 0.0001. Table 2 lists the details of the confusion matrix used in the present study for the performance evaluation of the deep learning models adopted for flood detection.
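The learning-rate schedule described above can be sketched as follows (assuming a staircase decay, i.e., the rate is multiplied by 0.8 once per full 10,000 iterations; a smooth continuous decay is the other common reading, and the function name is our own):

```python
def learning_rate(step, initial=1e-4, decay=0.8, decay_steps=10_000):
    # staircase exponential decay: multiply by `decay` every `decay_steps` iterations
    return initial * decay ** (step // decay_steps)

for step in (0, 10_000, 60_000):
    print(step, learning_rate(step))
```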

3. Experimental Results

3.1. Model Comparison Experiment

The performances of the four classic deep learning models, FCN-8, SegNet, UNet, and DeepResUNet, together with the global threshold method, were compared on the two test datasets generated previously. As can be observed in Table 3, the global threshold method performed the worst among all the methods compared, especially in recall, which was about 15% lower than that of the other models. FCN-8 had the lowest F1-score of all the CNN models, and its precision was about 0.08 lower than those of the other models. SegNet performed well in precision, but its comprehensive index, the F1-score, was slightly lower than those of UNet and DeepResUNet. The performances of UNet and DeepResUNet were very close, and their accuracies were better than those of the other models. The performances of all the models on test dataset 1 were better than those on test dataset 2, which can probably be attributed to the fact that test dataset 2 came from a different flood event (refer to Figure 3 in Section 2.2.1).
The flood detection results were visualized by comparison with a 5812 × 4260 image derived from test dataset 1, as exhibited in Figure 7. As can be observed, the area of flood missed by the global threshold method was very large, mainly distributed along the boundaries of rivers and lakes. This is attributed to the poor capability of the threshold method in dealing with heavily noise-affected SAR pixels surrounding waterbody boundaries. The area of flooding wrongly detected by FCN-8 was also large, mainly concentrated along riverbanks and around lakes, as well as in small waterlogged areas. The result of SegNet was significantly better than that of FCN-8, while fewer areas were mis-detected by UNet and DeepResUNet. UNet was therefore selected for the final flood detection and mapping in the present study.

3.2. Band Comparison Experiments

The influences of the polarization mode and the use of the DEM on flood detection were also experimentally investigated. The results are summarized in Table 4. It can be observed that the highest F1-score was achieved by using VH polarization alone. The precision of VV polarization was slightly higher than that of VH polarization, but the recall decreased considerably. Moreover, no obvious improvements in the evaluation metrics were seen after adding the DEM to the experiments. This can probably be attributed to the mountain samples in the training dataset already suppressing the effects of mountain shadows on flood detection, while the terrain of the middle and lower reaches of the YRB inherently has a limited effect on flood detection. When the VH, VV, and DEM channels were used together as inputs to the deep learning models, the accuracies of all the models decreased. The experiments indicated that the signal-to-noise ratio of the VV and DEM bands was low, and that VH polarization had the best effect on flood detection in the YRB.
Similarly, the flood detection results were visualized by comparison with a 2688 × 2248 image derived from test dataset 2, as exhibited in Figure 8, to investigate the performance of UNet with different band combinations as inputs. As can be seen from Figure 8, only a few mis-detected or missed flooded areas existed in the map generated by the UNet model with the VH polarization band as input, and the errors were mainly concentrated along the edges of the flooded area. The flood detection results with the VV band as input displayed poor accuracy because of many errors in river edge detection. Adding the auxiliary DEM as an input did not improve the flood mapping results but introduced some noise. The band combination of VH and VV polarization as input to UNet did not improve the performance of the model, most likely for the reason previously analyzed.

3.3. Near-Real-Time Flood Detection and Mapping

Using the UNet model trained with the strong label datasets generated previously, near-real-time flood detection and mapping were performed for the remaining eight flood events. Figure 9 presents the detection and mapping results of four flood events. The first two columns of image maps shown in Figure 9 are the VH band images acquired during each flood event and the images acquired in the non-flood period, respectively. The red part indicates the flood area detected and mapped with UNet, obtained as the difference between the detection results for the flood period and the non-flood period. The floods shown in Figure 9a took place in the middle reach of the YRB over Honghu Lake near Jingzhou city, Hubei province, China. As can be seen from the figure, large areas of lakes and the main stream of the YRB were flooded. The floods shown in Figure 9b occurred in the Chaohu Lake basin of Hefei city, Anhui province. The area inundated by the flood was mainly cultivated land and farmland, resulting in great agricultural losses during this event. The serious floods detected from Sentinel-1 SAR images by UNet, as presented in Figure 9c, took place in July 2017 in Yueyang city, Hunan province, when regional floods occurred in the Dongting Lake basin. As can be seen from the figure, Dongting Lake expanded to more than twice its area, and an extensive area of cultivated land was inundated. The floods exhibited in Figure 9d happened in July 2020 over Poyang Lake in Jiangxi province. During this flooding, many wetlands near Poyang Lake were heavily affected. From the above flood examples, it is obvious that the flood areas in the YRB are mainly concentrated in the middle and lower reaches of the basin, and the floods that took place in the key areas of Poyang Lake and Dongting Lake were especially serious. The remaining near-real-time flood detection results are shown in Figure A1, Figure A2, Figure A3 and Figure A4.
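The mapping logic, taking the difference between the flood-period and non-flood-period water detections, can be sketched as follows (a hypothetical NumPy example with toy masks; the function name is our own):

```python
import numpy as np

def flood_extent(water_flood, water_normal):
    # flooded area = water detected in the flood-period image that is absent
    # from the non-flood (permanent water) image
    return water_flood & ~water_normal

flood_mask = np.array([True, True, False])    # detection in flood period
normal_mask = np.array([True, False, False])  # permanent waterbody
print(flood_extent(flood_mask, normal_mask))  # [False  True False]
```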

4. Discussion

4.1. Weak Label Datasets Experiments

From the present study, it can be concluded that CNN-based flood detection is efficient and fast, which is of great significance for improving the efficiency of near-real-time flood detection. In practice, however, quick and efficient flood mapping technology is essential for disaster prevention and mitigation. To improve the efficiency of flood mapping, the detection results in Section 3.3 were made into a weak label dataset. First, each detected flood image was cut into tiles with a size of 256 × 256 pixels, among which some tiles were almost fully flood-covered while others were entirely non-flooded. The algorithm used for this processing eliminated 80% of such tiles, leaving 21,826 tiles to form the weak label dataset, which consisted of partly flood-covered and partly non-flood-covered tiles. With this weak label dataset, the performances of the deep learning models were evaluated and comparison experiments with different band combinations as inputs were carried out; the results are shown in Table 5. Since the performances of these models had been evaluated previously, here we combined the two test datasets to reduce the number of tables. Concluding remarks similar to those obtained in the earlier performance evaluations of the deep learning models can be summarized:
(1)
Performances of the UNet and DeepResUNet were fairly close with each other, while FCN had the lowest flood detection accuracy;
(2)
The VH polarization band as input to the deep learning models performed the best in flood detection, while the DEM had a very minor effect on the results of flood detection.
In general, the performance of the CNN models with the weak label dataset was not as good as that with the strong label dataset. This could be attributed to, on the one hand, the weak label dataset not having been selected and annotated manually, so its overall accuracy was lower than that of the strong label dataset; on the other hand, the weak label dataset came from the eight flood events detected with the UNet model, a generation method completely different from that of the other eight flood events used for training and testing. Therefore, we can refer to the flood detection results obtained with the strong label dataset to optimize the flood detection results obtained with the weak label dataset, so as to quickly expand the training dataset of flood samples.
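The tile-filtering step used to build the weak label dataset can be sketched as follows (a hypothetical Python example; the per-tile flood fraction input, the purity cutoffs, and the fixed seed are our own assumptions, while the idea of discarding 80% of near-pure tiles follows the text):

```python
import random

def filter_weak_tiles(flood_fractions, keep_frac=0.2, lo=0.01, hi=0.99, seed=0):
    # flood_fractions: flood-pixel fraction of each 256x256 tile
    # keep every mixed tile; keep only ~20% of (nearly) pure tiles
    rng = random.Random(seed)
    kept = []
    for idx, frac in enumerate(flood_fractions):
        pure = frac <= lo or frac >= hi
        if not pure or rng.random() < keep_frac:
            kept.append(idx)
    return kept

print(filter_weak_tiles([0.5, 0.0, 1.0, 0.3]))  # mixed tiles 0 and 3 are always kept
```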

4.2. Change Detection Method

The methodological logic we followed for flood detection was based on the difference between the detected waterbody extent in the flood period and the natural waterbody extent in the non-flood period to determine the flooded area. Strictly speaking, this is not a complete end-to-end flood detection method. In a CNN, images acquired in the flood and non-flood periods can be set as inputs, with the label being the changed part of the waterbody extent between the two periods, so that the output is directly the flooded area. Currently, two kinds of change detection models based on convolutional neural networks are popular. One uses ordinary neural network models to directly learn the features of the changes [44]. The other uses a Siamese neural network model, in which two networks sharing weights are used to extract features [45,46]. The long short-term memory (LSTM) convolutional neural network [47], with a time series of remote sensing images as input, has also been proposed for change detection, which sheds light on the possibility of expanding the detected changes into a standard change detection dataset in the near future. However, it is worth noting that the following issues remain challenges:
(1)
The selection of areas with high classification accuracy to prevent noise interference;
(2)
Just like the weak label dataset, some of the unchanged data labels need to be eliminated;
(3)
The proportion of positive and negative training samples should be balanced, or a special loss function, such as dice loss, needs to be considered.
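A weight-sharing Siamese model of the second kind can be sketched in Keras as follows (a hypothetical minimal example; the shallow shared encoder and the feature-level subtraction are our own simplifications, not an implementation from [45,46]):

```python
import tensorflow as tf
from tensorflow.keras import layers

def siamese_change_model(shape=(256, 256, 1)):
    # a single encoder applied to both dates, so the weights are shared
    encoder = tf.keras.Sequential([
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
    ])
    t1 = tf.keras.Input(shape=shape)  # flood-period image
    t2 = tf.keras.Input(shape=shape)  # non-flood reference image
    f1, f2 = encoder(t1), encoder(t2)
    diff = layers.Subtract()([f1, f2])                      # feature-level change
    out = layers.Conv2D(1, 1, activation="sigmoid")(diff)   # changed (flooded) area
    return tf.keras.Model([t1, t2], out)
```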

4.3. Novelty, Potential, and Limitations

Deep learning is a data-driven algorithm. From the perspective of training and testing datasets, the present study aims to improve the efficiency of flood detection and mapping by using convolutional neural networks for large-scale, near-real-time flood detection and mapping, and four classic CNN models were used to this end. In the future, by means of spatial pyramid structures, feature reuse structures, or attention mechanisms, the number of convolutional kernel channels can be reduced to decrease model redundancy and improve model accuracy. Meanwhile, the integration of the strong label dataset and the weak label dataset can effectively facilitate long time-series flood disaster monitoring. The Sentinel-1 satellite is constrained by its revisit cycle in effectively capturing flood events; more SAR satellites are needed to enhance the generalization and applicability of datasets for large-scale, near-real-time flood detection and mapping.
In some studies, optical and SAR image data are combined for flood detection to improve the accuracy of the results [48,49,50]. Water bodies are easier to identify from the spectral information of optical data than from SAR data. In the future, we can seek optical images with no or little cloud coverage during flooding events to expand the available datasets. The floods in the YRB are concentrated in the middle and lower reaches of the basin, where the terrain is flat and the DEM has limited effect. However, for flash floods, DEMs are important for identifying mountainous shadows that may seriously affect flood detection and mapping.

5. Conclusions

The sudden nature of flooding makes it difficult to capture the dynamic process and extent of flooding in real time, which is essential for disaster prevention and mitigation. Although deep learning is a promising technology to assist remote sensing in near-real-time flood detection and mapping, the performance of available deep learning models for large-scale flood detection and mapping falls far below expectations, constrained by the shortage of qualified flood training and testing samples and by the low efficiency of the data processing procedures involved. To break through this predicament, a semi-automatic flood dataset-generating method was proposed, first to enable efficient generation of strong label datasets and then to realize near-real-time flood detection and mapping in the YRB by combining the generated strong label flood datasets with CNN models. Several experiments were conducted to investigate the performance of the proposed method under various conditions. It was concluded that the VH polarization data of SAR images alone performed best for flood detection, while involving the DEM as a CNN input had limited effect on flood detection over the YRB. Meanwhile, a weak label dataset was generated from the near-real-time flood detection results, and experiments on near-real-time flood detection with the expanded flood datasets proved that the weak label dataset positively affected flood detection. If the procedure for weak label dataset generation can be improved (following the production method of the strong label dataset), the efficiency and precision of flood datasets and of flood detection and mapping results can be greatly improved. In short, from the perspective of datasets, this study demonstrates that CNNs have great potential for high-efficiency flood dataset generation as well as for near-real-time flood detection.

Author Contributions

X.W. and Z.Z. designed this study. Z.L., B.A. and R.L. completed data collection and preprocessing. X.W. wrote this manuscript. Z.Z., W.Z., S.X. and J.T. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This study was jointly financed by the Major Science and Technology Project of the Ministry of Water Resources [Grant No. SKS-2022008] and the Key R&D and Transformation Program of Qinghai Province [Grant No. 2020-SF-C37].

Data Availability Statement

The Sentinel-1 images used in this article can be downloaded from ESA by product ID (https://scihub.copernicus.eu/dhus/#/home, accessed on 5 April 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Near-Real-Time Flood Detection Results

Figure A1. Flood detected in Dongting Lake in 2020.
Figure A2. Flood detected in the middle reaches of the Yangtze River in 2020.
Figure A3. Flood detected in Shengjin Lake in 2020.
Figure A4. Flood detected in Xinmiao Lake in 2020.

References

1. EM-DAT. EM-DAT: The International Disaster Database. 2008. Available online: http://www.emdat.be/Database/Trends/trends.html (accessed on 22 November 2022).
2. Tanoue, M.; Hirabayashi, Y.; Ikeuchi, H. Global-scale river flood vulnerability in the last 50 years. Sci. Rep. 2016, 6, 36021.
3. Willner, S.N.; Otto, C.; Levermann, A. Global economic response to river floods. Nat. Clim. Chang. 2018, 8, 594–598.
4. Zhang, X.; Chan, N.W.; Pan, B.; Ge, X.; Yang, H. Mapping flood by the object-based method using backscattering coefficient and interference coherence of Sentinel-1 time series. Sci. Total Environ. 2021, 794, 148388.
5. Yang, H.; Wang, H.; Lu, J.; Zhou, Z.; Feng, Q.; Wu, Y. Full lifecycle monitoring on drought-converted catastrophic flood using Sentinel-1 SAR: A case study of Poyang Lake region during summer 2020. Remote Sens. 2021, 13, 3485.
6. Martinis, S.; Kersten, J.; Twele, A. A fully automated TerraSAR-X based flood service. ISPRS J. Photogramm. Remote Sens. 2015, 104, 203–212.
7. Huang, C.; Chen, Y.; Wu, J. Mapping spatio-temporal flood inundation dynamics at large river basin scale using time-series flow data and MODIS imagery. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 350–362.
8. Sakamoto, T.; Van Nguyen, N.; Kotera, A.; Ohno, H.; Ishitsuka, N.; Yokozawa, M. Detecting temporal changes in the extent of annual flooding within the Cambodia and the Vietnamese Mekong Delta from MODIS time-series imagery. Remote Sens. Environ. 2007, 109, 295–313.
9. Cian, F.; Marconcini, M.; Ceccato, P. Normalized Difference Flood Index for rapid flood mapping: Taking advantage of EO big data. Remote Sens. Environ. 2018, 209, 712–730.
10. McCormack, T.; Campanyà, J.; Naughton, O. A methodology for mapping annual flood extent using multi-temporal Sentinel-1 imagery. Remote Sens. Environ. 2022, 282, 113273.
11. Boni, G.; Ferraris, L.; Pulvirenti, L.; Squicciarino, G.; Pierdicca, N.; Candela, L.; Pisani, A.R.; Zoffoli, S.; Onori, R.; Proietti, C.; et al. A Prototype System for Flood Monitoring Based on Flood Forecast Combined with COSMO-SkyMed and Sentinel-1 Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2794–2805.
12. Mason, D.C.; Davenport, I.J.; Neal, J.C.; Schumann, G.J.P.; Bates, P.D. Near real-time flood detection in urban and rural areas using high-resolution synthetic aperture radar images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3041–3052.
13. Li, Y.; Martinis, S.; Plank, S.; Ludwig, R. An automatic change detection approach for rapid flood mapping in Sentinel-1 SAR data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 123–135.
14. Martinis, S.; Twele, A.; Voigt, S. Unsupervised extraction of flood-induced backscatter changes in SAR data using Markov image modeling on irregular graphs. IEEE Trans. Geosci. Remote Sens. 2011, 49, 251–263.
15. Wangchuk, S.; Bolch, T.; Robson, B.A. Monitoring glacial lake outburst flood susceptibility using Sentinel-1 SAR data, Google Earth Engine, and persistent scatterer interferometry. Remote Sens. Environ. 2022, 271, 112910.
16. Tsyganskaya, V.; Martinis, S.; Marzahn, P.; Ludwig, R. Detection of temporary flooded vegetation using Sentinel-1 time series data. Remote Sens. 2018, 10, 1286.
17. Zhao, J.; Pelich, R.; Hostache, R.; Matgen, P.; Wagner, W.; Chini, M. A large-scale 2005–2012 flood map record derived from ENVISAT-ASAR data: United Kingdom as a test case. Remote Sens. Environ. 2021, 256, 112338.
18. Chen, S.; Huang, W.; Chen, Y.; Feng, M. An adaptive thresholding approach toward rapid flood coverage extraction from Sentinel-1 SAR imagery. Remote Sens. 2021, 13, 4899.
19. Liang, J.; Liu, D. A local thresholding approach to flood water delineation using Sentinel-1 SAR imagery. ISPRS J. Photogramm. Remote Sens. 2020, 159, 53–62.
20. Nakmuenwai, P.; Yamazaki, F.; Liu, W. Automated extraction of inundated areas from multi-temporal dual-polarization RADARSAT-2 images of the 2011 central Thailand flood. Remote Sens. 2017, 9, 78.
21. Qiu, J.; Cao, B.; Park, E.; Yang, X.; Zhang, W.; Tarolli, P. Flood monitoring in rural areas of the Pearl River Basin (China) using Sentinel-1 SAR. Remote Sens. 2021, 13, 1384.
22. Martinis, S.; Twele, A. A hierarchical spatio-temporal Markov model for improved flood mapping using multi-temporal X-band SAR data. Remote Sens. 2010, 2, 2240–2258.
23. Lu, J.; Giustarini, L.; Xiong, B.; Zhao, L.; Jiang, Y.; Kuang, G. Automated flood detection with improved robustness and efficiency using multi-temporal SAR data. Remote Sens. Lett. 2014, 5, 240–248.
24. Lin, L.; Di, L.; Tang, J.; Yu, E.; Zhang, C.; Rahman, M.S.; Shrestha, R.; Kang, L. Improvement and validation of NASA/MODIS NRT global flood mapping. Remote Sens. 2019, 11, 205.
25. Shen, X.; Anagnostou, E.N.; Allen, G.H.; Robert Brakenridge, G.; Kettner, A.J. Near-real-time non-obstructed flood inundation mapping using synthetic aperture radar. Remote Sens. Environ. 2019, 221, 302–315.
26. Konapala, G.; Kumar, S.V.; Khalique Ahmad, S. Exploring Sentinel-1 and Sentinel-2 diversity for flood inundation mapping using deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 180, 163–173.
27. Byun, Y.; Han, Y.; Chae, T. Image fusion-based change detection for flood extent extraction using bi-temporal very high-resolution satellite images. Remote Sens. 2015, 7, 10347–10363.
28. Chini, M.; Hostache, R.; Giustarini, L.; Matgen, P. A hierarchical split-based approach for parametric thresholding of SAR images: Flood inundation as a test case. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6975–6988.
29. Landuyt, L.; Van Wesemael, A.; Schumann, G.J.P.; Hostache, R.; Verhoest, N.E.C.; Van Coillie, F.M.B. Flood Mapping Based on Synthetic Aperture Radar: An Assessment of Established Approaches. IEEE Trans. Geosci. Remote Sens. 2019, 57, 722–739.
30. Tiwari, V.; Kumar, V.; Matin, M.A.; Thapa, A.; Ellenburg, W.L.; Gupta, N.; Thapa, S. Flood inundation mapping-Kerala 2018; Harnessing the power of SAR, automatic threshold detection method and Google Earth Engine. PLoS ONE 2020, 15, e0237324.
31. Yuan, Q.; Shen, H.; Li, T.; Li, Z.; Li, S.; Jiang, Y.; Xu, H.; Tan, W.; Yang, Q.; Wang, J.; et al. Deep learning in environmental remote sensing: Achievements and challenges. Remote Sens. Environ. 2020, 241, 111716.
32. Dong, Z.; Wang, G.; Amankwah, S.O.Y.; Wei, X.; Hu, Y.; Feng, A. Monitoring the summer flooding in the Poyang Lake area of China in 2020 based on Sentinel-1 data and multiple convolutional neural networks. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102400.
33. Bonafilia, D.; Tellman, B.; Anderson, T.; Issenberg, E. Sen1Floods11: A georeferenced dataset to train and test deep learning flood algorithms for Sentinel-1. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 835–845.
34. Bai, Y.; Wu, W.; Yang, Z.; Yu, J.; Zhao, B.; Liu, X.; Yang, H.; Mas, E.; Koshimura, S. Enhancement of detecting permanent water and temporary water in flood disasters by fusing Sentinel-1 and Sentinel-2 imagery using deep learning algorithms: Demonstration of Sen1Floods11 benchmark datasets. Remote Sens. 2021, 13, 2220.
35. Katiyar, V.; Tamkuan, N.; Nagai, M. Near-real-time flood mapping using off-the-shelf models with SAR imagery and deep learning. Remote Sens. 2021, 13, 2334.
36. Zhang, L.; Xia, J. Flood detection using multiple Chinese satellite datasets during 2020 China summer floods. Remote Sens. 2022, 14, 51.
37. Li, Y.; Martinis, S.; Wieland, M. Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence. ISPRS J. Photogramm. Remote Sens. 2019, 152, 178–191.
38. Tian, H.; Li, W.; Wu, M.; Huang, N.; Li, G.; Li, X.; Niu, Z. Dynamic monitoring of the largest freshwater lake in China using a new water index derived from high spatiotemporal resolution Sentinel-1A data. Remote Sens. 2017, 9, 521.
39. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
40. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
41. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015.
42. Yi, Y.; Zhang, Z.; Zhang, W.; Zhang, C.; Li, W.; Zhao, T. Semantic segmentation of urban buildings from VHR remote sensing imagery using a deep convolutional neural network. Remote Sens. 2019, 11, 1774.
43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
44. Wang, Q.; Yuan, Z.; Du, Q.; Li, X. GETNET: A General End-To-End 2-D CNN Framework for Hyperspectral Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3–13.
45. Zhang, C.; Yue, P.; Tapete, D.; Jiang, L.; Shangguan, B.; Huang, L.; Liu, G. A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images. ISPRS J. Photogramm. Remote Sens. 2020, 166, 183–200.
46. Hou, X.; Bai, Y.; Li, Y.; Shang, C.; Shen, Q. High-resolution triplet network with dynamic multiscale feature for change detection on satellite images. ISPRS J. Photogramm. Remote Sens. 2021, 177, 103–115.
47. Shi, C.; Zhang, Z.; Zhang, W.; Zhang, C.; Xu, Q. Learning Multiscale Temporal-Spatial-Spectral Features via a Multipath Convolutional LSTM Neural Network for Change Detection with Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5529816.
48. Mallinis, G.; Gitas, I.Z.; Giannakopoulos, V.; Maris, F.; Tsakiri-Strati, M. An object-based approach for flood area delineation in a transboundary area using ENVISAT ASAR and LANDSAT TM data. Int. J. Digit. Earth 2013, 6, 124–136.
49. Landuyt, L.; Verhoest, N.E.C.; Van Coillie, F.M.B. Flood mapping in vegetated areas using an unsupervised clustering approach on Sentinel-1 and -2 imagery. Remote Sens. 2020, 12, 3611.
50. Ovando, A.; Martinez, J.M.; Tomasella, J.; Rodriguez, D.A.; von Randow, C. Multi-temporal flood mapping and satellite altimetry used to evaluate the flood dynamics of the Bolivian Amazon wetlands. Int. J. Appl. Earth Obs. Geoinf. 2018, 69, 27–40.
Figure 1. Geo-location of the YRB and the locations of the satellite data used in present study.
Figure 2. The flowchart of present study for near-real-time flood detection and mapping.
Figure 3. Training and test datasets selected in present study.
Figure 4. Examples of the refined segmentations for several regions. (a) The water bodies in mountainous areas. (b) The rivers and the lakes. (c) The lakes. (d) The mountain areas.
Figure 5. Examples of strong label dataset derived from present study.
Figure 6. The structure of UNet.
Figure 7. Flood mapping results of 5812 × 4260 images derived with different deep learning models with test dataset 1.
Figure 8. Flood mapping results of 2688 × 2248 images derived with UNet with the test dataset 2.
Figure 9. Near-real-time flood detection results obtained with Sentinel-1 SAR images by using deep learning model of UNet. (a) Flood detected in Honghu Lake in 2020. (b) Flood detected in Chaohu Lake basin in 2020. (c) Flood detected in Dongting Lake in 2017. (d) Flood detected in Poyang Lake in 2020.
Table 1. Information about the selected flood events and the corresponding Sentinel-1 SAR images used in present study.
| Flood Events | Flood Period | Image ID | Train or Test |
| Dongting Lake | 9 June 2016–3 July 2016 | 011CB0_5A05, 0127C5_F17D | Train and Test |
| Poyang Lake | 30 May 2016–17 July 2016 | 011822_A928, 012E7C_86E8 | Train and Test |
| Middle Reaches of the Yangtze River | 11 June 2016–5 July 2016 | 011D9A_0801, 0128B9_886D | Train and Test |
| Poyang Lake | 12 June 2017–6 July 2017 | 00A8F1_7632, 00B2FB_3091 | Train and Test |
| Juzhang River | 5 July 2018–29 July 2018 | 02747C_139D, 027F52_9A16 | Train and Test |
| Huaihe River | 7 August 2018–19 August 2018 | 02836E_FDCB, 028919_2CEB | Train and Test |
| Middle Reaches of the Yangtze River | 2 July 2019–14 July 2019 | 032778_40DE, 032CC4_6DEF | Train and Test |
| Ruan Jiang | 30 July 2020–11 August 2020 | 03E75D_6DAE, 03ED1D_5ADE | Test |
| Dongting Lake | 4 June 2017–10 July 2017 | 01C150_503B, 01D14B_23A9 | Application |
| Poyang Lake | 20 June 2020–26 July 2020 | 029F8B_298B, 02AF8A_BF3A | Application |
| Chaohu Lake | 3 July 2020–27 July 2020 | 03DB5D_91FD, 03E612_6A3E | Application |
| Fujiang River | 14 August 2020–19 September 2020 | 03EE9F_8BAF, 04012C_41B2 | Application |
| Dongting Lake | 19 June 2020–25 July 2020 | 03D52B_49F4, 03E52E_6E8E | Application |
| Middle and Lower Reaches of the Yangtze River | 14 June 2020–8 July 2020 | 03D2E0_90D3, 03DD85_0B97 | Application |
| Middle and Lower Reaches of the Yangtze River | 14 June 2020–8 July 2020 | 03D2E0_261F, 03DD85_725A | Application |
| Upper Reaches of the Yangtze River | 16 August 2021–21 September 2021 | 04A272_97F5, 04B46E_4D61 | Application |
Table 2. Metrics used in performance evaluation of the deep learning models adopted in flood detection.
Table 2. Metrics used in performance evaluation of the deep learning models adopted in flood detection.
Confusion Matrix
| Label \ Prediction | Water | No-Water |
| Water | True Positive (TP) | False Negative (FN) |
| No-Water | False Positive (FP) | True Negative (TN) |
Evaluation Metrics
| Overall Accuracy (OA) | OA = (TP + TN) / (TP + TN + FP + FN) |
| Precision (P) | P = TP / (TP + FP) |
| Recall (R) | R = TP / (TP + FN) |
| F1-score (F) | F = 2 × P × R / (P + R) |
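The metrics in Table 2 follow directly from the confusion-matrix counts; a small sketch (with hypothetical pixel counts) is:

```python
# Compute OA, Precision, Recall, and F1 from pixel-level confusion-matrix
# counts (TP, FN, FP, TN as defined in Table 2).
def evaluation_metrics(tp, fn, fp, tn):
    oa = (tp + tn) / (tp + tn + fp + fn)        # Overall Accuracy
    precision = tp / (tp + fp)                   # correctness of water predictions
    recall = tp / (tp + fn)                      # completeness of water detection
    f1 = 2 * precision * recall / (precision + recall)
    return {"OA": oa, "Precision": precision, "Recall": recall, "F1": f1}

# Hypothetical counts for a 100 x 100 tile with 1000 labeled water pixels.
m = evaluation_metrics(tp=900, fn=100, fp=50, tn=8950)
print({k: round(v, 3) for k, v in m.items()})
# {'OA': 0.985, 'Precision': 0.947, 'Recall': 0.9, 'F1': 0.923}
```

Note how OA stays high even when recall drops, because the no-water class dominates the pixel count; this is why precision, recall, and F1 are reported alongside OA in Tables 3–5.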
Table 3. Comparisons of model performances with two different test datasets. The first line presents the models’ performances with test dataset 1, and the second with test dataset 2. The values in bold indicate the highest numbers for corresponding metrics.
| Model | OA | Precision | Recall | F1_Score |
| Global Threshold Method | 0.958 | 0.977 | 0.795 | 0.877 |
|  | 0.953 | 0.969 | 0.774 | 0.860 |
| FCN-8 | 0.974 | 0.943 | 0.970 | 0.956 |
|  | 0.961 | 0.881 | 0.939 | 0.909 |
| SegNet | 0.983 | 0.991 | 0.953 | 0.971 |
|  | 0.975 | 0.981 | 0.897 | 0.937 |
| UNet | 0.986 | 0.980 | 0.973 | 0.976 |
|  | 0.978 | 0.951 | 0.942 | 0.947 |
| DeepResUNet | 0.986 | 0.985 | 0.967 | 0.976 |
|  | 0.979 | 0.970 | 0.927 | 0.948 |
Table 4. Band comparison experiments on UNet with two different test datasets. The first line presents the model performances with test dataset 1, and the second with test dataset 2. The values in bold indicate the highest numbers for corresponding metrics.
| UNet/Band | OA | Precision | Recall | F1_Score |
| VH | 0.986 | 0.980 | 0.973 | 0.976 |
|  | 0.978 | 0.951 | 0.942 | 0.947 |
| VV | 0.976 | 0.985 | 0.933 | 0.958 |
|  | 0.961 | 0.972 | 0.835 | 0.898 |
| VH + DEM | 0.986 | 0.976 | 0.975 | 0.976 |
|  | 0.978 | 0.941 | 0.952 | 0.947 |
| VV + DEM | 0.977 | 0.966 | 0.954 | 0.960 |
|  | 0.964 | 0.934 | 0.886 | 0.909 |
| VH + VV | 0.983 | 0.980 | 0.961 | 0.971 |
|  | 0.971 | 0.966 | 0.889 | 0.926 |
| VH + VV + DEM | 0.981 | 0.988 | 0.948 | 0.968 |
|  | 0.968 | 0.979 | 0.865 | 0.918 |
Table 5. Model performances with weak label datasets.
| Model | OA | Precision | Recall | F1_Score |
| FCN-8 | 0.948 | 0.897 | 0.909 | 0.903 |
| SegNet | 0.955 | 0.912 | 0.917 | 0.914 |
| UNet | 0.958 | 0.930 | 0.911 | 0.920 |
| DeepResUNet | 0.958 | 0.927 | 0.912 | 0.919 |
| UNet/Band | OA | Precision | Recall | F1_score |
| VH | 0.958 | 0.930 | 0.911 | 0.920 |
| VV | 0.952 | 0.914 | 0.904 | 0.910 |
| VH + DEM | 0.958 | 0.933 | 0.905 | 0.919 |
| VV + DEM | 0.953 | 0.918 | 0.902 | 0.910 |
| VH + VV | 0.957 | 0.928 | 0.910 | 0.919 |
| VH + VV + DEM | 0.955 | 0.922 | 0.908 | 0.915 |

Share and Cite

MDPI and ACS Style

Wu, X.; Zhang, Z.; Xiong, S.; Zhang, W.; Tang, J.; Li, Z.; An, B.; Li, R. A Near-Real-Time Flood Detection Method Based on Deep Learning and SAR Images. Remote Sens. 2023, 15, 2046. https://doi.org/10.3390/rs15082046

