Article

Image Fusion-Based Change Detection for Flood Extent Extraction Using Bi-Temporal Very High-Resolution Satellite Images

1 Core Technology Research Laboratory, Pixoneer Geomatics, Daejeon 305-733, Korea
2 Center for Information and Communication Technology, Fondazione Bruno Kessler, Via Sommarive, 18-38123 Povo, Trento, Italy
3 Satellite Information Research Laboratory, Korea Aerospace Research Institute, Daejeon 305-333, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(8), 10347-10363; https://doi.org/10.3390/rs70810347
Submission received: 18 February 2015 / Revised: 3 August 2015 / Accepted: 4 August 2015 / Published: 12 August 2015
(This article belongs to the Special Issue Remote Sensing in Flood Monitoring and Management)

Abstract

Change detection based on satellite images acquired over the same area at different dates is of widespread interest, owing to the increasing number of flood-related disasters. Such images help to generate products that support emergency response and flood management at a global scale. In this paper, a novel unsupervised change detection approach based on image fusion is introduced. The approach aims to extract a reliable flood extent from very high-resolution (VHR) bi-temporal images. The method takes advantage of the spectral distortion that occurs during the image fusion process to detect areas changed by the flood. To this end, a change candidate image is extracted from the fused image generated from the bi-temporal images by considering local spectral distortion. This is done by employing the universal image quality index (UIQI), a measure for the local evaluation of spectral distortion. The decision threshold for determining changed pixels is set by fitting a probability mixture model to the change candidate image with the expectation maximization (EM) algorithm. We used bi-temporal KOMPSAT-2 satellite images to detect the flooded area in the city of N′Djamena, Chad. The performance of the proposed method was visually and quantitatively compared with existing change detection methods. The results show that the proposed method achieved an overall accuracy (OA = 75.04%) close to that of the support vector machine (SVM)-based supervised change detection method. Moreover, the proposed method differentiated the flooded area from the permanent water body better than the existing change detection methods.


1. Introduction

Catastrophic events such as floods, landslides, and tsunamis have a significant impact on our lives, as they cause major losses of life and property. Floods are the most frequently occurring natural disasters worldwide and may become an even greater concern in the future as a result of climate change [1,2]. These circumstances have forced policy makers to treat flood monitoring as an issue of national importance. Flood monitoring requires rapid access to essential information about flood extents and changes in land cover. Change detection techniques using remotely sensed data, i.e., information about an object or phenomenon acquired without direct physical contact, can be used to estimate this information within a short time frame for flood monitoring [3]. Change detection is the process of identifying differences in the state of an object or phenomenon by observing it at different times. Timely and accurate change detection of flood-related disasters provides the foundation for a better understanding of the disaster situation, which in turn supports the disaster recovery plan.
With the increased availability and improved quality of multi-temporal remote sensing data, there has been growing interest over the past few years in the development of change detection techniques for flood monitoring using multi-temporal satellite imagery. Owing to their rapid revisit times and wide spectral coverage, multi-temporal images acquired by sensors with low spatial but high spectral resolution, such as Landsat and MODIS, have generally been used for flood monitoring [4,5,6,7]. SAR images have also been employed because they can acquire data regardless of weather conditions [8,9,10]. Despite the advantages of those sensors, exploiting very high-resolution (VHR) multi-temporal images remains attractive for detecting flood extent, as it enables more detailed and precise analysis [11].
Several contributions to automatic change detection can be found in the recent literature. Methods including principal component analysis (PCA), change vector analysis (CVA), support vector machines (SVM), and multivariate alteration detection (MAD) have proven effective in various applications [12,13,14,15,16,17,18]. CVA is a binary change detection method that identifies changes using the magnitude of the difference between two spectral vectors; a threshold indicating changed areas must be determined on the magnitude of the change vector. This method performed best in a comparative evaluation of several change detection techniques for detecting flood extent using Landsat TM data [19]. The MAD method is based on canonical correlation analysis (CCA) between two groups of variables and finds linear combinations of them that yield a set of mutually orthogonal difference images. The method can concentrate all spectral variations associated with land cover changes between two acquisition times into a few resulting MAD components, which in theory provide an optimal change indicator for multi-temporal remotely sensed images [20]. The SVM-based change detection method, a supervised method known for its good performance, is applied to the multispectral difference image generated by differencing the spectral feature vectors of corresponding pixel pairs in bi-temporal images [13].
In 2009–2010, the IEEE Geoscience and Remote Sensing Society organized a Data Fusion Contest focused on evaluating existing algorithms for flood mapping through change detection [21]. A change detection technique based on image differencing has been introduced to enable automated and reliable flood extent extraction from VHR TerraSAR-X images [22]. Moreover, the change detection and thresholding (CDAT) method has been developed to delineate the extent of flooding over the Chobe floodplain in the Caprivi region of Namibia [23].
Among the above-mentioned approaches, the simple difference image, obtained by subtracting the pixel values of one image from those of another, is one of the main sources of potential change information, as it contains clues about spectral changes. Pixels of the difference image with significantly large values correspond to regions with a high probability of change. Changes are then identified by thresholding the difference image according to empirical strategies or statistical methods. Obtaining the difference information and selecting an appropriate threshold to extract the change information are the key steps in change detection from multi-temporal remote sensing images. However, the critical limitation of change detection based on the difference image is that the result relies heavily on spectral features [24]. Because only the spectral values of pixels are considered when computing the difference information, considerable noise arises from radiometric and geometric differences between the images [25]. Since pixels are not spatially independent and noise pixels strongly affect change detection, differences based on spectral features alone may fail to reveal the changes in VHR bi-temporal satellite images.
The main objective of this study is to develop a new change detection approach for detecting flooded areas and generating a flood hazard map from VHR bi-temporal satellite images. To do this, we take advantage of the spectral distortion that occurs during the image fusion process to detect areas changed by the flood. This concept rests on the fact that changed areas exhibit spectral distortion after image fusion due to the spectral and spatial differences between the bi-temporal images. A change candidate image is extracted from the fused image generated from the bi-temporal images using the universal image quality index (UIQI), which can be applied locally to evaluate spectral distortion. Finally, the flood extent is detected by an automated thresholding method.
The remainder of the paper is organized as follows. The experimental data used in this study are presented in Section 2. The methodology is described in Section 3. In Section 4, we apply our algorithm to the KOMPSAT-2 bi-temporal images and compare our result with those generated using existing CVA-, MAD-, and SVM-based analyses. Conclusions are presented in Section 5.

2. Image Preparation

In this study, a bi-temporal dataset acquired by the KOMPSAT-2 satellite over the city of N′Djamena in Chad is used to evaluate the performance and feasibility of our methodology. N′Djamena is the largest city of Chad, and the topography of the region is relatively flat. In this region, flooding along the river is a frequent consequence of heavy rainfall caused by tropical cyclones. The images were acquired on 22 June 2010 and 14 October 2012, respectively. The specifications of the data are described in Table 1. Even though the two images were acquired two years apart, the scene exhibits a high proportion of changes due to the significant flooding event, as shown in Figure 1. Since the images were taken with different off-nadir angles, they show geometric dissimilarity. To address this problem, the dataset must be geo-referenced to a common coordinate system using image registration. We manually co-registered the images using ENVI image processing software, achieving a root mean square error (RMSE) of around 0.5 pixels.
Figure 1. The fused images generated using the GSA image fusion method: (a) the F1 GSA-fused image generated from the KOMPSAT-2 satellite images collected before the flood event and (b) the F2 GSA-fused image generated from the KOMPSAT-2 satellite images collected after the flood event.
Table 1. KOMPSAT-2 satellite data characteristics.

                          Before Flood Event        After Flood Event
Acquisition date          22/06/2010                14/10/2012
Image size (pixels)       PAN: 4000 × 4000          PAN: 4000 × 4000
                          MS: 1000 × 1000           MS: 1000 × 1000
Spatial resolution        PAN: 1 m                  PAN: 1 m
                          MS: 4 m                   MS: 4 m
Radiometric resolution    10 bit                    10 bit
Off-nadir angle           24°
Processing level          Level 1R                  Level 1R

3. Change Detection Approach Based on Cross-Fused Image

Unlike classical unsupervised change detection methods, which generally rely on difference images, our approach is based on the analysis of spectral distortion that occurs during the image fusion process. Image fusion is defined as the process of combining relevant information from two or more images into a single image [26]. When a fused image is generated from images acquired at different times, spatial and spectral distortions inevitably occur within the image due to the dissimilarity between the multi-temporal images. In this case, the spectrally distorted areas of the fused image can be considered candidate changed areas. Within this framework, we focus on discriminating between the two opposite classes of changed and unchanged pixels caused by flooding in the fused image. Let us consider two VHR satellite datasets $F_1$ and $F_2$, each consisting of a panchromatic (PAN) image and a four-band multispectral (MS) image. The $F_1$ and $F_2$ datasets are acquired over the same geographical area at different times $t_1$ and $t_2$, before and after the flooding. To clarify the concept and procedure of the proposed change detection technique, a schematic diagram is given in Figure 2. Each step of the procedure is explained in detail below.
Figure 2. Conceptual workflow of the proposed methodology for flood extent extraction using bi-temporal VHR satellite images.

3.1. Gram-Schmidt Adaptive (GSA) Image Fusion

The spatial resolution of MS images is usually lower than that of the PAN image captured by the same satellite. In general, image fusion methods aim to improve the spatial information of the original MS images by using the spatial details of the VHR PAN image in situations where ideal VHR MS images cannot be obtained due to the technical limitations of the satellite sensor [26]. Most image fusion methods follow a general protocol that can be broadly summarized in two steps: (1) extraction of high-frequency spatial information from the PAN image; and (2) injection of these spatial details into the resized MS images by exploiting different models [27]. A general fusion framework can be defined as [28]
$$ MS_n^h = MS_n^l + \omega_n \cdot (P^h - P^l) \qquad (1) $$
where $MS_n^h$ is the fused image of the $n$th band, $MS_n^l$ is the MS image of the $n$th band resampled to the spatial resolution of the PAN image, $P^h$ is the PAN image, $P^l$ is a synthetic image with a spatial resolution equivalent to $P^h$, and $\omega_n$ determines the amount of spatial detail added to the resized MS bands.
GSA is a representative component substitution (CS)-based fusion algorithm. In GSA, $P^l$ is determined by a multivariate linear regression between the resized MS dataset and the PAN image, while $\omega_n$ is set proportional to the covariance between $P^l$ and the resized MS bands [29]. Following the GSA image fusion method, we generate the fused image for the $F_2$ dataset ($F_2$-GSA) acquired after the flood event.
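To make this protocol concrete, the following minimal sketch implements Equation (1) with a GSA-style synthetic $P^l$ obtained by multivariate linear regression and covariance-proportional injection gains. It is our own illustration under stated assumptions (bands-first array layout, MS already resampled to the PAN grid), not the authors' implementation.

```python
# Minimal sketch of the CS fusion protocol of Equation (1) with GSA-style
# weights. Assumes ms_resized is already resampled to the PAN grid.
import numpy as np

def gsa_like_fuse(ms_resized: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """ms_resized: (bands, H, W) MS image on the PAN grid; pan: (H, W)."""
    n_bands, h, w = ms_resized.shape
    ms = ms_resized.reshape(n_bands, -1).astype(np.float64)
    p_h = pan.reshape(-1).astype(np.float64)

    # Synthetic low-resolution pan P^l: multivariate linear regression of
    # the PAN image on the resized MS bands (the adaptive step in GSA).
    design = np.vstack([ms, np.ones_like(p_h)]).T
    coeffs, *_ = np.linalg.lstsq(design, p_h, rcond=None)
    p_l = design @ coeffs

    detail = p_h - p_l                     # high-frequency detail P^h - P^l
    fused = np.empty_like(ms)
    for n in range(n_bands):
        # Injection gain w_n proportional to cov(MS_n, P^l), as in GSA [29]
        w_n = np.cov(ms[n], p_l)[0, 1] / p_l.var()
        fused[n] = ms[n] + w_n * detail    # Equation (1)
    return fused.reshape(n_bands, h, w)
```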

3.2. Empirical Scene Normalization

The near-infrared (NIR) band is a useful information source for detecting flooded areas, since water absorbs strongly in this part of the spectrum and flooded areas therefore appear very dark in the NIR band. The idea of this paper is based on the assumption that if the $F_1$ MS image is fused with the NIR band of $F_2$, serious spectral distortion will be produced in the flooded areas of the fused image. The $F_2$ NIR band is extracted directly from the GSA-fused image of $F_2$ for the generation of a cross-fused image.
Before generating the cross-fused image, the $F_2$ NIR band must be normalized to minimize the radiometric difference with the $F_1$ dataset (i.e., the images acquired before the flood event) caused by different atmospheric conditions, solar illumination, and view angles. In this paper, we used the empirical line calibration (ELC) method [30] for the radiometric normalization between the $F_2$ NIR band and the $F_1$ PAN image. This method involves the selection of pseudo-invariant features (PIFs) whose reflectance values are nearly invariant over time. Ten PIFs were manually selected throughout the study area. The $F_2$ NIR band was radiometrically normalized to the $F_1$ PAN image using the regression derived from the selected PIFs, and the normalized $F_2$ NIR′ band was then used to generate the cross-fused image.
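A minimal sketch of this normalization step is given below: a linear empirical line is fitted over the sampled PIF pixels and applied to the whole $F_2$ NIR band. The function name and the PIF coordinate arrays are illustrative placeholders, not the exact procedure used in the paper.

```python
# Empirical-line-style normalization over pseudo-invariant features (PIFs).
import numpy as np

def elc_normalize(src: np.ndarray, ref: np.ndarray,
                  pif_rows: np.ndarray, pif_cols: np.ndarray) -> np.ndarray:
    """Map `src` (e.g., the F2 NIR band) to the radiometry of `ref`
    (e.g., the F1 PAN image) using values sampled at PIF locations."""
    x = src[pif_rows, pif_cols].astype(np.float64)  # source DNs at the PIFs
    y = ref[pif_rows, pif_cols].astype(np.float64)  # reference DNs at the PIFs
    gain, offset = np.polyfit(x, y, deg=1)          # empirical line fit
    return gain * src.astype(np.float64) + offset   # normalized NIR' band
```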

3.3. Cross-Fused Image Generation

GSA is a well-known CS-based fusion method that effectively injects spatial details into the fused image. The major drawback of CS-based fusion is spectral distortion, also called color (or radiometric) distortion, characterized by a tendency of one color to predominate over the others. This spectral distortion is caused by the mismatch between the spectral responses of the MS and PAN bands owing to their different bandwidths [31]. In this paper, we aim to enhance flood change detection performance by intentionally increasing this spectral distortion in the flooded regions. To this end, as described in Section 3.2, we use the $F_2$ NIR′ band instead of the $F_1$ PAN image to generate the cross-fused image. The resulting fused image exhibits significant spectral distortion in the flooded areas while maintaining the radiometric characteristics of the permanent water body. This is because the mismatch of spectral response is very severe outside the NIR spectral range; the bandwidth of the NIR band is much narrower than that of the PAN image. This characteristic helps to distinguish permanent water bodies from flooded areas during flood change detection.

3.4. Generation of Change Candidate Image Using UIQI Index

We consider spectral distortion in the cross-fused image an indication of changed area. A certain amount of undesirable spectral distortion additionally occurs when the cross-fused image is generated from two images acquired at different times. This distortion arises mainly from spatial inconsistency and shape disagreement between images taken from different geometric viewpoints, and it may cause substantial false alarms in regions with a high degree of spatial inconsistency, such as densely populated urban areas.
To alleviate this problem and quantify the change candidate areas, we use the UIQI, a representative window-based spectral distortion measure. It considers the context of local regions instead of relying on simple pixel-difference-based measures such as mean square error (MSE) and signal-to-noise ratio (SNR). The UIQI is easy to calculate and is robust against several types of image noise, such as white Gaussian, salt-and-pepper, mean-shift, and multiplicative noise [32]. In general, pixel-based and statistics-oriented change detection measures are sensitive to image noise because they focus mainly on spectral values and largely ignore spatial context [33].
We employed the UIQI to measure the distortion between the original and distorted images as a combination of three factors: loss of correlation $C_w$, luminance distortion $l_w$, and contrast distortion $S_w$ [32]. The first component $C_w$ is the correlation coefficient between the original and distorted images within the window mask, measuring their linear correlation. Its range is [−1, 1], and the best value is obtained when the distorted image equals the original image. The second component $l_w$ measures the closeness of the mean luminance of the two images, with range [0, 1]; it reaches its maximum when the means are identical. The variance of the signal can be viewed as an estimate of contrast, so the third component $S_w$ measures how similar the contrasts of the two images are. Its range is also [0, 1], and the best value is achieved when the variances are equal.
The UIQI is applied using a local moving window of size $N \times N$ that moves over the entire image, pixel by pixel, along the horizontal and vertical directions. The UIQI value at a generic position $(x, y)$ of an individual band is calculated as:
$$ \mathrm{UIQI}(x,y) = C_w(x,y)\, l_w(x,y)\, S_w(x,y) = \frac{\sigma_{xy}}{\sigma_x \sigma_y} \cdot \frac{2 \mu_x \mu_y}{\mu_x^2 + \mu_y^2} \cdot \frac{2 \sigma_x \sigma_y}{\sigma_x^2 + \sigma_y^2} \qquad (2) $$
where $\mu_x$ and $\mu_y$ denote the mean values of the original and distorted images within the window mask, $\sigma_x^2$ and $\sigma_y^2$ are the corresponding variances, and $\sigma_{xy}$ is the covariance between the two images within the mask.
According to the properties of the three factors, the UIQI takes high values in undistorted areas and low values in distorted ones, within the range [−1, 1]. This index has the advantage of taking local spatial properties into account, i.e., luminance, contrast, and correlation, whereas traditional pixel-based similarity measures focus solely on the spectral signature of each pixel. It is therefore well suited to detecting flood changes in VHR bi-temporal images. The UIQI is computed between the $F_1$ GSA-fused image (i.e., the fused image of the dataset acquired before the flood event) and the cross-fused image. An illustration of the UIQI measurement is shown in Figure 3.
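Since every term in Equation (2) is a local mean, variance, or covariance, the per-band UIQI map can be computed efficiently with box filters; the product of the three components collapses to $4\sigma_{xy}\mu_x\mu_y / ((\sigma_x^2+\sigma_y^2)(\mu_x^2+\mu_y^2))$. The sketch below is our own illustration; the window size and the small epsilon guarding flat windows are assumptions.

```python
# Sliding-window UIQI map (Equation (2)) via local moments from a box filter.
import numpy as np
from scipy.ndimage import uniform_filter

def uiqi_map(x: np.ndarray, y: np.ndarray, win: int = 64,
             eps: float = 1e-12) -> np.ndarray:
    """x: reference band (e.g., F1 GSA-fused); y: distorted band (e.g.,
    cross-fused). Returns the per-pixel UIQI in [-1, 1]."""
    x = x.astype(np.float64); y = y.astype(np.float64)
    mx, my = uniform_filter(x, win), uniform_filter(y, win)   # local means
    vx = uniform_filter(x * x, win) - mx * mx                 # local variances
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my                # local covariance
    # Correlation * luminance * contrast, combined in one expression;
    # eps only guards against division by zero in perfectly flat windows.
    return 4.0 * cxy * mx * my / ((vx + vy) * (mx * mx + my * my) + eps)
```

The change candidate image used in the next step is then obtained by averaging the per-band UIQI maps.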
Figure 3. Illustration of the universal image quality index (UIQI) measurement system.

3.5. Determination of the Final Flooded Area

We applied a thresholding method to the UIQI image to assign each pixel to one of the two opposite classes, namely flooded and un-flooded. The two classes can be separated as a binary classification problem, where the probability density function $p(x)$ of the image is a mixture of two parametric density functions associated with the flooded and un-flooded classes, i.e.,
$$ p(x) = P_1\, p_1(x \mid \theta_1) + P_2\, p_2(x \mid \theta_2) \qquad (3) $$
where $P_1$ and $P_2$ are the prior probabilities of the flooded and un-flooded classes, and $p_1(x \mid \theta_1)$ and $p_2(x \mid \theta_2)$ are the class-conditional densities associated with the flooded and un-flooded classes, respectively; $\theta_1$ and $\theta_2$ are the parameter vectors of the two class-conditional densities. The expectation maximization (EM) algorithm, assuming that the class-conditional densities follow Gaussian distributions, is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of the parameters of statistical models [34]. Each EM iteration alternates between an expectation step and a maximization step: the expectation step evaluates the expected log-likelihood using the current estimates of $P_i$ and $\theta_i$ ($i = 1, 2$); the maximization step computes the parameters that maximize this expected log-likelihood. The estimated parameters are then used in the next expectation step. The EM algorithm is sensitive to the initial values of the parameters to be estimated because the total likelihood may have local maxima in the parameter space; inappropriate initial values can lead to an unsatisfactory estimate of the class distributions. Several methods addressing this issue are reported in the literature [35]. In this paper, the EM algorithm is initialized with the result of Otsu's method [36]. The final change detection image is generated from the estimated parameters, and the Bayes rule that minimizes the overall probability of error is applied to select the decision threshold in the change detection process.
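As a concrete illustration of this thresholding step, the sketch below fits the two-component Gaussian mixture of Equation (3) to the flattened mean UIQI image with EM, initialized by a simple histogram-based Otsu split, and labels pixels as flooded by the Bayes minimum-error rule. The iteration count, bin count, and the convention that the low-UIQI component is the flooded class are our assumptions.

```python
# EM estimation of the two-class Gaussian mixture of Equation (3), with
# Otsu initialization and a Bayes minimum-error decision.
import numpy as np

def otsu_threshold(v: np.ndarray, bins: int = 256) -> float:
    """Histogram-based Otsu split maximizing between-class variance."""
    hist, edges = np.histogram(v, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist).astype(np.float64)
    w1 = w0[-1] - w0
    csum = np.cumsum(hist * centers)
    m0 = csum / np.maximum(w0, 1)
    m1 = (csum[-1] - csum) / np.maximum(w1, 1)
    return centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

def em_flood_mask(v: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """v: 1D array of mean UIQI values. Returns a boolean flooded mask."""
    low = v < otsu_threshold(v)            # initialization: low UIQI = flooded
    P = np.array([low.mean(), 1.0 - low.mean()])       # priors P_1, P_2
    mu = np.array([v[low].mean(), v[~low].mean()])     # class means
    var = np.array([v[low].var(), v[~low].var()])      # class variances
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for every pixel
        g = P * np.exp(-(v[:, None] - mu) ** 2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        r = g / g.sum(axis=1, keepdims=True)
        # M-step: re-estimate priors, means, and variances
        P, s = r.mean(axis=0), r.sum(axis=0)
        mu = (r * v[:, None]).sum(axis=0) / s
        var = (r * (v[:, None] - mu) ** 2).sum(axis=0) / s
    # Bayes minimum-error rule: flooded where P_1 p_1(x) >= P_2 p_2(x)
    g = P * np.exp(-(v[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return g[:, 0] >= g[:, 1]
```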

4. Experimental Result and Accuracy Assessment

Two experiments were conducted to evaluate the performance and feasibility of our algorithm. To check whether the cross-fused image is effective for extracting the flood inundation area, we first compared our method with one based on the GSA-fused images of each dataset (Figure 1), hereafter called the MFI method. The MFI method consists of the following steps: (1) generate the GSA-fused images of the $F_1$ and $F_2$ datasets, respectively; (2) calculate the UIQI between the $F_1$ GSA- and $F_2$ GSA-fused images; and (3) determine the changed area using the EM algorithm. The window size for measuring the UIQI in both the proposed and the MFI methods was set to 64, and the threshold was automatically selected by applying the EM algorithm to a mean image obtained by averaging the UIQI images of the individual bands (Figure 4). The flood extents extracted by both methods are shown in Figure 5e,f.
To evaluate the results quantitatively, a ground-truth map was produced from the original image by manually digitizing the flooded area, as shown in Figure 5a. In constructing the ground-truth map, we considered only the visually flood-affected areas along the river, because it is hard to track down all the changes in the urban residential district and our focus is on changes caused by the flood. By comparing the ground-truth image with the flood extraction results, we obtain change detection accuracies. The error matrix method was applied for the accuracy assessment of the tested methodologies; from the error matrix of each method, the commission error (CE), omission error (OE), and overall accuracy (OA) were calculated [37].
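For reference, the sketch below shows how the CE, OE, and OA values reported in Table 2 follow from the binary error matrix of a predicted flood mask against the digitized ground truth; the function and variable names are ours.

```python
# CE, OE, and OA from the binary error matrix of a flood mask.
import numpy as np

def accuracy_report(pred: np.ndarray, truth: np.ndarray):
    """pred, truth: boolean flood masks of identical shape."""
    tp = np.sum(pred & truth)       # flood pixels correctly detected
    fp = np.sum(pred & ~truth)      # commission: detected but not flooded
    fn = np.sum(~pred & truth)      # omission: flooded but missed
    tn = np.sum(~pred & ~truth)
    ce = 100.0 * fp / (tp + fp)     # commission error (%)
    oe = 100.0 * fn / (tp + fn)     # omission error (%)
    oa = 100.0 * (tp + tn) / (tp + fp + fn + tn)   # overall accuracy (%)
    return ce, oe, oa
```

Applied to the pixel counts in Table 2 for the proposed method (TP = 4,567,422, FP = 2,307,765, FN = 1,684,308, TN = 7,440,505), this reproduces CE = 33.56%, OE = 26.94%, and OA = 75.04%.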
According to the quantitative results shown in Table 2, the MFI method has the highest OA, but this does not establish that it is better than the proposed method, because the MFI result contains excessive false detections within the permanent water body when compared with the ground-truth image. In other words, the MFI method cannot separate the permanent water body from the flooded area. Figure 5f shows that the use of the cross-fused image allowed a more precise identification of the flooded area and performed well in differentiating the flooded area from the permanent water body.
Figure 4. The component images generated from the experiment: (a) MAD components (RGB color composite: 4 3 2); (b) the cross-fused image (RGB color composite: 3 2 1); (c) the mean UIQI generated from the MFI method; (d) the mean UIQI generated from the proposed method.
As another way to examine the performance of our algorithm, we compared our result with those generated by the CVA-, MAD-, and SVM-based change detection methods. The original bi-temporal GSA-fused images (Figure 1) were used as the input dataset for these methods. For the CVA-based method, the EM algorithm was applied to the magnitude image to automatically determine an optimal decision threshold. For the MAD-based method, multi-level thresholding based on the EM algorithm was applied to the mean of the MAD components to select two optimal thresholds; pixels with values greater than the upper threshold or less than the lower threshold were labeled as flooded [38]. To apply the SVM-based method, training pixels for the flooded and un-flooded classes must be selected on the multispectral difference image; this was done through visual inspection. When using the SVM-based method, the user faces many possible choices of kernel function, which commonly yield different results. In this paper, we used the radial basis function (RBF) kernel, which handles cases in which the relationship between class labels and attributes is nonlinear. The gamma value determining the RBF kernel width and the penalty parameter controlling the margin error were set to 0.333 and 100, respectively.
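As a reference point for these baselines, the CVA magnitude image mentioned above is simply the Euclidean norm of the per-pixel spectral difference vector between the two fused images; a minimal sketch follows (the bands-first array layout is our assumption), after which the same EM thresholding as in Section 3.5 can be applied.

```python
# CVA change magnitude: Euclidean norm of the spectral difference vector.
import numpy as np

def cva_magnitude(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """img_t1, img_t2: co-registered (bands, H, W) images. Returns (H, W)."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    return np.sqrt((diff ** 2).sum(axis=0))  # per-pixel change magnitude
```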
Figure 5 and Figure 6 show the change detection results of the different methods, and the detailed quantitative results are given in Table 2. Figure 5 covers the whole study area, allowing an initial visual assessment of the flood extent extraction results. Figure 6 shows sub-images extracted from different regions of Figure 5: red represents correctly extracted flood pixels, whereas blue and yellow represent commission and omission errors, respectively. The results are overlaid on the original PAN image collected before the flood event. At first glance, the masks obtained with the SVM and the proposed methods are more consistent with the actual changes than those of the CVA- and MAD-based methods. Upon close inspection of the results against the ground-truth image (Figure 5a), the flood extent extracted by the SVM method appears over-estimated compared with that of the proposed method; the permanent water body was even incorrectly classified as flooded area by the SVM method (Figure 5d). As shown in Table 2, the OA of the proposed method was 75.04%. The SVM-based change detection method produced a slightly better result, with an OA 1.13% higher than that of the proposed method. However, obvious CE exists throughout the permanent water body when compared with the original image taken before the flood event (Figure 1a). Moreover, the SVM method needs training pixels for the two classes, and therefore requires additional manual intervention. Although the proposed method performs satisfactorily in flood extent extraction, it produces false positives in some regions far from the central flooded region, such as the upper right part of Figure 5f. These arise from spectral distortion caused by residual atmospheric effects remaining after radiometric correction and from spatial inconsistency caused by the different look angles of the bi-temporal VHR imagery. To remove these false positives, it would seem preferable to consider only the pixels close to the largest central water regions. To increase the accuracy of the proposed change detection method, we will therefore further study rules that retain the flooded areas while disregarding false positives far from the river.
Figure 5. Result images of flooded area extraction using the tested methods: (a) ground-truth image; (b) MAD result; (c) CVA result; (d) SVM result; (e) MFI result; (f) result of the proposed method. The extracted flood pixels according to each method are represented in red.
Figure 6. Sub-images extracted from different regions of Figure 5: (a) a location map of each sub-image; (b) MAD result; (c) CVA result; (d) SVM result; (e) MFI result; (f) result of the proposed method. The red color represents the correctly extracted flood pixels, the blue color shows the commission error, and the yellow color shows the omission error.
Table 2. Accuracy assessment of the tested change detection methods: (F) Flood, (NF) No Flood, (OE) Omission error, (CE) Commission error, (OA) Overall accuracy.

Classified Change              Reference F (Pixels)   Reference NF (Pixels)   OE (%)   CE (%)   OA (%)
MAD        F (pixels)          2496                   144,629                 99.96    98.30    60.03
           NF (pixels)         6,249,234              9,603,641
CVA        F (pixels)          230,216                140,919                 96.32    37.97    61.48
           NF (pixels)         6,021,514              9,607,351
SVM        F (pixels)          5,091,425              2,653,158               18.56    34.26    76.17
           NF (pixels)         1,160,305              7,095,112
MFI        F (pixels)          5,742,451              3,180,925               8.14     35.65    76.94
           NF (pixels)         509,279                6,567,345
Proposed   F (pixels)          4,567,422              2,307,765               26.94    33.56    75.04
           NF (pixels)         1,684,308              7,440,505

5. Conclusions

In this paper, we proposed a novel unsupervised change detection methodology for flood extent extraction based on a combination of image fusion and a spectral distortion measure. Experimental results on bi-temporal KOMPSAT-2 VHR images showed that the proposed approach visually produces good results for flooded areas compared with traditional CVA-, MAD-, and SVM-based change detection methods. The OA obtained by the proposed method was 75.04%, close to that of the SVM-based supervised change detection method. The proposed method is insensitive to image noise thanks to the use of context information through the UIQI, whereas traditional pixel-based change detection techniques focus on spectral values only; it can therefore achieve a lower false alarm rate than the conventional methods. The cross-fused image was also found to enable a more precise identification of the flooded area and to perform well in differentiating the flooded area from the permanent water body. This separation allows us to estimate the scale of flood damage and to make recovery decisions accordingly.
It is worth noting that the proposed method relies only on the NIR band of the post-flood image for flood change detection; that is, it focuses on flood-related areas, which are sensitive to the NIR band. This is why the proposed method produced some false positives in regions unrelated to water. Nevertheless, the proposed method has a strong advantage for flood extent extraction because it can separate the flooded area from the permanent water body. The proposed method is expected to improve the CE over the other methods when the site consists mainly of flood-related areas.
To increase the accuracy of flood extent extraction, our future research will focus on a more precise framework to suppress false positives in urban areas. We will apply the proposed approach to different flood-affected sites to confirm the robustness of the method. The effects of different data fusion algorithms will also be investigated.

Acknowledgments

This research was supported by the Korea Aerospace Research Institute (KARI) and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2014R1A6A3A03055270).

Author Contributions

Younggi Byun designed the study and analyzed the results. Taebyeong Chae was involved in the acquisition of the KOMPSAT-2 data. Youkyung Han pre-processed the data in order to apply the proposed technique. Younggi Byun prepared the manuscript in collaboration with Youkyung Han. All authors revised and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hallegatte, S.; Green, C.; Nicholls, R.J.; Corfee-Morlot, J. Future flood losses in major coastal cities. Nat. Clim. Chang. 2013, 3, 802–806. [Google Scholar] [CrossRef]
  2. Lamovec, P.; Veljanovski, T.; Mikoš, M.; Oštir, K. Detecting flooded areas with machine learning techniques: Case study of the Selska Sora river flash flood in September 2007. J. Appl. Remote Sens. 2013, 7, 073564. [Google Scholar]
  3. Sanyal, J.; Lu, X.X. Application of remote sensing in flood management with special reference to Monsoon Asia: A review. Nat. Hazards 2004, 33, 283–301. [Google Scholar] [CrossRef]
  4. Haq, M.; Akhtar, M.; Muhammad, S.; Paras, S.; Rahmatullah, J. Techniques of remote sensing and GIS for flood monitoring and damage assessment: A case study of Sindh province, Pakistan. Egypt. J. Remote Sens. Sp. Sci. 2012, 15, 135–141. [Google Scholar] [CrossRef]
  5. Schnebele, E.; Cervone, G. Improving remote sensing flood assessment using volunteered geographical data. Nat. Hazards Earth Syst. Sci. 2013, 13, 669–677. [Google Scholar] [CrossRef]
  6. Ticehurst, C.; Guerschman, J.P.; Chen, Y. The strengths and limitations in using the daily MODIS open water likelihood algorithm for identifying flood events. Remote Sens. 2014, 6, 11791–11809. [Google Scholar] [CrossRef]
  7. Jung, Y.; Kim, D.; Kim, D.; Kim, M.; Lee, S.O. Simplified flood inundation mapping based on flood elevation-discharge rating curves using satellite images in gauged watersheds. Water 2014, 6, 1280–1299. [Google Scholar] [CrossRef]
  8. Dewan, A.M.; Islam, M.M.; Kumamoto, T.; Nishigaki, M. Evaluating flood hazard for land-use planning in Greater Dhaka of Bangladesh using remote sensing and GIS techniques. Water Resour. Manag. 2007, 21, 1601–1612. [Google Scholar] [CrossRef]
  9. Martinez, J.M.; Toan, T.L. Mapping of flood dynamics and spatial distribution of vegetation in the Amazon floodplain using multitemporal SAR data. Remote Sens. Environ. 2007, 108, 209–223. [Google Scholar] [CrossRef]
  10. Kuenzer, C.; Guo, H.; Schlegel, I.; Tuan, V.Q.; Li, X.; Dech, S. Varying scale and capability of Envisat ASAR-WSM, TerraSAR-X Scansar and TerraSAR-X Stripmap data to assess urban flood situations: A case study of the Mekong Delta in Can Tho province. Remote Sens. 2013, 5, 5122–5142. [Google Scholar] [CrossRef]
  11. Wierzbicki, G.; Ostrowski, P.; Mazgajski, M.; Bujakowski, F. Using VHR multispectral remote sensing and LIDAR data to determine the geomorphological effects of overbank flow on a floodplain (the Vistula River, Poland). Geomorphology 2013, 183, 73–81. [Google Scholar] [CrossRef]
  12. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  13. Nemmour, H.; Chibani, Y. Multiple support vector machines for land cover change detection: An application for mapping urban extension. ISPRS J. Photogramm. Remote Sens. 2006, 61, 125–133. [Google Scholar] [CrossRef]
  14. Chen, J.; Chen, X.; Cui, X. Change vector analysis in posterior probability space: A new method for land cover change detection. IEEE Geosci. Remote Sens. Lett. 2011, 8, 317–321. [Google Scholar] [CrossRef]
  15. Renza, D.; Martinez, E.; Arquero, A. A new approach to change detection in multispectral images by means of ERGAS index. IEEE Geosci. Remote Sens. Lett. 2013, 10, 76–80. [Google Scholar] [CrossRef]
  16. Canty, M.J.; Nielsen, A.A. Automatic radiometric normalization of multitemporal satellite imagery with the iteratively re-weighted MAD transformation. Remote Sens. Environ. 2008, 112, 1025–1036. [Google Scholar] [CrossRef]
  17. Marpu, P.R.; Gamba, P.; Canty, M.J. Improving change detection results of IR-MAD by eliminating strong changes. IEEE Geosci. Remote Sens. Lett. 2011, 8, 799–803. [Google Scholar] [CrossRef]
  18. Canty, M.J.; Nielsen, A.A. Linear and kernel methods for multivariate change detection. Comput. Geosci. 2012, 38, 107–114. [Google Scholar] [CrossRef]
  19. Dhakal, A.S.; Amada, T.; Aniya, M.; Sharma, R.R. Detection of areas associated with flood and erosion caused by a heavy rainfall using multispectral Landsat TM data. Photogramm. Eng. Remote Sens. 2002, 68, 233–239. [Google Scholar]
  20. Nielsen, A.A.; Conradsen, K.; Simpson, J.J. Multivariate alteration detection (MAD) and MAF processing in multispectral, bitemporal image data: New approaches to change detection studies. Remote Sens. Environ. 1998, 64, 1–19. [Google Scholar] [CrossRef]
  21. Longbotham, N.; Pacifici, F.; Glenn, T.; Zare, A.; Volpi, M.; Tuia, D.; Christophe, E.; Michel, J.; Inglada, J.; Chanussot, J.; et al. Multi-modal change detection, application to the detection of flooded areas: Outcome of the 2009–2010 data fusion contest. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2012, 5, 331–342. [Google Scholar] [CrossRef]
  22. Giustarini, L.; Hostache, R.; Matgen, P.; Schumann, G.J.P.; Bates, P.D.; Mason, D.C. A change detection approach to flood mapping in urban areas using TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1301–1312. [Google Scholar] [CrossRef]
  23. Long, S.; Fatoyinbo, T.E.; Policelli, F. Flood extent mapping for Namibia using change detection and thresholding with SAR. Environ. Res. Lett. 2014, 9, 35002–35011. [Google Scholar] [CrossRef]
  24. Almutairi, A.; Warner, T.A. Change detection accuracy and image properties: A study using simulated data. Remote Sens. 2010, 2, 1508–1529. [Google Scholar] [CrossRef]
  25. Bruzzone, L.; Cossu, R. An adaptive approach to reducing registration noise effects in unsupervised change detection. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2455–2465. [Google Scholar] [CrossRef]
  26. Zhang, Y. Understanding image fusion. Photogramm. Eng. Remote Sens. 2004, 70, 657–661. [Google Scholar]
  27. Witharana, C.; Civco, D.L.; Meyer, T. Evaluation of pansharpening algorithms in support of earth observation based rapid mapping workflow. Appl. Geogr. 2013, 37, 63–87. [Google Scholar] [CrossRef]
  28. Dou, W.; Chen, Y.; Li, X.; Sui, D.Z. A general framework for component substitution image fusion: An implementation using the fast image fusion method. Comput. Geosci. 2007, 33, 219–228. [Google Scholar] [CrossRef]
  29. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS+pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  30. Janzen, D.T.; Fredeen, A.L.; Wheate, R.D. Radiometric correction techniques and accuracy assessment for Landsat TM data in remote forested regions. Can. J. Remote Sens. 2006, 32, 330–340. [Google Scholar] [CrossRef]
  31. Thomas, C.; Ranchin, T.; Wald, L.; Chanussot, J. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312. [Google Scholar] [CrossRef]
  32. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  33. Verbesselt, J.; Hyndman, R.; Zeileis, A.; Culvenor, D. Phenological change detection while accounting for abrupt and gradual trends in satellite image time series. Remote Sens. Environ. 2010, 114, 2970–2980. [Google Scholar] [CrossRef]
  34. Moon, T.K. The expectation-maximization algorithm. IEEE Signal Process. Mag. 1996, 13, 47–60. [Google Scholar] [CrossRef]
  35. Roberts, S.; Husmeier, D.; Rezek, I.; Penny, W. Bayesian approaches to Gaussian mixture modeling. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1133–1142. [Google Scholar] [CrossRef]
  36. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar]
  37. Congalton, R.G. A review of assessing the accuracy of classification of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  38. Canty, M.J.; Nielsen, A.A. Visualization and unsupervised classification of changes in multispectral satellite imagery. Int. J. Remote Sens. 2006, 27, 3961–3975. [Google Scholar] [CrossRef]
