Article

A Novel Change Detection Method for Natural Disaster Detection and Segmentation from Video Sequence

1 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
2 Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
3 Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(18), 5076; https://doi.org/10.3390/s20185076
Submission received: 22 July 2020 / Revised: 27 August 2020 / Accepted: 2 September 2020 / Published: 7 September 2020

Abstract

Change detection (CD) is critical for natural disaster detection, monitoring and evaluation. Video satellites, a new type of satellite launched recently, are able to record motion change during natural disasters. This raises a new problem for traditional CD methods, which can only detect areas with strongly changed radiometric and geometric information. Optical flow-based methods are able to perform pixel-based motion tracking at high speed; however, it is difficult to determine an optimal threshold for separating the changed from the unchanged part in CD problems. To overcome these problems, this paper proposes a novel automatic change detection framework, OFATS (optical flow-based adaptive thresholding segmentation). Combining the characteristics of optical flow data, a new objective function based on the ratio of maximum between-class variance to minimum within-class variance is constructed, and the two key steps are motion detection based on optical flow estimation using a deep learning (DL) method and changed-area segmentation based on adaptive threshold selection. Experiments are carried out using two groups of video sequences, which demonstrate that the proposed method is able to achieve high accuracy, with F1 values of 0.98 and 0.94, respectively.

1. Introduction

Natural disasters, such as earthquakes, tsunamis, floods and landslides, have shown a dramatic and global increasing trend, both in frequency and intensity [1,2,3]. Accurate determination of changes in ground features associated with destructive disaster events is crucial to quick disaster response, post-disaster reconstruction and financial planning [4]. Change detection (CD) using remote sensing data can effectively capture changes before and after disasters [5,6,7] and has been widely used in various fields of natural disaster management, such as flood monitoring [8], landslide displacement tracking [9,10], earthquake damage assessment [11,12] and relief priority mapping [13,14].
With the continuing growth of earth observation techniques and computer technology, massive amounts of remote sensing data with different spectral, spatial and temporal resolutions are available for surveying and assessing changes caused by natural disasters, which greatly promotes the development of change detection methodologies. Many change detection approaches for natural disaster scenes have been proposed; they can be broadly divided into traditional and deep learning (DL)-based methods [15]. Among traditional CD methods, the simplest approaches are algebra-based. Hall and Hay [16] first segmented two panchromatic SPOT images observed at different times and then detected changes through an image differencing method. Matsuoka et al. [17], on the basis of the difference between the backscattering coefficient and the correlation coefficient observed in an earthquake, applied supervised classification of the pre- and post-event optical images to map the distribution of damaged areas in Bam. These direct algebraic operations are easy to implement but often generate noisy outputs, such as isolated pixels or holes in the changed objects; thus, transformations and models have been introduced into CD research. Sharma et al. [18] completed a rapid damage assessment of landslides by pseudo-color transformation and extraction of the landslide-affected area from pre- and post-earthquake Landsat-8 images. Lee et al. [19] proposed an optimization algorithm based on the Stepwise Weight Assessment Ratio Analysis (SWARA) model and a geographic information system to assess seismic vulnerability. To overcome the limitations of a single band and improve change identification, fusing datasets acquired from different remote sensors with geographical data is paramount for monitoring the environmental impacts of natural disasters. ElGharbawi et al. [20] estimated the crustal deformation caused by the 2011 Tohoku earthquake by combining two deformation patterns derived from Synthetic Aperture Radar (SAR) and GPS data. To determine changed buildings in forested areas, Du et al. [21] adopted a graph cuts method that accounts for spatial relationships, taking grey-scale similarity from old aerial images and the height difference from a Digital Surface Model (DSM) generated from LiDAR data as two change detection indexes to optimize building detection.
Owing to the rapid development of computer technology, research on change detection has turned to integrating deep learning techniques in recent years. Deep learning-based methods have shown promising potential through the extraction of high-level features. Saha et al. [22] detected collapsed buildings from SAR images using deep features, and Ji et al. [23] fed texture and convolutional neural network features into a random forest classifier to detect destroyed buildings using pre- and post-disaster remote sensing images. To achieve higher accuracy, new neural networks have been introduced into disaster monitoring research. Ci et al. [24] proposed a novel Convolutional Neural Network (CNN) model combining a CNN feature extractor, a new loss function and an ordinal regression classifier to evaluate the degree of building damage caused by earthquakes using aerial imagery. Peng et al. [25] utilized an end-to-end CD method named UNet++ to fuse multiple feature maps from different semantic segmentation levels and generate a final change map with high accuracy. Yavariabdi et al. [26] proposed a change detection method based on a multiobjective evolutionary algorithm (MOEA), which is robust to atmospheric changes in multispectral Landsat images. In this method, the structural similarity index measure (SSIM) is used to generate the difference image; MOEA is then applied to obtain a set of binary change masks by iteratively minimizing two objective functions for changed and unchanged regions, and the final binary mask is optimally fused by a Markov Random Field (MRF). With the purpose of improving efficiency, Ghaffarian et al. [27] proposed an extended U-Net based on deep residual learning (ResUnet) followed by a Conditional Random Field (CRF) implementation to update post-disaster building databases from very high resolution imagery. Alizadeh et al. [12] established a hybrid framework of Analytic Network Process (ANP) and Artificial Neural Network (ANN) models for earthquake vulnerability assessment. To avoid labeling a massive number of samples for network training, transfer learning has received increased attention. Pi et al. [28] employed transfer learning to train eight CNN models based on You-Only-Look-Once (YOLO) to recognize undamaged building roofs in disaster-affected areas. Transfer learning was also used by Kung et al. [29] for disaster management through a combination of data augmentation, a reference model and an augmented model. Li et al. [30] proposed SDPCANet, which combines PCANet and saliency detection for change detection on SAR images, effectively reducing the number of required training samples while maintaining high change detection performance.
Recently, the development of commercial video satellites and the spread of mobile devices have made it possible to thoroughly monitor the changing process during natural disasters. For example, high resolution video sequences from video satellites, such as SkySat and Jilin-1, can provide valuable data during different disaster phases [31,32]. Thus, change detection can now move from pre- and post-event image analysis to near real-time disaster monitoring using video sequences. Although change detection for natural disasters has been researched for years in the remote sensing community, most studies focus on pre- and post-disaster satellite imagery, while change detection based on video satellites for natural disaster monitoring has rarely been studied. Video sequences that capture disaster motion bring a challenge for existing CD methods, because the color and texture of objects usually remain the same while only their positions change.
The aim of this paper is to explore an effective method for detecting disaster-induced motion change from video sequences. Optical flow, a technique from the field of computer vision, is well suited to this task owing to its fast speed and pixel-based motion tracking. However, fusing the optical flow result into a final change detection map is a challenge: generating the change map requires an empirical threshold on the optical flow, which may vary from case to case. Thus, this paper first investigates the motion detection property of a deep learning-based optical flow estimation algorithm and then proposes a novel change detection framework, OFATS, for video sequences of natural disasters, which combines the optical flow results with an adaptive thresholding segmentation algorithm built on a new objective function: the ratio of maximum between-class variance to minimum within-class variance.
The rest of this paper is organized as follows: Section 2 briefly reviews optical flow estimation methods. The proposed change detection method is introduced in Section 3. In Section 4, the effectiveness of the proposed method is tested and compared with some of the most commonly used CD methods on two different natural disaster datasets. Finally, the paper is concluded in Section 5.

2. Optical Flow Estimation Methods

Optical flow, which represents the displacement vectors of pixels between image frames, is widely used in motion tracking [33]. For example, optical flow has been used to detect human and animal movements [34,35] and medical organ motion [36,37], for robot and vehicle navigation [38,39], and to measure flow motion [40] as well as airfoil deformation and surface strain [41]. Under the assumption that pixel values remain invariant over a short displacement, optical flow methods can be separated into two categories: local computation methods based on the Lucas–Kanade (LK) method and global computation methods based on the Horn and Schunck (HS) formulation [42]. The LK method supposes that adjacent pixels within a sliding window share the same motion and remain locally constant [43]. However, the size of the sliding window is difficult to determine and affects the final accuracy [33]. HS assumes that the velocity field varies smoothly over the whole image, which better fits real scenes [44]. Horn and Schunck introduced the optical flow constraint equation, combining the velocity field and the gray values, to build a basic optical flow estimation algorithm [45]. However, these traditional optical flow computation methods often produce blurred boundaries and are hard to use in real time [46,47]. Convolutional neural networks (CNNs) have a strong ability for feature extraction and speckle noise suppression [15,48,49], which has attracted increasing attention in numerous computer vision tasks.
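For reference, the brightness constancy constraint and the HS energy functional mentioned above can be written as follows (a standard textbook formulation, not reproduced from this paper):

```latex
% Brightness constancy: a pixel keeps its intensity over a small displacement
I(x + u\,\delta t,\; y + v\,\delta t,\; t + \delta t) \approx I(x, y, t)
\;\;\Rightarrow\;\; I_x u + I_y v + I_t = 0

% Horn--Schunck: data term plus a global smoothness term weighted by \alpha^2
E(u, v) = \iint \Big[ (I_x u + I_y v + I_t)^2
        + \alpha^2 \big( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \big) \Big]\, dx\, dy
```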
FlowNet, proposed in 2015, is the first end-to-end optical flow estimation model built on CNNs. It uses an encoder-decoder structure made up of convolutional and deconvolutional layers, with additional crosslinks between the contracting and expanding parts of the network [50]. The encoder consists of nine convolutional layers with Rectified Linear Unit (ReLU) activations and is mainly used to compute abstract features over receptive fields of increasing size, while the decoder re-establishes the original resolution through an expanding upconvolutional architecture with four deconvolutional layers and ReLU activations. FlowNet proved to be a trainable network that can directly compute optical flow from two input images, but it is not competitive with fine-tuned traditional methods in accuracy [51]. On the basis of FlowNet, FlowNet 2.0, a novel end-to-end optical flow estimation network, was proposed at the end of 2016. It closes the accuracy gap to the state-of-the-art methods while running orders of magnitude faster, and is regarded as a milestone for CNN-based optical flow estimation [52]. This success benefits from three aspects: additional training data including small motions and real-world scenes, the stacking of multiple networks connected by warping operations, and a novel learning schedule over multiple fused datasets. The schematic view of FlowNet 2.0 is shown in Figure 1. The network is separated into two parts: a large-displacement and a small-displacement optical flow network. For large displacements, two FlowNetS networks are stacked and warping layers are introduced as a refinement. To cope with small displacements, a smaller network, FlowNet-SD, is added. The stacked network and the small-displacement network are then fused into FlowNet 2.0 in an optimal manner, achieving good performance on arbitrary displacements.
Optical flow estimation based on FlowNet 2.0 has achieved considerable progress. Nevertheless, as far as we know, it has rarely been used for change detection in natural disasters. FlowNet 2.0, like HS, generates a dense velocity field, that is, each pixel has a corresponding optical flow vector [43]. To accurately divide pixels into changed and unchanged parts based on the optical flow field, the selection of an appropriate threshold is critical. However, threshold selection for image segmentation needs to consider the data characteristics and usually requires expert knowledge. Thus, in this paper, a novel CD framework for disaster detection, OFATS, is proposed by combining motion detection based on FlowNet 2.0 with an adaptive threshold determination method based on a novel objective function.

3. Proposed OFATS Method

In this section, OFATS, the automated CD framework for natural disaster detection from video sequences, is proposed; its workflow is shown in Figure 2.
It consists of two main steps: motion detection, in which FlowNet 2.0, a deep learning-based optical flow estimation method, is introduced to compute the pixel displacements; and change boundary extraction, based on an adaptive threshold determination algorithm that takes the ratio of maximum between-class variance to minimum within-class variance as the objective function. In addition, two optimization strategies are proposed: narrowing the searching range of potential thresholds and dynamic normalization of the motion information.

3.1. Motion Detection

In this paper, the pixel displacements in the horizontal and vertical directions are calculated by FlowNet 2.0 and denoted as u(x, y) and v(x, y), respectively, as shown in Figure 3.
The displacement magnitude is calculated as follows:
$r(x, y) = \sqrt{u(x, y)^2 + v(x, y)^2}$  (1)
Figure 4 shows an example of the displacement distribution based on the sequence “RubberWhale” [53]. The sample frame and the corresponding optical flow field visualization are shown in Figure 4a,b, respectively. In Figure 4b, the angles of the arrows represent the directions of each pixel’s displacement r(x, y) and the lengths show the magnitudes of the displacements. The four boxes of different colors in Figure 4a,b mark various objects with different types of motion; to show their details, they are zoomed in Figure 4c–f. Overall, these figures show that the changed area can be roughly determined by the magnitudes and directions of the optical flow. Given that magnitude changes are easier to detect, a proper segmentation threshold based on magnitude should be determined to separate the changed from the unchanged part in the next step.
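As an illustration, the sketch below computes the per-pixel displacement magnitude r(x, y) of Equation (1) from two frames. It substitutes OpenCV’s Farneback algorithm for FlowNet 2.0 (whose implementation and pretrained weights are not specified here), so it is a minimal stand-in for the motion detection step rather than the exact pipeline used in the paper.

```python
import cv2
import numpy as np

def displacement_magnitude(frame1_path, frame2_path):
    """Compute the per-pixel displacement magnitude r(x, y) = sqrt(u^2 + v^2).

    Farneback optical flow is used here as a stand-in for FlowNet 2.0;
    only the flow estimator would change in the full OFATS pipeline.
    """
    img1 = cv2.imread(frame1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(frame2_path, cv2.IMREAD_GRAYSCALE)
    # pyr_scale=0.5, levels=3, winsize=15, iterations=3, poly_n=5, poly_sigma=1.2
    flow = cv2.calcOpticalFlowFarneback(img1, img2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]   # horizontal and vertical displacements
    return np.sqrt(u ** 2 + v ** 2)     # Equation (1)
```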

3.2. Change Boundary Extraction

After motion detection, the next step is to determine the global optimal threshold on the displacements so as to separate the changed from the unchanged part, which can be viewed as a two-class classification problem. Based on the displacement characteristics, a novel objective function and two optimizing strategies for optimal threshold selection are proposed.

3.2.1. The Objective Function

For any CD algorithm, the key factor differentiating change from non-change is the objective function. The goal here is to find the global optimal threshold that simultaneously maximizes the between-class variance and minimizes the within-class variance, by exploring a finite set of possible displacement values as candidate thresholds. Otsu’s method is widely used for global threshold selection [54], but it does not work well when the target and background vary widely and the two classes are very unequal [55,56]. Thus, in this paper, the objective function is set as the ratio of the maximum between-class variance to the minimum within-class variance:
$p_{best}(i) = \dfrac{\sigma_b^2(i)}{\sigma_{in}^2(i)}$  (2)
where i is the iteration number, ranging from 0 to Num, the number of unique values in the motion detection results; $p_{best}(i)$ is the fitness value of the i-th iteration; and $\sigma_b^2(i)$ and $\sigma_{in}^2(i)$ are the between-class and within-class variances, calculated by Equations (3) and (4), respectively.
$\sigma_b^2 = P(C_1)\, P(C_2)\, (\mu_1 - \mu_2)^2$  (3)
$\sigma_{in}^2 = P(C_1)\, \sigma_1^2 + P(C_2)\, \sigma_2^2$  (4)
where $P(C_1)$, $P(C_2)$, $\mu_1$, $\mu_2$, $\sigma_1^2$ and $\sigma_2^2$ represent the probabilities of class occurrence, the class mean levels and the class variances of the unchanged class $C_1$ and the changed class $C_2$, respectively. They are defined as follows:
$P(C_1) = \sum_{i=1}^{m} p_i$  (5)
$P(C_2) = \sum_{i=m+1}^{n} p_i$  (6)
$\mu_1 = \dfrac{1}{P(C_1)} \sum_{i=1}^{m} w_i p_i$  (7)
$\mu_2 = \dfrac{1}{P(C_2)} \sum_{i=m+1}^{n} w_i p_i$  (8)
$\sigma_1^2 = \dfrac{1}{m} \sum_{i=1}^{m} \left( w_i - \dfrac{1}{m} \sum_{i=1}^{m} w_i \right)^2$  (9)
$\sigma_2^2 = \dfrac{1}{n-m} \sum_{i=m+1}^{n} \left( w_i - \dfrac{1}{n-m} \sum_{i=m+1}^{n} w_i \right)^2$  (10)
The displacements are represented in n levels [1, 2, …, n]; $C_1$ denotes pixels with levels [1, …, m] and $C_2$ denotes pixels with levels [m + 1, …, n]. The pixel value and the corresponding percentage at level i are denoted by $w_i$ and $p_i$, respectively.
To conclude, $\sigma_b^2(i)$ and $\sigma_{in}^2(i)$ are determined by the iterated threshold value over the motion detection results, and the threshold $w_i$ corresponding to the maximal fitness value $p_{best}(i)$ is taken as the optimal threshold $t_{best}$. Each pixel whose displacement is larger than the optimal threshold is classified as changed, while the remaining pixels are unchanged:
$g(x, y) = \begin{cases} 1, & r(x, y) > t_{best} \\ 0, & r(x, y) \le t_{best} \end{cases}$  (11)
However, because the range of pixel displacements is very wide, the motion detection results contain many unique values, so the search requires a large number of iterations and much computing time. Thus, optimizing strategies for the optimal threshold selection are further proposed.
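As an illustration (with hypothetical function and variable names not taken from the paper), the objective function of Equations (2)–(10) for a single candidate threshold could be computed as follows; note that this sketch computes class variances over pixel values rather than over unique displacement levels, and omits the dynamic normalization introduced in Section 3.2.2:

```python
import numpy as np

def fitness(displacements, threshold):
    """Ratio of between-class to within-class variance (Equation (2))
    for one candidate threshold over the flattened displacement map."""
    d = displacements.ravel()
    c1 = d[d <= threshold]          # unchanged class C1
    c2 = d[d > threshold]           # changed class C2
    if c1.size == 0 or c2.size == 0:
        return 0.0
    p1 = c1.size / d.size           # P(C1), Equation (5)
    p2 = c2.size / d.size           # P(C2), Equation (6)
    mu1, mu2 = c1.mean(), c2.mean()           # class means, Equations (7), (8)
    var1, var2 = c1.var(), c2.var()           # class variances, Equations (9), (10)
    sigma_b = p1 * p2 * (mu1 - mu2) ** 2      # between-class variance, Equation (3)
    sigma_in = p1 * var1 + p2 * var2          # within-class variance, Equation (4)
    return sigma_b / sigma_in if sigma_in > 0 else 0.0
```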

3.2.2. Optimizing Strategies for Threshold Selection

According to the distribution of the displacement data, we propose two strategies to optimize the threshold selection: narrowing the searching range of iterations, and dynamically normalizing the displacements that are greater than the currently selected threshold at each iteration.
Narrowing the searching range efficiently reduces the set of potential thresholds implied by the wide range of the experimental data. In contrast to pre- and post-disaster comparisons, 30 FPS video data recorded during a disaster capture the small changes between frames. Thus, a pixel can safely be labeled as ‘changed’ whenever its displacement is larger than 1 pixel. The searching range can therefore be narrowed to [0, 1] and the corresponding iteration count is reduced to [0, N], where N is the number of unique displacement values between 0 and 1. In this way, a large number of useless candidate values are excluded and the speed is greatly enhanced.
To reduce the influence of the large displacement range on the objective function, the numerous pixels with displacements exceeding 1 pixel have to be normalized. Since pixels with large displacements and great variations in motion magnitude must belong to the changed class, normalization is applied only to pixels in the changed class whose displacements exceed 1 pixel. More precisely, this partial normalization changes dynamically with each iteration. For the i-th iteration, the threshold is $w_i$; pixels with displacement values $w_1, \ldots, w_i$ are labeled as the unchanged class $C_1$ and pixels with displacements $w_{i+1}, \ldots, w_n$, which are larger than $w_i$, form the changed class $C_2$. The pixels in $C_2$ whose displacements $w_j, \ldots, w_n$ ($j \ge i + 1$) are greater than 1 are normalized to $[w_{i+1}, w_{end}]$, while their corresponding percentages remain unchanged. The formulas are as follows:
$k_j = \dfrac{w_{end} - w_{i+1}}{w_{max} - w_{min}}$  (12)
$w_j' = w_{i+1} + k_j (w_j - w_{min})$  (13)
where $w_{end}$ is the largest displacement value not exceeding 1 (i.e., the value closest to 1), and $w_{max}$ and $w_{min}$ are the maximum and minimum displacement values of the changed class $C_2$ that need to be normalized.
Based on these two strategies, the threshold calculation becomes more efficient, laying the foundation for the selection of the optimal threshold.
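A rough sketch of these two strategies (hypothetical helper names, not the authors’ code) could look like the following: candidate thresholds are restricted to unique displacement values in [0, 1], and for each candidate the displacements above 1 pixel are linearly rescaled according to Equations (12) and (13):

```python
import numpy as np

def candidate_thresholds(displacements):
    """Unique displacement values in [0, 1]; candidates above 1 are excluded
    because such pixels are always labeled as changed (Section 3.2.2)."""
    values = np.unique(np.round(displacements, 1))   # one decimal place, as in Algorithm 1
    return values[(values >= 0.0) & (values <= 1.0)]

def normalize_changed(displacements, threshold):
    """Rescale displacements > 1 in the changed class to [w_(i+1), w_end],
    following Equations (12) and (13)."""
    d = displacements.copy().ravel()
    changed = np.sort(np.unique(d[d > threshold]))
    if changed.size == 0:
        return d
    w_next = changed[0]                                 # w_(i+1): smallest changed value
    below_one = changed[changed <= 1.0]
    w_end = below_one[-1] if below_one.size else 1.0    # largest value not exceeding 1
    to_norm = d > 1.0
    if np.any(to_norm):
        w_min, w_max = d[to_norm].min(), d[to_norm].max()
        k = (w_end - w_next) / (w_max - w_min) if w_max > w_min else 0.0  # Equation (12)
        d[to_norm] = w_next + k * (d[to_norm] - w_min)                    # Equation (13)
    return d
```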

3.3. The Proposed CD Method

The overall flowchart of the proposed OFATS is shown in Figure 5. The essential steps of OFATS are motion detection based on FlowNet 2.0 and segmentation based on the adaptive threshold selection criterion. For motion detection, the selected frames are input into FlowNet 2.0 to compute the motion in the horizontal and vertical directions, from which the displacements are calculated by Equation (1). The subsequent steps form the iterative process of optimal threshold selection: Equations (2)–(13) are repeatedly applied to calculate the fitness value for each candidate threshold. Based on the optimal threshold, the displacement result is segmented into changed and unchanged parts.
The details of OFATS are implemented in Algorithm 1:
Algorithm 1. The proposed OFATS for change detection in natural disaster
Input: The two frames extracted from the video sequence.
Output: The change detection result.
1:   Input the two frame images and calculate the motion in the horizontal and vertical directions based on FlowNet 2.0;
2:   Calculate the displacements, retaining one decimal place, based on Equation (1);
3:   Initialize the global fitness value gbest and the iteration counter i;
4:   while the algorithm does not reach the termination condition do
5:   i = i + 1;
6:   Divide the pixels into unchanged class C1 and changed class C2 by the threshold wi, normalize the displacements in C2 that are larger than 1 according to Equations (12) and (13), and then compute the class statistics using Equations (5)–(10);
7:   Calculate the between- and within-class variances using Equations (3) and (4);
8:   Calculate the fitness value using Equation (2);
9:   if the current fitness value is better than gbest then
10:     Replace the current individual;
11:   else
12:     The individual does not change;
13:   end if
14:   Find out the current global best agent;
15: end while
16: return The optimal threshold.
17: Divide the image into changed and unchanged parts using the optimal threshold value according to Equation (11).
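Putting the pieces together, a compact rendition of Algorithm 1 might look like the sketch below. It reuses the hypothetical fitness, candidate_thresholds and normalize_changed helpers sketched earlier and only illustrates the control flow; it is not the authors’ implementation.

```python
import numpy as np

def ofats_segment(displacements):
    """Select the optimal threshold (Algorithm 1) and return the binary
    change map g(x, y) of Equation (11) together with the threshold."""
    gbest, t_best = -np.inf, 0.0
    for t in candidate_thresholds(displacements):           # narrowed search range
        d_norm = normalize_changed(displacements, t)         # dynamic normalization
        score = fitness(d_norm, t)                           # Equation (2)
        if score > gbest:
            gbest, t_best = score, t
    change_map = (displacements > t_best).astype(np.uint8)   # Equation (11)
    return change_map, t_best
```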

4. Data and Experiments

In this section, the proposed OFATS is applied to the detection of motion in two real video datasets: a tsunami dataset and a landslide dataset. The experiments are divided into two parts. First, we verify the performance of the proposed OFATS method on video frame images with different input parameters. Second, the proposed method is compared with state-of-the-art CD methods on the two datasets.

4.1. Study Datasets

The proposed CD method is evaluated using two video frame datasets representing different natural disasters. The first video gives a glimpse of the tsunami in Petobo, Indonesia, where a 7.5 magnitude earthquake triggered a tsunami on 28 September 2018. DigitalGlobe’s WorldView satellites captured this change process in satellite images, which were assembled into a video consisting of 301 effective frames [57]. The second dataset concerns a slow-moving landslide, taken from a video titled “Massive landslides caught on camera 2”, from which a clip with 172 effective frames is selected [58]. In this research, six frames are selected from each of the two video datasets, with different frame intervals (frames 160 and 165, 162 and 163, and 175 and 180 for the tsunami scene, and frames 4960 and 4970, 4970 and 4980, and 4980 and 4985 for the landslide scene), as the input image pairs for change detection, in order to test the robustness of OFATS to arbitrary movements. The experimental data together with the ground truth generated by visual interpretation are shown in Figure 6.

4.2. Evaluation of the Proposed Threshold Selection Method

In this section, the aim is to verify whether the proposed algorithm is able to automatically determine the optimal threshold for CD. Considering change detection as a binary classification problem, the F1-measure is often used to assess the selection of the optimal threshold. F1, which jointly considers precision (P) and recall (R), is defined in Table 1. The threshold with the highest F1 is considered the optimum threshold, and the closer F1 is to 1, the more accurate the change detection. This verification is carried out from two aspects: whether the selected threshold corresponds to the maximum F1-measure value, and how the adaptive threshold selection proposed in OFATS compares with Otsu, a classic thresholding method for image binarization.
In the first experiment, frames 160 and 165 of the tsunami video are taken as an example to test whether the proposed OFATS can generate the optimal threshold with the corresponding maximum F1-measure value. If the threshold determined by the proposed objective function is in accordance with the peak of F1, the generated threshold is optimal and OFATS achieves its best performance. The variation of the objective function value with respect to the threshold value is demonstrated in Figure 7, together with the corresponding F1 values. The blue bars represent the objective function values and the red line represents the F1 values over the iterated threshold values. The optimal threshold based on OFATS and the corresponding F1 are labeled with a green circle. According to Figure 7, the maximum objective function value (6.04 × 10^4) corresponds to the optimum threshold (0.3), which achieves the highest F1 peak over all iterations. This indicates that OFATS can automatically produce the optimum threshold with the highest F1-measure value.
To illustrate the robustness of the adaptive threshold selection in OFATS, Otsu is introduced in the second experiment as a comparison for optimal threshold selection based on the same displacement results. There are three groups of tsunami data and three groups of landslide data, and the comparison results of the different threshold selection methods are shown in Table 2.
According to Table 2, the F1 values based on the adaptive threshold selection in OFATS are higher than those based on Otsu in all cases except frames 4960–4970, where OFATS selects the same threshold as Otsu. Over the experimental frames, the average F1 value based on OFATS is about 0.02 higher than that based on Otsu. This comparison indicates that the proposed OFATS is more robust in generating the optimum threshold and thus in accurately detecting natural disaster changes between video frames.

4.3. Comparing the Proposed Method with Other CD Methods

This section takes frames 160–165 of the tsunami video and frames 4960–4970 of the landslide video as examples and compares the proposed algorithm with state-of-the-art CD algorithms, including image differencing, image rationing, Change Vector Analysis (CVA), post-classification comparison (PCC), Kullback–Leibler divergence (KL) and classic optical flow (COF) based on HS, a traditional optical flow estimation method. The aim is not only to test the superiority of OFATS but also to verify its robustness to different ranges of movement.
The evaluation metrics in this experiment are Producer’s Accuracy (PA), User’s Accuracy (UA), Overall Accuracy (OA) and the Kappa coefficient (K). PA and UA are local indexes: PA is obtained by dividing the number of correctly classified pixels in each class by the number of ground truth pixels in that class, and UA by dividing the number of correctly classified pixels in each class by the total number of pixels assigned to that class. Thus, there are four related indexes, PA_c and PA_un (PA for changed and unchanged) and UA_c and UA_un (UA for changed and unchanged), as given by Equations (14)–(17). OA and K are global indexes: OA is the proportion of correctly identified pixels, both changed and unchanged, among all pixels, and K builds on OA by taking into account both omission and commission errors. As OA, K and F1 increase and approach 100%, 1 and 1, respectively, so does the accuracy of the CD method in differentiating changes from non-changes.
$PA_c = \dfrac{tp}{tp + fn}$  (14)
$PA_{un} = \dfrac{tn}{tn + fp}$  (15)
$UA_c = \dfrac{tp}{tp + fp}$  (16)
$UA_{un} = \dfrac{tn}{tn + fn}$  (17)
OA and K are calculated by Equations (18) and (19):
$OA = \dfrac{tp + tn}{tp + tn + fp + fn}$  (18)
$K = \dfrac{k_1 - k_2}{1 - k_2}$  (19)
where $k_1$ and $k_2$ are computed as follows:
$k_1 = \dfrac{tp + tn}{tp + tn + fp + fn}$  (20)
$k_2 = \dfrac{(tp + fn)(tp + fp) + (fp + tn)(fn + tn)}{(tp + tn + fp + fn)^2}$  (21)
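For completeness, a small sketch computing these indexes from a predicted change map and the ground truth (both assumed to be binary arrays with 1 = changed) could be:

```python
import numpy as np

def cd_metrics(pred, truth):
    """PA, UA, OA, Kappa and F1 from binary change maps (1 = changed)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    total = tp + tn + fp + fn
    pa_c, pa_un = tp / (tp + fn), tn / (tn + fp)        # Equations (14), (15)
    ua_c, ua_un = tp / (tp + fp), tn / (tn + fn)        # Equations (16), (17)
    oa = (tp + tn) / total                              # Equation (18)
    k2 = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / total ** 2  # Equation (21)
    kappa = (oa - k2) / (1 - k2)                        # Equation (19), with k1 = OA
    f1 = 2 * tp / (2 * tp + fp + fn)                    # equivalent to 2PR / (P + R)
    return dict(PA_c=pa_c, PA_un=pa_un, UA_c=ua_c, UA_un=ua_un,
                OA=oa, Kappa=kappa, F1=f1)
```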
All of the above CD methods were applied to the tsunami and landslide data; the CD maps are shown in Figure 8 and Figure 10, and the accuracy results are tabulated in Table 3 and Table 4 and illustrated in Figure 9 and Figure 11, respectively. The results indicate that the proposed OFATS method has K and F1 values close to 1 as well as the highest OA values, which shows that it is capable of accurately distinguishing between changed and unchanged pixels. According to Table 3 and Table 4, the values of F1, K and OA are 0.98, 0.97 and 98.5%, respectively, for the tsunami dataset, and 0.94, 0.91 and 96.3%, respectively, for the landslide dataset.
Figure 8a is the ground truth, in which white represents changed and black unchanged pixels. Figure 8b–g show the results of the comparative CD methods, most of which have difficulty providing a clear boundary and a complete changed area, especially PCC (Figure 8e) and KL (Figure 8f). Many changed areas are wrongly detected as unchanged by image differencing (Figure 8b), whereas image rationing and CVA (Figure 8c,d) show the opposite tendency. The COF result (Figure 8g) is better than the traditional CD methods and is very similar to the OFATS result (Figure 8h); however, the change detection accuracy of COF is lower than that of the proposed method, with several small false alarm areas.
To further analyze the experimental results, Table 3 presents the quantitative performance indexes of the different CD methods for the tsunami dataset, and Figure 9 displays them visually. For the image rationing, PCC and KL CD methods, the F1 values are around 0.5, OA is less than 60% and K is smaller than 0.15, which shows that these methods fail to identify the changed area. Image differencing and CVA perform slightly better, but the results are still unsatisfactory. This is because most of these CD methods use only one band of the RGB images for direct algebraic operations or simple transformations, and are thus unable to extract deep features to detect pixels with small changes; consequently, some genuinely changed land cover is hard to detect. Moreover, their CD results are extremely fragmented and have to be post-processed by morphological erosion and dilation, and the selection of the threshold for the final binary image also introduces errors. KL detects changes based on the spectral similarity of two single bands, but the changed pixels in our research only exhibit displacements, with no obvious spectral change in appearance; thus, KL is insensitive to this kind of change, which occurs when the displacements are small. PCC obtains the changed boundary and the “from-to” change information simultaneously; however, PCC is the least accurate of the algorithms studied in this paper, because only small displacements or slight deformations occur rather than land cover type changes, making PCC ineffective for the experimental data. Both COF and the proposed OFATS method produce reasonable CD accuracy, with F1 and K higher than 0.9 and OA higher than 90%. Compared to COF, the proposed OFATS achieves higher accuracy because high-level features are extracted by deep learning and the corresponding motion results retain higher precision [59]. Thus, the final CD accuracy of OFATS is higher than that of COF even when the same optimal threshold value is used.
Figure 10 shows the CD results for the landslide dataset, and Table 4 and Figure 11 present the quantitative indexes of the different CD methods. From the CD results in Figure 10, the proposed OFATS is again the most similar to the change ground truth.
Similar to Table 3, Table 4 also demonstrates the superior performance of OFATS, with the highest F1, OA and K indexes, reaching 0.94, 96.3% and 0.91, respectively. Figure 11 visually displays the comparison of the different CD algorithms, in which the three accuracy indexes of OFATS are all the highest while those of PCC are the lowest. The similar accuracies across the two datasets show that the proposed OFATS method maintains excellent performance and is robust to different motion changes. It should be noted that nearly all CD methods perform better on the landslide dataset than in the Indonesia tsunami case.
Although the two experimental datasets both come from natural disaster scenes, the mean pixel displacement in the landslide dataset is larger than that in the tsunami dataset. This difference can be used to test the robustness of OFATS. The comparative analysis is based on the K values of all algorithms for both the landslide and tsunami datasets, as shown in Figure 12. The K values of the CD methods differ between the two datasets, but the trends are basically identical. For the traditional CD methods other than image differencing, the K values obtained from the landslide dataset are higher than those from the tsunami dataset. It is worth mentioning that for image rationing, CVA and KL the landslide K values are markedly higher than the tsunami ones: they are greater than 0.6 for the landslide dataset but less than 0.4 for the tsunami dataset, which suggests that these CD methods are only suited to large displacements. The different performances of these CD methods indicate that only when the differences between corresponding pixels in bi-temporal images are large, as in the landslide case, can these CD algorithms detect the change. The K values obtained using PCC are less than 0.2, further indicating that PCC is not applicable to either situation, regardless of the magnitude of displacement.
Despite the variations in K values, the two CD methods based on optical flow estimation achieved excellent results for both experimental datasets, with K values all greater than 0.8. Moreover, the K values obtained using OFATS are about 10% higher than those of COF for both datasets, and their absolute values both exceed 0.9. The superiority of OFATS lies not only in accuracy but also in efficiency, as it requires significantly less computing time. Therefore, the proposed OFATS is more practical in these real circumstances.

5. Conclusions

The challenging problems in natural disaster detection are how to detect motion change and how to determine an adaptive threshold that can automatically and rapidly produce accurate change detection results. To solve these problems, an automatic change detection framework, termed OFATS, is proposed in this paper. First, the displacement is computed from two frames using a deep learning-based optical flow estimation algorithm. Then, the optimal threshold for rapidly separating changed from unchanged parts is automatically generated using an adaptive threshold selection based on a new objective function, with the threshold searching range narrowed and the displacements dynamically normalized.
The proposed OFATS has been applied to two different natural disaster videos. The CD results have been compared with other state-of-the-art CD methods, both visually and quantitatively. The quantitative evaluation demonstrates that the accuracies of the proposed method are greater than 95% for the two experimental datasets and that it surpasses the best of the other CD algorithms by almost 4% for the tsunami data and 5% for the landslide data. The experiments show three advantages of the proposed method: (1) it can detect change from video datasets of natural disasters in an automatic way, which has rarely been studied before; (2) it conducts natural disaster change detection efficiently, even for small motions; (3) it automatically generates the optimum threshold for the subsequent image segmentation.

Author Contributions

H.Q., X.W. provided the original idea for this study; Y.W., S.L. and W.Z. contributed to the discussion of the design; H.Q., X.W. designed and performed the experiments, supervised the research and contributed to the article’s organization; X.W. and Y.W. edited the manuscript, which was revised by all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key Research and Development Program of China (No. 2018YFD1100405) and the National Natural Science Foundation of China (No. 41701468).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Milly, P.C.D.; Wetherald, R.T.; Dunne, K.A.; Delworth, T.L. Increasing risk of great floods in a changing climate. Nature 2002, 415, 514. [Google Scholar] [CrossRef] [PubMed]
  2. Sublime, J.; Kalinicheva, E. Automatic post-disaster damage mapping using deep-learning techniques for change detection: Case study of the Tohoku Tsunami. Remote Sens. 2019, 11, 1123. [Google Scholar] [CrossRef] [Green Version]
  3. Crooks, A.T.; Wise, S. GIS and agent-based models for humanitarian assistance. Comput. Environ. Urban Syst. 2013, 41, 100–111. [Google Scholar] [CrossRef]
  4. Lu, C.; Ying, K.; Chen, H. Real-time relief distribution in the aftermath of disasters—A rolling horizon approach. Transp. Res. Part E Logist. Transp. Rev. 2016, 93, 1–20. [Google Scholar] [CrossRef]
  5. Asokan, A.; Anitha, J. Change detection techniques for remote sensing applications: A survey. Earth Sci. Inform. 2019, 12, 143–160. [Google Scholar] [CrossRef]
  6. Klomp, J. Economic development and natural disasters: A satellite data analysis. Global Environ. Chang. 2016, 36, 67–88. [Google Scholar] [CrossRef]
  7. Yu, H.; Wen, Y.; Guang, H.; Ru, H.; Huang, P. Change detection using high resolution remote sensing images based on active learning and Markov random fields. Remote Sens. 2017, 9, 1233. [Google Scholar] [CrossRef] [Green Version]
  8. Pulvirenti, L.; Chini, M.; Pierdicca, N.; Guerriero, L.; Ferrazzoli, P. Flood monitoring using multi-temporal COSMO-SkyMed data: Image segmentation and signature interpretation. Remote Sens. Environ. 2011, 115, 990–1002. [Google Scholar] [CrossRef]
  9. Lacroix, P.; Bièvre, G.; Pathier, E.; Kniess, U.; Jongmans, D. Use of Sentinel-2 images for the detection of precursory motions before landslide failures. Remote Sens. Environ. 2018, 215, 507–516. [Google Scholar] [CrossRef]
  10. Cai, J.; Wang, C.; Mao, X.; Wang, Q. An adaptive offset tracking method with SAR images for landslide displacement monitoring. Remote Sens. 2017, 9, 830. [Google Scholar] [CrossRef] [Green Version]
  11. Gautam, D.; Dong, Y. Multi-hazard vulnerability of structures and lifelines due to the 2015 Gorkha earthquake and 2017 central Nepal flash flood. J. Build. Eng. 2018, 17, 196–201. [Google Scholar] [CrossRef]
  12. Alizadeh, M.; Ngah, I.; Hashim, M.; Pradhan, B.; Pour, A. A hybrid analytic network process and artificial neural network (ANP-ANN) model for urban earthquake vulnerability assessment. Remote Sens. 2018, 10, 975. [Google Scholar] [CrossRef] [Green Version]
  13. Carlotto, M.J. Detection and analysis of change in remotely sensed imagery with application to wide area surveillance. IEEE T. Image Process. 1997, 6, 189–202. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Bejiga, M.; Zeggada, A.; Nouffidj, A.; Melgani, F. A convolutional neural network approach for assisting avalanche search and rescue operations with UAV imagery. Remote Sens. 2017, 9, 100. [Google Scholar] [CrossRef] [Green Version]
  15. Shi, W.; Zhang, M.; Zhang, R.; Chen, S.; Zhan, Z. Change detection based on artificial intelligence state-of-the-art and challenges. Remote Sens. 2020, 12, 1688. [Google Scholar] [CrossRef]
  16. Hall, O.; Hay, G.J. A multiscale object-specific approach to digital change detection. Int. J. Appl. Earth Obs. 2003, 4, 311–327. [Google Scholar] [CrossRef]
  17. Matsuoka, M.; Yamazaki, F. Building damage mapping of the 2003 Bam, Iran, earthquake using Envisat/ASAR intensity imagery. Earthq. Spectra 2005, 21, 285–294. [Google Scholar] [CrossRef]
  18. Sharma, K.; Saraf, A.K.; Das, J.; Baral, S.S.; Borgohain, S.; Singh, G. Mapping and change detection study of Nepal-2015 earthquake induced landslides. J. Indian Soc. Remote 2018, 46, 605–615. [Google Scholar] [CrossRef]
  19. Alizadeh, M.; Shirzadi, A.; Khosravi, K.; Melesse, A.M.; Yekrangnia, M.; Rezaie, F.; et al. SEVUCAS: A novel GIS-based machine learning software for seismic vulnerability assessment. Appl. Sci. 2019, 9, 3495. [Google Scholar]
  20. ElGharbawi, T.; Tamura, M. Coseismic and postseismic deformation estimation of the 2011 Tohoku earthquake in Kanto Region, Japan, using InSAR time series analysis and GPS. Remote Sens. Environ. 2015, 168, 374–387. [Google Scholar] [CrossRef] [Green Version]
  21. Du, S.; Zhang, Y.; Qin, R.; Yang, Z.; Zou, Z.; Tang, Y.; Fan, C. Building change detection using old aerial images and new LiDAR data. Remote Sens. 2016, 8, 1030. [Google Scholar] [CrossRef] [Green Version]
  22. Sudipan, S.; Francesca, B.; Lorenzo, B. Destroyed-buildings detection from VHR SAR images using deep features. In Proceedings of the Image and Signal Processing for Remote Sensing XXIV, Berlin, Germany, 10–12 September 2018. [Google Scholar]
  23. Ji, M.; Liu, L.; Du, R.; Buchroithner, M.F. A comparative study of texture and convolutional neural network features for detecting collapsed buildings after earthquakes using pre- and post-event satellite imagery. Remote Sens. 2019, 11, 1202. [Google Scholar] [CrossRef] [Green Version]
  24. Ci, T.; Liu, Z.; Wang, Y. Assessment of the degree of building damage caused by disaster using convolutional neural networks in combination with ordinal regression. Remote Sens. 2019, 11, 2858. [Google Scholar] [CrossRef] [Green Version]
  25. Peng, D.; Zhang, Y.; Guan, H. End-to-end change detection for high resolution satellite images using improved UNet++. Remote Sens. 2019, 11, 1382. [Google Scholar] [CrossRef] [Green Version]
  26. Yavariabdi, A.; Kusetogullari, H. Change detection in multispectral landsat images using multiobjective evolutionary algorithm. IEEE Geosci. Remote Sens. 2017, 14, 414–418. [Google Scholar] [CrossRef]
  27. Ghaffarian, S.; Kerle, N.; Pasolli, E.; Jokar Arsanjani, J. Post-disaster building database updating using automated deep learning: An integration of pre-disaster OpenStreetMap and multi-temporal satellite data. Remote Sens. 2019, 11, 2427. [Google Scholar] [CrossRef] [Green Version]
  28. Pi, Y.; Nath, N.D.; Behzadan, A.H. Convolutional neural networks for object detection in aerial imagery for disaster response and recovery. Adv. Eng. Inform. 2020, 43, 101009. [Google Scholar] [CrossRef]
  29. Kung, H.; Hsieh, C.; Ho, C.; Tsai, Y.; Chan, H.; Tsai, M. Data-augmented hybrid named entity recognition for disaster management by transfer learning. Appl. Sci. 2020, 10, 4234. [Google Scholar] [CrossRef]
  30. Li, M.; Li, M.; Zhang, P.; Wu, Y.; Song, W.; An, L. SAR image change detection using PCANet guided by saliency detection. IEEE Geosci. Remote Sens. 2018, 16, 402–406. [Google Scholar] [CrossRef]
  31. Curtis, A.; Mills, J.W. Spatial video data collection in a post-disaster landscape: The Tuscaloosa Tornado of 27 April 2011. Appl. Geogr. 2012, 32, 393–400. [Google Scholar] [CrossRef]
  32. Curtis, A.J.; Mills, J.W.; McCarthy, T.; Fotheringham, A.S.; Fagan, W.F. Space and Time Changes in Neighborhood Recovery after a Disaster Using a Spatial Video Acquisition System; Springer: Berlin, Germany, 2009; pp. 373–392. [Google Scholar]
  33. Tu, Z.; Xie, W.; Zhang, D.; Poppe, R.; Veltkamp, R.C.; Li, B.; Yuan, J. A survey of variational and CNN-based optical flow techniques. Signal Process. Image Commun. 2019, 72, 9–24. [Google Scholar] [CrossRef]
  34. Guo, Y.; Zhang, Z.; He, D.; Niu, J.; Tan, Y. Detection of cow mounting behavior using region geometry and optical flow characteristics. Comput. Electron. Agric. 2019, 163, 104828. [Google Scholar] [CrossRef]
  35. Gronskyte, R.; Clemmensen, L.H.; Hviid, M.S.; Kulahci, M. Monitoring pig movement at the slaughterhouse using optical flow and modified angular histograms. Biosyst. Eng. 2016, 141, 19–30. [Google Scholar] [CrossRef] [Green Version]
  36. Yan, W.; Wang, Y.; van der Geest, R.J.; Tao, Q. Cine MRI analysis by deep learning of optical flow: Adding the temporal dimension. Comput. Biol. Med. 2019, 111, 103356. [Google Scholar] [CrossRef] [PubMed]
  37. Wang, L.; Clarysse, P.; Liu, Z.; Gao, B.; Liu, W.; Croisille, P.; Delachartre, P. A gradient-based optical-flow cardiac motion estimation method for cine and tagged MR images. Med. Image Anal. 2019, 57, 136–148. [Google Scholar] [CrossRef]
  38. Cao, Y.; Renfrew, A.; Cook, P. Comprehensive vehicle motion analysis using optical flow optimization based on pulse-coupled neural network. IFAC Proc. Vol. 2008, 41, 158–163. [Google Scholar] [CrossRef] [Green Version]
  39. Tchernykh, V.; Beck, M.; Janschek, K. Optical flow navigation for an outdoor UVA using a wide angle mono camera and dem matching. IFAC Proc. Vol. 2006, 39, 590–595. [Google Scholar] [CrossRef] [Green Version]
  40. Liu, Y.; Xi, D.; Li, Z.; Hong, Y. A new methodology for pixel-quantitative precipitation nowcasting using a pyramid Lucas Kanade optical flow approach. J. Hydrol. 2015, 529, 354–364. [Google Scholar] [CrossRef]
  41. Zhao, R.; Sun, P. Deformation-phase measurement by optical flow method. Opt. Commun. 2016, 371, 144–149. [Google Scholar] [CrossRef]
  42. Osman, A.B.; Ovinis, M. A review of in-situ optical flow measurement techniques in the Deepwater Horizon oil spill. Measurement 2020, 153, 107396. [Google Scholar]
  43. Yuan, W.; Yuan, X.; Xu, S.; Gong, J.; Shibasaki, R. Dense Image-Matching via Optical Flow Field Estimation and Fast-Guided Filter Refinement. Remote Sens. 2019, 11, 2410. [Google Scholar] [CrossRef] [Green Version]
  44. Sun, D.; Roth, S.; Black, M.J. Secrets of optical flow estimation and their principles. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2432–2439. [Google Scholar]
  45. Horn, B.K.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar] [CrossRef] [Green Version]
  46. Prajapati, D.; Galiyawala, H.J. A Review on Moving Object Detection and Tracking; Department of Electronics and Communication Engineering, UKA Tarsadia University: Bardoli, India, 2015. [Google Scholar]
  47. Wei, S.; Yang, L.; Chen, Z.; Liu, Z. Motion detection based on optical flow and self-adaptive threshold segmentation. Procedia Eng. 2011, 15, 3471–3476. [Google Scholar] [CrossRef] [Green Version]
  48. Hou, B.; Wang, Y.; Liu, Q. Change detection based on deep features and low rank. IEEE Geosci. Remote Sens. 2017, 14, 2418–2422. [Google Scholar] [CrossRef]
  49. Yuan, Q.; Shen, H.; Li, T.; Li, Z.; Li, S.; Jiang, Y.; Xu, H.; Tan, W.; Yang, Q.; Wang, J.; et al. Deep learning in environmental remote sensing: Achievements and challenges. Remote Sens. Environ. 2020, 241, 111716. [Google Scholar] [CrossRef]
  50. Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; Brox, T. FlowNet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2758–2766. [Google Scholar]
  51. Hui, T.W.; Tang, X. LiteFlowNet: A lightweight convolutional neural network for optical flow estimation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8981–8989. [Google Scholar]
  52. Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; Brox, T. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2462–2470. [Google Scholar]
  53. Baker, S.; Scharstein, D.; Lewis, J.P.; Roth, S.; Black, M.J.; Szeliski, R. A Database and evaluation methodology for optical flow. Int. J. Comput. Vis. 2011, 92, 1–31. [Google Scholar] [CrossRef] [Green Version]
  54. Vala, H.J.; Baxi, A. A review on Otsu image segmentation algorithm. Int. J. Adv. Res. Comput. Eng. Technol. 2013, 2, 387–389. [Google Scholar]
  55. Pal, N.R.; Pal, S.K. A review on image segmentation techniques. Pattern Recogn. 1993, 26, 1277–1294. [Google Scholar] [CrossRef]
  56. Waseem Khan, M. A Survey: Image segmentation techniques. Int. J. Future Comput. Commun. 2014, 3, 89–93. [Google Scholar] [CrossRef] [Green Version]
  57. Digital Globe Data in Indonesia Earthquake. Available online: https://www.youtube.com/watch?v=-41ENJF0wVwx (accessed on 24 October 2018).
  58. Slow-Moving Landslide Caught on Camera 2. Available online: https://www.youtube.com/watch?v=PmLHg-mLrMU (accessed on 10 July 2019).
  59. Qiao, H.J.; Wan, X.; Xu, J.Z.; Li, S.Y.; He, P.P. Deep learning based optical flow estimation for change detection: A case study in Indonesia earthquake. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 3, 317–322. [Google Scholar] [CrossRef]
Figure 1. Scheme of FlowNet 2.0.
Figure 2. The workflow of OFATS (optical flow-based adaptive thresholding segmentation).
Figure 3. Illustration of displacements based on FlowNet 2.0.
Figure 4. An example of displacements’ distribution: (a) sample frame of the sequence “RubberWhale” [53]; (b) the corresponding r(x, y) visualization based on FlowNet 2.0; (c–f) zoom-in boxes.
Figure 5. Flowchart of the proposed OFATS.
Figure 6. Experimental video frames and the corresponding ground truth: (a–c) Frames 160 and 165, 162 and 163, and 175 and 180 of the tsunami video, respectively; (d–f) Frames 4960 and 4970, 4970 and 4980, and 4980 and 4985 of the landslide video, respectively.
Figure 7. The objective function and F1 values with the variation of iterated threshold values.
Figure 8. The change detection results for the tsunami dataset: (a) Ground truth; (b) Image differencing; (c) Image rationing; (d) CVA; (e) PCC; (f) KL; (g) COF; (h) OFATS.
Figure 9. Comparison between other change detection (CD) methods and OFATS using the tsunami images.
Figure 10. The change detection results for the landslide dataset: (a) Ground truth; (b) Image differencing; (c) Image rationing; (d) CVA; (e) PCC; (f) KL; (g) COF; (h) OFATS.
Figure 11. Comparison between other CD methods and OFATS using the landslide images.
Figure 12. The comparison of K for the experimental datasets based on the CD methods.
Table 1. Formulas related to calculating F1.

Parameter | Formula        | Explanation of abbreviations
P         | tp / (tp + fp) | tp (true positive): pixels correctly identified as changed
R         | tp / (tp + fn) | tn (true negative): pixels correctly identified as unchanged
F1        | 2PR / (P + R)  | fp (false positive): pixels falsely identified as changed
          |                | fn (false negative): pixels falsely identified as unchanged
Table 2. The optimum thresholds and the corresponding F1 values based on the adaptive threshold selection in OFATS and on Otsu for the experimental data.

Study Data     | Experimental Frames | OFATS: Optimum Threshold | OFATS: F1 | Otsu: Optimum Threshold | Otsu: F1
Tsunami data   | Frames 160–165      | 0.3  | 0.98 | 0.4  | 0.97
               | Frames 162–163      | 0.2  | 0.97 | 0.38 | 0.90
               | Frames 175–180      | 0.3  | 0.99 | 0.48 | 0.96
Landslide data | Frames 4960–4970    | 0.4  | 0.94 | 0.4  | 0.94
               | Frames 4970–4980    | 0.3  | 0.92 | 0.5  | 0.91
               | Frames 4980–4985    | 0.3  | 0.92 | 0.5  | 0.91
Table 3. Confusion matrices along with accuracy indexes of the tsunami data.

Method             | Predicted | Ground Truth C | Ground Truth U | UA (%) | F1   | K    | OA (%)
Image differencing | C         | 507,934        | 37,899         | 93.0   | 0.79 | 0.69 | 86.0
                   | U         | 237,043        | 1,183,204      | 83.3   |      |      |
                   | PA (%)    | 68.2           | 96.7           |        |      |      |
Image rationing    | C         | 624,451        | 841,520        | 42.6   | 0.57 | 0.13 | 51.1
                   | U         | 120,526        | 379,583        | 76.0   |      |      |
                   | PA (%)    | 83.8           | 31.1           |        |      |      |
CVA                | C         | 420,863        | 229,199        | 64.7   | 0.60 | 0.39 | 71.9
                   | U         | 324,114        | 991,904        | 75.4   |      |      |
                   | PA (%)    | 56.5           | 81.2           |        |      |      |
PCC                | C         | 294,325        | 401,789        | 42.2   | 0.41 | 0.07 | 56.6
                   | U         | 450,652        | 819,314        | 64.5   |      |      |
                   | PA (%)    | 39.5           | 67.1           |        |      |      |
KL                 | C         | 548,695        | 750,222        | 42.2   | 0.54 | 0.11 | 51.9
                   | U         | 196,282        | 470,881        | 70.6   |      |      |
                   | PA (%)    | 73.7           | 38.6           |        |      |      |
COF                | C         | 688,338        | 36,873         | 94.9   | 0.94 | 0.90 | 95.2
                   | U         | 56,639         | 1,184,230      | 95.4   |      |      |
                   | PA (%)    | 92.4           | 97.0           |        |      |      |
OFATS              | C         | 737,237        | 21,164         | 97.2   | 0.98 | 0.97 | 98.5
                   | U         | 7,740          | 1,199,939      | 99.3   |      |      |
                   | PA (%)    | 98.9           | 98.2           |        |      |      |
Table 4. Confusion matrices along with accuracy indexes of the landslide data.

Method             | Predicted | Ground Truth C | Ground Truth U | UA (%) | F1   | K    | OA (%)
Image differencing | C         | 512,244        | 218,623        | 70.1   | 0.76 | 0.64 | 83.8
                   | U         | 100,309        | 1,134,904      | 91.9   |      |      |
                   | PA (%)    | 83.6           | 83.8           |        |      |      |
Image rationing    | C         | 540,840        | 205,471        | 72.5   | 0.80 | 0.69 | 85.9
                   | U         | 71,713         | 1,148,056      | 94.1   |      |      |
                   | PA (%)    | 88.3           | 84.8           |        |      |      |
CVA                | C         | 577,780        | 90,087         | 86.5   | 0.90 | 0.86 | 93.6
                   | U         | 34,773         | 1,263,440      | 97.3   |      |      |
                   | PA (%)    | 94.3           | 93.3           |        |      |      |
PCC                | C         | 380,536        | 613,376        | 38.3   | 0.47 | 0.14 | 57.0
                   | U         | 232,017        | 740,151        | 76.1   |      |      |
                   | PA (%)    | 62.1           | 54.7           |        |      |      |
KL                 | C         | 525,254        | 208,691        | 71.6   | 0.78 | 0.67 | 84.9
                   | U         | 87,299         | 1,144,836      | 92.9   |      |      |
                   | PA (%)    | 85.7           | 84.6           |        |      |      |
COF                | C         | 611,643        | 140,762        | 81.3   | 0.90 | 0.84 | 92.8
                   | U         | 910            | 1,212,765      | 99.9   |      |      |
                   | PA (%)    | 99.8           | 89.6           |        |      |      |
OFATS              | C         | 584,927        | 44,777         | 92.9   | 0.94 | 0.91 | 96.3
                   | U         | 27,626         | 1,308,750      | 97.9   |      |      |
                   | PA (%)    | 95.4           | 96.7           |        |      |      |

Citation

Qiao, H.; Wan, X.; Wan, Y.; Li, S.; Zhang, W. A Novel Change Detection Method for Natural Disaster Detection and Segmentation from Video Sequence. Sensors 2020, 20, 5076. https://doi.org/10.3390/s20185076