
How Can Despeckling and Structural Features Benefit to Change Detection on Bitemporal SAR Images?

1 Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi’an 710071, China
2 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
3 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(4), 421; https://doi.org/10.3390/rs11040421
Submission received: 19 December 2018 / Revised: 5 February 2019 / Accepted: 13 February 2019 / Published: 18 February 2019

Abstract

Change detection on bitemporal synthetic aperture radar (SAR) images is a key branch of SAR image interpretation. However, it is challenging due to speckle and unavoidable registration errors within bitemporal SAR images. A key issue is whether and how despeckling and structural features can improve accuracy. In this paper, we investigate how despeckling and structural features can benefit change detection for SAR images. Several change detection methods were performed on both input images and the corresponding despeckled images, where despeckling was achieved by different methods. The comparisons demonstrate that despeckling methods that preserve structures can suppress noise in difference images and improve the accuracy of change detection. We also developed a sparse model to exploit structural features from the difference images while reducing the influence of misalignment between bitemporal SAR images. The comparisons were performed on five datasets of bitemporal SAR images, and the experimental results show that our proposed sparse model outperforms traditional methods, demonstrating its advantages for change detection.


1. Introduction

Change detection aims to identify differences when an object is observed at multiple times [1]. It has been widely applied to monitor atmospheric effects [2] and survey geographic information in some regions [3,4]. Over the past few decades, remote sensing images have been widely used for change detection [5,6].
A synthetic aperture radar (SAR) is a sensor that can work regardless of season or weather. It transmits microwaves toward the surface of the Earth and receives the echoes reflected from the ground, from which a SAR image is generated. SAR images therefore contain much useful information about the terrain and have been frequently used in change detection.
Bruzzone et al. proposed two change detection algorithms based on a difference image (DI) [7,8], where changed regions are detected by analyzing the DI. However, a SAR image is usually contaminated by speckle, and a simple DI based only on subtraction is inaccurate. To tackle this issue, Bovolo proposed a DI based on the log-ratio (LR) operator [9], since speckle can be characterized as multiplicative noise. Bazi et al. also employed the log-ratio operator to generate the DI in their algorithm [10]. Gong proposed a neighborhood-based log-ratio operator that exploits spatial-context information [11] and is more robust to speckle. Although many other improvements [12,13,14,15] have been proposed for change detection, traditional DIs remain subject to speckle. In particular, as the resolution of SAR images increases, a single-look SAR image with a low signal-to-noise ratio (SNR) may introduce a large amount of noise into the DI, which reduces the accuracy of traditional change detection methods. Recently, Conradsen et al. proposed a fast change detection method for time-series polarimetric SAR images based on statistical testing [16].
To reduce the effect of speckle, several proposed methods exploit useful information in the changed regions of a noisy DI. Zhang et al. proposed a graph-cut method to extract the changed region by exploiting its statistical properties [17]. Gong et al. proposed a fuzzy clustering method with a Markov random field (MRF) [18] to exploit the spatial context in the DI. Li et al. proposed an object-based change detection method [19] in which joint sparse representation learning is developed for each pair of regions, and the learned features and pairwise dictionaries are obtained from the bitemporal SAR images. The robust ratio and the difference of bitemporal SAR images are obtained from the learned sparse features. Similarly, Hou et al. proposed a DI fusion and compressive projection-based change detection method [20]. The main idea of this method is that the LR- and Gaussian LR-based DIs are fused and denoised by a discrete wavelet transform [21] and a nonsubsampled contourlet transform [22], respectively. Wang et al. developed a spatial sparse-coding-based SAR image change detection method [23], where self-similarity was employed as a prior. Recently, Zheng and Wang proposed saliency-based change detection methods [24,25]. In these methods, the vision attention principle was introduced, and saliency maps of DIs were used as a guide for change detection. The experimental results show that a more robust detection image is obtained using input from the saliency map. Li et al. proposed a two-level clustering method based on Gabor features [26]. Su et al. proposed a likelihood ratio test-based method for SAR image change detection, where the change detection results are obtained through normalized cut-based clustering [27] and a change criterion matrix [28].
However, the performance of the above methods is limited when the speckle is heavy and registration errors exist between the bitemporal SAR images [29,30]. Misalignments between bitemporal SAR images make the DIs generated by the traditional log-ratio operator noisy. Bruzzone proposed a multiscale method [31] to reduce the effects of registration errors. However, the correspondence between bitemporal SAR images becomes nonlinear, especially when the terrain is complex and the speckle is heavy. Registration methods based on geometrical information [32] cannot accurately align bitemporal SAR images with heavy speckle and complex terrain. Fan et al. proposed a nonlinear registration method for SAR image alignment [33]. Nevertheless, registration errors are unavoidable for SAR images with strong speckle or a low SNR.
In fact, over the past few years, many methods have been proposed for SAR image despeckling [34], such as Lee filtering [35], Frost filtering [36], and improved sigma filtering [37]. For high-resolution SAR images, Deledalle et al. proposed an iterative probabilistic patch-based (PPB) SAR image despeckling method [38]. Zhong et al. proposed Bayesian nonlocal filtering [39]. Feng et al. proposed improved PPB filtering for SAR image despeckling [40]. Parrilli proposed a SAR image-based block-matching and 3D filtering (SAR-BM3D) despeckling method [41]. Both the SAR-BM3D and PPB filters perform better than other methods as they allow for feature preservation and artifact reduction. Di Martino et al. proposed a scattering-based non-local means method for SAR image despeckling [42,43]. Recently, bitemporal despeckling techniques have been introduced for SAR stacks in several kinds of applications, e.g., SAR image detection [44], SAR interferometry [45], and SAR tomography [46]. Pham et al. [47] proposed a method that performs non-local filtering on relevant patches via graph similarity. All these methods achieve state-of-the-art performance on SAR image despeckling and increase the SNR of SAR images. This naturally raises a key question: whether, and how, despeckling can improve the accuracy of change detection.
Motivated by this, in this paper, we first evaluate the contributions of despeckling methods to SAR image change detection by comparing results on input images and despeckled images. Then, several change detection methods are applied to SAR images despeckled by different methods to show how the despeckling methods benefit change detection. Next, we propose a sparse model that exploits structural features of changed regions in noisy DIs generated from bitemporal SAR images. Comparisons are performed on five sets of bitemporal SAR images. The experimental results demonstrate that our proposed method outperforms the other methods used. In addition, only structure-preserving despeckling methods benefit change detection, especially for certain details in changed regions.
The rest of this paper is organized as follows. The proposed sparse-based change detection method is introduced in Section 2. Experimental results are presented in Section 3, and brief concluding remarks are made in Section 4.

2. Change Detection on Bitemporal Despeckled SAR Images

In this section, we first give a high-level overview of three SAR image despeckling methods and then propose a sparse learning method for SAR image change detection.

2.1. SAR Image Despeckling

In the process of SAR imaging, due to the coherent nature of the scattered returns from the observed area, a SAR image usually contains strong speckle, which degrades its quality. Given a SAR image I, speckle is generally modeled as pixel-wise multiplicative noise N, i.e., I = X · N. The goal of despeckling is to estimate the clean image X from the given image I.
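As a concrete illustration of the multiplicative model, the following sketch simulates L-look speckle on a synthetic clean image. The unit-mean Gamma distribution used for N is a common convention for L-look intensity speckle, assumed here rather than stated in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(X, looks=1):
    """Multiply a clean intensity image X by L-look speckle N with E[N] = 1.

    Assumption of this sketch: N is Gamma-distributed with shape `looks`
    and scale 1/looks, the usual model for L-look intensity speckle.
    """
    N = rng.gamma(shape=looks, scale=1.0 / looks, size=X.shape)
    return X * N

X = np.full((64, 64), 100.0)     # flat clean image
I = add_speckle(X, looks=4)      # speckled observation I = X * N
```

Because E[N] = 1, the speckled image keeps the clean mean while gaining large pixel-wise variance, which is exactly what despeckling must undo.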
Over the past few decades, many despeckling methods have been developed [34,35,36,37,38,39,40]. However, with the increasing resolution of SAR imaging, scene content is becoming ever more complex. Traditional despeckling methods, such as Lee filtering [35] and Frost filtering [36], can oversmooth a SAR image, so some important details are not preserved. To solve this problem, Buades et al. proposed a nonlocal means image-denoising method [48], which effectively preserves the texture structure of noisy images by exploiting image self-similarity. Based on this framework, the block-matching and 3D filtering (BM3D) algorithm [49] was proposed, a sparse 3D transform-domain collaborative filter. Extensive application and experimentation have demonstrated that the BM3D algorithm can effectively remove additive Gaussian noise from noisy images. Parrilli extended the concept of BM3D to SAR image despeckling [41], where a probabilistic similarity measure was employed in the block-matching step, and wavelet shrinkage was applied under an additive signal-dependent noise model to obtain a linear minimum mean-square-error estimate in the wavelet domain.
In fact, given a SAR image I, the pixel at the u-th site, I_u, can be modeled as follows:
$$ I_u = X_u \cdot N_u = X_u + (N_u - 1) \cdot X_u = X_u + V_u $$
where X_u and N_u are the intensities of the clean image and the speckle at the u-th site, respectively. With this conversion, the clean SAR image X can be estimated using the optimal minimum mean squared error (MMSE) estimator:
$$ \hat{X} = E(X) + C_{XI} C_I^{-1} \left( I - E(I) \right) $$
where E(·) denotes statistical expectation, C_I is the covariance matrix of I, and C_{XI} is the cross-covariance matrix of X and I. According to [41], since E[V] is zero, it can be assumed that the clean image X and the noise V are uncorrelated in the wavelet domain and that the covariance matrices are diagonal. The estimation of each pixel can then be simplified as follows:
$$ \hat{X}_u = E(X_u) + \frac{\sigma_{X_u}^2}{\sigma_{X_u}^2 + \sigma_{V_u}^2} \left( I_u - E(I_u) \right) $$
where u is the site of the pixel; X_u, I_u, and V_u are the intensities at site u of the images X, I, and V; and σ_{X_u}, σ_{I_u}, and σ_{V_u} are the standard deviations of the corresponding images.
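A minimal pixel-wise sketch of this MMSE estimate follows, with the local statistics E(I_u) and σ²_{I_u} taken over a small sliding window and the noise variance tied to the local mean through the L-look multiplicative model. Both the window size and the variance model σ²_{V_u} ≈ E(X_u)²/L are assumptions of this sketch, not details from [41]:

```python
import numpy as np

def local_mean(img, r=1):
    """Box-filter mean over a (2r+1) x (2r+1) window, with edge padding."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    H, W = img.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + H, r + dx : r + dx + W]
    return out / (2 * r + 1) ** 2

def mmse_filter(I, looks=4, r=1):
    """Pixel-wise MMSE (Lee-style) estimate under I = X + V with E[V] = 0."""
    mu = local_mean(I, r)                        # local estimate of E(I_u)
    var_I = local_mean(I * I, r) - mu ** 2       # local variance of I
    var_V = mu ** 2 / looks                      # assumed speckle variance
    var_X = np.clip(var_I - var_V, 0.0, None)    # signal variance, kept >= 0
    gain = var_X / np.clip(var_X + var_V, 1e-12, None)
    return mu + gain * (I - mu)

rng = np.random.default_rng(0)
X = np.full((64, 64), 100.0)
I = X * rng.gamma(4, 0.25, X.shape)   # synthetic 4-look speckled image
X_hat = mmse_filter(I, looks=4)
```

On a flat region, the local signal variance estimate is near zero, so the filter falls back to the local mean and the residual fluctuation drops sharply.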
Deledalle et al. proposed an iterative probabilistic patch-based (PPB) filter [38] for SAR image despeckling, which stems from the nonlocal means image-denoising framework. Given a SAR image I and a site u, the clean image X can be estimated by:
$$ \hat{X}_u^2 = \frac{\sum_s \omega_{u,s} I_s^2}{\sum_s \omega_{u,s}} $$
where s indexes the pixels within a search range and ω_{u,s} is the filtering weight of the pixel at site s. According to the distribution of the speckle [50] and the maximum likelihood criterion, the weight in the t-th iteration can be estimated as [38]:
$$ \omega_{u,s}^{t} = \exp\left\{ -\sum_{k \in \Omega} \left[ \frac{2L-1}{h} \log\left( \frac{I_{u,k}}{I_{s,k}} + \frac{I_{s,k}}{I_{u,k}} \right) + \frac{1}{t-1} \cdot \frac{\left( \hat{X}_{u,k}^{t-1} - \hat{X}_{s,k}^{t-1} \right)^2}{\hat{X}_{u,k}^{t-1} \cdot \hat{X}_{s,k}^{t-1}} \right] \right\} $$
where Ω is the neighborhood of a pixel, X̂_{u,k}^{t−1} is the intensity of the estimated clean image in the (t−1)-th iteration (the subscript k indicates the k-th neighbor of the u-th pixel), L is the number of looks of the SAR image, and h is the parameter that controls the decay of the exponential function [48].
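The likelihood part of this weight can be sketched for the first iteration, where the refinement term involving the previous estimate is not yet available and is dropped. The value of h and the patch contents below are illustrative choices of this sketch:

```python
import numpy as np

def ppb_weight(patch_u, patch_s, looks=1, h=10.0):
    """First-iteration PPB weight between two intensity patches.

    Only the likelihood term is used (the refinement term that depends on
    the previous estimate is dropped for t = 1). Identical patches give
    the maximal weight; dissimilar patches are down-weighted.
    """
    ratio = patch_u / patch_s + patch_s / patch_u
    dist = np.sum(np.log(ratio))
    return float(np.exp(-(2 * looks - 1) / h * dist))

a = np.full((7, 7), 5.0)
b = np.full((7, 7), 50.0)
w_same = ppb_weight(a, a)   # identical patches
w_diff = ppb_weight(a, b)   # very different patches
```

Note that even identical patches yield a weight below one, since the similarity term log(I_u/I_s + I_s/I_u) attains its minimum log 2 rather than zero; only the ordering of the weights matters for the weighted average.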
It has been proven in [38,41] that these methods achieve state-of-the-art performance on SAR image despeckling. Most recently, Ma et al. also applied a denoising method to the generated DI while performing SAR image change detection [51]. However, denoising the generated DI makes it easy to over-smooth weakly-changed pixels. Usually, robust features are instead extracted from the DI to reduce the noise effect, e.g., [11,52,53]. In this paper, we mainly focus on the key issue of whether despeckling can improve accuracy, and how; this is investigated by the comparison experiments in Section 3.

2.2. Change Detection by Sparse Learning

Due to speckle and unavoidable misalignments, it is difficult to obtain the changed regions from the noisy DI generated by the LR operator. Furthermore, reference maps for bitemporal SAR images are usually not available, and it is challenging to detect changes from bitemporal SAR images without any guidance. Recently, sparse learning has been employed in many SAR image interpretation and understanding tasks, including target classification [54,55,56], image segmentation [57,58], and change detection [19,23]. The spirit of sparse learning is to learn a dictionary that captures the fundamental structures of a given dataset and encodes each sample as a robust feature vector.
In this section, inspired by the framework in [57], a sparse learning model is introduced for unsupervised SAR image change detection. It aims to learn a dictionary that exploits the fundamental structures of the noisy DI, and the learned dictionary is employed to encode each patch into a robust feature vector. Given two temporal SAR images I_i (i ∈ {1, 2}), a DI, denoted by I_DI, can be generated by applying the LR operator [9] to the despeckled images X_i (i ∈ {1, 2}):
$$ I_{DI} = \log \frac{X_1 + \epsilon}{X_2 + \epsilon} $$
where ε is a small positive constant that avoids division by zero.
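A direct implementation of this log-ratio operator might look as follows; the value of ε is illustrative:

```python
import numpy as np

def log_ratio_di(X1, X2, eps=1e-6):
    """Log-ratio difference image of two (despeckled) SAR intensity images.

    Unchanged pixels give values near zero; changed pixels give large
    positive or negative values depending on the direction of change.
    """
    return np.log((X1 + eps) / (X2 + eps))

# Toy 2x2 example: one pixel brightens by a factor of four.
X1 = np.array([[100.0, 100.0], [100.0, 400.0]])
X2 = np.array([[100.0, 100.0], [100.0, 100.0]])
di = log_ratio_di(X1, X2)
```

Because the ratio is taken in the log domain, multiplicative speckle becomes additive in the DI, which is why the LR operator is preferred over plain subtraction.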
Next, a sparse model is applied to I_DI to learn robust, high-level features for change detection. To achieve this, a set of patches Y = [y_1, y_2, …, y_N] is extracted from I_DI and normalized as follows:
$$ y_i \leftarrow (y_i - \overline{DI}) / \sigma_{DI} $$
where $\overline{DI}$ and $\sigma_{DI}$ are the mean and standard deviation of $I_{DI}$, respectively. The sparse model can thus be expressed as:
$$ \min_{A, D} \; \frac{1}{2} \| Y - DA \|_F^2 + \lambda \| A \|_1 \quad \text{s.t.} \; \| d_k \|_2 \le 1 $$
where D = [D_c, D_u] is a dictionary whose two sub-dictionaries span the spaces of the changed and unchanged classes, respectively; d_k is the k-th column of D; A is the feature matrix, whose i-th column a_i is the feature of the i-th patch; and λ is the regularization parameter. ||·||_F, ||·||_1, and ||·||_2 denote the Frobenius norm, ℓ1 norm, and ℓ2 norm, respectively. This model can be efficiently solved by the alternating direction method of multipliers (ADMM) [59]. When the dictionary D is fixed, the patches are independent of each other, and the model in Equation (8) reduces to the per-patch problem:
$$ \min_{a_i} \; \frac{1}{2} \| y_i - D a_i \|_2^2 + \lambda \| a_i \|_1 $$
This model can be solved by a proximal algorithm [60]. More details on the derivation can be found in [58].
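A minimal sketch of such a proximal method is ISTA, which alternates a gradient step on the smooth term with soft-thresholding (the proximal operator of the ℓ1 norm). The dictionary and signal below are synthetic illustrations, not data from the paper:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (element-wise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(y, D, lam=0.15, n_iter=200):
    """ISTA for min_a 0.5 * ||y - D a||^2 + lam * ||a||_1."""
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)   # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)               # gradient of the smooth term
        a = soft_threshold(a - step * grad, lam * step)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((9, 50))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms, per the constraint
a_true = np.zeros(50)
a_true[[3, 17]] = [1.0, -0.8]                  # a sparse ground-truth code
y = D @ a_true
a = sparse_code(y, D, lam=0.05)
```

With the step size set from the spectral norm of D, ISTA monotonically decreases the objective, so the recovered code is never worse than the all-zero initialization and stays sparse.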
The pseudo label of each patch y_i, indicating the class to which it belongs, can be inferred from:
$$ l_i = \arg\min_{j} \| y_i - D_j a_i \|_2, \quad j \in \{c, u\} $$
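A toy sketch of this assignment follows, using the standard sparse-representation rule that a patch belongs to the class whose sub-dictionary reconstructs it with the smaller residual. The 2-D sub-dictionaries and codes below are hypothetical:

```python
import numpy as np

def pseudo_label(y, D_c, D_u, a_c, a_u):
    """Assign a patch to the sub-dictionary that reconstructs it best.

    'c' = changed, 'u' = unchanged; the smaller reconstruction residual wins.
    """
    r_c = np.linalg.norm(y - D_c @ a_c)
    r_u = np.linalg.norm(y - D_u @ a_u)
    return "c" if r_c < r_u else "u"

# One atom per sub-dictionary, aligned with each axis (illustrative data).
D_c = np.array([[1.0], [0.0]])
D_u = np.array([[0.0], [1.0]])
y = np.array([0.9, 0.1])                       # mostly explained by D_c
label = pseudo_label(y, D_c, D_u, np.array([0.9]), np.array([0.1]))
```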
With the pseudo labels, the set of patches can be divided into two parts, Y_c and Y_u, which are encoded into A_c and A_u. When A is fixed, the dictionary D can be updated by solving the following model:
$$ \min_{D} \; \frac{1}{2} \| Y - DA \|_F^2 \quad \text{s.t.} \; \| d_k \|_2 \le 1 $$
The two parts of the dictionary, D_c and D_u, should be independent, and they can be updated separately. The model in Equation (11) can be decomposed into two parts:
$$ \min_{D_j} \; \frac{1}{2} \| Y_j - D_j A_j \|_F^2 \quad \text{s.t.} \; \| d_k \|_2 \le 1, \; j \in \{c, u\} $$
This model can be efficiently solved by the stochastic gradient descent (SGD) method [61]. In fact, the above model can be further converted into:
$$ \min_{D_j} f(D_j) = \frac{1}{2} \operatorname{Tr}\left( D_j^T D_j A_j A_j^T \right) - \operatorname{Tr}\left( D_j A_j Y_j^T \right) \quad \text{s.t.} \; \| d_k \|_2 \le 1, \; j \in \{c, u\} $$
Here, the two sub-dictionaries D_c and D_u are updated in the same manner. In the following derivation, the sub-dictionary subscript is omitted.
In the t-th iteration, the k-th column d_k can be updated as follows:
$$ u_k^{(t)} \leftarrow d_k^{(t-1)} - \nabla_k^{(t)}, \qquad d_k^{(t)} \leftarrow \frac{u_k^{(t)}}{\max\left( \| u_k^{(t)} \|_2, 1 \right)} $$
where ∇_k^{(t)} is the k-th column of the gradient matrix ∇^{(t)}, which can be calculated as:
$$ \nabla^{(t)} = \frac{\partial f(D)}{\partial D} = D^{(t-1)} S^{(t)} - B^{(t)}, \qquad S^{(t)} = S^{(t-1)} + \sum_{i=1}^{M} a_i^{(t)} \left( a_i^{(t)} \right)^T, \qquad B^{(t)} = B^{(t-1)} + \sum_{i=1}^{M} y_i \left( a_i^{(t)} \right)^T $$
where M is the batch size in each update.
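One batch of this update can be sketched as follows, accumulating the sufficient statistics S and B and projecting each atom back onto the unit ball. Matching the update above, no explicit step size is shown; in practice a learning rate would usually scale the gradient step. The data shapes are illustrative:

```python
import numpy as np

def sgd_dict_update(D, Y_batch, A_batch, S, B):
    """One batch of the dictionary update with sufficient statistics.

    S accumulates A A^T (K x K) and B accumulates Y A^T (m x K); each
    column of D takes a gradient step and is projected onto the unit ball.
    """
    S = S + A_batch @ A_batch.T
    B = B + Y_batch @ A_batch.T
    grad = D @ S - B                       # gradient of f(D)
    for k in range(D.shape[1]):
        u = D[:, k] - grad[:, k]
        D[:, k] = u / max(np.linalg.norm(u), 1.0)
    return D, S, B

rng = np.random.default_rng(0)
m, K, M = 9, 6, 20                          # patch dim, atoms, batch size
Y = rng.standard_normal((m, M))             # batch of patches
A = rng.standard_normal((K, M)) * 0.1       # their sparse codes (illustrative)
D = rng.standard_normal((m, K))
D /= np.linalg.norm(D, axis=0)              # start from unit-norm atoms
D, S, B = sgd_dict_update(D, Y, A, np.zeros((K, K)), np.zeros((m, K)))
```

The projection step guarantees the constraint ||d_k||_2 ≤ 1 after every batch, regardless of the gradient magnitude.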
It can be observed from the above optimizations that pseudo labels play a key role in the proposed method. Not only do they indicate the changed pixels, but they also divide the training samples into two classes, each of which is employed to train the corresponding sub-dictionary. To achieve this, at the beginning of training, an initial split of the training samples is determined by an improved Otsu threshold, which is estimated as follows:
$$ T_h = \tau + 0.5 \times \sigma_{DI}, \qquad T_l = \tau - 0.5 \times \sigma_{DI} $$
where τ is the threshold estimated by applying the Otsu method [62] to the DI, and σ_{DI} is the standard deviation of the DI. With the two thresholds T_h and T_l, the DI can be divided into three classes according to its intensity: the pixels above T_h and below T_l are classified as changed and unchanged, respectively. The sub-dictionaries can then be initialized using samples from the corresponding classes.
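A sketch of this initialization pairs a standard histogram-based Otsu threshold with the ±0.5·σ_DI offsets; the bimodal test data below are synthetic:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Classic Otsu threshold: maximize the between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)                  # class-0 probability up to each bin
    mu = np.cumsum(p * centers)        # class-0 cumulative mean
    mu_t = mu[-1]                      # global mean
    w1 = 1.0 - w0
    valid = (w0 > 1e-12) & (w1 > 1e-12)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

def double_thresholds(di):
    """T_h / T_l around the Otsu threshold, offset by 0.5 * std of the DI."""
    tau = otsu_threshold(di)
    s = di.std()
    return tau + 0.5 * s, tau - 0.5 * s

# Synthetic bimodal DI: 90% unchanged near 0, 10% changed near 2.
rng = np.random.default_rng(0)
di = np.concatenate([rng.normal(0.0, 0.2, 900), rng.normal(2.0, 0.2, 100)])
t_h, t_l = double_thresholds(di)
```

Pixels above T_h seed the changed sub-dictionary, pixels below T_l seed the unchanged one, and the band in between is left for the iterative pseudo-labeling to resolve.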
The proposed algorithm is summarized in Algorithm 1.
Algorithm 1: Change detection based on the sparse model.

2.3. Complexity Analysis

The whole pipeline of the proposed method is shown in Algorithm 1. The proposed method consists of two parts: sparse coding and dictionary updating. Given an m × N dictionary and an m-dimensional feature vector for each patch, sparse coding is performed by the method in [58], and its complexity is O(S · m²), where S is the number of non-zero entries in the sparse codes. Suppose M samples are used to train the dictionary in each batch; then, the complexity of the dictionary update is O(S · m · M). Therefore, the total computational complexity of the proposed method is O(S · m² + S · m · M).

3. Experimental Results and Analysis

3.1. Datasets

In this paper, five datasets of bitemporal SAR images were used in order to verify the effectiveness of the proposed change detection algorithm.
  • Figure 1a shows the Bern dataset, containing two C-band, VV-polarization SAR images (301 × 301 pixels) with a spatial resolution of 30 × 30 m. They were acquired by the European Remote Sensing 2 (ERS-2) satellite SAR sensor over an area near the city of Bern, Switzerland, in April and May 1999, respectively. The reference map is publicly available.
  • Figure 2a shows the San Francisco dataset [63], which is part (256 × 256 pixels) of two SAR images acquired by an ERS-2 SAR sensor from the city of San Francisco. The images, with 25-m spatial resolution, were provided by the European Space Agency. These two images were captured in August 2003 and May 2004, respectively. The ground truth change map was provided in [63].
  • The images in Figure 3a and Figure 4a are parts of a Yellow River dataset, with 8 × 8-m spatial resolution. They were acquired by Radarsat-2 from the region of the Yellow River around Dongying City, Shandong Province, China, in June 2008 and June 2009. These two images are four-look and single-look, respectively; the different numbers of looks mean different noise levels. The sizes of these two parts are 257 × 289 and 400 × 300, respectively. The reference maps of these two datasets were kindly provided by Gong et al. [64].
  • Figure 5a shows the Ottawa dataset, which has two SAR images (290 × 350 pixels) with a spatial resolution of 10 × 10-m. They were acquired from the city of Ottawa by the Radarsat SAR sensor. They were provided by the Defense Research and Development Canada (DRDC)–Ottawa and acquired in July and August 1997. The reference map is publicly available.

3.2. Experimental Configurations

To verify the effectiveness of the proposed method, in this section, the proposed method (SC-SGD) is compared with the PCA K-means (PCAK) method [15], SPatial Coding method (SPC) [23], fuzzy clustering based on the MRF method (FCM-MRF) [18], Gabor feature-based Two-Level Clustering method (GTLC) [26], and SAliency-based Change Detection method (SACD) [24].
For the compared methods, the parameters were set according to the original literature. For our proposed method, the size of a patch was taken as 3 × 3. The length of each sub-dictionary was set as 50, and regularization parameter λ was set to 0.15.
To investigate the benefits of despeckling, three SAR image despeckling methods were employed: MMSE filtering [65], PPB filtering [38], and SAR-BM3D filtering [41].

3.3. Benefits from Despeckling

Each change detection method was performed on input images and on images despeckled by the three filtering methods mentioned above, i.e., MMSE filtering, SAR-BM3D filtering, and PPB filtering. In the MMSE filtering method, the size of the neighborhood was set as 3 × 3. In the SAR-BM3D method, the number of looks L was set as one for the Yellow River datasets and three for the other datasets. The default size of the stack was set as 16, and the size of the search window was set as 37. We employed the “daub4” wavelet basis [66] in the SAR-BM3D method, and the size of the 2D Kaiser window used for reconstruction was set to two. In the PPB method, the number of looks L was set the same as in the SAR-BM3D method. The sizes of the patch and search window were set as seven and 21, respectively. The quantile parameter α was set as 0.74 for the Yellow River datasets and 0.92 for the other datasets. The number of iterations was set to four for all datasets.
The comparisons were evaluated using the probability of false alarm (pFA), the probability of missed alarm (pMA), and the kappa coefficient (κ) [67], where pFA and pMA are defined as the ratios of the numbers of false alarms and missed alarms, respectively, to the number of changed pixels; lower pFA and pMA and a higher kappa mean better performance. To demonstrate the benefits of despeckling, several change detection methods were selected: the PCAK [15], SPC [23], FCM-MRF [18], GTLC [26], SACD [24], Conradsen’s [16], and SC-SGD methods. Each method was performed on an input image and on the corresponding images despeckled by MMSE filtering, SAR-BM3D filtering, and PPB filtering, where the performance on the input image was considered the baseline.
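These metrics can be computed from a binary change map and a reference map as follows; per the definition above, both pFA and pMA are normalized by the number of changed pixels:

```python
import numpy as np

def change_metrics(pred, ref):
    """pFA, pMA, and the kappa coefficient for binary change maps (1 = changed).

    Following the definition in the text, both pFA and pMA are normalized
    by the number of changed pixels in the reference map.
    """
    pred = pred.astype(bool).ravel()
    ref = ref.astype(bool).ravel()
    fa = np.sum(pred & ~ref)                  # false alarms
    ma = np.sum(~pred & ref)                  # missed alarms
    nc = np.sum(ref)                          # changed pixels in the reference
    p_fa, p_ma = fa / nc, ma / nc
    po = np.mean(pred == ref)                 # observed agreement
    pe = (np.mean(pred) * np.mean(ref)        # chance agreement
          + np.mean(~pred) * np.mean(~ref))
    kappa = (po - pe) / (1.0 - pe)
    return p_fa, p_ma, kappa

# Sanity check: a perfect detection yields pFA = pMA = 0 and kappa = 1.
ref = np.zeros((10, 10), dtype=int)
ref[2:5, 2:5] = 1
p_fa, p_ma, kappa = change_metrics(ref.copy(), ref)
```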
To compare the performance on the raw and despeckled images, the ratios between the performance on the despeckled images and the baseline are plotted in Figure 6a–e. In each group of figures, the evaluations on the input image are shown first, and the ratios of pFA, pMA, and kappa coefficients are shown in the following figures, where the red dotted line in each figure indicates a 1:1 ratio.
The figures show that, in most cases, the pFAs drop when despeckled images are used, which means that despeckling can help lower the pFA for most methods. For the Bern, San Francisco, and Ottawa datasets, the kappa coefficients increased slightly after despeckling, while for the two scenes of the Yellow River dataset with heavy speckle noise, the kappa coefficients increased significantly. This indicates that the speckle in the input images generated noise in the DI that is often misclassified as changed pixels; after despeckling the raw images, this noise in the corresponding DI was reduced. In particular, for the two scenes from the Yellow River dataset, the speckle was very heavy and its strength differed between the temporal SAR images, which brought more challenges for change detection.
For the pMA, on the other hand, filtering does not always lead to a decrease. For the Bern dataset, pMA increased after filtering, since several changed regions in the Bern dataset are very small, as shown in the reference map (Figure 1a). These small regions, subject to speckle, can be blurred by filtering, which led to an increase in MA. For the Ottawa dataset, MMSE filtering increased the MA due to oversmoothing, while pMA decreased after SAR-BM3D and PPB filtering. For the San Francisco dataset and the second scene of the Yellow River dataset, MA decreased in most cases, since the changed regions in these two datasets are larger and less affected by filtering.
For the kappa coefficients, the results on filtered images were better than those on the input images, since the filtering operator reduced the speckle and indirectly compensated for registration errors in the DI. In particular, Conradsen’s method [16] was quite sensitive to speckle. Figure 6b,c,e shows that Conradsen’s method [16] obtained a higher pFA on the input images and that, after despeckling, its ratios of pFA and kappa were higher than those of the other compared methods. This means that despeckling was quite beneficial for methods sensitive to speckle.
The visual comparison results are presented in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. In each figure, we show the detection results for raw and filtered images using each filtering method. Figure 7 shows that, for the Bern dataset, there were some false alarm points in the input-image results due to noise in the DI, while fewer false alarm points were present in the results on filtered images. However, some alarm points were missed in the results on filtered images, since some details in the DI were not preserved during filtering, as shown in Figure 1b–d. This explains why MA increases after filtering.
For the Yellow River datasets, Figure 9 and Figure 10 show that noise was reduced after filtering, as shown in the second to fourth rows. In particular, in Figure 3f and Figure 10f, we can observe significant improvements after despeckling. This means that filtering can reduce the interference from speckle. The other datasets tell a similar story.
Filtering, however, does not always benefit change detection. Input images can be over-smoothed, so that details are filtered out of the DI. Only filtering methods that preserve structures while suppressing the speckle in input images benefit change detection. This is demonstrated in Figure 6b, where the evaluations on input images and MMSE-filtered images were worse, while the images filtered by PPB showed obvious advantages.

3.4. Benefits from Sparse Learning

We have demonstrated the benefits of filtering in the previous paragraphs. In addition, to verify the benefits of our proposed SC-SGD method, it was compared with the PCAK [15], SPC [23], FCM-MRF [18], GTLC [26], Conradsen’s [16], and SACD [24] methods. The first rows of Figure 9 and Figure 10 show the comparison with the other methods: the results obtained by SC-SGD were less subject to speckle noise, while those of the other methods were affected much more and contained many false positives. In addition, for the Bern dataset, Figure 7 shows that our proposed method detects more details of the changed regions than the other methods. These results indicate that the sparse learning method can extract more details of the changed regions from a DI.
Performance was also compared in terms of the pFA, pMA, and kappa coefficient. It can be seen from Figure 6b–d that our proposed method obtained results comparable to SACD on the input images and performed better than any of the other compared methods. Moreover, on the despeckled images, it obtained better ratios of pFA, pMA, and kappa coefficient than any of the other methods.
In the proposed method, we exploited the essential structures of the difference map by learning a dictionary, where each atom of the dictionary, i.e., each column, represents a local structure of the difference map. With the dictionary, we can obtain a more robust and sparse feature for each patch via sparse representation. The coefficients obtained by sparse representation for each patch were more robust, and this robustness benefits the extraction of the changed regions from the difference map.
Furthermore, we show the receiver operating characteristic (ROC) curves of the compared methods on the five datasets in Figure 12, where for each comparison we also list the corresponding area under the curve (AUC) values. It is clearly shown that sparse learning has great advantages for SAR image change detection over the other compared methods. In particular, in Figure 12c,d, our method performed much better on the Yellow River datasets with heavy speckle. These results are consistent with the evaluations in Figure 6.
Finally, we compared the running times of all methods. Here, since the compared methods were performed on the same raw images and the corresponding despeckled images, we only compared the running times of these methods on raw images. We took the Ottawa dataset as an example, and list the running times in Table 1.

3.5. Analysis of Parameters

So far, we have shown the advantages of our proposed method. There are only three parameters in our proposed method: the size of the patch, the length of the dictionary, and the regularization parameter λ. The size of the patch is usually chosen based on the resolution of the input SAR images; generally, the larger the patch, the stronger the smoothing effect. When we applied our method to the despeckled images, we took the patch size as 3 × 3. Furthermore, the results were insensitive to the length of the dictionary, but it affected the running time of the method; therefore, in this paper, we took the length of each sub-dictionary as 50. Finally, the regularization parameter λ controls the sparsity of the encoding, which affects the performance of our method. We took this parameter as 0.15, following [57].

3.6. Further Discussions

Although the proposed method outperformed the other compared methods on the San Francisco, Ottawa, and Yellow River datasets, for the Bern dataset, the results of all methods were comparable. Since the changed regions of the Bern dataset contain many details, they are more easily contaminated by speckle and liable to over-smoothing during filtering. Figure 6a illustrates that a certain pMA was unavoidable for all of the methods.
In addition, for certain methods, MMSE filtering was not a good choice, since it often over-smooths. A higher pFA may result for region-based methods, such as SACD. In fact, an over-smoothed SAR image lowers the contrast of the DI, making it difficult to distinguish the changed regions from the unchanged regions.

4. Conclusions

In this paper, we have explored how despeckling and structural features can benefit SAR image change detection. We demonstrated that structure-preserving despeckling methods, such as SAR-BM3D and PPB, not only suppress the influence of speckle on the DI, but also enhance the information obtained from changed regions. Furthermore, a sparse learning-based method was proposed to exploit the fundamental structural features of the DI. The experimental results show that our proposed method outperformed the other methods on most of the selected datasets. However, for highly-detailed changed regions, there is still much room for improvement, which we leave to future work.

Author Contributions

R.W. designed the experiments and analyzed the data; J.-W.C. designed the project and wrote the manuscript; L.J. and M.W. improved the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 61701361), the Open Fund of the State Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University (No. 17E02), the Key R&D Program—The Key Industry Innovation Chain of Shaanxi (No. 2018JM6083), and the Fundamental Research Funds for the Central University (No. JB181701).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Théau, J. Change Detection. In Springer Handbook of Geographic Information; Kresse, W., Danko, D.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 75–94. [Google Scholar]
  2. Song, C.; Woodcock, C.E.; Seto, K.C.; Lenney, M.P.; Macomber, S.A. Classification and change detection using Landsat TM data: When and how to correct atmospheric effects? Remote Sens. Environ. 2001, 75, 230–244. [Google Scholar] [CrossRef]
  3. Collins, J.B.; Woodcock, C.E. An assessment of several linear change detection techniques for mapping forest mortality using multitemporal Landsat TM data. Remote Sens. Environ. 1996, 56, 66–77. [Google Scholar] [CrossRef]
  4. Tarantino, C.; Blonda, P.; Pasquariello, G. Application of change detection techniques for monitoring man-induced landslide causal factors. In Proceedings of the 2004 IEEE International Geoscience and Remote Sensing Symposium, IGARSS’04, Anchorage, AK, USA, 20–24 September 2004; Volume 2, pp. 1103–1106. [Google Scholar]
  5. Liu, Z.G.; Mercier, G.; Dezert, J.; Pan, Q. Change Detection in Heterogeneous Remote Sensing Images Based on Multidimensional Evidential Reasoning. IEEE Geosci. Remote Sens. Lett. 2014, 11, 168–172. [Google Scholar] [CrossRef]
  6. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  7. Bruzzone, L.; Prieto, D.F. Unsupervised change detection in multisource and multisensor remote sensing images. In Proceedings of the IEEE 2000 International Geoscience and Remote Sensing Symposium, IGARSS 2000, Honolulu, HI, USA, 24–28 July 2000; Volume 6, pp. 2441–2443. [Google Scholar]
  8. Bruzzone, L.; Prieto, D.F. An adaptive semiparametric and context-based approach to unsupervised change detection in multitemporal remote-sensing images. IEEE Trans. Image Process. 2002, 11, 452–466. [Google Scholar] [CrossRef] [PubMed]
  9. Bovolo, F.; Bruzzone, L. A detail-preserving scale-driven approach to change detection in multitemporal SAR images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2963–2972. [Google Scholar] [CrossRef]
  10. Bazi, Y.; Bruzzone, L.; Melgani, F. An unsupervised approach based on the generalized Gaussian model to automatic change detection in multitemporal SAR images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 874–887. [Google Scholar] [CrossRef]
  11. Gong, M.; Cao, Y.; Wu, Q. A Neighborhood-Based Ratio Approach for Change Detection in SAR Images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 307–311. [Google Scholar] [CrossRef]
  12. Zheng, Y.; Zhang, X.; Hou, B.; Liu, G. Using Combined Difference Image and k -Means Clustering for SAR Image Change Detection. IEEE Geosci. Remote Sens. Lett. 2013, 11, 691–695. [Google Scholar] [CrossRef]
  13. Ghosh, A.; Mishra, N.S.; Ghosh, S. Fuzzy clustering algorithms for unsupervised change detection in remote sensing images. Inf. Sci. 2011, 181, 699–715. [Google Scholar] [CrossRef]
  14. Gong, M.; Zhou, Z.; Ma, J. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering. IEEE Trans. Image Process. 2012, 21, 2141–2151. [Google Scholar] [CrossRef] [PubMed]
  15. Celik, T. Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776. [Google Scholar] [CrossRef]
  16. Conradsen, K.; Nielsen, A.A.; Skriver, H. Determining the points of change in time series of polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3007–3024. [Google Scholar] [CrossRef]
  17. Zhang, X.; Chen, J.; Meng, H. A novel SAR image change detection based on graph-cut and generalized gaussian model. IEEE Geosci. Remote Sens. Lett. 2013, 10, 14–18. [Google Scholar] [CrossRef]
  18. Gong, M.; Su, L.; Jia, M.; Chen, W. Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images. IEEE Trans. Fuzzy Syst. 2014, 22, 98–109. [Google Scholar] [CrossRef]
  19. Li, W.; Chen, J.; Yang, P.; Sun, H. Multitemporal SAR images change detection based on joint sparse representation of pair dictionaries. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 6165–6168. [Google Scholar]
  20. Hou, B.; Wei, Q.; Zheng, Y.; Wang, S. Unsupervised change detection in SAR image based on Gauss-log ratio image fusion and compressed projection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3297–3317. [Google Scholar] [CrossRef]
  21. Gao, H.; Zou, B. Algorithms of image fusion based on wavelet transform. In Proceedings of the 2012 International Conference on Image Analysis and Signal Processing (IASP), Hangzhou, China, 9–11 November 2012; pp. 1–4. [Google Scholar]
  22. Da Cunha, A.L.; Zhou, J.; Do, M.N. The nonsubsampled contourlet transform: Theory, design, and applications. IEEE Trans. Image Process. 2006, 15, 3089–3101. [Google Scholar] [CrossRef] [PubMed]
  23. Wang, S.; Jiao, L.; Yang, S. SAR Images Change Detection Based on Spatial Coding and Nonlocal Similarity Pooling. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3452–3466. [Google Scholar] [CrossRef]
  24. Zheng, Y.; Jiao, L.; Liu, H.; Zhang, X.; Hou, B.; Wang, S. Unsupervised saliency-guided SAR image change detection. Pattern Recognit. 2017, 61, 309–326. [Google Scholar] [CrossRef]
  25. Wang, S.; Yang, S.; Jiao, L. Saliency-guided change detection for SAR imagery using a semi-supervised Laplacian SVM. Remote Sens. Lett. 2016, 7, 1043–1052. [Google Scholar] [CrossRef]
  26. Li, H.C.; Celik, T.; Longbotham, N.; Emery, W.J. Gabor feature based unsupervised change detection of multitemporal SAR images based on two-level clustering. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2458–2462. [Google Scholar]
  27. Shi, J.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905. [Google Scholar]
  28. Su, X.; Deledalle, C.A.; Tupin, F.; Sun, H. NORCAMA: Change analysis in SAR time series by likelihood ratio change matrix clustering. ISPRS J. Photogramm. Remote Sens. 2015, 101, 247–261. [Google Scholar] [CrossRef]
  29. Chen, G.; Zhao, K.; Powers, R. Assessment of the image misregistration effects on object-based change detection. ISPRS J. Photogramm. Remote Sens. 2014, 87, 19–27. [Google Scholar] [CrossRef]
  30. Dai, X.; Khorram, S. The effects of image misregistration on the accuracy of remotely sensed change detection. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1566–1577. [Google Scholar]
  31. Bruzzone, L.; Bovolo, F.; Marchesi, S. A multiscale change detection technique robust to registration noise. In Proceedings of the International Conference on Pattern Recognition and Machine Intelligence, Kolkata, India, 18–22 December 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 77–86. [Google Scholar]
  32. Sansosti, E.; Berardino, P.; Manunta, M.; Serafino, F.; Fornaro, G. Geometrical SAR image registration. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2861–2870. [Google Scholar] [CrossRef]
  33. Fan, J.; Wu, Y.; Wang, F.; Zhang, Q.; Liao, G.; Li, M. SAR image registration using phase congruency and nonlinear diffusion-based SIFT. IEEE Geosci. Remote Sens. Lett. 2015, 12, 562–566. [Google Scholar]
  34. Lee, J.S.; Jurkevich, L.; Dewaele, P.; Wambacq, P.; Oosterlinck, A. Speckle filtering of synthetic aperture radar images: A review. Remote Sens. Rev. 1994, 8, 313–340. [Google Scholar] [CrossRef]
  35. Lee, J.S. Speckle suppression and analysis for synthetic aperture radar images. Opt. Eng. 1986, 25, 636–643. [Google Scholar] [CrossRef]
  36. Frost, V.S.; Stiles, J.A.; Shanmugan, K.S.; Holtzman, J.C. A model for radar images and its application to adaptive digital filtering of multiplicative noise. IEEE Trans. Pattern Anal. Mach. Intell. 1982, PAMI-4, 157–166. [Google Scholar] [CrossRef]
  37. Lee, J.S.; Wen, J.H.; Ainsworth, T.L.; Chen, K.S.; Chen, A.J. Improved sigma filter for speckle filtering of SAR imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 202–213. [Google Scholar]
  38. Deledalle, C.A.; Denis, L.; Tupin, F. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans. Image Process. 2009, 18, 2661–2672. [Google Scholar] [CrossRef] [PubMed]
  39. Zhong, H.; Li, Y.; Jiao, L. SAR image despeckling using Bayesian nonlocal means filter with sigma preselection. IEEE Geosci. Remote Sens. Lett. 2011, 8, 809–813. [Google Scholar] [CrossRef]
  40. Feng, H.; Hou, B.; Gong, M. SAR image despeckling based on local homogeneous-region segmentation by using pixel-relativity measurement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2724–2737. [Google Scholar] [CrossRef]
  41. Parrilli, S.; Poderico, M.; Angelino, C.V.; Verdoliva, L. A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2012, 50, 606–616. [Google Scholar] [CrossRef]
  42. Di Martino, G.; Di Simone, A.; Iodice, A.; Riccio, D. Scattering-based nonlocal means SAR despeckling. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3574–3588. [Google Scholar] [CrossRef]
  43. Di Simone, A.; Di Martino, G.; Iodice, A.; Riccio, D. Sensitivity analysis of a scattering-based nonlocal means Despeckling Algorithm. Eur. J. Remote Sens. 2017, 50, 87–97. [Google Scholar] [CrossRef]
  44. Chierchia, G.; El Gheche, M.; Scarpa, G.; Verdoliva, L. Multitemporal SAR image despeckling based on block-matching and collaborative filtering. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5467–5480. [Google Scholar] [CrossRef]
  45. Sica, F.; Reale, D.; Poggi, G.; Verdoliva, L.; Fornaro, G. Nonlocal adaptive multilooking in SAR multipass differential interferometry. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1727–1742. [Google Scholar] [CrossRef]
  46. Jiang, M.; Ding, X.; Hanssen, R.F.; Malhotra, R.; Chang, L. Fast statistically homogeneous pixel selection for covariance matrix estimation for multitemporal InSAR. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1213–1224. [Google Scholar] [CrossRef]
  47. Pham, M.T.; Mercier, G.; Michel, J. Change Detection Between SAR Images Using a Pointwise Approach and Graph Theory. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2020–2032. [Google Scholar] [CrossRef]
  48. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar]
  49. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  50. Goodman, J.W. Some fundamental properties of speckle. JOSA 1976, 66, 1145–1150. [Google Scholar] [CrossRef]
  51. Ma, W.; Yang, H.; Wu, Y.; Xiong, Y.; Hu, T.; Jiao, L.; Hou, B. Change Detection Based on Multi-Grained Cascade Forest and Multi-Scale Fusion for SAR Images. Remote Sens. 2019, 11, 142. [Google Scholar] [CrossRef]
  52. Gong, M.; Zhan, T.; Zhang, P.; Miao, Q. Superpixel-Based Difference Representation Learning for Change Detection in Multispectral Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2658–2673. [Google Scholar] [CrossRef]
  53. Wang, R.; Zhang, J.; Chen, J.; Jiao, L.; Wang, M. Imbalanced Learning-Based Automatic SAR Images Change Detection by Morphologically Supervised PCA-Net. IEEE Geosci. Remote Sens. Lett. 2018, 1–5. [Google Scholar] [CrossRef]
  54. Wen, Z.; Hou, B.; Wang, S. High resolution SAR target reconstruction from compressive measurements with prior knowledge. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, VIC, Australia, 21–26 July 2013; pp. 3167–3170. [Google Scholar]
  55. Wen, Z.; Hou, B.; Wang, S.; Jiao, L. Learning task-driven polarimetric target decomposition: A new perspective. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 4761–4764. [Google Scholar]
  56. Wang, S.; Jiao, L.; Yang, S.; Liu, H. SAR image target recognition via Complementary Spatial Pyramid Coding. Neurocomputing 2016, 196, 125–132. [Google Scholar] [CrossRef]
  57. Chen, J.; Jiao, L.; Wen, Z. High-level feature selection with dictionary learning for unsupervised SAR imagery terrain classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 145–160. [Google Scholar] [CrossRef]
  58. Chen, J.; Jiao, L.; Ma, W.; Liu, H. Unsupervised high-level feature extraction of SAR imagery with structured sparsity priors and incremental dictionary learning. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1467–1471. [Google Scholar] [CrossRef]
  59. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  60. Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 127–239. [Google Scholar] [CrossRef]
  61. Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010; Springer: Berlin, Germany, 2010; pp. 177–186. [Google Scholar]
  62. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  63. Gao, F.; Liu, X.; Dong, J.; Zhong, G.; Jian, M. Change Detection in SAR Images Based on Deep Semi-NMF and SVD Networks. Remote Sens. 2017, 9, 435. [Google Scholar] [CrossRef]
  64. Gong, M.; Zhao, J.; Liu, J.; Miao, Q.; Jiao, L. Change Detection in Synthetic Aperture Radar Images Based on Deep Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2017, 27, 125–138. [Google Scholar] [CrossRef] [PubMed]
  65. Huang, T.; Yang, G.; Tang, G. A fast two-dimensional median filtering algorithm. IEEE Trans. Acoust. Speech Signal Process. 1979, 27, 13–18. [Google Scholar] [CrossRef]
  66. Mallat, S. A Wavelet Tour of Signal Processing; Elsevier: Amsterdam, The Netherlands, 1999. [Google Scholar]
  67. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
Figure 1. Bern dataset. (a) Raw images from April 1999 and May 1999 and the reference map. (b–d) Images filtered by MMSE, SAR block-matching and 3D filtering (SAR-BM3D), and probabilistic patch-based (PPB) filters, and the corresponding DIs.
Figure 2. San Francisco dataset. (a) Raw images from August 2003 and May 2004 and the reference map. (b–d) Images filtered by MMSE, SAR-BM3D, and PPB, and the corresponding DIs.
Figure 3. Yellow River Set A. (a) Raw images from June 2008 and June 2009 and the reference map. (b–d) Images filtered by MMSE, SAR-BM3D, and PPB, and the corresponding DIs.
Figure 4. Yellow River Set B. (a) Raw images from June 2008 and June 2009 and the reference map. (b–d) Images filtered by MMSE, SAR-BM3D, and PPB, and the corresponding DIs.
Figure 5. Ottawa dataset. (a) Raw images from August 1997 and September 1997 and the reference map. (b–d) Images filtered by MMSE, SAR-BM3D, and PPB, and the corresponding DIs.
Figure 6. The performance of change detections on both raw and despeckled images. (a) Bern dataset. (b) San Francisco dataset. (c) Yellow River A dataset. (d) Yellow River B dataset. (e) Ottawa dataset.
Figure 7. Results of (a) PCAK, (b) SPC, (c) FCM-MRF, (d) GTLC, (e) SACD, (f) Conradsen, and (g) SC-SGD on the Bern dataset. From top to bottom: each row contains the results of the input image, and images filtered by MMSE, SAR-BM3D, and PPB filters, respectively.
Figure 8. Results of (a) PCAK, (b) SPC, (c) FCM-MRF, (d) GTLC, (e) SACD, (f) Conradsen, and (g) SC-SGD on the San Francisco dataset. From top to bottom: each row contains the results of the input image, and images filtered by MMSE, SAR-BM3D, and PPB filters, respectively.
Figure 9. Results of (a) PCAK, (b) SPC, (c) FCM-MRF, (d) GTLC, (e) SACD, (f) Conradsen, and (g) SC-SGD on the Yellow River A dataset. From top to bottom: each row contains the results of the input image, and images filtered by MMSE, SAR-BM3D, and PPB filters, respectively.
Figure 10. Results of (a) PCAK, (b) SPC, (c) FCM-MRF, (d) GTLC, (e) SACD, (f) Conradsen, and (g) SC-SGD on Yellow River B dataset. From top to bottom: each row contains the results of the input image, and images filtered by MMSE, SAR-BM3D, and PPB filters, respectively.
Figure 11. Results of (a) PCAK, (b) SPC, (c) FCM-MRF, (d) GTLC, (e) SACD, (f) Conradsen, and (g) SC-SGD on the Ottawa dataset. From top to bottom: each row contains the results of the input image, and images filtered by median, SAR-BM3D, and PPB filters, respectively.
Figure 12. The ROC curves and AUC values of change detections on both raw and despeckled images by MMSE, SAR-BM3D, and PPB, respectively. (a) Bern dataset. (b) San Francisco dataset. (c) Yellow River A dataset. (d) Yellow River B dataset. (e) Ottawa dataset.
Table 1. Classification accuracy (%) obtained by an ensemble of 200 discrimination classifiers.

| Methods   | PCAK | SPC   | FCM-MRF | GTLC  | SACD  | SC-SGD |
|-----------|------|-------|---------|-------|-------|--------|
| Times (s) | 2.06 | 28.44 | 4.36    | 10.88 | 62.88 | 25.56  |

Share and Cite

MDPI and ACS Style

Wang, R.; Chen, J.-W.; Jiao, L.; Wang, M. How Can Despeckling and Structural Features Benefit to Change Detection on Bitemporal SAR Images? Remote Sens. 2019, 11, 421. https://doi.org/10.3390/rs11040421

