Article

A Multi-Scale Graph Based on Spatio-Temporal-Radiometric Interaction for SAR Image Change Detection

1 College for Informatics and Cyber Security, People’s Public Security University of China, Beijing 100038, China
2 College of Geosciences and Surveying Engineering, China University of Mining and Technology, Beijing 100083, China
3 College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
4 China Waterborne Transport Research Institute, Beijing 100088, China
5 Xi’an Co-Build Regal Technology Co., Ltd., Xi’an 710054, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(3), 560; https://doi.org/10.3390/rs16030560
Submission received: 28 December 2023 / Revised: 21 January 2024 / Accepted: 25 January 2024 / Published: 31 January 2024

Abstract:
Change detection (CD) in remote sensing imagery has found broad applications in ecosystem service assessment, disaster evaluation, urban planning, land utilization, etc. In this paper, we propose a novel graph model-based method for synthetic aperture radar (SAR) image CD. To mitigate the influence of speckle noise on SAR image CD, we compare the structures of multi-temporal images instead of directly comparing pixel values, which is more robust to speckle noise. Specifically, we first segment the multi-temporal images into square patches at multiple scales, construct multi-scale K-nearest neighbor (KNN) graphs for each image, and develop an effective graph fusion strategy that exploits the multi-scale information within SAR images and offers an enhanced representation of the complex relationships among features in the images. Second, we accomplish the interaction of spatio-temporal-radiometric information between graph models through graph mapping, which can efficiently uncover the connections between multi-temporal images, leading to a more precise extraction of the changes between them. Finally, we use a Markov random field (MRF)-based segmentation method to obtain the binary change map. Through extensive experimentation on real datasets, we demonstrate the superiority of our method over several current state-of-the-art methods.

1. Introduction

1.1. Background

Change detection (CD) refers to the analysis of remote sensing images acquired at different times of the same scene, to identify changes occurring on the Earth’s surface [1]. It finds extensive applications in both civilian and military domains, such as disaster relief, agricultural surveys, urban planning, and military monitoring [2,3,4]. Among these, synthetic aperture radar (SAR) is an active imaging system known for its all-weather and all-day imaging capabilities, as well as its insensitivity to atmospheric and lighting conditions. Therefore, SAR image CD technology has been receiving increased attention [5,6,7].
In general, CD methods can be categorized, based on the requirement for labeled data, into supervised, semi-supervised, and unsupervised methods [8,9,10]. While supervised and semi-supervised methods often yield more accurate detection outcomes, they necessitate labeled samples for training, which incurs substantial labor costs and demands significant domain expertise. Consequently, such methods are relatively constrained in practical applications. Traditional unsupervised SAR CD methods can be divided into three steps: pre-processing, computation of difference images (DI), and extraction of change results [11]. During the pre-processing phase, operations such as radiometric correction and image registration are typically employed to enhance radiometric and spatial comparability among multi-temporal SAR images. The second step involves generating a DI to initially distinguish changed and unchanged regions. Ultimately, in the third step, the DI is analyzed and segmented into changed and unchanged categories.

1.2. Related Work

The mechanism of SAR coherent imaging leads to the intrinsic property of speckle noise in SAR images. This speckle noise manifests as random pixel intensity variations within homogeneous regions on SAR images, visually appearing as grainy speckles. Due to the impact of speckle noise, generating a high-quality DI in the SAR CD becomes a challenging task [12,13,14].
Ratio operators [15], logarithmic-ratio operators [16], mean-ratio operators [17], and neighborhood-ratio operators [18] have the ability to transform multiplicative speckle noise into additive noise, while also enhancing low-intensity pixels to some extent. They exhibit a certain robustness to speckle noise, making them commonly used in constructing DI. Furthermore, some researchers have proposed methods for fusing DI, such as using wavelet fusion techniques to fuse logarithmic and mean ratio images [19] or Gaussian logarithmic ratio and logarithmic ratio images [20], and using Shearlet fusion techniques to fuse saliency images with Gaussian logarithmic ratio images [21] or saliency-guided logarithmic ratio images [22]. Although methods based on image ratios can alleviate the impact of noise to some extent, they cannot fully exploit the information from multi-temporal images, leaving residual noise that can still interfere with the final detection results. In [23], the authors tested various despeckling methods for their impact on change detection performance. Although despeckling SAR images prior to comparison can alleviate, to some extent, the impact of speckle noise on the DI, this approach also carries a certain risk; namely, the details lost during the despeckling process cannot be recovered in subsequent steps. To address this challenge, a model that combines the statistical properties of logarithmically transformed SAR images and non-local low-rank modeling has been proposed [5]. This model does not require applying despeckling methods separately to multi-temporal SAR images, nor does it directly apply ratio/logarithmic-ratio/mean-ratio operators to SAR images. Therefore, this approach avoids the loss of information during the despeckling and image contrast enhancement processes, thus obtaining an improved DI.
It is important to note that completely eliminating speckle noise from SAR images is unattainable without compromising the structural details of the images. Moreover, speckle noise in high-resolution SAR images exacerbates intensity variations, which can impact the accuracy of change detection. Consequently, generating difference images typically involves striking a balance between robustness against speckle noise and preservation of image details.
Recently, to address this contradiction, some graph-based CD methods have been proposed. In [24], a pointwise graph model is first constructed for one image to capture the intensity and geometry information, and then the changes are evaluated by directly comparing the two images within this pointwise graph. Wu et al. have constructed an object-based graph for each image [25], which first segments the images into superpixels as the vertices of the graph and uses the log-ratio distance to compute the edge weights, subsequently transforming the issue of change measurement into the comparison of edge weights between the two graphs. Wang et al. have not only established a spatio-temporal graph model [26] for extracting the spatio-temporal neighborhood information of images but also extended the concept of hypergraphs into CD [27]. They have employed hyperedges formed from sets of pixels with similar characteristics within local neighborhoods, effectively transforming the problem of SAR CD into one of hypergraph matching. Sun et al. have introduced a series of CD methods based on graph signal processing [28]. The central concept of these methods involves utilizing K-nearest neighbor (KNN) graphs that represent the intrinsic similarity relationships within images to characterize their topological structures. Subsequently, they employ graph mapping [12,29,30] or image regression [31,32,33] techniques to compare structural differences between images and detect changes. Expanding on this, Chen et al. have proposed CD methods that combine local and non-local graph structures [34,35], while Zheng et al. have proposed global and local graph-based DI enhancement methods [13] and a change smoothness method [36] for CD.
Furthermore, numerous methods based on graph neural networks (GNN) have also been introduced for SAR CD tasks, such as the dynamic graph-level neural network [37], the multi-scale graph convolutional network [38], the variational graph auto-encoder network [39] and the graph-based knowledge supplement network [40].
From these methods, it can be observed that graph-based CD methods offer several advantages. Firstly, graphs can depict the intricate relationships between objects within images, representing the image’s structure, which are relatively stable and less affected by speckle noise. Secondly, the construction of graph models is versatile, that is, selecting different vertices (such as pointwise, patch, superpixels) and various types of connections (like fully connected, K-nearest neighbors, spatial neighbors, etc.) can yield different graph models (such as local, non-local, global-graph), capturing distinct information (such as spatio-temporal, geometric, topological, and intensity information). Thirdly, there is a diverse range of change measurement methods, enabling CD through different graph comparison techniques, such as direct comparison, graph mapping, graph regression, and more.

1.3. Motivation and Contribution

The above graph-based methods still have some limitations in utilizing graph information, primarily in two respects. First, they overlook the inherent multi-scale information in remote sensing images. It is well known that remote sensing images depict complex scenes with significant variations in object sizes; for instance, within the same scene there can be smaller objects like vehicles and buildings alongside larger ones such as farmlands, forests, rivers, and lakes. Consequently, single-scale graph models struggle to capture the intricate structures present in remote sensing images. Second, these methods neglect the spatio-temporal-radiometric information interaction when constructing graph models across images from different time frames. While previous approaches have aimed to capture spatio-temporal relationships using graph models, they often employ a single graph model to represent the correlations along the temporal dimension and do not consider integrating multiple graph models to capture the manifold relationships across the other dimensions.
To overcome the aforementioned limitations and leverage the advantages of graph models in CD tasks, this paper proposes a multi-scale graph based on spatio-temporal-radiometric information interaction (STRMG) for SAR images. Initially, it partitions the multi-temporal images into image patches and constructs multi-scale K-nearest neighbor graphs for each image using the image patches as vertices. These graphs capture the spatial information, radiometric information, and multi-scale details within the images. Then, a graph fusion strategy is devised to fuse the graphs constructed at different scales, enhancing the accuracy and richness of the image's structural information. Subsequently, STRMG maps a graph constructed on one temporal image to the other, enabling spatio-temporal-radiometric interaction between the graph models. Finally, by comparing the original and mapped graph models, a change metric is constructed, and a Markov random field (MRF) model is employed to segment the DI and extract the final change map (CM).
The main contributions of this paper are summarized as follows:
  • This paper introduces a multi-scale graph model and devises a well-designed graph fusion strategy, enabling the comprehensive utilization of multi-scale information present in remote sensing images. This approach better represents the intricate relationships among features within the images.
  • This study achieves spatio-temporal-radiometric information interaction between graph models by employing graph mapping, which can effectively explore the associations between multi-temporal images and result in more accurate extraction of changes between images.
  • Experimental comparisons with several state-of-the-art methods on three datasets demonstrate the competitive performance of the proposed STRMG method, underscoring its strong capabilities in change detection.

2. Methodology

The SAR images from the pre-event ($t_1$) and post-event ($t_2$) acquisitions, denoted as $\tilde{X}, \tilde{Y} \in \mathbb{R}^{M \times N}$, respectively, are registered. Their corresponding pixel values are represented as $\tilde{x}_{m,n}$ and $\tilde{y}_{m,n}$. The objective of change detection is to determine a binary change map $B \in \mathbb{R}^{M \times N}$, which indicates whether each pixel has undergone a change. The proposed method consists of three main steps: (1) pre-processing: dividing the images into image patches; (2) generating the DI: constructing KNN graphs for the SAR images, implementing spatio-temporal-radiometric interaction through graph mapping, and quantifying the change level; (3) solving for the CM: formulating an MRF model to segment the DI and extract the changed region. The algorithmic framework is depicted in Figure 1.

2.1. Pre-Processing

First, the multi-temporal SAR images are partitioned into non-overlapping square image patches at $S$ different scales $s = 1, 2, \dots, S$. Taking the SAR image $\tilde{X}$ at time $t_1$ as an example, it is divided into image patches of size $sp \times sp$, where $p \geq 2$ is a positive integer. These image patches are vectorized and stacked to form the image patch group matrix (PGM) at each scale, denoted as $X^s \in \mathbb{R}^{s^2p^2 \times N_s}$, where $N_s = \lceil M/(sp) \rceil \times \lceil N/(sp) \rceil$ and $\lceil \cdot \rceil$ denotes the ceiling operation. Similarly, for the SAR image $\tilde{Y}$ at time $t_2$, the same partitioning procedure is applied to obtain the PGM $Y^s \in \mathbb{R}^{s^2p^2 \times N_s}$. As a result, corresponding image patches $X_i^s$ and $Y_i^s$, $i = 1, 2, \dots, N_s$, in $X^s$ and $Y^s$, respectively, still represent the same geographic region.
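As an illustration, the multi-scale patching step can be sketched in Python as follows (a minimal sketch; padding the image borders by edge replication to reach a multiple of the patch size is our own assumption, since the paper only specifies the ceiling operation in $N_s$):

```python
import numpy as np

def patch_group_matrix(img, p=2, s=1):
    """Split an image into non-overlapping (sp x sp) patches and stack
    the vectorized patches as columns of the patch group matrix (PGM).
    Borders are padded by edge replication so both dimensions become
    multiples of sp -- a simplifying assumption of ours."""
    sp = s * p
    M, N = img.shape
    Mp, Np = -(-M // sp) * sp, -(-N // sp) * sp  # ceil to multiples of sp
    padded = np.pad(img, ((0, Mp - M), (0, Np - N)), mode="edge")
    # reshape into a patch grid, then vectorize each patch row-major
    patches = padded.reshape(Mp // sp, sp, Np // sp, sp).swapaxes(1, 2)
    return patches.reshape(-1, sp * sp).T  # shape: (s^2 p^2, N_s)

img = np.arange(64, dtype=float).reshape(8, 8)
X1 = patch_group_matrix(img, p=2, s=1)  # 16 patches of size 2x2
X2 = patch_group_matrix(img, p=2, s=2)  # 4 patches of size 4x4
```

With the paper's setting $p = 2$, the finest scale produces $2 \times 2$ patches and scale $s$ produces $2s \times 2s$ patches, so the same pixel belongs to nested patches across scales.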

2.2. Multi-Scale Graph Construction

Based on the self-similarity property of images, for every small image patch there are always other patches within the image that are very similar to it. We refer to this similarity relationship among the image patches as the image structure. After obtaining the PGMs at different scales, we construct $S$ multi-scale KNN graphs to represent the image structure. Taking the time $t_1$ image $\tilde{X}$ as an example, at each scale $s$ we treat the image patches $X_i^s$ as vertices of the graph and connect each vertex to its $k = \sqrt{N_s}$ most similar neighbors. This results in $S$ different-scale graph models, denoted as $G_{t_1}^s = \{V_{t_1}^s, E_{t_1}^s, W_{t_1}^s\}$, with
$$V_{t_1}^s = \left\{ X_i^s \mid i = 1, \dots, N_s \right\},$$
$$E_{t_1}^s = \left\{ (X_i^s, X_j^s) \mid j \in \mathcal{N}_i^{x,s};\ i = 1, \dots, N_s \right\},$$
$$W_{t_1}^s(i,j) = \begin{cases} \exp\left(-\lambda d_{i,j}^{x,s}\right), & \text{if } (X_i^s, X_j^s) \in E_{t_1}^s \\ 0, & \text{otherwise,} \end{cases} \tag{1}$$
where $\lambda > 0$ denotes the parameter that controls the exponential kernel bandwidth, set to $\lambda = 0.5$ in the proposed method, $d_{i,j}^{x,s} = \mathrm{dist}(X_i^s, X_j^s)$ denotes the distance between patches $X_i^s$ and $X_j^s$, $\mathcal{N}_i^{x,s}$ denotes the KNN index set of $X_i^s$ in the vertex set $V_{t_1}^s$, and $W_{t_1}^s$ denotes the weighting matrix.
Regarding the distance metric $\mathrm{dist}(X_i^s, X_j^s)$, due to the multiplicative speckle noise following a gamma distribution in SAR images, we employ the following function derived from the generalized likelihood ratio in [41]:
$$d_{i,j}^{x,s} = \mathrm{dist}(X_i^s, X_j^s) = \frac{1}{s^2p^2} \sum_{q=1}^{s^2p^2} \log \frac{x_{i,q}^s + x_{j,q}^s}{2\sqrt{x_{i,q}^s\, x_{j,q}^s}}, \tag{2}$$
where $x_{i,q}^s$ and $x_{j,q}^s$ are the $q$-th pixels in patches $X_i^s$ and $X_j^s$, respectively. The KNN index set $\mathcal{N}_i^{x,s}$ is obtained by sorting the distance vector $\{d_{i,j}^{x,s} \mid j = 1, 2, \dots, N_s\}$ and extracting the indices of the $k$ smallest values.
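For illustration, the pairwise GLR distance of Eq. (2) and the KNN weighting of Eq. (1) can be sketched as follows (a simplified dense implementation; the function names are ours, and pixel intensities are assumed strictly positive):

```python
import numpy as np

def glr_distance(Xs):
    """Pairwise GLR distance of Eq. (2) between the columns (vectorized
    patches) of a PGM: the per-pixel mean of
    log[(x_iq + x_jq) / (2*sqrt(x_iq * x_jq))]."""
    _, Ns = Xs.shape
    D = np.zeros((Ns, Ns))
    for i in range(Ns):
        xi = Xs[:, i:i + 1]                       # (d, 1), broadcasts
        D[i] = np.log((xi + Xs) / (2.0 * np.sqrt(xi * Xs))).mean(axis=0)
    return D

def knn_graph(D, k, lam=0.5):
    """KNN weight matrix of Eq. (1): W(i,j) = exp(-lam * d(i,j)) for the
    k nearest neighbors of each vertex (self included in this sketch),
    0 elsewhere."""
    W = np.zeros_like(D)
    for i in range(D.shape[0]):
        nbrs = np.argsort(D[i])[:k]
        W[i, nbrs] = np.exp(-lam * D[i, nbrs])
    return W

rng = np.random.default_rng(0)
Xs = rng.uniform(1.0, 10.0, size=(4, 9))  # 9 patches of 4 pixels each
D = glr_distance(Xs)
W = knn_graph(D, k=3)                     # k = sqrt(N_s) = 3
```

By the AM-GM inequality the GLR distance is non-negative and equals zero exactly for identical patches, so the resulting edge weights lie in $(0, 1]$.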
For the post-event image $\tilde{Y}$ obtained at time $t_2$, we construct $S$ different-scale graph models $G_{t_2}^s = \{V_{t_2}^s, E_{t_2}^s, W_{t_2}^s\}$ in a similar manner:
$$V_{t_2}^s = \left\{ Y_i^s \mid i = 1, \dots, N_s \right\},$$
$$E_{t_2}^s = \left\{ (Y_i^s, Y_j^s) \mid j \in \mathcal{N}_i^{y,s};\ i = 1, \dots, N_s \right\},$$
$$W_{t_2}^s(i,j) = \begin{cases} \exp\left(-\lambda d_{i,j}^{y,s}\right), & \text{if } (Y_i^s, Y_j^s) \in E_{t_2}^s \\ 0, & \text{otherwise,} \end{cases} \tag{3}$$
where $d_{i,j}^{y,s} = \mathrm{dist}(Y_i^s, Y_j^s)$ denotes the distance between patches $Y_i^s$ and $Y_j^s$, and $\mathcal{N}_i^{y,s}$ denotes the KNN index set of $Y_i^s$ in the vertex set $V_{t_2}^s$, computed by sorting the distance vector $\{d_{i,j}^{y,s} \mid j = 1, 2, \dots, N_s\}$.

2.3. Fusing the Multi-Scale Graphs

Once the multi-scale graphs of G t 1 s and G t 2 s are constructed, we employ a fusion strategy to fuse the multi-scale graphs, aiming to comprehensively represent multi-scale information within the images. This enables the graph model to more accurately capture the relationships among objects of various sizes in the SAR image.
Taking the pre-event image $\tilde{X}$ as an example, each patch $X_j^s$, $j = 1, 2, \dots, N_s$, $s \geq 2$, at a coarser scale can be derived by merging several patches $X_i^1$ at the finest scale. This implies a parent–child aggregation relationship among the patches at different scales. Since patches at the finest scale retain more detailed information and exhibit higher internal homogeneity, we design fusion matrices to integrate $G_{t_1}^s$, $s \geq 2$, and $G_{t_1}^1$. We utilize the area ratio and intensity similarity between child-patches and parent-patches to define $S$ fusion matrices, denoted as $F_{t_1}^s \in \mathbb{R}^{N_1 \times N_s}$, with elements defined as follows:
$$F_{t_1}^s(i,j) = \begin{cases} \dfrac{\# X_i^1}{\# X_j^s}\, \exp\left(-\lambda \cdot \overline{\mathrm{dist}}\left(X_i^1, X_j^s\right)\right), & \text{if } X_i^1 \subset X_j^s \\ 0, & \text{otherwise,} \end{cases} \tag{4}$$
where $\# X_j^s$ represents the number of pixels contained in patch $X_j^s$; thus $\# X_i^1 / \# X_j^s = 1/s^2$. Here, $\overline{\mathrm{dist}}(X_i^1, X_j^s)$ represents the mean radiometric intensity distance between the two patches, defined as $\overline{\mathrm{dist}}(X_i^1, X_j^s) = \log \frac{\mathrm{mean}(X_i^1) + \mathrm{mean}(X_j^s)}{2\sqrt{\mathrm{mean}(X_i^1)\,\mathrm{mean}(X_j^s)}}$, with $\mathrm{mean}(X_j^s) = \frac{1}{s^2p^2} \sum_{q=1}^{s^2p^2} x_{j,q}^s$ denoting the average intensity value. If the intensity values of the parent-patch and child-patch are close and their area ratio is substantial, the corresponding element of the fusion matrix $F_{t_1}^s$ is large, because in such cases the parent-patch and child-patch contain the same or closely similar types of objects. Consequently, we obtain the fused graph $G_{t_1}^f = \{V_{t_1}^f, E_{t_1}^f, W_{t_1}^f\}$ after fusing the multi-scale information, with
$$V_{t_1}^f = \left\{ X_i^1 \mid i = 1, \dots, N_1 \right\},\quad E_{t_1}^f = \left\{ (X_i^1, X_j^1) \mid W_{t_1}^f(i,j) \neq 0 \right\},\quad W_{t_1}^f = \sum_{s=1}^{S} F_{t_1}^s\, W_{t_1}^s\, (F_{t_1}^s)^T. \tag{5}$$
In this way, the multi-scale graphs with different numbers of vertices are fused.
For the post-event image $\tilde{Y}$ obtained at time $t_2$, we fuse the $S$ different-scale graphs $G_{t_2}^s$ in a similar manner to construct the graph $G_{t_2}^f = \{V_{t_2}^f, E_{t_2}^f, W_{t_2}^f\}$ as follows:
$$V_{t_2}^f = \left\{ Y_i^1 \mid i = 1, \dots, N_1 \right\},\quad E_{t_2}^f = \left\{ (Y_i^1, Y_j^1) \mid W_{t_2}^f(i,j) \neq 0 \right\},\quad W_{t_2}^f = \sum_{s=1}^{S} F_{t_2}^s\, W_{t_2}^s\, (F_{t_2}^s)^T, \tag{6}$$
where $F_{t_2}^s \in \mathbb{R}^{N_1 \times N_s}$ denotes the $s$-th fusion matrix, constructed analogously to $F_{t_1}^s$ in (4).
The fusion strategy mentioned above is based on the following considerations: The finest scale graph can retain more detailed information and better avoid the issue of inaccurate feature representation caused by a patch containing multiple land cover types. However, relying solely on a patch at the finest scale is insufficient to accurately represent objects with various shapes and sizes. Reasonably fusing information from coarser scales helps extract multi-scale features from the image, resulting in richer image structural information.
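The fusion matrix of Eq. (4) can be sketched as follows (our own simplified version: it assumes row-major patch ordering and image dimensions that are exact multiples of $sp$, so each finest-scale child patch nests inside exactly one scale-$s$ parent patch):

```python
import numpy as np

def fusion_matrix(X1, Xs, grid1, s, lam=0.5):
    """Fusion matrix F^s (N_1 x N_s) of Eq. (4), linking finest-scale
    child patches (columns of X1) to scale-s parent patches (columns of
    Xs). grid1 is the (rows, cols) shape of the finest-scale patch grid."""
    N1, Ns = X1.shape[1], Xs.shape[1]
    r1, c1 = grid1                   # finest-scale patch-grid shape
    cs = c1 // s                     # parent-grid number of columns
    m1, ms = X1.mean(axis=0), Xs.mean(axis=0)  # mean patch intensities
    F = np.zeros((N1, Ns))
    for i in range(N1):
        r, c = divmod(i, c1)
        j = (r // s) * cs + (c // s)  # index of the enclosing parent
        d = np.log((m1[i] + ms[j]) / (2.0 * np.sqrt(m1[i] * ms[j])))
        F[i, j] = (1.0 / s**2) * np.exp(-lam * d)  # area ratio = 1/s^2
    return F

# toy example: four constant 2x2 child patches and one 4x4 parent (s=2)
X1 = np.full((4, 4), 5.0)
Xs = np.full((16, 1), 5.0)
F = fusion_matrix(X1, Xs, grid1=(2, 2), s=2)
```

The fused weight matrix of Eq. (5) would then be accumulated as `Wf = sum(F_s @ W_s @ F_s.T over the scales s)`. In the toy example above, parent and children share the same intensity, so every nonzero entry reduces to the area ratio $1/s^2 = 0.25$.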

2.4. Spatio-Temporal-Radiometric Interaction

After obtaining the fused multi-scale graphs $G_{t_1}^f$ and $G_{t_2}^f$, which respectively depict the structures of the pre-event and post-event images and encompass the spatial and radiometric information of each image, we employ a graph mapping approach to construct new graphs $G_{t_1}^m$ and $G_{t_2}^m$ that facilitate the temporal interaction between the two graph models. First, we map the graph $G_{t_1}^f$ in (5), constructed on the pre-event image $\tilde{X}$, onto the post-event image $\tilde{Y}$ to obtain the mapped graph $G_{t_1}^m = \{V_{t_1}^m, E_{t_1}^m, W_{t_1}^m\}$ as
$$V_{t_1}^m = \left\{ Y_i^1 \mid i = 1, \dots, N_1 \right\},\quad E_{t_1}^m = \left\{ (Y_i^1, Y_j^1) \mid W_{t_1}^m(i,j) \neq 0 \right\},\quad W_{t_1}^m = \sum_{s=1}^{S} F_{t_2}^s\, Q_{t_2}^s\, (F_{t_2}^s)^T, \tag{7}$$
with the matrix $Q_{t_2}^s$ defined as:
$$Q_{t_2}^s(i,j) = \begin{cases} \exp\left(-\lambda d_{i,j}^{y,s}\right), & \text{if } W_{t_1}^s(i,j) \neq 0 \\ 0, & \text{otherwise.} \end{cases} \tag{8}$$
Comparing $G_{t_1}^f$ in (5) with $G_{t_1}^m$ in (7), since the positions of the nonzero elements in $W_{t_1}^s$ and $Q_{t_2}^s$ coincide, and those in $F_{t_1}^s$ and $F_{t_2}^s$ coincide as well, the positions of the nonzero elements in $W_{t_1}^f$ and $W_{t_1}^m$ are the same; that is, the edge connectivity of graphs $G_{t_1}^f$ and $G_{t_1}^m$ is identical. However, the computation of their edge weights differs: $W_{t_1}^f$ and $W_{t_1}^m$ are computed from images $\tilde{X}$ and $\tilde{Y}$, respectively. Contrasting $G_{t_2}^f$ in (6) with $G_{t_1}^m$ in (7) reveals that, while their edge weights are both based on image $\tilde{Y}$, their edge connectivity is distinct. Hence, $G_{t_1}^m$ encompasses both structural information from image $\tilde{X}$ (the edge connectivity $E_{t_1}^m$) and radiometric intensity information from image $\tilde{Y}$ (the edge weights $W_{t_1}^m$).
Second, we map the graph $G_{t_2}^f$ in (6), constructed on the post-event image $\tilde{Y}$, onto the pre-event image $\tilde{X}$ to obtain the mapped graph $G_{t_2}^m = \{V_{t_2}^m, E_{t_2}^m, W_{t_2}^m\}$ as:
$$V_{t_2}^m = \left\{ X_i^1 \mid i = 1, \dots, N_1 \right\},\quad E_{t_2}^m = \left\{ (X_i^1, X_j^1) \mid W_{t_2}^m(i,j) \neq 0 \right\},\quad W_{t_2}^m = \sum_{s=1}^{S} F_{t_1}^s\, Q_{t_1}^s\, (F_{t_1}^s)^T, \tag{9}$$
with the matrix $Q_{t_1}^s$ defined as:
$$Q_{t_1}^s(i,j) = \begin{cases} \exp\left(-\lambda d_{i,j}^{x,s}\right), & \text{if } W_{t_2}^s(i,j) \neq 0 \\ 0, & \text{otherwise.} \end{cases} \tag{10}$$
Similarly, $G_{t_2}^m$ encompasses both structural information from image $\tilde{Y}$ (the edge connectivity $E_{t_2}^m$) and radiometric intensity information from image $\tilde{X}$ (the edge weights $W_{t_2}^m$).
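At the level of a single scale, the graph mapping of Eqs. (8) and (10) amounts to masking the other image's edge weights with the source graph's connectivity pattern; a minimal sketch (function name ours):

```python
import numpy as np

def mapped_weights(W_src, D_dst, lam=0.5):
    """Eqs. (8)/(10): keep the nonzero (edge-connectivity) pattern of
    the source-image graph W_src, but recompute each retained edge
    weight from the destination image's patch distances D_dst."""
    return np.where(W_src != 0, np.exp(-lam * D_dst), 0.0)

# toy example: a 3-vertex source graph and destination-image distances
W_src = np.array([[1.0, 0.8, 0.0],
                  [0.8, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
D_dst = np.array([[0.0, 2.0, 0.1],
                  [2.0, 0.0, 0.1],
                  [0.1, 0.1, 0.0]])
Q = mapped_weights(W_src, D_dst)
```

The mapped matrix inherits the source graph's edge set (which edges exist) while its weights are measured on the other image, which is exactly the cross-temporal interaction the method exploits.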

2.5. DI Calculation

Based on the structural consistency between multi-temporal images [12,28], for unchanged regions, despite differences in imaging conditions between the two acquisitions (such as sensor variations, humidity, etc.) that may cause inconsistent radiometric intensity values, and hence pixel value discrepancies for the same objects, the structural characteristics of the two images remain consistent. In other words, if image patches $X_i^1$ and $X_j^1$ represent identical types of objects (i.e., $X_i^1$ and $X_j^1$ are highly similar), then the unchanged patches $Y_i^1$ and $Y_j^1$ in the other image will likewise represent identical types of objects (in this case, $Y_i^1$ and $Y_j^1$ are also highly similar). Conversely, when the area represented by the image patches undergoes changes, this similarity relationship, which signifies structural consistency, also changes, making it suitable for detecting regions of change.
To quantify the change level of the image patches, we compare the structural information represented by the different graph models, which depicts the structural differences between the multi-temporal images. Specifically, to measure the change level $\alpha_i$ of the $i$-th patch $X_i^1$, we compare the similarity relationships between $X_i^1$ and the unchanged patches $X_j^1$ in the graph $G_{t_1}^f$ with the corresponding similarity relationships in the graph $G_{t_2}^m$, that is,
$$\alpha_i = \left| \frac{\sum_{j=1}^{N_1} W_{t_1}^f(i,j)\,(1 - p_j)}{\sum_{j=1}^{N_1} \delta\!\left(W_{t_1}^f(i,j) \neq 0\right)(1 - p_j) + \epsilon} - \frac{\sum_{j=1}^{N_1} W_{t_2}^m(i,j)\,(1 - p_j)}{\sum_{j=1}^{N_1} \delta\!\left(W_{t_2}^m(i,j) \neq 0\right)(1 - p_j) + \epsilon} \right|, \tag{11}$$
where $\epsilon = 10^{-8}$ keeps the equation well-defined, $\delta(\cdot)$ equals 1 if the specified condition holds and 0 otherwise, and $p_j$ represents the change probability of the $j$-th patch, which is used to reduce the influence of changed patches on the change measurement.
Similarly, we can compute the structural differences of the post-event image on graphs $G_{t_2}^f$ and $G_{t_1}^m$ to calculate the change level $\beta_i$ of the $i$-th patch $Y_i^1$ as:
$$\beta_i = \left| \frac{\sum_{j=1}^{N_1} W_{t_2}^f(i,j)\,(1 - p_j)}{\sum_{j=1}^{N_1} \delta\!\left(W_{t_2}^f(i,j) \neq 0\right)(1 - p_j) + \epsilon} - \frac{\sum_{j=1}^{N_1} W_{t_1}^m(i,j)\,(1 - p_j)}{\sum_{j=1}^{N_1} \delta\!\left(W_{t_1}^m(i,j) \neq 0\right)(1 - p_j) + \epsilon} \right|. \tag{12}$$
In (11) and (12), $p_j$ represents the change probability. However, since the change probability of each image patch cannot be determined in advance, an iterative process with four steps is employed to compute the change probability vector $\mathbf{p}$.
Step 1. Compute the initial change probability $\mathbf{p}^o$ using the log-ratio operator, and compute the initial $\boldsymbol{\alpha}^o$ and $\boldsymbol{\beta}^o$ with $\mathbf{p}^o$ using (11) and (12), respectively.
Step 2. Compute the fused change level $\boldsymbol{\eta}^o = \frac{\boldsymbol{\alpha}^o + \boldsymbol{\beta}^o}{2}$, and update the change probability $\mathbf{p} = \frac{\boldsymbol{\eta}^o - \min(\boldsymbol{\eta}^o)}{\max(\boldsymbol{\eta}^o) - \min(\boldsymbol{\eta}^o)}$.
Step 3. Substitute $\mathbf{p}$ into (11) and (12) to update $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$, respectively.
Step 4. Compute the fused change level $\boldsymbol{\eta} = \frac{\boldsymbol{\alpha} + \boldsymbol{\beta}}{2}$ and the final change probability $\mathbf{p}^* = \frac{\boldsymbol{\eta} - \min(\boldsymbol{\eta})}{\max(\boldsymbol{\eta}) - \min(\boldsymbol{\eta})}$.
The purpose of this iterative process is to obtain more accurate structural differences between the images and a more accurate change probability vector $\mathbf{p}^*$, which in turn yields a better DI computed from $\mathbf{p}^*$. As can be seen from Steps 1–4, (11) and (12) are evaluated twice to iteratively correct the change probability vector $\mathbf{p}$. Therefore, the main computational cost of this iterative process is the two evaluations of (11) and (12), with a very low computational complexity of $O(N_1^2)$.
Then, we obtain the change probability for each node (patch) and compute the DI by measuring the structural differences between different graph models.
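A compact sketch of Eqs. (11)–(12) and the four-step refinement, under our reading that the change level is the absolute difference of the two weighted-mean terms (function names and the toy inputs are ours):

```python
import numpy as np

def change_level(Wf, Wm, p, eps=1e-8):
    """Eqs. (11)/(12): absolute difference between the (1-p)-weighted
    mean edge weight in the fused graph and in the mapped graph."""
    w = 1.0 - p
    a = (Wf * w).sum(axis=1) / (((Wf != 0) * w).sum(axis=1) + eps)
    b = (Wm * w).sum(axis=1) / (((Wm != 0) * w).sum(axis=1) + eps)
    return np.abs(a - b)

def change_probability(Wt1f, Wt2m, Wt2f, Wt1m, p0, eps=1e-8):
    """Steps 1-4: fuse alpha and beta, min-max normalize, and repeat
    once with the refined probabilities (two evaluations in total)."""
    p = p0.copy()
    for _ in range(2):
        eta = 0.5 * (change_level(Wt1f, Wt2m, p, eps)
                     + change_level(Wt2f, Wt1m, p, eps))
        p = (eta - eta.min()) / (eta.max() - eta.min() + eps)
    return p

# toy example: vertex 0 loses similarity to its neighbors after the event
Wf = np.ones((3, 3))
Wm = np.ones((3, 3)); Wm[0] = 0.1
p_star = change_probability(Wf, Wm, np.ones((3, 3)), np.ones((3, 3)),
                            np.zeros(3))
```

In the toy example, only vertex 0 exhibits a structural discrepancy between the fused and mapped graphs, so after normalization its change probability approaches 1 while the others stay near 0.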

2.6. CM Computation by MRF Segmentation

After obtaining the DI, the change detection problem can be regarded as a binary image classification task, which can be addressed using the following approaches: (1) threshold segmentation methods, such as Otsu’s method [42]; (2) clustering methods, like k-means clustering [43] and fuzzy c-means clustering [44]; (3) random field methods, such as the Markov random field (MRF) [45]. Thresholding and clustering methods often neglect the neighborhood information of pixels in the DI, leading to considerable “salt-and-pepper noise” in the segmented CM. The MRF can incorporate both spatial and change information from the DI, thereby improving the change detection outcomes. In this paper, we directly adopted the MRF segmentation method proposed in [45], but replaced the R-adjacency spatial neighbors of superpixels in [45] with the 8-connected neighbors of image patches. Once the MRF model is solved using the min-cut/max-flow algorithm [46], we obtain the change label of each patch and subsequently generate the CM. The framework of the proposed multi-scale graph-based method with spatio-temporal-radiometric information interaction for CD is summarized in Algorithm 1.
Algorithm 1: STRMG based CD.
Input: Images of X ˜ and Y ˜ .
            Parameters of p and S.
Pre-processing:
   Segment X ˜ and Y ˜ into patches with different scales.
   Stack the patches to obtain the multi-scale PGM of X s and Y s .
Computing the DI:
   Construct the multi-scale graphs of G t 1 s and G t 2 s .
   Compute the fusion matrices of F t 1 s and F t 2 s .
   Fuse the multi-scale graphs to obtain graphs of G t 1 f and G t 2 f .
   Construct the mapped graphs of G t 1 m and G t 2 m .
   Compute the change probability vector of p * .
Computing the CM:
   Compute final CM by using MRF segmentation method.
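While the MRF segmentation of [45] is beyond a short sketch, the simpler Otsu thresholding alternative mentioned above can be illustrated as follows (it ignores spatial neighborhood information, which is exactly the weakness the MRF addresses):

```python
import numpy as np

def otsu_threshold(di, nbins=256):
    """Otsu's method on a difference image: pick the threshold that
    maximizes the between-class variance of the DI histogram."""
    hist, edges = np.histogram(di.ravel(), bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)              # class-0 (unchanged) probability
    m0 = np.cumsum(hist * centers)    # class-0 cumulative mean mass
    mt = m0[-1]                       # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mt * w0 - m0) ** 2 / (w0 * (1.0 - w0))
    var_b = np.nan_to_num(var_b)      # degenerate splits get variance 0
    t = centers[np.argmax(var_b)]
    return di > t                     # True = changed

# toy bimodal DI: 50 unchanged pixels near 0.1, 50 changed near 0.9
di = np.r_[np.full(50, 0.1), np.full(50, 0.9)]
cm = otsu_threshold(di)
```

On a clearly bimodal DI the two approaches agree almost everywhere; the MRF mainly differs near class boundaries and isolated noisy pixels, where its spatial prior suppresses salt-and-pepper errors.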

3. Experiment Results and Discussion

In this section, we validate the performance of the proposed algorithm through comparative experiments with various SAR change detection methods on three real datasets.

3.1. Experimental Settings

We conducted tests on three pairs of SAR images. As illustrated in Figure 2, Dataset A and Dataset B were acquired using the Radarsat-2 SAR sensor over the Yellow River Estuary in China. The pre-event images were obtained in June 2008, while the post-event images were captured in June 2009. The spatial resolution of the images was 8 m per pixel. Dataset A shows the changes in the coastline, where only a small portion of the pixels of the image have changed in this dataset. Dataset B shows a section of inland water, which was chosen because the area of change is concentrated on the boundary line of the river, which is relatively difficult to detect. Dataset C was collected using the COSMO-SkyMed SAR sensor in Guizhou, China. The pre-event images were captured in June 2016, and the post-event images were acquired in April 2017; they encompass various features such as mountains, rivers, and trees. The spatial resolution of the images was 1 m per pixel. The ground change maps are labeled by experts based on these SAR images, optical images from the same time period, and in combination with practical examinations. The details of these datasets are presented in Table 1.
We selected the following ten SAR change detection methods for comparison: the classical operators of difference (Diff), log-ratio (LogR) [16], mean-ratio (MeanR) [17], and neighborhood-ratio (NbhdR) [18]; the sparsity-driven method for CD (SDCD) [14]; the recently proposed graph-based methods of the improved nonlocal patch-based graph model (INLPG) [12], the iterative robust graph with MRF co-segmentation method (IRG-McS) [29], and the graph signal processing-based CD method (GSPCD) [28]; as well as the deep learning-based methods of the convolutional-wavelet neural network (CWNN) [8] and the deep convolutional generative adversarial network-based robust unsupervised CD method (DcGANCD) [10]. For these comparative methods (excluding the classical operators), we directly utilized their open-source code with default parameters. For the proposed STRMG, we set the patch size as $p = 2$ and the scale parameter as $S = 3$.
To assess the performance of the different methods, we employed two categories of evaluation metrics. First, we directly assessed the DI quality using: (1) the empirical receiver operating characteristic (ROC) curve along with the area under the ROC curve (AUR); (2) the precision–recall (PR) curve and the corresponding area under the PR curve (AUP). Second, we evaluated the CM quality using the false alarm rate (FA), $\mathrm{FA} = FP/(FP + TN)$; the miss rate (MR), $\mathrm{MR} = FN/(TP + FN)$; the overall accuracy (OA), $\mathrm{OA} = (TP + TN)/(TN + FN + TP + FP)$; and the Kappa coefficient (KC), $\mathrm{KC} = (\mathrm{OA} - \mathrm{PRE})/(1 - \mathrm{PRE})$, with
$$\mathrm{PRE} = \frac{(TP + FN)(TP + FP) + (TN + FP)(TN + FN)}{(TN + FN + TP + FP)^2}, \tag{13}$$
and the F1-score, calculated as $F1 = 2TP/(FN + FP + 2TP)$.
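The CM evaluation metrics above can be computed directly from the confusion-matrix counts; a straightforward sketch (the function name is ours):

```python
import numpy as np

def cd_metrics(cm, gt):
    """CM evaluation metrics of Section 3.1 (FA, MR, OA, KC, F1) from
    the confusion-matrix counts of a predicted change map cm against
    the ground truth gt (both boolean, True = changed)."""
    cm, gt = np.asarray(cm, bool), np.asarray(gt, bool)
    TP = np.sum(cm & gt);  TN = np.sum(~cm & ~gt)
    FP = np.sum(cm & ~gt); FN = np.sum(~cm & gt)
    n = TP + TN + FP + FN
    FA = FP / (FP + TN)
    MR = FN / (TP + FN)
    OA = (TP + TN) / n
    PRE = ((TP + FN) * (TP + FP) + (TN + FP) * (TN + FN)) / n**2
    KC = (OA - PRE) / (1 - PRE)
    F1 = 2 * TP / (FN + FP + 2 * TP)
    return dict(FA=FA, MR=MR, OA=OA, KC=KC, F1=F1)

gt = np.array([1, 1, 0, 0], dtype=bool)
perfect = cd_metrics(gt, gt)
partial = cd_metrics(np.array([1, 0, 0, 0], dtype=bool), gt)
```

A perfect prediction yields FA = MR = 0 and OA = KC = F1 = 1; missing one of two changed pixels halves the recall, lowering F1 to 2/3.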

3.2. Experimental Results

3.2.1. Results on Dataset A

In Figure 3, we show the DIs generated by the proposed STRMG on Dataset A, compared against other currently popular methods. Relative to the traditional methods, the proposed STRMG achieves a better DI that is more robust to noise. For example, the DIs generated by the traditional algebraic operators Diff, LogR, MeanR, and NbhdR are heavily influenced by speckle noise, as they are all based on direct comparison of pixels or image patches, making it difficult for them to distinguish between changed and unchanged areas. In contrast, the image structure-based methods, such as GSPCD, perform better due to the relative stability of the image structure. Further comparison between the proposed STRMG and the other structure-based methods (i.e., INLPG, IRG-McS, and GSPCD) shows that STRMG, which incorporates multi-scale and spatio-temporal-radiometric information, effectively improves DI performance, as also confirmed by the ROC and PR curves plotted in Figure 4.
For a more intuitive understanding of the differences between our proposed method and the other approaches, we also show the CMs on Dataset A in Figure 5, where FP and FN are marked in distinct colors, red and cyan, to facilitate visual examination and comparison. From Figure 5, we can see that the CMs generated by LogR, MeanR, NbhdR, and CWNN contain a significant number of false alarms due to the speckle noise in the SAR images. In contrast, INLPG, DcGANCD, and STRMG generate fewer false alarms, and a further comparison of these three in Figure 5 shows that STRMG has the fewest missed detections. In Table 2, we present the quantitative comparison of the DIs and CMs generated by these methods on Dataset A. It can be seen that the proposed STRMG achieves the best scores on all evaluation criteria. Compared with the advanced deep learning-based CWNN and DcGANCD, our method improves the F1-score by at least 10.0%. Compared with the recently proposed graph model-based INLPG, IRG-McS, and GSPCD, it improves the F1-score by 7.2–44.9%.

3.2.2. Results on Dataset B

Figure 6 shows the DIs generated by all the comparison methods on Dataset B, from which it can be seen that most methods highlight the changed areas to some extent. A careful comparison shows that Diff, LogR, and SDCD are weak at distinguishing between changed and unchanged areas, mainly because of the inherent speckle noise. The DI results on Datasets A and B also confirm the poor detection performance of the traditional algebraic operators for SAR image CD, where the speckle noise is severe, whereas graph model-based methods such as INLPG, IRG-McS, GSPCD, and STRMG take full advantage of the structural relationships within the image, which are relatively robust to noise, and thus achieve better detection results. Figure 7 displays the corresponding ROC and PR curves, from which we can see that the proposed STRMG obtains the highest AUR and AUP, as reported in Table 3.
In Figure 8, we show the CMs generated by the different methods on Dataset B, from which we can see that DcGANCD, CWNN, GSPCD, and STRMG perform better than the other comparison methods, producing notably fewer false alarms in their CMs. In addition, the missed detections of STRMG are also relatively rare, with a small MR of 0.085, as listed in Table 3; thus, STRMG obtains better detection results than the other compared methods. For example, its KC and F1 are higher than those of the second-ranked DcGANCD by 5.0% and 4.9%, respectively.

3.2.3. Results on Dataset C

Figure 9 presents the DIs of the various methods on Dataset C. It can be observed that some methods, such as INLPG, CWNN, and IRG-McS, do not perform as well on this dataset as on the first two. There are two reasons for this. First, the noise in the SAR images of Dataset C is much more severe, which makes it difficult to accurately portray the image structures and tends to produce many bright patches in the difference maps that can be mistakenly detected as changed regions. Second, there are very few changed regions in this dataset, i.e., the changed and unchanged classes are extremely unbalanced, so the changed regions are easily submerged in the unchanged regions, resulting in missed detections. In comparison, the STRMG proposed in this paper achieves a better DI, which can also be demonstrated by the ROC and PR curves shown in Figure 10.
We present the CMs of each method on Dataset C in Figure 11. From Figure 11a–f, it can be seen that some methods create a large number of false alarms in the change detection results, leading to very high FA values of 10.4–55.1%, as listed in Table 4. By comparing Figure 11g,i–k, we can observe that, although DcGANCD, IRG-McS, and GSPCD yield relatively clean detection results with fewer false alarms, they also have difficulty detecting smaller changed regions. In contrast, the proposed STRMG produces fewer detection errors and more accurate edges compared with the other methods. In particular, the F1-score of STRMG is higher than that of the second-ranked DcGANCD by 14.4%. This demonstrates the effectiveness of the multi-scale graph model and the spatio-temporal-radiometric information interaction in the method.

3.3. Ablation Study and Discussion

3.3.1. Ablation Study

To validate the effectiveness and rationality of the components in STRMG, we conducted ablation experiments on all the evaluated datasets. We constructed a baseline that detects changes by directly comparing the single-scale graphs G t 1 1 and G t 2 1 , without utilizing the multi-scale information or the spatio-temporal-radiometric interaction between the graph models. We used the aforementioned metrics to evaluate the impact of the multi-scale graph (MsG) and the spatio-temporal-radiometric interaction (STR) on change detection performance. Table 5 reports the average quantitative measures of the baseline model (denoted as “Base”), the baseline model with the multi-scale graph (denoted as “Base+MsG”), the baseline model with the spatio-temporal-radiometric interaction (denoted as “Base+STR”), and the proposed STRMG.
By comparing “Base” and “STRMG”, we find that, when multi-scale graphs are not used to mine the multi-scale information in SAR images and the spatio-temporal-radiometric interaction is not used in the graph mapping, the change detection performance decreases greatly. As listed in Table 5, when the multi-scale graphs are employed in the structure representation, the average AUP increases by 11.5% (“Base” vs. “Base+MsG”). When the spatio-temporal-radiometric interaction is utilized in the structure comparison, more accurate detection results are obtained: KC increases by 7.6% when comparing “Base” with “Base+STR” and by 5.9% when comparing “Base+MsG” with “STRMG”.
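For reference, the single-scale patch graph used by the baseline can be sketched as follows. This is only an illustration of a KNN graph over non-overlapping p × p patches: the plain Euclidean distance and the dense pairwise computation are simplifying assumptions made here for brevity, whereas STRMG itself relies on speckle-robust patch similarities and multi-scale graph fusion.

```python
import numpy as np

def knn_graph(img, p=2, K=10):
    """Build a KNN graph over non-overlapping p x p patches of a 2-D image.

    Illustrative sketch: Euclidean distance between patch vectors is used,
    whereas SAR-adapted similarity measures are preferable under speckle.
    """
    h, w = img.shape
    patches = (img[:h - h % p, :w - w % p]          # crop to a multiple of p
               .reshape(h // p, p, w // p, p)
               .swapaxes(1, 2)
               .reshape(-1, p * p))                 # one row per patch
    n = len(patches)
    d = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                     # exclude self-loops
    idx = np.argsort(d, axis=1)[:, :K]              # K nearest patches per node
    adj = np.zeros((n, n), dtype=bool)
    adj[np.repeat(np.arange(n), K), idx.ravel()] = True
    return adj
```

The dense distance matrix keeps the sketch short but scales quadratically; a KD-tree or approximate nearest-neighbor search would be used for full-size SAR scenes.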

3.3.2. Parameter Analysis

Next, we discuss the impact of the scale parameter S and the patch size p on the proposed STRMG. In Figure 12, we show the KC of the CMs obtained by STRMG on all the evaluated datasets with different scale parameters, i.e., S = 2 , 3 , 4 . As shown in Figure 12, higher-order multi-scale graphs give better detection results than multi-scale graphs using only two layers ( S = 2 ), especially on Dataset A: the KC values obtained at S = 3 are 3.5%, 1.2%, and 2.3% higher than those obtained at S = 2 on Datasets A, B, and C, respectively. In addition, the performance tends to decrease slightly when S = 4 . This could be attributed to the fact that, at the largest scale, individual image patches encompass several types of land features, so the larger-scale graph models represent the image structures less accurately. Thus, in this paper, we set the scale parameter as S = 3 .
In Figure 13, we vary the patch size p from 2 to 6 with a step of one. It can be seen that the patch size has an important impact on STRMG: when p is too large, such as p = 6 , the performance of STRMG degrades dramatically. This is mainly because too-large image patches cause over-smoothing, and the interior of a single patch may then contain different types of objects (its internal homogeneity is destroyed), making it difficult for the constructed graph model to accurately portray the image structure. As shown in Figure 13, setting p = 2 is clearly a good choice.

3.3.3. Test of Different Noise Levels

To evaluate the performance of the proposed STRMG under different noise conditions, we use a simulated SAR dataset, shown in the top row of Figure 14, where Figure 14a,b show the two simulated multi-temporal images and Figure 14c shows the ground truth. By adding different levels of multiplicative gamma-distributed speckle noise ( L = 10 , 5 , 2 ) to the multi-temporal images, we obtain the DIs of STRMG at different noise levels, as shown in the second to fourth rows of Figure 14. From these results, we find that the proposed STRMG is very robust to speckle noise. This is because STRMG utilizes the image structure represented by the similarity between image patches, thereby mitigating the effect of noise. Moreover, since the proposed method is insensitive to speckle noise, we believe it can be valuable in similar application scenarios, for example, in the change detection of synthetic aperture sonar (SAS) images [47,48,49,50], as the basic imaging theory of SAS is similar to that of SAR.
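Multiplicative gamma-distributed speckle of the kind used in this test can be simulated as follows: a unit-mean gamma distribution with shape L has variance 1/L, so a smaller number of looks L yields stronger speckle (the function name is illustrative):

```python
import numpy as np

def add_speckle(img, L, rng=None):
    """Multiply an intensity image by unit-mean gamma speckle with L looks."""
    rng = np.random.default_rng(rng)
    # shape=L, scale=1/L gives mean 1 and variance 1/L (fully developed speckle)
    speckle = rng.gamma(shape=L, scale=1.0 / L, size=img.shape)
    return img * speckle
```

For example, add_speckle(img, L=2) produces the heaviest of the three noise levels considered above, while L=10 produces the lightest.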

4. Conclusions

In this study, we have proposed a novel multi-scale graph-based method with spatio-temporal-radiometric information interaction, named STRMG, for SAR image change detection. To mitigate the impact of the inherent speckle noise in SAR images on change detection, we compare the structural relationships between the two images, which are more robust to noise; this transforms the change detection problem into an assessment of image structural differences. To more accurately depict the intricate structural relationships within the images, we have constructed multi-scale graph models and designed an effective graph fusion strategy, which fully exploits the spatial information, radiometric information, and multi-scale details within the images. To compare the graph models adequately and detect changes accurately, we have employed a graph mapping approach to realize the spatio-temporal-radiometric information interaction between the graph models, which effectively explores the associations and differences between multi-temporal images and leads to more accurate change detection results. Experiments on different datasets have demonstrated the effectiveness of the proposed method. In addition, compared with the traditional KNN graph, graph neural networks (GNNs) have a stronger structural characterization ability and can more efficiently capture the complex relationships between land features within a SAR image, especially for the change detection of high-resolution SAR images. Therefore, future work might apply the ideas of multi-scale graphs and spatio-temporal-radiometric interaction to GNN-based SAR image change detection, expecting better detection performance.

Author Contributions

Methodology, P.Z.; project administration, J.J.; formal analysis, P.K.; original draft preparation, B.W.; writing—review and editing, P.Z.; supervision, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by project grant number 2023JSM08.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest. Author Bin Wang was employed by the company Xi’an Co-Build Regal Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Singh, A. Review Article Digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003. [Google Scholar] [CrossRef]
  2. Liu, S.; Marinelli, D.; Bruzzone, L.; Bovolo, F. A Review of Change Detection in Multitemporal Hyperspectral Images: Current Techniques, Applications, and Challenges. IEEE Geosci. Remote Sens. Mag. 2019, 7, 140–158. [Google Scholar] [CrossRef]
  3. Lv, Z.; Liu, T.; Benediktsson, J.A.; Falco, N. Land Cover Change Detection Techniques: Very-High-Resolution Optical Images: A Review. IEEE Geosci. Remote. Sens. Mag. 2021, 10, 2–21. [Google Scholar] [CrossRef]
  4. Wen, D.; Huang, X.; Bovolo, F.; Li, J.; Ke, X.; Zhang, A.; Benediktsson, J.A. Change Detection From Very-High-Spatial-Resolution Optical Remote Sensing Images: Methods, applications, and future directions. IEEE Geosci. Remote Sens. Mag. 2021, 9, 68–101. [Google Scholar] [CrossRef]
  5. Sun, Y.; Lei, L.; Guan, D.; Li, X.; Kuang, G. SAR Image Change Detection Based on Nonlocal Low-Rank Model and Two-Level Clustering. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 293–306. [Google Scholar] [CrossRef]
  6. Mastro, P.; Masiello, G.; Serio, C.; Pepe, A. Change Detection Techniques with Synthetic Aperture Radar Images: Experiments with Random Forests and Sentinel-1 Observations. Remote Sens. 2022, 14, 3323. [Google Scholar] [CrossRef]
  7. Du, Y.; Zhong, R.; Li, Q.; Zhang, F. TransUNet++ SAR: Change detection with deep learning about architectural ensemble in SAR images. Remote Sens. 2022, 15, 6. [Google Scholar] [CrossRef]
  8. Gao, F.; Wang, X.; Gao, Y.; Dong, J.; Wang, S. Sea ice change detection in SAR images based on convolutional-wavelet neural networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1240–1244. [Google Scholar] [CrossRef]
  9. Wu, C.; Du, B.; Zhang, L. Fully Convolutional Change Detection Framework with Generative Adversarial Network for Unsupervised, Weakly Supervised and Regional Supervised Change Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 9774–9788. [Google Scholar] [CrossRef]
  10. Zhang, X.; Su, H.; Zhang, C.; Gu, X.; Tan, X.; Atkinson, P.M. Robust unsupervised small area change detection from SAR imagery using deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 173, 79–94. [Google Scholar] [CrossRef]
  11. Bruzzone, L.; Prieto, D. An adaptive semiparametric and context-based approach to unsupervised change detection in multitemporal remote-sensing images. IEEE Trans. Image Process. 2002, 11, 452–466. [Google Scholar] [CrossRef] [PubMed]
  12. Sun, Y.; Lei, L.; Li, X.; Tan, X.; Kuang, G. Structure Consistency-Based Graph for Unsupervised Change Detection With Homogeneous and Heterogeneous Remote Sensing Images. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 4700221. [Google Scholar] [CrossRef]
  13. Zheng, X.; Guan, D.; Li, B.; Chen, Z.; Pan, L. Global and Local Graph-Based Difference Image Enhancement for Change Detection. Remote Sens. 2023, 15, 1194. [Google Scholar] [CrossRef]
  14. Nar, F.; Özgür, A.; Saran, A.N. Sparsity-Driven Change Detection in Multitemporal SAR Images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1032–1036. [Google Scholar] [CrossRef]
  15. Moser, G.; Serpico, S. Generalized minimum-error thresholding for unsupervised change detection from SAR amplitude imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2972–2982. [Google Scholar] [CrossRef]
  16. Bovolo, F.; Bruzzone, L. A detail-preserving scale-driven approach to change detection in multitemporal SAR images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2963–2972. [Google Scholar] [CrossRef]
  17. Inglada, J.; Mercier, G. A new statistical similarity measure for change detection in multitemporal SAR images and its extension to multiscale change analysis. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1432–1445. [Google Scholar] [CrossRef]
  18. Gong, M.; Cao, Y.; Wu, Q. A neighborhood-based ratio approach for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2011, 9, 307–311. [Google Scholar] [CrossRef]
  19. Ma, J.; Gong, M.; Zhou, Z. Wavelet fusion on ratio images for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1122–1126. [Google Scholar] [CrossRef]
  20. Hou, B.; Wei, Q.; Zheng, Y.; Wang, S. Unsupervised change detection in SAR image based on gauss-log ratio image fusion and compressed projection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3297–3317. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Wang, S.; Wang, C.; Li, J.; Zhang, H. SAR image change detection using saliency extraction and shearlet transform. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4701–4710. [Google Scholar] [CrossRef]
  22. Zheng, Y.; Jiao, L.; Liu, H.; Zhang, X.; Hou, B.; Wang, S. Unsupervised saliency-guided SAR image change detection. Pattern Recognit. 2017, 61, 309–326. [Google Scholar] [CrossRef]
  23. Wang, R.; Chen, J.W.; Jiao, L.; Wang, M. How can despeckling and structural features benefit to change detection on bitemporal SAR images? Remote Sens. 2019, 11, 421. [Google Scholar] [CrossRef]
  24. Pham, M.T.; Mercier, G.; Michel, J. Change detection between SAR images using a pointwise approach and graph theory. IEEE Trans. Geosci. Remote Sens. 2015, 54, 2020–2032. [Google Scholar] [CrossRef]
  25. Wu, J.; Li, B.; Qin, Y.; Ni, W.; Zhang, H. An object-based graph model for unsupervised change detection in high resolution remote sensing images. Int. J. Remote Sens. 2021, 42, 6209–6227. [Google Scholar] [CrossRef]
  26. Wang, J.; Yang, X.; Jia, L.; Yang, X.; Dong, Z. Pointwise SAR image change detection using stereo-graph cuts with spatio-temporal information. Remote Sens. Lett. 2019, 10, 421–429. [Google Scholar] [CrossRef]
  27. Wang, J.; Yang, X.; Yang, X.; Jia, L.; Fang, S. Unsupervised change detection between SAR images based on hypergraphs. ISPRS J. Photogramm. Remote Sens. 2020, 164, 61–72. [Google Scholar] [CrossRef]
  28. Sun, Y.; Lei, L.; Guan, D.; Kuang, G.; Liu, L. Graph Signal Processing for Heterogeneous Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4415823. [Google Scholar] [CrossRef]
  29. Sun, Y.; Lei, L.; Guan, D.; Kuang, G. Iterative Robust Graph for Unsupervised Change Detection of Heterogeneous Remote Sensing Images. IEEE Trans. Image Process. 2021, 30, 6277–6291. [Google Scholar] [CrossRef] [PubMed]
  30. Sun, Y.; Lei, L.; Li, X.; Sun, H.; Kuang, G. Nonlocal patch similarity based heterogeneous remote sensing change detection. Pattern Recognit. 2021, 109, 107598. [Google Scholar] [CrossRef]
  31. Sun, Y.; Lei, L.; Liu, L.; Kuang, G. Structural Regression Fusion for Unsupervised Multimodal Change Detection. IEEE Trans. Geosci. Remote. Sens. 2023, 61, 4504018. [Google Scholar] [CrossRef]
  32. Sun, Y.; Lei, L.; Guan, D.; Wu, J.; Kuang, G.; Liu, L. Image Regression With Structure Cycle Consistency for Heterogeneous Change Detection. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15. [Google Scholar] [CrossRef] [PubMed]
  33. Sun, Y.; Lei, L.; Tan, X.; Guan, D.; Wu, J.; Kuang, G. Structured graph based image regression for unsupervised multimodal change detection. ISPRS J. Photogramm. Remote Sens. 2022, 185, 16–31. [Google Scholar] [CrossRef]
  34. Chen, H.; Yokoya, N.; Wu, C.; Du, B. Unsupervised Multimodal Change Detection Based on Structural Relationship Graph Representation Learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5635318. [Google Scholar] [CrossRef]
  35. Chen, H.; Yokoya, N.; Chini, M. Fourier domain structural relationship analysis for unsupervised multimodal change detection. ISPRS J. Photogramm. Remote Sens. 2023, 198, 99–114. [Google Scholar] [CrossRef]
  36. Zheng, X.; Guan, D.; Li, B.; Chen, Z.; Li, X. Change Smoothness-Based Signal Decomposition Method for Multimodal Change Detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 2507605. [Google Scholar] [CrossRef]
  37. Wang, R.; Wang, L.; Wei, X.; Chen, J.W.; Jiao, L. Dynamic graph-level neural network for SAR image change detection. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4501005. [Google Scholar] [CrossRef]
  38. Wu, J.; Li, B.; Qin, Y.; Ni, W.; Zhang, H.; Fu, R.; Sun, Y. A multiscale graph convolutional network for change detection in homogeneous and heterogeneous remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102615. [Google Scholar] [CrossRef]
  39. Su, H.; Zhang, X.; Luo, Y.; Zhang, C.; Zhou, X.; Atkinson, P.M. Nonlocal feature learning based on a variational graph auto-encoder network for small area change detection using SAR imagery. ISPRS J. Photogramm. Remote Sens. 2022, 193, 137–149. [Google Scholar] [CrossRef]
  40. Wang, J.; Gao, F.; Dong, J.; Zhang, S.; Du, Q. Change detection from synthetic aperture radar images via graph-based knowledge supplement network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1823–1836. [Google Scholar] [CrossRef]
  41. Deledalle, C.A.; Denis, L.; Tupin, F. How to compare noisy patches? Patch similarity beyond Gaussian noise. Int. J. Comput. Vis. 2012, 99, 86–102. [Google Scholar] [CrossRef]
  42. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  43. Celik, T. Unsupervised change detection in satellite images using principal component analysis and k-means clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776. [Google Scholar] [CrossRef]
  44. Li, H.C.; Celik, T.; Longbotham, N.; Emery, W.J. Gabor feature based unsupervised change detection of multitemporal SAR images based on two-level clustering. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2458–2462. [Google Scholar]
  45. Sun, Y.; Lei, L.; Guan, D.; Li, M.; Kuang, G. Sparse-Constrained Adaptive Structure Consistency-Based Unsupervised Image Regression for Heterogeneous Remote Sensing Change Detection. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 4405814. [Google Scholar] [CrossRef]
  46. Boykov, Y.; Kolmogorov, V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1124–1137. [Google Scholar] [CrossRef] [PubMed]
  47. Zhang, X.; Wu, H.; Sun, H.; Ying, W. Multireceiver SAS imagery based on monostatic conversion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10835–10853. [Google Scholar] [CrossRef]
  48. Yang, P. An imaging algorithm for high-resolution imaging sonar system. Multimed. Tools Appl. 2023, 1–17. [Google Scholar] [CrossRef]
  49. Zhang, X.; Yang, P.; Sun, H. Frequency-domain multireceiver synthetic aperture sonar imagery with Chebyshev polynomials. Electron. Lett. 2022, 58, 995–998. [Google Scholar] [CrossRef]
  50. Zhang, X.; Yang, P. An improved imaging algorithm for multi-receiver SAS system with wide-bandwidth signal. Remote Sens. 2021, 13, 5008. [Google Scholar] [CrossRef]
Figure 1. Framework of the proposed multi-scale graph-based method with spatio-temporal-radiometric information interaction for CD.
Figure 2. Datasets. From top to bottom are Dataset A, Dataset B, and Dataset C, respectively. From left to right are: (a) pre-event image; (b) post-event image; (c) ground truth. The ground truth is labeled by experts based on these SAR images, optical images from the same time period, and in combination with practical examinations.
Figure 3. DIs of Dataset A generated by different methods. From left to right: (a) Diff; (b) LogR; (c) MeanR; (d) NbhdR; (e) SDCD; (f) INLPG; (g) DcGANCD; (h) CWNN; (i) IRG-McS; (j) GSPCD; (k) STRMG; and (l) ground truth.
Figure 4. ROC and PR curves of Dataset A generated by different methods.
Figure 5. CMs of Dataset A generated by different methods. From left to right: (a) Diff; (b) LogR; (c) MeanR; (d) NbhdR; (e) SDCD; (f) INLPG; (g) DcGANCD; (h) CWNN; (i) IRG-McS; (j) GSPCD; (k) STRMG; and (l) ground truth. In the CM, White: true positives (TP); Red: false positives (FP); Black: true negatives (TN); Cyan: false negatives (FN).
Figure 6. DIs of Dataset B generated by different methods. From left to right: (a) Diff; (b) LogR; (c) MeanR; (d) NbhdR; (e) SDCD; (f) INLPG; (g) DcGANCD; (h) CWNN; (i) IRG-McS; (j) GSPCD; (k) STRMG; and (l) ground truth.
Figure 7. ROC and PR curves of Dataset B generated by different methods.
Figure 8. CMs of Dataset B generated by different methods. From left to right: (a) Diff; (b) LogR; (c) MeanR; (d) NbhdR; (e) SDCD; (f) INLPG; (g) DcGANCD; (h) CWNN; (i) IRG-McS; (j) GSPCD; (k) STRMG; and (l) ground truth.
Figure 9. DIs of Dataset C generated by different methods. From left to right are: (a) Diff; (b) LogR; (c) MeanR; (d) NbhdR; (e) SDCD; (f) INLPG; (g) DcGANCD; (h) CWNN; (i) IRG-McS; (j) GSPCD; (k) STRMG; and (l) ground truth.
Figure 10. ROC and PR curves of Dataset C generated by different methods.
Figure 11. CMs of Dataset C generated by different methods. From left to right are: (a) Diff; (b) LogR; (c) MeanR; (d) NbhdR; (e) SDCD; (f) INLPG; (g) DcGANCD; (h) CWNN; (i) IRG-McS; (j) GSPCD; (k) STRMG; and (l) ground truth.
Figure 12. Sensitivity analysis of parameter S in STRMG.
Figure 13. The AUP of DIs generated by STRMG with different patch sizes p.
Figure 14. DIs generated by STRMG on the simulated dataset contaminated by different levels of speckle noise with L = 10 , 5 , 2 . In the first row, (a–c) correspond to the simulated pre-event image, the simulated post-event image, and the ground truth, respectively. The second to fourth rows show the datasets contaminated by speckle noise with L = 10 , 5 , 2 , together with the DIs generated by STRMG.
Table 1. Description of the datasets.
| Dataset | Date | Sensor | Location | Image Size | Resolution | Polarizations | Waveband |
|---|---|---|---|---|---|---|---|
| Dataset A | June 2008–June 2009 | Radarsat-2 | Yellow River Estuary, China | 280 × 450 | 8 m | HH | C-band |
| Dataset B | June 2008–June 2009 | Radarsat-2 | Yellow River Estuary, China | 444 × 291 | 8 m | HH | C-band |
| Dataset C | June 2016–April 2017 | COSMO-SkyMed | Guizhou, China | 400 × 400 | 1 m | HH | X-band |
Table 2. Quantitative measures of DIs and CMs on Dataset A.
| Methods | AUR ↑ | AUP ↑ | FA ↓ | MR ↓ | OA ↑ | KC ↑ | F1 ↑ |
|---|---|---|---|---|---|---|---|
| Diff | 0.845 | 0.086 | 0.083 | 0.461 | 0.912 | 0.099 | 0.116 |
| LogR | 0.851 | 0.086 | 0.241 | 0.208 | 0.759 | 0.046 | 0.066 |
| MeanR | 0.973 | 0.813 | 0.352 | 0.029 | 0.652 | 0.036 | 0.056 |
| NbhdR | 0.983 | 0.758 | 0.386 | 0.008 | 0.619 | 0.033 | 0.053 |
| SDCD | 0.943 | 0.342 | 0.044 | 0.216 | 0.955 | 0.257 | 0.270 |
| INLPG | 0.992 | 0.879 | 0.001 | 0.246 | 0.996 | 0.813 | 0.815 |
| DcGANCD | 0.975 | 0.808 | 0.001 | 0.281 | 0.996 | 0.784 | 0.787 |
| CWNN | 0.971 | 0.668 | 0.106 | 0.038 | 0.895 | 0.147 | 0.163 |
| IRG-McS | 0.954 | 0.229 | 0.020 | 0.197 | 0.978 | 0.429 | 0.438 |
| GSPCD | 0.931 | 0.711 | 0.005 | 0.271 | 0.993 | 0.674 | 0.678 |
| STRMG | 0.992 | 0.912 | 0.001 | 0.095 | 0.998 | 0.886 | 0.887 |
Table 3. Quantitative measures of DIs and CMs on Dataset B.
| Methods | AUR ↑ | AUP ↑ | FA ↓ | MR ↓ | OA ↑ | KC ↑ | F1 ↑ |
|---|---|---|---|---|---|---|---|
| Diff | 0.788 | 0.210 | 0.257 | 0.323 | 0.741 | 0.094 | 0.147 |
| LogR | 0.916 | 0.520 | 0.185 | 0.130 | 0.817 | 0.192 | 0.238 |
| MeanR | 0.974 | 0.802 | 0.314 | 0.023 | 0.696 | 0.122 | 0.175 |
| NbhdR | 0.979 | 0.805 | 0.181 | 0.029 | 0.824 | 0.222 | 0.266 |
| SDCD | 0.907 | 0.439 | 0.183 | 0.140 | 0.818 | 0.192 | 0.238 |
| INLPG | 0.992 | 0.825 | 0.007 | 0.290 | 0.984 | 0.732 | 0.741 |
| DcGANCD | 0.983 | 0.822 | 0.005 | 0.218 | 0.988 | 0.810 | 0.816 |
| CWNN | 0.978 | 0.823 | 0.012 | 0.114 | 0.985 | 0.783 | 0.791 |
| IRG-McS | 0.977 | 0.515 | 0.013 | 0.160 | 0.983 | 0.751 | 0.760 |
| GSPCD | 0.970 | 0.790 | 0.007 | 0.211 | 0.986 | 0.787 | 0.794 |
| STRMG | 0.995 | 0.920 | 0.007 | 0.085 | 0.991 | 0.860 | 0.865 |
Table 4. Quantitative measures of DIs and CMs on Dataset C.
| Methods | AUR ↑ | AUP ↑ | FA ↓ | MR ↓ | OA ↑ | KC ↑ | F1 ↑ |
|---|---|---|---|---|---|---|---|
| Diff | 0.915 | 0.303 | 0.353 | 0.080 | 0.648 | 0.020 | 0.032 |
| LogR | 0.785 | 0.088 | 0.104 | 0.789 | 0.891 | 0.012 | 0.024 |
| MeanR | 0.970 | 0.292 | 0.314 | 0.015 | 0.688 | 0.027 | 0.039 |
| NbhdR | 0.902 | 0.050 | 0.348 | 0.056 | 0.654 | 0.021 | 0.034 |
| SDCD | 0.982 | 0.493 | 0.118 | 0.026 | 0.883 | 0.085 | 0.096 |
| INLPG | 0.960 | 0.614 | 0.551 | 0.009 | 0.452 | 0.010 | 0.023 |
| DcGANCD | 0.991 | 0.806 | 0.002 | 0.286 | 0.996 | 0.714 | 0.716 |
| CWNN | 0.912 | 0.048 | 0.107 | 0.386 | 0.891 | 0.056 | 0.067 |
| IRG-McS | 0.930 | 0.247 | 0.005 | 0.439 | 0.993 | 0.488 | 0.492 |
| GSPCD | 0.965 | 0.247 | 0.003 | 0.307 | 0.995 | 0.635 | 0.638 |
| STRMG | 0.994 | 0.900 | 0.001 | 0.125 | 0.998 | 0.859 | 0.860 |
Table 5. Ablation study of STRMG measured by the average scores.
| Methods | AUR ↑ | AUP ↑ | FA ↓ | MR ↓ | OA ↑ | KC ↑ | F1 ↑ |
|---|---|---|---|---|---|---|---|
| Base | 0.982 | 0.736 | 0.006 | 0.151 | 0.992 | 0.775 | 0.778 |
| Base+MsG | 0.987 | 0.851 | 0.004 | 0.198 | 0.994 | 0.809 | 0.812 |
| Base+STR | 0.990 | 0.878 | 0.003 | 0.118 | 0.995 | 0.851 | 0.854 |
| STRMG | 0.994 | 0.911 | 0.003 | 0.102 | 0.995 | 0.868 | 0.871 |