Article

Enhancing RABASAR for Multi-Temporal SAR Image Despeckling through Directional Filtering and Wavelet Transform

1 School of Automation and Electronic Information, Xiangtan University, Xiangtan 411105, China
2 School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
3 National Center for Applied Mathematics in Hunan Laboratory, Xiangtan 411105, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(21), 8916; https://doi.org/10.3390/s23218916
Submission received: 26 September 2023 / Revised: 18 October 2023 / Accepted: 27 October 2023 / Published: 2 November 2023
(This article belongs to the Section Sensing and Imaging)

Abstract

The presence of speckle noise severely hampers the interpretability of synthetic aperture radar (SAR) images. While research on despeckling single-temporal SAR images is well established, a significant gap remains in the despeckling of multi-temporal SAR images. To address the limitations of the RABASAR despeckling framework in acquiring the “superimage” and generating ratio images, this paper proposes an enhanced framework. The enhanced framework introduces a direction-based segmentation approach for multi-temporal SAR non-local means filtering (DSMT-NLM) to obtain the “superimage”. The DSMT-NLM incorporates the concept of directional segmentation and extends the application of the non-local means (NLM) algorithm to multi-temporal images. In addition, the enhanced framework employs a weighted averaging method based on wavelet transform (WAMWT) to generate superimposed images, thereby improving the generation of ratio images. Experimental results demonstrate that, compared to RABASAR, Frost, and NLM, the proposed method exhibits outstanding performance: it not only effectively removes speckle noise from multi-temporal SAR images and reduces the generation of false details, but also successfully fuses multi-temporal information, in line with experimental expectations.

1. Introduction

Multi-temporal SAR images utilize various auxiliary data sources for cross-validation and information supplementation [1], allowing for the observation of surface changes. Multi-temporal SAR provides a more comprehensive and detailed view of the surface and its dynamic changes [2]. SAR not only overcomes the limitations of time and weather conditions in ground observation [3], offering all-weather, all-time, continuous observation of moving targets [4,5], but also exhibits a certain ability to penetrate vegetation, soil, and occlusions [6,7]. Given these unique advantages, SAR applications are extremely diverse. For example, Zhang et al. proposed a near real-time method for monitoring the progression of forest fires [8]. Fu et al. mapped mangrove species and elucidated their scattering characteristics to monitor the extent and health of mangroves [9]. However, the presence of speckle noise significantly degrades the quality and resolution of SAR images [10]. Therefore, removing speckle noise has always been a key issue for the further processing and application of SAR images. Speckle removal can be broadly categorized into two types: single-temporal SAR denoising and multi-temporal SAR denoising.
Methods for single-temporal SAR speckle removal can be broadly categorized into three main types: spatial domain filtering, transform domain filtering, and deep learning filtering [11]. Spatial domain filtering includes Lee filtering [12], Frost filtering [13], and Kuan filtering [14], among others. The performance of spatial domain filtering is highly sensitive to the size of the filtering window: smaller windows may not effectively suppress noise, while larger windows may cause the loss of image texture details during denoising [15]. Transform domain filtering commonly utilizes the Fourier transform [16], wavelet transform [17], and other techniques. Deep learning filtering includes MRDDANet [18] and AGSDNet [19], among others. SAR images inherently contain speckle noise from the imaging process, making it impossible to obtain completely clean images [20]. Supervised learning requires a training dataset of clean images, but no such clean images exist for SAR. Moreover, deep learning models lack interpretability [21], which restricts their further application in scenarios where model explanations are needed.
With the continuous development of SAR satellite technology, SAR satellites can capture multiple images of the same target at shorter time intervals. Given the increasing demand for multi-temporal SAR speckle removal, several common methods have emerged. Lê et al. proposed a novel method for the temporal adaptive despeckling of multi-temporal SAR images [22]. Chierchia et al. proposed a despeckling algorithm for multi-temporal SAR images utilizing the principles of block-matching and collaborative filtering [23]. RABASAR [24], proposed by Zhao et al. in 2019, is one of the most remarkable frameworks for multi-temporal SAR image despeckling in recent years [25]. The key idea of RABASAR lies in the use of ratio images, which are more amenable to despeckling due to their better spatial stationarity. However, RABASAR still has some limitations in how it obtains the “superimage” and the ratio image.
Regarding the method of obtaining the “superimage”, RABASAR adopts a weighted averaging technique in its first step to generate the so-called “superimage”. This approach may cause significant information loss in the input multi-temporal SAR images, contradicting the goal of enriching image information through multi-temporal data. To address this issue, we propose utilizing the DSMT-NLM to obtain the “superimage”. Antoni Buades et al. introduced the NLM [26], which incorporates both local and global information; its concept still represents a groundbreaking advancement. However, the algorithm has limitations that can be analyzed from two aspects. Firstly, the NLM itself has room for improvement. Within the target window, there may be pixels with low relevance to the center pixel, so the weighted similarity calculation can assign an excessively low weight to the center pixel values of the target window, despite their close resemblance to the center pixel values of the sliding window. This leads to significant discrepancies between the updated pixel values and the true values. Secondly, applying the NLM to multi-temporal SAR images poses challenges. Existing methods often apply NLM despeckling to single-temporal SAR images, which may provide insufficient information. The speckle noise caused by coherence effects in SAR imaging blurs the boundary between noise and image details, making them difficult to distinguish. Multi-temporal SAR images are designed to address this information deficiency. To address the above issues, we propose the DSMT-NLM, which utilizes directional segmentation to identify the window with the highest correlation as the target window, effectively mitigating distortion at image edges.
Additionally, we will traverse all time-series windows using a sliding window to maximize the utilization of information from all multi-temporal SAR images, thus improving the despeckling performance. Lastly, by combining data from multiple information sources, we will perform information fusion, thereby supplementing the content and dimensions lacking in a single information source, enhancing the completeness and accuracy of SAR image information, and obtaining the “superimage”.
Regarding the approach to obtaining the ratio image, during our experiments with RABASAR we observed that selecting only one image of interest for the ratio calculation with the “superimage” results in a final image containing only the geographical information of that selected image. This contradicts the goal of using the rich information from multi-temporal data to generate the “superimage”. To address this issue, we improved the method of generating the ratio image: we adopt a WAMWT to fuse information from the multi-temporal SAR images into a superimposed image, and then perform the ratio operation between the superimposed image and the “superimage” to generate the ratio image. This approach preserves the characteristics of the multi-temporal information, thus yielding more accurate despeckling results.
The main contributions of this study are as follows:
  • We propose a DSMT-NLM to acquire a high-quality “superimage”.
  • We employ a WAMWT to fuse information from multi-temporal SAR images, producing a superimposed image. Subsequently, we perform a ratio operation between the superimposed image and the “superimage” to generate the ratio image.
  • We introduce a directional segmentation method to calculate the window with the highest correlation as the target window.
  • By employing a sliding window to traverse all time-series images, we maximize the utilization of information from all temporal SAR images, significantly enhancing the despeckling effect.

2. Research Methodology

The flowchart of the whole framework is illustrated in Figure 1. The algorithm comprises the following steps:
Step 1: From the input multi-temporal SAR images, sequentially select each SAR image of the time series as the reference image.
Step 2: For the pixels to be despeckled in the reference image, perform directional segmentation within their neighborhood, resulting in eight directional windows: up, down, left, right, left-up, left-down, right-up, and right-down. Calculate the weighted average of pixels within each directional window to obtain their mean values. Then, utilize the correlation distance to calculate the relevance between the pixel mean values of each directional window and the pixels to be despeckled in the reference image. Among the eight directional windows, identify the one with the maximum relevance to the pixels to be despeckled in the reference image, and select that directional window as the target window.
Step 3: For each SAR image in the time series, including the selected reference image, set a search window. Calculate the similarity between the target window and the sliding windows within the search window to determine the weights of the center pixels in the sliding windows. Multiply each center pixel value of the sliding window by its corresponding weight, and then calculate the weighted average of these products. The resulting value is used to update the pixel to be despeckled in the target window. Repeat the above process for each pixel in the reference image, thus completing the despeckling process for the reference image. Similarly, repeat the above steps for other SAR images selected as reference images to achieve the despeckling for all reference images.
Step 4: Apply wavelet transform to each filtered reference image to decompose it into different frequency components. Merge these components and reconstruct the “superimage” through wavelet inverse transform. Perform MuLoG-BM3D filtering on the “superimage”. At the same time, follow the same fusion process for the unfiltered input images to obtain the superimposed image. Then, calculate the ratio image by performing the ratio operation between the superimposed image and the filtered “superimage”.
Step 5: Apply RuLoG filtering to the ratio image, and then perform the inverse transform to obtain the final image.
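The five steps above can be sketched as a compact pipeline. The stage functions (the DSMT-NLM filter, the wavelet fusion, MuLoG-BM3D, and RuLoG) are passed in as caller-supplied placeholders, since their details are described in the following sections; the small guard against division by zero is an addition of this sketch, not part of the paper.

```python
import numpy as np

def despeckle_stack(images, dsmt_nlm, wavelet_fuse, mulog_bm3d, rulog):
    """Skeleton of Steps 1-5; each stage is a caller-supplied function."""
    # Steps 1-3: despeckle each reference image using the whole time series.
    denoised = [dsmt_nlm(img, images) for img in images]
    # Step 4: fuse the filtered images into the "superimage" and filter it;
    # fuse the raw inputs into the superimposed image; take their ratio.
    superimage = mulog_bm3d(wavelet_fuse(denoised))
    superimposed = wavelet_fuse(images)
    ratio = superimposed / np.maximum(superimage, 1e-12)
    # Step 5: filter the ratio image and invert the ratio operation.
    return superimage * rulog(ratio)
```

With identity stubs for the filters and a mean for the fusion, a constant input stack passes through unchanged, which is a quick sanity check of the plumbing.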

2.1. Preliminary Despeckling of Multi-Temporal SAR Images Based on Directional Segmentation

The original RABASAR utilizes a weighted average method to generate the “superimage”, which achieves initial despeckling and integrates the information from multi-temporal SAR images. However, this simple approach leads to unsatisfactory despeckling results and fails to fully exploit the abundant information features in multi-temporal SAR images. Moreover, the use of weighted averaging causes the smoothing of image texture, resulting in the severe loss of feature information. This contradicts the fundamental idea of utilizing multi-temporal SAR images to compensate for the insufficient information in single-temporal SAR images. Therefore, to address these issues, we propose utilizing the DSMT-NLM method to generate the “superimage”, as illustrated in Figure 2. The specific steps are detailed in this section.

2.1.1. Selection of Target Window Based on Directional Segmentation

In the original NLM and its subsequent improvements, preserving edge details is often overlooked. This is because there are pixels in the center pixel block that have a low correlation with the center pixel, especially in the regions near the edges. This significantly interferes with the weight calculation between the target window and the sliding window, causing blurriness at the image edges and leading to the loss of edge details. To address this issue, our algorithm improves upon the NLM by introducing directional segmentation to guide the selection of the target window, thereby mitigating edge blurriness.
For each pixel $(i, j)$ to be despeckled in the reference image, along with its corresponding neighborhood region $w_{ij}$, our algorithm adopts directional segmentation to obtain eight directional windows, denoted as $w_{ij}^{k}$, where $k$ represents the eight directions: left-up, left-down, right-up, right-down, up, down, left, and right. The schematic diagram of the directional segmentation is illustrated in Figure 3.
By taking the weighted average of the pixel values within the directional window $w_{ij}^{k}$, we obtain the mean pixel value of the directional window. To provide a clearer explanation, let us take the left-up directional window $w_{ij}^{\mathrm{Leftup}}$ as an example. Figure 4 shows the pixel points within $w_{ij}^{\mathrm{Leftup}}$, denoted as $(a_1, b_1)$, $(a_2, b_2)$, $(a_3, b_3)$, and $(i, j)$. We assign the weights $w_{a_1 b_1}$, $w_{a_2 b_2}$, $w_{a_3 b_3}$, and $w_{ij}$ to the pixel values $p_{a_1 b_1}$, $p_{a_2 b_2}$, $p_{a_3 b_3}$, and $p_{ij}$, respectively. After the weighted average calculation, we obtain the mean pixel value $M_{\mathrm{Leftup}}$, as shown in Equation (1), where $N_{\mathrm{Leftup}}$ is the sum of the pixel weights, as shown in Equation (2).
$$M_{\mathrm{Leftup}} = \frac{1}{N_{\mathrm{Leftup}}} \left( p_{a_1 b_1} \cdot w_{a_1 b_1} + p_{a_2 b_2} \cdot w_{a_2 b_2} + p_{a_3 b_3} \cdot w_{a_3 b_3} + p_{ij} \cdot w_{ij} \right) \quad (1)$$
$$N_{\mathrm{Leftup}} = w_{a_1 b_1} + w_{a_2 b_2} + w_{a_3 b_3} + w_{ij} \quad (2)$$
Next, we calculate the correlation distance $R_{ij}^{k}$ between the pixel mean $M_k$ of each directional window and the pixel value $p_{ij}$ to be despeckled using Equation (3). Then, we select the directional window with the smallest correlation distance (i.e., the highest correlation) as the target window $W_1$ for the pixel to be despeckled, as shown in Equation (4).
$$R_{ij}^{k} = \left| p_{ij} - M_k \right| \quad (3)$$
$$W_1 = \underset{k}{\arg\min}\; R_{ij}^{k} \quad (4)$$
Here, $k \in \{$left-up, left-down, right-up, right-down, up, down, left, right$\}$.
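The target-window selection above can be illustrated concretely. The exact directional window shapes are defined by Figure 3, which is not reproduced here, so the three-neighbour offsets per direction and the uniform weights below are assumptions made for this sketch.

```python
import numpy as np

# Assumed offsets of the neighbours forming each directional window; the
# centre pixel (i, j) itself is always included, per Equations (1)-(2).
DIRECTIONS = {
    "left-up":    [(-1, -1), (-1, 0), (0, -1)],
    "right-up":   [(-1, 1), (-1, 0), (0, 1)],
    "left-down":  [(1, -1), (1, 0), (0, -1)],
    "right-down": [(1, 1), (1, 0), (0, 1)],
    "up":         [(-1, -1), (-1, 0), (-1, 1)],
    "down":       [(1, -1), (1, 0), (1, 1)],
    "left":       [(-1, -1), (0, -1), (1, -1)],
    "right":      [(-1, 1), (0, 1), (1, 1)],
}

def directional_mean(img, i, j, offsets):
    # Eqs. (1)-(2) with uniform weights, including the centre pixel itself.
    vals = [img[i + di, j + dj] for di, dj in offsets] + [img[i, j]]
    return float(np.mean(vals))

def select_target_window(img, i, j):
    # Eqs. (3)-(4): the window with the smallest correlation distance
    # |p_ij - M_k| (i.e. the highest correlation) becomes the target window.
    dists = {k: abs(float(img[i, j]) - directional_mean(img, i, j, offs))
             for k, offs in DIRECTIONS.items()}
    return min(dists, key=dists.get)
```

On a vertical edge, a pixel on the bright side selects one of the right-facing windows, since those lie entirely on its own side of the edge; this is exactly the edge-preserving behaviour the directional segmentation is meant to provide.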

2.1.2. Despeckling of Multi-Temporal SAR Images Based on DSMT-NLM

In previous research on SAR image despeckling, the NLM was commonly used, but typically only on single-temporal SAR images. The particular nature of speckle noise in SAR images blurs the boundary between noise and image details, making them difficult to distinguish and increasing the difficulty of removing speckle noise while preserving details. The NLM is primarily designed for denoising single-temporal images and may have limited effectiveness against speckle noise. Compared to single-temporal SAR images, multi-temporal SAR images contain richer scene information, and the correlation across multiple sequences supports better noise estimation and more accurate detail preservation. Therefore, to better distinguish speckle noise from image details and provide more accurate noise estimation, we extended the principles of the NLM to multi-temporal SAR images. Specifically, we expand the traditional NLM sliding-window selection strategy from a single image to multiple images, allowing the sliding window to traverse all temporal SAR images. This extension allows the NLM to better exploit temporal information and achieve more accurate and reliable SAR despeckling.
In the search window of the multi-temporal SAR image, we select the sliding window $W_2$ and traverse it across all temporal SAR images. The size of the sliding window $W_2$ remains consistent with the target window $W_1$ selected in Section 2.1.1. The similarity between the target window $W_1$ and the sliding window $W_2$ is calculated using Equation (5). The similarity $S(W_1, W_2)$ is used to calculate the weight $w(W_1, W_2)$ corresponding to the center pixel value $p_{xy}$ of the sliding window $W_2$, as shown in Equation (6), where $h$ is the smoothing parameter and $T$ is the normalization coefficient. By multiplying $p_{xy}$ by the corresponding weight $w(W_1, W_2)$ and summing over all sliding windows, we update the pixel value $p_{ij}$ of the target window $W_1$ to the despeckled pixel value $\tilde{p}_{ij}$, as shown in Equation (8). We repeat this process for each pixel of the reference image to obtain the despeckled reference image $C_i$.
$$S(W_1, W_2) = \frac{1}{HW} \sum_{u=0}^{H-1} \sum_{v=0}^{W-1} \left( W_1(u, v) - W_2(u, v) \right)^2 \quad (5)$$
where $W_1(u, v)$ represents the pixel value at point $(u, v)$ in the target window, and $W_2(u, v)$ represents the corresponding pixel value in the sliding window. Since the target window and sliding window are the same size, $H$ and $W$ denote the number of rows and columns, respectively, of either window.
$$w(W_1, W_2) = \frac{1}{T} \exp\left( -\frac{S(W_1, W_2)}{h^2} \right) \quad (6)$$
Here,
$$T = \sum_{W_2} \exp\left( -\frac{S(W_1, W_2)}{h^2} \right) \quad (7)$$
$$\tilde{p}_{ij} = \sum_{x, y} w(W_1, W_2) \cdot p_{xy} \quad (8)$$
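Equations (5)-(8) can be sketched for a single pixel as follows. The patch and search-window sizes are illustrative, odd window sizes and the simplified boundary handling are assumptions of this sketch, and the target patch is assumed to have been selected as in Section 2.1.1.

```python
import numpy as np

def mt_nlm_pixel(target_patch, temporal_images, i, j, search=7, h=10.0):
    """Update pixel (i, j) by comparing the target window against sliding
    windows drawn from every image in the time series."""
    ph, pw = target_patch.shape          # sliding windows match the target size
    r, pr, pc = search // 2, ph // 2, pw // 2
    weights, values = [], []
    for img in temporal_images:          # the sliding window traverses all dates
        H, W = img.shape
        for x in range(max(pr, i - r), min(H - pr, i + r + 1)):
            for y in range(max(pc, j - r), min(W - pc, j + r + 1)):
                patch = img[x - pr:x + pr + 1, y - pc:y + pc + 1]
                s = np.mean((target_patch - patch) ** 2)   # Eq. (5)
                weights.append(np.exp(-s / h ** 2))        # Eq. (6)
                values.append(img[x, y])
    w = np.array(weights) / np.sum(weights)                # Eq. (7) normalises
    return float(np.dot(w, values))                        # Eq. (8)
```

On a constant stack the weighted average reproduces the constant, which is a useful correctness check before running the filter on real data.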

2.1.3. Weighted Average Information Fusion Method Based on Wavelet Transform

Due to the richer information content in multi-temporal SAR images, information fusion, which combines features from different sources to obtain more comprehensive and reliable information, is often necessary to obtain a “superimage.” Conventional approaches in previous research typically involve weighted averaging of multiple SAR images, which is a relatively simple form of information fusion. However, this method directly uses fixed weights to average the pixel values in the images, leading to the loss of rich texture details in the multi-temporal SAR images and a decrease in image clarity. To address these issues, we propose using a WAMWT to replace the conventional simple weighted averaging approach used in previous studies. This method utilizes wavelet transform for multi-scale decomposition, fully considering the frequency characteristics of SAR images and more finely processing information in different frequency ranges. By decomposing, fusing, and reconstructing multiple images, a more comprehensive “superimage” is synthesized, achieving more accurate information fusion. The specific steps of this method are as follows: First, the despeckled reference images C i are subjected to four-scale decomposition using the Daubechies 4 wavelet, obtaining different wavelet coefficients representing information in different frequency ranges, including the low-frequency sub-band (cA), horizontal high-frequency sub-band (cH), vertical high-frequency sub-band (cV), and diagonal high-frequency sub-band (cD), as shown in Equation (9).
$$[cA_i, cH_i, cV_i, cD_i] = \mathrm{dwt2}(C_i) \quad (9)$$
where $C_i$ represents the $i$-th input reference image, and $\mathrm{dwt2}$ is the two-dimensional discrete wavelet transform.
For each wavelet sub-band, a weighted average is performed based on the corresponding weights, as shown in Equation (10).
$$\begin{aligned} cA_{\mathrm{fused}} &= \alpha_1 \times cA_1 + \cdots + \alpha_i \times cA_i \\ cH_{\mathrm{fused}} &= \alpha_1 \times cH_1 + \cdots + \alpha_i \times cH_i \\ cV_{\mathrm{fused}} &= \alpha_1 \times cV_1 + \cdots + \alpha_i \times cV_i \\ cD_{\mathrm{fused}} &= \alpha_1 \times cD_1 + \cdots + \alpha_i \times cD_i \end{aligned} \quad (10)$$
The weighted average of the wavelet sub-bands is then reconstructed through the inverse wavelet transform, specifically using the Daubechies 4 wavelet, resulting in the final “superimage” A , as shown in Equation (11).
$$A = \mathrm{idwt2}(cA_{\mathrm{fused}}, cH_{\mathrm{fused}}, cV_{\mathrm{fused}}, cD_{\mathrm{fused}}) \quad (11)$$
where $\mathrm{idwt2}$ represents the two-dimensional discrete inverse wavelet transform.

2.2. Residual Speckle Noise Removal Based on Ratio Image

After applying the DSMT-NLM, we obtained the “superimage”. This image has undergone initial despeckling and effective information fusion, but residual speckle noise still persists. To address this issue, the RABASAR employs the concept of a ratio image as its core approach. In the first section, we discussed the RABASAR’s method of selecting one SAR image of interest and generating a ratio image with the “superimage”. However, this approach fails to fully utilize the abundant information contained in the multi-temporal SAR images and contradicts the objective of extensively exploiting the multi-temporal information to generate the “superimage”. Therefore, we propose a new method, the weighted average based on wavelet transform, to replace the RABASAR’s approach of selecting only one SAR image of interest and generating a ratio image with the “superimage”. With this method, we can better utilize the information from the multi-temporal SAR images while maintaining the consistency of the previous step.
We applied the WAMWT to process the input multi-temporal SAR images, following a procedure similar to that described in Section 2.1.3, to obtain the superimposed image S . According to the RABASAR, by applying the MuLoG-BM3D filter to the “superimage”, we obtained the processed “superimage” [24]. Then, by performing a ratio operation between the superimposed image S and the processed “superimage” A , we obtained the ratio image τ , as shown in Equation (12).
$$\tau = \frac{S}{A} \quad (12)$$
Next, we denoised the ratio image using the RuLoG algorithm [24]. Finally, we performed a restoration operation on the filtered ratio image by multiplying it with the despeckled “superimage” to obtain the final image, denoted as u ^ t , as shown in Equation (13).
$$\hat{u}_t = A \cdot \hat{\tau} \quad (13)$$
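The ratio-and-restore step of Equations (12) and (13) is simple to express in code; the `filter_ratio` argument stands in for the RuLoG filtering, and the small guard against division by zero is an addition of this sketch.

```python
import numpy as np

def ratio_despeckle(S, A, filter_ratio):
    """Eq. (12): ratio image tau = S / A; then Eq. (13): restore with A.
    `filter_ratio` is a placeholder for the RuLoG step."""
    tau = S / np.maximum(A, 1e-12)   # guard against division by zero
    tau_hat = filter_ratio(tau)
    return A * tau_hat
```

With an identity filter, the superimposed image is recovered exactly, confirming that the ratio and restoration operations invert each other.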

3. Experiment

3.1. Evaluation Metrics

To objectively assess the effectiveness of the proposed framework, we employed several evaluation metrics to evaluate the experimental results, including the Structural Similarity (SSIM), Natural Image Quality Evaluator (Niqe), Correlation Coefficient (corrcoef), Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), and the Equivalent Number of Looks (ENL). These evaluation metrics provide quantitative measures of image quality, noise level, and detail preservation ability, allowing for a comprehensive assessment of the algorithm’s performance.

3.1.1. SSIM

SSIM takes into consideration the similarity in brightness, contrast, and structure to evaluate the resemblance between two images, as illustrated in the Equation (14).
$$SSIM = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \quad (14)$$
Here, $x$ and $y$ represent the two images; $\mu_x$ and $\mu_y$ are their pixel means; $\sigma_x^2$ and $\sigma_y^2$ are their pixel variances; $\sigma_{xy}$ is the pixel covariance between $x$ and $y$; and $C_1$ and $C_2$ are constants that stabilize the division, avoiding a zero denominator.
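A global (single-window) evaluation of Equation (14) can be written directly; standard SSIM implementations average the index over local windows, which is omitted here for brevity, and the constants `C1` and `C2` are illustrative values.

```python
import numpy as np

def ssim_global(x, y, C1=1e-4, C2=9e-4):
    """Eq. (14) computed over the whole image (no local windowing)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()   # covariance sigma_xy
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

For identical images the covariance equals the variance and the index evaluates to 1, its maximum.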

3.1.2. Niqe

Niqe is a no-reference image quality assessment metric that utilizes statistical characteristics of images, such as contrast, brightness, and sharpness. A lower Niqe score indicates higher image quality. Its mathematical expression is quite involved and is omitted here.

3.1.3. Corrcoef

Corrcoef is a function used to calculate the correlation coefficient between two sets of data. Mathematically, the correlation coefficient measures the degree of linear association between two sets of data. It ranges from −1 to 1. A correlation coefficient of 1 indicates a perfect positive correlation between the two sets of data.
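For images, the correlation coefficient can be computed by flattening both arrays and using NumPy's `np.corrcoef`; the small arrays below are purely illustrative.

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = 2.0 * a + 1.0                        # perfectly linearly related to a
r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
# A perfect positive linear relationship yields r == 1.0.
```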

3.1.4. SNR

SNR is a metric used to evaluate the denoising effectiveness of an image; a higher SNR value indicates better image quality. It is calculated using Equation (15), where $m$ and $n$ represent the number of pixels along the height and width of the image, respectively, and $x(i, j)$ and $y(i, j)$ are the pixel values at location $(i, j)$ in the original and filtered images, respectively.
$$SNR = 10 \log_{10} \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} x(i, j)^2}{\sum_{i=1}^{m} \sum_{j=1}^{n} \left[ x(i, j) - y(i, j) \right]^2} \quad (15)$$
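Equation (15) translates directly into code:

```python
import numpy as np

def snr_db(x, y):
    """Eq. (15): x is the original image, y the filtered image."""
    return 10.0 * np.log10((x ** 2).sum() / ((x - y) ** 2).sum())
```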

3.1.5. PSNR

PSNR is one of the most widely used evaluation metrics in image visual processing, which is utilized to represent the degree of image quality loss. The larger the PSNR value, the higher the image quality, indicating a smaller degree of distortion between two images and a higher degree of similarity, as shown in Equation (16).
$$PSNR = 10 \times \log_{10} \frac{(2^n - 1)^2}{MSE} \quad (16)$$
Here,
$$MSE = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2 \quad (17)$$
$\hat{y}_i$ represents the final image after filtering, while $y_i$ refers to the unfiltered input image.
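Equations (16) and (17) can be sketched as follows, assuming $n = 8$ bits so that the peak value is 255:

```python
import numpy as np

def psnr_db(y_hat, y, bits=8):
    """Eqs. (16)-(17): y_hat is the filtered image, y the reference."""
    mse = np.mean((y_hat - y) ** 2)
    return 10.0 * np.log10(((2 ** bits - 1) ** 2) / mse)
```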

3.1.6. ENL

ENL is one of the crucial indicators for assessing image quality and is dimensionless. A larger ENL value indicates a smoother image with lower noise levels, implying more effective removal of coherent speckle noise. ENL is defined in Equation (18), where $X$ denotes the set of sample points in a homogeneous region, $E_X$ represents the mean of the homogeneous region, and $D_X$ represents its variance.
$$ENL_X = \frac{(E_X)^2}{D_X} \quad (18)$$
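Equation (18) over a homogeneous region is a one-liner:

```python
import numpy as np

def enl(region):
    """Eq. (18): squared mean over variance of a homogeneous region X."""
    return float(region.mean() ** 2 / region.var())
```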

3.2. Experimental Analysis

To validate the effectiveness of the algorithm, experiments were conducted on simulated multi-temporal SAR images and real multi-temporal SAR images, separately.

3.2.1. Analysis of Experiments on Simulated Multi-Temporal SAR Images

An aerial image of 256 × 256 pixels was used as the clean image for simulating the multi-temporal SAR images. To simulate the randomness of multi-temporal SAR speckle noise, we applied speckle noise of different variances to the clean image; specifically, three noise variance values were selected: 0.04, 0.05, and 0.1. The applied speckle noise follows a uniform distribution with a mean of 0. By applying these speckle noises, we generated the simulated multi-temporal SAR images shown in Figure 5.
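The noise model described above can be sketched as follows. The multiplicative form $J = I + n \cdot I$ (as in MATLAB's `imnoise(..., 'speckle')`) is an assumption of this sketch, with $n$ drawn from a zero-mean uniform distribution of the requested variance.

```python
import numpy as np

def add_speckle(img, var, rng=None):
    """Add zero-mean uniform multiplicative speckle of a given variance."""
    rng = np.random.default_rng() if rng is None else rng
    half = np.sqrt(3.0 * var)   # Uniform(-a, a) has variance a**2 / 3
    n = rng.uniform(-half, half, size=img.shape)
    return img + n * img
```

Calling this three times with variances 0.04, 0.05, and 0.1 yields a simulated three-date stack analogous to Figure 5.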
In order to assess the performance of our algorithm more accurately, we will employ four methods mentioned in the RABASAR, namely the arithmetic mean (AM), the denoised arithmetic mean (DAM), the binary-weighted arithmetic mean (BWAM), and the denoised binary-weighted arithmetic mean (DBWAM) [24], as comparative methods in our experiments, along with the Frost and NLM.
We compare the final image with the clean image and the reference algorithms, as shown in Figure 6. It is worth noting that the conventional NLM and Frost are designed for despeckling individual images. To ensure comparability, we averaged the images from Sequence 1, Sequence 2, and Sequence 3, and used the resulting averaged image as the input for both the NLM and Frost algorithms.
From a visual perspective based on Figure 6, we observed that the performance of AM and BWAM in removing speckle noise was not satisfactory. Conversely, DAM, DBWAM, Frost, NLM, and the proposed method each demonstrated effective suppression. However, Frost, while removing speckle noise, caused the image to become blurred, resulting in no improvement in the clarity of features after speckle noise removal. Additionally, NLM exhibited over-smoothing, leading to a significant loss of terrain information. Moreover, the white features still retained visible granular speckle noise.
From the comparison of detail amplification in Figure 7, we observed that DAM and DBWAM generated pseudo-details, referring to striped textures that were not present in the clean image. Conversely, Frost, NLM, and the proposed method did not exhibit this phenomenon.
After analyzing the evaluation metrics in Table 1, it is evident that the BWAM, DBWAM, and proposed method have achieved relatively high scores in measuring the structural similarity between the final image and the clean image. This indicates their commendable performance in preserving the image structures. In terms of assessing image clarity and naturalness, it is noteworthy that the Niqe metric for the proposed method attains the lowest value. This implies its exceptional proficiency in maintaining image clarity and naturalness, signifying a superior image quality and enhanced detail clarity. Furthermore, when evaluating the strength and direction of the linear relationship between the final images and the clean images, the proposed method, along with the BWAM and DBWAM, demonstrates commendable metrics. This underscores their excellence in preserving linear relationships among image features.
Taking into account the comprehensive analysis of visual perception, detail amplification, and evaluation metrics, the proposed method excels in all three aspects, showcasing its remarkable capability for enhancing image quality. In contrast, other comparative methods each exhibit their own limitations.

3.2.2. Analysis of Experiments on Real Multi-Temporal SAR Images

We selected two sets of real, spaceborne, multi-temporal SAR images as input data, each containing three temporal sequences. The detailed parameters of each multi-temporal SAR image set are presented in Table 2 and Table 3. Additionally, we performed registration processing on both sets of images to ensure their spatial alignment, as illustrated in Figure 8 and Figure 9.
From the experimental results in Figure 10 and Figure 11, we can observe that both the proposed algorithm and the comparative algorithms achieved certain results in removing speckle noise. However, for the AM and BWAM, the removal of speckle noise was not thorough enough, leaving noticeable speckle grains in the images. Frost filtering results in image blurriness and lower quality, and the NLM suffers from noticeable over-sharpening. In contrast, the proposed algorithm, DAM, and DBWAM achieved more effective suppression of speckle noise. Although the DBWAM and DAM achieved a certain level of suppression, faint traces of speckle noise remained in the detail regions, affecting the overall image quality. Conversely, the proposed algorithm effectively suppressed speckle noise both globally and locally, leaving no noticeable speckle noise while preserving edge details.
Through the horizontal comparison of the geographic information in Figure 12 and Figure 13, significant differences in geographic information among them are observed. Although the RABASAR was designed for multi-temporal SAR image filtering, it did not effectively perform information fusion, resulting in limited feature fusion and a diminished utilization of information value in multi-temporal images, consequently affecting subsequent data analysis. In contrast, the proposed algorithm achieved feature fusion across the sequences of images, maximizing the utilization of information from all image sequences, leading to a significant improvement in feature detail information.
The objective evaluation metrics (Table 4) confirm that the proposed algorithm performs strongly. It produces images with higher quality, better visual fidelity, and minimal loss, effectively preserving image details. This stems from the directional segmentation applied around the pixel to be denoised: selecting the directional window most correlated with the target window mitigates the influence of low-correlation pixels on the weights computed between the target and sliding windows, which strengthens speckle removal while preserving details. In addition, the proposed algorithm accounts for the characteristics of multi-temporal SAR images and fuses their information comprehensively. Unlike the traditional non-local means algorithm, it traverses the multi-temporal SAR images with a sliding window, using spatial-domain processing to fuse multi-temporal information and thereby fully exploiting the information in the image stack. Compared with the other methods, it therefore offers superior performance for multi-temporal SAR image processing.
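The directional weight-selection idea described above can be sketched for a single pixel of a single image. This is a minimal illustration only: the quadrant window shapes and the parameters `p`, `s`, and `h` are assumptions, not the exact DSMT-NLM settings (which also operate across the temporal stack).

```python
import numpy as np

def directional_nlm_pixel(img, r, c, p=1, s=5, h=0.1):
    """Denoise one interior pixel with a directional non-local means variant:
    the search area is split into four quadrant windows, and only the quadrant
    whose mean patch correlates best with the target patch contributes weights,
    limiting the influence of low-correlation pixels."""
    target = img[r - p:r + p + 1, c - p:c + p + 1]
    # Patch-center offsets for each of the four directional windows.
    dirs = {
        "left-up":    [(-dr, -dc) for dr in range(p, s) for dc in range(p, s)],
        "right-up":   [(-dr,  dc) for dr in range(p, s) for dc in range(p, s)],
        "left-down":  [( dr, -dc) for dr in range(p, s) for dc in range(p, s)],
        "right-down": [( dr,  dc) for dr in range(p, s) for dc in range(p, s)],
    }
    def patches(offsets):
        return [img[r + dr - p:r + dr + p + 1, c + dc - p:c + dc + p + 1]
                for dr, dc in offsets]
    def corr(a, b):
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
    # Choose the directional window most correlated with the target patch.
    best = max(dirs, key=lambda d: corr(target, np.mean(patches(dirs[d]), axis=0)))
    # Standard NLM weighting, restricted to the chosen window.
    w_sum, val = 0.0, 0.0
    for q in patches(dirs[best]):
        d2 = float(np.mean((q - target) ** 2))
        w = np.exp(-d2 / h ** 2)
        w_sum += w
        val += w * q[p, p]
    return val / w_sum
```

Restricting the average to one directional window is what keeps dissimilar regions (e.g., across an edge) from diluting the weights.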

4. Conclusions

In this paper, we proposed an algorithm, "Enhancing RABASAR for Multi-Temporal SAR Image Despeckling through Directional Filtering and Wavelet Transform," to address speckle noise removal in multi-temporal SAR images. The algorithm introduces a novel approach, DSMT-NLM, to obtain the "superimage". It also uses the WAMWT to generate the superimposed image, which is then ratioed with the "superimage" to obtain the ratio image. Subjective visual evaluation and objective performance metrics demonstrate both the feasibility of the approach and its superiority over the other methods. However, in the experiments on real multi-temporal SAR image (II), the contrast of our results did not reach the level of the comparative experiments, revealing a new challenge. Although the algorithm performs well in other respects, this insufficient contrast requires further investigation; it may stem from certain aspects of the algorithm or its parameter settings. In future research, we will therefore explore and optimize these aspects to achieve better contrast performance.
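The ratio-based pipeline summarized above can be sketched end to end. Note the heavy simplifications: the temporal mean stands in for the DSMT-NLM "superimage", a plain weighted average stands in for the WAMWT fusion, and a box filter stands in for the ratio-image denoiser; all are assumptions for illustration, not the paper's operators.

```python
import numpy as np

def box_filter(x, k=3):
    # Simple moving average, used here only as a stand-in ratio-image denoiser.
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for dr in range(k):
        for dc in range(k):
            out += xp[dr:dr + x.shape[0], dc:dc + x.shape[1]]
    return out / (k * k)

def rabasar_like(stack, weights=None):
    """Ratio-based multi-temporal despeckling in the spirit of RABASAR:
    form a low-speckle reference, divide the fused observation by it so the
    ratio is speckle-dominated, denoise the ratio, and recombine."""
    stack = np.asarray(stack, dtype=float)           # shape (T, H, W)
    superimage = stack.mean(axis=0)                  # stand-in for DSMT-NLM
    if weights is None:
        weights = np.full(len(stack), 1.0 / len(stack))
    superimposed = np.tensordot(weights, stack, axes=1)   # stand-in for WAMWT
    ratio = superimposed / np.maximum(superimage, 1e-12)  # ratio image
    return superimage * box_filter(ratio)            # despeckled result
```

The key design point is that dividing by the superimage removes most of the scene structure, so even a simple filter on the ratio image does not blur edges the way filtering the original image would.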

Author Contributions

All authors have contributed to this paper. L.B. was responsible for urging the progress of the work, guiding the feasibility of the solutions, and providing guidance on writing the paper. J.Z. was responsible for innovating the solutions, designing algorithm models, conducting algorithmic and comparative experiments, and writing the paper. Z.Z., Y.Y. and M.D. provided some guidance on the feasibility of the solutions. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (Grant No: 2020YFA0713503), the Project of Department of Science and Technology of Hunan Province (Grant No:2022JJ30561), the Research Foundation of the Department of Natural Resources of Hunan Province (Grant No:2022-15), and the Project of Department of Science and Technology of Hunan Province (Grant No:2023JJ30582).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are unavailable due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Montgomery, J.; Mahoney, C.; Brisco, B.; Boychuk, L.; Cobbaert, D.; Hopkinson, C. Remote Sensing of Wetlands in the Prairie Pothole Region of North America. Remote Sens. 2021, 13, 3878.
2. Xie, Q.; Dou, Q.; Peng, X.; Wang, J.; Lopez-Sanchez, J.M.; Shang, J.; Fu, H.; Zhu, J. Crop Classification Based on the Physically Constrained General Model-Based Decomposition Using Multi-Temporal RADARSAT-2 Data. Remote Sens. 2022, 14, 2668.
3. Meroni, M.; D'Andrimont, R.; Vrieling, A.; Fasbender, D.; Lemoine, G.; Rembold, F.; Seguini, L.; Verhegghen, A. Comparing land surface phenology of major European crops as derived from SAR and multispectral data of Sentinel-1 and -2. Remote Sens. Environ. 2020, 253, 112232.
4. Kang, J.; Wang, Z.; Zhu, R.; Xia, J.; Sun, X.; Fernandez-Beltran, R.; Plaza, A. DisOptNet: Distilling Semantic Knowledge From Optical Images for Weather-Independent Building Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
5. Sun, Z.; Leng, X.; Lei, Y.; Xiong, B.; Ji, K.; Kuang, G. BiFA-YOLO: A novel YOLO-based method for arbitrary-oriented ship detection in high-resolution SAR images. Remote Sens. 2021, 13, 4209.
6. Bu, L.; Dai, D.; Zhang, Z.; Yang, Y.; Deng, M. Hyperspectral Super-Resolution Reconstruction Network Based on Hybrid Convolution and Spectral Symmetry Preservation. Remote Sens. 2023, 15, 3225.
7. Yao, H.; Fu, B.; Zhang, Y.; Li, S.; Xie, S.; Qin, J.; Fan, D.; Gao, E. Combination of Hyperspectral and Quad-Polarization SAR Images to Classify Marsh Vegetation Using Stacking Ensemble Learning Algorithm. Remote Sens. 2022, 14, 5478.
8. Zhang, P.; Ban, Y.; Nascetti, A. Learning U-Net without forgetting for near real-time wildfire monitoring by the fusion of SAR and optical time series. Remote Sens. Environ. 2021, 261, 112467.
9. Fu, B.; Liang, Y.; Lao, Z.; Sun, X.; Li, S.; He, H.; Sun, W.; Fan, D. Quantifying scattering characteristics of mangrove species from Optuna-based optimal machine learning classification using multi-scale feature selection and SAR image time series. Int. J. Appl. Earth Obs. Geoinf. 2023, 122, 103446.
10. Baraha, S.; Sahoo, A.K.; Modalavalasa, S. A systematic review on recent developments in nonlocal and variational methods for SAR image despeckling. Signal Process. 2022, 196, 108521.
11. Shen, H.; Zhou, C.; Li, J.; Yuan, Q. SAR Image Despeckling Employing a Recursive Deep CNN Prior. IEEE Trans. Geosci. Remote Sens. 2020, 59, 273–286.
12. Yommy, A.S.; Liu, R.; Wu, S. SAR image despeckling using refined Lee filter. In Proceedings of the 2015 7th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 26–27 August 2015; IEEE: Piscataway, NJ, USA, 2015; Volume 2, pp. 260–265.
13. Painam, R.K.; Suchetha, M. Despeckling of SAR Images Using BEMD-Based Adaptive Frost Filter. J. Indian Soc. Remote Sens. 2022, 1–12.
14. Zhang, X.; Deng, K.; Fan, H. A new SAR image denoising algorithm of fusing Kuan filters and edge extraction. In Proceedings of the International Symposium on Lidar and Radar Mapping 2011: Technologies and Applications, Nanjing, China, 24 October 2011; SPIE: Bellingham, WA, USA, 2011; Volume 8286, pp. 92–98.
15. Liu, S.; Hu, Q.; Liu, T.; Zhao, J. Review on Synthetic Aperture Radar Image Denoising Algorithms. J. Ordnance Equip. Eng. 2018, 39, 106–112+252.
16. Shitole, S.; Jain, V.; Vanama, V.S.K. De-speckling of synthetic aperture radar using discrete Fourier transform. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1524–1527.
17. Choi, H.; Jeong, J. Speckle Noise Reduction Technique for SAR Images Using Statistical Characteristics of Speckle Noise and Discrete Wavelet Transform. Remote Sens. 2019, 11, 1184.
18. Liu, S.; Lei, Y.; Zhang, L.; Li, B.; Hu, W.; Zhang, Y.-D. MRDDANet: A Multiscale Residual Dense Dual Attention Network for SAR Image Denoising. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13.
19. Thakur, R.K.; Maji, S.K. AGSDNet: Attention and Gradient-Based SAR Denoising Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
20. Molini, A.B.; Valsesia, D.; Fracastoro, G.; Magli, E. Towards Deep Unsupervised SAR Despeckling with Blind-Spot Convolutional Neural Networks. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2507–2510.
21. Yu, F.; Wei, C.; Deng, P.; Peng, T.; Hu, X. Deep exploration of random forest model boosts the interpretability of machine learning studies of complicated immune responses and lung burden of nanoparticles. Sci. Adv. 2021, 7, eabf4130.
22. Lê, T.T.; Atto, A.M.; Trouve, E.; Nicolas, J.-M. Adaptive Multitemporal SAR Image Filtering Based on the Change Detection Matrix. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1826–1830.
23. Chierchia, G.; El Gheche, M.; Scarpa, G.; Verdoliva, L. Multitemporal SAR Image Despeckling Based on Block-Matching and Collaborative Filtering. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5467–5480.
24. Zhao, W.; Deledalle, C.-A.; Denis, L.; Maitre, H.; Nicolas, J.-M.; Tupin, F. Ratio-Based Multitemporal SAR Images Denoising: RABASAR. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3552–3565.
25. di Martino, G.; di Simone, A.; Iodice, A.; Riccio, D.; Ruello, G. Assessing Performance of Multitemporal SAR Image Despeckling Filters via a Benchmarking Tool. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1536–1539.
26. Buades, A.; Coll, B.; Morel, J.M. Non-local means denoising. Image Process. Line 2011, 1, 208–212.
Figure 1. Flowchart of the whole framework.
Figure 2. Illustrative diagram of the DSMT-NLM.
Figure 3. The schematic diagram of the directional segmentation.
Figure 4. The schematic diagram of the directional window (left-up).
Figure 5. Untreated simulated multi-temporal SAR images.
Figure 6. Experimental results.
Figure 7. Comparison of detail amplification.
Figure 8. Untreated real multi-temporal SAR image (I).
Figure 9. Untreated real multi-temporal SAR image (II).
Figure 10. Experimental results (I).
Figure 11. Experimental results (II).
Figure 12. Comparison of detail amplification (I).
Figure 13. Comparison of detail amplification (II).
Table 1. Evaluation Metrics. (Red indicates the best, bold indicates secondary.)

Simulated Multi-Temporal SAR Image | SSIM   | Niqe    | Corrcoef
AM                                 | 0.3455 | 18.6132 | 0.9245
DAM                                | 0.3455 | 18.6132 | 0.9245
BWAM                               | 0.5941 | 7.4341  | 0.9894
DBWAM                              | 0.5941 | 7.4341  | 0.9894
Frost                              | 0.4885 | 10.4314 | 0.9619
NLM                                | 0.4649 | 12.5985 | 0.9693
Proposed                           | 0.5534 | 7.1388  | 0.9893
Table 2. Detailed parameters of the real multi-temporal SAR images.

Satellite    | Orbit View              | Resolution | Polarization | Incidence Angle | Region
Cosmo SkyMed | Ascending right-looking | 3 m        | HH           | 24.98°~28.35°   | Shanghai
Table 3. The acquisition date of the SAR images.

Satellite                           | Sequence 1 | Sequence 2 | Sequence 3
Real multi-temporal SAR image (I)   | 2010/02/07 | 2010/04/12 | 2010/04/28
Real multi-temporal SAR image (II)  | 2010/02/07 | 2010/04/12 | 2010/04/28
Table 4. Evaluation Metrics. (Red indicates the best, bold indicates secondary.)

Real Multi-Temporal SAR Image (I)  | SNR/dB | PSNR/dB | ENL    | Niqe
AM                                 | 8.7484 | 20.4414 | 1.3998 | 6.2595
DAM                                | 7.0705 | 18.7589 | 2.0868 | 6.9780
BWAM                               | 9.0392 | 20.7307 | 1.3877 | 5.8263
DBWAM                              | 7.0172 | 18.7055 | 2.1282 | 7.0502
Frost                              | 2.9493 | 17.8931 | 2.9493 | 4.8454
NLM                                | 1.5585 | 19.5814 | 1.5585 | 5.5520
Proposed                           | 9.5634 | 23.7348 | 2.1835 | 6.0050

Real Multi-Temporal SAR Image (II) | SNR/dB | PSNR/dB | ENL    | Niqe
AM                                 | 8.2598 | 21.0189 | 0.6413 | 7.1949
DAM                                | 6.8554 | 19.5909 | 0.8083 | 9.1173
BWAM                               | 8.1719 | 21.0238 | 0.5916 | 6.9635
DBWAM                              | 6.8004 | 19.5581 | 0.7784 | 8.6540
Frost                              | 1.2191 | 17.2000 | 1.2191 | 5.7820
NLM                                | 0.7523 | 18.0697 | 0.7523 | 7.4868
Proposed                           | 9.2231 | 24.8388 | 0.9776 | 9.0078
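The ENL and PSNR columns in Table 4 follow standard definitions; a minimal sketch (assuming 8-bit imagery for the PSNR peak value, and a manually chosen homogeneous region for ENL):

```python
import numpy as np

def enl(region):
    # Equivalent number of looks over a homogeneous region: (mean/std)^2.
    # Higher ENL indicates stronger speckle suppression.
    m, s = region.mean(), region.std()
    return (m / s) ** 2

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB against a reference image.
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

ENL is computed over homogeneous patches only, since texture inflates the standard deviation and understates the despeckling quality.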

Share and Cite

MDPI and ACS Style

Bu, L.; Zhang, J.; Zhang, Z.; Yang, Y.; Deng, M. Enhancing RABASAR for Multi-Temporal SAR Image Despeckling through Directional Filtering and Wavelet Transform. Sensors 2023, 23, 8916. https://doi.org/10.3390/s23218916