Article

A Collaborative Despeckling Method for SAR Images Based on Texture Classification

Gongtang Wang, Fuyu Bo, Xue Chen, Wenfeng Lu, Shaohai Hu and Jing Fang

1 School of Physics and Electronics, Shandong Normal University, Jinan 250014, China
2 School of Management Engineering, Shandong Jianzhu University, Jinan 250101, China
3 Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(6), 1465; https://doi.org/10.3390/rs14061465
Submission received: 10 February 2022 / Revised: 4 March 2022 / Accepted: 17 March 2022 / Published: 18 March 2022
(This article belongs to the Special Issue Advances of Noise Radar for Remote Sensing (ANR-RS))

Abstract

Speckle is an unavoidable noise-like phenomenon in Synthetic Aperture Radar (SAR) imaging. To remove speckle, many despeckling methods have been proposed over the past three decades, including spatial-domain methods, transform-domain methods, and non-local filtering methods. However, SAR images usually contain many different types of regions, both homogeneous and heterogeneous. Some filters despeckle effectively in homogeneous regions but cannot preserve structures in heterogeneous regions; others preserve structures well but do not suppress speckle effectively. Motivated by this complementarity, we design a combination of two state-of-the-art despeckling tools that overcomes their respective shortcomings. To select the best filter output for each area of the image, clustering and Gray Level Co-Occurrence Matrices (GLCM) are used for image classification and weighting, respectively. Clustering and GLCM operate on optical images co-registered with the SAR images, because the structural information of the two is consistent and the optical images are much cleaner than the SAR images. Experimental results on synthetic and real-world SAR images show that the proposed method provides better objective performance indices under strong noise, and subjective visual inspection demonstrates its great potential in preserving structural details while suppressing speckle.


1. Introduction

Synthetic Aperture Radar (SAR) is a high-resolution imaging radar that can obtain radar images similar to optical images under low-visibility weather conditions. SAR is not limited by climatic conditions and, as an active microwave sensor, can continuously observe the Earth. Moreover, it has a strong capability to distinguish surface features, and the images it produces are rich in information, including amplitude, phase, and polarization, which compensates for the shortcomings of other imaging modalities such as visible and infrared light. SAR imagery is therefore one of the most valuable data sources for analysis. Unfortunately, SAR imaging is based on coherent processing, which gives rise to a multiplicative noise called speckle, so the usual models for reducing additive noise are ineffective. Intense speckle may seriously impact subsequent processing such as segmentation, classification, and target detection [1,2,3]. The study of despeckling is therefore critical to the application of SAR images.
SAR image despeckling has been a hot research field in recent decades [4,5], with new algorithms proposed almost every year. Traditional spatial filtering methods are simple [6,7,8], but they may cause over-smoothing, loss of textures, and reduced resolution. At the end of the 20th century, the wavelet transform provided a new direction for SAR image despeckling [9,10,11]. Transform-domain methods can effectively suppress speckle, but they may introduce pixel distortions and artifacts, mainly due to the choice of basis function in the transform domain. Total variation methods maintain boundaries well [12,13], but a significant disadvantage is the staircase effect in the filtered images. In recent years, low-rank representation methods have achieved great success in image denoising [14,15], although the multiplicative noise model must be converted into an additive one before they can be used. Furthermore, deep learning shows outstanding performance in many natural image processing tasks, and the remote sensing community has started to exploit its potential [16,17,18,19]. Deep learning approaches to SAR imagery are advancing rapidly; some do not simply transfer natural-image processing to SAR images, but exploit the spatial and statistical characteristics of SAR data or combine more sophisticated methods [20,21,22]. Nevertheless, at present, deep learning-based SAR despeckling requires large datasets with clean reference images, and noise-free versions of real SAR images are not easy to obtain.
Non-Local Means (NLM) methods, with their particular advantages, have achieved excellent despeckling results for SAR images [23]. The core idea of the NLM filter is to exploit spatial correlation across the entire image for noise removal, which can produce promising results. For example, the Probabilistic Patch-Based (PPB) filter [24] uses a new similarity criterion instead of the Euclidean distance and achieves strong speckle suppression by iteratively refining the weights. SAR-Block-Matching 3D (SAR-BM3D) [25] is the SAR version of the BM3D algorithm [26], combining non-local and transform-domain methods: similar patches are found in non-local regions and despeckled in the transform domain. Moreover, Guided Non-Local Means (GNLM) [27] uses the structural information of co-registered optical images to guide the filtering, since it is easy to find the best predictor in a noise-free optical image; thanks to this structural similarity, high filtering quality can be obtained. Recently, more and more sophisticated despeckling methods have been proposed. Among these, it is worth mentioning that Guo et al. [28] proposed a truncated nonconvex nonsmooth variational model for speckle suppression, Ferraioli et al. [29] obtained the similarity by evaluating ratio patches within an anisotropic approach, Penna et al. [30] replaced the Euclidean distance with a stochastic distance and despeckled in the Haar wavelet domain, and Aranda-Bojorges et al. [31] incorporated clustering and sparse representation into the BM3D framework. In addition, polarimetric SAR can obtain richer target information than single-channel SAR, and several studies address polarimetric SAR despeckling [32]. Nevertheless, polarimetric SAR despeckling is more complicated and requires sophisticated methods. Mullissa et al. [33] proposed a multistream complex-valued fully convolutional network for despeckling polarimetric SAR images that can effectively estimate the covariance matrix of polarimetric SAR.
However, most current despeckling methods have drawbacks. PPB, for example, performs well in homogeneous regions but cannot preserve texture details in heterogeneous regions and produces wavelike visual artifacts. In contrast, SAR-BM3D preserves texture structure well, but its speckle suppression in homogeneous regions is mediocre. In short, a single method is not enough for SAR image despeckling. Accordingly, this study reports a new despeckling method that combines two complementary filters.
We use co-registered optical images and Superpixel-Based Fast Fuzzy C-Means (SFFCM) clustering [34] to group pixels with the same characteristics, assigning different weights to the filters based on the consistency of the structural information between the co-registered optical and SAR images. The weights are derived from Gray Level Co-Occurrence Matrices (GLCM) [35], a common tool for describing texture through the spatial distribution and correlation characteristics of image gray levels. We analyze the texture of many images in advance and save the experimentally optimal weights as our reference dataset. When a new image patch is given, the nearest item in the dataset is retrieved and its weight assigned directly. Experiments on simulated and real-world SAR images show that the proposed method improves both objective and subjective indicators more significantly than any single method, while its ratio image contains less leaked texture information.
The rest of this article is organized as follows. We describe the materials and methods in Section 2. The experimental results are presented in Section 3. Finally, the discussion and conclusions are given in Section 4 and Section 5, respectively.

2. Materials and Methods

In order to obtain the best despeckling results, we select state-of-the-art despeckling methods: PPB, SAR-BM3D, Weighted Nuclear Norm Minimization (WNNM) [14], and GNLM. As mentioned earlier, PPB suppresses speckle well in homogeneous areas, and SAR-BM3D performs well in texture preservation. However, WNNM, as a low-rank method, cannot be used directly for SAR image despeckling: a logarithmic operator must first convert the multiplicative model into an additive one. We call this homomorphic version H-WNNM.
Let Y and X represent the observed SAR image and the noise-free image, respectively. Then Y is related to X by the multiplicative model [36]

$$ Y = N \times X \qquad (1) $$
where N is the multiplicative speckle noise. Assuming the speckle is fully developed, the noise in an L-look intensity SAR image follows a gamma distribution with unit mean and variance 1/L. The Probability Density Function (PDF) of N is given by [37]

$$ p(N) = \frac{L^{L} N^{L-1} e^{-LN}}{\Gamma(L)}, \qquad N \geq 0, \; L \geq 1 \qquad (2) $$
where the gamma function is $\Gamma(L) = \int_{0}^{\infty} t^{L-1} e^{-t}\,dt$. Applying a logarithmic operator to (1), the multiplicative speckle model transforms into:

$$ \log Y = \log N + \log X \qquad (3) $$
In (3), $\log N$ and $\log X$ can be considered uncorrelated signals, and $\log N$ follows a near-Gaussian distribution with a biased (non-zero) mean. Therefore, a correction of this mean bias is required after the inverse logarithmic operation, especially for SAR images with high noise levels.
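To make the model concrete, the following minimal Python sketch shows how L-look speckle obeying (1)-(2) can be simulated and how the homomorphic step (3) is applied; the bias $\psi(L) - \log L$ is the known mean of $\log N$ for gamma-distributed intensity speckle [36]. Function names here are illustrative, not taken from any of the compared toolboxes.

```python
import numpy as np
from scipy.special import digamma

def add_speckle(x, L=1, rng=None):
    """Contaminate a clean intensity image x with fully developed L-look
    speckle following (1)-(2): N ~ Gamma with unit mean and variance 1/L."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.gamma(shape=L, scale=1.0 / L, size=x.shape)  # E[N]=1, Var[N]=1/L
    return n * x

def log_transform(y, L=1):
    """Homomorphic step (3). Returns log Y and the non-zero mean of log N,
    psi(L) - log(L), which must be subtracted before exponentiating back."""
    bias = digamma(L) - np.log(L)
    return np.log(y), bias
```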
GNLM uses the optical image as guidance for SAR image despeckling. Although SAR and optical images arise from completely different imaging mechanisms, the structural information of co-registered SAR and optical images is the same; by exploiting this structural consistency, co-registered optical images can greatly help SAR despeckling. However, using an optical image to guide SAR despeckling requires special care. Despite careful co-registration and the obvious correspondence of the observed scene, important differences exist between optical and SAR images, especially in the presence of man-made objects and regions with significant orography. While the optical data can certainly help guide the despeckling process, there is a risk of injecting alien information into the filtered SAR image and generating annoying artifacts. To prevent this, GNLM performs an SAR-domain statistical test to reject risky predictors: for each target patch, it carries out a preliminary test to single out unreliable predictors and exclude them altogether from the nonlocal average, and it also limits the maximum number of predictors. Thanks to these limitations, the time span and the mismatch between acquisitions do not significantly impact the filter results [27].
The key to this combination method is distinguishing homogeneous from heterogeneous regions in the image and allocating different weights to the corresponding filters. The solution in [38] is to compute the Equivalent Number of Looks (ENL) at different pixels to determine whether an area is homogeneous, heterogeneous, or extremely heterogeneous; the ENL is then converted to a weight in (0, 1) through a sigmoid function. The larger the ENL, the more likely the region is homogeneous. However, because the SAR image is overwhelmed by strong speckle, the ENL of the noisy image may be inaccurate in some cases, so this strategy has limitations.
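As a hedged illustration of that soft-classification idea, the sketch below computes the ENL of an intensity patch and squashes it into a (0, 1) weight with a sigmoid; the midpoint and slope parameters are hypothetical placeholders, not values taken from [38].

```python
import numpy as np

def enl(patch):
    """Equivalent Number of Looks of an intensity patch: mean^2 / variance.
    Large values suggest a homogeneous region."""
    return patch.mean() ** 2 / patch.var()

def enl_to_weight(e, midpoint=50.0, slope=0.1):
    """Map an ENL value to a (0, 1) weight with a sigmoid.
    midpoint and slope are illustrative, not values from [38]."""
    return 1.0 / (1.0 + np.exp(-slope * (e - midpoint)))
```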
Since co-registered optical images carry the same structural information as SAR images, they can be used both to guide filtering and to extract further information. The strategy we choose to distinguish different regions is therefore SFFCM, which divides an image into clusters based on its self-similarity. Unlike supervised methods such as Convolutional Neural Networks (CNN) and Fully Convolutional Networks (FCN), SFFCM is unsupervised: supervised methods require a large number of training samples and labeled data for feature learning, whereas SFFCM requires neither. Moreover, reference [34] uses a new watershed transform based on Multiscale Morphological Gradient Reconstruction (MMGR-WT) to generate superpixel images. Based on the superpixel image obtained by MMGR-WT, the objective function of SFFCM is:
$$ F_m = \sum_{n=1}^{q} \sum_{k=1}^{C} S_n u_{kn}^{m} \left\| \left( \frac{1}{S_n} \sum_{p \in R_n} x_p \right) - v_k \right\|^{2} \qquad (4) $$
where $n$ indexes the color levels, $1 \leq n \leq q$, and $q$ is the number of regions in the superpixel image ($n, q \in \mathbb{N}^{+}$). The superpixel step segments the original image into small contiguous regions; replacing all pixels in a region with the region average yields far fewer color levels $n$ and shortens the processing time. $C$ is the number of clusters (see Section 3.1 for details), $S_n$ is the number of pixels in the $n$th region $R_n$, and $x_p$ is the color of pixel $p$ within the $n$th region of the superpixel image obtained by MMGR-WT. $u_{kn}$ denotes the fuzzy membership of level $n$ with respect to the $k$th clustering center $v_k$, and $m$ is the weighting exponent; according to [34], the performance of SFFCM is insensitive to the value of $m$ (from 2 to 100).
We then minimize this objective function. Using a Lagrange multiplier, the constrained problem is transformed into an unconstrained one:
$$ \tilde{F}_m = \sum_{n=1}^{q} \sum_{k=1}^{C} S_n u_{kn}^{m} \left\| \left( \frac{1}{S_n} \sum_{p \in R_n} x_p \right) - v_k \right\|^{2} - \lambda \left( \sum_{k=1}^{C} u_{kn} - 1 \right) \qquad (5) $$
where $\lambda$ is the Lagrange multiplier. Setting the partial derivatives of $\tilde{F}_m$ with respect to $u_{kn}$ and $v_k$ to zero (i.e., $\partial \tilde{F}_m / \partial u_{kn} = 0$ and $\partial \tilde{F}_m / \partial v_k = 0$) gives the corresponding solutions for $u_{kn}$ and $v_k$:
$$ u_{kn} = \frac{\left\| \left( \frac{1}{S_n} \sum_{p \in R_n} x_p \right) - v_k \right\|^{-2/(m-1)}}{\sum_{j=1}^{C} \left\| \left( \frac{1}{S_n} \sum_{p \in R_n} x_p \right) - v_j \right\|^{-2/(m-1)}} \qquad (6) $$
$$ v_k = \frac{\sum_{n=1}^{q} u_{kn}^{m} \sum_{p \in R_n} x_p}{\sum_{n=1}^{q} S_n u_{kn}^{m}} \qquad (7) $$
First, we initialize a random membership partition matrix $U^{(0)}$ and compute the clustering centers $v_k$ from it. We then update the membership matrix $U$ using (6) and stop when $\| U^{(b)} - U^{(b+1)} \| < \eta$; to limit accuracy loss, $\eta$ is set to $10^{-5}$ empirically. $U$ records which cluster each pixel belongs to, from which the output image is obtained. The computation is fast because the number of distinct colors in the superpixel image is much smaller than in the original image. As shown in Figure 1, (a) is the original optical image and (b) the superpixel image; (c) shows that SFFCM classifies the image into C classes based on local spatial information and self-similarity.
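The update loop of (6) and (7) reduces to fuzzy c-means on the superpixel mean colors. Below is a minimal NumPy sketch under that reading; it is not the authors' implementation, and the MMGR-WT superpixel step is assumed to have already produced the region means and sizes.

```python
import numpy as np

def sffcm_updates(xbar, S, C=7, m=2.0, eta=1e-5, max_iter=100, rng=None):
    """Fuzzy c-means on superpixel mean colors, iterating (6) and (7).
    xbar: (q, d) mean color of each superpixel region; S: (q,) region sizes.
    Returns memberships U (C, q) and cluster centers V (C, d)."""
    rng = np.random.default_rng() if rng is None else rng
    U = rng.random((C, xbar.shape[0]))
    U /= U.sum(axis=0, keepdims=True)          # memberships sum to 1 per region
    for _ in range(max_iter):
        W = S * U ** m                         # size-weighted memberships, (C, q)
        V = (W @ xbar) / W.sum(axis=1, keepdims=True)              # Eq. (7)
        d2 = ((xbar[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)  # squared distances
        inv = np.maximum(d2, 1e-12) ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0, keepdims=True)                # Eq. (6)
        if np.abs(U_new - U).max() < eta:      # stop when U barely changes
            return U_new, V
        U = U_new
    return U, V
```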
In this work, we use the GLCM to set the weights. The GLCM itself is only a quantitative description of texture and cannot be used directly to extract texture features. Instead, four feature properties computed from the GLCM reflect four different texture characteristics of the image: correlation (COR), contrast (CON), energy (ENE), and homogeneity (HOM). For example, COR reflects the local gray-level correlation in the image, while ENE, the sum of squares of the GLCM element values, reflects the uniformity of the gray-level distribution. CON represents the clarity of the textures; the more complex the textures, the higher the CON value. HOM can be used to check similarity in the image. Each feature image is obtained by traversing the entire image with a 7 × 7 sliding window: if the window is too small, the texture is not adequately represented; if it is too large, the computational cost increases significantly. As shown in Figure 2, (a) is the COR, (b) the CON, (c) the ENE, and (d) the HOM feature image.
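For illustration, a single-offset GLCM and the four properties can be computed as in the following simplified sketch (one common definition of each property is used; a library such as scikit-image offers equivalent functionality, and a full feature image would evaluate this over the 7 × 7 sliding window described above).

```python
import numpy as np

def glcm_features(patch, levels=16, dx=1, dy=0):
    """Single-offset GLCM of a grayscale patch and the four texture
    properties used in this work: COR, CON, ENE, HOM."""
    pmax = patch.max()
    g = (patch / pmax * (levels - 1)).astype(int) if pmax > 0 else np.zeros(patch.shape, int)
    a = g[: g.shape[0] - dy, : g.shape[1] - dx]   # reference pixels
    b = g[dy:, dx:]                               # neighbors at offset (dy, dx)
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)       # co-occurrence counts
    P /= P.sum()                                  # normalize to probabilities
    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * P).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    cor = ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j + 1e-12)
    con = (((i - j) ** 2) * P).sum()
    ene = (P ** 2).sum()
    hom = (P / (1.0 + np.abs(i - j))).sum()
    return cor, con, ene, hom
```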
As mentioned above, we set the clustering to C classes. We first use SFFCM to segment the image and assign different filter weights to each cluster. Then, by iteratively changing the weights, the optimal weights are identified as those yielding the best subjective and objective indicators. At that point, the mean and variance of the texture region corresponding to each cluster in the four feature images are computed and recorded, together with the optimal weight, as one group of our reference dataset. In our experiment, we learned the best weights for different textures in advance from 50 images of different types (including mountains, rivers, buildings, roads, forests, plains, etc.). Although this procedure is laborious, it is effective. When a new image patch is given, the Euclidean distances between its four feature properties and each dataset group are computed; the smallest distance is taken and the corresponding optimal weights are applied (see the sketch below). We summarize the complete procedure in Algorithm 1.
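A sketch of that nearest-neighbor lookup follows; encoding the signature as the concatenated means and variances of the four feature images is our assumption about a reasonable layout, not a specification from the paper.

```python
import numpy as np

def lookup_weight(signature, dataset):
    """Return the stored optimal weight whose texture signature is nearest.
    signature: 1-D vector of per-cluster means and variances of the COR,
    CON, ENE, and HOM feature images; dataset: list of (signature, weight)
    pairs learned offline from the 50 training images."""
    sigs = np.array([s for s, _ in dataset])
    d = np.linalg.norm(sigs - signature, axis=1)   # Euclidean distances
    return dataset[int(np.argmin(d))][1]
```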
Algorithm 1. The proposed SAR image despeckling algorithm.
Input: SAR image I, optical image O, cluster number C.
  Obtain the filter 1 and filter 2 results Y1, Y2.
  Cluster the optical image by SFFCM.
  for each cluster Ci do
    Compute the feature images by GLCM.
    Estimate the weight wi.
  end for
  Obtain the weighting map.
  Compute the weighted sum of Y1 and Y2.
Output: The despeckled image Î.
Following the method above, the block diagram of the proposed technique is shown in Figure 3. The original SAR image is processed by filter 1, and by filter 2 together with the co-registered optical image when required. At the same time, the optical image is segmented by SFFCM to obtain the weights of the different regions. Finally, a linear combination yields the final despeckled image.
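As an illustration of this final step, here is a minimal sketch of the per-cluster weighted combination; the convex form w·Y1 + (1 − w)·Y2 is our reading of the "weighted sum" in Algorithm 1, not a formula stated explicitly in the paper.

```python
import numpy as np

def fuse(y1, y2, labels, cluster_weights):
    """Per-cluster convex combination of two filter outputs.
    labels: SFFCM cluster map with values 0..C-1 (from the optical image);
    cluster_weights[c]: weight given to filter 1 in cluster c."""
    w = np.asarray(cluster_weights)[labels]   # expand to a per-pixel weighting map
    return w * y1 + (1.0 - w) * y2
```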

3. Experimental Results

In this section, we first compared four filters (PPB, SAR-BM3D, H-WNNM, and GNLM) and then selected several complementary combinations. Next, we chose the best combination as the final output and compared it with other despeckling methods. Contrast experiments were performed on synthetic multiplicative noise images and real SAR images. As shown in Figure 4, we selected six test images: three synthetic (a) and three real SAR images (b), with the corresponding co-registered optical images at the bottom. Following [39], we selected representative images containing homogeneous regions, complex texture regions, roads, etc. To compare objective indices, we computed the Feature Similarity Index (FSIM) [40] and the Peak Signal-to-Noise Ratio (PSNR) on the synthetic multiplicative noise images. Generally, the larger the PSNR, the better the image quality. FSIM lies between 0 and 1; a larger FSIM indicates better preservation of image structure. For real-world SAR images, the no-reference measure we selected is the ENL; generally, a larger ENL indicates a stronger capability to remove speckle. In addition, we calculated the ratio images to compare the residual speckle.
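For reference, minimal implementations of PSNR and of the ratio-image diagnostic used throughout this section are sketched below; FSIM would come from the reference implementation of [40], and ENL is mean²/variance over the highlighted homogeneous regions, as defined earlier.

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; larger means better fidelity."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ratio_image(noisy, filtered, eps=1e-6):
    """Under Y = N * X, noisy / filtered should contain only speckle;
    any visible structure reveals texture removed by the filter."""
    return noisy / np.maximum(filtered, eps)
```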

3.1. Parameter Set

To test the sensitivity of the proposed method to the parameter C, we further examined the relationship between the number of clusters and the filtering results. All real-world SAR images and co-registered optical images come from the SEN1-2 dataset [41]. The SAR images were acquired by Sentinel-1, with a pixel spacing of 20 m in range and 5 m in azimuth, VV polarization, and IW acquisition mode. The optical images were acquired by Sentinel-2, using only bands 4, 3, and 2 (i.e., the red, green, and blue channels) to generate RGB images. As shown in Figure 4a, we selected three standard test images rich in homogeneous regions and structures to evaluate the impact of the cluster number C on PSNR and FSIM. In these experiments, we contaminated the reference test images with multiplicative speckle of various numbers of looks (L = 1, 2, 4, 8), used the combination of SAR-BM3D and GNLM as output (the reason is given later), and varied C from 2 to 10 to determine the best value. The curve in Figure 5a shows that PSNR increases significantly as C ranges from 2 to 7 and stabilizes when C is greater than 7. Similarly, the curve in Figure 5b shows that FSIM increases clearly as C ranges from 2 to 7 and stabilizes for larger values. Therefore, we set the cluster number C to 7.
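The sweep itself amounts to re-running the pipeline for each candidate C and scoring the output; a minimal sketch follows, where despeckle_with_C is a hypothetical placeholder for the full SFFCM + GLCM + fusion pipeline.

```python
import numpy as np

def sweep_cluster_number(noisy, clean, despeckle_with_C, c_range=range(2, 11)):
    """Record the PSNR obtained with each candidate cluster number C,
    as in Figure 5; pick the smallest C where the curve plateaus (C = 7 here)."""
    scores = {}
    for C in c_range:
        est = despeckle_with_C(noisy, C)
        mse = np.mean((clean.astype(float) - est.astype(float)) ** 2)
        scores[C] = 10.0 * np.log10(255.0 ** 2 / mse)
    return scores
```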

3.2. Comparison of Selected Tools

To acquire a comprehensive understanding of the selected filters, we tested the performance of all four. The results are shown in Figure 6: the top row, from left to right, shows the noisy image (L = 1), PPB, SAR-BM3D, H-WNNM, and GNLM; the bottom row shows the corresponding ratio images.
From Figure 6, we can see that PPB and H-WNNM are characterized by excessive smoothing and the loss of many texture details, and PPB also produces some artifacts. On the other hand, SAR-BM3D demonstrates a good capability to preserve textures but does not perform well in speckle suppression. GNLM appears to be the best filter, performing well in both homogeneous and complex texture areas and producing no artifacts; however, in some complex areas, GNLM's over-smoothing causes the loss of some texture details. Table 1 confirms the visual assessment, with the best values marked in bold. GNLM outperforms the others in despeckling while preserving the main structural details, and it presents the highest PSNR and ENL values. The best FSIM is given by SAR-BM3D (0.8626), while the lowest is H-WNNM (only 0.7450). The ENL of SAR-BM3D is the lowest, indicating a poor capability to remove speckle. From the ratio images in the bottom row of Figure 6, we can see that GNLM and H-WNNM leak a significant amount of texture, indicating a weaker capability to preserve texture details. The ratio image of PPB also shows slight texture, indicating that some texture areas are over-smoothed. SAR-BM3D leaves little texture in its ratio image, confirming its excellent texture preservation.

3.3. Comparison of Selected Combinations

Based on the analysis of all the candidate filters in the previous section, we selected the following three combinations for further analysis. They were:
  • SAR-BM3D and GNLM (Fusion #1)
  • SAR-BM3D and PPB (Fusion #2)
  • SAR-BM3D and H-WNNM (Fusion #3)
We combined two complementary filters to overcome their respective shortcomings and achieve the best performance. The experimental results are shown in Figure 7. Considering the very strong noise at the input, Fusion #1 clearly provides encouraging filter quality.
The speckle is suppressed effectively without degrading the image resolution, and most details, even complex textures, are well preserved without introducing significant artifacts. In contrast, Fusions #2 and #3 cannot suppress speckle effectively. The corresponding ratio images are shown at the bottom of Figure 7. An excellent filter should remove only the injected speckle, so the ratio image should contain speckle without texture. There is obvious structure leakage in the ratio images of Fusions #2 and #3, while that of Fusion #1 is satisfactory. The results on RS1 in Figure 8 confirm these conclusions: Fusions #2 and #3 achieve very limited speckle suppression and distort the texture regions, whereas Fusion #1 preserves both details and linear structures while smoothing the image, and its ratio image retains less structural information.
Table 2 lists the numerical results obtained on these images. Fusion #1 shows a significant performance improvement: its PSNR is more than 1.2 dB higher than the other combinations, and similar behavior is observed for FSIM and ENL, where Fusion #1 provides the best values of all combinations. Hence, we chose Fusion #1 (SAR-BM3D and GNLM) as the final output based on the above evidence.

3.4. Comparison with Other Despeckling Methods

To further evaluate the performance of the proposed method quantitatively and qualitatively, in this section all of the images were tested (including synthetic and real SAR images).
All results are compared with the previously cited filtering methods, with the addition of the wavelet-contourlet filter (W-C) [42] and Fast Adaptive Non-Local SAR (FANS) [43], and some regions are enlarged for more accurate analysis. Through visual inspection of Figure 9, we find that PPB and H-WNNM over-smooth the image and blur its textures and edges. W-C does not suppress the speckle effectively and produces some artifacts. Although SAR-BM3D preserves image details well, its speckle suppression is limited. FANS produces some artifacts when the noise level is high, mainly due to errors in structure recognition.
GNLM suppresses speckle well, but blurs the details in complex textured areas. The proposed method preserves these structures while ensuring effective speckle suppression, as demonstrated in the zoomed region. In Figure 10, we can see some structures in the ratio images of W-C, PPB, H-WNNM, and GNLM. Fewer structures exist for the proposed method and FANS, while SAR-BM3D has hardly any structural leakage. The experimental results of another synthetic multiplicative noise image are shown in Figure 11 and Figure 12. Similar to the previous analysis, the proposed method achieves the best balance in preserving the textural structure and speckle suppression.
Objective indices for the three synthetic multiplicative noise images are shown in Table 3. The FSIM and PSNR of the proposed method are nearly optimal when L is less than 4, especially when L equals 1 (the most challenging case). W-C and H-WNNM give poor results, as Table 3 shows. PPB, SAR-BM3D, GNLM, and FANS obtain good objective values, but lower than the proposed method. This means the proposed method can deliver good despeckling results for images corrupted by very strong speckle.
However, when the number of looks is high (i.e., the noise level is low), SAR-BM3D and FANS perform quite well, mainly because at low noise levels their results do not introduce excessive residues or artifacts, respectively. Regarding ENL, GNLM and H-WNNM maintain nearly the highest values, and PPB and FANS obtain better ENL than W-C and SAR-BM3D. Although the ENL of the proposed method is not the highest, it exceeds those of PPB and FANS, indicating that the proposed method adequately suppresses speckle in homogeneous regions.
These results are even more important for real-world SAR images than for synthetic ones. The values of ENL of the three selected real-world SAR images are shown in Table 4. H-WNNM and GNLM are the two filters with the strongest speckle noise suppression capability, while W-C and SAR-BM3D are the worst. PPB provides better performance than FANS.
Since the proposed method combines the features of GNLM and SAR-BM3D, the ENL is lower than that for GNLM. However, it is better than for PPB and FANS, demonstrating that the proposed method guarantees adequate speckle suppression.
Visual inspection is necessary for a solid evaluation. In Figure 13, we show the results on the real-world SAR image RS1 and the zoomed area of interest. As shown in Figure 13, PPB, H-WNNM, GNLM, and FANS do not completely preserve texture details, and PPB and FANS also produce ghost artifacts. W-C has limited speckle suppression capability and produces some artifacts. SAR-BM3D faithfully preserves the texture details but retains too much speckle. The proposed method performs well, not only in effective speckle suppression but also in preserving most textural details; its ratio images in Figure 14 also contain the fewest structures. The despeckling results for the real-world SAR images RS2 and RS3 are shown in Figures 15–18, reconfirming that the proposed method preserves edges and image details while removing speckle.
The executable codes of the compared methods can be downloaded from the authors' websites (http://www.math.u-bordeaux1.fr/~cdeledal/ppb; http://www.grip.unina.it/web-download.html; https://github.com/csjunxu/WNNM_CVPR2014; https://github.com/grip-unina/GNLM), accessed on 15 December 2020, and the parameters were set as recommended. All experiments were run in MATLAB R2017a on a desktop computer with an Intel Pentium 2.80 GHz CPU and 8 GB of memory. Table 4 shows that FANS is the fastest of the competing algorithms, with W-C, PPB, and SAR-BM3D also running quickly, while H-WNNM takes the longest. Because the proposed method runs two different filtering strategies, it does not yield superior processing times; in the future, we will study how to reduce the complexity of the algorithm and shorten the running time.

4. Discussion

In this study, two complementary filters are combined for despeckling, and the weights are allocated from an offline database built by GLCM-based learning. Since individual filters have different advantages and disadvantages, the proposed method combines the advantages and overcomes the disadvantages. Synthetic and real-world SAR images are used in the experiments; the purpose of the synthetic multiplicative noise images is to obtain rich objective indices, which helps us evaluate the performance of the proposed method.
Compared with other filters, the experimental results show that the proposed method exhibits nearly the highest FSIM and PSNR at high noise levels (L less than 8). Although the ENL of the proposed method is not the highest, visual inspection shows that its noise suppression is better. We observe similar results on real-world SAR images, where the proposed method achieves an optimal balance between preserving edges and textures and suppressing speckle. However, when the noise level is low, FANS achieves better filtering results; perhaps when L is more than 4, FANS could be added as a third component of the combination.
In the future, we will consider how to better solve this problem, and we will optimize the code scheme to reduce the processing time of the proposed method. Moreover, the probability of the incorrect assignment of weights can be reduced by increasing the training data. This would help reduce the generation of artifacts.

5. Conclusions

We propose a new weighted combination method, based on block classification, for two state-of-the-art SAR image despeckling methods. By clustering the co-registered optical images with SFFCM, similar blocks are obtained and weights are assigned to two different filters; the weights are set empirically from the feature images of the GLCM. Combining two complementary filters, selected through experimental analysis of existing filter schemes, produces better results. The experimental results show that the proposed method provides the highest objective indices under very strong noise contamination compared with the other methods, with strong speckle suppression and faithful preservation of details, whereas other methods may show insufficient or excessive speckle rejection. Experiments on real-world SAR images are likewise encouraging, as the proposed method does an excellent job of texture preservation and artifact reduction. For SAR images with low noise levels, our method still achieves good despeckling results, although FANS seems to obtain even better ones.
Since a large part of our scheme relies on co-registered optical images, the current approach is not fully automated; future work may resort to deep learning to address this issue, and we hope more researchers will invest in this direction. Another important drawback of our method is its complexity, and simplifying it is also a direction for our future research.

Author Contributions

Conceptualization, J.F., G.W. and F.B.; methodology, J.F. and F.B.; validation, X.C., W.L. and S.H.; investigation, G.W. and F.B.; writing—original draft preparation, F.B.; writing—review and editing, J.F.; All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of China under grant Nos. 62002208 and 62172030 and the Natural Science Foundation of Shandong Province under grant Nos. ZR2020MA082 and ZR2020MF119.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The images used in this paper come from the library of the Technical University of Munich: https://mediatum.ub.tum.de/1436631, accessed on 9 February 2022.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ENL: Equivalent Number of Looks
FANS: Fast Adaptive Non-Local SAR
FSIM: Feature Similarity Index
GLCM: Gray Level Co-Occurrence Matrices
GNLM: Guided Non-Local Means
H-WNNM: Homomorphic version of Weighted Nuclear Norm Minimization
MMGR-WT: Multiscale Morphological Gradient Reconstruction-Based Watershed Transform
NLM: Non-Local Means
PPB: Probabilistic Patch-Based
PSNR: Peak Signal-to-Noise Ratio
SAR: Synthetic Aperture Radar
SAR-BM3D: SAR-Block-Matching 3D
SFFCM: Superpixel-Based Fast Fuzzy C-Means
W-C: Wavelet-Contourlet filter

References

1. Cui, Z.; Qin, Y.; Zhong, Y.; Cao, Z.; Yang, H. Target Detection in High-Resolution SAR Image via Iterating Outliers and Recursing Saliency Depth. Remote Sens. 2021, 13, 4315.
2. Shang, R.; Peng, P.; Shang, F.; Jiao, L.; Shen, Y.; Stolkin, R. Semantic Segmentation for SAR Image Based on Texture Complexity Analysis and Key Superpixels. Remote Sens. 2020, 12, 2141.
3. Yu, M.; Dong, G.; Fan, H.; Kuang, G. SAR Target Recognition via Local Sparse Representation of Multi-Manifold Regularized Low-Rank Approximation. Remote Sens. 2018, 10, 211.
4. Ponmani, E.; Saravanan, P. Image denoising and despeckling methods for SAR images to improve image enhancement performance: A survey. Multimed. Tools Appl. 2021, 80, 26547–26569.
5. Argenti, F.; Lapini, A.; Bianchi, T.; Alparone, L. A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–35.
6. Hou, S.; Sun, Z.; Yang, L.; Song, Y. Kirsch Direction Template Despeckling Algorithm of High-Resolution SAR Images Based on Structural Information Detection. IEEE Geosci. Remote Sens. Lett. 2021, 18, 177–181.
7. Lee, J. Digital Image Enhancement and Noise Filtering by Use of Local Statistics. IEEE Trans. Pattern Anal. Mach. Intell. 1980, 2, 165–168.
8. Lee, J.; Wen, J.; Ainsworth, T.; Chen, K.; Chen, A. Improved Sigma Filter for Speckle Filtering of SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 202–213.
9. Bianchi, T.; Argenti, F.; Alparone, L. Segmentation-Based MAP Despeckling of SAR Images in the Undecimated Wavelet Domain. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2728–2742.
10. Argenti, F.; Bianchi, T.; Lapini, A.; Alparone, L. Fast MAP Despeckling Based on Laplacian–Gaussian Modeling of Wavelet Coefficients. IEEE Geosci. Remote Sens. Lett. 2012, 9, 13–17.
11. Bhateja, V.; Tripathi, A.; Gupta, A.; Lay-Ekuakille, A. Speckle suppression in SAR images employing modified anisotropic diffusion filtering in wavelet domain for environment monitoring. Measurement 2015, 74, 246–254.
12. Sun, Y.; Lei, L.; Guan, D.; Li, X.; Kuang, G. SAR Image Speckle Reduction Based on Nonconvex Hybrid Total Variation Model. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1231–1249.
13. Maji, S.; Thakur, R.; Yahia, H. SAR image denoising based on multifractal feature analysis and TV regularisation. IET Image Process. 2020, 14, 4158–4167.
14. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted Nuclear Norm Minimization with Application to Image Denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869.
15. Guan, D.; Xiang, D.; Tang, X.; Kuang, G. SAR image despeckling based on nonlocal low-rank regularization. IEEE Trans. Geosci. Remote Sens. 2018, 57, 3472–3489.
16. Dalsasso, E.; Yang, X.; Denis, L.; Tupin, F.; Yang, W. SAR Image Despeckling by Deep Neural Networks: From a Pre-Trained Model to an End-to-End Training Strategy. Remote Sens. 2020, 12, 2636.
17. Liu, S.; Gao, L.; Lei, Y.; Wang, M.; Hu, Q.; Ma, X.; Zhang, Y. SAR Speckle Removal Using Hybrid Frequency Modulations. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3956–3966.
18. Lattari, F.; Gonzalez Leon, B.; Asaro, F.; Rucci, A.; Prati, C.; Matteucci, M. Deep Learning for SAR Image Despeckling. Remote Sens. 2019, 11, 1532.
19. Zhang, Q.; Yuan, Q.; Li, J.; Yang, Z.; Ma, X. Learning a Dilated Residual Network for SAR Image Despeckling. Remote Sens. 2018, 10, 196.
20. Vitale, S.; Ferraioli, G.; Pascazio, V. Multi-Objective CNN-Based Algorithm for SAR Despeckling. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9336–9349.
21. Liu, Z.; Lai, R.; Guan, J. Spatial and transform domain CNN for SAR image despeckling. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
22. Xiong, K.; Zhao, G.; Wang, Y.; Shi, G.; Ma, X. SAR Imaging and Despeckling Based on Sparse, Low-Rank, and Deep CNN Priors. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
23. Buades, A.; Coll, B.; Morel, J. A non-local algorithm for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–26 June 2005; pp. 60–65.
24. Deledalle, C.A.; Denis, L.; Tupin, F. Iterative Weighted Maximum Likelihood Denoising With Probabilistic Patch-Based Weights. IEEE Trans. Image Process. 2009, 18, 2661–2672.
25. Parrilli, S.; Poderico, M.; Angelino, C.; Verdoliva, L. A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2012, 50, 606–616.
26. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
27. Vitale, S.; Cozzolino, D.; Scarpa, G.; Verdoliva, L.; Poggi, G. Guided Patchwise Nonlocal SAR Despeckling. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6484–6498.
28. Guo, M.; Han, C.; Wang, W.; Zhong, S.; Lv, R.; Liu, Z. A novel truncated nonconvex nonsmooth variational method for SAR image despeckling. Remote Sens. Lett. 2021, 12, 122–131.
29. Ferraioli, G.; Pascazio, V.; Schirinzi, G. Ratio-based nonlocal anisotropic despeckling approach for SAR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7785–7798.
30. Penna, P.A.; Mascarenhas, N.D. SAR speckle nonlocal filtering with statistical modeling of Haar wavelet coefficients and stochastic distances. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7194–7208.
31. Aranda-Bojorges, G.; Ponomaryov, V.; Reyes-Reyes, R.; Sadovnychiy, S.; Cruz-Ramos, C. Clustering-Based 3-D-MAP Despeckling of SAR Images Using Sparse Wavelet Representation. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
32. Mullissa, A.G.; Marcos, D.; Tuia, D.; Herold, M.; Reiche, J. DeSpeckNet: Generalizing deep learning-based SAR image despeckling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
33. Mullissa, A.G.; Persello, C.; Reiche, J. Despeckling polarimetric SAR data using a multistream complex-valued fully convolutional network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
34. Lei, T.; Jia, X.; Zhang, Y.; Liu, S.; Meng, H.; Nandi, A.K. Superpixel-Based Fast Fuzzy C-Means Clustering for Color Image Segmentation. IEEE Trans. Fuzzy Syst. 2019, 27, 1753–1766.
35. Yu, J. Texture Image Segmentation Based on Gaussian Mixture Models and Gray Level Co-occurrence Matrix. In Proceedings of the IEEE International Symposium on Information Science and Engineering (ISISE), Shanghai, China, 24–26 September 2010; pp. 149–152.
36. Hua, X.; Pierce, L.E.; Ulaby, F.T. Statistical properties of logarithmically transformed speckle. IEEE Trans. Geosci. Remote Sens. 2002, 40, 721–727.
37. Ulaby, F.T.; Dobson, M.C. Handbook of Radar Scattering Statistics for Terrain; Artech House: Norwood, MA, USA, 1989; p. 357.
38. Gragnaniello, D.; Poggi, G.; Scarpa, G.; Verdoliva, L. SAR Image Despeckling by Soft Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2118–2130.
39. Di Martino, G.; Poderico, M.; Poggi, G.; Riccio, D.; Verdoliva, L. Benchmarking Framework for SAR Despeckling. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1596–1615.
40. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
41. Schmitt, M.; Hughes, L.H.; Zhu, X.X. The SEN1-2 dataset for deep learning in SAR-optical data fusion. arXiv 2018, arXiv:1807.01569.
42. Fang, J.; Wang, D.; Xiao, Y.; Ajay Saikrishna, D. De-noising of SAR images based on Wavelet-Contourlet domain and PCA. In Proceedings of the IEEE International Conference on Signal Processing (ICSP), Hangzhou, China, 19–23 October 2014; pp. 942–945.
43. Cozzolino, D.; Parrilli, S.; Scarpa, G.; Poggi, G.; Verdoliva, L. Fast Adaptive Nonlocal SAR Despeckling. IEEE Geosci. Remote Sens. Lett. 2014, 11, 524–528.
Figure 1. Superpixel-Based Fast Fuzzy C-Means (SFFCM) clustering segmentation example. (a) Original image; (b) Superpixel image; (c) Output image.
Figure 2. Feature images of Gray Level Co-Occurrence Matrices (GLCM) computed from Figure 1a. (a) Correlation (COR); (b) Contrast (CON); (c) Energy (ENE); (d) Homogeneity (HOM).
Figure 3. Block diagram of the proposed despeckling method.
Figure 4. (a) Test synthetic multiplicative noise images (top row, from left to right: S1, S2, and S3, L = 1); (b) Test real-world SAR images (top row, from left to right: RS1, RS2, and RS3). The bottom rows show the corresponding co-registered optical images. The blue boxes highlight the regions of interest used to compute the Equivalent Number of Looks (ENL).
Figure 5. A comparison of the despeckling results with the parameter C ranging from 2 to 10. (a) Average Peak Signal-to-Noise Ratio (PSNR) values; (b) Average Feature Similarity Index (FSIM) values.
Figure 6. The results of the candidate despeckling methods on S1 (L = 1); the bottom row shows the corresponding ratio images. (a) Noisy image; (b) Probabilistic Patch-Based (PPB); (c) SAR-Block-Matching 3D (SAR-BM3D); (d) Homomorphic Weighted Nuclear Norm Minimization (H-WNNM); (e) Guided Non-Local Means (GNLM); (f) PPB; (g) SAR-BM3D; (h) H-WNNM; (i) GNLM.
Figure 7. The results of the candidate combinations on S2 (L = 1); the bottom row shows the corresponding ratio images. (a) Noisy image; (b) Fusion #1; (c) Fusion #2; (d) Fusion #3; (e) Fusion #1; (f) Fusion #2; (g) Fusion #3.
Figure 8. The results of the candidate combinations on RS1; the bottom row shows the corresponding ratio images. (a) Noisy image; (b) Fusion #1; (c) Fusion #2; (d) Fusion #3; (e) Fusion #1; (f) Fusion #2; (g) Fusion #3.
Figure 9. The results of all filters on S1 (L = 1). The red box highlights the zoomed region of interest. (a) Clean image; (b) Wavelet-Contourlet filter (W-C); (c) PPB; (d) SAR-BM3D; (e) H-WNNM; (f) GNLM; (g) Fast Adaptive Non-Local SAR (FANS); (h) Proposed method.
Figure 10. The corresponding ratio images for S1. (a) W-C; (b) PPB; (c) SAR-BM3D; (d) H-WNNM; (e) GNLM; (f) FANS; (g) Proposed method.
Figure 11. The results of all filters on S3 (L = 1). The red box highlights the zoomed region of interest. (a) Clean image; (b) W-C; (c) PPB; (d) SAR-BM3D; (e) H-WNNM; (f) GNLM; (g) FANS; (h) Proposed method.
Figure 12. The corresponding ratio images for S3. (a) W-C; (b) PPB; (c) SAR-BM3D; (d) H-WNNM; (e) GNLM; (f) FANS; (g) Proposed method.
Figure 13. The results of all filters on RS1. The red box highlights the zoomed region of interest. (a) Noisy image; (b) W-C; (c) PPB; (d) SAR-BM3D; (e) H-WNNM; (f) GNLM; (g) FANS; (h) Proposed method.
Figure 14. The corresponding ratio images for RS1. (a) W-C; (b) PPB; (c) SAR-BM3D; (d) H-WNNM; (e) GNLM; (f) FANS; (g) Proposed method.
Figure 15. The results of all filters on RS2. The red box highlights the zoomed region of interest. (a) Noisy image; (b) W-C; (c) PPB; (d) SAR-BM3D; (e) H-WNNM; (f) GNLM; (g) FANS; (h) Proposed method.
Figure 16. The corresponding ratio images for RS2. (a) W-C; (b) PPB; (c) SAR-BM3D; (d) H-WNNM; (e) GNLM; (f) FANS; (g) Proposed method.
Figure 17. The results of all filters on RS3. The red box highlights the zoomed region of interest. (a) Noisy image; (b) W-C; (c) PPB; (d) SAR-BM3D; (e) H-WNNM; (f) GNLM; (g) FANS; (h) Proposed method.
Figure 18. The corresponding ratio images for RS3. (a) W-C; (b) PPB; (c) SAR-BM3D; (d) H-WNNM; (e) GNLM; (f) FANS; (g) Proposed method.
Table 1. FSIM, PSNR, and ENL of the selected despeckling methods.

| Method | FSIM | PSNR | ENL |
|---|---|---|---|
| Single-Look | 0.7243 | 17.63 | 1.28 |
| PPB | 0.8573 | 25.29 | 161.69 |
| SAR-BM3D | 0.8626 | 24.86 | 12.38 |
| H-WNNM | 0.7450 | 21.78 | 136.24 |
| GNLM | 0.8502 | 26.66 | 358.58 |
Table 2. The FSIM, PSNR, and ENL of the candidate combinations (FSIM, PSNR, and ENL1 are computed on S2; ENL2 is computed on RS1).

| Method | FSIM | PSNR | ENL1 | ENL2 |
|---|---|---|---|---|
| Single-Look | 0.7076 | 14.76 | 1.53 | 3.57 |
| Fusion #1 | 0.8356 | 23.86 | 384.89 | 263.20 |
| Fusion #2 | 0.8109 | 22.63 | 92.44 | 170.91 |
| Fusion #3 | 0.7758 | 21.12 | 138.64 | 227.08 |
Table 3. The FSIM, PSNR, and ENL of all filters on the synthesized multiplicative noise images (L = 1, 2, 4, 8).

| Image | Method | FSIM (L=1) | PSNR (L=1) | ENL (L=1) | FSIM (L=2) | PSNR (L=2) | ENL (L=2) | FSIM (L=4) | PSNR (L=4) | ENL (L=4) | FSIM (L=8) | PSNR (L=8) | ENL (L=8) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| S1 | Noisy image | 0.7243 | 17.63 | 1.28 | 0.7940 | 20.78 | 2.11 | 0.8534 | 23.02 | 3.38 | 0.9005 | 25.92 | 7.87 |
| S1 | W-C | 0.7292 | 21.06 | 14.62 | 0.7646 | 23.12 | 19.08 | 0.8000 | 24.71 | 26.87 | 0.8332 | 26.36 | 42.74 |
| S1 | PPB | 0.8573 | 25.29 | 161.69 | 0.8939 | 26.89 | 175.64 | 0.9195 | 28.58 | 243.17 | 0.9392 | 30.37 | 399.28 |
| S1 | SAR-BM3D | 0.8626 | 24.86 | 12.38 | 0.8965 | 26.81 | 18.87 | 0.9243 | 28.89 | 20.87 | 0.9477 | 31.19 | 43.47 |
| S1 | H-WNNM | 0.7450 | 21.78 | 136.24 | 0.8032 | 24.42 | 154.09 | 0.8622 | 27.06 | 299.54 | 0.8630 | 27.63 | 417.73 |
| S1 | GNLM | 0.8502 | 26.66 | 358.58 | 0.8809 | 27.96 | 403.74 | 0.9122 | 29.22 | 641.59 | 0.9364 | 30.49 | 765.68 |
| S1 | FANS | 0.8762 | 26.61 | 73.95 | 0.9081 | 28.57 | 79.96 | 0.9318 | 30.42 | 154.11 | 0.9506 | 32.42 | 197.41 |
| S1 | Proposed | 0.8887 | 27.40 | 87.77 | 0.9124 | 28.80 | 119.87 | 0.9308 | 30.01 | 142.13 | 0.9470 | 31.33 | 247.33 |
| S2 | Noisy image | 0.7076 | 14.76 | 1.53 | 0.7710 | 17.32 | 2.39 | 0.8311 | 20.15 | 3.98 | 0.8811 | 23.05 | 7.40 |
| S2 | W-C | 0.7541 | 18.95 | 3.82 | 0.7865 | 20.94 | 22.25 | 0.8197 | 22.61 | 19.82 | 0.8522 | 24.08 | 27.80 |
| S2 | PPB | 0.7342 | 21.77 | 192.73 | 0.8204 | 23.43 | 353.06 | 0.8812 | 24.92 | 456.78 | 0.9115 | 26.27 | 885.19 |
| S2 | SAR-BM3D | 0.8186 | 21.63 | 7.83 | 0.8596 | 23.36 | 16.17 | 0.8979 | 25.57 | 23.24 | 0.9268 | 27.57 | 38.33 |
| S2 | H-WNNM | 0.6903 | 17.80 | 410.32 | 0.7721 | 21.43 | 1522.13 | 0.8317 | 24.49 | 3853.19 | 0.8374 | 26.35 | 5100.64 |
| S2 | GNLM | 0.7793 | 23.32 | 4641.76 | 0.8142 | 24.05 | 7809.26 | 0.8706 | 25.36 | 4167.69 | 0.9117 | 26.50 | 7353.35 |
| S2 | FANS | 0.7861 | 22.85 | 67.77 | 0.8615 | 24.70 | 389.37 | 0.9037 | 26.62 | 706.51 | 0.9295 | 28.42 | 479.94 |
| S2 | Proposed | 0.8356 | 23.86 | 384.89 | 0.8681 | 24.95 | 802.32 | 0.9017 | 26.28 | 693.17 | 0.9265 | 27.50 | 1122.42 |
| S3 | Noisy image | 0.6510 | 19.47 | 0.96 | 0.7385 | 22.20 | 1.80 | 0.8174 | 25.08 | 3.73 | 0.8799 | 28.01 | 6.77 |
| S3 | W-C | 0.7778 | 25.02 | 34.89 | 0.7923 | 26.63 | 41.51 | 0.8141 | 27.75 | 62.34 | 0.8392 | 28.88 | 88.23 |
| S3 | PPB | 0.8102 | 28.34 | 131.32 | 0.8524 | 29.86 | 188.35 | 0.8903 | 31.30 | 226.18 | 0.9203 | 32.59 | 241.50 |
| S3 | SAR-BM3D | 0.8405 | 27.41 | 10.12 | 0.8768 | 29.16 | 12.57 | 0.9116 | 31.27 | 18.24 | 0.9359 | 33.31 | 28.60 |
| S3 | H-WNNM | 0.7081 | 24.59 | 1467.09 | 0.7545 | 27.34 | 2285.85 | 0.8098 | 29.69 | 2540.52 | 0.8250 | 30.57 | 2052.38 |
| S3 | GNLM | 0.8340 | 30.14 | 190.29 | 0.8474 | 30.92 | 297.08 | 0.8757 | 31.94 | 264.05 | 0.9118 | 33.27 | 233.44 |
| S3 | FANS | 0.8308 | 29.19 | 121.25 | 0.8723 | 31.03 | 271.44 | 0.9046 | 32.69 | 185.47 | 0.9328 | 34.42 | 130.82 |
| S3 | Proposed | 0.8649 | 30.47 | 183.72 | 0.8818 | 31.53 | 196.25 | 0.9013 | 32.59 | 191.62 | 0.9261 | 33.81 | 139.86 |
Table 4. The ENL and run times (in seconds) of all filters on real-world SAR images.

| Method | RS1 | RS2 | RS3 | Time (s) |
|---|---|---|---|---|
| Noisy image | 3.15 | 13.27 | 26.60 | / |
| W-C | 15.66 | 42.19 | 195.06 | 18 |
| PPB | 434.64 | 459.31 | 1137.34 | 23 |
| SAR-BM3D | 55.42 | 24.48 | 59.56 | 32 |
| H-WNNM | 5352.23 | 643.13 | 2411.27 | 138 |
| GNLM | 338.71 | 1417.92 | 5127.57 | 89 |
| FANS | 166.96 | 62.79 | 215.82 | 16 |
| Proposed | 263.20 | 589.10 | 2435.91 | 125 |