Article

Learning a Dilated Residual Network for SAR Image Despeckling

Qiang Zhang, Qiangqiang Yuan, Jie Li, Zhen Yang and Xiaoshuang Ma
1 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
2 International School of Software, Wuhan University, Wuhan 430079, China
3 School of Resource and Environmental Science, Wuhan University, Wuhan 430079, China
4 School of Resources and Environmental Engineering, Anhui University, Hefei 230000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(2), 196; https://doi.org/10.3390/rs10020196
Submission received: 13 November 2017 / Revised: 17 January 2018 / Accepted: 24 January 2018 / Published: 29 January 2018
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

Abstract

In this paper, to break the limit of the traditional linear models for synthetic aperture radar (SAR) image despeckling, we propose a novel deep learning approach by learning a non-linear end-to-end mapping between the noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is based on dilated convolutions, which can both enlarge the receptive field and maintain the filter size and layer depth with a lightweight structure. In addition, skip connections and a residual learning strategy are added to the despeckling model to maintain the image details and reduce the vanishing gradient problem. Compared with the traditional despeckling methods, the proposed method shows a superior performance over the state-of-the-art methods in both quantitative and visual assessments, especially for strong speckle noise.

Graphical Abstract

1. Introduction

Synthetic aperture radar (SAR) is a coherent imaging sensor that can acquire massive amounts of high-quality surface data over wide areas. Moreover, with the ability to operate at night and in adverse weather conditions such as thin cloud and haze, SAR has gradually become a significant source of remote sensing data in the fields of geographic mapping, resource surveying, and military reconnaissance. However, SAR images are inherently affected by multiplicative noise, i.e., speckle noise, which is caused by the coherent nature of the scattering phenomena [1]. The presence of speckle severely degrades the quality of SAR images and greatly reduces their utility in SAR image interpretation, retrieval, and other applications [2,3,4]. Consequently, SAR image speckle reduction is an essential preprocessing step and has become an active research topic.
To remove the speckle noise in SAR images, researchers first proposed spatial linear filters such as the Lee filter [5], Kuan filter [6], and Frost filter [7]. These methods usually assume that the filtered value has a linear relationship with the original image, and they search for a suitable combination of the central pixel intensity in a moving window and the mean intensity of the filter window. The spatial linear filters thus achieve a trade-off between averaging in homogeneous areas and an all-pass identity filter in areas containing edges. The results have confirmed that spatial-domain filters are adept at suppressing speckle noise for some critical features. However, due to the local nature of the processing, the spatial linear filter methods often fail to fully preserve edges and details, and they exhibit the following deficiencies: (1) they are unable to preserve the mean value, especially when the equivalent number of looks (ENL) of the original SAR image is small; (2) strongly reflective targets such as point targets and small surface features are easily blurred or erased; and (3) speckle noise in dark scenes is not removed [8].
Beyond the spatial-domain filters described above, wavelet theory has also been applied to speckle reduction. Starck et al. [9] employed the ridgelet transform as a component step and implemented curvelet sub-bands using a filter bank of discrete wavelet transform (DWT) filters for image denoising. For the case of speckle noise, Solbo et al. [10] utilized the DWT of the log-transformed speckled image in homomorphic filtering, which is empirically convergent in a self-adaptive strategy and is calculated in the Fourier space. In summary, the major weaknesses of this type of approach are the imperfect preservation of the backscatter mean in homogeneous areas, the loss of details, and the introduction of artifacts into the results, such as ring effects [11].
Aimed at overcoming these deficiencies, the nonlocal means (NLM) algorithm [12,13,14] provided a breakthrough in detail preservation for SAR image despeckling. The basic idea of the NLM-based methods [12] is that natural images are self-similar, with similar patches repeating over and over throughout the whole image. For SAR images, Deledalle et al. [13] modified the choice of weights, which can be iteratively determined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. In addition, Parrilli et al. [14] used the local linear minimum mean square error (LLMMSE) criterion and the undecimated wavelet transform, considering the peculiarities of SAR images, allowing for a sparse Wiener filtering representation and an effective separation between the original signal and the speckle noise through predefined thresholding; this has become one of the most effective SAR despeckling methods. However, the low computational efficiency of the similar-patch searching restricts its application.
In addition, variational methods [15,16,17,18] have gradually been utilized for SAR image despeckling because of their stability and flexibility; they break through the traditional idea of filtering by solving an energy optimization problem. The despeckling task is cast as the inverse problem of recovering the original noise-free image based upon reasonable assumptions or prior knowledge of the noise observation model with a log-transform, such as the total variation (TV) model [15], sparse representation [16], and so on. Although these variational methods achieve a good reduction of speckle noise, the result is usually dependent on the choice of model parameters and prior information, and is often time-consuming. In addition, the variational methods cannot accurately describe the distribution of speckle noise, which also constrains the speckle noise reduction performance.
In general, although many SAR despeckling methods have been proposed, they sometimes fail to preserve sharp features in regions of complicated texture, or even create block artifacts in the filtered image. In this paper, considering that image speckle noise can be expressed more accurately through non-linear models than linear models, and to overcome the above-mentioned limitations of the linear models, we propose a novel deep neural network-based approach for SAR image despeckling, learning a non-linear end-to-end mapping between the speckled and clean SAR images with a dilated residual network (SAR-DRN). Our despeckling model employs dilated convolutions, which can both enlarge the receptive field and maintain the filter size and layer depth with a lightweight structure. Furthermore, skip connections are added to the despeckling model to maintain the image details and avoid the vanishing gradient problem. Compared with the traditional despeckling methods in both simulated and real SAR experiments, the proposed approach shows a state-of-the-art performance in both quantitative and visual assessments, especially for strong speckle noise.
The rest of this paper is organized as follows. The SAR image speckle noise degradation model and the related deep convolutional neural network methods are introduced in Section 2. The network architecture of the proposed SAR-DRN and details of its structure are described in Section 3. The results of the despeckling assessment in both simulated and real SAR image experiments are presented in Section 4. Finally, the conclusions and future research directions are summarized in Section 5.

2. Related Work

2.1. SAR Image Speckle Noise Degradation Model

For SAR images, the main reason for the degradation of the image quality is multiplicative speckle noise. Differing from the additive white Gaussian noise (AWGN) found in natural or hyperspectral images [19,20], speckle noise is described by the multiplicative noise model:
$y = x \cdot n$ (1)
where $y$ is the speckled image, $x$ is the clean image, and $n$ represents the speckle noise. It is well known that, for SAR amplitude images, the speckle follows a Gamma distribution [21]:
$\rho_n(n) = \dfrac{L^L \, n^{L-1} \exp(-nL)}{\Gamma(L)}$ (2)
where $L \geq 1$, $n \geq 0$, $\Gamma(\cdot)$ is the Gamma function, and $L$ is the equivalent number of looks (ENL), as defined in Equation (3), which is usually regarded as the quantitative evaluation index for real SAR image despeckling experiments in homogeneous areas.
$\mathrm{ENL} = \dfrac{\bar{x}^2}{\mathrm{var}}$ (3)
where $\bar{x}$ and $\mathrm{var}$ represent the image mean and variance, respectively.
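To make the degradation model concrete, the following minimal NumPy sketch (our own illustration, not code from the paper) draws unit-mean Gamma speckle for a given number of looks, applies it multiplicatively as in Equation (1), and estimates the ENL of a homogeneous patch via Equation (3). The function names `add_speckle` and `enl` are our own.

```python
import numpy as np

def add_speckle(x, looks, seed=None):
    """Multiply a clean image x by unit-mean Gamma speckle, Equations (1)-(2)."""
    rng = np.random.default_rng(seed)
    n = rng.gamma(shape=looks, scale=1.0 / looks, size=x.shape)  # E[n] = 1, var = 1/L
    return x * n

def enl(patch):
    """Equivalent number of looks of a homogeneous patch, Equation (3)."""
    return patch.mean() ** 2 / patch.var()

# Example: a constant (homogeneous) region speckled with L = 4 looks
clean = np.full((200, 200), 100.0)
speckled = add_speckle(clean, looks=4, seed=0)
print(round(enl(speckled), 2))  # close to 4 for four-look speckle
```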
Therefore, for this non-linear multiplicative noise, choosing a non-linear expression for speckle reduction is an important strategy. In the following, we briefly introduce the use of convolutional neural networks (CNNs) for SAR image despeckling, considering both the low-level features as the bottom level and the output feature representation from the top level of the network.

2.2. CNNs for SAR Image Despeckling

With the recent advances made by deep learning in computer vision and image processing, it has gradually become an efficient tool that has been successfully applied to many tasks such as image classification, segmentation, object recognition, and scene classification [22,23,24]. CNNs can extract the internal and underlying features of images and avoid complex a priori constraints. The features are organized in the $j$-th feature map $O_j^{(l)}$ ($j = 1, 2, \dots, M^{(l)}$) of the $l$-th layer, within which each unit is connected to local patches of the previous layer's feature maps $O_i^{(l-1)}$ ($i = 1, 2, \dots, M^{(l-1)}$) through a set of weight parameters $W_j^{(l)}$ and bias parameters $b_j^{(l)}$. The output feature map is:
$L_j^{(l)}(m, n) = F\big(O_j^{(l)}(m, n)\big)$ (4)
and
$O_j^{(l)}(m, n) = \sum_{i=1}^{M^{(l-1)}} \sum_{u,v=0}^{S-1} W_{ji}^{(l)}(u, v) \, L_i^{(l-1)}(m-u, n-v) + b_j^{(l)}$ (5)
where $F(\cdot)$ is the non-linear activation function, and $O_j^{(l)}(m, n)$ represents the convolutional weighted sum of the previous layer's results for the $j$-th output feature map at pixel $(m, n)$. The specific parameters of a convolution layer are the number of output feature maps $M^{(l)}$ and the filter kernel size $S \times S$. The network parameters $W$ and $b$ are updated through the back-propagation (BP) algorithm and the chain rule of derivation [25].
To ensure that the output of the CNN is a non-linear combination of the input (since the relationship between the input data and the output labels is usually a highly non-linear mapping), a non-linear function is introduced as the activation, such as the rectified linear unit (ReLU), which is defined as:
$F(O_j^{(l)}) = \max(0, O_j^{(l)})$ (6)
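As an illustration of Equations (4)-(6), the following NumPy sketch (our own, not the paper's implementation) computes the output of one convolution unit followed by the ReLU activation, for a single input feature map and a single 3 × 3 kernel, using the cross-correlation form common in deep learning frameworks (kernel flipping omitted).

```python
import numpy as np

def conv2d_single(L_prev, W, b):
    """Valid 2-D filtering of one feature map with one S x S kernel plus a bias, cf. Equation (5)."""
    S = W.shape[0]
    H, Wd = L_prev.shape
    out = np.zeros((H - S + 1, Wd - S + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(W * L_prev[m:m + S, n:n + S]) + b
    return out

def relu(O):
    """Rectified linear unit, Equation (6)."""
    return np.maximum(0.0, O)

rng = np.random.default_rng(0)
L_prev = rng.standard_normal((8, 8))   # previous layer's feature map L_i^(l-1)
W = rng.standard_normal((3, 3))        # 3 x 3 kernel W_ji^(l)
O = conv2d_single(L_prev, W, b=0.1)    # weighted sum O_j^(l), Equation (5)
L_out = relu(O)                        # output feature map L_j^(l), Equation (4)
```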
After each forward propagation pass, the BP algorithm updates the trainable parameters of the network, so as to better learn the relationship between the label data and the reconstructed data. From the top layer of the network to the bottom, BP updates the trainable parameters of the $l$-th layer using the outputs of the $(l+1)$-th layer. The partial derivatives of the loss function with respect to the convolution kernels $W_{ji}^{(l)}$ and biases $b_j^{(l)}$ of the $l$-th convolution layer are calculated as follows:
$\dfrac{\partial L}{\partial W_{ji}^{(l)}(u, v)} = \sum_{m,n} \delta_j^{(l)}(m, n) \, L_i^{(l-1)}(m-u, n-v)$ (7)
$\dfrac{\partial L}{\partial b_j^{(l)}} = \sum_{m,n} \delta_j^{(l)}(m, n)$ (8)
where the error map $\delta^{(l)}$ is defined as
$\delta_i^{(l)}(m, n) = \sum_{j} \sum_{u,v=0}^{S-1} W_{ji}^{(l+1)}(u, v) \, \delta_j^{(l+1)}(m+u, n+v)$ (9)
The iterative rule for updating the network parameters $W_{ji}^{(l)}$ and $b_j^{(l)}$ follows the gradient descent strategy:
$W_{ji}^{(l)} = W_{ji}^{(l)} - \alpha \dfrac{\partial L}{\partial W_{ji}^{(l)}}$ (10)
$b_j^{(l)} = b_j^{(l)} - \alpha \dfrac{\partial L}{\partial b_j^{(l)}}$ (11)
where $\alpha$ is a preset hyperparameter for the whole network, also known as the learning rate in deep learning frameworks, which controls the step size of the trainable parameter updates.
For Gaussian noise reduction in natural images, a recent method named the feed-forward denoising convolutional neural network (DnCNN) [26] has shown excellent performance compared with the traditional methods. DnCNN employs a 20-layer convolutional structure, a residual learning strategy that removes the latent original image in the hidden layers, and batch normalization [27] to regularize the output of each layer; it can deal with several general image restoration tasks such as blind or non-blind Gaussian denoising, single-image super-resolution, and JPEG image deblocking.
Recently, borrowing the idea of the DnCNN model, Chierchia et al. [28] also employed a set of convolutional layers, named SAR-CNN, along with batch normalization (BN), the ReLU activation function, and a component-wise division residual layer to estimate the speckled image. As an alternative way of dealing with the multiplicative noise of SAR images, SAR-CNN uses the homomorphic approach with coupled logarithm and exponential transforms, in combination with a similarity measure for the speckle noise distribution. In addition, Wang et al. [29] used a structure similar to DnCNN, with eight Conv-BN-ReLU blocks, and replaced the residual mean square error (MSE) with a combination of the Euclidean loss and the total variation loss, incorporated into the total loss function to produce smoother results.

3. Proposed Method

In this paper, rather than using a log-transform [28] or modifying the training loss function as in [29], we propose a novel network for SAR image despeckling with a dilated residual network (SAR-DRN), which is trained in an end-to-end fashion using a combination of dilated convolutions and skip connections within a residual learning structure. Instead of relying on a pre-determined image, a priori knowledge, or a noise description model, the main advantage of using the deep neural network strategy for SAR image despeckling is that the model can directly acquire and update the network parameters from the training data and the corresponding labels; critical parameters therefore do not need to be adjusted manually, and the complex internal non-linear relations can be learned automatically with trainable network parameters from the massive simulated training data.
The proposed holistic neural network model (SAR-DRN) for SAR image despeckling contains seven dilated convolution layers and two skip connections, as illustrated in Figure 1. In addition, the proposed model uses a residual learning strategy to predict the residual (speckle) image, which adequately utilizes the non-linear expression ability of deep learning. The details of the algorithm are described in the following.

3.1. Dilated Convolutions

In image restoration problems such as single-image super-resolution (SISR) [30], denoising [31], and deblurring [32], contextual information can effectively facilitate the recovery of degraded regions. In deep convolutional networks, the contextual information is mainly augmented by enlarging the receptive field. Generally, there are two ways to achieve this: (1) increasing the network depth; and (2) enlarging the filter size. Nevertheless, as the network depth increases, the accuracy becomes "saturated" and then degrades rapidly. Enlarging the filter size leads to more convolution parameters, which greatly increases the computational burden and the training time.
To solve this problem effectively, dilated convolutions were first proposed in [33], which can both enlarge the receptive field and maintain the filter size. Let $C$ be a discrete two-dimensional input such as an image, and let $k$ be a discrete convolution filter of size $(2r+1) \times (2r+1)$. The original discrete convolution operator $*$ can then be written as
$(C * k)(p) = \sum_{i+j=p} C(i) \, k(j)$ (12)
Having defined the convolution operator $*$, let $d$ be a dilation factor and define the operator $*_d$ as
$(C *_d k)(p) = \sum_{i+dj=p} C(i) \, k(j)$ (13)
where $*_d$ is referred to as the dilated convolution, or the $d$-dilated convolution. In particular, the common discrete convolution can be regarded as the 1-dilated convolution. Taking a convolutional kernel of size 3 × 3 as an example, let $k_l$ be the discrete 3 × 3 convolution filters, and consider applying the filters with exponentially increasing dilation:
$R_{l+1} = R_l *_{\phi} k_l$ (14)
where $l = 0, 1, \dots, n-2$, $\phi = 2^l$, and $R_l$ represents the size of the receptive field. The receptive field of common convolutions grows linearly with the layer depth, with size $R_l^c = (2l+1) \times (2l+1)$. By contrast, the receptive field of dilated convolutions grows exponentially with the layer depth, with size $R_l^d = (2^{l+1}-1) \times (2^{l+1}-1)$. For instance, when $l = 4$, $R_l^c = 9 \times 9$, while $R_l^d = 31 \times 31$ at the same layer depth. Figure 2 illustrates the dilated convolution receptive field sizes: (a) corresponds to the one-dilated convolution, which is equivalent to the common convolution operation; (b) corresponds to the two-dilated convolution; and (c) corresponds to the four-dilated convolution.
In the proposed SAR-DRN model, considering the trade-off between feature extraction ability and training time, the dilation factors of the 3 × 3 dilated convolutions from layer 1 to layer 7 are empirically set to 1, 2, 3, 4, 3, 2, and 1, respectively. Compared with other deep neural networks, we propose a lightweight model with only seven dilated convolution layers, as shown in Figure 3.
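A quick check of the receptive field obtained by this dilation schedule (our own calculation, not from the paper): with stride 1, each 3 × 3 layer with dilation d adds 2d pixels to the receptive field on each axis, which reproduces the 9 × 9 versus 31 × 31 comparison above and gives a 33 × 33 receptive field for the 1, 2, 3, 4, 3, 2, 1 schedule.

```python
def receptive_field(dilations, kernel_size=3):
    """Receptive field of stacked dilated convolutions (stride 1).
    Each layer adds dilation * (kernel_size - 1) pixels per axis."""
    r = 1
    for d in dilations:
        r += d * (kernel_size - 1)
    return r

print(receptive_field([1, 1, 1, 1]))           # 9  -> 9 x 9 for four common layers
print(receptive_field([1, 2, 4, 8]))           # 31 -> 31 x 31 for the exponential schedule of Eq. (14)
print(receptive_field([1, 2, 3, 4, 3, 2, 1]))  # 33 -> 33 x 33 for SAR-DRN (Table 1)
```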

3.2. Skip Connections

Although increasing the network depth can help to obtain more expressive data features, it often results in the vanishing gradient problem, which makes the training of the model much harder. To solve this problem, a structure called the skip connection [34] has been introduced into DCNNs to obtain better training results. A skip connection passes the feature information of an earlier layer to a later layer, maintaining the image details and avoiding or reducing the vanishing gradient problem. For the $l$-th layer, let $L^{(l)}$ be the input data, and let $f(L^{(l)}, \{W, b\})$ be its feed-forward propagation with trainable parameters. The output of the $(l+k)$-th layer with a $k$-interval skip connection is recursively defined as follows:
$L^{(l+k)} = f\big(L^{(l)}, \{W, b\}_{l+1}^{l+k}\big) + L^{(l)}$ (15)
For clarity, in the proposed SAR-DRN model, two skip connections are employed, connecting layer 1 to layer 3 (as shown in Figure 4a) and layer 4 to layer 7 (as shown in Figure 4b); their effect is compared with that of the model without skip connections in the discussion section.
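Equation (15) can be written compactly in code. The following PyTorch sketch is our own illustration (the original model was implemented in Caffe, and the exact wiring of the SAR-DRN skip connections is only indicated schematically in Figure 4): it wraps a small stack of dilated convolution layers with an identity shortcut, as used between layers 1-3 and layers 4-7.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Generic k-interval skip connection, Equation (15):
    L^(l+k) = f(L^(l); {W, b}) + L^(l)."""
    def __init__(self, features=64, dilations=(2, 3)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [nn.Conv2d(features, features, kernel_size=3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        self.f = nn.Sequential(*layers)   # the stacked layers f(.)

    def forward(self, x):
        return self.f(x) + x              # identity shortcut around the stacked layers

x = torch.randn(1, 64, 40, 40)            # a 64-feature-map activation on a 40 x 40 patch
print(SkipBlock()(x).shape)               # torch.Size([1, 64, 40, 40])
```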

3.3. Residual Learning

Compared with traditional data mapping, He et al. [35] found that residual mapping can acquire a more effective learning effect and rapidly reduce the training loss after passing through a multi-layer network, which has achieved a state-of-the-art performance in object detection [36], image super-resolution [37], and so on. Essentially, Szegedy et al. [38] demonstrated that residual networks take full advantage of identity shortcut connections, which can efficiently transfer various levels of feature information between not directly connected layers without attenuation. In the proposed SAR-DRN model, the residual image φ is defined as follows:
$\varphi_i = y_i - x_i$ (16)
As the layer depth increases, the degradation phenomenon shows that common deep networks may have difficulty in approximating identity mappings with stacked non-linear layers such as Conv-BN-ReLU blocks. By contrast, most pixel values in the residual image $\varphi$ are very close to zero, and the spatial distribution of the residual feature maps should be very sparse, which moves the gradient descent process onto a much smoother hyper-surface of the loss with respect to the filtering parameters. Searching for a near-optimal allocation of the network parameters thus becomes much quicker and easier, allowing us to add more trainable layers to the network and improve its performance. The learning procedure with a residual unit can more easily approximate the original multiplicative speckle noise through deeper, intrinsically non-linear feature extraction and expression, which can better weaken the range difference between optical images and SAR images.
Specifically, for the proposed SAR-DRN, we choose a collection of $N$ training image pairs $\{x_i, y_i\}_{i=1}^{N}$ from the training datasets described in Section 4.1 below, where $y_i$ is the speckled image, $x_i$ is the corresponding clean image, and $\theta$ denotes the network parameters. Our model uses the mean squared error (MSE) as the loss function:
$\mathrm{loss}(\theta) = \dfrac{1}{2N} \sum_{i=1}^{N} \big\| \phi(y_i, \theta) - \varphi_i \big\|_2^2$ (17)
In summary, with the dilated convolutions, skip connections, and residual learning structure, the flowchart of learning a deep network for the SAR image despeckling process is described in Figure 5. To learn the complicated non-linear relation between the speckled image $y$ and the original image $x$, the proposed SAR-DRN model is trained until the loss between the residual image $\varphi$ and the network output $\phi(y, \theta)$ converges; the trained model is then applied to real speckled SAR images, as illustrated in Figure 5.
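The residual learning strategy and the loss of Equation (17) can be summarized by the following training-step sketch (a schematic in PyTorch, written by us under the assumption of a `model` that maps a speckled batch to a predicted residual; the actual training used Caffe with the settings given in Section 4.1.2).

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, y, x):
    """One optimization step of residual learning: the network predicts phi = y - x."""
    residual_target = y - x                            # Equation (16)
    pred = model(y)                                    # network output phi(y, theta)
    loss = 0.5 * F.mse_loss(pred, residual_target)     # Equation (17), averaged over the batch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def despeckle(model, y):
    """At test time, the clean image is recovered as y minus the predicted residual."""
    with torch.no_grad():
        return y - model(y)
```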

4. Experimental Results and Analysis

4.1. Implementation Details

4.1.1. Training and Test Datasets

Considering that it is quite hard to obtain clean reference SAR images that are completely free of speckle, we used the UC Merced land-use dataset [39], which contains 21 scene classes with 100 images per class, as our training dataset, with different numbers of looks simulated for SAR image despeckling. Because optical images and SAR images are statistically different, the amplitude information of the optical images was processed before training for single-polarization SAR data despeckling, to better accord with the data distribution of SAR images. To train the proposed SAR-DRN, we chose 400 images of size 256 × 256 from this dataset and set the patch size to 40 × 40 with a stride of 10. In this way, 193,664 patches were cropped for training SAR-DRN, with a batch size of 128 for parallel computing. The number of looks L was set to 1, 2, 4, and 8 to add multiplicative speckle noise at different levels.
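The patch preparation described above can be sketched as follows (our own NumPy illustration of the stated settings: 40 × 40 patches with a stride of 10, to be paired with speckled copies generated as in Section 2.1).

```python
import numpy as np

def extract_patches(img, patch=40, stride=10):
    """Crop overlapping training patches from one image (Section 4.1.1 settings)."""
    H, W = img.shape
    patches = [img[i:i + patch, j:j + patch]
               for i in range(0, H - patch + 1, stride)
               for j in range(0, W - patch + 1, stride)]
    return np.stack(patches)

# One 256 x 256 training image yields 22 x 22 = 484 patches,
# so 400 images give roughly the 193,664 patches reported above.
img = np.random.rand(256, 256)
print(extract_patches(img).shape)  # (484, 40, 40)
```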
To test the performance of the proposed model, three images from the Airplane, Building, and Highway classes were set up as simulated test images. For the real SAR image despeckling experiments, we used the classic Flevoland SAR image (cropped to 500 × 600), the Deathvalley SAR image (cropped to 600 × 600), and the San Francisco SAR image (cropped to 400 × 400), which are commonly used for real SAR image despeckling.

4.1.2. Parameter Setting and Network Training

Table 1 lists the network parameters of each layer of SAR-DRN. The proposed model was trained using the Adam algorithm [40] as the gradient descent optimization method, with momentum $\beta_1$ = 0.9, momentum $\beta_2$ = 0.999, and $\varepsilon = 10^{-8}$, where the learning rate $\alpha$ was initialized to 0.01 for the whole network. The optimization procedure is given below:
$m_t = \beta_1 m_{t-1} + (1 - \beta_1) \dfrac{\partial L}{\partial \theta_t}$ (18)
$n_t = \beta_2 n_{t-1} + (1 - \beta_2) \Big(\dfrac{\partial L}{\partial \theta_t}\Big)^2$ (19)
$\Delta \theta_t = -\alpha \dfrac{m_t}{\sqrt{n_t} + \varepsilon}$ (20)
where $\theta_t$ denotes the trainable parameters of the network at the $t$-th iteration. The training of SAR-DRN took 50 epochs (about 1500 iterations per epoch), and after every 10 epochs, the learning rate was reduced by multiplying it by a descending factor gamma = 0.5. We used the Caffe framework [41] to train the proposed SAR-DRN in a Windows 7 environment with 16 GB of RAM and an Nvidia Titan-X (Pascal) GPU. The total training time was about 4 h 30 min, which is less than the approximately 9 h 45 min required by SAR-CNN [28] in the same computational environment.
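For reference, the simplified Adam update of Equations (18)-(20) can be written as the following NumPy sketch (our own; the bias-correction terms of the full algorithm in [40] are omitted here, as they are in the equations above).

```python
import numpy as np

def adam_step(theta, grad, m, n, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with the hyperparameters used for SAR-DRN."""
    m = beta1 * m + (1 - beta1) * grad              # Equation (18): first-moment estimate
    n = beta2 * n + (1 - beta2) * grad ** 2         # Equation (19): second-moment estimate
    theta = theta - alpha * m / (np.sqrt(n) + eps)  # Equation (20): parameter update
    return theta, m, n
```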

4.1.3. Compared Algorithms and Quantitative Evaluations

To verify the proposed method, we compared SAR-DRN with four mainstream despeckling methods: the probabilistic patch-based (PPB) filter [13] based on patch matching, SAR-BM3D [14] based on 3-D patch matching and wavelets, SAR-POTDF [16] based on sparse representation, and SAR-CNN [28] based on a deep neural network. In the simulated-image experiments, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) were employed as the quantitative evaluation indexes. In the real-image experiments, the ENL, as defined in Equation (3), was used to measure the smoothness of a homogeneous region after SAR image despeckling (the ENL is commonly regarded as the quantitative evaluation index for real SAR image despeckling experiments); a larger value indicates a smoother homogeneous region.
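For completeness, the PSNR used in the simulated experiments can be computed as in the following sketch (our own, assuming 8-bit images with a peak value of 255); the ENL follows Equation (3) as in the earlier sketch, and for the SSIM a library implementation such as scikit-image's `structural_similarity` can be used.

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a clean reference and a despeckled result."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```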

4.2. Simulated-Data Experiments

To verify the effectiveness of the proposed SAR-DRN model for SAR image despeckling, four speckle noise levels of L = 1, 2, 4, and 8 looks were set up for the three simulated images, and PPB, SAR-BM3D, SAR-POTDF, SAR-CNN, and the proposed method were applied. The PSNR and SSIM evaluation indexes and their standard deviations over 10 simulated experiments with the three images are listed in Table 2, Table 3 and Table 4, respectively, where the best performance is marked in bold.
As shown in Table 2, Table 3 and Table 4, the proposed SAR-DRN model obtains all the best PSNR results and nine of the twelve best SSIM results in the four noise levels. When L = 1, the proposed method outperforms SAR-BM3D by about 0.9 dB/0.6 dB/0.6 dB for Airplane, Building, and Highway images, respectively. When L = 2 and 4, SAR-DRN outperforms PPB, SAR-POTDF, SAR-BM3D, and SAR-CNN by at least 0.5 dB/0.7 dB/0.3 dB and 0.4 dB/0.3 dB/0.2 dB for Airplane/Building/Highway, respectively. Compared with the traditional despeckling methods above, the proposed method shows a superior performance over the state-of-the-art methods in both quantitative and visual assessments, especially for strong speckle noise.
Figure 6, Figure 7 and Figure 8 show the filtered images for the Airplane, Building, and Highway images contaminated by two-look, four-look, and four-look speckle, respectively. It can be clearly seen that PPB has a good speckle-reduction ability, but it simultaneously creates many texture distortions, especially around the edges of the airplane, building, and highway. SAR-BM3D and SAR-POTDF perform better than PPB for the Airplane, Building, and Highway images, especially for strong speckle noise such as L = 1, 2, or 4, revealing an excellent speckle-reduction ability and local detail preservation ability; they also generate fewer texture distortions, as shown in Figure 6, Figure 7 and Figure 8. However, SAR-BM3D and SAR-POTDF also cause over-smoothing, to some degree, as they mainly concentrate on complex geometric features. SAR-CNN also shows a good speckle-reduction ability and local detail preservation ability, but it introduces some radiometric distortions in homogeneous regions. Compared with the other algorithms above, SAR-DRN achieves the best performance in speckle reduction, while avoiding the introduction of radiometric and geometric distortion. In addition, from the red boxes of the Airplane and Building images in Figure 6, Figure 7 and Figure 8, it can be clearly seen that SAR-DRN also shows the best local detail preservation ability, while the other methods either miss partial texture details or produce blurry results, to some extent.

4.3. Real-Data Experiments

As shown in Figure 9, Figure 10 and Figure 11, we also compared the proposed method with the four state-of-the-art methods described above on three real SAR images. These three SAR images were all acquired by the Airborne Synthetic Aperture Radar (AIRSAR) and are all four-look data. In Figure 9, it can be clearly seen that the result of SAR-BM3D still contains a great deal of residual speckle noise, while the results of PPB, SAR-POTDF, SAR-CNN, and the proposed SAR-DRN method reveal a good speckle-reduction ability. PPB performs very well in speckle reduction, but it generates a few texture distortions at the edges of prominent objects. In homogeneous regions, SAR-POTDF does not perform as well in speckle reduction as the proposed SAR-DRN. As for SAR-CNN, its edge-preserving ability is weaker than that of SAR-DRN. Visually, SAR-DRN achieves the best performance in speckle reduction and local detail preservation, performing better than the other mainstream methods. In Figure 10, all five methods reduce the speckle noise well, but PPB clearly results in over-smoothing. In Figure 11, the result of SAR-CNN still contains some residual speckle noise, while PPB, SAR-BM3D, and SAR-POTDF also result in over-smoothing, to some degree, as shown in the marked regions with complex geometric features. It can be clearly seen that the proposed method has both a good speckle noise reduction ability and a good detail preservation ability for edge and texture information.
In addition, we also evaluated the filtered results through the ENL in Table 5 and the EPD-ROA [15] in Table 6, to measure the speckle-reduction and edge-preservation abilities [42], respectively. Because it is difficult to find homogeneous regions in Figure 11, the ENL values were estimated from four chosen homogeneous regions of Figure 9 and Figure 10 (the red boxes in Figure 9a and Figure 10a). Clearly, SAR-DRN has a much better speckle-reduction ability than the other methods, which is consistent with the visual observation.

4.4. Discussion

4.4.1. Dilated Convolutions and Skip Connections

As mentioned in Section 3, dilated convolutions are employed in the proposed method, which can both enlarge the receptive field and maintain the filter size and layer depth with a lightweight structure. In addition, skip connections are added to the despeckling model to maintain the image details and reduce the vanishing gradient problem. To verify the effectiveness of the dilated convolutions and skip connections, we implemented four sets of experiments in the same environment, as shown in Figure 12: (1) with dilated convolutions and skip connections (the red line); (2) with dilated convolutions but without skip connections (the green line); (3) without dilated convolutions but with skip connections (the blue line); and (4) without dilated convolutions or skip connections (the black line).
As Figure 12 implies, the dilated convolutions effectively reduce the training loss and enhance the despeckling performance (the lowest training loss and the best PSNR), which also confirms that augmenting the contextual information by enlarging the receptive field is effective for recovering the degraded image, as discussed in Section 3.1 for the dilated convolution. Meanwhile, the skip connections accelerate the convergence of the network and enhance the model stability, as shown by the comparison with and without skip connections in Figure 12. Moreover, the combination of dilated convolutions and skip connections reinforces the effect of each component, yielding an improvement of about 1.1 dB in PSNR compared with the model without dilated convolutions and without skip connections.

4.4.2. With or without Batch Normalization (BN) in the Network

Unlike the methods proposed in [28,29], which utilize batch normalization to normalize the output features, SAR-DRN does not add this layer, considering that the skip connections can also maintain the output data distribution across the different dilated convolution layers. A quantitative comparison of the two structures for SAR image despeckling is provided in Figure 13. Furthermore, removing the BN layers reduces the amount of computation, saving about 3 h of training time in the same environment. Figure 13 shows that this modification improves the despeckling performance while reducing the complexity of the model. Regarding this phenomenon, we suggest that a probable reason is that the input and output have a highly similar spatial distribution in this regression problem, while the BN layers normalize the hidden layers' outputs, which destroys the representation of the original space [44].

4.4.3. Runtime Comparisons

To evaluate the efficiency of the despeckling algorithms, we measured their runtimes in the same environment with MATLAB R2014b, as listed in Table 7. Clearly, SAR-DRN exhibits the lowest runtime complexity of all the algorithms, because of its lightweight model with only seven layers, in contrast to other deep learning methods such as SAR-CNN [28] with 17 layers.

5. Conclusions

In this paper, we have proposed a novel deep learning approach for the SAR image despeckling task, learning an end-to-end mapping between the noisy and clean SAR images. Differently from the common convolution operation, the presented approach is based on dilated convolutions, which can both enlarge the receptive field and maintain the filter size with a lightweight structure. Furthermore, skip connections are added to the despeckling model to maintain the image details and avoid the vanishing gradient problem. Compared with the traditional despeckling methods, the proposed SAR-DRN approach shows a state-of-the-art performance in both simulated and real SAR image despeckling experiments, especially for strong speckle noise.
In our future work, we will investigate more powerful learning models to deal with the complex real scenes in SAR images. Considering that our current method is trained separately for each number of looks, we will explore an integrated model to solve this problem. Furthermore, the proposed approach will be extended to polarimetric SAR image despeckling, whose noise model is much more complicated than that of single-polarization SAR. In addition, to better reduce the speckle noise in more complex real SAR image data, prior constraints such as multi-channel patch matching, band selection, location priors, and locality adaptive discriminant analysis [45,46,47,48] can also be considered to improve the precision of the despeckling results. Finally, we will try to collect enough SAR images to train the model with multi-temporal data [49] for SAR image despeckling, which will be explored in future studies.

Acknowledgments

This work was supported by the National Key Research and Development Program of China under Grant 2016YFB0501403, the National Natural Science Foundation of China under Grant 61671334, the Fundamental Research Funds for the Central Universities under Grant 2042017kf0180, and the Natural Science Foundation of Hubei Province under Grant ZRMS2016000241.

Author Contributions

Qiang Zhang proposed the method and performed the experiments; Qiang Zhang, Qiangqiang Yuan, Jie Li, and Zhen Yang conceived and designed the experiments; Qiang Zhang, Qiangqiang Yuan, Jie Li, Zhen Yang, and Xiaoshuang Ma wrote the manuscript. All the authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodman, J. Some fundamental properties of speckle. J. Opt. Soc. Am. 1976, 66, 1145–1150. [Google Scholar] [CrossRef]
  2. Li, H.; Hong, W.; Wu, Y.; Fan, P. Bayesian wavelet shrinkage with heterogeneity-adaptive threshold for SAR image despeckling based on generalized gamma distribution. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2388–2402. [Google Scholar] [CrossRef]
  3. Xu, B.; Cui, Y.; Li, Z.; Yang, J. An iterative SAR image filtering method using nonlocal sparse model. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1635–1639. [Google Scholar]
  4. Wu, J.; Liu, F.; Hao, H.; Li, L.; Jiao, L.; Zhang, X. A nonlocal means for speckle reduction of SAR image with multiscale-fusion-based steerable kernel function. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1646–1650. [Google Scholar] [CrossRef]
  5. Lee, J. Digital image enhancement and noise filtering by use of local statistics. IEEE Trans. Pattern Anal. Mach. Intell. 1980, 2, 165–168. [Google Scholar] [CrossRef] [PubMed]
  6. Kuan, D.; Sawchuk, A.; Strand, T.; Chavel, P. Adaptive noise smoothing filter for images with signal-dependent noise. IEEE Trans. Pattern Anal. Mach. Intell. 1985, 2, 165–177. [Google Scholar] [CrossRef]
  7. Frost, V.; Stiles, J.; Shanmugan, K.; Holtzman, J. A model for radar images and its application to adaptive digital filtering of multiplicative noise. IEEE Trans. Pattern Anal. Mach. Intell. 1982, 2, 157–166. [Google Scholar] [CrossRef]
  8. Yahya, N.; Kamel, N.S.; Malik, A.S. Subspace-based technique for speckle noise reduction in SAR images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6257–6271. [Google Scholar] [CrossRef]
  9. Starck, J.; Candès, E.; Donoho, D. The curvelet transform for image denoising. IEEE Trans. Image Process. 2002, 11, 670–684. [Google Scholar] [CrossRef] [PubMed]
  10. Solbo, S.; Eltoft, T. Homomorphic wavelet-based statistical despeckling of SAR images. IEEE Trans. Geosci. Remote Sens. 2004, 42, 711–721. [Google Scholar] [CrossRef]
  11. López, C.M.; Fàbregas, X.M. Reduction of SAR interferometric phase noise in the wavelet domain. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2553–2566. [Google Scholar] [CrossRef]
  12. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 60–65. [Google Scholar]
  13. Deledalle, C.A.; Denis, L.; Tupin, F. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans. Image Process. 2009, 18, 2661–2672. [Google Scholar] [CrossRef] [PubMed]
  14. Parrilli, S.; Poderico, M.; Angelino, C.V.; Verdoliva, L. A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2012, 50, 606–616. [Google Scholar] [CrossRef]
  15. Ma, X.; Shen, H.; Zhao, X.; Zhang, L. SAR image despeckling by the use of variational methods with adaptive nonlocal functionals. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3421–3435. [Google Scholar] [CrossRef]
  16. Xu, B.; Cui, Y.; Li, Z.; Zuo, B.; Yang, J.; Song, J. Patch ordering-based SAR image despeckling via transform-domain filtering. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1682–1695. [Google Scholar] [CrossRef]
  17. Feng, W.; Lei, H.; Gao, Y. Speckle reduction via higher order total variation approach. IEEE Trans. Image Process. 2014, 23, 1831–1843. [Google Scholar] [CrossRef] [PubMed]
  18. Zhao, Y.; Liu, J.; Zhang, B.; Hong, W.; Wu, Y. Adaptive total variation regularization based SAR image despeckling and despeckling evaluation index. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2765–2774. [Google Scholar] [CrossRef]
  19. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral-spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 10, 3660–3677. [Google Scholar] [CrossRef]
  20. Li, J.; Yuan, Q.; Shen, H.; Zhang, L. Noise removal from hyperspectral image with joint spectral-spatial distributed sparse representation. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5425–5439. [Google Scholar] [CrossRef]
  21. Ranjani, J.J.; Thiruvengadam, S.J. Dual-tree complex wavelet transform based SAR despeckling using interscale dependence. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2723–2731. [Google Scholar] [CrossRef]
  22. LeCun, Y.A.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  23. Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
  24. Xia, G.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y. AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981. [Google Scholar] [CrossRef]
  25. LeCun, Y.A.; Boser, B.; Denker, J.S.; Howard, R.E.; Habbard, W.; Jackel, L.D. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1990; pp. 396–404. [Google Scholar]
  26. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [PubMed]
  27. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  28. Chierchia, G.; Cozzolino, D.; Poggi, G.; Verdoliva, L. SAR image despeckling through convolutional neural networks. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017. [Google Scholar]
  29. Wang, P.; Zhang, H.; Patel, V.M. SAR image despeckling using a convolutional neural network. IEEE Signal Process. Lett. 2017, 24, 1763–1767. [Google Scholar] [CrossRef]
  30. Dong, C.; Loy, C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307. [Google Scholar] [CrossRef] [PubMed]
  31. Zhang, L.; Zuo, W. Image restoration: From Sparse and Low-Rank Priors to Deep Priors [Lecture Notes]. IEEE Signal Process. Mag. 2017, 34, 172–179. [Google Scholar] [CrossRef]
  32. Chakrabarti, A. A neural approach to blind motion deblurring. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 221–235. [Google Scholar]
  33. Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. In Proceedings of the 2016 International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  34. Mao, X.; Shen, C.; Yang, Y.-B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 2802–2810. [Google Scholar]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference Computer Vision and Pattern Recognition, Seattle, WA, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  36. Zhang, X.; Zou, J.; He, K.; Sun, J. Accelerating very deep convolutional networks for classification and detection. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1943–1955. [Google Scholar] [CrossRef] [PubMed]
  37. Kim, J.; Kwon, L.J.; Mu, L.K. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference Computer Vision and Pattern Recognition, Seattle, WA, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
  38. Szegedy, C.; Ioffe, S.; Vanhoucke, V. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284. [Google Scholar]
  39. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
  40. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  41. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678. [Google Scholar]
  42. Luis, G.; Maria, E.B.; Julio, C.; Marta, E. A new image quality index for objectively evaluating despeckling filtering in SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1297–1307. [Google Scholar]
  43. Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse-representations. In Proceedings of the International Conference on Curves and Surfaces, Avignon, France, 24–30 June 2010; pp. 711–730. [Google Scholar]
  44. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  45. Li, J.; Yuan, Q.; Shen, H.; Zhang, L. Hyperspectral image recovery employing a multidimensional nonlocal total variation model. Signal Process. 2015, 111, 230–248. [Google Scholar] [CrossRef]
  46. Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef] [PubMed]
  47. Wang, Q.; Meng, Z.; Li, X. Locality adaptive discriminant analysis for spectral-spatial classification of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2077–2081. [Google Scholar] [CrossRef]
  48. Wang, Q.; Gao, J.; Yuan, Y. Embedding structured contour and location prior in siamesed fully convolutional networks for road detection. IEEE Trans. Intell. Transp. Syst. 2017, 99, 230–241. [Google Scholar] [CrossRef]
  49. Ma, X.; Wu, P.; Wu, Y.; Shen, H. A review on recent developments in fully polarimetric SAR image despeckling. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 99, 1–16. [Google Scholar] [CrossRef]
Figure 1. The architecture of the proposed SAR-DRN.
Figure 2. Receptive field size of different dilated convolutions (d = 1, 2, and 4, where the dark regions represent the receptive field).
Figure 3. Dilated convolution in the proposed model.
Figure 4. Diagram of skip connection structure in the proposed model. (a) Connecting dilated convolution layer 1 to dilated convolution layer 3. (b) Dilated convolution layer 4 to dilated convolution layer 7.
Figure 5. The framework of SAR image despeckling based on deep learning.
Figure 6. Filtered images for the Airplane image contaminated by two-look speckle. (a) Original image. (b) Speckled image. (c) PPB [13]. (d) SAR-BM3D [14]. (e) SAR-POTDF [16]. (f) SAR-CNN [28]. (g) SAR-DRN.
Figure 7. Filtered images for the Building image contaminated by four-look speckle. (a) Original image. (b) Speckled image. (c) PPB [13]. (d) SAR-BM3D [14]. (e) SAR-POTDF [16]. (f) SAR-CNN [28]. (g) SAR-DRN.
Figure 8. Filtered images for the Highway image contaminated by four-look speckle. (a) Original image. (b) Speckled image. (c) PPB [13]. (d) SAR-BM3D [14]. (e) SAR-POTDF [16]. (f) SAR-CNN [28]. (g) SAR-DRN.
Figure 9. Filtered images for the Flevoland SAR image contaminated by four-look speckle. (a) Original image. (b) PPB [13]. (c) SAR-BM3D [14]. (d) SAR-POTDF [16]. (e) SAR-CNN [28]. (f) SAR-DRN.
Figure 10. Filtered images for the Deathvalley SAR image contaminated by four-look speckle. (a) Original image. (b) PPB [13]. (c) SAR-BM3D [14]. (d) SAR-POTDF [16]. (e) SAR-CNN [28]. (f) SAR-DRN.
Figure 11. Filtered images for the San Francisco SAR image contaminated by four-look speckle. (a) Original image. (b) PPB [13]. (c) SAR-BM3D [14]. (d) SAR-POTDF [16]. (e) SAR-CNN [28]. (f) SAR-DRN.
Figure 12. The simulated SAR image despeckling results of the four specific models in (a) training loss and (b) average PSNR, with respect to iterations. The four specific models were different combinations of dilated convolutions (Dconv) and skip connections (SK), and were trained with one-look images in the same environment. The results were evaluated for the Set14 [43] dataset.
Figure 13. The simulated SAR image despeckling results of the two specific models with/without batch normalization (BN). The two specific models were trained with one-look images in the same environment, and the results were evaluated for the Set14 [43] dataset.
Table 1. The network configuration of the SAR-DRN model.

| Layer Number | Network Configuration |
|---|---|
| Layer 1 | Dilated Conv + ReLU: 64 × 3 × 3, dilate = 1, stride = 1, pad = 1 |
| Layer 2 | Dilated Conv + ReLU: 64 × 3 × 3, dilate = 2, stride = 1, pad = 2 |
| Layer 3 | Dilated Conv + ReLU: 64 × 3 × 3, dilate = 3, stride = 1, pad = 3 |
| Layer 4 | Dilated Conv + ReLU: 64 × 3 × 3, dilate = 4, stride = 1, pad = 4 |
| Layer 5 | Dilated Conv + ReLU: 64 × 3 × 3, dilate = 3, stride = 1, pad = 3 |
| Layer 6 | Dilated Conv + ReLU: 64 × 3 × 3, dilate = 2, stride = 1, pad = 2 |
| Layer 7 | Dilated Conv: 64 × 3 × 3, dilate = 1, stride = 1, pad = 1 |
Table 2. Mean and standard deviation results of PSNR (dB) and SSIM for the Airplane image with L = 1, 2, 4, and 8.

| Looks | Index | PPB | SAR-BM3D | SAR-POTDF | SAR-CNN | SAR-DRN |
|---|---|---|---|---|---|---|
| L = 1 | PSNR | 20.11 ± 0.065 | 21.83 ± 0.051 | 21.75 ± 0.061 | 22.06 ± 0.053 | 22.97 ± 0.052 |
| | SSIM | 0.512 ± 0.001 | 0.623 ± 0.003 | 0.604 ± 0.003 | 0.623 ± 0.002 | 0.656 ± 0.001 |
| L = 2 | PSNR | 21.72 ± 0.055 | 23.59 ± 0.062 | 23.79 ± 0.041 | 24.13 ± 0.048 | 24.54 ± 0.043 |
| | SSIM | 0.601 ± 0.001 | 0.693 ± 0.004 | 0.686 ± 0.003 | 0.710 ± 0.002 | 0.726 ± 0.002 |
| L = 4 | PSNR | 23.48 ± 0.073 | 25.51 ± 0.079 | 25.84 ± 0.047 | 25.97 ± 0.051 | 26.52 ± 0.046 |
| | SSIM | 0.678 ± 0.003 | 0.755 ± 0.002 | 0.752 ± 0.002 | 0.748 ± 0.003 | 0.763 ± 0.002 |
| L = 8 | PSNR | 24.98 ± 0.084 | 27.17 ± 0.064 | 27.56 ± 0.060 | 27.89 ± 0.062 | 28.01 ± 0.058 |
| | SSIM | 0.743 ± 0.003 | 0.800 ± 0.003 | 0.794 ± 0.004 | 0.801 ± 0.002 | 0.819 ± 0.003 |
Table 3. Mean and standard deviation results of PSNR (dB) and SSIM for the Building image with L = 1, 2, 4, and 8.

| Looks | Index | PPB | SAR-BM3D | SAR-POTDF | SAR-CNN | SAR-DRN |
|---|---|---|---|---|---|---|
| L = 1 | PSNR | 25.05 ± 0.036 | 26.14 ± 0.059 | 25.10 ± 0.035 | 26.25 ± 0.052 | 26.80 ± 0.044 |
| | SSIM | 0.715 ± 0.002 | 0.786 ± 0.005 | 0.731 ± 0.001 | 0.775 ± 0.002 | 0.796 ± 0.003 |
| L = 2 | PSNR | 26.36 ± 0.064 | 27.95 ± 0.046 | 27.44 ± 0.041 | 27.98 ± 0.058 | 28.39 ± 0.045 |
| | SSIM | 0.778 ± 0.003 | 0.831 ± 0.004 | 0.811 ± 0.003 | 0.826 ± 0.003 | 0.838 ± 0.002 |
| L = 4 | PSNR | 28.05 ± 0.053 | 29.84 ± 0.033 | 29.56 ± 0.066 | 29.96 ± 0.057 | 30.14 ± 0.048 |
| | SSIM | 0.833 ± 0.002 | 0.879 ± 0.002 | 0.866 ± 0.002 | 0.869 ± 0.003 | 0.870 ± 0.002 |
| L = 8 | PSNR | 29.50 ± 0.069 | 31.36 ± 0.070 | 31.55 ± 0.051 | 31.63 ± 0.054 | 31.78 ± 0.058 |
| | SSIM | 0.871 ± 0.00 | 0.902 ± 0.001 | 0.900 ± 0.002 | 0.901 ± 0.002 | 0.901 ± 0.001 |
Table 4. Mean and standard deviation results of PSNR (dB) and SSIM for the Highway image with L = 1, 2, 4, and 8.

| Looks | Index | PPB | SAR-BM3D | SAR-POTDF | SAR-CNN | SAR-DRN |
|---|---|---|---|---|---|---|
| L = 1 | PSNR | 20.13 ± 0.059 | 21.12 ± 0.031 | 20.63 ± 0.047 | 21.07 ± 0.036 | 21.71 ± 0.024 |
| | SSIM | 0.472 ± 0.002 | 0.558 ± 0.002 | 0.530 ± 0.002 | 0.552 ± 0.003 | 0.613 ± 0.003 |
| L = 2 | PSNR | 21.40 ± 0.073 | 22.62 ± 0.028 | 22.51 ± 0.063 | 22.88 ± 0.062 | 22.96 ± 0.057 |
| | SSIM | 0.572 ± 0.002 | 0.646 ± 0.002 | 0.637 ± 0.003 | 0.641 ± 0.002 | 0.644 ± 0.003 |
| L = 4 | PSNR | 22.61 ± 0.037 | 24.29 ± 0.049 | 24.39 ± 0.071 | 24.46 ± 0.061 | 24.64 ± 0.063 |
| | SSIM | 0.674 ± 0.002 | 0.765 ± 0.003 | 0.768 ± 0.004 | 0.762 ± 0.003 | 0.772 ± 0.002 |
| L = 8 | PSNR | 24.90 ± 0.045 | 26.41 ± 0.075 | 26.37 ± 0.044 | 26.48 ± 0.058 | 26.53 ± 0.046 |
| | SSIM | 0.764 ± 0.005 | 0.834 ± 0.002 | 0.837 ± 0.002 | 0.834 ± 0.003 | 0.836 ± 0.002 |
Table 5. ENL results for the Flevoland and Deathvalley images.

| Data | Region | Original | PPB | SAR-BM3D | SAR-POTDF | SAR-CNN | SAR-DRN |
|---|---|---|---|---|---|---|---|
| Figure 9 | Region I | 4.36 | 122.24 | 67.43 | 120.32 | 86.29 | 137.63 |
| Figure 9 | Region II | 4.11 | 56.89 | 24.96 | 38.90 | 23.38 | 45.64 |
| Figure 10 | Region I | 5.76 | 14.37 | 12.65 | 12.72 | 13.26 | 14.58 |
| Figure 10 | Region II | 4.52 | 43.97 | 55.76 | 44.87 | 37.45 | 48.32 |
Table 6. EPD-ROA indexes for the real despeckling results.

| Data | PPB | SAR-BM3D | SAR-POTDF | SAR-CNN | SAR-DRN |
|---|---|---|---|---|---|
| Figure 9 | 0.619 | 0.733 | 0.714 | 0.748 | 0.754 |
| Figure 10 | 0.587 | 0.714 | 0.702 | 0.698 | 0.723 |
| Figure 11 | 0.632 | 0.685 | 0.654 | 0.621 | 0.673 |
Table 7. Runtime comparisons for the five despeckling methods with an image of size 256 × 256 (s).

| Method | PPB | SAR-BM3D | SAR-POTDF | SAR-CNN | Ours |
|---|---|---|---|---|---|
| Runtime (s) | 10.13 | 16.48 | 12.83 | 1.13 | 0.38 |
