Article

Remote Sensing Image Stripe Detecting and Destriping Using the Joint Sparsity Constraint with Iterative Support Detection

1 School of Mathematical Sciences/Research Center for Image and Vision Computing, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
2 School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, Shaanxi, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(6), 608; https://doi.org/10.3390/rs11060608
Submission received: 18 February 2019 / Revised: 5 March 2019 / Accepted: 6 March 2019 / Published: 13 March 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Remote sensing images have been applied to a wide range of fields, but they are often degraded by various types of stripes, which affect the visual quality and limit subsequent processing tasks. Most existing destriping methods fail to exploit the stripe properties adequately, leading to suboptimal performance. Based on a full consideration of the stripe properties, we propose a new destriping model that achieves stripe detection and stripe removal simultaneously. In this model, we adopt the unidirectional total variation regularization to depict the directional property of stripes and the weighted ℓ2,1-norm regularization to depict their joint sparsity. Then, we combine the alternating direction method of multipliers and iterative support detection to solve the proposed model effectively. Comparison results on simulated and real data suggest that the proposed method can detect and remove stripes effectively while preserving image edges and details.

Graphical Abstract

1. Introduction

Remote sensing images have been widely used in many fields, such as urban planning, environment monitoring, and precision farming [1]. Unfortunately, remote sensing images are often contaminated by noise. One common and important type is stripe noise, which mainly arises from lines dropped during scanning, differences between forward and reverse scanning [2], and variations in calibration across sensor arrays in multisensor instruments. In real applications, stripe noise in remote sensing images not only reduces the visual quality, but also lowers the accuracy of further processing tasks, such as classification [3], superresolution [4], and unmixing [5,6]. Therefore, destriping is an important preprocessing step before further applications; it aims at removing the stripes while maintaining image features, such as edges and details.

1.1. Existing Methods

Existing destriping methods can be categorized into three classes: filter-based, statistics-based, and optimization-based.
Filter-based methods assume that stripes are periodic and can be distinguished in the power spectrum. They therefore suppress the specific frequencies caused by stripe noise in a transform domain, using, e.g., low-pass filtering [2], wavelet analysis [7,8,9,10], and the Fourier transform [2,11]. Although these methods are easy to implement and perform well in specific cases, they generally introduce blurry artifacts into the resulting images because the frequencies of image details are shrunk as well. To overcome this drawback, Munch et al. [12] proposed a combined Fourier and wavelet filtering method that exploits the directional property of stripes via wavelet decomposition, achieving results with fewer blurry artifacts.
Statistics-based methods depend on the statistical characteristics of the digital numbers of each sensor [13,14,15,16,17,18], such as moment matching [14,17] and histogram matching [15,18]. Histogram matching assumes that the probability distribution of scene radiance seen by each sensor is the same, and it performs competitively only for single-type sensors. To address this problem, Wegner [18] proposed calculating the statistics only over homogeneous image regions, and Gorsini et al. [19] applied radiometric equalization to remove non-periodic stripes. In general, the main drawback of statistics-based methods is that they depend heavily on a pre-established reference moment or histogram.
The optimization-based methods formulate destriping as an ill-posed inverse problem [20,21,22,23], in which the true image and the stripes are characterized by prior information [24,25]. For example, Shen and Zhang [22] proposed a maximum a posteriori method based on Huber–Markov regularization for image destriping and inpainting. Bouali and Ladjal [20] used the unidirectional total variation (UTV) to exploit the directional features of stripes in Moderate-Resolution Imaging Spectroradiometer (MODIS) images. Subsequently, UTV [26,27] and its variants have been widely used for destriping [21,23,28,29,30,31]. For example, Dou et al. [28] proposed a non-convex destriping model employing the ℓ0-norm variant of UTV and a global sparsity regularization, depicted by the ℓ1-norm, to constrain the stripe noise. The method performs well, but it fails to consider the inner structure among the stripes. Compared with Dou’s model, we exploit the directional property of stripes more thoroughly and use a joint sparsity regularization, depicted by the weighted ℓ2,1-norm (ℓω,2,1-norm), to constrain the stripe component. Recently, some researchers have addressed multispectral and hyperspectral image [32,33,34] destriping by fully exploiting the high coherence between different image bands [35,36]. There are several commonly-used sparsity metrics [37] to describe the sparsity of stripes, e.g., the ℓ0-, ℓ1-, ℓ2,0-, and ℓ2,1-norms. The ℓ0-norm and ℓ1-norm penalize, respectively, the number and the magnitude of non-zero elements, which describes global sparsity. On the other hand, the ℓ2,0-norm and ℓ2,1-norm penalize, respectively, the number and the magnitude of non-zero columns, which describes group sparsity and thus captures the structural sparsity of stripes. In Table 1, we summarize the differences and relations among several state-of-the-art optimization-based methods.
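To make the distinction concrete, the four sparsity metrics can be evaluated on a small stripe matrix with a few lines of NumPy (an illustrative sketch; the helper name and interface are ours, not from the paper):

```python
import numpy as np

def sparsity_norms(S):
    """Evaluate the four sparsity metrics discussed above on a stripe
    matrix S (illustrative helper, not code from the paper)."""
    col_norms = np.linalg.norm(S, axis=0)
    return {
        "l0": int(np.count_nonzero(S)),            # number of non-zero elements
        "l1": float(np.abs(S).sum()),              # total magnitude of elements
        "l2,0": int(np.count_nonzero(col_norms)),  # number of non-zero columns
        "l2,1": float(col_norms.sum()),            # total magnitude of column norms
    }
```

For a stripe matrix whose non-zeros fill whole columns, the ℓ2,0/ℓ2,1 values stay small while the ℓ0/ℓ1 values grow with the image height, which is why the column-grouped norms capture stripe structure better.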

1.2. Motivation

Most existing destriping methods concentrate on estimating the clean image directly from the observed image, while paying little attention to the stripe component. Besides the directional and structural properties of the stripes, the stripe position is also an important feature, and exploiting it effectively can significantly improve the destriping performance.
In this paper, we propose a novel destriping model that detects the stripe positions and removes the stripes simultaneously by fully exploiting the properties of the stripes. Our model exploits both the structural and the directional properties of stripes. On the one hand, we use the ℓω,2,1-norm to characterize the joint sparsity (structural property) of stripes, and we use iterative support detection (ISD) [41,42] to calculate the weight vector in the ℓω,2,1-norm, which indicates the stripe positions in the observed image. On the other hand, we use UTV to characterize the directional property of stripes. We apply the alternating direction method of multipliers (ADMM) [6,43,44,45] to solve the proposed model efficiently. In summary, our contributions are as follows:
(1)
We propose a non-convex optimization model that characterizes the position property and the joint sparsity of stripes using the ℓω,2,1-norm, which can significantly improve the destriping performance.
(2)
We combine ADMM and ISD to solve the proposed model effectively. Under specified conditions, the convergence of the proposed method can be guaranteed. During the solution process, we achieve stripe detection by calculating the weight vector, and we design new indices to analyze the detection accuracy.
The remainder of this paper is organized as follows. Section 2 gives the proposed model with analyses of the stripe properties. Section 3 presents the optimization algorithm. Section 4 shows the experimental results and provides the comparisons with other methods. Section 5 concludes this paper.

2. The Proposed Model

Without loss of generality, we consider each stripe as a column, since a horizontal stripe can be made vertical by rotating the image. We consider the following additive degradation model [20,38]:
$$F = U + S, \qquad (1)$$
where $F, U, S \in \mathbb{R}^{m \times n}$ represent the observed image, the ground-truth image, and the stripe noise, respectively. This degradation process is shown in Figure 1.
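For intuition, the degradation model can be simulated by adding constant-intensity stripe columns to a clean image (a sketch matching the periodic setup used later in Section 4.2; the helper name and defaults are ours, not the paper's exact generator):

```python
import numpy as np

def add_periodic_stripes(U, period=10, intensity=50 / 255.0, seed=0):
    """Simulate F = U + S with vertical periodic stripes: one stripe
    column every `period` columns, each constant along the column with
    a random +/- sign (illustrative assumption)."""
    rng = np.random.default_rng(seed)
    m, n = U.shape
    S = np.zeros_like(U)
    cols = np.arange(0, n, period)
    signs = rng.choice([-1.0, 1.0], size=cols.size)
    S[:, cols] = intensity * signs  # broadcast: constant along each column
    return U + S, S
```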
From a degraded image F, we detect the stripe positions and estimate the stripe component S simultaneously by solving the following model:
$$\min_S \; \|D_y S\|_{1,1} + \lambda_1 \|S\|_{\omega,2,1} + \lambda_2 \|D_x F - D_x S\|_{1,1}, \qquad (2)$$
where $\|D_y S\|_{1,1}$, $\|D_x F - D_x S\|_{1,1}$, and $\|S\|_{\omega,2,1}$ are defined as:
$$\|D_y S\|_{1,1} := \sum_{j=1}^{n} \|D_y S_{:j}\|_1, \qquad (3)$$
$$\|D_x F - D_x S\|_{1,1} := \sum_{j=1}^{n} \|D_x F_{:j} - D_x S_{:j}\|_1, \qquad (4)$$
$$\|S\|_{\omega,2,1} := \sum_{j=1}^{n} \omega_j \|S_{:j}\|_2, \qquad (5)$$
where the subscript $:j$ denotes the $j$th column; $D_y$ and $D_x$ are linear second-order difference operators along the vertical and horizontal directions, respectively; $\lambda_1 > 0$ and $\lambda_2 > 0$ are regularization parameters balancing the three terms; and $\omega = [\omega_1, \ldots, \omega_n]$ is a weight vector. After estimating the stripe component S from (2), the ground-truth image U is recovered as $U = F - S$.
We now explain the motivation of each term in our model. To illustrate the stripe properties, we use GSUTV [40], which performs destriping by estimating the stripe component, to remove stripes from a Terra MODIS image and examine the stripe properties in the results. Figure 1a–c presents the degraded image, the destriping result, and the stripe component, respectively. Figure 1d–i shows the gradients of the observed image, the destriped image, and the stripe image in two directions: horizontal (across the stripes) and vertical (along the stripes).
First, we analyze the gradient statistical distribution before and after degradation. The horizontal gradient maps in Figure 1d,f differ greatly in the image domain, while the vertical gradient maps in Figure 1e,g are similar. In addition, we plot the corresponding gradient histograms in Figure 1j,k: the gradient distributions of Figure 1d,f are completely different, while those of Figure 1e,g are almost the same. This observation is not surprising, since stripe noise has a significant directional feature. As shown in Figure 1c, the stripe noise also possesses a significant structural property. Furthermore, we plot the ℓ2-norm values of the columns of the stripe component in Figure 1l.
The smoothness of the stripe along the vertical direction: The first term $\|D_y S\|_{1,1}$ penalizes the gradient of the stripe image along the vertical direction. Figure 1c shows the strong smoothness of the stripe image along the vertical direction, and Figure 1i shows that the vertical gradient of the stripe image is significantly sparse. Thus, we chose the ℓ1-norm of the vertical gradient of the stripe image to preserve the smoothness of the stripe component. Although the ℓp-norm (0 < p < 1) [39,46] can depict sparsity more accurately than the ℓ1-norm [28], we did not adopt it here because it is non-convex and may affect the convergence of the algorithm.
Joint sparsity of the stripe: The second term $\|S\|_{\omega,2,1}$ characterizes the joint sparsity of stripes. As shown in Figure 1c, the stripe component has a special column structure. As Chen et al. [40] pointed out, the stripe component is composed of column vectors, and each column can be considered as a group. Moreover, the structure of the stripe component exhibits joint sparsity in the sense that the non-zeros of the rows $S_{1:}, \ldots, S_{m:}$ appear in exactly the same positions. This motivated us to use a joint sparsity regularization. As shown in Figure 1l, most of the ℓ2-norm values of the columns are close to zero, while the remaining values are much larger. This again indicates that a joint sparsity regularization is more appropriate for depicting the structural property of stripes than an ordinary group sparsity [47] regularization.
Compared with GSUTV, the main difference of the proposed model is the weight vector ω. By setting ω appropriately, we can exclude from the regularization those columns that we regard as true stripe columns. Therefore, the ℓω,2,1-norm can depict joint sparsity more appropriately than the ℓ2,1-norm. It is worth mentioning that, unlike most existing weighted models, which take $\omega_j$ from the set of positive real numbers, we take $\omega_j$ from {0, 1} due to the specific property of stripes. This means that if we regard $S_{:j}$ as a true non-zero column (true stripe column), we set $\omega_j$ to zero so that the regularization does not shrink it toward zero. Because the positions of the non-zero columns are unknown, $\omega_j$ cannot be given beforehand. Therefore, we used ISD [42], a self-learning scheme, to explore partial support information gradually.
The smoothness of the ground-truth image along the horizontal direction: The third term $\|D_x F - D_x S\|_{1,1} = \|D_x U\|_{1,1}$ penalizes the difference between the horizontal gradients of the observed image and the stripe image, which enforces the smoothness of the ground-truth image along the horizontal direction. Generally speaking, the ground-truth image is smooth along both the horizontal and vertical directions, resulting in sparsity in the gradient domain. However, the stripe noise mainly damages the sparsity along the horizontal direction (Figure 1d), rather than the vertical direction (Figure 1e). Therefore, we chose the ℓ1-norm of the horizontal gradient of the ground-truth image to make the recovered image smoother along the horizontal direction.
The framework of the proposed method is illustrated in Figure 2, and the details of the optimization method are shown in the next section.
Remark 1.
The proposed model is non-convex due to the multiplicative coupling of ω and S in the regularization term $\|S\|_{\omega,2,1}$. It is worth mentioning that, for a fixed ω, the proposed model is convex with respect to the stripe S.

3. The Optimization Algorithm

To solve the proposed model, we alternately applied ISD [42], a self-learning scheme that explores the true non-zeros gradually, and ADMM [43,48,49], a useful approach for non-smooth but convex optimization models. Combining ISD and ADMM is reasonable for the following reasons. The model is hard to solve due to the non-convex regularization term $\|S\|_{\omega,2,1}$ and the non-smoothness of the remaining regularization terms. To overcome the difficulty caused by $\|S\|_{\omega,2,1}$, we extended the idea of ISD from ordinary sparsity to joint sparsity: based on ISD, we can detect the positions of the non-zero columns and update the weight vector. When the weight vector is fixed, the proposed model degrades to an ordinary convex model in S, which we call the joint sparsity model. This non-smooth but convex model can be handled readily by ADMM. Therefore, we solved the model by alternating the following two steps:
(1)
Step 1: Keep the weight vector fixed, and solve the joint sparsity model by ADMM. In the first iteration, we set the initial value of ω to [1, …, 1] and obtained an estimate $S^{(1)}$ by solving the joint sparsity model.
(2)
Step 2: Using the S estimated in Step 1, update ω via a support detection operation.
The details of the two steps are shown as follows.

3.1. Solving the Joint Sparsity Model

The main idea of ADMM is to transform the unconstrained optimization model into a constrained one with a variable-splitting structure by introducing auxiliary variables. Introducing three auxiliary variables X, Y, and Z, we rewrite (2) as:
$$\min_{S,X,Y,Z} \|X\|_{1,1} + \lambda_1 \|Y\|_{\omega,2,1} + \lambda_2 \|Z\|_{1,1} \quad \text{s.t.} \quad X = D_y S, \;\; Y = S, \;\; Z = D_x F - D_x S. \qquad (6)$$
The augmented Lagrangian function of (6) is as follows:
$$\begin{aligned} \mathcal{L}_\beta(S, X, Y, Z, A_1, A_2, A_3) ={}& \|X\|_{1,1} + \lambda_1 \|Y\|_{\omega,2,1} + \lambda_2 \|Z\|_{1,1} + \langle A_1, D_y S - X \rangle + \langle A_2, S - Y \rangle \\ &+ \langle A_3, D_x F - D_x S - Z \rangle + \frac{\beta}{2} \|D_y S - X\|_F^2 + \frac{\beta}{2} \|S - Y\|_F^2 + \frac{\beta}{2} \|D_x F - D_x S - Z\|_F^2, \qquad (7) \end{aligned}$$
where A 1 , A 2 , and A 3 are the Lagrange multipliers and β > 0 is a penalty scalar. ADMM iterates as follows:
$$\begin{cases} X^{(k+1)} = \arg\min_X \mathcal{L}_\beta(S^{(k)}, X, Y^{(k)}, Z^{(k)}, A_1^{(k)}, A_2^{(k)}, A_3^{(k)}), \\ Y^{(k+1)} = \arg\min_Y \mathcal{L}_\beta(S^{(k)}, X^{(k+1)}, Y, Z^{(k)}, A_1^{(k)}, A_2^{(k)}, A_3^{(k)}), \\ Z^{(k+1)} = \arg\min_Z \mathcal{L}_\beta(S^{(k)}, X^{(k+1)}, Y^{(k+1)}, Z, A_1^{(k)}, A_2^{(k)}, A_3^{(k)}), \\ S^{(k+1)} = \arg\min_S \mathcal{L}_\beta(S, X^{(k+1)}, Y^{(k+1)}, Z^{(k+1)}, A_1^{(k)}, A_2^{(k)}, A_3^{(k)}), \\ A_1^{(k+1)} = A_1^{(k)} + \beta (D_y S^{(k+1)} - X^{(k+1)}), \\ A_2^{(k+1)} = A_2^{(k)} + \beta (S^{(k+1)} - Y^{(k+1)}), \\ A_3^{(k+1)} = A_3^{(k)} + \beta (D_x F - D_x S^{(k+1)} - Z^{(k+1)}). \end{cases} \qquad (8)$$
The details of solving the four subproblems are shown as follows:
(1)
The X-subproblem can be rewritten by completing the square as follows:
$$\arg\min_X \|X\|_{1,1} + \langle A_1, D_y S - X \rangle + \frac{\beta}{2} \|D_y S - X\|_F^2 = \arg\min_X \|X\|_{1,1} + \frac{\beta}{2} \left\|D_y S - X + \frac{A_1}{\beta}\right\|_F^2, \qquad (9)$$
whose solution is given by the following element-wise soft-threshold (shrinkage) function [50,51]:
$$X_{ij}^{(k+1)} = \operatorname{shrink}\!\left(\left(D_y S^{(k)} + \frac{A_1^{(k)}}{\beta}\right)_{ij}, \frac{1}{\beta}\right) = \max\left\{\left|\left(D_y S^{(k)} + \frac{A_1^{(k)}}{\beta}\right)_{ij}\right| - \frac{1}{\beta}, 0\right\} \cdot \frac{\left(D_y S^{(k)} + \frac{A_1^{(k)}}{\beta}\right)_{ij}}{\left|\left(D_y S^{(k)} + \frac{A_1^{(k)}}{\beta}\right)_{ij}\right|}. \qquad (10)$$
Since $X \in \mathbb{R}^{m \times n}$, the cost of computing X is O(mn).
(2)
As with the X-subproblem, the Z-subproblem can be solved by the shrinkage operator:
$$\arg\min_Z \lambda_2 \|Z\|_{1,1} + \langle A_3, D_x F - D_x S - Z \rangle + \frac{\beta}{2} \|D_x F - D_x S - Z\|_F^2 = \arg\min_Z \lambda_2 \|Z\|_{1,1} + \frac{\beta}{2} \left\|D_x F - D_x S - Z + \frac{A_3}{\beta}\right\|_F^2, \qquad (11)$$
so:
$$Z_{ij}^{(k+1)} = \max\left\{\left|\left(D_x F - D_x S^{(k)} + \frac{A_3^{(k)}}{\beta}\right)_{ij}\right| - \frac{\lambda_2}{\beta}, 0\right\} \cdot \frac{\left(D_x F - D_x S^{(k)} + \frac{A_3^{(k)}}{\beta}\right)_{ij}}{\left|\left(D_x F - D_x S^{(k)} + \frac{A_3^{(k)}}{\beta}\right)_{ij}\right|}. \qquad (12)$$
As with the X-subproblem, the cost of computing Z is also O(mn).
(3)
The Y-subproblem is:
$$\arg\min_Y \lambda_1 \|Y\|_{\omega,2,1} + \langle A_2, S - Y \rangle + \frac{\beta}{2} \|S - Y\|_F^2, \qquad (13)$$
which also can be solved by a shrinkage operator [49,50,51]:
$$Y_{:j}^{(k+1)} = \max\left\{\left\|\left(S^{(k)} + \frac{A_2^{(k)}}{\beta}\right)_{:j}\right\|_2 - \frac{\lambda_1}{\beta} \omega_j, 0\right\} \cdot \frac{\left(S^{(k)} + \frac{A_2^{(k)}}{\beta}\right)_{:j}}{\left\|\left(S^{(k)} + \frac{A_2^{(k)}}{\beta}\right)_{:j}\right\|_2}. \qquad (14)$$
The cost of computing Y is also O ( m n ) .
(4)
The S-subproblem:
$$\arg\min_S \frac{\beta}{2} \left\|D_y S - X + \frac{A_1}{\beta}\right\|_F^2 + \frac{\beta}{2} \left\|S - Y + \frac{A_2}{\beta}\right\|_F^2 + \frac{\beta}{2} \left\|D_x F - D_x S - Z + \frac{A_3}{\beta}\right\|_F^2 \qquad (15)$$
is a quadratic optimization, which has the solution:
$$S^{(k+1)} = (D_y^\top D_y + I + D_x^\top D_x)^{-1} \left[D_y^\top \left(X^{(k+1)} - \frac{A_1^{(k)}}{\beta}\right) + \left(Y^{(k+1)} - \frac{A_2^{(k)}}{\beta}\right) + D_x^\top \left(D_x F - Z^{(k+1)} + \frac{A_3^{(k)}}{\beta}\right)\right]. \qquad (16)$$
In this paper, we assume that the boundary condition of S is periodic; thus, $D_y^\top D_y$ and $D_x^\top D_x$ are both block-circulant matrices, and (16) can be solved efficiently by fast Fourier transforms [42]:
$$S^{(k+1)} = \mathcal{F}^{-1} \left( \frac{\xi}{\overline{\mathcal{F}(D_y)} \otimes \mathcal{F}(D_y) + 1 + \overline{\mathcal{F}(D_x)} \otimes \mathcal{F}(D_x)} \right), \qquad (17)$$
where:
$$\xi = \overline{\mathcal{F}(D_y)} \otimes \mathcal{F}\left(X^{(k+1)} - \frac{A_1^{(k)}}{\beta}\right) + \mathcal{F}\left(Y^{(k+1)} - \frac{A_2^{(k)}}{\beta}\right) + \overline{\mathcal{F}(D_x)} \otimes \mathcal{F}\left(D_x F - Z^{(k+1)} + \frac{A_3^{(k)}}{\beta}\right), \qquad (18)$$
where $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ denote the fast Fourier transform and its inverse, respectively. The symbol ⊗ denotes component-wise multiplication (the division is likewise component-wise), and $\overline{(\cdot)}$ denotes the component-wise complex conjugate. Using the fast Fourier transform, the cost of computing S is O(mn log mn). Hence, at each iteration, the total cost of updating X, Y, Z, and S is O(mn log mn).
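Putting the four updates together, one inner ADMM iteration can be sketched in NumPy. We assume first-order circular differences for $D_y$ and $D_x$ (so the normal-equation operator diagonalizes under the 2-D FFT); the paper's exact difference stencils may differ:

```python
import numpy as np

def Dv(A):   # vertical difference, periodic boundary
    return np.roll(A, -1, axis=0) - A

def DvT(A):  # adjoint of Dv
    return np.roll(A, 1, axis=0) - A

def Dh(A):   # horizontal difference, periodic boundary
    return np.roll(A, -1, axis=1) - A

def DhT(A):  # adjoint of Dh
    return np.roll(A, 1, axis=1) - A

def soft(A, t):
    """Element-wise soft threshold for the X- and Z-subproblems."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def col_shrink(B, t):
    """Column-wise vector soft threshold for the Y-subproblem;
    t is a per-column threshold (lambda1/beta times omega_j)."""
    nrm = np.linalg.norm(B, axis=0)
    return B * (np.maximum(nrm - t, 0.0) / np.maximum(nrm, 1e-12))

def admm_step(S, X, Y, Z, A1, A2, A3, F, omega, lam1, lam2, beta):
    """One inner ADMM iteration: shrinkage updates for X, Y, Z, an FFT
    solve of the normal equations for S, then the multiplier updates.
    A sketch under the periodic-boundary assumption stated above."""
    X = soft(Dv(S) + A1 / beta, 1.0 / beta)
    Y = col_shrink(S + A2 / beta, (lam1 / beta) * omega)
    Z = soft(Dh(F) - Dh(S) + A3 / beta, lam2 / beta)
    # S-update: (Dv^T Dv + I + Dh^T Dh) S = rhs, diagonal in Fourier space
    rhs = DvT(X - A1 / beta) + (Y - A2 / beta) + DhT(Dh(F) - Z + A3 / beta)
    m, n = F.shape
    ey = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m)  # eig(Dv^T Dv)
    ex = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)  # eig(Dh^T Dh)
    S = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (ey[:, None] + 1.0 + ex[None, :])))
    A1 = A1 + beta * (Dv(S) - X)
    A2 = A2 + beta * (S - Y)
    A3 = A3 + beta * (Dh(F) - Dh(S) - Z)
    return S, X, Y, Z, A1, A2, A3
```

Each call costs O(mn log mn), dominated by the pair of 2-D FFTs in the S-update, matching the per-iteration complexity stated above.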

3.2. Update the Weight Vector ω via ISD

In this step, we use ISD to detect the positions of stripes and update the weight vector ω based on the current intermediate solution S.
First, we define the set whose elements are the indices of the detected non-zero columns:
$$I^{(v+1)} := \{ j : |s_j^{(v)}| > \epsilon^{(v)} \}, \quad v = 1, 2, \ldots, \qquad (19)$$
where $s_j = \|S_{:j}\|_2$ and $s$ is the sequence $|s_1|, \ldots, |s_n|$.
Second, we choose the threshold $\epsilon^{(v)}$ by the “first significant jump” rule. In detail, after sorting the sequence $s^{(v)}$ in ascending order, the rule finds the smallest j satisfying the inequality:
$$|s_{k_{j+1}}^{(v)}| - |s_{k_j}^{(v)}| > \tau^{(v)}, \qquad (20)$$
where $s_{k_j}$ is the $j$th element of the sorted sequence $\bar{s}^{(v)}$. There are several simple ways [42] to define $\tau^{(v)}$; here, we set it to the average value of $|s_1|, \ldots, |s_n|$.
Third, we set $\epsilon^{(v)} = |s_{k_j}^{(v)}|$. The true stripe entries of $|s^{(v)}|$ are large in magnitude and few in number, whereas the false ones are numerous and small in magnitude. Therefore, the “first significant jump” rule separates the spread-out magnitudes of the true stripe entries from the clustered magnitudes of the false ones.
Finally, having obtained the index set $I^{(v+1)}$, we define the complementary set, whose elements are the indices of the undetected columns:
$$T^{(v+1)} := (I^{(v+1)})^c = \{1, 2, \ldots, n\} \setminus I^{(v+1)}.$$
We update the weight vector $\omega^{(v+1)}$ according to the two index sets as follows:
$$\omega_j^{(v+1)} = \begin{cases} 1, & j \in T^{(v+1)}, \\ 0, & j \in I^{(v+1)}. \end{cases} \qquad (21)$$
The main computational cost of updating the weight vector ω stems from computing S in the inner loop. Moreover, the required number of outer iterations is empirically small, as discussed in Section 4.1. Therefore, the total complexity of the proposed algorithm is O(mn log mn).
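The support-detection step above can be sketched as follows; the helper name is ours, and τ defaults to the mean column norm as in the text:

```python
import numpy as np

def isd_weights(S, tau=None):
    """One ISD pass: locate the 'first significant jump' in the sorted
    column norms of S, mark columns above the resulting threshold as
    detected stripes (weight 0), and keep weight 1 elsewhere."""
    s = np.linalg.norm(S, axis=0)          # s_j = ||S_:j||_2
    if tau is None:
        tau = s.mean()                     # tau^(v): average of |s_1|..|s_n|
    sorted_s = np.sort(s)
    jumps = np.diff(sorted_s)
    if jumps.size == 0 or not np.any(jumps > tau):
        return np.ones(s.size)             # no significant jump: detect nothing
    j = int(np.argmax(jumps > tau))        # smallest j with a jump > tau
    eps = sorted_s[j]                      # threshold epsilon^(v)
    return (s <= eps).astype(float)        # omega_j = 0 on detected columns
```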
The pseudocode of the algorithm combining ADMM and ISD is summarized in Algorithm 1.
Algorithm 1 The optimization algorithm combining ADMM and ISD.
Input: the observed image F, the parameters λ1, λ2, the penalty parameter β, the outer iteration number V, and the inner iteration number K_max.
Initialize: ω_j^(0) = 1 (j = 1, …, n).
Iteration:
   1: for v = 0 to V do
   2:   Initialize for the inner loop: ω = ω^(v), S^(0) = 0, X^(0) = 0, Y^(0) = 0, Z^(0) = D_x F, A_i^(0) = 0 (i = 1, 2, 3), and ε = 10^{-4}.
   3:   for k = 0 to K_max do
   4:     Update X^(k+1), Y^(k+1), Z^(k+1) by the soft-threshold operators in (10), (14), and (12).
   5:     Update S^(k+1) by (16).
   6:     Update the multipliers A_i^(k+1) (i = 1, 2, 3) by (8).
   7:     Check the convergence condition: ‖(F − S^(k+1)) − (F − S^(k))‖_F / ‖F − S^(k)‖_F ≤ ε.
   8:   end for
   9:   Estimate the intermediate ground-truth image U^(v) = F − S^(k).
  10:   Calculate the index set I^(v+1) by the threshold-choice scheme with S = S^(k).
  11:   Update the weight vector ω^(v+1) by (21).
  12: end for
Output: the final weight vector ω^(V) and the estimate of the ground-truth image U.
Remark 2.
Although we take ω from the set {0, 1}, real-valued continuous weights could be used as an alternative.

4. Experiments

In this section, we conduct experiments on both simulated and real data to evaluate the effectiveness of the proposed method. Five state-of-the-art destriping methods were chosen for comparison: the filtering-based method WAFT [12], the statistics-based method SLD [13], and three optimization-based methods, namely the weighted UTV-based method WDSUV [29], the low-rank decomposition-based method LRSID [38], and the group sparsity regularization-based method GSUTV [40]. In the simulated experiments, the original MODIS Band 32 image of size 400 × 400, available at https://ladsweb.nascom.nasa.gov/, and the IKONOS image of size 377 × 331, downloadable from https://openremotesensing.net/, were used, with synthetic stripe noise added to the clean images. In the real experiments, we used two real MODIS images degraded by different types of stripes, also available at https://ladsweb.nascom.nasa.gov/. For the convenience of quantitative evaluation and parameter selection, the gray values of the test images were normalized to [0, 1]. All codes were run in MATLAB (R2017a) on a desktop with 16 GB RAM and an Intel (R) Core (TM) i5-4590U CPU @ 3.30 GHz. To evaluate the results of both the simulated and real experiments, we chose several qualitative and quantitative assessments. The qualitative assessments include the visual quality of the destriped image and the mean cross-track profile [38]. In the simulated experiments, we chose two full-reference evaluation indices: the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) [52]. Moreover, to evaluate the correctness of the stripe detection, we define two indices, the detection error rate (DER) and the detection missing rate (DMR), as follows:
$$DER(S, W) = \frac{a}{n}, \quad DMR(S, W) = \frac{b}{n}, \qquad (22)$$
where S is the synthetic stripe component, W is the final weight vector, a is the number of falsely detected stripe columns, and b is the number of missed stripe columns. In the real experiments, since the ground-truth image was unavailable, we selected two no-reference indices: noise reduction (NR) [16,22,36] and mean relative deviation (MRD) [16,36]. NR depicts the ratio of stripe reduction in the frequency domain, and MRD evaluates the ability of a method to retain the pixel values in stripe-free regions. In short, larger PSNR, SSIM, and NR values and smaller MRD values indicate better destriping performance.
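Given the ground-truth stripe component and the final weight vector, the two detection indices can be computed directly (a sketch; the function and variable names are ours):

```python
import numpy as np

def detection_rates(S_true, omega):
    """DER and DMR as defined above: a = falsely detected stripe columns,
    b = missed stripe columns, n = total number of columns. A column is
    'detected' when its final weight omega_j is zero."""
    true_stripe = np.any(S_true != 0, axis=0)
    detected = (omega == 0)
    n = omega.size
    a = int(np.count_nonzero(detected & ~true_stripe))  # false detections
    b = int(np.count_nonzero(~detected & true_stripe))  # missed detections
    return a / n, b / n
```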

4.1. Parameter Setting

In the inner loop, there are two regularization parameters λ1 and λ2 and the penalty parameter β. We empirically found that the optimal values of λ1 and λ2 vary across test images, so we tuned them by hand. To show the effects of the three parameters on the destriping performance, we analyzed a simulated experiment (Case 3) as an example. Figure 3 shows the PSNR index for different values of λ1, λ2, and β. The PSNR curves reach their highest points at λ1 = 0.004 (with λ2 = 0.001, β = 0.1), λ2 = 0.0005 (with λ1 = 0.004, β = 0.1), and β = 0.1 (with λ1 = 0.004, λ2 = 0.0005). In our experiments, we set λ1 ∈ [0.001, 0.01] with increments of 0.001 and λ2 ∈ [0.0001, 0.001] with increments of 0.0001. In theory [43], the penalty parameter β does not greatly affect the convergence of ADMM, so we set β ∈ [0.1, 1] with increments of 0.1 based on experience.
In the outer loop, the key parameter is the iteration number, which determines the threshold. Fan et al. [41] pointed out that the threshold value does not change much even as the iteration number goes to infinity. Empirically, we set the outer iteration number to five to stabilize the threshold. For the compared methods, we set the parameters to the values suggested in their references.

4.2. Simulated Experiments

In the simulated experiments, we added both periodic and non-periodic synthetic stripes to two different remote sensing images based on the observation model (1). We denote the periodic stripe level by a vector with three elements (period, intensity, rate), where “period” is the number of columns between adjacent stripe columns (denoted p), “intensity” is the absolute pixel value of each stripe column (denoted I), and “rate” is the proportion of the stripe area within the ground-truth image (denoted r). Similarly, we denote the non-periodic stripe level by a vector with two elements (intensity, rate), with the same definitions as above. In our simulated experiments, p ∈ {10, 15}, I ∈ {50, 100}, and r ∈ {0.2, 0.5, 0.8}. For both the periodic and non-periodic cases, the stripe locations were randomly distributed over the image, and the absolute pixel value of each stripe column was the same. In Cases 1–3, we gradually increased the rate of the stripes with p = 10 and I = 50, producing different striping effects; in Cases 4–6, we did the same with p = 15 and I = 50. In Cases 7–9, the rate was likewise increased with I = 50, and in Cases 10–12 with I = 100.
The visual results of Case 9 and Case 3 are displayed as examples in Figure 4 and Figure 5. In Figure 4, the WAFT method blurs the boundaries and leaves many residual stripes; the IRUV method loses detailed information; and the LRSID method blurs the whole image. Although the stripes are well removed by the SLD and GSUTV methods, some blurred regions remain, as marked by the red circle. In comparison, the proposed method achieves better visual performance. Figure 6 and Figure 7 show the stripe images estimated by the different methods and the corresponding error images. The error images of the five comparative methods show that they fail to estimate the stripe component precisely. In contrast, the error image of the proposed method contains less residual stripe information, which suggests better destriping performance. The quantitative results of all the cases are listed in Table 2. Moreover, to test the stripe position detection, we report the DER and DMR results in Table 3. From Table 2, we observe that the proposed method obtains better PSNR and SSIM values in the simulated experiments. From Table 3, as the striping effect increases, the DER values increase gradually, and most of the DMR values are zero; overall, the proposed method detects the stripe positions almost exactly. In summary, the quantitative comparison conforms with the visual results presented above. In addition, Figure 8 and Figure 9 show the mean cross-track profiles of Figure 4 and Figure 5, respectively. Figure 8 shows that the proposed method maintains the correct curve tendency, and the column mean values of the destriped image are almost the same as those of the true image. This illustrates that the proposed method preserves the edges and details of the image while accurately estimating the stripe component.

4.3. Real Experiments

In this subsection, Figure 10 and Figure 11 show the results of all six methods on two real remote sensing images with different stripe noise, selected from Terra MODIS and Aqua MODIS, as shown in Figure 10a and Figure 11a. The Terra MODIS image is heavily affected by non-periodic stripe noise, and the Aqua MODIS image is severely contaminated by periodic stripe noise.
As shown in Figure 10, the WAFT and SLD methods blur the boundaries and leave many residual stripes; the IRUV method removes the stripes completely but blurs some regions; and the LRSID method performs poorly in preserving the stripe-free information, with some residual stripes remaining. The GSUTV method removes all stripes completely, but some edge regions appear fainter in Figure 10f. The proposed method successfully suppresses the stripe noise with fewer artifacts. Figure 10 and Figure 11 show that the proposed JSUTV method outperforms the five compared methods in terms of both removing stripes and preserving the stripe-free information.
Moreover, Table 4 lists the quantitative NR and MRD values for the real experiments. To avoid the effect of external factors, five 10 × 10 homogeneous regions were selected to calculate the MRD, and the mean MRD value was reported. In terms of both the NR and MRD indices, the proposed method achieves better results. In summary, the proposed method obtains both a better visual effect and better quantitative indices.

4.4. Numerical Convergence of the Proposed Algorithm

Since our model is convex for a fixed ω , the convergence of the S subproblem is guaranteed by the convergence result of ADMM. Figure 12 numerically illustrates the convergence of our algorithm. It shows the relative error curves between the successive stripe estimates S ( k ) and S ( k + 1 ) ; the relative error kept decreasing as the iteration number increased, which indicates that the proposed algorithm converges numerically.
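The stopping quantity plotted in Figure 12 is the standard relative change between successive iterates. A minimal sketch, using a hypothetical geometrically converging iterate sequence in place of actual ADMM/ISD updates:

```python
import numpy as np

def relative_error(S_new, S_old, eps=1e-12):
    """Relative change ||S^(k+1) - S^(k)||_F / (||S^(k)||_F + eps)."""
    return np.linalg.norm(S_new - S_old) / (np.linalg.norm(S_old) + eps)

# Hypothetical iterate sequence converging to a fixed point S_star.
S_star = np.ones((4, 4))
S_old = S_star + 1.0
errors = []
for k in range(1, 6):
    S_new = S_star + 0.5 ** k       # stand-in for one ADMM/ISD update
    errors.append(relative_error(S_new, S_old))
    S_old = S_new

# The relative error decreases monotonically, as in Figure 12.
assert all(e1 > e2 for e1, e2 in zip(errors, errors[1:]))
```

In practice the iteration is terminated once this relative error falls below a preset tolerance.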

5. Conclusions

In this paper, we have proposed a new destriping model that achieves stripe detection and stripe removal simultaneously. The proposed model exploits both the directional and the structural properties of stripes and utilizes a joint sparsity prior to describe the structural property. We have developed a multi-stage alternating optimization method, combining ISD and ADMM, to solve the resulting minimization problem. The proposed method has been compared with WAFT, SLD, IRUV, LRSID, and GSUTV, and the experimental results show its superiority in terms of both quantitative and qualitative assessments. Moreover, the simulated experiments demonstrate the accuracy of the stripe detection.
Although the proposed method removes both periodic and nonperiodic stripes in the horizontal or vertical direction well, it has one shortcoming: it cannot remove oblique stripes. In future work, we will focus on extending our method to overcome this limitation.

Author Contributions

All authors contributed to the design of the methodology and the validation of the experiments: Y.-J.S. and T.-Z.H. wrote the draft; T.-H.M. and Y.C. reviewed and revised the paper.

Funding

This research was supported by NSFC (61772003), the Fundamental Research Funds for the Central Universities (ZYGX2016J132), the National Postdoctoral Program for Innovative Talents (BX20180252), and the China Postdoctoral Science Foundation (2018M643611).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, Y.; Huang, T.Z.; Zhao, X.L. Destriping of multispectral remote sensing image using low-rank tensor decomposition. IEEE J. STARS 2018, 11, 4950–4967. [Google Scholar] [CrossRef]
  2. Chen, J.; Shao, Y.; Guo, H.; Wang, W. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124. [Google Scholar] [CrossRef]
  3. Zhu, Z.; Yin, H.; Chai, Y.; Li, Y.; Qi, G. A novel multi-modality image fusion method based on image decomposition and sparse representation. Inf. Sci. 2018, 432, 516–529. [Google Scholar] [CrossRef]
  4. Chappalli, M.B.; Bose, N.K. Simultaneous noise filtering and super-resolution with second-generation wavelets. IEEE Signal Process. Lett. 2005, 12, 772–775. [Google Scholar] [CrossRef]
  5. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Total variation spatial regularization for sparse hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502. [Google Scholar] [CrossRef]
  6. Zhao, X.L.; Wang, F.; Huang, T.Z.; Ng, M.K.; Plemmons, R.J. Deblurring and sparse unmixing for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4045–4058. [Google Scholar] [CrossRef]
  7. Bruce, L.M.; Koger, C.H.; Li, J. Dimensionality reduction of hyperspectral data using discrete wavelet transform feature extraction. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2331–2338. [Google Scholar]
  8. Murphy, J.M.; Le Moigne, J.; Harding, D.J. Automatic Image Registration of Multi-Modal Remotely Sensed Data with Global Shearlet Features. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1685–1704. [Google Scholar] [CrossRef] [PubMed]
  9. Pande-Chhetri, R.; Abd-Elrahman, A. De-striping hyperspectral imagery using wavelet transform and adaptive frequency domain filtering. ISPRS J. Photogramm. Remote Sens. 2011, 66, 620–636. [Google Scholar] [CrossRef]
  10. Torres, J.; Infante, S.O. Wavelet analysis for the elimination of striping noise in satellite images. Opt. Eng. 2001, 40, 1309–1314. [Google Scholar]
  11. Pan, J.J.; Chang, C.I. Destriping of landsat MSS images by filtering techniques. Photogramm. Eng. Remote Sens. 1992, 58, 1417–1423. [Google Scholar]
  12. Munch, B.; Marone, F.; Stampanoni, M.; Trtik, P. Stripe and ring artifact removal with combined wavelet—Fourier filtering. Opt. Express 2009, 17, 8567–8591. [Google Scholar] [CrossRef]
  13. Carfantan, H.; Idier, J. Statistical linear destriping of satellite-based pushbroom-type images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1860–1871. [Google Scholar] [CrossRef]
  14. Gadallah, F.L.; Csillag, F.; Smith, E.J.M. Destriping multisensor imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511. [Google Scholar] [CrossRef]
  15. Horn, B.K.P.; Woodham, R.J. Destriping LANDSAT MSS images by histogram modification. Comput. Graph. Image Process. 1979, 10, 69–83. [Google Scholar] [CrossRef] [Green Version]
  16. Ma, N.; Zhou, Z.; Luo, L.; Wang, M. Stripe noise reduction in MODIS data: a variational approach. Proc. SPIE Int. Soc. Opt. Eng. 2011, 8193, 393–403. [Google Scholar]
  17. Sun, L.; Neville, R.; Staenz, K.; White, H.P. Automatic destriping of Hyperion imagery based on spectral moment matching. Can. J. Remote Sens. 2008, 34 (Supp. 1), S68–S81. [Google Scholar] [CrossRef]
  18. Wegener, M. Destriping multiple sensor imagery by improved histogram matching. Int. J. Remote Sens. 1990, 11, 859–875. [Google Scholar] [CrossRef]
  19. Corsini, G.; Diani, M.; Walzel, T. Striping removal in MOS-B data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1439–1446. [Google Scholar] [CrossRef]
  20. Bouali, M.; Ladjal, S. Toward optimal destriping of MODIS data using a unidirectional variational model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2924–2935. [Google Scholar] [CrossRef]
  21. Chang, Y.; Fang, H.; Yan, L.; Liu, H. Robust destriping method with unidirectional total variation and framelet regularization. Opt. Express 2013, 21, 23307–23323. [Google Scholar] [CrossRef] [PubMed]
  22. Shen, H.; Zhang, L. A MAP-based algorithm for destriping and inpainting of remotely sensed images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1492–1502. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Zhou, G.; Yan, L.; Zhang, T. A destriping algorithm based on TV-Stokes and unidirectional total variation model. Opt. Int. J. Light Electron Opt. 2016, 127, 428–439. [Google Scholar] [CrossRef]
  24. Zorzi, M.; Chiuso, A. Sparse plus Low rank Network Identification: A Nonparametric Approach. Automatica 2017, 76, 355–366. [Google Scholar] [CrossRef]
  25. Zorzi, M.; Sepulchre, R. AR Identification of Latent-Variable Graphical Models. IEEE Trans. Autom. Control 2016, 61, 2327–2340. [Google Scholar] [CrossRef] [Green Version]
  26. Jiang, T.X.; Huang, T.Z.; Zhao, X.L.; Deng, L.J.; Wang, Y. Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 2019, 28, 2089–2102. [Google Scholar] [CrossRef] [PubMed]
  27. Zheng, Y.B.; Huang, T.Z.; Ji, T.Y.; Zhao, X.L.; Jiang, T.X.; Ma, T.H. Low-rank tensor completion via smooth matrix factorization. Appl. Math. Model. 2019, 70, 677–695. [Google Scholar] [CrossRef]
  28. Dou, H.X.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Huang, J. Directional l0 sparse modeling for image stripe noise removal. Remote Sens. 2018, 10, 361. [Google Scholar] [CrossRef]
  29. Huang, Y.; Cong, H.; Fang, H.; Wang, X. Iteratively reweighted unidirectional variational model for stripe non-uniformity correction. Infrared Phys. Technol. 2016, 75, 107–116. [Google Scholar] [CrossRef]
  30. Song, Q.; Wang, Y.; Yan, X.; Gu, H. Remote sensing images stripe noise removal by double sparse regulation and region separation. Remote Sens. 2018, 10, 998. [Google Scholar] [CrossRef]
  31. Zhou, G.; Fang, H.; Lu, C.; Wang, S.; Zuo, Z.; Hu, J. Robust destriping of MODIS and hyperspectral data using a hybrid unidirectional total variation model. Opt. Int. J. Light Electron Opt. 2015, 126, 838–845. [Google Scholar] [CrossRef]
  32. Chen, Y.; Huang, T.Z.; Zhao, X.L.; Deng, L.J. Hyperspectral image restoration using framelet-regularized low-rank nonnegative matrix factorization. Appl. Math. Model. 2018, 63, 128–147. [Google Scholar] [CrossRef]
  33. Prasad, S.; Labate, D.; Cui, M.; Zhang, Y. Morphologically Decoupled Structured Sparsity for Rotation-Invariant Hyperspectral Image Analysis. IEEE Trans. Geosci. Remote Sens. 2017, 99, 1–12. [Google Scholar] [CrossRef]
  34. Wang, Y.; Peng, J.J.; Leung, Y.; Zhao, X.L.; Meng, D.Y. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. STARS 2018, 11, 1227–1243. [Google Scholar] [CrossRef]
  35. Chang, Y.; Yan, L.; Fang, H.; Luo, C. Anisotropic spectral-spatial total variation model for multispectral remote sensing image destriping. IEEE Trans. Image Process. 2015, 24, 1852–1866. [Google Scholar] [CrossRef] [PubMed]
  36. Lu, X.; Wang, Y.; Yuan, Y. Graph-regularized low-rank representation for destriping of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4009–4018. [Google Scholar] [CrossRef]
  37. Qi, G.; Zhu, Z.; Chen, Y.; Wang, J.; Zhang, Q.; Zeng, F. Morphology-based visible-infrared image fusion framework for smart city. Int. J. Simul. Process Model. 2018, 13, 523–536. [Google Scholar] [CrossRef]
  38. Chang, Y.; Yan, L.; Wu, T.; Sheng, Z. Remote sensing image stripe noise removal: From image decomposition perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031. [Google Scholar] [CrossRef]
  39. Liu, X.; Lu, X.; Shen, H.; Yuan, Q.; Jiao, Y.; Zhang, L. Stripe noise separation and removal in remote sensing images by consideration of the global sparsity and local variational properties. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3049–3060. [Google Scholar] [CrossRef]
  40. Chen, Y.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Wang, M. Group sparsity based regularization model for remote sensing image stripe noise removal. Neurocomputing 2017, 267, 95–106. [Google Scholar] [CrossRef]
  41. Fan, Y.R.; Wang, Y.; Huang, T.Z. Enhanced joint sparsity via iterative support detection. Inf. Sci. 2017, 415–416, 298–318. [Google Scholar] [CrossRef]
  42. Wang, Y.; Yin, W. Sparse Signal Reconstruction via Iterative Support Detection. SIAM J. Imaging Sci. 2010, 3, 462–491. [Google Scholar] [CrossRef] [Green Version]
  43. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122. [Google Scholar] [CrossRef]
  44. Mei, J.J.; Dong, Y.Q.; Huang, T.Z.; Yin, W.T. Cauchy noise removal by nonconvex admm with convergence guarantees. J. Sci. Comput. 2017, 74, 743–766. [Google Scholar] [CrossRef]
  45. Zhao, X.L.; Wang, W.; Zeng, T.Y.; Huang, T.Z.; Ng, M.K. Total variation structured total least squares method for image restoration. SIAM J. Sci. Comput. 2013, 35, B1304–B1320. [Google Scholar] [CrossRef]
  46. Zuo, W.; Meng, D.; Zhang, L.; Feng, X. A generalized iterated shrinkage algorithm for non-convex sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, 2013; pp. 217–224. [Google Scholar] [CrossRef]
  47. Wang, Y.T.; Zhao, X.L.; Jiang, T.X.; Deng, L.J.; Zhang, Y.T. A total variation and group sparsity based tensor optimization model for video rain streak removal. Signal Process. Image Commun. 2018. [Google Scholar] [CrossRef]
  48. Eckstein, J.; Bertsekas, D.P. On the douglas-rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Programm. 1992, 55, 293–318. [Google Scholar] [CrossRef]
  49. Glowinski, R. Lectures on Numerical Methods for Nonlinear Variational Problems; Springer: Berlin/Heidelberg, Germany, 1980. [Google Scholar]
  50. Donoho, D. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef] [Green Version]
  51. Donoho, D.; Johnstone, I. Adapting to unknown smoothness via Wavelet Shrinkage. Publ. Am. Stat. Assoc. 1995, 90, 1200–1224. [Google Scholar] [CrossRef]
  52. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Destriping results on Terra MODIS by GSUTV. The first line: (a) Degraded image. (b) Destriping image. (c) Stripe component. The second line: (d) Horizontal gradient of the observed image. (e) Vertical gradient of the observed image. (f) Horizontal gradient of the destriping image. (g) Vertical gradient of the destriping image. (h) Horizontal gradient of the stripe component. (i) Vertical gradient of the stripe component. The third line: (j) Histogram of horizontal gradient. (k) Histogram of vertical gradient. The fourth line: (l) ℓ2-norm values of the stripe component.
Figure 2. The framework of the proposed model. ADMM, alternating direction method of multipliers; ISD, iterative support detection.
Figure 3. Parameter analysis of Case 3. The PSNR curve as a function of: (a) λ 1 ; (b) λ 2 ; (c) β .
Figure 4. Simulated destriping results of Case 9. (a) Original image; (b) degraded image; destriping images of: (c) WAFT; (d) SLD; (e) IRUV; (f) LRSID; (g) GSUTV; (h) JSUTV.
Figure 5. Simulated destriping results of Case 3. (a) Original image; (b) degraded image; destriping images of: (c) WAFT; (d) SLD; (e) IRUV; (f) LRSID; (g) GSUTV; (h) JSUTV.
Figure 6. The estimated stripe of Case 9. (a) The true stripe. Estimated stripe images by (b) WAFT, (c) SLD, (d) IRUV, (e) LRSID, (f) GSUTV, and (g) JSUTV. The corresponding error images of (h) the true stripe, (i) WAFT, (j) SLD, (k) IRUV, (l) LRSID, (m) GSUTV, and (n) JSUTV.
Figure 7. The estimated stripe of Case 3. (a) The true stripe. Estimated stripe images by (b) WAFT, (c) SLD, (d) IRUV, (e) LRSID, (f) GSUTV, and (g) JSUTV. The corresponding error images of (h) the true stripe, (i) WAFT, (j) SLD, (k) IRUV, (l) LRSID, (m) GSUTV, and (n) JSUTV.
Figure 8. Column mean cross-track profiles of Figure 4. Column mean cross-track profiles’ comparison between the original image and the estimated stripe by: (a) WAFT; (b) SLD; (c) IRUV; (d) LRSID; (e) GSUTV; and (f) JSUTV.
Figure 9. Column mean cross-track profiles of Figure 5. Column mean cross-track profiles’ comparison between the original image and the estimated stripe by: (a) WAFT; (b) SLD; (c) IRUV; (d) LRSID; (e) GSUTV; and (f) JSUTV.
Figure 10. Destriping results of the Terra MODIS image. (a) Degraded image; destriping images by: (b) WAFT; (c) SLD; (d) IRUV; (e) LRSID; (f) GSUTV; (g) JSUTV.
Figure 11. Destriping results of the Aqua MODIS image. (a) Degraded image; destriping images by: (b) WAFT; (c) SLD; (d) IRUV; (e) LRSID; (f) GSUTV; (g) JSUTV.
Figure 12. Curves of relative error values versus: (a) inner iterations; (b) outer iterations.
Table 1. A comparison of the related optimization-based methods and their properties. UTV, unidirectional total variation.
| Method | Stripe Prior: Sparsity | Stripe Prior: Low-Rankness | Stripe Prior: Smoothness | Image Prior: Smoothness | Algorithm |
|---|---|---|---|---|---|
| IRUV [29] | — | — | UTV for factors (ℓp norm) | UTV for factors (ℓp norm) | iterative reweighted least squares |
| WDSUV [30] | ℓ0 norm | — | UTV (combination of the weighted ℓ1 norm and the ℓ0 norm) | weighted UTV (ℓ1 norm) | ADMM |
| LRSID [38] | — | nuclear norm | — | anisotropic TV | ADMM |
| GSLV [39] | ℓ0 norm | — | UTV (ℓ1 norm) | UTV (ℓ1 norm) | ADMM |
| DSM [28] | ℓ1 norm | — | UTV (ℓ0 norm) | UTV (ℓ1 norm) | proximal ADMM |
| GSUTV [40] | ℓ2,1 norm | — | UTV (ℓ1 norm) | UTV (ℓ1 norm) | ADMM |
| JSUTV | ℓw,2,1 norm | — | UTV (ℓ1 norm) | UTV (ℓ1 norm) | ADMM combined with iterative support detection |
Table 2. Quantitative results (PSNR (dB), SSIM values, and CPU time) of the simulated experiments.
| Stripe Type (Image) | Case | Index | Degraded | WAFT | SLD | IRUV | LRSID | GSUTV | JSUTV |
|---|---|---|---|---|---|---|---|---|---|
| Periodical (IKONOS) | Case 1 (p = 10, r = 0.2, I = 100) | PSNR | 17.62 | 38.07 | 41.27 | 40.58 | 40.26 | 53.50 | 54.78 |
| | | SSIM | 0.403 | 0.991 | 0.998 | 0.997 | 0.994 | 0.999 | 0.999 |
| | | CPU time | — | 0.433 | 0.139 | 2.877 | 6.890 | 4.507 | 5.142 |
| | Case 2 (p = 10, r = 0.5, I = 100) | PSNR | 13.57 | 33.22 | 30.31 | 30.28 | 29.76 | 53.27 | 54.26 |
| | | SSIM | 0.183 | 0.980 | 0.980 | 0.978 | 0.968 | 0.999 | 0.999 |
| | | CPU time | — | 0.372 | 0.322 | 18.260 | 7.916 | 5.070 | 5.343 |
| | Case 3 (p = 10, r = 0.8, I = 100) | PSNR | 11.57 | 34.23 | 37.45 | 34.86 | 34.53 | 49.39 | 51.30 |
| | | SSIM | 0.137 | 0.986 | 0.997 | 0.992 | 0.983 | 0.998 | 0.999 |
| | | CPU time | — | 0.523 | 0.278 | 2.783 | 6.655 | 4.996 | 5.503 |
| | Case 4 (p = 15, r = 0.2, I = 100) | PSNR | 17.52 | 32.48 | 33.05 | 32.96 | 32.87 | 52.33 | 54.73 |
| | | SSIM | 0.409 | 0.981 | 0.989 | 0.988 | 0.982 | 0.999 | 0.999 |
| | | CPU time | — | 0.441 | 0.288 | 3.083 | 7.685 | 2.690 | 4.382 |
| | Case 5 (p = 15, r = 0.5, I = 100) | PSNR | 13.90 | 33.22 | 34.72 | 34.41 | 33.86 | 51.39 | 53.93 |
| | | SSIM | 0.194 | 0.981 | 0.992 | 0.989 | 0.979 | 0.998 | 0.999 |
| | | CPU time | — | 0.457 | 0.449 | 2.824 | 7.692 | 3.599 | 4.996 |
| | Case 6 (p = 15, r = 0.8, I = 100) | PSNR | 11.50 | 34.38 | 40.07 | 35.88 | 34.25 | 48.81 | 50.12 |
| | | SSIM | 0.134 | 0.983 | 0.998 | 0.993 | 0.975 | 0.998 | 0.998 |
| | | CPU time | — | 0.288 | 0.180 | 2.782 | 7.059 | 5.498 | 5.298 |
| Non-periodical (MODIS) | Case 7 (r = 0.2, I = 50) | PSNR | 19.79 | 36.39 | 44.20 | 37.59 | 38.97 | 51.00 | 52.60 |
| | | SSIM | 0.398 | 0.986 | 0.998 | 0.999 | 0.993 | 0.999 | 0.999 |
| | | CPU time | — | 0.222 | 0.318 | 11.030 | 7.499 | 3.490 | 3.678 |
| | Case 8 (r = 0.5, I = 50) | PSNR | 16.10 | 31.00 | 34.40 | 36.55 | 35.08 | 50.57 | 51.89 |
| | | SSIM | 0.223 | 0.975 | 0.991 | 0.996 | 0.983 | 0.999 | 0.999 |
| | | CPU time | — | 0.273 | 0.200 | 8.056 | 9.837 | 5.311 | 5.765 |
| | Case 9 (r = 0.8, I = 50) | PSNR | 14.21 | 30.78 | 37.78 | 34.33 | 33.90 | 45.17 | 47.78 |
| | | SSIM | 0.182 | 0.975 | 0.998 | 0.994 | 0.978 | 0.997 | 0.999 |
| | | CPU time | — | 0.287 | 0.316 | 7.250 | 8.147 | 5.558 | 5.982 |
| | Case 10 (r = 0.2, I = 100) | PSNR | 17.35 | 33.21 | 43.93 | 38.88 | 38.04 | 50.75 | 51.61 |
| | | SSIM | 0.266 | 0.982 | 0.997 | 0.995 | 0.991 | 0.999 | 0.999 |
| | | CPU time | — | 0.232 | 0.078 | 7.250 | 8.837 | 3.909 | 4.092 |
| | Case 11 (r = 0.5, I = 100) | PSNR | 13.55 | 30.07 | 33.62 | 33.33 | 32.45 | 46.45 | 49.23 |
| | | SSIM | 0.154 | 0.976 | 0.993 | 0.996 | 0.988 | 0.999 | 0.999 |
| | | CPU time | — | 0.241 | 0.076 | 8.906 | 9.873 | 5.121 | 5.738 |
| | Case 12 (r = 0.8, I = 100) | PSNR | 11.57 | 29.35 | 32.25 | 32.14 | 31.32 | 42.67 | 46.76 |
| | | SSIM | 0.117 | 0.974 | 0.997 | 0.997 | 0.973 | 0.998 | 0.999 |
| | | CPU time | — | 0.289 | 0.248 | 8.376 | 8.983 | 6.004 | 6.839 |
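The PSNR values in Table 2 follow the standard definition; a minimal sketch in Python, assuming an 8-bit peak intensity of 255 (the toy images below are hypothetical, not the paper's test data):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a clean image and a
    destriped result; `peak` is the maximum intensity (8-bit assumed)."""
    ref = np.asarray(reference, dtype=np.float64)
    est = np.asarray(estimate, dtype=np.float64)
    mse = np.mean((ref - est) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

# Toy check: a constant offset of 8 gray levels gives MSE = 64.
ref = np.full((16, 16), 128.0)
est = ref + 8.0
print(round(psnr(ref, est), 2))   # 30.07
```

Higher PSNR (and SSIM closer to 1) indicates a destriped image closer to the ground truth.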
Table 3. The detection error rate (DER) and detection missing rate (DMR) results of simulated experiments.
| Index | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Case 6 | Case 7 | Case 8 | Case 9 | Case 10 | Case 11 | Case 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DER | 0.0000 | 0.0000 | 0.0030 | 0.0000 | 0.0000 | 0.0000 | 0.0025 | 0.0050 | 0.0250 | 0.0075 | 0.0125 | 0.0075 |
| DMR | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0030 | 0.0025 | 0.0125 | 0.0100 | 0.0000 | 0.0000 | 0.0000 |
| a | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 2 | 10 | 3 | 5 | 3 |
| b | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 5 | 4 | 0 | 0 | 0 |
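Under one plausible reading of these indices (a falsely detected stripe column counts toward DER and a missed stripe column toward DMR, each normalized by the total number of image columns), the computation can be sketched as follows. The definitions and the column count of 400 are assumptions for illustration; the formal definitions appear earlier in the paper.

```python
def detection_rates(true_cols, detected_cols, n_cols):
    """DER: falsely flagged columns / n_cols; DMR: missed columns / n_cols.

    `true_cols` and `detected_cols` are sets of stripe column indices.
    These formulas are our reading of the paper's indices, not verbatim.
    """
    true_cols, detected_cols = set(true_cols), set(detected_cols)
    der = len(detected_cols - true_cols) / n_cols
    dmr = len(true_cols - detected_cols) / n_cols
    return der, dmr

# Toy example: 400 columns, 3 true stripes; one false alarm, one miss.
der, dmr = detection_rates({10, 20, 30}, {10, 20, 45}, 400)
print(der, dmr)   # 0.0025 0.0025
```

With this reading, a perfect detection yields DER = DMR = 0, matching most of the entries in Table 3.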
Table 4. Quantitative results on real data using NR and mean relative deviation (MRD).
| Image | Index | WAFT | SLD | IRUV | LRSID | GSUTV | JSUTV |
|---|---|---|---|---|---|---|---|
| Terra MODIS image | NR | 2.94 | 2.98 | 4.82 | 4.90 | 3.44 | 3.71 |
| | MRD | 0.1101 | 0.0475 | 0.0153 | 0.9961 | 0.0758 | 0.0101 |
| Aqua MODIS image | NR | 3.45 | 2.44 | 3.53 | 3.73 | 3.56 | 3.60 |
| | MRD | 0.1087 | 0.0076 | 0.1140 | 0.9960 | 0.0101 | 0.0387 |

Share and Cite

MDPI and ACS Style

Sun, Y.-J.; Huang, T.-Z.; Ma, T.-H.; Chen, Y. Remote Sensing Image Stripe Detecting and Destriping Using the Joint Sparsity Constraint with Iterative Support Detection. Remote Sens. 2019, 11, 608. https://doi.org/10.3390/rs11060608
