Article

Remote Sensing Images Stripe Noise Removal by Double Sparse Regulation and Region Separation

1 School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
2 National Key Laboratory of Science and Technology on Multi-spectral Information Processing, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(7), 998; https://doi.org/10.3390/rs10070998
Submission received: 11 June 2018 / Accepted: 15 June 2018 / Published: 22 June 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Stripe noise removal continues to be an active field of research in remote sensing image processing. Most existing approaches are prone to generating artifacts in extreme areas and to removing stripe-like details. In this paper, a weighted double sparsity unidirectional variation (WDSUV) model is constructed to reduce this phenomenon. The WDSUV takes advantage of the sparsity of stripe noise in both the spatial domain and the gradient domain, and processes the heavy stripe area, the extreme area and the regular noise-corrupted area with different strategies. The proposed model consists of two variation terms and two sparsity terms that together exploit the intrinsic properties of stripe noise. The alternating direction method of multipliers (ADMM) is then employed to solve the optimization model in an alternating minimization scheme. Compared with state-of-the-art approaches, experimental results on both synthetic and real remote sensing data demonstrate that the proposed model achieves a better destriping performance in terms of small-detail preservation, stripe noise estimation and artifact reduction.

Graphical Abstract

1. Introduction

The remote sensing image plays an important role in environment monitoring, resource monitoring and military and battlefield situation observation [1,2,3]. However, sensing images often suffer from stripe-like noise, which seriously degrades the image's visual quality and also has a negative influence on high-level applications, such as target detection and data classification [4,5,6,7]. Due to the inconsistent responses of detectors and the imperfect calibration of amplifiers, the gain and offset of the true signal vary from detector to detector, producing stripe noise in Moderate Resolution Imaging Spectroradiometer (MODIS) data and hyperspectral images. The MODIS data cover 36 spectral bands ranging in wavelength from 0.41 μm to 14.4 μm. Three typical striped images are displayed in Figure 1, and the stripe effect is obvious when zooming in. This noise is periodic with a period of 10 pixels, owing to the detectors' calibration errors and the charge-coupled device array scanning forward and reverse across-track [8,9].
In recent decades, a large number of destriping algorithms have been proposed for remote sensing images; they can be grouped into several categories according to their mechanisms, such as filtering-based methods, statistics-based methods and optimization model-based approaches.
Statistics-based methods include the moment matching algorithm [8,10,11] and the midway equalization algorithm [12]. Moment matching techniques assume that the sensors' outputs have the same statistical characteristics, including mean and deviation, and adjust all sensors' outputs to a reference one. In [11], the authors proposed a piece-wise approach to remove irregular stripes using local statistical information. The midway equalization method supposes that the histogram distributions of neighbouring columns are similar, and a local midway histogram is computed for the current column. The performance of these methods is limited by their hypothesis, which does not always hold: when a structure exists along the stripe direction, the statistics of neighbouring columns are distinct.
Filtering-based algorithms remove stripe components with a filter in the transformed domain after a discrete Fourier transform [13] or wavelet decomposition [14,15,16]. A regular periodic stripe exhibits a particular frequency component and can be easily identified in the transformed domain. Unfortunately, blurring effects and artifacts may appear when the filter is not well designed, and textures similar to stripes are also prone to being smoothed, as they are likely to be identified as noise.
Recently, optimization model-based destriping methods have attracted considerable interest, and a number of models have been formulated. These methods introduce prior knowledge of the noise and of ideal remote sensing images into an energy function to recover the latent content. The unidirectional variation (UV) model was first proposed by Bouali et al. [17]. They formulated a differential optimization model based on the directional property of stripe noise: it only affects the gradient information along one direction and does not change it in the other direction. To overcome several limitations [18] of the UV model, improved algorithms were subsequently designed. Zhou et al. [19] designed a hybrid unidirectional total variation (HUTV) model that combines an $l_1$ data fidelity term and a gradient fidelity term, aiming to remove stripe noise of various intensities. To distinguish the noise from texture and edges, Zhang et al. [18] proposed a structure-guided UV model. Wang et al. [20] utilized difference curvature to extract the spatial information and formulated a spatially weighted UV model. The authors in [21] combined the UV model and framelet regularization to preserve detail information while removing the stripe noise. In [22], the UV model was converted to a least squares problem by an iteratively reweighted technique that is easy to implement. Regarding destriping as an ill-posed inverse problem, the (column) sparsity property and the low-rank property of stripe noise served as regularization to improve the stripe estimation performance in [23,24,25]. Chen et al. [26] combined a group sparsity constraint and total variation regularization to remove stripe noise while preserving edge information. Dou et al. [27] proposed a directional $l_0$ sparse model for stripe noise removal. Some researchers also exploited the high spectral correlation among different bands of hyperspectral data to recover the latent image [28,29,30]. With the rapid development of deep learning, deep convolutional network-based destriping methods [31] were proposed and showed a competitive stripe noise removal ability on infrared images. However, this framework was designed for weak stripe noise only and is not suitable for strong stripe noise.
In summary, most existing methods recover clean images directly or indirectly from degraded images based on stripe noise properties, such as gradient information and sparsity characteristics. However, a common problem in recent research efforts is that details or structures along the stripe direction cannot be recognized well and are easily confused with stripe noise, resulting in an oversmoothing effect. In addition, once the data are corrupted by serious stripe noise, the output cannot recover the scene well and often suffers from stripe artifacts. In this paper, we attempt to alleviate these two problems. The sparsity property of stripe noise exists not only in the spatial domain but also in the gradient domain. We therefore present an optimization framework that uses a double sparsity counting scheme to estimate the stripe noise more completely and to protect details from being destroyed during destriping. A region-separation processing strategy is also adopted. Specifically, for the heavy stripe-corrupted area, we inpaint the content by texture diffusion only along the across-track direction. Extremely dark or extremely bright areas are kept the same as in the original. For the normal stripe noise-corrupted area, the noise is estimated from the stripe's characteristics.
The remainder of this paper is organized as follows: Section 2 introduces some properties of stripe noise. Section 3 presents the proposed weighted double sparse destriping model. Section 4 discusses the experimental results and comparison analysis. Section 5 presents some discussions about the proposed model. Finally, conclusions are provided in Section 6.

2. Stripe Noise Properties Analysis

2.1. Stripe Variation Property

In most model-based destriping methods, the noisy image is formulated with an additive noise model [21,24,32], as follows:
$$Y_{u,v} = X_{u,v} + S_{u,v},$$
where X and Y denote the unknown desired image and the observed degraded image, respectively, S represents the stripe noise, and $(u,v)$ is the spatial coordinate in the image. The purpose of a destriping technique is to estimate a clean image X from Equation (1). This is a typical ill-posed inverse problem, and additional regularization is needed to constrain the solution. In this paper, we assume the stripe direction is along the u-axis.
Stripe noise obscures image details and sometimes even destroys the texture, which makes recovering the real signal quite challenging, as shown in Figure 1. Fortunately, this type of noise has a good directional property: it usually appears along only one direction, so the gradient across the stripe direction is far greater than that along the stripe, as illustrated in Figure 2. This property can be expressed by:
$$\| \nabla_u Y \| \ll \| \nabla_v Y \| .$$
Based on this property, the unidirectional variation (UV) energy model [9] is formulated as:
$$E(X) = TV_u(Y - X) + \lambda \, TV_v(X),$$
in which $TV_u(X) = \int_\Omega |\nabla_u X| \, du \, dv$ is the total variation of X along the u-axis, and λ is the regularization parameter that determines the degree of smoothing along the v-axis. The UV model (3) attempts to keep the variation of the destriped image X along the u-axis consistent with that of the corrupted image, while reducing the variation across the stripe direction on the v-axis. This seems reasonable, yet the UV model has two shortcomings. First, it is prone to producing artifacts when the noise intensity is so serious that it damages the texture structure. Second, certain small details are easily smoothed out during the iterative procedure when λ is set large.
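As a concrete illustration of Equation (2), the following numpy sketch (our own illustration, not code from the paper) builds a smooth image, injects whole-row stripes, and compares the two directional gradient energies; the number, rows and magnitudes of the stripes are arbitrary choices.

```python
import numpy as np

def directional_gradients(img):
    """Forward differences along the stripe direction u (axis 1) and
    across it, direction v (axis 0); periodic boundaries via np.roll."""
    grad_u = np.roll(img, -1, axis=1) - img  # along-stripe direction
    grad_v = np.roll(img, -1, axis=0) - img  # across-stripe direction
    return grad_u, grad_v

# Toy check of Equation (2): add horizontal stripes to a smooth ramp.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.2, 0.8, 128), (128, 1))
stripe = np.zeros_like(clean)
rows = rng.choice(128, 12, replace=False)
stripe[rows, :] = rng.uniform(-0.3, 0.3, (12, 1))   # constant along each row
noisy = clean + stripe

gu, gv = directional_gradients(noisy)
# The l1 energy along the stripes stays small; across them it is large.
print(np.abs(gu).sum(), np.abs(gv).sum())
```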

2.2. Stripe Structure Property

Stripe noise presents a particular structure: unlike random noise, it has many zero elements in stripe-free regions. In view of this fact, the authors in [24] adopted the $l_0$ norm as a regularizer to constrain the noise matrix S:
$$R_1(S) = \|S\|_0 .$$
Although the $l_0$ norm encourages more zero elements in S, the nonzero elements are distributed randomly, according to the definition of the $l_0$ norm. As a result, non-stripe components are also likely to be treated as noise, causing detail-smoothing effects. Furthermore, when a great proportion of the image is striped, the $l_0$ norm constraint becomes unreliable. Observing the noise matrix, we discover that the gradient of S along the stripe direction u also exhibits significant sparsity, regardless of the noise proportion. Regular stripe noise exhibits this property most clearly, i.e., the matrix $\nabla_u S$ is all zeros, regardless of the noise level and proportion. Accordingly, a unidirectional gradient sparsity regularizer is expressed by:
$$R_2(S) = \|\nabla_u S\|_0 .$$

3. Methodology

Based on the above analysis, we first present the double sparsity UV model (DSUV), which is also built within the variation framework. Taking advantage of the double sparsity of stripe noise, i.e., the signal sparsity and the unidirectional gradient sparsity, combined with the variation property, the DSUV model is formulated as follows:
$$J(S) = \|\nabla_u S\|_1 + \lambda_1 \|\nabla_v (Y - S)\|_1 + \lambda_2 \|S\|_0 + \lambda_3 \|\nabla_u S\|_0 ,$$
in which ∇ denotes the gradient operator, $\|\cdot\|_0$ is the $l_0$-norm counting the number of non-zeros in a matrix, and $\|\cdot\|_1$ is the $l_1$-norm, the sum of all elements' absolute values. In model (6), the front part is the UV minimization. The rear part measures the global sparsity of the stripe noise S and of its unidirectional gradient. The parameter $\lambda_1$ is the variation parameter, and $\lambda_2$, $\lambda_3$ are the sparse counting parameters. The three regularization parameters balance the four constraint terms together. After S is estimated from model (6), the desired image X is obtained by subtracting S from the degraded image Y. The DSUV extracts the stripe component from the whole image. However, distinct areas with different features should be processed separately, which we analyse next.

3.1. Region Separation

Many destriping techniques assume that stripe noise is present over whole columns of an image, which is not always true. In some extremely dark or extremely bright areas, the signal is saturated and the noise in these regions is almost zero. Therefore, it is not necessary to further estimate noise in these areas. Nevertheless, if an area's extreme values were generated by a strong stripe, these elements should be recovered. We name such areas the strong stripe area (SSA), also called the extreme stripe area. Therefore, distinguishing the extreme area (EA) from the SSA is necessary. An example of SSA and EA is illustrated in Figure 3.
In this subsection, a simple and effective method for detecting and distinguishing the EA and SSA is proposed, based on the stripe noise's properties. Take extremely dark area detection, for example. In both the SSA and EA, grey values are all close to zero and can be detected by thresholds. Then, the dark area must be separated from the strong stripe noise. An important factor that discriminates the two areas is that stripe noise runs along only one direction with a width usually smaller than some value, for example two lines in MODIS data. Thus, once the number of a pixel's extreme-valued neighbours across the stripe direction exceeds a given value, the pixel belongs to the EA. Figure 4 displays an example of detecting extremely dark areas and the SSA in MODIS data. Figure 4a is an original stripe noise-corrupted subimage cropped from Terra MODIS data band 27. Figure 4b displays the extremely dark area detection result, in which the neighbouring zero values exceed 2 lines in the vertical direction. Then, the horizontal extreme area is calculated similarly in Figure 4c. Figure 4d is the dark extreme stripe obtained by subtracting the extreme area (b) from the horizontal dark area (c). However, there are some small fragments in Figure 4d because the detected dark area and horizontal dark area are not perfectly coincident. To remove these fragment stripes, we employ morphological operations, i.e., dilation and erosion operators, on Figure 4d, and a final refined extreme dark stripe is obtained in Figure 4e. Thus, the dark extreme stripe area and dark extreme area are separated, and extremely bright areas and extremely bright stripes can be detected in the same manner. A minimal sketch of this detection procedure is given below.
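The following Python sketch mirrors our reading of this detection procedure; the threshold and structuring-element sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def separate_dark_regions(img, dark_thresh=0.02, max_stripe_width=2):
    """Separate the extreme area (EA) from the strong stripe area (SSA)
    for the dark case, assuming img is normalized to [0, 1] and stripes
    run along rows (the u-axis). A dark pixel whose vertical run of dark
    neighbours exceeds the stripe width belongs to the EA, not the SSA."""
    dark = img <= dark_thresh                      # all near-zero pixels
    # Keep only vertical dark runs longer than max_stripe_width: erosion
    # with a (width+1)-tall column kernel kills thin stripes, dilation
    # with the same kernel restores the surviving runs' extent -> EA.
    run_kernel = np.ones((max_stripe_width + 1, 1), dtype=bool)
    ea = ndimage.binary_erosion(dark, structure=run_kernel)
    ea = ndimage.binary_dilation(ea, structure=run_kernel)
    ssa = dark & ~ea                               # what is left is stripe
    # Morphological opening removes the small fragments of Figure 4d.
    ssa = ndimage.binary_opening(ssa, structure=np.ones((1, 5), dtype=bool))
    return ea, ssa
```

The bright case follows by thresholding from above instead of below.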

3.2. Proposed Weighted Double Sparse UV Model

As analysed in the last subsection, the EA and SSA should be processed separately. We denote the extreme area by $\Psi_e$ and the strong stripe area by $\Psi_s$. In $\Psi_s$, the texture is badly corrupted, and some inpainting methods could recover the corrupted content [33,34,35]. However, they usually need a large clean region around the missing content or utilize multichannel data, and they usually involve heavy computation. Here, an alternative strategy is adopted: two indicative factors are designed to handle the special areas while removing the normal stripe noise. The indicative factor for the badly corrupted elements of the SSA is defined as:
$$W_s(u,v) = \begin{cases} 0, & \text{if } (u,v) \in \Psi_s, \\ 1, & \text{otherwise.} \end{cases}$$
For these areas, we prefer to update them only across the stripe direction. Similarly, an indicative function for the EA is expressed by:
$$W_e(u,v) = \begin{cases} 0, & \text{if } (u,v) \in \Psi_e, \\ 1, & \text{otherwise.} \end{cases}$$
Combining the DSUV model, the strong stripe factor $W_s$ and the extreme area factor $W_e$, a more robust adaptive version of DSUV, the weighted DSUV (WDSUV) model, is finally formulated as follows:
$$J(S) = \|W_u \nabla_u S\|_1 + \lambda_1 \|W_e \nabla_v (Y - S)\|_1 + \lambda_2 \|S\|_0 + \lambda_3 \|\nabla_u S\|_0 ,$$
in which $W_u = W_s \cdot W_e$ weights the changing levels of S along the stripe direction u, and $W_e$ weights the recovered data across the stripe direction. From Equation (9), we can see that the EA of the original image remains unchanged, and the SSA is updated only across the stripe direction. According to Equation (9), our WDSUV model is a general variational framework covering the UV and SUV. The UV is a particular case of WDSUV when $W_u = 1$, $W_s = 1$, $\lambda_2 = 0$ and $\lambda_3 = 0$. Our model reduces to SUV when $W_u = 1$, $W_s = 1$ and $\lambda_3 = 0$. Note that the model in Equation (9) is similar to the directional sparse $l_0$ model in [27] to some extent, since they also apply the $l_0$ norm to the directional gradient of S. However, they enforce the $l_1$ norm on S, whereas our model employs the $l_0$ norm on S; the $l_0$ norm tends to generate a more regular S than the $l_1$ norm. Moreover, the directional sparse $l_0$ model estimates the noise without considering these special areas and may generate artifacts in the SSA and EA. With the introduction of the double sparsity norm of S, our model yields a more regular estimated stripe. Figure 5 illustrates the flow of the proposed model.
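To make the weighting concrete, a small sketch (our illustration) builds the indicative weights of Equations (7) and (8) and the combined weight $W_u = W_s \cdot W_e$ of Equation (9) from the boolean masks of the previous sketch.

```python
import numpy as np

def build_weights(ea_mask, ssa_mask):
    """Indicative factors of Equations (7)-(8) and the combined weight
    of Equation (9), from boolean EA/SSA masks of the same image shape."""
    w_s = np.where(ssa_mask, 0.0, 1.0)   # zero inside the strong stripe area
    w_e = np.where(ea_mask, 0.0, 1.0)    # zero inside the extreme area
    w_u = w_s * w_e                      # weight on the along-stripe gradient
    return w_u, w_e
```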

3.3. Model Optimization

In this subsection, we estimate the stripe noise S from the optimization model in Equation (9). The $l_0$ regularizer is more difficult to handle than the $l_2$ norm, since it is neither differentiable nor convex; a straightforward approach such as gradient descent cannot obtain its solution. Here, we adopt the alternating direction method of multipliers (ADMM) [36,37], which introduces auxiliary variables and updates them iteratively, for its fast convergence and stability [21]. By introducing variables $d_1$, $d_2$ and $d_3$, the unconstrained optimization in Equation (9) converts to a constrained problem:
$$S = \arg\min_S \big\{ \|W_u d_1\|_1 + \lambda_1 \|W_e d_2\|_1 + \lambda_2 \|d_3\|_0 + \lambda_3 \|d_1\|_0 \big\}, \quad \text{s.t.} \;\; d_1 = \nabla_u S, \; d_2 = \nabla_v (Y - S), \; d_3 = S .$$
Then, using the augmented Lagrangian method, Equation (10) can be converted to an unconstrained minimization with penalty terms, expressed by:
$$S = \arg\min_{S, d_1, d_2, d_3} \Big\{ \|W_u d_1\|_1 + p_1^T(\nabla_u S - d_1) + \tfrac{\beta_1}{2}\|d_1 - \nabla_u S\|_2^2 + \lambda_1 \|W_e d_2\|_1 + p_2^T(\nabla_v (Y - S) - d_2) + \tfrac{\beta_2}{2}\|d_2 - \nabla_v (Y - S)\|_2^2 + \lambda_2 \|d_3\|_0 + p_3^T(S - d_3) + \tfrac{\beta_3}{2}\|d_3 - S\|_2^2 + \lambda_3 \|d_1\|_0 \Big\} ,$$
in which $\beta_1$, $\beta_2$ and $\beta_3$ are penalty parameters, and $p_1$, $p_2$ and $p_3$ are the Lagrange multipliers. In Equation (11), the unknown variables are split, and four subminimization problems can be iteratively solved for $S$, $d_1$, $d_2$ and $d_3$.
First, the $d_1$-related subproblem is given by:
$$\arg\min_{d_1} \Big\{ \|W_u d_1\|_1 + p_1^T(\nabla_u S - d_1) + \tfrac{\beta_1}{2}\|d_1 - \nabla_u S\|_2^2 + \lambda_3 \|d_1\|_0 \Big\} .$$
Both the $l_0$ and $l_1$ norms act on $d_1$, and the solution can be computed by the following expression:
$$d_1^{(k+1)} = \mathrm{cshrink}\Big( \nabla_u S^{(k)} + \frac{p_1^{(k)}}{\beta_1}, \; \frac{W_u}{\beta_1}, \; \sqrt{\frac{2\lambda_3}{\beta_1}} \Big) ,$$
where $\mathrm{cshrink}(X, \theta_1, \theta_2)$ is calculated as:
$$\mathrm{cshrink}(X, \theta_1, \theta_2) = \begin{cases} X - \theta_1, & \text{if } X > \theta_1 + \theta_2, \\ 0, & \text{if } |X| \le \theta_1 + \theta_2, \\ X + \theta_1, & \text{if } X < -(\theta_1 + \theta_2), \end{cases}$$
and k denotes the iteration index.
Then, we solve $d_2$ from the following minimization extracted from Equation (11):
$$\arg\min_{d_2} \Big\{ \lambda_1 \|W_e d_2\|_1 + p_2^T(\nabla_v (Y - S) - d_2) + \tfrac{\beta_2}{2}\|d_2 - \nabla_v (Y - S)\|_2^2 \Big\} .$$
The solution of minimization (15) is obtained by the soft-threshold shrinkage operator [38]:
$$d_2^{(k+1)} = \mathrm{softshrink}\Big( \nabla_v (Y - S^{(k)}) + \frac{p_2^{(k)}}{\beta_2}, \; \frac{\lambda_1 W_e}{\beta_2} \Big) ,$$
in which $\mathrm{softshrink}(r, \theta) = \frac{r}{|r|} \max(|r| - \theta, 0)$.
Similarly, the $d_3$-related subproblem is written as:
$$\arg\min_{d_3} \Big\{ \lambda_2 \|d_3\|_0 + p_3^T(S - d_3) + \tfrac{\beta_3}{2}\|d_3 - S\|_2^2 \Big\} .$$
Based on the hard thresholding operator for the $l_0$ norm [39], we update $d_3^{(k+1)}$ as follows:
$$d_3^{(k+1)} = \mathrm{hardshrink}\Big( S^{(k)} + \frac{p_3^{(k)}}{\beta_3}, \; \sqrt{\frac{2\lambda_2}{\beta_3}} \Big) ,$$
in which $\mathrm{hardshrink}(\phi, \theta) = \phi \cdot \mathbb{1}(|\phi| > \theta)$.
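The three thresholding operators have direct elementwise implementations; the sketch below follows Equations (14), (16) and (18), vectorized with numpy (note that the first cshrink threshold may itself be an array, e.g. $W_u/\beta_1$).

```python
import numpy as np

def soft_shrink(r, theta):
    """Soft threshold for the weighted l1 term, Equation (16)."""
    return np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)

def hard_shrink(phi, theta):
    """Hard threshold for the l0 term, Equation (18)."""
    return phi * (np.abs(phi) > theta)

def c_shrink(x, theta1, theta2):
    """Combined l1 + l0 threshold of Equation (14): soft-shrink by theta1,
    with the dead zone widened to |x| <= theta1 + theta2 by the l0 part."""
    return np.where(x > theta1 + theta2, x - theta1,
                    np.where(x < -(theta1 + theta2), x + theta1, 0.0))
```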
Next, the S-related subproblem is formulated as:
$$S = \arg\min_S \Big\{ p_1^T(\nabla_u S - d_1) + \tfrac{\beta_1}{2}\|d_1 - \nabla_u S\|_2^2 + p_2^T(\nabla_v (Y - S) - d_2) + \tfrac{\beta_2}{2}\|\nabla_v (Y - S) - d_2\|_2^2 + p_3^T(S - d_3) + \tfrac{\beta_3}{2}\|S - d_3\|_2^2 \Big\} .$$
The minimization in Equation (19) is a quadratic optimization. It is differentiable, and the optimal S can be found from the Euler–Lagrange equation:
$$\big( \beta_1 \nabla_u^T \nabla_u + \beta_2 \nabla_v^T \nabla_v + \beta_3 I \big) S = \beta_1 \nabla_u^T \Big( d_1 - \frac{p_1}{\beta_1} \Big) + \beta_2 \nabla_v^T \Big( \nabla_v Y - d_2 + \frac{p_2}{\beta_2} \Big) + \beta_3 \Big( d_3 - \frac{p_3}{\beta_3} \Big) ,$$
and a closed-form solution via the 2D fast Fourier transform (FFT) is given by
$$S^{(k+1)} = \mathcal{F}^{-1}\left( \frac{G}{\beta_1 \mathcal{F}(\nabla_u)^* \circ \mathcal{F}(\nabla_u) + \beta_2 \mathcal{F}(\nabla_v)^* \circ \mathcal{F}(\nabla_v) + \beta_3} \right) ,$$
in which
$$G = \beta_1 \mathcal{F}(\nabla_u)^* \circ \mathcal{F}\Big( d_1^{(k)} - \frac{p_1^{(k)}}{\beta_1} \Big) + \beta_2 \mathcal{F}(\nabla_v)^* \circ \mathcal{F}\Big( \nabla_v Y + \frac{p_2^{(k)}}{\beta_2} - d_2^{(k)} \Big) + \beta_3 \mathcal{F}\Big( d_3^{(k)} - \frac{p_3^{(k)}}{\beta_3} \Big) ,$$
where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the 2D FFT and inverse 2D FFT, respectively, ∘ represents component-wise multiplication, and $*$ denotes complex conjugation.
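The following sketch of the closed-form S-update in Equations (21) and (22) assumes periodic boundary conditions, so the forward-difference operators diagonalize under the 2D FFT; the operator orientation (u along axis 1, v along axis 0) is our convention, not the paper's code.

```python
import numpy as np

def solve_S(d1, d2, d3, p1, p2, p3, Y, b1, b2, b3):
    """Closed-form S-update of Equations (21)-(22) under periodic
    boundaries. Fu, Fv are the transfer functions of the forward
    differences along axis 1 (u) and axis 0 (v), respectively."""
    M, N = Y.shape
    Fu = np.tile(np.exp(2j * np.pi * np.arange(N) / N) - 1.0, (M, 1))
    Fv = np.tile((np.exp(2j * np.pi * np.arange(M) / M) - 1.0)[:, None],
                 (1, N))

    def grad_v(x):  # forward difference across the stripe direction
        return np.roll(x, -1, axis=0) - x

    G = (b1 * np.conj(Fu) * np.fft.fft2(d1 - p1 / b1)
         + b2 * np.conj(Fv) * np.fft.fft2(grad_v(Y) + p2 / b2 - d2)
         + b3 * np.fft.fft2(d3 - p3 / b3))
    denom = b1 * np.abs(Fu) ** 2 + b2 * np.abs(Fv) ** 2 + b3
    return np.real(np.fft.ifft2(G / denom))
```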
Finally, the Lagrange multipliers $p_1$, $p_2$ and $p_3$ are updated by the following expressions:
$$p_1^{(k+1)} = p_1^{(k)} + \beta_1 \big( \nabla_u S^{(k)} - d_1^{(k)} \big) ,$$
$$p_2^{(k+1)} = p_2^{(k)} + \beta_2 \big( \nabla_v (Y - S)^{(k)} - d_2^{(k)} \big) ,$$
$$p_3^{(k+1)} = p_3^{(k)} + \beta_3 \big( S^{(k)} - d_3^{(k)} \big) .$$
Thus, utilizing the ADMM technique, the original minimization model in Equation (9) can be solved through four separable subproblems, whose solutions are efficiently obtained by the softshrink, hardshrink and cshrink operators. This iterative scheme decreases $J(S)$ in Equation (9) at each step; it converges to a local minimum and yields the estimated noise S. Algorithm 1 summarizes the proposed model.
Algorithm 1: The proposed WDSUV algorithm
Input: stripe noise image Y
Output: destriped image X
Initialize: Set $d_1^{(0)} = d_2^{(0)} = d_3^{(0)} = 0$, $S^{(0)} = 0$, $p_1^{(0)} = p_2^{(0)} = p_3^{(0)} = 0$, $\epsilon = 10^{-5}$, $k_{max} = 150$.
 Detect the extreme area and the extreme stripe area.
 Calculate the weight matrices $W_e$ and $W_u$.
While $\|S^{(k+1)} - S^{(k)}\| / \|S^{(k)}\| > \epsilon$ and $k < k_{max}$:
 update $d_1^{(k+1)}$ using (13),
 update $d_2^{(k+1)}$ using (16),
 update $d_3^{(k+1)}$ using (18),
 update $S^{(k+1)}$ by solving (21),
 update $p_1^{(k+1)}$, $p_2^{(k+1)}$, $p_3^{(k+1)}$ by (23)–(25), set $k = k + 1$.
End while
 Destriped image $X = Y - S$.
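Putting the pieces together, the sketch below mirrors Algorithm 1 and assumes the helpers from the earlier sketches (c_shrink, soft_shrink, hard_shrink, solve_S) are in scope. The default parameter values are only placeholders inside the ranges reported in Section 4, the thresholds follow our reconstruction of Equations (13) and (18), and the multiplier updates use the newest iterates, as is standard for ADMM.

```python
import numpy as np

def wdsuv_destripe(Y, w_u, w_e, lam1=0.1, lam2=0.002, lam3=0.05,
                   eps=1e-5, k_max=150):
    """Sketch of Algorithm 1: ADMM for the WDSUV model of Equation (9).
    Returns the destriped image X = Y - S and the estimated stripe S."""
    b1 = b2 = b3 = 100.0 * lam1                     # penalty parameters
    grad_u = lambda x: np.roll(x, -1, axis=1) - x   # along-stripe difference
    grad_v = lambda x: np.roll(x, -1, axis=0) - x   # across-stripe difference
    S = np.zeros_like(Y)
    p1 = np.zeros_like(Y)
    p2 = np.zeros_like(Y)
    p3 = np.zeros_like(Y)
    for k in range(k_max):
        d1 = c_shrink(grad_u(S) + p1 / b1, w_u / b1, np.sqrt(2 * lam3 / b1))
        d2 = soft_shrink(grad_v(Y - S) + p2 / b2, lam1 * w_e / b2)
        d3 = hard_shrink(S + p3 / b3, np.sqrt(2 * lam2 / b3))
        S_new = solve_S(d1, d2, d3, p1, p2, p3, Y, b1, b2, b3)
        p1 = p1 + b1 * (grad_u(S_new) - d1)
        p2 = p2 + b2 * (grad_v(Y - S_new) - d2)
        p3 = p3 + b3 * (S_new - d3)
        rel = np.linalg.norm(S_new - S) / max(np.linalg.norm(S), 1e-12)
        S = S_new
        if rel <= eps:
            break
    return Y - S, S
```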

4. Experimental Results

In this section, a series of experimental results is presented to verify the destriping performance of the proposed algorithm in terms of stripe noise removal, small-detail preservation and artifact reduction. In the experiments, both synthesized images and real noise-corrupted remote sensing images were tested, and we compared the proposed model with several typical state-of-the-art destriping methods, including the spatial domain filtering method based on the guided filter (GF-based) [40], the frequency domain wavelet-Fourier filtering method (WAFT) [15], the unidirectional variation-based models, namely the UV method [9], the HUTV method [19] and the sparse UV model (SUV) [24], and the convolutional neural network-based stripe noise removal method (SNRCNN) [31]. The traditional denoising method block-matching and 3D filtering (BM3D) [41] was also selected for comparison. To provide an overall evaluation, the destriping performance was verified both subjectively and objectively. For simulated stripe removal, we adopted the structural similarity index (SSIM) [42] and the peak signal-to-noise ratio (PSNR) to measure the destriping quality, as they are the most commonly used full-reference indices in modern denoising work [43,44,45]. In the real stripe noise image experiments, we selected the mean of the inverse coefficient of variation (MICV) and the mean of the mean relative deviation (MMRD) [24] to validate the destriping approaches. We also compared the mean cross-track curves [25] to display the destriping ability.
Parameter setting: It is difficult to automatically set the parameters of the proposed model for all striped images, since a good destriping performance depends not only on the stripe type and level, but also on the image content and the combination of parameters. In (9), the regularization coefficient $\lambda_1$ determines the degree of smoothing across the stripe direction and depends on the density of the stripe component; we set it empirically in the range $[0.05, 0.5]$. Parameters $\lambda_2$ and $\lambda_3$ constrain the nonzero counts in S and $\nabla_u S$. Generally, a sparser stripe prefers a higher $\lambda_2$. A regular stripe corresponds to a large $\lambda_3$, which is determined by both the stripe distribution and the image's detail properties. We have found that $\lambda_2 \in [0.0001, 0.005]$ and $\lambda_3 \in [0.01, 0.2]$ generally yield good results in most experiments. The penalty parameters were set as $\beta_1 = \beta_2 = \beta_3 = 100\lambda_1$ empirically. The range of the degraded images was compressed to $[0, 1]$ in the calculation. The parameter estimation of the proposed model is discussed in more detail in Section 5.1. The parameters of the competing methods were set optimally, according to the original papers' proposals.

4.1. Simulated Experiments

To verify the robustness of the proposed model, we chose six typical remote sensing images with various content for the simulated experiments. Terra MODIS data and Aqua MODIS data can be downloaded from the official website [46]. Each MODIS data set contains 36 bands; band 32, which has little noise, was selected as the ground truth. Figure 6a shows a 500 × 400 subimage with relatively rich texture and several extreme areas, and Figure 6b is a relatively smooth 450 × 400 subimage extracted from an entire swath. The Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) hyperspectral data are available from [47]; they contain 220 bands, and Figure 6c displays a subimage cropped from band 72 with some stripe-like details. The Washington DC Mall hyperspectral data downloaded from [48] were used to test the ability to preserve large stripe-like structure, as shown in Figure 6d; the scene covers roofs, streets and paths. Figure 6e shows an Aqua MODIS image with both rich texture and smooth areas. In addition, Figure 6f is a complex ground scene available from [49]. Before adding the artificial stripes, the 14-bit high dynamic range of the original data was linearly compressed to 8 bits for display convenience as:
$$I_{out} = (2^{B_{out}} - 1) \frac{I_{in}}{2^{B_{in}} - 1} ,$$
in which $I_{in}$ and $I_{out}$ are the input 14-bit data and output 8-bit data, respectively, and $B_{in} = 14$ and $B_{out} = 8$ denote the bit depths of the input and output data. Then, in our simulation, both periodic and nonperiodic stripes of various strengths were added to the test images. In Figure 6, the top row shows the six selected remote sensing images, and the bottom row shows the corresponding degraded images with artificial stripes. We randomly selected six percent of the rows in data S1 and added stripes of random intensity to them. In data S3, a stripe is added every ten lines, with randomly distributed intensity. We generated bright-dark adjacent stripes on data S2 and data S4. The width of the synthetic stripes in S1 to S4 (Figure 6g–j) is set to two lines. To further illustrate the model's ability to remove various types of stripes, we also simulated stripes of different strengths and widths on S5 and S6 (Figure 6k,l). In the destriping procedure of WDSUV, the range of the striped data was compressed to $[0, 1]$ in all experiments.
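For reference, a small sketch of this simulation pipeline: Equation (26) is implemented directly, while the stripe generator is our own illustrative stand-in for the actual code in [50], with arbitrary offset distributions.

```python
import numpy as np

def compress_bits(I_in, b_in=14, b_out=8):
    """Equation (26): linear bit-depth compression of the raw data."""
    return (2 ** b_out - 1) * I_in.astype(np.float64) / (2 ** b_in - 1)

def add_random_stripes(clean, ratio=0.06, intensity=20.0, width=2, seed=0):
    """Illustrative stripe synthesis in the spirit of Section 4.1: pick
    `ratio` of the rows and add constant offsets of random sign and
    magnitude; the offset distribution is an assumption of this sketch."""
    rng = np.random.default_rng(seed)
    noisy = clean.astype(np.float64).copy()
    n_stripes = int(ratio * clean.shape[0])
    rows = rng.choice(clean.shape[0] - width, n_stripes, replace=False)
    for r in rows:
        offset = rng.uniform(0.5, 1.5) * intensity * rng.choice([-1.0, 1.0])
        noisy[r:r + width, :] += offset
    # Gray values outside [0, 255] are cut off, as in the experiments.
    return np.clip(noisy, 0, 255)
```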
Comparisons of the proposed method with the competing techniques on the simulated striped remote sensing data are depicted in Figures 7–14, where the superiority of the proposed model can be seen. We analysed the destriping results from three aspects: extreme stripe estimation, stripe-like structure preservation and artifact reduction.
First, we analysed the destriping performance of the BM3D and SNRCNN methods. Plenty of obvious residual stripes exist in the results of BM3D (Figures 7–14a) and SNRCNN (Figures 7–14b). The strong stripe noise seriously affects the grouping step and the collaborative filtering step in BM3D, resulting in a poor destriping ability. The weak performance of SNRCNN can be attributed to the mismatch between its training data and our striped data: in the training procedure of SNRCNN, only small-intensity stripe noise is added to the clean training data, whereas stripes of various intensities are generated in our simulated data.
Next, we checked the ability of the different methods to handle the EA and SSA. The Terra MODIS data S1 and S2 contain several extremely dark areas and some extreme stripes. Figure 7 and Figure 8 display the destriping results. Apart from BM3D and SNRCNN, the methods eliminate stripes well in normal-intensity areas. Nevertheless, their recovery capabilities for the SSA differ. Obvious stripe effects still exist in the SSA for the GF-based, WAFT, UV, HUTV and SUV methods, as marked by the orange-yellow rectangles in Figure 7c–g. These extreme stripes increase the variation of the stripe along the stripe direction, which violates the original assumption of the UV model. Thus, when stripe noise is estimated using the variation model (3), stripe effects are easily generated in the SSA and EA. Our model detects SSAs and inpaints them by a diffusion technique, achieving better visual results with fewer undesired artifacts, as shown in Figure 7h and Figure 8h.
In the AVIRIS hyperspectral data (Figure 6c), some small horizontal structures are very similar to stripe noise. Figure 9 shows the corresponding denoising results. As can be observed, and unsurprisingly, the UV, HUTV and SUV approaches smooth these details while removing the true stripes, as shown in Figure 9e–g. Because these methods are local variation-based and the small structures' properties are highly similar to noise, such structures are easily treated as stripe noise and removed. As for the comparison results of data S4 in Figure 10, apart from BM3D and SNRCNN, the methods seem to succeed in preserving the large edge structure. However, the loss of original image content differs. Figure 11 presents the stripe noise estimation results, where it can easily be observed that the GF-based, WAFT and HUTV methods wrongly remove some scene structure, as in Figure 11c,d,f. By introducing the gradient sparsity regularization, little superfluous content remains in Figure 11h, indicating that our model preserves texture better and distinguishes more structure from the stripe noise. Although the GF-based method preserves stripe-like details well in Figure 9c, it may fail when extremely dark areas exist, as depicted in Figure 7c and Figure 8c.
Figure 12 and Figure 14 display the complex stripe noise removal comparisons for S5 and S6. In Figure 12, the stripe noise in the smooth area almost vanishes for the GF-based, WAFT, UV, HUTV, SUV and WDSUV methods. However, some weak stripe traces can still be sensed in the GF-based and WAFT results, as in Figure 12c,d. Moreover, the GF-based, WAFT and UV methods smooth some details in the complex area of S5, which can be inferred from the stripe estimations in Figure 13c–e.
Table 1 lists the PSNR and SSIM results of the different techniques for the six simulated data sets. Our method outperforms the other techniques in both measures, indicating that the proposed model is robust to various kinds of stripes. The BM3D method even generates worse results than the degraded images, indicating that BM3D is not well suited to the stripe noise removal problem.
In addition, we adopted the mean cross-track index to measure the destriping performance of the proposed method. Figure 15 and Figure 16 display the mean cross-track profiles of the different methods on the simulated data S3 and S4. The horizontal axis is the row number, and the vertical axis is the mean value of each row. Many small burrs can be observed in the curves of BM3D (Figures 15 and 16a) and SNRCNN (Figures 15 and 16b), indicating that obvious stripes still exist. In Figure 15, the outputs of the GF-based, WAFT, UV and HUTV methods deviate significantly from the ground truth, largely because the stripe-like structure is also removed as noise. The curves of the SUV and WDSUV fluctuate around the clean image's curve, and the WDSUV result is more coherent with the truth than the SUV. In Figure 16, the WAFT and UV exhibit oversmoothed curves, which means that some useful details are lost. Among these outputs, the WDSUV again agrees best with the original, demonstrating the strong performance of the proposed model.
To further illustrate the robustness of the proposed model, we also added both periodic and nonperiodic stripes of different intensities and proportions to images S1 to S4 in the simulated experiments. The PSNR and SSIM values of the eight comparison methods are presented in Table 2 and Table 3, respectively. We simulated the striped images in the same way as [25], and the stripe-generating code can be downloaded from [50]. In the tables, r denotes the proportion of stripe noise in an image, and the intensity is the mean absolute value of the simulated stripe lines. In the simulated data, gray values exceeding the range [0, 255] were cut off. The highest PSNR and SSIM values are highlighted in bold. In general, the destriping performance of each method degrades as the noise level increases and the proportion grows. With increasing stripe intensity, the original content along entire rows may be destroyed, which makes it more difficult to recover the underlying image. Table 2 and Table 3 show that the proposed WDSUV model achieves higher PSNR and SSIM values than the state-of-the-art methods in most cases. It should be noted that the GF-based, WAFT, UV and HUTV outputs for S4 obtained lower PSNR and SSIM values than the degraded images when r = 0.1 and intensity = 10 (denoted by cyan color), which can be attributed to these methods removing too much stripe-like structure in the S4 images.

4.2. Real Experiments

Here, we conducted experimental comparisons on real stripe noise-contaminated remote sensing images. Four Terra MODIS images and two Hyperion images [51] with various extents of stripe noise were chosen. Figure 17a and Figure 18a are two subimages from Terra MODIS data band 27. There are some EAs in Figure 17a and many SSAs in Figure 18a; both images are seriously contaminated by detector-to-detector periodic stripe noise. In addition, two other MODIS images with a moderate level of stripe noise were selected: Figure 20a is a subimage cropped from Terra MODIS data band 28 with nonperiodic stripes, and Figure 19a, cropped from band 30, is degraded by periodic stripes. Two Hyperion images with nonperiodic stripes are shown in Figure 21a and Figure 22a, respectively.

4.2.1. Visual Comparison

First, we tested the heavy stripe removal ability of the comparison methods. The visual results for MODIS data R1 and R2 are provided in Figure 17 and Figure 18. As seen, BM3D and SNRCNN do not show a proper destriping performance. The GF-based method also exhibits a poor destriping performance, with some obvious residual stripes in Figure 17d and Figure 18d, because these stripes are so serious that they are not totally separated when the guided filter is first applied; some stripes are left and then appear in the final results. The visual effect of the HUTV method is acceptable, as most stripes vanish. Nevertheless, some small-scale details are also removed, as denoted by the orange-yellow ellipse in Figure 17g, making the resulting image look flat. The WAFT approach appears to have difficulty keeping the extremely dark region, as pointed out in Figure 17e, because the latent noise is saturated in the extreme area, and artifacts appear when the stripe is overestimated there. In the SSAs, stripe effects are generated by most of the state-of-the-art methods, including the SUV, as presented in the zoomed patches in Figure 17 and Figure 18. On the other hand, the proposed WDSUV model eliminates most stripe noise while faithfully preserving details. Moreover, because a rational weight is designed for the EA and SSA, the proposed model produces fewer artifacts in these special regions than the compared approaches, as displayed in Figure 17i and Figure 18i.
Figure 19 and Figure 20 display the moderate-level stripe estimation comparison. In each method's result, the top part is the destriping result and the bottom part is the corresponding stripe noise estimate. To provide better visualization, the estimated stripes are linearly stretched to the range [0, 255]. Apart from BM3D, most methods remove most of the noise. However, their degrees of blur and noise estimation abilities vary. Some tiny stripes remain in the GF-based result, as marked by the orange-yellow ellipse in Figure 20d. For the UV and HUTV methods, blur appears in structures similar to stripes, as in Figure 19f,g and Figure 20f,g. The SUV model estimates the most stripe noise; unfortunately, some tiny horizontal details are also removed, resulting in detail loss, as displayed in Figure 19h and Figure 20h. Observing the estimated stripe noise in Figure 19 and Figure 20, we can see that our WDSUV model generates a more regular stripe image than the other state-of-the-art methods, demonstrating a better stripe noise estimation ability.
Furthermore, we tested the performance of the proposed model on Hyperion data corrupted by various nonperiodic stripe noise. Figure 21a is a subimage of Hyperion data band 35 corrupted by only a few stripes, while relatively many stripes exist in band 135, as displayed in Figure 22a. Figure 21 and Figure 22 display the two comparisons. As seen, apart from BM3D and SNRCNN, most methods can remove the stripe noise in Figure 21a. Unfortunately, some image content is removed by the GF-based, WAFT and UV methods, as denoted in Figure 21d–f. The SUV and our WDSUV estimates for data R5 are clearly much cleaner and sparser than those of the other techniques. However, the SUV cannot recognize the stripe-like structures and smooths them out as noise, as denoted in Figure 21h. Excessive estimation also occurs in the GF-based, WAFT, UV and HUTV methods on the second Hyperion data R6, as shown in Figure 22d–g.

4.2.2. Quantitative Analysis

In this subsection, the quantitative analysis of the real-world striped images is presented. There is no clean image as a baseline, so we adopt three no-reference indices, i.e., the MICV, the MMRD [24,28] and the mean cross-track profile. The MRD index measures the relative distortion of an original subarea, expressed as:
$$MRD = \frac{1}{MN} \sum_{u=1}^{M} \sum_{v=1}^{N} \frac{|Y(u,v) - X(u,v)|}{Y(u,v)} ,$$
where $Y(u,v)$ is the original subarea pixel value at $(u,v)$ and $X(u,v)$ is the destriped pixel value at $(u,v)$. The MMRD is then the mean of the MRD values over several subareas; usually, small patches containing sharp edges are selected to calculate the MRD. The ICV index measures the smoothness of a homogeneous region, written as:
$$ICV = \frac{M(X)}{Std(X)} ,$$
in which X denotes a small region after destriping, $M(X)$ is the mean of X, and $Std(X)$ is the standard deviation of the window X. The MICV is the mean of the ICV values over several windows. Generally, a larger MICV and a smaller MMRD indicate a better destriping performance. Table 4 lists the MICV and MMRD results of the comparison methods for the six real data experiments, with the best values highlighted in bold. As shown in Table 4, the proposed WDSUV model achieves the smallest MMRD values for all images except R3. The WDSUV does not obtain the largest MICV value in every case: it is lower than HUTV for images R1, R4 and R5. This can mainly be attributed to the oversmoothing effect of HUTV, which can be inferred from Figure 17g and Figure 20g. The oversmoothing effect also exists in BM3D, resulting in the largest MICV values for images R5 and R6. Although SNRCNN obtains larger MICV values for images R4 and R5, and a smaller MMRD value for image R3, than WDSUV, obvious residual stripes remain in its results, as seen in Figure 19c, Figure 20c and Figure 21c. Both the SUV and WDSUV can estimate the few stripes in Hyperion data R5, resulting in the same MICV and MMRD values; however, the SUV may lose some stripe-like structures, as pointed out in Figure 21h.
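The two indices translate directly into code; in the sketch below, the edge and homogeneous patches are hand-picked slices, whose selection is not specified beyond the description above.

```python
import numpy as np

def mrd(Y_patch, X_patch, eps=1e-12):
    """Mean relative deviation of one subarea, Equation (27)."""
    return np.mean(np.abs(Y_patch - X_patch) / (Y_patch + eps))

def icv(X_patch, eps=1e-12):
    """Inverse coefficient of variation of a homogeneous window, Eq. (28)."""
    return X_patch.mean() / (X_patch.std() + eps)

def mmrd(Y, X, edge_patches):
    """MMRD: mean MRD over patches containing sharp edges; each patch
    is a (row slice, column slice) tuple chosen by hand."""
    return np.mean([mrd(Y[s], X[s]) for s in edge_patches])

def micv(X, flat_patches):
    """MICV: mean ICV over hand-picked homogeneous windows."""
    return np.mean([icv(X[s]) for s in flat_patches])
```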
Figure 23, Figure 24 and Figure 25 display three examples of mean cross-track profiles for Terra MODIS data R1, R3 and R4. The horizontal axis denotes the row number and the vertical axis represents the mean value of each row. As can be seen, BM3D and SNRCNN show a weak destriping ability: their curves are similar to those of the original degraded images. In Figure 23, the WAFT shows an oversmoothed profile (Figure 23e), which means that many underlying details are lost. Obvious fluctuations in the GF-based (Figure 23d) and HUTV (Figure 23g) curves indicate that some residual stripes still exist. In contrast, the profile of the proposed method (Figure 23i) smooths the large fluctuations while keeping the underlying details. In Figure 24 and Figure 25, the GF-based, WAFT, UV and HUTV methods all exhibit oversmoothed profiles, meaning that some useful details are removed. The difference between the mean cross-track profiles of the SUV and the WDSUV, such as in Figure 24h,i, is not obvious; this is mainly because the mean of the structure overestimated by the SUV is too small to be sensed.

5. Discussion

5.1. Analysis of Parameters

For a destriping optimization model, it is difficult to set unified parameters that handle all types of stripe noise. Researchers usually tune the parameters empirically according to the stripe level and the image content. In this paper, there are three main regularization parameters in the proposed WDSUV destriping model (9): $\lambda_1$, $\lambda_2$ and $\lambda_3$. Generally, strong stripes prefer a large $\lambda_1$, regular stripes need a large $\lambda_3$, and if the stripes are dense, $\lambda_2$ should be small. Moreover, the interaction among the three parameters should not be ignored, and a proper combination of parameters provides a satisfactory result. However, determining the relationship between the parameters and the result is not an easy task and may need many calculations. Here, we adopt the same strategy as [26]: tune one parameter while the others are fixed. We evaluate the PSNR as a function of $\lambda_1$ when $\lambda_2$ and $\lambda_3$ are fixed, and then adjust $\lambda_2$ and $\lambda_3$ in the same manner.
In the experiment, a clean subimage was cropped from MODIS data. After adding artificial stripe noise, we calculated the PSNR values while tuning the three parameters, respectively. Figure 26 shows the experimental results: the PSNR values of the WDSUV model as functions of $\lambda_1$, $\lambda_2$ and $\lambda_3$ are presented in Figure 26a–c, respectively. In terms of $\lambda_1$, the PSNR has a rising trend in the interval $[0.05, 0.1]$, but it is much reduced when $\lambda_1$ increases further. In Figure 26b, the PSNR increases slightly as $\lambda_2$ increases from 0.0005 to 0.002, then reduces dramatically in $[0.002, 0.0045]$ and presents a gentle decline in $[0.0045, 0.01]$. Figure 26c shows that the PSNR curve rises slightly when $\lambda_3$ is in $[0.01, 0.05]$ and nearly converges to 50.8 as $\lambda_3$ increases to 0.2. Thus, we empirically set the three parameters as follows: $\lambda_1 \in [0.05, 0.5]$, $\lambda_2 \in [0.001, 0.05]$ and $\lambda_3 \in [0.01, 0.2]$. The ranges of the three parameters are wider than the best-performing values displayed in Figure 26, so as to handle more types of stripe noise.

5.2. Limitation

In this paper, we have designed double sparsity regularizations to distinguish (stripe-like) texture from stripe noise. While this approach retains some small-scale structure, small-scale stripe noise is also prone to being kept in the destriping result. One failure example is shown in Figure 27. One way to reduce this disadvantage is to reduce the value of parameter $\lambda_3$ in (9). However, how to distinguish tiny stripe noise from small details similar to stripes remains an open problem, and we will pursue this task in future work.

6. Conclusions

In this paper, we presented a new model for the stripe noise removal problem in remote sensing images. It incorporates the double sparsity property of stripe noise, i.e., the global sparsity and the unidirectional gradient sparsity, into a unidirectional variation framework. Furthermore, to reduce artifacts in the extreme area and the extreme stripe area, a rational weight was designed for the different regions. The efficient ADMM algorithm was employed to solve the optimization model in an iterative procedure. We simulated various types of stripes on clean data with diverse content; in particular, some images contain structures similar to stripes as well as extreme areas. We also collected real, typically striped remote sensing images with complicated structures to verify the performance of the proposed model. Both subjective and objective quantitative measures were employed to compare the destriping ability of the different methods. The experimental results on both simulated and real noise-corrupted data demonstrate that the proposed model achieves a better destriping performance than the state-of-the-art methods, in terms of noise removal, small structure preservation and fewer undesired artifacts.

Author Contributions

Q.S. designed the framework and wrote the draft. X.Y. and H.G. performed some experiments. Y.W. analyzed the results and revised the manuscript.

Funding

This research was supported by the Chinese advanced research project Nos. 41415020402 and 30503040301.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADMM: alternating direction method of multipliers
MODIS: Moderate Resolution Imaging Spectroradiometer
AVIRIS: Airborne Visible InfraRed Imaging Spectrometer
SSA: strong stripe area
EA: extreme area
SSIM: structural similarity
PSNR: peak signal-to-noise ratio
MICV: mean of inverse coefficient of variation
MMRD: mean of mean relative deviation

References

  1. Zhang, L.; Zhang, L.; Tao, D.; Huang, X.; Du, B. Hyperspectral Remote Sensing Image Subpixel Target Detection Based on Supervised Metric Learning. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4955–4965.
  2. Zhu, D.; Wang, B.; Zhang, L. Airport Target Detection in Remote Sensing Images: A New Method Based on Two-Way Saliency. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1096–1100.
  3. Huang, Z.; Zhang, Y.; Li, Q.; Zhang, T.; Sang, N.; Hong, H. Progressive dual-domain filter for enhancing and denoising optical remote sensing images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 759–763.
  4. Bian, X.; Chen, C.; Xu, Y.; Du, Q. Robust Hyperspectral Image Classification by Multi-Layer Spatial-Spectral Sparse Representations. Remote Sens. 2016, 8, 985.
  5. Jiang, J.; Chen, C.; Yu, Y.; Jiang, X.; Ma, J. Spatial-aware collaborative representation for hyperspectral remote sensing image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 404–408.
  6. Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral image classification with robust sparse representation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 641–645.
  7. Han, J.; Zhang, D.; Cheng, G.; Guo, L.; Ren, J. Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3325–3337.
  8. Carfantan, H.; Idier, J. Statistical Linear Destriping of Satellite-Based Pushbroom-Type Images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1860–1871.
  9. Bouali, M.; Ignatov, A. Estimation of Detector Biases in MODIS Thermal Emissive Bands. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4339–4348.
  10. Gadallah, F.L.; Csillag, F.; Smith, E.J.M. Destriping multisensor imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511.
  11. Shen, H.; Jiang, W.; Zhang, H.; Zhang, L. A piece-wise approach to removing the nonlinear and irregular stripes in MODIS data. Int. J. Remote Sens. 2014, 35, 44–53.
  12. Tendero, Y.; Landeau, S.; Gilles, J. Non-uniformity Correction of Infrared Images by Midway Equalization. Image Proc. Line 2012, 2, 134–146.
  13. Chen, J.; Shao, Y.; Guo, H.; Wang, W.; Zhu, B. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124.
  14. Cao, Y.; He, Z.; Yang, J.; Ye, X.; Cao, Y. A multi-scale non-uniformity correction method based on wavelet decomposition and guided filtering for uncooled long wave infrared camera. Signal Proc. Image Commun. 2018, 60, 13–21.
  15. Münch, B.; Trtik, P.; Marone, F.; Stampanoni, M. Stripe and ring artifact removal with combined wavelet-Fourier filtering. Opt. Exp. 2009, 17, 8567–8591.
  16. Torres, J.; Infante, S.O. Wavelet analysis for the elimination of striping noise in satellite images. Opt. Eng. 2001, 40, 1309–1315.
  17. Bouali, M.; Ladjal, S. Toward optimal destriping of MODIS data using a unidirectional variational model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2924–2935.
  18. Zhang, Y.; Zhang, T. Structure-guided unidirectional variation de-striping in the infrared bands of MODIS and hyperspectral images. Infrared Phys. Technol. 2016, 77, 132–143.
  19. Zhou, G.; Fang, H.; Lu, C.; Wang, S.; Zuo, Z.; Hu, J. Robust destriping of MODIS and hyperspectral data using a hybrid unidirectional total variation model. Optik Int. J. Light Electron Opt. 2015, 126, 838–845.
  20. Wang, M.; Zheng, X.; Pan, J.; Wang, B. Unidirectional total variation destriping using difference curvature in MODIS emissive bands. Infrared Phys. Technol. 2016, 75, 1–11.
  21. Chang, Y.; Fang, H.; Yan, L.; Liu, H. Robust destriping method with unidirectional total variation and framelet regularization. Opt. Exp. 2013, 21, 23307–23323.
  22. Huang, Y.; He, C.; Fang, H.; Wang, X. Iteratively reweighted unidirectional variational model for stripe non-uniformity correction. Infrared Phys. Technol. 2016, 75, 107–116.
  23. Chen, Y.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Wang, M. Group sparsity based regularization model for remote sensing image stripe noise removal. Neurocomputing 2017, 267, 95–106.
  24. Liu, X.; Lu, X.; Shen, H.; Yuan, Q.; Jiao, Y.; Zhang, L. Stripe Noise Separation and Removal in Remote Sensing Images by Consideration of the Global Sparsity and Local Variational Properties. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3049–3060.
  25. Chang, Y.; Yan, L.; Wu, T.; Zhong, S. Remote Sensing Image Stripe Noise Removal: From Image Decomposition Perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031.
  26. Chen, Y.; Huang, T.Z.; Zhao, X.L.; Deng, L.J.; Huang, J. Stripe noise removal of remote sensing images by total variation regularization and group sparsity constraint. Remote Sens. 2017, 9, 559.
  27. Dou, H.X.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Huang, J. Directional l0 Sparse Modeling for Image Stripe Noise Removal. Remote Sens. 2018, 10, 361.
  28. Lu, X.; Wang, Y.; Yuan, Y. Graph-Regularized Low-Rank Representation for Destriping of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4009–4018.
  29. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743.
  30. Ma, J.; Li, C.; Ma, Y.; Wang, Z. Hyperspectral Image Denoising Based on Low-Rank Representation and Superpixel Segmentation. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3086–3090.
  31. Kuang, X.; Sui, X.; Chen, Q.; Gu, G. Single Infrared Image Stripe Noise Removal Using Deep Convolutional Networks. IEEE Photonics J. 2017, 9, 1–13.
  32. Wang, S.P. Stripe noise removal for infrared image by minimizing difference between columns. Infrared Phys. Technol. 2016, 77, 58–64.
  33. Xu, Z.; Sun, J. Image Inpainting by Patch Propagation Using Patch Sparsity. IEEE Trans. Image Proc. 2010, 19, 1153–1165.
  34. Cheng, Q.; Shen, H.; Zhang, L.; Li, P. Inpainting for remotely sensed images with a multichannel nonlocal total variation model. IEEE Trans. Geosci. Remote Sens. 2014, 52, 175–187.
  35. Cai, J.F.; Chan, R.H.; Shen, Z. A framelet-based image inpainting algorithm. Appl. Comput. Harmonic Anal. 2008, 24, 131–149.
  36. Wahlberg, B.; Boyd, S.; Annergren, M.; Wang, Y. An ADMM Algorithm for a Class of Total Variation Regularized Estimation Problems. IFAC Proc. Vol. 2012, 45, 83–88.
  37. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  38. Bredies, K.; Lorenz, D.A. Linear convergence of iterative soft-thresholding. J. Fourier Anal. Appl. 2008, 14, 813–837.
  39. Blumensath, T.; Davies, M.E. Iterative thresholding for sparse approximations. J. Fourier Anal. Appl. 2008, 14, 629–654.
  40. Cao, Y.; Yang, M.Y.; Tisse, C. Effective Strip Noise Removal for Low-Textured Infrared Images Based on 1-D Guided Filtering. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 2176–2188.
  41. Image and Video Denoising by Sparse 3D Transform-Domain Collaborative Filtering. Available online: http://www.cs.tut.fi//~foi/GCF-BM3D/ (accessed on 23 March 2018).
  42. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Proc. 2004, 13, 600–612.
  43. Huang, Z.; Li, Q.; Fang, H.; Zhang, T.; Sang, N. Iterative weighted nuclear norm for X-ray cardiovascular angiogram image denoising. Signal Image Video Proc. 2017, 11, 1445–1452.
  44. Ghimpeteanu, G.; Batard, T.; Bertalmio, M.; Levine, S. A Decomposition Framework for Image Denoising Algorithms. IEEE Trans. Image Proc. 2016, 25, 388–399.
  45. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Proc. 2017, 26, 3142–3155.
  46. MODIS Data. Available online: https://modis.gsfc.nasa.gov/data/ (accessed on 30 January 2018).
  47. AVIRIS Data. Available online: http://aviris.jpl.nasa.gov/ (accessed on 30 January 2018).
  48. A Freeware Multispectral Image Data Analysis System. Available online: https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 30 January 2018).
  49. Global Digital Product Sample. Available online: http://www.digitalglobe.com/product-samples (accessed on 30 January 2018).
  50. Changyi's Homepage on Escience.cn. Available online: http://www.escience.cn/people/changyi/index.html (accessed on 23 March 2018).
  51. Hyperion Data. Available online: https://eo1.usgs.gov/sensors/hyperion (accessed on 30 January 2018).
Figure 1. Three typical stripe images of MODIS data. (a) Regular stripes in Terra MODIS data band 27; (b) Irregular stripes in Terra MODIS data band 34; (c) Stripes in Aqua MODIS data band 21.
Figure 2. Gradient properties in MODIS data. (a) original stripe noise image; (b) gradient image perpendicular to the stripes; (c) gradient image along the stripes.
Figure 3. Illustration of Extreme Stripe and Extreme Area.
Figure 4. Illustration of extremely dark area and extremely dark stripe detection. (a) original subimage extracted from Terra data; (b) extreme area in the vertical direction; (c) extreme area in the horizontal direction; (d) initial extremely dark stripe; (e) refined extremely dark stripe.
Figure 5. Flow chart of the proposed method.
Figure 6. Simulated stripe data. (a) Terra MODIS data S1. (b) Terra MODIS data S2. (c) Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) hyperspectral data S3. (d) Washington DC Mall hyperspectral data S4. (e) Aqua MODIS data S5. (f) Washington DC multispectral data S6. (g–l) simulated stripe images.
Figure 7. Results of different methods for simulated stripe MODIS data S1. (a) BM3D; (b) SNRCNN; (c) GF-based; (d) WAFT; (e) UV; (f) HUTV; (g) SUV; (h) WDSUV.
Figure 8. Results of comparison methods for simulated stripe Terra MODIS data S2. (a) BM3D; (b) SNRCNN; (c) GF-based; (d) WAFT; (e) UV; (f) HUTV; (g) SUV; (h) WDSUV.
Figure 9. Results of different methods for simulated stripe AVIRIS hyperspectral data S3. (a) BM3D; (b) SNRCNN; (c) GF-based; (d) WAFT; (e) UV; (f) HUTV; (g) SUV; (h) WDSUV.
Figure 10. Results of comparison methods for simulated stripe Washington DC Mall hyperspectral data S4. (a) BM3D; (b) SNRCNN; (c) GF-based; (d) WAFT; (e) UV; (f) HUTV; (g) SUV; (h) WDSUV.
Figure 11. Noise estimation comparison results for simulated stripe Washington DC Mall hyperspectral data S4. (a) BM3D; (b) SNRCNN; (c) GF-based; (d) WAFT; (e) UV; (f) HUTV; (g) SUV; (h) WDSUV.
Figure 12. Comparison results for simulated stripe Aqua MODIS data S5. (a) BM3D; (b) SNRCNN; (c) GF-based; (d) WAFT; (e) UV; (f) HUTV; (g) SUV; (h) WDSUV.
Figure 13. Noise estimation comparison results for simulated stripe Aqua MODIS data S5. (a) BM3D; (b) SNRCNN; (c) GF-based; (d) WAFT; (e) UV; (f) HUTV; (g) SUV; (h) WDSUV.
Figure 14. Comparison results for simulated stripe Washington DC multispectral data S6. (a) BM3D; (b) SNRCNN; (c) GF-based; (d) WAFT; (e) UV; (f) HUTV; (g) SUV; (h) WDSUV.
Figure 15. Mean cross-track profiles of comparison methods for simulated stripe AVIRIS hyperspectral data S3. (a) striped data; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV.
Figure 16. Mean cross-track profiles of comparison methods for simulated stripe Washington DC Mall hyperspectral data S4. (a) striped data; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV.
Figure 17. Results of different methods on Terra MODIS data band 27 R1. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV. Zoom in for better visualization.
Figure 18. Results of different methods on Terra MODIS data band 27 R2. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV.
Figure 19. Results of different methods on Terra MODIS data band 30 R3. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV. Zoom in for better visualization. In each result, the top part is the destriping result, and the bottom part is the corresponding stripe noise estimation.
Figure 20. Results of different methods on Terra MODIS data band 28 R4. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV. In each result, the top part is the destriping result, and the bottom part is the corresponding stripe noise estimation.
Figure 21. Results of different methods on Hyperion data band 35 R5. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV. In each result, the top part is the destriping result, and the bottom part is the corresponding stripe noise estimation.
Figure 22. Results of different methods on Hyperion data band 135 R6. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV. In each result, the top part is the destriping result, and the bottom part is the corresponding stripe noise estimation.
Figure 23. Mean cross-track profiles of Terra MODIS data R1. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV.
Figure 24. Mean cross-track profiles of Terra MODIS data R3. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV.
Figure 25. Mean cross-track profiles of Terra MODIS data R4. (a) original; (b) BM3D; (c) SNRCNN; (d) GF-based; (e) WAFT; (f) UV; (g) HUTV; (h) SUV; (i) WDSUV.
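For reference, the mean cross-track profile shown in Figures 15, 16 and 23–25 is simply the column-wise mean of the image plotted against the column index: stripe noise appears as rapid oscillations of this curve, and a good destriping result flattens the oscillations while preserving the slow trend. The sketch below is a minimal illustration with synthetic data; `mean_cross_track_profile` is an illustrative helper, not the authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt

def mean_cross_track_profile(image):
    """Column-wise mean of an image; stripe noise shows up as rapid
    oscillations of this curve along the column index."""
    return np.asarray(image, dtype=float).mean(axis=0)

# Synthetic illustration: a smooth ramp image plus one offset per column.
base = np.tile(np.linspace(50, 200, 256), (256, 1))
striped = base + np.random.uniform(-20, 20, size=256)

plt.plot(mean_cross_track_profile(striped), label="striped")
plt.plot(mean_cross_track_profile(base), label="clean")
plt.xlabel("Column index")
plt.ylabel("Mean digital number")
plt.legend()
plt.show()
```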
Figure 26. PSNR curves of the regulation parameters. (a) PSNR curve of parameter λ1 (λ2 = 0.001, λ3 = 0.05); (b) PSNR curve of parameter λ2 (λ1 = 0.15, λ3 = 0.05); (c) PSNR curve of parameter λ3 (λ1 = 0.15, λ2 = 0.001).
Figure 27. Limitations. An example where the proposed model fails to remove a fragile stripe. (a) original striped MODIS data; (b) the output of WDSUV.
Table 1. Quantitative assessment of six simulated data.

| Images | Index | Degrade | BM3D | SNRCNN | GF-Based | WAFT | UV | HUTV | SUV | WDSUV |
|---|---|---|---|---|---|---|---|---|---|---|
| Terra MODIS data S1 | PSNR | 26.0840 | 25.5200 | 27.2117 | 32.0181 | 31.0007 | 32.6417 | 31.7010 | 38.3245 | 41.0911 |
| | SSIM | 0.7324 | 0.5870 | 0.7703 | 0.8918 | 0.8890 | 0.8878 | 0.8806 | 0.9786 | 0.9848 |
| Terra MODIS data S2 | PSNR | 23.4022 | 23.3515 | 24.5212 | 39.3852 | 40.8607 | 40.9761 | 36.2624 | 46.7106 | 48.2076 |
| | SSIM | 0.4382 | 0.3484 | 0.4270 | 0.9472 | 0.9498 | 0.9567 | 0.8924 | 0.9842 | 0.9904 |
| AVIRIS hyperspectral data S3 | PSNR | 26.2208 | 25.7951 | 27.3176 | 35.5988 | 33.6703 | 34.2757 | 31.8994 | 39.5019 | 47.1249 |
| | SSIM | 0.6409 | 0.4165 | 0.6849 | 0.9540 | 0.9104 | 0.9077 | 0.8734 | 0.9389 | 0.9856 |
| Washington DC Mall S4 | PSNR | 24.2427 | 24.1440 | 25.6211 | 34.6449 | 32.4722 | 33.9846 | 29.4783 | 37.9291 | 43.9164 |
| | SSIM | 0.6750 | 0.6392 | 0.7008 | 0.9481 | 0.9060 | 0.9237 | 0.8304 | 0.9700 | 0.9883 |
| Aqua MODIS data S5 | PSNR | 22.3573 | 22.3398 | 23.3799 | 30.6260 | 28.4963 | 27.5805 | 28.6585 | 35.2163 | 36.4584 |
| | SSIM | 0.5670 | 0.4939 | 0.5951 | 0.7893 | 0.7902 | 0.8502 | 0.8070 | 0.9681 | 0.9832 |
| Washington DC multispectral S6 | PSNR | 25.0689 | 24.9520 | 26.1792 | 33.0079 | 31.4577 | 31.2565 | 31.8940 | 32.6320 | 36.9800 |
| | SSIM | 0.6135 | 0.4813 | 0.6322 | 0.8858 | 0.8611 | 0.8720 | 0.8587 | 0.9120 | 0.9547 |
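The PSNR and SSIM indices reported in Tables 1–3 can be reproduced with standard implementations. The following is a minimal sketch using scikit-image; the synthetic arrays merely stand in for the test data S1–S6 and a destriped result.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-ins for a clean test image and a destriped result; in the
# experiments these would be, e.g., Terra MODIS data S1 and the output
# of one of the compared methods.
clean = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
result = np.clip(clean + np.random.normal(0, 5, clean.shape), 0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(clean, result, data_range=255)
ssim = structural_similarity(clean, result, data_range=255)
print("PSNR = %.4f dB, SSIM = %.4f" % (psnr, ssim))
```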
Table 2. Peak signal-to-noise ratio (PSNR) index results of different methods on simulated data under various noise levels (stripe ratio r; stripe intensities of 10, 30, 50 and 80).

| Image | Method | r=0.1, 10 | r=0.1, 30 | r=0.1, 50 | r=0.1, 80 | r=0.4, 10 | r=0.4, 30 | r=0.4, 50 | r=0.4, 80 | r=0.6, 10 | r=0.6, 30 | r=0.6, 50 | r=0.6, 80 | r=0.8, 10 | r=0.8, 30 | r=0.8, 50 | r=0.8, 80 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MODIS data S1 (periodical stripe) | Degrade | 38.3636 | 28.9416 | 24.6571 | 20.9765 | 32.3406 | 22.9182 | 18.6465 | 14.9819 | 30.5788 | 21.1535 | 16.8783 | 13.2156 | 29.3288 | 19.9101 | 15.6230 | 11.9539 |
| | BM3D | 37.9730 | 28.7184 | 24.1826 | 20.5160 | 32.2882 | 22.8704 | 18.5387 | 14.8898 | 30.5463 | 21.1186 | 16.8067 | 13.1609 | 29.3075 | 19.8858 | 15.5710 | 11.9200 |
| | SNRCNN | 39.4908 | 31.8395 | 26.3301 | 21.8917 | 37.0035 | 25.2144 | 19.6972 | 15.4788 | 35.3156 | 23.1266 | 17.6214 | 13.4484 | 34.6232 | 21.3435 | 16.2456 | 12.1758 |
| | GF-based | 39.0138 | 37.9688 | 36.2370 | 30.0010 | 38.3812 | 34.3600 | 30.4248 | 26.1004 | 37.7721 | 32.2284 | 28.8698 | 24.0200 | 37.6521 | 31.7981 | 27.4815 | 22.3266 |
| | WAFT | 42.4278 | 37.0602 | 30.6437 | 30.1540 | 39.4671 | 34.9552 | 29.2543 | 27.0905 | 38.9408 | 33.5375 | 28.1855 | 25.1431 | 38.6607 | 32.7633 | 27.2272 | 23.5700 |
| | UV | 38.4699 | 35.6250 | 34.8370 | 33.8500 | 38.2752 | 33.8801 | 32.3190 | 28.6162 | 37.9590 | 32.3228 | 29.6661 | 24.8966 | 37.9119 | 32.0520 | 28.9193 | 24.1231 |
| | HUTV | 39.5704 | 36.2796 | 34.2204 | 30.4702 | 38.7103 | 33.8595 | 30.5498 | 26.6128 | 37.3328 | 28.6134 | 27.4082 | 23.3219 | 38.0095 | 31.4505 | 26.2719 | 23.0960 |
| | SUV | 48.4850 | 44.9149 | 43.0147 | 38.0228 | 41.0528 | 39.9256 | 35.8973 | 33.3458 | 35.7053 | 37.4478 | 32.9375 | 27.0810 | 34.7791 | 35.5368 | 30.9388 | 26.2138 |
| | WDSUV | 50.2697 | 45.3239 | 44.2248 | 42.2322 | 43.2668 | 40.0832 | 39.4022 | 36.1014 | 39.1392 | 38.9162 | 36.8020 | 32.3301 | 39.7454 | 39.4813 | 35.7192 | 30.3441 |
| MODIS data S1 (nonperiodical stripe) | Degrade | 38.4219 | 28.9951 | 24.7108 | 21.0400 | 32.3184 | 22.9023 | 18.6161 | 14.9338 | 30.5989 | 21.1907 | 16.9180 | 13.2833 | 29.3416 | 19.9244 | 15.6378 | 11.9716 |
| | BM3D | 37.9733 | 28.7510 | 24.2181 | 20.5611 | 32.2694 | 22.8528 | 18.5077 | 14.8385 | 30.5693 | 21.1577 | 16.8473 | 13.2296 | 29.3240 | 19.8993 | 15.5856 | 11.9358 |
| | SNRCNN | 39.3618 | 31.3672 | 26.0140 | 21.7015 | 36.5788 | 24.9534 | 19.5394 | 15.3081 | 34.7035 | 22.8145 | 17.6132 | 13.5403 | 33.5670 | 21.2727 | 16.1483 | 12.1258 |
| | GF-based | 38.9359 | 36.6797 | 34.0862 | 29.7559 | 38.0441 | 33.5266 | 28.6602 | 25.4274 | 36.6564 | 29.9535 | 26.7176 | 23.0023 | 36.2714 | 29.6554 | 26.1259 | 21.8469 |
| | WAFT | 41.8208 | 35.7550 | 31.5855 | 27.1679 | 40.1222 | 33.6273 | 29.4164 | 25.1599 | 36.2671 | 30.0575 | 26.0868 | 23.7940 | 36.2781 | 29.1690 | 25.5951 | 26.0789 |
| | UV | 38.4710 | 36.5816 | 33.6151 | 30.8673 | 38.0733 | 34.3455 | 30.6966 | 26.8573 | 37.4262 | 31.0819 | 26.7207 | 22.7429 | 37.0838 | 30.4512 | 25.8733 | 21.6048 |
| | HUTV | 39.5271 | 35.3252 | 32.5130 | 30.3248 | 38.4048 | 33.4225 | 29.1997 | 26.6952 | 36.6916 | 29.2569 | 26.8509 | 22.2469 | 36.4255 | 28.3136 | 25.3522 | 20.7944 |
| | SUV | 46.0676 | 44.3400 | 38.9258 | 37.7311 | 38.7371 | 39.6975 | 35.2263 | 31.9187 | 34.2789 | 31.5409 | 27.8705 | 23.3751 | 31.5149 | 29.8870 | 26.1507 | 21.5511 |
| | WDSUV | 47.0051 | 47.0919 | 43.4787 | 41.7685 | 40.9192 | 40.4263 | 38.2084 | 33.3118 | 38.1593 | 34.0324 | 30.7349 | 26.0789 | 34.8083 | 32.5101 | 30.1833 | 24.5578 |
| MODIS data S2 (periodical stripe) | Degrade | 38.1441 | 28.6469 | 24.2530 | 20.2865 | 32.1235 | 22.6300 | 18.2398 | 14.2714 | 30.3622 | 20.8668 | 16.4764 | 12.5067 | 29.1128 | 19.6175 | 15.2292 | 11.2611 |
| | BM3D | 37.6601 | 28.5117 | 24.1461 | 20.2234 | 32.0130 | 22.5990 | 18.2129 | 14.2588 | 30.2816 | 20.8425 | 16.4587 | 12.4992 | 29.0472 | 19.5990 | 15.2163 | 11.2565 |
| | SNRCNN | 42.9764 | 32.4129 | 26.1812 | 21.2479 | 39.9476 | 25.1809 | 19.3562 | 14.7615 | 37.4029 | 23.0149 | 17.2241 | 12.6765 | 36.7278 | 21.0633 | 15.8601 | 11.4613 |
| | GF-based | 45.6167 | 43.3258 | 41.5718 | 37.6521 | 44.3715 | 40.0237 | 37.8644 | 32.5072 | 43.2628 | 39.3043 | 35.9049 | 29.4985 | 44.6783 | 38.8118 | 34.0477 | 27.3518 |
| | WAFT | 47.2401 | 42.8725 | 37.1036 | 36.8526 | 43.4684 | 41.3821 | 35.8032 | 33.5217 | 43.1177 | 39.9001 | 34.7649 | 31.5304 | 43.4207 | 40.6491 | 34.7274 | 30.5963 |
| | UV | 43.4176 | 43.2263 | 43.0491 | 42.2975 | 43.1667 | 41.5545 | 40.1779 | 36.6793 | 42.9954 | 40.1732 | 37.9465 | 31.1208 | 43.3713 | 40.7350 | 37.9465 | 31.7282 |
| | HUTV | 44.1750 | 39.2730 | 38.8229 | 35.0612 | 43.3081 | 38.6896 | 34.7521 | 30.6133 | 41.4480 | 36.7318 | 32.5494 | 30.1304 | 43.0293 | 37.4793 | 31.6226 | 30.2447 |
| | SUV | 55.2534 | 52.6987 | 51.4506 | 47.8067 | 44.8470 | 44.5794 | 43.9645 | 39.8285 | 40.5483 | 43.1501 | 39.9061 | 34.3684 | 40.0209 | 40.1130 | 39.5030 | 34.7828 |
| | WDSUV | 54.7737 | 53.4641 | 47.4867 | 45.7451 | 46.4718 | 44.6292 | 44.9953 | 41.3835 | 43.2174 | 46.3007 | 41.4233 | 38.3327 | 41.5090 | 41.0232 | 39.9654 | 36.4712 |
| MODIS data S2 (nonperiodical stripe) | Degrade | 38.1462 | 28.6688 | 24.2906 | 20.3206 | 32.1241 | 22.6317 | 18.2447 | 14.2745 | 30.3607 | 20.8547 | 16.4605 | 12.4907 | 29.1120 | 19.6114 | 15.2223 | 11.2544 |
| | BM3D | 37.6041 | 28.5296 | 24.1831 | 20.2541 | 32.0126 | 22.6000 | 18.2186 | 14.2614 | 30.2843 | 20.8317 | 16.4432 | 12.4833 | 29.0511 | 19.5934 | 15.2093 | 11.2493 |
| | SNRCNN | 42.2972 | 31.7122 | 25.8306 | 21.0435 | 39.3622 | 24.8819 | 19.2232 | 14.6275 | 36.3498 | 22.5742 | 17.1694 | 12.7137 | 35.1416 | 21.0001 | 15.7317 | 11.3596 |
| | GF-based | 44.3271 | 39.5764 | 36.9224 | 34.6063 | 43.7647 | 37.0518 | 34.6596 | 30.4640 | 39.8603 | 33.3486 | 29.5833 | 25.0746 | 39.9661 | 35.2912 | 29.6666 | 25.5680 |
| | WAFT | 45.5348 | 38.9978 | 36.4346 | 34.4136 | 43.1718 | 37.0130 | 34.0479 | 29.3317 | 39.1531 | 30.9575 | 30.5014 | 27.7590 | 39.7103 | 33.6935 | 31.3340 | 28.4894 |
| | UV | 43.1941 | 40.7837 | 38.1597 | 34.5783 | 42.8765 | 38.9708 | 35.5971 | 31.3830 | 40.4867 | 32.9775 | 28.4004 | 23.9095 | 41.4360 | 34.6748 | 29.8098 | 24.6459 |
| | HUTV | 44.4063 | 37.8306 | 35.0888 | 33.3301 | 42.9837 | 37.0956 | 33.5024 | 31.6250 | 39.1709 | 32.5503 | 29.5291 | 24.8175 | 39.5229 | 32.5002 | 28.9732 | 23.8722 |
| | SUV | 53.3067 | 51.7171 | 46.3221 | 46.1404 | 46.5165 | 46.7954 | 42.6519 | 40.8579 | 36.9867 | 35.7377 | 30.4483 | 25.0203 | 33.8652 | 34.7488 | 29.8671 | 25.1922 |
| | WDSUV | 57.7296 | 53.3637 | 48.7888 | 50.0378 | 47.2587 | 46.8928 | 45.6753 | 42.4041 | 41.0117 | 36.4808 | 32.0655 | 26.8727 | 36.9841 | 36.9584 | 34.1676 | 28.5905 |
| Hyperspectral data S3 (periodical stripe) | Degrade | 38.1319 | 28.5903 | 24.2106 | 20.9388 | 32.1110 | 22.5704 | 18.1923 | 14.9239 | 30.3499 | 20.8081 | 16.4339 | 13.1754 | 29.1004 | 19.5592 | 15.1807 | 11.9043 |
| | BM3D | 37.4299 | 28.3034 | 23.8879 | 20.7112 | 31.9536 | 22.5020 | 18.1119 | 14.8803 | 30.2482 | 20.7593 | 16.3793 | 13.1497 | 29.0179 | 19.5218 | 15.1392 | 11.8848 |
| | SNRCNN | 39.0079 | 31.7377 | 26.0553 | 21.9460 | 37.3156 | 24.9018 | 19.3512 | 15.5074 | 35.7618 | 22.8175 | 17.2138 | 13.4562 | 34.9288 | 21.0031 | 15.9262 | 12.2672 |
| | GF-based | 42.5060 | 41.2574 | 40.8444 | 37.1256 | 42.3063 | 40.7073 | 39.4821 | 31.5140 | 41.7498 | 39.8682 | 38.1542 | 28.8929 | 42.0426 | 40.4338 | 36.8492 | 26.7469 |
| | WAFT | 42.1924 | 39.1589 | 35.2671 | 34.5917 | 39.8006 | 38.6106 | 34.7035 | 30.9176 | 39.6123 | 38.0180 | 34.2849 | 29.3421 | 39.7383 | 39.1095 | 34.7673 | 28.1147 |
| | UV | 40.8155 | 38.3434 | 38.2253 | 36.8282 | 40.6768 | 37.6610 | 36.9930 | 32.0192 | 40.5251 | 36.7676 | 35.2858 | 27.9936 | 40.8458 | 37.9764 | 36.7563 | 27.9281 |
| | HUTV | 39.1680 | 37.5216 | 36.0298 | 33.4295 | 38.7537 | 35.8826 | 34.2408 | 30.6096 | 37.8728 | 32.2482 | 32.0498 | 27.6395 | 38.4658 | 35.4814 | 31.2428 | 27.8749 |
| | SUV | 51.9430 | 46.5639 | 45.6472 | 41.4831 | 42.8934 | 45.0270 | 43.0221 | 36.4940 | 38.3094 | 43.6120 | 40.9821 | 31.5260 | 36.9397 | 42.1857 | 39.1560 | 31.2246 |
| | WDSUV | 52.4407 | 50.1288 | 40.3177 | 36.2103 | 48.1432 | 47.2473 | 39.7532 | 34.9969 | 45.8231 | 45.5917 | 37.2981 | 32.8550 | 44.4537 | 43.5823 | 32.3576 | 29.1048 |
| Hyperspectral data S3 (nonperiodical stripe) | Degrade | 38.1308 | 28.5897 | 24.2027 | 20.9204 | 32.1107 | 22.5698 | 18.1960 | 14.9065 | 30.3496 | 20.8086 | 16.4317 | 13.1732 | 29.1007 | 19.5598 | 15.1899 | 11.9409 |
| | BM3D | 37.3848 | 28.2907 | 23.8749 | 20.6880 | 31.9603 | 22.5008 | 18.1143 | 14.8589 | 30.2458 | 20.7606 | 16.3766 | 13.1439 | 29.0211 | 19.5228 | 15.1459 | 11.9203 |
| | SNRCNN | 38.7845 | 30.9571 | 25.5719 | 21.6554 | 36.7569 | 24.7794 | 19.2770 | 15.4183 | 35.3986 | 22.5178 | 17.2044 | 13.5205 | 33.5486 | 20.8901 | 15.7624 | 12.1881 |
| | GF-based | 42.2991 | 39.7528 | 37.9501 | 34.7591 | 40.7576 | 36.6630 | 33.6928 | 28.9421 | 40.2180 | 35.3194 | 33.0864 | 26.9983 | 37.7981 | 31.7956 | 29.7875 | 24.2069 |
| | WAFT | 41.7831 | 37.7890 | 35.4026 | 33.4041 | 39.2474 | 34.2986 | 31.8598 | 28.4658 | 38.3109 | 33.0572 | 32.1086 | 28.4944 | 36.4695 | 30.6235 | 30.6457 | 25.6458 |
| | UV | 40.7916 | 39.4278 | 36.1542 | 32.8773 | 39.8204 | 35.5633 | 31.1753 | 27.6026 | 39.5813 | 34.4219 | 30.1753 | 25.8380 | 38.0463 | 30.7917 | 26.3328 | 22.1651 |
| | HUTV | 39.3207 | 36.8817 | 35.1384 | 33.2775 | 37.8767 | 33.4628 | 30.0189 | 28.9225 | 37.9971 | 32.5308 | 30.0652 | 25.8394 | 36.0397 | 29.0336 | 27.1285 | 20.5277 |
| | SUV | 47.6830 | 46.4302 | 40.2209 | 38.8257 | 39.4676 | 39.4614 | 38.3263 | 32.1460 | 36.4566 | 37.3494 | 33.5141 | 26.1144 | 30.6345 | 31.0487 | 26.7216 | 20.4176 |
| | WDSUV | 50.6259 | 50.5892 | 45.9726 | 43.1071 | 44.0476 | 46.2188 | 42.4513 | 35.8632 | 44.3317 | 41.5655 | 33.9083 | 28.0227 | 39.7249 | 35.8832 | 30.5793 | 25.0109 |
| Hyperspectral data S4 (periodical stripe) | Degrade | 38.3081 | 28.7657 | 24.4384 | 21.0540 | 32.2875 | 22.7453 | 18.4232 | 15.0411 | 30.5267 | 20.9842 | 16.6662 | 13.2752 | 29.2773 | 19.7348 | 15.4126 | 12.0143 |
| | BM3D | 37.7608 | 28.5314 | 24.0485 | 20.6537 | 32.1885 | 22.6851 | 18.3337 | 14.9621 | 30.4644 | 20.9403 | 16.6086 | 13.2282 | 29.2339 | 19.7007 | 15.3722 | 11.9862 |
| | SNRCNN | 38.3191 | 31.2693 | 26.0951 | 21.9826 | 36.1631 | 24.8326 | 19.4969 | 15.5880 | 34.6511 | 22.8072 | 17.4050 | 13.5335 | 33.8092 | 21.0800 | 16.0903 | 12.3224 |
| | GF-based | 35.4574 | 35.4402 | 35.4038 | 31.9845 | 34.5255 | 34.1008 | 33.9158 | 28.8678 | 34.4463 | 33.9952 | 33.2673 | 26.9497 | 34.4388 | 34.0353 | 33.0103 | 25.4585 |
| | WAFT | 35.5012 | 32.4622 | 29.1166 | 28.8143 | 33.0202 | 32.3641 | 28.9810 | 27.5259 | 33.0025 | 32.1794 | 28.8702 | 25.8683 | 33.0308 | 32.4567 | 28.9425 | 25.2356 |
| | UV | 34.6952 | 31.8635 | 31.6717 | 30.9389 | 34.6984 | 31.6092 | 31.3027 | 28.2991 | 34.6677 | 31.2433 | 30.6359 | 25.7582 | 34.6379 | 31.7924 | 30.9559 | 25.7389 |
| | HUTV | 34.8731 | 32.6289 | 32.0435 | 28.6424 | 34.7107 | 31.8426 | 29.7554 | 26.9722 | 34.2293 | 29.5224 | 27.9733 | 25.3840 | 34.7866 | 30.8774 | 27.2127 | 25.2576 |
| | SUV | 45.8325 | 37.9881 | 37.8021 | 32.6007 | 37.4824 | 37.5948 | 36.9134 | 30.5562 | 33.0448 | 36.8791 | 32.5475 | 27.9168 | 32.8909 | 36.4694 | 32.2048 | 27.3382 |
| | WDSUV | 47.4231 | 42.7254 | 38.0800 | 34.0489 | 42.4871 | 41.4536 | 37.1274 | 32.4728 | 40.9252 | 39.6583 | 34.6712 | 30.5732 | 40.7620 | 39.8275 | 33.6095 | 28.3756 |
| Hyperspectral data S4 (nonperiodical stripe) | Degrade | 38.1308 | 28.5884 | 24.2417 | 20.8247 | 32.1102 | 22.5678 | 18.2484 | 14.8619 | 30.3494 | 20.8070 | 16.4906 | 13.1183 | 29.1000 | 19.5576 | 15.2422 | 11.8480 |
| | BM3D | 37.5847 | 28.3488 | 23.8618 | 20.4240 | 32.0228 | 22.5114 | 18.1630 | 14.7838 | 30.2945 | 20.7653 | 16.4354 | 13.0743 | 29.0606 | 19.5250 | 15.2006 | 11.8110 |
| | SNRCNN | 38.2193 | 30.7263 | 25.5635 | 21.5607 | 35.5068 | 24.5464 | 19.2753 | 15.3704 | 34.3556 | 22.3848 | 17.2245 | 13.4406 | 33.1192 | 20.8728 | 15.7966 | 12.0450 |
| | GF-based | 35.3150 | 34.2430 | 32.9393 | 31.2654 | 35.0389 | 32.4143 | 31.1472 | 27.2521 | 34.1816 | 32.6377 | 31.2622 | 26.0432 | 34.0332 | 32.1975 | 30.1290 | 24.2097 |
| | WAFT | 35.3527 | 31.8039 | 29.8337 | 29.0496 | 34.7215 | 31.2300 | 28.5766 | 25.7183 | 32.7093 | 30.3576 | 28.2436 | 25.9449 | 32.6929 | 30.2041 | 28.0116 | 25.0329 |
| | UV | 34.6023 | 33.2470 | 30.6498 | 28.5669 | 34.3881 | 32.1598 | 29.0409 | 25.8483 | 34.3881 | 31.9343 | 28.5941 | 24.6804 | 34.2568 | 31.3453 | 27.8425 | 23.4028 |
| | HUTV | 34.7922 | 32.3350 | 30.4412 | 28.2654 | 34.1625 | 31.1241 | 27.7662 | 26.0231 | 34.3461 | 29.6436 | 27.7787 | 24.6458 | 34.0478 | 28.0687 | 26.4209 | 22.0203 |
| | SUV | 42.4435 | 38.0209 | 33.3810 | 32.5933 | 36.5716 | 37.4128 | 32.3467 | 28.0938 | 33.8128 | 32.9177 | 31.4965 | 25.5164 | 30.9471 | 31.2761 | 27.7703 | 22.6486 |
| | WDSUV | 46.4176 | 45.4022 | 36.8808 | 36.5418 | 42.7482 | 42.7082 | 32.2437 | 30.0206 | 41.6863 | 39.5000 | 33.5065 | 28.5151 | 39.0474 | 33.6656 | 31.1113 | 25.3288 |
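The noise levels in Tables 2 and 3 are parameterized by the ratio r of striped columns and the stripe intensity. A minimal sketch of how such degradation can be simulated is given below; the additive column-wise model, the uniform intensity draw and the periodic-pattern rule are illustrative assumptions, and `add_stripes` is a hypothetical helper rather than the authors' exact protocol.

```python
import numpy as np

def add_stripes(image, r=0.4, intensity=30, period=None, seed=0):
    """Simulate column-wise stripe noise on a grayscale image.

    r         : fraction of columns corrupted by stripes (0 < r <= 1).
    intensity : maximum absolute stripe magnitude in gray levels.
    period    : if given, corrupt the first round(period * r) columns of
                every `period`-column block (periodical stripes);
                otherwise pick striped columns at random (nonperiodical).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    if period is not None:
        cols = np.where(np.arange(w) % period < max(1, round(period * r)))[0]
    else:
        cols = rng.choice(w, size=int(r * w), replace=False)
    # Each corrupted column receives a constant additive offset.
    offsets = rng.uniform(-intensity, intensity, size=cols.size)
    striped = image.astype(float)
    striped[:, cols] += offsets
    return np.clip(striped, 0, 255)
```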
Table 3. Structural similarity (SSIM) index results of different methods on simulated data under various noise levels (stripe ratio r; stripe intensities of 10, 30, 50 and 80).

| Image | Method | r=0.1, 10 | r=0.1, 30 | r=0.1, 50 | r=0.1, 80 | r=0.4, 10 | r=0.4, 30 | r=0.4, 50 | r=0.4, 80 | r=0.6, 10 | r=0.6, 30 | r=0.6, 50 | r=0.6, 80 | r=0.8, 10 | r=0.8, 30 | r=0.8, 50 | r=0.8, 80 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MODIS data S1 (periodical stripe) | Degrade | 0.9117 | 0.7689 | 0.6737 | 0.5932 | 0.7748 | 0.4430 | 0.2898 | 0.1745 | 0.7192 | 0.3633 | 0.2067 | 0.1078 | 0.6572 | 0.2800 | 0.1426 | 0.0665 |
| | BM3D | 0.8885 | 0.7157 | 0.5286 | 0.3559 | 0.7571 | 0.4117 | 0.2240 | 0.1027 | 0.7035 | 0.3360 | 0.1580 | 0.0609 | 0.6422 | 0.2585 | 0.1092 | 0.0382 |
| | SNRCNN | 0.9294 | 0.8076 | 0.6857 | 0.5844 | 0.8857 | 0.5505 | 0.3336 | 0.1890 | 0.8674 | 0.4595 | 0.2364 | 0.1148 | 0.8273 | 0.3320 | 0.1574 | 0.0692 |
| | GF-based | 0.9238 | 0.9109 | 0.8939 | 0.8656 | 0.9130 | 0.8733 | 0.8148 | 0.7469 | 0.9075 | 0.8454 | 0.8054 | 0.6668 | 0.8996 | 0.8257 | 0.7678 | 0.5675 |
| | WAFT | 0.9291 | 0.9072 | 0.8805 | 0.8654 | 0.9120 | 0.8827 | 0.8382 | 0.7885 | 0.9090 | 0.8690 | 0.8202 | 0.7479 | 0.9053 | 0.8591 | 0.8026 | 0.7105 |
| | UV | 0.9187 | 0.9020 | 0.8969 | 0.8849 | 0.9109 | 0.8794 | 0.8634 | 0.8059 | 0.9066 | 0.8609 | 0.8271 | 0.7169 | 0.9042 | 0.8536 | 0.8150 | 0.7050 |
| | HUTV | 0.9203 | 0.8988 | 0.8831 | 0.8185 | 0.9107 | 0.8759 | 0.8188 | 0.6720 | 0.9030 | 0.7609 | 0.7531 | 0.6151 | 0.9026 | 0.8460 | 0.6742 | 0.5601 |
| | SUV | 0.9955 | 0.9927 | 0.9892 | 0.9770 | 0.9837 | 0.9737 | 0.9591 | 0.9221 | 0.9542 | 0.9578 | 0.9203 | 0.7652 | 0.9450 | 0.9359 | 0.8502 | 0.7367 |
| | WDSUV | 0.9962 | 0.9941 | 0.9910 | 0.9859 | 0.9845 | 0.9800 | 0.9735 | 0.9485 | 0.9884 | 0.9719 | 0.9728 | 0.9402 | 0.9925 | 0.9687 | 0.9396 | 0.8902 |
| MODIS data S1 (nonperiodical stripe) | Degrade | 0.9261 | 0.8009 | 0.7181 | 0.6489 | 0.7660 | 0.4441 | 0.2801 | 0.1671 | 0.7252 | 0.3758 | 0.2169 | 0.1169 | 0.6756 | 0.3052 | 0.1596 | 0.0758 |
| | BM3D | 0.9003 | 0.7438 | 0.5654 | 0.4008 | 0.7497 | 0.4120 | 0.2154 | 0.0952 | 0.7094 | 0.3481 | 0.1662 | 0.0670 | 0.6605 | 0.2818 | 0.1216 | 0.0436 |
| | SNRCNN | 0.9325 | 0.8289 | 0.7241 | 0.6392 | 0.8807 | 0.5382 | 0.3183 | 0.1788 | 0.8640 | 0.4564 | 0.2448 | 0.1234 | 0.8294 | 0.3652 | 0.1750 | 0.0782 |
| | GF-based | 0.9245 | 0.8960 | 0.8883 | 0.8667 | 0.9138 | 0.8657 | 0.8292 | 0.7461 | 0.9078 | 0.8518 | 0.7975 | 0.6653 | 0.9024 | 0.8346 | 0.7666 | 0.5769 |
| | WAFT | 0.9292 | 0.9058 | 0.8913 | 0.8458 | 0.9164 | 0.8769 | 0.8410 | 0.7837 | 0.9026 | 0.8592 | 0.8114 | 0.7386 | 0.8994 | 0.8373 | 0.7951 | 0.7020 |
| | UV | 0.9191 | 0.9099 | 0.8942 | 0.8671 | 0.9114 | 0.8841 | 0.8563 | 0.7969 | 0.9067 | 0.8666 | 0.8210 | 0.7247 | 0.9026 | 0.8521 | 0.7848 | 0.6579 |
| | HUTV | 0.9207 | 0.8975 | 0.8724 | 0.8262 | 0.9106 | 0.8751 | 0.8034 | 0.7336 | 0.9039 | 0.8405 | 0.7913 | 0.6378 | 0.8981 | 0.8079 | 0.7519 | 0.4897 |
| | SUV | 0.9961 | 0.9938 | 0.9850 | 0.9770 | 0.9808 | 0.9745 | 0.9557 | 0.9113 | 0.9459 | 0.9246 | 0.8807 | 0.7459 | 0.9171 | 0.8743 | 0.7973 | 0.6599 |
| | WDSUV | 0.9953 | 0.9949 | 0.9920 | 0.9866 | 0.9934 | 0.9782 | 0.9703 | 0.9370 | 0.9890 | 0.9774 | 0.9349 | 0.8657 | 0.9825 | 0.9660 | 0.9100 | 0.8187 |
| MODIS data S2 (periodical stripe) | Degrade | 0.7780 | 0.5538 | 0.4683 | 0.4096 | 0.4661 | 0.1396 | 0.0651 | 0.0308 | 0.3872 | 0.0970 | 0.0406 | 0.0169 | 0.3035 | 0.0624 | 0.0248 | 0.0098 |
| | BM3D | 0.6868 | 0.4476 | 0.3275 | 0.2438 | 0.4151 | 0.1126 | 0.0428 | 0.0157 | 0.3449 | 0.0780 | 0.0267 | 0.0087 | 0.2702 | 0.0504 | 0.0166 | 0.0051 |
| | SNRCNN | 0.9078 | 0.6096 | 0.4681 | 0.3919 | 0.8217 | 0.2269 | 0.0854 | 0.0374 | 0.7746 | 0.1505 | 0.0500 | 0.0183 | 0.6371 | 0.0805 | 0.0282 | 0.0104 |
| | GF-based | 0.9698 | 0.9550 | 0.9059 | 0.9221 | 0.9588 | 0.9316 | 0.8831 | 0.6338 | 0.9470 | 0.9314 | 0.8148 | 0.4661 | 0.9593 | 0.9110 | 0.7185 | 0.3084 |
| | WAFT | 0.9656 | 0.9530 | 0.9143 | 0.9098 | 0.9595 | 0.9454 | 0.9034 | 0.8843 | 0.9578 | 0.9374 | 0.8950 | 0.8648 | 0.9583 | 0.9389 | 0.8931 | 0.8577 |
| | UV | 0.9637 | 0.9595 | 0.9613 | 0.9580 | 0.9620 | 0.9510 | 0.9488 | 0.9286 | 0.9606 | 0.9427 | 0.9340 | 0.8050 | 0.9618 | 0.9429 | 0.9370 | 0.8797 |
| | HUTV | 0.9531 | 0.9082 | 0.8978 | 0.7903 | 0.9460 | 0.8935 | 0.7997 | 0.5670 | 0.9380 | 0.8630 | 0.7290 | 0.4315 | 0.9407 | 0.8575 | 0.5210 | 0.3616 |
| | SUV | 0.9957 | 0.9947 | 0.9927 | 0.9890 | 0.9882 | 0.9846 | 0.9802 | 0.9679 | 0.9754 | 0.9748 | 0.9645 | 0.9099 | 0.9622 | 0.9578 | 0.9359 | 0.8876 |
| | WDSUV | 0.9986 | 0.9954 | 0.9970 | 0.9925 | 0.9899 | 0.9880 | 0.9865 | 0.9813 | 0.9959 | 0.9841 | 0.9768 | 0.9538 | 0.9947 | 0.9729 | 0.9609 | 0.9329 |
| MODIS data S2 (nonperiodical stripe) | Degrade | 0.8019 | 0.6048 | 0.5321 | 0.4838 | 0.4669 | 0.1459 | 0.0706 | 0.0347 | 0.3926 | 0.0998 | 0.0419 | 0.0170 | 0.3275 | 0.0714 | 0.0282 | 0.0109 |
| | BM3D | 0.7061 | 0.4880 | 0.3747 | 0.2937 | 0.4163 | 0.1183 | 0.0477 | 0.0192 | 0.3490 | 0.0798 | 0.0272 | 0.0084 | 0.2908 | 0.0569 | 0.0182 | 0.0053 |
| | SNRCNN | 0.9080 | 0.6415 | 0.5233 | 0.4605 | 0.8183 | 0.2150 | 0.0870 | 0.0386 | 0.7499 | 0.1467 | 0.0507 | 0.0183 | 0.6561 | 0.0962 | 0.0319 | 0.0112 |
| | GF-based | 0.9668 | 0.9483 | 0.9409 | 0.9090 | 0.9638 | 0.9231 | 0.8795 | 0.6440 | 0.9514 | 0.8842 | 0.8020 | 0.4692 | 0.9457 | 0.9052 | 0.7217 | 0.3296 |
| | WAFT | 0.9668 | 0.9430 | 0.9252 | 0.9145 | 0.9522 | 0.9313 | 0.9105 | 0.8692 | 0.9491 | 0.8746 | 0.8699 | 0.8550 | 0.9477 | 0.8968 | 0.8651 | 0.8437 |
| | UV | 0.9636 | 0.9582 | 0.9518 | 0.9336 | 0.9633 | 0.9498 | 0.9394 | 0.9042 | 0.9595 | 0.9237 | 0.8725 | 0.7702 | 0.9596 | 0.9219 | 0.8605 | 0.7199 |
| | HUTV | 0.9579 | 0.9059 | 0.8586 | 0.7686 | 0.9503 | 0.8932 | 0.8004 | 0.6863 | 0.9359 | 0.8441 | 0.7152 | 0.5715 | 0.9291 | 0.8178 | 0.7036 | 0.4286 |
| | SUV | 0.9966 | 0.9947 | 0.9893 | 0.9877 | 0.9895 | 0.9848 | 0.9754 | 0.9628 | 0.9423 | 0.9510 | 0.9007 | 0.7473 | 0.9124 | 0.9137 | 0.8696 | 0.6503 |
| | WDSUV | 0.9982 | 0.9965 | 0.9934 | 0.9941 | 0.9972 | 0.9875 | 0.9860 | 0.9772 | 0.9946 | 0.9635 | 0.9843 | 0.9599 | 0.9911 | 0.9534 | 0.9832 | 0.9608 |
| Hyperspectral data S3 (periodical stripe) | Degrade | 0.8730 | 0.6499 | 0.5345 | 0.4711 | 0.6414 | 0.2375 | 0.1097 | 0.0548 | 0.5652 | 0.1661 | 0.0736 | 0.0304 | 0.4652 | 0.1152 | 0.0388 | 0.0114 |
| | BM3D | 0.8165 | 0.5396 | 0.3183 | 0.1931 | 0.5973 | 0.1898 | 0.0569 | 0.0156 | 0.5250 | 0.1304 | 0.0397 | 0.0105 | 0.4278 | 0.0913 | 0.0177 | 0.0007 |
| | SNRCNN | 0.8876 | 0.6882 | 0.5256 | 0.4439 | 0.8456 | 0.3394 | 0.1397 | 0.0658 | 0.8267 | 0.2410 | 0.0894 | 0.0337 | 0.7293 | 0.1428 | 0.0452 | 0.0128 |
| | GF-based | 0.9591 | 0.9614 | 0.9537 | 0.9066 | 0.9584 | 0.9557 | 0.9260 | 0.7191 | 0.9563 | 0.9420 | 0.8969 | 0.5936 | 0.9563 | 0.9502 | 0.8354 | 0.4289 |
| | WAFT | 0.9500 | 0.9239 | 0.8561 | 0.8305 | 0.9369 | 0.9230 | 0.8470 | 0.7250 | 0.9323 | 0.9226 | 0.8402 | 0.6873 | 0.9323 | 0.9237 | 0.8369 | 0.6312 |
| | UV | 0.9403 | 0.9139 | 0.9128 | 0.8859 | 0.9405 | 0.9104 | 0.8999 | 0.7741 | 0.9404 | 0.9049 | 0.8844 | 0.6396 | 0.9406 | 0.9112 | 0.8868 | 0.6278 |
| | HUTV | 0.9199 | 0.8952 | 0.8750 | 0.7698 | 0.9177 | 0.8779 | 0.7889 | 0.5602 | 0.9119 | 0.8352 | 0.7178 | 0.5032 | 0.9127 | 0.8576 | 0.6043 | 0.4058 |
| | SUV | 0.9954 | 0.9852 | 0.9818 | 0.9191 | 0.9717 | 0.9782 | 0.9638 | 0.8278 | 0.9194 | 0.9706 | 0.9454 | 0.6888 | 0.9051 | 0.9616 | 0.9242 | 0.6142 |
| | WDSUV | 0.9955 | 0.9926 | 0.9872 | 0.9579 | 0.9953 | 0.9855 | 0.9681 | 0.8982 | 0.9933 | 0.9791 | 0.9714 | 0.9119 | 0.9912 | 0.9707 | 0.9587 | 0.8711 |
| Hyperspectral data S3 (nonperiodical stripe) | Degrade | 0.8845 | 0.6818 | 0.5884 | 0.5363 | 0.6408 | 0.2301 | 0.1068 | 0.0512 | 0.5524 | 0.1599 | 0.0656 | 0.0248 | 0.5172 | 0.1451 | 0.0587 | 0.0205 |
| | BM3D | 0.8272 | 0.5655 | 0.3600 | 0.2327 | 0.5963 | 0.1820 | 0.0548 | 0.0140 | 0.5120 | 0.1251 | 0.0325 | 0.0057 | 0.4791 | 0.1158 | 0.0325 | 0.0078 |
| | SNRCNN | 0.8844 | 0.6945 | 0.5650 | 0.4992 | 0.8478 | 0.3239 | 0.1322 | 0.0578 | 0.7923 | 0.2178 | 0.0784 | 0.0277 | 0.7556 | 0.1871 | 0.0669 | 0.0222 |
| | GF-based | 0.9595 | 0.9591 | 0.9545 | 0.9046 | 0.9573 | 0.9510 | 0.9205 | 0.7174 | 0.9564 | 0.9442 | 0.8859 | 0.5820 | 0.9533 | 0.9319 | 0.8448 | 0.4633 |
| | WAFT | 0.9516 | 0.9221 | 0.8885 | 0.8591 | 0.9386 | 0.9125 | 0.8719 | 0.7416 | 0.9296 | 0.9087 | 0.8378 | 0.6814 | 0.9280 | 0.8465 | 0.8276 | 0.6128 |
| | UV | 0.9408 | 0.9385 | 0.9107 | 0.8558 | 0.9403 | 0.9326 | 0.8834 | 0.7410 | 0.9398 | 0.9303 | 0.8711 | 0.6576 | 0.9376 | 0.9099 | 0.8134 | 0.5254 |
| | HUTV | 0.9216 | 0.8947 | 0.8639 | 0.7749 | 0.9153 | 0.8742 | 0.7572 | 0.6200 | 0.9142 | 0.8415 | 0.7490 | 0.4768 | 0.9085 | 0.7947 | 0.7095 | 0.3061 |
| | SUV | 0.9932 | 0.9858 | 0.9434 | 0.9206 | 0.9615 | 0.9331 | 0.9176 | 0.7922 | 0.9378 | 0.9216 | 0.8903 | 0.6224 | 0.8985 | 0.8609 | 0.7900 | 0.4566 |
| | WDSUV | 0.9980 | 0.9936 | 0.9822 | 0.9702 | 0.9920 | 0.9853 | 0.9618 | 0.9432 | 0.9909 | 0.9698 | 0.9663 | 0.8991 | 0.9857 | 0.9829 | 0.9579 | 0.8471 |
| Hyperspectral data S4 (periodical stripe) | Degrade | 0.9584 | 0.8199 | 0.7163 | 0.6317 | 0.8601 | 0.5260 | 0.3422 | 0.2195 | 0.8169 | 0.4390 | 0.2551 | 0.1391 | 0.7571 | 0.3476 | 0.1849 | 0.0948 |
| | BM3D | 0.9460 | 0.7821 | 0.6095 | 0.4384 | 0.8475 | 0.4969 | 0.2836 | 0.1353 | 0.8044 | 0.4136 | 0.2087 | 0.0809 | 0.7443 | 0.3263 | 0.1495 | 0.0525 |
| | SNRCNN | 0.9559 | 0.8602 | 0.7398 | 0.6365 | 0.9380 | 0.6237 | 0.3919 | 0.2439 | 0.9286 | 0.5309 | 0.2867 | 0.1476 | 0.8921 | 0.3987 | 0.2016 | 0.0986 |
| | GF-based | 0.9480 | 0.9473 | 0.9486 | 0.9181 | 0.9381 | 0.9474 | 0.9369 | 0.8234 | 0.9375 | 0.9462 | 0.9272 | 0.7548 | 0.9380 | 0.9407 | 0.9122 | 0.6662 |
| | WAFT | 0.9394 | 0.9061 | 0.8459 | 0.8293 | 0.9153 | 0.9060 | 0.8401 | 0.7740 | 0.9153 | 0.9055 | 0.8358 | 0.7199 | 0.9154 | 0.9059 | 0.8326 | 0.6822 |
| | UV | 0.9298 | 0.8910 | 0.8887 | 0.8706 | 0.9299 | 0.8888 | 0.8795 | 0.7893 | 0.9298 | 0.8857 | 0.8691 | 0.7197 | 0.9296 | 0.8900 | 0.8696 | 0.6976 |
| | HUTV | 0.9249 | 0.9035 | 0.8971 | 0.8135 | 0.9247 | 0.8984 | 0.8366 | 0.7085 | 0.9221 | 0.8533 | 0.7874 | 0.6494 | 0.9253 | 0.8771 | 0.7096 | 0.5733 |
| | SUV | 0.9935 | 0.9726 | 0.9701 | 0.9085 | 0.9666 | 0.9680 | 0.9567 | 0.8418 | 0.9184 | 0.9635 | 0.9025 | 0.7573 | 0.9140 | 0.9586 | 0.8936 | 0.7114 |
| | WDSUV | 0.9952 | 0.9876 | 0.9803 | 0.9575 | 0.9912 | 0.9831 | 0.9688 | 0.9204 | 0.9893 | 0.9785 | 0.9452 | 0.8773 | 0.9886 | 0.9756 | 0.9337 | 0.8802 |
| Hyperspectral data S4 (nonperiodical stripe) | Degrade | 0.9561 | 0.8204 | 0.7227 | 0.6475 | 0.8559 | 0.5061 | 0.3098 | 0.1748 | 0.8040 | 0.4095 | 0.2266 | 0.1113 | 0.7621 | 0.3436 | 0.1737 | 0.0746 |
| | BM3D | 0.9440 | 0.7837 | 0.6174 | 0.4525 | 0.8441 | 0.4784 | 0.2568 | 0.1076 | 0.7915 | 0.3855 | 0.1874 | 0.0680 | 0.7494 | 0.3226 | 0.1420 | 0.0428 |
| | SNRCNN | 0.9569 | 0.8559 | 0.7412 | 0.6524 | 0.9377 | 0.6050 | 0.3570 | 0.1925 | 0.9149 | 0.4877 | 0.2546 | 0.1185 | 0.8930 | 0.4047 | 0.1905 | 0.0776 |
| | GF-based | 0.9478 | 0.9458 | 0.9438 | 0.9166 | 0.9474 | 0.9443 | 0.9308 | 0.8142 | 0.9374 | 0.9431 | 0.9228 | 0.7403 | 0.9377 | 0.9446 | 0.9089 | 0.6613 |
| | WAFT | 0.9392 | 0.9047 | 0.8725 | 0.8543 | 0.9366 | 0.9033 | 0.8625 | 0.7609 | 0.9144 | 0.8995 | 0.8334 | 0.7219 | 0.9147 | 0.9003 | 0.8288 | 0.6813 |
| | UV | 0.9296 | 0.9220 | 0.8860 | 0.8407 | 0.9296 | 0.9196 | 0.8727 | 0.7674 | 0.9293 | 0.9191 | 0.8652 | 0.7142 | 0.9292 | 0.9178 | 0.8572 | 0.6605 |
| | HUTV | 0.9253 | 0.9041 | 0.8779 | 0.8052 | 0.9220 | 0.8978 | 0.8183 | 0.7265 | 0.9233 | 0.8708 | 0.8336 | 0.6308 | 0.9207 | 0.8259 | 0.7959 | 0.5198 |
| | SUV | 0.9865 | 0.9714 | 0.9263 | 0.9098 | 0.9598 | 0.9223 | 0.9075 | 0.8129 | 0.9473 | 0.9168 | 0.8966 | 0.7156 | 0.9201 | 0.9072 | 0.8709 | 0.6324 |
| | WDSUV | 0.9955 | 0.9911 | 0.9660 | 0.9611 | 0.9720 | 0.9848 | 0.9392 | 0.9313 | 0.9900 | 0.9812 | 0.9585 | 0.9084 | 0.9881 | 0.9545 | 0.9458 | 0.8550 |
Table 4. Quantitative assessment of real data.

| Images | Index | BM3D | SNRCNN | GF-Based | WAFT | UV | HUTV | SUV | WDSUV |
|---|---|---|---|---|---|---|---|---|---|
| MODIS data band 27 R1 | MICV | 4.3145 | 4.7477 | 30.7270 | 36.5294 | 35.5363 | 40.6438 | 36.4977 | 37.1580 |
| | MMRD | 0.3794 | 0.4570 | 9.8935 | 4.5328 | 2.6238 | 9.6883 | 0.4078 | 0.3357 |
| MODIS data band 27 R2 | MICV | 4.8430 | 4.9733 | 32.7694 | 49.7900 | 44.1401 | 50.0661 | 43.5883 | 52.2439 |
| | MMRD | 0.4644 | 0.1537 | 0.8864 | 0.8394 | 0.9264 | 0.7710 | 0.5938 | 0.1482 |
| MODIS data band 30 R3 | MICV | 16.2057 | 18.7728 | 27.4309 | 27.0622 | 17.0865 | 29.0929 | 28.3908 | 29.1068 |
| | MMRD | 0.0699 | 0.0266 | 0.1017 | 0.1677 | 0.0860 | 0.0786 | 0.0631 | 0.0367 |
| MODIS data band 28 R4 | MICV | 32.7398 | 86.8249 | 68.4269 | 67.2392 | 79.0380 | 82.3611 | 77.5485 | 76.2834 |
| | MMRD | 0.2426 | 0.3595 | 0.5278 | 0.1922 | 0.4178 | 0.2082 | 0.0723 | 0.0631 |
| Hyperion data band 35 R5 | MICV | 67.8933 | 42.0641 | 33.7644 | 29.0568 | 33.2571 | 39.5301 | 33.7672 | 33.7672 |
| | MMRD | 0.0802 | 0.0218 | 0.0274 | 0.0377 | 0.0284 | 0.0296 | 0 | 0 |
| Hyperion data band 135 R6 | MICV | 26.3375 | 26.1986 | 23.0954 | 19.7025 | 23.3781 | 23.1603 | 23.7723 | 23.5074 |
| | MMRD | 0.0474 | 0.0281 | 0.0370 | 0.0330 | 0.0419 | 0.0414 | 0.0245 | 0.0224 |
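Since no clean reference exists for the real data in Table 4, the mean inverse coefficient of variation (MICV) and the mean relative deviation (MMRD) serve as no-reference indices: higher MICV indicates smoother homogeneous regions after destriping, and lower MMRD indicates better fidelity in regions unaffected by stripes. The sketch below follows the definitions commonly used in the destriping literature; the window selection and the stripe-free mask are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np

def micv(destriped, windows):
    """Mean inverse coefficient of variation over homogeneous windows.

    `windows` lists (row, col, size) tuples marking manually selected
    uniform regions; each window contributes mean/std, and the index is
    their average (higher means smoother homogeneous regions)."""
    icvs = []
    for r, c, s in windows:
        patch = destriped[r:r + s, c:c + s].astype(float)
        icvs.append(patch.mean() / (patch.std() + 1e-12))
    return float(np.mean(icvs))

def mmrd(original, destriped, stripe_free_mask):
    """Mean relative deviation over pixels judged free of stripes
    (lower means better preservation of uncorrupted data)."""
    o = original[stripe_free_mask].astype(float)
    d = destriped[stripe_free_mask].astype(float)
    return float(np.mean(np.abs(d - o) / (o + 1e-12)))
```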
