Article

Edge-Aware Unidirectional Total Variation Model for Stripe Non-Uniformity Correction

School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(4), 1164; https://doi.org/10.3390/s18041164
Submission received: 26 February 2018 / Revised: 28 March 2018 / Accepted: 28 March 2018 / Published: 11 April 2018
(This article belongs to the Special Issue Advances in Infrared Imaging: Sensing, Exploitation and Applications)

Abstract
The problem of stripe non-uniformity in array-based infrared imaging systems has been the focus of many research studies. Among the proposed correction techniques, total variation models have been proven to significantly reduce the effect of this type of noise on the captured image. However, they also cause the loss of some image details and textures due to an over-smoothing effect. In this paper, a correction scheme based on a unidirectional variation model is proposed to exploit the directional characteristic of the stripe noise, in which an edge-aware weighting is incorporated to endow the overall algorithm with the ability to retain image structure. Moreover, a statistical regularization is also introduced to further enhance correction performance around strong edges. The proposed approach is thoroughly scrutinized and compared to state-of-the-art de-striping techniques using real stripe non-uniform images. Results demonstrate a significant improvement in edge preservation together with better correction performance.

1. Introduction

Spatial non-uniformity continues to represent a major downside in Focal Plane Array (FPA)-based infrared imaging systems. Non-uniformity is observed when different detectors of the array respond differently to the same scene [1]. This phenomenon creates an undesirable, time-dependent fixed pattern noise (FPN) imposed on the raw image, which degrades its quality and undermines the performance of the imaging system. Hence, non-uniformity correction (NUC), which compensates for this undesirable noise, has to be conducted before any other process can be efficiently performed on the captured image.
Many correction methods have been reported and widely used; they can be categorized into two main approaches: calibration-based non-uniformity correction (CBNUC) [1,2] and scene-based non-uniformity correction (SBNUC) [3,4,5]. Traditional one-point and two-point methods, which rely on uniform reference sources to extract and eliminate the non-uniform pattern, fall into the class of calibration-based techniques. On the other hand, methods that exploit scene information are considered scene-based approaches. In this category, we can find statistical approaches [3] and registration-based methods [4,5], both considered classical techniques in the field of non-uniformity correction.
One type of fixed pattern noise that commonly appears in infrared images is the stripe non-uniformity. It is mainly caused by the fact that detectors in the same column (respectively, row) share one amplifier, leading to vertical (respectively, horizontal) grid-like lines being forced upon the image scene, which makes the content of the image unrecognizable and difficult to exploit. The traditional NUC techniques discussed above are not suitable for this kind of FPN and struggle to correct for it [6]. However, other methods that deal specifically with stripe non-uniformity have been proposed. These de-striping approaches can be assigned to different categories. The first category consists of methods that exploit data and noise characteristics observed in the array. For instance, Tendero et al. [7] assumed that detectors in adjacent columns observe the same range of data, implying that their histograms should be nearly equal; otherwise, they are mapped to match a fixed reference histogram. Cao et al. [6,8] studied the relationship between the stripe noise and the scene data and derived a polynomial model used to distinguish between edges and textures that belong to the scene and the actual stripe FPN. Chang et al. [9] proposed to treat the de-striping problem as a decomposition task using both a low-rank constraint, which exploits characteristics of the stripe noise, and the spectral information of remote sensing images. Liu et al. [10] also proposed a method that separates stripe noise from the image using three constraints based on properties of the noise, namely sparsity, smoothness and discontinuity. The second category covers approaches that engage the stripe non-uniformity problem in the frequency domain; benefiting from the periodic nature of stripes, they apply an adequate filter to remove them [11]. Recently, a new category was proposed by Kuang et al. [12] based on the exploitation of deep convolutional networks, where both image denoising and super-resolution are used to eliminate the stripe noise in the input and produce a clear image at the output with well-preserved edges. Finally, the category most relevant to the present work is the optimization-based approach [13,14,15], where the correction process aims to estimate the corrected image by minimizing a cost function that mutually ensures the correction of stripe FPN and the preservation of image details. Under these methods, the cost function is usually formed by two terms: the first one, called the “regularization term”, is responsible for removing the stripe noise, while the second one, called the “fidelity term”, helps to preserve detail information during the correction.
All the approaches mentioned above trade off between two major goals: reducing the effect of the FPN on the image and retaining image texture and details. The effort to find a balance between these two objectives represents the main challenge of these methods. In the case of total variation techniques, the focus has turned to the regularization term, where a mechanism to distinguish between stripe noise and image edges is crucial for achieving the aforementioned goals. Zhao et al. [13] proposed a gradient-constrained approach where the gradient along the stripe direction is preserved while the energy of the one along the opposite direction is minimized. Chang et al. [14] proposed a two-part regularization term: the first part puts constraints on gradients both across the stripe lines, to remove them, and along the stripe lines, to preserve detail information, while the second part uses framelet regularization to preserve structural details that the first part cannot properly retain. Huang et al. [15] proposed a unidirectional variational model optimization method that uses the iteratively reweighted least squares technique; their method provides an automatic formula to appropriately update the regularization parameter in order to achieve efficient correction.
Building on previous work on variational model correction, the proposed method is an improved unidirectional version that eliminates stripe noise by penalizing the gradient across the stripe direction under the guidance of an edge-aware weighting matrix. The weighting values are attributed according to the nature of the pixel in the image structure. A regularization operation is also applied to the estimated stripe FPN to preserve strong edges that may still be smoothed after the correction. In this regularization, the separation between edges and noise is based on assumptions about the statistical model of the noise, hence the name statistical regularization.
This paper is organized as follows. Section 2 is dedicated to a detailed description of the proposed algorithm. Then, experimental results are presented along with discussions in Section 3 followed by conclusions in Section 4 to sum up the presented work.

2. Proposed Method

2.1. Total Variation Optimization

Stripe non-uniformity is usually modeled as additive noise, which allows us to represent the non-uniformity of an infrared image as follows:
f(x, y) = u(x, y) + n(x, y), \quad (1)
where $f(x,y)$ is the observed noisy image, $u(x,y)$ is the clean image and $n(x,y)$ is the stripe FPN. The total variation approach for non-uniformity correction solves an inverse problem, where the estimated true scene is deduced from the degraded image. This problem is ill-posed and requires additional information that helps put regularizing constraints on the solution to obtain satisfying results. The stripe structure is a well-known property of the stripe noise, widely used as prior information: degradation is more severe in the horizontal direction (assuming that stripes are vertical) than in the vertical direction. Such behavior can be observed using the horizontal and vertical gradients of a corrupted image, as depicted in Figure 1. We can clearly see that the horizontal gradient suffers from considerable variations while the vertical gradient is hardly affected by the stripe noise.
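This directional asymmetry can be checked numerically. The sketch below (an illustration, not part of the paper's own code) simulates a column-wise fixed bias on a synthetic scene and compares the mean magnitudes of the two gradients:

```python
import numpy as np

# Synthetic scene whose rows are identical, plus a fixed random bias per
# column to emulate vertical stripe non-uniformity.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
stripes = rng.normal(0.0, 0.05, size=64)
noisy = scene + stripes[np.newaxis, :]

# Forward-difference gradients (last sample replicated at the border).
gx = np.diff(noisy, axis=1, append=noisy[:, -1:])  # horizontal (across stripes)
gy = np.diff(noisy, axis=0, append=noisy[-1:, :])  # vertical (along stripes)

# The column-constant stripe bias cancels in the vertical gradient but
# corrupts the horizontal one, as observed in Figure 1.
print(np.abs(gx).mean(), np.abs(gy).mean())
```

The mean vertical gradient is exactly zero here only because the toy scene has identical rows; on real scenes both terms are non-zero, but the asymmetry persists.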
Hence, for a correction scheme based on a variational model, it is intuitive to set the fidelity term to sustain the vertical gradient and the regularization term to penalize the energy of the horizontal gradient. Such a scheme is best described by the following energy function of a unidirectional variational model:
E(u) = \frac{1}{2}\left\|\nabla_y(u - f)\right\|_1 + \lambda\left\|\nabla_x u\right\|_1, \quad (2)
where $\|\cdot\|_1$ represents the $\ell_1$-norm, $\nabla_x$ and $\nabla_y$ refer to the horizontal and vertical gradient operators, respectively, and $\lambda$ is a regularization parameter that controls the smoothness of the corrected image. The motivation behind choosing the $\ell_1$-norm comes from its better performance in edge preservation [16].
Minimizing the energy function in Equation (2) as it is cannot produce a satisfactory solution to the de-striping problem. Setting the same regularization parameter for the whole image is unreasonable given the different features present in it. In other words, the weights assigned to stripe noise should not be the same as those assigned to image edges; otherwise, the solution will be over-smoothed and detail information will be lost. One way to overcome this issue is to assign a weighting matrix to the regularization term, where high-value weights are attributed to regions where the stripe noise is important, in order to eliminate it, while small-value weights are chosen for regions containing image textures and details, to avoid smoothing them. Hence, the new cost function has the following form:
E(u) = \frac{1}{2}\left\|\nabla_y(u - f)\right\|_1 + \lambda\left\|D_f\,\nabla_x u\right\|_1, \quad (3)
where $D_f$ is the weighting matrix containing per-pixel weights that control the amount of influence the horizontal gradient constraint has on the final image. Its computation is explained in the next section. The optimization of Equation (3) presents some difficulties, mainly the non-differentiability of the $\ell_1$-norm. Many solutions have been proposed to deal with this problem, for instance the Split Bregman iteration [17] and the iteratively reweighted least squares (IRLS) method [18]. In our work, we opted for the latter for its computational efficiency and flexibility. Under the IRLS algorithm, the variational functional in Equation (3) is approximated as follows:
\tilde{E}(u) = \frac{1}{2}\left\|W_1^{1/2}\,\nabla_y(u - f)\right\|_2^2 + \frac{\lambda}{2}\left\|D_f\,W_2^{1/2}\,\nabla_x u\right\|_2^2, \quad (4)
where $W_1 = \mathrm{diag}\big(2\phi(\nabla_y(u - f))\big)$ and $W_2 = \mathrm{diag}\big(2\Psi(\nabla_x u)\big)$. The notation $\mathrm{diag}(v)$ refers to a diagonal matrix built from the vector $v$; $\phi(v)$ and $\Psi(v)$ are given by:
\phi(v) = \begin{cases} |v|^{-1}, & |v| > \epsilon_1, \\ \epsilon_1^{-1}, & |v| \le \epsilon_1, \end{cases} \qquad \Psi(v) = \begin{cases} |v|^{-1}, & |v| > \epsilon_2, \\ \epsilon_2^{-1}, & |v| \le \epsilon_2, \end{cases} \quad (5)
where $\epsilon_1$ and $\epsilon_2$ are small positive numbers chosen to avoid division by zero-valued components. The gradient of the new cost function $\tilde{E}(u)$ is evaluated as follows:
\nabla\tilde{E}(u) = (\nabla_y)^T W_1\,\nabla_y(u - f) + \lambda\,D_f\,(\nabla_x)^T W_2\,\nabla_x u. \quad (6)
Finally, a gradient descent scheme is used to update the solution:
u^{n+1} = u^n - \Delta t\,\nabla\tilde{E}(u^n), \quad (7)
where $\Delta t$ is the convergence step. The iterative solving process is halted under the following condition: $|u^{n+1} - u^n| \le tol$, where $tol$ is the tolerance parameter.

2.2. Edge-Aware Weighting

In light of the above discussion, the weight matrix of the regularization term must control the penalization of stripe noise in order to preserve image textures. In other words, the weight matrix has to efficiently extract detail information and separate it from the noise structure present in the image. To carry out this task, a well-known edge-aware weighting is adopted for a total variation approach to correct stripe FPN.
Inspired by previous work on gradient domain optimization and edge-aware constraints [19,20], an explicit weighting $\Gamma_f(p)$ that efficiently describes image edges is defined. It is computed using local variances over 3 × 3 and r × r windows at all pixels of the image f:
\Gamma_f(p) = \frac{1}{N}\sum_{i=1}^{N} \frac{\sigma_{f,3}(p)\,\sigma_{f,r}(p) + \epsilon}{\sigma_{f,3}(i)\,\sigma_{f,r}(i) + \epsilon}, \quad (8)
where $\sigma_{f,3}(p)$ and $\sigma_{f,r}(p)$ are the standard deviations of the image f in a 3 × 3 window and an r × r window (r being the window size), respectively, both centered at the pixel p; N is the number of pixels in the image f; and $\epsilon$ is a small positive constant, usually selected as $(0.001 \times L)^2$, where L is the dynamic range of the image. From the definition of $\Gamma_f(p)$, we can deduce that its role is to measure the importance of a given pixel p with respect to the whole image f. Furthermore, it uses a small scale in addition to a larger one, which helps to efficiently separate edges from fine details and enhances the performance of the weighting factor. Figure 2b shows the ability of the weighting $\Gamma_f(p)$ to depict image edges and details with accuracy. However, when stripe noise is present in the image, the weighting wrongly attributes some noise structure (vertical stripes) to the image edges, as seen in Figure 2c. To overcome this problem, we propose to first separate the smooth part $f_s$ of the image from the high-frequency part $f_d$ using horizontal filtering and then use both parts to set the weighting $\Gamma_f(p)$. The new weighting formula becomes:
\Gamma_f(p) = \frac{1}{N}\sum_{i=1}^{N} \frac{\sigma_{f_s,3}(p)\,\sigma_{f_d,r}(p) + \epsilon}{\sigma_{f_s,3}(i)\,\sigma_{f_d,r}(i) + \epsilon}. \quad (9)
The new weighting plays the role of detecting similar structures in both image parts. We will further discuss this in the next section.
Finally, the weighting matrix $D_f$ used in the variational model can be specified as follows:
D_f(p) = \begin{cases} 1, & \text{if } \Gamma_f(p) < S, \\ \delta, & \text{if } \Gamma_f(p) \ge S, \end{cases} \quad (10)
where $D_f(p)$ is the edge-aware matrix value at pixel p, S is a threshold separating edges from smooth regions, and $\delta$ is a small positive value attributed to weights around edges. The values of both parameters S and $\delta$ are chosen experimentally to ensure better performance. As can easily be deduced from Equation (10), the edge-aware matrix attributes smaller weights to pixels that belong to edges and image structure in order to preserve them.
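Equations (8)–(10) can be sketched as follows. This is an illustration only: the reflect border padding and the reduced window size used in the example are assumptions, not prescribed by the paper:

```python
import numpy as np

def local_std(img, k):
    """Standard deviation of img over a k x k window centered at each
    pixel, with reflect padding at the borders."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.std(axis=(-2, -1))

def edge_aware_weights(fs, fd, r=33, S=0.02, delta=0.2):
    """Eq. (9): Gamma_f from the smooth part fs (small 3 x 3 scale) and
    the detail part fd (large r x r scale), then thresholded into the
    per-pixel weight matrix D_f of Eq. (10)."""
    L = fs.max() - fs.min()                # dynamic range of the image
    eps = (0.001 * L) ** 2
    prod = local_std(fs, 3) * local_std(fd, r) + eps
    gamma = prod * (1.0 / prod).mean()     # normalization sum of Eq. (9)
    Df = np.where(gamma < S, 1.0, delta)   # Eq. (10)
    return Df, gamma
```

For a smooth part containing a step edge and a noise-like detail part, the weighting scores edge pixels much higher than flat regions, so they receive the small weight $\delta$ and are protected from smoothing.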

2.3. Horizontal Filtering

The purpose here is to separate the image into a smooth part completely free from stripe non-uniformity and a detail part that contains both the removed noise and some image textures blurred by the filtering process; both parts are then used to construct the edge-aware weight matrix. Many state-of-the-art de-noising filters can be used to extract the high-frequency part of a noisy image without over-smoothing the details [21,22]. However, the guided filter offers the highest efficiency in edge preserving and structure transferring, which fits the main objective of retaining as many details as possible in the filtered image. Under the 1D guided filter, the noisy image f is filtered under the guidance of an image g. Following a local linear model, the smoothed image $f_s$ can be expressed as a linear transform of the guidance image g in a 1D row window $w_k$:
f_s(i) = a_k\,g(i) + b_k, \quad \forall i \in w_k, \quad (11)
where $a_k$ and $b_k$ are constant coefficients in the window $w_k$. These coefficients are computed by minimizing, for each window $w_k$, the mean square error between the filtered image and the noisy input f under some regularization, as follows:
E(a_k, b_k) = \sum_{i \in w_k} \left( \left( a_k\,g(i) + b_k - f(i) \right)^2 + \xi\,a_k^2 \right), \quad (12)
where $\xi$ is a regularization term that penalizes large values of $a_k$. In the present work, the input image f is used as the guidance image and the regularization $\xi$ is set to a high value in order to completely remove the stripe structure; for $w_k$, a 1 × 9 row window is chosen. The resulting image $f_s$ is then subtracted from the noisy image f to obtain the detail layer $f_d$.
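A sketch of the row-wise guided filter used in this step, with the input as its own guide (g = f). The box-filter implementation and the value of $\xi$ in the default are illustrative assumptions:

```python
import numpy as np

def box1d(a, radius):
    """Moving average along rows (1D window of size 2*radius + 1),
    with reflect padding."""
    k = 2 * radius + 1
    p = np.pad(a, ((0, 0), (radius, radius)), mode="reflect")
    kern = np.ones(k) / k
    return np.apply_along_axis(
        lambda row: np.convolve(row, kern, mode="valid"), 1, p)

def guided_filter_1d(f, radius=4, xi=0.1):
    """1D (row-wise) guided filter, Eqs. (11)-(12), with g = f.
    radius=4 gives the 1 x 9 window; a large xi flattens stripe structure."""
    mean_f = box1d(f, radius)
    var_f = box1d(f * f, radius) - mean_f ** 2
    a = var_f / (var_f + xi)            # closed-form minimizer of Eq. (12)
    b = (1.0 - a) * mean_f
    fs = box1d(a, radius) * f + box1d(b, radius)  # smooth part
    fd = f - fs                                   # detail layer
    return fs, fd
```

Applied to a striped image, the smooth part `fs` has far less horizontal gradient energy than the input, while the removed stripes end up in the detail layer `fd`.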

2.4. Statistical Regularization

Despite the contribution of the edge-aware weighting to the texture-preserving capacity of the variational model, some image details are still present in the estimated FPN. Hence the motivation for an additional step that recovers residual details smoothed by the correction process. To address this problem, two statistical assumptions are first made on the stripe FPN:
  • The non-uniformity along the same column is modeled as an unknown random variable that follows a Gaussian distribution with mean $\mu_y$ and standard deviation $\sigma_y$;
  • The non-uniformity noise in different columns of the array is considered independent from one column to another.
Following these two assumptions, the estimated stripe noise of each column y is inspected, and any value that deviates from the distribution is eliminated as follows:
\tilde{n}(x) = \begin{cases} 0, & |n_v(x) - \mu_y| \ge 3\sigma_y, \\ n_v(x), & |n_v(x) - \mu_y| < 3\sigma_y, \end{cases} \quad (13)
where $n_v$ is the estimated noise obtained by subtracting the corrected image $\hat{u}_v$, estimated by the variational model, from the noisy image f. Finally, the regularized estimated FPN $\tilde{n}$ is subtracted from the noisy image f to obtain the final corrected image. A complete scheme of the whole correction process is presented in Figure 3.
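Equation (13) amounts to a per-column 3σ clipping of the estimated noise, which can be sketched as:

```python
import numpy as np

def statistical_regularization(n_v):
    """Eq. (13): zero out, in each column, any estimated-noise value that
    deviates from the column's Gaussian model by 3 standard deviations or
    more; such outliers are treated as image edges, not stripe noise."""
    mu = n_v.mean(axis=0, keepdims=True)    # per-column mean
    sigma = n_v.std(axis=0, keepdims=True)  # per-column standard deviation
    outlier = np.abs(n_v - mu) >= 3.0 * sigma
    return np.where(outlier, 0.0, n_v)
```

The cleaned estimate is then subtracted from the noisy image; a strong edge that leaked into the noise estimate is zeroed while the Gaussian-like stripe values are kept.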

3. Results and Discussion

To test the performance of the proposed edge-aware variational model, several experiments were conducted. First, the edge detection efficiency of the proposed weighting is validated, then the contribution of the statistical regularization to edge preserving is verified and finally the performance of the overall proposed algorithm is evaluated and compared to existing state-of-the-art correction methods.

3.1. Edge Preserving Performance

As mentioned in the previous section, using the noisy image to set the weighting causes stripes to appear as part of the image structure that we seek to preserve; hence the introduction of the horizontal filtering step to provide a smoothed, noise-free version of the noisy image. However, a weighting based on this version will only depict edges that were not over-smoothed by the horizontal filtering. In the same manner, using the high-frequency part will only show over-smoothed edges along with some stripe FPN. Figure 4 shows the two cases where only the smoothed part or only the detail part is used in the computation of the weighting $\Gamma_f$. For the sake of comparison, the weighting based on the clear image was computed as a reference. In the case of the smoothed image, some edges belonging to the background buildings are missing, while vertical stripes are visible in the detail image case.
Therefore, in the proposed weighting, both the smoothed part and the detail part are used to efficiently extract image structure from the stripe noise. Furthermore, experimental results show that using the small scale on the smoothed image and the large scale on the detail image, along with setting the small window to 3 × 3 and the large one to 33 × 33 (as suggested by Kou et al. [20]), provides better results. For instance, Figure 5 shows different possible combinations for setting the weighting factor.
In the first row, results are shown for the case where the small scale (3 × 3 window) is used on the detail part and the large scale on the smoothed part with different sizes of the window r × r, while the second row is dedicated to the inverse case. As clearly seen, the small scale is better used on the smoothed part to avoid the appearance of stripe FPN. As for the size r of the large scale, we find that the higher its value, the more details can be detected. However, from the value 33 onward, enough image details are already detected to allow the edge-aware weighting to sufficiently represent image structure.

3.2. Validation of the Statistical Regularization

The role of the statistical regularization is to preserve certain image details that are still present in the estimated non-uniformity image after the edge-aware correction. These details usually belong to regions of the image that are much brighter or darker than the rest of the image. Figure 6 shows the importance of this measure in further enhancing the ability of the proposed algorithm to retain image structure after correction. Some bright edges can clearly be seen in the estimated noise (Figure 6a). These edges significantly differ from the noise, which makes it easy for the statistical regularization to extract and remove them from the estimated noise (Figure 6b). This can be done without affecting the correction, since these strong edges are usually not affected by the stripe FPN. The result is sharper and more accurate edges, as shown in Figure 6c, where the mean cross-track profile is computed over a 50 × 40 window, in a region where strong edges are present, both before (blue) and after (red) regularization. In the latter case, edges are sharper and better defined.

3.3. Real Experiments

In order to validate the efficiency of the proposed algorithm, a comparison was made with three state-of-the-art methods, namely the midway infrared equalization (MIRE) [7], the 1D guided filtering (GIF)-based method [6] and the iteratively reweighted unidirectional total variation model (IRUTV) [15]. Experiments were conducted on real stripe non-uniformity images obtained from a public dataset. Under the guidance of the previous section, the algorithm parameters are set as follows: $\lambda = 0.1$, $\epsilon_1 = 0.0001$, $\epsilon_2 = 0.0001$, $r = 33$, $S = 0.02$, $\delta = 0.2$, $\Delta t = 0.1$, $\xi = 0.1$ and $tol = 0.0001$. For the other methods, parameters are set according to what is recommended in their respective works.

3.3.1. Qualitative Study

Figure 7 shows correction results for four images with different scene characteristics. The MIRE and guided-filter-based approaches both offer good edge-preserving abilities, but some uncorrected residual noise can be seen around edges (see the highlighted areas). The IRUTV, however, exhibits better noise elimination performance, but this comes with notable edge smoothing and detail blurring. In the case of the proposed method, results show that it consistently outperforms the other methods, providing smooth and noise-free images with well-preserved edges and textures. The estimated stripe noise for the first two images is shown in Figure 8; some edges and features with varying levels were wrongly considered as FPN by the three state-of-the-art algorithms, while in the case of the proposed method, thanks to the statistical regularization, the estimated noise image contains mainly the stripe FPN.
To further support these findings, the mean cross-track profile is computed for a noisy image and its corrected versions using the four algorithms. Results are shown in Figure 9. The effect of stripe noise can be seen as rapid variation of the mean value from column to column, as seen in Figure 9b. After the correction, the MIRE algorithm (Figure 9c) reduces these changes, but some small fluctuations remain, corresponding to residual non-uniformity that was not corrected. The same remark applies to the GIF-based approach (Figure 9e). On the other hand, the IRUTV algorithm completely removes these variations; however, some variations that belong to image textures are also smoothed, and the mean cross-track profile appears over-smoothed (Figure 9d). Meanwhile, the proposed algorithm efficiently smooths fluctuations belonging to the stripe noise while preserving the small changes corresponding to image details (Figure 9f).
Finally, the proposed algorithm was tested using image sequences from the work of Portmann et al. [23]. The results are depicted in video 1 and video 2, where the noisy version is presented (top-left corner) along with the correction results of the IRUTV algorithm (top-right corner), the GIF-based method (bottom-left corner) and the proposed approach (bottom-right corner). An over-smoothing effect can be noticed in the case of the IRUTV algorithm, and some residual noise appears in the case of the GIF-based method. Clearly, the proposed algorithm offers better performance, producing a smooth, noise-free corrected image with sharp and well-preserved edges.

3.3.2. Quantitative Study

In this part, an evaluation of the method is conducted using a quantitative measure, the peak signal-to-noise ratio (PSNR), defined as follows:
PSNR = 20 \times \log_{10}\frac{2^b - 1}{RMSE}, \quad (14)
where b is the number of bits representing a pixel value in the image (8 bits in our case) and RMSE is the root mean square error between the estimated image $\hat{u}$ and the clean one u, computed as:
RMSE = \sqrt{\frac{1}{N \cdot M}\sum_{x=1}^{N}\sum_{y=1}^{M}\left(\hat{u}(x,y) - u(x,y)\right)^2}, \quad (15)
where N and M are the image dimensions. We used clear images and simulated the stripe non-uniformity with a random fixed bias for each column. To do so, we generated a random normal sequence with values between 0 and 1, a standard deviation of 0.05 and a length matching the number of columns; each value from this sequence was then added to one column as a fixed bias. The resulting images were then corrected using five different methods (BM3D [24] (sigma = 13), MIRE, IRUTV, GIF-based and the proposed one). The corresponding PSNR results for each method are presented in Table 1. Results show a clear improvement when using the proposed algorithm in all cases, which further proves the efficiency of the adopted approach.
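The metric of Equations (14)–(15) and the stripe simulation can be sketched as follows; the flat image content, noise seed and bias scale used here are illustrative assumptions, not the paper's test data:

```python
import numpy as np

def psnr(u_hat, u, bits=8):
    """Eqs. (14)-(15): peak signal-to-noise ratio between the estimate
    u_hat and the clean image u, for a bits-per-pixel representation."""
    rmse = np.sqrt(np.mean((u_hat.astype(float) - u.astype(float)) ** 2))
    return 20.0 * np.log10((2 ** bits - 1) / rmse)

# Simulate stripe non-uniformity on a clean 8-bit image: one fixed random
# bias per column (bias scale chosen here for illustration only).
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
bias = rng.normal(0.0, 0.05 * 255, size=64)
noisy = clean + bias[np.newaxis, :]
p = psnr(noisy, clean)
```

The RMSE of the striped image equals the sample spread of the column biases, so `p` directly quantifies how strongly the simulated non-uniformity degrades the image before correction.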

4. Conclusions

In this work, an efficient method to correct stripe non-uniformity while preserving edges and image textures is presented. The improved performance of the method comes from the introduction of a weighting factor that takes image structure into account during correction. The edge-aware weighting can efficiently describe edges and details of the image despite the presence of noise. Additionally, a regularization process is adopted to enhance the preservation of strong edges. The comparison of the proposed algorithm with state-of-the-art de-striping techniques demonstrated its efficiency in terms of both noise reduction and edge preservation.

Acknowledgments

This work was supported by National Natural Science Foundation of China (NSFC) under Grant 91438203 and Grant 31727901.

Author Contributions

Ayoub Boutemedjet and Baojun Zhao conceived and designed the experiments; Ayoub Boutemedjet performed the experiments; Chenwei Deng analyzed the data; and Ayoub Boutemedjet wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Perry, D.L.; Dereniak, E.L. Linear theory of nonuniformity correction in infrared staring sensors. Opt. Eng. 1993, 32, 1854–1860. [Google Scholar] [CrossRef]
  2. Schulz, M.; Caldwell, L. Nonuniformity correction and correctability of infrared focal plane arrays. Infrared Phys. Technol. 1995, 36, 763–777. [Google Scholar] [CrossRef]
  3. Harris, J.G.; Chiang, Y.M. Nonuniformity correction of infrared image sequences using the constant-statistics constraint. IEEE Trans. Image Process. 1999, 8, 1148–1151. [Google Scholar] [CrossRef] [PubMed]
  4. Hardie, R.C.; Hayat, M.M.; Armstrong, E.; Yasuda, B. Scene-based nonuniformity correction with video sequences and registration. Appl. Opt. 2000, 39, 1241–1250. [Google Scholar] [CrossRef] [PubMed]
  5. Zuo, C.; Chen, Q.; Gu, G.; Sui, X. Scene-based nonuniformity correction algorithm based on interframe registration. JOSA A 2011, 28, 1164–1176. [Google Scholar] [CrossRef] [PubMed]
  6. Cao, Y.; Yang, M.Y.; Tisse, C.L. Effective Strip Noise Removal for Low-Textured Infrared Images Based on 1-D Guided Filtering. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 2176–2188. [Google Scholar] [CrossRef]
  7. Tendero, Y.; Landeau, S.; Gilles, J. Non-uniformity correction of infrared images by midway equalization. Image Process. Line 2012, 2, 134–146. [Google Scholar] [CrossRef]
  8. Cao, Y.; Li, Y. Strip non-uniformity correction in uncooled long-wave infrared focal plane array based on noise source characterization. Opt. Commun. 2015, 339, 236–242. [Google Scholar] [CrossRef]
  9. Chang, Y.; Yan, L.; Wu, T.; Zhong, S. Remote sensing image stripe noise removal: From image decomposition perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031. [Google Scholar] [CrossRef]
  10. Liu, X.; Lu, X.; Shen, H.; Yuan, Q.; Jiao, Y.; Zhang, L. Stripe noise separation and removal in remote sensing images by consideration of the global sparsity and local variational properties. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3049–3060. [Google Scholar] [CrossRef]
  11. Münch, B.; Trtik, P.; Marone, F.; Stampanoni, M. Stripe and ring artifact removal with combined wavelet—Fourier filtering. Opt. Express 2009, 17, 8567–8591. [Google Scholar] [CrossRef] [PubMed]
  12. Kuang, X.; Sui, X.; Chen, Q.; Gu, G. Single infrared image stripe noise removal using deep convolutional networks. IEEE Photonics J. 2017, 9, 1–13. [Google Scholar] [CrossRef]
  13. Zhao, J.; Zhou, Q.; Chen, Y.; Liu, T.; Feng, H.; Xu, Z.; Li, Q. Single image stripe nonuniformity correction with gradient-constrained optimization model for infrared focal plane arrays. Opt. Commun. 2013, 296, 47–52. [Google Scholar] [CrossRef]
  14. Chang, Y.; Fang, H.; Yan, L.; Liu, H. Robust destriping method with unidirectional total variation and framelet regularization. Opt. Express 2013, 21, 23307–23323. [Google Scholar] [CrossRef] [PubMed]
  15. Huang, Y.; He, C.; Fang, H.; Wang, X. Iteratively reweighted unidirectional variational model for stripe non-uniformity correction. Infrared Phys. Technol. 2016, 75, 107–116. [Google Scholar] [CrossRef]
  16. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  17. Goldstein, T.; Osher, S. The Split Bregman Method for L1-Regularized Problems. SIAM J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  18. Daubechies, I.; DeVore, R.; Fornasier, M.; Güntürk, C.S. Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 2010, 63, 1–38. [Google Scholar] [CrossRef]
  19. Hua, M.; Bie, X.; Zhang, M.; Wang, W. Edge-aware gradient domain optimization framework for image filtering by local propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2838–2845. [Google Scholar]
  20. Kou, F.; Chen, W.; Wen, C.; Li, Z. Gradient domain guided image filtering. IEEE Trans. Image Process. 2015, 24, 4528–4539. [Google Scholar] [CrossRef] [PubMed]
  21. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  22. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 7 January 1998; pp. 839–846. [Google Scholar]
  23. Portmann, J.; Lynen, S.; Chli, M.; Siegwart, R. People Detection and Tracking from Aerial Thermal Views. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–5 June 2014; pp. 1794–1800. [Google Scholar]
  24. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Directional characteristic of the stripe FPN: (a) noisy image; (b) vertical gradient; (c) horizontal gradient.
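The directional property illustrated in Figure 1 — column-wise stripe FPN is nearly invisible in the vertical (along-track) gradient but dominates the horizontal (cross-track) gradient — can be checked numerically. A minimal sketch; the synthetic image and the additive stripe model are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: smooth clean image plus column-wise stripe FPN
# (the stripe offset is constant down each column).
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
stripes = np.tile(rng.normal(0.0, 0.1, 64), (64, 1))
noisy = clean + stripes

# Finite differences: vertical = along columns, horizontal = across columns.
grad_v = np.diff(noisy, axis=0)
grad_h = np.diff(noisy, axis=1)

# The stripe term is constant within a column, so it cancels in grad_v
# and survives as column-to-column jumps in grad_h.
print(np.abs(grad_v).mean(), np.abs(grad_h).mean())
```

This asymmetry is exactly what unidirectional total variation models exploit: only the gradient direction contaminated by stripes is penalized.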
Figure 2. Comparison of the edge-aware weighting for a clear and a noisy image: (a) input image; (b) clear-image case; (c) noisy-image case.
Figure 3. Scheme of the proposed method.
Figure 4. Comparison of the edge-aware weighting computed from the smoothed part and from the detail part: (a) smoothed part used; (b) clear image used; (c) detail part used.
Figure 5. Different cases of setting the edge-aware weighting.
Figure 6. Efficiency of the statistical regularization: (a) FPN before regularization; (b) FPN after regularization; (c) mean cross-track profiles before (blue) and after (red) regularization.
Figure 7. Comparative results of the proposed method and three de-striping techniques [6,7,15].
Figure 8. Estimated stripe FPN using the proposed method and three de-striping methods [6,7,15].
Figure 9. Mean cross-track profiles of a corrupted image and its corrected versions: (a) noisy image; (b) without correction; (c) MIRE; (d) IRUTV; (e) GIF-based; (f) proposed.
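The mean cross-track profiles shown in Figure 9 are the column-wise means of the image: residual stripes appear as jagged column-to-column jumps in this 1-D curve, while a well-corrected image yields a smooth profile. A short sketch of how such a profile can be computed; the striped test image is a hypothetical stand-in:

```python
import numpy as np

def mean_cross_track_profile(image):
    """Average each column; stripes show up as high-frequency jumps in the result."""
    return np.asarray(image, dtype=float).mean(axis=0)

# Stand-in striped image: smooth ramp plus per-column offsets.
rng = np.random.default_rng(1)
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1)) + np.tile(rng.normal(0.0, 0.2, 32), (32, 1))

profile = mean_cross_track_profile(img)
# Mean absolute column-to-column jump: a simple roughness score for residual striping.
roughness = np.abs(np.diff(profile)).mean()
print(profile.shape, roughness)
```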
Table 1. PSNR values (dB) under each correction method.
Images        1      2      3      4      5      6      7      8      9
Noisy image   26.07  24.23  25.85  26.03  26.28  26.09  25.91  25.18  26.24
BM3D          29.86  27.45  27.47  29.87  30.61  31.11  29.23  29.12  30.31
MIRE          30.45  29.42  29.43  31.23  32.58  31.11  30.80  30.94  31.12
IRUTV         31.85  26.85  31.08  31.95  30.81  28.61  29.12  26.68  28.51
GIF-based     34.16  30.50  33.38  33.75  33.14  31.56  30.66  30.66  32.00
Proposed      35.82  33.74  34.85  35.75  36.25  33.43  32.41  32.41  33.17
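The PSNR values in Table 1 compare each corrected image against a clean reference via the standard definition PSNR = 10 · log10(MAX² / MSE). A minimal sketch, assuming 8-bit images (peak = 255); the toy reference image is illustrative only:

```python
import numpy as np

def psnr(reference, corrected, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a corrected image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(corrected, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 10 gray levels gives MSE = 100,
# hence PSNR = 10 * log10(255**2 / 100) ≈ 28.13 dB.
ref = np.full((8, 8), 128.0)
out = ref + 10.0
print(round(psnr(ref, out), 2))
```

Higher values indicate a correction closer to the reference, which is how the methods in Table 1 are ranked.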

Share and Cite

MDPI and ACS Style

Boutemedjet, A.; Deng, C.; Zhao, B. Edge-Aware Unidirectional Total Variation Model for Stripe Non-Uniformity Correction. Sensors 2018, 18, 1164. https://doi.org/10.3390/s18041164