
Deblurring Turbulent Images via Maximizing L1 Regularization

Lizhen Duan, Shuhan Sun, Jianlin Zhang and Zhiyong Xu
1 Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
3 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(8), 1414; https://doi.org/10.3390/sym13081414
Submission received: 29 June 2021 / Revised: 23 July 2021 / Accepted: 28 July 2021 / Published: 3 August 2021
(This article belongs to the Section Computer)

Abstract

Atmospheric turbulence significantly degrades image quality, so a blind image deblurring algorithm is needed, and a favorable image prior is the key to solving this problem. However, general sparse priors support blurry images rather than sharp ones, so the details of the restored images are lost. The recently developed priors are non-convex, resulting in complex and heuristic optimization. To handle these problems, we first propose a convex image prior, namely maximizing L1 regularization (ML1). Benefiting from the symmetry between ML1 and L1 regularization, ML1 supports clear images and better preserves image edges. A novel soft suppression strategy is then designed for the deblurring algorithm to inhibit artifacts. A coarse-to-fine scheme and a non-blind algorithm are also constructed. For quantitative comparison, a turbulent blur dataset is built. Experiments on this dataset and on real images demonstrate that the proposed method is superior to other state-of-the-art methods in blindly recovering turbulent images.

1. Introduction

As an essential tool for improving image quality, image deblurring has received considerable attention. Blind image deblurring involves estimating a latent sharp image o and a blur kernel h, with a blurry input image g. Mathematically, the blurring process is modeled as:
$$ g = o \ast h + n \tag{1} $$
where ∗ represents the convolution operator and n denotes the additive noise. Since infinitely many pairs (o, h) explain the same observation g, this problem is strongly ill-posed.
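To make the degradation model concrete, the following sketch synthesizes a turbulent observation according to Equation (1). It is an illustration only, not the authors' code: the NumPy/SciPy implementation, the symmetric boundary handling, and the 1% noise level (matching the noise used later in the paper's simulations) are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import convolve2d

def blur_observation(o, h, noise_level=0.01, rng=None):
    """Simulate g = o * h + n from Equation (1).

    o: sharp image (2D array, intensities in [0, 1])
    h: blur kernel (2D array, non-negative, summing to 1)
    noise_level: standard deviation of the additive Gaussian noise n
    """
    rng = np.random.default_rng() if rng is None else rng
    g = convolve2d(o, h, mode="same", boundary="symm")  # o * h
    n = noise_level * rng.standard_normal(g.shape)      # additive noise n
    return g + n
```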
To regularize Problem (1), various image priors [1,2,3,4,5,6,7,8,9,10] have been proposed. The simplest is a Gaussian prior, based on natural images obeying a Gaussian distribution [1,2]. To encourage gradient sparsity, Laplacian and hyper-Laplacian functions [3,4,5,6] have been used as image priors. Levin et al. [11] then reported that these image priors favor no-blur explanations, which causes the failure of existing deblurring algorithms. To solve this problem, new image priors that favor clean images over blurred ones have been proposed, such as the normalized sparsity prior [7], the L0-regularized prior [8], and channel-based priors [9,10]. However, these new priors are non-convex, which often entails expensive computations and heuristic strategies.
This paper proposes a novel image deblurring algorithm that achieves competitive results on atmospheric turbulence images. Inspired by previous studies that exploited the distinct properties of specific image classes, such as text images [12], face images [13], and low-light images [14], our work starts from the effects of turbulence degradation. We first analyze how various types of image priors change with turbulent blur to illustrate the strengths of the proposed prior, and we use 2D signals of different sizes and types to verify its validity. We then present a new deblurring algorithm, including a soft suppression strategy that effectively inhibits artifacts. We iteratively estimate the latent image and the blur kernel using the alternating minimization (AM) method. We further establish a turbulence-blurred image dataset to evaluate the effectiveness of the proposed method. Experiments on this dataset and on authentic images demonstrate that our method achieves state-of-the-art performance compared to recent blind deblurring algorithms.
The contributions of this paper are as follows: (1) We present the maximizing L1 regularization (ML1) prior for latent sharp images and theoretically illustrate that it outperforms other priors for deblurring turbulence-blurred images. (2) Based on the ML1 prior, we propose a deblurring algorithm with a soft suppression strategy, termed suppressed projected alternating minimization (SPAM). (3) To assess the effectiveness of the proposed method, we build a turbulent blur image dataset. (4) Quantitative and qualitative experiments illustrate that our algorithm is superior to state-of-the-art algorithms.

2. Related Work

In recent years, blind image deblurring has significantly improved due to the use of various priors on images and blur kernels. Most works are based on statistical priors and gradient sparsity priors.
Fergus et al. [1] adopted a Gaussian mixture model to learn an image gradient prior through variational Bayesian inference. Levin et al. [2] explored the shortcomings of the naïve $\mathrm{MAP}_{x,k}$ method and presented a more robust $\mathrm{MAP}_k$ algorithm. Babacan et al. [6] proposed a general approach with super-Gaussian sparse image priors. However, these methods are computationally expensive.
Efficient approaches based on gradient sparse priors have also been developed. Chan and Wong [3] proposed total variation (TV) regularization, which achieves good performance on image edges. Since then, many different sparse regularizers [4,5,15,16] have been adopted to support the gradient sparsity of natural images. However, Levin et al. [11] showed that these gradient sparsity priors favor blurry images rather than sharp ones.
To support sharp images, various other image priors have been designed. Krishnan et al. [7] proposed the L1/L2 regularization function and developed a fast algorithm to solve this non-convex problem. Xu et al. [8] used the L0 sparse gradient prior for motion deblurring. Then, Pan et al. [12] extended this prior to deblurring text images. Pan et al. [9] noted that the dark channel of blurred images is less sparse than that of clear images, and then employed L0 regularization on the dark channel for deblurring. Yan et al. [10] further developed the extreme channels prior for better robustness. Instead of exploring the statistical properties of entire images, some methods focus on extracting image edges for kernel estimation [17,18,19,20,21]. This type of method is effective for latent images with strong edges. Another kind of method involves obtaining image patch priors for image estimation, such as a local prior for reducing ringing artifacts [22] and patch priors for modeling image edges [23]. Nevertheless, these methods usually involve heuristic or specially designed optimization algorithms, which makes them somewhat computationally complex.
Lately, neural networks have been employed for image restoration. Sun et al. [24] applied a convolutional neural network (CNN) to remove non-uniform motion blur. Zhang et al. [25] employed recurrent neural networks (RNNs) to model the spatially variant deblurring process. Tao et al. [26] proposed a scale-recurrent network as a pyramid strategy for restoring the sharp image. End-to-end conditional generative adversarial networks (GANs) for motion deblurring were designed and improved [27,28]. Additionally, a dark and bright channel prior was embedded into a network for dynamic scene deblurring [29]. However, these methods strongly depend on the training and test data, which limits their application to complex and variable turbulent blur.

3. Comparison of Image Priors

In this section, we analyze the advantages of the proposed image prior compared to previous image priors, also known as image regularization terms.
The usual regularization term can be summarized as:
$$ L_p(\nabla I) = \sum_{i,j} |\nabla I_{ij}|^p \tag{2} $$
where I is the regularized image, ∇ is the derivative operator, |·| is the absolute value function, and different values of p yield different regularization terms. When p = 1, (2) is the L1 regularization [3,15,22]. When p = 2, (2) is the Tikhonov regularization [22,30], i.e., the square of the L2 norm. When p takes values between 0 and 1, (2) is a hyper-Laplacian regularization [4,5,6]. The L1/L2 regularization [7] equals $L_1(\nabla I)/L_2(\nabla I)$. The L0 regularization [8,31] counts the number of nonzero values of $\nabla I$.
The proposed image regularization term is defined as:
$$ \mathrm{ML1}(\nabla I) = -\sum_{i,j} |\nabla I_{ij}| + C \tag{3} $$
where C is a constant that ensures the regularization loss remains positive. From the optimization point of view, the developed regularization term maximizes the L1 norm; thus, we name it the maximizing L1 regularization (ML1). To compare the properties of different regularization terms, we simulated atmospheric turbulent blur with a Von Kármán statistical phase screen model [32], as used in [33,34]. Gaussian noise of 1% was added to all blurred images.
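The regularization terms of Equations (2) and (3) can be compared numerically in a few lines. The sketch below is a minimal illustration under stated assumptions: forward differences as the derivative operator, a user-supplied constant C, and a small threshold for the L0 count. Evaluating `reg_loss` on progressively blurred copies of an image reproduces the kind of comparison shown in Figure 1.

```python
import numpy as np

def abs_grads(I):
    """Absolute forward-difference gradients (one choice of discrete derivative operator)."""
    gx = np.diff(I, axis=1, append=I[:, -1:])
    gy = np.diff(I, axis=0, append=I[-1:, :])
    return np.abs(np.concatenate([gx.ravel(), gy.ravel()]))

def reg_loss(I, kind, p=0.8, C=1e6):
    """Regularization losses of Equations (2) and (3), summed over both gradient directions."""
    a = abs_grads(I)
    if kind == "Lp":        # Eq. (2): p = 1 -> L1, p = 2 -> Tikhonov, 0 < p < 1 -> hyper-Laplacian
        return float(np.sum(a ** p))
    if kind == "L1/L2":     # normalized sparsity measure [7]
        return float(a.sum() / (np.linalg.norm(a) + 1e-12))
    if kind == "L0":        # number of (numerically) nonzero gradient values [8,31]
        return int(np.count_nonzero(a > 1e-6))
    if kind == "ML1":       # Eq. (3): C - sum |grad I|; lower for sharper images, so minimization favors them
        return float(C - a.sum())
    raise ValueError(f"unknown regularizer: {kind}")
```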
First, we analyze the effect of image blur on different regularization terms. A natural image, embedded in Figure 1, is used as the original clear image. As shown in Figure 1, the regularization losses of the L1 and L2 norms decrease as the image becomes more blurred. Therefore, the L1 and L2 norms are inappropriate image regularization terms because they support blurry images; Levin et al. [11] studied this behavior in detail. The L1/L2 [7] and L0 [8,31] regularization terms correctly favor sharp images. However, these priors are non-convex functions, making optimization complicated and unstable. Additionally, compared to the L1 and L2 regularization terms, they reduce the gap between the blurred image and the original image. The ML1 prior behaves correctly under blur and distinguishes the original image from the blurred image to a greater extent than the L1/L2 and L0 regularization terms. Furthermore, the ML1 prior is a symmetric function of L1 regularization, which enables it to retain the discriminative power and monotonicity of L1 regularization with respect to image blur. Remarkably, ML1 is still a convex function, so the methods to optimize it are relatively straightforward and stable.
Next, we consider the effect of feature size on the ML1 prior. We simulated three types of classic 2D signals for analysis. As shown in Figure 2, for all types of signals, small-size signals have higher regularization losses under the same blur. This means that the ML1 prior is more sensitive to the blurring of small-scale features. To verify this observation, we selected an image with rich details and performed similar experiments; in Figure 3, we observe the same phenomenon. The above illustrates the effectiveness of the ML1 prior for preserving small-scale features.
In conclusion, compared to other existing image regularization terms, our proposed regularization term can distinguish between sharp and blurred images and better preserves edge details. In particular, the presented regularization term is convex, which means that many standard algorithms can solve the resulting problem robustly. In the next section, the blind image deblurring algorithm using the ML1 prior is introduced in detail.

4. Method

Assuming the image noise is spatially independent Gaussian noise, the general optimization model for the blind deblurring algorithm is:
$$ \min_{o,h} \; \|o \ast h - g\|_2^2 + \lambda_o \phi(o) + \lambda_h \varphi(h) \tag{4} $$
where $\phi(o)$ and $\varphi(h)$ denote the regularization terms of the estimated sharp image o and blur kernel h, respectively.

4.1. Deblurring Model

Using the ML1 prior for the image and the delayed normalization in [15] for the blur kernel, our objective function becomes:
$$ \min_{o,h} \; \|o \ast h - g\|_2^2 + \lambda_o \left( C - \|\nabla o\|_1 \right) \quad \mathrm{s.t.} \; h \geq 0, \; \|h\|_1 = 1 \tag{5} $$
where ∇ is the gradient operator. Inspired by the method of non-maximum suppression (NMS) [35,36], we propose a soft suppression strategy for inhibiting artifacts in the blind image deblurring algorithm:
$$ s = [s_{ij}] = \left[ \frac{|\nabla o_{ij}|}{|\nabla o_{ij}| + \gamma \max_{i,j}(|\nabla o_{ij}|)} \right] \tag{6} $$
where γ is a regulating parameter. The value of $s_{ij}$ is regarded as the probability that an image point $o_{ij}$ is an edge feature point. Applying this function to adjust the regularization weights, the optimization formulation (5) becomes:
$$ \min_{o,h} \; \|o \ast h - g\|_2^2 + \lambda_o \left( C - \|s \odot \nabla o\|_1 \right) \quad \mathrm{s.t.} \; h \geq 0, \; \|h\|_1 = 1 \tag{7} $$
where ⊙ represents the Hadamard product. This strategy prevents our algorithm from overprotecting edges.
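A minimal sketch of the suppression weights in Equation (6), assuming forward-difference gradients and a single weight per pixel (summing the absolute gradient magnitudes of both directions is a simplification of this sketch):

```python
import numpy as np

def suppression_weights(o_prev, gamma=3.0):
    """Soft suppression weights s of Equation (6), computed from the previous iterate.

    Strong-edge pixels receive s close to 1 / (1 + gamma), flat regions s near 0,
    so the ML1 term is softly down-weighted at strong edges instead of being
    switched off, as a hard non-maximum suppression would do.
    """
    gx = np.diff(o_prev, axis=1, append=o_prev[:, -1:])
    gy = np.diff(o_prev, axis=0, append=o_prev[-1:, :])
    mag = np.abs(gx) + np.abs(gy)                    # per-pixel gradient magnitude
    return mag / (mag + gamma * mag.max() + 1e-12)   # elementwise Eq. (6)
```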
Alternating minimization (AM) [3,37] is a commonly used solution in blind image deblurring due to its speed. We perform delayed weighting in the AM algorithm to implement the soft suppression strategy. Precisely, we first calculate s based on the image from the previous iteration and then update the image with s fixed. The details of the algorithm are described in the next section.

4.2. Optimization

The proposed ML1 prior is convex, so we use gradient descent as the optimization method for each step. Starting from an initialization of o and h, we alternately conduct one gradient descent step on o and on h. The main steps are shown in Algorithm 1.
Algorithm 1 Image deblurring.
Input: The degraded image g, initialization h, and parameters γ and λ o
     for  t = 1 to T do
             compute s using Equation (9)
             update o using Equations (10) and (11)
             update h using Equations (13) and (14)
     end for
Output: estimated latent image o and blur kernel h

4.2.1. Updating o

In this step, we only update o. Given the estimated kernel h, the loss function for image o is:
$$ J_o[o,h] \triangleq \|o \ast h - g\|_2^2 + \lambda_o \left( C - \|s \odot \nabla o\|_1 \right) \tag{8} $$
We first compute and fix s at the tth iteration. With the estimated image obtained from the initialization or the previous iteration, s is updated by:
$$ s^t = [s_{ij}^t] = \left[ \frac{|\nabla o_{ij}^{t-1}|}{|\nabla o_{ij}^{t-1}| + \gamma \max_{i,j}(|\nabla o_{ij}^{t-1}|)} \right] \tag{9} $$
Then, update o at the tth iteration with:
$$ \hat{o}^t = o^{t-1} - \varepsilon_o \nabla_o J_o[o^{t-1}, h^{t-1}] \tag{10} $$
with image step size $\varepsilon_o > 0$. Lastly, we apply a non-negative projection to the image, projecting o by:
$$ o^t = \max\{\hat{o}^t, 0\} \tag{11} $$
This step guarantees that all intermediate images conform to the natural rule that the image values are non-negative.

4.2.2. Updating h

In this step, we update h. Given the estimated image, the loss function for kernel h is:
$$ J_h[o,h] \triangleq \|o \ast h - g\|_2^2 \tag{12} $$
Using the gradient descent algorithm, h is updated with:
$$ \hat{h}^t = h^{t-1} - \varepsilon_h \nabla_h J_h[o^{t-1}, h^{t-1}] \tag{13} $$
with kernel step size $\varepsilon_h > 0$. Then, h is projected and normalized by:
$$ h^t = \max\{\hat{h}^t, 0\}, \qquad h^t = \frac{h^t}{\|h^t\|_1} \tag{14} $$
Modified from projected alternating minimization (PAM) [15], this iterative algorithm is called the suppressed projected alternating minimization (SPAM).
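To make the update order explicit, here is a minimal sketch of one SPAM iteration following Equations (9)–(14). It is an illustration under the same assumptions as the earlier sketches (forward-difference gradients with approximate adjoints, one suppression weight per pixel, direct-form correlations), not the authors' MATLAB implementation; in practice, FFT-based correlation would be much faster.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

# Forward differences and their (approximate) adjoints.
def dx(I):  return np.diff(I, axis=1, append=I[:, -1:])
def dy(I):  return np.diff(I, axis=0, append=I[-1:, :])
def dxT(P): return -np.diff(P, axis=1, prepend=P[:, :1])
def dyT(P): return -np.diff(P, axis=0, prepend=P[:1, :])

def spam_iteration(o, h, g, gamma=3.0, lam=1e-4, eps_o=1e-3, eps_h=1e-4):
    """One suppressed projected AM iteration, Eqs. (9)-(14); a sketch, not the paper's code."""
    # Eq. (9): suppression weights from the previous image iterate.
    mag = np.abs(dx(o)) + np.abs(dy(o))
    s = mag / (mag + gamma * mag.max() + 1e-12)

    # Eqs. (10)-(11): one projected gradient step on the image o.
    r = convolve2d(o, h, mode="same", boundary="symm") - g            # residual o * h - g
    grad_data = 2.0 * correlate2d(r, h, mode="same", boundary="symm")
    # Subgradient of lam * (C - ||s .* grad o||_1) w.r.t. o; the constant C drops out.
    grad_reg = -lam * (dxT(s * np.sign(dx(o))) + dyT(s * np.sign(dy(o))))
    o = np.maximum(o - eps_o * (grad_data + grad_reg), 0.0)

    # Eqs. (13)-(14): one projected gradient step on the kernel h.
    r = convolve2d(o, h, mode="same", boundary="symm") - g
    full = correlate2d(r, o, mode="full")              # cross-correlation at all lags
    cy, cx = full.shape[0] // 2, full.shape[1] // 2
    ky, kx = h.shape[0] // 2, h.shape[1] // 2
    grad_h = 2.0 * full[cy - ky:cy + ky + 1, cx - kx:cx + kx + 1]
    h = np.maximum(h - eps_h * grad_h, 0.0)            # non-negativity projection
    h = h / (h.sum() + 1e-12)                          # ||h||_1 = 1 normalization
    return o, h
```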

4.2.3. Pyramid Scheme

In turbulent conditions, the blur kernel is often large, so a coarse-to-fine recovery strategy is needed. The pyramid scheme substantially reduces the computational complexity incurred by recovering directly at the full image size and speeds up the algorithm [38]. We used a pyramid scheme similar to the method in [1,7]. We downsampled the blurred image by a factor of 2 until the corresponding kernel size reached $3 \times 3$. The blur kernel is initialized as a uniform blur. We start the recovery algorithm at the coarsest scale and then upsample the recovered sharp image and the estimated blur kernel to initialize the image and the blur kernel at the next scale. At each scale, at least 200 iterations are run. The parameter $\lambda_o$ decreases correspondingly as the scale increases, by multiplication with an attenuation factor $\alpha$. We use bicubic interpolation for all resizing operations. The overall pyramid scheme is given in Algorithm 2; a sketch of the resulting schedule follows the algorithm.
Algorithm 2 Pyramid scheme.
Input: The degraded image g, maximum kernel size K, parameters γ , α , and final λ o
     Downsample blurry image g, initialize o, h, and  λ o
     for s = 1 to S do
              solve o s , h s via Algorithm 1
              if  s < S  then
                       upsample o s , h s
                       update λ o
              end if
     end for
Output: estimated latent image o s and blur kernel h s
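As a concrete reading of Algorithm 2, the following sketch (an assumption of this edit, not the authors' code) computes the coarse-to-fine schedule: the per-scale kernel sizes obtained by repeated halving down to 3 × 3, and the matching $\lambda_o$ values obtained by undoing the attenuation factor α from the final value.

```python
def pyramid_schedule(kernel_size, lam_final=1e-4, alpha=0.9):
    """Coarse-to-fine kernel sizes and lambda_o values for the pyramid scheme.

    lambda_o is multiplied by alpha at every finer scale and equals lam_final
    at the finest scale, so coarser scales start from larger values.
    """
    sizes = [kernel_size]
    while sizes[-1] > 3:
        k = max(3, sizes[-1] // 2)
        k += (k + 1) % 2               # keep kernel sizes odd
        sizes.append(k)
    sizes = sizes[::-1]                # coarse to fine
    lams = [lam_final / alpha ** (len(sizes) - 1 - i) for i in range(len(sizes))]
    return sizes, lams

# Example: pyramid_schedule(35) -> kernel sizes [3, 5, 9, 17, 35], coarse to fine.
```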

4.2.4. Non-Blind Algorithm

Given the estimated kernel at the finest level, we execute a non-blind image deblurring algorithm similar to the existing approaches [7,9,39]. In this paper, the final clear image is obtained by applying the non-blind image deblurring algorithm in [7].

5. Experiments and Discussion

We conducted experiments on a synthetic turbulent dataset and on real turbulence-degraded images. Our testing environment was a PC running 64-bit MS Windows 10 with an Intel Core i5 CPU. We used MATLAB R2014a to run all the MATLAB codes. We compared our method with eight previous blind image restoration approaches, including the latest ones [9,15,16,39].

5.1. Synthetic Data

To evaluate the performance of our method, we established a turbulence-degraded dataset containing 32 images. The dataset was generated by convolving four images with eight simulated atmospheric turbulence blur kernels and adding 1% white noise. The ground truth images were obtained from the dataset in [11].
These eight blur kernels were generated with the Von Kármán statistic phase screen model [32]. The blur kernel of atmospheric turbulence can be modeled as:
$$ h(x,y) = \left| \mathcal{F}^{-1}\{ A(u,v)\, e^{j \omega(u,v)} \} \right|^2 \tag{15} $$
where $h(x,y)$ is the atmospheric turbulence blur kernel function, $A(u,v)$ is the pupil function of the imaging system, $\omega(u,v)$ is the random phase screen function, $\mathcal{F}^{-1}$ denotes the inverse Fourier transform, and j is the imaginary unit. The random phase screen function was generated by inversion of the Von Kármán power spectrum model [40]. The simulated turbulent images resemble images recorded by a D = 1.50 m telescope through atmospheric turbulence with $r_0$ of 0.045–0.055 m; with these settings, the simulated blur kernel occupied nearly $35 \times 35$ pixels in the $256 \times 256$ pixel image plane. We obtained many blur kernels by generating them multiple times under the same parameters and by varying the parameters. Then, we cropped the edges of each simulated blur kernel image where all pixel values were less than 1/225 of its maximum value and resized the kernel image to an odd size by zero-padding the edges. Finally, we selected eight blur kernels with little feature repetition. Their sizes range from $29 \times 29$ to $39 \times 39$. Figure 4 shows these eight turbulent blur kernels.
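The squared-magnitude structure of Equation (15) is easy to prototype. The sketch below is a loose illustration only: it substitutes a low-pass-filtered white-noise phase for the true Von Kármán screen of [40] (whose generation requires inverting the Von Kármán power spectrum), and the pupil fraction, smoothing scale, and phase strength are arbitrary placeholder parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def turbulence_kernel(n=256, aperture_frac=0.25, phase_strength=3.0, rng=None):
    """Toy blur kernel from Equation (15): h = |F^{-1}{A(u,v) exp(j w(u,v))}|^2."""
    rng = np.random.default_rng() if rng is None else rng
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    A = (np.hypot(x, y) <= aperture_frac * n / 2).astype(float)   # circular pupil A(u, v)
    # Placeholder smooth random phase (NOT a Von Karman screen): filtered white noise.
    omega = gaussian_filter(rng.standard_normal((n, n)), sigma=n / 40.0)
    omega = phase_strength * omega / (omega.std() + 1e-12)
    field = np.fft.ifft2(np.fft.ifftshift(A * np.exp(1j * omega)))
    h = np.fft.fftshift(np.abs(field) ** 2)                       # intensity PSF, centered
    return h / h.sum()                                            # normalize like a PSF
```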
In Figure 5 and Figure 6, we show some visual results recovered by the eight competing algorithms and ours on this synthetic turbulent dataset. We downloaded the authors' codes from their websites and adjusted the parameters to those performing best on our dataset. For our algorithm, we set γ to 3, α to 0.9, and the final $\lambda_o$ to 0.0001. As shown in Figure 5 and Figure 6, our method performs better than the other algorithms visually, especially in the details of the restored images. In Figure 5, our method recovers clearer fences, fine-grained patterns on clothes, and thin branches of bushes. In Figure 6, our restored images contain more obvious textures and hair. In general, the images restored by our method have sharper edges compared to [1,2,9,15,16,39] and fewer artifacts compared to [2,7,22].
Additionally, we evaluated the restored results using the sum of squared differences (SSD) [11], error ratio (ER) [11], peak signal-to-noise ratio (PSNR) [41], and structural similarity (SSIM) [42]. In this study, the sparse prior algorithm in [4] was used as the non-blind image deblurring algorithm for calculating the error ratios. Figure 7 shows the cumulative histogram of SSD error ratios on the synthetic turbulent dataset; our curve reaches the highest cumulative distribution over all error ratios. Figure 8 presents the cumulative histogram of SSD results per image.
Table 1 reports the average PSNR and SSIM. For these two metrics, our method also achieves state-of-the-art performance.
In Figure 9, we plot the gradient distributions of an image restored by different methods. The two best-performing methods were selected for comparison. As shown in Figure 9, our curve fits the original curve best. This confirms the effectiveness of our method.

5.2. Real Data

In this part, we compare our method with five competing methods on real turbulent images. We blindly deblurred images of different scenes, and our method restored clearer structures with fewer artifacts than the others. Some of the real images are blurred images we collected, with scenes involving houses, trees, and moving targets; the others are real blurred astronomical images, with scenes involving planets, moons, and nebulae. The deblurring results of some images are shown here. In Figure 10, compared to [15,16], our image has fewer artifacts in the house part and, compared to [9,39], more details in the bush part. In Figure 11, our method recovers the detailed structures with fewer artifacts, as can be observed from the circled part. In Figure 12, although the image noise is more severe, our method remains competitive compared to the others. In Figure 13, our image has more natural edges than those of the other methods, which can be clearly observed from the enlarged parts. Notably, the shape of the blur kernel estimated by our method is also more consistent with a turbulent kernel, which has multiple peaks.
Entropy was also used for quantitative evaluation, which is calculated as:
$$ \mathrm{Entropy}(I) = -\sum_{v=0}^{255} p_v \log p_v \tag{16} $$
where $p_v$ denotes the probability of gray level v in the image. Table 2 shows the entropy values of the deblurred images; the entropy value is positively associated with image clarity. The entropy values obtained by the proposed method are generally higher than those of the previous methods [7,9,15,16,39], and the proposed method achieves the highest average entropy value, which verifies its stability.
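A direct implementation of Equation (16) is short. This sketch assumes 8-bit gray levels and a base-2 logarithm (the paper does not state the base), and treats 0 · log 0 as 0:

```python
import numpy as np

def image_entropy(I):
    """Shannon entropy of Equation (16) over 256 gray levels."""
    I8 = np.clip(np.round(I * 255.0 if I.max() <= 1.0 else I), 0, 255).astype(np.uint8)
    p = np.bincount(I8.ravel(), minlength=256) / I8.size   # gray-level probabilities p_v
    p = p[p > 0]                                           # drop zero bins (0 log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```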

5.3. Limitation

In our approach, we model image noise with a Gaussian function. If the blurred image contains other types of scattered noise, such as speckle noise, the model may misjudge the noise as image texture. In this case, the SPAM algorithm may recover a noisy image. However, to the best of our knowledge, this is also challenging for other blind image deblurring algorithms. How to balance the removal of outliers and the preservation of image details in blind deblurring remains a topic for future research.

6. Conclusions

In this paper, we proposed the maximizing L1 regularization (ML1) prior, motivated by the need to support sharp latent images rather than turbulence-blurred ones. To efficiently restore images with this prior, we developed the suppressed projected alternating minimization (SPAM) algorithm to estimate latent sharp images and blur kernels. We then simulated an atmospheric turbulent blur dataset for quantitative evaluation. Experiments on this dataset and on other turbulent images indicated that the images deblurred by the proposed method have superior visual quality. The proposed approach can be extended to other types of degraded images. In future work, we plan to develop our model to handle more severe noise.

Author Contributions

Conceptualization, L.D., S.S., and J.Z.; methodology, L.D. and J.Z.; experiments, L.D. and S.S.; validation, J.Z. and Z.X.; writing—original draft preparation, L.D. and S.S.; writing—review and editing, J.Z. and Z.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the West Light Foundation for Innovative Talents of the Chinese Academy of Sciences (YA18K001) and the Frontier Research Foundation of the Chinese Academy of Sciences (Z20H04).

Data Availability Statement

The data presented in this study are available at https://webee.technion.ac.il/people/anat.levin/, accessed on 29 June 2021.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W.T. Removing camera shake from a single photograph. In ACM SIGGRAPH 2006 Papers; ACM: New York, NY, USA, 2006; pp. 787–794.
2. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Efficient marginal likelihood optimization in blind deconvolution. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2657–2664.
3. Chan, T.F.; Wong, C.K. Total variation blind deconvolution. IEEE Trans. Image Process. 1998, 7, 370–375.
4. Levin, A.; Fergus, R.; Durand, F.; Freeman, W.T. Image and depth from a conventional camera with a coded aperture. ACM Trans. Graph. 2007, 26, 70-es.
5. Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. Adv. Neural Inf. Process. Syst. 2009, 22, 1033–1041.
6. Babacan, S.D.; Molina, R.; Do, M.N.; Katsaggelos, A.K. Bayesian blind deconvolution with general sparse image priors. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 341–355.
7. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 233–240.
8. Xu, L.; Zheng, S.; Jia, J. Unnatural L0 sparse representation for natural image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1107–1114.
9. Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636.
10. Yan, Y.; Ren, W.; Guo, Y.; Wang, R.; Cao, X. Image deblurring via extreme channels prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4003–4011.
11. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding blind deconvolution algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2354–2367.
12. Pan, J.; Hu, Z.; Su, Z.; Yang, M.H. L0-regularized intensity and gradient prior for deblurring text images and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 342–355.
13. Pan, J.; Hu, Z.; Su, Z.; Yang, M.H. Deblurring face images with exemplars. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2014; pp. 47–62.
14. Hu, Z.; Cho, S.; Wang, J.; Yang, M.H. Deblurring low-light images with light streaks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3382–3389.
15. Perrone, D.; Favaro, P. Total variation blind deconvolution: The devil is in the details. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2909–2916.
16. Jin, M.; Roth, S.; Favaro, P. Normalized blind deconvolution. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 668–684.
17. Joshi, N.; Szeliski, R.; Kriegman, D.J. PSF estimation using sharp edge prediction. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
18. Cho, S.; Lee, S. Fast motion deblurring. In ACM SIGGRAPH Asia 2009 Papers; ACM: New York, NY, USA, 2009; pp. 1–8.
19. Xu, L.; Jia, J. Two-phase kernel estimation for robust motion deblurring. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 157–170.
20. Lai, W.S.; Ding, J.J.; Lin, Y.Y.; Chuang, Y.Y. Blur kernel estimation using normalized color-line prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 64–72.
21. Gong, D.; Tan, M.; Zhang, Y.; Van den Hengel, A.; Shi, Q. Blind image deconvolution by automatic gradient activation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1827–1836.
22. Shan, Q.; Jia, J.; Agarwala, A. High-quality motion deblurring from a single image. ACM Trans. Graph. 2008, 27, 1–10.
23. Sun, L.; Cho, S.; Wang, J.; Hays, J. Edge-based blur kernel estimation using patch priors. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Cambridge, MA, USA, 19–21 April 2013; pp. 1–8.
24. Sun, J.; Cao, W.; Xu, Z.; Ponce, J. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 769–777.
25. Zhang, J.; Pan, J.; Ren, J.; Song, Y.; Bao, L.; Lau, R.W.; Yang, M.H. Dynamic scene deblurring using spatially variant recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2521–2529.
26. Tao, X.; Gao, H.; Shen, X.; Wang, J.; Jia, J. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8174–8182.
27. Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind motion deblurring using conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8183–8192.
28. Kupyn, O.; Martyniuk, T.; Wu, J.; Wang, Z. DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 8878–8887.
29. Cai, J.; Zuo, W.; Zhang, L. Dark and bright channel prior embedded network for dynamic scene deblurring. IEEE Trans. Image Process. 2020, 29, 6885–6897.
30. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; Winston: Washington, DC, USA, 1977.
31. Pan, J.; Hu, Z.; Su, Z.; Yang, M.H. Deblurring text images via L0-regularized intensity and gradient prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2901–2908.
32. Roggemann, M.C.; Welsh, B.M. Imaging Through Turbulence; CRC Press: Boca Raton, FL, USA, 2018.
33. Zhang, J.; Zhang, Q.; He, G. Blind deconvolution: Multiplicative iterative algorithm. Opt. Lett. 2008, 33, 25–27.
34. Zhang, J.; Zhang, Q.; He, G. Blind deconvolution of a noisy degraded image. Appl. Opt. 2009, 48, 2350–2355.
35. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698.
36. Neubeck, A.; Van Gool, L. Efficient non-maximum suppression. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, 20–24 August 2006; Volume 3, pp. 850–855.
37. Chan, T.F.; Wong, C.K. Convergence of the alternating minimization algorithm for blind deconvolution. Linear Algebra Appl. 2000, 316, 259–285.
38. Bai, Y.; Jia, H.; Jiang, M.; Liu, X.; Xie, X.; Gao, W. Single-image blind deblurring using multi-scale latent structure prior. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2033–2045.
39. Wen, F.; Ying, R.; Liu, Y.; Liu, P.; Truong, T.K. A simple local minimal intensity prior and an improved algorithm for blind image deblurring. IEEE Trans. Circuits Syst. Video Technol. 2020.
40. Johansson, E.M.; Gavel, D.T. Simulation of stellar speckle imaging. In Amplitude and Intensity Spatial Interferometry II; International Society for Optics and Photonics: San Diego, CA, USA, 1994; Volume 2200, pp. 372–383.
41. Wang, Z.; Bovik, A.C. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Process. Mag. 2009, 26, 98–117.
42. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. An illustration of different regularization losses changing with blur size. The embedded image is the original clear image. The regularization loss is computed by $f(\nabla_x(o \ast h)) + f(\nabla_y(o \ast h))$, where $f(\cdot)$ denotes the regularization function and $\nabla_x$ and $\nabla_y$ are discrete gradient filters. The turbulent blur kernel h ranges from 0 to 50 pixels in size.
Figure 2. The ML1 regularization losses on different sizes of 2D signals. (Left): three types of simulated 2D signal images. (Right): corresponding regularization loss curves, where w represents the width of the cone and square signals and σ is the standard deviation of the Gaussian signal.
Figure 3. The ML1 regularization losses on different sizes of an image with rich details. (Left): image. (Right): regularization loss curves, where b represents the image size.
Figure 4. Illustration of 8 simulated turbulent blur kernels. Note that the turbulent blur kernel has multiple peaks.
Figure 5. Visual recovery results: (a) Input images and ground-truth kernels, (b) Fergus et al. [1], (c) Levin et al. [2], (d) Jin et al. [16], (e) Pan et al. [9], and (f) our method.
Figure 6. Visual recovery results: (a) Input images and ground-truth kernels, (b) Shan et al. [22], (c) Krishnan et al. [7], (d) Perrone and Favaro [15], (e) Wen et al. [39], and (f) our method.
Figure 7. Cumulative histogram of SSD error ratios on the synthetic turbulent dataset.
Figure 8. Cumulative histogram of SSD results per image on the synthetic turbulent dataset.
Figure 9. The gradient distribution of the original image, blurred image, and the images restored by three methods.
Figure 10. House: image size: $512 \times 256$, kernel size: $15 \times 15$. (a) Input image. (b) Perrone and Favaro [15]. (c) Pan et al. [9]. (d) Jin et al. [16]. (e) Wen et al. [39]. (f) Our method.
Figure 11. Tower: image size: $250 \times 242$, kernel size: $35 \times 35$. (a) Input image. (b) Krishnan et al. [7]. (c) Perrone and Favaro [15]. (d) Pan et al. [9]. (e) Wen et al. [39]. (f) Our method.
Figure 12. Satellite (http://www.tracking-station.de/images/images.html, accessed on 20 January 2021): image size: $326 \times 332$, kernel size: $35 \times 35$. (a) Input image. (b) Krishnan et al. [7]. (c) Perrone and Favaro [15]. (d) Jin et al. [16]. (e) Wen et al. [39]. (f) Our method.
Figure 13. Moon (https://images-assets.nasa.gov/image/as17-152-23311/as17-152-23311~thumb.jpg, accessed on 30 December 2020): image size: $640 \times 636$, kernel size: $35 \times 35$. (a) Input image. (b) Krishnan et al. [7]. (c) Perrone and Favaro [15]. (d) Pan et al. [9]. (e) Jin et al. [16]. (f) Our method.
Table 1. Quantitative comparison on the entire synthetic turbulent dataset: average PSNR and average SSIM.

Method                     Average PSNR    Average SSIM
Fergus et al. [1]          20.9185         0.7455
Shan et al. [22]           29.3030         0.9050
Levin et al. [2]           25.8104         0.7902
Krishnan et al. [7]        28.7716         0.8708
Perrone & Favaro [15]      31.0589         0.9166
Pan et al. [9]             30.5953         0.9195
Jin et al. [16]            31.3069         0.9292
Wen et al. [39]            31.4872         0.9341
Ours                       31.7542         0.9398
Table 2. The entropy values of the real images.

Image                      House     Tower     Satellite   Moon      Average
Krishnan et al. [7]        6.4265    7.8564    5.5203      5.2929    6.2740
Perrone & Favaro [15]      6.5368    7.8555    4.9054      4.9942    6.0730
Pan et al. [9]             6.3660    7.8707    5.5055      5.0646    6.2017
Jin et al. [16]            6.4086    7.8144    5.7205      5.5629    6.4043
Wen et al. [39]            6.3765    7.8537    5.6537      5.2822    6.2915
Our method                 6.4757    7.8620    5.9390      5.5629    6.4599
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
