Article

Infrared Image Deblurring via High-Order Total Variation and Lp-Pseudonorm Shrinkage

1 College of Physics and Information Engineering, Fuzhou University, Fuzhou 350000, China
2 School of Physics and Information Engineering, Minnan Normal University, Zhangzhou 363000, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(7), 2533; https://doi.org/10.3390/app10072533
Submission received: 26 February 2020 / Revised: 24 March 2020 / Accepted: 30 March 2020 / Published: 7 April 2020


Featured Application

In this study, an innovative model for infrared image deblurring under Gaussian noise is proposed by exploiting the sparsity of high-order total variation, yielding a substantial improvement in image recovery performance.

Abstract

The quality of infrared images is affected by various degradation factors, such as image blurring and noise pollution. Anisotropic total variation (ATV) has been shown to be a good regularization approach for image deblurring. However, ATV has two main drawbacks. First, conventional ATV regularization considers only the sparsity of the first-order image gradients, which leads to staircase artifacts. Second, it employs the L1-norm to describe the sparsity of image gradients, and the L1-norm has a limited capacity for depicting the sparsity of sparse variables. To address these limitations, a high-order total variation is introduced into the ATV deblurring model, and the Lp-pseudonorm is adopted to depict the sparsity of both the low- and high-order total variation. In this way, the recovered image fits the image priors with clear edges, and the staircase artifacts of the ATV model are eliminated. The alternating direction method of multipliers is used to solve the proposed model. The experimental results demonstrate that the proposed method not only removes blur effectively but is also highly competitive with state-of-the-art methods, both qualitatively and quantitatively.

1. Introduction

Thermal infrared imagers can image targets in complete darkness and at long range. Moreover, disguised targets and high-speed moving targets can be detected through thick smoke screens and clouds. Thus, thermal infrared imagers are widely used in military and civilian applications, and users place increasingly high demands on the quality of the acquired infrared images. The principle of infrared image generation [1,2,3] is as follows: any object whose temperature is above absolute zero radiates infrared rays outward; based on this radiation, the infrared system employs a sensor to convert the infrared radiation into electrical signals, and after signal processing, it presents the corresponding visible-light image on the display medium. Because an infrared imaging system is more complex than a natural (visible-light) imaging system, infrared images suffer from relatively more degradation, such as Gaussian blurring, motion blurring, and noise pollution. Therefore, deblurring plays a significant role in an infrared imaging system. Researchers have proposed several infrared image deblurring methods, for instance, the quaternion and high-order overlapping group sparse total variation model [4], the total variation with overlapping group sparsity and Lp quasinorm model [5], and the quaternion fractional-order total variation with Lp quasinorm model [6].
The total variation (TV) [7] model is simple and effective for image deblurring. However, it assumes the image to be piecewise constant, which results in staircase artifacts [8]. Several TV-based restoration methods have been proposed to address this issue. Generally, these extensions fall into two categories: local information-based and non-local information-based TV extensions. Local information-based TV models explore the total variation within a limited local area, whereas non-local models exploit patch similarity by searching the entire image. Concerning local information-based extensions, Sroisangwan [9] proposed a new higher-order regularization for noise removal. Oh et al. [10] proposed a non-convex hybrid TV (Hybrid TV) model by introducing high-order TV. Similarly, Liu et al. [11] assumed the first-order and second-order gradients to follow hyper-Laplacian distributions and proposed a constrained non-convex hybrid TV model for edge-preserving image restoration. Likewise, Zhu et al. [12] proposed an effective hybrid regularization model based on the second-order total generalized variation and a wavelet frame. Recently, Lanza et al. [13] proved that non-convex regularizations promote sparsity more effectively than convex regularizations. Accordingly, considering non-convex tools for depicting sparsity, Anger et al. [14] proposed blind image deblurring using the L0 gradient prior, and Yang et al. [15] proposed a weighted-l1-method-noise regularization for image deblurring. Based on overlapping group sparsity, Selesnick et al. [16], Liu et al. [17], Liu et al. [18,19], Shi et al. [20] and Wang et al. [21] proposed image reconstruction schemes to further reduce staircase artifacts. Adam et al. [22,23] combined second-order non-convex TV and non-convex higher-order total variation with overlapping group sparse regularization to remove staircase artifacts.
By taking advantage of neighborhood information, Cheng et al. [24] proposed a four-directional TV denoising method. Combining Lp-quasinorm shrinkage with four-directional TV, Liu et al. [6] extended anisotropic total variation (ATV) to a quaternion fractional TV model with the Lp-quasinorm (FTV4Lp). The aforementioned models rely mainly on local image information. In contrast, by considering non-local information, Wang et al. [25] used a non-local data fidelity term to build a denoising model. Xu et al. [26] adopted non-local TV models to regularize the solution of the image recovery optimization problem. Nasonov et al. [27] combined block-matching and 3D (BM3D) filtering with generalized TV to deblur images. By considering the similarity of non-local patches, Liu et al. [28] proposed a block-matching TV regularization.
The Lp-pseudonorm is an emerging tool for depicting sparse variables and has been applied in several signal-processing applications. Woodworth et al. [29] demonstrated that Lp shrinkage is superior to soft-threshold shrinkage in recovering sparse signals. Recently, Lp shrinkage has been applied in numerous fields. For example, Liu et al. [5] used the Lp-quasinorm instead of the L1-norm for infrared image deblurring with the overlapping group sparse TV method. Chen et al. [30] presented a sparse time-frequency representation model using the Lp-quasinorm constraint; this model fits the sparsity prior in the frequency domain. Li et al. [31] extended the ATV model to the anisotropic total variation Lp-quasinorm shrinkage (ATpV) model for impedance inversion. Zhao et al. [32] put forward an Lp-norm-based sparse regularization model for license plate deblurring.
Inspired by the aforementioned studies, we combine first-order and high-order ATV with Lp-pseudonorm shrinkage to construct a new image-deblurring model, hereafter referred to as HTV-Lp. The proposed model addresses the following limitations of the conventional TV model: (1) staircase artifacts; and (2) the limited capability of the L1-norm to depict the sparsity of sparse variables. To solve the new HTV-Lp model, the alternating direction method of multipliers (ADMM) framework [33] was adopted, and the fast Fourier transform (FFT) was used to improve the algorithm efficiency. To validate the HTV-Lp model, experiments were performed comparing its performance with that of existing models, including ATV [7], isotropic total variation (ITV) [34], ATpV [31], FTV4Lp [6] and non-convex hybrid TV (Hybrid TV) [10]. Objective indicators of the six methods were evaluated, namely the peak signal-to-noise ratio (PSNR) [35], structural similarity (SSIM) [36], and gradient magnitude similarity deviation (GMSD) [37]. The major contributions that improve the quality of deblurred infrared images are as follows: (1) considering the sparsity of the high-order gradient field in addition to the ATV prior; (2) introducing Lp-pseudonorm shrinkage to express the sparsity of the first-order and high-order gradient fields; (3) combining first-order and high-order TV with Lp-pseudonorm shrinkage in a unified model; (4) adjusting the parameters of the model separately.
This paper is organized as follows: Section 2 presents the ATV deblurring model. The HTV-Lp model proposed in this paper is described in Section 3. The algorithm for solving the HTV-Lp model is presented in Section 4. Next, the publicly available datasets, evaluation metrics, results of the extensive experiments conducted for evaluation of the six models, and experimental analysis are presented in Section 5. Finally, Section 6 concludes the paper.

2. Traditional Anisotropic Total Variation (ATV) Model

The ATV deblurring model [38] is described as follows:
$$F = \operatorname*{argmin}_{F}\; \frac{1}{2}\|H * F - G\|_2^2 + u\, R_{\mathrm{ATV}}(F). \tag{1}$$
The Gaussian blurring kernel $H \in \mathbb{R}^{N \times N}$ is represented by a point spread function [5], $F \in \mathbb{R}^{N \times N}$ is the original image, and $G \in \mathbb{R}^{N \times N}$ is the observed noisy image. $\frac{1}{2}\|H * F - G\|_2^2$ is the fidelity term, $R_{\mathrm{ATV}}(F)$ is the prior term, and the balance factor $u$ balances the prior and fidelity terms.
$$R_{\mathrm{ATV}}(F) = \|K_h * F\|_1 + \|K_v * F\|_1, \tag{2}$$
where $K_h = [1, -1]$ and $K_v = [1, -1]^{T}$ are the two-dimensional convolution (difference) kernels; $K_h * F$ describes information in the horizontal direction of the image, while $K_v * F$ describes information in the vertical direction.
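As a concrete illustration, the ATV prior of Equation (2) can be evaluated with a few lines of numpy under periodic boundary conditions (the same assumption that underlies the FFT-based solver of Section 4); the function name `atv` is ours:

```python
import numpy as np

def atv(F):
    """Anisotropic TV of Eq. (2): L1 norms of the horizontal and vertical
    first differences, with periodic (circular) boundaries."""
    gh = F - np.roll(F, -1, axis=1)   # K_h * F: horizontal differences
    gv = F - np.roll(F, -1, axis=0)   # K_v * F: vertical differences
    return np.abs(gh).sum() + np.abs(gv).sum()
```

For a piecewise-constant image the value counts only the jumps across region boundaries, which is exactly the sparsity the prior rewards.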

3. Proposed Model

The traditional ATV model considers only first-order gradient information and thus does not fully exploit the pixel information. To enhance the deblurring effect, the traditional ATV model is extended to a high-order TV model, which retains the first-order gradient information and adds high-order gradient information to the prior term. Figure 1 depicts contour maps that highlight the advantage of the Lp-pseudonorm. The contours of the L1-norm, L2-norm and Lp-pseudonorm are depicted in Figure 1a–c. The L1-norm and L2-norm can be expressed as $\|X\|_1 = \sum_{i=1}^{N}\sum_{j=1}^{N}|X_{ij}|$ and $\|X\|_2^2 = \sum_{i=1}^{N}\sum_{j=1}^{N}X_{ij}^2$, respectively, whereas the Lp-pseudonorm is defined below. In Figure 1, the dotted lines represent the fidelity term, while the solid blue lines represent the contours of the prior term. The red dots mark the intersections of the dotted and solid lines; intersections lying on the axes indicate better sparseness of the image gradients. It is also observed that the Lp-pseudonorm provides a greater degree of freedom than the L1-norm and L2-norm and can therefore better depict the sparsity of the image gradients. Accordingly, the Lp-pseudonorm is used in the prior terms to improve the robustness of the image gradients.
The high-order gradient information and the Lp-pseudonorm were introduced to increase the accuracy of the prior knowledge, thereby preserving the image edges while suppressing the influence of small edges on the estimation of the blurring kernel. The proposed high-order total variation with Lp-pseudonorm shrinkage (HTV-Lp) model is defined as follows:
$$F = \operatorname*{argmin}_{F}\; \frac{1}{2}\|H * F - G\|_2^2 + u_1\|K_h * F\|_{p_1}^{p_1} + u_2\|K_v * F\|_{p_2}^{p_2} + u_3\|K_h * K_h * F\|_{p_3}^{p_3} + u_4\|K_v * K_v * F\|_{p_4}^{p_4} + u_5\|K_h * K_v * F\|_{p_5}^{p_5}, \tag{3}$$
where $K_h = [1, -1]$, $K_v = [1, -1]^{T}$, the Lp-norm is defined as $\|X\|_p = \big(\sum_{i=1}^{N}\sum_{j=1}^{N}|X_{ij}|^p\big)^{1/p}$ $(0 < p < 1)$, the Lp-pseudonorm is defined as $\|X\|_p^p = \sum_{i=1}^{N}\sum_{j=1}^{N}|X_{ij}|^p$ $(0 < p \le 1)$, and $u_i$ $(i = 1, 2, \dots, 5)$ are the balance parameters.
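The sparsity-promoting behavior of the Lp-pseudonorm is easy to verify numerically with a small helper (the function name is ours):

```python
import numpy as np

def lp_pseudonorm(X, p):
    """Lp-pseudonorm ||X||_p^p = sum |x_ij|^p for 0 < p <= 1."""
    return float(np.sum(np.abs(X) ** p))
```

For the sparse array [4, 0, 0] with p = 0.5 the value is 2, whereas a dense array with the same L2 energy scores about 4.56; the pseudonorm therefore favors solutions whose gradients are concentrated in a few large entries.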

4. Solver by Alternating Direction Method of Multipliers (ADMM)

4.1. The Proposed Model Solution Based on the Alternating Direction Method of Multipliers

To solve the HTV-Lp model defined by Equation (3), the ADMM framework [33,39] is used: the complex problem is transformed into several simple, decoupled sub-problems through variable substitution. The split variables are defined as $Z_1 = K_h * F$, $Z_2 = K_v * F$, $Z_3 = K_h * K_h * F$, $Z_4 = K_v * K_v * F$, $Z_5 = K_h * K_v * F$. The original problem is then transformed into
$$\begin{cases} J = \frac{1}{2}\|H * F - G\|_2^2 + u_1\|Z_1\|_{p_1}^{p_1} + u_2\|Z_2\|_{p_2}^{p_2} + u_3\|Z_3\|_{p_3}^{p_3} + u_4\|Z_4\|_{p_4}^{p_4} + u_5\|Z_5\|_{p_5}^{p_5} \\ Z_i = K_i * F, \quad i = 1, 2, \dots, 5, \end{cases} \tag{4}$$
where $K_1 = K_h$, $K_2 = K_v$, $K_3 = K_h * K_h$, $K_4 = K_v * K_v$, $K_5 = K_h * K_v$.
According to the ADMM principle, dual variables $\tilde Z_i$ $(i = 1, 2, \dots, 5)$ are introduced. The problem in Equation (4) can then be converted into an unconstrained augmented Lagrangian function:
$$J = \frac{1}{2}\|H * F - G\|_2^2 + \sum_{i=1}^{5}\Big(u_i\|Z_i\|_{p_i}^{p_i} + \frac{\beta_i}{2}\|Z_i - K_i * F\|_2^2 - \beta_i\big\langle \tilde Z_i,\, Z_i - K_i * F\big\rangle\Big), \tag{5}$$
where $\langle X, Y\rangle$ denotes the inner product of $X$ and $Y$, and $\beta_i$ is the penalty factor of the quadratic penalty term.

4.2. F Sub-Problem Solving

The sub-function of the $F$ sub-problem is written as
$$J_F = \frac{1}{2}\|H * F - G\|_2^2 + \sum_{i=1}^{5}\Big(\frac{\beta_i}{2}\|Z_i - K_i * F\|_2^2 - \beta_i\big\langle \tilde Z_i,\, Z_i - K_i * F\big\rangle\Big). \tag{6}$$
To complete the square, the following identity is added to Equation (6):
$$0 = \frac{\beta_i}{2}\|\tilde Z_i\|_2^2 - \frac{\beta_i}{2}\|\tilde Z_i\|_2^2, \quad i = 1, 2, \dots, 5. \tag{7}$$
The sub-function then becomes (dropping the term that is constant with respect to $F$):
$$J_F = \frac{1}{2}\big\|H * F^{(k+1)} - G\big\|_2^2 + \sum_{i=1}^{5}\frac{\beta_i}{2}\big\|Z_i^{(k)} - K_i * F^{(k+1)} - \tilde Z_i^{(k)}\big\|_2^2. \tag{8}$$
Applying the Fourier transform to the above expression gives its frequency-domain representation:
$$J_{\bar F} = \frac{1}{2}\big\|\bar H \circ \bar F^{(k+1)} - \bar G\big\|_2^2 + \sum_{i=1}^{5}\frac{\beta_i}{2}\big\|\bar Z_i^{(k)} - \bar K_i \circ \bar F^{(k+1)} - \bar{\tilde Z}_i^{(k)}\big\|_2^2, \tag{9}$$
where $\circ$ denotes element-wise (point-wise) multiplication and $\bar X$ denotes the spectrum of $X$.
Setting the derivative of Equation (9) with respect to $\bar F^{(k+1)}$ to zero gives
$$\frac{\partial J_{\bar F}}{\partial \bar F^{(k+1)}} = \bar H^{*} \circ \big(\bar H \circ \bar F^{(k+1)} - \bar G\big) + \sum_{i=1}^{5}\beta_i\, \bar K_i^{*} \circ \big(\bar K_i \circ \bar F^{(k+1)} + \bar{\tilde Z}_i^{(k)} - \bar Z_i^{(k)}\big) = 0, \tag{10}$$
where $\bar X^{*}$ denotes the complex conjugate of $\bar X$. Let $L_{hs}$ and $R_{hs}$ be defined as in Equations (11) and (12), respectively:
$$L_{hs} = \bar H^{*} \circ \bar H + \sum_{i=1}^{5}\beta_i\, \bar K_i^{*} \circ \bar K_i, \tag{11}$$
$$R_{hs} = \bar H^{*} \circ \bar G + \sum_{i=1}^{5}\beta_i\, \bar K_i^{*} \circ \big(\bar Z_i^{(k)} - \bar{\tilde Z}_i^{(k)}\big). \tag{12}$$
Then, according to Equation (10), $F^{(k+1)}$ is obtained as
$$F^{(k+1)} = \mathrm{ifft2}\big(R_{hs} / L_{hs}\big), \tag{13}$$
where the division is element-wise and $\mathrm{ifft2}$ denotes the two-dimensional inverse fast Fourier transform.
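Assuming periodic boundaries (so that spatial convolutions become element-wise products of spectra), Equations (11)–(13) reduce to a handful of FFT operations. A minimal sketch follows; the function and argument names are ours, not the authors':

```python
import numpy as np

def f_update(G, H_psf, Ks, Zs, Zds, betas):
    """Closed-form F update of Eqs. (11)-(13) under periodic boundaries.
    H_psf and the kernels in Ks must be zero-padded to the image size;
    Zs and Zds hold the split variables Z_i and the duals Z~_i."""
    Hf, Gf = np.fft.fft2(H_psf), np.fft.fft2(G)
    lhs = np.conj(Hf) * Hf                      # Eq. (11)
    rhs = np.conj(Hf) * Gf                      # Eq. (12)
    for K, Z, Zd, b in zip(Ks, Zs, Zds, betas):
        Kf = np.fft.fft2(K)
        lhs = lhs + b * np.conj(Kf) * Kf
        rhs = rhs + b * np.conj(Kf) * np.fft.fft2(Z - Zd)
    return np.real(np.fft.ifft2(rhs / lhs))     # Eq. (13), element-wise division
```

With a delta blur kernel and all penalty factors set to zero, the update simply returns the observed image, a useful sanity check.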

4.3. Z i ( i = 1 , 2 , , 5 ) Sub-Problem Solving

The objective function of the $Z_1$ sub-problem is
$$J_{Z_1} = u_1\|Z_1\|_{p_1}^{p_1} + \frac{\beta_1}{2}\big\|Z_1 - K_h * F^{(k+1)} - \tilde Z_1^{(k)}\big\|_2^2. \tag{14}$$
We adopt Lp-pseudonorm shrinkage to solve Equation (14). In contrast to Wang et al. [21], where the same value of $p$ was used throughout, we use separate $p$ values for the different gradient orders and directions of the image; as described in Section 5.3, this improves the deblurring performance. Lp-pseudonorm shrinkage is defined as
$$\operatorname{shrink}_{p_i}(\xi, \tau) = \operatorname{sign}(\xi)\,\max\big\{|\xi| - \tau^{2-p_i}|\xi|^{p_i-1},\, 0\big\}, \quad i = 1, 2, \dots, 5.$$
According to this shrinkage rule, $Z_1$ is updated as follows,
$$Z_1^{(k+1)} = \operatorname{shrink}_{p_1}\big(K_1 * F^{(k+1)} + \tilde Z_1^{(k)},\; u_1/\beta_1\big). \tag{15}$$
Similarly, we have
$$Z_i^{(k+1)} = \operatorname{shrink}_{p_i}\big(K_i * F^{(k+1)} + \tilde Z_i^{(k)},\; u_i/\beta_i\big), \quad i = 2, 3, 4, 5. \tag{16}$$
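The shrinkage rule above can be implemented directly; the function name is ours, and the `np.where` guard merely avoids evaluating $0^{p-1}$:

```python
import numpy as np

def lp_shrink(xi, tau, p):
    """Lp-pseudonorm shrinkage: sign(xi) * max(|xi| - tau^(2-p) |xi|^(p-1), 0).
    For p = 1 this reduces to the usual soft threshold."""
    a = np.abs(xi)
    t = tau ** (2.0 - p) * np.where(a > 0, a, 1.0) ** (p - 1.0)
    return np.sign(xi) * np.maximum(a - t, 0.0)
```

With p = 1 the operator reduces to the familiar soft threshold; smaller p shrinks large entries less aggressively, which is how the model preserves strong edges.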

4.4. Z ˜ i ( i = 1 , 2 , , 5 ) Sub-Problem Solving

The objective function of the $\tilde Z_i$ sub-problem can be written as
$$J_{\tilde Z_i} = -\beta_i\big\langle \tilde Z_i,\; Z_i^{(k)} - K_i * F^{(k)}\big\rangle, \quad i = 1, 2, \dots, 5. \tag{17}$$
According to the hill-climbing (gradient ascent) method [40], the dual variables are updated as
$$\tilde Z_i^{(k+1)} = \tilde Z_i^{(k)} + \gamma\beta_i\big(K_i * F^{(k)} - Z_i^{(k)}\big), \quad i = 1, 2, \dots, 5, \tag{18}$$
where $\gamma$ is the learning rate.
Table 1 presents the pseudocode of HTV-Lp for infrared image deblurring, where $tol$ denotes the stopping threshold.
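Since Table 1 itself is not reproduced here, the following self-contained sketch shows one way the three sub-problem updates can be arranged, assuming periodic boundary conditions; all function and variable names are illustrative rather than the authors' own:

```python
import numpy as np

def htv_lp_deblur(G, H_psf, u, p, beta, gamma=1.0, tol=1e-4, max_iter=200):
    """Sketch of the HTV-Lp ADMM iteration (Sections 4.2-4.4) under periodic
    boundary conditions. u, p, beta, gamma follow Eqs. (3) and (18)."""
    N, M = G.shape

    def pad(k):  # zero-pad a small kernel into an image-sized array
        out = np.zeros((N, M))
        out[:k.shape[0], :k.shape[1]] = k
        return out

    # the five difference kernels K_1..K_5 of Eq. (4)
    kernels = [np.array([[1.0, -1.0]]),                 # K_h
               np.array([[1.0], [-1.0]]),               # K_v
               np.array([[1.0, -2.0, 1.0]]),            # K_h * K_h
               np.array([[1.0], [-2.0], [1.0]]),        # K_v * K_v
               np.array([[1.0, -1.0], [-1.0, 1.0]])]    # K_h * K_v
    Kf = [np.fft.fft2(pad(k)) for k in kernels]
    Hf, Gf = np.fft.fft2(pad(H_psf)), np.fft.fft2(G)

    def shrink(xi, tau, pi):  # Lp-pseudonorm shrinkage
        a = np.abs(xi)
        t = tau ** (2.0 - pi) * np.where(a > 0, a, 1.0) ** (pi - 1.0)
        return np.sign(xi) * np.maximum(a - t, 0.0)

    F = G.copy()
    Z = [np.zeros_like(G) for _ in range(5)]
    Zd = [np.zeros_like(G) for _ in range(5)]
    lhs = np.conj(Hf) * Hf + sum(b * np.conj(k) * k for b, k in zip(beta, Kf))
    for _ in range(max_iter):
        # F sub-problem in the Fourier domain, Eqs. (11)-(13)
        rhs = np.conj(Hf) * Gf + sum(
            b * np.conj(k) * np.fft.fft2(z - zd)
            for b, k, z, zd in zip(beta, Kf, Z, Zd))
        F_new = np.real(np.fft.ifft2(rhs / lhs))
        KF = [np.real(np.fft.ifft2(k * np.fft.fft2(F_new))) for k in Kf]
        # Z sub-problems via Lp shrinkage, Eqs. (15)-(16)
        Z = [shrink(kf + zd, ui / b, pi)
             for kf, zd, ui, b, pi in zip(KF, Zd, u, beta, p)]
        # dual updates, Eq. (18)
        Zd = [zd + gamma * b * (kf - z)
              for zd, b, kf, z in zip(Zd, beta, KF, Z)]
        # stopping rule of Eq. (24)
        if np.linalg.norm(F_new - F) <= tol * np.linalg.norm(F):
            F = F_new
            break
        F = F_new
    return F
```

Zero-padding the kernels to the image size lets every convolution be carried out as an element-wise product of spectra, which keeps the per-iteration cost at $O(N^2 \log N)$.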

5. Experimental Results and Analysis

5.1. Experimental Environment

To demonstrate the superiority of the proposed HTV-Lp model, Figure 2 shows eight different test images downloaded from the publicly available datasets at http://adas.cvc.uab.es/elektra/datasets/far-infra-red/ and http://www.dgp.toronto.edu/~nmorris/data/IRData/. The infrared images from http://adas.cvc.uab.es/elektra/datasets/far-infra-red/ were obtained using an infrared camera (FLIR PathFindIR) with a 19-mm focal-length lens. The size of “Store.BMP” is 506 × 408 pixels and that of the remaining images is 384 × 288. The Gaussian and motion blur kernels are generated by MATLAB functions. For example, fspecial('gaussian', [B B], σ) generates a B × B Gaussian blur kernel with standard deviation σ; for convenience, this kernel is referred to as (G, B, B, σ). fspecial('motion', L, θ) generates a motion blur kernel with motion displacement L and motion angle θ, referred to as (M, L, θ). Furthermore, Gaussian noise standard deviations of 1, 3, and 5 are used.
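For readers working outside MATLAB, a Gaussian kernel equivalent to fspecial('gaussian', [B B], σ) can be generated as follows (our function; fspecial additionally zeroes values below a small tolerance before normalizing, which is omitted here):

```python
import numpy as np

def gaussian_kernel(B, sigma):
    """B x B Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(B) - (B - 1) / 2.0          # coordinates centered on 0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()
```

The kernel is symmetric and sums to one, so blurring with it preserves the mean image intensity.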

5.2. Evaluation Metrics

The main evaluation metrics considered in this study are the PSNR, SSIM, and GMSD, which are expressed in Equations (19)–(21), respectively:
$$\mathrm{PSNR} = 10\log_{10}\frac{255^2}{\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}(X_{ij}-Y_{ij})^2}, \tag{19}$$
$$\mathrm{SSIM} = \frac{\big(2u_Xu_Y + 255^2k_1^2\big)\big(2\sigma_{XY} + 255^2k_2^2\big)}{\big(u_X^2+u_Y^2+255^2k_1^2\big)\big(\sigma_X^2+\sigma_Y^2+255^2k_2^2\big)}, \tag{20}$$
where $X$ and $Y$ refer to the original image and the restored image, respectively; $u_X$ and $u_Y$ are the mean intensities, $\sigma_X^2$ and $\sigma_Y^2$ the variances of $X$ and $Y$, and $\sigma_{XY}$ their covariance. The constants $k_1$ and $k_2$ ensure that the denominators are non-zero; in this experiment, $k_1$ is set to 0.01 and $k_2$ to 0.03.
$$\mathrm{GMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(\mathrm{GMS}(i) - \mathrm{GMSM}\big)^2}, \tag{21}$$
where
$$\mathrm{GMSM} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{GMS}(i), \tag{22}$$
$$\mathrm{GMS}(i) = \frac{2\,m_r(i)\,m_d(i) + c}{m_r^2(i) + m_d^2(i) + c}, \tag{23}$$
and $m_r$ and $m_d$ refer to the gradient magnitudes of the reference and distorted images, respectively; $c$ is a small constant that guarantees the denominator is non-zero.
Larger PSNR values and smaller GMSD values correspond to better image recovery quality. The range of SSIM is (0, 1), where higher values indicate better deblurring performance. The iterative stopping condition of the algorithm can be expressed as
$$E = \big\|F^{(k+1)} - F^{(k)}\big\|_2 \,/\, \big\|F^{(k)}\big\|_2 \le tol, \tag{24}$$
where $tol$ is set to $10^{-4}$.
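The PSNR and GMSD expressions above translate directly to numpy. The function names are ours; c = 170 is the value suggested for 8-bit images in [37]:

```python
import numpy as np

def psnr(X, Y):
    """PSNR for 8-bit images: 10 * log10(255^2 / MSE), Eq. (19)."""
    mse = np.mean((X.astype(float) - Y.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def gmsd(mr, md, c=170.0):
    """GMS map from gradient magnitudes mr, md, then its standard
    deviation, following Eqs. (21)-(23)."""
    gms = (2.0 * mr * md + c) / (mr ** 2 + md ** 2 + c)
    return float(np.sqrt(np.mean((gms - gms.mean()) ** 2)))
```

Identical gradient maps give GMS ≡ 1 everywhere and hence GMSD = 0, the best possible score.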

5.3. The Sensitivity of the Parameters

The parameters $u_i$, $\beta_i$, $p_i$, and $\gamma$ $(i = 1, 2, \dots, 5)$ must be selected properly. We tune each parameter until the iterated image is well recovered. The ranges are determined empirically as $u_i, \beta_i \in [0, 20]$ and $p_i, \gamma \in [0, 1]$ $(i = 1, 2, \dots, 5)$, and are traversed with a step size of 0.01. The parameter values at which the PSNR attains its maximum are selected as the optimal ones. Figure 3 depicts the effect of $u_i$ on the PSNR during the adjustment. Notably, when the PSNR reaches its maximum, the optimal selections of $u_i$ $(i = 1, 2, \dots, 5)$ differ; hence, it is effective to adjust these parameters separately.
Figure 4 depicts the effect of variations in $p_i$ on the PSNR during the adjustment. When $p_i$ is too large, the image-gradient sparsity is not enforced strongly enough; when $p_i$ is too small, the image gradients become overly sparse. Thus, $p_i$ should be adjusted until the image recovery is satisfactory. As shown in Figure 4, the values of $p_i$ at which the PSNR reaches its maximum differ under the same experimental conditions. Therefore, it is effective to use separate $p_i$ for the different gradient orders and directions of the image.

5.4. Comparison of the Deblurring Performance

The parameters of the six algorithms are adjusted by the traversal to achieve the best deblurring indicators. The test results for the different images are presented in Table 2 and Table 3, where the values in bold refer to the optimal indicator values. Table 2 presents the deblurred results obtained by the six different models for degraded images with Gaussian blur (G, 5, 5, 7) and Gaussian noise standard deviations of 1, 3, and 5, respectively. Table 3 presents the results obtained by the six different models for degraded images with motion blur (M, 10, 10) and Gaussian standard deviations of 1, 3, and 5, respectively. Figure 5 depicts the changes in the methods in terms of the evaluation metrics with respect to the increase in the iteration number.
The performance of the proposed method, illustrated in Figure 5 and Tables 2 and 3, can be summarized as follows: (1) The performance indicators obtained by the HTV-Lp model are better than those of all the other models, demonstrating that the proposed method has superior deblurring and denoising effects. (2) As observed from Table 2, when restoring the eight degraded images with the Gaussian blur kernel (G, 5, 5, 7) and Gaussian noise standard deviations of 1–5, the HTV-Lp model achieves average PSNR values 0.584 dB higher than the ATV method, 0.536 dB higher than the ITV method, 0.389 dB higher than the ATpV method, 0.575 dB higher than the FTV4Lp method, and 0.100 dB higher than the Hybrid TV method. (3) As observed from Table 3, when restoring the eight degraded images with the motion blur kernel (M, 10, 10) and Gaussian noise standard deviations of 1–5, the HTV-Lp model achieves average PSNR values 0.531 dB higher than the ATV method, 0.391 dB higher than the ITV method, 0.300 dB higher than the ATpV method, 0.589 dB higher than the FTV4Lp method, and 0.175 dB higher than the Hybrid TV method. Hence, we conclude that the high-order gradient sparsity of the image is helpful for image recovery and that the Lp-pseudonorm outperforms the L1-norm.

5.5. Comparison of Visual Effects

The deblurring results for “Store” are depicted in Figure 6. To better exhibit the effects of the six algorithms, part of the “Store” details inside the red rectangles is enlarged. It can be seen that the HTV-Lp model minimized the noise while simultaneously relieving the staircase artifacts in the slanted and smooth regions of the image during deblurring. In addition, the Lp-pseudonorm depicts the sparsity of the processed variables more precisely than the L1-norm. Figure 7 depicts single columns extracted from the original image, the degraded image, and the deblurred images of ATV, ITV, ATpV, FTV4Lp, Hybrid TV, and HTV-Lp. The HTV-Lp curve is found to be the flattest among the compared methods.

5.6. Comparison of Computing Time

Figure 8 compares the computing times of the different deblurring methods. Although our method, HTV-Lp, is slower than the others, its computing time remains within 3.5 s; employing convolution together with the FFT theorem thus effectively avoids excessive computational complexity.

6. Discussion and Conclusions

Deblurring infrared images plays a significant role in an infrared imaging system. To improve deblurring performance, the proposed HTV-Lp model combines the first-order and high-order gradients of images with Lp-pseudonorm shrinkage. The ADMM algorithm is used to split the proposed model into several decoupled sub-problems. In solving these sub-problems, convolution and the FFT theorem are applied to avoid excessive computational complexity. The comparison of HTV-Lp with the existing ATV, ITV, ATpV, FTV4Lp, and Hybrid TV models shows that HTV-Lp achieved the highest average PSNR and SSIM and the lowest GMSD values of all methods and successfully mitigated staircase artifacts.
A limitation of this study is that non-local patch similarity is not fully considered in the proposed model. Additionally, there is room for further improvement with regard to the speed of the HTV-Lp algorithm within the framework of the accelerated ADMM. Thus, in our future work, we will focus on improving the performance and efficiency of the proposed method.

Author Contributions

Conceptualization, J.Y. and Y.C.; Data curation, J.Y.; Formal analysis, J.Y.; Funding acquisition, Y.C.; Investigation, J.Y.; Methodology, J.Y.; Project administration, J.Y.; Resources, J.Y.; Software, J.Y.; Supervision, Y.C. and Z.C.; Validation, J.Y.; Visualization, J.Y.; Writing—original draft, J.Y.; Writing—review and editing, Y.C. and Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Educational and Research Project for Young and Middle-aged Teachers of the Fujian Provincial Department of Education (JAT190378); the Fujian Province Major Teaching Reform Project (FBJG20180015); the Foundation of the President of Minnan Normal University (KJ19019); and the Teaching Reform Project of Minnan Normal University (JG201918).

Acknowledgments

Thanks to X.L. of the University of Electronic Science and Technology of China for sharing the FTV4Lp code.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Wang, X.W.; Shen, T.S.; Zhou, X.D. Simulation of the Sensor in the Infrared Image Generation. Semicond. Optoelectron. 2004, 25, 317–319. [Google Scholar]
  2. Choi, J.-H.; Shin, J.-M.; Kim, J.-H.; Kim, T.-K. Study on Infrared Image Generation for Different Surface Conditions with Different Sensor Resolutions. J. Soc. Nav. Arch. Korea 2010, 47, 342–349. [Google Scholar] [CrossRef] [Green Version]
  3. Norton, P.R. Infrared image sensors. Opt. Eng. 1991, 30, 1649. [Google Scholar] [CrossRef]
  4. Liu, X.; Chen, Y.; Peng, Z.; Wu, J. Infrared Image Super-Resolution Reconstruction Based on Quaternion and High-Order Overlapping Group Sparse Total Variation. Sensors 2019, 19, 5139. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Liu, X.; Chen, Y.; Peng, Z.; Wu, J. Total variation with overlapping group sparsity and Lp quasinorm for infrared image deblurring under salt-and-pepper noise. J. Electron. Imaging 2019, 28, 043031. [Google Scholar] [CrossRef] [Green Version]
  6. Liu, X.; Chen, Y.; Peng, Z.; Wu, J.; Wang, Z. Infrared Image Super-Resolution Reconstruction Based on Quaternion Fractional Order Total Variation with Lp Quasinorm. Appl. Sci. 2018, 8, 1864. [Google Scholar] [CrossRef] [Green Version]
  7. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  8. Hajiaboli, M.R. An Anisotropic Fourth-Order Diffusion Filter for Image Noise Removal. Int. J. Comput. Vis. 2010, 92, 177–191. [Google Scholar] [CrossRef]
  9. Sroisangwan, P.; Chumchob, N. A Higher-Order Variational Model for Image Restoration and Its Medical Applications; Silpakorn University: Bangkok, Thailand, 2020. [Google Scholar]
  10. Oh, S.; Woo, H.; Yun, S.; Kang, M. Non-convex hybrid total variation for image denoising. J. Vis. Commun. Image Represent. 2013, 24, 332–344. [Google Scholar] [CrossRef]
  11. Liu, R.W.; Wu, D.; Wu, C.-S.; Xu, T.; Xiong, N. Constrained Nonconvex Hybrid Variational Model for Edge-Preserving Image Restoration. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015; pp. 1809–1814. [Google Scholar]
  12. Zhu, J.; Li, K.; Hao, B. Image Restoration by Second-Order Total Generalized Variation and Wavelet Frame Regularization. Complexity 2019, 2019, 1–16. [Google Scholar] [CrossRef]
  13. Lanza, A.; Morigi, S.; Selesnick, I.W.; Sgallari, F. Sparsity-Inducing Nonconvex Nonseparable Regularization for Convex Image Processing. SIAM J. Imaging Sci. 2019, 12, 1099–1134. [Google Scholar] [CrossRef]
  14. Anger, J.; Facciolo, G.; Delbracio, M. Blind Image Deblurring using the l0 Gradient Prior. Image Process. Line 2019, 9, 124–142. [Google Scholar] [CrossRef] [Green Version]
  15. Yang, C.; Wang, W.; Feng, X.; Liu, X. Weighted-l1-method-noise regularization for image deblurring. Signal Process. 2019, 157, 14–24. [Google Scholar] [CrossRef]
  16. Selesnick, I.; Chen, P.-Y. Total variation denoising with overlapping group sparsity. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 5696–5700. [Google Scholar]
  17. Liu, G.; Huang, C.; Liu, J.; Lv, X.-G. Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise. PLoS ONE 2015, 10, e0122562. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Liu, J.; Huang, C.; Selesnick, I.; Lv, X.-G.; Chen, P.-Y. Image restoration using total variation with overlapping group sparsity. Inf. Sci. 2015, 295, 232–246. [Google Scholar] [CrossRef] [Green Version]
  19. Liu, J.; Huang, C.; Liu, G.; Wang, S.; Lv, X.-G. Total variation with overlapping group sparsity for speckle noise reduction. Neurocomputing 2016, 216, 502–513. [Google Scholar] [CrossRef]
  20. Shi, M.; Han, T.; Liu, S. Total variation image restoration using hyper-Laplacian prior with overlapping group sparsity. Signal Process. 2016, 126, 65–76. [Google Scholar] [CrossRef]
  21. Wang, L.; Chen, Y.; Lin, F.; Chen, Y.; Yu, F.; Chen, Y. Impulse Noise Denoising Using Total Variation with Overlapping Group Sparsity and Lp-Pseudo-Norm Shrinkage. Appl. Sci. 2018, 8, 2317. [Google Scholar] [CrossRef] [Green Version]
  22. Adam, T.; Paramesran, R. Hybrid non-convex second-order total variation with applications to non-blind image deblurring. Signal Image Video Process. 2019, 14, 115–123. [Google Scholar] [CrossRef]
  23. Adam, T.; Paramesran, R. Image denoising using combined higher order non-convex total variation with overlapping group sparsity. Multidimens. Syst. Signal Process. 2018, 30, 503–527.
  24. Cheng, Z.; Chen, Y.; Wang, L.; Lin, F.; Wang, H.; Chen, Y. Four-Directional Total Variation Denoising Using Fast Fourier Transform and ADMM. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 379–383.
  25. Wang, R.; He, N.; Wang, Y.; Lu, K. Adaptively weighted nonlocal means and TV minimization for speckle reduction in SAR images. Multimed. Tools Appl. 2020, 1–15.
  26. Xu, J.; Qiao, Y.; Fu, Z.; Wen, Q. Image Block Compressive Sensing Reconstruction via Group-Based Sparse Representation and Nonlocal Total Variation. Circuits Syst. Signal Process. 2018, 38, 304–328.
  27. Nasonov, A.; Krylov, A. An Improvement of BM3D Image Denoising and Deblurring Algorithm by Generalized Total Variation. In Proceedings of the 2018 7th European Workshop on Visual Information Processing (EUVIP), Tampere, Finland, 26–28 November 2018; pp. 1–4.
  28. Liu, J.; Zheng, X. A Block Nonlocal TV Method for Image Restoration. SIAM J. Imaging Sci. 2017, 10, 920–941.
  29. Woodworth, J.; Chartrand, R. Compressed sensing recovery via nonconvex shrinkage penalties. Inverse Probl. 2016, 32, 075004.
  30. Chen, Y.; Peng, Z.; Gholami, A.; Yan, J.; Li, S. Seismic signal sparse time–frequency representation by Lp-quasinorm constraint. Digit. Signal Process. 2019, 87, 43–59.
  31. Li, S.; He, Y.; Chen, Y.; Liu, W.; Yang, X.; Peng, Z. Fast multi-trace impedance inversion using anisotropic total p-variation regularization in the frequency domain. J. Geophys. Eng. 2018, 15, 2171–2182.
  32. Zhao, C.; Wang, Y.; Jiao, H.; Yin, J.; Li, X. Lp-Norm-Based Sparse Regularization Model for License Plate Deblurring. IEEE Access 2020, 8, 22072–22081.
  33. Chen, Y.; Peng, Z.; Li, M.; Yu, F.; Lin, F. Seismic signal denoising using total generalized variation with overlapping group sparsity in the accelerated ADMM framework. J. Geophys. Eng. 2019, 16, 30–51.
  34. Vishnevskiy, V.; Tanner, C.; Goksel, O.; Gass, T.; Székely, G. Isotropic Total Variation Regularization of Displacements in Parametric Image Registration. IEEE Trans. Med. Imaging 2016, 36, 385–395.
  35. Ichigaya, A.; Kurozumi, M.; Hara, N.; Nishida, Y.; Nakasu, E. A method of estimating coding PSNR using quantized DCT coefficients. IEEE Trans. Circuits Syst. Video Technol. 2006, 16, 251–259.
  36. Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
  37. Xue, W.; Zhang, K.; Mou, X.; Bovik, A.C. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index. IEEE Trans. Image Process. 2013, 23, 684–695.
  38. Chen, Y.; Wu, L.; Peng, Z.; Liu, X. Fast overlapping group sparsity total variation image denoising based on fast fourier transform and split bregman iterations. In Proceedings of the 2017 7th International Workshop on Computer Science and Engineering (WCSE 2017), Beijing, China, 25–27 June 2017.
  39. Wu, H.; Li, S.; Chen, Y.; Peng, Z. Seismic impedance inversion using second-order overlapping group sparsity with A-ADMM. J. Geophys. Eng. 2019, 17, 97–116.
  40. Lin, F.; Chen, Y.; Wang, L.; Chen, Y.; Zhu, W.; Yu, F. An Efficient Image Reconstruction Framework Using Total Variation Regularization with Lp-Quasinorm and Group Gradient Sparsity. Information 2019, 10, 115.
Figure 1. Contour maps of different norms. (a) L1-norm; (b) L2-norm; (c) Lp-pseudonorm (0 < p < 1).
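The contour maps above show why the Lp-pseudonorm (0 < p < 1) promotes sparsity more aggressively than the L1-norm. As an illustration only, the following sketch implements a generalized p-shrinkage operator in the style of Woodworth and Chartrand [29]; it is not necessarily the exact operator derived in this paper, and the names `lp_shrink`, `tau`, and `p` are ours:

```python
import numpy as np

def lp_shrink(x, tau, p):
    """Elementwise generalized p-shrinkage (a sketch).

    For p = 1 this reduces to the familiar soft-thresholding operator;
    for 0 < p < 1 large entries are shrunk less, mimicking the weaker
    penalty that the Lp-pseudonorm places on large gradients."""
    x = np.asarray(x, dtype=float)
    mag = np.abs(x)
    out = np.zeros_like(x)
    nz = mag > 0  # guard: mag**(p-1) is undefined at zero for p < 1
    out[nz] = np.sign(x[nz]) * np.maximum(
        mag[nz] - tau ** (2.0 - p) * mag[nz] ** (p - 1.0), 0.0)
    return out
```

With tau = 1 and p = 1, an entry of 3.0 is shrunk to 2.0 (ordinary soft thresholding), whereas with p = 0.5 the same entry is shrunk by only about 0.58, illustrating the reduced bias on large coefficients.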
Figure 2. Test images. (a) Building.BMP; (b) Car.BMP; (c) Figure.BMP; (d) Man.BMP; (e) People.BMP; (f) Store.BMP; (g) Street.BMP; and (h) Woman.BMP.
Figure 3. Effect of parameter variations on the peak signal-to-noise ratio (PSNR) for the “Car” image degraded by the motion blur kernel (M, 10, 10) with Gaussian noise standard deviations of 1, 3, and 5, respectively. (a) u1; (b) u2; (c) u3; (d) u4; (e) u5.
Figure 4. Effect of parameter variations on the PSNR for the “Car” image degraded by the motion blur kernel (M, 10, 10) with Gaussian noise standard deviations of 1, 3, and 5, respectively. (a) p1; (b) p2; (c) p3; (d) p4; (e) p5.
Figure 5. Dynamic iteration curves under different evaluation metrics for the degraded “Car” image with the motion blur kernel (M, 10, 10) and a Gaussian noise standard deviation of 3: (a) PSNR; (b) structural similarity (SSIM); (c) gradient magnitude similarity deviation (GMSD).
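The PSNR curves in Figures 3–5 use the standard peak signal-to-noise ratio [35,36]. A minimal sketch for images with a known peak intensity (255 for 8-bit grayscale); the helper name `psnr` is ours:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image, computed from the mean squared error (MSE)."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means the restored image is closer to the original; SSIM [36] and GMSD [37] complement it by measuring structural and gradient-magnitude fidelity, respectively (for GMSD, lower is better).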
Figure 6. Comparison of the visual effects. (a) is the original “Store” image; (b) is the degraded image with Gaussian blur (G, 5, 5, 7) and a Gaussian noise standard deviation of 5; (c,d,i–l) are the deblurring results of the ATV, ITV, ATpV, FTV4Lp, Hybrid TV, and HTV-Lp methods, respectively; (e–h,m–p) show the magnified details from the rectangles in (a–d,i–l), respectively.
Figure 7. Comparison of drawing charts. (a–h) are the drawing charts of the original “Store” image, the degraded “Store” image with Gaussian blur (G, 5, 5, 7) and a Gaussian noise standard deviation of 5, and the deblurring results of the ATV, ITV, ATpV, FTV4Lp, Hybrid TV, and HTV-Lp methods, respectively.
Figure 8. Comparison of computing times of different deblurring methods. (a) Degraded “Building” image with Gaussian blur (G, 5, 5, 7); (b) degraded “Building” image with motion blur (M, 10, 10).
Table 1. The pseudo-code of the proposed method.
HTV-Lp Pseudo-Code
Input: observed image G and blur kernel H;
Output: original image F;
Initialize: Z̃_i^(0), Z_i^(0), F^(0), μ_i, β_i, tol, E, k (i = 1, 2, …, 5).
1: While E > tol do;
2: Use (13) to update F^(k+1);
3: Use (15)–(16) to update Z_i^(k+1) (i = 1, 2, …, 5);
4: Use (18) to update Z̃_i^(k+1) (i = 1, 2, …, 5);
5: E = ‖F^(k+1) − F^(k)‖_2 / ‖F^(k)‖_2;
6: k = k + 1;
7: End While
8: Return F^(k) as F.
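The outer loop in Table 1 can be sketched as follows. Only the relative-change stopping rule E = ‖F^(k+1) − F^(k)‖_2 / ‖F^(k)‖_2 comes from the pseudocode; the callable `update_F` is a hypothetical stand-in for one full pass of the subproblem updates (13), (15)–(16), and (18):

```python
import numpy as np

def run_iterations(update_F, F0, tol=1e-4, max_iter=500):
    """Generic ADMM-style outer loop (a sketch of Table 1).

    update_F : callable performing one full round of subproblem updates
               (stand-in for steps 2-4 of the pseudocode).
    Stops when the relative change E of the estimate drops below tol."""
    F = np.asarray(F0, dtype=float)
    for k in range(1, max_iter + 1):
        F_new = update_F(F)
        # Step 5: relative change, guarded against a zero denominator
        E = np.linalg.norm(F_new - F) / max(np.linalg.norm(F), 1e-12)
        F = F_new
        if E < tol:
            break
    return F, k
```

For example, with the toy contraction `lambda F: 0.5 * F + 1.0` (fixed point 2.0) the loop converges to the fixed point in a few dozen iterations, illustrating how the tolerance controls when the ADMM updates terminate.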
Table 2. Evaluation metrics of the six algorithms for various images degraded by motion blur (M, 10, 10) and Gaussian noise with a standard deviation (SD) of 1, 3, or 5.
Images | SD | ATV [7] | ITV [34] | ATpV [31] | FTV4Lp [6] | Hybrid TV [10] | HTV-Lp
 | | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD
Building | 1 | 32.216/0.859/0.054 | 32.355/0.864/0.052 | 33.494/0.893/0.039 | 32.762/0.878/0.042 | 34.203/0.911/0.032 | 34.235/0.911/0.032
 | 3 | 30.230/0.785/0.072 | 30.184/0.787/0.081 | 30.854/0.815/0.068 | 30.143/0.786/0.074 | 30.722/0.812/0.067 | 30.886/0.818/0.067
 | 5 | 29.372/0.750/0.097 | 29.310/0.748/0.100 | 29.847/0.776/0.081 | 29.473/0.757/0.085 | 30.060/0.786/0.079 | 30.179/0.791/0.077
Figure | 1 | 36.566/0.922/0.027 | 36.760/0.925/0.026 | 36.791/0.925/0.026 | 36.483/0.921/0.027 | 36.735/0.924/0.027 | 36.991/0.927/0.022
 | 3 | 34.183/0.887/0.049 | 34.190/0.893/0.049 | 34.421/0.891/0.047 | 33.989/0.897/0.040 | 34.517/0.896/0.049 | 34.553/0.895/0.049
 | 5 | 33.016/0.871/0.061 | 33.000/0.876/0.061 | 33.297/0.876/0.055 | 32.803/0.878/0.055 | 33.477/0.880/0.061 | 33.492/0.879/0.061
Street | 1 | 35.690/0.920/0.027 | 35.883/0.923/0.026 | 36.041/0.919/0.021 | 35.476/0.916/0.027 | 35.697/0.920/0.027 | 36.023/0.915/0.020
 | 3 | 32.807/0.865/0.063 | 33.316/0.876/0.059 | 33.219/0.877/0.059 | 33.088/0.878/0.055 | 33.386/0.881/0.050 | 33.501/0.879/0.050
 | 5 | 31.223/0.830/0.087 | 31.743/0.840/0.079 | 32.220/0.845/0.073 | 32.051/0.849/0.072 | 32.508/0.858/0.066 | 32.551/0.859/0.065
Woman | 1 | 34.703/0.883/0.046 | 34.632/0.886/0.044 | 34.883/0.886/0.039 | 33.271/0.872/0.049 | 34.773/0.885/0.046 | 35.128/0.887/0.036
 | 3 | 32.607/0.837/0.072 | 32.523/0.839/0.075 | 32.700/0.847/0.067 | 32.016/0.831/0.082 | 33.101/0.853/0.065 | 33.161/0.852/0.063
 | 5 | 31.485/0.816/0.088 | 30.977/0.815/0.092 | 32.058/0.830/0.081 | 30.782/0.814/0.093 | 32.110/0.832/0.084 | 32.177/0.832/0.083
Man | 1 | 37.009/0.927/0.023 | 37.070/0.927/0.023 | 37.374/0.929/0.021 | 37.110/0.925/0.025 | 37.130/0.930/0.023 | 37.746/0.933/0.017
 | 3 | 34.547/0.885/0.048 | 34.595/0.890/0.049 | 34.798/0.899/0.044 | 34.581/0.891/0.052 | 34.832/0.899/0.044 | 35.066/0.899/0.043
 | 5 | 33.412/0.874/0.056 | 33.514/0.875/0.056 | 33.666/0.883/0.049 | 33.546/0.880/0.058 | 33.802/0.886/0.045 | 33.828/0.885/0.045
People | 1 | 34.135/0.874/0.034 | 34.275/0.877/0.033 | 34.398/0.878/0.029 | 34.177/0.874/0.032 | 34.402/0.881/0.031 | 34.726/0.884/0.028
 | 3 | 31.939/0.826/0.060 | 31.976/0.828/0.061 | 32.181/0.837/0.052 | 31.737/0.824/0.061 | 32.191/0.837/0.052 | 32.413/0.840/0.050
 | 5 | 31.004/0.797/0.077 | 31.047/0.798/0.077 | 31.310/0.806/0.068 | 30.847/0.792/0.077 | 31.332/0.808/0.068 | 31.342/0.808/0.068
Store | 1 | 32.513/0.938/0.023 | 32.698/0.941/0.023 | 32.620/0.941/0.023 | 32.891/0.934/0.023 | 32.856/0.934/0.027 | 33.926/0.949/0.020
 | 3 | 29.599/0.887/0.047 | 29.598/0.895/0.045 | 29.821/0.898/0.044 | 29.457/0.884/0.048 | 30.090/0.902/0.043 | 30.599/0.909/0.043
 | 5 | 27.841/0.874/0.062 | 27.955/0.875/0.060 | 27.916/0.873/0.060 | 27.953/0.835/0.069 | 28.804/0.877/0.058 | 28.914/0.860/0.064
Car | 1 | 36.667/0.942/0.017 | 36.680/0.943/0.018 | 37.052/0.943/0.014 | 36.720/0.941/0.017 | 36.824/0.944/0.016 | 37.254/0.944/0.014
 | 3 | 33.778/0.909/0.035 | 33.731/0.915/0.036 | 33.869/0.912/0.033 | 33.594/0.913/0.038 | 33.884/0.913/0.034 | 33.971/0.916/0.033
 | 5 | 32.571/0.889/0.050 | 32.564/0.897/0.051 | 32.605/0.888/0.048 | 32.549/0.897/0.049 | 32.621/0.892/0.050 | 32.824/0.905/0.044
Average | 1 | 34.937/0.908/0.0313 | 35.044/0.911/0.0306 | 35.332/0.914/0.0265 | 34.861/0.908/0.0303 | 35.328/0.916/0.0286 | 35.753/0.918/0.0236
 | 3 | 32.461/0.860/0.0558 | 32.514/0.865/0.0569 | 32.733/0.872/0.0518 | 32.326/0.863/0.0563 | 32.840/0.874/0.0505 | 33.019/0.876/0.0498
 | 5 | 31.241/0.838/0.0723 | 31.264/0.841/0.0720 | 31.615/0.847/0.0644 | 31.251/0.837/0.0698 | 31.839/0.852/0.0639 | 31.913/0.852/0.0634
Table 3. Evaluation metrics of the six algorithms for various images degraded by Gaussian blur (G, 5, 5, 7) and Gaussian noise with a standard deviation (SD) of 1, 3, or 5.
Images | SD | ATV [7] | ITV [34] | ATpV [31] | FTV4Lp [6] | Hybrid TV [10] | HTV-Lp
 | | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD | PSNR/SSIM/GMSD
Building | 1 | 34.956/0.918/0.024 | 34.998/0.919/0.024 | 35.619/0.925/0.017 | 35.115/0.920/0.022 | 36.150/0.936/0.015 | 36.186/0.936/0.014
 | 3 | 32.010/0.850/0.040 | 31.861/0.847/0.041 | 32.376/0.860/0.037 | 31.812/0.846/0.041 | 32.372/0.855/0.036 | 32.716/0.871/0.036
 | 5 | 30.600/0.804/0.050 | 30.621/0.805/0.050 | 31.179/0.825/0.051 | 30.153/0.786/0.056 | 31.245/0.826/0.052 | 31.245/0.825/0.052
Figure | 1 | 37.619/0.935/0.008 | 38.020/0.939/0.007 | 37.813/0.936/0.008 | 37.803/0.935/0.008 | 37.857/0.937/0.008 | 37.925/0.936/0.008
 | 3 | 35.217/0.905/0.023 | 35.390/0.910/0.024 | 34.711/0.907/0.028 | 35.197/0.909/0.024 | 35.393/0.912/0.024 | 35.513/0.911/0.023
 | 5 | 34.118/0.888/0.033 | 34.290/0.896/0.034 | 34.251/0.893/0.033 | 34.163/0.895/0.033 | 34.528/0.899/0.032 | 34.542/0.899/0.032
Street | 1 | 36.317/0.925/0.011 | 36.563/0.928/0.011 | 36.450/0.925/0.011 | 36.255/0.924/0.012 | 36.701/0.930/0.010 | 37.027/0.932/0.009
 | 3 | 33.863/0.883/0.028 | 33.531/0.886/0.030 | 33.854/0.882/0.028 | 33.783/0.884/0.030 | 33.888/0.884/0.028 | 33.888/0.884/0.028
 | 5 | 32.666/0.863/0.041 | 33.037/0.864/0.040 | 32.888/0.860/0.040 | 32.832/0.863/0.042 | 33.132/0.867/0.039 | 33.132/0.867/0.039
Woman | 1 | 36.622/0.921/0.015 | 36.592/0.922/0.013 | 36.724/0.921/0.014 | 35.809/0.919/0.012 | 36.830/0.924/0.011 | 36.737/0.923/0.011
 | 3 | 33.302/0.864/0.048 | 34.021/0.866/0.047 | 33.505/0.852/0.062 | 33.649/0.865/0.044 | 34.252/0.877/0.036 | 34.269/0.876/0.035
 | 5 | 33.104/0.843/0.064 | 33.095/0.844/0.067 | 33.122/0.852/0.052 | 32.685/0.842/0.067 | 33.137/0.854/0.050 | 33.295/0.859/0.054
Man | 1 | 37.562/0.934/0.009 | 37.771/0.935/0.008 | 37.839/0.936/0.008 | 37.512/0.934/0.009 | 37.689/0.937/0.008 | 38.051/0.939/0.008
 | 3 | 35.459/0.907/0.020 | 35.594/0.906/0.020 | 35.576/0.912/0.019 | 35.222/0.907/0.023 | 35.381/0.911/0.021 | 35.829/0.913/0.019
 | 5 | 34.213/0.888/0.033 | 34.301/0.890/0.034 | 34.194/0.891/0.032 | 33.974/0.883/0.041 | 34.462/0.900/0.030 | 34.462/0.900/0.030
People | 1 | 35.075/0.897/0.016 | 35.340/0.902/0.015 | 35.387/0.902/0.013 | 35.191/0.899/0.014 | 35.138/0.901/0.016 | 35.820/0.910/0.013
 | 3 | 32.627/0.849/0.034 | 32.946/0.853/0.033 | 32.977/0.854/0.031 | 32.638/0.847/0.034 | 32.991/0.854/0.031 | 33.005/0.855/0.030
 | 5 | 31.558/0.828/0.055 | 31.961/0.830/0.051 | 31.947/0.832/0.050 | 31.646/0.820/0.052 | 31.949/0.833/0.050 | 32.125/0.837/0.046
Store | 1 | 32.902/0.933/0.015 | 33.146/0.938/0.014 | 33.507/0.938/0.013 | 32.632/0.929/0.016 | 32.904/0.933/0.015 | 33.727/0.932/0.012
 | 3 | 30.123/0.882/0.033 | 30.049/0.874/0.034 | 30.171/0.874/0.034 | 30.229/0.899/0.032 | 30.181/0.884/0.033 | 30.849/0.899/0.030
 | 5 | 28.688/0.853/0.048 | 28.705/0.859/0.047 | 28.673/0.850/0.049 | 28.705/0.857/0.055 | 28.991/0.862/0.048 | 29.187/0.865/0.046
Car | 1 | 36.310/0.938/0.010 | 36.484/0.939/0.010 | 36.947/0.939/0.009 | 36.687/0.934/0.009 | 36.397/0.941/0.010 | 37.321/0.943/0.008
 | 3 | 33.171/0.888/0.024 | 33.261/0.904/0.023 | 33.702/0.909/0.021 | 32.975/0.907/0.026 | 33.360/0.892/0.027 | 34.082/0.914/0.020
 | 5 | 31.755/0.879/0.038 | 31.841/0.884/0.037 | 32.231/0.881/0.036 | 31.643/0.885/0.039 | 32.174/0.890/0.037 | 32.465/0.890/0.036
Average | 1 | 35.920/0.925/0.0135 | 36.114/0.928/0.0128 | 36.286/0.928/0.01168 | 35.876/0.925/0.0128 | 36.208/0.930/0.01168 | 36.599/0.931/0.0104
 | 3 | 33.222/0.879/0.03125 | 33.332/0.881/0.0315 | 33.359/0.881/0.0325 | 33.188/0.883/0.0318 | 33.477/0.884/0.0295 | 33.769/0.890/0.0276
 | 5 | 32.088/0.856/0.0453 | 32.231/0.859/0.0450 | 32.311/0.861/0.0429 | 31.975/0.854/0.0481 | 32.452/0.866/0.0423 | 32.557/0.868/0.0419

Yang, J.; Chen, Y.; Chen, Z. Infrared Image Deblurring via High-Order Total Variation and Lp-Pseudonorm Shrinkage. Appl. Sci. 2020, 10, 2533. https://doi.org/10.3390/app10072533
