Article

Infrared Image Super-Resolution Reconstruction Based on Quaternion and High-Order Overlapping Group Sparse Total Variation

1 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 610054, China
3 Chongqing College of Electronic Engineering, Chongqing 401331, China
4 School of Physics and Information Engineering, Minnan Normal University, Zhangzhou 363000, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(23), 5139; https://doi.org/10.3390/s19235139
Submission received: 26 October 2019 / Revised: 20 November 2019 / Accepted: 21 November 2019 / Published: 23 November 2019
(This article belongs to the Section Intelligent Sensors)

Abstract: Owing to the limitations of imaging principles and system imaging characteristics, infrared images generally have some shortcomings, such as low resolution, insufficient details, and blurred edges. Therefore, it is of practical significance to improve the quality of infrared images. To make full use of the information of adjacent points, preserve the image structure, and avoid staircase artifacts, this paper proposes a super-resolution reconstruction method for infrared images based on quaternion total variation and high-order overlapping group sparsity. The method uses quaternion total variation to exploit the correlation between adjacent points, improving noise robustness and reconstruction quality, and uses the sparsity of higher-order gradients to reconstruct a clear image structure and restore smooth intensity changes. In addition, we employed the regularization-by-denoising framework, the alternating direction method of multipliers, and fast Fourier transform theory to improve the efficiency and robustness of our method. Our experimental results show that this method has excellent performance in both objective evaluation and subjective visual effects.

1. Introduction

Image super-resolution reconstruction (SRR) uses digital signal processing to generate high-resolution (HR) images from single or multiple frames of low-resolution (LR) images. It can efficiently exploit the potential value of existing image data and has applications such as military remote sensing reconnaissance [1], target tracking and monitoring [2,3,4], target location and recognition [5], astronomical observation [6], and medical imaging [7].
Super-resolution reconstruction methods fall into three types: methods based on regular-term representation, learning-based methods, and methods based on partial differential equations. Learning-based image super-resolution reconstruction has been studied extensively in recent years. For example, based on the convolutional neural network (CNN), Lim proposed an enhanced deep super-resolution network (EDSR) by removing unnecessary modules [8]. Dong redesigned the super-resolution CNN (SRCNN) structure by introducing a deconvolution layer at the end of the network, reformulating the mapping layer, and adopting smaller filter sizes [9]. Xu proposed a novel global dense feature fusion convolutional network (DFFNet), which takes full advantage of global intermediate features through a continuous global information memory mechanism [10]. To restore image details at various scales, Du enhanced the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters [11]. Chi proposed a uniform deep CNN (DCNN) framework to handle the denoising and super-resolution of CT images at the same time [12]. Zhang made a comparative study of fast super-resolution CNN (FSRCNN), deeply recursive convolutional networks (DRCN), very deep super-resolution convolutional networks (VDSR), and SRCNN for single-image super-resolution in space applications, and concluded that DRCN is the model that generalizes best for space object images [13]. Xiao formulated a joint loss function by combining the output and high-dimensional features of a non-linear mapping network, which uses satellite video data itself as a training set [14]. For infrared images, Liu proposed a classified dictionary learning method, which classifies features of the samples into several reasonable clusters and trains a dictionary pair for each cluster [15].
He proposed a cascaded architecture of deep neural networks with multiple receptive fields for a large scale factor (×8) [16]. These methods learn the mapping between HR and LR images from pre-selected training samples and accordingly reconstruct HR images. They can achieve good reconstruction results; however, their computational complexity is high.
Image reconstruction methods based on partial differential equation models achieve good results. The most popular of these methods are those based on the total variation (TV) regularization model [17]. This approach preserves the edges of the image well while removing image noise. However, the reconstructed image suffers from staircase artifacts and unclear textures. To reduce the staircase artifacts, some scholars have proposed high-order variational models [18,19]. For example, Bredies, Kunisch, and Pock proposed total generalized variation (TGV), which combines TV regularization with higher-order derivatives [20]. Although these methods can reduce staircase artifacts and protect the edges of the image, they produce a "spot effect" in the processed image. To balance staircase artifacts and the spot effect, a fractional-order variational model, which uses a fractional gradient instead of an integer gradient, has been proposed [21,22,23]. We have also proposed a super-resolution method that combines quaternions [24,25] with fractional-order total variation and uses the ADMM acceleration algorithm, achieving good results in objective evaluation, visual effect, and running time [26].
The regular-term representation is an image representation model that captures the main information and intrinsic geometry of the image with a few parameters and achieves good results in image restoration, target tracking, and other applications. Since Yang et al. first applied sparse representation to super-resolution reconstruction [27,28], many scholars have proposed improved super-resolution reconstruction methods based on sparse representation [29,30,31,32,33,34,35]. In recent years, Selesnick and Chen proposed overlapping group sparse total variation (OGSTV) [36], a non-separable regular term that preserves the sparsity of the objective function [37]. The overlapping group sparse regularization term considers the sparsity of the image difference domain and also mines the neighborhood difference information of each point, thus capturing the structural sparsity characteristics of the image gradient. By overlapping the combined gradients, the contrast between smooth regions and boundary regions is improved, thereby suppressing the staircase artifacts of the TV model. Building on the work of Selesnick and Chen, Liu et al. generalized the one-dimensional overlapping sparse regularization term to two dimensions and introduced it into an anisotropic total variational model for denoising and deconvolution [38,39,40]. Using the Lp quasinorm instead of the L1 norm, we have also proposed a method for infrared image deblurring with overlapping group sparse total variation, in which the Lp quasinorm introduces another degree of freedom, better describes image sparsity characteristics, and improves image restoration [41].
In addition, there are other types of image reconstruction models. Wang proposed an image self-embedding method, using an authentication watermark and a recovery watermark to complete image restoration: the authentication watermark locates the tampered area, while the recovery watermark is compressed into different categories and encoded into variable lengths to improve the quality of the recovered images [42]. Xia proposed a new fast and accurate image matching algorithm, which first presents a district-identification method to obtain the integer-pixel matching result and then introduces a gradient algorithm to match the sub-pixel position [43]. Wang proposed an image authentication and recovery algorithm based on chaos and Hamming codes, which can effectively detect image tampering and complete image recovery [44]. Wang also proposed an image tampering detection and recovery algorithm based on jitter and chaos technology; the algorithm uses chaos technology to complete watermark embedding and encryption and, combined with the Chinese remainder theorem, further reduces the impact of watermark embedding on image quality [45].
For noisy images, the conventional super-resolution approach is to denoise the images as a pre-processing step and then super-resolve the denoised images. In some newer methods [46,47,48,49], such as the median filter transform (MFT) with parallelogram-shaped windows [47], denoising and super-resolution are integrated, providing better results than the conventional approach.
Super-resolution models based on regular terms can be solved by the alternating direction method of multipliers (ADMM) algorithm [50]. In recent years, many scholars have proposed various algorithms based on the classic ADMM, such as plug-and-play (PnP) ADMM [51,52,53,54,55] and the regularization-by-denoising (RED) framework [56,57,58,59]. These are powerful image-recovery frameworks that minimize an explicit regularization objective constructed from a plug-in image-denoising function. Since their introduction, they have demonstrated extremely promising results in image restoration and signal recovery problems [60,61,62].
In this study, we explore quaternion total variation and high-order gradients to improve the sparsity exploitation of OGSTV. Our proposed method, called quaternion and high-order overlapping group sparse (HOGS4), is efficiently solved through the RED framework. The novelty of our work is two-fold. First, the HOGS4 method is considerably less restrictive than the OGSTV method for infrared image reconstruction: it performs well in detail preservation by incorporating high-order image derivatives, and it achieves an accurate measurement of the sparsity potential from prior regularity. Second, it provides fast and efficient closed-form solutions for computationally complex sub-minimization problems using the FFT.
The remainder of this paper is organized as follows. Section 2 briefly introduces the majorization–minimization (MM) method and RED framework. Section 3 describes the proposed method. In Section 4, our experiments and results are described. Finally, Section 5 and Section 6 present the discussion and conclusions, respectively.

2. Related Works

2.1. Overlapping Group Sparse Total Variation

The overlapping group sparse total variation (OGSTV) model [36] is as follows:
$$R_{\mathrm{OGSTV}}(\mathbf{F}) = \varphi(K_1 * \mathbf{F}) + \varphi(K_2 * \mathbf{F}), \qquad (1)$$

where the symbol $*$ is the convolution operator; $\mathbf{F} \in \mathbb{R}^{N \times N}$ is the reconstructed image; $K_1 = [1, -1]$ and $K_2 = [1, -1]^T$ are the horizontal and vertical differential convolution kernels, respectively. $\varphi(\mathbf{V}) = \sum_{i=1}^{N} \sum_{j=1}^{N} \| \tilde{\mathbf{V}}_{i,j,K,K} \|_2$ computes the combined gradient, where $\tilde{\mathbf{V}}_{i,j,K,K}$ is defined as

$$\tilde{\mathbf{V}}_{i,j,K,K} = \begin{bmatrix} V_{i-K_l,\,j-K_l} & V_{i-K_l,\,j-K_l+1} & \cdots & V_{i-K_l,\,j+K_r} \\ V_{i-K_l+1,\,j-K_l} & V_{i-K_l+1,\,j-K_l+1} & \cdots & V_{i-K_l+1,\,j+K_r} \\ \vdots & \vdots & \ddots & \vdots \\ V_{i+K_r,\,j-K_l} & V_{i+K_r,\,j-K_l+1} & \cdots & V_{i+K_r,\,j+K_r} \end{bmatrix}, \qquad (2)$$

where $K$ is the group size, $K_l = \lfloor \frac{K-1}{2} \rfloor$, $K_r = \lfloor \frac{K}{2} \rfloor$, and $\lfloor x \rfloor$ is the largest integer less than or equal to $x$.
From Equation (2), it can be seen that the combined gradient considers the gradient information of neighboring pixels, which is recombined through the L2 norm, thereby improving the contrast between the smooth region and the edge region of the image [39].
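As a concrete check of Equations (1) and (2), the functional $\varphi$ can be written in a few lines. The following Python sketch is our illustration (the paper's experiments used MATLAB; NumPy and zero-padding at the borders are our assumptions): it builds every $K \times K$ group with the offsets $K_l$, $K_r$ and sums their L2 norms.

```python
import numpy as np

def phi(V, K=3):
    """Overlapping group sparse functional, Eqs. (1)-(2): the sum of the
    L2 norms of all K-by-K groups placed on each pixel with the
    asymmetric offsets K_l, K_r. Out-of-range entries are zero-padded."""
    Kl, Kr = (K - 1) // 2, K // 2          # K_l = floor((K-1)/2), K_r = floor(K/2)
    P = np.pad(V, ((Kl, Kr), (Kl, Kr)))    # zero-pad so every group is defined
    total = 0.0
    for i in range(V.shape[0]):
        for j in range(V.shape[1]):
            group = P[i:i + K, j:j + K]    # the group V_tilde_{i,j,K,K}
            total += np.linalg.norm(group)
    return total
```

For $K = 1$ the groups are single pixels and $\varphi$ reduces to the usual anisotropic L1 penalty on the input, which is a quick sanity check of the construction.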
The overlapping group sparse model can be solved using the MM method [63]:
$$P(\mathbf{V}) = \mathrm{prox}_{\gamma \varphi}(\mathbf{V}_0) = \arg\min_{\mathbf{V}} \frac{1}{2} \| \mathbf{V} - \mathbf{V}_0 \|_2^2 + \gamma \varphi(\mathbf{V}), \qquad (3)$$

where $\varphi(\mathbf{V})$ is the overlapping group sparse regular term, and $\tilde{\mathbf{V}}_{i,j,K,K}$ is an overlapping group sparse matrix of size $K \times K$.
According to the MM method, to minimize $P(\mathbf{V})$, we need to find a function $Q(\mathbf{V}, \mathbf{U})$ such that $Q(\mathbf{V}, \mathbf{U}) \geq P(\mathbf{V})$ for all $\mathbf{V}$ and $\mathbf{U}$, with equality holding if and only if $\mathbf{V} = \mathbf{U}$. Accordingly, minimizing $Q(\mathbf{V}, \mathbf{U})$ at each iteration decreases $P(\mathbf{V})$, and Equation (3) can be written as

$$\mathbf{V}^{k+1} = \arg\min_{\mathbf{V}} Q(\mathbf{V}, \mathbf{V}^{k}). \qquad (4)$$
Consider the following inequality:

$$\frac{1}{2} \left( \frac{1}{\|\mathbf{U}\|_2} \|\mathbf{V}\|_2^2 + \|\mathbf{U}\|_2 \right) \geq \|\mathbf{V}\|_2, \qquad (5)$$

where equality holds only when $\mathbf{U} = \mathbf{V}$.
From Equations (3) and (5), we can obtain a majorizer of $\varphi(\mathbf{V}) = \sum_{i=1}^{N} \sum_{j=1}^{N} \| \tilde{\mathbf{V}}_{i,j,K,K} \|_2$ as shown below:

$$S(\mathbf{V}, \mathbf{U}) = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( \frac{1}{\|\tilde{\mathbf{U}}_{i,j,K,K}\|_2} \|\tilde{\mathbf{V}}_{i,j,K,K}\|_2^2 + \|\tilde{\mathbf{U}}_{i,j,K,K}\|_2 \right) \geq \varphi(\mathbf{V}). \qquad (6)$$
Equation (6) can be written as:

$$S(\mathbf{V}, \mathbf{U}) = \frac{1}{2} \| D(\mathbf{U}) \mathbf{v} \|_2^2 + C(\mathbf{U}), \qquad (7)$$

where $\mathbf{v}$ is the vector form of the matrix $\mathbf{V}$; $C(\mathbf{U})$ is independent of $\mathbf{V}$ and can be regarded as a constant with respect to $\mathbf{V}$; and $D(\mathbf{U})$ is a diagonal matrix whose diagonal elements are defined as follows:

$$[D(\mathbf{U})]_{m,m} = \left\{ \sum_{i=-K_l}^{K_r} \sum_{j=-K_l}^{K_r} \left[ \sum_{k_1=-K_l}^{K_r} \sum_{k_2=-K_l}^{K_r} \left| U_{m-i+k_1,\, m-j+k_2} \right|^2 \right]^{-\frac{1}{2}} \right\}^{\frac{1}{2}}. \qquad (8)$$
By combining Equations (4) and (6), Equation (3) can be transformed into the following iterative optimization:

$$\mathbf{V}^{k+1} = \arg\min_{\mathbf{V}} \frac{1}{2} \| \mathbf{V} - \mathbf{V}_0 \|_2^2 + \gamma S(\mathbf{V}, \mathbf{V}^{k}) = \arg\min_{\mathbf{V}} \frac{1}{2} \| \mathbf{V} - \mathbf{V}_0 \|_2^2 + \gamma \left( \frac{1}{2} \| D(\mathbf{V}^{k}) \mathbf{v} \|_2^2 + C(\mathbf{V}^{k}) \right). \qquad (9)$$
Its iterative optimal solution is as follows:

$$\mathbf{V}^{k+1} = \mathrm{mat}\left[ \left( I + \gamma D^2(\mathbf{V}^{k}) \right)^{-1} \mathbf{v}_0 \right], \qquad (10)$$

where $I$ is the identity matrix, $\mathbf{v}_0$ is the vector form of $\mathbf{V}_0$, and $\mathrm{mat}[\cdot]$ denotes the vector-to-matrix operator.
Therefore, we obtain Algorithm 1 to solve Equation (3).
Algorithm 1 MM method
Initialize: $\mathbf{V} = \mathbf{V}_0$, $\gamma$, group size $K \geq 2$, $K_l = \lfloor \frac{K-1}{2} \rfloor$, $K_r = \lfloor \frac{K}{2} \rfloor$, $\varepsilon$, maximum inner iterations NIt, $n = 0$
While $\frac{\| \mathbf{V}^{n+1} - \mathbf{V}^{n} \|_2}{\| \mathbf{V}^{n} \|_2} > \varepsilon$ and $n < \mathrm{NIt}$ do
  • compute $[D^2(\mathbf{V}^{n})]_{m,m} = \sum_{i=-K_l}^{K_r} \sum_{j=-K_l}^{K_r} \left[ \sum_{k_1=-K_l}^{K_r} \sum_{k_2=-K_l}^{K_r} \left| V_{m-i+k_1,\, m-j+k_2} \right|^2 \right]^{-\frac{1}{2}}$
  • compute $\mathbf{V}^{n+1} = \mathrm{mat}\left[ \left( I + \gamma D^2(\mathbf{V}^{n}) \right)^{-1} \mathbf{v}_0 \right]$
  • compute $n = n + 1$

End While
Return $\mathbf{V}^{n}$
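Algorithm 1 maps to code almost line by line. The sketch below is a pure-NumPy illustration (the paper's code was in MATLAB): because $D^2$ is diagonal, the matrix inverse in Equation (10) reduces to an element-wise division, and the inner and outer sums of Equation (8) are moving-window sums. The small 1e-12 shift guarding the divisions is our implementation detail, not part of the method.

```python
import numpy as np

def box_sum(X, K):
    """Moving K-by-K window sum with the asymmetric offsets K_l, K_r of Eq. (2)."""
    Kl, Kr = (K - 1) // 2, K // 2
    P = np.pad(X, ((Kl, Kr), (Kl, Kr)))
    out = np.zeros_like(X)
    for di in range(K):
        for dj in range(K):
            out += P[di:di + X.shape[0], dj:dj + X.shape[1]]
    return out

def mm_prox(V0, gamma, K=3, eps=1e-4, n_it=20):
    """MM iteration of Algorithm 1 for the prox of Eq. (3):
    argmin_V 0.5*||V - V0||^2 + gamma*phi(V)."""
    V = V0.copy()
    for _ in range(n_it):
        group_energy = box_sum(V ** 2, K)                     # inner sums of Eq. (8)
        d2 = box_sum(1.0 / np.sqrt(group_energy + 1e-12), K)  # outer sums: D^2 diagonal
        V_next = V0 / (1.0 + gamma * d2)                      # (I + gamma*D^2)^{-1} v0
        if np.linalg.norm(V_next - V) <= eps * (np.linalg.norm(V) + 1e-12):
            V = V_next
            break
        V = V_next
    return V
```

With $\gamma = 0$ the prox is the identity, and any positive $\gamma$ shrinks the input, which matches the intended behavior of the regularizer.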

2.2. Regularization by Denoising

For image super-resolution reconstruction, the model can be expressed as
$$\mathbf{F} = \arg\min_{\mathbf{F}} \frac{1}{2} \| \mathbf{SHF} - \mathbf{G} \|_2^2 + \mu R(\mathbf{F}), \qquad (11)$$
where $\mathbf{H}$ is a circulant matrix representing convolution with the anti-aliasing filter; $\mathbf{S}$ is a binary sampling matrix whose rows are a subset of the rows of the identity matrix; $\mathbf{G}$ is the observed image; and $\mathbf{F}$ represents the corresponding original image.
To solve the above model, we can transform it into an image denoising problem using regularization by denoising (RED) [56,57], which relies on a general structural smoothness penalty term for regularizing any desired inverse problem. Specifically, the regularization term $R(\mathbf{F})$ is defined as

$$R(\mathbf{F}) = \frac{1}{2} \mathbf{F}^{T} \left( \mathbf{F} - f(\mathbf{F}) \right), \qquad (12)$$

where $f(\mathbf{F})$ is defined as the image denoising engine

$$f(\mathbf{F}) = \arg\min_{\hat{\mathbf{F}}} \frac{1}{2} \| \hat{\mathbf{F}} - \mathbf{F} \|_2^2 + \lambda \psi(\hat{\mathbf{F}}). \qquad (13)$$
The denoising engine is applied to the image $\mathbf{F}$, and the induced penalty is proportional to the inner product between the image and its denoising residual. This smooth regularization effectively employs an image-adaptive Laplacian derived from any image denoising engine $f(\cdot)$. Interestingly, under mild assumptions on $f(\cdot)$, the gradient of the regularizer is shown to be tractable and equal to the denoising residual $\mathbf{F} - f(\mathbf{F})$ [58,59].
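To make the RED construction concrete, the following Python sketch pairs Equation (12) with a hypothetical stand-in denoiser, a linear, symmetric periodic neighbour average chosen only for illustration (the paper's engine is HOGS4). For such an engine, the gradient of $R$ is exactly the residual $\mathbf{F} - f(\mathbf{F})$, which the finite-difference check below confirms.

```python
import numpy as np

def denoise(F, w=0.5):
    """Hypothetical stand-in engine f(F): averaging with the 4 periodic
    neighbours. It is linear with a symmetric Jacobian, so it satisfies
    the RED conditions; it is NOT the paper's HOGS4 engine."""
    neigh = (np.roll(F, 1, 0) + np.roll(F, -1, 0) +
             np.roll(F, 1, 1) + np.roll(F, -1, 1)) / 4.0
    return (1 - w) * F + w * neigh

def red_penalty(F):
    """RED regularizer, Eq. (12): R(F) = 0.5 * <F, F - f(F)>."""
    return 0.5 * np.sum(F * (F - denoise(F)))

def red_gradient(F):
    """Under RED's conditions on f, grad R(F) is the denoising residual."""
    return F - denoise(F)
```

The residual-as-gradient property is what makes the fixed-point updates of the later sections cheap: no differentiation through the denoiser is needed.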

3. Proposed Method

Inspired by the overlapping group sparse and quaternion total variation methods, this paper proposes a denoising model that uses the RED framework to complete infrared image super-resolution reconstruction (HOGS4). The traditional OGSTV does not fully exploit pixel information, as it considers only first-order information [40]. To improve the denoising effect, we extend the traditional OGSTV to a high-order total variation model. The proposed high-order overlapping group sparse total variation model not only considers first-order information but also adds the high-order gradient information of the horizontal, vertical, back-diagonal, and diagonal directions to the prior term. The introduction of quaternion and high-order information makes the prior knowledge more accurate, thus protecting the edges of the image [26] and suppressing the influence of small edges on the estimation of the blur kernel [20]. The denoising model is defined as follows:
$$f_{\mathrm{HOGS4}}(\mathbf{F}) = \arg\min_{\mathbf{F}} \frac{1}{2} \| \mathbf{F} - \mathbf{G} \|_2^2 + \sum_{i=1}^{4} \left[ \lambda_i \varphi(K_i * \mathbf{F}) + \omega_i \| K_i * K_i * \mathbf{F} \|_2^2 \right], \qquad (14)$$

where $K_i$ ($i = 1, 2, 3, 4$) are the convolution kernels along the horizontal, vertical, back-diagonal, and diagonal directions, respectively. These are defined as follows:

$$K_1 = [1, -1], \quad K_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad K_3 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad K_4 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}. \qquad (15)$$
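The four kernels translate directly into code. The following NumPy sketch is our illustration (the minus signs of the reconstructed kernels follow standard forward differences): it defines the kernels together with a minimal valid-mode 2-D convolution implementing $K_i * \mathbf{F}$.

```python
import numpy as np

# The four difference kernels of Eq. (15): horizontal, vertical,
# back-diagonal, and diagonal forward differences.
K1 = np.array([[1.0, -1.0]])
K2 = np.array([[1.0], [-1.0]])
K3 = np.array([[0.0, 1.0], [-1.0, 0.0]])
K4 = np.array([[1.0, 0.0], [0.0, -1.0]])

def conv2_valid(F, K):
    """Minimal 2-D 'valid' convolution for K_i * F."""
    kh, kw = K.shape
    Kf = K[::-1, ::-1]  # flip the kernel for true convolution
    out = np.zeros((F.shape[0] - kh + 1, F.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(F[i:i + kh, j:j + kw] * Kf)
    return out
```

All four responses vanish on constant images, and $K_1$ gives a unit-magnitude response on a horizontal ramp, which is the expected behavior of first differences.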
Then, according to Equation (11), the HOGS4 infrared image super-resolution reconstruction method based on the RED framework can be expressed as:

$$\mathbf{F} = \arg\min_{\mathbf{F}} \frac{1}{2} \| \mathbf{SHF} - \mathbf{G} \|_2^2 + \mu R_{\mathrm{HOGS4}}(\mathbf{F}), \qquad (16)$$

where the regularization term $R_{\mathrm{HOGS4}}(\mathbf{F})$ in the RED framework is defined as

$$R_{\mathrm{HOGS4}}(\mathbf{F}) = \frac{1}{2} \mathbf{F}^{T} \left( \mathbf{F} - f_{\mathrm{HOGS4}}(\mathbf{F}) \right). \qquad (17)$$
To solve the HOGS4 model in the RED framework, according to the principle of ADMM, an auxiliary variable $\mathbf{Z}$ is introduced to convert the unconstrained problem in Equation (16) into a constrained one:

$$(\mathbf{F}, \mathbf{Z}) = \arg\min_{\mathbf{F}, \mathbf{Z}} \frac{1}{2} \| \mathbf{SHF} - \mathbf{G} \|_2^2 + \mu R_{\mathrm{HOGS4}}(\mathbf{Z}), \quad \mathrm{s.t.} \ \mathbf{Z} = \mathbf{F}. \qquad (18)$$
Consequently, the corresponding augmented Lagrangian function is as follows:

$$L(\mathbf{F}, \mathbf{Z}, \mathbf{Y}) = \frac{1}{2} \| \mathbf{SHF} - \mathbf{G} \|_2^2 + \mu R_{\mathrm{HOGS4}}(\mathbf{Z}) + \frac{\rho}{2} \| \mathbf{F} - \mathbf{Z} + \mathbf{Y} \|_2^2, \qquad (19)$$
where Y is a Lagrange multiplier, and  ρ > 0 is a penalty parameter.
Because $\mathbf{F}$ and $\mathbf{Z}$ are decoupled, the minimizer of Equation (19) can be found by solving the following sequence of $\mathbf{F}$ and $\mathbf{Z}$ sub-problems:

$$\mathbf{F}^{k+1} = \arg\min_{\mathbf{F}} \frac{1}{2} \| \mathbf{SHF} - \mathbf{G} \|_2^2 + \frac{\rho}{2} \| \mathbf{F} - \mathbf{Z}^{k} + \mathbf{Y}^{k} \|_2^2, \qquad (20)$$

$$\mathbf{Z}^{k+1} = \arg\min_{\mathbf{Z}} \mu R_{\mathrm{HOGS4}}(\mathbf{Z}) + \frac{\rho}{2} \| \mathbf{F}^{k+1} - \mathbf{Z} + \mathbf{Y}^{k} \|_2^2. \qquad (21)$$
The procedure comprises the following steps:
1. To solve the sub-problem of $\mathbf{F}$, let $\mathbf{W} = \mathbf{SH}$. Then, Equation (20) can be represented as follows:

$$\mathbf{F}^{k+1} = \arg\min_{\mathbf{F}} \frac{1}{2} \| \mathbf{WF} - \mathbf{G} \|_2^2 + \frac{\rho}{2} \| \mathbf{F} - \mathbf{Z}^{k} + \mathbf{Y}^{k} \|_2^2. \qquad (22)$$

Considering $\mathbf{Z}^{(k)}$ and $\mathbf{Y}^{(k)}$ fixed, setting the first-order derivative with respect to $\mathbf{F}$ in Equation (22) to zero yields

$$0 = \mathbf{W}^{T} \left( \mathbf{WF} - \mathbf{G} \right) + \rho \left( \mathbf{F} - \mathbf{Z}^{k} + \mathbf{Y}^{k} \right). \qquad (23)$$

According to the ADMM, the solution of the sub-problem of $\mathbf{F}$ is

$$\mathbf{F}^{k+1} = \left( \mathbf{W}^{T} \mathbf{W} + \rho I \right)^{-1} \left( \mathbf{W}^{T} \mathbf{G} + \rho \left( \mathbf{Z}^{k} - \mathbf{Y}^{k} \right) \right). \qquad (24)$$
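Equation (24) is an ordinary regularized least-squares solve. The small NumPy sketch below uses random stand-ins for $\mathbf{SH}$, $\mathbf{G}$, $\mathbf{Z}$, and $\mathbf{Y}$ (all sizes are our assumptions) and verifies that the closed form satisfies the first-order condition of Equation (23).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
W = rng.standard_normal((6, n))   # stand-in for SH (blur followed by sampling)
G = rng.standard_normal(6)
Z = rng.standard_normal(n)
Y = rng.standard_normal(n)
rho = 0.5

# Closed-form F-update, Eq. (24): F = (W^T W + rho I)^{-1} (W^T G + rho (Z - Y))
F = np.linalg.solve(W.T @ W + rho * np.eye(n), W.T @ G + rho * (Z - Y))

# The first-order condition, Eq. (23), should hold at the solution:
residual = W.T @ (W @ F - G) + rho * (F - Z + Y)
print(np.linalg.norm(residual))   # close to 0 up to round-off
```

In the actual method this solve is cheap because $\mathbf{W}^{T}\mathbf{W}$ has exploitable blur-plus-sampling structure; the dense solve here is only for illustration.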
2. To solve the sub-problem of $\mathbf{Z}$, according to Equation (17), Equation (21) can be transformed as follows:

$$\mathbf{Z}^{k+1} = \arg\min_{\mathbf{Z}} \frac{\mu}{2} \mathbf{Z}^{T} \left( \mathbf{Z} - f_{\mathrm{HOGS4}}(\mathbf{Z}) \right) + \frac{\rho}{2} \| \mathbf{F}^{k+1} - \mathbf{Z} + \mathbf{Y}^{k} \|_2^2. \qquad (25)$$

Considering $\mathbf{F}^{(k+1)}$ and $\mathbf{Y}^{(k)}$ fixed, setting the first-order derivative with respect to $\mathbf{Z}$ in Equation (25) to zero yields

$$0 = \mu \left( \mathbf{Z} - f_{\mathrm{HOGS4}}(\mathbf{Z}) \right) + \rho \left( \mathbf{Z} - \mathbf{F}^{k+1} - \mathbf{Y}^{k} \right), \qquad (26)$$

which can be solved by the fixed-point strategy, leading to the following update rule for $\mathbf{Z}$ [56]:

$$\mathbf{Z}^{j+1} = \frac{1}{\mu + \rho} \left( \mu f_{\mathrm{HOGS4}}(\mathbf{Z}^{j}) + \rho \left( \mathbf{F}^{k+1} + \mathbf{Y}^{k} \right) \right), \qquad (27)$$

where $f_{\mathrm{HOGS4}}(\cdot)$ is the HOGS4 denoising engine, defined as:

$$f_{\mathrm{HOGS4}}(\mathbf{Z}^{j}) = \arg\min_{\hat{\mathbf{Z}}} \frac{1}{2} \| \hat{\mathbf{Z}} - \mathbf{Z}^{j} \|_2^2 + \sum_{i=1}^{4} \left[ \lambda_i \varphi(K_i * \hat{\mathbf{Z}}) + \omega_i \| K_i * K_i * \hat{\mathbf{Z}} \|_2^2 \right]. \qquad (28)$$
Equation (27) implies that our approach is computationally more expensive in this case, as it requires several activations of the denoising engine $f_{\mathrm{HOGS4}}$ [56].
3. Then, we update the Lagrange multiplier as

$$\mathbf{Y}^{k+1} = \mathbf{Y}^{k} + \gamma \rho \left( \mathbf{F}^{k+1} - \mathbf{Z}^{k+1} \right). \qquad (29)$$
The proposed SRR method is summarized in Algorithm 2.
Algorithm 2 Super-resolution using RED-HOGS4
Initialize: $\rho$, $\mu$, $N$
While $\left| \mathrm{PSNR}(\mathbf{F}^{k+1}, \mathbf{G}) - \mathrm{PSNR}(\mathbf{F}^{k}, \mathbf{G}) \right| > tol$ do
  • compute $\mathbf{F}^{k+1} = \arg\min_{\mathbf{F}} \frac{1}{2} \| \mathbf{WF} - \mathbf{G} \|_2^2 + \frac{\rho}{2} \| \mathbf{F} - \mathbf{Z}^{k} + \mathbf{Y}^{k} \|_2^2$
  • compute $\tilde{\mathbf{Z}}^{1} = \mathbf{Z}^{k}$
  • for $j = 1, 2, \ldots, N$
    • compute $\hat{\mathbf{Z}}^{j} = f_{\mathrm{HOGS4}}(\tilde{\mathbf{Z}}^{j})$
    • compute $\tilde{\mathbf{Z}}^{j+1} = \frac{\mu}{\mu + \rho} \hat{\mathbf{Z}}^{j} + \frac{\rho}{\mu + \rho} \left( \mathbf{F}^{k+1} + \mathbf{Y}^{k} \right)$
  • end for
  • compute $\mathbf{Z}^{k+1} = \tilde{\mathbf{Z}}^{N+1}$
  • compute $\mathbf{Y}^{k+1} = \mathbf{Y}^{k} + \gamma \rho \left( \mathbf{F}^{k+1} - \mathbf{Z}^{k+1} \right)$

End While
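The outer loop of Algorithm 2 can be sketched compactly. The following Python skeleton is an illustration under stated assumptions: `solve_F` stands for the closed form of Equation (24), `denoiser` for any engine $f(\cdot)$ (HOGS4 in the paper), and the fixed iteration count replaces the PSNR-based stopping rule; all of these names and simplifications are ours.

```python
import numpy as np

def red_admm(G, solve_F, denoiser, mu=1.0, rho=0.001, N=3,
             gamma=1.618, n_outer=20):
    """Skeleton of Algorithm 2 (RED super-resolution via ADMM).
    `solve_F(V)` must return the F-update of Eq. (24) for V = Z - Y.
    Initialisation assumes G and F share a shape (i.e. W = I in this
    sketch); a real setup would upsample G first."""
    F = G.copy()
    Z = F.copy()
    Y = np.zeros_like(F)
    for _ in range(n_outer):
        F = solve_F(Z - Y)                       # F-subproblem, Eq. (24)
        Zt = Z
        for _ in range(N):                       # inner fixed point, Eq. (27)
            Zt = (mu * denoiser(Zt) + rho * (F + Y)) / (mu + rho)
        Z = Zt
        Y = Y + gamma * rho * (F - Z)            # multiplier update, Eq. (29)
    return F
```

A quick smoke test with $\mathbf{W} = I$ and the identity as the denoising engine (so the RED penalty vanishes) should return the observation essentially unchanged, which confirms the plumbing of the three updates.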
Regarding the sub-problem $\mathbf{Z}$, Equation (28) can be converted into the following constrained problem:

$$\hat{\mathbf{Z}} = \arg\min_{\hat{\mathbf{Z}}} \frac{1}{2} \| \hat{\mathbf{Z}} - \mathbf{Z}^{j} \|_2^2 + \sum_{i=1}^{4} \left[ \lambda_i \varphi(\mathbf{V}_i) + \omega_i \| \mathbf{W}_i \|_2^2 \right], \quad \mathrm{s.t.} \ \mathbf{V}_i = K_i * \hat{\mathbf{Z}}, \ \mathbf{W}_i = K_i * K_i * \hat{\mathbf{Z}}, \ i = 1, 2, 3, 4. \qquad (30)$$
Accordingly, the augmented Lagrangian function is:

$$L(\hat{\mathbf{Z}}, \mathbf{V}_i, \mathbf{W}_i; \mathbf{U}_{vi}, \mathbf{U}_{wi}) = \frac{1}{2} \| \hat{\mathbf{Z}} - \mathbf{Z}^{j} \|_2^2 + \sum_{i=1}^{4} \left[ \lambda_i \varphi(\mathbf{V}_i) + \omega_i \| \mathbf{W}_i \|_2^2 \right] + \sum_{i=1}^{4} \frac{\eta_{vi}}{2} \| \mathbf{V}_i - K_i * \hat{\mathbf{Z}} - \mathbf{U}_{vi} \|_2^2 + \sum_{i=1}^{4} \frac{\eta_{wi}}{2} \| \mathbf{W}_i - K_i * K_i * \hat{\mathbf{Z}} - \mathbf{U}_{wi} \|_2^2, \qquad (31)$$

where $\mathbf{U}_{vi}$ and $\mathbf{U}_{wi}$ ($i = 1, 2, 3, 4$) are the Lagrange multipliers, and $\eta_{vi} > 0$ and $\eta_{wi} > 0$ are penalty parameters.
The solution of Equation (30) corresponds to the saddle point of $L(\hat{\mathbf{Z}}, \mathbf{V}_i, \mathbf{W}_i; \mathbf{U}_{vi}, \mathbf{U}_{wi})$, which can be found by solving the following sequence of subproblems:

$$\hat{\mathbf{Z}}^{n+1} = \arg\min_{\hat{\mathbf{Z}}} \frac{1}{2} \| \hat{\mathbf{Z}} - \mathbf{Z}^{j} \|_2^2 + \sum_{i=1}^{4} \frac{\eta_{vi}}{2} \| \mathbf{V}_i^{n} - K_i * \hat{\mathbf{Z}} - \mathbf{U}_{vi}^{n} \|_2^2 + \sum_{i=1}^{4} \frac{\eta_{wi}}{2} \| \mathbf{W}_i^{n} - K_i * K_i * \hat{\mathbf{Z}} - \mathbf{U}_{wi}^{n} \|_2^2, \qquad (32)$$

$$\mathbf{V}_i^{n+1} = \arg\min_{\mathbf{V}_i} \lambda_i \varphi(\mathbf{V}_i) + \frac{\eta_{vi}}{2} \| \mathbf{V}_i - K_i * \hat{\mathbf{Z}}^{n+1} - \mathbf{U}_{vi}^{n} \|_2^2, \quad i = 1, 2, 3, 4, \qquad (33)$$

$$\mathbf{W}_i^{n+1} = \arg\min_{\mathbf{W}_i} \omega_i \| \mathbf{W}_i \|_2^2 + \frac{\eta_{wi}}{2} \| \mathbf{W}_i - K_i * K_i * \hat{\mathbf{Z}}^{n+1} - \mathbf{U}_{wi}^{n} \|_2^2, \quad i = 1, 2, 3, 4. \qquad (34)$$
The procedure comprises the following steps:
1. To solve the sub-problem $\hat{\mathbf{Z}}$, the 2D Fourier transform of $\hat{\mathbf{Z}}$ can be obtained by employing the convolution theorem [64]:

$$\mathcal{F}(\hat{\mathbf{Z}}^{n+1}) = \arg\min_{\mathcal{F}(\hat{\mathbf{Z}})} \frac{1}{2} \| \mathcal{F}(\hat{\mathbf{Z}}) - \mathcal{F}(\mathbf{Z}^{j}) \|_2^2 + \sum_{i=1}^{4} \frac{\eta_{vi}}{2} \| \mathcal{F}(\mathbf{V}_i^{n}) - \mathcal{F}(K_i) \circ \mathcal{F}(\hat{\mathbf{Z}}) - \mathcal{F}(\mathbf{U}_{vi}^{n}) \|_2^2 + \sum_{i=1}^{4} \frac{\eta_{wi}}{2} \| \mathcal{F}(\mathbf{W}_i^{n}) - \mathcal{F}(K_i) \circ \mathcal{F}(K_i) \circ \mathcal{F}(\hat{\mathbf{Z}}) - \mathcal{F}(\mathbf{U}_{wi}^{n}) \|_2^2, \qquad (35)$$

where the symbol $\circ$ represents component-wise multiplication.
Considering $\mathbf{Z}^{(j)}$, $\mathbf{V}_i^{(n)}$, $\mathbf{W}_i^{(n)}$, $\mathbf{U}_{vi}^{(n)}$, and $\mathbf{U}_{wi}^{(n)}$ fixed, setting the first-order derivative with respect to $\mathcal{F}(\hat{\mathbf{Z}})$ in Equation (35) to zero yields

$$0 = \mathcal{F}(\hat{\mathbf{Z}}) - \mathcal{F}(\mathbf{Z}^{j}) - \sum_{i=1}^{4} \eta_{vi} \mathcal{F}(K_i)^{*} \circ \left( \mathcal{F}(\mathbf{V}_i^{n}) - \mathcal{F}(K_i) \circ \mathcal{F}(\hat{\mathbf{Z}}) - \mathcal{F}(\mathbf{U}_{vi}^{n}) \right) - \sum_{i=1}^{4} \eta_{wi} \left( \mathcal{F}(K_i) \circ \mathcal{F}(K_i) \right)^{*} \circ \left( \mathcal{F}(\mathbf{W}_i^{n}) - \mathcal{F}(K_i) \circ \mathcal{F}(K_i) \circ \mathcal{F}(\hat{\mathbf{Z}}) - \mathcal{F}(\mathbf{U}_{wi}^{n}) \right). \qquad (36)$$
For simplicity, we abbreviate Equation (36) as

$$\mathcal{F}(\hat{\mathbf{Z}}) \circ \mathrm{lhs} = \mathrm{rhs}, \qquad (37)$$

where

$$\mathrm{lhs} = I + \sum_{i=1}^{4} \eta_{vi} \mathcal{F}(K_i)^{*} \circ \mathcal{F}(K_i) + \sum_{i=1}^{4} \eta_{wi} \left( \mathcal{F}(K_i) \circ \mathcal{F}(K_i) \right)^{*} \circ \mathcal{F}(K_i) \circ \mathcal{F}(K_i), \qquad (38)$$

$$\mathrm{rhs} = \mathcal{F}(\mathbf{Z}^{j}) + \sum_{i=1}^{4} \eta_{vi} \mathcal{F}(K_i)^{*} \circ \left( \mathcal{F}(\mathbf{V}_i^{n}) - \mathcal{F}(\mathbf{U}_{vi}^{n}) \right) + \sum_{i=1}^{4} \eta_{wi} \left( \mathcal{F}(K_i) \circ \mathcal{F}(K_i) \right)^{*} \circ \left( \mathcal{F}(\mathbf{W}_i^{n}) - \mathcal{F}(\mathbf{U}_{wi}^{n}) \right). \qquad (39)$$
Then, according to Equation (37), we have

$$\hat{\mathbf{Z}}^{n+1} = \mathcal{F}^{-1} \left( \mathrm{rhs} \ ./ \ \mathrm{lhs} \right), \qquad (40)$$

where $./$ denotes component-wise division and $\mathcal{F}(K_i)^{*}$ is the complex conjugate of $\mathcal{F}(K_i)$.
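Equations (38)–(40) amount to forming two frequency-domain images and dividing them element-wise. The NumPy sketch below is our illustration: `psf2otf` mimics the standard MATLAB helper of the same name, all operators are treated as circular convolutions, and the function signature is an assumption of this sketch.

```python
import numpy as np

def psf2otf(K, shape):
    """Zero-pad a small kernel to `shape` and circularly centre it at the
    origin so that frequency-domain multiplication acts as circular
    convolution (the standard psf2otf construction)."""
    P = np.zeros(shape)
    kh, kw = K.shape
    P[:kh, :kw] = K
    return np.fft.fft2(np.roll(P, (-(kh // 2), -(kw // 2)), axis=(0, 1)))

def solve_z_hat(Zj, kernels, V, W, Uv, Uw, eta_v, eta_w):
    """FFT solution of the Z-hat subproblem, Eqs. (38)-(40): lhs and rhs
    are built element-wise in the frequency domain, and the answer is
    the inverse transform of rhs ./ lhs."""
    lhs = np.ones(Zj.shape, dtype=complex)
    rhs = np.fft.fft2(Zj)
    for i, K in enumerate(kernels):
        FK = psf2otf(K, Zj.shape)
        FKK = FK * FK                       # transform of K_i * K_i
        lhs += eta_v[i] * np.conj(FK) * FK + eta_w[i] * np.conj(FKK) * FKK
        rhs += eta_v[i] * np.conj(FK) * (np.fft.fft2(V[i]) - np.fft.fft2(Uv[i]))
        rhs += eta_w[i] * np.conj(FKK) * (np.fft.fft2(W[i]) - np.fft.fft2(Uw[i]))
    return np.real(np.fft.ifft2(rhs / lhs))
```

As a sanity check, if the auxiliary variable already satisfies its constraint ($\mathbf{V}_i = K_i * \hat{\mathbf{Z}}$ circularly) and the multipliers are zero, the minimizer of Equation (32) is the input itself.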
2. To solve the sub-problem of $\mathbf{V}_i$ in Equation (33), the MM method (Algorithm 1) can be used:

$$\mathbf{V}_{i,m+1}^{n+1} = \mathrm{mat}\left[ \left( I + \frac{\lambda_i}{\eta_{vi}} D^2(\mathbf{V}_{i,m}^{n+1}) \right)^{-1} \mathbf{v}_{i,0}^{n+1} \right], \quad i = 1, 2, 3, 4, \qquad (41)$$

where $\mathbf{V}_{i,m+1}^{n+1}$ denotes the $(m+1)$-th MM iterate for $\mathbf{V}_i^{n+1}$, and the initial value $\mathbf{V}_{i,0}^{n+1}$ is

$$\mathbf{V}_{i,0}^{n+1} = K_i * \hat{\mathbf{Z}}^{n+1} + \mathbf{U}_{vi}^{n}. \qquad (42)$$
3. To solve the sub-problem $\mathbf{W}_i$, we set the first-order derivative with respect to $\mathbf{W}_i$ in Equation (34) to zero and obtain:

$$\mathbf{W}_i^{n+1} = \mathcal{F}^{-1} \left( \frac{\eta_{wi} \left( \mathcal{F}(K_i) \circ \mathcal{F}(K_i) \circ \mathcal{F}(\hat{\mathbf{Z}}^{n+1}) + \mathcal{F}(\mathbf{U}_{wi}^{n}) \right)}{2 \omega_i + \eta_{wi}} \right), \quad i = 1, 2, 3, 4. \qquad (43)$$
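Because the $\mathbf{W}_i$ objective in Equation (34) is separable per pixel, its minimizer is a simple weighted scaling; Equation (43) merely evaluates the input $K_i * K_i * \hat{\mathbf{Z}}$ in the Fourier domain. A minimal sketch (the names are ours):

```python
import numpy as np

def solve_w(B, U, omega, eta):
    """Closed form of the W_i subproblem: the objective
    omega*||W||^2 + (eta/2)*||W - B - U||^2 is separable per pixel, so
    the minimiser scales B + U by eta / (2*omega + eta). Here B stands
    for K_i * K_i * Z_hat, computed via FFT in Eq. (43)."""
    return eta * (B + U) / (2.0 * omega + eta)
```

The first-order condition $2\omega \mathbf{W} + \eta(\mathbf{W} - \mathbf{B} - \mathbf{U}) = 0$ can be checked numerically for any inputs.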
4. Lastly, the Lagrange multipliers are updated as

$$\mathbf{U}_{vi}^{n+1} = \mathbf{U}_{vi}^{n} - \gamma \eta_{vi} \left( \mathbf{V}_i^{n+1} - K_i * \hat{\mathbf{Z}}^{n+1} \right), \quad \mathbf{U}_{wi}^{n+1} = \mathbf{U}_{wi}^{n} - \gamma \eta_{wi} \left( \mathbf{W}_i^{n+1} - K_i * K_i * \hat{\mathbf{Z}}^{n+1} \right), \quad i = 1, 2, 3, 4. \qquad (44)$$
In this manner, all the sub-problems of Equation (28) are solved independently. In each iteration, the sub-problem $\mathbf{V}$ is solved by the MM algorithm according to Equations (41) and (42). Considering the special structure of the differential matrices in the sub-problem of $\mathbf{W}$, we regard the differential operators as convolution operators; by applying the convolution theorem [64], the sub-problem $\mathbf{W}$ is solved in the frequency domain. The entire algorithm to solve Equation (28) is summarized in Algorithm 3. Moreover, Algorithm 3, regarded as the HOGS4 denoising engine, can also serve as a stand-alone denoising algorithm that uses quaternion and high-order overlapping group sparse total variation to denoise images.
Algorithm 3 HOGS4 denoising engine using ADMM
Initialize:
While $\left| \mathrm{PSNR}(\hat{\mathbf{Z}}^{n+1}, \mathbf{Z}^{j}) - \mathrm{PSNR}(\hat{\mathbf{Z}}^{n}, \mathbf{Z}^{j}) \right| > tol$ do
  • compute $\hat{\mathbf{Z}}^{n+1}$ according to Equations (38)–(40);
  • compute $\mathbf{V}_i^{n+1}$ according to Equations (41) and (42);
  • compute $\mathbf{W}_i^{n+1}$ according to Equation (43);
  • compute $\mathbf{U}_{vi}^{n+1}$ and $\mathbf{U}_{wi}^{n+1}$ according to Equation (44);

End While
Return $\hat{\mathbf{Z}}^{n+1}$

4. Experiments and Results

4.1. Materials and Method

In this section, we present several numerical results to illustrate the performance of the proposed method. RED-HOGS4 is compared under different noise levels and Gaussian blur conditions with several other methods, including the MFT [47], RED-TV [17], RED-TGV [20], and RED-OGSTV [36] methods. Among these comparison methods, MFT used the scripts provided in [47], while the other methods are based on the literature and combined with the RED framework for super-resolution reconstruction. Eight infrared images were selected from the infrared image databases LTIR [65] and IRData [66] as test images, as shown in Figure 1. Our experiments were performed on a PC with an Intel CPU 2.8 GHz and 8 GB RAM using MATLAB R2014a.
For the objective evaluation, we calculated the peak signal-to-noise ratio (PSNR) [67] and structural similarity (SSIM) [68]. PSNR compares the similarity of two input images or signals based on the mean square error. SSIM also measures the similarity between two input images and is designed to be more consistent with human visual perception than measures such as PSNR. They are defined as follows:
$$\mathrm{PSNR}(\mathbf{X}, \mathbf{Y}) = 10 \times \log_{10} \frac{C_{\max}^2}{\frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left( X_{ij} - Y_{ij} \right)^2}, \quad C_{\max} = \begin{cases} 1, & \text{for double-precision intensity images} \\ 255, & \text{for 8-bit unsigned integer intensity images} \end{cases} \qquad (45)$$

$$\mathrm{SSIM}(\mathbf{X}, \mathbf{Y}) = \frac{\left( 2 u_X u_Y + (255 k_1)^2 \right) \left( 2 \sigma_{XY} + (255 k_2)^2 \right)}{\left( u_X^2 + u_Y^2 + (255 k_1)^2 \right) \left( \sigma_X^2 + \sigma_Y^2 + (255 k_2)^2 \right)}, \qquad (46)$$

where $\mathbf{X}$ and $X_{ij}$ denote the original image and $\mathbf{Y}$ and $Y_{ij}$ the reconstructed image; $u_X$ and $u_Y$ are the mean values of $\mathbf{X}$ and $\mathbf{Y}$, respectively; $\sigma_X^2$ and $\sigma_Y^2$ are the variances of $\mathbf{X}$ and $\mathbf{Y}$, respectively; and $\sigma_{XY}$ is the covariance of $\mathbf{X}$ and $\mathbf{Y}$. The parameters $k_1$ and $k_2$ are set such that the denominator of SSIM is nonzero. In this study, we set $k_1 = 0.01$ and $k_2 = 0.03$ [68].
In general, larger values of PSNR and SSIM indicate better performance; therefore, in this experiment, we focus on both. In all experiments, we set the parameters empirically as follows: $\mu = 1$, $\rho = 0.001$, $N = 3$ [57]. If $\gamma = 1$, Algorithm 3 is a classic ADMM, but $\gamma = 1.618$ makes it converge noticeably faster [38]; therefore, we set $\gamma = 1.618$. For the tol value in Algorithm 3, with $N$ set to 3 as recommended in [57], we found that $tol = 0.001$ yields a high PSNR value, so we set $tol = 0.001$ in all experiments. The blur matrix $\mathbf{H}$ in Equation (11) corresponds to a blur kernel generated by the MATLAB built-in command fspecial('gaussian', 7, 1.6). $\mathbf{S}$ is a K-fold downsampling operator generated by the MATLAB built-in function downsample(X, K).
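For reproducibility, the evaluation metric of Equation (45) and the degradation operators can be sketched in a few lines. The Python functions below are our analogues of the MATLAB built-ins named above (fspecial and downsample); the SSIM computation is omitted for brevity.

```python
import numpy as np

def psnr(X, Y, c_max=255.0):
    """PSNR in dB, Eq. (45); c_max is 255 for 8-bit images, 1 for doubles."""
    mse = np.mean((X.astype(float) - Y.astype(float)) ** 2)
    return 10.0 * np.log10(c_max ** 2 / mse)

def gaussian_kernel(size=7, sigma=1.6):
    """Analogue of MATLAB's fspecial('gaussian', 7, 1.6): a normalised
    isotropic Gaussian blur kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def downsample(X, K):
    """K-fold decimation along both axes, analogous to MATLAB's downsample."""
    return X[::K, ::K]
```

For example, a uniform error of 10 gray levels on an 8-bit image gives a PSNR of about 28.13 dB, independent of image size.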

4.2. Infrared Image Super-Resolution Experiment without Noise

In the experiment, the LR images without noise are obtained by downsampling the HR images (2-fold, 3-fold, and 4-fold). To evaluate the performance objectively, PSNR and SSIM are calculated under different levels of super-resolving operators (corresponding to ×2, ×3, and ×4). The experimental results of each method are listed in Table 1.
It can be seen from the experimental results in Table 1 that, when there is no noise in the infrared images, the PSNR values of Street in the ×2 reconstruction results are MFT 31.6188 dB, RED-TV 36.3936 dB, RED-TGV 36.4216 dB, RED-OGSTV 36.3983 dB, and RED-HOGS4 36.4195 dB. These results show that: 1. comparing RED-HOGS4 (36.4195 dB) with RED-OGSTV (36.3983 dB), the high-order model improves on the first-order model; 2. comparing RED-HOGS4 (36.4195 dB) with RED-TGV (36.4216 dB), although the result of RED-HOGS4 is slightly worse than that of RED-TGV, the values are very close. The SSIM values of Street are MFT 0.9561, RED-TV 0.9867, RED-TGV 0.9911, RED-OGSTV 0.9915, and RED-HOGS4 0.9912. Because the parameter settings in the experiment mainly target PSNR rather than SSIM, the SSIM values are given for reference; nevertheless, they show that RED-HOGS4 is second only to RED-OGSTV among the five methods. Besides, the best PSNR values for Station, Building, and Office are RED-OGSTV 33.1898 dB, RED-TGV 33.6576 dB, and RED-TGV 31.2934 dB, while the corresponding PSNR values of RED-HOGS4 are 33.1657 dB, 33.6553 dB, and 31.2891 dB; the differences are small. In contrast, the best PSNR values for Garden, Gate, Car, and Sidewalk in the ×2 reconstruction results are all obtained by RED-HOGS4: 44.1652 dB, 32.1942 dB, 32.6179 dB, and 33.8876 dB. Taken together, although the RED-HOGS4 method attains the best PSNR for only four of the images in the ×2 reconstruction results, the average PSNR of the eight images processed by RED-HOGS4 is greater than that of the other methods. RED-HOGS4 exhibits the best SSIM only on individual images, and its mean SSIM is worse than that obtained by the MFT method; at the same time, however, the PSNR values of all images processed by MFT are poor.
As the super-resolution levels increase to ×3 and ×4, the results of RED-HOGS4 become significantly better than those of the other methods. For example, in the ×3 reconstruction results, the PSNR values of RED-HOGS4 are higher than those of RED-OGSTV by about 0.02 dB to 0.08 dB; in the ×4 reconstruction results, the difference expands to 0.02 dB to 0.23 dB. Meanwhile, the SSIM values of RED-HOGS4 are also higher than those of the other methods. However, as the RED-HOGS4 method is relatively complex, it takes the longest time among all the methods.
The following is a comparison of visual effects on three images, Street, Station, and Gate, after 4-fold downsampling of the noise-free original images using the five methods. The LR images are shown in Figure 2, in which the marked rectangles are compared with the SRR results of the five methods in Figure 3, Figure 4 and Figure 5, respectively.
As can be seen from Figure 3, Figure 4 and Figure 5, when no noise is introduced, after ×4 super-resolution processing, the MFT results are the worst among the five methods: the three images obtained by MFT show unclear, blurred boundaries. Comparing the images generated by RED-TV, RED-TGV, RED-OGSTV, and RED-HOGS4, we can see that RED-HOGS4 handles boundaries and overall structure better. Especially under ×4 magnification in Figure 5, the results of the RED-HOGS4 method clearly show the outlines of letters and strokes, which are significantly better than those of the other methods.

4.3. Infrared Image Super-Resolution Experiment with Added White Gaussian Noise

In this experiment, the LR infrared images were generated by downsampling the original images by a factor of two after adding white Gaussian noise of different levels ($\sigma = 5, 10, 20, 30$). To objectively evaluate how each method's performance varies with noise content, PSNR and SSIM were calculated at the ×2 super-resolving operator. These results are listed in Table 2.
The experimental results show that the MFT method has better PSNR for a few images but a worse mean PSNR; the results of RED-TV and RED-TGV are better than those of MFT but worse than those of RED-OGSTV and RED-HOGS4. When the noise is small, the PSNR of the RED-OGSTV method is lower than that of RED-HOGS4 while its SSIM value is higher. As the noise increases, the reconstruction results of the RED-OGSTV method fall increasingly further behind those of the RED-HOGS4 method. In terms of processing time, RED-HOGS4 is relatively more time-consuming than the other methods.
The visual effects comparison based on the Street, Station, and Gate images, which had added white Gaussian noise ($\sigma = 10$) and were downsampled by a factor of two, is shown as an example in Figure 6, in which the marked rectangles are compared with the SRR results of the five methods in Figure 7, Figure 8 and Figure 9, respectively.
From Figure 7, Figure 8 and Figure 9, we see that MFT leaves little noise in the reconstructed image, but the whole image is over-smoothed, resulting in a serious loss of boundary information. The RED-TV and RED-TGV reconstructions remove noise and preserve information inadequately, whereas RED-OGSTV and RED-HOGS4 produce better reconstructions, as shown in Figure 7. In Figure 8 and Figure 9, the proposed method outperforms RED-OGSTV in terms of edge reconstruction and noise suppression.

5. Discussion

The HOGS4 method adopts quaternion TV and high-order OGSTV, which fully exploits image correlations in the quaternion domain and extends first-order overlapping group sparsity to higher orders, so that a clear image can be reconstructed in the presence of noise. In the OGSTV method, by contrast, staircase artifacts persist under noise, and its noise removal is not as good as that of HOGS4.
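To make the overlapping-group idea concrete, the following sketch evaluates a one-dimensional overlapping group sparsity functional of the kind used in OGSTV-type regularizers: the sum of the l2 norms of all length-K overlapping groups. Circular indexing is our illustrative simplification, not the paper's exact definition; in OGSTV the functional is applied to first-order differences of the image, and HOGS4 applies it to higher-order differences as well.

```python
import numpy as np

def ogs_norm(v, K=3):
    """Overlapping group sparsity functional:
    sum over i of ||(v_i, ..., v_{i+K-1})||_2, with circular indexing."""
    v = np.asarray(v, dtype=np.float64)
    n = v.size
    return sum(np.linalg.norm(v[np.arange(i, i + K) % n]) for i in range(n))

grad = np.array([3.0, -4.0, 0.0])  # e.g., a row of first-order differences
l1_like = ogs_norm(grad, K=1)      # K = 1 reduces to the ordinary l1 norm
grouped = ogs_norm(grad, K=3)      # each group here spans the whole vector
```

Because each entry appears in K groups, large gradients are penalized together with their neighbors, which encourages structured (grouped) sparsity rather than isolated spikes and thereby suppresses staircase artifacts.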
When the MFT method is used to reconstruct an image, whether the input is noiseless or noisy, the result is very smooth, so the details are unclear.
The TV method preserves the edge and detail information of the image but smooths it piecewise; hence, the result usually contains staircase artifacts. The TGV method effectively reduces staircase artifacts by using first-order and second-order gradients during image processing; however, it can also cause over-smoothing and image distortion.
However, compared with the other methods, the proposed method is more time-consuming because it introduces high-order OGSTV and quaternion operations, which have higher computational complexity. In future work, we may use accelerated iterative methods to improve the convergence speed of the algorithm and thus reduce the runtime. As in the methods of [26,69], an acceleration operator can be used to reduce the number of iterations of the ADMM algorithm, thereby reducing the runtime of the super-resolution reconstruction algorithm. The proposed method also has other limitations. For example, the parameter settings are mainly based on experience; because of the limited number of test infrared images, these parameters may not transfer directly to other sets of infrared images. In practical applications, the parameters still need to be tuned for each set of infrared images; alternatively, an adaptive parameter-optimization mechanism can be adopted in conjunction with this method.

6. Conclusions

In this paper, an infrared image super-resolution reconstruction method based on quaternion total variation and high-order overlapping group sparsity is proposed. The method improves super-resolution reconstruction because it combines quaternion total variation with high-order group sparsity. In addition, by introducing the RED framework, the super-resolution problem is transformed into multiple denoising sub-problems. When addressing these sub-problems, the multiple difference operators are processed in convolution form; by the convolution theorem, they can then be applied in the frequency domain, thereby avoiding large-scale matrix operations. The experimental results show that the proposed method outperforms the MFT, TV, TGV, and OGSTV methods.
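The frequency-domain handling of the difference operators mentioned above can be illustrated with a generic psf2otf-style sketch. This is our own minimal version with circular boundary conditions; the paper's exact operators and boundary handling may differ:

```python
import numpy as np

def apply_kernel_fft(img, kernel):
    """Apply a small convolution kernel to an image via the FFT
    (convolution theorem, circular boundary conditions)."""
    h, w = img.shape
    kh, kw = kernel.shape
    pad = np.zeros((h, w))
    pad[:kh, :kw] = kernel
    # Shift the kernel's centre to (0, 0) so the output is not displaced.
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    otf = np.fft.fft2(pad)  # transfer function of the difference operator
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

# First-order horizontal difference, identical to a circular finite difference.
img = np.arange(16.0).reshape(4, 4)
dx = apply_kernel_fft(img, np.array([[1.0, -1.0]]))
```

In the ADMM sub-problems, each difference operator thus becomes a pointwise multiplication in the Fourier domain, so the linear systems can be solved without forming or inverting large matrices.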
Although the proposed method focuses on HOGS4, it can easily be extended to other regularization models, such as the TGV model, and combined with other techniques, such as the Lp quasinorm, to further improve super-resolution performance. Moreover, in practical applications, the method can be used for super-resolution reconstruction or denoising of grayscale images. We will pursue these extensions in follow-up work.

Author Contributions

X.L. wrote this manuscript. Y.C., Z.P. and J.W. contributed to the writing, direction, and content, and revised the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 61571096, 61775030), the Scientific and Technological Research Program of Chongqing Municipal Education Commission (Nos. KJ1729409, KJQN201903106), the Education and Scientific Research Foundation of Education Department of Fujian Province for Middle-aged and Young Teachers (Nos. JT180309, JT180310, JT180311), the Foundation of Fujian Province Great Teaching Reform (No. FBJG20180015).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhou, M.; Jing, M.; Liu, D.; Xia, Z.; Zou, Z.; Shi, Z. Multi-resolution networks for ship detection in infrared remote sensing images. Infrared Phys. Technol. 2018, 92, 183–189.
2. Zhang, T.; Wu, H.; Liu, Y.; Peng, L.; Yang, C.; Peng, Z. Infrared Small Target Detection Based on Non-Convex Optimization with Lp-Norm Constraint. Remote Sens. 2019, 11, 559.
3. Zhang, L.; Peng, Z. Infrared Small Target Detection Based on Partial Sum of the Tensor Nuclear Norm. Remote Sens. 2019, 11, 382.
4. Li, M.; Peng, L.; Chen, Y.; Huang, S.; Qin, F.; Peng, Z. Mask Sparse Representation Based on Semantic Features for Thermal Infrared Target Tracking. Remote Sens. 2019, 11, 1967.
5. Rasti, P.; Uiboupin, T.; Escalera, S.; Anbarjafari, G. Convolutional neural network super resolution for face recognition in surveillance monitoring. In International Conference on Articulated Motion and Deformable Objects; Springer: Cham, Switzerland, 2016; pp. 175–184.
6. Li, Z.; Peng, Q.; Bhanu, B.; Zhang, Q.; He, H. Super resolution for astronomical observations. Astrophys. Space Sci. 2018, 363, 92.
7. Shi, W.; Caballero, J.; Ledig, C.; Zhuang, X.; Bai, W.; Bhatia, K.; de Marvao, A.M.S.M.; Dawes, T.; O’Regan, D.; Rueckert, D. Cardiac image super-resolution with global correspondence using multi-atlas patchmatch. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2013; pp. 9–16.
8. Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
9. Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 391–407.
10. Xu, W.; Chen, R.; Huang, B.; Zhang, X.; Liu, C. Single Image Super-Resolution Based on Global Dense Feature Fusion Convolutional Network. Sensors 2019, 19, 316.
11. Du, X.; Qu, X.; He, Y.; Guo, D. Single image super-resolution based on multi-scale competitive convolutional neural network. Sensors 2018, 18, 789.
12. Chi, J.; Zhang, Y.; Yu, X.; Wang, Y.; Wu, C. Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks. Sensors 2019, 19, 3348.
13. Zhang, H.; Wang, P.; Zhang, C.; Jiang, Z. A Comparable Study of CNN-Based Single Image Super-Resolution for Space-Based Imaging Sensors. Sensors 2019, 19, 3234.
14. Xiao, A.; Wang, Z.; Wang, L.; Ren, Y. Super-resolution for “Jilin-1” satellite video imagery via a convolutional network. Sensors 2018, 18, 1194.
15. Liu, F.; Han, P.; Wang, Y.; Li, X.; Bai, L.; Shao, X. Super resolution reconstruction of infrared images based on classified dictionary learning. Infrared Phys. Technol. 2018, 90, 146–155.
16. He, Z.; Tang, S.; Yang, J.; Cao, Y.; Yang, M.Y.; Cao, Y. Cascaded Deep Networks with Multiple Receptive Fields for Infrared Image Super-Resolution. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 2310–2322.
17. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
18. Lysaker, M.; Lundervold, A.; Tai, X.C. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Trans. Image Process. 2003, 12, 1579–1590.
19. Chan, T.F.; Esedoglu, S.; Park, F. A fourth order dual method for staircase reduction in texture extraction and image restoration problems. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 4137–4140.
20. Bredies, K.; Kunisch, K.; Pock, T. Total generalized variation. SIAM J. Imaging Sci. 2010, 3, 492–526.
21. Ren, Z.; He, C.; Zhang, Q. Fractional order total variation regularization for image super-resolution. Signal Process. 2013, 93, 2408–2421.
22. Zhang, J.; Wei, Z. A class of fractional-order multi-scale variational models and alternating projection algorithm for image denoising. Appl. Math. Model. 2011, 35, 2516–2528.
23. Chen, G.; Zhang, J.; Li, D. Fractional-order total variation combined with sparsifying transforms for compressive sensing sparse image reconstruction. J. Vis. Commun. Image Represent. 2016, 38, 407–422.
24. Wang, C.; Wang, X.; Li, Y.; Xia, Z.; Zhang, C. Quaternion polar harmonic Fourier moments for color images. Inf. Sci. 2018, 450, 141–156.
25. Wang, C.; Wang, X.; Xia, Z.; Zhang, C. Ternary radial harmonic Fourier moments based robust stereo image zero-watermarking algorithm. Inf. Sci. 2019, 470, 109–120.
26. Liu, X.; Chen, Y.; Peng, Z.; Wu, J.; Wang, Z. Infrared image super-resolution reconstruction based on quaternion fractional order total variation with Lp quasinorm. Appl. Sci. 2018, 8, 1864.
27. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution via sparse representation. IEEE Trans. Image Process. 2010, 19, 2861–2873.
28. Yang, J.; Wright, J.; Huang, T.; Ma, Y. Image super-resolution as sparse representation of raw image patches. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
29. Zhao, J.; Hu, H.; Cao, F. Image super-resolution via adaptive sparse representation. Knowl.-Based Syst. 2017, 124, 23–33.
30. Xu, M.; Yang, Y.; Sun, Q.; Wu, X. Image super-resolution reconstruction based on adaptive sparse representation. Concurr. Comput. Pract. Exp. 2018, 30, e4968.
31. Wang, H.; Gao, X.; Zhang, K.; Li, J. Fast single image super-resolution using sparse Gaussian process regression. Signal Process. 2017, 134, 52–62.
32. Tang, Y.; Gong, W.; Yi, Q.; Li, W. Combining sparse coding with structured output regression machine for single image super-resolution. Inf. Sci. 2018, 430, 577–598.
33. Mandal, S.; Bhavsar, A.; Sao, A.K. Noise adaptive super-resolution from single image via non-local mean and sparse representation. Signal Process. 2017, 132, 134–149.
34. Jiang, C.; Zhang, Q.; Fan, R.; Hu, Z. Super-resolution CT image reconstruction based on dictionary learning and sparse representation. Sci. Rep. 2018, 8, 8799.
35. Alvarez-Ramos, V.; Ponomaryov, V.; Reyes-Reyes, R. Image super-resolution via two coupled dictionaries and sparse representation. Multimed. Tools Appl. 2018, 77, 13487–13511.
36. Chen, P.Y.; Selesnick, I.W. Group-sparse signal denoising: Non-convex regularization, convex optimization. IEEE Trans. Signal Process. 2014, 62, 3464–3478.
37. Selesnick, I.; Farshchian, M. Sparse signal approximation via nonseparable regularization. IEEE Trans. Signal Process. 2017, 65, 2561–2575.
38. Liu, J.; Huang, T.Z.; Liu, G.; Wang, S.; Lv, X.G. Total variation with overlapping group sparsity for speckle noise reduction. Neurocomputing 2016, 216, 502–513.
39. Liu, G.; Huang, T.Z.; Liu, J.; Lv, X.G. Total variation with overlapping group sparsity for image deblurring under impulse noise. PLoS ONE 2015, 10, e0122562.
40. Chen, Y.; Peng, Z.; Li, M.; Yu, F.; Lin, F. Seismic signal denoising using total generalized variation with overlapping group sparsity in the accelerated ADMM framework. J. Geophys. Eng. 2019, 16, 30–51.
41. Liu, X.; Chen, Y.; Peng, Z.; Wu, J. Total variation with overlapping group sparsity and Lp quasinorm for infrared image deblurring under salt-and-pepper noise. J. Electron. Imaging 2019, 28, 043031.
42. Wang, X.; Zhang, D.; Guo, X. Authentication and recovery of images using standard deviation. J. Electron. Imaging 2013, 22, 033012.
43. Xia, Z.; Wang, X.; Wang, C.; Zhang, C. Subpixel-Based Accurate and Fast Dynamic Tumor Image Recognition. J. Med. Imaging Health Inform. 2018, 8, 925–931.
44. Wang, X.-Y.; Zhang, J.-M. A novel image authentication and recovery algorithm based on chaos and Hamming code. Acta Phys. Sin. 2014, 63, 020701.
45. Wang, X.-Y.; Zhang, J.-M. A novel image authentication and recovery algorithm based on dither and chaos. Acta Phys. Sin. 2014, 63, 210701.
46. Ding, L.; Zhang, H.; Xiao, J.; Li, B.; Lu, S.; Norouzifard, M. An improved image mixed noise removal algorithm based on super-resolution algorithm and CNN. Neural Comput. Appl. 2019, 31, 325–336.
47. López-Rubio, E. Superresolution from a single noisy image by the median filter transform. SIAM J. Imaging Sci. 2016, 9, 82–115.
48. Zhang, X.; Li, C.; Meng, Q.; Liu, S.; Zhang, Y.; Wang, J. Infrared image super resolution by combining compressive sensing and deep learning. Sensors 2018, 18, 2587.
49. Tirer, T.; Giryes, R. Super-resolution via image-adapted denoising CNNs: Incorporating external and internal learning. IEEE Signal Process. Lett. 2019, 26, 1080–1084.
50. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
51. Venkatakrishnan, S.V.; Bouman, C.A.; Wohlberg, B. Plug-and-play priors for model based reconstruction. In Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, Austin, TX, USA, 3–5 December 2013; pp. 945–948.
52. Brifman, A.; Romano, Y.; Elad, M. Turning a denoiser into a super-resolver using plug and play priors. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 1404–1408.
53. Chan, S.H.; Wang, X.; Elgendy, O.A. Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 2016, 3, 84–98.
54. Ljubenović, M.; Figueiredo, M.A. Plug-and-play approach to class-adapted blind image deblurring. Int. J. Doc. Anal. Recognit. (IJDAR) 2019, 22, 79–97.
55. Shi, B.; Lian, Q.; Fan, X. PPR: Plug-and-play regularization model for solving nonlinear imaging inverse problems. Signal Process. 2019, 162, 83–96.
56. Romano, Y.; Elad, M.; Milanfar, P. The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 2017, 10, 1804–1844.
57. Reehorst, E.T.; Schniter, P. Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 2018, 5, 52–67.
58. Brifman, A.; Romano, Y.; Elad, M. Unified Single-Image and Video Super-Resolution via Denoising Algorithms. IEEE Trans. Image Process. 2019, 28, 6063–6076.
59. Hong, T.; Romano, Y.; Elad, M. Acceleration of RED via vector extrapolation. J. Vis. Commun. Image Represent. 2019, 63, 102575.
60. Tirer, T.; Giryes, R. Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 2018, 28, 1220–1234.
61. Shi, M.; Feng, L. Plug-and-play prior based on Gaussian mixture model learning for image restoration in sensor network. IEEE Access 2018, 6, 78113–78122.
62. He, T.; Sun, Y.; Chen, B.; Qi, J.; Liu, W.; Hu, J. Plug-and-play inertial forward–backward algorithm for Poisson image deconvolution. J. Electron. Imaging 2019, 28, 043020.
63. Sun, Y.; Babu, P.; Palomar, D.P. Majorization-minimization algorithms in signal processing, communications, and machine learning. IEEE Trans. Signal Process. 2016, 65, 794–816.
64. Oppenheim, A.V.; Willsky, A.S.; Nawab, S.H. Signals and Systems; Prentice-Hall: Englewood Cliffs, NJ, USA, 1983; Volume 2, p. 10.
65. Berg, A.; Ahlberg, J.; Felsberg, M. The LTIR Dataset. Available online: http://www.cvl.isy.liu.se/research/datasets/ltir/version1.0/ (accessed on 23 November 2019).
66. Project, D.G. IRData. Available online: http://www.dgp.toronto.edu/~nmorris/data/IRData/ (accessed on 23 November 2019).
67. Kahaki, S.M.; Arshad, H.; Nordin, M.J.; Ismail, W. Geometric feature descriptor and dissimilarity-based registration of remotely sensed imagery. PLoS ONE 2018, 13, e0200676.
68. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
69. Huang, B.; Ma, S.; Goldfarb, D. Accelerated linearized Bregman method. J. Sci. Comput. 2013, 54, 428–453.
Figure 1. HR infrared images: (a) Streets [65], (b) Garden [65], (c) Station [66], (d) Gate [66], (e) Cars [66], (f) Sidewalk [66], (g) Building [66], (h) Office [66].
Figure 2. LR infrared images that are down-sampled 4-fold without noise: (a) Street, (b) Station, (c) Gate; (d–f) enlarged details from the rectangles in (a–c), respectively.
Figure 3. Super-resolution ×4 results of the LR Street images without noise: (a–c,g–i) the original image and the results of the MFT, RED-TV, RED-TGV, RED-OGSTV, and RED-HOGS4 methods, respectively; (d–f,j–l) enlarged details from the rectangles in (a–c,g–i), respectively.
Figure 4. Super-resolution ×4 results of the LR Station images without noise: (a–c,g–i) the original image and the results of the MFT, RED-TV, RED-TGV, RED-OGSTV, and RED-HOGS4 methods, respectively; (d–f,j–l) enlarged details from the rectangles in (a–c,g–i), respectively.
Figure 5. Super-resolution ×4 results of the LR Gate images without noise: (a–c,g–i) the original image and the results of the MFT, RED-TV, RED-TGV, RED-OGSTV, and RED-HOGS4 methods, respectively; (d–f,j–l) enlarged details from the rectangles in (a–c,g–i), respectively.
Figure 6. LR infrared images that are downsampled by a factor of two with added white Gaussian noise (σ = 10): (a) Street, (b) Station, (c) Gate; (d–f) enlarged details from the rectangles in (a–c), respectively.
Figure 7. Super-resolution ×2 results of the LR images of the Street with added white Gaussian noise (σ = 10): (a–c,g–i) the original image and the results of the MFT, RED-TV, RED-TGV, RED-OGSTV, and RED-HOGS4 methods, respectively; (d–f,j–l) enlarged details from the rectangles in (a–c,g–i), respectively.
Figure 8. Super-resolution ×2 results of the LR images of the Station with added white Gaussian noise (σ = 10): (a–c,g–i) ground truth and the results of the MFT, RED-TV, RED-TGV, RED-OGSTV, and RED-HOGS4 methods, respectively; (d–f,j–l) enlarged details from the rectangles in (a–c,g–i), respectively.
Figure 9. Super-resolution ×2 results of the LR images of the Gate with added white Gaussian noise (σ = 10): (a–c,g–i) ground truth and the results of the MFT, RED-TV, RED-TGV, RED-OGSTV, and RED-HOGS4 methods, respectively; (d–f,j–l) enlarged details from the rectangles in (a–c,g–i), respectively.
Table 1. Infrared image super-resolution experiment results without noise. Each cell reports PSNR/SSIM/TIME.

Scale | Method | Street | Garden | Station | Gate | Car | Sidewalk | Building | Office
×2 | MFT | 31.6188/0.9561/14.96 | 38.2022/0.9898/14.18 | 29.4866/0.9250/3.14 | 29.6519/0.9273/3.12 | 29.5091/0.9201/3.24 | 31.3036/0.9350/3.26 | 29.9346/0.8781/3.1 | 28.6855/0.9208/3.1
×2 | TV | 36.3936/0.9867/4.41 | 43.6264/0.9934/5.35 | 32.6511/0.8780/1.86 | 31.7051/0.8672/1.54 | 32.1085/0.8719/1.4 | 33.2392/0.8767/1.36 | 33.3386/0.8997/1.53 | 31.0775/0.8688/1.79
×2 | TGV | 36.4216/0.9911/11.45 | 43.8369/0.9943/11.9 | 33.0034/0.9051/3.79 | 31.9684/0.8855/3.48 | 32.4560/0.8984/3.31 | 33.5317/0.8974/3.63 | 33.6576/0.9152/3.31 | 31.2934/0.8891/3.28
×2 | OGSTV | 36.3983/0.9915/17.52 | 44.1539/0.9965/19.38 | 33.1898/0.9236/3.14 | 32.1813/0.9129/3.17 | 32.6052/0.9152/2.65 | 33.8642/0.9223/3.56 | 33.6569/0.9153/3.79 | 31.2804/0.8867/3.48
×2 | HOGS4 | 36.4195/0.9912/18.77 | 44.1652/0.9964/20.71 | 33.1657/0.9222/4.38 | 32.1942/0.9141/3.83 | 32.6179/0.9153/3.36 | 33.8876/0.9225/3.91 | 33.6553/0.9155/4.36 | 31.2891/0.8891/3.45
×3 | MFT | 28.0181/0.8737/8.38 | 33.6661/0.9639/8.46 | 27.4429/0.8878/2.32 | 28.0566/0.8947/2.03 | 27.2375/0.8780/2.07 | 29.5699/0.9004/1.95 | 27.4554/0.7737/2.01 | 26.5407/0.8804/2.06
×3 | TV | 33.4016/0.9405/4.71 | 41.2186/0.9932/6.46 | 30.7039/0.9256/1.81 | 30.2514/0.9246/1.51 | 30.6114/0.9237/1.62 | 32.0579/0.9326/1.68 | 31.9141/0.9202/1.5 | 29.2032/0.9241/1.47
×3 | TGV | 33.4061/0.9587/13.92 | 41.3028/0.9918/13.67 | 30.5973/0.9303/3.34 | 30.1440/0.9250/3.43 | 30.5905/0.9024/3.42 | 32.0108/0.9236/3.57 | 31.9015/0.9223/3.39 | 29.4769/0.9239/3.49
×3 | OGSTV | 33.4173/0.9653/16.69 | 41.3054/0.9902/19.03 | 30.6362/0.9337/2.87 | 30.2387/0.9263/3.28 | 30.6197/0.9252/2.56 | 32.0310/0.9218/3.29 | 31.9027/0.9233/3.26 | 29.4788/0.9276/3.32
×3 | HOGS4 | 33.4604/0.9748/17.99 | 41.3245/0.9940/20.09 | 30.7198/0.9336/3.19 | 30.2643/0.9350/4.04 | 30.6401/0.9311/3.13 | 32.1018/0.9475/3.9 | 31.9334/0.9279/4.16 | 29.5196/0.9381/3.46
×4 | MFT | 26.2322/0.8100/5.96 | 31.1897/0.9345/5.73 | 26.2479/0.8604/1.54 | 27.0864/0.8727/1.59 | 25.8728/0.8460/1.53 | 28.6704/0.8795/1.54 | 26.2847/0.7174/1.54 | 25.1938/0.8457/1.5
×4 | TV | 30.6168/0.9114/3.79 | 37.8652/0.9842/5.15 | 29.2303/0.9156/1.73 | 29.0929/0.9279/1.67 | 29.2363/0.9158/1.65 | 30.8118/0.9301/1.53 | 29.9565/0.8716/1.28 | 28.0533/0.9035/1.84
×4 | TGV | 30.5942/0.9082/11.17 | 37.8796/0.9845/11.51 | 29.2475/0.9142/3.39 | 29.2905/0.9301/3.56 | 29.6773/0.9149/3.48 | 30.8159/0.9203/3.18 | 29.9634/0.8562/0.62 | 28.1761/0.9067/3.51
×4 | OGSTV | 30.6149/0.9108/15.77 | 37.8722/0.9818/19.8 | 29.2319/0.9188/3.00 | 29.3264/0.9249/3.38 | 29.7456/0.9086/6.75 | 30.8085/0.9212/3.73 | 29.9470/0.8717/2.93 | 28.1810/0.9072/3.42
×4 | HOGS4 | 30.8434/0.9374/16.27 | 37.9156/0.9868/21.73 | 29.3724/0.9229/3.43 | 29.5356/0.9335/4.04 | 29.8813/0.9186/6.86 | 30.8252/0.9316/3.96 | 30.0449/0.8750/3.73 | 28.2137/0.9202/4.41
Table 2. Infrared image super-resolution ×2 experiment results with added white Gaussian noise. Each cell reports PSNR/SSIM/TIME.

σ | Method | Street | Garden | Station | Gate | Car | Sidewalk | Building | Office
5 | MFT | 30.3844/0.8947/14.13 | 35.2944/0.9278/13.71 | 29.2544/0.8971/3.39 | 29.3962/0.8971/3.28 | 29.2663/0.8918/3.29 | 30.9504/0.9009/3.26 | 29.7203/0.8560/3.2 | 28.4523/0.8887/3.28
5 | TV | 30.1995/0.8673/3.56 | 35.7295/0.9063/3.81 | 30.5913/0.9213/3.79 | 30.7728/0.9236/3.39 | 31.1923/0.9132/1.28 | 32.0916/0.9291/5.1 | 30.7949/0.8729/5.82 | 29.1854/0.9176/9.38
5 | TGV | 30.2973/0.8299/11.58 | 35.8874/0.9111/11.12 | 30.5961/0.9143/3.59 | 30.7538/0.9225/3.71 | 31.2238/0.9126/3.23 | 32.1587/0.8649/3.48 | 30.8175/0.8664/3.43 | 29.2051/0.9162/3.63
5 | OGSTV | 30.3232/0.9044/15.27 | 35.9191/0.9481/15.58 | 30.6232/0.9223/3.63 | 30.8097/0.9252/2.98 | 31.2574/0.9044/2.53 | 32.1766/0.9307/3.51 | 30.8574/0.8755/3.68 | 29.2161/0.9212/7.53
5 | HOGS4 | 30.2752/0.8793/22.49 | 35.9576/0.9511/19.57 | 30.6954/0.9184/9.44 | 30.9303/0.9201/6.41 | 31.3745/0.9196/9.06 | 32.1300/0.9317/6.21 | 30.9188/0.9003/7.83 | 29.2294/0.9211/11.87
10 | MFT | 28.8193/0.8069/14.16 | 33.6306/0.8761/13.71 | 28.6383/0.8353/3.46 | 28.7479/0.8295/3.26 | 28.5947/0.8278/3.31 | 29.9312/0.8230/3.26 | 28.9476/0.8024/3.23 | 27.8191/0.8149/3.37
10 | TV | 28.5648/0.7735/3.95 | 34.0973/0.9009/8.24 | 29.8220/0.9056/4.17 | 30.2015/0.9033/1.92 | 30.2408/0.8963/1.9 | 31.4895/0.9131/3.14 | 29.6852/0.8180/3.67 | 28.4762/0.8996/8
10 | TGV | 29.0636/0.8046/12.32 | 34.1716/0.9454/11.73 | 30.0730/0.8939/3.4 | 30.1833/0.8575/3.74 | 30.3142/0.8919/3.21 | 31.5726/0.8840/3.56 | 29.8594/0.8207/3.48 | 28.5003/0.9003/3.53
10 | OGSTV | 29.0669/0.8574/19.8 | 34.2232/0.9440/16.86 | 30.0650/0.9050/2.75 | 30.3300/0.9090/2.43 | 30.3722/0.9026/2.45 | 31.6164/0.9171/3.26 | 29.8875/0.8262/3.1 | 28.4669/0.9050/6.93
10 | HOGS4 | 29.1334/0.8582/22.12 | 34.2920/0.9496/20.45 | 30.1622/0.9153/8.59 | 30.4083/0.9114/8.98 | 30.7266/0.9108/9.99 | 31.6563/0.9219/9.04 | 29.9636/0.8671/8.45 | 28.5119/0.9093/12.27
20 | MFT | 27.2470/0.7125/14.07 | 29.1815/0.6796/13.68 | 26.8062/0.6751/3.23 | 26.8204/0.6589/3.32 | 26.6719/0.6656/3.35 | 27.3254/0.6330/3.28 | 26.9133/0.6677/3.2 | 25.9781/0.6361/3.49
20 | TV | 27.2404/0.7474/10.14 | 31.0762/0.8125/12 | 28.7096/0.8539/4.7 | 29.1721/0.8591/2.46 | 29.1275/0.8520/2.71 | 30.3201/0.8641/4.27 | 28.2517/0.7432/4.07 | 27.7072/0.8620/7.55
20 | TGV | 27.3241/0.7799/11.62 | 31.5214/0.8712/11.64 | 28.8323/0.8381/3.45 | 29.2118/0.8427/3.28 | 29.1635/0.8699/3.4 | 30.4717/0.8869/3.6 | 28.3606/0.7062/3.49 | 27.8472/0.7391/3.24
20 | OGSTV | 27.3548/0.7504/22.15 | 31.3022/0.8387/16.52 | 28.9738/0.8598/2.82 | 29.3451/0.8651/2.73 | 29.2848/0.8592/2.92 | 30.5138/0.8687/2.93 | 28.3291/0.7002/2.95 | 27.8163/0.7321/3.42
20 | HOGS4 | 27.7175/0.7979/21.14 | 31.8839/0.8968/21.09 | 29.3417/0.8926/9.07 | 29.6127/0.8831/9.95 | 29.8622/0.8890/11.91 | 30.7887/0.8993/10.02 | 28.5532/0.7851/11.89 | 27.8925/0.8836/11.48
30 | MFT | 24.9228/0.5721/14.41 | 26.1154/0.5160/13.68 | 24.8624/0.5289/3.29 | 24.7843/0.5070/3.31 | 24.6631/0.5187/3.28 | 24.9161/0.4727/3.31 | 24.8941/0.5420/3.32 | 24.0533/0.4860/3.46
30 | TV | 26.0036/0.6568/9.48 | 28.7370/0.6810/9.33 | 28.0216/0.7971/3.42 | 28.3876/0.8006/3.53 | 28.2183/0.7961/3.14 | 29.4937/0.8061/3.37 | 27.4704/0.6891/4.37 | 27.1226/0.8013/6.12
30 | TGV | 26.1381/0.7028/11.26 | 29.5419/0.7842/11.72 | 27.8666/0.7898/3.71 | 28.2098/0.7820/3.45 | 28.3737/0.8541/3.48 | 29.6464/0.8514/3.76 | 27.5328/0.7341/3.43 | 27.2024/0.7665/3.49
30 | OGSTV | 26.1238/0.6547/14.65 | 29.5298/0.8103/19.36 | 28.2897/0.8634/2.96 | 28.7690/0.8678/3.43 | 28.5982/0.8556/3.51 | 29.8894/0.8450/3.74 | 27.5448/0.7378/4.32 | 27.3762/0.8566/5.05
30 | HOGS4 | 26.7282/0.7440/18.03 | 29.9149/0.8372/23.99 | 28.6433/0.8727/11.37 | 28.8957/0.8681/9.66 | 29.1366/0.8653/11.72 | 29.9323/0.8742/11.73 | 27.7340/0.7512/11.58 | 27.4272/0.8629/11.06
