Article

Multi-Frame Blind Super-Resolution Based on Joint Motion Estimation and Blur Kernel Estimation

1 School of Computer Science, Sichuan University, Chengdu 610065, China
2 Enrollment and Employment Department, Sichuan Normal University, Chengdu 610066, China
3 Science and Technology Department, Southwest Jiaotong University Press, Chengdu 610031, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(20), 10606; https://doi.org/10.3390/app122010606
Submission received: 10 August 2022 / Revised: 28 September 2022 / Accepted: 17 October 2022 / Published: 20 October 2022

Abstract

Multi-frame super-resolution compensates for the limitations of sensor hardware and significantly improves image resolution by exploiting inter-frame and intra-frame information. Inaccurate blur kernel estimation enlarges the distortion of the estimated high-resolution image, so multi-frame blind super-resolution with an unknown blur kernel is more challenging. To reduce the impact of inaccurate motion estimation and blur kernel estimation on the super-resolved image, we propose a novel method combining motion estimation, blur kernel estimation and super-resolution. The confidence weights of the low-resolution images and the motion-model parameters obtained during image reconstruction are fed into the modified motion estimation and blur kernel estimation. At the same time, a Jacobian matrix, which better describes the motion change, is introduced to further correct motion estimation errors. Experiments on synthetic and real data show that the proposed method clearly outperforms competing approaches: the reconstructed high-resolution image retains image details effectively, and artifacts are greatly reduced.

1. Introduction

People increasingly need high-resolution (HR) images, either for better visual quality or for subsequent image processing. However, improving image resolution in hardware is difficult because of the limitations and high cost of sensor manufacturing. In recent years, super-resolution (SR) technology, which requires no change to the hardware, has developed rapidly. It falls into two categories, non-blind super-resolution and blind super-resolution, according to whether the blur kernel is known.
Multi-frame super-resolution (MFSR) refers to the reconstruction of the HR image from multi-frame low-resolution (LR) images in the same scene. This paper mainly studies multi-frame blind super-resolution. It is often applied in satellite remote sensing [1], low-quality video surveillance [2] and mobile phone videos [3], etc.
Influenced by unknown motion, blur and noise, the quality of HR images super-resolved by existing multi-frame blind super-resolution methods is often not ideal in practical applications. It is difficult to accurately fuse the scene information observed in multiple LR images because of noise and blur. In addition, irregular pixel movement directly affects the estimated HR image.
To cope with the above problems, we propose a novel multi-frame blind super-resolution method that jointly performs motion estimation, blur kernel estimation and noise suppression. We couple the parameters of image reconstruction, blur kernel estimation and motion estimation while estimating the high-resolution image, blur kernel and motion matrix as accurately as possible. Moreover, to avoid interference from external data, we directly use the information provided by successive frames to boost image resolution and quality, without the training and learning required by deep learning methods. The proposed algorithm makes two major contributions to improving the quality of the estimated HR image:
(1) We combine the confidence weight in the image reconstruction stage to correct the motion estimation error in the iterative process of motion estimation. At the same time, we introduce the Jacobian matrix of motion parameters to reduce the error and register the LR images as much as possible.
(2) We combine the confidence weight in the image reconstruction stage and the corrected motion estimation to correct the blur estimation error in the iterative process of blur kernel estimation. More accurate blur kernel estimation has a direct impact on the effect of image reconstruction.
The paper is organized in the following manner. We introduce the related work of MFSR in Section 2. The observation model of multi-frame blind super-resolution is constructed in Section 3. We specifically describe the proposed algorithm in Section 4 and present the experimental results on synthetic data and real data in Section 5. The research is summarized in Section 6.

2. Related Work of MFSR

Non-blind super-resolution. Non-blind super-resolution methods include interpolation-based methods [4], reconstruction-based methods [5,6,7] and learning-based methods [8,9,10]. Michel Bätz et al. [4] designed an extended dual weighting scheme based on Voronoi tessellation, relying on motion confidence weights and distance weights. X. Liu et al. [6] developed an adaptive bilateral total variation (ABTV) regularization method and used half-quadratic estimation to take the error norm adaptively. T. Nascimento et al. [7] proposed a method combining Demons registration and regularized Bayesian reconstruction. S. Lu et al. [8] reconstructed the HR image by learning texture details with local linear embedding. T. Kato et al. [9] proposed a sparse-coding method that optimizes a single objective function. K. Ning et al. [10] proposed a method to obtain more high-frequency detail, fusing adjacent frames with a registration module based on Generative Adversarial Networks (GAN). Non-blind super-resolution methods are effective when the blur kernel is known or blur is not considered; when the blur kernel does not match, the quality of the super-resolved image is very poor. Figure 1 shows the influence of blur kernel mismatch on the SR result, where σ_blur denotes the blur kernel width. If the blur kernel width is smaller than the real value, the SR result is over-smoothed and high-frequency information is blurred. If it is larger than the real value, the SR result shows obvious ringing artifacts due to over-enhancement of the high-frequency edges.
Blind super-resolution. Multi-frame blind super-resolution methods fall into two categories according to whether HR image reconstruction and deblurring are performed separately. E. Faramarzi et al. [11] first estimated the blur iteratively from enhanced edges and then reconstructed the HR image with a non-blind super-resolution method using the estimated blur kernel. Q. Qian and B.K. Gunturk [12] proposed a blind super-resolution method that first deblurs each LR image using the estimated blur kernel and then applies MFSR to the deblurred images. A. Buades et al. [13] proposed a two-step MFSR method: in the first stage, a non-linear filtering method produces an upsampled but still blurry image using inter-frame motion and spatiotemporal redundancy; in the second stage, a variational method performs single-image deblurring. L. Huang and Y. Xia [14] first proposed an accurate blur kernel estimation based on matrix decomposition and then presented a matrix-variable optimization method for blind super-resolution. Such methods, which reconstruct the HR image using a blur kernel estimated first, have a simple operation flow, and the blur kernel is estimated only once. However, they tend to magnify the error caused by inaccurate blur kernel estimation, which leads to an unsatisfactory reconstructed HR image.
Another kind of multi-frame blind super-resolution estimates the blur kernel and reconstructs the HR image simultaneously. H. Zhang et al. [15] proposed a method performing image alignment, deblurring and MFSR under a projective motion path assumption. Ce Liu et al. [16] proposed a Bayesian method for estimating the motion, blur kernel and noise simultaneously while reconstructing the original high-resolution frame. Z. Ma et al. [17] proposed an expectation-maximization algorithm to guide motion blur estimation and high-resolution image reconstruction. Zhen Lv et al. [18] proposed a novel method of joint image registration and Point Spread Function (PSF) estimation to produce the reconstructed HR image, formulated as a convex optimization problem. Z. Shi et al. [19] proposed a blind MFSR algorithm combining ANN learning and non-subsampled Contourlet directional image representation. T. Honda et al. [20] proposed a joint deblurring, denoising and super-resolution method for multi-frame RGB/NIR imaging. Such methods suit complex environments, but they do not converge easily, and the quality of the reconstructed HR image is often not ideal because many uncertain factors must be considered at the same time. To accelerate convergence and obtain better image quality, we feed the confidence weights of the LR images and the motion-model parameters obtained during image reconstruction into the modified motion estimation and blur kernel estimation.

3. Observation Model of Multi-Frame Blind Super-Resolution

Classical MFSR methods assume that the low-resolution images are obtained from an unknown high-resolution image after warping, blurring, downsampling and the addition of noise. The observation model of multi-frame blind super-resolution can therefore be expressed as
$y_n = D K_n F_n x + \varepsilon_n$,  (1)
where y_n represents the n-th LR image, n = 1, 2, …, N, and N is the number of LR images. D is the downsampling matrix, K_n the blur matrix of the n-th LR image, F_n the sub-pixel displacement motion matrix of the n-th LR image, x the HR image, and ε_n the noise of the n-th LR image. The observation model of MFSR is shown in Figure 2. Mathematically, in Equation (1) x needs to be calculated, y_n and D are known, and K_n, F_n and ε_n are unknown. The solution is unstable and not unique, so MFSR is an ill-posed inverse problem. Because of these unknown factors, the calculated HR image is unlikely to be completely consistent with the real HR image; super-resolution reconstruction can only make the reconstructed HR image as close to it as possible.
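The degradation chain of Equation (1) can be sketched numerically. The following is a deliberately simplified illustration, not the paper's implementation: it uses an integer-pixel `np.roll` translation for F_n, a uniform box blur for K_n and plain decimation for D, whereas the paper uses sub-pixel affine motion and Gaussian blur kernels.

```python
import numpy as np

def degrade(x, shift_px, blur_radius, scale, sigma_noise, rng):
    """Sketch of Eq. (1): y_n = D K_n F_n x + eps_n.

    Simplifications (illustration only): F_n is an integer-pixel
    translation via np.roll, K_n a uniform box blur, D decimation.
    """
    warped = np.roll(x, shift_px, axis=(0, 1))                 # F_n x
    k = 2 * blur_radius + 1
    kernel = np.ones((k, k)) / k**2                            # box stand-in for K_n
    pad = np.pad(warped, blur_radius, mode="edge")
    blurred = np.zeros_like(warped)
    for i in range(k):                                         # K_n F_n x
        for j in range(k):
            blurred += kernel[i, j] * pad[i:i + warped.shape[0],
                                          j:j + warped.shape[1]]
    down = blurred[::scale, ::scale]                           # D K_n F_n x
    return down + rng.normal(0.0, sigma_noise, down.shape)     # + eps_n

rng = np.random.default_rng(0)
x = rng.random((64, 64))
y = degrade(x, shift_px=(1, -2), blur_radius=2, scale=2,
            sigma_noise=0.01, rng=rng)
```

Each of the N LR frames would be produced this way with its own motion, blur and noise realization.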
K, F and x can be estimated in the Bayesian maximum a posteriori (MAP) framework as
$\{\hat{K}, \hat{F}, \hat{x}\} = \arg\max_{K,F,x} p(K, F, x \mid y_1, y_2, \dots, y_N) = \arg\max_{K,F,x} p(y_1, y_2, \dots, y_N \mid K, F, x)\, p(x)\, p(K)\, p(F)$,  (2)
where p(x) is the image prior, p(K) the blur prior, and p(F) the motion prior.

4. Proposed Multi-Frame Blind Super-Resolution Algorithm

We employ the alternating minimization (AM) method to estimate the HR image x, the motion matrix F and the blur kernel matrix K iteratively. To overcome the poor convergence of simple alternating minimization, we couple the estimation of motion parameters, blur kernel estimation and the parameters of image reconstruction, and finally output the estimated high-resolution image. In the image reconstruction stage, a coarse-to-fine strategy is used to upscale to the required scale factor. We modify the motion matrix by combining the confidence weights from the image reconstruction stage with the Jacobian matrix of motion parameters, and then use the modified motion matrix and the confidence weight of each LR image to modify the blur matrix.

4.1. Preparation and Initial Settings

The middle frame of LR images is selected as the reference frame. We use the inter-frame information to calculate the relationship between each frame and the reference frame. The initial values of blur kernel, motion estimation and the HR image are set first. Bicubic interpolation is used for the initialization of the estimated HR image. Enhanced correlation coefficient (ECC) optimization [21] is employed for the initialization of motion estimation. The blind image deblurring method [22] is used for the initialization of the blur kernel estimation.

4.2. Image Reconstruction

With K and F fixed in the image reconstruction step, the estimated HR image x can be computed as
$\hat{x} = \arg\max_{x} p(y_1, y_2, \dots, y_N \mid K, F, x)\, p(x)$,  (3)
We adopt the regularization technique to estimate the HR image x which can be expounded as
$\hat{x} = \arg\min_{x} \left\{ \sum_{n=1}^{N} W \cdot h\left( D K_n F_n x - y_n \right) + \lambda\, \gamma(x) \right\}$,  (4)
where W is the confidence weight matrix, h(r) is the Huber loss function used as the data fidelity term, γ(x) is the regularization term with respect to x, and λ is the trade-off parameter between the fidelity and regularization terms. We use cross validation [5] to determine the value of λ.
The larger the observation error, the smaller the corresponding confidence weight. This paper estimates the confidence weight matrix W = (β_1, …, β_n, …, β_N)^T using the method proposed in our previous article [23]. β_n, the confidence weight of the n-th LR image, is given by
$\beta_n = \begin{cases} \dfrac{\operatorname{mean}(r_n)}{r_n} & \text{if } |r_{n,i}| \le c\,\sigma_f^t \\[4pt] \dfrac{\operatorname{mean}(r_n)}{r_n} \cdot \dfrac{c\,\sigma_f^t}{|r_{n,i}|} & \text{otherwise} \end{cases}$  (5)
where r_n = D K_n F_n x − y_n is the observation error of the n-th LR image and mean(r_n) is the mean value of r_n. c is a positive constant; we set c = 2. σ_f^t is the scale parameter at iteration t. To discriminate inliers and outliers adaptively, σ_f^t can be estimated using the median absolute deviation (MAD) [24]:
$\sigma_f^t = \sigma_0 \cdot \operatorname{MAD}\!\left( r^{t-1} \mid \beta^{t-1} \right)$,  (6)
where σ 0 = 1.4826 .
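The weighting scheme above can be sketched as follows. This is one plausible reading of Equation (5) combined with the MAD-based scale, for illustration only; the paper applies it per LR frame inside the reconstruction solver, and the exact normalization of mean(r_n)/r_n may differ from this element-wise form.

```python
import numpy as np

def confidence_weights(r, c=2.0, sigma0=1.4826):
    """Illustrative reading of Eqs. (5)-(6): a MAD-based scale estimate,
    then per-pixel weights that shrink for residuals beyond c * sigma."""
    r = np.asarray(r, dtype=float)
    sigma = sigma0 * np.median(np.abs(r - np.median(r)))  # MAD scale, Eq. (6)
    w = np.mean(r) / r                                    # inlier weight
    outlier = np.abs(r) > c * sigma
    w[outlier] *= c * sigma / np.abs(r[outlier])          # down-weight outliers
    return w

w = confidence_weights(np.array([1.0, 2.0, 3.0, 4.0, 100.0]))
```

The gross outlier (residual 100) receives a far smaller weight than the well-fitting pixels, which is the intended robustness effect.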
The Huber loss function h(r), used for the fidelity term, is defined as
$h(r) = \begin{cases} r^2 & \text{if } |r| \le \delta \\ 2\delta |r| - \delta^2 & \text{otherwise} \end{cases}$  (7)
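A direct implementation of the Huber loss h(r) above, quadratic for small residuals and linear beyond δ so that large residuals (outliers) exert only bounded influence:

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss h(r): r^2 for |r| <= delta, 2*delta*|r| - delta^2 beyond.
    Both branches agree at |r| = delta, so the loss is continuous."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= delta,
                    r**2,
                    2 * delta * np.abs(r) - delta**2)
```

For example, huber([0.5]) is 0.25 (quadratic branch) while huber([3.0]) is 5.0 (linear branch), far below the 9.0 a pure quadratic loss would assign.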
Bilateral Total Variation (BTV) [25] is used for the regularization term γ(x). BTV exploits image sparsity to handle outliers and is expressed as
$\gamma(x) = \sum_{n=-P}^{P} \sum_{m=-P}^{P} \alpha^{|m|+|n|} \left\| x - S_x^n S_y^m x \right\|_1$,  (8)
where α is the scale weight parameter, computed using the method in [19], and P is the size of the sliding window. S_x^n denotes a shift of n pixels in the x direction, and S_y^m a shift of m pixels in the y direction.
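The BTV regularizer can be evaluated as below. Note that `np.roll` implements the shift operators S_x^n and S_y^m with wrap-around borders, a simplification of whatever boundary handling the paper's solver uses; the values of P and α here are illustrative.

```python
import numpy as np

def btv(x, P=2, alpha=0.7):
    """BTV regularizer gamma(x): alpha-decayed sum of L1 norms between x
    and its shifted copies (circular shifts stand in for S_x^n, S_y^m)."""
    total = 0.0
    for n in range(-P, P + 1):
        for m in range(-P, P + 1):
            shifted = np.roll(np.roll(x, n, axis=1), m, axis=0)  # S_x^n S_y^m x
            total += alpha ** (abs(m) + abs(n)) * np.abs(x - shifted).sum()
    return total
```

A constant image has zero BTV penalty, while any image with spatial variation is penalized, which is what drives the prior toward piecewise-smooth solutions.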
The Scaled Conjugate Gradient (SCG) method [26] is employed to solve Equation (4).

4.3. Motion Estimation

With K and x fixed in the motion estimation step, the motion matrix F can be computed as
$\hat{F} = \arg\max_{F} p(y_1, y_2, \dots, y_N \mid K, F, x)\, p(F)$,  (9)
We adopt the affine transformation model described by the parameter vector φ_n = (a_1, a_2, a_3, a_4, t_x, t_y)^T, with translation t = (t_x, t_y)^T and parameters a_i (i = 1, 2, 3, 4) representing rotation, scaling and other changes for the n-th LR image y_n.
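The affine model can be illustrated by mapping a point under φ_n; the exact ordering of a_1…a_4 within the 2×2 linear block is an assumption made here for illustration, not taken from the paper.

```python
import numpy as np

def warp_point(phi, p):
    """Map a point p = (x, y) under the affine motion model
    phi_n = (a1, a2, a3, a4, tx, ty); (a1..a4) encode rotation,
    scaling and shear, (tx, ty) the translation."""
    a1, a2, a3, a4, tx, ty = phi
    A = np.array([[a1, a2, tx],
                  [a3, a4, ty]])         # 2x3 affine transform
    return A @ np.array([p[0], p[1], 1.0])
```

With the identity linear block and a translation of (2, 3), the point (1, 1) maps to (3, 4); a 90-degree rotation block (0, −1, 1, 0) maps (1, 0) to (0, 1).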
In object motion, the Jacobian matrix is an important part of instantaneous motion. To better describe the sub-pixel motion model φ_n, we introduce the Jacobian matrix J(φ_n) of the motion parameters for the n-th LR image:
$J(\varphi_n) = \dfrac{\partial\, D K_n F_n}{\partial \varphi_n}$,  (10)
The derivative of F_n w.r.t. φ_n is computed using bilinear interpolation [27]. The motion matrix F_n describes the sub-pixel motion of the n-th LR image according to the motion parameter φ_n relative to x. Its estimate can be expressed as
$\hat{F}_n = \arg\min_{F_n} \left\| \left[ J(\varphi_n)^T \beta_n J(\varphi_n) \right]^{-1} J(\varphi_n)^T \beta_n \left( D K_n F_n x - y_n \right) \right\|_2^2$,  (11)
where β_n, the confidence weight of the n-th LR image, is calculated in the image reconstruction stage, and J(φ_n) is the Jacobian matrix of the n-th LR image. We use the Conjugate Gradient (CG) method to solve Equation (11).

4.4. Blur Kernel Estimation

With F and x fixed in the blur kernel estimation step, the blur matrix K can be determined as
$\hat{K} = \arg\max_{K} p(y_1, y_2, \dots, y_N \mid K, F, x)\, p(K)$,  (12)
We estimate the blur kernel using the estimated HR image x and motion estimate F. The blur matrix K_n of the n-th LR image is given by
$\hat{K}_n = \arg\min_{K_n} \left\{ \beta_n \cdot \left\| D K_n F_n x - y_n \right\|_1 + \left\| K_n \right\|_1 \right\}$,  (13)
where β_n, the confidence weight of the n-th LR image, is calculated in the image reconstruction stage. Setting
$A_n = F_n x$,  (14)
Equation (13) can be rewritten as
$\hat{K}_n = \arg\min_{K_n} \left\{ \beta_n \cdot \left\| D K_n A_n - y_n \right\|_1 + \left\| K_n \right\|_1 \right\}$,  (15)
Equation (15) can be solved via the following linear system using the iteratively reweighted least squares (IRLS) method [28]:
$\left[ A_n^T D^T \beta_n D A_n + \left( I_x^T W_s I_x + I_y^T W_s I_y \right) \right] K_n = A_n^T D^T \beta_n y_n$,  (16)
where the matrices I_x and I_y represent the x- and y-derivative filters, respectively. The diagonal weight matrix W_s is
$W_s = \operatorname{diag}\!\left\{ \left[ (K_n)^2 + \epsilon \right]^{-\frac{1}{2}} \right\}$,  (17)
where ϵ is a tuning parameter, which is set as ϵ = 0.0001 .
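The reweighting of Equation (17) is straightforward to compute: small kernel entries receive large weights, which pushes them toward zero in the next quadratic solve and thereby approximates the L1 sparsity prior on K_n.

```python
import numpy as np

def irls_weight(k, eps=1e-4):
    """Diagonal IRLS weight W_s from Equation (17): each diagonal entry
    is (k_i^2 + eps)^(-1/2); eps keeps the weight finite at k_i = 0."""
    k = np.asarray(k, dtype=float).ravel()
    return np.diag((k**2 + eps) ** -0.5)

W = irls_weight([0.0, 1.0])
```

With ε = 10⁻⁴, a zero kernel entry gets weight 100 while an entry of 1 gets weight ≈ 1, so near-zero entries are penalized far more strongly.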
The corrected blur matrix and motion matrix are used to return to the image reconstruction stage to calculate the super-resolved HR image iteratively. The number of iterations is set to five in this paper. The final algorithm of multi-frame blind super-resolution is demonstrated in Algorithm 1.
Algorithm 1. The final algorithm of multi-frame blind super-resolution.
Input: LR images y 1 , y 2 , , y N and upsampling factor s
Initialize x^(0) = (y_{N/2})↑s  % bicubic interpolation of the middle frame,
     F^(0)  % enhanced correlation coefficient (ECC) optimization [21],
     K^(0)  % blind image deblurring method [22]
for    i : = 1   t o   I do  % I is the number of iterations
 %step1: image reconstruction
 while not satisfy SCG stopping criterion do
   Compute the confidence weights matrix W from Equation (5)
   Estimate the HR image x ( i ) by solving Equation (4)
 end while
 %step2: motion estimation
 while not satisfy CG stopping criterion do
   Compute the Jacobian matrix J ( φ n ) from Equation (10)
   Correct the motion matrix F ( i ) by solving Equation (11)
 end while
 %step3: blur kernel estimation
 while not satisfy stopping criterion do
   Compute the diagonal weight matrix W s from Equation (17)
   Correct the blur matrix K^(i) by solving Equation (16)
 end while
i = i + 1
end for
Output: x = x ( i )

5. Experimental Results and Analysis

Synthetic data and real data are used to test the proposed multi-frame blind super-resolution algorithm, which is compared with BayesSR [16], SRB [17], DeepSR [29] and FBSR [14]. Simulations are conducted in MATLAB (R2018b) on a notebook computer with an Intel(R) Core(TM) i7-8650U CPU and 16 GB RAM. We first convert the input RGB LR images to YCbCr color space. The chromatic components Cb and Cr are super-resolved by bicubic interpolation, while our algorithm processes only the Y component, which contains the luminance (geometry) of the image. The final estimated HR image is composed of the processed Y component and the chromatic components.
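The luminance extraction step can be sketched with the standard BT.601 luma weights; that this is the YCbCr variant used is an assumption, since the paper does not specify one.

```python
import numpy as np

def rgb_to_y(rgb):
    """Extract the luminance (Y) channel from an RGB array in [0, 1]
    using BT.601 weights; the paper super-resolves only Y, while Cb/Cr
    are upsampled by bicubic interpolation."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

y_channel = rgb_to_y(np.ones((2, 2, 3)))  # a white image
```

Processing only Y is a common economy: the human visual system is far more sensitive to luminance detail than to chroma detail.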

5.1. Experiments on Synthetic Data

For synthetic data, the ground-truth HR images are available, so our proposed algorithm and the comparison methods are first measured on synthetic data. To evaluate image quality, we employ peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and the information fidelity criterion (IFC).
Thirty-one LR images are created from one HR image by applying random motion parameters, blur kernels, mixed noise and downsampling. Random translations range from −3 to +3 pixels, and random rotation angles from −3° to +3°. The Gaussian blur kernel width σ_blur of each LR image ranges from 0.5 to 3.5, and the kernel size is 15 × 15 in this paper. Mixed noise is added in the form of random Gaussian noise and Poisson noise, with the random standard deviation of the Gaussian noise, σ_noise, ranging from 0.005 to 0.035. The synthetic LR images are generated from the common natural images shown in Figure 3.
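Sampling the per-frame degradation parameters within the ranges stated above might look as follows; the seed and generator choice are arbitrary, illustrative decisions, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 31  # number of LR frames generated from one HR image

# One random degradation parameter set per LR frame,
# drawn uniformly within the ranges stated in the text.
params = [{
    "translation": rng.uniform(-3.0, 3.0, size=2),  # pixels
    "rotation_deg": rng.uniform(-3.0, 3.0),         # degrees
    "sigma_blur": rng.uniform(0.5, 3.5),            # Gaussian kernel width
    "sigma_noise": rng.uniform(0.005, 0.035),       # Gaussian noise std
} for _ in range(N)]
```

Each parameter set would then drive one warp–blur–downsample–noise pass over the HR image to produce one LR frame.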
Figure 4 shows that the SR result improves gradually as the blur kernel and motion estimates are corrected; the artifacts caused by inaccurate blur kernel estimation disappear progressively over the iterations.
PSNR, SSIM and IFC results on synthetic LR images are shown in Table 1. Visual comparison results of the Baby image and the Butterfly image are shown in Figure 5 and Figure 6, respectively.
Table 1 shows that the values of PSNR, SSIM and IFC are improved on synthetic data, increasing by 3.92%, 6.56% and 6.32%, respectively.
As Figure 5 shows, when the noise level is high, the noise outliers of simple bicubic interpolation are particularly obvious. There are still slight noise points in the BayesSR and SRB results. The DeepSR method handles noise much better, but its image is blurred, and the FBSR method is worse than our proposed method in preserving image details. In comparison, the proposed method preserves the specific details of the image better while also suppressing the noise. The main reason is that our algorithm suppresses noise by iteratively updating the explicit detection via the confidence weight of each low-resolution image together with the implicit detection of the Huber loss function. In addition, motion estimation and blur kernel estimation are modified in the iterations, so the visual effect of the reconstructed high-resolution image is also better.
At low noise levels, the influence of blur and motion on super-resolution reconstruction dominates. We can see from Figure 6 that the high-resolution images reconstructed by the bicubic interpolation, BayesSR and DeepSR methods are blurred. The high-resolution image estimated by the SRB method has obvious point artifacts, and the FBSR method produces some ringing artifacts. Our proposed algorithm better balances deblurring and artifact removal, so the reconstructed high-resolution image has higher quality, which owes much to the combination of motion estimation, blur kernel estimation and image reconstruction.

5.2. Experiments on Real Data

In addition to our own synthetic data, the proposed algorithm is tested on real data from [17]. Each group of images in the real dataset contains 31 low-resolution images of the same scene with various motions and blurring. The quality of real images cannot be evaluated by full-reference criteria such as PSNR, SSIM and IFC, so we use no-reference assessment metrics: the Natural Image Quality Evaluator (NIQE) [30], the Perception-Based Image Quality Evaluator (PIQE) [31] and the No-Reference Quality Metric (NRQM) [32]. The results of NIQE, PIQE and NRQM on real data are presented in Table 2. Visual comparison results for the IMG_0663 image and the IMG_1480 image are shown in Figure 7 and Figure 8.
Table 2 shows that the values of NIQE, PIQE and NRQM are improved on real data, increasing by 4.45%, 10.56% and 15.75%, respectively.
It can be seen from Figure 7 that bicubic interpolation, BayesSR and DeepSR have different degrees of blur. SRB and FBSR have obvious ringing artifacts. Figure 7 and Figure 8 show that the proposed algorithm can produce better results on real data. The estimated HR image can preserve the details of the image and has no obvious artifacts.
The experimental results on synthetic and real data show that our proposed method is superior to previous multi-frame blind super-resolution methods. The reason is that we couple the parameters of image reconstruction, blur kernel estimation and motion estimation, so the error of each estimate is further reduced through iteration. On the one hand, this method can quickly correct the errors of motion estimation and blur estimation; on the other hand, it accelerates the convergence of our algorithm, and the estimated HR image is very stable.

6. Conclusions

In practical MFSR applications, the reconstructed HR images are greatly affected by motion, blur and noise. Most current super-resolution methods assume that the blur kernel is known; multi-frame blind super-resolution under an unknown blur kernel is challenging. We develop a novel multi-frame blind super-resolution method in this paper. In contrast to deep learning methods, our method handles multi-frame super-resolution without training and learning. We first calculate the motion estimate and blur kernel estimate of the image independently. Then, we use these results and the confidence weights of multi-frame blind super-resolution to correct the motion estimation and blur kernel estimation. In the iterative motion estimation, the Jacobian matrix of motion parameters is introduced to estimate the motion more accurately. The final estimated HR image is obtained gradually in the iterative correction process. Coupling the estimation of motion parameters, blur kernel estimation and the parameters of image reconstruction overcomes the poor convergence of simple alternating minimization.
Compared to existing methods, our proposed method better preserves image details without obvious artifacts. We mainly study the MFSR of natural images; our next task is to study the MFSR of special images or specific domains, such as low-light and multi-modal MFSR. Our future work will be directed to the development of new priors for such applications.

Author Contributions

Conceptualization, S.L. and M.W.; methodology, S.L.; software, Q.H.; validation, S.L. and Q.H.; formal analysis, M.W.; writing—original draft preparation, S.L.; writing—review and editing, M.W.; visualization, Q.H.; supervision, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Liu, H.; Gu, Y.; Wang, T.; Li, S. Satellite Video Super-Resolution Based on Adaptively Spatiotemporal Neighbors and Nonlocal Similarity Regularization. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8372–8383.
2. Seibel, H.; Goldenstein, S.; Rocha, A. Eyes on the target: Super-resolution and license-plate recognition in low-quality surveillance videos. IEEE Access 2017, 5, 20020–20035.
3. Wronski, B.; Garcia-Dorado, I.; Ernst, M.; Kelly, D.; Krainin, M.; Liang, C.K.; Levoy, M.; Milanfar, P. Handheld Multi-Frame Super-Resolution. ACM Trans. Graph. 2019, 38, 28.
4. Bätz, M.; Eichenseer, A.; Kaup, A. Multi-image super resolution using a dual weighting scheme based on voronoi tessellation. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016.
5. Köhler, T.; Huang, X.; Schebesch, F.; Aichert, A.; Maier, A.; Hornegger, J. Robust Multiframe Super-Resolution Employing Iteratively Re-Weighted Minimization. IEEE Trans. Comput. Imaging 2016, 2, 42–58.
6. Liu, X.; Chen, L.; Wang, W.; Zhao, J. Robust Multi-Frame Super-Resolution Based on Spatially Weighted Half-Quadratic Estimation and Adaptive BTV Regularization. IEEE Trans. Image Process. 2018, 27, 4971–4986.
7. Nascimento, T.; Salles, E. Multi-frame super-resolution combining Demons registration and regularized Bayesian reconstruction. IEEE Signal Process. Lett. 2020, 27, 2009–2013.
8. Lu, S.P.; Li, S.M.; Wang, R.; Lafruit, G.; Cheng, M.M.; Munteanu, A. Low-Rank Constrained Super-Resolution for Mixed-Resolution Multiview Video. IEEE Trans. Image Process. 2021, 30, 1072–1085.
9. Kato, T.; Hino, H.; Murata, N. Double sparsity for multi-frame super resolution. Neurocomputing 2017, 240, 115–126.
10. Ning, K.; Zhang, Z.; Han, K.; Han, S.; Zhang, X. Multi-frame super-resolution algorithm based on WGAN. IEEE Access 2021, 9, 85839–85851.
11. Faramarzi, E.; Rajan, D.; Fernandes, F. Blind Super Resolution of Real-Life Video Sequences. IEEE Trans. Image Process. 2016, 25, 1544–1555.
12. Qian, Q.; Gunturk, B.K. Blind super-resolution restoration with frame-by-frame nonparametric blur estimation. Multidimens. Syst. Signal Process. 2016, 27, 255–273.
13. Buades, A.; Duran, J.; Navarro, J. Motion-Compensated Spatio-Temporal Filtering for Multi-Image and Multimodal Super-Resolution. Int. J. Comput. Vis. 2019, 127, 1474–1500.
14. Huang, L.; Xia, Y. Fast Blind Image Super Resolution Using Matrix-Variable Optimization. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 945–955.
15. Zhang, H.; Carin, L. Multi-Shot Imaging: Joint Alignment, Deblurring and Resolution-Enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014.
16. Liu, C.; Sun, D. On Bayesian Adaptive Video Super Resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 346–360.
17. Ma, Z.; Liao, R.; Xin, T.; Li, X.; Jia, J.; Wu, E. Handling Motion Blur in Multi-Frame Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
18. Lv, Z.; Jia, Y.; Zhang, Q. Joint image registration and point spread function estimation for the super-resolution of satellite images. Signal Process. Image Commun. 2017, 58, 199–211.
19. Shi, Z.; Tian, F.; Wang, Y.; Ran, J. Blind multi-image super-resolution based on combination of ANN learning and non-subsampled Contourlet directional image representation. Signal Image Video Process. 2018, 12, 25–31.
20. Honda, T.; Sugimura, D.; Hamamoto, T. Multi-frame RGB/NIR imaging for low-light color image super-resolution. IEEE Trans. Comput. Imaging 2020, 6, 248–262.
21. Evangelidis, G.D.; Psarakis, E.Z. Parametric image alignment using enhanced correlation coefficient maximization. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1858–1865.
22. Wen, F.; Ying, R.; Liu, Y.; Liu, P.; Truong, T.K. A Simple Local Minimal Intensity Prior and An Improved Algorithm for Blind Image Deblurring. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2923–2937.
23. Liu, S.S.; Wang, M.H.; Huang, Q.B.; Liu, X. Robust Multi-Frame Super-Resolution Based on Adaptive Half-Quadratic Function and Local Structure Tensor Weighted BTV. Sensors 2021, 21, 5533.
24. Scales, J.A.; Gersztenkorn, A. Robust methods in inverse theory. Inverse Probl. 1988, 4, 1071.
25. Farsiu, S.; Robinson, M.D.; Elad, M.; Milanfar, P. Fast and robust multiframe super resolution. IEEE Trans. Image Process. 2004, 13, 1327–1344.
26. Nabney, I.T. NETLAB: Algorithms for Pattern Recognition, 1st ed.; Springer: New York, NY, USA, 2001.
27. He, Y.; Yap, K.H.; Chen, L.; Chau, L.P. A Nonlinear Least Square Technique for Simultaneous Image Registration and Super-Resolution. IEEE Trans. Image Process. 2007, 16, 2830–2841.
28. Liu, C. Beyond Pixels: Exploring New Representations and Applications for Motion Analysis. Ph.D. Thesis, MIT, Cambridge, MA, USA, 2009.
29. Liao, R.; Tao, X.; Li, R.; Ma, Z.; Jia, J. Video super-resolution via deep draft-ensemble learning. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 531–539.
30. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a ‘completely blind’ image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
31. Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the Twenty First National Conference on Communications, Mumbai, India, 27 February–1 March 2015.
32. Ma, C.; Yang, C.Y.; Yang, X.; Yang, M.H. Learning a no-reference quality metric for single-image super-resolution. Comput. Vis. Image Underst. 2017, 158, 1–16.
Figure 1. SR results using different blur kernels when the real blur kernel width is 1. (a) σ_blur = 0.5; (b) σ_blur = 1; (c) σ_blur = 1.5.
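Figure 1 shows how sensitive the reconstruction is to the assumed Gaussian blur width. For readers reproducing such a comparison, a minimal sketch of building the normalized isotropic Gaussian kernel for a given σ_blur (the function name and the ±3σ truncation are our illustrative choices, not taken from the paper):

```python
import numpy as np

def gaussian_kernel(sigma, size=None):
    # Normalized isotropic Gaussian blur kernel, truncated at about +/- 3 sigma.
    if size is None:
        size = 2 * int(np.ceil(3 * sigma)) + 1  # odd size so the peak is centered
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()  # unit mass: blurring preserves mean intensity

# Kernel used for the middle panel of Figure 1 (sigma_blur = 1)
k = gaussian_kernel(1.0)
```

Convolving the high-resolution image with kernels of width 0.5, 1 and 1.5 reproduces the mismatch scenarios of panels (a)–(c).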
Figure 2. The observation model of MFSR.
Figure 3. Natural images for generating synthetic LR images. (a) Baby; (b) Butterfly; (c) Head; (d) Lena.
Figure 4. The SR result of the first iteration and the fifth iteration for the Butterfly image. (a) The first iteration; (b) The fifth iteration.
Figure 5. Visual comparison of algorithms for the Baby image with σ_noise = 0.025 when the magnification factor is 4. (a) The LR image; (b) Bicubic interpolation; (c) BayesSR; (d) SRB; (e) DeepSR; (f) FBSR; (g) Ours; (h) ground truth.
Figure 6. Visual comparison of algorithms for the Butterfly image with σ_noise = 0.005 when the magnification factor is 4. (a) The LR image; (b) Bicubic interpolation; (c) BayesSR; (d) SRB; (e) DeepSR; (f) FBSR; (g) Ours; (h) ground truth.
Figure 7. Visual comparison of algorithms for the IMG_0663 image when the magnification factor is 5. (a) The LR image; (b) Bicubic interpolation; (c) BayesSR; (d) SRB; (e) DeepSR; (f) FBSR; (g) Ours.
Figure 8. Visual comparison of algorithms for the IMG_1480 image when the magnification factor is 5. (a) The LR image; (b) Bicubic interpolation; (c) BayesSR; (d) SRB; (e) DeepSR; (f) FBSR; (g) Ours.
Table 1. PSNR, SSIM and IFC results of the compared algorithms on synthetic LR images when the magnification factor is 4.
| Methods | Baby (PSNR / SSIM / IFC) | Butterfly (PSNR / SSIM / IFC) | Head (PSNR / SSIM / IFC) | Lena (PSNR / SSIM / IFC) |
|---|---|---|---|---|
| Bicubic interpolation | 22.3861 / 0.6307 / 1.4729 | 20.9843 / 0.6547 / 2.0827 | 22.7681 / 0.4478 / 0.6573 | 22.1486 / 0.6142 / 1.6548 |
| BayesSR | 24.6479 / 0.6872 / 1.5934 | 21.3802 / 0.7260 / 2.3091 | 24.8879 / 0.4927 / 0.7124 | 24.6742 / 0.6841 / 1.7259 |
| SRB | 25.7700 / 0.7081 / 1.7981 | 22.1211 / 0.7773 / 2.6048 | 25.6746 / 0.5574 / 0.8692 | 25.7937 / 0.7285 / 1.8423 |
| DeepSR | 25.5273 / 0.6850 / 1.7548 | 22.8698 / 0.7261 / 2.5813 | 25.7694 / 0.5473 / 0.8827 | 25.7849 / 0.7746 / 1.8948 |
| FBSR | 26.2364 / 0.7148 / 1.8072 | 22.8971 / 0.7809 / 2.6543 | 26.0814 / 0.6072 / 0.9089 | 26.4375 / 0.7901 / 1.9024 |
| Ours | 27.3467 / 0.7776 / 1.9230 | 23.2113 / 0.8023 / 2.7421 | 27.7681 / 0.6783 / 0.9826 | 27.3868 / 0.8138 / 2.0442 |
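Table 1 uses full-reference metrics computed against the ground-truth image. As a reference for how the PSNR column is obtained, a minimal sketch for images with intensities in [0, 1] (the function and its `peak` parameter are illustrative, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    # Peak signal-to-noise ratio (dB) between ground truth and an SR estimate.
    reference = np.asarray(reference, dtype=np.float64)
    estimate = np.asarray(estimate, dtype=np.float64)
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, i.e. 10 * log10(1 / 0.01) = 20 dB.
value = psnr(np.zeros((4, 4)), np.full((4, 4), 0.1))
```

Higher PSNR means a smaller mean squared deviation from the ground truth; SSIM and IFC instead weight structural and information-theoretic fidelity.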
Table 2. NIQE, PIQE and NRQM results of the compared algorithms on real data when the magnification factor is 5.
| Images | Metric | Bicubic Interpolation | BayesSR | SRB | DeepSR | FBSR | Ours |
|---|---|---|---|---|---|---|---|
| IMG_0659 | NIQE | 8.602 | 8.9946 | 8.0072 | 7.7975 | 6.8901 | 6.0805 |
| | PIQE | 90.0698 | 75.6895 | 66.9028 | 58.9616 | 43.2765 | 38.3211 |
| | NRQM | 3.9608 | 3.154 | 3.944 | 2.9465 | 4.0102 | 4.408 |
| IMG_0663 | NIQE | 39.8007 | 39.8026 | 39.8 | 39.8005 | 39.7992 | 39.796 |
| | PIQE | 74.6022 | 65.3371 | 59.1997 | 77.0891 | 56.9802 | 49.5662 |
| | NRQM | 3.7656 | 4.4025 | 4.7599 | 4.2321 | 4.9025 | 5.5618 |
| IMG_0667 | NIQE | 11.3632 | 9.3836 | 8.9293 | 8.2567 | 7.9208 | 7.2685 |
| | PIQE | 90.1674 | 66.6976 | 69.7264 | 77.3032 | 60.2489 | 58.4445 |
| | NRQM | 3.401 | 3.6383 | 3.0631 | 2.6799 | 3.8943 | 4.0978 |
| IMG_0687 | NIQE | 10.4796 | 8.9724 | 8.3236 | 9.7315 | 8.7233 | 8.0351 |
| | PIQE | 90.9179 | 69.031 | 67.1065 | 53.1181 | 50.2467 | 42.425 |
| | NRQM | 2.8295 | 2.6297 | 2.9789 | 2.5811 | 3.0874 | 3.8819 |
| IMG_1480 | NIQE | 8.4942 | 6.1062 | 6.0722 | 7.7383 | 6.0002 | 5.9913 |
| | PIQE | 80.8595 | 74.114 | 59.0136 | 72.4327 | 56.3024 | 54.7688 |
| | NRQM | 4.4241 | 4.5514 | 2.0487 | 2.8916 | 4.8912 | 5.4116 |
| IMG_1481 | NIQE | 9.2407 | 8.7282 | 8.8947 | 6.7134 | 6.6962 | 6.7838 |
| | PIQE | 92.4789 | 74.5397 | 68.7935 | 88.9525 | 50.5608 | 41.6472 |
| | NRQM | 4.4951 | 3.0268 | 2.5334 | 2.3857 | 5.4580 | 7.0695 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Liu, S.; Huang, Q.; Wang, M. Multi-Frame Blind Super-Resolution Based on Joint Motion Estimation and Blur Kernel Estimation. Appl. Sci. 2022, 12, 10606. https://doi.org/10.3390/app122010606
