Adaptive Image Deblurring Convolutional Neural Network with Meta-Tuning
Abstract
1. Introduction
- The SFSNet proposed in this study integrates a regional feature extractor (RFE) that addresses the limitations of traditional convolutions by expanding the receptive field and adaptively selecting spatial features, thereby improving deblurring performance.
- The introduced BlurMix dataset mitigates the limited generalizability of existing datasets and is paired with a meta-tuning strategy that enables rapid adaptation to various blur domains.
- Experimental results demonstrate that deep-learning-based models, such as the SFSNet with meta-tuning, effectively reduce undesired artifacts and enhance deblurring performance across diverse datasets.
2. Related Work
2.1. Deep-Learning-Based Image Deblurring
2.2. Deblurring Datasets
3. Proposed Method
3.1. Spatial Feature Selection Network (SFSNet) for Image Deblurring
3.2. Generation of the BlurMix Dataset
3.2.1. Existing Image Deblurring Datasets
- Blur kernel convolution: MC-UHDM [14] is a large-scale, high-resolution dataset generated by convolving sharp images with blur kernels of various large sizes.
- Frame averaging: The GoPro [20], HIDE [43], and REDS [44] datasets generate blurry images by averaging successive sharp frames, with the ground-truth image taken as the center frame. Notably, the REDS dataset was captured with different background environments and camera settings from those of the GoPro and HIDE datasets.
- Real-world shooting: The RealBlur [42], RSBlur [45], and ReLoBlur [46] datasets were collected using camera systems that simultaneously capture geometrically aligned pairs of blurry and sharp images. The RealBlur dataset [42], comprising the RealBlur and RealBlur-Tele subsets, was obtained using wide-angle and telephoto lenses, the latter of which produces severe blurriness. The RSBlur dataset [45] is a real-world dataset that provides large-scale pairs of blurry and sharp high-resolution images, whereas the ReLoBlur dataset [46] is the first real-world local-motion deblurring dataset, covering indoor and outdoor scenes with diverse moving objects. The RWBI dataset [22] was collected using handheld devices and is used for qualitative evaluation.
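The two synthetic acquisition schemes above can be illustrated in a few lines of NumPy. This is only a sketch of the ideas, not the datasets' actual generation pipelines (which use measured kernels, many high-frame-rate frames, and camera-response handling); the function names are ours.

```python
import numpy as np

def blur_by_kernel(sharp, kernel):
    """Kernel-convolution blur (MC-UHDM-style synthesis): slide a normalized
    blur kernel over the sharp image. Edge-padded borders; assumes an
    odd-sized kernel and a single-channel image."""
    kernel = np.asarray(kernel, dtype=float)
    kernel = kernel / kernel.sum()              # normalize to preserve brightness
    kh, kw = kernel.shape
    h, w = sharp.shape
    padded = np.pad(sharp, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros((h, w), dtype=float)
    for i in range(kh):                          # accumulate shifted, weighted copies
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

def blur_by_frame_averaging(frames):
    """Frame-averaging blur (GoPro/HIDE/REDS-style synthesis): the blurry image
    is the mean of successive sharp frames; the ground truth is the center frame."""
    stack = np.stack(frames).astype(float)
    return stack.mean(axis=0), stack[len(stack) // 2]
```

For example, averaging frames of a fast-moving object produces a motion streak, while convolving with a disk kernel mimics defocus blur.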
3.2.2. BlurMix Dataset
3.3. Meta-Tuning Strategy for Adaptive Image Deblurring
Algorithm 1: Meta-Tuning for adaptive image deblurring
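The listing of Algorithm 1 is not reproduced in this excerpt. As a hedged sketch of the general scheme it builds on (model-agnostic meta-learning [48], first-order variant), the following toy implementation shows the inner/outer loop structure on a linear model; the step size `alpha` and the number of inner steps correspond to the hyperparameters studied in Section 4.3.4, while the function names and the toy loss are ours:

```python
import numpy as np

def loss_and_grad(w, X, y):
    """Squared loss of a linear model y ~ X @ w, and its gradient in w."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

def meta_tune(w, tasks, alpha=0.01, beta=0.001, inner_steps=2):
    """One MAML-style meta-update. For each task (here: blur domain), adapt a
    copy of the weights on the support set with `inner_steps` SGD steps of size
    alpha, then update the shared weights with the averaged query-set gradients,
    scaled by beta. First-order approximation: second derivatives are ignored."""
    meta_grad = np.zeros_like(w)
    for (Xs, ys), (Xq, yq) in tasks:             # (support, query) per domain
        w_task = w.copy()
        for _ in range(inner_steps):             # inner loop: task adaptation
            _, g = loss_and_grad(w_task, Xs, ys)
            w_task -= alpha * g
        _, gq = loss_and_grad(w_task, Xq, yq)
        meta_grad += gq                          # outer loop: accumulate query grads
    return w - beta * meta_grad / len(tasks)
```

The intent of the outer loop is an initialization that adapts to a new blur domain in very few gradient steps, which matches the paper's goal of rapid adaptation across the BlurMix domains.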
4. Experiments
4.1. Implementation Details and Evaluation Metrics
4.1.1. Implementation Details
4.1.2. Evaluation Metrics
4.2. Study of SFSNet Without Meta-Tuning
4.2.1. Quantitative Comparison
4.2.2. Qualitative Comparison
4.2.3. Ablation Study of the Spatial Gated Module (SGM) in the Regional Feature Extractor (RFE) Module
4.3. Study of SFSNet with Meta-Tuning
4.3.1. Quantitative Comparison
4.3.2. Qualitative Comparison
4.3.3. Comparison Between Meta-Tuning and Fine-Tuning Methods
4.3.4. Hyperparameter Selection for MetaSFSNet*
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Pham, T.-D.; Duong, M.-T.; Ho, Q.-T.; Lee, S.; Hong, M.-C. CNN-based facial expression recognition with simultaneous consideration of inter-class and intra-class variations. Sensors 2023, 23, 9658.
- Sharif, S.M.A.; Naqvi, R.A.; Mehmood, Z.; Hussain, J.; Ali, A.; Lee, S.-W. MedDeblur: Medical Image Deblurring with Residual Dense Spatial-Asymmetric Attention. Mathematics 2023, 11, 115.
- Khare, A.; Tiwary, U.S. A new method for deblurring and denoising of medical images using complex wavelet transform. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006.
- Khan, S.A.; Lee, H.J.; Lim, H. Enhancing Object Detection in Self-Driving Cars Using a Hybrid Approach. Electronics 2023, 12, 2768.
- Zhang, S.; Shen, X.; Lin, Z.; Měch, R.; Costeira, J.P.; Moura, J.M. Learning to understand image blur. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
- Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.; Freeman, W. Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25, 787–794.
- Cho, S.; Lee, S. Fast motion deblurring. ACM Trans. Graph. 2009, 28, 1–8.
- Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009.
- Gong, D.; Tan, M.; Zhang, Y.; Van den Hengel, A.; Shi, Q. Blind image deconvolution by automatic gradient activation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
- Whyte, O.; Sivic, J.; Zisserman, A.; Ponce, J. Non-uniform deblurring for shaken images. Int. J. Comput. Vis. 2012, 98, 168–186.
- Sun, L.; Cho, S.; Wang, J.; Hays, J. Edge-based blur kernel estimation using patch priors. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Cambridge, MA, USA, 19–21 April 2013.
- Köhler, R.; Hirsch, M.; Mohler, B.; Schölkopf, B.; Harmeling, S. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012.
- Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Deblurring Images via Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2315–2328.
- Zhang, K.; Wang, T.; Luo, W.; Ren, W.; Stenger, B.; Liu, W.; Yang, M.H. MC-Blur: A comprehensive benchmark for image deblurring. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 3755–3767.
- Zhang, K.; Ren, W.; Luo, W.; Lai, W.S.; Stenger, B.; Yang, M.H.; Li, H. Deep image deblurring: A survey. Int. J. Comput. Vis. 2022, 130, 2103–2130.
- Koh, J.; Lee, J.; Yoon, S. Single-image deblurring with neural networks: A comparative survey. Comput. Vis. Image Underst. 2021, 203, 103134.
- Oh, J.; Hong, M.-C. Low-light image enhancement using hybrid deep-learning and mixed-norm loss functions. Sensors 2022, 22, 6904.
- Duong, M.-T.; Lee, S.; Hong, M.-C. DMT-Net: Deep Multiple Networks for Low-Light Image Enhancement Based on Retinex Model. IEEE Access 2023, 11, 132147–132161.
- Duong, M.-T.; Nguyen Thi, B.-T.; Lee, S.; Hong, M.-C. Multi-branch network for color image denoising using dilated convolution and attention mechanisms. Sensors 2024, 24, 3608.
- Nah, S.; Hyun Kim, T.; Mu Lee, K. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Kupyn, O.; Martyniuk, T.; Wu, J.; Wang, Z. DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Zhang, K.; Luo, W.; Zhong, Y.; Ma, L.; Stenger, B.; Liu, W.; Li, H. Deblurring by realistic blurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
- Park, D.; Kang, D.U.; Kim, J.; Chun, S.Y. Multi-temporal recurrent neural networks for progressive non-uniform single image deblurring with incremental temporal training. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020.
- Cho, S.-J.; Ji, S.-W.; Hong, J.-P.; Jung, S.-W.; Ko, S.-J. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021.
- Chen, L.; Lu, X.; Zhang, J.; Chu, X.; Chen, C. HINet: Half instance normalization network for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021.
- Tsai, F.-J.; Peng, Y.-T.; Tsai, C.-C.; Lin, Y.-Y.; Lin, C.-W. BANet: A Blur-aware attention network for dynamic scene deblurring. IEEE Trans. Image Process. 2022, 31, 6789–6799.
- Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple baselines for image restoration. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022.
- Cui, Y.; Ren, W.; Cao, X.; Knoll, A. Image restoration via frequency selection. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 1093–1108.
- Ho, Q.T.; Duong, M.T.; Lee, S.; Hong, M.C. EHNet: Efficient Hybrid Network with Dual Attention for Image Deblurring. Sensors 2024, 24, 6545.
- Su, H.; Jampani, V.; Sun, D.; Gallo, O.; Learned-Miller, E.; Kautz, J. Pixel-adaptive convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H. Restormer: Efficient Transformer for High-Resolution Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022.
- Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. In Proceedings of the International Conference on Learning Representations, San Juan, PR, USA, 2–4 May 2016.
- Ople, J.J.M.; Yeh, P.Y.; Sun, S.W.; Tsai, I.T.; Hua, K.L. Multi-scale neural network with dilated convolutions for image deblurring. IEEE Access 2020, 8, 53942–53952.
- Zou, W.; Jiang, M.; Zhang, Y.; Chen, L.; Lu, Z.; Wu, Y. SDWNet: A straight dilated network with wavelet transformation for image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, QC, Canada, 11–17 October 2021.
- Fang, Z.; Wu, F.; Dong, W.; Li, X.; Wu, J.; Shi, G. Self-supervised non-uniform kernel estimation with flow-based motion prior for blind image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023.
- Sun, J.; Cao, W.; Xu, Z.; Ponce, J. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
- Vasu, S.; Maligireddy, V.R.; Rajagopalan, A. Non-blind deblurring: Handling kernel uncertainty with CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021.
- Tsai, F.J.; Peng, Y.T.; Lin, Y.Y.; Tsai, C.C.; Lin, C.W. Stripformer: Strip transformer for fast image deblurring. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022.
- Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022.
- Rim, J.; Lee, H.; Won, J.; Cho, S. Real-world blur dataset for learning and benchmarking deblurring algorithms. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020.
- Shen, Z.; Wang, W.; Lu, X.; Shen, J.; Ling, H.; Xu, T.; Shao, L. Human-aware motion deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Nah, S.; Baik, S.; Hong, S.; Moon, G.; Son, S.; Timofte, R.; Mu Lee, K. NTIRE 2019 challenge on video deblurring and super-resolution: Dataset and study. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019.
- Rim, J.; Kim, G.; Kim, J.; Lee, J.; Lee, S.; Cho, S. Realistic blur synthesis for learning image deblurring. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022.
- Li, H.; Zhang, Z.; Jiang, T.; Luo, P.; Feng, H.; Xu, Z. Real-world deep local motion deblurring. Proc. AAAI Conf. Artif. Intell. 2023, 37, 1314–1322.
- Dauphin, Y.N.; Fan, A.; Auli, M.; Grangier, D. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017.
- Finn, C.; Abbeel, P.; Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017.
- Fallah, A.; Mokhtari, A.; Ozdaglar, A. On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms. In Proceedings of the Twenty-Third International Conference on Artificial Intelligence and Statistics (AISTATS), Okinawa, Japan, 16–18 April 2019.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015.
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
Dataset | Splitting of Training/Testing Sets | Blur Model | Acquisition Method |
---|---|---|---|
Levin [8] | ✗ | Uniform | Kernel convolution |
Sun [11] | ✗ | Uniform | Kernel convolution |
Köhler [12] | ✗ | Nonuniform | Kernel convolution |
MC-UHDM [14] | ✓ | Nonuniform | Kernel convolution |
GoPro [20] | ✓ | Nonuniform | Frame averaging |
HIDE [43] | ✓ | Nonuniform | Frame averaging |
REDS [44] | ✓ | Nonuniform | Frame averaging |
RealBlur [42] | ✓ | Nonuniform | Real-world shooting |
RealBlur-Tele [42] | ✗ | Nonuniform | Real-world shooting |
RSBlur [45] | ✓ | Nonuniform | Real-world shooting |
ReLoBlur [46] | ✓ | Nonuniform | Real-world shooting |
RWBI [22] | ✗ | Nonuniform | Real-world shooting |
Model | Training Dataset | Testing Dataset | PSNR↑ | SSIM↑ |
---|---|---|---|---|
MIMO-UNet+ [24] | GoPro [20] | RealBlur [42] | 27.63 | 0.837 |
MIMO-UNet+ [24] | RealBlur [42] | RealBlur [42] | 31.92 | 0.919 |
BANet+ [27] | GoPro [20] | RealBlur [42] | 28.10 | 0.852 |
BANet+ [27] | RealBlur [42] | RealBlur [42] | 32.40 | 0.929 |
RealBlur testing set [42] | – | RealBlur [42] | 27.68 | 0.825 |
MIMO-UNet+ [24] | GoPro [20] | RSBlur [45] | 26.92 | 0.721 |
MIMO-UNet+ [24] | RSBlur [45] | RSBlur [45] | 33.37 | 0.856 |
FSNet [29] | GoPro [20] | RSBlur [45] | 28.21 | 0.728 |
FSNet [29] | RSBlur [45] | RSBlur [45] | 34.31 | 0.872 |
RSBlur testing set [45] | – | RSBlur [45] | 29.03 | 0.739 |
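The PSNR/SSIM numbers in the table above follow the standard metric definitions [50,51]. For reference, a minimal PSNR sketch for images scaled to [0, 1] (the function name is illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means the restored image is
    closer to the reference. Inputs are arrays scaled to [0, max_val]."""
    diff = np.asarray(reference, dtype=float) - np.asarray(restored, dtype=float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform per-pixel error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB.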
Dataset | Training/Testing Image Pairs | Resolution | Acquisition Method |
---|---|---|---|
MC-UHDM [14] | 8000/2000 | 3984 × 2656 | Kernel convolution |
GoPro [20] | 2103/1111 | 1280 × 720 | Frame averaging |
HIDE [43] | 6397/2025 | 1280 × 720 | Frame averaging |
REDS [44] | 24,000/3000 | 1280 × 720 | Frame averaging |
RealBlur [42] | 3758/980 | 680 × 773 | Real-world shooting (wide-angle lens) |
RealBlur-Tele [42] | 996 | 950 × 598 | Real-world shooting (telephoto lens) |
RSBlur [45] | 8878/3360 | 1920 × 1200 | Real-world shooting |
ReLoBlur [46] | 2010/395 | 2152 × 1436 | Real-world shooting (local blurs) |
RWBI * [22] | 3112 | 1000 × 750 | Real-world shooting (handheld devices) |
Dataset | Number of Image Pairs in the Training Set | Resolution | Acquisition Method |
---|---|---|---|
MC-UHDM [14] | 403 | 3984 × 2656 | Kernel convolution |
REDS [44] | 500 | 1280 × 720 | Frame averaging |
RealBlur [42] | 400 | 680 × 773 | Real-world shooting (wide-angle lens) |
RealBlur-Tele [42] | 400 | 950 × 598 | Real-world shooting (telephoto lens) |
RSBlur [45] | 400 | 1920 × 1200 | Real-world shooting |
Total image pairs in the BlurMix dataset | 2103 | – | – |
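The exact sampling procedure for assembling BlurMix is not described in this excerpt; a plausible sketch is to draw a fixed number of blurry/sharp pairs from each source dataset so that no single blur domain dominates. The helper below is hypothetical; the counts match the table above (403 + 500 + 400 + 400 + 400 = 2103).

```python
import random

def build_blurmix(sources, counts, seed=0):
    """Assemble a BlurMix-style mixed training set. `sources` maps a dataset
    name to its list of (blurry, sharp) pairs; `counts` maps a dataset name to
    how many pairs to sample from it. A fixed seed keeps the split reproducible."""
    rng = random.Random(seed)
    mixed = []
    for name, count in counts.items():
        mixed.extend(rng.sample(sources[name], count))   # sample without replacement
    rng.shuffle(mixed)                                   # interleave the domains
    return mixed
```

Drawing roughly equal counts per domain (rather than concatenating full datasets) keeps each meta-tuning batch exposed to all blur types.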
Methods | GoPro [20] PSNR↑ | GoPro SSIM↑ | GoPro LPIPS↓ | HIDE [43] PSNR↑ | HIDE SSIM↑ | HIDE LPIPS↓ | #Params↓ (M) | Complexity↓ (GFLOPs)
---|---|---|---|---|---|---|---|---
DeblurGAN-v2 [21] | 29.55 | 0.934 | 0.117 | 26.61 | 0.875 | 0.158 | 60.9 | 411 |
DBGAN [22] | 31.10 | 0.942 | 0.109 | 28.94 | 0.915 | 0.142 | 11.6 | 760 |
MTRNN [23] | 31.15 | 0.945 | 0.122 | 29.15 | 0.918 | 0.154 | 2.64 | 164 |
MIMO-UNet+ [24] | 32.45 | 0.958 | 0.090 | 29.99 | 0.930 | 0.124 | 16.1 | 145 |
HINet [26] | 32.71 | 0.959 | 0.088 | 30.32 | 0.932 | 0.119 | 88.7 | 171 |
BANet+ [27] | 33.03 | 0.961 | 0.085 | 30.58 | 0.935 | 0.117 | 40.0 | 588 |
FSNet [29] | 33.28 | 0.963 | 0.080 | 31.05 | 0.941 | 0.109 | 13.28 | 111 |
SFSNet (ours) | 33.34 | 0.963 | 0.080 | 31.14 | 0.941 | 0.103 | 15.10 | 75 |
Ablation | GoPro [20] PSNR↑ | GoPro SSIM↑ | #Params↓ (M) | Complexity↓ (GFLOPs)
---|---|---|---|---
With DW convolutions | 33.22 | 0.962 | 14.92 | 74.57 |
With multirate dilated convolutions [33] | 33.24 | 0.962 | 45.64 | 180.29 |
With SGM (ours) | 33.34 | 0.964 | 15.10 | 75.10 |
Methods | REDS [44] PSNR↑ | REDS SSIM↑ | RealBlur [42] PSNR↑ | RealBlur SSIM↑ | RSBlur [45] PSNR↑ | RSBlur SSIM↑ | ReLoBlur [46] PSNR↑ | ReLoBlur SSIM↑
---|---|---|---|---|---|---|---|---
DBGAN [22] | 22.95 | 0.788 | 24.93 | 0.745 | 27.15 | 0.709 | 23.64 | 0.825 |
MTRNN [23] | 26.83 | 0.864 | 28.44 | 0.862 | 28.79 | 0.741 | 31.24 | 0.900 |
MIMO-UNet+ [24] | 26.43 | 0.859 | 27.63 | 0.837 | 26.92 | 0.721 | 28.14 | 0.864 |
HINet [26] | 26.77 | 0.867 | 28.17 | 0.849 | 28.82 | 0.716 | 32.19 | 0.894 |
BANet+ [27] | 27.01 | 0.873 | 28.10 | 0.852 | 26.34 | 0.695 | 28.34 | 0.827 |
FSNet [29] | 26.70 | 0.869 | 28.47 | 0.868 | 28.21 | 0.728 | 28.80 | 0.862 |
SFSNet (ours) | 26.91 | 0.878 | 28.64 | 0.873 | 28.04 | 0.721 | 29.45 | 0.866 |
MetaSFSNet * (our method with meta-tuning) | 29.88 (+2.97) | 0.912 (+0.034) | 30.20 (+1.56) | 0.900 (+0.027) | 31.16 (+3.12) | 0.816 (+0.095) | 32.32 (+2.87) | 0.917 (+0.051)
Ablations | GoPro [20] (pre-trained domain) PSNR↑ | GoPro SSIM↑ | GoPro LPIPS↓ | REDS [44] (new domain) PSNR↑ | REDS SSIM↑ | REDS LPIPS↓ | RealBlur [42] (new domain) PSNR↑ | RealBlur SSIM↑ | RealBlur LPIPS↓ | RSBlur [45] (new domain) PSNR↑ | RSBlur SSIM↑ | RSBlur LPIPS↓ | ReLoBlur [46] (new domain) PSNR↑ | ReLoBlur SSIM↑ | ReLoBlur LPIPS↓ | Runtime (Seconds)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Pretrained SFSNet (baseline) | 33.34 | 0.963 | 0.081 | 26.91 | 0.878 | 0.185 | 28.64 | 0.873 | 0.159 | 28.04 | 0.721 | 0.317 | 29.45 | 0.866 | 0.250 | - |
FineSFSNet † (1 epoch) | 33.33 | 0.963 | 0.083 | 27.09 | 0.879 | 0.187 | 28.71 | 0.875 | 0.158 | 29.05 | 0.741 | 0.284 | 30.15 | 0.882 | 0.224 | 183 |
FineSFSNet † (3 epochs) | 31.69 | 0.954 | 0.123 | 29.88 | 0.910 | 0.150 | 30.19 | 0.897 | 0.150 | 31.02 | 0.815 | 0.311 | 30.80 | 0.915 | 0.275 | 526 |
FineSFSNet * (1 epoch) | 32.84 | 0.960 | 0.088 | 28.97 | 0.901 | 0.167 | 29.63 | 0.890 | 0.157 | 30.39 | 0.798 | 0.306 | 31.89 | 0.915 | 0.245 | 339 |
FineSFSNet * (3 epochs) | 32.90 | 0.961 | 0.086 | 29.57 | 0.909 | 0.155 | 30.00 | 0.894 | 0.152 | 30.90 | 0.811 | 0.315 | 32.28 | 0.917 | 0.238 | 1006
MetaSFSNet * (1 epoch) | 32.61 | 0.959 | 0.085 | 29.88 | 0.912 | 0.144 | 30.20 | 0.900 | 0.136 | 31.16 | 0.816 | 0.268 | 32.32 | 0.917 | 0.173 | 401 |
MetaSFSNet-3 * (3 epochs) | 32.65 | 0.959 | 0.086 | 30.44 | 0.919 | 0.132 | 30.66 | 0.905 | 0.133 | 31.29 | 0.824 | 0.275 | 33.05 | 0.921 | 0.143 | 1206
Inner Steps | α | GoPro [20] PSNR↑ | GoPro SSIM↑ | GoPro LPIPS↓ | RealBlur [42] PSNR↑ | RealBlur SSIM↑ | RealBlur LPIPS↓ | Runtime (Seconds)
---|---|---|---|---|---|---|---|---
1 | 1e−2 | 32.47 | 0.958 | 0.088 | 30.10 | 0.897 | 0.143 | 326 |
1 | 1e−3 | 32.49 | 0.959 | 0.087 | 30.09 | 0.895 | 0.143 | 325 |
1 | 1e−4 | 32.45 | 0.958 | 0.087 | 29.84 | 0.892 | 0.144 | 324 |
2 | 1e−2 | 32.63 | 0.959 | 0.086 | 30.11 | 0.898 | 0.138 | 418 |
2 | 1e−3 | 32.61 | 0.959 | 0.085 | 30.20 | 0.916 | 0.136 | 401 |
2 | 1e−4 | 32.62 | 0.959 | 0.084 | 30.22 | 0.900 | 0.137 | 403 |
Share and Cite
Ho, Q.-T.; Duong, M.-T.; Lee, S.; Hong, M.-C. Adaptive Image Deblurring Convolutional Neural Network with Meta-Tuning. Sensors 2025, 25, 5211. https://doi.org/10.3390/s25165211