Sparse Representation-Based Multi-Focus Image Fusion Method via Local Energy in Shearlet Domain
Abstract
1. Introduction
2. Related Works
2.1. Shearlet Transform
2.2. Sparse Representation
3. Proposed Fusion Method
3.1. Shearlet Transform Decomposition
3.2. Low-Frequency Fusion
3.3. High-Frequency Fusion
3.4. Shearlet Transform Reconstruction
4. Experimental Results and Discussions
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Vasu, G.T.; Palanisamy, P. Gradient-based multi-focus image fusion using foreground and background pattern recognition with weighted anisotropic diffusion filter. Signal Image Video Process. 2023. [Google Scholar] [CrossRef]
- Li, H.; Qian, W. Siamese conditional generative adversarial network for multi-focus image fusion. Appl. Intell. 2023. [Google Scholar] [CrossRef]
- Li, X.; Wang, X. Multi-focus image fusion based on Hessian matrix decomposition and salient difference focus detection. Entropy 2022, 24, 1527. [Google Scholar] [CrossRef] [PubMed]
- Jiang, L.; Fan, H. Multi-level receptive field feature reuse for multi-focus image fusion. Mach. Vis. Appl. 2022, 33, 92. [Google Scholar] [CrossRef]
- Mohan, C.; Chouhan, K. Improved procedure for multi-focus images using image fusion with qshiftN DTCWT and MPCA in Laplacian pyramid domain. Appl. Sci. 2022, 12, 9495. [Google Scholar] [CrossRef]
- Zhang, X.; He, H.; Zhang, J. Multi-focus image fusion based on fractional order differentiation and closed image matting. ISA Trans. 2022, 129, 703–714. [Google Scholar] [CrossRef]
- Zhang, X. Deep learning-based multi-focus image fusion: A survey and a comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4819–4838. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, L. Multi-focus image fusion with deep residual learning and focus property detection. Inf. Fusion 2022, 86–87, 1–16. [Google Scholar] [CrossRef]
- Wang, Z.; Li, X. A self-supervised residual feature learning model for multifocus image fusion. IEEE Trans. Image Process. 2022, 31, 4527–4542. [Google Scholar] [CrossRef] [PubMed]
- Aymaz, S.; Kose, C.; Aymaz, S. A novel approach with the dynamic decision mechanism (DDM) in multi-focus image fusion. Multimed. Tools Appl. 2023, 82, 1821–1871. [Google Scholar] [CrossRef]
- Luo, H.; U, K.; Zhao, W. Multi-focus image fusion through pixel-wise voting and morphology. Multimed. Tools Appl. 2023, 82, 899–925. [Google Scholar] [CrossRef]
- Jiang, L.; Fan, H.; Li, J. DDFN: A depth-differential fusion network for multi-focus image. Multimed. Tools Appl. 2022, 81, 43013–43036. [Google Scholar] [CrossRef]
- Li, L.; Ma, H. Pulse coupled neural network-based multimodal medical image fusion via guided filtering and WSEML in NSCT domain. Entropy 2021, 23, 591. [Google Scholar] [CrossRef] [PubMed]
- Li, L.; Ma, H. Saliency-guided nonsubsampled shearlet transform for multisource remote sensing image fusion. Sensors 2021, 21, 1756. [Google Scholar] [CrossRef]
- Xiao, Y.; Guo, Z.; Veelaert, P.; Philips, W. General image fusion for an arbitrary number of inputs using convolutional neural networks. Sensors 2022, 22, 2457. [Google Scholar] [CrossRef]
- Karim, S.; Tong, G. Current advances and future perspectives of image fusion: A comprehensive review. Inf. Fusion 2023, 90, 185–217. [Google Scholar] [CrossRef]
- Candes, E.; Demanet, L. Fast discrete curvelet transforms. Multiscale Model. Simul. 2006, 5, 861–899. [Google Scholar] [CrossRef]
- Lu, Y.M.; Do, M.N. Multidimensional directional filter banks and surfacelets. IEEE Trans. Image Process. 2007, 16, 918–931. [Google Scholar] [CrossRef]
- Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106. [Google Scholar] [CrossRef]
- da Cunha, A.L.; Zhou, J.; Do, M.N. The nonsubsampled contourlet transform: Theory, design, and applications. IEEE Trans. Image Process. 2006, 15, 3089–3101. [Google Scholar]
- Guo, K.; Labate, D. Optimally sparse multidimensional representation using shearlets. SIAM J. Math. Anal. 2007, 39, 298–318. [Google Scholar] [CrossRef]
- Easley, G.; Labate, D.; Lim, W.Q. Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 2008, 25, 25–46. [Google Scholar] [CrossRef]
- Vishwakarma, A.; Bhuyan, M.K. Image fusion using adjustable non-subsampled shearlet transform. IEEE Trans. Instrum. Meas. 2019, 68, 3367–3378. [Google Scholar] [CrossRef]
- Vishwakarma, A.; Bhuyan, M. A curvelet-based multi-sensor image denoising for KLT-based image fusion. Multimed. Tools Appl. 2022, 81, 4991–5016. [Google Scholar] [CrossRef]
- Yang, Y.; Tong, S. A hybrid method for multi-focus image fusion based on fast discrete curvelet transform. IEEE Access 2017, 5, 14898–14913. [Google Scholar] [CrossRef]
- Zhang, B.; Zhang, C.; Liu, Y. Multi-focus image fusion algorithm based on compound PCNN in Surfacelet domain. Optik 2014, 125, 296–300. [Google Scholar] [CrossRef]
- Li, B.; Peng, H. Multi-focus image fusion based on dynamic threshold neural P systems and surfacelet transform. Knowl.-Based Syst. 2020, 196, 105794. [Google Scholar] [CrossRef]
- Xu, W.; Fu, Y. Medical image fusion using enhanced cross-visual cortex model based on artificial selection and impulse-coupled neural network. Comput. Methods Programs Biomed. 2023, 229, 107304. [Google Scholar] [CrossRef] [PubMed]
- Das, S.; Kundu, M.K. A neuro-fuzzy approach for medical image fusion. IEEE Trans. Biomed. Eng. 2013, 60, 3347–3353. [Google Scholar] [CrossRef]
- Li, L.; Ma, H.; Jia, Z.; Si, Y. A novel multiscale transform decomposition based multi-focus image fusion framework. Multimed. Tools Appl. 2021, 80, 12389–12409. [Google Scholar] [CrossRef]
- Peng, H.; Li, B. Multi-focus image fusion approach based on CNP systems in NSCT domain. Comput. Vis. Image Underst. 2021, 210, 103228. [Google Scholar] [CrossRef]
- Wang, L.; Liu, Z. The fusion of multi-focus images based on the complex shearlet features-motivated generative adversarial network. J. Adv. Transp. 2021, 2021, 5439935. [Google Scholar] [CrossRef]
- Li, L.; Si, Y.; Wang, L.; Jia, Z.; Ma, H. A novel approach for multi-focus image fusion based on SF-PAPCNN and ISML in NSST domain. Multimed. Tools Appl. 2020, 79, 24303–24328. [Google Scholar] [CrossRef]
- Amrita, S.; Joshi, S. Water wave optimized nonsubsampled shearlet transformation technique for multimodal medical image fusion. Concurr. Comput. Pract. Exp. 2023, 35, e7591. [Google Scholar] [CrossRef]
- Luo, X.; Xi, X. Multimodal medical volumetric image fusion using 3-D shearlet transform and T-S fuzzy reasoning. Multimed. Tools Appl. 2022, 1–36. [Google Scholar] [CrossRef]
- Yin, M.; Liu, X.; Liu, Y. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Trans. Instrum. Meas. 2019, 68, 49–64. [Google Scholar] [CrossRef]
- Zha, Z.; Wen, B. Learning nonlocal sparse and low-rank models for image compressive sensing: Nonlocal sparse and low-rank modeling. IEEE Signal Process. Mag. 2023, 40, 32–44. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X. From rank estimation to rank approximation: Rank residual constraint for image restoration. IEEE Trans. Image Process. 2020, 29, 3254–3269. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X. Image restoration via simultaneous nonlocal self-similarity priors. IEEE Trans. Image Process. 2020, 29, 8561–8576. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X. Image restoration using joint patch-group-based sparse representation. IEEE Trans. Image Process. 2020, 29, 7735–7750. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X. A benchmark for sparse coding: When group sparsity meets rank minimization. IEEE Trans. Image Process. 2020, 29, 5094–5109. [Google Scholar] [CrossRef] [PubMed]
- Zha, Z.; Yuan, X. Group sparsity residual constraint with non-local priors for image restoration. IEEE Trans. Image Process. 2020, 29, 8960–8975. [Google Scholar] [CrossRef] [PubMed]
- Zha, Z.; Wen, B. Image restoration via reconciliation of group sparsity and low-rank models. IEEE Trans. Image Process. 2021, 30, 5223–5238. [Google Scholar] [CrossRef]
- Zha, Z.; Wen, B. A hybrid structural sparsification error model for image restoration. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 4451–4465. [Google Scholar] [CrossRef] [PubMed]
- Zha, Z.; Wen, B. Triply complementary priors for image restoration. IEEE Trans. Image Process. 2021, 30, 5819–5834. [Google Scholar] [CrossRef]
- Zha, Z.; Wen, B. Low-rankness guided group sparse representation for image restoration. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15. [Google Scholar] [CrossRef]
- Wang, C.; Wu, Y. Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion. Mach. Vis. Appl. 2022, 33, 69. [Google Scholar] [CrossRef]
- Qin, X.; Ban, Y.; Wu, P. Improved image fusion method based on sparse decomposition. Electronics 2022, 11, 2321. [Google Scholar] [CrossRef]
- Liu, Y.; Chen, X. Image fusion with convolutional sparse representation. IEEE Signal Process. Lett. 2016, 23, 1882–1886. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, Z. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Process. 2015, 9, 347–357. [Google Scholar] [CrossRef]
- Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875. [Google Scholar] [PubMed]
- Zhang, Y.; Xiang, W.; Zhang, S. Local extreme map guided multi-modal brain image fusion. Front. Neurosci. 2022, 16, 1055451. [Google Scholar] [CrossRef]
- Zhang, Y.; Liu, Y.; Sun, P. IFCNN: A general image fusion framework based on convolutional neural network. Inf. Fusion 2020, 54, 99–118. [Google Scholar] [CrossRef]
- Zhang, H.; Xu, H.; Xiao, Y. Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12797–12804. [Google Scholar]
- Xu, H.; Ma, J.; Jiang, J. U2Fusion: A unified unsupervised image fusion network. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 502–518. [Google Scholar] [CrossRef]
- Dong, Y.; Chen, Z.; Li, Z.; Gao, F. A multi-branch multi-scale deep learning image fusion algorithm based on DenseNet. Appl. Sci.-Basel 2022, 12, 10989. [Google Scholar] [CrossRef]
- Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, Z. A practical pan-sharpening method with wavelet transform and sparse representation. In Proceedings of the IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, China, 22–23 October 2013; pp. 288–293. [Google Scholar]
- Nejati, M.; Samavi, S.; Shirani, S. Multi-focus image fusion using dictionary-based sparse representation. Inf. Fusion 2015, 25, 72–84. [Google Scholar] [CrossRef]
- Hu, X.; Jiang, J.; Liu, X.; Ma, J. ZMFF: Zero-shot multi-focus image fusion. Inf. Fusion 2023, 92, 127–138. [Google Scholar] [CrossRef]
- Qu, X.; Yan, J.; Xiao, H. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Autom. Sin. 2008, 34, 1508–1514. [Google Scholar] [CrossRef]
- Liu, Z.; Blasch, E.; Xue, Z. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 94–109. [Google Scholar] [CrossRef] [PubMed]
Method | QAB/F | QCB | QY | QE | QG | QNCIE | QMI | QP
---|---|---|---|---|---|---|---|---
NSCT | 0.7092 | 0.7108 | 0.9248 | 0.8798 | 0.6831 | 0.8216 | 6.0272 | 0.6575 |
CVT | 0.7373 | 0.7576 | 0.9652 | 0.8825 | 0.7191 | 0.8225 | 6.1919 | 0.7694 |
NSST | 0.6526 | 0.7195 | 0.8680 | 0.8543 | 0.6167 | 0.8180 | 5.3646 | 0.4842 |
IFCNN | 0.7412 | 0.7622 | 0.9666 | 0.8848 | 0.7207 | 0.8234 | 6.3637 | 0.7727 |
PMGI | 0.5466 | 0.6070 | 0.7656 | 0.6316 | 0.5156 | 0.8169 | 5.1347 | 0.3925 |
U2Fusion | 0.6575 | 0.6164 | 0.8832 | 0.7952 | 0.6338 | 0.8176 | 5.2894 | 0.6640 |
LEGFF | 0.6923 | 0.6857 | 0.9164 | 0.8205 | 0.6658 | 0.8158 | 4.8919 | 0.6937 |
ZMFF | 0.7342 | 0.7802 | 0.9644 | 0.8779 | 0.7134 | 0.8222 | 6.1505 | 0.7673 |
Proposed | 0.7446 | 0.7760 | 0.9708 | 0.8868 | 0.7273 | 0.8243 | 6.5008 | 0.7860 |
Method | QAB/F | QCB | QY | QE | QG | QNCIE | QMI | QP
---|---|---|---|---|---|---|---|---
NSCT | 0.7276 | 0.6578 | 0.9363 | 0.8598 | 0.7110 | 0.8337 | 7.5353 | 0.8483 |
CVT | 0.7411 | 0.6838 | 0.9561 | 0.8661 | 0.7332 | 0.8333 | 7.5911 | 0.8879 |
NSST | 0.6934 | 0.6588 | 0.9376 | 0.8391 | 0.6667 | 0.8308 | 7.1464 | 0.7872 |
IFCNN | 0.7315 | 0.6825 | 0.9349 | 0.8663 | 0.7205 | 0.8324 | 7.4651 | 0.8744 |
PMGI | 0.4798 | 0.5977 | 0.7132 | 0.5816 | 0.4592 | 0.8251 | 6.3071 | 0.4944 |
U2Fusion | 0.5951 | 0.4969 | 0.6918 | 0.6838 | 0.5786 | 0.8242 | 6.1325 | 0.7393 |
LEGFF | 0.6770 | 0.6466 | 0.8394 | 0.7920 | 0.6603 | 0.8225 | 5.8173 | 0.8132 |
ZMFF | 0.7085 | 0.6544 | 0.9171 | 0.8568 | 0.6927 | 0.8297 | 7.0711 | 0.8365 |
Proposed | 0.7404 | 0.6924 | 0.9593 | 0.8684 | 0.7326 | 0.8332 | 7.5773 | 0.8860 |
Method | QAB/F | QCB | QY | QE | QG | QNCIE | QMI | QP
---|---|---|---|---|---|---|---|---
NSCT | 0.6800 | 0.6636 | 0.9272 | 0.8345 | 0.6807 | 0.8239 | 6.2537 | 0.7682 |
CVT | 0.7008 | 0.7053 | 0.9498 | 0.8626 | 0.7020 | 0.8213 | 5.9563 | 0.7959 |
NSST | 0.6359 | 0.6789 | 0.9138 | 0.7973 | 0.6372 | 0.8201 | 5.7004 | 0.6903 |
IFCNN | 0.7060 | 0.7047 | 0.9509 | 0.8676 | 0.7047 | 0.8241 | 6.4525 | 0.8134 |
PMGI | 0.4143 | 0.5467 | 0.7409 | 0.5178 | 0.4139 | 0.8198 | 5.6679 | 0.4286 |
U2Fusion | 0.6059 | 0.5849 | 0.7903 | 0.7988 | 0.6064 | 0.8193 | 5.5648 | 0.6591 |
LEGFF | 0.6764 | 0.7090 | 0.9104 | 0.8557 | 0.6757 | 0.8198 | 5.6745 | 0.7630 |
ZMFF | 0.6898 | 0.7406 | 0.9408 | 0.8563 | 0.6890 | 0.8234 | 6.3327 | 0.7934 |
Proposed | 0.7134 | 0.7222 | 0.9589 | 0.8710 | 0.7139 | 0.8231 | 6.2696 | 0.8194 |
Method | QAB/F | QCB | QY | QE | QG | QNCIE | QMI | QP
---|---|---|---|---|---|---|---|---
NSCT | 0.6961 | 0.6866 | 0.9407 | 0.8496 | 0.6975 | 0.8363 | 7.7208 | 0.7916 |
CVT | 0.7125 | 0.7240 | 0.9515 | 0.8656 | 0.7114 | 0.8343 | 7.5880 | 0.8219 |
NSST | 0.5955 | 0.6809 | 0.8837 | 0.7067 | 0.5944 | 0.8308 | 7.0354 | 0.6179 |
IFCNN | 0.7103 | 0.7098 | 0.9399 | 0.8679 | 0.7112 | 0.8364 | 7.8860 | 0.8101 |
PMGI | 0.3491 | 0.5517 | 0.6784 | 0.3959 | 0.3491 | 0.8285 | 6.7140 | 0.3640 |
U2Fusion | 0.5988 | 0.5576 | 0.7763 | 0.7853 | 0.5985 | 0.8282 | 6.6513 | 0.6573 |
LEGFF | 0.6639 | 0.6739 | 0.8700 | 0.8327 | 0.6649 | 0.8279 | 6.5996 | 0.7240 |
ZMFF | 0.6780 | 0.7229 | 0.9196 | 0.8539 | 0.6774 | 0.8340 | 7.5359 | 0.7762 |
Proposed | 0.7148 | 0.7301 | 0.9584 | 0.8691 | 0.7162 | 0.8363 | 7.8484 | 0.8249 |
Method | QAB/F | QCB | QY | QE | QG | QNCIE | QMI | QP
---|---|---|---|---|---|---|---|---
NSCT | 0.7103 | 0.6799 | 0.9208 | 0.8644 | 0.7058 | 0.8280 | 6.7075 | 0.7616 |
CVT | 0.7292 | 0.7265 | 0.9434 | 0.8764 | 0.7257 | 0.8281 | 6.7485 | 0.7985 |
NSST | 0.6720 | 0.6895 | 0.8955 | 0.8247 | 0.6655 | 0.8254 | 6.3212 | 0.6932 |
IFCNN | 0.7337 | 0.7292 | 0.9519 | 0.8792 | 0.7297 | 0.8298 | 7.0353 | 0.8178 |
PMGI | 0.3901 | 0.5656 | 0.6738 | 0.4736 | 0.3857 | 0.8225 | 5.8641 | 0.4620 |
U2Fusion | 0.6143 | 0.5682 | 0.7912 | 0.7835 | 0.6093 | 0.8221 | 5.7765 | 0.6657 |
LEGFF | 0.6810 | 0.6751 | 0.8817 | 0.8195 | 0.6754 | 0.8214 | 5.6138 | 0.7565 |
ZMFF | 0.7087 | 0.7412 | 0.9313 | 0.8687 | 0.7030 | 0.8271 | 6.6271 | 0.7853 |
Proposed | 0.7343 | 0.7436 | 0.9538 | 0.8808 | 0.7317 | 0.8299 | 7.0260 | 0.8076 |
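Among the metrics tabulated above, QMI measures how much information the fused image retains from the two source images. As a rough illustration (not the paper's exact implementation, which may use a normalized variant), one common definition is QMI = I(A;F) + I(B;F), with mutual information estimated from joint grayscale histograms:

```python
import numpy as np

def mutual_information(x, y, bins=256):
    """Mutual information (in bits) between two equal-size grayscale
    images, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()               # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)     # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)     # marginal p(y)
    nz = pxy > 0                            # avoid log(0) terms
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def q_mi(src_a, src_b, fused):
    """Q_MI under the unnormalized convention: total mutual information
    that the fused image F shares with sources A and B."""
    return mutual_information(src_a, fused) + mutual_information(src_b, fused)
```

A fused image that preserves more source detail yields a larger Q_MI; for identical 8-bit images the mutual information reduces to the image entropy, which is why values in the tables fall in the single-digit bit range.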
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, L.; Lv, M.; Jia, Z.; Ma, H. Sparse Representation-Based Multi-Focus Image Fusion Method via Local Energy in Shearlet Domain. Sensors 2023, 23, 2888. https://doi.org/10.3390/s23062888