Light Field Super-Resolution via Dual-Domain High-Frequency Restoration and State-Space Fusion
Abstract
1. Introduction
- First, we propose a novel local sparse angular attention (LSAA) module that sparsely samples informative sub-view pairs via a local window mask mechanism, reducing computational complexity.
- Second, we propose a multi-scale discrete cosine transform (DCT) high-frequency feature recovery module (DCTM), which enhances the high-frequency information of images in the frequency domain by incorporating a masking mechanism.
- Third, we design a detail-guided cross-attention (DCA) module, which restores spatial-domain texture details through a multi-scale detail enhancement module (MDEM) and a cross-attention mechanism.
- Finally, we design a Mamba-based fusion (MF) module that models long-range spatial–angular dependencies via hidden state transitions, achieving linear computational complexity, in contrast to the quadratic complexity of Transformer-based self-attention.
2. Related Work
2.1. Light Field Super-Resolution
2.2. Frequency-Domain Approaches for Image Restoration
2.3. Spatial–Angular Interaction Mechanisms
2.4. State-Space Models in Vision Tasks
2.5. Super-Diffraction Imaging
2.6. Module Adaptation and Novel Design for Light Field Super-Resolution
3. Method
3.1. Overall Architecture of Our Network
3.2. Sub-View Interaction for High-Frequency Recovery
Local Sparse Angular Attention
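As summarized in the contributions, LSAA restricts angular attention to sub-view pairs that fall inside a local window around each view. A minimal NumPy sketch of such a window mask over a U × V angular grid (the function name, shapes, and the Chebyshev-window rule are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def local_angular_mask(U, V, w):
    # Boolean mask over sub-view pairs: view (u, v) may attend to view
    # (u', v') only if both angular offsets are at most w, i.e. a
    # (2w+1) x (2w+1) local window -- an assumed stand-in for LSAA's
    # window mask mechanism.
    coords = np.array([(u, v) for u in range(U) for v in range(V)])
    du = np.abs(coords[:, None, 0] - coords[None, :, 0])
    dv = np.abs(coords[:, None, 1] - coords[None, :, 1])
    return (du <= w) & (dv <= w)          # shape (U*V, U*V)

mask = local_angular_mask(5, 5, 1)
dense_pairs = mask.size                   # 625 pairs under full angular attention
sparse_pairs = int(mask.sum())            # 169 pairs survive the 3x3 window
```

Applying such a mask before the softmax (masked logits set to a large negative value) prunes most view pairs, which is the source of the complexity reduction the contribution claims; the exact sampling rule in the paper may differ.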
3.3. Dual-Domain High-Frequency Detail Recovery Sub-Network
3.3.1. High-Frequency Information Recovery in the Frequency Domain
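The contributions describe this module as recovering high-frequency information through a multi-scale DCT with a masking mechanism. A single-scale sketch of DCT-domain high-pass masking, assuming an orthonormal DCT-II basis and a simple index-sum mask (both are illustrative choices, not the paper's actual multi-scale design):

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II basis matrix, so D @ D.T == I.
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    D = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
    D[0] /= np.sqrt(2.0)
    return D

def high_freq_component(x, radius):
    # Keep only DCT coefficients whose index sum is >= radius
    # (a hypothetical stand-in for the paper's frequency-domain mask).
    N = x.shape[0]
    D = dct_matrix(N)
    coeff = D @ x @ D.T                       # 2-D DCT-II
    k = np.arange(N)
    keep = (k[:, None] + k[None, :]) >= radius
    return D.T @ (coeff * keep) @ D           # inverse DCT of masked spectrum

x = np.arange(64, dtype=float).reshape(8, 8)
hf = high_freq_component(x, radius=1)         # radius 1 removes only the DC term
lf = x - hf                                   # the discarded low-frequency part
```

With `radius=1` the mask removes only the DC coefficient, so `hf` equals `x` minus its mean; larger radii discard more low-frequency structure and isolate progressively finer detail. A multi-scale variant would apply such masks at several block sizes.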
3.3.2. High-Frequency Information Recovery in the Spatial Domain
3.4. Spatial–Angular Feature Fusion
Mamba-Based Fusion Module
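The MF module is described as modeling long-range spatial–angular dependencies through hidden state transitions at linear cost. A toy NumPy scan showing why a state-space recurrence is O(T) in sequence length, versus O(T²) for pairwise self-attention (fixed toy matrices here; Mamba's transition parameters are input-dependent and the scan is computed in parallel in practice):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    # State-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.
    # One pass over the sequence touches each token once -> linear cost.
    T = x.shape[0]
    h = np.zeros(A.shape[0])
    y = np.empty((T, C.shape[0]))
    for t in range(T):
        h = A @ h + B @ x[t]
        y[t] = C @ h
    return y

rng = np.random.default_rng(0)
d_state, d_in, d_out, T = 4, 3, 2, 16     # illustrative sizes
A = 0.9 * np.eye(d_state)                 # toy decaying transition
B = rng.standard_normal((d_state, d_in))
C = rng.standard_normal((d_out, d_state))
x = rng.standard_normal((T, d_in))        # e.g. flattened spatial-angular tokens
y = ssm_scan(x, A, B, C)
```

Because the recurrence is causal, the order in which spatial–angular tokens are flattened into the sequence matters, which is presumably what the scan-direction ablation in Section 4.4 probes.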
4. Experiments
4.1. Datasets, Training Strategy, and Evaluation Metrics
4.2. Quantitative Evaluation
4.3. Qualitative Evaluation
4.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wu, G.; Masia, B.; Jarabo, A.; Zhang, Y.; Wang, L.; Dai, Q.; Chai, T.; Liu, Y. Light field image processing: An overview. IEEE J. Sel. Top. Signal Process. 2017, 11, 926–954. [Google Scholar] [CrossRef]
- Kim, C.; Zimmer, H.; Pritch, Y.; Sorkine-Hornung, A.; Gross, M.H. Scene reconstruction from high spatio-angular resolution light fields. ACM Trans. Graph. 2013, 32, 73:1–73:4. [Google Scholar] [CrossRef]
- Zhang, Q.; Li, H.; Wang, X.; Wang, Q. 3D scene reconstruction with an un-calibrated light field camera. Int. J. Comput. Vis. 2021, 129, 3006–3026. [Google Scholar] [CrossRef]
- Zhou, Y.; Guo, H.; Fu, R.; Liang, G.; Wang, C.; Wu, X. 3D reconstruction based on light field information. In Proceedings of the 2015 IEEE International Conference on Information and Automation (ICIA), Lijiang, China, 8–10 August 2015; IEEE: New York, NY, USA, 2015; pp. 976–981. [Google Scholar]
- Overbeck, R.S.; Erickson, D.; Evangelakos, D.; Pharr, M.; Debevec, P. A system for acquiring, processing, and rendering panoramic light field stills for virtual reality. ACM Trans. Graph. 2018, 37, 1–15. [Google Scholar] [CrossRef]
- Yu, J. A light-field journey to virtual reality. IEEE Multimed. 2017, 24, 104–112. [Google Scholar] [CrossRef]
- Lam, E.Y. Computational photography with plenoptic camera and light field capture: Tutorial. J. Opt. Soc. Am. A 2015, 32, 2021–2032. [Google Scholar] [CrossRef]
- Levoy, M. Light fields and computational imaging. Computer 2006, 39, 46–55. [Google Scholar] [CrossRef]
- Choy, C.; Gwak, J.; Savarese, S. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3075–3084. [Google Scholar]
- Deeba, F.; Kun, S.; Dharejo, F.A.; Zhou, Y. Wavelet-based enhanced medical image super resolution. IEEE Access 2020, 8, 37035–37044. [Google Scholar] [CrossRef]
- Beaini, D.; Passaro, S.; Létourneau, V.; Hamilton, W.; Corso, G.; Liò, P. Directional graph networks. In Proceedings of the 38th International Conference on Machine Learning (ICML), Virtual Event, 18–24 July 2021; pp. 748–758. [Google Scholar]
- Wang, Y.; Wang, L.; Wu, G.; Yang, J.; An, W.; Yu, J.; Guo, Y. Disentangling light fields for super-resolution and disparity estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 425–443. [Google Scholar] [CrossRef]
- Wanner, S.; Goldluecke, B. Variational light field analysis for disparity estimation and super-resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 606–619. [Google Scholar] [CrossRef]
- Wang, X.; Ma, J.; Yi, P.; Tian, X.; Jiang, J.; Zhang, X.P. Learning an epipolar shift compensation for light field image super-resolution. Inf. Fusion 2022, 79, 188–199. [Google Scholar] [CrossRef]
- Yeung, H.W.F.; Hou, J.; Chen, X.; Chen, J.; Chen, Z.; Chung, Y.Y. Light field spatial super-resolution using deep efficient spatial–angular separable convolution. IEEE Trans. Image Process. 2018, 28, 2319–2330. [Google Scholar]
- Yan, T.; Jiao, J.; Liu, W.; Lau, R.W. Stereoscopic image generation from light field with disparity scaling and super-resolution. IEEE Trans. Image Process. 2019, 29, 1827–1842. [Google Scholar] [CrossRef]
- Wang, S.; Zhou, T.; Lu, Y.; Di, H. Detail-preserving transformer for light field image super-resolution. In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), Virtual, 22 February–1 March 2022; Volume 36, pp. 2522–2530. [Google Scholar]
- Xu, R.; Kang, X.; Li, C.; Chen, H.; Ming, A. DCT-FANet: DCT based frequency attention network for single image super-resolution. Displays 2022, 74, 102220. [Google Scholar] [CrossRef]
- Dai, T.; Wang, J.; Guo, H.; Li, J.; Wang, J.; Zhu, Z. FreqFormer: Frequency-aware transformer for lightweight image super-resolution. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI), Jeju, Republic of Korea, 3–9 August 2024; pp. 731–739. [Google Scholar]
- Yoon, Y.; Jeon, H.G.; Yoo, D.; Lee, J.Y.; Kweon, I.S. Light-field image super-resolution using convolutional neural network. IEEE Signal Process. Lett. 2017, 24, 848–852. [Google Scholar] [CrossRef]
- Ghassab, V.K.; Bouguila, N. Light field super-resolution using edge-preserved graph-based regularization. IEEE Trans. Multimed. 2019, 22, 1447–1457. [Google Scholar] [CrossRef]
- Gao, C.; Lin, Y.; Chang, S.; Zhang, S. Spatial-angular multi-scale mechanism for light field spatial super-resolution. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 17–24 June 2023; pp. 1961–1970. [Google Scholar]
- Liu, G.; Yue, H.; Wu, J.; Yang, J. Efficient light field angular super-resolution with sub-aperture feature learning and macro-pixel upsampling. IEEE Trans. Multimed. 2022, 25, 6588–6600. [Google Scholar] [CrossRef]
- Gu, A.; Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023, arXiv:2312.00752. [Google Scholar]
- Liu, Y.; Tian, Y.; Zhao, Y.; Yu, H.; Xie, L.; Wang, Y.; Ye, Q.; Jiao, J.; Liu, Y. VMamba: Visual state space model. Adv. Neural Inf. Process. Syst. 2024, 37, 103031–103063. [Google Scholar]
- Li, K.; Li, X.; Wang, Y.; He, Y.; Wang, Y.; Wang, L.; Qiao, Y. VideoMamba: State space model for efficient video understanding. In Proceedings of the 18th European Conference on Computer Vision (ECCV), Milan, Italy, 29 September–4 October 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 237–255. [Google Scholar]
- Pendry, J.B. Negative Refraction Makes a Perfect Lens. Phys. Rev. Lett. 2000, 85, 3966–3969. [Google Scholar] [CrossRef]
- Liu, J.; Sun, W.; Wu, F.; Shan, H.; Xie, X. Macroscopic Fourier Ptychographic Imaging Based on Deep Learning. Photonics 2025, 12, 170. [Google Scholar] [CrossRef]
- Liang, Z.; Wang, Y.; Wang, L.; Yang, J.; Zhou, S. Light field image super-resolution with transformers. IEEE Signal Process. Lett. 2022, 29, 563–567. [Google Scholar] [CrossRef]
- Zhu, L.; Liao, B.; Zhang, Q.; Wang, X.; Liu, W.; Wang, X. Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model. arXiv 2024, arXiv:2401.09417. [Google Scholar] [CrossRef]
- Hsu, R.; Kodama, K.; Harashima, H. View interpolation using epipolar plane images. In Proceedings of the 1st International Conference on Image Processing (ICIP), Austin, TX, USA, 13–16 November 1994; IEEE: New York, NY, USA, 1994; Volume 2, pp. 745–749. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
- Park, S.C.; Park, M.K.; Kang, M.G. Super-resolution image reconstruction: A technical overview. IEEE Signal Process. Mag. 2003, 20, 21–36. [Google Scholar] [CrossRef]
- Defrise, M.; Gullberg, G.T. Image reconstruction. Phys. Med. Biol. 2006, 51, R139. [Google Scholar] [CrossRef]
- Demoment, G. Image reconstruction and restoration: Overview of common estimation structures and problems. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 2024–2036. [Google Scholar] [CrossRef]
- Gao, S.; Zhang, P.; Yan, T.; Lu, H. Multi-scale and detail-enhanced segment anything model for salient object detection. In Proceedings of the ACM Multimedia, Melbourne, Australia, 28 October–1 November 2024; pp. 9894–9903. [Google Scholar]
- Hamilton, J.D. State-space models. Handb. Econom. 1994, 4, 3039–3080. [Google Scholar]
- Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
- Honauer, K.; Johannsen, O.; Kondermann, D.; Goldluecke, B. A dataset and evaluation methodology for depth estimation on 4D light fields. In Proceedings of the 13th Asian Conference on Computer Vision (ACCV), Taipei, Taiwan, 20–24 November 2016; Springer: Berlin/Heidelberg, Germany, 2017; pp. 19–34. [Google Scholar]
- Wanner, S.; Meister, S.; Goldluecke, B. Datasets and benchmarks for densely sampled 4D light fields. In Proceedings of the 18th International Workshop on Vision, Modeling and Visualization (VMV), Lugano, Switzerland, 11–13 September 2013; Volume 13, pp. 225–226. [Google Scholar]
- Le Pendu, M.; Jiang, X.; Guillemot, C. Light field inpainting propagation via low rank matrix completion. IEEE Trans. Image Process. 2018, 27, 1981–1993. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
- Zhang, S.; Lin, Y.; Sheng, H. Residual networks for light field image super-resolution. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 11046–11055. [Google Scholar]
- Jin, J.; Hou, J.; Chen, J.; Kwong, S. Light field spatial super-resolution via deep combinatorial geometry embedding and structural consistency regularization. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2260–2269. [Google Scholar]
- Wang, Y.; Wang, L.; Yang, J.; An, W.; Yu, J.; Guo, Y. Spatial-angular interaction for light field image super-resolution. In Proceedings of the 16th European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 290–308. [Google Scholar]
- Wang, Y.; Yang, J.; Wang, L.; Ying, X.; Wu, T.; An, W.; Guo, Y. Light field image super-resolution using deformable convolution. IEEE Trans. Image Process. 2020, 30, 1057–1071. [Google Scholar] [CrossRef]
- Sarma, M.; Bond, C.; Nara, S.; Raza, H. MEGNet: A MEG-Based Deep Learning Model for Cognitive and Motor Imagery Classification. In Proceedings of the 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Istanbul, Türkiye, 5–8 December 2023; IEEE: New York, NY, USA, 2023; pp. 2571–2578. [Google Scholar]
- Liu, G.; Yue, H.; Wu, J.; Yang, J. Intra-inter view interaction network for light field image super-resolution. IEEE Trans. Multimed. 2021, 25, 256–266. [Google Scholar] [CrossRef]
- Cheng, Z.; Liu, Y.; Xiong, Z. Spatial-angular versatile convolution for light field reconstruction. IEEE Trans. Comput. Imaging 2022, 8, 1131–1144. [Google Scholar] [CrossRef]
- Liang, Z.; Wang, Y.; Wang, L.; Yang, J.; Zhou, S.; Guo, Y. Learning non-local spatial-angular correlation for light field image super-resolution. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 12376–12386. [Google Scholar]
- Van Duong, V.; Huu, T.N.; Yim, J.; Jeon, B. Light field image super-resolution network via joint spatial-angular and epipolar information. IEEE Trans. Comput. Imaging 2023, 9, 350–366. [Google Scholar] [CrossRef]
- Cong, R.; Sheng, H.; Yang, D.; Cui, Z.; Chen, R. Exploiting spatial and angular correlations with deep efficient transformers for light field image super-resolution. IEEE Trans. Multimed. 2023, 26, 1421–1435. [Google Scholar] [CrossRef]
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
Methods | Scale | #Params. | HCInew (PSNR/SSIM) | HCIold (PSNR/SSIM) | STFgantry (PSNR/SSIM) |
---|---|---|---|---|---|
Bilinear | ×2 | – | 30.718/0.9192 | 36.243/0.9709 | 29.577/0.9310 |
Bicubic | ×2 | – | 31.887/0.9356 | 37.686/0.9785 | 31.063/0.9498 |
VDSR [54] | ×2 | 0.665 M | 34.371/0.9561 | 40.606/0.9867 | 35.541/0.9789 |
EDSR [55] | ×2 | 38.62 M | 34.828/0.9592 | 41.014/0.9874 | 36.296/0.9818 |
RCAN [43] | ×2 | 15.31 M | 35.022/0.9603 | 41.125/0.9875 | 36.670/0.9831 |
resLF [44] | ×2 | 7.982 M | 36.685/0.9739 | 43.422/0.9932 | 38.354/0.9904 |
LFSSR [15] | ×2 | 0.888 M | 36.802/0.9749 | 43.811/0.9938 | 37.944/0.9898 |
LF-ATO [45] | ×2 | 1.216 M | 37.244/0.9767 | 44.205/0.9942 | 39.636/0.9929 |
LF_InterNet [46] | ×2 | 5.040 M | 37.170/0.9763 | 44.573/0.9946 | 38.435/0.9909 |
LF-DFnet [47] | ×2 | 3.940 M | 37.418/0.9773 | 44.198/0.9941 | 39.427/0.9926 |
MEG-Net [48] | ×2 | 1.693 M | 37.424/0.9777 | 44.097/0.9942 | 38.767/0.9915 |
LF-IINet [49] | ×2 | 4.837 M | 37.768/0.9790 | 44.852/0.9948 | 39.894/0.9936 |
DPT [17] | ×2 | 3.731 M | 37.355/0.9771 | 44.302/0.9943 | 39.429/0.9926 |
LFT [29] | ×2 | 1.114 M | 37.838/0.9791 | 44.522/0.9945 | 40.510/0.9941 |
DistgSSR [12] | ×2 | 3.532 M | 37.959/0.9796 | 44.943/0.9949 | 40.404/0.9942 |
LFSSR_SAV [50] | ×2 | 1.217 M | 37.425/0.9776 | 44.216/0.9942 | 38.689/0.9914 |
EPIT [51] | ×2 | 1.421 M | 38.228/0.9810 | 45.075/0.9949 | 42.166/0.9957 |
HLFSR-SSR [52] | ×2 | 13.72 M | 38.317/0.9807 | 44.978/0.9950 | 40.849/0.9947 |
LF-DET [53] | ×2 | 1.588 M | 38.314/0.9807 | 44.986/0.9950 | 41.762/0.9955 |
Ours | ×2 | 1.352 M | 38.309/0.9807 | 45.361/0.9964 | 42.238/0.9962 |
Methods | Scale | #Params. | HCInew (PSNR/SSIM) | HCIold (PSNR/SSIM) | STFgantry (PSNR/SSIM) |
---|---|---|---|---|---|
Bilinear | ×4 | – | 27.085/0.8397 | 31.688/0.9256 | 25.203/0.8261 |
Bicubic | ×4 | – | 27.715/0.8517 | 32.576/0.9344 | 26.087/0.8452 |
VDSR [54] | ×4 | 0.665 M | 29.308/0.8823 | 34.810/0.9515 | 28.506/0.9009 |
EDSR [55] | ×4 | 38.89 M | 29.591/0.8869 | 35.176/0.9536 | 28.703/0.9072 |
RCAN [43] | ×4 | 15.36 M | 29.694/0.8886 | 35.359/0.9548 | 29.021/0.9131 |
resLF [44] | ×4 | 8.646 M | 30.723/0.9107 | 36.705/0.9682 | 30.191/0.9372 |
LFSSR [15] | ×4 | 1.774 M | 30.928/0.9145 | 36.907/0.9696 | 30.570/0.9426 |
LF-ATO [45] | ×4 | 1.364 M | 30.880/0.9135 | 36.999/0.9699 | 30.607/0.9430 |
LF_InterNet [46] | ×4 | 5.483 M | 30.961/0.9161 | 37.150/0.9716 | 30.365/0.9409 |
LF-DFnet [47] | ×4 | 3.990 M | 31.234/0.9196 | 37.321/0.9718 | 31.147/0.9494 |
MEG-Net [48] | ×4 | 1.775 M | 31.103/0.9177 | 37.287/0.9716 | 30.771/0.9453 |
LF-IINet [49] | ×4 | 4.886 M | 31.331/0.9208 | 37.620/0.9734 | 31.261/0.9502 |
DPT [17] | ×4 | 3.778 M | 31.196/0.9188 | 37.412/0.9721 | 31.150/0.9488 |
LFT [29] | ×4 | 1.163 M | 31.462/0.9218 | 37.630/0.9735 | 31.860/0.9548 |
DistgSSR [12] | ×4 | 3.582 M | 31.380/0.9217 | 37.563/0.9732 | 31.649/0.9535 |
LFSSR_SAV [50] | ×4 | 1.543 M | 31.450/0.9217 | 37.497/0.9721 | 31.362/0.9505 |
EPIT [51] | ×4 | 1.470 M | 31.511/0.9231 | 37.677/0.9737 | 32.179/0.9571 |
HLFSR-SSR [52] | ×4 | 13.87 M | 31.571/0.9238 | 37.776/0.9742 | 31.641/0.9537 |
LF-DET [53] | ×4 | 1.687 M | 31.558/0.9235 | 37.843/0.9744 | 32.139/0.9573 |
Ours | ×4 | 1.402 M | 31.545/0.9229 | 38.024/0.9812 | 32.341/0.9607 |
Method | GFLOPs ↓ | #Params. ↓ | PSNR ↑ | SSIM ↑ |
---|---|---|---|---|
LSAA→SA | 71.943 | 1.352 M | 37.514 | 0.9743 |
w/o MDEM | 35.734 | 1.038 M | 36.952 | 0.9688 |
w/o DCTM | 41.390 | 1.349 M | 37.216 | 0.9721 |
w/o MF | 39.566 | 1.202 M | 36.871 | 0.9639 |
MF→SA | 97.293 | 1.416 M | 36.295 | 0.9577 |
Ours | 43.868 | 1.352 M | 38.309 | 0.9807 |
Direction | ||||
---|---|---|---|---|
PSNR ↑ | 38.309 | 37.902 | 37.884 | 38.288 |
SSIM ↑ | 0.9807 | 0.9747 | 0.9726 | 0.9792 |
Local Window Range | |||
---|---|---|---|
PSNR ↑ | 38.004 | 38.309 | 37.936 |
SSIM ↑ | 0.9762 | 0.9807 | 0.9733 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Z.; Yan, T.; Huang, H.; Liu, J.; Wang, C.; Wei, C. Light Field Super-Resolution via Dual-Domain High-Frequency Restoration and State-Space Fusion. Electronics 2025, 14, 1747. https://doi.org/10.3390/electronics14091747