LL-CSFormer: A Novel Image Denoiser for Intensified CMOS Sensing Images under a Low Light Environment
Abstract
1. Introduction
- We propose an image denoiser for ICMOS sensing images, termed the low-light cross-scale transformer (LL-CSFormer).
- To handle the spatially clustered noise in ICMOS sensing images, we design a cross-scale transformer network that effectively separates signal from noise through multi-scale and multi-range learning.
- We establish a novel ICMOS image dataset of still noisy bursts captured under different illumination levels and scenes, which also enables noise-to-noise training without clean reference images (a minimal training sketch follows this list).
- Extensive experiments on the proposed dataset demonstrate that our method outperforms existing state-of-the-art methods for ICMOS sensing image denoising.
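The two-stream noise-to-noise training strategy (Section 3.2) builds on the Noise2Noise idea of supervising a denoiser with pairs of independent noisy captures of the same static scene, so no clean targets are needed. The sketch below illustrates only that general idea; the denoiser network, optimizer, single-channel frame shape, and the use of a Charbonnier loss are assumptions for illustration, not the authors' exact two-stream implementation.

```python
import torch
import torch.nn as nn


class CharbonnierLoss(nn.Module):
    """Smooth L1-like (Charbonnier) loss, widely used in image restoration."""

    def __init__(self, eps: float = 1e-3):
        super().__init__()
        self.eps = eps

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return torch.mean(torch.sqrt((pred - target) ** 2 + self.eps ** 2))


def noise2noise_step(denoiser: nn.Module,
                     optimizer: torch.optim.Optimizer,
                     noisy_a: torch.Tensor,
                     noisy_b: torch.Tensor,
                     criterion: nn.Module) -> float:
    """One noise-to-noise update on a pair of noisy frames of the same static scene.

    noisy_a and noisy_b are independent noisy captures, e.g. shape (B, 1, H, W)
    for single-channel ICMOS frames (an assumption made for this sketch).
    """
    optimizer.zero_grad()
    # Symmetric pairing: each noisy frame supervises the prediction made from the other.
    loss = criterion(denoiser(noisy_a), noisy_b) + criterion(denoiser(noisy_b), noisy_a)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The symmetric pairing simply swaps the roles of the two noisy frames so that each serves as both input and target; how the actual two-stream scheme pairs and weights the frames may differ from this sketch.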
2. Intensified CMOS Imaging System
3. Proposed Method
3.1. Low Light Cross-Scale Transformer
Hybrid Transformer Module (HTM)
3.2. Two-Stream Noise-to-Noise Training Strategy
4. Experimental Results and Analysis
4.1. ICMOS Noisy Image Dataset
4.2. Implementation Details
4.3. Ablation Study
4.3.1. Analysis of the Cross-Scale Structure
4.3.2. Analysis of the STL
Model | CS | STL | CAB | PSNR/dB (↑), lx | PSNR/dB (↑), lx | SSIM (↑), lx | SSIM (↑), lx
---|---|---|---|---|---|---|---
w/o CS | | ✓ | ✓ | 33.63 | 33.45 | 0.8595 | 0.8880
w/o STL | ✓ | | ✓ | 34.13 | 33.74 | 0.8857 | 0.9004
w/o CAB | ✓ | ✓ | | 33.95 | 33.42 | 0.8810 | 0.8833
LL-CSFormer | ✓ | ✓ | ✓ | 34.50 | 34.29 | 0.8904 | 0.9077
4.3.3. Analysis of the CAB
4.4. Results and Analysis
4.4.1. Visual Comparison
4.4.2. Quantitative Comparison
Methods | PSNR/dB (↑), lx | PSNR/dB (↑), lx | PSNR/dB (↑), Outdoor | SSIM (↑), lx | SSIM (↑), lx | SSIM (↑), Outdoor
---|---|---|---|---|---|---
NLM | 30.39 | 22.62 | 18.92 | 0.7753 | 0.5255 | 0.5559
BM3D | 33.28 | 31.66 | 18.00 | 0.8725 | 0.8389 | 0.6947
CBDnet | 31.82 | 31.49 | 22.03 | 0.8667 | 0.8794 | 0.7564
DANet | 32.81 | 30.88 | 21.08 | 0.8407 | 0.7580 | 0.6870
VDN | 31.23 | 31.38 | 19.12 | 0.8241 | 0.8479 | 0.5341
DeamNet | 32.34 | 32.85 | 22.04 | 0.8745 | 0.8937 | 0.7595
Uformer | 33.98 | 33.89 | 22.07 | 0.8791 | 0.8975 | 0.7639
Ours | 34.50 | 34.29 | 22.24 | 0.8904 | 0.9077 | 0.7666
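PSNR and SSIM reported above are standard full-reference quality metrics computed between a denoised output and a reference frame. The snippet below shows one way to compute them with scikit-image; the single-channel, [0, 1]-scaled inputs and the choice of reference frames are illustrative assumptions rather than the authors' exact evaluation protocol.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(reference: np.ndarray, denoised: np.ndarray):
    """Return (PSNR in dB, SSIM) between a reference frame and a denoised frame.

    Both arrays are assumed to be single-channel and scaled to [0, 1];
    adjust data_range for other encodings (e.g. 255 for uint8 images).
    """
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
    ssim = structural_similarity(reference, denoised, data_range=1.0)
    return psnr, ssim
```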
Methods | Parameters (M) | Running Time (s) |
---|---|---|
NLM | / | 163.2 |
BM3D | / | 2236 |
CBDnet | 4.3 | 0.124 |
DANet | 9.2 | 0.318 |
VDN | 7.8 | 0.109 |
DeamNet | 2.2 | 0.369 |
Uformer | 50.9 | 0.886 |
Ours | 0.97 | 0.186 |
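Parameter counts (in millions) and per-image running times such as those above are typically obtained by counting trainable parameters and averaging the time of repeated forward passes. The helper below is a generic PyTorch sketch under assumed settings (input size, warm-up iterations, device); it is not the measurement script used by the authors.

```python
import time
import torch


def profile_model(model: torch.nn.Module,
                  input_shape=(1, 1, 512, 512),
                  device: str = "cuda",
                  runs: int = 20):
    """Report trainable parameters (millions) and average forward-pass time (s)."""
    model = model.to(device).eval()
    params_m = sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(5):  # warm-up passes so lazy initialization is excluded
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return params_m, (time.time() - start) / runs
```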
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wang, F.; Wang, Y.; Yang, M.; Zhang, X.; Zheng, N. A denoising scheme for randomly clustered noise removal in ICCD sensing image. Sensors 2017, 17, 233.
- Yang, M.; Wang, F.; Wang, Y.; Zheng, N. A denoising method for randomly clustered noise in ICCD sensing images based on hypergraph cut and down sampling. Sensors 2017, 17, 2778.
- Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), Bombay, India, 7 January 1998; pp. 839–846.
- Zhang, W.; Li, J.; Yang, Y. A fractional diffusion-wave equation with non-local regularization for image denoising. Signal Process. 2014, 103, 6–15.
- Su, Y.; Xu, Z. Parallel implementation of wavelet-based image denoising on programmable PC-grade graphics hardware. Signal Process. 2010, 90, 2396–2411.
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
- Bui, A.T.; Im, J.K.; Apley, D.W.; Runger, G.C. Projection-free kernel principal component analysis for denoising. Neurocomputing 2019, 357, 163–176.
- Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279.
- Ou, Y.; Swamy, M.; Luo, J.; Li, B. Single image denoising via multi-scale weighted group sparse coding. Signal Process. 2022, 200, 108650.
- Nie, T.; Wang, X.; Liu, H.; Li, M.; Nong, S.; Yuan, H.; Zhao, Y.; Huang, L. Enhancement and Noise Suppression of Single Low-Light Grayscale Images. Remote Sens. 2022, 14, 3398.
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
- Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; Zhang, L. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1712–1722.
- Yue, Z.; Yong, H.; Zhao, Q.; Meng, D.; Zhang, L. Variational denoising network: Toward blind noise modeling and removal. Adv. Neural Inf. Process. Syst. 2019, 32, 1690–1701.
- Yue, Z.; Zhao, Q.; Zhang, L.; Meng, D. Dual adversarial network: Toward real-world noise removal and noise generation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 41–58.
- Cheng, S.; Wang, Y.; Huang, H.; Liu, D.; Fan, H.; Liu, S. NBNet: Noise basis learning for image denoising with subspace projection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4896–4906.
- Liu, Y.; Wan, B.; Shi, D.; Cheng, X. Generative Recorrupted-to-Recorrupted: An Unsupervised Image Denoising Network for Arbitrary Noise Distribution. Remote Sens. 2023, 15, 364.
- Ren, C.; He, X.; Wang, C.; Zhao, Z. Adaptive consistency prior based deep network for image denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8596–8606.
- Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; Gao, W. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12299–12310.
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 213–229.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
- Yang, S.; Quan, Z.; Nie, M.; Yang, W. TransPose: Keypoint localization via transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 11802–11812.
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general U-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 17683–17693.
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image restoration using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1833–1844.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 5728–5739.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Learning enriched features for real image restoration and enhancement. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 492–511.
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Zhang, X.; Wang, X. MARN: Multi-Scale Attention Retinex Network for Low-Light Image Enhancement. IEEE Access 2021, 9, 50939–50948.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- Pang, T.; Zheng, H.; Quan, Y.; Ji, H. Recorrupted-to-Recorrupted: Unsupervised deep learning for image denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2043–2052.
- Calvarons, A.F. Improved Noise2Noise denoising with limited data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 796–805.
- Charbonnier, P.; Blanc-Feraud, L.; Aubert, G.; Barlaud, M. Two deterministic half-quadratic regularization algorithms for computed imaging. In Proceedings of the 1st International Conference on Image Processing, Austin, TX, USA, 13–16 November 1994; Volume 2, pp. 168–172.
- Gao, T.; Cao, F.; Wang, X.; Cui, Z. Direct Coupling of Low Light Image Intensifier with Large Size CMOS. Infrared Technol. 2021, 43, 537–542.
- Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65.
Dataset | Location | Illumination Level | Total Scenes | Total Images
---|---|---|---|---
part1 | indoor | lx | 20 | 20,000
part1 | indoor | lx | 20 | 20,000
part2 | outdoor | lx–lx | 15 | 15,000
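Because truly clean ICMOS frames are unavailable, a common way to obtain a reference image from a still noisy burst such as those in this dataset is to average the frames of each static scene, since the temporal mean suppresses zero-mean noise. The sketch below illustrates that practice only; it is not necessarily the protocol used for the reported results.

```python
import numpy as np


def average_burst(frames: np.ndarray) -> np.ndarray:
    """Temporal mean of a still noisy burst, shape (N, H, W), as a pseudo-clean reference.

    Assumes the scene and camera are static across the burst; for zero-mean noise,
    the temporal mean converges toward the underlying clean signal as N grows.
    """
    return frames.astype(np.float64).mean(axis=0)
```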