Physics-Based Noise Modeling and Deep Learning for Denoising Permanently Shadowed Lunar Images
Abstract
1. Introduction
- (1) This study integrates a physical noise model with deep learning by simulating the noise distribution and intensity of LROC NAC lunar PSR images. This enhances the denoising capability of the model, provides a physics-based simulation method for constructing training datasets, and improves the physical interpretability of the network.
- (2) This study proposes the full-scale feature refinement and local enhancement shifted window multi-head self-attention Transformer (FRET) network, which captures multi-resolution noise and detail through full-scale skip connections and improves denoising accuracy and detail recovery with the self-attention mechanism of Transformers.
- (3) This study proposes the REWin Transformer module, which combines the shifted window multi-head self-attention mechanism (SW-MSA), a gated feature refinement feedforward neural network (GFRN), and a gated convolution-based local enhancement feedforward neural network (GCEN). By refining feature maps, removing redundant information, and enhancing detail recovery, it significantly improves denoising performance and texture reconstruction.
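The physics-based simulation in contribution (1) follows the standard CCD noise chain (photon shot noise, dark-current noise, and read noise, as modeled in the Konnik and Welsh tutorial cited below). A minimal sketch of such a corruption pipeline is shown here; the function name and all parameter values are illustrative assumptions, not the authors' LROC NAC calibration values:

```python
import numpy as np

def add_ccd_noise(clean, gain=2.0, exposure_s=1.0,
                  dark_rate_e=0.5, read_std_e=1.5, bit_depth=12, rng=None):
    """Corrupt a clean image (in DN) with a simple CCD noise chain:
    photon shot noise (Poisson), dark-current noise (Poisson),
    and read noise (Gaussian), then re-quantize back to DN."""
    rng = np.random.default_rng(rng)
    # Convert clean DN values to expected photoelectron counts.
    electrons = clean.astype(np.float64) * gain
    # Shot noise: photon arrival is a Poisson process.
    noisy_e = rng.poisson(electrons).astype(np.float64)
    # Dark current accumulates with exposure time; also Poisson-distributed.
    noisy_e += rng.poisson(dark_rate_e * exposure_s, size=clean.shape)
    # Read noise: zero-mean Gaussian added at sensor readout.
    noisy_e += rng.normal(0.0, read_std_e, size=clean.shape)
    # Convert back to DN and clip to the sensor's quantization range.
    return np.clip(noisy_e / gain, 0, 2**bit_depth - 1)
```

Applying such a model to well-exposed images yields (noisy, clean) training pairs whose noise statistics resemble low-signal PSR observations.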
2. Datasets
2.1. LROC NAC Data Information
2.2. Simulated Data of Lunar PSRs
3. Methods
3.1. Overall Pipeline
3.2. REWin Transformer Block
3.3. Evaluating Indicator
4. Results
4.1. Experimental Environment and Model Training
4.2. Experimental Results of Simulated Images
4.3. Experimental Results of Real Images
4.4. Analysis of the Impact of Different Modules on FRET
5. Discussion
5.1. The Significance of PSR Image Denoising
5.2. Scalability of the Proposed Method
5.3. Limitations of Physical Noise Models
6. Conclusions
- (1) The physical noise model effectively simulated the noise in lunar PSR images obtained from LROC NAC. It not only provides an effective solution for constructing training samples but also enhances the physical interpretability of the network.
- (2) The proposed FRET network combines full-scale skip connections with Transformer structures, strengthening the network's ability to capture multi-scale information and extract key features. This significantly improves the denoising of complex noisy images.
- (3) The proposed REWin Transformer block, consisting of SW-MSA, GFRN, and GCEN modules, removes redundant information from feature maps and reduces its interference with network learning, while refining and locally enhancing the remaining key information to further improve the recovery of texture details from noisy images.
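The SW-MSA mechanism used in the REWin block (following the Swin Transformer cited below) computes self-attention inside non-overlapping windows and, on alternating layers, cyclically shifts the feature map so that information mixes across window boundaries. A minimal numpy sketch of the partitioning and shifting steps, not the paper's implementation, might look like this:

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into non-overlapping win x win windows,
    returning an array of shape (num_windows, win, win, C)."""
    H, W, C = x.shape
    assert H % win == 0 and W % win == 0, "H and W must be divisible by win"
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win, win, C)

def shift_windows(x, win):
    """Cyclically shift the map by win//2 so the next windowed attention
    pass mixes information across the previous window boundaries (SW-MSA)."""
    return np.roll(x, shift=(-(win // 2), -(win // 2)), axis=(0, 1))
```

Attention is then applied independently within each window, which keeps the cost linear in image size rather than quadratic.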
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| LROC | Lunar Reconnaissance Orbiter Camera |
| NACs | Narrow-Angle Cameras |
| PSRs | Permanently shadowed regions |
| FRET | Full-scale feature refinement and local enhancement shifted window multi-head self-attention Transformer |
| SW-MSA | Shifted window multi-head self-attention |
| GFRN | Gated feature refinement feedforward neural network |
| GCEN | Gated convolution-based local enhancement feedforward neural network |
| CCD | Charge-coupled device |
| DN | Digital number |
| MSE | Mean Squared Error |
| FFN | Feedforward neural network |
| MAE | Mean Absolute Error |
| SSIM | Structural Similarity Index Measure |
| PSNR | Peak Signal-to-Noise Ratio |
| IL-NIQE | Integrated Local Natural Image Quality Evaluator |
| NRQM | Non-Reference Quality Metric |
| MUSIQ | Multi-Scale Image Quality |
References
- Chin, G.; Brylow, S.; Foote, M.; Garvin, J.; Kasper, J.; Keller, J.; Litvak, M.; Mitrofanov, I.; Paige, D.; Raney, K.; et al. Lunar reconnaissance orbiter overview: The instrument suite and mission. Space Sci. Rev. 2007, 129, 391–419. [Google Scholar]
- Robinson, M.S.; Brylow, S.M.; Tschimmel, M.E.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; et al. Lunar reconnaissance orbiter camera (LROC) instrument overview. Space Sci. Rev. 2010, 150, 81–124. [Google Scholar] [CrossRef]
- Chromey, F.R. To Measure the Sky; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
- Konnik, M.; Welsh, J. High-level numerical simulations of noise in CCD and CMOS photosensors: Review and tutorial. arXiv 2014, arXiv:1412.4031. [Google Scholar]
- Buades, A.; Coll, B.; Morel, J.M. A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 2005, 4, 490–530. [Google Scholar] [CrossRef]
- Goyal, B.; Dogra, A.; Agrawal, S.; Sohi, B.S.; Sharma, A. Image denoising review: From classical to state-of-the-art approaches. Inf. Fusion 2020, 55, 220–244. [Google Scholar] [CrossRef]
- Fan, L.; Zhang, F.; Fan, H.; Zhang, C. Brief review of image denoising techniques. Vis. Comput. Ind. Biomed. Art 2019, 2, 7. [Google Scholar] [CrossRef] [PubMed]
- Gu, S.; Timofte, R. A Brief Review of Image Denoising Algorithms and Beyond. In Inpainting and Denoising Challenges; Part of the The Springer Series on Challenges in Machine Learning Book Series (SSCML); Springer: Cham, Switzerland, 2019. [Google Scholar]
- Jebur, R.S.; Zabil, M.H.B.M.; Hammood, D.A.; Cheng, L.K. A comprehensive review of image denoising in deep learning. Multimed. Tools Appl. 2024, 83, 58181–58199. [Google Scholar] [CrossRef]
- Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar]
- Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
- Portilla, J.; Strela, V.; Wainwright, M.J.; Simoncelli, E.P. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Process. 2003, 12, 1338–1351. [Google Scholar] [CrossRef] [PubMed]
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
- Huang, T.; Li, S.; Jia, X.; Lu, H.; Liu, J. Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images. arXiv 2021, arXiv:2101.02824. [Google Scholar]
- Cheng, S.; Wang, Y.; Huang, H.; Liu, D.; Fan, H.; Liu, S. NBNet: Noise Basis Learning for Image Denoising with Subspace Projection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
- Liu, Y.; Qin, Z.; Anwar, S.; Ji, P.; Kim, D.; Caldwell, S.; Gedeon, T. Invertible Denoising Network: A Light Solution for Real Noise Removal. arXiv 2021, arXiv:2104.10546. [Google Scholar]
- Byun, J.; Cha, S.; Moon, T. FBI-Denoiser: Fast Blind Image Denoiser for Poisson-Gaussian Noise. arXiv 2021, arXiv:2105.10967. [Google Scholar]
- Wu, W.; Liu, S.; Xia, Y.; Zhang, Y. Dual residual attention network for image denoising. Pattern Recognit. 2024, 149, 110291. [Google Scholar] [CrossRef]
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A General U-Shaped Transformer for Image Restoration. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17662–17672. [Google Scholar]
- Zhou, S.; Chen, D.; Pan, J.; Shi, J.; Yang, J. Adapt or Perish: Adaptive Sparse Transformer with Attentive Feature Refinement for Image Restoration. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024. [Google Scholar]
- Kong, Z.; Deng, F.; Zhuang, H.; Yu, J.; He, L.; Yang, X. A Comparison of Image Denoising Methods. arXiv 2023, arXiv:2304.08990. [Google Scholar]
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [PubMed]
- Mao, X.J.; Shen, C.; Yang, Y.B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Proceedings of the NIPS’16: Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 2810–2818. [Google Scholar]
- Tai, Y.; Yang, J.; Liu, X.; Xu, C. MemNet: A Persistent Memory Network for Image Restoration. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
- Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; Zhang, L. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; IEEE Computer Society: Washington, DC, USA, 2019. [Google Scholar]
- Lehtinen, J.; Munkberg, J.; Hasselgren, J.; Laine, S.; Karras, T.; Aittala, M.; Aila, T. Noise2Noise: Learning image restoration without clean data. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; International Machine Learning Society (IMLS): Stroudsburg, PA, USA, 2018. [Google Scholar]
- Krull, A.; Buchholz, T.O.; Jug, F. Noise2void-Learning denoising from single noisy images. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2124–2132. [Google Scholar]
- Lempitsky, V.; Vedaldi, A.; Ulyanov, D. Deep Image Prior. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9446–9454. [Google Scholar]
- Moseley, B.; Bickel, V.; López-Francos, I.G.; Rana, L. Extreme Low-Light Environment-Driven Image Denoising over Permanently Shadowed Lunar Regions with a Physical Noise Model. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 6313–6323. [Google Scholar]
- Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to See in the Dark. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Wei, K.; Fu, Y.; Yang, J.; Huang, H. A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Xu, K.; Yang, X.; Yin, B.; Lau, R.W.H. Learning to restore low-light images via decomposition-and-enhancement. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2278–2287. [Google Scholar]
- Maharjan, P.; Li, L.; Li, Z.; Xu, N.; Ma, C.; Li, Y. Improving extreme low-light image denoising via residual learning. In Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; IEEE Computer Society: Washington, DC, USA, 2019. [Google Scholar]
- Szeliski, R. Computer Vision: Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
- Working with Lunar Reconnaissance Orbiter LROC Narrow Angle Camera (NAC) Data. Available online: http://www.lroc.asu.edu/data/support/downloads/LROC_NAC_Processing_Guide.pdf (accessed on 21 November 2023).
- Humm, D.C.; Tschimmel, M.; Brylow, S.M.; Mahanti, P.; Tran, T.N.; Braden, S.E.; Wiseman, S.; Danton, J.; Eliason, E.M.; Robinson, M.S. Flight Calibration of the LROC Narrow Angle Camera. Space Sci. Rev. 2016, 200, 431–473. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Proceedings of the 8th International Conference on Learning Representations, ICLR, Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the NeurIPS, Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Li, Y.; Zhang, K.; Cao, J.; Timofte, R.; Van Gool, L. LocalViT: Bringing Locality to Vision Transformers. arXiv 2021, arXiv:2104.05707. [Google Scholar]
- Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; Zhang, L. CvT: Introducing convolutions to vision transformers. arXiv 2021, arXiv:2103.15808. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv 2021, arXiv:2103.14030. [Google Scholar]
- Shaw, P.; Uszkoreit, J.; Vaswani, A. Self-Attention with Relative Position Representations. arXiv 2018, arXiv:1803.02155. [Google Scholar]
- Chen, J.; Kao, S.; He, H.; Zhuo, W.; Wen, S.; Lee, C.; Chan, S.G. Run, don’t walk: Chasing higher flops for faster neural networks. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Bovik, A. A Feature-Enriched Completely Blind Image Quality Evaluator. IEEE Trans. Image Process. 2015, 24, 2579–2591. [Google Scholar] [CrossRef] [PubMed]
- Wang, Z.; Sheikh, H.; Bovik, A. No-reference perceptual quality assessment of JPEG compressed images. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002. [Google Scholar]
- Ke, J.; Wang, Q.; Wang, Y.; Milanfar, P.; Yang, F. MUSIQ: Multi-scale Image Quality Transformer. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 5128–5137. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
- Dua, P. Image Denoising Using a U-Net. 2019. Available online: http://stanford.edu/class/ee367/Winter2019/dua_report.pdf (accessed on 15 April 2023).
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. UNet 3+: A Full-Scale Connected UNet for Medical Image Segmentation. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059. [Google Scholar]
- Li, S.; Lucey, P.G.; Milliken, R.E.; Hayne, P.O.; Fisher, E.; Williams, J.P.; Hurley, D.M.; Elphic, R.C. Direct evidence of surface exposed water ice in the lunar polar regions. Proc. Natl. Acad. Sci. USA 2018, 115, 8907–8912. [Google Scholar] [CrossRef] [PubMed]
Quantitative comparison on simulated PSR images (full-reference metrics, Section 4.2):

| Evaluating Indicator | UNet | UNet3Plus | DRANet | UFormer | AST | This Study |
|---|---|---|---|---|---|---|
| MAE | 3.7986 | 3.1189 | 2.8718 | 2.8728 | 2.7597 | 2.5834 |
| SSIM | 0.9942 | 0.9958 | 0.9969 | 0.9967 | 0.9968 | 0.9973 |
| PSNR | 60.6075 | 61.5843 | 62.6493 | 62.8602 | 63.6921 | 64.0642 |
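The full-reference scores reported for the simulated experiments can be reproduced with standard definitions. A minimal numpy sketch of MAE and PSNR follows (SSIM is typically computed with a library such as scikit-image); the `data_range` default of 255 is an assumption about the image bit depth:

```python
import numpy as np

def mae(ref, est):
    """Mean absolute error between a clean reference and a denoised estimate."""
    return np.mean(np.abs(ref.astype(np.float64) - est.astype(np.float64)))

def psnr(ref, est, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Note that PSNR depends on the assumed data range, so the absolute dB values in the table are only comparable across methods evaluated with the same convention.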
Quantitative comparison on real PSR images (no-reference metrics, Section 4.3):

| Evaluating Indicator | Original Image | UNet | UNet3Plus | DRANet | UFormer | AST | This Study |
|---|---|---|---|---|---|---|---|
| IL-NIQE | 143.279 | 97.981 | 95.895 | 97.522 | 97.085 | 94.710 | 94.106 |
| NRQM | 2.762 | 2.946 | 2.969 | 3.015 | 2.988 | 3.044 | 3.068 |
| MUSIQ | 25.272 | 26.791 | 26.810 | 26.134 | 27.203 | 27.565 | 27.778 |
Ablation results for the FRET modules (Section 4.4):

| Evaluating Indicator | Baseline | Baseline+A | Baseline+A+B+B | Baseline+A+C+C | Baseline+A+C+B | Baseline+A+B+C |
|---|---|---|---|---|---|---|
| MAE | 3.3185 | 3.2900 | 2.9918 | 2.9525 | 2.9976 | 2.8575 |
| SSIM | 0.9957 | 0.9957 | 0.9966 | 0.9966 | 0.9967 | 0.9968 |
| PSNR | 61.8474 | 61.9904 | 62.7690 | 62.7622 | 62.8227 | 63.0946 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Pan, H.; Chen, B.; Zhou, R. Physics-Based Noise Modeling and Deep Learning for Denoising Permanently Shadowed Lunar Images. Appl. Sci. 2025, 15, 2358. https://doi.org/10.3390/app15052358