MEFormer: Enhancing Low-Light Images While Preserving Image Authenticity in Mining Environments
Abstract
1. Introduction
- Current algorithms fail to meet authenticity requirements for processed images in mining environments: Existing low-light image enhancement (LLIE) algorithms often fail to preserve image authenticity in mining environments, producing color distortion, amplified noise, and over-brightening. They may also introduce artifacts and spurious features that undermine the semantic accuracy and reliability of the enhanced images. These limitations hinder the use of LLIE technologies to obtain accurate, trustworthy visual information in mining operations.
- Expanding datasets for mining environments: There is a critical need for datasets specific to mining environments so that models can learn their distinctive illumination characteristics. Incorporating more relevant data enables models to handle the challenges posed by low-light mining imagery, improving both the accuracy and the realism of enhancement results.
- To address the insufficient preservation of image authenticity in existing LLIE methods, we propose the Unite Features Exchange Encoder–Decoder (UniteEDBlock) structure. It integrates a cross-scale feature fusion architecture into the encoder–decoder framework of MEFormer, allowing feature information to be exchanged and fused across multiple scales (a minimal illustrative sketch of this cross-scale exchange appears after this list). The resulting gain in feature accuracy leads to a significant improvement in the authenticity of low-light image enhancement in mining environments.
- We have developed a new dataset named the Mining Environment Low-Light (MELOL) dataset, which encompasses a variety of mining environments, including underground tunnels, working faces, chambers, truck loading areas, and belt conveyors. The dataset consists of pairs of low-light and normal-light images, meticulously designed to address the challenges posed by inadequate illumination in mining environments. This dataset is poised to facilitate more accurate and realistic training for image enhancement models in the mining sector.
- We propose a novel image enhancement framework for low-illumination mining environments, designated MEFormer. The framework integrates the Unite Axial Transformer block (UniteATBlock) and UniteEDBlock, leveraging axial-attention-based Transformers to significantly reduce model size while enhancing expressive capability. Within its encoder–decoder framework, MEFormer uses UniteEDBlock to capture semantic relationships across multiple scales, thereby preserving the authenticity of enhanced low-light images in mining environments. A comparative evaluation on the public LOL-v1 and MIT-Adobe FiveK datasets and on our MELOL dataset demonstrates MEFormer's superiority over both mainstream low-light enhancement algorithms and those specifically tailored for mining environments, highlighting its capacity to preserve image authenticity under complex lighting conditions.
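The cross-scale exchange behind UniteEDBlock can be illustrated with a short sketch. The PyTorch module below is a simplified, hypothetical illustration (the name CrossScaleExchange, the channel widths, and the resize–concatenate–project fusion are our assumptions, not the published implementation): features from two encoder scales are resampled to each other's resolution, fused, and redistributed so that each scale sees the other's context.

```python
# Minimal sketch (not the published MEFormer code) of cross-scale feature
# exchange in the spirit of UniteEDBlock: features from two encoder scales
# are resampled, fused, and redistributed so each scale sees the other's context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleExchange(nn.Module):
    def __init__(self, ch_fine: int, ch_coarse: int):
        super().__init__()
        # 1x1 convolutions project the concatenated features back to each scale's width.
        self.to_fine = nn.Conv2d(ch_fine + ch_coarse, ch_fine, kernel_size=1)
        self.to_coarse = nn.Conv2d(ch_fine + ch_coarse, ch_coarse, kernel_size=1)

    def forward(self, f_fine: torch.Tensor, f_coarse: torch.Tensor):
        # Upsample the coarse map to the fine resolution and fuse by concatenation.
        up = F.interpolate(f_coarse, size=f_fine.shape[-2:], mode="bilinear", align_corners=False)
        fused_fine = self.to_fine(torch.cat([f_fine, up], dim=1)) + f_fine        # residual keeps original detail

        # Downsample the fine map to the coarse resolution and fuse the other way.
        down = F.interpolate(f_fine, size=f_coarse.shape[-2:], mode="bilinear", align_corners=False)
        fused_coarse = self.to_coarse(torch.cat([down, f_coarse], dim=1)) + f_coarse
        return fused_fine, fused_coarse

if __name__ == "__main__":
    block = CrossScaleExchange(ch_fine=32, ch_coarse=64)
    fine = torch.randn(1, 32, 128, 128)    # high-resolution encoder features
    coarse = torch.randn(1, 64, 64, 64)    # lower-resolution encoder features
    out_fine, out_coarse = block(fine, coarse)
    print(out_fine.shape, out_coarse.shape)
```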
2. Related Work
3. Methodology
3.1. MEFormer Model Preliminaries
UniteATBlock
3.2. UniteEDBlock
3.3. MELOL Dataset
- Real-time underground monitoring footage: Obtained through the utilization of advanced monitoring systems, this visual documentation offers a genuine depiction of subterranean environments in conditions of limited luminosity.
- Surface loading systems recordings: The objective of incorporating real-time recordings of surface operations was to establish a contrast between the illumination scenarios present in the subterranean environment and those experienced at the surface.
- GoPro-captured coal loading images: GoPro cameras used during coal loading activities provided dynamic, high-resolution images, enhancing the dataset's applicability to real-world mining tasks.
- Photos taken inside silos: Detailed imagery from within silos introduces an additional dimension of complexity and variability to the dataset, given the distinct lighting and structural characteristics present within these structures.
| Scene Category | Scene Type | Number of Images |
|---|---|---|
| Underground Mining Scenes | Working Face | 310 |
| Underground Mining Scenes | Roadways | 277 |
| Underground Mining Scenes | Chambers | 255 |
| Ground Production System Scenes | Semi-Enclosed Scenes | 375 |
| Ground Production System Scenes | Indoor Scenes | 405 |
- Illumination adjustment: All images were processed in Adobe Lightroom (2020) to create accurately paired low-light and normal-light versions. This involved precise adjustments to illumination levels, ensuring that each pair authentically represents the transition from challenging low-light conditions to optimal visibility (a toy illustration of such a pairing follows this list).
- Quality assurance through expert evaluation: A panel of 20 mining engineering students evaluated the image pairs, assessing their authenticity and quality to ensure that the dataset reliably mirrors real-world mining environments. This review process was crucial in maintaining the dataset's integrity and applicability.
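The MELOL pairs themselves were produced manually in Lightroom; purely to illustrate what a low-light/normal-light pairing looks like programmatically, the snippet below synthesizes a darkened counterpart of a normal-light frame with a simple exposure-and-gamma adjustment. The function name, the 0.25 exposure factor, the gamma value, and the file paths are illustrative assumptions, not the dataset's actual processing pipeline.

```python
# Toy illustration only: synthesizing a low-light counterpart of a normal-light
# image with an exposure + gamma adjustment. The MELOL pairs were produced
# manually in Adobe Lightroom, not with this code.
import numpy as np
from PIL import Image

def synthesize_low_light(img: np.ndarray, exposure: float = 0.25, gamma: float = 2.2) -> np.ndarray:
    """Darken an RGB uint8 image: scale linear intensity, then compress shadows with a gamma curve."""
    x = img.astype(np.float32) / 255.0
    x = np.clip(x * exposure, 0.0, 1.0) ** gamma
    return (x * 255.0).astype(np.uint8)

if __name__ == "__main__":
    normal = np.array(Image.open("normal_light_sample.png").convert("RGB"))  # hypothetical path
    Image.fromarray(synthesize_low_light(normal)).save("low_light_sample.png")
```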
4. Experiments and Analysis
4.1. Implementation Details
4.2. Evaluation Metrics
4.3. Low-Light Image Dataset
- LOL (Low-Light) dataset: This dataset contains both low-light and normal-light images [63]. It is commonly used for training and evaluating low-light image enhancement algorithms. The dataset provides paired images, allowing researchers to perform supervised learning for enhancement tasks (a minimal example of loading such pairs appears after this list).
- DARK FACE dataset: Focused on face detection in low-light conditions, this dataset includes images captured in various low-light environments with different illumination levels [64]. It is designed to benchmark the performance of face detection algorithms under challenging lighting conditions.
- SID (See-in-the-Dark) dataset: This dataset consists of raw short-exposure images taken in extremely low light conditions, along with corresponding long-exposure reference images [65]. It is used for tasks such as raw image denoising and enhancement.
- MIT-Adobe FiveK: This dataset is a widely used collection of high-resolution photographs intended for research in the field of image enhancement, editing, and processing [66]. This dataset consists of 5000 images captured by various photographers using different cameras and under diverse lighting conditions. Each image in the dataset is accompanied by five expert-retouched versions, providing multiple interpretations of the best possible enhancements for each photo.
- LOL-v2 (Low-Light dataset version 2): An updated version of the LOL dataset, LOL-v2 contains more challenging low-light scenarios and more diverse scene types, allowing for more robust training of low-light enhancement algorithms [67].
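Paired datasets such as LOL, SID, and MELOL support supervised training because each low-light frame has a normal-light reference. The sketch below shows one way such pairs could be loaded in PyTorch; the directory layout (root/low and root/high with matching filenames) and the class name PairedLowLightDataset are illustrative assumptions, not an official loader for any of these datasets.

```python
# Illustrative loader for paired low-light / normal-light images (LOL-style layout).
# The directory structure and names below are assumptions, not an official API.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedLowLightDataset(Dataset):
    def __init__(self, root: str):
        self.low_paths = sorted(Path(root, "low").glob("*.png"))  # assumed layout: root/low, root/high
        self.high_dir = Path(root, "high")
        self.to_tensor = transforms.ToTensor()

    def __len__(self) -> int:
        return len(self.low_paths)

    def __getitem__(self, idx: int):
        low_path = self.low_paths[idx]
        high_path = self.high_dir / low_path.name                 # pairs share a filename
        low = self.to_tensor(Image.open(low_path).convert("RGB"))
        high = self.to_tensor(Image.open(high_path).convert("RGB"))
        return low, high

# Usage (hypothetical paths):
# loader = torch.utils.data.DataLoader(PairedLowLightDataset("LOLv1/train"), batch_size=8, shuffle=True)
```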
4.4. Comparison Results on Public Datasets
4.5. Comparison Results on MELOL Datasets
5. Ablation Studies
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Niu, S. Coal mine safety production situation and management strategy. Manag. Eng. 2014, 14, 78–82. [Google Scholar]
- Miao, D.; Lv, Y.; Yu, K.; Liu, L.; Jiang, J. Research on coal mine hidden danger analysis and risk early warning technology based on data mining in China. Process Saf. Environ. Prot. 2023, 171, 1–17. [Google Scholar] [CrossRef]
- Wang, S.; Liu, G.; Song, Z.; Yang, K.; Li, M.; Chen, Y.; Wang, M. Three-Dimensional Deformation Prediction Based on the Improved Segmented Knothe–Dynamic Probabilistic Integral–Interferometric Synthetic Aperture Radar Model. Remote Sens. 2025, 17, 261. [Google Scholar] [CrossRef]
- Tian, X.; Yao, X.; Zhou, Z.; Tao, T. Surface Multi-Hazard Effects of Underground Coal Mining in Mountainous Regions. Remote Sens. 2025, 17, 122. [Google Scholar] [CrossRef]
- Milošević, I.; Stojanović, A.; Nikolić, Đ.; Mihajlović, I.; Brkić, A.; Perišić, M.; Spasojević-Brkić, V. Occupational health and safety performance in a changing mining environment: Identification of critical factors. Process Saf. Environ. Prot. 2025, 184, 106745. [Google Scholar] [CrossRef]
- Zhao, J.; Yu, H.; Dong, H.; Xie, S.; Cheng, Y.; Xia, Z. Analysis on dust prevention law of new barrier strategy in fully mechanized coal mining face. Process Saf. Environ. Prot. 2024, 187, 1527–1539. [Google Scholar] [CrossRef]
- Zhou, G.; Guo, H.; Shao, W.; Liu, Z.; Chen, X.; Chen, J.; Yan, G.; Hu, S.; Zhang, Y.; Sun, B. Spatiotemporal spreading characteristics of dust aerosols by coal mining machine cutting and “partition/multi-level composite field” atomization dust purification technology. Process Saf. Environ. Prot. 2024, 192, 386–400. [Google Scholar] [CrossRef]
- Qiang, X.; Li, G.; Sari, Y.A.; Fan, C.; Hou, J. Development of targeted safety hazard management plans utilizing multidimensional association rule mining. Heliyon 2024, 10, e40676. [Google Scholar] [CrossRef]
- Li, S.; You, M.; Li, D.; Liu, J. Identifying coal mine safety production risk factors by employing text mining and Bayesian network techniques. Process Saf. Environ. Prot. 2022, 162, 1067–1081. [Google Scholar] [CrossRef]
- Cheng, L.; Guo, H.; Lin, H. Evolutionary model of coal mine safety system based on multi-agent modeling. Process Saf. Environ. Prot. 2021, 147, 1193–1200. [Google Scholar]
- Solarz, J.; Gawlik-Kobylińska, M.; Ostant, W.; Maciejewski, P. Trends in energy security education with a focus on renewable and nonrenewable sources. Energies 2022, 15, 1351. [Google Scholar] [CrossRef]
- Wei, C.; Bai, L.; Chen, X.; Han, J. Cross-Modality Data Augmentation for Aerial Object Detection with Representation Learning. Remote Sens. 2024, 16, 4649. [Google Scholar] [CrossRef]
- Wu, D.; Zhang, S. Research on image enhancement algorithm of coal mine dust. In Proceedings of the SNSP, Xi’an, China, 28–31 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 261–265. [Google Scholar]
- Subramani, B.; Veluchamy, M. Fuzzy gray level difference histogram equalization for medical image enhancement. J. Med. Syst. 2020, 44, 103. [Google Scholar]
- Singh, K.; Kapoor, R. Image enhancement using Exposure based Sub Image Histogram Equalization. Pattern Recognit. Lett. 2014, 36, 10–14. [Google Scholar] [CrossRef]
- Vijayalakshmi, D.; Nath, M.K. A novel multilevel framework based contrast enhancement for uniform and non-uniform background images using a suitable histogram equalization. Digit. Signal Process. 2022, 127, 103532. [Google Scholar]
- Rao, B.S. Dynamic Histogram Equalization for contrast enhancement for digital images. Appl. Soft Comput. 2020, 89, 106114. [Google Scholar] [CrossRef]
- She, D. Retinex Based Visual Image Enhancement Algorithm for Coal Mine Exploration Robots. Informatica 2024, 48, 133–146. [Google Scholar] [CrossRef]
- Wu, C.; Wang, D.; Huang, K.; Wu, L. Enhancement of Mine Images through Reflectance Estimation of V Channel Using Retinex Theory. Processes 2024, 12, 1067. [Google Scholar] [CrossRef]
- Du, Y.; Tong, M.; Zhou, L.; Dong, H. Edge detection based on Retinex theory and wavelet multiscale product for mine images. Appl. Opt. 2016, 55, 9625–9637. [Google Scholar] [CrossRef]
- Shang, D.; Yang, Z.; Zhang, X.; Zheng, L.; Lv, Z. Research on low illumination coal gangue image enhancement based on improved Retinex algorithm. Int. J. Coal Prep. Util. 2023, 43, 999–1015. [Google Scholar]
- Wang, Z.; Hu, G.; Zhao, S.; Wang, R.; Kang, H.; Luo, F. Local Pyramid Vision Transformer: Millimeter-Wave Radar Gesture Recognition Based on Transformer with Integrated Local and Global Awareness. Remote Sens. 2024, 16, 4602. [Google Scholar] [CrossRef]
- Sngeorzan, D.D.; Păcurar, F.; Reif, A.; Weinacker, H.; Rușdea, E.; Vaida, I.; Rotar, I. Detection and Quantification of Arnica montana L. Inflorescences Grassland Ecosystems Using Convolutional Neural Networks Drone-Based Remote Sensing. Remote Sens. 2024, 16, 2012. [Google Scholar] [CrossRef]
- Li, J.; Chen, C.; Han, Y.; Chen, T.; Xue, X.; Liu, H.; Zhang, S.; Yang, J.; Sun, D. Wind Profile Reconstruction Based on Convolutional Neural Network for Incoherent Doppler Wind LiDAR. Remote Sens. 2024, 16, 1473. [Google Scholar] [CrossRef]
- Liang, Z.; Long, H.; Zhu, Z.; Cao, Z.; Yi, J.; Ma, Y.; Liu, E.; Zhao, R. High-Precision Disparity Estimation for Lunar Scene Using Optimized Census Transform and Superpixel Refinement. Remote Sens. 2024, 16, 3930. [Google Scholar] [CrossRef]
- Nan, Z.; Gong, Y. An Image Enhancement Method in Coal Mine Underground Based on Deep Retinex Network and Fusion Strategy. In Proceedings of the ICIVC, Qingdao, China, 23–25 July 2021; pp. 209–214. [Google Scholar]
- Zhou, W.; Li, L.; Liu, B.; Cao, Y.; Ni, W. A Multi-Tiered Collaborative Network for Optical Remote Sensing Fine-Grained Ship Detection in Foggy Conditions. Remote Sens. 2024, 16, 3968. [Google Scholar] [CrossRef]
- Cao, T.; Peng, T.; Wang, H.; Zhu, X.; Guo, J.; Zhang, Z. Multi-scale adaptive low-light image enhancement based on deep learning. J. Electron. Imaging 2024, 33, 043033. [Google Scholar]
- Li, N.; Gao, S.; Xue, J.; Zhang, Y. Downhole Image Enhancement Algorithm Based on Improved CycleGAN. In Proceedings of the CVIDL, Zhuhai, China, 19–21 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 216–220. [Google Scholar]
- Feng, Y.; Hou, S.; Lin, H.; Zhu, Y.; Wu, P.; Dong, W.; Sun, J.; Yan, Q.; Zhang, Y. DiffLight: Integrating Content and Detail for Low-light Image Enhancement. In Proceedings of the CVPR, Seattle, WA, USA, 17–21 June 2024; pp. 6143–6152. [Google Scholar]
- Onifade, M.; Said, K.O.; Shivute, A.P. Safe mining operations through technological advancement. Process Saf. Environ. Prot. 2023, 175, 251–258. [Google Scholar] [CrossRef]
- Tingjiang, T.; Changfang, G.; Guohua, Z.; Wenhua, J. Research and application of downhole drilling depth based on computer vision technique. Process Saf. Environ. Prot. 2023, 174, 531–547. [Google Scholar] [CrossRef]
- Gonzalez-Cortes, A.; Burlet-Vienney, D.; Chinniah, Y. Inherently safer design: An accident prevention perspective on reported confined space fatalities in Quebec. Process Saf. Environ. Prot. 2021, 149, 794–816. [Google Scholar] [CrossRef]
- Xu, P.; Zhou, Z.; Geng, Z. Safety monitoring method of moving target in underground coal mine based on computer vision processing. Sci. Rep. 2022, 12, 17899. [Google Scholar] [CrossRef]
- Hanif, M.W.; Li, Z.; Yu, Z.; Bashir, R. A lightweight object detection approach based on edge computing for mining industry. IET Image Process. 2024, 18, 4005–4022. [Google Scholar] [CrossRef]
- Dhal, K.G.; Das, A.; Ray, S.; Gálvez, J.; Das, S. Histogram equalization variants as optimization problems: A review. Arch. Comput. Methods Eng. 2021, 28, 1471–1496. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Garg, P.; Jain, T. A comparative study on histogram equalization and cumulative histogram equalization. Int. J. New Technol. Res. 2017, 3, 263242. [Google Scholar]
- Hussein, R.R.; Hamodi, Y.I.; Rooa, A.S. Retinex theory for color image enhancement: A systematic review. Int. J. Electr. Comput. Eng. 2019, 9, 5560. [Google Scholar] [CrossRef]
- Jiang, K.; Wang, Q.; An, Z.; Wang, Z.; Zhang, C.; Lin, C.W. Mutual Retinex: Combining Transformer and CNN for image enhancement. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2240–2252. [Google Scholar]
- Cao, X.; Yu, J. LLE-NET: A Low-Light Image Enhancement Algorithm Based on Curve Estimation. Mathematics 2024, 12, 1228. [Google Scholar] [CrossRef]
- Tian, Z.; Wu, J.; Zhang, W.; Chen, W.; Zhou, T.; Yang, W.; Wang, S. An illuminance improvement and details enhancement method on coal mine low-light images based on Transformer and adaptive feature fusion. Int. J. Coal Sci. Technol. 2024, 52, 297–310. [Google Scholar]
- Tolstikhin, I.O.; Houlsby, N.; Kolesnikov, A.; Beyer, L.; Zhai, X.; Unterthiner, T.; Yung, J.; Steiner, A.; Keysers, D.; Uszkoreit, J.; et al. Mlp-mixer: An all-mlp architecture for vision. Adv. Neural Inform. Process. Syst. 2021, 34, 24261–24272. [Google Scholar]
- Bolya, D.; Fu, C.Y.; Dai, X.; Zhang, P.; Feichtenhofer, C.; Hoffman, J. Token merging: Your vit but faster. arXiv 2022, arXiv:2210.09461. [Google Scholar]
- Yi, X.; Xu, H.; Zhang, H.; Tang, L.; Ma, J. Text-IF: Leveraging Semantic Text Guidance for Degradation-Aware and Interactive Image Fusion. In Proceedings of the CVPR, Seattle, WA, USA, 17–21 June 2024; pp. 27026–27035. [Google Scholar]
- Yang, W.; Wang, S.; Wu, J.; Chen, W.; Tian, Z. A low-light image enhancement method for personnel safety monitoring in underground coal mines. Complex Intell. Syst. 2024, 10, 4019–4032. [Google Scholar]
- Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; Lu, T. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 2654–2662. [Google Scholar]
- Wang, W.; Yan, D.; Wu, X.; He, W.; Chen, Z.; Yuan, X.; Li, L. Low-light image enhancement based on virtual exposure. Signal Process. Image Commun. 2023, 118, 117016. [Google Scholar] [CrossRef]
- Mythili, R.; bama, B.S.; Kumar, P.S.; Das, S.; Thatikonda, R.; Inthiyaz, S. Radial basis function networks with lightweight multiscale fusion strategy-based underwater image enhancement. Expert Syst. 2025, 42, e13373. [Google Scholar] [CrossRef]
- Ren, P.; Jia, Q.; Xu, Q.; Li, Y.; Bi, F.; Xu, J.; Gao, S. Oil Spill Drift Prediction Enhanced by Correcting Numerically Forecasted Sea Surface Dynamic Fields With Adversarial Temporal Convolutional Networks. IEEE Trans. Geosci. Remote Sens. 2025, 63, 4701018. [Google Scholar]
- Yang, J.; Wu, C.; Du, B.; Zhang, L. Enhanced multiscale feature fusion network for HSI classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10328–10347. [Google Scholar]
- Zhang, Z.; Li, C. Front Matter: Volume 13091. In Proceedings of the Fifteenth International Conference on Signal Processing Systems (ICSPS 2023), Xi’an, China, 17–19 November 2023; 13091, pp. 1309101-1. [Google Scholar]
- Xu, K.; Chen, H.; Tan, X.; Chen, Y.; Jin, Y.; Kan, Y.; Zhu, C. HFMNet: Hierarchical feature mining network for low-light image enhancement. IEEE Trans. Instrum. Meas. 2022, 71, 5014014. [Google Scholar]
- Wu, C.; Wang, D.; Huang, K. Enhancement of Mine Images Based on HSV Color Space. IEEE Access 2024, 12, 72170–72186. [Google Scholar]
- Zhang, W.; Zuo, D.; Wang, C.; Sun, B. Research on image enhancement algorithm for the monitoring system in coal mine hoist. Meas. Control 2023, 56, 1572–1581. [Google Scholar]
- Qiao, J.; Wang, X.; Chen, J.; Jian, M. Low-light image enhancement with an anti-attention block-based generative adversarial network. Electronics 2022, 11, 1627. [Google Scholar] [CrossRef]
- Peng, B.; Zhang, X.; Lei, J.; Zhang, Z.; Ling, N.; Huang, Q. LVE-S2D: Low-Light Video Enhancement From Static to Dynamic. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 8342–8352. [Google Scholar] [CrossRef]
- Si, L.; Wang, Z.; Xu, R.; Tan, C.; Liu, X.; Xu, J. Image enhancement for surveillance video of coal mining face based on single-scale retinex algorithm combined with bilateral filtering. Symmetry 2017, 9, 93. [Google Scholar] [CrossRef]
- Huo, G.; Wu, J.; Ding, H. A two-stage image enhancement network for complex underground coal mine environment. In Proceedings of the ICDIP, Haikou, China, 24–26 May 2024; SPIE: Cergy-Pontoise, France, 2024; Volume 13274, pp. 436–445. [Google Scholar]
- Li, C.; Zheng, T.; Li, S.; Yu, C.; Gong, Y. Multi-Scale Enhancement and Sharpening Method for Visible Light Images in Underground Coal Mines. In Proceedings of the ICIVC, Dalian, China, 27–29 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 761–770. [Google Scholar]
- Sun, L.; Chen, S.; Yao, X.; Zhang, Y.; Tao, Z.; Liang, P. Image enhancement methods and applications for target recognition in intelligent mine monitoring. J. China Coal Soc. 2024, 49, 495–504. [Google Scholar]
- Yang, W.; Zhang, X.; Ma, B.; Wang, Y.; Wu, Y.; Yan, J.; Liu, Y.; Zhang, C.; Wan, J.; Wang, Y.; et al. An open dataset for intelligent recognition and classification of abnormal condition in longwall mining. Sci. Data 2023, 10, 416. [Google Scholar]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
- Schmid, C.; Soatto, S.; Tomasi, C. Conference on Computer Vision and Pattern Recognition; IEEE Computer Society: Piscataway, NJ, USA, 2005. [Google Scholar]
- Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to see in the dark. In Proceedings of the CVPR, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3291–3300. [Google Scholar]
- Bychkovsky, V.; Paris, S.; Chan, E.; Durand, F. Learning photographic global tonal adjustment with a database of input/output image pairs. In Proceedings of the CVPR, Providence, RI, USA, 20–25 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 97–104. [Google Scholar]
- Afifi, M.; Derpanis, K.G.; Ommer, B.; Brown, M.S. Learning multi-scale photo exposure correction. In Proceedings of the CVPR, Nashville, TN, USA, 20–25 June 2021; pp. 9157–9167. [Google Scholar]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [PubMed]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595. [Google Scholar]
- Neukirch, K. Moldava. In Elections in Europe; Nomos Verlagsgesellschaft mbH & Co. KG: Baden-Baden, Germany, 2010; pp. 1313–1348. [Google Scholar]
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the CVPR, New Orleans, LA, USA, 18–24 June 2022; pp. 5901–5910. [Google Scholar]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Learning enriched features for fast image restoration and enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 1934–1948. [Google Scholar]
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 12504–12513. [Google Scholar]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789. [Google Scholar]
| Method | PSNR↑ (LOL-v1) | SSIM↑ (LOL-v1) | BRIS↑ [68] (LOL-v1) | LPIPS↓ [69] (LOL-v1) | PSNR↑ (FiveK) | SSIM↑ (FiveK) | BRIS↑ (FiveK) | LPIPS↓ (FiveK) |
|---|---|---|---|---|---|---|---|---|
| Ground Truth | \ | \ | 2.12 | \ | \ | \ | 2.11 | \ |
| PSMUCM [70] | 18.75 | 0.78 | 0.53 | | 18.26 | 0.58 | 0.61 | |
| Retinex-Net [63] | 16.77 | 0.43 | 0.48 | 0.47 | 17.63 | 0.73 | 0.56 | 0.25 |
| URetinex-Net [71] | 20.04 | 0.83 | 0.48 | 0.42 | 18.32 | 0.77 | 0.50 | 0.23 |
| MIRNetv2 [72] | 25.04 | 0.85 | 0.51 | 0.26 | 24.84 | 0.88 | 0.57 | 0.27 |
| Retinexformer [73] | 25.16 | 0.85 | 0.50 | 0.18 | 24.52 | 0.89 | 0.58 | 0.20 |
| EnlightenGAN [74] | 17.48 | 0.65 | 0.52 | 0.32 | 17.91 | 0.84 | 0.59 | 0.14 |
| Zero-DCE [75] | 14.86 | 0.56 | 0.49 | 0.34 | 15.93 | 0.77 | 0.49 | 0.16 |
| LLFormer [47] | 23.65 | 0.82 | 2.11 | 0.17 | 25.75 | 0.92 | 1.87 | 0.04 |
| Ours | 25.75 | 0.89 | 2.12 | 0.15 | 26.47 | 0.94 | 2.06 | 0.07 |
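The PSNR, SSIM, and LPIPS columns above are full-reference scores computed against each ground-truth image. As a point of reference, the paired metrics PSNR and SSIM could be computed as in the sketch below, which uses scikit-image; the file names are placeholders, and LPIPS and BRISQUE would require their own packages.

```python
# Sketch of computing the paired full-reference metrics reported above
# (PSNR and SSIM) with scikit-image; file names are placeholders.
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

enhanced = np.array(Image.open("enhanced.png").convert("RGB"))
reference = np.array(Image.open("ground_truth.png").convert("RGB"))

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```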
| Method | PSNR↑ (MELOL) | SSIM↑ (MELOL) | BRIS↑ (MELOL) | LPIPS↓ (MELOL) | Params (M) | FLOPS (G) | Inference Time (s; 10k images) |
|---|---|---|---|---|---|---|---|
| MIRNetv2 | 22.84 | 0.86 | 1.31 | 0.23 | 5.0 | 34.88 | 12.7 |
| Retinexformer | 26.16 | 0.89 | 1.48 | 0.13 | 1.61 | 3.93 | 3.8 |
| LLFormer | 25.72 | 0.86 | 1.47 | 0.17 | 24.52 | 3.46 | 2.7 |
| Ours | 26.34 | 0.91 | 1.62 | 0.12 | 29.34 | 3.56 | 2.8 |
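The Params, FLOPS, and inference-time columns could be reproduced for any PyTorch model along the lines of the sketch below. The use of fvcore's FlopCountAnalysis is our choice of counter (not necessarily the authors' tooling), and the model handle, input size, and run count are illustrative assumptions.

```python
# Sketch of measuring efficiency numbers like those reported above for a PyTorch
# model: parameter count, FLOPs (via fvcore, one possible counter), and average
# CPU inference time. The model, input size, and run count are placeholders.
import time
import torch
from fvcore.nn import FlopCountAnalysis

def profile(model: torch.nn.Module, input_size=(1, 3, 256, 256), runs: int = 100):
    model.eval()
    x = torch.randn(*input_size)

    params_m = sum(p.numel() for p in model.parameters()) / 1e6   # millions of parameters
    flops_g = FlopCountAnalysis(model, x).total() / 1e9           # GFLOPs for one forward pass

    with torch.no_grad():
        model(x)                                                  # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        avg_s = (time.perf_counter() - start) / runs              # seconds per forward pass

    return params_m, flops_g, avg_s

# Usage (hypothetical): params_m, flops_g, avg_s = profile(my_enhancement_model)
```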
| Structure | Params (M) | FLOPS (G) | SSIM | PSNR |
|---|---|---|---|---|
| Base (LLFormer) | 24.52 | 3.46 | 0.82 | 23.65 |
| LLFormer + UniteChNet | 29.34 | 3.56 | 0.89 | 25.75 |
| Model | AT | CUT | UniteChNet | Params (M) | FLOPS (G) | SSIM | PSNR |
|---|---|---|---|---|---|---|---|
| Base | | | | 13.87 | 1.83 | 0.80 | 20.85 |
| a | ✓ | | | 19.78 | 2.50 | 0.81 | 23.07 |
| b | ✓ | ✓ | | 24.52 | 3.46 | 0.82 | 23.65 |
| c | ✓ | ✓ | ✓ | 29.34 | 3.56 | 0.89 | 25.75 |