Article

Joint Low-Light Image Enhancement and Denoising via a New Retinex-Based Decomposition Model

1 School of Computer Science and Technology, Henan Institute of Science and Technology, Xinxiang 453003, China
2 School of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China
3 School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
4 School of Mathematics and Information Sciences, Zhongyuan University of Technology, Zhengzhou 451191, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(18), 3834; https://doi.org/10.3390/math11183834
Submission received: 7 August 2023 / Revised: 1 September 2023 / Accepted: 5 September 2023 / Published: 7 September 2023

Abstract

It is well known that images taken in low-light conditions frequently suffer from unknown noise and low visibility, which pose challenges for image enhancement. The majority of Retinex-based decomposition algorithms attempt to directly design prior regularization for the illumination or reflectance component; however, noise can interfere with such schemes. To address these issues, a new Retinex-based decomposition model for simultaneous enhancement and denoising is developed. In this paper, an extended decomposition scheme is introduced to extract the illumination and reflectance components, which helps to better describe the prior information on illumination and reflectance. Subsequently, spatially adaptive weights are designed for the two regularization terms. The main motivation is to apply a small amount of smoothing near edges or in bright areas and stronger smoothing in dark areas, which preserves useful information and removes noise effectively during image enhancement. Finally, the proposed algorithm is validated on several common datasets: LIME, LOL, and NPE. Extensive experiments show that the presented method is superior to state-of-the-art methods in both objective index comparisons and visual quality.

1. Introduction

Low-light image enhancement is a critical task in the field of computer vision [1,2]. Low light refers to scenes in which the brightness of the subject is so weak that the signal level of the camera output falls far below a certain threshold. The gray intensity of a low-light image is small overall, and the detail component is weakened. As a result, images taken under low-light conditions exhibit unknown noise and low visibility, which corrupt the image content and pose a challenge for image enhancement. Since images captured in low-light conditions are usually noisy and lack detail, these degradations hamper a variety of essential computer vision tasks. Although the problem has been studied for many years, it remains a challenge to acquire high-quality enhanced images with noise effectively removed [3,4,5].
In order to solve these problems, lots of methods have been developed, such as the joint filtering-based linear representation model [6], Retinex-based methods [7,8,9,10,11,12] as well as deep learning methods [13,14,15]. Joint filtering [6] is designed to transfer significant structural details from the input image and the guidance image to the target image and has been found to perform well in various applications.
“Retinex” is a combination of two words: retina and cortex. The Retinex theory was proposed by Land and McCann in 1971 [7,8,9]. It shows impressive agreement with the color perception of the human visual system (HVS) and is built on the idea that an observed image $I(x,y)$ can be decomposed into the product of two components, namely, the reflectance and illumination components [10,11]:
$$ I(x,y) = L(x,y) \cdot R(x,y), \tag{1} $$
where $L(x,y)$ refers to the illumination component, which represents the intensity and distribution of environmental illumination and is usually presumed to be piece-wise smooth, while $R(x,y)$ is the reflectance component, which primarily contains the critical details and color information of the original image. In addition, the multiplication can be converted to an addition by applying a logarithmic transformation to Equation (1), so that edge-aware smoothing together with traditional image decomposition methods can be exploited to yield $L(x,y)$ and $R(x,y)$ [12].
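For illustration only (this is not the authors' code), the multiplicative model and its log-domain additive form can be sketched in a few lines of NumPy; a naive box blur stands in here for a proper edge-aware, piecewise smooth illumination prior:

```python
import numpy as np

def box_blur(u, r):
    """Naive box blur of radius r with replicate padding."""
    pad = np.pad(u, r, mode="edge")
    out = np.zeros_like(u)
    h, w = u.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy : r + dy + h, r + dx : r + dx + w]
    return out / (2 * r + 1) ** 2

def log_retinex_split(image, radius=2, eps=1e-6):
    """Toy Retinex split: I = L * R becomes log I = log L + log R,
    so a smooth estimate of log L yields log R by subtraction."""
    log_i = np.log(image + eps)          # additive (log) domain
    log_l = box_blur(log_i, radius)      # smooth illumination estimate
    log_r = log_i - log_l                # reflectance residual
    return log_l, log_r
```

By construction, exponentiating $\log L + \log R$ recovers the input exactly; the quality of the split depends entirely on the smoother used for the illumination estimate.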
In recent years, methods based on deep learning have achieved remarkable results, greatly promoting the development of image enhancement [14,16,17,18,19]. For instance, the authors of [16] designed a method for deep curve estimation and image enhancement. They defined a lightweight deep network that estimates higher-order curves for pixel-level adjustment of the observed image's dynamic range, resulting in enhanced images. The authors in [17] learned a mapping between weakly illuminated images and their illumination components, which was utilized to acquire enhanced results. The authors of [18] suggested a pyramid structure for learning across multiple levels, composed of a luminance-aware refinement branch and two coarse feature extraction branches. The authors in [14] presented an adaptive unfolding total variation network for low-light noisy image enhancement. By contrast, the authors of [19] presented a lightweight enhancement network by injecting low-light image knowledge and searching for lightweight prior architectures. However, these learning-based approaches do not generalize well to unevenly and poorly lit cases. This is mainly due to the difficulty of capturing images of the same visual scene under normal light and low light at the same time; as a result, few common datasets are available. Furthermore, the parameters of learned models are often immutable once trained and cannot generalize well to variable real-world scenarios, which limits the practical utility of these methods.
In the Retinex-based decomposition framework, once the image is decomposed, the illumination and reflectance components can be operated on in different ways for each application [12]. Targeting image contrast enhancement, Guo et al. [8] modified the Retinex model to augment the contrast of an image by constructing a light map, which enables the enhancement of images under low-light conditions. They imposed the $L_1$-norm on the illumination component and the Frobenius norm on the difference between the maximum intensity map and the refined illumination map for all channels. Park et al. [20] presented a low-light image enhancement method with the $L_2$-norm on the illumination component and the $L_1$-norm on the reflectance component. In addition, some works append weights to the regularization term of the illumination or reflectance to preserve image edges. For instance, Jia et al. [12] designed a new decomposition mode to extract the illumination and reflectance components and set a uniform weight over the RGB color channels on the gradient of the illumination component to preserve significant edges.
Given that low-light images may contain complex noise, which can affect the quality of enhancement, numerous joint methods have been developed [21,22,23,24,25,26]. For instance, the authors in [21] developed a structure-revealing image enhancement model that simultaneously estimates both reflectance and illumination. Ren et al. [22] presented a robust low-illumination enhancement approach through a low-rank regularized Retinex model; they injected a low-rank prior to regularize the noise-suppressed reflectance estimate. Kurihara et al. [23] presented a joint optimization formulation that takes into account both illumination and reflectance characteristics; they adopted $L_2$–$L_p$ norm regularization, with the reflectance expected to retain textures and details and the estimated illumination expected to retain structural information. Chien et al. [24] presented a new image contrast enhancement method on the basis of Retinex decomposition and a noise-aware shadow-up function.
Although the majority of existing methods can improve image contrast and brightness, insufficient illumination or poor imaging conditions may result in noise and artifacts in dark areas. To solve these problems, an extended decomposition model is introduced, tailored to simultaneous enhancement and denoising. Specifically, an input image is decomposed into three parts: an illumination layer, a reflectance layer, and a color layer. The illumination layer denotes the luminance component that the RGB channels share; the reflectance component complements the illumination layer with shared texture details; and the color layer denotes the distinct color information of the RGB channels. Based on the above decomposition scheme, spatially adaptive weights are designed and injected into the illumination and reflectance regularization terms. This allows for a small amount of smoothing near edges or in bright areas and stronger smoothing in dark areas, resulting in effective noise removal along with enhancement. The main contributions of this paper can be summarized as follows.
  • An extended decomposition scheme was introduced to extract the illumination and reflectance components from the observed image, which contributed to a better description of the prior regularization of illumination and reflectance.
  • A spatially adaptive weight was proposed for the illumination and the reflectance regularization, which retained useful details and effectively removed the noise in the image-enhancement process.
  • Evaluations on several popular low-light datasets demonstrate improved performance in low-illumination conditions compared with other competing methods.
The remainder of this paper is organized as follows. Section 2 briefly reviews the relevant methods. Section 3 introduces and analyzes the presented model and describes the numerical algorithm. Section 4 describes the implementation details and experimental results, and Section 5 provides the conclusions of the paper.

2. Related Work

2.1. Retinex-Based Methods

Among several approaches for image enhancement, Retinex-based algorithms have become very popular. With the introduction of appropriate prior regularization functionals, image decomposition techniques based on Retinex can handle both the reflectance and the illumination problems. In [27], a total variation-based decomposition model was proposed. Specifically, they addressed the following optimization problem:
$$ \min_{L,R}\; \frac{\beta}{2}\left\| I - L - R \right\|_F^2 + \frac{\mu}{2}\left\| \nabla L \right\|_F^2 + \alpha \left\| L \right\|_2^2 + \left\| \nabla R \right\|_1, \tag{2} $$
where the TV semi-norm on $R$ and the $L_2$-norm gradient regularization on $L$ lead to a piecewise constant reflectance and a piecewise smooth illumination, respectively. Variants of the model in Equation (2) can be found in [7,28,29,30]. In particular, ref. [28] presented a weighted variational model to mitigate the impact of the logarithmic transformation on the image. By contrast, the authors of [7] developed a novel texture-aware and structure-aware method to accurately estimate reflectance and illumination.
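As a concrete reading of the objective, the energy in Equation (2) can be evaluated numerically. The sketch below is a hypothetical helper (not code from [27]), using forward differences for the gradient and log-domain images:

```python
import numpy as np

def grad(u):
    """Forward differences (zero at the last column/row)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def tv_retinex_energy(I, L, R, alpha=0.001, beta=1.0, mu=0.1):
    """Evaluate the objective of Eq. (2) for log-domain images I, L, R."""
    gxL, gyL = grad(L)
    gxR, gyR = grad(R)
    fidelity = 0.5 * beta * np.sum((I - L - R) ** 2)
    smooth_L = 0.5 * mu * np.sum(gxL ** 2 + gyL ** 2)
    norm_L = alpha * np.sum(L ** 2)
    tv_R = np.sum(np.hypot(gxR, gyR))        # isotropic TV semi-norm
    return fidelity + smooth_L + norm_L + tv_R
```

A minimizer trades off data fidelity against a smooth, small illumination and a piecewise constant reflectance, exactly as the text describes.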

2.2. Extended Decomposition Method

In order to further extract illumination and reflectance effectively, the authors in [12] devised a variation-based decomposition model. Specifically, the decomposition model can be described as
$$ \mathbf{I}(x,y) = L(x,y)\,[1,1,1]^{\top} + C(x,y) + R(x,y)\,[1,1,1]^{\top}, \tag{3} $$
where $\mathbf{I}(x,y) = [\log I_r(x,y), \log I_g(x,y), \log I_b(x,y)]^{\top}$ is the logarithmic transformation of the observed image $[I_r, I_g, I_b]^{\top}$, $L$ and $R$ are the shared illumination and reflectance layers of the observed image, and $C(x,y) = [C_r(x,y), C_g(x,y), C_b(x,y)]^{\top}$ is referred to as the color layer, representing the different color information of the three channels. Based on the above decomposition mode, they developed a variational decomposition algorithm by adding a simple TV semi-norm regularization on the $R$ component.
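To make the three-layer split concrete, a minimal sketch (helper names hypothetical) forms the color layer as the per-channel residual once the shared $L$ and $R$ are given, and reassembles Equation (3) exactly:

```python
import numpy as np

def color_layer(I_log, L, R):
    """Given log-channels I_log of shape (3, H, W) and shared layers
    L, R of shape (H, W), the color layer is the per-channel residual
    of Eq. (3): C_c = I_c - L - R."""
    return I_log - L[None] - R[None]

def recompose(L, R, C):
    """Rebuild the log-domain channels: I_c = L + C_c + R."""
    return L[None] + C + R[None]
```

The shared layers broadcast across the three channels, so the decomposition and recomposition are exact inverses of each other.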

2.3. Joint Enhancement and Denoising

To improve the visibility of dark images while removing noise, the authors in [11] presented a weighted regularization on the fractional derivative of the reflectance component to recover more image details while enhancing the image. The authors in [3] presented an approach that uses domain-specific knowledge together with hybrid image enhancement techniques to provide resultant images with more detail and lower noise levels. Alternatively, the authors in [24] proposed an image contrast enhancement method based on both a noise-aware shadow-up function and Retinex decomposition, which not only enhances the contrast of dark regions but also avoids amplifying noise, even under strong noise.

3. Proposed Model and Algorithm

3.1. Model Formulation

This paper introduces an extended decomposition model and proposes a spatially adaptive weighting method by adding weighted prior regularization on the illumination and reflectance components, respectively. On the one hand, the decomposition scheme helps to effectively depict the reflectance and illumination; on the other hand, the new regularization scheme is beneficial for denoising during image enhancement. Specifically, the proposed model is formulated as
$$ \min_{\tilde{L}, L, R}\; H(\tilde{L}, L, R) + \alpha\, \Phi(\tilde{L}, W_L) + \beta\, \Psi(R, W_R), \tag{4} $$
where $H(\tilde{L}, L, R)$ is the fidelity term, which is defined as
$$ H(\tilde{L}, L, R) = \sum_{c \in \{r,g,b\}} \frac{1}{2}\left\| I_c - \tilde{L}_c - R \right\|_F^2 + \frac{1}{2}\left\| L - \tilde{L}_c \right\|_F^2. \tag{5} $$
The first term in Equation (5) measures the error between $I_c$ and $\tilde{L}_c + R$ in each channel, where the input image is $I = [I_r, I_g, I_b]^{\top}$; $\tilde{L} = [\tilde{L}_r, \tilde{L}_g, \tilde{L}_b]^{\top}$ is an intermediate variable, whose significance lies in the possibility of applying a piecewise smooth prior; and $R$ approximates each residual channel of $I - \tilde{L}$. The second term is expected to pull $L$ toward the average value of $\tilde{L}_c$ over all $c \in \{r,g,b\}$.
The function $\Phi(\tilde{L}, W_L)$ in Equation (4) is the edge-aware piecewise smooth prior, which is formulated as
$$ \Phi(\tilde{L}, W_L) = \left\| W_L \circ \nabla \tilde{L} \right\|_F^2. \tag{6} $$
The term $\Psi(R, W_R)$ refers to the weighted prior regularization on reflectance, which aims to effectively remove noise and restore higher-quality images. It is set as
$$ \Psi(R, W_R) = \left\| W_R \circ \nabla R \right\|_1. \tag{7} $$
It can be noted that the operator “$\circ$” in Equations (6) and (7) refers to the point-wise product.

3.2. Weight Setting

In order to preserve important edges, regions with larger gradients should be allocated smaller weights. Therefore, the weight $W_L$ in Equation (6) is set as an edge-aware smoothing map
$$ W_L(x,y) = \frac{1}{\left| \max_{c \in \{r,g,b\}} \nabla \tilde{L}_c(x,y) \right|^{\gamma_L} + \varepsilon}, \tag{8} $$
where the weight is set according to the largest gradient among the color channels; this is because an edge with a large gradient in any of the three color channels should be maintained during the smoothing process.
Similarly, the weight $W_R$ in Equation (7) is expected to retain image information while removing noise. In dark regions, which are prone to corruption by strong noise, stronger smoothing is desired to suppress noise, while in bright regions or near edges, little smoothing should be applied. To this end, $W_R$ is designed as an adaptive texture map
$$ W_R(x,y) = \frac{1}{\left| \max_{c \in \{r,g,b\}} I_c(x,y)\, \nabla R(x,y) \right|^{\gamma_R} + \varepsilon}, \tag{9} $$
where $I_c$ represents the component of the image $I$ in the red, green, or blue channel.
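The edge-aware weight of Equation (8) can be sketched as follows (hypothetical helper names; forward differences stand in for $\nabla$):

```python
import numpy as np

def grad_mag(u):
    """Gradient magnitude via forward differences (zero at the border)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return np.hypot(gx, gy)

def edge_aware_weight(L_tilde, gamma_L=1.2, eps=1e-2):
    """W_L of Eq. (8): L_tilde has shape (3, H, W); the weight is small
    wherever any channel carries a strong edge, so that edge is kept."""
    g = np.max([grad_mag(L_tilde[c]) for c in range(3)], axis=0)
    return 1.0 / (g ** gamma_L + eps)
```

In a flat region $g \approx 0$, so $W_L \approx 1/\varepsilon$ and the smoothing is strong; across an edge $g$ is large, and the weight, hence the smoothing, shrinks.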
The proposed model has the following advantages. First, the extended decomposition model helps to better describe the prior regularization of the illumination and reflectance components. Second, the spatially adaptive weighting on the illumination regularization facilitates a small amount of smoothing near edges or in bright areas and stronger smoothing in dark areas. Moreover, the spatially adaptive weighting on the reflectance regularization produces stronger noise reduction in dark regions and weaker noise reduction in bright regions. Compared with several methods in [3,7,8,9,11,12,24], the spatially adaptive regularization scheme can preserve useful information and remove noise effectively during image-enhancement processing.

3.3. Numerical Algorithm

In this subsection, the numerical procedure for solving the minimization problem (4) is discussed. It is easy to see that the optimal value of the variable $L$ in Equation (4) is the average of $\tilde{L}_c$ over $c \in \{r,g,b\}$; therefore, there are essentially only two variables, $\tilde{L}$ and $R$, that need to be addressed. They can be solved via an alternating update scheme, leading to two subproblems, as follows.
  • Subproblem 1: Updating L ˜ while fixing R .
In the $k$-th iteration, the minimization problem in (4) with respect to $\tilde{L}$ is
$$ \tilde{L}^{k+1} = \arg\min_{\tilde{L}}\; H(\tilde{L}, L^k, R^k) + \alpha\, \Phi(\tilde{L}, W_L), \tag{10} $$
where the functional $H(\tilde{L}, L^k, R^k)$ can be reformulated as
$$ H(\tilde{L}, L^k, R^k) = \sum_{c \in \{r,g,b\}} \frac{1}{2}\left\| I_c - \tilde{L}_c - R^k \right\|_F^2 + \frac{1}{2}\left\| L^k - \tilde{L}_c \right\|_F^2 = \sum_{c \in \{r,g,b\}} \left\| \tilde{L}_c - \frac{I_c + L^k - R^k}{2} \right\|_F^2 + \frac{1}{4}\left\| I_c - L^k - R^k \right\|_F^2. \tag{11} $$
Denoting $X_c^k = \frac{I_c + L^k - R^k}{2}$ and replacing $\Phi(\tilde{L}, W_L)$ with (6), the minimization problem in (10) can be rewritten as
$$ \tilde{L}^{k+1} = \arg\min_{\tilde{L}} \sum_{c \in \{r,g,b\}} \left\| \tilde{L}_c - X_c^k \right\|_F^2 + \alpha \left\| W_L \circ \nabla \tilde{L} \right\|_F^2. $$
It has the closed-form solution
$$ \tilde{L}_c^{k+1} = \left( \mathbf{1} + \alpha \left[ (W_{Lx} D_x)^{\top} (W_{Lx} D_x) + (W_{Ly} D_y)^{\top} (W_{Ly} D_y) \right] \right)^{-1} X_c^k, \quad c \in \{r,g,b\}, \tag{12} $$
where $D_x$ and $D_y$ are the horizontal and vertical components of the gradient operator $\nabla$, respectively, and $W_{Lx}$ and $W_{Ly}$ are diagonal matrices built from the vectorized form of $W_L$ along the horizontal and vertical directions.
After acquiring $\tilde{L}^{k+1}$ from the solution of Equation (12), $L^{k+1}$ can be updated immediately through the closed-form expression
$$ L^{k+1} = \frac{1}{3} \sum_{c \in \{r,g,b\}} \tilde{L}_c^{k+1}. \tag{13} $$
  • Subproblem 2: Updating R while fixing L ˜ .
After acquiring $\tilde{L}^{k+1}$, the optimization problem in (4) with respect to $R$ can be expressed as
$$ R^{k+1} = \arg\min_R\; H(\tilde{L}^{k+1}, L^{k+1}, R) + \beta\, \Psi(R, W_R) = \arg\min_R\; \frac{1}{2} \sum_{c \in \{r,g,b\}} \left\| I_c - \tilde{L}_c^{k+1} - R \right\|_F^2 + \beta \left\| W_R \circ \nabla R \right\|_1, \tag{14} $$
which can be addressed by fast TV thresholding algorithms, as seen in [31].
Combining Equations (12), (13) and (14), the numerical algorithm for the minimization problem (4) is formulated as Algorithm 1. For the convergence of the algorithm, the stopping criterion is set as $\frac{\| L^{k+1} - L^k \|_F}{\| L^k \|_F} < \varepsilon$ and $\frac{\| R^{k+1} - R^k \|_F}{\| R^k \|_F} < \varepsilon$, or the updating reaches the maximum iteration number, a preset positive integer $K$. In addition, the parameter settings involved in Algorithm 1 are presented in the experimental section.
Algorithm 1 Alternating the updating algorithm for the minimization problem (4)
Input: Choose a group of initial parameters and variables, and generate new iterates via the following scheme.
      1: Transform input image I into the logarithmic domain;
      2: For k = 1, 2, …, perform the following:
          Update L ˜ according to (12);
          Update L according to (13);
          Update R according to (14);
      3: End the iteration when the stopping criterion is satisfied;
Output: the decomposition results L and R .
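As a toy-scale rendering of the $\tilde{L}$ update in Equation (12), one can assemble dense difference matrices and solve the linear system directly; practical implementations use sparse solvers, but the structure is the same. Helper names are hypothetical:

```python
import numpy as np

def diff_ops(h, w):
    """Dense forward-difference operators D_x, D_y for an h-by-w image
    vectorized row-major (zero at the last column/row)."""
    n = h * w
    Dx = np.zeros((n, n)); Dy = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            k = i * w + j
            if j + 1 < w:
                Dx[k, k] = -1.0; Dx[k, k + 1] = 1.0
            if i + 1 < h:
                Dy[k, k] = -1.0; Dy[k, k + w] = 1.0
    return Dx, Dy

def update_L_tilde(Xc, WL, alpha):
    """One-channel closed-form solve of Eq. (12):
    (1 + alpha[(W Dx)^T(W Dx) + (W Dy)^T(W Dy)]) l = x."""
    h, w = Xc.shape
    Dx, Dy = diff_ops(h, w)
    WDx = WL.ravel()[:, None] * Dx       # diag(W_L) @ D_x
    WDy = WL.ravel()[:, None] * Dy
    A = np.eye(h * w) + alpha * (WDx.T @ WDx + WDy.T @ WDy)
    return np.linalg.solve(A, Xc.ravel()).reshape(h, w)
```

With $\alpha = 0$ the system reduces to the identity and returns $X_c^k$ unchanged; increasing $\alpha$ strengthens the weighted smoothing.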

4. Implementation Details and Experimental Results

4.1. Experiment Setting

In order to verify the performance of the presented model, extensive experiments were conducted on three datasets. The first dataset was LIME, which contains nine widely used images collected in [8]. The second was LOL [32], which contains 500 low-light images and their normal-light ground truth. The third was NPE, consisting of 156 images that have low contrast in local areas and serious illumination variation globally. The dataset and the codes are shared and available on the Sina Blog http://blog.sina.com.cn/u/2694868761, March 2013 [33]. We compared the proposed method with the joint intrinsic–extrinsic prior model (JIEP) [9], low-light image enhancement via illumination map estimation (LIME) [8], the extended variational image decomposition model (EVID) [12], the Retinex-based variational framework (RBVF) [11], and the structure- and texture-aware Retinex model (STAR) [7]. The experiments were conducted in MATLAB (R2018a, 64-bit). We first discuss the parameters and their selection criteria and subsequently compare the presented method with several state-of-the-art methods. Several objective evaluation metrics, such as the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM) [34], the autoregressive-based image sharpness metric (ARISM), and the Natural Image Quality Evaluator (NIQE) [35], were adopted to assess enhancement and denoising performance. Note that higher SSIM and PSNR indicate better results, while lower ARISM and NIQE indicate better image quality.
For parameter selection, the two key parameters $\alpha$ and $\beta$, which balance the regularization terms in Equation (4), were primarily studied. The other parameters were fixed and set empirically: $\varepsilon = 1 \times 10^{-2}$, $K = 20$, and $\gamma_L = \gamma_R = 1.2$. Then, the experimental results were analyzed over the two regularization parameters $\alpha$ and $\beta$ to guarantee the most satisfactory enhancement and denoising effects. Figure 1 shows the average SSIM and PSNR results of the proposed method on the LOL dataset. It can be observed that both PSNR and SSIM were significantly affected by the two parameters. Specifically, when fixing $\beta$, the index curves first rose and later fell as $\alpha$ increased, peaking near $\alpha = 0.1$. This indicates that the best selection of $\alpha$ for image enhancement and denoising lies in $[0.06, 0.14]$. Similar experimental results were obtained for the parameter $\beta$; empirically, the best selection of $\beta$ lies in $[0.15, 0.25]$.

4.2. Decomposition Results and Discussions

To evaluate the performance of the extended decomposition method, we introduce the correlation coefficient (referred to as “Corr”) between the illumination component $L$ and the reflectance component $R$, defined as
$$ \mathrm{Corr}(L, R) = \frac{\mathrm{cov}(L, R)}{\sqrt{\mathrm{var}(L)\,\mathrm{var}(R)}}, \tag{15} $$
where $\mathrm{cov}(\cdot,\cdot)$ and $\mathrm{var}(\cdot)$ denote the covariance and variance of the corresponding variables, respectively. “Corr” measures the correlation between $L$ and $R$, and a lower “Corr” indicates that the image decomposition is of higher quality. Figure 2 and Figure 3 display the visualization of several decomposition approaches. It can be found that JIEP, LIME, and EVID retained more image content in the reflectance component, while RBVF and STAR leaked some image details into $L$. By contrast, the proposed method better preserved details in the reflectance component and enforced piecewise smoothness in the illumination component. In addition, the index analysis in Table 1 shows that our method achieved the best “Corr” values among the compared methods.
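The metric in Equation (15) is simply the Pearson correlation of the two components; a minimal sketch:

```python
import numpy as np

def corr(L, R):
    """Correlation coefficient of Eq. (15) between components L and R."""
    l = L.ravel() - L.mean()
    r = R.ravel() - R.mean()
    return float((l @ r) / np.sqrt((l @ l) * (r @ r)))
```

A value near zero indicates that the illumination and reflectance carry non-redundant content, i.e., a cleaner decomposition.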

4.3. Enhancement Results and Discussions

In this subsection, the experimental results of several image enhancement methods are reported. Figure 4 and Figure 5 show the enhanced results of various methods for images such as “moon” and “cars”. As shown in Figure 4, JIEP, LIME, and EVID processed many small structures improperly, while STAR and RBVF could not adequately remove the light component from the reflectance component. Figure 4g shows our result with $\alpha = 0.1$ and $\beta = 0.2$. It is easy to see that our method retained the original color while strengthening the dark regions, effectively decreasing amplified noise in dark areas.
As can be observed in Figure 5, all approaches yielded good results. Nevertheless, the enhanced results of JIEP, EVID, RBVF, and STAR simultaneously enlarged dense noise in the dark areas, while LIME only slightly brightened these dark areas. Figure 5g is our result obtained with $\alpha = 0.1$ and $\beta = 1.8$. It can be observed that our method better enhanced the image while greatly minimizing amplified noise in dark areas.
To further demonstrate the performance of the presented method, we assessed the methods on low-light images collected from the LOL dataset. Figure 6 and Figure 7 display some visual comparisons, and Table 2 presents the quantitative results. The proposed method exhibits better visual quality than the other methods, reproducing images with higher contrast, more detail, and more vivid colors. The metrics in Table 2 verify that the proposed algorithm achieved higher PSNR and SSIM values and lower ARISM and NIQE values than the comparative methods, further confirming that the new method improves denoising capability while enhancing the brightness of the test images.
In addition, the approaches were assessed on 200 low-light images gathered from the NPE and LOL datasets. Table 3 lists the average PSNR, SSIM, ARISM, and NIQE results for the various methods, with the optimal metrics highlighted in bold. Evidently, the presented method stands out among its counterparts in terms of PSNR, SSIM, ARISM, and NIQE values. This reveals that the new method can successfully improve the overall quality of images and restore images while improving denoising capability.
In summary, based on the above experimental results and data analyses, it can be concluded that the presented method provides good results in terms of both noise suppression and brightness.

5. Conclusions

In this paper, a novel Retinex-based decomposition model is presented for simultaneous image enhancement and denoising. First, an extended decomposition scheme was introduced to extract the illumination and reflectance components, which was expected to better represent the prior regularization of illumination and reflectance. Then, spatially adaptive weights regularizing the illumination and reflectance components were developed. The proposed method allows slight smoothing near edges or in bright areas and stronger smoothing in dark areas, which retains useful information and removes noise effectively during image enhancement. Comprehensive experiments prove that the proposed method gives better results both qualitatively and quantitatively compared to other competing methods. However, there is still room for improvement. On the one hand, conducting the algorithm in the log-transform domain alleviates the ill-posedness of the Retinex-based method but can affect the gradient information to some extent. On the other hand, many off-the-shelf image denoising algorithms have not yet been fully exploited to improve low-light image enhancement quality. Future work should focus on the Retinex model in the image domain and on the organic combination of enhancement and denoising.

Author Contributions

Conceptualization, C.Z. and J.X.; writing—original draft preparation, C.Z. and W.Y.; investigation and data curation, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 62001158 and 62202513, the China Postdoctoral Science Foundation under Grant 2019M652545, the Key Scientific and Technological Research Projects in Henan Province under Grant 222102210324.

Data Availability Statement

The data presented in this study are publicly available data (sources stated in the citations). Please contact the corresponding author regarding data availability.

Acknowledgments

Thanks to all editors and reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lozano-Vázquez, L.V.; Miura, J.; Rosales-Silva, A.J.; Luviano-Juárez, A.; Mújica-Vargas, D. Analysis of Different Image Enhancement and Feature Extraction Methods. Mathematics 2022, 10, 2407. [Google Scholar] [CrossRef]
  2. Yuan, N.; Zhao, X.; Sun, B.; Han, W.; Tan, J.; Duan, T.; Gao, X. Low-Light Image Enhancement by Combining Transformer and Convolutional Neural Network. Mathematics 2023, 11, 1657. [Google Scholar] [CrossRef]
  3. Muslim, H.S.M.; Khan, S.A.; Hussain, S.; Jamal, A.; Qasim, H.S.A. A knowledge-based image enhancement and denoising approach. Comput. Math. Organ. Theory 2019, 25, 108–121. [Google Scholar] [CrossRef]
  4. Devi, Y.A.S. Ranking Based Classification in Hyperspectral Images. J. Eng. Appl. Sci. 2018, 13, 1606–1612. [Google Scholar]
  5. Li, L.; Wang, R.; Wang, W.; Gao, W. A low-light image enhancement method for both denoising and contrast enlarging. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 3730–3734. [Google Scholar]
  6. Dong, J.; Pan, J.; Ren, J.S.; Lin, L.; Tang, J.; Yang, M.H. Learning spatially variant linear representation models for joint filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8355–8370. [Google Scholar] [CrossRef]
  7. Xu, J.; Hou, Y.; Ren, D.; Liu, L.; Zhu, F.; Yu, M.; Wang, H.; Shao, L. Star: A structure and texture aware Retinex model. IEEE Trans. Image Process. 2020, 29, 5022–5037. [Google Scholar] [CrossRef]
  8. Guo, X.; Li, Y.; Ling, H. Lime: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef]
  9. Cai, B.; Xu, X.; Guo, K.; Jia, K.; Hu, B.; Tao, D. A joint intrinsic-extrinsic prior model for Retinex. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4000–4009. [Google Scholar]
  10. Wang, Y.; Pang, Z.F.; Duan, Y.; Chen, K. Image Retinex based on the nonconvex TV-type regularization. Inverse Probl. Imaging 2020, 15, 1381–1407. [Google Scholar] [CrossRef]
  11. Ma, Q.; Wang, Y.; Zeng, T. Retinex-based variational framework for low-light image enhancement and denoising. IEEE Trans. Multimed. 2022, 1–9. [Google Scholar] [CrossRef]
  12. Jia, X.; Feng, X.; Wang, W.; Zhang, L. An extended variational image decomposition model for color image enhancement. Neurocomputing 2018, 322, 216–228. [Google Scholar] [CrossRef]
  13. Balamurugan, D.; Aravinth, S.S.; Reddy, P.C.S.; Rupani, A.; Manikandan, A. Multiview objects recognition using deep learning-based wrap-CNN with voting scheme. Neural Process. Lett. 2022, 54, 1495–1521. [Google Scholar] [CrossRef]
  14. Zheng, C.; Shi, D.; Shi, W. Adaptive unfolding total variation network for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Nashville, TN, USA, 19–25 October 2021; pp. 4439–4448. [Google Scholar]
  15. Liu, X.; Ma, W.; Ma, X.; Wang, J. Lae-net: A locally-adaptive embedding network for low-light image enhancement. Pattern Recognit. 2023, 133, 109–119. [Google Scholar] [CrossRef]
  16. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789. [Google Scholar]
  17. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Lighten-net: A convolutional neural network for weakly illuminated image enhancement. Pattern Recognit. Lett. 2018, 104, 15–22. [Google Scholar]
  18. Li, J.; Li, J.; Fang, F.; Li, F.; Zhang, G. Luminance-aware pyramid network for low-light image enhancement. IEEE Trans. Multimed. 2020, 23, 3153–3165. [Google Scholar] [CrossRef]
  19. Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10561–10570. [Google Scholar]
  20. Park, S.; Yu, S.; Moon, B.; Ko, S.; Paik, J. Low-light image enhancement using variational optimization-based Retinex model. IEEE Trans. Consum. Electron. 2017, 63, 178–184. [Google Scholar] [CrossRef]
  21. Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef]
  22. Ren, X.; Yang, W.; Cheng, W.H.; Liu, J. LR3M: Robust low-light enhancement via low-rank regularized Retinex model. IEEE Trans. Image Process. 2020, 29, 5862–5876. [Google Scholar] [CrossRef]
  23. Kurihara, K.; Yoshida, H.; Iiguni, Y. Low-light image enhancement via adaptive shape and texture prior. In Proceedings of the 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Sorrento, Italy, 26–29 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 74–81. [Google Scholar]
  24. Chien, C.C.; Kinoshita, Y.; Shiota, S.; Kiya, H. A Retinex-based image enhancement scheme with noise aware shadow-up function. In International Workshop on Advanced Image Technology (IWAIT); SPIE: Bellingham, WA, USA, 2019; Volume 11049, pp. 501–506. [Google Scholar]
  25. Kang, M.; Jung, M. Simultaneous image enhancement and restoration with non-convex total variation. J. Sci. Comput. 2021, 87, 83. [Google Scholar] [CrossRef]
  26. Guo, Y.; Lu, Y.; Yang, M.; Liu, R.W. Low-light image enhancement with deep blind denoising. In Proceedings of the 2020 12th International Conference on Machine Learning and Computing, Shenzhen, China, 19–21 June 2020; pp. 406–411. [Google Scholar]
  27. Ng, M.K.; Wang, W. A total variation model for Retinex. SIAM J. Imaging Sci. 2011, 4, 345–365. [Google Scholar] [CrossRef]
  28. Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2782–2790. [Google Scholar]
  29. Merugu, S.; Tiwari, A.; Sharma, S.K. Spatial–spectral image classification with edge preserving method. J. Indian Soc. Remote Sens. 2021, 49, 703–711. [Google Scholar] [CrossRef]
  30. Gu, Z.; Li, F.; Fang, F.; Zhang, G. A novel Retinex-based fractional-order variational model for images with severely low light. IEEE Trans. Image Process. 2019, 29, 3239–3253. [Google Scholar] [CrossRef]
  31. Wang, J.; Li, Q.; Yang, S.; Fan, W.; Wonka, P.; Ye, J. A highly scalable parallel algorithm for isotropic total variation models. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 235–243. [Google Scholar]
  32. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
  33. Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548. [Google Scholar] [CrossRef] [PubMed]
  34. Gu, K.; Zhai, G.; Lin, W.; Yang, X.; Zhang, W. No-reference image sharpness assessment in autoregressive parameter space. IEEE Trans. Image Process. 2015, 24, 3218–3231. [Google Scholar] [PubMed]
  35. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
Figure 1. Influence of the parameters α and β on the performance of the proposed method: (a,b) show the effects of α on PSNR and SSIM; (c,d) show the effects of β on PSNR and SSIM.
Figure 2. Comparisons between illumination and reflectance components using different methods on the image of the LIME dataset.
Figure 3. Comparisons of illumination and reflectance components using different methods on the image of the LIME dataset.
Figure 4. Comparisons with some related methods on images from the LIME dataset.
Figure 5. Comparisons with some related methods on images from the LIME dataset.
Figure 6. Comparisons with some related methods on an image from the LOL dataset.
Figure 7. Comparisons with some related methods on images from the LOL dataset.
Table 1. Analysis of the index “Corr” for Retinex-based decomposition.
Method   JIEP     LIME     EVID     RBVF     STAR     OURS
Corr     0.0346   0.0275   0.0297   0.0267   0.0231   0.0167
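A lower “Corr” value indicates a cleaner Retinex decomposition, with the proposed method scoring lowest. As a hedged illustration only, assuming Corr behaves like a correlation measure between the estimated illumination and reflectance components (an assumption, not the paper’s exact definition), a minimal Pearson-style sketch could look like this; the function name is hypothetical.

```python
import numpy as np

def component_correlation(illumination, reflectance):
    """Absolute Pearson correlation between two decomposed components.

    Hypothetical reading of the "Corr" index: a lower value suggests
    the illumination and reflectance components share less structure.
    """
    x = np.asarray(illumination, dtype=np.float64).ravel()
    y = np.asarray(reflectance, dtype=np.float64).ravel()
    xc = x - x.mean()            # center both signals
    yc = y - y.mean()
    denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
    if denom == 0.0:             # a constant component carries no structure
        return 0.0
    return float(abs((xc * yc).sum()) / denom)
```

For example, perfectly linearly related components give a value of 1.0, while a constant component gives 0.0.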
Table 2. Quantitative analysis of different methods for images from the LOL dataset.
Image                 Methods   PSNR↑ 1   SSIM↑ 1   ARISM↓ 1   NIQE↓ 1
Figure 6 (Bookcase)   JIEP      20.3601   0.8255    3.7321     3.2672
                      LIME      20.3754   0.8862    3.7164     3.2167
                      EVID      20.1209   0.8190    3.5751     3.1753
                      RBVF      20.9801   0.8809    3.5324     3.1387
                      STAR      21.0102   0.9001    3.4701     3.1122
                      OURS      22.0661   0.9153    3.3882     3.0512
Figure 7 (Cabinet)    JIEP      18.2423   0.8012    3.7106     3.2238
                      LIME      19.3202   0.8527    3.6941     3.2069
                      EVID      19.6229   0.8505    3.5103     3.1711
                      RBVF      19.6130   0.8578    3.5135     3.1287
                      STAR      19.9580   0.8675    3.3625     3.1022
                      OURS      20.6137   0.8951    3.2225     3.0382
1 The upper arrow indicates that higher values of the metric mean better enhancement and denoising performance, while the lower arrow implies the opposite.
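PSNR and SSIM in the tables above are standard full-reference quality metrics. As a minimal sketch (not the evaluation code used in the paper), PSNR follows directly from the mean squared error, and a simplified single-window SSIM can be formed from global means, variances, and covariance; the standard SSIM instead averages over a sliding Gaussian window.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0.0:
        return float("inf")    # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, test, max_val=255.0):
    """Single-window (global) SSIM; higher is better.

    Simplification for illustration: statistics are computed over the
    whole image rather than a sliding Gaussian window.
    """
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    c1 = (0.01 * max_val) ** 2   # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov_xy = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For instance, two 8-bit images whose pixels differ everywhere by 10 have MSE = 100 and thus a PSNR of about 28.13 dB, while an image compared against itself yields an SSIM of 1.0.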
Table 3. Average results of different methods on 200 low-light images from the LOL and NPE datasets.
Methods   PSNR↑ 1   SSIM↑ 1   ARISM↓ 1   NIQE↓ 1
JIEP      18.1109   0.8010    3.8121     3.4083
LIME      19.1732   0.8477    3.7125     3.3399
EVID      19.5610   0.8498    3.6200     3.2670
RBVF      19.5707   0.8522    3.5346     3.1871
STAR      19.8765   0.8611    3.4320     3.1651
OURS      20.6137   0.8931    3.2890     3.0901
1 The upper arrow indicates that higher values of the metric mean better enhancement and denoising performance, while the lower arrow implies the opposite.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhao, C.; Yue, W.; Xu, J.; Chen, H. Joint Low-Light Image Enhancement and Denoising via a New Retinex-Based Decomposition Model. Mathematics 2023, 11, 3834. https://doi.org/10.3390/math11183834

