Article

Low-Light Image Enhancement Based on Constraint Low-Rank Approximation Retinex Model

School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo 255000, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(16), 6126; https://doi.org/10.3390/s22166126
Submission received: 30 July 2022 / Revised: 11 August 2022 / Accepted: 12 August 2022 / Published: 16 August 2022
(This article belongs to the Special Issue Sensing Technologies for Image/Video Analysis)

Abstract

Images captured in a low-light environment are strongly influenced by noise and low contrast, which is detrimental to tasks such as image recognition and object detection. Retinex-based approaches have been continuously explored for low-light enhancement. Nevertheless, Retinex decomposition is a highly ill-posed problem. The estimation of the decomposed components should be combined with proper constraints. Meanwhile, the noise mixed in the low-light image causes unpleasant visual effects. To address these problems, we propose a Constraint Low-Rank Approximation Retinex model (CLAR). In this model, two exponential relative total variation constraints were imposed to ensure that the illumination is piece-wise smooth and that the reflectance component is piece-wise continuous. In addition, the low-rank prior was introduced to suppress the noise in the reflectance component. With a tailored separated alternating direction method of multipliers (ADMM) algorithm, the illumination and reflectance components were updated accurately. Experimental results on several public datasets verify the effectiveness of the proposed model subjectively and objectively.

1. Introduction

Images captured in low-light scenarios, e.g., in darkness or at night-time, not only lack favorable visual aesthetics but also suffer from noise and color distortion. This directly impedes the performance of many computer vision algorithms, e.g., image retrieval and image recognition. With the boom and popularity of sensing technologies, the number of images to be processed has skyrocketed, and the need for high-quality images captured in low-light conditions is urgent. Therefore, it is vital to construct an effective and practical method for low-light image enhancement tasks.
There are various methods for low-light image enhancement, which can be divided into three categories, namely histogram equalization (HE)-based methods, Retinex-based methods, and deep-learning-based methods. HE-based methods adjust image properties and can brighten dark images by extending the dynamic range of the image; however, they cannot satisfactorily adjust the detailed information of the image. Proposed by Land et al. [1], the Retinex theory can be regarded as a fundamental theory of human visual perception; it decomposes an image into illumination and reflectance components and provides a robust and flexible framework for low-light image enhancement tasks [2]. Furthermore, variational Retinex methods are utilized to estimate the piece-wise continuous reflectance component and the piece-wise smooth illumination component [3]. Recently, low-light image enhancement methods based on deep learning have also been studied comprehensively. Efforts have focused on learning-based methods since the pioneering work of Bychkovsky et al. [4]. Data-driven photo adjustment utilizes both traditional machine learning techniques [5] and deep neural networks [6,7]. However, the effectiveness of learning-based methods highly depends on massive amounts of high-quality labelled data, which are labor-intensive and time-consuming to collect.
In this work, we propose the CLAR model for feasible low-light image enhancement. Since the Retinex decomposition is a highly ill-posed problem, we constructed two exponential relative total variation constraints for the illumination and reflectance components, respectively. The two constraints enable the illumination to be piece-wise smooth and the reflectance to be piece-wise continuous. However, the illumination and reflectance components are interdependent and interact during the iterations. To address this problem, an ADMM formulation was developed to optimize the CLAR model effectively. Meanwhile, excessive noise is inevitable in low-light images and resides in the reflectance component [8]. To suppress this noise, the low-rank prior was introduced into the estimation of the reflectance. Comparative experimental results demonstrate that the CLAR model achieves favorable performance in low-light image enhancement.
This paper is organized as follows: Section 2 briefly reviews low-light image enhancement methods; Section 3 elaborates the proposed model; Section 4 presents the experimental results; and Section 5 concludes the work.

2. Related Work

Histogram equalization (HE)-based methods generally utilize the histogram to boost the contrast of the image, and many variants exist. The AHE [9] method calculates local histograms and then reallocates the brightness to enhance the contrast of the image while preserving more details; however, the image is equally divided into several blocks, which increases the time complexity of the algorithm. The WTHE [10] algorithm can be applied to video enhancement and effectively avoids over-enhancement and level-saturation artifacts; however, the details are incomplete and noisy. The LDR [11] method expands the gray-level differences between adjacent pixels through a layered difference representation of the 2D histogram, achieving considerable speed and enhancement quality. Nevertheless, HE-based methods focus on contrast improvement while neglecting the effect of illumination conditions.
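For concreteness, the following is a minimal NumPy sketch of global histogram equalization on an 8-bit grayscale image; the function name and the uint8 assumption are ours, and practical variants such as AHE operate on local blocks instead.

```python
import numpy as np

def histogram_equalization(gray: np.ndarray) -> np.ndarray:
    """Global HE for an 8-bit grayscale image: remap intensities through the CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()                       # first occupied bin
    denom = max(int(cdf[-1] - cdf_min), 1)             # guard against flat images
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255.0), 0, 255)
    return lut.astype(np.uint8)[gray]                  # apply lookup table per pixel
```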
Learning-based methods model feature maps from high-visual-quality images to enhance low-light images. The pioneering supervised low-light enhancement method LLNet [12] brightens images with minimal pixel-level saturation by stacking sparse denoising autoencoders. The MBLLEN method [13] decomposes an end-to-end network into a feature extraction module, an enhancement module, and a fusion module for low-light image enhancement, which effectively suppresses image noise and artifacts in low-light areas. In addition, the Retinex theory has been combined with deep learning methods. Following the Retinex theory, Retinex-Net [14] is composed of a decomposition network and an enhancement network: the decomposition network gains the reflectance and illumination components by decomposing the given image, and the enhancement network is built to enhance the estimated illumination. It overcomes the limitation of traditional methods imposed by the reflectance and the capacity of the illumination decomposition model, but it depends on the two assumptions of Retinex-theory decomposition. Similarly, KinD [7] introduces a degradation-removal module into the estimation of the reflectance component; the light level can be flexibly adjusted according to the needs of different users, while the visual defects amplified by enhancing dark areas are effectively eliminated. In addition to the aforementioned supervised methods, unsupervised methods have also found their way into the low-light image domain. EnlightenGAN [15], as the first unsupervised approach, employs a global–local discriminator to ensure the realism of the enhanced result; its main focus is illumination enhancement without special attention to noise suppression, so noise may even be magnified in the enhanced image. Recently, Xiong et al. [16] proposed a two-stage GAN framework with pseudo-labelled samples, in which contrast enhancement and denoising are decoupled, obtaining good results for both.
Retinex-based methods enhance low-light images by image decomposition. They postulate that the input low-light image $O$ can be denoted as the product of the illumination $L$ and the reflectance $R$, i.e., $O = L \odot R$ [17], where the symbol $\odot$ denotes element-wise multiplication. The decomposed components can be converted back by element-wise division, namely $L = O \oslash R$ and $R = O \oslash L$, where $\oslash$ represents element-wise division. Further processing of the decomposed components then yields the enhanced result. As the pioneering methods in this domain, the single-scale Retinex (SSR) [18] and multi-scale Retinex (MSR) [19] were proposed for low-light image enhancement. SSR is sensitive to high-frequency components and can better enhance the edge information in the image; however, the enhanced image looks unnatural and may be over-enhanced. Compared with SSR, MSR can realize color enhancement, color constancy, local dynamic range compression, and global dynamic range compression; its shortcomings are that the edges are not sharp enough and that high-frequency details cannot be improved significantly. Subsequent methods consider both the illumination and reflectance components in order to improve the performance [20,21]. However, estimating the illumination and reflectance components from a single image is an ill-posed problem [22]. To solve this issue, some researchers transform the Retinex decomposition into a statistical reasoning problem and solve it by imposing different constraints [23,24]. The variational Retinex methods commonly adopt a variational model to estimate the decomposed components [3]. The model is formulated as $\min_{L,R} \|O - L \odot R\|_F^2 + N_1 + N_2$, where $N_1$ and $N_2$ are regularization constraints on the decomposed components.
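To make the element-wise relations concrete, the following Python sketch decomposes a synthetic dark image with a placeholder illumination and verifies that the product recomposes the input exactly; the Gaussian-blur illumination estimate is an illustrative assumption, not one of the models reviewed here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_retinex_split(O, eps=1e-3):
    """Crude decomposition O = L ⊙ R: a heavy blur stands in for the illumination."""
    L = np.clip(gaussian_filter(O, sigma=15), eps, 1.0)  # placeholder illumination
    R = O / L                                            # reflectance, R = O ⊘ L
    return L, R

O = np.random.rand(64, 64) * 0.2   # synthetic dark image in [0, 1]
L, R = toy_retinex_split(O)
print(np.allclose(L * R, O))       # True: the element-wise relation is exact
```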

3. Methodology

3.1. Illumination and Reflectance Constraints

In Retinex theory, the key to preserving the brightness distribution consistency is to ensure that the illumination component L is piece-wise smooth [25]. In contrast, the reflectance component R should be piece-wise continuous while preserving detail. To this end, it is necessary to impose appropriate constraints on the estimation of the illumination and reflectance components. In the proposed model, we utilized the exponential relative total variation as the smoothness and continuity constraints by applying different exponential operations. The relative total variation method takes the relationship between the centered pixel and its neighboring pixels as the main consideration; it is composed of the window-based total variation and the inherent variation [26]. The windowed total variation is formulated as
$$P_{x/y} = \sum_{q \in R(p)} G_\sigma * \left| \partial_{x/y} I_q \right|, \tag{1}$$
where $P_x$ and $P_y$ are the windowed total variations at the centered pixel in the horizontal and vertical directions, respectively. The windowed inherent variations $Q_x$ and $Q_y$ are defined as
$$Q_{x/y} = \left| \sum_{q \in R(p)} G_\sigma * \partial_{x/y} I_q \right|, \tag{2}$$
where $I$ is the input image, $\partial_{x/y}$ is the partial derivative in the horizontal or vertical direction, and $G_\sigma$ is a Gaussian kernel with window size $\sigma = 3$. The symbol $*$ is the convolution operator. $R(p)$ is a rectangular region centered on the pixel $p$, and the pixel $q$ belongs to $R(p)$. The Retinex theory implies a significant property: the gradient distributions of the illumination and reflectance components are different. The gradient of the ideal piece-wise smooth illumination component tends to be small, whereas the gradient of the ideal reflectance component tends to be large. By applying exponents to the relative total variation, the constructed constraints can capture the gradient distribution characteristics of the decomposed components. The illumination smoothness constraint is formulated as
$$S_{x/y} = \left( \frac{P_{x/y}(L)}{Q_{x/y}(L) + \epsilon} \right)^{\gamma_s}. \tag{3}$$
By applying a decaying exponent, the detail-preserving constraint is constructed as
$$T_{x/y} = \frac{1}{\left( \frac{P_{x/y}(R)}{Q_{x/y}(R) + \epsilon} \right)^{\gamma_t} + \varepsilon}, \tag{4}$$
where $\epsilon = 0.001$ and $\varepsilon = 0.005$, and $\gamma_s$ and $\gamma_t$ are the exponential coefficients.
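A possible implementation of the weights in Equations (1)-(4) is sketched below, assuming forward differences for $\partial$ and `scipy.ndimage.gaussian_filter` for the Gaussian window $G_\sigma$; the function names and the dictionary layout are ours, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def windowed_variations(img, sigma=3.0):
    """Windowed total variation P and inherent variation Q (Eqs. 1-2), per direction."""
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward difference, horizontal
    gy = np.diff(img, axis=0, append=img[-1:, :])   # forward difference, vertical
    P = {"x": gaussian_filter(np.abs(gx), sigma),   # window sum of |gradients|
         "y": gaussian_filter(np.abs(gy), sigma)}
    Q = {"x": np.abs(gaussian_filter(gx, sigma)),   # |window sum of gradients|
         "y": np.abs(gaussian_filter(gy, sigma))}
    return P, Q

def smoothness_weight(L, gamma_s=1.25, eps=1e-3):
    """Illumination smoothness constraint S of Eq. (3)."""
    P, Q = windowed_variations(L)
    return {d: (P[d] / (Q[d] + eps)) ** gamma_s for d in ("x", "y")}

def detail_weight(R, gamma_t=0.75, eps=1e-3, eps2=5e-3):
    """Reflectance detail-preserving constraint T of Eq. (4)."""
    P, Q = windowed_variations(R)
    return {d: 1.0 / ((P[d] / (Q[d] + eps)) ** gamma_t + eps2) for d in ("x", "y")}
```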

3.2. Nuclear Norm Minimization for Low-Rank Approximation

Noise commonly exists in low-light images due to thermal noise in the electronic device and other factors. As prior knowledge, the rank of a noise-free image tends to be low; in contrast, a noisy image tends to have a high rank due to the chaotic distribution of the noise. A common solution to image denoising is the low-rank approximation method. Given a matrix Y as the input noisy image matrix, the main idea of low-rank approximation is to obtain a low-rank matrix X that is as close to the input matrix Y as possible. One way to achieve the low-rank matrix approximation is nuclear norm minimization (NNM).
Given a matrix X, the formulation of the nuclear norm is
$$\|X\|_* = \sum_i \left| \sigma_i(X) \right|_1, \tag{5}$$
where $\sigma_i(X)$ denotes the $i$-th singular value of $X$. Furthermore, the optimization problem of NNM approximation with the Frobenius norm can be solved by a soft-thresholding operation on the singular values of the observed matrix [27]. The optimization problem is formulated as
$$\hat{X} = \arg\min_X \|Y - X\|_F^2 + \|X\|_*, \tag{6}$$
where $\hat{X}$ is the solution matrix.
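Equation (6) admits the following soft-thresholding sketch; note the docstring uses the common $\frac{1}{2}$-scaled objective, for which shrinking each singular value by exactly $\tau$ is the closed-form solution. The helper name `svt` and the demo values are ours.

```python
import numpy as np

def svt(Y: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: argmin_X 0.5*||Y - X||_F^2 + tau*||X||_*."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)     # soft-threshold the singular values
    return (U * s_shrunk) @ Vt              # rebuild the low-rank approximation

# Demo: a noisy rank-2 matrix; thresholding suppresses the noise directions.
rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
Y_noisy = Y + 0.1 * rng.standard_normal(Y.shape)
X_hat = svt(Y_noisy, tau=2.0)
print(np.linalg.matrix_rank(X_hat, tol=1e-6))   # typically 2: signal directions survive
```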

3.3. Constraint Low-Rank Approximation Retinex Model

The constraint low-rank approximation Retinex model is formulated as
$$\arg\min_{R,L} \|R \odot L - O\|_F^2 + \alpha \left( \|S_x \odot \partial_x L\|_F^2 + \|S_y \odot \partial_y L\|_F^2 \right) + \beta \left( \|T_x \odot \partial_x R\|_F^2 + \|T_y \odot \partial_y R\|_F^2 \right) + \sum_i \|R_i(R)\|_*, \tag{7}$$
where $\alpha$ and $\beta$ are the parameters that control the importance of the different terms in the objective function. $\|R \odot L - O\|_F^2$ constrains the fidelity between the observed image $O$ and the reconstructed image $L \odot R$. The terms $\|S_x \odot \partial_x L\|_F^2$ and $\|S_y \odot \partial_y L\|_F^2$ enable the illumination map to be piece-wise smooth, and $\|T_x \odot \partial_x R\|_F^2$ and $\|T_y \odot \partial_y R\|_F^2$ enable the reflectance map to be piece-wise continuous. $\sum_i \|R_i(R)\|_*$ is the nuclear norm term used to minimize the rank of the observed matrix of the $i$-th similar patch of the reflectance component $R$, where $R_i$ is a patch extraction operation. The framework of the proposed model is shown in Figure 1.
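For monitoring convergence, the objective of Equation (7) can be evaluated as in the sketch below. Non-overlapping patches stand in for the similar-patch groups $R_i$, and the weight dictionaries S and T are assumed to come from the sketch in Section 3.1; all names are ours.

```python
import numpy as np

def clar_objective(O, L, R, S, T, alpha=0.015, beta=0.01, patch=8):
    """Value of Eq. (7) for current estimates; S, T from the weight sketches above."""
    def dx(A): return np.diff(A, axis=1, append=A[:, -1:])
    def dy(A): return np.diff(A, axis=0, append=A[-1:, :])

    fidelity = np.sum((R * L - O) ** 2)                        # ||R ⊙ L - O||_F^2
    smooth = alpha * (np.sum((S["x"] * dx(L)) ** 2) +
                      np.sum((S["y"] * dy(L)) ** 2))
    detail = beta * (np.sum((T["x"] * dx(R)) ** 2) +
                     np.sum((T["y"] * dy(R)) ** 2))
    # Nuclear-norm term: non-overlapping patches stand in for similar-patch groups.
    nuc = 0.0
    H, W = R.shape
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            nuc += np.linalg.svd(R[i:i+patch, j:j+patch], compute_uv=False).sum()
    return fidelity + smooth + detail + nuc
```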
In the following parts, we elaborate the solutions for estimating the illumination component and the reflectance component. The joint problem is split into two separate sub-problems, which are solved in sequence.

3.4. Illumination and Reflectance Estimation Problems

3.4.1. Illumination Estimation Problem

Picking all of the terms related to L in Equation (7), the formulation of the illumination estimation problem is
$$\arg\min_L \|R \odot L - O\|_F^2 + \alpha \left( \|S_x \odot \partial_x L\|_F^2 + \|S_y \odot \partial_y L\|_F^2 \right). \tag{8}$$
Estimating $L$ from the reconstructed image may make the problem complicated and time-consuming. In addition, the influence of the reflectance component can be ignored when estimating the illumination component [22]. Therefore, $L$ is estimated from the initial illumination $\hat{L}$ with the proposed constraint in this model. The illumination estimation problem is reformulated as
$$\arg\min_L \|L - \hat{L}\|_F^2 + \alpha \left( \|S_x \odot \partial_x L\|_F^2 + \|S_y \odot \partial_y L\|_F^2 \right). \tag{9}$$
As proved by Guo et al. [28], the three channels of an RGB image share the same illumination component. The maximum value of the three channels is utilized as the initial illumination component $\hat{L}$, which is formulated as
$$\hat{L}(x) = \max_{c \in \{R, G, B\}} L^c(x), \tag{10}$$
where $x$ indexes the pixels of the image. Furthermore, we constructed the augmented Lagrangian function to obtain the solution of Equation (9), which is formulated as
$$\mathcal{L}(L, B, Z) = \|L - \hat{L}\|_F^2 + \alpha \left( \|S_x \odot \partial_x B\|_F^2 + \|S_y \odot \partial_y B\|_F^2 \right) + \langle Z, B - L \rangle + \frac{\mu}{2} \|B - L\|_F^2, \quad \text{s.t. } B = L, \tag{11}$$
where $Z$ is the Lagrangian multiplier and $\mu$ is a positive penalty scalar. By establishing the augmented Lagrangian function, the number of variables to be iterated increases to four: $L$, $B$, $Z$, and $\mu$. Equation (9) is then solved using the alternating direction method of multipliers (ADMM), which is commonly used for such convex problems. Each variable corresponds to a separate sub-problem with a closed-form solution. The solutions of the sub-problems are given below:
(1) Solution to the L problem. Collecting all of the terms related to $L$ in Equation (11), the L problem is formulated as
$$\arg\min_L \|L - \hat{L}\|_F^2 + \langle Z, B - L \rangle + \frac{\mu}{2} \|B - L\|_F^2. \tag{12}$$
To solve Equation (12), the matrix notation form of the function is rewritten as
$$L^{(k+1)} = \arg\min_L \left( L^{(k)T} - \hat{L}^T \right) \left( L^{(k)} - \hat{L} \right) + Z^{(k)} \left( B^{(k)} - L^{(k)} \right) + \frac{\mu^{(k)}}{2} \left( B^{(k)T} - L^{(k)T} \right) \left( B^{(k)} - L^{(k)} \right). \tag{13}$$
Then, Equation (13) is differentiated and the derivative is set to 0. The solution to Equation (13) is formulated as
$$2 \left( L^{(k+1)} - \hat{L} \right) - Z^{(k)} - \mu^{(k)} \left( B^{(k)} - L^{(k+1)} \right) = 0, \tag{14}$$
$$L^{(k+1)} = \frac{\mu^{(k)} B^{(k)} + Z^{(k)} + 2\hat{L}}{\left( 2 + \mu^{(k)} \right) I}, \tag{15}$$
where I is the corresponding identity matrix.
(2) Solution to the B problem. Collecting all terms related to $B$ in Equation (11), we have:
$$B = \arg\min_B \alpha \left( \|S_x \odot \partial_x B\|_F^2 + \|S_y \odot \partial_y B\|_F^2 \right) + \langle Z, B - L \rangle + \frac{\mu}{2} \|B - L\|_F^2. \tag{16}$$
To solve Equation (16), the matrix notation form of the problem is written as
$$B^{(k+1)} = \arg\min_B \alpha \left( B^{(k)T} D_x^T S_x D_x B^{(k)} + B^{(k)T} D_y^T S_y D_y B^{(k)} \right) + Z^{(k)} \left( B^{(k)} - L^{(k+1)} \right) + \frac{\mu}{2} \left( B^{(k)T} - L^{(k+1)T} \right) \left( B^{(k)} - L^{(k+1)} \right), \tag{17}$$
where $D_x$ and $D_y$ are the Toeplitz matrices in the horizontal and vertical directions, respectively. Then, Equation (17) is differentiated with respect to $B$ and the derivative is set to 0; the solution to Equation (17) is formulated as
$$2\alpha \left( D_x^T S_x D_x + D_y^T S_y D_y \right) B^{(k+1)} + Z^{(k)} + \mu^{(k)} \left( B^{(k+1)} - L^{(k+1)} \right) = 0, \tag{18}$$
$$B^{(k+1)} = \frac{\mu^{(k)} L^{(k+1)} - Z^{(k)}}{2\alpha \left( D_x^T S_x D_x + D_y^T S_y D_y \right) + \mu^{(k)} I}. \tag{19}$$
(3) Updating Z and μ. The updates of $Z$ and $\mu$ are given by
$$Z^{(k+1)} = Z^{(k)} + \mu^{(k)} \left( B^{(k+1)} - L^{(k+1)} \right), \qquad \mu^{(k+1)} = \mu^{(k)} \rho, \quad \rho > 1. \tag{20}$$
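The illumination ADMM of Equations (11)-(20) can be condensed as in the sketch below. Sparse forward-difference operators play the role of the Toeplitz matrices $D_x$ and $D_y$, the weights $S_x$, $S_y$ (e.g., `S["x"]`, `S["y"]` from the Section 3.1 sketch, computed from the initial illumination) are assumed precomputed, and the default $\mu$, $\rho$, and iteration count are our assumptions rather than the authors' settings.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def diff_ops(h, w):
    """Sparse forward-difference (Toeplitz-style) operators for a vectorized h-by-w image."""
    dx1 = sp.diags([-1.0, 1.0], [0, 1], shape=(w, w))
    dy1 = sp.diags([-1.0, 1.0], [0, 1], shape=(h, h))
    Dx = sp.kron(sp.eye(h), dx1, format="csc")   # horizontal differences
    Dy = sp.kron(dy1, sp.eye(w), format="csc")   # vertical differences
    return Dx, Dy

def estimate_illumination(img_rgb, Sx, Sy, alpha=0.015, mu=1.0, rho=1.5, iters=10):
    """ADMM sketch for Eq. (9); Sx, Sy are the smoothness weights of Eq. (3)."""
    h, w, _ = img_rgb.shape
    Lhat = img_rgb.max(axis=2).ravel()           # initial illumination, Eq. (10)
    Dx, Dy = diff_ops(h, w)
    A = 2.0 * alpha * (Dx.T @ sp.diags(Sx.ravel()) @ Dx +
                       Dy.T @ sp.diags(Sy.ravel()) @ Dy)
    I = sp.eye(h * w, format="csc")
    L, B, Z = Lhat.copy(), Lhat.copy(), np.zeros(h * w)
    for _ in range(iters):
        L = (mu * B + Z + 2.0 * Lhat) / (2.0 + mu)       # closed form, Eq. (15)
        B = spsolve((A + mu * I).tocsc(), mu * L - Z)    # linear system, Eq. (19)
        Z = Z + mu * (B - L)                             # multiplier update, Eq. (20)
        mu *= rho                                        # penalty growth, rho > 1
    return L.reshape(h, w)
```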

3.4.2. Reflectance Estimation Problem

Collecting all of the terms related to R in Equation (7), the following formulation can be obtained:
$$\arg\min_R \|R \odot L - O\|_F^2 + \beta \left( \|T_x \odot \partial_x R\|_F^2 + \|T_y \odot \partial_y R\|_F^2 \right) + \sum_i \|R_i(R)\|_*. \tag{21}$$
Similar to the illumination estimation problem, the reflectance estimation problem is also solved by utilizing the ADMM algorithm. In addition, the L in Equation (21) is regarded as a constant after estimating the illumination component by solving the L-problem. The augmented Lagrangian function for Equation (21) is written as
$$\mathcal{L}(R, G, Y) = \|R \odot L - O\|_F^2 + \beta \left( \|T_x \odot \partial_x G\|_F^2 + \|T_y \odot \partial_y G\|_F^2 \right) + \sum_i \|R_i(R)\|_* + \langle Y, G - R \rangle + \frac{\eta}{2} \|G - R\|_F^2, \quad \text{s.t. } G = R, \tag{22}$$
where $Y$ is the Lagrange multiplier and $\eta$ is a positive penalty scalar. The $R$ in the second term of Equation (21) is substituted with an auxiliary variable $G$. Then, the solutions to the sub-problems for the corresponding variables are as follows.
(1) Solution to the G problem. Neglecting the terms unrelated to $G$, we have the following problem:
$$G = \arg\min_G \beta \left( \|T_x \odot \partial_x G\|_F^2 + \|T_y \odot \partial_y G\|_F^2 \right) + \langle Y, G - R \rangle + \frac{\eta}{2} \|G - R\|_F^2. \tag{23}$$
Similar to the solution to Equation (17), the matrix notation form of the problem is written as:
$$G^{(k+1)} = \arg\min_G \beta \left( G^{(k)T} D_x^T T_x D_x G^{(k)} + G^{(k)T} D_y^T T_y D_y G^{(k)} \right) + Y^{(k)} \left( G^{(k)} - R^{(k)} \right) + \frac{\eta^{(k)}}{2} \left( G^{(k)T} - R^{(k)T} \right) \left( G^{(k)} - R^{(k)} \right). \tag{24}$$
Then, Equation (24) is differentiated and the derivative is set to 0. The solution is formulated as
$$2\beta \left( D_x^T T_x D_x + D_y^T T_y D_y \right) G^{(k+1)} + Y^{(k)} + \eta^{(k)} \left( G^{(k+1)} - R^{(k)} \right) = 0, \tag{25}$$
$$G^{(k+1)} = \frac{\eta^{(k)} R^{(k)} - Y^{(k)}}{2\beta \left( D_x^T T_x D_x + D_y^T T_y D_y \right) + \eta^{(k)} I}. \tag{26}$$
(2) Solution to the R problem. Collecting the terms related to $R$, the formulation for estimating the variable $R$ is
$$R = \arg\min_R \|R \odot L - O\|_F^2 + \sum_i \|R_i(R)\|_* + \langle Y, G - R \rangle + \frac{\eta}{2} \|G - R\|_F^2. \tag{27}$$
Here, $R_i(R)$ is the patch-level representation for the corresponding $R$. To simplify the problem, Equation (27) is reformulated as
$$R = \arg\min_R \sum_i \|R_i(R) \odot L_i - O_i\|_F^2 + \sum_i \|R_i(R)\|_* + \sum_i \langle Y_i, G_i - R_i(R) \rangle + \sum_i \frac{\eta}{2} \|G_i - R_i(R)\|_F^2, \tag{28}$$
where $L_i$, $O_i$, $Y_i$, and $G_i$ are the $i$-th patch-level representations of $L$, $O$, $Y$, and $G$, respectively. For further simplicity, the problem is solved at each $i$-th patch-level location, and the subscript $i$ is omitted in the remainder. Then, Equation (28) is reformulated as
$$R = \arg\min_R \|R \odot L - O\|_F^2 + \|R\|_* + \langle Y, G - R \rangle + \frac{\eta}{2} \|G - R\|_F^2. \tag{29}$$
Equation (29) is further rewritten to solve the problem:
$$\arg\min_R \|R - \bar{R}^{(k)}\|_F^2 + \frac{\|R\|_*}{2}, \qquad \bar{R}^{(k)} = \frac{2\, O \odot L + \eta^{(k)} G^{(k+1)} + Y^{(k)}}{2 L^2 + \eta^{(k)}}. \tag{30}$$
From then on, the original R problem is transformed into the standard low-rank minimization problem, which can be solved as
$$R^{(k+1)} = U S_\tau(\Sigma) V^T, \tag{31}$$
where $\bar{R}^{(k)} = U \Sigma V^T$ is the singular value decomposition of $\bar{R}^{(k)}$ and $S_\tau(\Sigma)$ is the soft-thresholding operation.
(3) Updating Y and η. The updates of $Y$ and $\eta$ are given by
$$Y^{(k+1)} = Y^{(k)} + \eta^{(k)} \left( G^{(k+1)} - R^{(k+1)} \right), \qquad \eta^{(k+1)} = \eta^{(k)} \rho, \quad \rho > 1. \tag{32}$$
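On each similar-patch group, the R step of Equations (30)-(32) reduces to an element-wise target followed by singular value soft-thresholding, sketched below; the threshold `tau` and default `rho` are tunable assumptions, and the function names are ours.

```python
import numpy as np

def reflectance_step(O, L, G, Y, eta, tau=0.5):
    """One R update per patch group: build R̄ via Eq. (30), then apply Eq. (31)."""
    R_bar = (2.0 * O * L + eta * G + Y) / (2.0 * L**2 + eta)   # element-wise target
    U, s, Vt = np.linalg.svd(R_bar, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt                 # U S_tau(Σ) V^T

def dual_step(Y, G, R, eta, rho=1.5):
    """Multiplier and penalty updates of Eq. (32)."""
    return Y + eta * (G - R), eta * rho
```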

3.5. Retinex Composition

Since the illumination and reflectance components have been estimated by solving the above problems, the final step is to adjust $L$ to improve the visibility and brightness of the input image. Gamma correction [29] is adopted as the illumination adjustment method. The corrected illumination $L_{Gamma}$ is written as
$$L_{Gamma} = L^{\frac{1}{\gamma}}. \tag{33}$$
The final enhanced image O ^ is generated by the composition of the estimated reflectance component and the corrected illumination component, which is formulated as
$$\hat{O} = R \odot L^{\frac{1}{\gamma}}, \tag{34}$$
where γ is empirically set to 2.2 [20,30].
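Assuming a single-channel illumination $L$ shared by the three color channels (as in Equation (10)) and an RGB reflectance $R$, the composition of Equations (33) and (34) reduces to a few lines:

```python
import numpy as np

def compose(R: np.ndarray, L: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Recompose the enhanced image O_hat = R ⊙ L^(1/gamma), Eqs. (33)-(34).

    R: reflectance, shape (H, W, 3); L: illumination, shape (H, W); both in [0, 1].
    """
    L_gamma = np.clip(L, 1e-6, 1.0) ** (1.0 / gamma)   # gamma correction, Eq. (33)
    return np.clip(R * L_gamma[..., None], 0.0, 1.0)   # broadcast L over channels
```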

4. Experimental Results and Analysis

4.1. Experiment Settings and Implementation Details

All of the experiments were performed on MATLAB R2019b with Intel i7-9700K CPU @3.60 GHz and 32 GB memory. We set the key parameters as γ s = 1.25 , γ t = 0.75 , α = 0.015 , and β = 0.01 . For a fair comparison, the results of the compared methods were reproduced by official codes. The proposed method was evaluated on six benchmarks, i.e., LIME [28], DICM [11], LOL [14], MEF [31], NPE [32], and VV (https://sites.google.com/site/vonikakis/datasets, accessed on 25 July 2022). Comparison analyses were carried out on ten competitors, including HE [33], SSR [18], MSRCR [34], CVC [35], Dong [36], LIME [28], Kindle [7], Jiep [37], ZERO-DCE [38], and ZERO-DCE++ [39].

4.2. Decomposition Analysis

Example results of the Retinex decomposition of the proposed model are shown in Figure 2. As mentioned above, the illumination map should be piece-wise smooth, and the reflectance should be piece-wise continuous. As shown in the first and second rows of Figure 2b,c, the estimated reflectance and illumination components are favorable, without amplified noise. In the first row of Figure 2b, the details of the buildings and hillsides are well preserved in the reflectance component. As shown in the second row of Figure 2b, the murals on the wall and the patterns on the columns in the refined reflectance component are clear. As for the estimated illumination component, its spatial smoothness makes the global brightness of the image evenly distributed, e.g., the church in the second row of Figure 2c.

4.3. Subjective Visual Evaluation

To subjectively verify the enhancement performance between the proposed method and competitors, the comparative results are shown in Figure 3, Figure 4, Figure 5 and Figure 6. Some meaningful information can be observed from these figures.
In Figure 3, the input image suffers from darkness and unevenly distributed illumination. In Figure 3b, HE [33] brightens the low-light image, but the noise in the image is also amplified. In Figure 3c, the result generated by SSR [18] suffers not only from serious artifacts but also from strongly boosted noise. Although MSRCR [34] can improve the brightness of the image and maintain details, the color distortion and noise amplification in its results are serious, e.g., Figure 3d. In Figure 3e, CVC [35] fails to enhance the brightness of the image effectively. The edge texture of the desk tends to be overly thick in Figure 3f. A distinct halo appears in the bright area near the window in Figure 3g, generated by LIME [28]; in Figure 4b, the bright region around the reading lamp is also vague. In Figure 3l and Figure 4l, our method not only enhances the dark areas with unevenly distributed illumination but also avoids halos and noise amplification. In Figure 4i, the result generated by Kindle [7] tends to be detail-obscured, e.g., the books on the table. In Figure 6h, although the global brightness of the image is enhanced by Jiep [37], the floral designs tend to be color-distorted. As for ZERO-DCE and ZERO-DCE++, the color and atmosphere of the enhanced images are changed, e.g., in Figure 5j,k.
Similarly, HE, SSR, and MSRCR amplify the background noise in Figure 5. Meanwhile, the result generated by CVC in Figure 5e tends to be darker than the others. Observing the people in Figure 5, it is obvious that Dong [36] and LIME [28] make the enhancement of the face and the clothes slightly theatrical. The enhancement result of our method in Figure 5l is consistent with human visual perception.

4.4. Quantitative Evaluation

Apart from the visual comparisons, quantitative comparisons were also used to verify the effectiveness of the proposed model. Many metrics have been proposed for image quality assessment [40]. For example, Zhai et al. [41] proposed LIEQA to evaluate image quality from four aspects, namely luminance enhancement, color rendition, noise evaluation, and structure preservation. Zhou et al. [42] put forward the SRIF indicator to evaluate the visual quality of super-resolved images. Still, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [43] are the most widely adopted metrics for objective evaluation in the domain of low-light image enhancement [44]. The quantitative comparison results in terms of PSNR and SSIM are depicted in Figure 7 and Figure 8.
PSNR quantifies the extent to which an image is affected by noise, approximating human perception of image quality. SSIM measures image quality relative to an uncompressed or undistorted reference image. For both metrics, a larger value indicates better image quality. As demonstrated in Figure 7, our method outperforms the competitors on four datasets, namely LIME [28], DICM [11], LOL [14], and MEF [31]; in addition, our method ranks in the top three on the VV dataset. From Figure 8, we can see that the proposed method ranks first on three datasets, namely DICM [11], LOL [14], and MEF [31]. Meanwhile, the proposed method ranks second on the LIME and VV datasets, with scores close to the highest.
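For reproducibility, both metrics can be computed with scikit-image as in the sketch below; `channel_axis` requires scikit-image ≥ 0.19 (older releases use `multichannel=True`), and the wrapper function name is ours.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(enhanced: np.ndarray, reference: np.ndarray):
    """PSNR and SSIM for float images in [0, 1] with shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, data_range=1.0, channel_axis=2)
    return psnr, ssim
```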

4.5. Denoising Evaluation

In addition to verifying the enhancement performance, the denoising ability of our model is demonstrated in Figure 9 and Figure 10. For a realistic and intuitive visual perception, the illumination component is rendered as a heat map without gamma correction.
Figure 9b,c show that extracting a clean initial illumination map from a low-light image leaves most of the noise in the reflectance component. From Figure 9c–f, the results show that the illumination obtained by solving Equation (8) is piece-wise smooth without noise amplification. Comparing Figure 9b,e and Figure 10b,e, the estimated reflectance component contains less noise and stronger contrast than the initial reflectance component, which proves the effectiveness of the low-rank approximation in our model. In general, our method brightens the low-light image while avoiding noise amplification.
Furthermore, more denoising comparisons among low-light image enhancement methods are depicted in Figure 11. The methods of SSR, CVC, and Dong fail to maintain details while brightening the image. The results generated by HE and LIME tend to be over-enhanced, e.g., the edges of the light in Figure 11b,g are obscured. In addition, MSRCR fails to maintain color constancy while amplifying the noise. Compared with the competitors, our method generates a preferable enhancement result.

4.6. Ablation Study

In Equation (7), the CLAR model contains four components, namely the reconstruction term, the illumination smoothing term, the reflectance detail-preserving term, and the low-rank approximation term. To demonstrate the function of each component, the subjective results of the ablation experiments are shown in Figure 12. In Figure 12b, the result generated by CLAR without the illumination smoothing term tends to be overly sharp and color-distorted. In Figure 12c, the result generated by CLAR without the reflectance detail-preserving term is blurry. Comparing Figure 12d,e, the low-rank approximation term suppresses the noise and improves the quality of the image. In addition to the subjective evaluation, quantitative evaluations are given in Table 1, where the full CLAR model ranks first.

4.7. Running Time Evaluation

To compare the computing time of the proposed method with that of the conventional methods, the running times are reported in Table 2. For a fair comparison, the computational time was calculated by averaging the processing time over ten images resized to 960 × 720. As shown in Table 2, CLAR takes more time than the compared methods, but its results achieve satisfactory qualitative and quantitative effects. The increase in computation time can be attributed to the fact that each additional regularization term significantly increases the amount of data to be processed.

5. Conclusions

In this paper, we proposed a constraint low-rank approximation Retinex (CLAR) model to enhance low-light images. Considering the noise mixed into low-light images, the model combines exponential relative total variation constraints with a low-rank prior, which together ensure a piece-wise smooth illumination component and a noise-suppressed reflectance component. In addition, the alternating direction method of multipliers (ADMM) was utilized to solve the resulting optimization problem. Comparative experimental results demonstrate that the proposed CLAR model achieves compelling performance compared with state-of-the-art methods.

Author Contributions

Funding acquisition, J.P.; Methodology, X.L.; Software, J.S., W.S., J.C. and G.Z.; Writing—original draft, X.L.; Writing—review & editing, J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Natural Science Foundation of China (No. 61801272).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Land, E. The retinex theory of color vision. Sci. Am. 1977, 237, 108–128.
2. Chen, J.; Yang, X.; Lu, L.; Li, Q.; Li, Z.; Wu, W. A novel infrared image enhancement based on correlation measurement of visible image for urban traffic surveillance systems. J. Intell. Transp. Syst. 2020, 24, 290–303.
3. Kimmel, R.; Elad, M.; Shaked, D.; Keshet, R.; Sobel, I. A Variational Framework for Retinex. Int. J. Comput. Vis. 2004, 52, 7–23.
4. Bychkovsky, V.; Paris, S.; Chan, E.; Durand, F. Learning photographic global tonal adjustment with a database of input/output image pairs. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 97–104.
5. Yan, J.; Lin, S.; Kang, S.B.; Tang, X. A Learning-to-Rank Approach for Image Color Enhancement. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2987–2994.
6. Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to See in the Dark. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3291–3300.
7. Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640.
8. Lin, H.; Shi, Z. Multi-scale retinex improvement for nighttime image enhancement. Optik 2014, 125, 7143–7148.
9. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368.
10. Wang, Q.; Ward, R.K. Fast image/video contrast enhancement based on weighted thresholded histogram equalization. IEEE Trans. Consum. Electron. 2007, 53, 757–764.
11. Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384.
12. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662.
13. Lv, F.; Lu, F.; Wu, J.; Lim, C.S. MBLLEN: Low-Light Image/Video Enhancement Using CNNs. In Proceedings of the British Machine Vision Conference 2018, Newcastle, UK, 3–6 September 2018.
14. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
15. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
16. Xiong, W.; Liu, D.; Shen, X.; Fang, C.; Luo, J. Unsupervised real-world low-light image enhancement with decoupled networks. arXiv 2020, arXiv:2005.02818.
17. Brainard, D.; Wandell, B. Analysis of the retinex theory of color vision. J. Opt. Soc. Am. A Opt. Image Sci. 1986, 3, 1651–1661.
18. Jobson, D.; Rahman, Z.; Woodell, G. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
19. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. A Publ. IEEE Signal Process. Soc. 1997, 6, 965–976.
20. Gu, Z.; Li, F.; Fang, F.; Zhang, G. A Novel Retinex-Based Fractional-Order Variational Model for Images With Severely Low Light. IEEE Trans. Image Process. 2020, 29, 3239–3253.
21. Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
22. Hao, S.; Han, X.; Guo, Y.; Xu, X.; Wang, M. Low-Light Image Enhancement with Semi-Decoupled Decomposition. IEEE Trans. Multimed. 2020, 22, 3025–3038.
23. Provenzi, E.; Carli, L.D.; Rizzi, A.; Marini, D. Mathematical definition and analysis of the retinex algorithm. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2005, 22, 2613–2621.
24. Fu, X.; Liao, Y.; Zeng, D.; Huang, Y.; Zhang, X.; Ding, X. A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation. IEEE Trans. Image Process. 2015, 24, 4965–4977.
25. Zhang, Q.; Nie, Y.; Zhu, L.; Xiao, C.; Zheng, W.S. Enhancing Underexposed Photos Using Perceptually Bidirectional Similarity. IEEE Trans. Multimed. 2021, 23, 189–202.
26. Xu, L.; Yan, Q.; Xia, Y.; Jia, J. Structure extraction from texture via relative total variation. ACM Trans. Graph. (TOG) 2012, 31, 1–10.
27. Candès, E.J.; Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009, 9, 717–772.
28. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
29. Pang, J.; Zhang, S.; Bai, W. A novel framework for enhancement of the low lighting video. In Proceedings of the 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece, 3–6 July 2017; pp. 1366–1371.
30. Gao, Y.; Hu, H.M.; Li, B.; Guo, Q. Naturalness Preserved Nonuniform Illumination Estimation for Image Enhancement Based on Retinex. IEEE Trans. Multimed. 2018, 20, 335–344.
31. Ma, K.; Duanmu, Z.; Yeganeh, H.; Wang, Z. Multi-Exposure Image Fusion by Optimizing A Structural Similarity Index. IEEE Trans. Comput. Imaging 2018, 4, 60–72.
32. Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
33. González, R.; Woods, R. Digital Image Processing. IEEE Trans. Pattern Anal. Mach. Intell. 1981, PAMI-3, 242–243.
34. Rahman, Z.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–110.
35. Çelik, T.; Tjahjadi, T. Contextual and Variational Contrast Enhancement. IEEE Trans. Image Process. 2011, 20, 3431–3441.
36. Dong, X.; Pang, Y.; Wen, J. Fast efficient algorithm for enhancement of low lighting video. In Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011; pp. 1–6.
37. Cai, B.; Xu, X.; Guo, K.; Jia, K.; Hu, B.; Tao, D. A Joint Intrinsic-Extrinsic Prior Model for Retinex. In Proceedings of the International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4020–4029.
38. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789.
39. Li, C.; Guo, C.; Chen, C.L. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238.
40. Ren, X.; Yang, W.; Cheng, W.H.; Liu, J. LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model. IEEE Trans. Image Process. 2020, 29, 5862–5876.
41. Zhai, G.; Sun, W.; Min, X.; Zhou, J. Perceptual quality assessment of low-light image enhancement. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2021, 17, 1–24.
42. Zhou, W.; Wang, Z. Quality Assessment of Image Super-Resolution: Balancing Deterministic and Statistical Fidelity. arXiv 2022, arXiv:2207.08689.
43. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
44. Singh, N.; Bhandari, A.K. Principal component analysis-based low-light image enhancement using reflection model. IEEE Trans. Instrum. Meas. 2021, 70, 1–10.
Figure 1. The framework of the proposed model. In the RGB space, the initial illumination component is first obtained to refine the estimated illumination component. Subsequently, the estimated illumination component is considered as the constant in order to obtain an initial reflectance component by Retinex decomposition. Then, the denoised reflectance component is estimated. Finally, the enhanced result is composed of the corrected illumination and the estimated reflectance.
Figure 2. Visual evaluation of Retinex decomposition. (a) Input low-light image, (b) the estimated reflectance component, (c) the estimated illumination component, (d) the enhancement result.
Figure 3. Visual comparison of low-light image enhancement results on an exemplar image. (a) Input, (b) HE [33], (c) SSR [18], (d) MSRCR [34], (e) CVC [35], (f) Dong [36], (g) LIME [28], (h) Jiep [37], (i) Kindle [7], (j) ZERO-DCE [38], (k) ZERO-DCE++ [39], (l) ours.
Figure 4. Visual comparison of low-light image enhancement results on an exemplar image. (a) Input, (b) HE [33], (c) SSR [18], (d) MSRCR [34], (e) CVC [35], (f) Dong [36], (g) LIME [28], (h) Jiep [37], (i) Kindle [7], (j) ZERO-DCE [38], (k) ZERO-DCE++ [39], (l) ours.
Figure 5. Visual comparison of low-light image enhancement results on an exemplar image. (a) Input, (b) HE [33], (c) SSR [18], (d) MSRCR [34], (e) CVC [35], (f) Dong [36], (g) LIME [28], (h) Jiep [37], (i) Kindle [7], (j) ZERO-DCE [38], (k) ZERO-DCE++ [39], (l) ours.
Figure 6. Visual comparison of low-light image enhancement results on an exemplar image. (a) Input, (b) HE [33], (c) SSR [18], (d) MSRCR [34], (e) CVC [35], (f) Dong [36], (g) LIME [28], (h) Jiep [37], (i) Kindle [7], (j) ZERO-DCE [38], (k) ZERO-DCE++ [39], (l) ours.
Figure 7. Quantitative comparisons in terms of PSNR. (a) LIME [28], (b) DICM [11], (c) LOL [14], (d) MEF [31], (e) NPE [32], (f) VV.
Figure 8. Quantitative comparisons in terms of SSIM. (a) LIME [28], (b) DICM [11], (c) LOL [14], (d) MEF [31], (e) NPE [32], (f) VV.
Figure 9. Visual evaluation of Retinex decomposition. (a) Original low-light image, (b) initial reflectance component $\hat{R}$, (c) initial illumination component $\hat{L}$, (d) corresponding enhancement result, (e) refined reflectance component $R$, (f) estimated illumination component $L$.
Figure 10. Visual evaluation of Retinex decomposition. (a) Original low-light image, (b) initial reflectance component $\hat{R}$, (c) initial illumination component $\hat{L}$, (d) corresponding enhancement result, (e) refined reflectance component $R$, (f) estimated illumination component $L$.
Figure 11. Visual comparison between enhancement results in a noisy low-light image. (a) Input, (b) HE [33], (c) SSR [18], (d) MSRCR [34], (e) CVC [35], (f) Dong [36], (g) LIME [28], (h) ours.
Figure 12. Ablation study for components of the CLAR model. (a) Input, (b) CLAR without $\|S_x \odot \partial_x L\|_F^2 + \|S_y \odot \partial_y L\|_F^2$, (c) CLAR without $\|T_x \odot \partial_x R\|_F^2 + \|T_y \odot \partial_y R\|_F^2$, (d) CLAR without $\sum_i \|R_i(R)\|_*$, (e) CLAR.
Table 1. Quantitative evaluation in terms of the PSNR ↑ and SSIM ↑ metrics for the ablation test.

Component Ablation | PSNR ↑ | SSIM ↑
CLAR without $\|S_x \odot \partial_x L\|_F^2 + \|S_y \odot \partial_y L\|_F^2$ | 12.5327 | 0.3086
CLAR without $\|T_x \odot \partial_x R\|_F^2 + \|T_y \odot \partial_y R\|_F^2$ | 18.8950 | 0.5091
CLAR without $\sum_i \|R_i(R)\|_*$ | 19.1743 | 0.5881
CLAR | 19.6521 | 0.6078
Table 2. Comparison of time cost (in seconds).

Method | HE [33] | SSR [18] | MSRCR [34] | CVC [35] | Dong [36] | LIME [28] | Jiep [37] | Ours
Time | 2.07 | 2.17 | 6.67 | 0.79 | 2.62 | 14.11 | 13.43 | 35.60