Article

A Low-Brightness Image Enhancement Algorithm Based on Multi-Scale Fusion

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 College of Physics, Jilin University, Changchun 130012, China
4 Changchun UP Optotech Co., Ltd., Changchun 130031, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10230; https://doi.org/10.3390/app131810230
Submission received: 24 August 2023 / Revised: 6 September 2023 / Accepted: 11 September 2023 / Published: 12 September 2023

Abstract

Images captured in low-brightness environments typically suffer from low brightness, low contrast, and high noise levels, all of which significantly degrade overall image quality. To improve image quality, a low-brightness image enhancement algorithm based on multi-scale fusion is proposed. First, a novel brightness transformation function is used to generate two images with different brightnesses. Then, an illumination estimation technique is used to construct a weight matrix, which facilitates the extraction of advantageous features from each image. Finally, the enhanced image is obtained by fusing the two images with the weight matrix and a pyramid reconstruction algorithm. Experimental results show that the proposed method achieves a better enhancement effect: compared to other image enhancement algorithms, it obtains lower (better) scores on the natural image quality evaluator (NIQE) and lightness order error (LOE) indices, with an average NIQE of 2.836 across the test datasets, the lowest among the compared methods. This further demonstrates its superior performance.

1. Introduction

In daily life and in production processes, images are inevitably captured under low-light conditions to obtain and store scene information [1,2,3]. However, owing to the complexity of light sources, the peculiarities of illumination, and differences in intensity, various problems occur in the captured images: uneven brightness distribution (some areas too bright, others too dark), random noise, reduced contrast, and color deviation. As a result, these low-quality, low-contrast images can significantly reduce the effectiveness of applications such as night surveillance, vehicle detection, facial recognition, and fault detection [4,5,6].
To improve image quality, brightness and contrast must be enhanced to help people obtain richer image information and meet their daily needs. Image contrast enhancement under low-brightness conditions is an important research area in image processing, with significant practical and scientific value. The field has made considerable progress through continuous exploration, yet many challenges remain unsolved. Its wide development opportunities and application prospects continue to attract researchers.
In practical applications, traditional image enhancement methods can partially restore various regions of low-brightness images. However, they still fail to faithfully restore images with rich texture details: the enhanced results may be natural but not distinct, or they may show clear details at the cost of excessive overall brightness, resulting in brightness distortion.
In this paper, we propose a low-brightness image enhancement algorithm that addresses the mentioned issues. The algorithm incorporates the brightness transformation function, illumination estimation technique, and the Laplacian pyramid. The goal is to achieve a balanced enhancement of image brightness while avoiding excessive enhancement in localized areas, thereby preserving detailed information within the image. First, a novel brightness transformation function is used to obtain two images. Then, the illumination estimation technique is used to construct a weight matrix. Finally, the enhanced image is obtained by merging the two images using the weight matrix and the Laplacian pyramid algorithm. Our method is effective and has been tested on several datasets. Figure 1 shows the algorithm flowchart for the proposed method. The proposed algorithm makes the following contributions:
(1) The novel brightness transformation function is used to construct an image enhancement model, and the illumination estimation technique is utilized to create a weight matrix that assigns higher values to pixels with superior brightness;
(2) The combination of a weight matrix and pyramid fusion achieves excellent performance in obtaining natural and detailed images;
(3) The image enhancement model is built by setting only two parameters, demonstrating the simplicity and ease of implementation of our algorithm.

2. Related Work

Researchers have conducted extensive research on image enhancement, addressing its various problems and proposing a variety of methods [7]. These methods have different focuses, such as improving visibility in low-light images, controlling brightness inconsistencies, and improving results by integrating multiple images. Each has its own distinct advantages.
Global image enhancement algorithms include histogram equalization and gamma correction [8]. Histogram equalization stretches the spatial distribution of image brightness, enhancing the brightness of the majority of pixels and increasing image contrast. Global algorithms have the advantage of being simple and effective. However, they ignore the spatial distribution of image brightness, so image quality is difficult to guarantee.
Moreover, many image enhancement algorithms rely on the Retinex theory [9]. Applied to image enhancement, Retinex theory decomposes an image into an illumination map and a reflectance map; the reflectance map can be taken as the final result. The Retinex model can balance multiple tasks and adapt to different types of images for adaptive enhancement. In contrast, traditional global enhancement methods are limited in their ability to enhance specific features of the image, often resulting in color distortion or loss of detail. Early work based on the Retinex theory includes single-scale Retinex (SSR) [10], multiscale Retinex (MSR) [11], and multiscale Retinex with color restoration (MSRCR) [12]. Retinex algorithms can effectively improve image sharpness and brightness. However, they rely on smooth variation of the original illumination, and, in real scenes, the illumination at the edges of regions with significant brightness differences is usually not smooth. Applying a Retinex algorithm there therefore produces halo artifacts in the resulting image.
In addition, there are many classical image enhancement algorithms. Dong et al. [13] proposed a fast and efficient algorithm for low-light video: they treat the inverted low-light image as a hazy image that can be enhanced by dehazing methods. This algorithm has the advantage of fast computation, but it produces images with excessive noise. Guo et al. [14] introduced an illumination estimation-based model (LIME) to enhance low-light images; the enhanced images are visually pleasing, but the robustness of the method is low. Wang et al. [15] introduced a naturalness-preserving enhancement algorithm (NPE) for non-uniform illumination images; its results are also visually pleasing, but the method has relatively high computational complexity. Fu et al. [16] introduced a weighted variational model for simultaneous reflectance and illumination estimation (SRIE), but its enhancement effect is weak. Fu et al. [17] also proposed a fusion-based enhancement method for weakly illuminated images (MF), which enhances the derivative amplitudes at brighter points using a weighted variational approach. Experimental results demonstrate its effectiveness in enhancing image brightness and fidelity; however, it cannot restore texture details. Agrawal et al. [18] proposed a novel image contrast enhancement based on joint histogram equalization. This algorithm effectively exploits the information between pixels, but it suffers from slow computation.
Recently, deep learning methods have been widely used in computer vision and can also achieve good performance in low-level image processing. Park et al. [19] used a deep autoencoder to enhance low-brightness images. Wei et al. [20] used a deep neural network to simulate the Retinex model (Retinex-Net) and obtain the reflectance map. Chen et al. [21] enhanced the raw sensor data of the camera instead of the already captured image. Hai et al. [22] performed low-light image enhancement via a Real-low to Real-normal network, which uses frequency information to preserve image details. Fan et al. [23] used a lightweight attention-guided ConvNeXt network for low-light image enhancement, which suppresses noise and captures essential feature information. However, the biggest obstacle for deep learning-based image enhancement is data collection.
Although many low-light image enhancement algorithms exist, some problems remain unsolved [24]. Deep learning-based algorithms are computationally expensive. Global enhancement methods often cause image distortion and cannot guarantee image quality. Traditional algorithms fail to preserve image texture and detail while enhancing brightness. This article aims to propose a computationally simple image enhancement algorithm that balances the enhancement of image brightness while preserving image details. By avoiding excessive enhancement in certain areas, the algorithm can be better applied to real-life situations.

3. Proposed Method

The purpose of this paper is to enhance image contrast while preserving texture and detail information. The method consists of three parts: the brightness transformation function, the weights definition, and pyramid fusion.

3.1. Brightness Transformation Function

The main characteristic of a low-light image is that the image brightness and contrast are low, resulting in poor detail recognition. To address this problem, researchers have explored various methods to improve image brightness. The simplest and most direct approach is to use linear functions. However, this method neglects the spatial distribution of image brightness, resulting in over-saturation in high-brightness areas and significant loss of detail.
To mitigate these problems, researchers have turned to various brightness transformation function (BTF) models. The most representative BTF models are nonlinear functions, which are better suited to improving image brightness. By adjusting parameters, these nonlinear functions modify the enhancement amplitude of the image brightness, avoiding the distortion caused by uniform enhancement across varying brightness levels. This ensures the naturalness of the enhancement results. Among these, the gamma function [8] is the most widely used; it can be expressed as:
$$ s(I(x)) = c \cdot I(x)^{\alpha} \quad (1) $$
where c is a constant, I(x) is the input image, and α is the model adjustment parameter. However, owing to unreasonable assumptions about the model parameters, the enhancement result often looks unnatural.
Ying [25,26] noted that the histogram of the underexposed image is mainly concentrated in the low-brightness region. Consequently, by linearly increasing the pixel values before conventional gamma correction, the resulting image will closely resemble a well-exposed image. Therefore, he proposed a new model: the camera response model of beta-gamma, which was modified on the basis of gamma correction. Before gamma correction, the pixels of the image were enlarged to obtain the brightness transformation function g ( I ( x ) , k ) :
$$ g(I(x), k) = \beta \cdot I(x)^{\gamma}, \qquad \beta = e^{b(1 - k^{a})}, \qquad \gamma = k^{a} \quad (2) $$
where I(x) is the input image, a and b are the camera parameters, k is the exposure ratio, and β and γ are calculated by Equation (2). The resulting brightness transformation function is:
$$ g(I(x), k) = e^{b(1 - k^{a})} \cdot I(x)^{k^{a}} \quad (3) $$
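For concreteness, a minimal NumPy sketch of the two classical BTFs in Equations (1) and (3) might look as follows. The images are assumed to be normalized to [0, 1], and the default parameter values (c, α, and the camera parameters a, b commonly quoted for Ying's model) are illustrative assumptions rather than settings prescribed in this paper; note that the exposure ratio k must be supplied externally, which is exactly the drawback discussed next:

```python
import numpy as np

def gamma_btf(img, c=1.0, alpha=0.6):
    # Eq. (1): s(I(x)) = c * I(x)^alpha; img normalized to [0, 1]
    return c * np.power(img, alpha)

def beta_gamma_btf(img, k, a=-0.3293, b=1.1258):
    # Eq. (3): g(I(x), k) = exp(b * (1 - k^a)) * I(x)^(k^a)
    # a, b are fixed camera parameters (values commonly quoted for Ying's
    # model); k is the exposure ratio, which must be estimated in advance
    gamma = k ** a
    beta = np.exp(b * (1.0 - gamma))
    return beta * np.power(img, gamma)
```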
However, in this model, the image exposure ratio needs to be estimated in advance, which makes the algorithm complex. Therefore, we propose to use a new and simple brightness transformation function, which can be defined as
$$ L(I(x), a) = (a + 1)\, I(x) - a\, I(x)^{2} \quad (4) $$
where I(x) is the input image and a ∈ (0, 1) is the parameter that controls the exposure level and adjusts the magnitude of the curve. The model can be applied iteratively as follows:
$$ L_{n}(x) = (a + 1)\, L_{n-1}(x) - a\, L_{n-1}(x)^{2} \quad (5) $$
where n is the number of iterations. The graphs of Equation (5) are shown in Figure 2.
As can be seen in Figure 2b, n controls the curvature, and higher-order curves have a stronger fitting ability (i.e., larger curvature). Compared to the model proposed by Ying, our model is more concise. However, this model is still a global adjustment. A global mapping to adjust image brightness can easily cause image brightness distortion. Therefore, it is also necessary to process the image after the brightness transformation.
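To make the proposed model concrete, a minimal sketch of Equations (4) and (5) is given below. The synthetic test image and the helper name proposed_btf are illustrative, while a = 0.25/0.55 and n = 4 are the settings reported in Section 4:

```python
import numpy as np

def proposed_btf(img, a, n):
    # Eqs. (4)-(5): L(I, a) = (a + 1) * I - a * I^2, iterated n times.
    # img is assumed normalized to [0, 1]; a in (0, 1) controls exposure,
    # n controls the curvature of the overall mapping. The map keeps
    # values in [0, 1] since L(0) = 0, L(1) = 1, and L is monotonic.
    out = img.astype(np.float64)
    for _ in range(n):
        out = (a + 1.0) * out - a * out ** 2
    return out

# Two differently exposed versions for the later fusion step
img = np.random.rand(64, 64, 3) * 0.2       # synthetic dark image in [0, 1]
I2 = proposed_btf(img, a=0.25, n=4)         # darker, more natural
I1 = proposed_btf(img, a=0.55, n=4)         # brighter, more detailed
```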

3.2. Weights Definition

Our plan is to fuse two images with different brightnesses and assign the larger weight value to the pixels with good brightness and the smaller weight value to the pixels with insufficient brightness. According to the illumination estimation technique, the weight matrix is defined as follows:
$$ W(x) = T(x)^{\mu} \quad (6) $$
where T(x) is the illumination map and μ is a model parameter, usually set to 1/2. Estimating T(x) requires solving an optimization problem. First, the maximum value over the R, G, B channels is extracted as L(x):
$$ L(x) = \max_{c \in \{r, g, b\}} I^{c}(x) \quad (7) $$
where I(x) is the input image. The illumination estimation technique applies a first-order backward difference ∇ to L(x), accumulates the gradients within a local window w(x), and takes the reciprocal to obtain the weight matrix W_d(x):
$$ W_{d}(x) = \frac{1}{\left| \sum_{y \in w(x)} \nabla_{d} L(y) \right| + \varepsilon}, \qquad d \in \{h, v\} \quad (8) $$
where h denotes the horizontal direction, v the vertical direction, and ε is a very small constant. Using W_d(x) to approximate the optimal solution for T(x), the following optimization problem is solved:
$$ \min_{T(x)} \sum_{x} \left( \big(T(x) - L(x)\big)^{2} + \beta \sum_{d \in \{h, v\}} \frac{W_{d}(x)\, \big(\nabla_{d} T(x)\big)^{2}}{\left| \nabla_{d} L(x) \right| + \delta} \right) \quad (9) $$
where ∇_h and ∇_v are the row and column difference operators, β balances the two terms, and δ is a very small constant. The first term in brackets controls the overall structure of the image, and the second term controls its texture details. Minimizing this objective yields the optimal T(x), which is then used to calculate the weight matrix W(x) via Equation (6).
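Since Equation (9) is quadratic in T(x), its minimizer can be obtained by solving a sparse linear system. The following sketch, which assumes SciPy and illustrative parameter values (beta, eps, delta, win, mu), approximates this procedure; it is not the authors' reference solver:

```python
import numpy as np
from scipy import ndimage
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

def estimate_weight_map(img, beta=0.15, eps=1e-3, delta=1e-3, win=5, mu=0.5):
    # Sketch of Section 3.2: solve a discretized form of Eq. (9) for the
    # illumination map T, then return the weight matrix W = T^mu (Eq. (6)).
    # Parameter values are illustrative assumptions, not the paper's settings.
    L = img.max(axis=2)                       # Eq. (7): channel-wise maximum
    H, W = L.shape
    n = H * W                                 # direct solve; fine for small images

    # First-order backward differences (horizontal and vertical)
    dh = L - np.roll(L, 1, axis=1)
    dv = L - np.roll(L, 1, axis=0)

    # Eq. (8): reciprocal of window-accumulated gradients
    # (the window mean is proportional to the window sum)
    Wh = 1.0 / (np.abs(ndimage.uniform_filter(dh, size=win)) + eps)
    Wv = 1.0 / (np.abs(ndimage.uniform_filter(dv, size=win)) + eps)

    # Per-pixel coefficients of the smoothness term in Eq. (9)
    ah = (beta * Wh / (np.abs(dh) + delta)).ravel()
    av = (beta * Wv / (np.abs(dv) + delta)).ravel()

    # Sparse backward-difference operators on the row-major flattened image
    # (differences wrap at row boundaries; a rough approximation kept simple)
    Dh = identity(n) - diags(np.ones(n - 1), -1)
    Dv = identity(n) - diags(np.ones(n - W), -W)

    # Normal equations of Eq. (9): (I + Dh' Ah Dh + Dv' Av Dv) t = l
    A = identity(n) + Dh.T @ diags(ah) @ Dh + Dv.T @ diags(av) @ Dv
    T = spsolve(A.tocsr(), L.ravel()).reshape(H, W)
    return np.clip(T, eps, 1.0) ** mu         # Eq. (6)
```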

3.3. Pyramid Fusion

Fusion methods based on multi-scale transformation can effectively extract and separate image features and then integrate them, producing fusion results well suited to the human eye. First, the image to be fused undergoes a multi-scale transformation to obtain both high-frequency and low-frequency information, including prominent features. Then, selection rules are designed to integrate the various pieces of decomposition information and highlight areas of interest to the human eye in the fusion results.
Pyramid fusion is a fusion method based on multi-scale transformation [27]. First, the image is decomposed into different resolution tower layers according to different scales. Then, the layers of the tower are fused to obtain the fusion pyramid. Finally, the final image is obtained by reconstruction.
In this section, we will introduce the relevant theories of pyramid decomposition to facilitate image fusion and reconstruction.
The Gaussian pyramid obtains the high-level image by downsampling from the lower layer, which can be expressed as
$$ G_{l}(x, y) = \sum_{m=-2}^{2} \sum_{n=-2}^{2} w(m, n) \cdot G_{l-1}(2x + m,\; 2y + n), \qquad 0 < l \le Lev - 1,\; 0 \le x \le C_{l} - 1,\; 0 \le y \le R_{l} - 1 \quad (10) $$
where w(m, n) is the Gaussian filter kernel, typically of size 5 × 5, and Lev is the number of pyramid layers. G_l denotes the l-th layer image of the Gaussian pyramid, while C_l and R_l are the total numbers of columns and rows of the l-th layer image, respectively.
When the Gaussian pyramid is decomposed, the Gaussian filter will lose the details of the image. The introduction of the Laplacian pyramid can preserve the details, and the texture and detail information of the original image can be restored after image reconstruction and fusion. The Laplacian pyramid decomposition method is as follows:
$$ I_{l}^{*}(x, y) = 4 \sum_{m=-2}^{2} \sum_{n=-2}^{2} w(m, n) \cdot \tilde{G}_{l}\!\left(\frac{x + m}{2}, \frac{y + n}{2}\right), \qquad \tilde{G}_{l}\!\left(\frac{x + m}{2}, \frac{y + n}{2}\right) = \begin{cases} G_{l}\!\left(\frac{x + m}{2}, \frac{y + n}{2}\right), & \frac{x + m}{2}, \frac{y + n}{2} \in \mathbb{Z} \\ 0, & \text{else} \end{cases} \quad (11) $$
where ℤ denotes the set of integers, so that I_l^* is the up-sampled (expanded) version of G_l. The Laplacian pyramid layer L_l is then:
$$ L_{l} = \begin{cases} G_{l} - I_{l+1}^{*}, & 0 \le l < Lev - 1 \\ G_{Lev-1}, & l = Lev - 1 \end{cases} \quad (12) $$
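In practice, the reduce and expand operations of Equations (10)-(12) are commonly implemented with OpenCV's pyrDown and pyrUp, which apply the 5 × 5 Gaussian kernel internally. A minimal sketch, with illustrative helper names:

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    # Eq. (10): repeated 5x5 Gaussian filtering and 2x downsampling
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    # Eqs. (11)-(12): each layer stores the detail lost between two
    # Gaussian levels; the top layer keeps the coarsest Gaussian level
    G = gaussian_pyramid(img, levels)
    pyr = []
    for l in range(levels - 1):
        up = cv2.pyrUp(G[l + 1], dstsize=(G[l].shape[1], G[l].shape[0]))
        pyr.append(G[l] - up)          # L_l = G_l - expand(G_{l+1})
    pyr.append(G[-1])                  # L_{Lev-1} = G_{Lev-1}
    return pyr
```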
In this section, we present the algorithm proposed in this paper. The algorithm consists of the following steps: First, the source image is transformed by adjusting the brightness using different parameters. This transformation produces two images. One image has high brightness and shows prominent details, but it also introduces some brightness distortions. The other image has lower brightness and less detail but appears more natural. It is important to note that this process tends to introduce local brightness distortion. To address this issue, we use illumination estimation technology to create a weight map. This weight map is then used to perform a multi-scale fusion of the two images obtained in the previous step. By applying this fusion technique, we finally obtain the enhanced version of the image using our proposed method.
In summary, our algorithm provides a simple and effective solution for image enhancement. As shown in Figure 1, our algorithm includes the following steps:
(1) Using the specified parameters a and n in Equation (5), we obtain two images, I_1 and I_2, where I_1 is brighter than I_2.
(2) The two images I_1 and I_2 are decomposed into Laplacian pyramids L{·} with different layers.
(3) To preserve contrast, the weight matrix should be positively correlated with the brightness of the scene. Therefore, we compute the weight map W using the image I_1 as the source image.
(4) The weight map W is decomposed into a Gaussian pyramid G{·}. The weight map assigns larger values to pixels with good brightness and smaller values to pixels with insufficient brightness. To make better use of the weight map, L{·} and G{·} are merged layer by layer:
$$ F_{l}(x, y) = L\{I_{2}^{l}(x, y)\} \times G\{W^{l}(x, y)\} + L\{I_{1}^{l}(x, y)\} \times \big(1 - G\{W^{l}(x, y)\}\big) \quad (13) $$
where l indexes the pyramid layers and F_l(x, y) is the l-th layer of the fused pyramid.
(5) The fused image F_final(x, y) is obtained by the inverse transformation of the fused pyramid, collapsing it from the top layer (l = 6) down to the bottom (l = 0):
$$ R_{6} = F_{6}, \qquad R_{l} = F_{l} + u(R_{l+1}) \;\; (l = 5, \ldots, 0), \qquad F_{final}(x, y) = R_{0}(x, y) \quad (14) $$
where u is an up-sampling operation. We summarize these steps in Algorithm 1.
Algorithm 1: The proposed algorithm
Input: source image I, parameter a, parameter n
Output: the fused result F_final(x, y)
1: Compute I_1 and I_2 by Equation (5);
2: Decompose I_1 and I_2 into Laplacian pyramids;
3: Take image I_1 as the source image for calculating the weight map W;
4: Decompose W into a Gaussian pyramid;
5: for each layer I_1^l, I_2^l, and W^l do
6:   Compute F_l(x, y) by Equation (13);
7: end for
8: Reconstruct the final fused image F_final(x, y) from the fused Laplacian pyramid.
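Assuming the helper functions from the earlier sketches (proposed_btf, estimate_weight_map, gaussian_pyramid, and laplacian_pyramid), the whole of Algorithm 1 might be assembled as follows. This is a sketch of the pipeline under those assumptions, not the authors' reference implementation:

```python
import cv2
import numpy as np

def enhance(img, a_dark=0.25, a_bright=0.55, n=4, levels=7):
    # Sketch of Algorithm 1; img is a float RGB image in [0, 1].
    # Defaults follow the settings reported in Section 4.
    I2 = proposed_btf(img, a_dark, n)        # darker, more natural version
    I1 = proposed_btf(img, a_bright, n)      # brighter, more detailed version

    W  = estimate_weight_map(I1)             # step 3: weights from I1
    Gw = gaussian_pyramid(W, levels)         # step 4: Gaussian weight pyramid
    L1 = laplacian_pyramid(I1, levels)       # step 2: Laplacian pyramids
    L2 = laplacian_pyramid(I2, levels)

    # Step 4, Eq. (13): per-layer blend with the Gaussian weight pyramid
    fused = [l2 * w[..., None] + l1 * (1.0 - w[..., None])
             for l1, l2, w in zip(L1, L2, Gw)]

    # Step 5, Eq. (14): collapse the fused pyramid from the top layer down
    out = fused[-1]
    for l in range(levels - 2, -1, -1):
        out = fused[l] + cv2.pyrUp(out, dstsize=(fused[l].shape[1],
                                                 fused[l].shape[0]))
    return np.clip(out, 0.0, 1.0)
```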

4. Experiment and Analysis

To demonstrate the superiority of our method, we compared it with several advanced methods, including the dehazing-based method (Dong) [13], the illumination estimation-based model (LIME) [14], a multiscale Retinex with color restoration (MSRCR) [12], the multi-deviation fusion method (MF) [17], the naturalness-preserving enhancement method (NPE) [15], and the simultaneous reflection and illumination estimation method (SRIE) [16].
Our method consists of three parts: the brightness transformation function (BTF) model, the weights definition, and pyramid fusion. Using the brightness transformation function with appropriate parameters yields two images of different brightness: one has lower brightness and less detail but appears more natural; the other has high brightness and excellent detail but also some brightness distortion. We set the parameter a in the BTF model to two values, 0.25 and 0.55, to obtain these two images. We verified the rationality of these parameters as far as possible; the current values are suitable for most images. We set n in the BTF model to four, which handles most cases satisfactorily. In the pyramid fusion part, we set the number of pyramid layers to seven.

4.1. Subjective Evaluation of Experimental Results

In this section, we subjectively analyze the effect of the proposed method on different images and compare it with other methods. The performance of our method was tested on the VE-LOL dataset [20], which is a large low-light image dataset with rich materials, including indoor and outdoor scenes. Due to space limitations, we selected eight representative images, namely “bookshelf”, “cupboard”, “doll”, “classroom”, “swimming pool”, “hall”, “gym”, and “wardrobe”. Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 show the experimental results of the different methods.
The enhanced images obtained by our method are brighter and more natural, as shown in Figure 3. In comparison, the images obtained by the MF, NPE, and SRIE methods lack brightness and detail, making it difficult to clearly see the dolls on the bookshelf. In addition, these methods suffer from noise and artifacts. It is obvious that the MSR method increases the image brightness excessively.
In Figure 4, the scene in the cabinet has a clear bottle of wine after image enhancement. In the more complex scene shown in Figure 5, our method demonstrates its ability to enhance the pattern with satisfactory contrast, allowing clear visualization of dolls, fruits, and costumes. Figure 6 clearly shows the artifacts surrounding the bright areas when Dong’s algorithm is applied.
In the swimming pool scene shown in Figure 7, the MF method produces excessive noise and lacks noticeable contrast. In contrast, our method excels in preserving rich and natural details, allowing for a more realistic image restoration. Figure 8 shows the presence of brightness distortion in the LIME method. Figure 9 shows a gym scene. In addition, Figure 10 shows that our method excels in both brightness balance and detail rendering, outperforming other algorithms.
A detailed analysis of the experimental results reveals several problems with the different image enhancement methods. The images enhanced by Dong contain too much noise, and their details are blurred. The enhanced images of LIME are visually pleasing, but the details of the dark regions are not clear. The enhanced images of MSRCR show excessive overall brightness enhancement. The overall enhancement effect of MF is not distinct, and the results look unnatural. The enhanced images of NPE are relatively natural, but they are noisy and their color saturation is slightly low. The SRIE method's enhancement is weak and does not reveal image details. Our method achieves a balanced enhancement of image brightness without excessive enhancement.

4.2. Objective Assessment of Image Quality

4.2.1. Lightness Order Error (LOE)

Objective indicators are crucial for evaluating the performance of fusion algorithms. In this study, we use three objective evaluation indicators to evaluate different methods, including lightness order error (LOE) [15], visual information fidelity (VIF) [28], and the natural image quality evaluator (NIQE) [29]. Among them, LOE is an important indicator to judge the image enhancement algorithm, which is defined as:
$$ \mathrm{LOE} = \frac{1}{n} \sum_{i=1}^{n} RD(x_{i}) \quad (15) $$
$$ RD(x) = \sum_{y=1}^{n} U\big(L(x), L(y)\big) \oplus U\big(L_{e}(x), L_{e}(y)\big) \quad (16) $$
where n is the number of pixels, L(x) is the lightness component at location x of the input image, ⊕ is the exclusive-or operator, L_e(x) is the lightness component at location x of the enhanced image, and U(p, q) is a function that returns 1 if p ≥ q and 0 otherwise. A smaller LOE value indicates better brightness naturalness. Quantitative comparisons among the different methods are presented in Table 1. It is evident from Table 1 that our enhanced images have the smallest LOE values, demonstrating the best brightness naturalness.
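For illustration, a small NumPy sketch of Equations (15) and (16) is given below. Since the exact LOE is quadratic in the number of pixels, the order comparison here is restricted to a random subsample of pixels; the sample size m and the seed are illustrative assumptions:

```python
import numpy as np

def loe(original, enhanced, m=100, seed=0):
    # Sketch of Eqs. (15)-(16); original and enhanced are float RGB arrays.
    L  = original.max(axis=2).ravel()    # lightness = channel-wise maximum
    Le = enhanced.max(axis=2).ravel()
    idx = np.random.default_rng(seed).choice(L.size, size=min(m, L.size),
                                             replace=False)
    Ls, Les = L[idx], Le[idx]
    # U(p, q) = 1 if p >= q, else 0; XOR flags lightness-order reversals
    rd = (Ls[:, None] >= Ls[None, :]) ^ (Les[:, None] >= Les[None, :])
    return rd.sum(axis=1).mean()         # average RD over sampled pixels
```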

4.2.2. Visual Information Fidelity

Another important metric for assessing image quality is visual information fidelity (VIF). VIF assesses fidelity by comparing how much information the human visual system (HVS) can extract from the reference and the processed image. A higher VIF value indicates better visual information. Table 2 presents the VIF values: our method does not achieve the highest VIF, but it scores higher than four of the six compared methods. Therefore, our method still produces images with good visual information.

4.2.3. Natural Image Quality Evaluator

The natural image quality evaluator (NIQE) is an evaluation model trained on a large number of natural scene images; a lower NIQE value indicates better image quality. As shown in Table 3, our enhanced images have the lowest NIQE values, indicating the best image quality.
We further ran each algorithm on the VV, LIME [14], and DICM [30] datasets and computed the average NIQE value across the three datasets. As can be seen from Table 4, the NIQE value obtained by our method is the smallest, which confirms the superiority of our method.
We comprehensively compare the different evaluation metrics, which provides a more stable assessment of the enhancement methods' performance. The most important metric for evaluating image enhancement quality is the LOE. As shown in Table 1, the MF algorithm produces images with low LOE values and good performance. The Dong algorithm achieves the lowest LOE for the "cupboard" image, while our algorithm achieves the lowest LOE on most of the remaining test images, demonstrating its superiority over the other algorithms. Regarding the VIF indicator, the LIME and MSR algorithms show the best performance; however, subjectively, images enhanced by LIME and MSR may suffer from brightness distortion. In contrast, our method achieves a balanced enhancement with a higher VIF value than most other algorithms. Although our method does not achieve the highest VIF, it still preserves the visual information of images well.
Observing Table 1, Table 2, Table 3 and Table 4, our method achieves the best values on two of the three evaluation indices (LOE and NIQE), which indicates that the proposed method is more effective and that the enhanced images retain more detailed information while improving brightness. Therefore, our method outperforms the other advanced methods.

5. Discussion

All the images in this paper were taken under low-light conditions, and all the algorithms compared are image enhancement algorithms. The proposed algorithm can process any single low-brightness image without requiring multiple shots of the same scene at different brightness levels. In practical applications, it effectively addresses the problem of fusion results that are natural but not distinct, or that show clear details but excessive overall brightness. This makes the algorithm versatile and useful where the fusion result must improve in both natural visibility and overall brightness. Analyzing the model also reveals some deficiencies. As shown in Figure 11, we varied the parameter a while keeping n equal to four. As a increases, the details of dark areas become more pronounced, but the image also becomes prone to over-enhancement. How to choose a is therefore a problem that needs further study; we expect to solve it through adaptive selection of the parameter a in the future.
The method presented is simple and effective, but further research is needed to achieve real-time processing of low-light images and better application in areas such as nighttime image acquisition.

6. Conclusions

In this paper, a low-brightness image enhancement algorithm is proposed to improve image brightness while avoiding excessive enhancement of local areas. First, a novel brightness transformation function is used to obtain two images. Then, an illumination estimation technique is used to construct a weight matrix. Finally, the enhanced image is obtained by merging the two images using the weight matrix and the Laplacian pyramid algorithm. Our method is simple and effective: it enhances image brightness while addressing local brightness distortion by combining a multi-scale fusion algorithm with a brightness transformation function. Compared with other image enhancement algorithms, the proposed method achieves lower evaluation values on the natural image quality evaluator (NIQE) and lightness order error (LOE) indices. In the future, the algorithm should be further optimized to improve the quality of low-light images to a level most suitable for human observation, resembling natural images.

Author Contributions

Methodology, E.Z.; software, E.Z.; validation, X.L.; data curation, E.Z.; writing—original draft preparation, E.Z.; writing—review and editing, S.Y., X.L. and J.G.; visualization, L.G.; supervision, L.G.; project administration, L.K.; funding acquisition, L.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Cooperation Project of the Changchun Institute of Optics, Fine Mechanics and Physics 202302SJZQ(1)GX.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable suggestions, which have helped us to improve the manuscript greatly.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chouhan, R.; Biswas, P.K.; Jha, R.K. Enhancement of low-contrast images by internal noise-induced Fourier coefficient rooting. Signal Image Video Process. 2015, 9, 255–263.
2. Zhang, S.; Lan, X.; Yao, H.; Zhou, H.; Tao, D.; Li, X. A biologically inspired appearance model for robust visual tracking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2357–2370.
3. Lan, X.; Zhang, S.; Yuen, P.C.; Chellappa, R. Learning common and feature-specific patterns: A novel multiple-sparse-representation-based tracker. IEEE Trans. Image Process. 2017, 27, 2022–2037.
4. Tang, H.; Zhu, H.; Tao, H.; Xie, C. An Improved Algorithm for Low-Light Image Enhancement Based on RetinexNet. Appl. Sci. 2022, 12, 7268.
5. Si, W.; Xiong, J.; Huang, Y.; Jiang, X.; Hu, D. Quality Assessment of Fruits and Vegetables Based on Spatially Resolved Spectroscopy: A Review. Foods 2022, 11, 1198.
6. Xu, Y. A fusion-based approach of deep learning and edge-cutting algorithms for identification and color recognition of traffic lights. Intell. Transp. Infrastruct. 2023, 2, liad007.
7. Wang, D.; Xu, C.; Feng, B.; Hu, Y.; Tan, W.; An, Z.; Han, J.; Qian, K.; Fang, Q. Multi-Exposure Image Fusion Based on Weighted Average Adaptive Factor and Local Detail Enhancement. Appl. Sci. 2022, 12, 5868.
8. Hoang, T.; Pan, B.; Nguyen, D.; Wang, Z. Generic gamma correction for accuracy enhancement in fringe-projection profilometry. Opt. Lett. 2010, 35, 1992–1994.
9. Pan, X.; Li, C.; Pan, Z.; Yan, J.; Tang, S.; Yin, X. Low-Light Image Enhancement Method Based on Retinex Theory by Improving Illumination Map. Appl. Sci. 2022, 12, 5257.
10. Brainard, D.H.; Wandell, B.A. Analysis of the retinex theory of color vision. JOSA A 1986, 3, 1651–1661.
11. Jobson, D.J.; Rahman, Z.u.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
12. Petro, A.B.; Sbert, C.; Morel, J.M. Multiscale retinex. Image Process. Line 2014, 4, 71–88.
13. Dong, X.; Pang, Y.; Wen, J. Fast efficient algorithm for enhancement of low lighting video. In ACM SIGGRAPH 2010 Posters; ACM: New York, NY, USA, 2010; p. 1.
14. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993.
15. Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
16. Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
17. Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96.
18. Agrawal, S.; Panda, R.; Mishro, P.K.; Abraham, A. A novel joint histogram equalization based image contrast enhancement. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 1172–1182.
19. Park, S.; Yu, S.; Kim, M.; Park, K.; Paik, J. Dual autoencoder network for retinex-based low-light image enhancement. IEEE Access 2018, 6, 22084–22093.
20. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
21. Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3291–3300.
22. Hai, J.; Xuan, Z.; Yang, R.; Hao, Y.; Zou, F.; Lin, F.; Han, S. R2RNet: Low-light image enhancement via real-low to real-normal network. J. Vis. Commun. Image Represent. 2023, 90, 103712.
23. Fan, S.; Liang, W.; Ding, D.; Yu, H. LACN: A lightweight attention-guided ConvNeXt network for low-light image enhancement. Eng. Appl. Artif. Intell. 2023, 117, 105632.
24. Han, X.; Lv, T.; Song, X.; Nie, T.; Liang, H.; He, B.; Kuijper, A. An adaptive two-scale image fusion of visible and infrared images. IEEE Access 2019, 7, 56341–56352.
25. Ying, Z.; Li, G.; Ren, Y.; Wang, R.; Wang, W. A new low-light image enhancement algorithm using camera response model. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 3015–3022.
26. Ying, Z.; Li, G.; Gao, W. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv 2017, arXiv:1711.00591.
27. Shao, L.; Zhen, X.; Tao, D.; Li, X. Spatio-temporal Laplacian pyramid coding for action recognition. IEEE Trans. Cybern. 2013, 44, 817–827.
28. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444.
29. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
30. Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384.
Figure 1. Flowchart of the algorithm for the proposed model.
Figure 2. (a) n is equal to 1. (b) n is equal to 4.
Figure 3. Enhanced effect of “bookshelf” image.
Figure 4. Enhanced effect of “cupboard” image.
Figure 5. Enhanced effect of “doll” image.
Figure 6. Enhanced effect of “classroom” image.
Figure 7. Enhanced effect of “swimming pool” image.
Figure 8. Enhanced effect of “hall” image.
Figure 9. Enhanced effect of “gym” image.
Figure 10. Enhanced effect of “wardrobe” image.
Figure 11. Detail preservation. (a) Original image; (b) a = 0.1; (c) a = 0.3; (d) a = 0.5; (e) a = 0.7; (f) a = 0.9.
Table 1. Quantitative measurement results of LOE (lower is better).
Input | Dong | LIME | MSR | MF | NPE | SRIE | Ours
bookshelf | 961.8 | 1063.3 | 1942.6 | 563.3 | 810.3 | 608.2 | 358.2
cupboard | 306.9 | 431.4 | 930.6 | 368.4 | 678.3 | 348.9 | 361.9
doll | 449.9 | 645.1 | 1534.8 | 397.2 | 800.5 | 476.8 | 365.5
classroom | 470.6 | 687.7 | 1636.5 | 598.1 | 1120.9 | 539.7 | 410.3
swimming pool | 1244.6 | 538.2 | 1373.6 | 684.5 | 1875.2 | 1140.9 | 295.5
hall | 805.6 | 622.1 | 1638.9 | 658.9 | 1698.9 | 801.1 | 280.7
gym | 625.8 | 846.9 | 1742.8 | 533.3 | 1014.2 | 666.2 | 392.1
wardrobe | 574.3 | 1211.5 | 2477.9 | 613.3 | 554.2 | 429.5 | 455.2
Table 2. Quantitative measurement results of VIF (higher is better).
Input | Dong | LIME | MSR | MF | NPE | SRIE | Ours
bookshelf | 9.734 | 18.763 | 16.214 | 7.199 | 8.374 | 4.908 | 11.533
cupboard | 35.375 | 82.298 | 89.709 | 42.794 | 54.187 | 14.187 | 63.57
doll | 41.627 | 146.493 | 165.138 | 59.936 | 63.301 | 16.742 | 66.871
classroom | 42.835 | 125.786 | 137.805 | 55.984 | 85.473 | 16.337 | 95.942
swimming pool | 9.256 | 23.655 | 19.558 | 9.109 | 13.393 | 4.373 | 15.948
hall | 8.537 | 18.114 | 14.541 | 8.858 | 9.876 | 4.361 | 10.375
gym | 23.621 | 84.685 | 70.673 | 30.145 | 48.797 | 9.566 | 54.606
wardrobe | 22.204 | 42.362 | 36.703 | 16.675 | 22.063 | 9.233 | 31.02
Table 3. Quantitative measurement results of NIQE (lower is better).
Input | Dong | LIME | MSR | MF | NPE | SRIE | Ours
bookshelf | 8.331 | 9.002 | 8.059 | 8.704 | 9.018 | 7.769 | 7.081
cupboard | 11.372 | 10.927 | 10.998 | 12.932 | 11.566 | 10.224 | 10.128
doll | 7.653 | 8.085 | 8.536 | 8.389 | 8.106 | 7.741 | 7.588
classroom | 10.714 | 11.547 | 10.322 | 11.485 | 11.485 | 10.771 | 10.153
swimming pool | 9.913 | 9.247 | 8.864 | 9.585 | 9.264 | 8.501 | 8.438
hall | 10.703 | 11.119 | 9.788 | 11.919 | 10.272 | 9.922 | 9.673
gym | 8.885 | 8.509 | 8.232 | 9.843 | 8.314 | 7.652 | 7.145
wardrobe | 6.965 | 7.412 | 6.762 | 7.805 | 7.209 | 6.423 | 5.782
Table 4. Comparison of the NIQE values of each algorithm on three datasets (lower is better).
Method | DICM | LIME | VV | Avg.
Dong [13] | 4.313 | 2.601 | 3.597 | 3.504
LIME [14] | 3.421 | 2.636 | 3.561 | 3.206
MSR [12] | 3.346 | 2.452 | 3.629 | 3.142
MF [17] | 3.506 | 2.825 | 3.234 | 3.188
NPE [15] | 3.277 | 2.562 | 3.191 | 3.024
SRIE [16] | 3.556 | 2.541 | 3.454 | 3.183
Ours | 3.121 | 2.235 | 3.152 | 2.836