Article

Adaptive Image Rendering Using a Nonlinear Mapping-Function-Based Retinex Model

School of Electronic Engineering, Soongsil University, Seoul 06978, Korea
* Author to whom correspondence should be addressed.
Sensors 2019, 19(4), 969; https://doi.org/10.3390/s19040969
Submission received: 12 February 2019 / Accepted: 21 February 2019 / Published: 25 February 2019
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)

Abstract

This paper introduces an adaptive image rendering method that uses a parametric nonlinear mapping function based on the retinex model for low-light images. In this study, only the luminance channel was used to estimate the reflectance component of an observed low-light image, so the halo artifacts caused by the use of multiple center/surround Gaussian filters were reduced. A new nonlinear mapping function that incorporates the statistics of the luminance and the estimated reflectance into the reconstruction process is proposed. In addition, a new method to determine the gain and offset of the mapping function is addressed to adaptively control the contrast ratio. Finally, the relationship between the estimated luminance and the reconstructed luminance is used to reconstruct the chrominance channels. The experimental results demonstrate that the proposed method leads to promising subjective and objective improvements over state-of-the-art scale-based retinex methods.

1. Introduction

The high performance and miniaturization of image sensors make it possible for image information to be used in various applications, such as mobile platforms, recognition systems, and security systems [1,2]. However, the low contrast caused by an insufficient light source degrades image quality, so the performance of the application system may be unsatisfactory [3]. To solve the low-contrast problem, many simple approaches, such as histogram equalization, gamma correction, and auto exposure, have been widely used [4]. However, their performance is limited because they do not account for human visual perception [5].
Many efforts have been made to formalize human visual systems (HVSs). Among them, retinex theory has attracted attention as a useful way to estimate the human sensation derived from an observed scene. For example, Land et al. presented a model of HVS color perception, which explains how an HVS, as a combination of processes supposedly taking place in both the retina and the cortex, is capable of adaptively coping with illumination that varies spatially in both intensity and color [6].
Enhancements of low-contrast images using the retinex model aim to estimate the illuminance and reflectance under various assumptions. According to the mathematical formulation and the implementation of the cost function, these can be classified as modified retinex methods [7,8,9], scale-based methods [10,11,12,13,14,15], variational methods [16,17,18], and deep learning-based methods [19,20]. The modified retinex methods use a reset-and-threshold mechanism to estimate the illuminance based on the pixel intensities along a given random path. These methods are robust against additive noise; however, they are limited in improving the contrast ratio because they do not account for the statistical distribution of low-light images. The variational methods, which model appropriate energy functions, have led to promising results. However, their performance is very sensitive to the choice of tuning functions, and their computational costs are very high, so the scope of their applications is limited. Recently, deep learning-based methods have been exploited to enhance the contrast ratio. Most of these schemes are based on the properties of the linear retinex model. Therefore, in order to improve the performance of deep learning approaches based on the retinex model, it is necessary to study a retinex model that reflects the HVS.
A single-scale retinex (SSR) method has been introduced, in which a center/surround Gaussian filter is used to extract the reflectance from an observed image in accordance with the Weber–Fechner law and the nonlinearity of human visual perception. This leads to an enhancement of the contrast range [11]. However, the performance is very sensitive to the choice of parameters for the Gaussian filter. A multi-scale retinex (MSR) model and an MSR with color restoration (MSRCR) model have been presented to resolve this filter dependency problem [12]. They can effectively enhance contrast ratios with less filter dependency, but they also increase the number of halo artifacts, which are visually annoying. The artifacts increase as the number of filters increases.
An adaptive MSR (AMSR) [21] was introduced to improve the contrast ratio and reduce color distortion; in this method, the luminance is used to estimate the reflectance from an observed image. The estimated reflectance is then reconstructed via linear stretching assisted by a weighted map. Although the AMSR improves the contrast ratio and reduces the computational complexity, it increases the number of halo artifacts because the statistical properties of the extracted reflectance are not incorporated into the reconstruction process.
The bottlenecks of the existing scale-based retinex methods are summarized as follows: (1) halo artifacts due to the use of multiple center/surround Gaussian filters, (2) color distortion due to the independent processing of the color channels, and (3) loss of signal distribution characteristics due to ignoring the statistics of the observed images.
This paper presents an image rendering method based on an adaptive scale-based retinex model that uses a parametric nonlinear mapping function of the statistical characteristics of the luminance and reflectance of low-light images. In order to reduce the number of halo artifacts, a center/surround Gaussian filter is applied only to the luminance channel in the YCbCr color space to estimate the reflectance. The statistical characteristics of the captured image are distributed differently according to the brightness and direction of the light source; therefore, it is necessary to incorporate these statistical characteristics into the reconstruction process of the reflectance. This paper introduces a nonlinear reflectance reconstruction function defined in terms of the skewness of the luminance of a low-light image, so that the contrast ratio is adaptively controlled. In addition, a new determination of the gain and offset of the nonlinear function is addressed to adaptively clip the dynamic range of the reflectance. Finally, the chrominance channels are reconstructed using the ratio between the estimated luminance and the reconstructed luminance. Figure 1 depicts the overall flowchart of the proposed method.
This paper is organized as follows. Section 2 briefly describes the MSR for low-light contrast enhancement. Section 3 describes the proposed scale-based retinex method using a new parametric, nonlinear function for enhancing low-light images. The determination of parameters, the gain, and the offset of the nonlinear function using the statistical characteristics are explained in this section as well. We analyze the experimental results in Section 4, and finally, describe the conclusions derived from the results in Section 5.

2. Related Work

The human visual model has been well studied with regard to solving low-light and back-light problems. Land et al. experimentally showed that the human visual model can be expressed by the reflectance of an object and the illuminance coming from a light source [6]. According to their research, the perceptual intensity can be expressed as
$$ I = R \cdot L, \qquad (1) $$
where I, R, and L represent the perceptual intensity of human eyes, the reflectance, and the illuminance, respectively. Equation (1) implies that the illuminance and the reflectance can be arithmetically obtained. Based on the retinex theory, many approaches have been presented to obtain better results by reconstructing the reflectance or the illuminance. The SSR method aimed to correct the reflectance of an object by applying center/surround Gaussian filters to an observed image as follows [11]:
$$ R(x, y) = \log I(x, y) - \log\big(I(x, y) \ast G(x, y)\big), \qquad (2) $$
where $\ast$ denotes the two-dimensional convolution operator, and $G$ represents a Gaussian filter. The Gaussian filter centered at the $(x, y)$-th pixel is defined as follows:
$$ G(x', y') = K \, e^{-\left((x - x')^2 + (y - y')^2\right)/c^2}, \quad (x', y') \in S, \qquad (3) $$
where $K$ and $c$ denote a normalization constant and the standard deviation, respectively, and $S$ represents the two-dimensional support region to which the Gaussian filter is applied. The above expression means that the density of light concentrates around the light source and that the correlation of light decreases as the distance from the center increases. It has been verified that the SSR method is very sensitive to the choice of the standard deviation $c$ [22].
In order to solve this problem, an MSR method was proposed in which $N$ center/surround Gaussian filters are applied to each channel of an input color image and weights are applied to each result to reduce the filter dependency. The reflectance of the $i$-th color channel is estimated as follows [12]:
$$ R_i^{MSR}(x, y) = \sum_{n=1}^{N} w_n \log\!\left[\frac{I_i(x, y)}{I_i(x, y) \ast G_n(x, y)}\right] \qquad (4) $$
for $i \in \{R, G, B\}$. In Equation (4), $N = 3$ is generally used because the computational cost increases as $N$ increases. It has been shown that $w_n = \{0.3, 0.1, 0.6\}$ and $c_n = \{5, 30, 240\}$ are effective for obtaining a reasonable result [23]. The estimated reflectance, $R_i^{MSR}$, includes distorted color and illuminance components, so a gain/offset is set to reconstruct the reflectance as follows:
$$ \hat{R}_i^{MSR}(x, y) = \max\!\left[\min\!\left\{\frac{R_i^{MSR}(x, y) - R_{i,\min}^{MSR}}{R_{i,\max}^{MSR} - R_{i,\min}^{MSR}}, \, 1\right\}, \, 0\right], \qquad (5) $$
where $R_{i,\max}^{MSR}$ and $R_{i,\min}^{MSR}$ represent the maximum and the minimum, respectively, of the estimated reflectance and are determined using its statistical characteristics as follows:
$$ R_{i,\max}^{MSR} = \max(m_i + \alpha\sigma_i, \, 1), \quad R_{i,\min}^{MSR} = \min(m_i - \alpha\sigma_i, \, 0), \qquad (6) $$
where $\alpha$ represents a constant used to clip the dynamic range. In addition, $m_i$ and $\sigma_i$ denote the mean and standard deviation of $R_i^{MSR}$, respectively. For an image represented by $k$ bits per pixel, each pixel is reconstructed as follows:
$$ \hat{I}_i(x, y) = \hat{R}_i^{MSR}(x, y) \times (2^k - 1). \qquad (7) $$
It has been shown that MSR methods can reduce the filter dependency, but they also increase the number of halo artifacts because the center/surround Gaussian filters are applied independently to the RGB channels. In addition, the improvement of the contrast ratio is limited because the statistical characteristics of the energy density of the observed low-light image are not considered.
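For readers who prefer code, the following is a minimal sketch of the scale-based reflectance estimation in Equations (2)–(4), written in Python with NumPy/SciPy. The function name, the use of scipy.ndimage.gaussian_filter in place of the explicit kernel of Equation (3), and the small eps guard against log(0) are illustrative assumptions, not part of the original method.

```python
# Minimal sketch of multi-scale reflectance estimation (Equations (2)-(4)).
# Assumption: the channel is a non-negative float array; gaussian_filter is used
# as a stand-in for the center/surround convolution with scale c_n.
import numpy as np
from scipy.ndimage import gaussian_filter

def msr_reflectance(channel, weights=(0.3, 0.1, 0.6), scales=(5, 30, 240), eps=1e-6):
    """Estimate the reflectance of a single channel with N center/surround filters."""
    channel = channel.astype(np.float64) + eps          # avoid log(0)
    reflectance = np.zeros_like(channel)
    for w_n, c_n in zip(weights, scales):
        surround = gaussian_filter(channel, sigma=c_n) + eps
        reflectance += w_n * (np.log(channel) - np.log(surround))
    return reflectance
```

In the MSR/MSRCR case this function would be applied to each RGB channel independently; the proposed method of Section 3 applies it to the luminance channel only.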

3. The Proposed Method

In order to solve the problems of the existing scale-based retinex methods, this paper presents an adaptive scale-based retinex model based on a nonlinear function that uses the skewness characteristics of the luminance and the reflectance. The luminance channel (Y) in the YCbCr color space is suitable for representing the perceptual information and includes the relationships between the RGB channels. Therefore, the reflectance can be estimated by applying the center/surround Gaussian filter to the luminance only, so that both the number of halo artifacts and the computational complexity are reduced. Skewness has been used to statistically represent the degree of bias of the energy density. In this paper, a nonlinear function, defined in terms of the mean, variance, and skewness of the estimated reflectance and the luminance, is presented to improve the contrast ratio and reduce the number of halo artifacts.
In this study, an observed low-light RGB image is transformed into the YCbCr color space, and the reflectance of the Y channel is obtained in a similar way to the MSR methods, as follows:
$$ R(x, y) = \sum_{n=1}^{N} w_n \log\!\left[\frac{Y(x, y)}{Y(x, y) \ast G_n(x, y)}\right], \qquad (8) $$
where $Y$ and $G_n$ denote the luminance channel and the center/surround Gaussian filter, respectively. In addition, $N = 3$ is used with $w_n = \{0.3, 0.1, 0.6\}$ and $c_n = \{5, 30, 240\}$, in the same way as in the MSR.
As mentioned, the conventional scale-based retinex methods have limited performance because they do not incorporate the statistical characteristics of the energy density of an observed image into the reconstruction process. In this study, skewness is used to represent the bias degree of the energy density. For a U × V -sized image, the skewness of the luminance and the estimated reflectance can be written as follows [24]:
$$ Sk_Y = \frac{1}{UV} \sum_{x=0}^{U-1} \sum_{y=0}^{V-1} \left[\frac{Y(x, y) - m_Y}{\sigma_Y}\right]^3, \quad Sk_R = \frac{1}{UV} \sum_{x=0}^{U-1} \sum_{y=0}^{V-1} \left[\frac{R(x, y) - m_R}{\sigma_R}\right]^3, \qquad (9) $$
where $m_Y$ and $\sigma_Y$ denote the mean and the standard deviation of the luminance, respectively, and $m_R$ and $\sigma_R$ represent the mean and standard deviation of the estimated reflectance, respectively.
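As an illustration, the sample skewness of Equation (9) can be computed with a few lines of NumPy; the helper name and the small constant added to the standard deviation are assumptions for numerical safety rather than part of the definition.

```python
# Illustrative helper for Equation (9): sample skewness of a 2-D array,
# computed over all U x V pixels.
import numpy as np

def skewness(img):
    m, s = img.mean(), img.std()
    return np.mean(((img - m) / (s + 1e-12)) ** 3)
```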
As shown in Figure 2, the skewness increases as the luminance becomes darker, and it is equal to 0 when the distribution is symmetrical. As the amount of available light decreases, the illuminance and the estimated reflectance become distorted [11,12]. Therefore, it is necessary to compensate for this distortion. In conventional approaches, the linear compensation in Equation (5) is used, but its ability to correct the distortion is limited because the statistical characteristics of the observed image are not reflected. Therefore, a new reconstruction function is used as follows:
$$ \hat{R}(x, y) = \max\!\left[\min\!\left\{\left(\frac{R(x, y) - R_{\min}}{R_{\max} - R_{\min}}\right)^{\mu}, \, 1\right\}, \, 0\right], \qquad (10) $$
where $R_{\max}$ and $R_{\min}$ are the maximum and minimum used for the gain and offset of the estimated reflectance, respectively. In order to improve the contrast ratio, the reflectance should be expanded by setting $\mu$ larger as the image becomes darker. Conversely, $\mu$ is decreased as the image becomes brighter, so that the reflectance is compressed. The relationship between $\mu$ and $Sk_Y$ can be written as follows:
$$ \mu \propto \begin{cases} Sk_Y & \text{for } Sk_Y \geq 0, \\[4pt] \dfrac{1}{|Sk_Y|} & \text{for } Sk_Y < 0, \end{cases} \qquad (11) $$
where a $\mu$ satisfying Equation (11) can be defined in various ways. In this study, $\mu$ is defined as a function of $Sk_Y$ as follows:
$$ \mu = \begin{cases} 1 + (\alpha \times Sk_Y) & \text{if } Sk_Y \geq 0, \\[4pt] \dfrac{1}{1 + \alpha \times |Sk_Y|} & \text{otherwise}, \end{cases} \qquad (12) $$
where α is a constant.
In the MSR, the gain and offset in Equation (6) are determined only by the mean and standard deviation of the estimated reflectance, under the assumption that the estimated reflectance has a bilaterally symmetrical distribution. However, the distribution of the estimated reflectance is not symmetrical because it may contain a distorted component that depends on the light intensity. Therefore, it is necessary to set the gain and offset according to the degree of asymmetry of the reflectance. In this study, they are defined as follows:
$$ R_{\max} = m_R + \sigma_R \times (T + \beta \times Sk_R), \quad R_{\min} = m_R - \sigma_R \times (T + \beta \times Sk_R), \qquad (13) $$
where $\beta$ is a constant used to scale the skewness. In addition, the constant $T$ is chosen such that $(T + \beta \times Sk_R)$ is greater than 0. Equations (10) and (13) have the following properties. When the skewness of the estimated reflectance is positive, the estimated reflectance is concentrated in a lower-than-average reflectance region. In this case, $R_{\max}$ and $R_{\min}$ are determined to expand the concentrated reflectance region. Conversely, $R_{\max}$ and $R_{\min}$ are chosen to expand a higher-than-average reflectance region when the skewness is negative. According to these properties, the dense and sparse regions of the estimated reflectance are reconstructed in a balanced manner. The luminance of a pixel represented by $k$ bits is then reconstructed as follows:
$$ \hat{Y}(x, y) = \hat{R}(x, y) \times (2^k - 1). \qquad (14) $$
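Putting Equations (10)–(14) together, a minimal sketch of the luminance reconstruction might look as follows. It assumes the msr_reflectance() and skewness() helpers sketched above, the parameter values reported in Section 4 (α = β = 2, T = 2), an 8-bit luminance channel, and it clips the normalized reflectance before applying the exponent to avoid fractional powers of negative numbers, which Equation (10) leaves implicit.

```python
# Sketch of the proposed luminance reconstruction (Equations (10)-(14)).
# Assumptions: msr_reflectance() and skewness() as defined earlier,
# alpha = beta = 2 and T = 2 (Section 4), k = 8 bits per pixel.
import numpy as np

def reconstruct_luminance(Y, alpha=2.0, beta=2.0, T=2.0, k=8):
    R = msr_reflectance(Y)                        # Equation (8): Y channel only
    sk_Y, sk_R = skewness(Y), skewness(R)

    # Equation (12): exponent mu controlled by the skewness of the luminance.
    if sk_Y >= 0:
        mu = 1.0 + alpha * sk_Y
    else:
        mu = 1.0 / (1.0 + alpha * abs(sk_Y))

    # Equation (13): gain/offset from the skewness of the estimated reflectance.
    m_R, s_R = R.mean(), R.std()
    r_max = m_R + s_R * (T + beta * sk_R)
    r_min = m_R - s_R * (T + beta * sk_R)

    # Equation (10): clip the normalized reflectance to [0, 1] before the exponent.
    norm = np.clip((R - r_min) / (r_max - r_min), 0.0, 1.0)

    # Equation (14): map back to a k-bit luminance range.
    return (norm ** mu) * (2 ** k - 1)
```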
The chrominance channels corresponding to the reconstructed luminance can be reconstructed in various ways. In this study, they are reconstructed using the luminance gain in order to maintain the correlation between the channels while reducing the computational cost. The luminance gain is defined as follows:
$$ \rho(x, y) = \frac{\hat{Y}(x, y)}{Y(x, y)} \times \gamma, \qquad (15) $$
where γ is a constant. Then, Cb and Cr are reconstructed as follows:
$$ \hat{Cb}(x, y) = \rho(x, y) \times \big(Cb(x, y) - 128\big) + 128, \quad \hat{Cr}(x, y) = \rho(x, y) \times \big(Cr(x, y) - 128\big) + 128. \qquad (16) $$
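A sketch of the chrominance reconstruction in Equations (15) and (16), followed by a hypothetical end-to-end use with OpenCV, is given below. The color-space conversion calls, the clipping to [0, 255], the eps guard, and the file names are illustrative assumptions; γ = 0.9 is the value reported in Section 4.

```python
# Sketch of the chrominance reconstruction (Equations (15) and (16)) and a
# hypothetical end-to-end pipeline. Clipping, eps, and file names are assumptions.
import numpy as np
import cv2

def reconstruct_chrominance(Y, Y_hat, Cb, Cr, gamma=0.9, eps=1e-6):
    rho = (Y_hat / (Y + eps)) * gamma                      # Equation (15)
    Cb_hat = rho * (Cb - 128.0) + 128.0                    # Equation (16)
    Cr_hat = rho * (Cr - 128.0) + 128.0
    return np.clip(Cb_hat, 0, 255), np.clip(Cr_hat, 0, 255)

# Hypothetical usage (note that OpenCV stores the channels in Y, Cr, Cb order):
bgr = cv2.imread("low_light.png")
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
Y, Cr, Cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
Y_hat = reconstruct_luminance(Y)                           # sketch after Equation (14)
Cb_hat, Cr_hat = reconstruct_chrominance(Y, Y_hat, Cb, Cr)
out = cv2.cvtColor(np.dstack([Y_hat, Cr_hat, Cb_hat]).clip(0, 255).astype(np.uint8),
                   cv2.COLOR_YCrCb2BGR)
cv2.imwrite("rendered.png", out)
```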

4. Experimental Results

4.1. Experimental Setup

Several experiments were conducted with various low-contrast images, including indoor/outdoor environments and single/multiple light sources. As shown in Figure 3, 20 images (A1–A20) were obtained from the Internet and 20 images (B1–B20) were acquired with a Nikon Df camera using an AF-S NIKKOR 50 mm f/1.8G lens.
The proposed method was compared to the state-of-the-art, scale-based retinex algorithms, such as the MSR [12], random spray retinex (RSR) [7], light RSR (LRSR) [8], and AMSR [21]. To evaluate the performance of the algorithms, contrast per pixel (CPP) [25] was used. For a U × V -sized color image, the CPP is defined as follows:
$$ \mathrm{CPP} = \frac{\displaystyle\sum_{k=1}^{3} \sum_{i=0}^{U-1} \sum_{j=0}^{V-1} \left(\frac{1}{9} \sum_{m=-1}^{1} \sum_{n=-1}^{1} \big|\hat{I}_k(i, j) - \hat{I}_k(i+m, j+n)\big|\right)}{U \times V}, \qquad (17) $$
where $\hat{I}_k$ ($k = 1, 2, 3$) represents the $k$-th reconstructed channel of an RGB color image. An Intel Core i7-3770 CPU (3.4 GHz) with 8 GB of memory was used to measure the processing time, and MS C++ 2010 was used to implement the algorithms. To evaluate the subjective visual quality, a double-stimulus continuous quality scale (DSCQS) [26] test was conducted, in which a blind quality assessment was performed by 20 individuals.
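The CPP measure in Equation (17) can be implemented directly; the following sketch assumes an RGB image stored as a (U, V, 3) array and replicates border pixels, which the definition above does not specify.

```python
# Illustrative implementation of the CPP measure in Equation (17): mean absolute
# difference between each pixel and its 3 x 3 neighbourhood, summed over channels.
# Assumption: borders are handled by edge replication.
import numpy as np

def contrast_per_pixel(img):
    img = img.astype(np.float64)
    U, V, _ = img.shape
    cpp = 0.0
    for k in range(3):
        chan = np.pad(img[:, :, k], 1, mode="edge")
        diff = np.zeros((U, V))
        for m in (-1, 0, 1):
            for n in (-1, 0, 1):
                diff += np.abs(chan[1:U+1, 1:V+1] - chan[1+m:U+1+m, 1+n:V+1+n])
        cpp += (diff / 9.0).sum()
    return cpp / (U * V)
```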
Several parameters were defined for the proposed method. $\alpha$ and $\beta$ in Equations (12) and (13) were used to reflect the contributions of the skewness of the luminance and of the reflectance in the mapping function. As they increase, the contrast ratio of the reconstructed image increases disproportionately. It was observed that $1.5 \leq \alpha, \beta \leq 2.5$ led to promising results, and $\alpha = \beta = 2$ was used to reconstruct the images. Additionally, $T$ in Equation (13) was used to set the gain and offset of the mapping function. As $T$ decreased, the degree of saturation of the brightness increased. Conversely, as $T$ increased, the contrast ratio of the reconstructed image decreased, so both the saturation and the brightness were reduced. In these experiments, $T = 2$ was used. In addition, the luminance gain $\gamma$ in Equation (15) was used to reconstruct the Cb and Cr channels. As $\gamma$ increased, the chrominance channels became more saturated. The experiments showed that $0.85 < \gamma < 0.95$ is a good range with respect to performance. In these experiments, $\gamma = 0.9$ was used.

4.2. Analyses of Experimental Results

The CPP has been used previously to represent the degree of intensity variation between neighboring pixels, and it has been shown to decrease as the contrast ratio of an observed image decreases [23]. Table 1 shows the CPP comparisons for this study. With the conventional MSR method, the improvement in CPP varied depending on the image. The RSR and LRSR methods were very effective for noise reduction in low-contrast regions, but they were limited in improving the CPP. The AMSR outperformed the other methods in terms of the CPP in most cases; however, it was observed that this CPP improvement was accompanied by an increase in halo artifacts. The proposed method outperformed the comparative methods, with the exception of the AMSR, and it consistently produced good results with respect to the CPP, regardless of the degree of contrast. In these experiments, the average CPP improvements of the MSR, AMSR, RSR, LRSR, and the proposed method over the low-light images were 78.9%, 134.2%, 7.7%, 7.7%, and 113.1%, respectively.
The comparisons of the processing times per pixel are presented in Table 2. For the AMSR and the proposed method, the processing times include the conversion of the RGB input image into the YCbCr channels and the conversion of the reconstructed YCbCr channels back into an RGB image. The MSR required more computation than the proposed method due to the independent reconstruction processing for each channel. The computational complexities of the RSR and LRSR were the highest due to the large number of random spray filters and the filter window size applied to each pixel. The AMSR required less computation than the other comparative methods because it performed Y-channel-oriented processing; however, it spent additional processing time reconstructing the chrominance channels, resulting in a marginally higher computational complexity than the proposed method. It was confirmed that the proposed method consistently had the lowest computational cost of all the methods because it directly applies the statistical characteristics of an observed image to the mapping function. Relative to the proposed method, the MSR, AMSR, RSR, and LRSR required 100.4%, 14.3%, 249.3%, and 263.8% more processing time, respectively.
Visual comparisons are presented in Figure 4 and Figure 5. The MSR was effective in improving the contrast ratio. However, it produced signal saturation and color distortion because it did not consider the statistical characteristics of the observed image when reconstructing the reflectance. Although the AMSR was better than the MSR in terms of the contrast ratio, the number of halo artifacts increased because the linear stretching assisted by a weighted map does not consider the asymmetry of the reflectance of the observed image. The RSR and LRSR were effective in color representation and removed noise in low-contrast regions well; however, their ability to enhance the contrast ratio was limited. In contrast, the proposed method considered the distribution characteristics of the image, thereby improving the contrast ratio and effectively representing the color components.
Table 3 shows the DSCQS comparisons for the subjective quality assessment, in which the low-light image was assigned a reference score of 5 points and the compared images were scored on a scale of 0 to 10. In most cases, the MSR scores were higher than those of the other comparative methods, but there was a large variation in the evaluators' preferences depending on the image. The AMSR had the lowest score among the comparative methods due to the halo artifacts, although it outperformed the others in terms of the CPP. These experiments verified that halo artifacts are an important cause of visual discomfort. The RSR and LRSR had relatively low scores due to their limited ability to improve the contrast ratio. On the other hand, the proposed method adaptively improved the contrast ratio while reducing the color distortion, and it consistently outperformed the other methods.
The experiments showed that subjectively and objectively promising results were obtained by incorporating the asymmetry of the extracted reflectance and the illuminance into the reconstruction process. They also confirmed that the objective performance evaluation (CPP) did not coincide with the subjective performance evaluation (DSCQS), because the CPP does not consider the halo artifacts and the color distortion. Therefore, it is necessary to study a quality assessment metric that reflects the elimination of halo artifacts and the improvement of color distortion, as well as the improvement of the contrast ratio.

5. Conclusions

This paper presented an adaptive image rendering method that uses the asymmetry of an observed image in a low-light environment. A new nonlinear mapping function, determined by the asymmetry of the illuminance and the extracted reflectance, was presented for reconstructing the reflectance. In addition, a method for determining the gain and offset of the nonlinear mapping function was introduced. The experimental results demonstrated that the proposed method leads to subjectively and objectively promising results. The proposed method can be used as a computational platform to provide high-quality images in various vision-sensor-based intelligent systems, such as visual surveillance and vision-based driving assistance systems, in low-light environments.
In these experiments, halo artifacts were a main cause of increased CPP, but at the same time they were very annoying to human viewers. Therefore, it is worth developing an objective image quality assessment that considers the elimination of halo artifacts and color distortion, as well as the improvement of the contrast ratio. A new, high-order, norm-based deep learning method assisted by asymmetry characteristics is under development, and it is expected to yield a more sophisticated formulation and even better performance.

Author Contributions

J.O. and M.H. conceived and designed the experiments; J.O. performed the experiments; J.O. and M.H. analyzed the data; M.H. wrote the paper.

Funding

This research was supported in part by the National Research Foundation of Korea (NRF) grant funded by Korean government (MIST) (2017R1A2B4002205) and in part by ITRC (Information Technology Research Center) support program (IITP-2018-0-01419) supervised by Institute for Information & Communication Technology Promotion.

Acknowledgments

The authors would like to thank the reviewers and colleagues for their helpful comments and suggestions at various stages of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Xie, S.J.; Lu, Y.; Yoon, S.; Yang, J.; Park, D.S. Intensity variation normalization for finger vein recognition using guided filter based singe scale retinex. Sensors 2015, 15, 7.
2. Chien, J.-C.; Chen, Y.-S.; Lee, J.-D. Improving night time driving safety using vision-based classification techniques. Sensors 2017, 17, 10.
3. Ochoa-Villegas, A.M.; Nolazco-Flores, J.A.; Barron-Cano, O.; Kakadiaris, I.A. Addressing the illuminance challenge in two-dimensional face recognition: A survey. IET Comput. Vis. 2015, 9, 978–992.
4. Celik, T. Spatial entropy-based global and local image contrast enhancement. IEEE Trans. Image Process. 2014, 23, 5298–5309.
5. Ogata, M.; Tsuchiya, T.; Kubozono, T.; Ueda, K. Dynamic range compression based on illuminance compensation. IEEE Trans. Consum. Electron. 2001, 47, 548–558.
6. Land, E.; McCann, J. Lightness and retinex theory. J. Opt. Soc. Am. 1971, 61, 1–11.
7. Provenzi, E.; Fierro, M.; Rizzi, A.; Carli, L.D.; Gadia, D.; Marini, D. Random spray retinex: A new retinex implementation to investigate the local properties of the model. IEEE Trans. Image Process. 2007, 16, 162–171.
8. Banic, N.; Loncaric, S. Light random spray retinex: Exploiting the noisy illumination estimation. IEEE Signal Process. Lett. 2013, 20, 1240–1243.
9. Simone, G.; Audino, G.; Farup, I.; Albregtsen, F.; Rizzi, A. Termite retinex: A new implementation based on a colony of intelligent agents. J. Electron. Imaging 2014, 23, 1.
10. Lecca, M.; Rizzi, A.; Serapioni, R.P. GRASS: A gradient-based random sampling scheme for Milano retinex. IEEE Trans. Image Process. 2017, 26, 2767–2780.
11. Jobson, D.J.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
12. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multi-scale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
13. Dou, Z.; Gao, K.; Zhang, B.; Yu, X.; Han, L.; Zhu, Z. Realistic image rendition using a variable exponent functional model for retinex. Sensors 2016, 16, 6.
14. Watanabe, T.; Kuwahara, Y.; Kurosawa, T. An adaptive multi-scale retinex algorithm realizing high color quality and high-speed processing. J. Imaging Sci. Tech. 2005, 49, 486–497.
15. Kim, K.; Bae, J.; Kim, J. Natural HDR image tone mapping based on retinex. IEEE Trans. Consum. Electron. 2011, 57, 1807–1814.
16. Kimmel, R.; Elad, M.; Sobel, I. A variational framework for retinex. Int. J. Comput. Vis. 2003, 52, 7–23.
17. Zosso, D.; Tran, G.; Osher, S.J. Non-local retinex—A unifying framework and beyond. SIAM J. Imaging Sci. 2015, 8, 787–826.
18. Park, S.; Yu, S.; Moon, B.; Ko, S.; Paik, J. Low-light image enhancement using variational optimization-based retinex model. IEEE Trans. Consum. Electron. 2017, 63, 178–184.
19. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662.
20. Baslamisli, A.S.; Le, H.-A.; Gevers, T. CNN based learning using reflection and retinex models for intrinsic image decomposition. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6674–6683.
21. Lee, C.H.; Shih, J.-L.; Lien, C.-C.; Han, C.-C. Adaptive multiscale retinex for image contrast enhancement. In Proceedings of the International Conference on Signal Image Technology and Internet Based System, Kyoto, Japan, 2–5 December 2013; Volume 13, pp. 43–50.
22. Shin, Y.; Jeong, S.; Lee, S. Efficient naturalness restoration for non-uniform illuminance images. IET Image Process. 2015, 9, 662–671.
23. Ciurea, F.; Funt, B.V. Tuning retinex parameters. J. Electron. Imaging 2004, 13, 1.
24. Seglen, P.O. The skewness of science. J. Am. Soc. Inform. Sci. 1992, 43, 628–638.
25. Peli, E. Contrast in complex images. J. Opt. Soc. Am. A 1990, 7, 2032–2040.
26. ITU-R Recommendation BT.500-13. Methodology for the Subjective Assessment of the Quality of Television Pictures; 2012. Available online: https://www.itu.int/rec/R-REC-BT.500-13-201201-I/en (accessed on 25 February 2019).
Figure 1. Flowchart of the proposed method.
Figure 2. Examples of histogram and skewness: (from top to bottom) low-light color image, luminance image, and histogram and skewness of luminance image.
Figure 3. Test images used in the experiments.
Figure 4. Visual comparisons with A1, A5 and A12 test images: (from top to bottom) low-light image, MSR, AMSR, RSR, LRSR, and proposed method.
Figure 5. Visual comparisons with B1, B8, and B20 test images: (from top to bottom) low-light image, MSR, AMSR, RSR, LRSR, and proposed method.
Table 1. CPP comparisons.
Image | Low-Light Image | MSR [12] | AMSR | RSR [7] | LRSR [8] | Proposed Method
A1 | 49.15 | 107.10 | 125.08 | 57.14 | 57.91 | 108.80
A2 | 93.48 | 123.61 | 161.08 | 61.18 | 61.44 | 135.53
A3 | 64.40 | 80.68 | 106.13 | 51.77 | 51.75 | 87.24
A4 | 39.58 | 43.28 | 47.31 | 25.19 | 25.20 | 52.65
A5 | 64.89 | 145.21 | 195.21 | 86.61 | 86.13 | 156.65
A6 | 74.37 | 104.75 | 130.80 | 75.50 | 75.50 | 121.39
A7 | 81.59 | 139.97 | 168.05 | 90.83 | 90.81 | 147.96
A8 | 47.61 | 131.13 | 174.43 | 66.30 | 66.50 | 132.10
A9 | 50.94 | 86.63 | 122.18 | 47.76 | 47.72 | 91.02
A10 | 23.59 | 127.66 | 140.02 | 60.76 | 59.63 | 139.79
A11 | 20.25 | 53.63 | 63.24 | 22.01 | 22.06 | 59.34
A12 | 40.53 | 71.90 | 92.44 | 40.59 | 40.55 | 79.46
A13 | 46.69 | 114.50 | 150.67 | 66.90 | 66.52 | 116.76
A14 | 50.07 | 156.64 | 171.69 | 70.32 | 70.50 | 158.31
A15 | 44.55 | 86.26 | 109.51 | 48.20 | 48.16 | 88.11
A16 | 74.68 | 174.04 | 201.31 | 80.90 | 80.90 | 182.31
A17 | 102.04 | 150.07 | 198.70 | 102.34 | 102.32 | 162.68
A18 | 58.23 | 91.08 | 121.71 | 58.23 | 58.24 | 101.36
A19 | 26.10 | 72.53 | 92.45 | 35.39 | 35.39 | 72.83
A20 | 22.39 | 72.34 | 88.95 | 51.13 | 51.41 | 77.18
B1 | 18.10 | 34.02 | 62.89 | 18.10 | 18.10 | 54.72
B2 | 37.21 | 64.25 | 85.68 | 37.21 | 37.21 | 171.48
B3 | 72.29 | 92.70 | 157.57 | 72.33 | 72.38 | 159.09
B4 | 43.39 | 57.35 | 102.68 | 43.39 | 43.39 | 101.91
B5 | 42.18 | 53.55 | 78.27 | 42.18 | 42.18 | 93.28
B6 | 30.44 | 45.33 | 62.39 | 30.47 | 30.49 | 84.21
B7 | 32.34 | 37.73 | 74.18 | 32.94 | 32.94 | 64.44
B8 | 29.93 | 34.14 | 61.00 | 29.93 | 29.93 | 55.09
B9 | 17.82 | 22.19 | 51.64 | 17.91 | 17.91 | 34.55
B10 | 23.73 | 26.08 | 53.05 | 23.73 | 23.73 | 46.56
B11 | 57.31 | 85.71 | 103.21 | 57.51 | 57.40 | 92.46
B12 | 70.76 | 85.56 | 107.85 | 75.56 | 75.46 | 102.88
B13 | 7.16 | 26.70 | 18.09 | 7.23 | 7.23 | 28.90
B14 | 7.29 | 15.96 | 19.41 | 11.15 | 11.16 | 15.58
B15 | 38.22 | 83.06 | 132.15 | 46.55 | 46.58 | 87.75
B16 | 31.78 | 54.21 | 80.94 | 32.77 | 32.78 | 63.38
B17 | 29.58 | 47.03 | 58.78 | 31.14 | 31.05 | 50.82
B18 | 89.18 | 132.21 | 163.30 | 89.18 | 89.18 | 147.28
B19 | 93.72 | 144.15 | 165.91 | 93.72 | 93.74 | 172.37
B20 | 10.54 | 48.49 | 38.88 | 10.54 | 10.54 | 54.72
Average | 45.80 | 81.97 | 107.28 | 49.33 | 49.31 | 97.58
Table 2. Processing-time per pixel comparisons (unit: microsecond).
Image | MSR | AMSR | RSR | LRSR | Proposed Method
A1 | 6.231 | 3.584 | 11.255 | 11.771 | 3.109
A2 | 6.257 | 3.560 | 11.155 | 11.673 | 3.097
A3 | 6.254 | 3.520 | 11.210 | 11.687 | 3.043
A4 | 6.179 | 3.502 | 11.232 | 11.691 | 3.050
A5 | 7.641 | 4.076 | 12.738 | 13.252 | 3.571
A6 | 7.108 | 4.352 | 12.756 | 13.208 | 3.650
A7 | 6.241 | 3.588 | 11.206 | 11.642 | 3.099
A8 | 7.583 | 4.172 | 12.688 | 13.316 | 3.586
A9 | 6.135 | 3.601 | 11.094 | 11.581 | 3.147
A10 | 6.306 | 3.841 | 11.122 | 11.658 | 3.709
A11 | 7.114 | 3.213 | 10.767 | 11.098 | 2.751
A12 | 5.962 | 3.185 | 10.704 | 11.141 | 2.761
A13 | 5.895 | 3.175 | 10.578 | 11.230 | 2.907
A14 | 5.992 | 3.299 | 10.910 | 11.453 | 2.856
A15 | 6.245 | 3.195 | 10.716 | 11.283 | 2.871
A16 | 5.929 | 3.246 | 10.922 | 11.102 | 2.765
A17 | 5.909 | 3.380 | 10.611 | 10.845 | 2.870
A18 | 5.653 | 3.111 | 10.468 | 11.011 | 2.728
A19 | 5.952 | 3.158 | 10.513 | 10.982 | 2.748
A20 | 5.952 | 3.132 | 10.472 | 10.982 | 2.742
B1 | 6.999 | 3.813 | 10.976 | 11.311 | 3.311
B2 | 5.978 | 3.718 | 10.888 | 11.347 | 3.489
B3 | 6.120 | 3.710 | 10.870 | 11.375 | 3.262
B4 | 6.017 | 3.835 | 10.791 | 11.293 | 3.221
B5 | 5.955 | 3.781 | 10.904 | 11.332 | 3.252
B6 | 6.007 | 3.747 | 10.838 | 11.277 | 3.267
B7 | 6.052 | 3.681 | 10.794 | 11.322 | 3.292
B8 | 6.019 | 3.694 | 10.890 | 11.290 | 3.196
B9 | 6.917 | 3.825 | 10.885 | 11.390 | 3.363
B10 | 6.916 | 3.783 | 10.913 | 11.376 | 3.181
B11 | 6.041 | 3.677 | 10.754 | 11.330 | 3.138
B12 | 6.003 | 3.598 | 10.553 | 11.075 | 3.251
B13 | 8.147 | 3.791 | 11.963 | 12.308 | 3.462
B14 | 5.954 | 3.373 | 10.608 | 11.015 | 2.989
B15 | 5.910 | 3.588 | 10.636 | 11.019 | 3.129
B16 | 6.062 | 3.612 | 10.584 | 10.995 | 3.254
B17 | 5.935 | 3.923 | 10.629 | 10.970 | 3.099
B18 | 5.894 | 3.523 | 10.583 | 10.980 | 3.126
B19 | 5.894 | 3.597 | 10.537 | 10.953 | 3.252
B20 | 6.984 | 3.850 | 11.277 | 11.679 | 3.378
Average | 6.309 | 3.600 | 11.000 | 11.456 | 3.149
Table 3. DSCQS comparisons.
Image | MSR Avg. | MSR Std. | AMSR Avg. | AMSR Std. | RSR Avg. | RSR Std. | LRSR Avg. | LRSR Std. | Proposed Avg. | Proposed Std.
A1 | 5.556 | 2.672 | 4.556 | 2.076 | 5.444 | 1.824 | 5.611 | 1.857 | 6.833 | 2.256
A2 | 5.444 | 2.289 | 4.556 | 1.774 | 5.944 | 1.359 | 6.278 | 1.236 | 5.667 | 2.220
A3 | 4.778 | 2.269 | 5.167 | 2.269 | 5.111 | 1.431 | 5.111 | 1.023 | 7.167 | 1.833
A4 | 5.000 | 2.238 | 3.444 | 1.749 | 5.611 | 1.241 | 5.722 | 1.229 | 6.556 | 1.562
A5 | 5.889 | 1.609 | 3.889 | 1.813 | 4.500 | 1.200 | 4.667 | 1.071 | 6.333 | 2.012
A6 | 7.444 | 1.744 | 4.444 | 1.946 | 5.500 | 1.161 | 5.611 | 1.152 | 7.389 | 1.396
A7 | 4.944 | 2.159 | 4.778 | 2.128 | 5.111 | 0.805 | 5.167 | 0.768 | 5.556 | 2.308
A8 | 6.667 | 1.952 | 3.278 | 1.905 | 5.722 | 1.382 | 5.667 | 1.236 | 6.668 | 1.749
A9 | 4.056 | 1.359 | 4.111 | 1.830 | 5.611 | 1.241 | 5.833 | 1.400 | 6.556 | 1.851
A10 | 6.556 | 2.617 | 3.772 | 1.744 | 6.056 | 1.265 | 6.111 | 1.284 | 6.778 | 2.632
A11 | 4.550 | 2.523 | 3.350 | 2.207 | 5.650 | 1.565 | 5.850 | 1.531 | 6.250 | 2.268
A12 | 5.100 | 2.245 | 4.650 | 2.207 | 5.050 | 1.432 | 4.550 | 1.191 | 7.450 | 1.638
A13 | 6.900 | 1.744 | 4.300 | 1.780 | 6.800 | 1.240 | 5.800 | 1.673 | 8.250 | 1.446
A14 | 7.200 | 1.473 | 3.850 | 1.309 | 6.450 | 1.572 | 4.600 | 1.875 | 7.250 | 2.221
A15 | 5.850 | 2.033 | 4.450 | 1.872 | 5.350 | 1.089 | 4.750 | 1.293 | 6.500 | 2.115
A16 | 7.200 | 1.795 | 4.300 | 1.559 | 5.350 | 0.998 | 5.100 | 1.210 | 7.000 | 1.864
A17 | 5.300 | 2.003 | 4.500 | 1.821 | 5.600 | 1.392 | 5.050 | 1.146 | 6.150 | 2.300
A18 | 5.700 | 1.809 | 4.200 | 1.542 | 5.150 | 1.424 | 5.000 | 1.487 | 5.950 | 2.502
A19 | 6.800 | 1.436 | 4.000 | 1.589 | 5.950 | 1.432 | 5.850 | 1.599 | 7.500 | 2.103
A20 | 6.400 | 1.501 | 3.700 | 1.525 | 7.600 | 1.392 | 7.000 | 1.451 | 8.250 | 1.585
B1 | 7.113 | 0.816 | 3.111 | 0.994 | 4.444 | 1.066 | 4.333 | 0.943 | 7.333 | 0.943
B2 | 5.556 | 2.061 | 5.111 | 0.875 | 4.556 | 0.685 | 4.667 | 0.667 | 7.111 | 1.286
B3 | 7.111 | 1.728 | 5.889 | 1.286 | 5.333 | 0.471 | 5.222 | 0.629 | 7.667 | 0.943
B4 | 5.667 | 2.000 | 5.556 | 1.423 | 5.111 | 0.314 | 5.111 | 0.314 | 6.556 | 1.707
B5 | 6.778 | 1.750 | 5.889 | 1.100 | 4.889 | 0.737 | 4.989 | 0.750 | 7.778 | 1.685
B6 | 3.667 | 1.633 | 5.556 | 0.956 | 4.778 | 0.629 | 4.987 | 0.692 | 7.111 | 0.567
B7 | 7.556 | 1.257 | 3.889 | 1.523 | 5.222 | 0.786 | 5.111 | 0.567 | 7.000 | 0.943
B8 | 7.333 | 1.826 | 5.444 | 1.257 | 4.889 | 0.567 | 4.889 | 0.567 | 7.444 | 0.685
B9 | 7.556 | 1.872 | 3.889 | 1.286 | 5.000 | 0.667 | 4.889 | 0.567 | 7.778 | 1.771
B10 | 7.000 | 0.943 | 4.889 | 1.100 | 4.778 | 1.100 | 4.889 | 0.567 | 7.111 | 1.523
B11 | 4.350 | 1.981 | 5.600 | 2.062 | 5.350 | 1.424 | 5.050 | 1.050 | 6.200 | 2.215
B12 | 4.300 | 1.809 | 4.100 | 1.651 | 5.050 | 0.887 | 4.800 | 1.105 | 5.050 | 2.434
B13 | 4.800 | 2.546 | 4.750 | 1.333 | 5.000 | 0.918 | 4.600 | 0.940 | 5.200 | 2.419
B14 | 5.950 | 1.986 | 2.850 | 1.663 | 5.600 | 2.393 | 4.850 | 2.110 | 5.250 | 2.468
B15 | 5.900 | 2.222 | 3.150 | 1.531 | 6.050 | 1.761 | 5.750 | 1.482 | 6.600 | 2.113
B16 | 5.250 | 2.268 | 3.250 | 1.517 | 5.900 | 1.586 | 5.500 | 1.318 | 5.100 | 2.553
B17 | 4.450 | 2.188 | 3.250 | 1.618 | 5.350 | 1.268 | 4.900 | 1.410 | 5.900 | 2.382
B18 | 5.400 | 2.010 | 5.300 | 2.452 | 5.150 | 1.040 | 5.000 | 1.206 | 5.600 | 2.415
B19 | 5.150 | 2.110 | 5.300 | 1.525 | 5.050 | 1.050 | 4.800 | 0.951 | 5.350 | 2.397
B20 | 4.750 | 2.149 | 5.200 | 2.215 | 5.150 | 0.875 | 5.000 | 0.649 | 6.150 | 2.412
Average | 5.824 | 1.916 | 4.382 | 1.650 | 5.405 | 1.116 | 5.217 | 1.125 | 6.634 | 1.891
