Article

Fast Single-Image HDR Tone-Mapping by Avoiding Base Layer Extraction

by Masud An-Nur Islam Fahim and Ho Yub Jung *
Department of Computer Engineering, Chosun University, Gwangju 61452, Korea
* Author to whom correspondence should be addressed.
Sensors 2020, 20(16), 4378; https://doi.org/10.3390/s20164378
Submission received: 8 July 2020 / Revised: 31 July 2020 / Accepted: 31 July 2020 / Published: 5 August 2020
(This article belongs to the Section Sensing and Imaging)

Abstract: The tone-mapping algorithm compresses high dynamic range (HDR) information into the standard dynamic range for regular devices. An ideal tone-mapping algorithm reproduces the HDR image without losing any vital information. Most tone-mapping algorithms deal with detail layer enhancement and gradient-domain manipulation with the help of a smoothing operator, but these approaches often face challenges with over-enhancement, halo effects, and over-saturation. To address these challenges, we propose a two-step solution that performs the tone-mapping operation through contrast enhancement. Our method improves the performance of the camera response model by utilizing improved adaptive parameter selection and weight matrix extraction. Experiments show that our method performs reasonably well for overexposed and underexposed HDR images without producing any ringing or halo effects.

1. Introduction

Casual photographic devices can represent only a fraction of an HDR image's irradiance range, which is why HDR images displayed on regular devices often look overexposed or underexposed. Therefore, a proper tone-mapping algorithm is needed to compress the HDR data into the standard dynamic range (SDR) that is compatible with regular devices. A typical tone-mapping algorithm tries to transform the HDR image into an SDR image without losing any vital spatial information. An example of this is presented in Figure 1.
In recent years, many studies have been conducted on HDR tone-mapping. Although previous tone-mapping algorithms vary significantly, many of them use a base-and-detail layer decomposition to transform HDR into SDR. In this method, the base layer and the detail layer are extracted with the help of a standard edge-aware smoothing algorithm. Each layer goes through an individual manipulation step, and the layers are then recombined to produce the desired transformed image. Depending on the smoothing operator, this approach can faithfully reconstruct the HDR image as an SDR image. However, several factors must be kept in mind during this procedure. HDR images are generally much larger than SDR images, so smoothing operators take longer to extract the detail and base layers. Detail enhancement-based methods can improve the aesthetics and luminance stretching of an HDR image, but the enhancement operation brings challenges of its own. A common scenario is excessive texture detail enhancement: prevalent tone-mapping operators are unaware of the spatial properties of the detail layer, which leads to a cartoon-like appearance in the reconstructed image. Halo artifacts are also common in typical tone-mapping algorithms because of the limitations of the underlying smoothing operators; if the smoothing algorithm cannot provide edge-aware smoothing, halo effects remain in the processed images. Apart from these scenarios, the inverse gradient problem may arise due to the over-smoothing of the base layer extractor. To faithfully reconstruct HDR images without artifacts, one may incorporate relevant priors or a robust edge-aware smoothing operator into the base and detail layer extraction algorithm.
Gradient-based tone-mapping has gained attention in recent years because the visual system is more sensitive to gradients than to absolute intensities [1]. The gradient-based approach takes the magnitude of the gradient into account for the tone-mapping operation: larger gradient values are compressed for the base layer, and smaller gradient values are processed for the structure layer. After these operations, augmenting all the gradients yields the tone-corrected output [1]. Gradient manipulation is crucial in image enhancement since it enables simultaneous gradient-based image sharpening and smoothing [1]. However, gradient integration yields an unknown intensity range, so pixel values may exceed the standard radiance bound. To mitigate this, such approaches often have to incorporate radiance clipping to keep the tone-mapped image within the fixed dynamic range. Further, they require post-processing operations, which sometimes lead to over-saturation or over-smoothing [1].
Herein, we present a computationally light and noise-suppressive method that is display-adaptive, artifact-suppressive, independent of parameter tweaking, and free of contrast problems. The underlying reason for these features lies in our perception of the tone-mapping problem. In this study, we propose an approach that does not follow the usual route of gradient-domain operation or base layer extraction. Instead of regarding tone-mapping as a base layer or gradient-domain correction problem, our approach treats it as a contrast enhancement problem. Accordingly, we do not have to extract the base layer or the gradient layer, which depends significantly on the smoothing operation. This smoothing operation makes the overall computation heavier, and it gets worse with the size of the input image. Additionally, usual smoothing approaches are not free from parameter dependency and involve a post-processing operation.
In contrast to the traditional tone-mapping approach, we propose an adaptive camera response function with an appropriate weight matrix operator, which makes our approach independent of parameter selection and any post-processing operation. Usual tone-mapping operations accumulate the detail layer or gradient at the end of the pipeline. This accumulation increases the overall sharpness of the image, as well as its visual clarity. However, it may also increase noise, introduce ringing or halo effects, or lead to undesirable saturation of the tone-mapped image [1]. Since our study does not rely on this approach, the proposed method does not introduce any halo or ringing effects. Further, this study faithfully approximates the exposure information of the input HDR image, which makes our method more noise-suppressive than other methods. In summary, our contributions are as follows:
  • Our approach obtains tone-mapped HDR images with the help of contrast enhancement, making it unnecessary to perform any smoothing operations.
  • The proposed approach tries to approximate the exposure information of the input HDR image faithfully. This information aids contrast enhancement so that our method does not require any post-processing.
  • The proposed adaptive parameter selection improves the holistic contrast correction performance.
  • Our utilized weight matrix extraction scheme [2] improves the overall contrast optimization performance.
  • Since this approach does not involve a smoothing operation or detail enhancement, tone-mapped images do not exhibit ringing or halo effects. Additionally, it is computationally faster than other state-of-the-art methods due to its single-channel contrast optimization step.
The structure of our paper is as follows. Section 2 discusses related studies of tone mapping. Section 3 presents the proposed tone-mapping method. Section 4 presents the experimental results, and Section 5 concludes the paper.

2. Related Work

Earlier studies of HDR tone mapping can be classified based on performing local and global tone-mapping operations. Several methods [3,4,5] transform HDR images into LDR images using a global tone-mapping operation. The authors of [3] segment the HDR image into two subsections based upon the irradiance value. Afterward, they apply different logarithmic compressions to each section. Tumblin et al. [4] proposed a global brightness-preserving algorithm for HDR tone mapping. Ward et al. [5] mapped HDR images into SDR images by compressing the contrast instead of the luminance of the input images, using a linear compression function.
Global tone mapping can produce locally distorted results, which has been addressed by local tone mapping-based studies [6]. In [7], an HDR image was divided into 11 local irradiance zones and quantized into a compressed form according to those regions. Ma et al. [8] used optimization to enhance the local region visibility; the researchers designed the tone-mapped image quality index (TMQI) [9] as the objective for their optimization algorithm. Duan et al. [10] performed tone mapping for HDR images by correcting the local histogram. Their approach utilized a global contrast correction in the first stage and then implemented the same contrast correction algorithm locally. Sira et al. [11] proposed tone correction with a combination of local and global tone mapping. In the first stage, the researchers corrected the saturation globally based on the properties of human perception. In the second stage, they compressed the tone of the input image locally by using a variational model.
Shan et al. [12] used local linear adjustments on small overlapping windows over the whole HDR image. In this way, each of the overlapping windows acts as a guidance map, which effectively suppresses local irradiance anomalies. Symmetrical analysis-synthesis filter banks were used for tone-mapping by Li et al. [13], whose work exploited local gain control in each sub-band to achieve adaptivity. Gu et al. [14] formulated local gamma correction with adaptive parameters as an optimization problem to perform tone mapping for an HDR image. Chen et al. [15] proposed a luminance-driven perceptual grouping process to estimate a sparse representation of an HDR image's irradiance. Due to the sub-grouping, the researchers could apply a piece-wise illuminance optimization to suppress excessive irradiance values.
Fattal et al. [16] proposed a gradient-domain optimization scheme to tone map HDR images; they obtained a low-dynamic-range image by solving the Poisson equation. The study in [17] manipulated the gradient domain by using the wavelet operation: with the help of edge-avoiding wavelets, the researchers reconstructed the HDR image as an LDR image while avoiding common artifacts. Ramesh et al. [18] proposed symmetrically fusing multiple pictures in the gradient domain; their method preserves important local perceptual cues and improves temporally coherent contextual features. Another study [19] collected edge spectral information from multi-exposure images and fused all the data into a single image, then performed derivative manipulation to produce the enhanced low-dynamic-range image. The fusion-based tone-mapping approach was also used in [20,21].
Durand et al. [22] used a piecewise linear approximation for dissecting the base layer and the detail layer to compress the HDR data. Bo et al. [23] used a locally adaptive edge-preserving filter to perform tone mapping, where the resulting image preserved the salient edges. Meylan et al. [24] proposed a Retinex-based tone-mapping algorithm. The researchers’ method utilized an adaptive filter to protect the high-contrast edges from the artifacts and the principal component analysis to suppress the chromatic distortion. Mai et al. [25] proposed a statistical model that approximated the deviation due to the tone mapping and compression operations. The authors optimized the tone curve based upon that model to perform tone mapping. Neil et al. [26] proposed minimal-bracketing algorithms for computing the minimum-sized exposure to compress an HDR image into an LDR image. Malik et al. [27] combined the fusion of different exposures with film response recovery to create an LDR image. Zeev et al. [28] used the weighted least-squares filter for tone mapping. Even though L1 smoothing is an excellent option for edge-aware filtering, it leads to a weak structural prior. To address this, Liang et al. [29] used the L1-L0 model to render an LDR image from an HDR image.
Choudhury et al. [30] proposed a denoising-based detail-enhancing approach for tone mapping, which was slower due to its preprocessing and post-processing operations. A local Laplacian filter [31] is very efficient at suppressing the halo effect; however, the relevant operations are prolonged and introduce unnecessary saturation. Recent tone-mapping studies have leaned towards CNN-based approaches due to their efficient performance. The study in [32] performs an inverse tone-mapping operation by correcting the input LDR image's saturation information with a convolutional neural network, and then applies a linear function to the concatenated LDR and corrected LDR to reconstruct the HDR image. Instead of using multi-exposure input, the study in [33] predicted multiple exposures from the input LDR image; the CNN scheme was followed by a stack-based fusion step to reconstruct the tone-mapped HDR image. Their approach can efficiently deal with saturation correction but suffers from the linearization problem.
Marnerides et al. [34] avoided the linearization problem by using a CNN to reproduce the HDR image directly. However, their method faces challenges with HDR compression due to its normalization scheme. To tackle this, Yuma et al. [35] proposed the L1-Cosine loss function to reconstruct HDR images successfully. The authors claim that their CNN scheme can learn the non-linear relationship between the input LDR image and the reconstructed HDR image. However, CNN-based studies are not free of dataset dependency, and the existing datasets do not comprise evenly distributed objects. On the contrary, mathematical modeling-based approaches can be free of dataset dependency. For this reason, we sought to propose a scheme that can tone map HDR images as efficiently as possible.

3. Methodology

In this study, we propose a contrast optimization method to render a high-dynamic-range image as a low-dynamic-range image. A visual representation of our proposed algorithm is presented in Figure 2. In the first step, we apply a logarithmic transformation to the input image to bound the irradiance information to a range from 0 to 1. At this stage, we can treat the input image as an LDR image with a reduced contrast distribution. Afterward, we apply an RGB-to-HSV transformation to this image. Next, we extract the value channel and plug it into an adaptive camera response function to obtain the exposure ratio information of the input HDR image. This information is later used to perform non-linear contrast stretching. In the last stage, we perform an HSV-to-RGB transformation to obtain the tone-mapped HDR image. Unlike traditional methods, the entire process involves neither base layer extraction nor detail enhancement. Therefore, our reconstructed image does not exhibit halo effects, ringing effects, or gradient reversal, since the proposed model entails no gradient-domain operations or base layer extraction.
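The steps above can be sketched compactly. The following is a minimal NumPy illustration of the pipeline, not the authors' implementation; `enhance_value` is a hypothetical stand-in for the adaptive camera response step described later, and the RGB rescaling trick is used in place of an explicit HSV round trip (scaling all three channels by the same factor leaves hue and saturation unchanged, which is equivalent to modifying only the V channel):

```python
import numpy as np

def tone_map(hdr, enhance_value):
    """Sketch of the pipeline: log-compress, enhance the HSV value channel,
    and rescale RGB (equivalent to an RGB->HSV->RGB round trip).
    `enhance_value` is a placeholder for the adaptive CRF step."""
    # Step 1: logarithmic compression of irradiance to [0, 1].
    ldr = np.log1p(hdr) / np.log1p(hdr.max())
    # Step 2: the value channel of HSV is the per-pixel max over R, G, B.
    v = ldr.max(axis=2)
    # Step 3: contrast-correct the value channel only.
    v_new = enhance_value(v)
    # Step 4: scaling RGB by v_new / v leaves hue and saturation unchanged.
    ratio = v_new / np.maximum(v, 1e-6)
    return np.clip(ldr * ratio[..., None], 0.0, 1.0)
```

For instance, `tone_map(img, lambda v: v ** 0.5)` would apply a simple gamma curve in place of the adaptive camera response function.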
We can cluster popular contrast enhancement algorithms into two groups: (1) global contrast enhancement [37,38], and (2) local contrast enhancement [39,40]. The global technique enhances the image contrast without considering the spatial properties of the input image; hence, a typical global contrast enhancement algorithm performs a linear contrast enhancement, and the resulting image contains overly saturated regions or distorted detail. Several studies [41,42,43] have performed nonlinear contrast enhancement to mitigate these challenges. On the other hand, local contrast enhancement techniques prioritize the spatial distribution and achieve better contrast correction, although several studies did not provide any theoretical justification [13]. Unlike other approaches, the Retinex theory assumes that light decomposes into two parts: (1) an illumination layer and (2) a scene reflection layer [36]. Popular Retinex studies [36] enhance an input image by manipulating the illumination layer. Since this approach does not consider the camera response properties, it faces the challenges of over- and under-enhancement [1]. The studies in [44,45,46,47] combined contrast optimization and detail enhancement to perform HDR tone mapping, but their reconstructed HDR images show over-saturation and undesirable detail suppression. The study in [48] uses global histogram correction for tone mapping.
However, these problems can be alleviated if we process the image with the proper exposure information. The camera response function mitigates this situation by approximating the exposure information for the input image. If E is the pixel information captured by the sensors and X is the non-linear function that takes E as its input to enhance the contrast, then the output image O is as follows:
O = X(E)    (1)
This non-linear function is known as the camera response function. Direct approximation of this function is possible through an ensemble of polynomial model approximation and optimization. However, a nearly accurate estimation is possible through the brightness transformation function (BTF) [36]. If O is the output image and B is the brightness transformation function, then for the exposure ratio R, the desired contrast-corrected approximation of the input image I is as follows:
O = B(I, R)    (2)
The above equation is also known as the brightness transformation function model [36]. From this equation, we can write the camera response function [1,36] for recovering the input HDR image with the desired exposure as follows:
O = exp(p_1 (1 − R^{p_2})) · I^{R^{p_2}}    (3)
β = exp(p_1 (1 − R^{p_2})),  γ = R^{p_2}    (4)
Here, p_1 and p_2 are model parameters, with default values of 0.32 and 1.3 in the previous study [36]. However, these values lead to over-whitening for several images, as shown in Figure 3. We assessed the effect of the parameters experimentally and devised an adaptive form of this model. To determine the values of β and γ, we first consider p_1 and p_2 to be equal. For any values of p_1 and p_2, if the value of γ is greater than 1, the obtained value of β will be less than 1; even though such parameters are suitable for brighter regions, the rendered image will be darker. The reverse scenario of γ < 1 and β > 1 leads to a brighter image.
To estimate these parameters adaptively, we first calculate σ, the standard deviation of the input image. Next, we set p_1 = 1 + σ and p_2 = −p_1/4. We arrived at these settings by trial and error. Thanks to this adaptive parameterization, our method achieves a more accurate color representation than the original parameter values. Additionally, avoiding strictly fixed parameters allows us to retain the tone-mapped image's naturalness; more on the image's naturalness is presented in Figure 4.
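The adaptive parameterization and Equations (3) and (4) translate directly into a few lines of NumPy. The following is an illustrative sketch, assuming a value channel `v` in [0, 1] and a given exposure-ratio map `ratio`:

```python
import numpy as np

def adaptive_crf(v, ratio):
    """Apply the camera response model of Eq. (3) with the adaptive
    parameters p1 = 1 + sigma and p2 = -p1 / 4, where sigma is the
    standard deviation of the input value channel."""
    sigma = v.std()
    p1 = 1.0 + sigma
    p2 = -p1 / 4.0
    gamma = ratio ** p2                 # Eq. (4): gamma = R^{p2}
    beta = np.exp(p1 * (1.0 - gamma))   # Eq. (4): beta = exp(p1 (1 - R^{p2}))
    return beta * v ** gamma            # Eq. (3)
```

Note that a ratio of 1 gives γ = 1 and β = 1, leaving the input unchanged, while a ratio above 1 (an under-exposed pixel) gives γ < 1 and β > 1, brightening it, consistent with the discussion above.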
As we can see, Equation (3) is the closed-form solution of Equation (2), bounded by the parameters p_1 and p_2. These bounds govern a non-linear relationship with the input values and map the whole image to the range 0 to 1 without any normalization. However, with the previous bounds, it is common for the reconstructed image to exceed the 0-to-1 limit and distort the overall contrast quality. The proposed adaptive limits can suppress such distortion. Since these parameters do not maintain a linear relationship with the given image, only empirical evidence can justify their efficacy; Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 provide such evidence for the proposed adaptive parameter settings.
To obtain the exposure ratio map, we have to calculate the illumination map; its inverse gives us the desired exposure ratio. For this, we adopted the iteration-free solver of [49]. That study used a weight matrix based on the relative total variation technique, as in Equation (5). Their choice of weight matrix tends to produce brighter output that is not free from RGB noise [49]; they therefore used a denoiser as an extension to mitigate this challenge, which makes the overall computation lengthier. As an improvement, the study in [36] modified the weight matrix of Equation (5) with Gaussian weighting, as in Equation (6), to enhance the contrast of the input image. The illumination map achieved by their study is blurrier than that of [49]. Their method does not need a denoiser at the end of the enhancement procedure and achieves a brighter result than [49]. The weight matrices W_D(·) from [49] and [36] are expressed as follows:
W_D(m) = 1 / (|Σ_{n∈ω(m)} Δ_D L(n)| + ε),  D ∈ {W, H}    (5)
W_D(m) = Σ_{n∈ω(m)} [ G_σ(m, n) / (|Σ_{n∈ω(m)} G_σ(m, n) Δ_D L(n)| + ε) ]    (6)
Here, L(·) is the illumination information extracted from the HSV transformation, ω(·) is the local window, Δ_D is the gradient operator, G_σ(m, n) is the Gaussian kernel, D indicates the dimension, W and H indicate the horizontal and vertical axes, and ε is a very small value used to avoid a zero denominator. Images produced by [49] are dimmer and hazier than those of [36]. Since the method in [36] softens the denominator with Gaussian weighting, the produced images are brighter than those of [49], which in some cases distorts the color and naturalness, as in the figure below.
To overcome these challenges, we use the Relativity-of-Gaussian weight matrix [2]. The reason for using it lies in its cross-scale smoothing property: this operator can capture both small- and large-scale information better than other operators. The decomposed formulation [2] of this operator is adopted as our weight matrix:
W_D(m) = G_{σ1/2} * [ 1 / (|(G_{σ1/2} * Δ_D L)(G_{σ1/2} * Δ_D L)| + ε) ],  D ∈ {W, H}    (7)
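The weight computation above can be illustrated with a small NumPy sketch. This is our own simplified illustration, not the implementation of [2]: a single scale `sigma` is used for all the Gaussian filters in place of the operator's σ1/2 scales, and the Gaussian filter is built from scratch so the snippet stays self-contained:

```python
import numpy as np

def gauss_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gauss_filter(img, sigma):
    """Separable Gaussian filter via two 1-D convolutions."""
    k = gauss_kernel(sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, img)

def rog_weight(L, sigma=1.0, eps=1e-3):
    """Relativity-of-Gaussian-style weights in the spirit of Eq. (7):
    Gaussian-smoothed gradients enter the denominator, and the result
    is smoothed by a Gaussian again (single scale, simplified)."""
    weights = {}
    for axis, name in ((1, 'W'), (0, 'H')):
        last = np.take(L, [-1], axis=axis)
        grad = np.diff(L, axis=axis, append=last)   # forward difference, same shape
        smooth = gauss_filter(grad, sigma)          # G * grad_D L
        w = 1.0 / (np.abs(smooth * smooth) + eps)   # inverse of squared smoothed gradient
        weights[name] = gauss_filter(w, sigma)      # outer Gaussian of Eq. (7)
    return weights
```

The cross-scale property of the real operator comes from using two related Gaussian scales; the single-scale version here only shows the overall structure of the computation.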
This weight matrix also helps the optimization technique keep the gradient regularized. The optimization function for the illumination map is as follows:
min_T Σ_x [ (T(x) − L(x))² + λ · Σ_{D∈{W,H}} W_D(x) (Δ_D T(x))² / (|Δ_D L(x)| + ε) ]    (8)
Here, λ is the balancing factor and x indexes each pixel of the given input. T is the illumination map, as in [36]. The optimization aims to obtain T on the basis of the value channel L from the HSV transformation. We used 0.001 as the fixed value for λ. Since this equation is in quadratic form, a closed-form solution is available and can be obtained directly [49]. Following [36], the exposure ratio is then written as follows:
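Because the objective is quadratic in T, setting its derivative to zero gives a sparse positive-definite linear system (I + λ D^T A D) T = L, where D stacks the forward differences and A folds the weights with the |Δ_D L| terms. The paper relies on the direct closed-form solve of [49]; the following matrix-free conjugate-gradient sketch is our own illustrative alternative, not the authors' solver, and assumes precomputed weight maps `W_h` and `W_v`:

```python
import numpy as np

def solve_illumination(L, W_h, W_v, lam=0.001, eps=1e-3, iters=60):
    """Minimize Eq. (8) by conjugate gradients on the normal equations
    (I + lam * D^T A D) T = L, applying the operator matrix-free."""

    def dx(u):  # forward horizontal difference, zero at the right border
        return np.diff(u, axis=1, append=u[:, -1:])

    def dy(u):  # forward vertical difference, zero at the bottom border
        return np.diff(u, axis=0, append=u[-1:, :])

    def dxT(g):  # adjoint of dx (a backward difference)
        out = np.empty_like(g)
        out[:, 0] = -g[:, 0]
        out[:, 1:-1] = g[:, :-2] - g[:, 1:-1]
        out[:, -1] = g[:, -2]
        return out

    def dyT(g):  # adjoint of dy
        out = np.empty_like(g)
        out[0, :] = -g[0, :]
        out[1:-1, :] = g[:-2, :] - g[1:-1, :]
        out[-1, :] = g[-2, :]
        return out

    # Weights folded with the |grad L| term of Eq. (8).
    a_h = W_h / (np.abs(dx(L)) + eps)
    a_v = W_v / (np.abs(dy(L)) + eps)

    def A(T):  # the normal-equations operator
        return T + lam * (dxT(a_h * dx(T)) + dyT(a_v * dy(T)))

    # Standard conjugate gradients starting from T = L.
    T = L.copy()
    r = L - A(T)
    p = r.copy()
    rs = (r * r).sum()
    for _ in range(iters):
        if rs < 1e-12:
            break
        Ap = A(p)
        alpha = rs / (p * Ap).sum()
        T = T + alpha * p
        r = r - alpha * Ap
        rs_new = (r * r).sum()
        p = r + (rs_new / rs) * p
        rs = rs_new
    return T
```

With λ = 0 the operator reduces to the identity and T equals L, which is a useful sanity check; increasing λ trades data fidelity for smoothness of the illumination map.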
R = 1 / max(T(x), ε)    (9)
The result of Equation (9) contains the desired exposure ratio information. We can now plug this exposure ratio into Equation (3) to obtain the contrast-corrected value channel for the tone-mapped HDR image. Originally, Equation (3) operates on an RGB input image; in our case, it takes I as the value channel of the input HDR image and approximates the value channel with the desired contrast. Finally, we perform an HSV-to-RGB conversion to obtain the tone-mapped HDR result.
Ying et al. [36] performed this enhancement on all three channels of the input image, whereas we apply it to the value channel only. This does not significantly degrade the hue or saturation information, and the single-channel operation makes the total computation faster than in other studies; more about the computational time is presented in the next section. Additionally, the HSV transformation allows the proposed method to perform robust contrast correction, escaping chromatic distortion as well as incorrect luminance approximation.

4. Comparative Analysis

We compared our tone-mapping results with several state-of-the-art studies: L0-L1 base layer decomposition [29], the weighted least-squares filter [28], Relativity-of-Gaussian tone-mapping [2], L0 gradient minimization [50], intensity range decomposition [1], and linear windowed tone-mapping [12]. The comparison covers subjective, objective, and time analyses, with the default parameter settings maintained for all tone-mapping operators. Since our tone-mapping operator uses contrast correction at its core, we also compared our study with CLAHE [51], CRF [36], and LIME [49] to demonstrate its contrast-correction performance. For quantitative evaluation, we used common metrics such as mean absolute error, PSNR, and SSIM. The necessary tables and figures for the comparative analysis are presented in the later parts of this study.

4.1. Dataset

In our study, we used 150 different HDR images collected from various researchers over the internet. Due to the unavailability of ground truth for the HDR images, we had to perform visual and quantitative comparisons for the performance analysis. For the contrast enhancement performance analysis, we used the Kodak 24 true-color image database [52] and the Berkeley image database [52].

4.2. Visual Analysis

As mentioned above, we compared our study with six different state-of-the-art studies. For a fair comparison, we did not include deep learning-based tone-mapping studies; furthermore, our current study does not concern video tone mapping. Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 contain side-by-side comparisons between our study and the other mentioned studies. From these figures, we can observe that our study performs well alongside the other studies. Compared to them, our study does not produce any over-enhanced images and is free from edge hallucination, over-saturation, and ringing effects.

4.3. Subjective Analysis

An image can convey equivocal meanings to its viewers from an aesthetic point of view; hence, we performed a subjective analysis based on personal opinion. The selected subjects were 16 individuals, with equal numbers of males and females. In our subjective analysis, we used a casual display (an ASUS monitor) to obtain the mean opinion score for our test images. We presented the HDR images to the participants without any annotations, and there was no ground truth for the test images. Participants judged the test images based on clarity, contrast, and aesthetics, with the mean opinion metric ranging from bad (score = 1) to excellent (score = 5). Our results achieved comparable or better scores from the viewers; the mean score and the standard deviation for each method are presented in Table 1.

4.4. TMQI Analysis

To evaluate our method's performance, we used the tone-mapped image quality index (TMQI) [9]. This metric entails two steps: first, it estimates the structural fidelity and naturalness scores; then, it adjusts the computed scores with a power function and averages them to determine the TMQI score for the input HDR and LDR images. The TMQI score ranges from 0 to 1, and a score close to 1 indicates that the respective tone-mapping method produces sound tone-mapped output. For our study, we collected 150 HDR images to create our tone-mapping database. The average TMQI score for our tone-mapping operator is 0.9046, and its highest achieved score is 0.9053. Our method has also attained excellent results in preserving naturalness and fidelity: the average naturalness score is 0.5721, and the fidelity score of our method is 0.8619. A comparative analysis is presented in Table 2.
As seen in Table 2, the proposed study achieves a low fidelity score on average. The fidelity score measures the standard deviation of the given image at various scale sizes in the local domain; in other words, it measures the detail-capturing performance of the given tone-mapping operator. This mechanism explains the L0 operator's highest fidelity score, even though it tends to hallucinate images due to its excessive detail enhancement. Our study, on the other hand, performs tone mapping without applying detail enhancement, which is the sole reason for our low fidelity score.

4.5. Time Analysis

In terms of computational time, our method is significantly faster than other state-of-the-art approaches. In contrast to typical tone-mapping studies, the proposed scheme uses contrast optimization to extract the exposure ratio over a single channel, and this optimization is solvable without any iterations. Altogether, these properties reduce the required computational time: on average, our method takes only 2.6 s to perform the tone-mapping operation. The results of our time analysis are presented in Table 3. We used MATLAB on an AMD Ryzen 5 2600 processor for all the studies.

4.6. Contrast Correction Analysis

Our study uses a contrast-correcting operator at its heart to tone map the HDR images. Consequently, the proposed contrast correction method can also restore images with poor contrast. As Figure 11a shows, the proposed method can enhance the darkest part of the input image without introducing any noise; in the cropped section, our method restores the barely visible hidden tiles. In Figure 11b, we can see that, unlike LIME, our method restores the brightness of the input image without damaging the saturation.
Along with the visual performance, the proposed approach has demonstrated its efficacy quantitatively, as shown in Table 4 below. For the quantitative analysis, we estimated the mean absolute error, SSIM, and PSNR for all 24 true-color images from the Kodak database [52]. As the table shows, the average MAE, PSNR, and SSIM achieved by our study outperform those of the compared methods.

5. Conclusions

In this paper, we have proposed a modified version of the camera response function model (CRFm) for the HDR tone-mapping operation. Our proposed adaptive parameter control aids the contrast correction performance of the vanilla CRFm. Additionally, our choice of weight-extracting function helps the camera response function model maintain the spatial consistency of the input HDR image as well as of low-light images. These features together improve the visual and physical quality of the tone-mapped images. Our method reduces the computational complexity of tone mapping by using only single-channel contrast optimization. Experimental results have shown that, without performing a detail enhancement operation, the proposed method can preserve structural fidelity without compromising computational speed or spatial information.

Author Contributions

Conceptualization, M.A.-N.I.F. and H.Y.J.; methodology, M.A.-N.I.F.; software, M.A.-N.I.F.; validation, M.A.-N.I.F.; formal analysis, M.A.-N.I.F.; investigation, H.Y.J.; resources, H.Y.J.; data curation, M.A.-N.I.F.; writing—original draft preparation, M.A.-N.I.F.; writing—review and editing, H.Y.J.; visualization, M.A.-N.I.F.; supervision, H.Y.J.; project administration, H.Y.J.; funding acquisition, H.Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the research fund from Chosun University, 2020.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shibata, T.; Tanaka, M.; Okutomi, M. Gradient-domain image reconstruction framework with intensity-range and base-structure constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2745–2753. [Google Scholar]
  2. Cai, B.; Xing, X.; Xu, X. Edge/structure preserving smoothing via relativity-of-Gaussian. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Fez, Morocco, 22–24 May 2017; pp. 250–254. [Google Scholar]
  3. Drago, F.; Myszkowski, K.; Annen, T.; Chiba, N. Adaptive logarithmic mapping for displaying high contrast scenes. Comput. Graph. Forum. 2003, 22, 419–426. [Google Scholar] [CrossRef]
  4. Tumblin, J.; Rushmeier, H. Tone reproduction for realistic images. IEEE Comput. Graphics Appl. 1993, 13, 42–48. [Google Scholar] [CrossRef]
  5. Ward, G.J. The radiance lighting simulation and rendering system. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, USA, 24–29 July 1994; pp. 459–472. [Google Scholar]
  6. Li, H.; Jia, X.; Zhang, L. Clustering based content and color adaptive tone mapping. Comput. Vis. Image Underst. 2018, 168, 37–49. [Google Scholar] [CrossRef]
  7. Reinhard, E.; Devlin, K. Dynamic range reduction inspired by photoreceptor physiology. IEEE Trans. Visual Comput. Graph. 2005, 11, 13–24. [Google Scholar] [CrossRef]
  8. Ma, K.; Yeganeh, H.; Zeng, K.; Wang, Z. High dynamic range image compression by optimizing tone mapped image quality index. IEEE Trans. Image Process. 2015, 24, 3086–3097. [Google Scholar]
  9. Yeganeh, H.; Wang, Z. Objective quality assessment of tone-mapped images. IEEE Trans. Image Process. 2012, 22, 657–667. [Google Scholar] [CrossRef]
  10. Duan, J.; Bressan, M.; Dance, C.; Qiu, G. Tone-mapping high dynamic range images by novel histogram adjustment. Pattern Recognit. 2010, 43, 1847–1862. [Google Scholar] [CrossRef]
  11. Ferradans, S.; Bertalmio, M.; Provenzi, E.; Caselles, V. An analysis of visual adaptation and contrast perception for tone mapping. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2002–2012. [Google Scholar] [CrossRef] [Green Version]
  12. Shan, Q.; Jia, J.; Brown, M.S. Globally optimized linear windowed tone mapping. IEEE Trans. Vis. Comput. Graph. 2010, 16, 663–675. [Google Scholar]
  13. Li, Y.; Sharan, L.; Adelson, E.H. Compressing and companding high dynamic range images with subband architectures. ACM Trans. Graph. 2005, 24, 836–844. [Google Scholar] [CrossRef] [Green Version]
  14. Gu, H.; Wang, Y.; Xiang, S.; Meng, G.; Pan, C. Image guided tone mapping with locally nonlinear model. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 786–799. [Google Scholar]
  15. Chen, H.T.; Liu, T.L.; Fuh, C.S. Tone reproduction: A perspective from luminance-driven perceptual grouping. Int. J. Comput. Vis. 2005, 65, 73–96. [Google Scholar] [CrossRef]
  16. Fattal, R.; Lischinski, D.; Werman, M. Gradient domain high dynamic range compression. ACM Trans. Graph. 2002, 21, 249–256. [Google Scholar] [CrossRef] [Green Version]
  17. Fattal, R. Edge-avoiding wavelets and their applications. ACM Trans. Graph. 2009, 28, 1–10. [Google Scholar]
  18. Raskar, R.; Ilie, A.; Yu, J. Image Fusion for Context Enhancement and Video Surrealism; ACM SIGGRAPH: New York, NY, USA, 2005. [Google Scholar]
  19. Connah, D.; Drew, M.S.; Finlayson, G.D. Spectral edge image fusion: Theory and applications. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 65–80. [Google Scholar]
  20. Wu, S.; Yang, L.; Xu, W.; Zheng, J.; Li, Z.; Fang, Z. A mutual local-ternary-pattern based method for aligning differently exposed images. Comput. Vis. Image Underst. 2016, 152, 67–78. [Google Scholar] [CrossRef]
  21. Sun, J.; Zhu, H.; Xu, Z.; Han, C. Poisson image fusion based on Markov random field fusion model. Inf. Fusion 2013, 14, 241–254. [Google Scholar] [CrossRef]
  22. Durand, F.; Dorsey, J. Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans. Graph. 2002, 21, 257–266. [Google Scholar] [CrossRef] [Green Version]
  23. Gu, B.; Li, W.; Zhu, M.; Wang, M. Local edge-preserving multi-scale decomposition for high dynamic range image tone mapping. IEEE Trans. Image Process. 2012, 22, 70–79. [Google Scholar]
  24. Meylan, L.; Susstrunk, S. High dynamic range image rendering with a retinex-based adaptive filter. IEEE Trans. Image Process. 2006, 15, 2820–2830. [Google Scholar] [CrossRef] [Green Version]
  25. Mai, Z.; Mansour, H.; Mantiuk, R.; Nasiopoulos, P.; Ward, R.; Heidrich, W. Optimizing a tone curve for backward-compatible high dynamic range image and video compression. IEEE Trans. Image Process. 2010, 20, 1558–1571. [Google Scholar]
  26. Barakat, N.; Hone, A.N.; Darcie, T.E. Minimal-bracketing sets for high-dynamic-range image capture. IEEE Trans. Image Process. 2008, 17, 1864–1875. [Google Scholar] [CrossRef]
  27. Debevec, P.E.; Malik, J. Recovering high dynamic range radiance maps from photographs. ACM SIGGRAPH 2008, 31, 1–10. [Google Scholar]
  28. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef]
  29. Liang, Z.; Xu, J.; Zhang, D.; Cao, Z.; Zhang, L. A Hybrid l1-l0 Layer Decomposition Model for Tone Mapping. In Proceedings of the IEEE conference on computer vision and pattern recognition 2018, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4758–4766. [Google Scholar]
  30. Choudhury, A.; Medioni, G. Hierarchy of nonlocal means for preferred automatic sharpness enhancement and tone mapping. JOSA A 2013, 30, 353–366. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Paris, S.; Hasinoff, S.W.; Kautz, J. Local laplacian filters: Edge-aware image processing with a laplacian pyramid. ACM Trans. Graph. 2011, 30, 68. [Google Scholar] [CrossRef]
  32. Endo, Y.; Kanamori, Y.; Mitani, J. Deep reverse tone mapping. ACM Trans. Graph. 2017, 36, 1–10. [Google Scholar] [CrossRef]
  33. Eilertsen, G.; Kronander, J.; Denes, G.; Mantiuk, R.K.; Unger, J. HDR image reconstruction from a single exposure using deep CNNs. ACM Trans. Graph. 2017, 36, 1–15. [Google Scholar] [CrossRef]
  34. Marnerides, D.; Bashford-Rogers, T.; Hatchett, J.; Debattista, K. ExpandNet: A deep convolutional neural network for high dynamic range expansion from low dynamic range content. Comput. Graph. Forum 2018, 37, 37–49. [Google Scholar] [CrossRef] [Green Version]
  35. Kinoshita, Y.; Kiya, H. iTM-Net: Deep Inverse Tone Mapping Using Novel Loss Function Considering Tone Mapping Operator. IEEE Access 2019, 7, 73555–73563. [Google Scholar] [CrossRef]
  36. Ying, Z.; Li, G.; Ren, Y.; Wang, R.; Wang, W. A new low-light image enhancement algorithm using camera response model. In Proceedings of the IEEE International Conference on Computer Vision Workshops 2017, Venice, Italy, 22–29 October 2017; pp. 3015–3022. [Google Scholar]
  37. Chen, S.D.; Ramli, A.R. Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Trans. Consum. Electron. 2003, 49, 1301–1309. [Google Scholar] [CrossRef]
  38. Sen, D.; Pal, S.K. Automatic exact histogram specification for contrast enhancement and visual system based quantitative evaluation. IEEE Trans. Image Process. 2010, 20, 1211–1220. [Google Scholar] [CrossRef] [Green Version]
  39. Vonikakis, V.; Andreadis, I.O.; Gasteratos, A. Fast centre–surround contrast modification. IET Image Process. 2008, 2, 19–34. [Google Scholar] [CrossRef]
  40. Wang, L.; Xiao, L.; Liu, H.; Wei, Z. Variational Bayesian method for retinex. IEEE Trans. Image Process. 2014, 23, 3381–3396. [Google Scholar] [CrossRef] [PubMed]
  41. Beghdadi, A.; Le Negrate, A. Contrast enhancement technique based on local detection of edges. Comput. Vis. Graph. Image Process. 1989, 46, 162–174. [Google Scholar] [CrossRef]
  42. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Prentice Hall Press: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  43. Baxes, G.A. Digital Image Processing: Principles and Applications; Wiley Press: Hoboken, NJ, USA, 1994. [Google Scholar]
  44. Kwon, H.-J.; Lee, S.-H. Contrast Sensitivity Based Multiscale Base–Detail Separation for Enhanced HDR Imaging. Appl. Sci. 2020, 10, 2513. [Google Scholar] [CrossRef] [Green Version]
  45. Liu, Y.; Lv, B.; Huang, W.; Jin, B.; Li, C. Anti-Shake HDR Imaging Using RAW Image Data. Information 2020, 11, 213. [Google Scholar] [CrossRef] [Green Version]
  46. Choi, H.-H.; Kang, H.-S.; Yun, B.-J. Tone Mapping of High Dynamic Range Images Combining Co-Occurrence Histogram and Visual Salience Detection. Appl. Sci. 2019, 9, 4658. [Google Scholar] [CrossRef] [Green Version]
  47. Rousselot, M.; Le Meur, O.; Cozot, R.; Ducloux, X. Quality Assessment of HDR/WCG Images Using HDR Uniform Color Spaces. J. Imaging 2019, 5, 18. [Google Scholar] [CrossRef] [Green Version]
  48. Khan, I.R.; Rahardja, S.; Khan, M.M.; Movania, M.M.; Abed, F. A Tone-Mapping Technique Based on Histogram Using a Sensitivity Model of the Human Visual System. IEEE Trans. Ind. Electr. 2018, 65, 3469–3479. [Google Scholar] [CrossRef]
  49. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef]
  50. Xu, L.; Lu, C.; Xu, Y.; Jia, J. Image smoothing via L0 gradient minimization. In Proceedings of the SIGGRAPH Asia Conference 2011, Hong Kong, China, 12–15 December 2011; pp. 1–12. [Google Scholar]
  51. Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Proc. Syst. 2004, 38, 35–44. [Google Scholar] [CrossRef]
  52. Xiao, B.; Xu, Y.; Tang, H.; Bi, X.; Li, W. Histogram Learning in Image Contrast Enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops 2019, Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
Figure 1. (a) Example of the typical tone-mapping procedure. (b) Tone-mapping performance of the proposed study.
Figure 2. The left-most image is the input image, and the right-most image is the tone-mapped output. For ease of viewing, we present a normalized view of the exposure. Here, we estimate the exposure ratio [36] with our proposed adaptive parameter settings, and we improve the contrast stretching with the weight matrix extraction scheme of [2].
Figure 3. Example of over-brightness due to fixed parameters of the camera response function model. In (a–c), we can see that the overall saturation degrades significantly. Due to over-brightness, significant detail loss is present in (a,b). In (c), over-brightness leads to a cartoon-like effect.
Figure 4. Effects of the weight matrices and their respective outputs for the table lamp image. Weight matrix (a), from Equation (6), produced Image (b) with the brightest intensities and most vibrant colors; this property contradicts the naturalness of the input scene. Image (d) is slightly dimmer than Image (b) and slightly hazy compared to our Image (f). Among the shown images, the contrast resulting from Equation (7) appears the most natural. Image (g) shows the approximation performance of the weight matrices for a scan line from the input image. For Equation (5), the resulting scan line is least similar to the input scan line, making it less aware of the input image's spatial properties. For Equation (7), the resulting scan line is most similar to the input scan line, and Image (f) shows that its output has a more desirable contrast distribution than Images (b) and (d). (a) Equation (6), (c) Equation (5), (e) Equation (7).
Figure 5. Effect of parameters on tone mapping. The subfigures show the following scenarios: (a) p1 and p2 increase towards positive infinity, (b) p1 and p2 increase in magnitude and tend towards negative infinity, (c) p1 increases towards positive infinity and p2 tends towards negative infinity, (d) p2 is fixed and p1 increases towards positive infinity, (e) p1 is fixed and p2 tends towards negative infinity, (f) original parameters from [36], (g) proposed parameters α1 = 1 + σ, α2 = −α1/4, where σ is the standard deviation of the respective image, and (h) input HDR image. The tone-mapped image is brighter under the fixed parameters, and our adaptive settings produce a more realistic image than the fixed parameter settings do.
Figure 6. Comparison of tone-mapping methods. (a) LW [12], (b) WLS [28], (c) RoG [2], (d) L0 [50], (e) IRD [1], (f) L0–L1 [29], (g) Proposed study, (h) Input.
Figure 7. Comparison of tone-mapping methods. (a) LW [12], (b) WLS [28], (c) RoG [2], (d) L0 [50], (e) IRD [1], (f) L0–L1 [29], (g) Proposed study, (h) Input.
Figure 8. Comparison of tone-mapping methods. (a) LW [12], (b) WLS [28], (c) RoG [2], (d) L0 [50], (e) IRD [1], (f) L0–L1 [29], (g) Proposed study, (h) Input.
Figure 9. Comparison of tone-mapping methods. (a) LW [12], (b) WLS [28], (c) RoG [2], (d) L0 [50], (e) IRD [1], (f) L0–L1 [29], (g) Proposed study, (h) Input.
Figure 10. Comparison of tone-mapping methods. (a) LW [12], (b) WLS [28], (c) RoG [2], (d) L0 [50], (e) IRD [1], (f) L0–L1 [29], (g) Proposed study, (h) Input.
Figure 11. Image contrast enhancement comparison between the proposed method and other studies. (a) We select the most challenging portion of the image to demonstrate the contrast correction performance; our approach illuminates both the foreground and the background with desirable contrast. (b) Contrast performance on selected portions of the KODAK-24 true-color dataset. The proposed scheme illuminates the selected regions without distorting the overall content, whereas the compared methods show over-stretched contrast.
Table 1. Subjective evaluation of the compared methods.
Methods | Mean | Standard Deviation
LW [12] | 3.46 | 0.23
WLS [28] | 4.1 | 0.19
RoG [2] | 3.2 | 0.41
L0 [50] | 3.0 | 0.35
IRD [1] | 3.68 | 0.24
L0-L1 [29] | 4.46 | 0.17
Proposed study | 4.51 | 0.08
Table 2. TMQI evaluation of the compared methods.
Methods | TMQI | Fidelity | Naturalness
LW [12] | 0.8616 | 0.7982 | 0.4995
WLS [28] | 0.8571 | 0.8578 | 0.4815
RoG [2] | 0.8545 | 0.8689 | 0.5037
L0 [50] | 0.8679 | 0.8704 | 0.5122
IRD [1] | 0.8713 | 0.8636 | 0.5205
L0-L1 [29] | 0.8783 | 0.8423 | 0.5669
Proposed study | 0.9046 | 0.8619 | 0.5721
Table 3. Time analysis between the compared methods.
Methods | Time
LW [12] | 30.73 s
WLS [28] | 10.16 s
RoG [2] | 41.2 s
L0 [50] | 53.04 s
IRD [1] | 78.25 s
L0-L1 [29] | 8.73 s
Proposed study | 2.6 s
Table 4. Contrast performance between the compared methods.
Methods | MAE | SSIM | PSNR
LIME [49] | 0.0386 | 0.8512 | 36.14
CRF [36] | 0.0344 | 0.869 | 37.55
CLAHE [51] | 0.0739 | 0.783 | 32.07
Proposed study | 0.0216 | 0.882 | 38.71

Share and Cite

MDPI and ACS Style

Fahim, M.A.-N.I.; Jung, H.Y. Fast Single-Image HDR Tone-Mapping by Avoiding Base Layer Extraction. Sensors 2020, 20, 4378. https://doi.org/10.3390/s20164378

