Article

Enhancing Image Dehazing with a Multi-DCP Approach with Adaptive Airlight and Gamma Correction

Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7978; https://doi.org/10.3390/app14177978
Submission received: 30 July 2024 / Revised: 1 September 2024 / Accepted: 3 September 2024 / Published: 6 September 2024
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

Hazy images suffer from reduced clarity caused by atmospheric conditions such as dust or water vapor, which scatter light and produce blurred visuals with heightened brightness. Conventional methods employing the dark channel prior (DCP) for transmission map estimation often excessively amplify fogged sky regions, causing image distortion. This paper presents a novel approach to improve transmission map granularity by utilizing multiple 1 × 1 DCPs derived from multiscale hazy, inverted, and Euclidean difference images. An adaptive airlight estimation technique is proposed to handle low-light, hazy images. Furthermore, an adaptive gamma correction method is introduced to refine the transmission map further. Evaluation of dehazed images using the Dehazing Quality Index shows superior performance compared to existing techniques, highlighting the efficacy of the enhanced transmission map.

1. Introduction

Photographs taken in natural settings frequently encounter adverse conditions. Weather fluctuations, atmospheric contaminants, haze, fire, smoke, and dust can degrade image quality. Hazy images suffer from reduced visibility due to fog, dust scattering, and light attenuation, complicating object identification and color fidelity. Addressing the challenges posed by hazy images constitutes a significant area of research in image processing, with various approaches being explored [1,2,3,4,5,6,7,8,9,10,11,12]. Techniques for haze removal, such as Markov random field (MRF) [1], filtering [3,5], color analysis [2,7,9,10], transmission map estimation [4,6,8], and deep learning approaches [11,12], play pivotal roles in enhancing visibility compromised by haze in the resulting images.
He et al. [4] presented a seminal work on transmission map estimation using the dark channel prior (DCP) notion. This method capitalizes on the observation that, within local regions of outdoor images, at least one color channel typically exhibits markedly low intensity, which they term the dark channel. They noted that the dark channel tends to become brighter in the presence of haze. The algorithm effectively estimates and mitigates fog, leveraging this characteristic alteration and enhancing visibility and image clarity in outdoor settings.
However, in situations where the color of the target object closely matches atmospheric light and lacks shadows, the DCP exhibits constraints, potentially misidentifying such instances as haze. Additionally, during haze depth estimation, the transmission map may incorrectly interpret sky regions as heavily fogged, leading to inaccuracies. The restoration process could introduce image distortions in areas with significant fog due to over-correction. Moreover, utilizing a patch size of 15 × 15 for DCP computation might leave halo artifacts.
Various efforts have been made to enhance the DCP-based transmission map estimation approach [6,8,13]. These refinements are geared towards enhancing the precision of transmission map estimation. While techniques such as boundary constraints [6], multiscale Laplacian and Gaussian pyramids [8], and dual transmission map estimation [13] show promise, challenges persist in addressing night-time haze and mitigating halo artifacts.
Drawing from the insights of [4], we extend the discourse with fresh perspectives and innovations by proposing a new approach to image dehazing. We introduce a method for estimating enhanced transmission maps utilizing multiple 1 × 1 DCPs derived from multiscale hazy, inverted, and Euclidean difference images, which strives to enhance the granularity of transmission maps. To tackle the challenge posed by low-light, hazy images, we propose a new technique for airlight estimation. Furthermore, we introduce an adaptive gamma correction method to refine the transmission map further.
Our proposed method outperforms the next-best technique by achieving a significantly higher score of approximately 3.03 points on the Dehazing Quality Index (DHQI) [14] within a Real-World Task-Driven Testing Set (RTTS) [15].

2. Related Work

2.1. Markov-Random-Field-Based Approach

Tan [1] utilizes Markov random fields to enhance image contrast and model the change in airlight with distance. The method is applicable to both color and grayscale images and operates effectively without geometric information. While it has shown significant visibility improvement in real outdoor images, it may cause issues such as halo artifacts in areas with depth discontinuities.

2.2. Filtering-Based Approach

Filtering techniques to remove fog from images represent a pragmatic approach to enhancing visibility. These methods involve applying various filters to augment image clarity.
Tarel et al. [3] introduce a median filter to restore visibility in individual images. This method offers the advantage of controlling image enhancement through a small set of parameters, facilitating rapid processing of both color and grayscale images. Nonetheless, the outcomes of fog removal may not consistently meet aesthetic expectations.
Kaplan [5] proposes a single-image dehazing technique that integrates sharpening, smoothing filters, contrast enhancement, and exposure fusion to enhance visibility. While effective, this technique is susceptible to generating halo artifacts and excessively darkening shadows, particularly in densely foggy images.
Filtering techniques are crucial in mitigating the impact of fog and enhancing image visibility. However, it is equally imperative to acknowledge each method’s inherent limitations and relevant application contexts.

2.3. Color-Analysis-Based Approach

Numerous methods leveraging color information have emerged to enhance image visibility and quality across diverse environmental scenarios. Fattal [2] proposes a method centered on estimating the scene’s albedo and inferring medium transmission. However, its reliability diminishes in scenarios where the method necessitates a diversity in the independent components of local patches. Moreover, it is ineffective for grayscale images or heavily foggy conditions characterized by low variability or signal-to-noise ratios.
Conversely, Zhu et al. [7] utilize the Color Attenuation Prior (CAP) to devise a linear model for scene depth. Their approach aids in restoring luminance in fog-laden images, effectively mitigating fog and thereby significantly enhancing image quality in foggy environments.
Additionally, Bui et al. [9] propose a method using a color ellipsoid in RGB space for dehazing foggy images. This method simultaneously performs the calculation of the transmission map and image improvement through fuzzy segmentation, minimizing the occurrence of unnatural artifacts.
Narasimhan et al. [10] propose a method that combines color-based approaches and transmission map techniques to reconstruct the 3D structure of scenes under adverse weather conditions. This approach has the advantage of effectively restoring the color and structure of scenes in various weather conditions by simultaneously utilizing color changes and depth information.
While these methodologies demonstrate efficacy under specific conditions, Fattal’s method [2] may encounter challenges with low variability or signal-to-noise ratios. On the other hand, Zhu et al.’s [7] machine-learning-based approach demands substantial data for training, and inadequate data may result in ineffective haze removal. Furthermore, Bui et al.’s method [9] acknowledges the potential drawbacks of reduced boundary accuracy in fuzzy segmentation and the high computational complexity of the implementation. Narasimhan et al.’s method [10] may have limited effectiveness in cases of grayscale images or low color contrast, and it requires high complexity for transmission map computation.

2.4. Transmission Map Estimation Approach

Drawing from the seminal work of He et al. [4], advancements in the DCP approach to transmission map estimation have led to notable improvements. These refinements are geared towards enhancing the precision of transmission map prediction, consequently augmenting the efficacy of haze removal techniques.
Meng et al. [6] contributed to this refinement by introducing a novel boundary constraint for the transmission function and context regularization based on a weighted L1 Norm. Their approach treats the recovery of unknown transmission quantities as an optimization problem, yielding natural images with reduced ambiguity between color and depth and mitigating excessive contrast.
Meanwhile, Li et al. [8] propose a non-local dehazing method employing line averaging to reduce morphological artifacts. They construct a multiscale dehazing image model utilizing the Laplacian pyramid of the hazy image and the Gaussian pyramid of the transmission map. However, the effectiveness of this method hinges on the proper selection of the parameter η, as an inappropriate value may lead to suboptimal haze removal. Nonetheless, its advantages lie in artifact removal and simplicity, rendering it suitable for mobile device applications.
On another front, Ehsan et al. [13] present a single-image dehazing technique leveraging dual transmission estimation and gradient domain guided image filtering, enhancing brightness and ensuring fast computational times. However, this technique encounters challenges in effectively dehazing night-time images and may introduce halo artifacts.
These studies collectively introduce diverse methodologies for single-image haze removal, each contributing technically to the field and holding the potential for enhancing image restoration quality in real-world scenarios. While methodologies like boundary constraints [6] and multiscale approaches [8] demonstrate effectiveness, challenges persist in boundary detection accuracy, parameter selection, and applicability under specific conditions, such as night-time haze and halo artifact mitigation, despite computational efficiency [13].

2.5. Deep-Learning-Based Approach

Recent advancements in image dehazing and restoration have been propelled by the deep-learning-based approach, particularly leveraging Convolutional Neural Networks (CNNs) and Transformers [16]. These techniques, as noted in the literature [16], have significantly improved image clarity and texture preservation.
DeHamer [11] stands out in this domain by amalgamating CNNs, Swin Transformers [17], and 3D positional embedding with the dark channel prior (DCP). This integration aims to enhance clarity in foggy images, yielding superior results on benchmark datasets. Similarly, Restormer [12], a Transformer model designed for high-resolution image restoration, emphasizes global connections and detailed texture preservation. This model has demonstrated outstanding performance across various image restoration tasks.
Despite their effectiveness, deep-learning-based approaches necessitate extensive training data and encounter challenges associated with data biases. Addressing these issues remains crucial for further improving these models’ robustness and generalization capabilities.

3. Preliminary

He’s image dehazing method [4] revolves around generating a DCP from a hazy image, estimating the transmission map, and then recovering the scene to produce a dehazed image. The DCP is computed by selecting the minimum value among the RGB channels for each pixel and further refining it locally using a patch size of 15 × 15. The brightest pixel within this dark channel image is the basis for estimating the airlight present in the scene. The transmission map t(x), indicative of the ratio of light reaching the observer from objects within the scene, is estimated as follows:

t(x) = 1 − ω · min_c min_{y∈Ω(x)} ( I^c(y) / A^c )    (1)

where ω is a preservation constant, I^c(y) is the hazy image of color channel c, A^c is the estimated airlight of channel c, and Ω(x) is the patch centered on pixel x.
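To make the estimator concrete, the following NumPy sketch computes Equation (1). The function name and the brute-force window minimum are our own illustration under the stated assumptions (a float image in [0, 1]), not the authors' implementation.

```python
import numpy as np

def dcp_transmission(hazy, airlight, omega=0.95, patch=15):
    """Estimate t(x) from Eq. (1) for an H x W x 3 float image in [0, 1].

    A sketch of He et al.'s estimator; the brute-force local minimum is
    written for clarity, not speed.
    """
    # Normalize each channel by its airlight, then take the per-pixel
    # minimum over channels (the 1 x 1 dark channel of I / A).
    norm = hazy / np.asarray(airlight, dtype=float).reshape(1, 1, 3)
    dark = norm.min(axis=2)
    # Local minimum over a patch x patch window centered on each pixel.
    pad = patch // 2
    padded = np.pad(dark, pad, mode='edge')
    h, w = dark.shape
    local_min = np.empty_like(dark)
    for i in range(h):
        for j in range(w):
            local_min[i, j] = padded[i:i + patch, j:j + patch].min()
    return 1.0 - omega * local_min
```

Note that for a region whose color equals the airlight, the estimate collapses to 1 − ω, which is why ω < 1 preserves a trace of haze for depth perception.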

4. Proposed Method

Despite the widespread adoption of the DCP-based transmission map estimation approach for image dehazing [4] and its various extensions [6,8,13], these methods still exhibit notable limitations in image quality. This paper introduces an image dehazing method inspired by [4], as outlined in Algorithm 1. Our proposed method harnesses multiscale and multiple features to compute 1 × 1 DCPs for better transmission map estimation. Additionally, we incorporate an enhanced airlight estimation method and adaptive gamma correction to refine the transmission map, aiming to address the aforementioned image quality issues.
Algorithm 1 Enhanced Transmission Map with Multiscale Multiple Features DCP and Adaptive Gamma Correction
  • Input: I(x)—Hazy Image
  • Output: I_dz(x)—Dehazed Image
  • Procedure DehazingProcess(I(x))
  • Step 1: Resize I(x) to generate ×0.5 and ×0.25 versions of I(x).
  • Step 2: Estimate an airlight adaptively using Equation (6).
  •     ▹ Begin multiscale multiple features transmission map estimation
  •     Step 3: Generate three 1 × 1 DCPs based on I(x) and the ×0.5 and ×0.25 versions of I(x).
  •     Step 4: Estimate three intermediate transmission maps based on the DCPs and the airlight from Steps 2 and 3, followed by guided filtering.
  •     Step 5: Rescale each intermediate transmission map to the original size and average them, yielding the transmission map t_or.
  •     Step 6: Generate inverted and Euclidean difference images with Equations (2) and (3).
  •     Step 7: Repeat Steps 3 to 5 to yield t_rev and t_e.
  •     ◃ End multiscale multiple features transmission map estimation
  • Step 8: Utilizing t_or, t_rev, t_e, and the image I^e(x) derived from Equation (4), compute an enhanced transmission map t_eh using Equations (7)–(9).
  • Step 9: Refine t_eh by adaptive gamma correction using Equation (10).
  • Step 10: Recover the final image I_dz(x) based on t_eh from Step 9.

4.1. 1 × 1 Dark Channel Priors

This section delineates the derivation of the multiple 1 × 1 DCPs from multiscale hazy, inverted, and Euclidean difference images that constitute the core component of our proposed method.

4.1.1. Inverted and Euclidean Difference Images

Hazy images often display notable differentiation between low-light and bright regions, resulting in significant discrepancies during transmission map estimation. To address this issue, we propose employing image inversion to enhance low-light portions and darken bright ones, thereby alleviating distortions caused by haze. The image inversion is carried out channel-wise and defined as follows:

I_c^rev(x) = 1 − I_c(x)    (2)

where I_c^rev(x) is the inverted image.
To measure the disparity between the original and inverted images, we leverage the Euclidean difference image I_c^e(x), which is defined as follows:

I_c^e(x) = I_G(x) − I_c^s(x)    (3)

where I_G(x) is a grayscale image and I_c^s(x) is the stretched I_c^rev(x).
The Euclidean difference image improves the visibility of haze-affected regions and enhances transmission map estimation precision.
For a unified analysis, the Euclidean difference images from the R, G, and B channels are combined as follows:

I^e(x) = √( (I_R^e(x))² + (I_G^e(x))² + (I_B^e(x))² )    (4)

where I^e(x) is the aggregate Euclidean difference image, enhancing the estimation’s accuracy and reducing distortions in the haze removal process.
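Equations (2)–(4) reduce to a few array operations, sketched below. Since the paper does not specify the stretching operator or the grayscale conversion, a min-max stretch and Rec. 601 luma weights are assumed here; the helper name is ours.

```python
import numpy as np

def inverted_and_euclidean(hazy):
    """Build the inverted image (Eq. 2), channel-wise Euclidean
    difference images (Eq. 3), and their combination (Eq. 4).

    hazy: H x W x 3 float image in [0, 1].
    """
    inv = 1.0 - hazy                                   # Eq. (2)
    # Assumed min-max stretch of the inverted image to [0, 1].
    stretched = (inv - inv.min()) / (inv.max() - inv.min() + 1e-8)
    gray = hazy @ np.array([0.299, 0.587, 0.114])      # assumed luma grayscale
    diff = gray[..., None] - stretched                 # Eq. (3), per channel
    combined = np.sqrt((diff ** 2).sum(axis=2))        # Eq. (4)
    return inv, diff, combined
```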

4.1.2. Intermediate Transmission Maps

In He’s approach [4], DCP derived from the 15 × 15 patch size led to residual haze persisting around objects (halo artifacts). Conversely, DCP with 1 × 1 patch size focuses solely on pixel values, disregarding object boundaries and local patterns due to the absence of neighboring pixel information. Additionally, guided filters [18] coupled with multiscale processing are introduced to estimate a more precise transmission map.
Specifically, as depicted in Figure 1, a hazy image I(x) is first resized to ×0.5 and ×0.25 of its original size. Subsequently, three distinct dark channels with a patch size of 1 × 1 are computed from these two resized images and I(x). Intermediate transmission maps for each dark channel are estimated with (1), along with our proposed airlight estimation method (Section 4.2). A guided filter [18] is then applied to each intermediate transmission map. Following this, the scaled intermediate transmission maps are restored to match the dimensions of I(x) and aggregated through averaging to yield t_or(x).
The same procedure is iterated for the inverted image I_c^rev(x) and the channel-wise Euclidean difference images I_c^e(x), resulting in intermediate transmission maps t_rev(x) and t_e(x), respectively. Multiscale processing and averaging are also applied to the I^e(x) obtained in (4).
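Steps 3–5 above can be sketched as follows. Nearest-neighbor resizing and a plain box filter stand in for the paper's resizing method and the guided filter [18], so this is an approximation of the multiscale procedure, not the authors' code.

```python
import numpy as np

def box_blur(img, k=7):
    """Simple box filter used here as a stand-in for the guided filter."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def multiscale_transmission(hazy, airlight, omega=0.95):
    """1x1 DCPs at three scales, filtered, rescaled, and averaged into t_or."""
    h, w = hazy.shape[:2]
    maps = []
    for s in (1, 2, 4):                       # x1, x0.5, x0.25 scales
        small = hazy[::s, ::s]                # nearest-neighbor downscale
        norm = small / np.asarray(airlight, dtype=float).reshape(1, 1, 3)
        t = 1.0 - omega * norm.min(axis=2)    # Eq. (1) with a 1 x 1 patch
        t = box_blur(t)                       # edge-aware filter stand-in
        t = np.repeat(np.repeat(t, s, 0), s, 1)[:h, :w]  # back to full size
        maps.append(t)
    return np.mean(maps, axis=0)              # t_or
```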

4.2. Airlight Estimation

In [4], airlight estimation entails selecting the brightest top 1000 pixels, a method pivotal for distinguishing foreground from background in hazy images based on brightness. However, applying this technique to inherently low-light images can lead to excessive darkening of the images, thereby compromising the retention of original features and risking the loss of image details.
To resolve this challenge, we leverage the brightest and the darkest 1% of pixels to estimate the airlight. The process can be outlined as follows: First, calculate the average brightness of I(x), denoted as B̄, and normalize it to the range [0, 1] through division by 255. Second, determine the number of pixels, denoted as S, that corresponds to 1% of the total number of pixels in the image. Lastly, estimate the number of dark pixels as follows:

S_d = S × (0.1 × (1 − B̄))    (5)

where S_d is the dark pixel count; the bright pixel count S_b is obtained by subtracting S_d from S. We select the darkest S_d pixels I_d and the brightest S_b pixels I_b from the dark channel image.
The estimated airlight A_R can be determined from the following:

A_R = (1 / S) Σ_{i ∈ I_d ∪ I_b} P_i    (6)

where P_i represents the color of pixel i, and I_d and I_b are the index sets of the darkest and brightest pixels, respectively.
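A minimal sketch of the adaptive airlight of Equations (5) and (6), assuming the input image is already normalized to [0, 1] (so the division by 255 is not needed); the function name is ours.

```python
import numpy as np

def adaptive_airlight(hazy):
    """Adaptive airlight (Eqs. 5-6): average the colors of the darkest
    S_d and brightest S - S_d dark-channel pixels.

    hazy: H x W x 3 float image already in [0, 1].
    """
    dark = hazy.min(axis=2).ravel()            # 1x1 dark channel
    b_mean = hazy.mean()                       # B-bar, already in [0, 1]
    s = max(1, int(0.01 * dark.size))          # 1% of all pixels
    s_d = int(s * 0.1 * (1.0 - b_mean))        # Eq. (5): dark-pixel count
    s_b = s - s_d                              # remaining bright pixels
    order = np.argsort(dark)                   # darkest first
    idx = np.concatenate([order[:s_d], order[-s_b:]]) if s_d > 0 else order[-s_b:]
    pixels = hazy.reshape(-1, 3)[idx]          # colors P_i of the selected pixels
    return pixels.sum(axis=0) / s              # Eq. (6), per channel
```

Including a brightness-dependent share of dark pixels pulls the estimate down for low-light scenes, which is what prevents the excessive darkening described above.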

4.3. Enhanced Transmission Map

For image dehazing, precise identification of haze presence and intensity is paramount for enhancing image quality. By taking the product of t_rev and t_e, we aim to preserve authentic foreground features while enhancing image contrast, as depicted in Figure 2a. However, the product operation may lead to a darkening effect, necessitating a compensatory stretching process. Thus, t_fea is determined as follows (Figure 2b):

t_fea(x) = t_rev(x) · t_e(x)    (7)

Nonetheless, such contrast adjustment could potentially obscure or eliminate specific image details. Therefore, we take the average of t_fea and t_or, yielding t_avg(x), which results in darker structural elements and brighter hazy regions, as illustrated in Figure 2c:

t_avg(x) = ( t_fea(x) + t_or(x) ) / 2    (8)

While this operation aligns with hazy image characteristics, it comes at the expense of reduced structural intricacy. To address this issue, an enhanced transmission map t_eh(x), as shown in Figure 2d, is obtained by averaging I^e(x) with t_avg(x):

t_eh(x) = ( t_avg(x) + I^e(x) ) / 2    (9)
The averaging process enhances structural details and mitigates distortions in the sky region. Structures farther away are typically shrouded in denser fog, necessitating more extensive restoration efforts. Conversely, corrections applied to the sky region are relatively modest. Meanwhile, structures in closer proximity are presumed to encounter lighter fog, thus undergoing a more subdued correction process.
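Equations (7)–(9) amount to a few array operations; in the sketch below, the min-max stretch after the product is our reading of the "compensatory stretching process", which the paper does not define precisely.

```python
import numpy as np

def enhanced_transmission(t_or, t_rev, t_e, i_e):
    """Combine the intermediate maps and I^e(x) into t_eh (Eqs. 7-9).

    All inputs are H x W float arrays in [0, 1].
    """
    t_fea = t_rev * t_e                                    # Eq. (7), product
    # Assumed min-max stretch to undo the darkening of the product.
    t_fea = (t_fea - t_fea.min()) / (t_fea.max() - t_fea.min() + 1e-8)
    t_avg = 0.5 * (t_fea + t_or)                           # Eq. (8)
    return 0.5 * (t_avg + i_e)                             # Eq. (9)
```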

4.4. Adaptive Gamma Correction

Low-valued transmission maps may lead to distortions in dehazed images, potentially causing darkening. For example, in Figure 2d, low pixel values indicate a risk of the dehazed image appearing similarly dark if not corrected. Adjusting the transmission map is necessary to address this issue and achieve proper brightness correction.
Gamma correction is a remedy that adjusts pixel values in an image to compensate for the non-linear way human eyes perceive light, via a γ parameter [19]. As shown in Figures 3 and 4, increasing γ darkens images and gradually reduces fog, whereas decreasing γ retains more fog. An improper choice of γ may result in insufficient darkening or ineffective fog removal; the selection should therefore reflect the characteristics of the foggy image. Instead of manual adjustment, we propose an adaptive approach that determines γ from the maximum and minimum values of the enhanced transmission map t_eh(x):

γ = ( 1 − (α + min(t_eh(x))) ) / max(t_eh(x)) + max(t_eh(x)) · ( 1 − (α + min(t_eh(x))) )    (10)

To prevent the maximum and minimum values from being saturated, α is set to 0.1.
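A sketch of Equation (10) and its application; applying the correction as t_eh^γ is our assumption of how the refined map is obtained, as the paper states only that γ adjusts the transmission map.

```python
import numpy as np

def adaptive_gamma(t_eh, alpha=0.1):
    """Adaptive gamma (Eq. 10) applied to the enhanced transmission map.

    Returns the refined map (assumed form t_eh ** gamma) and gamma itself.
    """
    t_min, t_max = t_eh.min(), t_eh.max()
    k = 1.0 - (alpha + t_min)                  # shared factor in Eq. (10)
    gamma = k / t_max + t_max * k              # Eq. (10)
    return np.power(t_eh, gamma), gamma
```

For a map spanning roughly [0.1, 0.9], γ comes out above 1, darkening the map and strengthening haze removal, consistent with the behavior described for Figures 3 and 4.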

4.5. Image Dehazing

Throughout the haze removal process, the hazy image I(x) is traversed across its color channels together with the estimated airlight A_R and t_eh(x). A dehazed image I_dz(x) can be obtained via the following:

I_dz(x) = ( I(x) − A_R ) / max( t_eh(x), t_0 ) + A_R    (11)

where t_0 is a threshold to preserve the transmission map, set to 0.1 in this paper. The dehazing process mitigates the influence of airlight in each channel, restoring the lost color and brightness of the hazy image.
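The recovery of Equation (11) can be written directly in NumPy; the clamp max(t_eh(x), t_0) is what prevents division blow-up where the estimated transmission is near zero.

```python
import numpy as np

def recover(hazy, airlight, t_eh, t0=0.1):
    """Scene radiance recovery (Eq. 11), applied channel by channel.

    hazy: H x W x 3 float image; airlight: per-channel A_R;
    t_eh: H x W enhanced transmission map.
    """
    a = np.asarray(airlight, dtype=float).reshape(1, 1, 3)
    t = np.maximum(t_eh, t0)[..., None]        # clamp, broadcast over channels
    return (hazy - a) / t + a
```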

5. Evaluations

The proposed method is assessed using the Real-World Task-Driven Testing Set (RTTS) obtained from the Realistic Single Image Dehazing (RESIDE) benchmark [15]. The RTTS is specifically curated to consist solely of images depicting natural haze, ensuring the authenticity and relevance of the evaluation. This dataset comprises 4322 images characterized by diverse dimensions and a wide range of haze intensities and environmental conditions.
In our evaluation, we utilize the Dehazing Quality Index (DHQI) [14], a metric designed to assess the effectiveness of dehazing algorithms specifically on images affected by fog or hazy weather, rather than evaluating overall noise reduction.

5.1. Comparisons with Existing Works

As depicted in Figure 5, our method effectively removes haze from the image while preserving the integrity of the sky region. Furthermore, as demonstrated in Table 1, our proposed approach outperforms the second-highest scorer, the method of Tarel and Hautiere [3], by approximately 3.03 in DHQI.
A comparison between our method and that of He et al. [4] reveals significant disparities. He et al.’s approach leads to color distortion within the sky region and excessive removal of haze around objects, characterized by visible white bands, as depicted in Figure 6. In contrast, our method exhibits no such distortions in the sky and avoids the formation of band-like haze around objects, as detailed in Table 1.
While deep-learning-based approaches have shown success in various applications, including image dehazing [11] and restoration [12], they do not demonstrate a clear advantage in realistic hazy image datasets such as the RTTS and image dehazing metric DHQI compared to our method. Deep-learning-based methods often rely on the training dataset, leading to a performance influenced by the data-driven approach.
In contrast, our proposed method directly performs adaptive dehazing from the original image, yielding consistent results under various conditions. This method also possesses the capability to analyze the image’s state in real time and execute optimal dehazing without depending on specific training data.

5.2. Ablation Study

In this section, we conduct an ablation study to assess the impact of the various components introduced in our method on dehazing performance. Table 2 shows that the proposed enhanced transmission map generated from inverted and Euclidean difference images contributes most significantly, outperforming He’s approach [4] by 7.59 points despite the absence of guided filtering, adaptive A_R estimation, multiscale processing, and adaptive gamma correction.
In Figure 7c, residual haze can be observed surrounding objects, suggesting additional refinement is necessary. We employed a 1 × 1 DCP alongside a guided filter to tackle this issue. While this approach led to a marginal decrease of approximately three points in the DHQI, as shown in Table 2, the integration of adaptive gamma correction yielded an approximately one point improvement. Moreover, incorporating multiscale processing and adaptive A_R estimation contributed further performance enhancement, as illustrated in Figure 7d.
Figure 8 visually demonstrates the efficacy of employing a 1 × 1 DCP and guided filter combination for more precise and detailed dehazing, consequently improving object detection performance. The contrast between using He et al. [4] and our 1 × 1 DCP and guided filter is highlighted in Figure 8b and Figure 8c, respectively. Additionally, as depicted in Figure 8d, a multiscale approach boosts object detection confidence.
Ultimately, integrating all components, our method achieves a notable numerical improvement of approximately nine points over the approach by He et al. [4].

5.3. Discussion

Our approach excels in images with light-to-moderate haze, ensuring that object features remain clearly visible. However, as demonstrated in row #4 of Figure 9l, when haze becomes too dense, there is a heightened risk of losing structural details, leading to incomplete haze removal. This limitation is particularly noticeable with distant structures heavily obscured by thick haze, where the fine details become challenging to recover, making it difficult to remove the haze effectively while preserving the clarity of objects. Furthermore, as shown in row #7 of Figure 9l, using a guided filter can occasionally result in a stair-step effect in sky regions with sharp color transitions.
Thus, the most suitable images for this method are those with moderate haze, where the objects remain distinguishable. Achieving effective dehazing without compromising object visibility becomes increasingly challenging when the haze becomes too thick.

6. Conclusions

This paper introduces a novel method for image dehazing employing the DCP-based transmission map estimation technique. Our approach incorporates several innovative elements to derive an enhanced transmission map: multiple 1 × 1 DCPs derived from multiscale hazy, inverted, and Euclidean difference images, guided filtering, adaptive airlight estimation, and adaptive gamma correction. The proposed method outperformed He’s method by 9 points and achieved a 3.03-point-higher DHQI score than the second-best approach. Additionally, it significantly reduced distortion in the sky region, surpassing existing state-of-the-art techniques. However, effectively eliminating highly dense haze remains a persistent challenge, highlighting the need to explore further fog removal techniques tailored to diverse environmental conditions.

Author Contributions

Conceptualization, J.K.; methodology, J.K.; software, J.K.; validation, T.-S.N.; formal analysis, J.K.; investigation, J.K.; resources, J.K. and A.B.J.T.; data curation, J.K. and T.-S.N.; writing—original draft preparation, J.K.; writing—review and editing, J.K., T.-S.N. and A.B.J.T.; visualization, J.K.; supervision, A.B.J.T.; project administration, A.B.J.T.; funding acquisition, A.B.J.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (no. NRF-2022R1A2C1010710).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This work involves one publicly available dataset, the RTTS: https://sites.google.com/view/reside-dehaze-datasets/reside-standard?authuser=0 (accessed on 12 December 2017) [15].

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–8. [Google Scholar]
  2. Fattal, R. Dehazing using color-lines. ACM Trans. Graph. (TOG) 2014, 34, 1–14. [Google Scholar] [CrossRef]
  3. Tarel, J.P.; Hautiere, N. Fast visibility restoration from a single color or gray level image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 27 September–4 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 2201–2208. [Google Scholar]
  4. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar] [PubMed]
  5. Kaplan, N.H. Real-world image dehazing with improved joint enhancement and exposure fusion. J. Vis. Commun. Image Represent. 2023, 90, 103720. [Google Scholar] [CrossRef]
  6. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient image dehazing with boundary constraint and contextual regularization. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 617–624. [Google Scholar]
  7. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [PubMed]
  8. Li, Z.; Shu, H.; Zheng, C. Multi-scale single image dehazing using laplacian and gaussian pyramids. IEEE Trans. Image Process. 2021, 30, 9270–9279. [Google Scholar] [CrossRef] [PubMed]
  9. Bui, T.M.; Kim, W. Single Image Dehazing Using Color Ellipsoid Prior. IEEE Trans. Image Process. 2018, 27, 999–1009. [Google Scholar] [CrossRef] [PubMed]
  10. Narasimhan, S.G.; Nayar, S.K. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254. [Google Scholar] [CrossRef]
  11. Guo, C.L.; Yan, Q.; Anwar, S.; Cong, R.; Ren, W.; Li, C. Image dehazing transformer with transmission-aware 3d position embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5812–5820. [Google Scholar]
  12. Chen, X.; Wan, Y.; Wang, D.; Wang, Y. Image Deblurring Based on an Improved CNN-Transformer Combination Network. Appl. Sci. 2022, 13, 311. [Google Scholar] [CrossRef]
  13. Ehsan, S.M.; Imran, M.; Ullah, A.; Elbasi, E. A single image dehazing technique using the dual transmission maps strategy and gradient-domain guided image filtering. IEEE Access 2021, 9, 89055–89063. [Google Scholar] [CrossRef]
  14. Min, X.; Zhai, G.; Gu, K.; Yang, X.; Guan, X. Objective quality evaluation of dehazed images. IEEE Trans. Intell. Transp. Syst. 2018, 20, 2879–2892. [Google Scholar] [CrossRef]
  15. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking single-image dehazing and beyond. IEEE Trans. Image Process. 2018, 28, 492–505. [Google Scholar] [CrossRef] [PubMed]
  16. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. [Google Scholar]
17. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
  18. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  19. Huang, S.C.; Cheng, F.C.; Chiu, Y.S. Efficient Contrast Enhancement Using Adaptive Gamma Correction with Weighting Distribution. IEEE Trans. Image Process. 2013, 22, 1032–1041. [Google Scholar] [CrossRef] [PubMed]
20. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overview of the proposed method. Our method comprises three main modules: multiscale, multi-feature 1 × 1 DCP computation for enhanced transmission map estimation; airlight estimation; and adaptive gamma correction.
Figure 2. Illustrations of the transmission maps. (a) The product of t_rev(x) and t_e(x) preserves foreground features while improving image contrast. (b) t_fea(x) addresses the darkening issue of (a) by stretching. (c) The average of t_fea(x) and t_or(x). (d) The enhanced transmission map, in which averaging I_e(x) and t_avg(x) removes sky distortion and restores structural details.
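The fusion pipeline described in the Figure 2 caption reduces to elementwise operations on candidate transmission maps. The sketch below (NumPy) is an illustrative reconstruction, not the exact implementation: the map names follow the caption, the stretch in step (b) is assumed to be a min-max stretch, and random arrays stand in for the maps produced by the multi-DCP stage.

```python
import numpy as np

def stretch(t, eps=1e-6):
    """Min-max contrast stretch to [0, 1], countering over-darkening."""
    return (t - t.min()) / (t.max() - t.min() + eps)

# Toy stand-ins for the candidate maps (in practice from the multi-DCP step).
rng = np.random.default_rng(0)
t_rev, t_e, t_or, I_e = (rng.random((4, 4)) for _ in range(4))

t_prod = t_rev * t_e        # (a) product: keeps foreground, boosts contrast
t_fea = stretch(t_prod)     # (b) stretching fixes the darkening of (a)
t_avg = (t_fea + t_or) / 2  # (c) average with the original-image map
t_enh = (I_e + t_avg) / 2   # (d) enhanced map: suppresses sky distortion
```

Because all maps lie in [0, 1], the product in (a) can only darken, which is why the stretch in (b) is needed before averaging.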
Figure 3. The histogram distribution changes with γ: as γ increases, pixel values decrease; as γ decreases, pixel values intensify.
Figure 4. Transmission maps and the resulting images for different values of γ. As γ increases, structures become clearer, yet overly high values lead to excessive darkness; as γ decreases, haze persists.
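The behavior in Figures 3 and 4 follows from the standard power-law mapping applied to intensities normalized to [0, 1]. A minimal sketch of plain gamma correction (illustrative only; it does not reproduce the paper's adaptive scheme for choosing γ):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Power-law mapping on an image normalized to [0, 1].

    gamma > 1 pushes pixel values down (darker, stronger haze suppression);
    gamma < 1 pushes them up (brighter, residual haze remains).
    """
    return np.clip(img, 0.0, 1.0) ** gamma

pixels = np.array([0.2, 0.5, 0.8])
dark = gamma_correct(pixels, 2.0)    # every value decreases
bright = gamma_correct(pixels, 0.5)  # every value increases
```

Applied to the transmission map rather than the image itself, the same mapping shifts the map's histogram exactly as Figure 3 depicts.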
Figure 5. (a) Original hazy images, (b) Fattal [2], (c) Tarel et al. [3], (d) He et al. [4], (e) Kaplan [5], (f) Meng et al. [6], (g) Zhu et al. [7], (h) Li et al. [8], (i) Ehsan et al. [13], (j) DeHamer [11], (k) Restormer [12], and (l) our proposed method.
Figure 6. (a) Original hazy image. (b) Transmission map from He et al. [4]. (c) Transmission map from our proposed method. (d) Dehazed image from He et al. [4]. (e) Dehazed image from our proposed method.
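For context, the dehazed outputs in Figure 6 come from inverting the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)) with the estimated transmission map t and airlight A. A minimal sketch, with the lower bound t0 = 0.1 assumed (the common choice in He et al. [4], not necessarily the value used here):

```python
import numpy as np

def recover_radiance(I, t, A, t0=0.1):
    """Invert I = J*t + A*(1 - t) for J, clamping t to avoid noise blow-up."""
    t = np.maximum(t, t0)[..., None]  # broadcast over color channels
    return (I - A) / t + A

# Toy round-trip: a hazy pixel synthesized from known J, t, A is recovered.
J_true = np.array([[[0.3, 0.6, 0.2]]])   # 1x1 RGB scene radiance
A = np.array([0.9, 0.9, 0.9])            # airlight
t = np.array([[0.5]])                    # transmission
I = J_true * t[..., None] + A * (1 - t[..., None])
J_est = recover_radiance(I, t, A)
```

The clamp t0 matters precisely in the sky regions Figure 6 highlights: where t approaches zero, the division would otherwise amplify noise and cause the distortion the enhanced map is designed to avoid.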
Figure 7. (a) Result using t_eh(x) with 15 × 15 patches. (b) Result using t_eh(x) with 1 × 1 patches and a guided filter. (c,d) Zoomed-in versions of (a,b), respectively.
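Figure 7 contrasts the conventional 15 × 15 patch dark channel with the per-pixel (1 × 1) variant that is then refined with a guided filter [18]. A sketch of the two dark-channel computations (NumPy/SciPy; the guided-filter step is omitted):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel prior: min over RGB, then min over a patch x patch window.

    With patch=1 this reduces to the per-pixel channel minimum, which keeps
    fine structure but needs edge-aware smoothing (e.g., a guided filter)
    before use in a transmission map.
    """
    channel_min = img.min(axis=2)
    if patch == 1:
        return channel_min
    return minimum_filter(channel_min, size=patch)

img = np.random.rand(32, 32, 3)
dc15 = dark_channel(img, 15)  # blocky: halos around depth edges (Fig. 7a)
dc1 = dark_channel(img, 1)    # per-pixel: sharp but noisy (Fig. 7b, pre-filter)
```

Since the windowed minimum can only be at most the per-pixel minimum, the 15 × 15 map is everywhere darker and coarser, which is the halo artifact the 1 × 1 variant avoids.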
Figure 8. Object detection results using a pretrained Faster R-CNN [20]. (a) Hazy image I(x). (b) He et al.'s method [4]. (c) Our method with a 1 × 1 patch and a guided filter. (d) The full proposed method.
Figure 9. (a) Original image, (b) Fattal [2], (c) Tarel et al. [3], (d) He et al. [4], (e) Kaplan [5], (f) Meng et al. [6], (g) Zhu et al. [7], (h) Li et al. [8], (i) Ehsan et al. [13], (j) DeHamer [11], (k) Restormer [12], and (l) our proposed method. The DHQI value for each dehazed image is given.
Table 1. Comparisons with various competing image dehazing methods. Average DHQI (higher is better) on the RTTS dataset [15].

| RTTS [15] | Fattal [2] | Tarel et al. [3] | He et al. [4] | Kaplan [5] | Meng et al. [6] | Zhu et al. [7] | Li et al. [8] | Ehsan et al. [13] | DeHamer [11] | Restormer [12] | Ours |
| Average DHQI ↑ | 46.62 | 53.86 | 47.91 | 41.79 | 46.37 | 50.03 | 53.31 | 45.34 | 44.37 | 47.37 | 56.89 |
Table 2. Ablation study of the impacts of different components in the proposed method.

| Transmission Map | Patch Size | Airlight Estimation | Multiscale | Adaptive Gamma Correction | DHQI |
| t_or(x) | 15 × 15 * | Original | X | X | 47.91 |
| t_eh(x) | 15 × 15 * | Original | X | X | 55.07 |
| t_eh(x) | 15 × 15 * | Improved | X | X | 55.52 |
| t_eh(x) | 15 × 15 * | Original | X | O | 55.50 |
| t_eh(x) | 15 × 15 * | Improved | X | O | 55.92 |
| t_eh(x) | 1 × 1 † | Original | X | X | 48.94 |
| t_eh(x) | 1 × 1 † | Improved | X | X | 52.74 |
| t_eh(x) | 1 × 1 † | Original | X | O | 56.40 |
| t_eh(x) | 1 × 1 † | Improved | X | O | 56.68 |
| t_eh(x) | 1 × 1 † | Improved | O | O | 56.89 |

* without guided filter; † with guided filter.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Kim, J.; Ng, T.-S.; Teoh, A.B.J. Enhancing Image Dehazing with a Multi-DCP Approach with Adaptive Airlight and Gamma Correction. Appl. Sci. 2024, 14, 7978. https://doi.org/10.3390/app14177978

