Article

Robust Single-Image Dehazing

Changwon Kim
Korean Intellectual Property Office, Daejeon 35208, Korea
Electronics 2021, 10(21), 2636; https://doi.org/10.3390/electronics10212636
Submission received: 21 September 2021 / Revised: 23 October 2021 / Accepted: 26 October 2021 / Published: 28 October 2021
(This article belongs to the Section Computer Science & Engineering)

Abstract

This paper proposes a new single-image dehazing method, an important preprocessing step in vision applications, that overcomes the limitations of the conventional dark channel prior. The dark channel prior tends to underestimate the transmission of bright regions or objects, which can generate color distortions during dehazing. To suppress these distortions in a large sky area or on a bright white object, sky probabilities and white-object probabilities (the latter calculated in the non-sky area) are proposed. The sky area is detected by combining the advantages of region-based and boundary-based sky segmentation in order to handle the various sky shapes found in road scenes. The performance of the proposed method is evaluated using synthetic and real-world datasets. Compared to conventional methods in the reviewed literature, the proposed method produces significant improvements on both visual and numerical criteria.

1. Introduction

Recently, intelligent vision systems in fields such as advanced driver assistance systems (ADAS) and surveillance monitoring have advanced rapidly, and much research is being conducted on them. The accuracy and performance of these systems are highly dependent on the quality of the input image. Unlike indoor environments, outdoor scenes are affected by weather conditions such as haze, rain, and dust. Haze often causes low contrast and limited visibility in outdoor images [1], which degrades the precision of object recognition [2]. To solve this problem, many dehazing methods have been proposed, and they can be grouped into multi-image-based approaches and single-image-based approaches.
The multi-image-based approach needs additional information for dehazing, such as polarization filters, multiple images, weather conditions, or depth. Schechner et al. [3] used a polarized camera and took multiple photos of the same scene with different polarizations to obtain scene depth. For haze removal, Narasimhan et al. [4] obtained the 3D structure of the hazy scene by employing two or more images captured under different weather conditions. Kopf et al. [5] used a 3D model of the scene, providing depth and texture information, for dehazing. Liang et al. [6] proposed a dehazing algorithm that fuses infrared and visible images to improve the visual quality of hazy images. A fusion-based strategy using two inputs derived from the original hazy image was proposed for contrast enhancement [7].
The single-image-based approach has recently received more attention than the multi-image-based approach because of its relative simplicity, low cost, and efficiency, since it does not require additional reference information. Fattal [8] estimated the medium transmission based on the fact that the transmission and scene albedo are locally uncorrelated. Tan [9] maximized the local contrast of hazy images to dehaze a single image. A median filter and adaptive tone mapping were used to obtain haze-free images in [2]. Boundary constraints and contextual regularization were proposed by Meng et al. [10] for single-image dehazing. Fattal [11] introduced the color-line prior based on the observation that pixels in small image patches typically exhibit a one-dimensional distribution in the RGB color space. A non-local approach that uses changes in pixel values to estimate the scene depth and atmospheric light was proposed in [12]. Zhu et al. [13] introduced the color attenuation prior for haze removal, which is used to estimate the scene depth.
Inspired by the success of convolutional neural networks (CNNs) in image processing and computer vision tasks, CNNs have recently been applied to haze removal. Ren et al. [14] used a multi-scale CNN, where the network is trained to estimate the transmission map of the hazy image in a coarse-to-fine manner. Cai et al. [15] proposed DehazeNet, a four-layer deep CNN model specially designed to perform image dehazing. An end-to-end CNN-based design (AOD-Net) was proposed by Li et al. [16] by reformulating the physical scattering model. Dehazing was regarded as an image-to-image translation task, and an enhanced Pix2pix network was proposed to handle it in [17]. Fu et al. [18] employed a multi-feature-based bilinear CNN in order to reduce the halo effect around abrupt edges and restrain image noise. Zhang and Patel [19] proposed the densely connected pyramid dehazing network, which can simultaneously estimate the scene depth and atmospheric light. Liu et al. [20] restored clear images from hazy images using a multi-scale network based on an attention mechanism.
Among single-image-based approaches, the dark channel prior (DCP) is the most popular for its simple principle and excellent results [21]. However, DCP has two limitations: (1) high computational complexity due to the soft matting algorithm and (2) its invalidity for a large sky area or a bright white object. To overcome these drawbacks, He et al. [22] replaced the soft matting algorithm with a guided filter, which has edge-preserving characteristics, to refine the transmission map. Wang et al. [23] removed the haze by estimating the transmission maps of the sky and non-sky areas separately and combining them into a refined transmission map for haze removal. Liu et al. [24] used an SVM classifier to detect large sky areas and adaptively calculated the dark channel for dehazing using a multiscale opening dark channel model. In [25], quad-tree-splitting-based feature pixel detection is used to segment the sky region, and a transmission map for the sky is then calculated. Zhang et al. [26] used saliency detection to distinguish white objects and obtain a correct transmission map.
In this article, a robust single image dehazing method for vision application is proposed to overcome the limitations of the DCP. The main contributions are:
  • A novel sky detection method that combines the advantages of region-based and boundary-based sky segmentation is proposed to detect skies of various shapes, taking into account the characteristics of road scenes.
  • Sky and white-object probabilities in local patches are introduced to prevent distortions in a large sky area or a bright white object.
As a result, the proposed method recovers haze-removed images while reducing over-saturated areas.
The rest of the paper is structured as follows. Section 2 briefly reviews the related works. Section 3 proposes a novel sky detection method and single-image haze removal based on the sky and white-object probabilities. Section 4 evaluates the performance of the proposed method by analyzing the subjective quality and objective metrics. Finally, Section 5 concludes the paper.

2. Related Work

An atmospheric scattering model explaining the formation of hazy images is expressed as
$$I(x) = t(x)\,J(x) + \big(1 - t(x)\big)A$$
where I is the hazy image, x is the spatial image index, t(x) is the transmission map, J(x) is the scene radiance, and A represents the global atmospheric light RGB vector.
Figure 1 illustrates the atmospheric scattering model of Equation (1). According to the Lambert–Beer law, under homogeneous atmospheric conditions, t(x) can be expressed as [9]
$$t(x) = e^{-\beta d(x)}$$
where β is the attenuation coefficient of the atmosphere and d(x) is the depth between the objects and the camera. Assuming β is constant, t(x) takes a value between 0 and 1 that reflects the relative depth of the scene. By simply rearranging Equation (1), J(x) is restored by
$$J(x) = \frac{I(x) - A}{t(x)} + A$$
However, Equation (3) is an ill-posed problem because of the unknown variables t(x) and A. Therefore, the accurate calculation of t(x) and A is essential for the performance of algorithms based on the atmospheric scattering model. One of the most widely used methods to compute these unknown variables for image dehazing is the dark channel prior.
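As a concrete illustration of how Equation (3) is applied once t(x) and A are available, the following NumPy sketch inverts the scattering model; the clipping range and the lower bound t_min are illustrative assumptions rather than values prescribed by the model.

```python
import numpy as np

def recover_radiance(I, t, A, t_min=0.1):
    """Invert the scattering model: J = (I - A) / t + A.

    I     : hazy image as float in [0, 1], shape (H, W, 3)
    t     : transmission map, shape (H, W)
    A     : atmospheric light, shape (3,)
    t_min : assumed lower bound on t to avoid amplifying noise in dense haze
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # broadcast over the RGB channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```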

2.1. Dark Channel Prior

The dark channel prior is based on the empirical observation that, in natural haze-free images, the minimum of local RGB patches is very close to zero. The dark channel is defined as follows [21]:
$$J^d(x) = \min_{y \in \Omega_x}\left(\min_{c \in \{R,G,B\}} J^c(y)\right) \rightarrow 0$$
where J^c is a color channel of J and Ω_x is a local patch centered at x. The atmospheric light A is calculated by choosing the highest-intensity pixels from the top 0.1% brightest pixels of the dark channel of the hazy image. Finally, the transmission t(x) can be calculated from Equations (1) and (4) with a constant factor:
$$t(x) = 1 - \omega \min_{y \in \Omega_x}\left(\min_{c \in \{R,G,B\}} \frac{I^c(y)}{A^c}\right)$$
where ω is the constant factor to control the haze removal rate and generally takes a value of 0.95.
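A minimal sketch of the dark channel, atmospheric light, and transmission estimates described above is given below; the 15-pixel patch size is an assumed value, and the function names are hypothetical rather than taken from any reference implementation.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB followed by a local minimum filter (the dark channel)."""
    return minimum_filter(img.min(axis=2), size=patch)

def atmospheric_light(img, dark):
    """Brightest pixel among the top 0.1% of dark-channel values, as described above."""
    n = max(1, int(dark.size * 0.001))
    idx = np.argpartition(dark.reshape(-1), -n)[-n:]     # top 0.1% dark-channel pixels
    candidates = img.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]

def transmission(img, A, omega=0.95, patch=15):
    """t(x) = 1 - omega * dark_channel(I / A), with omega = 0.95 as in the text."""
    return 1.0 - omega * dark_channel(img / A, patch)
```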

2.2. Sky Detection

There are primarily two categories of sky-detection algorithms. One is to find the pixels belonging to the sky (a region-based approach) and the other is to find the sky–ground boundary (a boundary-based approach).

2.2.1. A Region-Based Approach

Color is considered the most accurate and thus the most powerful feature for sky detection. A region-based approach that classifies pixels by their RGB values was proposed for sky detection in [27]. If the red and green values are close to each other and both are smaller than the blue value, the color lies in the blue range:
$$|R - G| < T_1 \;\;\&\&\;\; |G - B| < T_2 \;\;\&\&\;\; B > R \;\;\&\&\;\; B > G \;\;\&\&\;\; T_3 < B < T_4$$
where T_1, T_2, T_3, and T_4 are predefined thresholds.
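A pixel-wise sketch of this color test might look as follows; the threshold values are placeholders, since the cited method [27] sets them empirically.

```python
import numpy as np

def blue_sky_mask(img, T1=0.08, T2=0.08, T3=0.35, T4=1.0):
    """Apply the blue-range test to an RGB image scaled to [0, 1].

    The thresholds T1-T4 are illustrative assumptions, not values from [27].
    """
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    return ((np.abs(R - G) < T1) & (np.abs(G - B) < T2)
            & (B > R) & (B > G) & (T3 < B) & (B < T4))
```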

2.2.2. A Boundary-Based Approach

The sky area is detected under the assumption that its luminance changes smoothly and that it lies above the ground area. The boundary-based approach segments the image into sky and ground and determines the sky boundary location by maximizing an energy function based on gradient information [28]. The sky boundary position function is defined as
$$1 \le b(x) \le H \qquad (1 \le x \le W)$$
where W is the width and H is the height of the image, while b(x) determines the location of the sky boundary in the x-th column. By using the sky boundary position, sky and ground regions are defined as follows:
$$sky = \{(x, y) \mid 1 \le x \le W,\; 1 \le y \le b(x)\}$$
$$ground = \{(x, y) \mid 1 \le x \le W,\; b(x) \le y \le H\}$$
A parameter t , which is a threshold value for comparison with the gradient value, is used to calculate the sky boundary position function b(x).

3. Proposed Method

Figure 2 shows the flow diagram of the proposed dehazing approach. The method starts with a superpixel method [29] so that each image patch accurately represents the depth information of the scene. Then sky area detection is performed using the dark channel along with edge and gradient information. The sky probabilities of local patches are calculated and the atmospheric light is estimated in the detected sky area in Section 3.1. The white-object probabilities of local patches are calculated in the non-sky area in Section 3.2. Finally, the transmission t(x) is estimated and refined by taking the sky and white-object probabilities into account, and the haze-removed image is then obtained.

3.1. Sky Probability and Atmospheric Light

The proposed method is based on assumptions about the sky area in hazy outdoor images: the dark channel in the sky region is comparatively higher than in other regions; the luminance of the sky region changes smoothly (i.e., the gradient is low); and the sky area is above the ground area, because the target application is outdoor vision.
As discussed in Section 2.2, a region-based approach and a boundary-based approach are mainly used for sky detection. As shown in Figure 3, a region-based approach can detect any shape of sky, but a predefined color center for the sky is essential. A boundary-based approach, on the other hand, is independent of the sky color but cannot handle different shapes of the sky. The assumption about the dark channel is related to the region-based approach, while the assumptions about smooth changes within the sky region and about the location of the sky region are directly related to the boundary-based approach. Based on these assumptions, a novel method is proposed that combines the strengths of the two categories of algorithms while avoiding their shortcomings, by taking into account the characteristics of each approach.

3.1.1. Sky Region Detection with Region-Based Approach

The most important thing for the region-based approach is to determine a threshold for separating the foreground and the background. Among the various methods of finding the threshold, the widely used OTSU method [30] is adopted for performing automatic image thresholding. However, as can be seen in Figure 4, the performance of the OTSU method is limited because small object sizes, small average differences between the foreground and the background pixels, and large deviations between pixels belonging to objects and pixels belonging to the background can lead to false thresholds [31].
If the OTSU method is performed on selected patches that may contain the sky boundary, a more accurate threshold can be determined, since the portion of the sky area in patches containing the sky boundary tends to be larger and the average difference between the sky and the background becomes bigger compared to the whole image. Therefore, to improve the performance of the OTSU method for sky detection, image patches with a possible sky boundary are selected from the hazy input image. First, the hazy image is evenly divided into multiple rectangular patches (e.g., a 2 × 3 grid of patches is used here, but the method is not limited to this configuration). For each patch F_{i,j}, the mean dark channel value and the edge strength value are calculated.
$$\bar{D}(F_{i,j}) = \frac{\sum_{(p,q) \in F_{i,j}} I^d(p,q)}{N_{i,j}}$$
where I^d(p,q) is equal to the dark channel value I^d(x) when (p,q) ∈ Ω_x, and N_{i,j} is the number of pixels in patch F_{i,j}.
$$E(F_{i,j}) = \frac{\sum_{(p,q) \in F_{i,j}} e(p,q)}{N_{i,j}} \cdot \frac{\sum_{(p,q) \in F_{i,j}} G(p,q)}{N_{i,j}}$$
where e(p,q) is the edge map from the Canny edge detector and G(p,q) is the gradient value at pixel location (p,q).
Based on the assumptions that the sky is above the ground, that the dark channel values are high and the gradient is low in the sky region, and that the gradient is high at the sky boundary, one or more patches satisfying specific conditions are selected as the input for the OTSU method.
$$F_{OTSU} = \begin{cases} F_{0,\,D_{max}}, & E(F_{0,\,D_{max}}) \ge T_{edge} \\ F_{0,\,D_{max}} + F_{1,\,D_{max}}, & E(F_{0,\,D_{max}}) < T_{edge} \end{cases}$$
where F_{0,D_max} is the patch with the highest mean dark channel value D̄ among the first row of rectangular patches. An example of the selected patches can be seen in Figure 5.
Then the OTSU threshold (T_OTSU) is obtained from the selected image patches F_OTSU, and the region whose dark channel values (I^d) are higher than T_OTSU is determined to be the sky region (S_R). Small regions in S_R are removed by applying a morphological closing operation, and the improvement obtained with the selected patches can be clearly seen in Figure 4c,f.
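The patch selection and patch-restricted OTSU thresholding described in this subsection can be sketched as follows; the 2 × 3 grid, the value of T_edge, and the closing kernel size are illustrative assumptions, and OpenCV's Otsu thresholding stands in for whatever implementation the paper actually uses.

```python
import numpy as np
import cv2

def sky_region_mask(dark, edges, grad, T_edge=0.05, grid=(2, 3)):
    """Sketch of the region-based sky detection.

    dark  : dark channel of the hazy image, float in [0, 1]
    edges : binary Canny edge map (0/1), same size as dark
    grad  : gradient magnitude, same size as dark
    """
    H, W = dark.shape
    rows, cols = grid
    hs, ws = H // rows, W // cols

    def patch(i, j):
        return slice(i * hs, (i + 1) * hs), slice(j * ws, (j + 1) * ws)

    # Mean dark channel and edge strength for the patches in the first (top) row.
    D = [dark[patch(0, j)].mean() for j in range(cols)]
    E = [edges[patch(0, j)].mean() * grad[patch(0, j)].mean() for j in range(cols)]

    j_max = int(np.argmax(D))
    selected = [dark[patch(0, j_max)]]
    if E[j_max] < T_edge:                    # boundary probably below the top row
        selected.append(dark[patch(1, j_max)])

    # Otsu threshold computed only on the selected patches.
    samples = np.concatenate([p.reshape(-1) for p in selected])
    t_otsu, _ = cv2.threshold((samples * 255).astype(np.uint8).reshape(1, -1),
                              0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    mask = (dark * 255 >= t_otsu).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # clean up small regions
```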

3.1.2. Sky Region Detection with Boundary-Based Approach

As shown in Figure 6b, a boundary-based approach cannot extract the sky boundary when there is another strong gradient above it. To solve this problem, the search range of the boundary-based approach is limited using the result of the region-based approach. First, the threshold used to find the border is calculated using the S_R mask obtained in Section 3.1.1. As in Equation (10), edge strength values are calculated in the local patch.
$$E(x) = \frac{\sum_{y \in \Omega_x} e(y)}{N_x} \cdot \frac{\sum_{y \in \Omega_x} G(y)}{N_x}$$
Using the mean (μ_E) and standard deviation (σ_E) of the edge strength values calculated within the S_R mask, the threshold is calculated as
$$T_E = \mu_E + g_T \cdot \sigma_E$$
where g_T is a predefined parameter. The sky borderline at each column can then be obtained by taking the row index with a high edge strength value within the search range.
$$b(q) = \begin{cases} p, & E(p,q) > T_E,\;\; b_R(q) - S_{Range} < p < b_R(q) + S_{Range} \\ 0, & otherwise \end{cases}$$
where E(p,q) = E(x) when (p,q) ∈ Ω_x, b_R(q) is the initial sky borderline obtained from the S_R mask, and S_Range is a parameter that limits the search range. Multiple sky borderlines within the search range are resolved by excluding the area between the maximum and minimum indexes. Finally, the sky region of the boundary-based approach, S_B, is obtained.
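A simplified sketch of this search-range restriction is shown below; it keeps a single borderline per column instead of excluding the area between the maximum and minimum indexes, and the values of g_T and S_Range are illustrative assumptions.

```python
import numpy as np

def refine_sky_boundary(E, sr_mask, g_T=1.0, s_range=20):
    """Refine the sky borderline near the region-based result S_R.

    E       : per-pixel edge strength map
    sr_mask : binary sky mask from the region-based approach
    """
    H, W = E.shape
    vals = E[sr_mask > 0]
    T_E = vals.mean() + g_T * vals.std()        # threshold from statistics inside S_R

    boundary = np.zeros(W, dtype=int)
    for q in range(W):
        col = np.flatnonzero(sr_mask[:, q])
        if col.size == 0:
            continue                             # no initial sky in this column
        b_R = col.max()                          # initial borderline from the S_R mask
        lo, hi = max(0, b_R - s_range), min(H, b_R + s_range)
        hits = np.flatnonzero(E[lo:hi, q] > T_E)
        boundary[q] = lo + hits.max() if hits.size else b_R

    sky_B = np.zeros((H, W), dtype=np.uint8)     # sky lies above the refined borderline
    for q in range(W):
        sky_B[:boundary[q], q] = 1
    return boundary, sky_B
```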

3.1.3. Sky Probabilities Calculation and Atmospheric Light Estimation in the Sky Region

The difference between the two candidate sky masks (S_R, S_B) and the boundary of each sky mask should be refined to obtain the final sky mask. First, S_AND, the common area between S_R and S_B, and S_ambiguity, the region to be refined, are defined as
$$S_{AND} = S_R \cap S_B, \qquad S_{ambiguity} = (S_R \oplus S_B) + (S_R^{Dilation} - S_R) + (S_B^{Dilation} - S_B)$$
Then μ_R, μ_G, μ_B, and σ_RGB, the color statistics within S_AND, are calculated. Using the color and gradient information, the sky probabilities are calculated as
$$P_{sky}(x) = \exp\left(-\frac{\sum_{y \in \Omega_x} D_{Gradient}(y)\, D_{color}(y)}{\alpha \cdot \sigma_{RGB}}\right)$$
where α is a predefined parameter, D_Gradient(y) = 0.5·E(y)/T_E + 0.5, and D_color(y) = (R_y − μ_R)² + (G_y − μ_G)² + (B_y − μ_B)². The sky probabilities in S_ambiguity are then compared with a threshold calculated inside S_AND to decide the sky and non-sky regions.
$$S_{ADD} = \begin{cases} 1, & P_{sky}(x) > \mu_P - g_P \cdot \sigma_P \\ 0, & otherwise \end{cases}$$
where μ_P and σ_P are the mean and standard deviation of the sky probabilities in S_AND, and g_P is a predefined parameter. The final sky mask (S_mask) is obtained by adding S_ADD to S_AND.
Figure 7 depicts the final sky mask with a probability map and shows how distortions in the sky region are suppressed. The atmospheric light A is calculated by averaging the pixels of the hazy input image that belong to the final sky mask. During the calculation of A, the darkest 10% of pixels in the dark channel, which may belong to the sky boundary, are excluded. When no sky region is detected, the atmospheric light A is estimated by picking the highest-intensity pixels from the top 0.1% brightest pixels in the dark channel of the hazy image, as in [21].
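The two computations above, the per-patch sky probability and the sky-based estimate of A, can be sketched as follows; the value of α is an illustrative assumption, and the per-patch inputs are assumed to have been gathered beforehand (e.g., from the superpixel segmentation).

```python
import numpy as np

def sky_probability(grad_dist, color_dist, sigma_rgb, alpha=2.0):
    """P_sky for one patch: large gradient or color deviation from the
    S_AND statistics lowers the probability.  alpha is an assumed setting."""
    return float(np.exp(-np.sum(grad_dist * color_dist) / (alpha * sigma_rgb)))

def atmospheric_light_from_sky(img, dark, sky_mask):
    """A as the mean color of sky pixels, excluding the darkest 10% of
    dark-channel values inside the mask (likely sky-boundary pixels)."""
    vals = dark[sky_mask > 0]
    cutoff = np.percentile(vals, 10)
    keep = (sky_mask > 0) & (dark >= cutoff)
    return img[keep].mean(axis=0)
```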

3.2. White-Object Probability in the Non-Sky Region

White objects should have a high value, low saturation, and an achromatic color in the HSV color space. Thus, the white-object probability of a local patch Ω_x can be expressed as:
$$P_w(x) = P_V(x)\, P_S(x)\, P_{color}(x)$$
where x is the center location of the patch Ω_x, and P_V(x), P_S(x), and P_color(x) represent the value, saturation, and color probabilities of a white object, respectively. Since the value is proportional and the saturation is inversely proportional to the white-object probability, the value and saturation probability model is simply defined as:
$$P_V(x)\, P_S(x) = V(x) \cdot \big(1 - S(x)\big) = M(x) \cdot \left(1 - \frac{M(x) - m(x)}{M(x)}\right) = m(x)$$
where M(x) = max_{y∈Ω_x}(max_{c∈{R,G,B}} I^c(y)) and m(x) = min_{y∈Ω_x}(min_{c∈{R,G,B}} I^c(y)).
Inspired by the angular error in color constancy, which measures the similarity between the actual white point and the estimated illuminant, P_color(x) can be calculated. The angular error in color constancy is calculated as [32]
$$e_{Angular} = \cos^{-1}\frac{I_y \cdot I_w}{\lVert I_y \rVert_2\, \lVert I_w \rVert_2}$$
where I_y is the RGB vector, I_w is the reference white point vector, and ‖·‖_2 refers to the L2 norm. e_Angular becomes 0 when I_y is close to I_w. When there is no color cast in the image, the ground-truth white point vector can be assumed to be I_w = [1, 1, 1]; the angular error in the local patch Ω_x can then be expressed as:
$$e_{Angular}(x) = \sum_{y \in \Omega_x} \cos^{-1}\frac{|I_y|}{\sqrt{3}\, \lVert I_y \rVert_2}$$
Using Equation (23) the color probability model is defined as
$$P_{color}(x) = \exp\left(-\frac{e_{Angular}(x)}{\beta}\right) = \exp\left(-\frac{1}{\beta} \sum_{y \in \Omega_x} \cos^{-1}\frac{|I_y|}{\sqrt{3}\, \lVert I_y \rVert_2}\right)$$
where β is a predefined parameter and |·| refers to the L1 norm.
By combining Equations (21) and (24), the final white-object probability for each region is defined as,
$$P_w(x) = m(x) \cdot \exp\left(-\frac{1}{\beta} \sum_{y \in \Omega_x} \cos^{-1}\frac{|I_y|}{\sqrt{3}\, \lVert I_y \rVert_2}\right)$$
Figure 8 illustrates the white-object probability map and shows how color distortions in white-object regions are suppressed.
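A per-pixel sketch of the white-object probability is given below; for brevity it replaces the local-patch minimum and the patch sum of angular errors with per-pixel quantities, and the value of β is an illustrative assumption.

```python
import numpy as np

def white_object_probability(img, beta=10.0, eps=1e-6):
    """P_w ≈ m(x) * exp(-angular_error / beta) for an RGB image in [0, 1]."""
    m = img.min(axis=2)                                   # value * (1 - saturation)
    l1 = img.sum(axis=2)                                  # dot product with [1, 1, 1]
    l2 = np.sqrt((img ** 2).sum(axis=2)) + eps
    cos_angle = np.clip(l1 / (np.sqrt(3.0) * l2), -1.0, 1.0)
    angular_error = np.arccos(cos_angle)                  # 0 for perfectly achromatic pixels
    return m * np.exp(-angular_error / beta)
```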

3.3. Haze-Free Image Recovery

Instead of using a constant ω for all images as in Equation (5), a region-adaptive factor ω(x) can be used to avoid distortion of the sky or white objects. The initial transmission calculated in the local patch is then expressed as
$$t_0(x) = 1 - \omega(x) \min_{y \in \Omega_x}\left(\min_{c \in \{R,G,B\}} \frac{I^c(y)}{A^c}\right)$$
A higher ω(x) results in more haze removal but can produce distortion, while a lower ω(x) removes less haze but produces less distortion. As mentioned in Section 1, the dark channel prior is invalid in the sky and white-object regions, so ω(x) should be lowered there to prevent distortion. The hazy image is separated into sky and non-sky regions according to the results of Section 3.1. For the sky region, the sky probability P_sky(x) has a non-zero value and the white-object probability P_w(x) is zero, while the opposite holds for the non-sky region. Using the fact that P_sky(x) and P_w(x) are inversely proportional to ω(x), the relationship between ω(x) and P_sky(x), P_w(x) can be expressed as:
$$\omega(x) = \omega_0 \big(1 - \omega_{sky} P_{sky}(x)\big)\big(1 - \omega_w P_w(x)\big)$$
where ω_0 is a parameter for controlling the overall haze removal rate, and ω_sky and ω_w are parameters for controlling the haze removal rate in the sky and white-object regions, respectively. The patch-level transmission t_0(x) is then refined into a pixel-level transmission t(p, q) using the guided filter [22]. Using the atmospheric light A and the refined transmission map t(p, q), the haze-removed image J can be recovered as follows:
$$J(p, q) = \frac{I(p, q) - A}{\max\big(t(p, q),\, t_0\big)} + A$$
The transmission is limited by a lower bound (t_0 = 0.05), which is the same empirical value as in [21], to avoid excessive enhancement.
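Putting the pieces together, the region-adaptive factor and the final recovery can be sketched as follows; the weights ω_0, ω_sky, and ω_w are illustrative values, and the guided-filter refinement of the transmission map is omitted for brevity.

```python
import numpy as np

def adaptive_omega(P_sky, P_w, omega0=0.95, omega_sky=0.6, omega_w=0.6):
    """Region-adaptive haze-removal factor; the three weights are assumed values."""
    return omega0 * (1.0 - omega_sky * P_sky) * (1.0 - omega_w * P_w)

def dehaze(img, dark_norm, P_sky, P_w, A, t_floor=0.05):
    """Adaptive transmission followed by radiance recovery.

    dark_norm : dark channel of I / A (per-pixel, already patch-filtered)
    P_sky, P_w: per-pixel sky and white-object probability maps
    """
    t = 1.0 - adaptive_omega(P_sky, P_w) * dark_norm
    t = np.maximum(t, t_floor)[..., None]        # lower bound as in the text
    return np.clip((img - A) / t + A, 0.0, 1.0)
```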

4. Experimental Results

In this section, the effectiveness of the proposed method is verified through qualitative and quantitative comparisons with conventional methods. The proposed method was tested on datasets with and without ground truth, to compare with the methods proposed by He et al. [21], Meng et al. [10], Berman et al. [12], Zhu et al. [13], Ren et al. [14], and Cai et al. [15].

4.1. Synthetic Images Experiment

The synthetic hazy images, which have ground truth, are collected from the O-HAZE database [33] and from the Synthetic Objective Testing Set (SOTS) and Hybrid Subjective Testing Set (HSTS) of the Realistic Single Image Dehazing (RESIDE) dataset [34]. The performance of the proposed method was numerically evaluated using the full-reference metrics PSNR and SSIM.
In Table 1, PSNR and SSIM indices are calculated for images restored from the O-HAZE, HSTS, and SOTS datasets. The proposed method showed the best or second-highest performance in SSIM and PSNR, and Cai et al. showed better performance only on the HSTS dataset. These quantitative metrics indicate that the proposed method effectively removes haze.
Figure 9, Figure 10 and Figure 11 show detailed comparisons of various methods using the O-HAZE, HSTS, and SOTS datasets, respectively. He et al. and Meng et al. are particularly prone to distortion of the sky area. Berman et al. and Zhu et al. also show color distortions.
Ren et al. and Cai et al. did not completely eliminate the haze. On the other hand, the proposed method recovered the closest image to the real image and the visual performance was much better.

4.2. Natural Images Experiment

The proposed method is further validated using real-world hazy images. Although real-world hazy images lack ground truth, unlike synthetic images, it is necessary to check the applicability of the proposed method on natural hazy images. From the Realistic Single Image Dehazing (RESIDE) dataset, natural hazy images containing sky, roads, and white objects were collected for evaluation as an outdoor vision application. Since there is no reference image in the real-world experiment, image quality assessment (IQA) indices are used to evaluate the quality of the dehazing results.
As IQA indices, the natural image quality evaluator (NIQE) [35], the blind/referenceless image spatial quality evaluator (BRISQUE), whose output ranges from 0 to 100 [36], and the perception-based image quality evaluator (PIQE) [37] are used in this paper. The better the image quality, the smaller the NIQE, BRISQUE, and PIQE values.
Table 2 shows the haze removal performance on natural images in terms of NIQE [35], BRISQUE [36], and PIQE [37]. The proposed method performs better than the other conventional methods except for the NIQE value, and its NIQE value is similar to that of Ren et al., which was the best. This shows the effectiveness of the proposed method in real cases.
As can be seen in Figure 12, Figure 13, Figure 14 and Figure 15, the proposed method outperforms the conventional haze removal methods in terms of the amount of haze removed, without producing undesirable artifacts in the sky area or color distortions on objects. He et al., Meng et al., and Berman et al. produce artifacts in the sky area and partially leave the haze. Compared to these methods, Zhu et al. and Ren et al. show stable results, but haze remains in the far areas and there is some color distortion in the sky area. Cai et al. makes the image dim and leaves haze in the output. These results prove the effectiveness of the proposed method in real cases.

4.3. Experiments for ADAS Application

The effectiveness of the proposed method for haze removal is clearly shown in Section 4.1 and Section 4.2. In this section, the performance of the proposed method in an ADAS application is evaluated by verifying the improvement in lane detection accuracy. The haze removal results of the existing and proposed methods are used to detect road lanes and horizons using Wang's method [38]. Figure 16 and Figure 17 show the detection results for road lanes (blue lines), the horizon (green lines), and lane centers (red circles).
In Figure 16, Meng et al., Zhu et al., and Ren et al. lose the dotted line in the center, and the proposed method detects more lanes on the right side compared to the other methods (proposed 88.9%, He et al. 64.1%, Meng et al. 74.7%, Berman et al. 86.5%, Zhu et al. 65.4%, Ren et al. 72.2.6%, Cai et al. 63.3%). In Figure 17, while the other methods cannot accurately detect the horizon, the proposed method detects the horizon and lanes more accurately (proposed 81.8%, He et al. 72.7%, Meng et al. 74.5%, Berman et al. 78.3%, Zhu et al. 65.5%, Ren et al. 45.6%, Cai et al. 41.4%). These results show that the proposed method is effective for ADAS applications.

4.4. Computational Complexity

In order to indicate the computational complexity of the proposed method, Table 3 presents the average run times in seconds. The experiment was performed for all methods with various image sizes (640 × 480, 1024 × 768, 1280 × 720, 1920 × 1080) on a PC equipped with a 2.8 GHz Intel Core i7 and 16 GB RAM. The proposed method is implemented in C++, while the other methods are implemented in Matlab. Although a fully fair comparison cannot be made due to this difference in implementation, the efficiency of the proposed method can be seen by comparing how the run time scales with image size: when the image size increases by 6.75×, the run time of the proposed method increases by only 1.88×, whereas that of the other methods increases by 2.28× to 7.01×.

5. Conclusions

In this paper, a method to recover images from hazy input images was presented. To this end, dehazing was performed based on DCP, which is the most studied in this field, and sky probabilities and white-object probabilities were proposed to solve oversaturation and distortion problems during dehazing. Region-based and boundary-based sky segmentations were used to detect sky areas and merge them into the final sky area. The performance of the proposed method was evaluated through subjective and objective analyses of synthetic and natural images. Experiments confirmed that the proposed method effectively suppresses the artifacts of the sky area and white objects while effectively removing haze. The results of the proposed algorithm showed better performance than the conventional methods in both quantitative and qualitative criteria.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Xu, Y.; Wen, J.; Fei, L.; Zhang, Z. Review of Video and Image Defogging Algorithms and Related Studies on Image Restoration and Enhancement. IEEE Access 2015, 4, 165–188. [Google Scholar] [CrossRef]
  2. Tarel, J.-P.; Hautiere, N.; Caraffa, L.; Cord, A.; Halmaoui, H.; Gruyer, D. Vision Enhancement in Homogeneous and Heterogeneous Fog. IEEE Intell. Transp. Syst. Mag. 2012, 4, 6–20. [Google Scholar] [CrossRef] [Green Version]
  3. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Polarization-based vision through haze. Appl. Opt. 2003, 42, 511–525. [Google Scholar] [CrossRef]
  4. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724. [Google Scholar] [CrossRef] [Green Version]
  5. Kopf, J.; Neubert, B.; Chen, B. Deep photo: Model-based photograph enhancement and viewing. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef] [Green Version]
  6. Liang, J.; Zhang, W.; Ren, L.; Ju, H.; Qu, E. Polarimetric dehazing method for visibility improvement based on visible and infrared image fusion. Appl. Opt. 2016, 55, 8221. [Google Scholar] [CrossRef] [PubMed]
  7. Ancuti, C.O. Single Image Dehazing by Multi-Scale Fusion. IEEE Trans. Image Process. 2013, 22, 3271–3282. [Google Scholar] [CrossRef] [PubMed]
  8. Fattal, R. Single image dehazing. Proc. ACM Trans. Graph. (TOG). 2008, 27, 1–9. [Google Scholar] [CrossRef]
  9. Tan, R.T. Visibility in Bad Weather from a Single Image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar] [CrossRef]
  10. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient Image Dehazing with Boundary Constraint and Contextual Regularization. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 2–8 December 2013; pp. 617–624. [Google Scholar] [CrossRef]
  11. Fattal, R. Dehazing using color-lines. ACM Trans. Graph. 2014, 34, 1–14. [Google Scholar] [CrossRef]
  12. Berman, D.; Treibitz, T.; Avidan, S. Non-Local Image Dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682. [Google Scholar] [CrossRef]
  13. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.-H. Single Image Dehazing via Multi-scale Convolutional Neural Networks. Eur. Conf. Comput. Vis. 2016, 9906, 154–169. [Google Scholar] [CrossRef]
  15. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016, 25, 5187–5198. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-One Dehazing Network. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4780–4788. [Google Scholar] [CrossRef]
  17. Qu, Y.; Chen, Y.; Huang, J.; Xie, Y. Enhanced Pix2pix Dehazing Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8152–8160. [Google Scholar] [CrossRef]
  18. Fu, H.; Wu, B.; Shao, Y. Multi-Feature-Based Bilinear CNN for Single Image Dehazing. IEEE Access 2019, 7, 74316–74326. [Google Scholar] [CrossRef]
  19. Zhang, H.; Patel, V.M. Densely Connected Pyramid Dehazing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3194–3203. [Google Scholar] [CrossRef] [Green Version]
  20. Liu, X.; Ma, Y.; Shi, Z.; Chen, J. GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 7313–7322. [Google Scholar] [CrossRef] [Green Version]
  21. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar] [CrossRef] [PubMed]
  22. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  23. Wang, G.; Ren, G.; Jiang, L.; Quan, T. Single Image Dehazing Algorithm Based on Sky Region Segmentation. Inf. Technol. J. 2013, 12, 1168–1175. [Google Scholar] [CrossRef] [Green Version]
  24. Liu, Y.; Li, H.; Wang, M. Single Image Dehazing via Large Sky Region Segmentation and Multiscale Opening Dark Channel Model. IEEE Access 2017, 5, 8890–8903. [Google Scholar] [CrossRef]
  25. Yuan, H.; Liu, C.; Guo, Z.; Sun, Z. A Region-Wised Medium Transmission Based Image Dehazing Method. IEEE Access 2017, 5, 1735–1742. [Google Scholar] [CrossRef]
  26. Zhang, L.; Wang, X.; She, C.; Wang, S.; Zhang, Z. Saliency-driven single image haze removal method based on reliable airlight and transmission. J. Electron. Imag. 2018, 27, 023038. [Google Scholar] [CrossRef]
  27. Irfanullah, K.H.; Sattar, Q.; Sadaqat-ur-Rehman, A.A. An efficient approach for sky detection. IJCSI Int. J. Comput. Sci. 2013, 10, 1694–1814. [Google Scholar]
  28. Shen, Y.; Wang, Q. Sky Region Detection in a Single Image for Autonomous Ground Robot Navigation. Int. J. Adv. Robot. Syst. 2013, 10, 362. [Google Scholar] [CrossRef] [Green Version]
  29. Achanta, R.; Susstrunk, S. Superpixels and Polygons Using Simple Non-Iterative Clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 22–25 July 2017; pp. 4651–4660. [Google Scholar]
  30. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  31. Lee, S.U.; Chung, S.Y.; Park, R.-H. A comparative performance study of several global thresholding techniques for segmentation. Comput. Vision Graph. Image Process. 1990, 52, 171–190. [Google Scholar] [CrossRef]
  32. Hordley, S.; Finlayson, G. Re-evaluating colour constancy algorithms. Int. Conf. Pattern Recognit. 2004, 1, 76–79. [Google Scholar] [CrossRef] [Green Version]
  33. Ancuti, O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. O-HAZE: A Dehazing Benchmark with Real Hazy and Haze-Free Outdoor Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 754–762. [Google Scholar]
  34. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking Single-Image Dehazing and Beyond. IEEE Trans. Image Process. 2018, 28, 492–505. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a ‘completely blind’ image quality analyzer. IEEE Signal Process. Lett. Mar. 2013, 20, 209–212. [Google Scholar] [CrossRef]
  36. Mittal, A.; Moorthy, A.K.; Bovik, A. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  37. Venkatanath, N.; Praneeth, D.; Chandrasekhar, B.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. 2015 Twenty First Natl. Conf. Commun. (NCC) 2015, 1–6. [Google Scholar] [CrossRef] [Green Version]
  38. Wang, Y.; Teoh, E.K.; Shen, D. Lane detection and tracking using B-Snake. Image Vis. Comput. 2004, 22, 269–280. [Google Scholar] [CrossRef]
Figure 1. Atmospheric Scattering Model.
Figure 2. The overall procedure of the proposed haze removal scheme.
Figure 3. The comparisons of the region-based approach and boundary-based approach: (a) input image, (b) result of the region-based approach, (c) gradient image of (a), (d) result of the boundary-based approach.
Figure 4. The limitations of OTSU threshold: (a,d) input image, (b,e) result of conventional OTSU thresholds, (c,f) the result of proposed method without morphological operation.
Figure 5. Selected patch for OTSU threshold: (a) selected patch when E(F_{0,D_max}) ≥ T_edge, (b) selected patches when E(F_{0,D_max}) < T_edge.
Figure 6. Sky region detection with boundary-based approach: (a) input image, (b) sky region from conventional boundary-based approach, (c) the search ranges from the S_R mask, (d) sky region from proposed method.
Figure 7. Improvement using sky probability: (a) superpixel input image, (b) calculated sky probability, (c) result of conventional DCP, (d) result of proposed method.
Figure 8. Improvements using white-object probability: (a) superpixel input image, (b) calculated white-object probability, (c) result of conventional DCP, (d) result of proposed method.
Figure 9. Comparison of dehazing results from O-HAZE dataset: (a) hazy image; (b) ground truth; (c) He et al.; (d) Meng et al.; (e) Berman et al.; (f) Zhu et al.; (g) Ren et al.; (h) Cai et al.; (i) proposed.
Figure 10. Comparison of dehazing results from HSTS dataset: (a) hazy image; (b) ground truth; (c) He et al.; (d) Meng et al.; (e) Berman et al.; (f) Zhu et al.; (g) Ren et al.; (h) Cai et al.; (i) proposed.
Figure 11. Comparison of dehazing results from SOTS dataset: (a) hazy image; (b) ground truth; (c) He et al.; (d) Meng et al.; (e) Berman et al.; (f) Zhu et al.; (g) Ren et al.; (h) Cai et al.; (i) proposed.
Figure 12. Comparison of dehazing results from real-world image: (a) hazy image; (b) He et al.; (c) Meng et al.; (d) Berman et al.; (e) Zhu et al.; (f) Ren et al.; (g) Cai et al.; (h) proposed.
Figure 13. Comparison of dehazing results from real-world image: (a) hazy image; (b) He et al.; (c) Meng et al.; (d) Berman et al.; (e) Zhu et al.; (f) Ren et al.; (g) Cai et al.; (h) proposed.
Figure 14. Comparison of dehazing results from real-world image: (a) hazy image; (b) He et al.; (c) Meng et al.; (d) Berman et al.; (e) Zhu et al.; (f) Ren et al.; (g) Cai et al.; (h) proposed.
Figure 15. Comparison of dehazing results from real-world image: (a) hazy image; (b) He et al.; (c) Meng et al.; (d) Berman et al.; (e) Zhu et al.; (f) Ren et al.; (g) Cai et al.; (h) proposed.
Figure 16. Comparison of road lane detection from real-world image: (a) hazy image; (b) He et al.; (c) Meng et al.; (d) Berman et al.; (e) Zhu et al.; (f) Ren et al.; (g) Cai et al.; (h) proposed.
Figure 17. Comparison of road lane detection from real-world image: (a) hazy image; (b) He et al.; (c) Meng et al.; (d) Berman et al.; (e) Zhu et al.; (f) Ren et al.; (g) Cai et al.; (h) proposed.
Table 1. Quantitative comparison with synthetic dataset.
| Dataset | Metric | He et al. | Meng et al. | Berman et al. | Zhu et al. | Ren et al. | Cai et al. | Proposed |
|---|---|---|---|---|---|---|---|---|
| O-HAZE (45 samples) | PSNR | 13.96 | 15.13 | 15.01 | 15.36 | 17.17 | 15.06 | 16.32 |
| | SSIM | 0.3 | 0.34 | 0.4 | 0.33 | 0.38 | 0.36 | 0.44 |
| SOTS-Outdoor (500 samples) | PSNR | 14.81 | 15.57 | 18.08 | 18.25 | 19.61 | 22.92 | 22.33 |
| | SSIM | 0.7549 | 0.783 | 0.8026 | 0.7867 | 0.8633 | 0.8886 | 0.8999 |
| HSTS (10 samples) | PSNR | 15.09 | 15.16 | 17.63 | 19.84 | 18.67 | 24.48 | 24.22 |
| | SSIM | 0.7656 | 0.7414 | 0.7933 | 0.8157 | 0.8174 | 0.9216 | 0.9017 |
Table 2. Quantitative comparison with natural images.
| Metric | He et al. | Meng et al. | Berman et al. | Zhu et al. | Ren et al. | Cai et al. | Proposed |
|---|---|---|---|---|---|---|---|
| BRISQUE | 26.73 | 26.92 | 26.92 | 26.93 | 25.58 | 26.21 | 24.94 |
| NIQE | 3.211 | 3.28 | 3.589 | 3.2 | 3.186 | 3.206 | 3.191 |
| PIQE | 42.27 | 41.6 | 46.47 | 42.41 | 42.37 | 42.49 | 41.09 |
Table 3. Computational complexity comparisons.
| Image Resolution | He et al. | Meng et al. | Berman et al. | Zhu et al. | Ren et al. | Cai et al. | Proposed |
|---|---|---|---|---|---|---|---|
| 640 × 480 | 1.360 s | 2.986 s | 2.638 s | 0.691 s | 1.855 s | 1.619 s | 0.232 s |
| 1024 × 768 | 3.536 s | 4.033 s | 4.140 s | 1.359 s | 2.587 s | 3.576 s | 0.358 s |
| 1280 × 720 | 4.098 s | 3.678 s | 4.489 s | 1.750 s | 2.907 s | 4.373 s | 0.382 s |
| 1920 × 1080 | 9.534 s | 6.825 s | 8.149 s | 3.182 s | 6.432 s | 9.872 s | 0.438 s |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
