**1. Introduction**

In recent years, applications such as self-driving vehicles, underwater robots and remote sensing have attracted attention; these applications require fast and robust image-recognition techniques. However, images of outdoor or underwater scenes often have poor quality because of haze (Figure 1a), which degrades image recognition. To solve this problem, many haze-removal techniques have been proposed; they can be classified into non-learning-based and learning-based approaches.

Non-learning-based approaches use multiple haze images [1], depth information [2] or prior knowledge derived from a single haze image [3–5]. Methods employing prior knowledge maximise the contrast within a local patch [3], assume that surface shading and transmission are locally uncorrelated [4], or rely on the statistical observation that, in a haze-free image, at least one RGB colour channel within most local patches has a low intensity [5]. Median and guided-image filters [6,7] have been used to accelerate haze removal; however, these methods cannot achieve real-time processing (defined herein as 20 fps). Learning-based approaches employ random forests [8], a colour-attenuation prior based on the brightness-saturation relation [9] and deep learning [10,11]. These methods achieve more accurate and faster haze removal than conventional non-learning-based approaches. Deep-learning-based methods require large-scale pairs of haze images and corresponding haze-free images for training. Because haze and haze-free image pairs cannot be captured simultaneously in real scenes, haze images are synthesised from haze-free images using a haze-observation model [10] and depth information derived from the corresponding haze-free images [11]. The haze-removal accuracy of deep-learning-based methods depends on the dataset, and preparing large datasets is cumbersome. Deep-learning-based methods [10,11] are faster than conventional methods [5,6]; however, they require 1.5 s [10] and 0.61 s [11] to remove haze from a 640 × 480 image on a 3.4 GHz CPU without GPU acceleration, and thus also fail to achieve real-time processing.

This study proposes a real-time haze-removal method using a normalised pixel-wise dark-channel prior (DCP) (Figure 1b,c). This paper is an extended version of [12]. The contributions of the proposed method are as follows:

**(a)** Normalised pixel-wise DCP

The original patch-wise DCP method incurs a high computational cost because the transmission map must be refined using an image-matting technique. We propose a normalised pixel-wise DCP that, unlike the patch-wise method, requires no refinement of the transmission map (a contrast sketched in the code example after this list).


**(b)** Coarse-to-fine atmospheric-light estimation

To reduce the computational time and improve robustness, we propose a coarse-to-fine search strategy for atmospheric-light estimation.
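
To illustrate the difference underlying contribution (a), the following is a minimal sketch in Python (the function names and patch size are our own illustrative choices, not from the paper) contrasting the patch-wise dark channel of He et al. [5] with a plain per-pixel dark channel; the normalisation used by the proposed method is omitted here and detailed in Section 3.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_patchwise(img, patch=15):
    """Patch-wise dark channel [5]: minimum over the RGB channels,
    followed by a minimum over a patch x patch spatial window."""
    per_pixel_min = img.min(axis=2)               # min over colour channels
    return minimum_filter(per_pixel_min, size=patch)

def dark_channel_pixelwise(img):
    """Pixel-wise dark channel: minimum over the RGB channels only.
    With no spatial window, the transmission map derived from it has
    no block artefacts, so no matting-based refinement is needed."""
    return img.min(axis=2)
```

Because the pixel-wise variant removes the spatial minimum filter, it avoids both the filtering cost and the expensive refinement step, which is the source of the speed-up targeted by contribution (a).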

The remainder of this paper is organised as follows. Section 2 introduces He et al.'s method [5] in detail because it forms the basis of the proposed method. Section 3 describes the proposed method. The experimental results and discussion are reported in Section 4, and conclusions are drawn in Section 5.

**Figure 1.** Examples of an original haze image, the proposed haze-removal result and the transmission map (*γ* = 0.5).

**2. Traditional Dark Channel Prior**

This section describes the DCP [5], which forms the basis of the proposed method. The haze-observation model [1,5] is represented by

$$\mathbf{I(x)} = t(\mathbf{x})\mathbf{J(x)} + (1 - t(\mathbf{x}))\mathbf{A},\tag{1}$$

where **I**(**x**) is the observed RGB colour vector of the haze image at coordinate **x**, **J**(**x**) is the ideal haze-free image at coordinate **x**, **A** is the atmospheric light and *t*(**x**) is the value of the transmission map at coordinate **x**. To solve the haze-removal problem, prior knowledge such as the DCP must be applied. The derivation of the transmission map (Section 2.1), the estimation of atmospheric light (Section 2.2) and the creation of the haze-removal image (Section 2.3) are explained below.
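
To make the roles of the terms in Equation (1) concrete, the following minimal sketch (our own illustration; the variable and function names are assumptions, not from the paper) synthesises a haze image from a haze-free image, a transmission map and an atmospheric light, and inverts the model to recover the scene radiance, as done during haze removal (Section 2.3):

```python
import numpy as np

def apply_haze_model(J, t, A):
    """Equation (1): I(x) = t(x) J(x) + (1 - t(x)) A.
    J: HxWx3 haze-free image in [0, 1]; t: HxW transmission map;
    A: length-3 atmospheric-light vector."""
    t3 = t[..., None]                 # broadcast t over the colour channels
    return t3 * J + (1.0 - t3) * A

def invert_haze_model(I, t, A, t0=0.1):
    """Solve Equation (1) for J. The transmission is clamped from below
    (t0 = 0.1 in [5]) to avoid amplifying noise where t(x) is near zero."""
    t3 = np.maximum(t, t0)[..., None]
    return (I - A) / t3 + A
```

This forward synthesis is also how deep-learning-based methods [10] generate training pairs from haze-free images, as noted in Section 1.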
