**5. Results**

In the efficient traffic video dehazing method using adaptive dark channel prior and spatial-temporal correlations, a video sequence is converted into the *YUV* color space, where *Y* represents the luminance and *U*/*V* represent the chrominance. Human eyes are more sensitive to high-frequency signals than to low-frequency signals, and more sensitive to changes in visibility than to changes in color. Because the *U* and *V* components are less affected by haze than the *Y* component, we can process only the luminance (*Y*) component to reduce computational complexity. In our experiments, we implemented each method in C/C++ with OpenCV. The source code was compiled with Microsoft Visual Studio 2010 and run on an Intel Core i5-2400 processor with 4 GB of main memory under Windows 7.
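The luminance extraction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; it assumes the standard BT.601 weights for the RGB-to-Y conversion, which the paper does not specify:

```python
# Minimal sketch: extract the luminance (Y) channel from RGB so dehazing can
# operate on Y alone, leaving the haze-insensitive U/V chrominance untouched.
def rgb_to_luma(r, g, b):
    """BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_plane(rgb_image):
    """Convert a nested-list RGB image [[(r, g, b), ...], ...] to its Y plane."""
    return [[rgb_to_luma(r, g, b) for (r, g, b) in row] for row in rgb_image]
```

Dehazing the single Y plane instead of three color channels is what cuts the per-frame workload roughly by a factor of three.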

#### *5.1. Results for Single Image Dehazing*

Our adaptive method determines the initial transmission according to the image characteristics, so it produces more satisfactory dehazing results than methods with a fixed initial transmission. Figure 10 shows the images restored by our adaptive method for four different initial transmission values: 0.1, 0.2, 0.3, and 0.4. The experimental results make it obvious that smaller initial transmission values may produce blocks with overstretched contrast; accordingly, the optimal initial transmission is between 0.2 and 0.3 for the first image, between 0.3 and 0.4 for the second image, and above 0.4 for the third and fourth images. The *T*<sub>X</sub> values obtained by our method all lie within the optimal initial transmission range for each image. Therefore, our method adapts to images with different degrees of haze.
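The adaptive idea can be sketched as below. This is an illustrative stand-in, not the paper's exact formula: the haziness estimate (mean of the dark channel) and the linear mapping to an initial transmission in `[lo, hi]` are both hypothetical, chosen only to show how a hazier image receives a larger initial value:

```python
# Illustrative sketch: estimate haziness from the dark channel of a grayscale
# image (values in [0, 1]) and map it to an initial transmission value.
def dark_channel(gray, patch=3):
    """Per-pixel minimum over a patch x patch neighborhood (single channel)."""
    h, w = len(gray), len(gray[0])
    r = patch // 2
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            row.append(min(gray[y][x]
                           for y in range(max(0, i - r), min(h, i + r + 1))
                           for x in range(max(0, j - r), min(w, j + r + 1))))
        out.append(row)
    return out

def initial_transmission(gray, lo=0.1, hi=0.5):
    """Hypothetical mapping: mean dark-channel value in [0, 1] -> t0 in [lo, hi]."""
    dc = dark_channel(gray)
    haziness = sum(map(sum, dc)) / (len(dc) * len(dc[0]))
    return lo + (hi - lo) * haziness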

**Figure 10.** Results for different initial transmission using our adaptive method.

Figure 11 shows four images from the Foggy Road Image Database (FRIDA) [33], together with the versions restored by the dark-channel-prior-based method [9,31], the visibility enhancement algorithm [34], the image-contrast-enhanced method [25], the non-local image dehazing method [20,21], and our method. The SSIM values in Figure 11 are averaged over the three RGB channels. In FRIDA [33], each fog-free image is associated with several hazy images, each with a different kind of fog: uniform fog, heterogeneous fog, cloudy fog, and cloudy heterogeneous fog. According to the experimental results, the dark-channel-prior-based method does not remove haze satisfactorily in heterogeneous fog and cloudy heterogeneous fog, while the image-contrast-enhanced method and our method achieve more satisfactory results in these two cases. In addition, our method obtains the highest SSIM among the first three methods, so its restored images are more similar to the ground truth. For the non-local image dehazing method [20,21], the SSIM of some restored images may be higher than that of our method; however, the non-local method takes a longer processing time, as shown in Table 4. Table 4 lists the overall processing times of these methods. Our method is faster than the dark-channel-prior-based method [9,31] and the visibility enhancement algorithm [34], but slower than the image-contrast-enhanced method [25] because it spends time calculating the image haziness flag value and the initial transmission correction value; in exchange, its haze removal results are better than those of the image-contrast-enhanced method. Although the non-local image dehazing method can produce more satisfactory restored images, it is too slow for real-time use, and it usually requires manually setting parameters for different scenes, which is not suitable for real-time traffic video processing. Furthermore, in video dehazing we can amortize this part of the computation over all frames and reach a faster dehazing speed through the fusion of spatial and temporal information.
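For reference, the SSIM metric used in this comparison can be sketched in a simplified global (single-window) form. The standard metric averages a local, windowed version over the image and, as above, over the three RGB channels; this sketch uses the conventional constants for 8-bit images:

```python
# Simplified global SSIM over two flat pixel sequences x and y; identical
# inputs yield 1.0, and the score drops as luminance, contrast, or structure
# diverge. c1 and c2 are the conventional stabilizing constants for L = 255.
def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

A higher SSIM against the fog-free ground truth therefore indicates a restored image that preserves more of the original structure.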

**Figure 11.** Comparison of the restored images using different methods; \* SSIM = structural similarity.


**Table 4.** Processing times for single-image dehazing.

#### *5.2. Results for Traffic Video Dehazing*

To get better restored images, we restore the whole image for the first frame of a time slice and use the area outside the lane space of that restored frame to replace the corresponding areas of the following frames. Moreover, we adopt the parallel programming tools SIMD [35] and OpenMP [36] for rapid computation. Figure 12 presents a comparison of approaches to traffic video dehazing: Figure 12a shows the original videos; Figure 12b shows the results of the dark-channel-prior-based method with guided filtering [9,31], which uses the transmission map obtained from the first frame to filter the following frames; Figure 12c shows the results of the image-contrast-enhanced method [25], whose initial transmission is a constant value of 0.3; and Figure 12d shows the results produced by our method. The experimental results demonstrate that the image-contrast-enhanced method produces blocks with overstretched contrast, as in the images in groups (1), (3), and (4). In some urban scenes, the color of the driveway differs little from that of the background, as in the examples in group (1) with medium haze and group (2) with dense haze. Our method restores these videos in a manner more similar to the haze-free scenes, and the driveway and the vehicles can be seen more clearly, whereas the dark-channel-prior-based method cannot handle these videos. For suburban scenes where the trees and the road surface differ clearly in color, such as the images in group (3), captured in daytime, and the images in group (4), captured in dense haze with vehicle headlights on, our method achieves better restoration than the other two methods. In group (3), the driveway color restored by our method is more uniform; in group (4), there are no blocks with overstretched contrast, and the color of the trees, with their hierarchical structure, is more realistic.
Therefore, our method can maintain the image details and restore images that are more similar to the real scene, with proper contrast.
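The time-slice scheme described above can be sketched as follows. The helpers `dehaze_full` and `dehaze_region` are hypothetical stand-ins for the actual restoration steps, and the lane mask is assumed given; the sketch only shows the reuse pattern, not the dehazing itself:

```python
# Sketch of time-slice video dehazing: the first frame of a slice is fully
# restored; for the remaining frames only the lane region is re-restored,
# while the static background is copied from the first restored frame.
def dehaze_slice(frames, lane_mask, dehaze_full, dehaze_region):
    """frames: list of 2-D images; lane_mask[i][j] is True inside the lane."""
    first = dehaze_full(frames[0])
    restored = [first]
    for frame in frames[1:]:
        out = [row[:] for row in first]     # reuse restored background pixels
        region = dehaze_region(frame)       # restore the current lane content
        for i, mrow in enumerate(lane_mask):
            for j, inside in enumerate(mrow):
                if inside:
                    out[i][j] = region[i][j]
        restored.append(out)
    return restored
```

Since the camera is fixed, the background outside the lane changes little within a slice, so skipping its re-restoration trades negligible quality for a large reduction in per-frame work.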

As we can see from the experimental results, our method produces better haze removal by determining its parameters according to image characteristics. It is also applicable to dense fog and to a variety of fog densities. Moreover, it makes the restored images more similar to the real scene and avoids overstretched contrast in the restored images. Therefore, it addresses two general problems of existing dehazing algorithms: contrast distortion after video dehazing and failure to remove dense haze.

In addition, our method exploits spatial correlation, temporal continuity, lane separation, and the spatial distribution of cameras to improve computational efficiency. Besides the processing time, Table 5 lists the frames per second (fps) and SSIM of the different methods for the video dehazing in Figure 12. To match actual traffic scenarios, we process the video frame by frame, and the data show the total processing time for 1000 frames. Our method uses the initial frame of a time slice to calculate the transmission map and the atmospheric light, and adopts lane separation to shrink the areas to be dehazed. Compared with the other methods, the dehazing time of our method decreases as the time slice grows. According to the experimental results, our method clearly speeds up video dehazing, especially when the video has high resolution or the driveway occupies only a small part of the frame. Our method restores video with a resolution of 720 × 592 at about 57 fps, nearly four times as fast as the dark-channel-prior-based method and twice as fast as the image-contrast-enhanced method. Furthermore, our method obtains the highest SSIM among the compared methods, so its restored videos are more similar to the ground truth. Therefore, the proposed method not only has superior haze removal and color balancing capabilities but also restores and enhances degraded videos in real time.
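The fps figures quoted above can be obtained with a simple wall-clock harness of the following shape; `process_frame` is a placeholder for the actual per-frame dehazing step:

```python
# Simple timing harness: process every frame once, measure elapsed wall-clock
# time, and report throughput in frames per second.
import time

def measure_fps(process_frame, frames):
    start = time.perf_counter()
    for f in frames:
        process_frame(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")
```

Measuring over a long run (e.g., the 1000 frames used above) smooths out per-frame variation such as the heavier cost of the first frame in each time slice.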

**Figure 12.** Comparison of restored videos. (**a**) Original Videos; (**b**) Dark-channel-prior-based method; (**c**) Image-contrast-enhanced method; (**d**) Non-local Image Dehazing; (**e**) Our method.


