Article

Image-Based Dedicated Methods of Night Traffic Visibility Estimation

The Institute of Electron Device & Application, Hangzhou Dianzi University, Hangzhou 310018, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(2), 440; https://doi.org/10.3390/app10020440
Submission received: 3 December 2019 / Revised: 1 January 2020 / Accepted: 2 January 2020 / Published: 7 January 2020

Abstract

Traffic visibility is an essential reference for safe driving, and nighttime conditions make it considerably harder to estimate. To estimate visibility in nighttime traffic images, we propose a Traffic Sensibility Visibility Estimation (TSVE) algorithm that combines laser transmission and image processing and requires neither reference fog-free images nor camera calibration. The required information is first obtained via roadside equipment, which collects environmental data and captures road images, and is then analyzed locally or remotely. The analysis calculates the current atmospheric transmissivity with laser atmospheric transmission theory and acquires image features from the cameras and an adjustable-brightness target. Image analysis is performed with two image processing algorithms, namely, dark channel prior (DCP) and image brightness and contrast. Finally, to improve the accuracy of visibility estimation, multiple nonlinear regression (MNLR) is performed on the visibility indicators obtained by the two methods. Extensive analysis of on-site measurements confirms the advantages of TSVE: compared with visibility estimation based on laser atmospheric transmission theory or image analysis alone, it significantly reduces estimation errors.

1. Introduction

In bad weather, the absorption and scattering of light by atmospheric particles such as rain, snow, fog, or haze can significantly reduce the visibility of scenes [1]. On the highway, reduced visibility makes it difficult for drivers to see the road ahead clearly, dramatically degrading their judgment and causing traffic accidents [2]. The visual conditions of nighttime driving are challenging for drivers, including dim street lighting, oncoming headlight glare, and the need for quick adaptation across a wide range of lighting levels [3]. If visibility can be accurately identified and predicted in time, traffic accidents can be significantly reduced.
At present, the attenuation characteristics of a laser propagating through the atmosphere are generally used to estimate visibility at major airports, highways, and meteorological monitoring stations. Implementing this approach requires laser transmitting and receiving units. Although it involves substantial hardware and a fixed detection distance, the method is relatively mature and stable and suffers little environmental interference. Most visibility meters on the market use it.
In recent years, estimating visibility through image analysis has attracted increasing attention. These methods need few hardware facilities and no additional equipment: the estimate can be made solely from video captured by existing cameras. However, their shortcomings are also undeniable, including low accuracy in special circumstances and high instability. Nicolas Hautière et al. developed a system using in-vehicle CCD cameras to estimate visibility in adverse weather conditions, particularly fog [4]; they presented several dedicated roadside sites and used them to validate the proposed model. Rachid Belaroussi et al. presented an approach that estimates visibility conditions using an onboard camera and a digital map [5]; it determines the current visual range in hazy conditions by comparing the behavior of traffic-sign detectors in fog with the sign information encoded in the map. Li Yang et al. proposed a visibility range estimation algorithm for real-time adaptive speed-limit control in intelligent transportation systems [2]: environmental data and road images are collected via roadside units, the analysis is performed with two image processing algorithms, an improved dark channel prior (DCP) and weighted image entropy (WIE), and a support vector machine (SVM) classifier produces a visibility indicator in real time. The method relies mainly on road images taken by the camera and demands high precision and fast computation from the processing unit. Based on the visual effects of fog at night, Gallen R. et al. proposed two methods [6]: the first detects fog around a vehicle from the backscattered veil created by its headlamps, while the second detects fog from the halos around light sources ahead of the vehicle. Together, the two schemes work both with and without streetlights, and recognizing night fog from lamp halos is a solution that can be widely applied to traffic. Andrey Giyenko et al. discussed applying a convolutional neural network to visual atmospheric visibility estimation [7]; they implemented a network with three convolutional layers and trained it on a data set taken from CCTV cameras in South Korea. Because they divided visibility into 20 levels, the results were not specific visibility values. Yang You et al. proposed a CNN–RNN model that directly estimates relative atmospheric visibility from outdoor photos without relying on weather images or data that require expensive sensing or custom capture [8]; the CNN captures the global view while the RNN simulates the human attention shift from the whole image (global) to the farthest discernible region (local).
The literature shows that most road visibility estimation methods rely on one of two separate approaches: laser transmission or image processing. The measurement range of the laser atmospheric transmission method is not wide enough, while the image processing method is strongly affected by camera quality. Accordingly, some researchers have proposed visibility estimation methods that combine laser transmission and image analysis. Miclea Razvan-Catalin et al. built a system that uses light-scattering properties to estimate the visibility distance in foggy conditions [9]: a laser emitter is mounted next to a camera, which captures the laser images during emission and estimates the visibility distance from image characteristics such as contrast. Silea Ioan et al. presented principles for a system capable of estimating the visibility distance in foggy weather by using the light-scattering property [10], taking as input the effect of fog concentration on the light source and the relationship between fog concentration and visual acuity. They analyzed the influence of LED and laser light sources in a fog environment, and experiments with visual acuity charts showed a link between fog concentration and vision. These methods are not suitable for road visibility estimation, because their use of images is limited to simple analysis of laser image contrast and eye charts.
Consequently, we propose a Traffic Sensibility Visibility Estimation (TSVE) algorithm that combines laser transmission and image analysis. By extracting visibility features from both laser transmission and image processing, we can estimate the visibility of foggy traffic scenes at night without reference fog-free images or camera calibration. Guided by laser atmospheric transmission theory, TSVE uses laser emitters and receivers to measure the transmissivity of the current atmosphere. For the road images and the adjustable-brightness target images, the dark channel prior (DCP) algorithm is used to calculate the image transmittance [11]; then, gray-level distribution information, in the form of the brightness and contrast of the target images, is extracted for visibility calculation. The adjustable-brightness target can be applied in environments of different brightness. Multiple nonlinear regression (MNLR) fuses the resulting visibility features. Because ambient brightness is low at night, the fused estimate may still differ from human perception, so we adjust the nighttime road visibility according to the actual ambient brightness. The algorithm has been evaluated using actual traffic images and a reference visibility meter, and TSVE shows significant advantages over laser atmospheric transmission theory or image analysis alone. Once visibility is estimated, drivers can adjust their current driving speed accordingly.

2. Background

2.1. Laser Atmospheric Transmission Theory

When a laser propagates in the atmosphere, part of the optical radiation energy is absorbed and converted into other forms of energy (such as thermal energy); the combined effect of absorption and scattering weakens the intensity of the transmitted light. The change in light intensity is:

$$\frac{dI}{I_0} = \frac{I - I_0}{I_0} = -\beta\, dl \quad (1)$$

where $I$ is the light intensity after transmission through the atmosphere, $I_0$ is the light intensity emitted by the laser, $l$ is the transmission distance, and $\beta$ is the atmospheric extinction coefficient. The atmospheric transmissivity (%) $\tau$ at transmission distance $L$ is:

$$\tau = \frac{I}{I_0} = e^{-\int_0^L \beta\, dl} \quad (2)$$

If $\beta$ is constant over the transmission distance $L$:

$$\tau = \frac{I}{I_0} = e^{-\beta L} \quad (3)$$

Equation (3) is Lambert's law, which describes atmospheric attenuation.
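Later sections invert this relationship to turn a measured transmissivity into a visibility value. A minimal Python sketch, assuming the standard 5% meteorological-optical-range contrast threshold (the paper calibrates against a visibility meter rather than fixing a threshold, so `threshold` is our assumption):

```python
import math

def visibility_from_transmissivity(tau: float, path_length_m: float,
                                   threshold: float = 0.05) -> float:
    """Invert Lambert's law (Eq. 3) to get the extinction coefficient,
    then convert it to a visibility distance.

    beta = -ln(tau) / L, and V = -ln(threshold) / beta, where the 5%
    threshold is the WMO meteorological-optical-range convention
    (an assumption here, not a constant taken from this paper).
    """
    beta = -math.log(tau) / path_length_m   # extinction coefficient (1/m)
    return -math.log(threshold) / beta      # visibility (m)

# Example: 99% transmissivity over a 10 m path gives roughly 2980 m visibility.
print(round(visibility_from_transmissivity(0.99, 10.0)))
```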

2.2. Dark Channel Prior (DCP)

Atmospheric scattering models are widely used in image processing and computer vision. The model describing a foggy image is [12]:

$$I(x) = J(x)\,t(x) + A\,\big(1 - t(x)\big) \quad (4)$$

where $I(x)$ is the image captured by the camera (the fog image), $x$ is the 2D spatial location, $J(x)$ is the corresponding fog-free image, $A$ is the atmospheric light, and $t(x)$ is the transmittance. Assume that the transmittance $t(x)$ is constant within a local patch $\Omega(x)$. Taking the minimum of Equation (4) over the patch gives:

$$\min_{y \in \Omega(x)} \big(I^c(y)\big) = t(x)\,\min_{y \in \Omega(x)} \big(J^c(y)\big) + \big(1 - t(x)\big) A^c \quad (5)$$

where $y$ is any pixel in the patch $\Omega(x)$, $c$ is one of the R, G, B color channels, and $A^c$ is usually taken from the brightest 0.1% of pixels in channel $c$. Dividing Equation (5) by $A^c$:

$$\min_{y \in \Omega(x)} \left(\frac{I^c(y)}{A^c}\right) = t(x)\,\min_{y \in \Omega(x)} \left(\frac{J^c(y)}{A^c}\right) + \big(1 - t(x)\big) \quad (6)$$

Then, taking the minimum over the three color channels in Equation (6):

$$\min_{c \in \{r,g,b\}} \left(\min_{y \in \Omega(x)} \left(\frac{I^c(y)}{A^c}\right)\right) = t(x)\,\min_{c \in \{r,g,b\}} \left(\min_{y \in \Omega(x)} \left(\frac{J^c(y)}{A^c}\right)\right) + \big(1 - t(x)\big) \quad (7)$$

According to the definition of the dark channel, a fog-free image $J(x)$ almost always has very small pixel values in at least one of its RGB channels, so the dark channel of $J(x)$ approaches 0:

$$J^{dark}(x) = \min_{c \in \{r,g,b\}} \left(\min_{y \in \Omega(x)} J^c(y)\right) \to 0 \quad (8)$$

Combining the two formulas above, the transmittance $t(x)$ is:

$$t(x) = 1 - \min_{c \in \{r,g,b\}} \left(\min_{y \in \Omega(x)} \frac{I^c(y)}{A^c}\right) \quad (9)$$

In sunny weather there is always a small amount of haze on distant objects captured by the camera, so a weight coefficient $\omega$ (with $0 < \omega \leq 1$) is introduced. The corrected formula is:

$$t(x) = 1 - \omega \min_{c \in \{r,g,b\}} \left(\min_{y \in \Omega(x)} \frac{I^c(y)}{A^c}\right) \quad (10)$$
As shown in Figure 1, the dark channel of the original image can approximate the concentration of the fog. The higher the fog concentration, the brighter the corresponding position in the dark channel image. The dark channel image of the fog-free image is the darkest. Since the fog concentration directly affects visibility, the DCP theory can be used for visibility estimation.
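As a concrete reference, the following Python sketch computes the dark channel of Equation (8) and the transmittance map of Equation (10). It follows He et al. [11] in spirit, but the patch size, ω, and the atmospheric-light heuristic are typical defaults rather than values taken from this paper:

```python
import numpy as np

def dark_channel(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """Eq. (8): per-pixel minimum over RGB, then minimum over a local patch."""
    min_rgb = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(min_rgb.shape[0]):
        for j in range(min_rgb.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmittance(image: np.ndarray, omega: float = 0.95,
                           patch: int = 15) -> np.ndarray:
    """Eq. (10): t(x) = 1 - omega * dark_channel(I / A); image is float RGB in [0, 1]."""
    dark = dark_channel(image, patch)
    # Atmospheric light A^c: mean color of the brightest 0.1% of dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    flat_idx = np.argsort(dark, axis=None)[-n:]
    rows, cols = np.unravel_index(flat_idx, dark.shape)
    A = image[rows, cols].mean(axis=0)
    return 1.0 - omega * dark_channel(image / A, patch)
```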

2.3. Multivariate Nonlinear Regression Theory

In multivariate nonlinear regression (MNLR) analysis, observational data are modeled by a function that is a nonlinear combination of the model parameters and depends on one or more independent variables [13]. Assume the general form of the nonlinear relationship is:

$$Y = \alpha_0 \, X_1^{\alpha_1} X_2^{\alpha_2} \cdots X_n^{\alpha_n} \quad (11)$$

where $\alpha_0, \alpha_1, \ldots, \alpha_n$ are the parameters of the nonlinear relationship. Some nonlinear regression problems can be transferred to the linear domain by an appropriate transformation of the model formulation [13]. Taking the logarithm of Equation (11) makes the relationship linear:

$$\log(Y) = \log(\alpha_0) + \alpha_1 \log(X_1) + \alpha_2 \log(X_2) + \cdots + \alpha_n \log(X_n) \quad (12)$$

We can then obtain the regression equation of $\log(Y)$ on $\log(X_1), \log(X_2), \ldots, \log(X_n)$ by estimating the parameters, as sketched below.
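A minimal sketch of this log-linearization in Python with NumPy (an illustration of Equations (11) and (12), not the SPSS procedure the paper uses later):

```python
import numpy as np

def fit_power_model(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit Y = a0 * X1^a1 * ... * Xn^an by least squares on Eq. (12).

    X: (samples, n) matrix of strictly positive regressors;
    y: strictly positive responses. Returns [a0, a1, ..., an].
    """
    design = np.column_stack([np.ones(len(y)), np.log(X)])  # intercept = log(a0)
    coef, *_ = np.linalg.lstsq(design, np.log(y), rcond=None)
    coef[0] = np.exp(coef[0])   # recover a0 from log(a0)
    return coef
```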
When a goodness-of-fit test is performed on the fitted curve, the sample coefficient of determination $R^2$ is used as the criterion [14]. $R^2$ is defined as:

$$R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST} = 1 - \frac{\sum (y - \hat{y})^2}{\sum (y - \bar{y})^2} \quad (13)$$

where SSR is the regression sum of squares, SSE is the residual sum of squares, and SST is the total sum of squares. $R^2$ is a comprehensive measure of the fitting precision of the regression equation and ranges from 0 to 1: the closer $R^2$ is to 1, the better the fit.

The F-test is used to test the significance of the regression equation [14]. The F statistic is the ratio of the mean regression sum of squares to the mean residual sum of squares:

$$F = \frac{SSR / k}{SSE / (n - k - 1)} = \frac{\sum (\hat{y} - \bar{y})^2 / k}{\sum (y - \hat{y})^2 / (n - k - 1)} \quad (14)$$

where $n$ is the number of samples and $k$ is the number of independent variables. The statistic follows an F distribution with first degree of freedom $k$ and second degree of freedom $n - k - 1$, that is, $F \sim F(k, n - k - 1)$. By the definition of the F statistic, a significant F value means that the variation in the dependent variable explained by the independent variables outweighs the influence of random factors. The more significant the F statistic, the better the fit.

3. Proposed Visibility Estimation Methodology

This section describes the proposed visibility estimation process. First, the atmospheric transmissivity of the current environment is obtained. Then, the DCP algorithm is combined with brightness and contrast analysis to obtain visual visibility indicators from the images. Finally, the multiple visibility indicators are merged by MNLR analysis. The estimated visibility is calibrated to span the range [10, 10,000] m, where the lower limit corresponds to the worst possible visibility and the upper limit to visibility that has no adverse effect on travel.

3.1. Visibility Estimation Based on Laser Atmospheric Transmission Theory

The laser atmospheric transmission theory of Section 2.1 obtains the current atmospheric transmissivity from the ratio of the received optical power to the optical power emitted by the laser, and then obtains visibility according to Lambert's law. Our laser transmission system uses a multi-point measurement method, acquiring the optical power received at different points to obtain multiple sets of atmospheric transmissivity values [15], as shown in Figure 2.

To measure the attenuation of the system itself, the distance between the transmitter and the receiver is set to zero, so that the laser does not pass through the atmosphere. $T_0$ denotes the attenuation of light energy produced by the system alone (without atmospheric attenuation):

$$T_0 = \frac{P_0}{P_r(0)} \quad (15)$$

where $P_0$ is the power emitted by the laser in the initial state and $P_r(0)$ is the power received at the zero position, where the atmospheric path length is zero.

When the distance between the transmitter and the receiver is 1 cm, the measured attenuation is $T(1)$:

$$T(1) = \frac{P_0}{P_r(1)} \quad (16)$$

$T(1)$ is the total light attenuation of the optical measurement system, comprising the attenuation $T'(1)$ caused by the atmosphere and the attenuation $T_0$ produced by the system:

$$T(1) = T'(1) + T_0 \quad (17)$$

Therefore, the actual power emitted by the laser emitter can be taken as $P_0 / T_0$, and the atmospheric transmissivity is:

$$\tau = \frac{P_r(1)}{P_0 / T_0} = \frac{P_r(1)}{P_r(0)} \quad (18)$$

Suppose the gas under test is uniform and static during the measurement. When the distance between the receiver and the transmitter is $n$ cm, by analogy, the transmissivity measured at point $n$ is:

$$\tau = \frac{P_r(n)}{P_r(0)} \quad (19)$$

where $P_r(n)$ is the power received by the laser receiver at point $n$. By moving the laser receiver, $n$ sets of transmissivity values can be obtained.
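A minimal sketch of how the multi-point readings might be reduced to a single extinction coefficient, assuming Lambert's law holds at each point; the power values below are illustrative, not measured data:

```python
import numpy as np

def extinction_from_multipoint(distances_m, received_powers, p_r0):
    """Eqs. (15)-(19): dividing each reading by the zero-distance reading
    cancels the system attenuation; a no-intercept least-squares fit of
    ln(tau) = -beta * L over all points then recovers beta."""
    d = np.asarray(distances_m, dtype=float)
    tau = np.asarray(received_powers, dtype=float) / p_r0   # Eq. (19)
    beta = -np.sum(d * np.log(tau)) / np.sum(d * d)         # slope of ln(tau) vs L
    return beta                                             # extinction coefficient (1/m)

# Illustrative readings at 1, 2, and 3 cm from the transmitter:
beta = extinction_from_multipoint([0.01, 0.02, 0.03], [0.98, 0.96, 0.94], 1.0)
```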

3.2. Visibility Estimation Based on Image Analysis

Image brightness and contrast carry information about the fog concentration at the time of image acquisition. We define a horizontal brightness histogram: the horizontal axis represents the left-to-right position along the middle horizontal line of the image, and the vertical axis represents the pixel value at each position. Figure 3 shows the horizontal brightness histograms for clear daytime, foggy daytime, clear evening, foggy evening, clear midnight, and foggy midnight. Figure 3a,b corresponds to a daytime image with good air quality and a foggy daytime image: the y-axis values of the fog-free image are larger in absolute terms than those of the foggy image, whose overall darkness is higher. Since darkness leaves fewer high-value pixels in the image, the histograms of the evening images in Figure 3c,d show that the y-axis values of the low-visibility foggy image are small; the brightness of the bright portions is significantly reduced while that of the dark portions is significantly increased. The same pattern appears in the midnight images in Figure 3e,f. Therefore, image brightness histogram analysis can be used to estimate the visibility level of a given image.
We develop visibility indicators based on the image's horizontal brightness histogram. Using the concept of image brightness, we obtain from the image $t_l$, the value of the darkest 20% of pixels, and $t_h$, the value of the brightest 20%. On heavily foggy days, $t_l$ and $t_h$ are both around 0.5; in sunny, clear weather, $t_l = 0$ and $t_h = 1$. Similarly, using the concept of image contrast, $t_c = 0$ in heavy fog and $t_c = 1$ in clear weather. These are, however, ideal cases that rarely occur in the real world. For example, the values of $(t_l, t_h, t_c)$ for Figure 3a–f are (0.311, 0.559, 0.248), (0.342, 0.516, 0.174), (0.251, 0.781, 0.467), (0.316, 0.476, 0.16), (0.182, 0.474, 0.292), and (0.208, 0.366, 0.158), respectively. $t_c$ is significantly lower in the foggy images than in the clear-weather images, while $t_l$ and $t_h$ mainly indicate the current ambient brightness, distinguishing daytime, nights with streetlights, and nights without streetlights. These indicators can be computed as sketched below.
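A minimal sketch of these indicators in Python; the paper does not spell out how the darkest and brightest 20% of pixels are aggregated into a single value, so the mean is our assumption:

```python
import numpy as np

def brightness_indicators(image_gray: np.ndarray):
    """Compute (t_l, t_h, t_c) from the image's central horizontal line.

    t_l / t_h are taken here as the mean normalized values of the darkest /
    brightest 20% of pixels on that line (the aggregation is our assumption),
    and t_c = t_h - t_l as in Table 1.
    """
    row = image_gray[image_gray.shape[0] // 2, :].astype(float) / 255.0
    sorted_row = np.sort(row)
    k = max(1, int(0.2 * sorted_row.size))
    t_l = sorted_row[:k].mean()     # darkest 20% of the center line
    t_h = sorted_row[-k:].mean()    # brightest 20% of the center line
    return t_l, t_h, t_h - t_l
```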
Although the DCP algorithm and image brightness-contrast analysis are effective ways to characterize image visibility, in high-visibility environments the differences between image features are minimal. To optimize the results, the atmospheric transmissivity obtained in Section 3.1 is incorporated into the fusion process described in Section 3.3.

3.3. Feature Fusion and Classification

Visibility estimation by laser transmission or by image analysis can generally produce satisfactory results on its own, but the results of the two methods do not necessarily match. We therefore combine them through an MNLR model to generate the final image visibility. The variables describing visibility are the atmospheric transmissivity obtained by the laser transmission system, the road image transmittance, the target image transmittance, the target image low-brightness value, the target image high-brightness value, and the contrast obtained by image analysis; detailed descriptions are given in Table 1. The MNLR model is first trained on a training set, then evaluated on a test set and compared using statistical measures of goodness of fit [16].

The visibility $d$ is the dependent variable, and the variables in Table 1 are the independent variables. The MNLR analysis is carried out with the SPSS (Statistical Package for the Social Sciences) computing package. The model is given below, with parameters $\alpha_1, \alpha_2, \ldots, \alpha_6$, $b_1, b_2, \ldots, b_6$, and $c$:

$$d = \alpha_1 e^{\tau + b_1} + \alpha_2 e^{t_R + b_2} + \alpha_3 e^{t_T + b_3} + \alpha_4 e^{t_l + b_4} + \alpha_5 e^{t_h + b_5} + \alpha_6 e^{t_c + b_6} + c \quad (20)$$
Since the actual night traffic environment includes two different situations, with streetlights (ambient brightness) and without streetlights (no ambient brightness), whose influence on traffic differs, we split the MNLR model into two cases: ambient brightness and no ambient brightness. A fitting sketch is given below. It is worth noting that the subdivision of lighting environments and the construction of the training set should be processed and tuned empirically on a large number of images.
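A minimal sketch of fitting Equation (20) with SciPy instead of SPSS (which the paper actually uses); the feature-matrix layout and starting values are our assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def mnlr_model(X, a1, a2, a3, a4, a5, a6, b1, b2, b3, b4, b5, b6, c):
    """Eq. (20): d = sum_i a_i * exp(x_i + b_i) + c, with
    X an (N, 6) matrix of [tau, t_R, t_T, t_l, t_h, t_c] per image."""
    a = np.array([a1, a2, a3, a4, a5, a6])
    b = np.array([b1, b2, b3, b4, b5, b6])
    return np.sum(a * np.exp(X + b), axis=1) + c

def fit_visibility_model(features, d_measured):
    """Fit the 13 parameters against visibility meter readings.
    Per Section 3.3, this is done twice: once on images with streetlights
    (ambient brightness) and once on images without."""
    p0 = np.ones(13)  # naive starting point; our assumption
    params, _ = curve_fit(mnlr_model, features, d_measured, p0=p0, maxfev=20000)
    return params
```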

3.4. Road Visibility Adjustment in Night Scenes

Although the methods described in Section 3.1, Section 3.2 and Section 3.3 can estimate visibility, they do not generally produce accurate results at night, because the DCP algorithm does not account for other driving conditions such as nighttime. For example, as shown in Figure 4, from daytime to evening to late night, the ambient brightness decreases, and even in clear air drivers can recognize less and less information about the road ahead. Night road visibility should be based on the driver's ability to identify road information such as road direction, road boundaries, and road signs, so we refer to the actual ambient brightness when estimating night road visibility.
In a given scene, we use the average brightness of the image as the current ambient brightness. First, the ambient brightness $g_0$ of a clear day in the same scene is obtained; the ambient brightness of the current image is $g$. The adjustment parameter $\alpha$ is then:

$$\alpha = \frac{g(x)}{g_0(x)} \quad (21)$$

and the night road visibility is:

$$d = \alpha d_0, \quad \alpha \in (0, 1] \quad (22)$$

where $d_0$ is the visibility estimated before adjustment.
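A minimal sketch of this adjustment, assuming the scene's clear-day mean brightness has been recorded beforehand:

```python
import numpy as np

def adjust_night_visibility(d0: float, image_gray: np.ndarray,
                            clear_day_mean_brightness: float) -> float:
    """Eqs. (21)-(22): scale the fused estimate d0 by the ratio of the
    current mean image brightness to the clear-day reference for the
    same scene, capping alpha at 1 so clear days are left unchanged."""
    g = float(np.mean(image_gray))                    # current ambient brightness
    alpha = min(1.0, g / clear_day_mean_brightness)   # Eq. (21), alpha in (0, 1]
    return alpha * d0                                 # Eq. (22)
```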

4. Experiment

4.1. Laser Transmission System

The laser transmission system consists of a laser and a data acquisition and control unit (a photodiode and peripheral circuitry). We chose a point-source near-infrared laser with a wavelength of 905 nm and a power of 50 mW as the emitter; its parameters are shown in Table 2. The data acquisition and control unit consists of an avalanche photodiode (APD), several amplifiers, several relays, and an STM32 microcontroller. The transmitted laser signal is collected by the APD, then amplified and A/D sampled by the data acquisition and control unit, and output as a high signal-to-noise-ratio signal [17]. During the measurement process, a visibility meter is operated at the same time for data calibration.
The avalanche photodiode produces a photocurrent proportional to the amount of laser irradiation received. With responsivity $r = I/P$, the received optical power is $P_\lambda = I/r$; the responsivity $r$ of the APD (AD500-9) is 52 A/W, and $P_{\lambda 0}$ is the input power of the laser. The distance between the laser and the receiver is 10 cm. Over this path, the atmospheric transmissivity measured by the system is:

$$\tau = \frac{P_\lambda}{P_{\lambda 0}} \quad (23)$$

4.2. Adjustable Brightness Target

So far, our methods have been analyzed in the laboratory with measured values for different cases under the assumption of continuously unfavorable visibility. In the actual environment, the detection needs a reference target, so we installed a large target with adjustable brightness on the test road. The idea is to photograph the target in different weather conditions and estimate visibility from the attenuation of its brightness and contrast. This static measurement with a reference target can then be compared, on the same images, with the results of the reference-free methods in the same scene. The target is designed for maximum contrast, as shown in Figure 5.
Compared to other targets, ours has two advantages. First, it is an LED panel light with adjustable brightness, which can adapt to different environments. Second, the target has six black parts of different sizes, and the current atmospheric transmissivity can be estimated from how many of the six parts are visible, giving different resolving powers for estimating visibility. Taking the centerline of the target image as the reference line, with the left-to-right position along the centerline as the x-axis and the pixel value at each position as the y-axis, a horizontal brightness histogram is created, as shown in Figure 6, from which the $(t_l, t_h, t_c)$ of the adjustable-brightness target is obtained.

4.3. System Simulation Verification

To verify the accuracy of the above detection methods, a comparison test system was established [18]. The test system includes a laser transmission system, a visibility meter, two cameras, an adjustable-brightness target, and high-pressure spray equipment. The visibility meter's measurement range is 10 to 10,000 m, unaffected by ambient brightness and weather conditions. The brightness of the target can be adjusted to the environment and is suitable for both daytime and nighttime. One camera acquires images of the road surface for the analysis of road image transmittance; the other collects images of the target for the analysis of target image transmittance, brightness, and contrast. The high-pressure spray equipment adjusts the fog concentration at the site: when the natural fog concentration is insufficient, we supplement it to create different visibility environments.
Before verification, the placement of the hardware is checked against our requirements: the camera photographing the target is on the same horizontal line as the target; the target is installed at a height of about 1.5 m, 10 m from the camera; the camera photographing the road is mounted 5–10 m higher; and the visibility meter is installed at the same height as the target and placed near it, together with the laser transmission system.
In the verification process, the current atmospheric transmissivity is first estimated via the laser transmission system. The two cameras then collect the road images and the target images, and, using the DCP algorithm, $(t_R, t_T, t_l, t_h, t_c)$ is obtained. Finally, SPSS is used to perform MNLR analysis on the six variables for visibility prediction in similar environments, and the predictions are compared with the visibility meter readings.

5. Experiment Results and Analysis

Pictures of the verification site were captured in different weather conditions, including sunny, foggy, and rainy days. Since both fog and rainfall lead to low visibility, no distinction is made between them in practice. Approximately 600 verification images under different visibility conditions were collected.
In some pictures, cars and pedestrians were placed on the road at various distances to test the robustness of the method; samples of the original images are given in Figure 7. The laser transmission scheme of Section 3.1 is used to estimate the atmospheric transmissivity of the current state; the image analysis of Section 3.2, to obtain the image characteristics; and the method of Section 3.3, to assess visibility as a whole. Table 3 shows the estimated visibility of the images in Figure 7 and the errors between the estimated and measured values. The results show that the visibility estimation errors remain within 20% when there are cars and pedestrians on the road, which lends the method a degree of credibility.
Three values are presented for each image, according to the method used to derive the visibility: the laser transmission method alone (V1), the image analysis method alone (V2), and the proposed TSVE method (V). The estimated results for the six representative sample images in Figure 8, and their errors with respect to the visibility meter measurements, are shown in Table 4. The trend of the V1 results is in line with expectations: both fog and rain block the transmission of the laser and lead to low visibility readings. The V2 results are less accurate in low-visibility cases, so visibility must be calculated with the aid of laser transmission when visibility is low, while both laser transmission and image analysis are needed as references when visibility is high. Table 4 also shows a significant difference among the visibility levels calculated by the three methods, with the V2 estimation error generally higher than that of V1. The V1 estimates follow the trend of the visibility meter, with the visibility level decreasing from Figure 8a to Figure 8f. The V estimate departs from the V1 result for Figure 8d and from the V2 result for Figure 8f, which to a certain extent agrees with human perception of these images. For visibility above 2000 m, the error of the proposed TSVE method is below 10%; for visibility below 2000 m, it is 10% to 30%.
We evaluate the proposed visibility algorithm by calculating the mean absolute error (MAE) [2], defined as:

$$MAE = \frac{1}{N} \sum_{i=1}^{N} |d_i - y_i| \quad (24)$$

where $d_i$ is the estimated visibility, $y_i$ is the value measured by the visibility meter, $d_i, y_i \in [0, 10000]$, and $N$ is the number of samples. The final experimental results in Table 5 show that the proposed algorithm produces smaller errors than V1 and V2. Some of the experimental images were captured in the common nighttime conditions (clear, foggy, and rainy), which shows that our algorithm is accurate under common nighttime weather. In addition, the experiments show that our approach performs better as the data size increases.
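For completeness, Equation (24) in Python; the arrays in the example are estimates and meter readings taken from Table 4, used here only as an illustration:

```python
import numpy as np

def mean_absolute_error(d_est, y_meas) -> float:
    """Eq. (24): mean absolute error between estimates and meter readings."""
    d_est = np.asarray(d_est, dtype=float)
    y_meas = np.asarray(y_meas, dtype=float)
    return float(np.abs(d_est - y_meas).mean())

# Example with the V estimates of Figure 8a-c against the meter readings:
print(mean_absolute_error([9776, 8046, 4966], [10000, 8007, 5181]))
```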
Experimental verification of these methods is admittedly difficult. Taking photos in natural fog is a daunting task because weather conditions and fog uniformity cannot be controlled, and fog added by the high-pressure spray equipment is also uneven. Our implementation relies on a target, which has some minor flaws: a purely black, larger target would yield a better estimate, but it is not practical. In the future, we will test these methods with better cameras, measure the atmospheric extinction coefficient more precisely, and minimize or eliminate the target.

6. Conclusions

In this paper, we have presented TSVE, a traffic visibility estimation algorithm that combines laser transmission and image processing. Nighttime road visibility is estimated from the current atmospheric transmissivity and image characteristics, including road image transmittance, target image transmittance, target brightness, and contrast. Multivariate nonlinear regression analysis is performed on the visibility indicators, and two regression models, with and without ambient brightness, are established; this classification, together with the larger amount of data, improves the accuracy of the estimation. The algorithm is suitable for road visibility estimation under different weather conditions and requires no camera calibration. Finally, a method for adjusting nighttime road visibility according to ambient brightness was given. Experiments show that the proposed method can accurately estimate nighttime road visibility. Applied to highways, the estimated visibility can be used to adjust expressway speed limits, thereby reducing traffic accidents.

Author Contributions

Conceptualization, H.Q. (Huibin Qin); methodology, H.Q. (Huibin Qin) and H.Q. (Hongshuai Qin); software, H.Q. (Hongshuai Qin); validation, H.Q. (Hongshuai Qin); formal analysis, H.Q. (Huibin Qin) and H.Q. (Hongshuai Qin); investigation, H.Q. (Huibin Qin) and H.Q. (Hongshuai Qin); resources, H.Q. (Huibin Qin); data curation, H.Q. (Hongshuai Qin); writing—original draft preparation, H.Q. (Hongshuai Qin); writing—review and editing, H.Q. (Hongshuai Qin); supervision, H.Q. (Huibin Qin); project administration, H.Q. (Huibin Qin). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724.
  2. Yang, L.; Muresan, R.; Al-Dweik, A. Image-Based Visibility Estimation Algorithm for Intelligent Transportation Systems. IEEE Access 2018, 6, 1.
  3. Kimlin, J.A.; Black, A.; Wood, J.M. Nighttime Driving in Older Adults: Effects of Glare and Association with Mesopic Visual Function. Investig. Ophthalmol. Vis. Sci. 2017, 58, 2796–2803.
  4. Hautière, N.; Aubert, D.; Dumont, E. Experimental Validation of Dedicated Methods to In-Vehicle Estimation of Atmospheric Visibility Distance. IEEE Trans. Instrum. Meas. 2008, 57, 2218–2225.
  5. Belaroussi, R.; Gruyer, D. Road sign-aided estimation of visibility conditions. In Proceedings of the 14th IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 18–22 May 2015.
  6. Gallen, R.; Cord, A.; Hautière, N. Nighttime Visibility Analysis and Estimation Method in the Presence of Dense Fog. IEEE Trans. Intell. Transp. Syst. 2015, 16, 310–320.
  7. Giyenko, A.; Palvanov, A.; Cho, Y. Application of convolutional neural networks for visibility estimation of CCTV images. In Proceedings of the 2018 International Conference on Information Networking, Barcelona, Spain, 10–12 January 2018.
  8. You, Y.; Lu, C.; Wang, W. Relative CNN-RNN: Learning Relative Atmospheric Visibility from Images. IEEE Trans. Image Process. 2019, 28, 45–55.
  9. Miclea, R.-C.; Silea, I. Visibility detection in foggy environment. In Proceedings of the International Conference on Control Systems and Computer Science, Bucharest, Romania, 27–29 May 2015.
  10. Silea, I.; Miclea, R.-C.; Alexa, F. System for visibility distance estimation in fog conditions based on light sources and visual acuity. IEEE, 2016.
  11. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009.
  12. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 24–26 June 2008.
  13. Yasar, A.; Bilgili, M.; Simsek, E. Water Demand Forecasting Based on Stepwise Multiple Nonlinear Regression Analysis. Arab. J. Sci. Eng. 2012, 37, 2333–2341.
  14. Duan, M.; Tang, B.; Liu, T. Accident Prediction Model of Freeway with High Ratio of Bridges and Tunnels Based on Multivariate Nonlinear Regression. Highw. Eng. 2018, 43, 126–130.
  15. Tai, H.; Zhuang, Z.; Jiang, L. Multi-point mobile measurement of atmospheric transmittance. Opt. Precis. Eng. 2016, 24, 1894–1901.
  16. Choi, L.K.; You, J.; Bovik, A.C. Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging. IEEE Trans. Image Process. 2015, 24, 3888–3901.
  17. Guo, J.; Zhang, H.; Zhang, X. Avalanche Photodiode Detecting Technology for Laser Fuze. J. Detect. Control 2010, 32, 77–79.
  18. Jiandong, Z.; Mingmin, H.; Changcheng, L. Visibility Video Detection with Dark Channel Prior on Highway. Math. Probl. Eng. 2016, 2016, 1–21.
Figure 1. The images of the DCP algorithm. (a) The original image. (b) The dark channel image. (c) The transmittance image.
Figure 2. Schematic diagram of multi-point measurement.
Figure 3. Horizontal brightness histograms of the images under different visibility conditions. (a) clear daytime; (b) foggy daytime; (c) clear evening; (d) foggy evening; (e) clear midnight; (f) foggy midnight.
Figure 4. Images at the same visibility (10,000 m) at different times. (a) daytime image; (b) evening image; (c) late-night image.
Figure 5. Graphic design of the reference target.
Figure 6. Horizontal brightness histogram of the target.
Figure 7. Images grabbed on the validation site with cars and pedestrians on the road. (a) cars and pedestrians; (b) pedestrians; (c) pedestrian with an umbrella.
Figure 8. Representative sample images of different visibility levels acquired in actual measurements. The measured values are (a) 10,000 m, (b) 8007 m, (c) 5181 m, (d) 2250 m, (e) 861 m, (f) 177 m.
Table 1. List of variables describing visibility.

Variable | Description
Atmospheric transmissivity $\tau$ | Atmospheric transmissivity obtained from laser atmospheric transmission theory
Road image transmittance $t_R$ | Image characteristic: road image transmittance obtained with the DCP algorithm
Target image transmittance $t_T$ | Image characteristic: target image transmittance obtained with the DCP algorithm
Target image low-brightness value $t_l$ | Image characteristic: pixel value of the low-brightness area of the image
Target image high-brightness value $t_h$ | Image characteristic: pixel value of the high-brightness area of the image
Target image contrast $t_c$ | Image characteristic: $t_c = t_h - t_l$
Table 2. Main parameters of the laser.

Parameter | Value
Wavelength | 905 nm
Power | 50 mW
Voltage | DC 2.8–5.0 V
Working current | ≤180 mA
Size | ϕ22 × 110 mm
Divergence angle | 0.5–1.0 mrad
Table 3. Visibility estimation results of the images in Figure 7.

Image | TSVE (m) | Visibility meter (m) | Error
Figure 7a | 8828 | 10,000 | 12%
Figure 7b | 8992 | 10,000 | 10%
Figure 7c | 3138 | 3791 | 17%
Table 4. Estimated results of the images in Figure 8.

Image | V1 (m) | Error | V2 (m) | Error | V (m) | Error | Visibility meter (m)
Figure 8a | 9951 | 0.49% | 9294 | 7.06% | 9776 | 2.24% | 10,000
Figure 8b | 7906 | 1.26% | 6188 | 22.72% | 8046 | 0.49% | 8007
Figure 8c | 3691 | 28.76% | 6803 | 31.3% | 4966 | 4.15% | 5181
Figure 8d | 1250 | 44.44% | 2315 | 2.9% | 2230 | 0.89% | 2250
Figure 8e | 924 | 7.32% | 1585 | 84.08% | 970 | 12.64% | 861
Figure 8f | 368 | 107.91% | 1845 | 942.12% | 124 | 29.78% | 177
Table 5. Mean absolute error (MAE) of different visibility estimation methods.

Method | V1 | V2 | V
MAE (m) | 482 | 446 | 128
