Vision under Adverse Weather Conditions

A special issue of Atmosphere (ISSN 2073-4433). This special issue belongs to the section "Atmospheric Techniques, Instruments, and Modeling".

Deadline for manuscript submissions: closed (30 April 2021) | Viewed by 23003

Special Issue Editors


Prof. Dr. Frédéric Bernardin
Guest Editor
Cerema, Equipe-projet STI, 8-10 rue Bernard Palissy, F-63017 Clermont-Ferrand CEDEX 2, France
Interests: meteorological factors; electromagnetic propagation in absorbing media; cameras and 2D sensors; artificial intelligence; intelligent transportation systems; autonomous vehicles; mathematical modelling

Prof. Dr. Jean-Philippe Tarel
Guest Editor
Pics-L, Gustave Eiffel University, 14-20 Boulevard Newton, F-77420 Champs-sur-Marne, France
Interests: machine vision; image processing; cameras; infrared imaging; meteorological factors; artificial intelligence; intelligent transportation systems; autonomous vehicles

Dr. Pierre Duthon
Guest Editor
Cerema, Equipe-projet STI, 8-10 rue Bernard Palissy, F-63017 Clermont-Ferrand CEDEX 2, France
Interests: meteorological factors; machine vision; image processing; cameras and 2D sensors; lidar; electromagnetic propagation in absorbing media; artificial intelligence; intelligent transportation systems; autonomous vehicles

Special Issue Information

Dear Colleagues,

Artificial vision systems, whether active or passive, are increasingly used for applications ranging from intelligent visual surveillance to automated driving. These systems are strongly disrupted by adverse weather conditions such as fog, rain, or snow. Several approaches can be used to limit these disturbances, at both the hardware and software levels. For example, sensor design could include the choice of wavelengths at which the impact of adverse weather on light transmission is minimized. It is also well known that contrast enhancement algorithms can be applied at the software level or directly through hardware improvements. Knowledge of the effects of weather conditions on the performance of different types of sensors is therefore essential to improve the capabilities of sensors and, more generally, of artificial vision systems. Owing to their sensitivity to weather conditions, artificial vision systems can also be used to detect these conditions and even to characterize them, for example, by estimating fog visibility or the number and size of raindrops and snowflakes.

In this Special Issue, original works on methods for limiting meteorological disturbances in artificial vision, and new knowledge on the modelling of the effects of adverse weather conditions on light transmission, are welcomed. Studies of all types of weather conditions, and even of more complex situations such as smoke or dust clouds that produce similar visual effects, are of interest.
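For reference, the relation between fog density and visibility invoked above is classically given by Koschmieder's law; the following is a textbook statement (with the standard CIE 5% contrast threshold), not a result from this issue:

```latex
% Apparent contrast of a target at distance d under extinction coefficient k,
% and the resulting meteorological visibility distance:
C(d) = C_0 \, e^{-kd}, \qquad V_{\mathrm{met}} = \frac{-\ln(0.05)}{k} \approx \frac{3}{k}
```

Here, $C_0$ is the intrinsic contrast of the target against its background, and $V_{\mathrm{met}}$ is the distance at which that contrast falls to 5% of its initial value.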


Prof. Dr. Frédéric Bernardin
Prof. Dr. Jean-Philippe Tarel
Dr. Pierre Duthon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Atmosphere is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Meteorological factors
  • Fog
  • Rain
  • Snow
  • Electromagnetic propagation in absorbing media
  • Cameras and 2D sensors
  • Radar
  • Lidar
  • Image processing
  • Machine vision
  • Artificial intelligence
  • Autonomous systems
  • Surveillance
  • Intelligent transportation systems
  • Autonomous vehicles

Published Papers (8 papers)


Research

11 pages, 14168 KiB  
Article
Single Image Dehazing Using Sparse Contextual Representation
by Jing Qin, Liang Chen, Jian Xu and Wenqi Ren
Atmosphere 2021, 12(10), 1266; https://doi.org/10.3390/atmos12101266 - 28 Sep 2021
Cited by 1 | Viewed by 1934
Abstract
In this paper, we propose a novel method to remove haze from a single hazy input image based on sparse representation. In our method, sparse representation is used as a contextual regularization tool, which reduces the block artifacts and halos produced when only the dark channel prior is used without soft matting, since the transmission is not always constant in a local patch. A novel way of using the dictionary is proposed to smooth an image and generate a sharp dehazed result. Experimental results demonstrate that our proposed method performs favorably against state-of-the-art dehazing methods and produces high-quality dehazed results with vivid colors.
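As background, the dark channel prior that this method regularizes can be sketched in a few lines; the following is a generic restatement of He et al.'s transmission estimate with conventional defaults (patch size 15, omega = 0.95), not the authors' code, and the sparse contextual regularization that is the paper's contribution is not shown:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, followed by a local
    minimum filter over a patch x patch neighborhood."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, airlight, patch=15, omega=0.95):
    """Dark-channel-prior transmission: t = 1 - omega * dark(I / A).
    img: H x W x 3 float array; airlight: length-3 vector, estimated separately."""
    return 1.0 - omega * dark_channel(img / airlight, patch)
```

The transmission assumed constant within each patch is precisely what produces the block artifacts and halos that the sparse contextual representation is designed to suppress.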

15 pages, 3222 KiB  
Article
Experimental Evaluation of PSO Based Transfer Learning Method for Meteorological Visibility Estimation
by Wai Lun Lo, Henry Shu Hung Chung and Hong Fu
Atmosphere 2021, 12(7), 828; https://doi.org/10.3390/atmos12070828 - 28 Jun 2021
Cited by 6 | Viewed by 1927
Abstract
Estimation of meteorological visibility from image characteristics is a challenging problem in the research on meteorological parameter estimation. Meteorological visibility can be used to indicate weather transparency, an indicator that is important for transport safety. This paper summarizes the outcomes of an experimental evaluation of a Particle Swarm Optimization (PSO)-based transfer learning method for meteorological visibility estimation, and proposes a modified transfer learning approach for visibility estimation that uses PSO feature selection. Image data were collected at a fixed location with a fixed viewing angle. The database images went through a pre-processing step of gray-averaging so as to provide information on static landmark objects for the automatic extraction of effective regions from the images. Effective regions were extracted from the image database, and image features were then extracted with a neural network. Subsets of image features were selected by PSO to obtain the image feature vectors for each effective sub-region. The image feature vectors were then used to estimate the visibility of the images with multiple Support Vector Regression (SVR) models. Experimental results show that the proposed method achieves an accuracy of more than 90% for visibility estimation and is effective and robust.
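To make the wrapper idea concrete, here is a toy sketch of PSO-driven feature selection around an SVR fitness function; it is our illustration, not the authors' implementation, and every hyperparameter below is arbitrary:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def pso_feature_selection(X, y, n_particles=20, n_iters=30):
    """Binary-style PSO: each particle holds per-feature selection scores;
    a feature is kept when its score exceeds 0.5. Fitness is the
    cross-validated SVR score (R^2) on the selected subset."""
    n_feat = X.shape[1]
    pos = rng.random((n_particles, n_feat))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_fit = pos[0].copy(), -np.inf
    for _ in range(n_iters):
        for i in range(n_particles):
            mask = pos[i] > 0.5
            if not mask.any():
                continue
            fit = cross_val_score(SVR(), X[:, mask], y, cv=3).mean()
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i].copy(), fit
            if fit > gbest_fit:
                gbest, gbest_fit = pos[i].copy(), fit
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
    return gbest > 0.5  # boolean mask of selected features
```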

13 pages, 3884 KiB  
Article
Polarimetric Imaging vs. Conventional Imaging: Evaluation of Image Contrast in Fog
by Maria Ballesta-Garcia, Sara Peña-Gutiérrez, Aina Val-Martí and Santiago Royo
Atmosphere 2021, 12(7), 813; https://doi.org/10.3390/atmos12070813 - 24 Jun 2021
Cited by 4 | Viewed by 2376
Abstract
We compare conventional intensity imaging against different modes of polarimetric imaging by evaluating the contrast of images taken in a controlled foggy environment. A small-scale fog chamber was designed and constructed to create the necessary controlled foggy environment. A division-of-focal-plane linear-polarization camera and a linearly polarized light source were used to perform the experiments with polarized light. In order to evaluate the image contrast of the different imaging modes, the Michelson contrast of samples of different materials relative to their background was calculated. The higher the image contrast, the easier it is to detect and segment targets of interest that are surrounded by fog. It is quantitatively demonstrated that, in the situations studied, polarimetric images present an improvement in contrast compared to conventional intensity images.
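The Michelson contrast used in this evaluation is the standard definition; writing $L_{\max}$ and $L_{\min}$ for the larger and smaller of the target and background luminances:

```latex
C_M = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}, \qquad 0 \le C_M \le 1
```

A higher $C_M$ therefore means a target that stands out more strongly from the fog-filled background.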

21 pages, 20201 KiB  
Article
Single Image Atmospheric Veil Removal Using New Priors for Better Genericity
by Alexandra Duminil, Jean-Philippe Tarel and Roland Brémond
Atmosphere 2021, 12(6), 772; https://doi.org/10.3390/atmos12060772 - 15 Jun 2021
Cited by 4 | Viewed by 2715
Abstract
From an analysis of the priors used in state-of-the-art algorithms for single image defogging, a new prior is proposed to obtain better atmospheric veil removal. Our hypothesis is based on a physical model and considers that fog appears denser near the horizon than close to the camera. This leads to stronger restoration where the fog is deeper, for a more natural rendering. For this purpose, the Naka–Rushton function is used to modulate the atmospheric veil, following empirical observations on synthetic foggy images. The parameters of this function are set from features of the input image. The method also prevents over-restoration and thus preserves the sky from artifacts and noise. The algorithm generalizes to different kinds of fog, airborne particles, and illumination conditions. The proposed method is extended to nighttime and underwater images by computing the atmospheric veil on each color channel. Qualitative and quantitative evaluations show the benefit of the proposed algorithm. The quantitative evaluation, on four databases with different types of fog, demonstrates the broad generalization achieved by the proposed algorithm, in contrast to most currently available deep learning techniques.
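For reference, the Naka–Rushton function is a standard saturating nonlinearity from vision science; in generic form (how the paper sets its parameters from features of the input image is specific to the article):

```latex
R(I) = R_{\max} \, \frac{I^{n}}{I^{n} + \sigma^{n}}
```

where $\sigma$ is the semi-saturation constant and the exponent $n$ controls the steepness of the response.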

25 pages, 6841 KiB  
Article
A Quantitative Analysis of Point Clouds from Automotive Lidars Exposed to Artificial Rain and Fog
by Karl Montalban, Christophe Reymann, Dinesh Atchuthan, Paul-Edouard Dupouy, Nicolas Riviere and Simon Lacroix
Atmosphere 2021, 12(6), 738; https://doi.org/10.3390/atmos12060738 - 8 Jun 2021
Cited by 19 | Viewed by 5354
Abstract
Light Detection And Ranging (lidar) sensors are key to autonomous driving, but their data are severely impacted by weather events (rain, fog, snow). To increase the safety and availability of self-driving vehicles, an analysis of the consequences of these phenomena is necessary. This paper presents experiments performed in a climatic chamber with lidars of different technologies (spinning, Risley prisms, micro-motion, and MEMS), which are compared in various artificial rain and fog conditions. A specific target with calibrated reflectance is used for a first quantitative analysis. We observe different results depending on the sensors, valuable multi-echo information, and unexpected behaviors under artificial rain, where higher rain rates do not necessarily mean greater degradation of the lidar data.
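One simple way to quantify such degradation, consistent with the calibrated-target protocol described here but not taken from the paper, is to count the returns falling inside a box around the target and compare against a clear-weather reference scan:

```python
import numpy as np

def points_on_target(cloud, bounds):
    """Count lidar returns inside an axis-aligned box around the target.
    cloud: (N, 3) array of x, y, z coordinates; bounds: ((xmin, xmax),
    (ymin, ymax), (zmin, zmax)) in the same frame."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    inside = ((cloud[:, 0] >= xmin) & (cloud[:, 0] <= xmax)
              & (cloud[:, 1] >= ymin) & (cloud[:, 1] <= ymax)
              & (cloud[:, 2] >= zmin) & (cloud[:, 2] <= zmax))
    return int(inside.sum())

# Relative point loss at a given rain rate, against a clear reference scan:
# loss = 1 - points_on_target(rainy_cloud, box) / points_on_target(clear_cloud, box)
```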

16 pages, 2184 KiB  
Article
WeatherEye-Proposal of an Algorithm Able to Classify Weather Conditions from Traffic Camera Images
by Khouloud Dahmane, Pierre Duthon, Frédéric Bernardin, Michèle Colomb, Frédéric Chausse and Christophe Blanc
Atmosphere 2021, 12(6), 717; https://doi.org/10.3390/atmos12060717 - 2 Jun 2021
Cited by 9 | Viewed by 3262
Abstract
In road environments, real-time knowledge of local weather conditions is an essential prerequisite for addressing the twin challenges of enhancing road safety and avoiding congestion. Currently, the main means of quantifying weather conditions along a road network is the installation of meteorological stations. Such stations are costly and must be maintained; however, large numbers of cameras are already installed along roadsides. A new artificial intelligence method that uses road traffic cameras and a convolutional neural network to detect weather conditions has therefore been proposed. It addresses a clearly defined set of constraints: the ability to operate in real time, to classify the full spectrum of meteorological conditions, and to order them according to their intensity. The method can differentiate between five weather conditions: normal (no precipitation), heavy rain, light rain, heavy fog, and light fog. The deep learning method's training and testing phases were conducted using a new database called the Cerema-AWH (Adverse Weather Highway) database. After several optimization steps, the proposed method achieved a classification accuracy of 0.99.
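For illustration only, a minimal five-class network of the kind described could be sketched as follows in PyTorch; the architecture is ours, and the optimized model trained on the Cerema-AWH database certainly differs:

```python
import torch.nn as nn

class WeatherCNN(nn.Module):
    """Toy classifier for the five classes: normal, light rain, heavy rain,
    light fog, heavy fog."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: batch of RGB camera frames, shape (B, 3, H, W)
        return self.classifier(self.features(x).flatten(1))
```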

16 pages, 10030 KiB  
Article
Visual Weather Property Prediction by Multi-Task Learning and Two-Dimensional RNNs
by Wei-Ta Chu, Yu-Hsuan Liang and Kai-Chia Ho
Atmosphere 2021, 12(5), 584; https://doi.org/10.3390/atmos12050584 - 1 May 2021
Viewed by 1699
Abstract
We attempted to employ convolutional neural networks to extract visual features and developed recurrent neural networks for weather property estimation using only image data. Four common weather properties are estimated, i.e., temperature, humidity, visibility, and wind speed. Building on the success of previous works on temperature prediction, we extended them in two respects. First, considering the effectiveness of deep multi-task learning, we jointly estimated the four weather properties on the basis of the same visual information. Second, we propose that weather property estimation that considers temporal evolution can be conducted from two perspectives, i.e., day-wise or hour-wise. A two-dimensional recurrent neural network is thus proposed to unify the two perspectives. In the evaluation, we show that better prediction accuracy can be obtained compared to state-of-the-art models. We believe that the proposed approach is the first visual weather property estimation model trained with multi-task learning.
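A hedged sketch of the multi-task half of this design (a shared backbone with one regression head per property) is given below; the two-dimensional RNN that unifies the day-wise and hour-wise perspectives, which is the paper's main novelty, is omitted:

```python
import torch.nn as nn

class MultiTaskWeatherNet(nn.Module):
    """Shared visual backbone with one regression head per weather property."""
    PROPERTIES = ("temperature", "humidity", "visibility", "wind_speed")

    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, 1) for name in self.PROPERTIES}
        )

    def forward(self, x):
        feats = self.backbone(x)  # one shared feature vector per image
        return {name: head(feats) for name, head in self.heads.items()}
```

In a typical multi-task setup, the four regression losses are summed, which is what lets the shared features benefit every property.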

20 pages, 24548 KiB  
Article
Artifact-Free Single Image Defogging
by Gabriele Graffieti and Davide Maltoni
Atmosphere 2021, 12(5), 577; https://doi.org/10.3390/atmos12050577 - 29 Apr 2021
Cited by 4 | Viewed by 2513
Abstract
In this paper, we present a novel defogging technique, named CurL-Defog, that aims to minimize the insertion of artifacts while maintaining good contrast restoration and visibility enhancement. Many learning-based defogging approaches rely on paired data, where fog is artificially added to clear images; this usually provides good results on mildly fogged images but is not effective in difficult cases. On the other hand, models trained with real data can produce visually impressive results, but unwanted artifacts are often present. We propose a curriculum learning strategy and an enhanced CycleGAN model to reduce the number of produced artifacts, in which both synthetic and real data are used during training. We also introduce a new metric, called HArD (Hazy Artifact Detector), to numerically quantify the artifacts in defogged images, thus avoiding tedious and subjective manual inspection of the results. HArD is then combined with other defogging indicators to produce a solid metric that is not deceived by the presence of artifacts. The proposed approach compares favorably with state-of-the-art techniques on both real and synthetic datasets.
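The curriculum idea can be illustrated with a toy schedule that gradually shifts training batches from easy synthetic pairs to hard real foggy images; this is our sketch, not CurL-Defog's actual schedule:

```python
import random

def curriculum_batches(synthetic, real, n_epochs, batch_size=16):
    """Yield training batches whose share of real images grows linearly
    from 0 to 1 over the epochs; both lists are assumed to hold at least
    batch_size items."""
    for epoch in range(n_epochs):
        real_share = epoch / max(n_epochs - 1, 1)
        n_real = round(batch_size * real_share)
        batch = (random.sample(real, n_real)
                 + random.sample(synthetic, batch_size - n_real))
        random.shuffle(batch)
        yield epoch, batch
```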
