Article

OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision

by Bruno Berenguel-Baeta *, Jesus Bermudez-Cameo and Jose J. Guerrero
Instituto de Investigación en Ingeniería de Aragón, Universidad de Zaragoza, 50018 Zaragoza, Spain
* Author to whom correspondence should be addressed.
Sensors 2020, 20(7), 2066; https://doi.org/10.3390/s20072066
Submission received: 3 March 2020 / Revised: 31 March 2020 / Accepted: 3 April 2020 / Published: 7 April 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

Omnidirectional and 360° images are becoming widespread in industry and in consumer society, and omnidirectional computer vision is gaining attention accordingly. Their wide field of view allows a great amount of information about the environment to be gathered from a single image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a large number of images is essential for the correct training of learning-based computer vision algorithms. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We cover a variety of well-known projection models, such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, our tool includes photorealistic non-central-projection systems, such as non-central panoramas and non-central catadioptric systems. To the best of our knowledge, this is the first tool reported in the literature that generates photorealistic non-central images. Moreover, since the omnidirectional images are generated virtually, we provide pixel-wise semantic and depth information as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of pixel-accurate ground truth for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, we test different computer vision algorithms: line extraction from dioptric and catadioptric central images, 3D layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
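As a rough illustration of the kind of projection model such a generator composes, the sketch below maps each pixel of an equirectangular panorama to a ray direction that could then be sampled (e.g., by tracing it in the virtual environment or interpolating a set of perspective captures) to obtain colour, semantic label and depth. This is a minimal sketch under assumed conventions, not OmniSCV's actual code; the function name and camera frame are hypothetical.

```python
import numpy as np

def equirectangular_rays(width, height):
    """Unit ray directions for every pixel of an equirectangular
    panorama of size width x height.

    Assumed convention (not necessarily the tool's): longitude phi
    in [-pi, pi) maps to columns, latitude theta in [pi/2, -pi/2]
    maps to rows; camera frame is x forward, y left, z up.
    """
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    phi = (u + 0.5) / width * 2.0 * np.pi - np.pi      # longitude per column
    theta = np.pi / 2.0 - (v + 0.5) / height * np.pi   # latitude per row
    x = np.cos(theta) * np.cos(phi)
    y = np.cos(theta) * np.sin(phi)
    z = np.sin(theta)
    return np.stack([x, y, z], axis=-1)                # shape (H, W, 3)

# Each ray corresponds to one panorama pixel; querying the virtual scene
# along these rays yields the photometric, semantic and depth channels.
rays = equirectangular_rays(1024, 512)
print(rays.shape)  # (512, 1024, 3)
```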
Keywords: computer vision; omnidirectional cameras; virtual environment; deep learning; non-central systems; image generator; semantic label

