**1. Introduction**

The taxonomy of driving automation defined by the Society of Automotive Engineers (SAE) is the internationally accepted standard. The SAE taxonomy provides detailed definitions for six levels of driving automation, ranging from no driving automation (Level 0) to full driving automation (Level 5) [1]. Mass-produced vehicles have recently begun to be widely equipped with Level 2 autonomous driving technology, which provides drivers with partial driving automation and is referred to as advanced driver assistance systems (ADAS). Among examples of ADAS, adaptive cruise control (ACC) and the lane-keeping assist system (LKAS) are Level 1 technologies, and highway driving assist (HDA) is a Level 2 technology.

The primary goal of autonomous driving technology is to respond proactively to unanticipated scenarios, such as traffic accidents and construction sites. This requires rapid and effective identification of the environment surrounding the vehicle. To achieve this, various sensors are used for detection, such as light detection and ranging (LiDAR) sensors, radar sensors, and cameras [2]. Among these, cameras capture images containing a large quantity of information, which enables object detection, traffic information collection, and lane detection, among other tasks. Furthermore, cameras are more readily accessible than other sensors. Therefore, several studies have been conducted on camera-based collection and processing of environmental information.

With regard to the correction of camera images, Lee et al. [3] proposed a method to correct the radial distortion caused by camera lenses. In addition, Detchev et al. [4] proposed a method for simultaneously estimating the interior orientation and relative orientation parameters for calibrating measurement systems comprising multiple cameras.
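Radial lens distortion of the kind corrected in [3] is commonly described by the Brown–Conrady polynomial model. The following is a minimal sketch, not the specific method of [3]; the coefficient values and the fixed-point undistortion loop are illustrative assumptions:

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply the radial (Brown-Conrady) distortion model to normalized
    image coordinates: x_d = x_u * (1 + k1*r^2 + k2*r^4)."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the distortion by fixed-point iteration: repeatedly divide
    the distorted point by the distortion factor evaluated at the
    current estimate of the undistorted point."""
    xy_u = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy_u**2, axis=-1, keepdims=True)
        xy_u = xy_d / (1.0 + k1 * r2 + k2 * r2**2)
    return xy_u

# Round trip: undistorting a distorted point recovers the original.
pt = np.array([[0.3, -0.2]])
recovered = undistort(distort(pt, k1=-0.1, k2=0.02), k1=-0.1, k2=0.02)
```

For the small distortion coefficients typical of automotive lenses, the fixed-point iteration converges quickly; real calibration pipelines also estimate tangential terms, which are omitted here for brevity.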

**Citation:** Lee, S.-H.; Kim, B.-J.; Lee, S.-B. Study on Image Correction and Optimization of Mounting Positions of Dual Cameras for Vehicle Test. *Energies* **2021**, *14*, 4857. https://doi.org/10.3390/en14164857

Academic Editors: Guzek Marek, Rafał Jurecki and Wojciech Wach

Received: 5 July 2021; Accepted: 5 August 2021; Published: 9 August 2021


With regard to camera-based lane detection, Kim et al. [5] proposed an algorithm to improve nocturnal lane detectability based on image brightness correction and the lane angle. In addition, Kim et al. [6] performed real-time lane detection using the lane path obtained from the lane gradient and width information in conjunction with the previous frame. Choi et al. [7] proposed and validated a novel lane detection algorithm based on random sample consensus (RANSAC), building on conventional lane detection algorithms. Kalms et al. [8] used the Viola–Jones object detection method to design and implement an algorithm for lane detection and autonomous driving. Wang et al. [9] achieved lane detection through pre-processing of images and utilized features extracted from the image pixel coordinates to detect lane departure using a stacked sparse autoencoder (SSAE). Andrade et al. [10] recommended a novel three-stage strategy for lane detection and tracking, including image correction, region of interest (ROI) set-up, and edge detection.
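The RANSAC principle underlying [7] can be illustrated with a minimal line-fitting sketch on lane-edge points; the point set, inlier threshold, and iteration count below are illustrative assumptions, not parameters from [7]:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, seed=None):
    """Fit a line y = a*x + b to 2D points with RANSAC: repeatedly fit a
    candidate line to two random points and keep the candidate supported
    by the largest number of inliers (points within tol of the line)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, (0.0, 0.0)
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if x1 == x2:
            continue  # skip degenerate vertical samples
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.sum(np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)
    return best_model

# Synthetic lane-edge points on y = 0.5x + 1, contaminated with outliers.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
inlier_pts = np.column_stack([x, 0.5 * x + 1.0])
outlier_pts = rng.uniform(0, 10, size=(10, 2))
a, b = ransac_line(np.vstack([inlier_pts, outlier_pts]), seed=1)
```

The same consensus idea extends to the curved lane models used in practice by sampling enough points to fit a polynomial instead of a line.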

With regard to monocular camera-based distance measurement, Bae et al. [11] proposed a method to measure the distance to the vehicle in front using the relationship between the lane and the geometry of the camera and the vehicle. A method suggested by Park et al. [12] involved measuring distances by training a distance classifier using distance information obtained from LiDAR sensors in conjunction with the width and height of the bounding box corresponding to the detected object. Huang et al. [13] proposed a method to estimate inter-vehicle distances from monocular images captured by cameras installed within a vehicle by combining the vehicular attitude angle information with the segmentation information. Moreover, Zhe et al. [14] constructed an area–distance geometric model based on the camera projection principle; leveraging the advantages of 3D detection, they combined the 3D detection of vehicles with the distance measurement model and proposed a robust inter-vehicle distance measurement method based on a monocular camera installed within a vehicle. Bougharriou et al. [15] combined vanishing point extraction, lane detection, and vehicle detection based on actual images and proposed a method to estimate distances between cameras and vehicles in front by correcting camera images.
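Monocular geometric approaches of this kind generally rest on the pinhole similar-triangles relation D = f·H/h, where f is the focal length in pixels, H the real size of the target, and h its size in the image. A minimal sketch follows; the numeric values are illustrative assumptions, not calibration data from the cited studies:

```python
def monocular_distance(focal_px, real_height_m, pixel_height):
    """Pinhole-camera similar triangles: an object of real height H (m)
    imaged at h pixels by a camera of focal length f (pixels) lies at
    distance D = f * H / h (m)."""
    return focal_px * real_height_m / pixel_height

# A 1.5 m tall vehicle rear imaged at 90 px by an f = 1200 px camera.
d = monocular_distance(1200.0, 1.5, 90.0)  # 20.0 m
```

The relation makes the core weakness of monocular ranging visible: the estimate depends on an assumed real-world size H, which is why the cited studies supplement it with lane geometry, LiDAR training data, or 3D detection.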

Object detection and distance measurements using stereo cameras have also been researched extensively. Kim [16] proposed an algorithm to estimate vehicular driving lanes by generating 3D trajectories based on the coordinates of detected traffic signs, and Seo [17] proposed an improved distance measurement method using a disparity map. A method proposed by Kim et al. [18] comprises measuring distances by correcting image brightness and estimating the central points of objects using two webcams. Furthermore, Song et al. [19] proposed a forward collision distance measurement method by combining a Hough space lane detection model with stereo matching. Additionally, Sie et al. [20] proposed an algorithm for real-time lane and vehicle detection and measurement of distances to vehicles in the driving lane by combining a portable embedded system with a dual-camera vision system. A method involving efficient pose estimation of on-board cameras using 3D point sets obtained from stereo cameras and the ground surface was proposed by Sappa et al. [21]. Yang et al. [22] proposed a stereo vision-based system that detects vehicle license plates and calculates their 3D position in each frame for vehicular speed measurement. Zaarane et al. [23] proposed an inter-vehicle distance measurement system based on image processing utilizing a stereo camera, considering the position of vehicles in both cameras and certain geometric angles. Cafiso et al. [24] proposed a system based on in-vehicle stereo vision and the global positioning system (GPS) to detect and assess collisions between vehicles and pedestrians. Wang et al. [25] proposed real-time object detection and depth estimation based on deep convolutional neural networks (DCNNs). Lin et al. [26] proposed a vision-based driver assistance system.
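The disparity-based methods above share the standard parallel-stereo depth relation Z = f·B/d, with baseline B and disparity d in pixels. A minimal sketch, assuming rectified parallel cameras and illustrative numeric values:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Parallel (rectified) stereo: a point whose horizontal pixel
    positions in the two images differ by disparity d lies at depth
    Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# f = 1200 px, baseline 0.3 m, disparity 18 px.
z = stereo_depth(1200.0, 0.3, 18.0)  # 20.0 m
```

Because depth varies inversely with disparity, a wider baseline yields larger disparities and finer depth resolution at long range, which is one reason the camera baseline is among the mounting variables examined in this study.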

Several extensive studies have been conducted on the correction of camera images, road lane detection, and distance measurement. However, few studies have optimized camera mounting positions for measuring distances to objects in front of a vehicle. This gap is addressed in the present study using testing and evaluation methods based on dual-camera image correction and lane detection. Only the theoretical concepts used in the experiments are described for the computer vision phase of lane detection and the image distortion correction phase. In addition, focal length correction is performed after image distortion correction to reduce the effect of the change in detection distance caused by the distortion correction. Moreover, this study investigates three main variables related to dual-camera installation: experimental results are presented according to installation height, camera baseline, and angle of inclination. The parameters corresponding to the camera's rotation axes are roll, pitch, and pan; the pitch is the installation angle of the camera, and the parallel stereo camera method applied in this study does not consider roll and pan. Parameters such as the angle of view and focal length of the camera are excluded from the analysis because the two cameras constituting the dual-camera configuration have identical specifications. Actual tests were conducted on the three variables to determine the optimal dual-camera position, with each variable tested at three values. Based on the obtained optimal values, actual tests were then conducted to verify the theoretical equations. The remainder of this study proceeds as follows.

