Article

An Automotive LiDAR Performance Test Method in Dynamic Driving Conditions

1 Durability Technology Team, Hyundai Motor Company, Hwaseong 18280, Republic of Korea
2 IT Convergence Components Research Center, Korea Electronics Technology Institute (KETI), Gwangju 61005, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2023, 23(8), 3892; https://doi.org/10.3390/s23083892
Submission received: 19 February 2023 / Revised: 30 March 2023 / Accepted: 6 April 2023 / Published: 11 April 2023
(This article belongs to the Section Vehicular Sensing)

Abstract

The Light Detection and Ranging (LiDAR) sensor has become essential to achieving a high level of autonomous driving functions, as well as standard Advanced Driver Assistance System (ADAS) features. LiDAR capabilities and signal repeatability under extreme weather conditions are of utmost concern in the redundancy design of automotive sensor systems. In this paper, we demonstrate a performance test method for automotive LiDAR sensors that can be utilized in dynamic test scenarios. In order to measure the performance of a LiDAR sensor in a dynamic test scenario, we propose a spatio-temporal point segmentation algorithm that can separate the LiDAR signals of moving reference targets (car, square target, etc.) using an unsupervised clustering method. An automotive-grade LiDAR sensor was evaluated in four harsh environmental simulations, based on time-series environmental data from real road fleets in the USA, and four vehicle-level tests with dynamic test cases were conducted. Our test results show that the performance of LiDAR sensors may be degraded by several environmental factors, such as sunlight, the reflectivity of an object, and cover contamination.

1. Introduction

The Light Detection and Ranging (LiDAR) sensor is a transformative technology that detects the surrounding environment of an autonomous car in real time. Thanks to their ability to measure long-range spatial information in a few microseconds, LiDAR sensors have become widely used in Level 3 and higher driving automation [1], as well as in standard Advanced Driver Assistance Systems (ADASs). The functional safety and reliability of such driving assistance are highly correlated with the perception quality of the automotive sensors (front camera, radar, and LiDAR), which have different mounting positions, redundancy designs, and intrinsic characteristics [2]. Due to such variations, direct quality assessment of automotive sensors remains an open research topic.
Most standard protocols for ADAS and self-driving functional safety focus on system-level testing, not sensor-level testing [3]. The quantitative evaluation of LiDAR performance under harsh environments, such as fog [4], rain [5], background light [6], dust [7], and snow [8], is essential for providing safe ADAS and self-driving features, for both car consumers and manufacturers, in terms of proper redundancy design. Expanding test cases based on the operational design domain (ODD) of automotive cars is an ongoing research field.
Over the past decades, many reports in the literature have described the performance deterioration of LiDAR sensors under harsh environments. Yet, LiDAR test methods are not standardized, for several reasons. First, quantitative measurement and reproduction of environmental factors are extremely difficult (e.g., the visibility of fog). Second, LiDAR systems have unique 3D measurement characteristics and come in different form factors (laser wavelengths, beam steering mechanisms, and photon transmitting and receiving modules).
Existing LiDAR validation methods examine capability and signal repeatability in an indoor facility [9], a static scenario [10], or a virtual scene [5] by measuring the rate of performance drop under a controllable environmental factor. In these cases, a quantified environmental factor is controlled in a test facility, while a LiDAR sensor captures signals received from a reference target. More recent work has dealt with vehicle-level validation in laboratory conditions [4,11]. Considering that the actual ODD of an automotive car is more complex than these artificial environments, it is still necessary to examine automotive LiDAR performance in real driving fleets under different weather conditions. In our review, the state-of-the-art methods investigate LiDAR capabilities in real driving cases [8,12]. Still, their results are subject to qualitative evaluation by means of human vision, such as visual comparison of LiDAR scan data, pass/fail tests judged by experts, and laborious manual statistics.
In this paper, we propose a LiDAR performance test method that enables fully quantitative analysis in real road driving. We propose an unsupervised learning method for feature extraction of reference targets from the point clouds of a LiDAR sensor, which can be utilized to produce quantitative data statistics for LiDAR performance tests in dynamic situations. For LiDAR performance evaluation, we demonstrate eight different environmental tests, including a dynamic case in which both the ego-vehicle and a counter-vehicle drive on a real road at a constant speed. Our environmental tests were twofold. First, we simulated the performance degradation of an automotive LiDAR sensor under the following adverse cases: strong sunlight, high ambient temperature, low ambient temperature, and cover contamination by dust. Our simulations were based on time-series captures of ambient temperature, sunlight, and visibility against dust contamination from real road fleets in Nevada and Alaska, USA. Second, we also investigated variations of LiDAR signal characteristics for four driving scenes: daylight versus night, the color of the front vehicle, vibration on a country road, and interference from the LiDAR of an oncoming vehicle. We performed real road tests for these cases with our feature extraction algorithm.
In summary, our contributions are as follows:
  • We show a vehicle-level performance evaluation pipeline consisting of a schematic diagram of the test vehicle, the test metrics and procedure, and the environmental factors.
  • We present performance test results for an automotive-grade LiDAR sensor, in both real road driving scenarios and environmental simulations based on time-series measurements from real road fleets in harsh weather conditions.
  • We propose a spatio-temporal point segmentation algorithm using a density-based unsupervised clustering algorithm that can be utilized in dynamic test scenarios (e.g., feature extraction from moving objects, such as a car).

2. Related Work

The literature on LiDAR performance validation in adverse conditions can be categorized into three groups. The first group focuses on performance degradation in weather simulations using a rain/fog chamber. The second group involves sensor validation in a dynamic scene where a test vehicle or target moves according to test scenarios. The third group expands the test cases using physical simulations in combination with high-fidelity modeling of the sensors, the surrounding objects, and the environment. In this section, we briefly introduce the existing studies related to each group.

2.1. LiDAR Performance Evaluation in Adverse Weather Conditions

Many studies have evaluated the performance degradation of automotive LiDAR in adverse weather conditions. Wojtanowski et al. [13] analyzed the range degradation of 1550 nm and 905 nm LiDAR in rain and fog conditions using a numerical analysis of the signal-to-noise ratio (SNR) correlated with the laser wavelength. Instead of a laboratory test and numeric simulation, Kutila et al. [14] performed a relative comparison of 1550 nm and 905 nm LiDAR sensors' performance in fog and rain conditions, as well as a vehicle-level test of commercial 905 nm LiDAR sensors in a rain/fog chamber. Rasshofer et al. [10] and Filgueira et al. [15] quantified the influence of rain on Time-of-Flight (ToF) LiDAR through empirical tests in outdoor scenes, finding that LiDAR capabilities varied with the amount of rain and the object material. Similarly, Hasirlioglu et al. [9] investigated the influence of fog on automotive LiDAR using an indoor fog simulator. Li et al. [4] modeled the ranging process of a ToF LiDAR under artificial fog conditions using visibility recordings against fog density, providing a Gaussian process to estimate the operational feasibility of a LiDAR under a known fog density. Carballo et al. [16] introduced a benchmark dataset for LiDAR sensors, which includes static chamber test data under adverse weather conditions. In particular, they captured point cloud degradation caused by fog, rain, and strong sunlight for six commercial LiDAR sensors.
The early-stage test setups in the aforementioned work were limited to sensor-level tests in which the test LiDAR was installed facing a static target. For an automotive LiDAR sensor, quality assessment must also cover a variety of dynamic scenes, in order to account for sensor errors that may occur in the ODD of the vehicle into which the LiDAR sensor is integrated.

2.2. LiDAR Performance Test in Dynamic Scene

The previous work in this category aimed to provide dynamic tests for automotive LiDAR. Dannheim et al. [17] proposed a weather detection system using LiDAR scan data in driving situations; their results were validated in laboratory simulations. Tang et al. [18] performed a driving test for autonomous-vehicle LiDAR sensors under rainy conditions, covering pedestrian detection in sunny and rainy weather using system-level metrics, such as stop distance, time to stop, hard braking, and LiDAR/camera detection. Heinzler et al. [5] developed more realistic test scenarios in both driving and static conditions. In their experimental setup, pedestrian dummies, real traffic signs, bicycles, and a driving car were placed and moved against a test vehicle in a fog chamber. However, the dynamic test results, which mainly focused on the effect of multi-echo returns versus different fog densities, were restricted to visual comparison by human observers. Kim et al. [11] verified the number of points received from a static target of different materials (wood, plastic, etc.) according to the driving speed of a test vehicle and the amount of rainfall, using a rain simulation chamber. These methods characterize LiDAR capabilities under different weather conditions in dynamic scenarios. However, the test metrics depend on expert judgment, and the statistics are gathered manually.
Measuring LiDAR performance on real roads is beneficial for capturing the full spectrum of LiDAR signal characteristics from rich surroundings, compared to the aforementioned artificial test environments. Jokela et al. [8] noted that previous test methods focused on rain and fog conditions, neglecting other critical weather conditions, such as snow. Therefore, they presented outdoor LiDAR test results in snowy conditions by attaching six LiDAR sensors to the top and front of a test vehicle. However, the outdoor test results were limited to a qualitative analysis by human experts, such as observation of point cloud occlusion by snow. Bijelic et al. [12] released a multi-modal adverse weather dataset covering camera, radar, and LiDAR sensors over about 10,000 km of driving in northern Europe. Based on this multi-modality, they also proposed a sensor fusion network validated on the driving dataset. Dhananjaya et al. [19] proposed an active-learning-based weather- and light-level classification model and provided real road datasets for different weather conditions.
In summary, the existing work on LiDAR performance tests in dynamic scenes tends to rely on empirical analysis by human experts, so there is a demand for quantitative and automated test methods.

2.3. LiDAR Quality Assessment Based on Simulation

Simulation in a virtual environment has clear strengths for the validation of automotive sensors, as well as for ADAS/AD functional safety, since it enables both sensor-level and system-level loop testing in combination with scenario-based modeling of the real-world environment. Early studies aimed to build frameworks for modeling sensors, digitizing the real world, and gathering datasets using driving fleets with attached automotive sensors. Gschwandtner et al. [20] introduced a 3D scan simulator for commercial LiDAR sensors, while Pereira et al. [21] presented an integrated framework for a traffic simulator based on automotive sensor models (LiDAR and camera).
The state-of-the-art literature suggests leveraging physical simulation for LiDAR performance measurement in adverse conditions. Goodin et al. [22] developed a physics-based simulation of the influence of rain on terrestrial LiDAR performance. Manivasagam et al. [23] proposed a LiDAR simulator utilizing both physics-based and learning-based simulation via raycasting over a 3D scene and a neural network for signal deviations. The simulator was capable of performing repetition tests under test scenarios captured from real self-driving fleets. However, environmental changes were not considered, so various noise factors from the real world were not fully reflected in the LiDAR sensor model. Hahner et al. [24] simulated LiDAR-based 3D object detection in foggy weather by modeling the attenuation caused by fog as a soft target. This model can be applied to actual LiDAR measurements to evaluate 3D object detection in simulated fog conditions, but the solution is restricted to fog.
Simulation-based methods clearly outperform actual road tests of LiDAR capabilities in adverse weather in terms of quantity, variety, and accessibility. However, there is still a large gap between the real world and the synthesized world. First, the LiDAR industry is still converging on a dominant sensing mechanism [25]. At this stage, a universal model of attenuation and noise factors is infeasible to create, unlike the current state-of-the-art simulators for standardized sensors, such as the front camera [26] and radar [27]. Existing weather simulations therefore use an oversimplified LiDAR model, in terms of measurement variance in range and angle, sporadic sensor errors, or multi-level noise factors [28]. Therefore, automotive sensors must be validated on real roads with various test cases, including dynamic driving conditions. We focused on the performance evaluation of real road test cases with a tool that aids in labeling and analyzing the road data.

3. Methodology

3.1. Test Vehicle

In Figure 1, we illustrate our test vehicle, which was modified from a consumer-grade SUV to attach automotive LiDAR sensors and the accompanying data acquisition systems. We considered the mounting positions of automotive LiDAR sensors for both Level 2/3 and Level 4 self-driving cars, and placed the sensors at three different positions: front, top, and rear. For the performance evaluation, the Device Under Test (DUT) was mounted on additional frames. For the front position, we used the Valeo Scala gen2 and Innoviz one as the DUT, while, for the top position, we used the Ouster OS1 and Velodyne Alpha Prime. For the left and right sides of the rear LiDAR, we considered the Velodyne Ultra Puck and Hesai XT32. We changed the combination of LiDAR sensors according to the scope and purpose of each test. In this paper, we focus only on the automotive-grade front LiDAR sensors used for ADAS features.
The test vehicle is able to capture not only LiDAR scan data, but also surrounding environmental information. For example, ambient temperature, humidity, acceleration, and illuminance are captured on the cover of the LiDAR sensors using IPETRONIK's IPElog2 data logger. Additional time-series data, such as the position and velocity of the ego-vehicle and video from the windshield camera, are also captured and stored on an industrial PC installed in the luggage compartment.
Figure 2 is a schematic diagram of the test vehicle. The major components can be categorized into sensors, signal grabbers, the data acquisition system, power supplies, and time-series data analysis. In our schematic, parallel data pipelines, connected to an industrial-grade PC, transfer raw image buffers from the front camera, LiDAR point clouds from an Ethernet grabber, and the ego-vehicle status to the PC. The feature extraction algorithm in Section 3.3 was utilized in the time-series data analysis of the LiDAR point clouds.
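To make the pipeline structure concrete, the sketch below shows one way such parallel logging pipelines could be organized in software. It is a hypothetical illustration rather than the vehicle's actual software: the source names, reader placeholders, and polling rates are all assumptions.

```python
# Hypothetical sketch of parallel logging pipelines. The reader callables stand in
# for the Ethernet grabber, camera driver, and vehicle-status interface.
import queue
import threading
import time

def run_pipeline(name, read_sample, out_queue, stop_event, period_s):
    """Poll one data source and push timestamped records onto its queue."""
    while not stop_event.is_set():
        sample = read_sample()                 # e.g., a LiDAR frame, image buffer, or status record
        out_queue.put((time.time(), name, sample))
        time.sleep(period_s)

stop = threading.Event()
queues = {}
sources = {
    "lidar_front": (lambda: b"point-cloud-frame", 0.04),    # ~25 Hz (placeholder)
    "camera_front": (lambda: b"image-buffer", 0.033),       # ~30 Hz (placeholder)
    "vehicle_status": (lambda: {"speed_kph": 0.0}, 0.01),   # ~100 Hz (placeholder)
}
threads = []
for name, (reader, period) in sources.items():
    q = queue.Queue()
    queues[name] = q
    t = threading.Thread(target=run_pipeline, args=(name, reader, q, stop, period), daemon=True)
    t.start()
    threads.append(t)

time.sleep(0.2)   # let each pipeline collect a few samples
stop.set()
print({name: q.qsize() for name, q in queues.items()})
```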

3.2. Environmental Data Acquisition

This section describes the measurement of driving environmental data on real roads. The road fleet data were used for two purposes. The first was to observe the resistance of LiDAR performance to extreme driving environments by means of data statistics. The second was to collect road fleet data for realistic environmental simulation in laboratory conditions. For example, we demonstrate a temperature simulation test for automotive LiDAR sensors in Section 4.3; in that test, a temperature profile captured in Alaska was utilized for the low-temperature case.
We considered two harsh road environments selected in accordance with these purposes. First, we captured time-series data of road conditions in winter in Alaska, USA. The data include the internal/external temperature, 3-axis position/acceleration, and the amount of light passing through the cover of the test LiDAR sensor. Our second series of data was acquired from desert road fleets in summer in Nevada, USA. These road data were collected only under strong sunlight and high-temperature conditions (Figure 3).
For the road data captures, we utilized the data acquisition system mentioned in Section 3.1. This multi-channel data acquisition box collects 1D signals from connected electronic sensors, such as thermocouples, accelerometers, and light meters. An example of the data acquisition process is shown in Figure 4. In this figure, a thermocouple is attached to the top of a front LiDAR sensor in order to capture the cover temperature during a road fleet. The capture result is shown as a 2D graph where the x-axis is time and the y-axis is the value of the electronic sensor. In this example, we measured the range of the cover temperature of a front LiDAR attached to the front of the test vehicle.
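As an illustration of how a logged profile feeds the laboratory simulation, the snippet below derives a chamber set-point range from a cover-temperature time series. The column names and sample values are invented for the example; the actual IPElog2 export format is not specified here.

```python
# Minimal sketch: derive chamber set-points from a logged cover-temperature profile.
import csv
import io

# Synthetic rows standing in for an exported Alaska cover-temperature log.
raw_log = io.StringIO(
    "time_s,temp_c\n0,-12.5\n600,-18.0\n1200,-27.4\n1800,-30.1\n2400,-22.8\n"
)
temps = [float(row["temp_c"]) for row in csv.DictReader(raw_log)]
t_min, t_max = min(temps), max(temps)
print(f"chamber profile range: {t_min:.1f} C to {t_max:.1f} C")

# 5 C-step set-points between the recorded extremes (Section 4.3 uses 5 C intervals).
setpoints = []
t = t_min
while t < t_max:
    setpoints.append(round(t, 1))
    t += 5.0
setpoints.append(round(t_max, 1))
print("chamber set-points:", setpoints)
```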

3.3. Dynamic Target Feature Extraction Algorithm

Instead of a semantic classification method conducted in a supervised manner, we adopted a density-based clustering approach [29] to collect the points of a predefined target from the LiDAR scan data. Deep-learning networks are the most promising candidate for point-wise segmentation via supervision from a large dataset [25]. However, they were not adequate for this study, which deals with the extraction of LiDAR signals back-scattered from a specific target. First, we investigated the capabilities and signal repeatability of LiDAR systems in harsh weather conditions; in general, weak signals lead to weak inference and erroneous decisions. Second, the objects to be detected were specified in advance, so there was no reason for such a bulky detection framework. We had domain knowledge of the test environment surrounding the LiDAR system, so we could efficiently design a deterministic algorithm that accounted for it.
The procedure of our target feature extraction algorithm is described in Figure 5. First, we collect scan data labeled with a timestamp from a LiDAR sensor. Second, the scan data are divided into several groups using the DBSCAN clustering algorithm. Third, a spatio-temporal matching algorithm is performed for object tracking. Finally, we obtain the segmented points of a target object for each scan frame.
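The per-frame clustering step (the second step above) can be sketched with scikit-learn's DBSCAN as follows. This is a minimal illustration on synthetic points; the eps and min_samples values are illustrative rather than the parameters used in the study.

```python
# Per-frame clustering step: group a LiDAR scan into candidate objects with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_frame(points_xyz, eps=0.6, min_samples=8):
    """Return {cluster_id: point indices}; DBSCAN labels noise points as -1."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    return {cid: np.where(labels == cid)[0] for cid in set(labels) if cid != -1}

# Toy frame: two well-separated blobs standing in for a car and a sign post.
rng = np.random.default_rng(0)
frame = np.vstack([
    rng.normal([10.0, 0.0, 0.5], 0.2, size=(200, 3)),   # "car" cluster
    rng.normal([6.0, -3.0, 1.0], 0.1, size=(80, 3)),    # "sign" cluster
])
clusters = cluster_frame(frame)
print({cid: len(idx) for cid, idx in clusters.items()})
```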
The choice of DBSCAN as a scene interpreter was based on the characteristics of LiDAR signals in driving conditions. Point signals from target objects usually have a structural similarity in both the spatial and temporal domains. Under this assumption, we performed spatio-temporal matching between point groups in two consecutive frames. Let $S_i^t$ be the $i$-th point group in the scan frame labeled $t$. In the cluster selection process, the similarity between $S_i^t$ and a point group $S_i^{t-1}$ in the previous scan frame is measured using the Euclidean distance of the feature vector $f_i^t = (\hat{x}, \hat{y}, \hat{z}, \hat{i}, \hat{\theta}, \hat{\phi})$, where $(\hat{x}, \hat{y}, \hat{z})$ is the center position of $S_i^t$ in 3D coordinates, $\hat{i}$ is the mean intensity, and $\hat{\theta}, \hat{\phi}$ are the azimuth and elevation angles of the center point. The Euclidean distance between two groups, $d(S_i^t, S_i^{t-1})$, is calculated as
$$d(S_i^t, S_i^{t-1}) = \lVert f_i^t - f_i^{t-1} \rVert. \quad (1)$$
In Equation (1), the density difference between a target object and background points becomes more pronounced because we fully leverage the intensity of the LiDAR signals in target object segmentation. For all scan frames, the feature extraction algorithm is processed iteratively to obtain the segmented points of the target objects. Further optimization of the segmentation quality can be performed using an additional filtering algorithm. In this paper, we used the RANSAC algorithm [30] as a spatial filter that rejects noise points in $S_i^t$.
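A minimal sketch of the spatio-temporal matching step is given below, assuming each cluster already carries per-point intensities. The feature vector follows Equation (1); the distance threshold and the toy data are assumptions made only for the example.

```python
# Spatio-temporal matching: describe each cluster by the Equation (1) feature vector
# (centroid x, y, z, mean intensity, azimuth, elevation) and link it to the closest
# cluster of the previous frame.
import numpy as np

def cluster_feature(points_xyz, intensities):
    cx, cy, cz = points_xyz.mean(axis=0)
    azimuth = np.arctan2(cy, cx)
    elevation = np.arctan2(cz, np.hypot(cx, cy))
    return np.array([cx, cy, cz, intensities.mean(), azimuth, elevation])

def match_clusters(prev_features, curr_features, max_dist=1.5):
    """Map each current cluster id to the closest previous cluster id within max_dist."""
    matches = {}
    for cid, f_curr in curr_features.items():
        pid, dist = min(((p, np.linalg.norm(f_curr - f_prev))
                         for p, f_prev in prev_features.items()), key=lambda kv: kv[1])
        if dist < max_dist:
            matches[cid] = pid
    return matches

# Toy example: the tracked target (cluster 0) moves 0.4 m forward between frames.
rng = np.random.default_rng(0)
target_prev = rng.normal([10.0, 0.0, 0.5], 0.05, size=(150, 3))
target_curr = target_prev + np.array([0.4, 0.0, 0.0])
clutter = rng.normal([30.0, 5.0, 1.2], 0.05, size=(60, 3))

prev_feats = {0: cluster_feature(target_prev, np.full(150, 3.2))}
curr_feats = {0: cluster_feature(target_curr, np.full(150, 3.1)),
              1: cluster_feature(clutter, np.full(60, 1.0))}
print(match_clusters(prev_feats, curr_feats))   # expected: {0: 0}
```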

3.4. Test Metrics

The dynamic target feature extraction algorithm is able to segment test targets from the LiDAR scan data. In this section, we describe the test metrics that can be computed from the extracted scan data.

3.4.1. Number of Points

The number of points is the count of points within the Region of Interest (ROI) selected by the algorithm. It indicates the ability to detect an object at a certain distance, since more scan points on an object, i.e., higher spatial resolution, lead to a better detection rate [2]. It can also be used as an indicator of signal repeatability by observing the temporal coherence of the number of scan points in a static test, in which both the DUT and the reference target are fixed.

3.4.2. Intensity

Intensity is, in general, proportional to the signal strength of the received laser power reflected from a surrounding object. A modern LiDAR equation models the received laser power with various attenuation factors, such as distance, the aperture of the photodiode lens, the reflectivity of a target, and laser scattering in air [31]. The received laser power is transformed into electrical signals by the photodiode, and these signals are further processed by a unique signal processing module, so the exact definition of intensity varies according to the type of laser receiver integrated into the system and how its output is processed through the ECU. A large volume of literature deals with reflectivity estimation of real road objects from point intensity [32]. In this paper, we utilized the intensity of points as a signal strength indicator.

3.4.3. Scan Frequency

The definition of a scan frame depends on the scanning mechanism, but for an automotive LiDAR sensor the scan frequency is generally defined by the time gap between two consecutive scan frames sent to a client. Scan frequency is especially important for self-driving features, since it is highly correlated with the design of system latency and safety validation.
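For illustration, the scan frequency can be estimated directly from logged frame timestamps, as in the short sketch below (the timestamps here are synthetic, not logged values).

```python
# Scan frequency estimate from consecutive frame timestamps (seconds).
import numpy as np

frame_timestamps = np.array([0.000, 0.040, 0.081, 0.120, 0.161, 0.200])  # synthetic
gaps = np.diff(frame_timestamps)
print(f"mean frame gap: {gaps.mean()*1000:.1f} ms, scan frequency: {1.0/gaps.mean():.1f} Hz")
```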

3.4.4. Field of View and Angular Resolution

Angular resolution is the minimum angular deviation between two distinguishable objects in radial coordinates. It is crucial for long-range detection, since the projected area of the laser scanning footprint grows with the square of the distance from the sensor.
The field of view (FoV) of a LiDAR sensor is defined as the angle between the two scan points at the ends of the scan range. The FoV is the most fundamental system factor in the design of both the sensor system and the ODD of a self-driving car. Depending on the objective, the specifications for FoV and angular resolution are determined by considering the trade-off between them.

3.4.5. Number of Noise Points

A modern LiDAR sensor has various error sources. In this paper, the number of noise points refers to the amount of external noise in the point clouds from a prescribed interference source, such as sunlight, laser light coming from other LiDAR sensors, back-scattering from rain/fog, multi-reflection from shiny materials, and white noise due to high/low temperature. Noise points can be classified by comparing a test scene, in which external noise sources exist, with a reference scene, in which the targeted external noise source is blocked. In this paper, we distinguished noise points using a similarity measure of each point in the scanned point clouds between a test scene and a reference scene.
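One simple way to realize this per-point comparison is a nearest-neighbor test against the reference scene: a test-scene point with no reference point within a distance tolerance is counted as noise. The sketch below illustrates this idea on synthetic clouds; the tolerance value is an assumption, not the threshold used in the paper.

```python
# Count noise points as test-scene points with no reference-scene neighbor within tol.
import numpy as np
from scipy.spatial import cKDTree

def count_noise_points(test_scene_xyz, reference_scene_xyz, tol=0.15):
    tree = cKDTree(reference_scene_xyz)
    nearest_dist, _ = tree.query(test_scene_xyz, k=1)
    return int(np.sum(nearest_dist > tol))

# Toy scenes: the test scene repeats the reference points plus 25 spurious returns.
rng = np.random.default_rng(1)
reference = rng.uniform(-20, 20, size=(2000, 3))
spurious = rng.uniform(-20, 20, size=(25, 3))
test = np.vstack([reference + rng.normal(0, 0.01, reference.shape), spurious])
print(count_noise_points(test, reference))   # approximately 25
```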

3.4.6. Range Accuracy/Precision

Range accuracy and precision measure the quality of range detection in LiDAR scanning. In the literature and in field guidance [1,33], accuracy is calculated as the difference between a reference measurement of the distance to a specified object and the distance measured by the DUT. For this, a high-precision rangefinder is required to obtain reference range data for comparison with the DUT. Range precision is the deviation of the measured distance in LiDAR scanning. It can be calculated with a static target at a fixed distance from the DUT. Accumulating scan points provides the statistics for the range accuracy and precision measurements.
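The two statistics can be computed directly from accumulated range samples, as in the sketch below. The reference range and the synthetic DUT measurements are invented values used only for illustration.

```python
# Range accuracy (bias against a rangefinder reference) and precision (spread).
import numpy as np

reference_range_m = 25.000            # from a high-precision rangefinder (assumed value)
rng = np.random.default_rng(2)
measured = reference_range_m + 0.02 + rng.normal(0.0, 0.015, size=500)  # synthetic DUT ranges

accuracy = measured.mean() - reference_range_m     # signed bias, in meters
precision = measured.std(ddof=1)                   # 1-sigma repeatability, in meters
print(f"accuracy: {accuracy*100:.1f} cm, precision: {precision*100:.1f} cm")
```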

4. Results

In this section, we demonstrate test scenarios to examine the degradation of LiDAR performance under eight different environmental factors. They were performed with our test vehicle, described in Section 3.1. Table 1 describes the test scenarios and provides a brief summary of the test results.
There were two types of test scenarios in our study. The first type involved simulating harsh driving conditions in North America in a laboratory setting. These simulations considered factors such as high/low temperatures, direct sunlight on the cover of the DUT, and cover contamination during off-road driving. To test for high temperatures, we measured the ambient temperature at the cover of the front LiDAR sensors in summer in Nevada, USA (Section 3.2). We configured the chamber to the minimum and maximum temperatures of this profile and observed the impact of high ambient temperatures on the performance of the front LiDAR sensor in a chamber capable of containing a car. Similarly, we conducted a low-temperature test with a temperature profile captured in winter in Alaska (Figure 4). For the sunlight test, we evaluated the performance of the front LiDAR sensor under the maximum light intensity measured in the Nevada desert, using an artificial solar light source. During this test, we evaluated the signal intensity and number of points of a reference object in a static environment while the illumination varied from 0 to 80,000 lux. The cover contamination test was derived from off-road driving in the Nevada desert. We captured the visibility of the DUT every day while driving on a muddy road. In this test, we intentionally restricted the visibility of the DUT by spreading mud on the cover. As in the temperature test, we measured the intensity and number of points of a reference object.
The second type of test scenario involved driving tests for an automotive-grade LiDAR sensor under various environmental conditions. We examined the position and intensity variations of a front car as it moved on a country road, interference noise from the front LiDAR of a vehicle coming from the opposite direction, signal variations during day and night transitions, and different front-vehicle colors. In the vibration test, we recorded the force applied to the DUT and the signal variations of the DUT using the data acquisition toolbox described in Section 3.2. Interference from the oncoming LiDAR sensor was observed in a driving test in which a LiDAR-equipped vehicle moved from 0 to 20 m in front of the ego-vehicle. Signal variations during day/night transitions and with different front-vehicle colors were captured on a straight road as a front car moved forward from 0 to 100 m under various environmental conditions. The signal differences were analyzed to determine the effects of changes in illumination and the color of the front vehicle.
In this paper, the aforementioned analysis was limited to the Valeo Scala gen2, an automotive-grade scanning LiDAR sensor using a 905 nm pulsed laser.

4.1. Vibration Test

In the vibration test, we drove the ego-vehicle on a country road in Nevada while a reference car drove at a constant speed in front of the ego-vehicle for performance analysis. We used our feature extraction algorithm (Section 3.3) to segment the points from the front car. Figure 6 shows the coordinates of the DUT and example scan data, as well as the segmented points of the front vehicle.
To analyze how impacts on the DUT affected the sensor signals, we analyzed the standard deviation of the center position of the front car. Figure 7 shows the mean position of the points segmented for the front car in xyz coordinates and their mean intensities, with the x-axis indicating the scan frame. The graph indicates that the signal influence was particularly significant for the measured height of an object: the z-axis standard deviation of the center position of the front vehicle was 0.24 m, and the corresponding mean intensity also changed, with a deviation of 0.18. Since the z-axis points upward, such a large drift due to vibration might cause an instantaneous drop in the detection rate of surrounding vehicles.
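The vibration metric itself reduces to the per-frame centroid of the segmented front-car points and its standard deviation across frames, as sketched below on synthetic frames (the vibration amplitude is an invented value, not the measured one).

```python
# Per-frame centroid of the segmented front-car points and its std across frames.
import numpy as np

rng = np.random.default_rng(3)
# 100 synthetic frames of segmented points; the oscillating z offset emulates vibration.
frames = [rng.normal([15.0, 0.0, 0.8 + 0.2 * np.sin(k / 5.0)], 0.05, size=(120, 3))
          for k in range(100)]
centers = np.array([f.mean(axis=0) for f in frames])
print("std of center position (x, y, z) [m]:", np.round(centers.std(axis=0, ddof=1), 3))
```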

4.2. Strong Sunlight Test

Sunlight contains the wavelength bands of commercial LiDAR sensors at both 905 nm and 1550 nm, making it a noise source that influences the sensor's SNR and echo reception rate. Therefore, we analyzed the signal strength reduction rate as a function of sunlight brightness using the HMI DXS's solar light emulator. In this experiment, the solar emulator was installed in front of the DUT, and the DUT captured points from a reference object attached to the bottom of the emulator. We then analyzed the point data extracted using the feature extraction algorithm. The analysis (shown in Figure 8) revealed that the average intensity decreased by 47%, from 3.5 to 1.84, at 80,000 lux compared to the zero-sunlight condition. Meanwhile, the number of points on the reference target decreased by only 1.6%, from 80,656 to 79,310. We chose the maximum illumination of the sunlight simulation to match the maximum sunlight measured in the summer desert in Nevada, USA. Considering that this simulation was conducted indoors, with the artificial sunlight directed at the DUT, the maximum detection range and the probability of detection of point clouds could be affected by sunlight, due to the increase in the noise level of the received signals.

4.3. High/Low Ambient Temperature Test

Ensuring the durability of automotive sensors against ambient temperature variations is crucial. To this end, we conducted a temperature variation test, simulating extreme temperature conditions based on real road fleet temperature profiles. We used a temperature chamber that can hold a car to control the ambient temperature within the recorded minimum and maximum values. In front of the ego-vehicle, we placed a 1.5 m × 1.5 m square target with 50% reflectivity as the reference object. We gradually increased or decreased the temperature in 5 °C steps to simulate the temperature variations experienced in the summer desert and winter arctic regions.
The results, shown in Figure 9, indicate that the intensity of the reference target remained almost the same as at the initial temperature in both the high- and low-temperature tests. For instance, in the low-temperature test (Figure 10), it decreased by only 1.3% from 0 °C to −30 °C. Additionally, we observed no significant change in the number of points against temperature variations in either test. This outcome suggests that automotive-grade LiDAR sensors may have a compensation system for temperature variations, since the range characteristics (accuracy, precision, and detection rate) are highly dependent on the temperature of the LiDAR ranging module [34].

4.4. Interference by External Laser Source

Preventing interference from external laser sources, especially from the LiDAR sensors of oncoming vehicles, is critical to ensure the accurate detection of surrounding objects. To evaluate the anti-interference system of LiDAR sensors, we conducted a test in which a LiDAR-equipped vehicle moved from 0 to 20 m in front of our ego-vehicle. We measured the number of noise points generated by the LiDAR sensor mounted on the oncoming vehicle. To isolate the noise, we captured a reference scene with no interference in advance, then captured point clouds from the DUT while driving towards the oncoming vehicle, and finally segmented the noise points using a structural similarity comparison.
As shown in Figure 11, the interference noise from the oncoming LiDAR sensor diminished sharply within a short distance of approximately 3 m. At a distance of 1 m, a large amount of interference noise (18,111 points with a mean intensity of 1.36) was present, but this fell to 467 points at a 3 m distance. This result demonstrates the effectiveness of the LiDAR sensor's anti-interference system in mitigating noise caused by external laser sources.

4.5. Cover Contamination Test

In real-world driving conditions, automotive sensors can be obstructed by various types of contaminants, such as dust, snow, rain, or bugs. Such blockages can cause ADAS features to malfunction, making it necessary to develop better coating or cleaning systems to prevent them. In this test, we simulated contamination on the covers of LiDAR sensors as it would occur in muddy off-road conditions in Nevada, USA. Prior to the simulation, we measured the visibility of a front LiDAR sensor immediately after driving in muddy off-road conditions. During the off-road trials, we drove the test vehicle in harsh conditions with numerous puddles, causing mud to cover the front LiDAR sensor. After just one day of driving, the visibility of the sensor was nearly zero.
For the contamination simulation, we applied mud to the DUT until its visibility matched the measured visibility. For each trial, we measured the intensity and number of points of a 1.5 m × 1.5 m square target (50% reflectivity).
The results, shown in Figure 12, indicate that when the cover was fully covered with mud, the intensity decreased by up to 95.1% and the number of points decreased by up to 99.9% compared to normal operation of the DUT. It is important to note that these results do not imply that automotive LiDAR sensors are weak on muddy roads. Rather, this test was intended to observe the behavior of the sensors in pure blockage situations and to support the development of a fault-mode operation to deal with blockages.

4.6. Color Change Test

Due to eye safety regulations for laser-based sensors, the power of the emitted laser in automotive LiDAR sensors is limited, leading to a restricted detection range. In such cases, the surface reflectivity at the emitted laser wavelength is a crucial factor in detecting surrounding objects. Therefore, it is imperative to consider the color variations of commercial vehicles in the detection process of automotive LiDAR sensors.
For this test, we drove two identical model cars (one black and one white) on a straight road in Nevada, USA, at night. We captured points from the front car as it moved from 0 m to 100 m in front of the ego-vehicle at a constant speed of 20 km/h. In each scan frame, the feature extraction algorithm in Section 3.3 was used to segment the points of the front car.
The results in Figure 13 indicate that the overall intensity of the white car was higher than that of the black car. For example, in the short and middle range (0 to 80 m), the intensity curve of the white car was about 10% higher than that of the black car. In the long range (above 80 m), the intensity difference converged to near zero.
While there was a clear intensity change with respect to the color of the front car, there was no significant variation in the number of points with respect to distance. We observed that the black and white cars had almost the same number of points for all distances.
The overall results indicate that the detection quality for surrounding cars would not change according to the color of the car, in terms of the probability of detection, since there was no significant difference in the number of points captured from the moving car at any distance. Nevertheless, we observed an intensity difference; this difference did not lead to a decrease in the true positive rate.

4.7. Day and Night Transition Test

As discussed in Section 4.2, natural sunlight can act as an external noise source and affect the scan data of the LiDAR sensor. To investigate this effect, we conducted tests, similar to the color change case (Section 4.6), in both day and night conditions. In this test, we used a white car as the front car and varied the time of day during which the test was conducted.
The results in Figure 14 show that the intensity dropped similarly for both day and night conditions as the distance increased. However, the rate at which the number of points dropped was noticeably different. Under night conditions, more points were captured at intervals of 10 m to 50 m compared to day conditions. This observation suggests that natural sunlight can degrade the signal strength and SNR of a LiDAR sensor.

5. Conclusions

In this paper, we presented a comprehensive test framework to evaluate the performance of LiDAR sensors under harsh weather conditions. Our approach included a specially equipped test vehicle, data acquisition systems for three different data pipelines, and a feature extraction method for quantitative analysis of LiDAR point clouds. Our method was designed to analyze dynamic test cases in which the ego-vehicle or test targets were in motion. In the vibration test, our feature extraction model successfully extracted the ROIs of targets while the ego-vehicle and the front car were driving on a country road.
Using our feature extraction model, we conducted two types of performance tests: environmental simulations based on real road fleet data in extremely hot and cold weather conditions, and dynamic field tests on various real roads. Our test results clearly demonstrate that the environmental factors described in Section 4 can significantly affect the performance of LiDAR sensors.
In the future, we plan to extend our framework to real-time scene interpretation from point clouds to obtain more comprehensive information for LiDAR sensor performance analysis. This extension will be useful for multi-modal analysis of LiDAR capabilities for real road fleets. We plan to adopt a deep learning scheme for multi-object segmentation [35] in this framework.

Author Contributions

Conceptualization, J.P. and Y.K.; methodology, J.C.; software, S.L.; validation, S.B., Y.K. and J.P.; formal analysis, S.L.; investigation, J.C.; resources, J.P.; data curation, S.B.; writing—original draft preparation, Y.K. and J.P.; writing—review and editing, Y.K.; visualization, J.C.; supervision, S.L.; project administration, J.P.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. J3016-202104; Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International: Warrendale, PA, USA, 2018.
  2. Roriz, R.; Cabral, J.; Gomes, T. Automotive LiDAR technology: A survey. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6282–6297. [Google Scholar] [CrossRef]
  3. 2019-25217; Advanced Driver Assistance Systems Draft Research Test Procedures. NHTSA: Washington, DC, USA, 2020.
  4. Li, Y.; Duthon, P.; Colomb, M.; Ibanez-Guzman, J. What happens for a ToF LiDAR in fog? IEEE Trans. Intell. Transp. Syst. 2020, 22, 6670–6681. [Google Scholar] [CrossRef]
  5. Heinzler, R.; Schindler, P.; Seekircher, J.; Ritter, W.; Stork, W. Weather influence and classification with automotive lidar sensors. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1527–1534. [Google Scholar]
  6. Sun, W.; Hu, Y.; MacDonnell, D.G.; Weimer, C.; Baize, R.R. Technique to separate lidar signal and sunlight. Opt. Express 2016, 24, 12949–12954. [Google Scholar] [CrossRef] [PubMed]
  7. Mona, L.; Liu, Z.; Müller, D.; Omar, A.; Papayannis, A.; Pappalardo, G.; Sugimoto, N.; Vaughan, M. Lidar measurements for desert dust characterization: An overview. Adv. Meteorol. 2012, 2012, 356265. [Google Scholar] [CrossRef] [Green Version]
  8. Jokela, M.; Kutila, M.; Pyykönen, P. Testing and validation of automotive point-cloud sensors in adverse weather conditions. Appl. Sci. 2019, 9, 2341. [Google Scholar] [CrossRef] [Green Version]
  9. Hasirlioglu, S.; Doric, I.; Kamann, A.; Riener, A. Reproducible fog simulation for testing automotive surround sensors. In Proceedings of the 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), Sydney, NSW, Australia, 4–7 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–7. [Google Scholar]
  10. Rasshofer, R.H.; Spies, M.; Spies, H. Influences of weather phenomena on automotive laser radar systems. Adv. Radio Sci. 2011, 9, 49–60. [Google Scholar] [CrossRef] [Green Version]
  11. Kim, J.; Park, B.j.; Roh, C.G.; Kim, Y. Performance of mobile LiDAR in real road driving conditions. Sensors 2021, 21, 7461. [Google Scholar] [CrossRef] [PubMed]
  12. Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11682–11692. [Google Scholar]
  13. Wojtanowski, J.; Zygmunt, M.; Kaszczuk, M.; Mierczyk, Z.; Muzal, M. Comparison of 905 nm and 1550 nm semiconductor laser rangefinders’ performance deterioration due to adverse environmental conditions. Opto-Electron. Rev. 2014, 22, 183–190. [Google Scholar] [CrossRef]
  14. Kutila, M.; Pyykönen, P.; Holzhüter, H.; Colomb, M.; Duthon, P. Automotive LiDAR performance verification in fog and rain. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1695–1701. [Google Scholar]
  15. Filgueira, A.; González-Jorge, H.; Lagüela, S.; Díaz-Vilariño, L.; Arias, P. Quantifying the influence of rain in LiDAR performance. Measurement 2017, 95, 143–148. [Google Scholar] [CrossRef]
  16. Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The multiple 3D LiDAR dataset. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October 2020–13 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1094–1101. [Google Scholar]
  17. Dannheim, C.; Icking, C.; Mäder, M.; Sallis, P. Weather detection in vehicles by means of camera and LIDAR systems. In Proceedings of the 2014 Sixth International Conference on Computational Intelligence, Communication Systems and Networks, Tetova, Macedonia, 27–29 May 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 186–191. [Google Scholar]
  18. Tang, L.; Shi, Y.; He, Q.; Sadek, A.W.; Qiao, C. Performance test of autonomous vehicle lidar sensors under different weather conditions. Transp. Res. Rec. 2020, 2674, 319–329. [Google Scholar] [CrossRef]
  19. Dhananjaya, M.M.; Kumar, V.R.; Yogamani, S. Weather and light level classification for autonomous driving: Dataset, baseline and active learning. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 2816–2821. [Google Scholar]
  20. Gschwandtner, M.; Kwitt, R.; Uhl, A.; Pree, W. BlenSor: Blender sensor simulation toolbox. In Proceedings of the Advances in Visual Computing: 7th International Symposium, ISVC 2011, Las Vegas, NV, USA, 26–28 September 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 199–208. [Google Scholar]
  21. Pereira, J.L.; Rossetti, R.J. An integrated architecture for autonomous vehicles simulation. In Proceedings of the 27th Annual ACM Symposium on Applied Computing, Trento, Italy, 26–30 March 2012; pp. 286–292. [Google Scholar]
  22. Goodin, C.; Carruth, D.; Doude, M.; Hudson, C. Predicting the Influence of Rain on LIDAR in ADAS. Electronics 2019, 8, 89. [Google Scholar] [CrossRef] [Green Version]
  23. Manivasagam, S.; Wang, S.; Wong, K.; Zeng, W.; Sazanovich, M.; Tan, S.; Yang, B.; Ma, W.C.; Urtasun, R. Lidarsim: Realistic lidar simulation by leveraging the real world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11167–11176. [Google Scholar]
  24. Hahner, M.; Sakaridis, C.; Dai, D.; Van Gool, L. Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 15283–15292. [Google Scholar]
  25. Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M.A.; Cao, D.; Li, J. Deep learning for lidar point clouds in autonomous driving: A review. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 3412–3432. [Google Scholar] [CrossRef] [PubMed]
  26. Reway, F.; Huber, W.; Ribeiro, E.P. Test methodology for vision-based adas algorithms with an automotive camera-in-the-loop. In Proceedings of the 2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Madrid, Spain, 12–14 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–7. [Google Scholar]
  27. Buddappagari, S.; Asghar, M.; Baumgärtner, F.; Graf, S.; Kreutz, F.; Löffler, A.; Nagel, J.; Reichmann, T.; Stephan, R.; Hein, M.A. Over-the-air vehicle-in-the-loop test system for installed-performance evaluation of automotive radar systems in a virtual environment. In Proceedings of the 2020 17th European Radar Conference (EuRAD), Utrecht, The Netherlands, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 278–281. [Google Scholar]
  28. Rasshofer, R.H.; Gresser, K. Automotive radar and lidar systems for next generation driver assistance functions. Adv. Radio Sci. 2005, 3, 205–209. [Google Scholar] [CrossRef] [Green Version]
  29. Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. 2017, 42, 1–21. [Google Scholar] [CrossRef]
  30. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
  31. Wandinger, U. Introduction to Lidar; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  32. Jeong, J.; Kim, A. LiDAR intensity calibration for road marking extraction. In Proceedings of the 2018 15th International Conference on Ubiquitous Robots (UR), Honolulu, HI, USA, 26–30 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 455–460. [Google Scholar]
  33. Gomes, T.; Roriz, R.; Cunha, L.; Ganal, A.; Soares, N.; Araújo, T.; Monteiro, J. Evaluation and Testing Platform for Automotive LiDAR Sensors. Appl. Sci. 2022, 12, 13003. [Google Scholar] [CrossRef]
  34. Gao, T.; Gao, F.; Zhang, G.; Liang, L.; Song, Y.; Du, J.; Dai, W. Effects of temperature environment on ranging accuracy of lidar. In Proceedings of the Tenth International Conference on Digital Image Processing (ICDIP 2018), Shanghai, China, 11–14 May 2018; Volume 10806, pp. 1915–1921. [Google Scholar]
  35. Wu, B.; Wan, A.; Yue, X.; Keutzer, K. Squeezeseg: Convolutional neural nets with recurrent crf for real-time road-object segmentation from 3d lidar point cloud. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1887–1893. [Google Scholar]
Figure 1. LiDAR test vehicle. Test devices are attached at the front, top, and rear positions using additional metal structures. At each position, the corresponding LiDAR sensors can be replaced and mounted as the DUT.
Figure 2. Schematic diagram of the test vehicle. In real road fleets, the test vehicle collects front camera images, point clouds of LiDAR sensors, and time-series signals of environments and car status. The gathered data is stored in industrial PCs installed in the luggage compartment. Additional batteries are used to supply electronic power to each component.
Figure 3. Real road fleet examples for collecting driving data: (a) under strong sunlight and high temperature (above 40 °C) in Nevada, USA; (b) under low sunlight and low temperature (below −17 °C) in Alaska, USA.
Figure 4. Car state logger example: (a) IPETRONIK's IPElog2 as a data acquisition toolbox; (b) cover temperature measurement setup using a thermocouple; (c) measured temperature profile in Alaska. The yellow mark indicates the attachment location of a wired thermal sensor connected to the IPElog2 (shown in (a)).
Figure 5. Overall procedure of feature extraction algorithm.
Figure 6. Example of scan data in the vibration test. (left) raw point cloud in a scan frame, (right) segmented points of the front vehicle using the feature extraction algorithm.
Figure 7. Example of vibration test result. We observed the position and the intensity of the front car being driven at a constant speed (30 km/h) on a country road.
Figure 8. Example of the strong sunlight test. The number of points on the reference object and the mean intensity are plotted against the amount of illumination (lux).
Figure 9. Example of high temperature test results. (Left) mean intensity versus ambient temperature (Right) number of points versus ambient temperature.
Figure 10. Example of low temperature test results. (Left) mean intensity versus ambient temperature (Right) number of points versus ambient temperature.
Figure 11. (Left) Mean intensity of interference noise from a LiDAR sensor facing the DUT; (right) number of LiDAR interference noise points versus distance.
Figure 12. (Left) mean intensity of captured points from a reference target (Right) number of captured points from a reference target. For each graph the x-axis indicates the contamination condition captured by an off-road fleet in Nevada.
Figure 13. (Left) mean intensity of captured points from black/white car according to the distance (Right) number of captured points from black/white car according to the distance.
Figure 14. (Left) mean intensity of captured points according to the distance in day/night (Right) number of captured points according to the distance in day/night.
Table 1. Summary of environmental test scenarios and results.

Type | Environmental Factor | Test Conditions | Test Results
Simulation with road fleet data (Section 3.2) | Cover contamination | Mud was spread on the cover of the DUT based on the visibility of a front LiDAR sensor from an off-road fleet. A reference target was placed in front of the DUT at a 5 m distance. | The signal intensity and the number of points decreased when the cover of the LiDAR was obscured by mud contamination.
Simulation with road fleet data (Section 3.2) | Strong sunlight | Direct artificial sunlight was applied to the DUT while it captured points from a reference target. The illumination flux was controlled up to the maximum sunlight measured on a desert surface in Nevada. | The mean intensity decreased due to sunlight, but the number of points from the reference target remained the same. The intensity drop rate was 47% at the highest illumination flux compared to the zero-sunlight case.
Simulation with road fleet data (Section 3.2) | High temperature | Ambient temperature was simulated in a large temperature chamber capable of containing a test car, using temperature profiles captured from an extremely hot area in Nevada, USA. | As the temperature increased, the signal strength increased by 19.8% compared to the initial stage.
Simulation with road fleet data (Section 3.2) | Low temperature | Ambient temperature was simulated in a large temperature chamber capable of containing a test car, using temperature profiles captured from an extremely cold area in Alaska, USA. | At low temperatures, the signal intensity changed insignificantly.
Performance test in dynamic conditions | Vibration | The force applied to the DUT and the signal variations of the DUT were recorded using the data acquisition toolbox and the feature extraction algorithm (Section 3.3). | The z-axis position and intensity of a front vehicle drifted considerably in response to road vibrations; a high deviation in the z-axis position and intensity was measured.
Performance test in dynamic conditions | Interference | Interference noise from the front LiDAR of an oncoming vehicle was captured while the oncoming car moved from 0 to 20 m from the ego-vehicle. | Interference noise occurred only at a very short distance (e.g., 1 m) and decreased drastically as the distance to the interference source increased.
Performance test in dynamic conditions | Color change of front car (reflectivity of a target) | The front car was driven on a straight road from 0 to 100 m from the ego-vehicle. Black and white cars with different reflectivity at the 905 nm wavelength were used. | A 33.2% drop rate was measured in the intensity curve. There was no significant change in the number of points.
Performance test in dynamic conditions | Day and night transition | The front car was driven on a straight road from 0 to 100 m from the ego-vehicle in day and night conditions. | The signal intensity decreased under day conditions compared to night conditions.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

