Article

Measuring Vehicle Profile Size: Lidar-Based System and K-Frame-Based Methodology

Qiang Zhang, Zihao Wang, Jianwen Shao, Libo Weng and Fei Gao

1 College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
2 Institute of Transportation and Acoustical Metrology, Zhejiang Institute of Metrology, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(18), 6206; https://doi.org/10.3390/s21186206
Submission received: 9 August 2021 / Revised: 4 September 2021 / Accepted: 8 September 2021 / Published: 16 September 2021
(This article belongs to the Section Sensing and Imaging)

Abstract

At present, the light curtain is a widely used method for measuring vehicle profile size. However, it is sensitive to temperature, humidity, dust and other environmental factors. In this paper, a lidar-based system with a K-frame-based algorithm is proposed for measuring vehicle profile size. The system is composed of a left lidar, a right lidar, a front lidar, a control box and an industrial control computer. Within the system, a K-frame-based methodology is investigated, which includes several candidate algorithm combinations. Three groups of experiments are conducted. An optimal algorithm combination, A16, is determined through the first group of experiments. In the second group of experiments, various types of vehicles are chosen to verify the generality and repeatability of the proposed system and methodology. The third group of experiments compares the proposed approach with vision-based and other lidar-based methods. The experimental results show that the proposed K-frame-based methodology is far more accurate than the comparative methods.

1. Introduction

In recent years, research on intelligent transportation systems has been developing rapidly and its results have been widely applied, changing and affecting our daily life. With the continuous improvement of automobile performance, ever higher requirements have been placed on automobile performance detection technology. As the number of performance parameters grows, more and more kinds of testing instruments and equipment are needed, and they are developing in the direction of miniaturization, full automation and intelligence. These instruments and equipment can be used for comprehensive testing and evaluation of vehicle handling performance, safety and reliability, emissions and environmental protection and other performance indicators.
Since vehicles are produced according to standards, many owners are not satisfied with the monotonous style, color, or size. Some vehicles are therefore modified to satisfy individual requirements, e.g., by adding parts to enlarge the size and load more materials, which may bring safety hazards such as permanent damage to the road, accidents and interference with normal driving. To avoid the hidden dangers caused by illegal modification, all kinds of vehicles, especially trucks, need to be regularly measured at inspection stations. Nowadays, several methods such as the Coordinate Measuring Machine (CMM) based method [1], the light curtain-based method [2], vision-based methods [3,4] and the lidar-based method [5] have been presented to measure the vehicle profile size. These methods have several limitations, such as complex installation, harsh lighting requirements, sensitivity to illumination and high cost. In this paper, a 2D lidar-based measurement system for vehicle profile size is proposed and a K-frame-based methodology is investigated. In contrast to most of the afore-mentioned solutions, the proposed system installs three 2D lidars on fixed brackets to collect point cloud data of vehicles. The point cloud data is used to calculate the profile size of a vehicle through the proposed K-frame-based methodology. The scheme in this paper can better adapt to complex real scenarios with high reliability and measurement accuracy. The main contributions are as follows:
  • A novel lidar-based system for automated measurement of vehicle profile size is designed. A method is also investigated to calibrate the parameters of the lidars themselves and of the whole system, which can improve the accuracy and efficiency of system parameter calibration and ensure the accuracy of the original point cloud data;
  • An original point cloud data filtering method and a vehicle profile size measurement method are put forward, which can overcome slight deflection and incline of vehicles and reduce the interference of vibration, smoke and noise in real measuring scenarios;
  • A K-frame-based methodology is proposed to eliminate the measurement error caused by the deviation angle between the driving track and the central axis.

2. Related Work

The CMM-based method [1] can reach high measurement accuracy, but its limitations are obvious, such as severe working condition requirements, high cost and time consumption. The light curtain-based method [2] is a common choice since its advantages are relatively convincing, for instance, a simple measurement principle and few requirements on vehicle color and surroundings. However, its disadvantages, including a complex installation process and harsh lighting requirements, are also obvious.
Vision-based methods include the measurement method based on digital photography and coded mark points [3], the binocular stereo-based method [4], the method based on structured light and color coding [6], etc. Jia et al. [7] proposed an effective field measurement method for large objects based on a multi-view stereo vision system; the method is effective at measuring the height of a large hot forging. Pu et al. [8] proposed a new method for measuring object size using an ordinary digital camera. Xiang et al. [9] proposed a measurement method based on a dual-camera vision system and the relative measurement principle and achieved high-precision measurement of the bayonet size of large automobile brake pads. Zhang et al. [10] proposed a method of sheep body size measurement based on visual image analysis, which can be applied in a farm environment without disturbing animals. Wang et al. [11] proposed a new portable automatic pig body size measurement system. These methods are of lower cost but depend heavily on good lighting. In addition, vision-based methods are of low measurement accuracy and are not robust.
In contrast to the afore-mentioned methods, the lidar-based method [12] has many advantages, such as higher accuracy, stronger anti-jamming ability, lower environmental requirements and higher reliability. The key of the lidar-based method lies in lidar parameter calibration and in point cloud data analysis and processing. Many lidar-based methods and systems have been developed and applied to object size measurement. Li et al. [12] proposed a non-contact laser-scanning-based 3D measurement system to obtain the structure of vegetation canopy, which is based on flying-point scanning triangulation. Yin et al. [13] introduced the measurement principle of a 3D lidar sensor and analyzed the relationship between projection points, measurement distance and object size. Xu et al. [14] proposed an automatic non-contact measurement based on a terrestrial 3D lidar (FARO Photon120), which used a plane scattered data point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. Yang et al. [15] used 3D lidar to obtain point data, extracted plane information from it, combined the plane information with position information and established a 3D model. Wu et al. [16] studied the morphological characteristics of ravines by using 3D lidar. All the above-mentioned methods employed 3D lidar to measure or model a specific object. A 3D lidar can rotate its scanning plane to realize 3D scanning and acquire 3D information about the environment quickly and directly, which enables many applications in the fields of engineering measurement, robot navigation over complex terrain and so on. However, 3D lidar has the disadvantages of complex installation and high cost.
Among lidar-based methods, 2D lidar can also be used to measure or model an object by cooperating with a movable mechanical device or a specific installation method. Gong et al. [17] proposed a 3D ice shape measurement technique based on laser light sheet scanning. Yan et al. [18] detected 3D objects with a 2D laser scanning sensor and proposed a specially designed algorithm to collect data and construct three-dimensional images. Sanz-Cortiella et al. [19] installed a 2D lidar on a retractable bracket to scan a plant, build its 3D model and measure its size. Rosell et al. [20] employed remote 2D terrestrial lidars to obtain the 3D profile size of tree orchards. Keightley et al. [21] designed a system composed of linear laser sensors and other devices; the system could be moved by a rotating mechanical arm and used to calculate the biomass of grapevines. Dias et al. [22] proposed a 3D reconstruction technology for real-world environments, based on modifying a traditional 2D lidar to simulate a 3D lidar. Xu et al. [23] developed mechanical devices and motors to rotate a lidar around fixed points, by which three-dimensional space data was obtained from a single-line lidar and 2D data was transformed into three-dimensional data. Li et al. [24] proposed a full-waveform echo decomposition method to improve the ranging accuracy of full-waveform lidar. Niola et al. [25] showed a robot with a two-dimensional lidar that could be used to reverse engineer objects. Fang et al. [26] proposed a real-time low-cost 3D sensing and reconstruction system suitable for autonomous navigation and large-scale environment reconstruction. Choi et al. [27] investigated a sensing model of building structural deformation, which confirmed that a deformation measurement model based on 2D lidar may be a promising alternative. Ringdahl et al. [28] evaluated several existing tree trunk diameter estimation algorithms using 2D lidar and also developed and evaluated an enhancement algorithm compensating for beam width and using multiple scans. Bretschneider et al. [29] validated a body scanner as a measuring tool for rapid quantification of body shape. Yamada et al. [30] used a 2D lidar to obtain road shape information, according to which road damage areas were detected automatically. Most of the above-mentioned methods employed single or multiple 2D lidars to obtain three-dimensional information about an object; the lidars were usually installed on a movable mechanism, which reduces equipment cost. However, the reliability and measurement accuracy were also reduced, since the moving mechanical mechanism cannot ensure a stable measurement system. So, the lidar-plus-mechanism solution is not suitable for the scenario of measuring vehicle profile size, since that scenario is more complex and may include smoke, noise and vibration.

3. Lidar-Based Measurement System

3.1. Structure and Principle

As shown in Figure 1, the lidar-based measurement system is composed of a left lidar, a right lidar, a front lidar, a control box and an industrial control computer. The left and right lidars are mounted at the upper left and right corners of the gantry at the entrance of the channel, and the front lidar is mounted in the middle of the gantry at the exit of the channel.
The basic principle of the automated measurement system is as follows: when the head of a vehicle is detected entering the measurement region, the lidars start collecting point cloud data of the vehicle; when the tail of the vehicle is detected leaving the measurement region, the lidars stop collecting data. The measurement algorithm processes and analyzes the point cloud data and finally obtains the vehicle profile size. The workflow chart of the measurement process is depicted in Figure 2.
The installation locations of these devices are shown in Figure 3. The point cloud data collected by the left, right and front lidars are transmitted as TCP/IP protocol data through the Ethernet switch in the control box and fed into the measurement software installed on the industrial control computer, which then calculates the length, width and height of the vehicle.

3.2. Automated Calibration of Lidars

The working principle of a lidar is to send a laser beam to an object and then compare the received signal (the object echo reflected back from the object) with the transmitted signal. After proper processing, the distance between the lidar and the object can be obtained. To measure the 3D size, the obtained original data needs to be converted into point cloud data in a coordinate system. As shown in Figure 4, a 2D rectangular coordinate system is set up for each lidar, where the lidar is the origin, the vertical downward direction of the lidar is the Y-axis and the direction parallel to the ground in the scanning plane of the lidar is the X-axis. The coordinate transformation is given by Equation (1):
$$\begin{cases} x_i = D_i \sin(\beta_i) \\ y_i = D_i \cos(\beta_i) \end{cases} \quad (1)$$
where $i$ is the serial number of a laser beam in the scanning plane, $D_i$ is the distance between the lidar (the origin of the coordinate system) and the point on the object hit by the $i$-th laser beam, $\beta_i$ is the angle between the $i$-th laser beam and the Y-axis, and $(x_i, y_i)$ is the coordinate of the reflection point on the object in this coordinate system, as shown in Figure 4.
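For illustration, Equation (1) can be sketched in a few lines of Python (a minimal sketch; NumPy and the array-based interface are our assumptions, not part of the original software):

```python
import numpy as np

def beams_to_points(distances, beta):
    """Equation (1): convert per-beam ranges D_i and angles beta_i
    (radians, measured from the downward Y-axis) into 2D coordinates
    in the lidar-centred frame."""
    d = np.asarray(distances, dtype=float)
    b = np.asarray(beta, dtype=float)
    return d * np.sin(b), d * np.cos(b)
```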
According to the working principle, the laser beam scans cyclically clockwise, at a fixed frequency, from the position of the start laser beam to that of the end laser beam within each period. As shown in Figure 5, the maximal scanning angle is marked as α. According to Equation (1), $(x_i, y_i)$ can be calculated if and only if $\beta_i$ is known. To calculate $\beta_i$, it is necessary to calculate the central angle γ, i.e., the angle from the position of the start laser beam to the Y-axis; then, the installation height of the lidar, h, can also be calculated.
$\beta_i$ and h are the basic parameters of the automated measurement system for vehicle profile size and are obtained as follows. When a lidar is working, it continually emits laser beams at regular intervals within a fixed scanning period. Suppose the distance detected by the $i$-th laser beam between the lidar and the reflection point is $D_i$ and the set of distances is $\{D_i \mid i = 0, 1, 2, \ldots, K-1\}$, where K is the total number of laser beams emitted within the scanning plane angle α. Let the angle between any two adjacent laser beams be ψ, which is a constant. Then, $\beta_i$ and h of the lidar can be calculated according to Equation (2). The calibration process is shown in Figure 6.
$$\begin{cases} h = \min\{D_i\}_{i=0}^{K-1} \\ i^{*} = \arg\min_{i}\{D_i\}_{i=0}^{K-1} \\ \gamma = i^{*} \times \psi \\ \beta_i = \gamma - i \times \psi \end{cases} \quad (2)$$
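A minimal sketch of this self-calibration step, assuming a single scan of the empty lane is available as an array of K ranges:

```python
import numpy as np

def calibrate_lidar(distances, psi):
    """Equation (2): the shortest return of an empty-lane scan is the
    vertical beam, so its range is the mounting height h and its index
    i* gives the central angle gamma = i* * psi; every beam angle
    beta_i then follows as gamma - i * psi.

    distances: K ranges of one scan; psi: angular step between
    adjacent beams in radians.
    """
    d = np.asarray(distances, dtype=float)
    i_star = int(np.argmin(d))            # index of the vertical beam
    h = float(d[i_star])                  # installation height
    gamma = i_star * psi                  # central angle
    beta = gamma - np.arange(len(d)) * psi
    return h, gamma, beta
```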

3.3. Status Judgment of Vehicle and Frame Data Collection

When the measurement system is started, whether or not a vehicle is moving into the measurement region, the three lidars keep scanning and continuously transfer original data frames to the measurement software. Among these frames, the truly effective ones are those collected from the moment the vehicle just enters the measurement region to the moment it just leaves. So, how is it determined when the vehicle enters and leaves the measurement region?
Let the point cloud data frames collected by the left and right lidars at moment t be $LP_t = \{(x_i^{l,t}, y_i^{l,t}) \mid i = 0, 1, \ldots, F-1\}$ and $RP_t = \{(x_i^{r,t}, y_i^{r,t}) \mid i = 0, 1, \ldots, F-1\}$, respectively, where F is the number of coordinate points in each frame, $(x_i^{l,t}, y_i^{l,t})$ is the converted reflection coordinate of the i-th laser beam collected by the left lidar and $(x_i^{r,t}, y_i^{r,t})$ is that collected by the right lidar; both are obtained via Equations (1) and (2). $LP_t$ and $RP_t$ are unified into a new coordinate system, in which the right lidar is the origin, the horizontal direction is the X-axis and the vertical downward direction is the Y-axis. Then, the new $LP_t$ and $RP_t$ are merged into a uniform set $P_t$ according to Equation (3). The minimal X coordinate value $x_{min}$ and the maximal X coordinate value $x_{max}$ are found according to Equation (4). The rule for determining when the vehicle enters and leaves the measurement region is as follows: the moment $(x_{max} - x_{min}) \ge w_0$ first holds, the vehicle is judged to have just entered the measurement region; as long as $(x_{max} - x_{min}) \ge w_0$ keeps holding afterwards, the vehicle is judged to be still in the measurement region; the moment the condition changes from $(x_{max} - x_{min}) \ge w_0$ to $(x_{max} - x_{min}) < w_0$, the vehicle is judged to have just left the measurement region.
$$P_t = \{(x_i^t, y_i^t) \mid ((x_i^t, y_i^t) \in LP_t \;\lor\; (x_i^t, y_i^t) \in RP_t) \;\land\; y_i^t \le h_0\} \quad (3)$$
$$\begin{cases} x_{min} = \min\{x_i^t \mid (x_i^t, y_i^t) \in P_t\} \\ x_{max} = \max\{x_i^t \mid (x_i^t, y_i^t) \in P_t\} \end{cases} \quad (4)$$
where $h_0$ is the vehicle height threshold and $w_0$ is the vehicle width threshold.
When the vehicle is judged to have just entered the measurement region, the measurement software begins recording data frames and continues until the vehicle is judged to have just left the measurement region.
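The enter/leave rule of Equations (3) and (4) can be sketched as a simple state machine; the (N, 2) frame layout and the helper names below are illustrative assumptions:

```python
import numpy as np

def vehicle_present(frame, h0, w0):
    """Equations (3)-(4): keep points above the height threshold
    (y <= h0 in the downward-Y frame) and test whether their
    horizontal extent reaches the width threshold w0."""
    pts = frame[frame[:, 1] <= h0]
    if pts.size == 0:
        return False
    return pts[:, 0].max() - pts[:, 0].min() >= w0

def record_passage(frames, h0, w0):
    """Record frames from the moment the vehicle enters until it leaves."""
    recorded, inside = [], False
    for frame in frames:
        if vehicle_present(frame, h0, w0):
            inside = True
            recorded.append(frame)
        elif inside:          # condition just turned false: vehicle left
            break
    return recorded
```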

4. K-Frame-Based Measurement Methodology

4.1. Vehicle Width Measurement

Let the unified point cloud data sets collected by the left and right lidars be $LP = \{(x_{ij}, y_{ij}) \mid i \in [0, n-1], j \in [0, F-1]\}$ and $RP = \{(x_{kv}, y_{kv}) \mid k \in [0, m-1], v \in [0, F-1]\}$, respectively, where n is the number of data frames collected by the left lidar, m is the number of data frames collected by the right lidar, $(x_{ij}, y_{ij})$ is the j-th point in the i-th frame, $(x_{kv}, y_{kv})$ is the v-th point in the k-th frame and F is the number of coordinate points in each data frame. Since there may be noise due to the interference of dust, smoke, etc., the points near the ground or significantly higher than the vehicle must first be removed, i.e., points in LP that satisfy $y_{ij} > h_1$ or $y_{ij} < h_2$ are removed. Similarly, points in RP that satisfy $y_{kv} > h_1$ or $y_{kv} < h_2$ are removed. $h_1$ is the upper threshold in the Y-axis direction, which usually corresponds to the Y value near the top of the vehicle, and $h_2$ is the lower threshold in the Y-axis direction, which usually corresponds to the Y value near the ground. Let the denoised point cloud datasets be L and R, respectively.
Theoretically, the vehicle width could be obtained simply by subtracting the minimum X value in R from the maximum X value in L. However, considering that the data collections of the left and right lidars are usually not synchronous, it is unclear which two data frames in R and L should be chosen for the subtraction. In addition, a single-frame subtraction method brings random error. To overcome the random error caused by the asynchronous collections, a K-frame-based measurement algorithm is proposed. The principle of the K-frame-based method is that the point cloud data of the left and right lidars are sub-grouped by $K_1$ and $K_2$ frames, respectively; namely, each sub-group of left-lidar point cloud data includes $K_1$ frames and each sub-group of right-lidar point cloud data includes $K_2$ frames. The algorithm can be represented by Equations (5)–(7).
$$XL_\Theta = \left\{ xl_t \;\middle|\; xl_t = \Theta\left\{ \max\{x_{ij} \mid j \in [0, F-1]\} \right\}_{i=K_1 t}^{K_1(t+1)-1},\; t = 0, 1, \ldots, N_1 - 1 \right\} \quad (5)$$
$$XR_\Lambda = \left\{ xr_t \;\middle|\; xr_t = \Lambda\left\{ \min\{x_{ks} \mid s \in [0, F-1]\} \right\}_{k=K_2 t}^{K_2(t+1)-1},\; t = 0, 1, \ldots, N_2 - 1 \right\} \quad (6)$$
$$W = \Gamma(XL_\Theta, XR_\Lambda) \quad (7)$$
where W is the final vehicle width; L is divided into $N_1 = n/K_1$ sub-groups of $K_1$ frames each, with $K_1$ preset; similarly, R is divided into $N_2 = m/K_2$ sub-groups of $K_2$ frames each, with $K_2$ also preset; $x_{ij}$ is the X coordinate of the j-th point in the i-th frame of L; Θ is an operation such as max, min or average; $xl_t$ is the value computed by Θ over the t-th sub-group of frames and $XL_\Theta$ is the result set over all $N_1$ sub-groups; $x_{ks}$ is the X coordinate of the s-th point in the k-th frame of R; Λ is an operation such as max, min or average; $xr_t$ is the value computed by Λ over the t-th sub-group and $XR_\Lambda$ is the result set over all $N_2$ sub-groups; Γ is an operation like Θ and Λ; min and max denote the minimum and maximum operations, respectively.
According to the above model, different methods can be derived from different choices of Θ, Λ and Γ, as shown in Table 1; Γ can be any of Equations (8)–(11).
$$\Gamma(XL_\Theta, XR_\Lambda) = \left| \mathrm{mean}(XL_\Theta) - \mathrm{mean}(XR_\Lambda) \right| \quad (8)$$
$$\Gamma(XL_\Theta, XR_\Lambda) = \left| \min(XL_\Theta) - \max(XR_\Lambda) \right| \quad (9)$$
$$\Gamma(XL_\Theta, XR_\Lambda) = \max\left\{ |xr_t - xl_t| \;\middle|\; t \in [0, \min(N_1, N_2)] \right\} \quad (10)$$
$$\Gamma(XL_\Theta, XR_\Lambda) = \mathrm{mean}\left\{ |xr_t - xl_t| \;\middle|\; t \in [0, \min(N_1, N_2)] \right\} \quad (11)$$
In actual measurement, it is difficult to keep the driving direction completely parallel to the lane axis. In this case, when Equation (8) or (9) is adopted for Γ, there will be a large error between the measured vehicle width and the actual one. So, Equation (10) or (11) is usually employed for Γ in actual measurement. The proposed K-frame-based method can effectively reduce the measurement error.
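For concreteness, the following sketch implements the combination later selected as optimal, A16 (Θ = mean, Λ = mean, Γ = Equation (11)); the frame layout and helper names are our assumptions, not the authors' code:

```python
import numpy as np

def subgroup_means(per_frame_values, k):
    """Group consecutive per-frame values into sub-groups of k frames
    and average each sub-group (the Theta/Lambda = mean step)."""
    v = np.asarray(per_frame_values, dtype=float)
    n = len(v) // k
    return v[: n * k].reshape(n, k).mean(axis=1)

def width_a16(left_frames, right_frames, k1, k2):
    """Combination A16: Equation (5) with the per-frame max on the left,
    Equation (6) with the per-frame min on the right, both averaged over
    sub-groups, then Gamma = Equation (11), the mean of the per-sub-group
    widths |xr_t - xl_t|."""
    xl = subgroup_means([f[:, 0].max() for f in left_frames], k1)
    xr = subgroup_means([f[:, 0].min() for f in right_frames], k2)
    t = min(len(xl), len(xr))
    return float(np.mean(np.abs(xr[:t] - xl[:t])))
```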

4.2. Vehicle Length Measurement

As shown in Figure 7 and Figure 8, in the vehicle length measurement, the function of the left and right lidars is to obtain the time $t_{start}$ when the vehicle enters the measurement region and the time $t_{end}$ when it leaves. The length $L_0$ of the measurement region is determined when the equipment is installed. At the moment $t_{start}$, the system collects a frame of point cloud data from the front lidar; after preliminary filtering, the X coordinate set $\{x_{t_{start}}\}$ of the point cloud is obtained. At the moment $t_{end}$, the system again collects a frame of point cloud data from the front lidar and, after preliminary filtering, obtains the X coordinate set $\{x_{t_{end}}\}$. Then, the vehicle length L can be calculated according to Equation (12).
$$L = \min\{x_{t_{start}}\} - \min\{x_{t_{end}}\} \quad (12)$$
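In code form, Equation (12) is a one-liner (a sketch assuming the filtered X coordinates measure distance from the front lidar along the lane):

```python
import numpy as np

def vehicle_length(x_tstart, x_tend):
    """Equation (12): at t_start the head is at the entrance line and at
    t_end the tail is, so the difference of the nearest front-lidar X
    readings at the two moments equals the vehicle length."""
    return float(np.min(x_tstart) - np.min(x_tend))
```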

4.3. Vehicle Height Measurement

The algorithm of vehicle height measurement is similar to that of vehicle width measurement. Theoretically, the left or right lidar alone can measure the vehicle height, as shown in Figure 9; however, this may lead to a large error. To eliminate the measurement error, the point cloud data of the two lidars are fully fused and an algorithm similar to that of the vehicle width measurement is adopted. The vehicle height can then be calculated according to Equation (13).
$$H = \max\left\{ h_t \;\middle|\; h_t = \max(yl_t, yr_t),\; t = 0, 1, \ldots, \min(N_1, N_2) - 1 \right\} \quad (13)$$
where $yl_t \in YL_\Theta$ and $yr_t \in YR_\Lambda$, and $YL_\Theta$ and $YR_\Lambda$ are calculated in the same way as $XL_\Theta$ and $XR_\Lambda$, using the Y coordinates instead of the X coordinates.
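A sketch mirroring the width algorithm for height, under the assumption that the second column of each frame has already been converted to height above ground:

```python
import numpy as np

def height_k_frame(left_frames, right_frames, k1, k2):
    """Equation (13): per-frame max height, averaged over K-frame
    sub-groups for each lidar, fused by taking the larger of the two
    per-sub-group values, then the overall maximum."""
    def subgroup_means(values, k):
        v = np.asarray(values, dtype=float)
        n = len(v) // k
        return v[: n * k].reshape(n, k).mean(axis=1)

    yl = subgroup_means([f[:, 1].max() for f in left_frames], k1)
    yr = subgroup_means([f[:, 1].max() for f in right_frames], k2)
    t = min(len(yl), len(yr))
    return float(np.max(np.maximum(yl[:t], yr[:t])))
```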

5. Error Analysis

5.1. Calibration Error

Suppose the maximum angle of the lidar scanning plane is α and the total number of laser beams in each scanning cycle is K. Due to the limited number of laser beams, it is sometimes impossible to find a laser beam that coincides with the vertical direction to serve as the Y-axis. In this case, the closest laser beam is chosen as the Y-axis, which inevitably leads to error. As shown in Figure 10, when the vertical line from the lidar to the ground lies exactly in the middle of two laser beams, the calibration error is maximal; in this situation, the angle between the closest laser beam and the vertical line is α/2K. The influence of the calibration error on the vehicle width measurement error is shown in Figure 11. The coordinate conversion between the actually established coordinate system and the standard one with the vertical line as the Y-axis is shown in Equation (14).
$$\begin{cases} x_0' = x_0 \cos(\alpha/2K) + y_0 \sin(\alpha/2K) \\ y_0' = -x_0 \sin(\alpha/2K) + y_0 \cos(\alpha/2K) \end{cases} \quad (14)$$
If other error factors are excluded, the vehicle drives strictly along the middle of the lane and the horizontal span between the left and right lidars is $W_0$, the theoretical vehicle width is calculated according to Equation (15). Taking the calibration error into consideration, the actually measured value is given by Equation (16). Then, the relative measurement error of vehicle width caused by the calibration error is given by Equation (17).
$$W = W_0 - 2x_0 \quad (15)$$
$$W' = W_0 - 2\left[ x_0 \cos(\alpha/2K) + y_0 \sin(\alpha/2K) \right] \quad (16)$$
$$\delta = \left| \frac{W' - W}{W} \right| = \frac{2\left[ x_0 \cos(\alpha/2K) + y_0 \sin(\alpha/2K) \right] - 2x_0}{W_0 - 2x_0} \quad (17)$$
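A quick numerical illustration of Equations (15)–(17); the scan angle, beam counts and coordinates below are made-up values, not the paper's hardware parameters:

```python
import numpy as np

alpha = np.deg2rad(190.0)      # assumed scanning plane angle
x0, y0, w0 = 1.25, 2.0, 4.0    # assumed point coordinates and lidar span (m)
for k in (361, 761, 1521):     # assumed beam counts per scan
    eps = alpha / (2 * k)      # worst-case angular offset to the vertical
    w_true = w0 - 2 * x0                                      # Equation (15)
    w_meas = w0 - 2 * (x0 * np.cos(eps) + y0 * np.sin(eps))   # Equation (16)
    delta = abs(w_meas - w_true) / w_true                     # Equation (17)
    print(f"K={k}: relative width error = {delta:.4%}")
```

As expected, the relative error shrinks as the number of beams K grows.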
To eliminate the measurement error caused by calibration, a calibration object whose profile size is known is used and a fine-tuning strategy is applied as follows:
  • the calibration object is measured by the proposed system;
  • the lidars are fine-tuned according to the difference between the measured value and the actual value of the calibration object;
  • steps 1 and 2 are repeated until the measured value equals the actual value.

5.2. Deviation Error of Vehicle Moving

As shown in Figure 12, for the basic algorithm combinations of Table 1, if Equation (8) or (9) is adopted for Γ and there is a deflection angle between the track and the central axis while the vehicle is moving forward, the error will be relatively large. Suppose the angle between the vehicle driving direction and the central lane axis is θ, the actual vehicle width is W, the measured value is W′ and the actual vehicle length is L; the relative error of the vehicle width caused by the vehicle moving deviation is shown in Equation (18).
$$\delta = \left| \frac{W' - W}{W} \right| = \left| \frac{[W + L\tan(\theta)]\cos(\theta) - W}{W} \right| \quad (18)$$
Equation (18) can be expanded into Equation (19).
$$\delta = \left| \cos(\theta) + \frac{L}{W}\sin(\theta) - 1 \right| \quad (19)$$
Usually, for trucks, W is limited to 2.5 m and L is limited to 18 m. If a measurement error of δ < 1% is required, θ < 0.08° must be satisfied; if L is limited to 13 m, θ < 0.12° must be satisfied, which imposes a stringent requirement on the vehicle's trajectory. Thus, the K-frame-based method is proposed, which can eliminate the measurement error caused by the deviation angle between the vehicle and the central axis and relax the synchronization requirements on the left and right lidars while filtering noise data.
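These tolerances can be checked numerically from Equation (19) using the small-angle estimate δ ≈ (L/W)·θ; the sketch below reproduces the order of magnitude of the quoted limits:

```python
import numpy as np

w, target = 2.5, 0.01                     # width limit (m), 1% error budget
for length in (18.0, 13.0):               # truck length limits (m)
    theta = target * w / length           # small-angle estimate (rad)
    # exact check with Equation (19)
    delta = abs(np.cos(theta) + (length / w) * np.sin(theta) - 1)
    print(f"L={length:.0f} m: theta ~ {np.degrees(theta):.3f} deg, delta = {delta:.3%}")
```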

6. Experiments

6.1. Determination of the Optimal Algorithm

The different combinations in Table 1 are tested on about 800 vehicles to determine the algorithm combination with the minimal profile-size measurement error; among them are 102 vans, 36 barn trucks, 59 fence trucks, 20 crane trucks, 150 semi-trailer tractors and 77 buses. The average errors of vehicle width and height for each algorithm combination in Table 1 are shown in Figure 13 and Figure 14, respectively. According to the experimental results, the combination A16 in Table 1, i.e., Θ is the mean method, Λ is the mean method and Γ is Equation (11), reaches the minimal average error.
However, the decision about which algorithm combination to choose cannot be made from the average error alone. Considering that the numbers of samples of different vehicle types vary greatly in the experiment, a combination with a smaller average error may perform well only on the vehicle types with more samples, while its error may be too large on the types with fewer samples. Therefore, the standard deviations of vehicle width and height are also calculated and are depicted in Figure 15 and Figure 16, respectively. The standard deviation results show that A16 still reaches the minimal value. On the whole, A16 is the algorithm combination with smaller error and stronger applicability compared with the others, so it is chosen as the algorithm for measuring the vehicle profile size.

6.2. Experiments of the Optimal Algorithm

Six specific vehicles from the above-mentioned categories are chosen for the experiments to verify the applicability and repeatability of A16: a semi-trailer tractor, a van, a barn truck, a fence truck, a crane truck and a bus. The profile size of each of the six vehicles is measured ten times and compared with the ground truth. The experimental results, i.e., profile size, max error, max relative error and repeatability error, are shown in Table 2.
As shown in Figure 17, the max errors of the experimental results are depicted as a line chart in ascending order of vehicle length. The measurement time increases with vehicle length, assuming the vehicles move at similar speeds; the errors caused by slight deflection and by moving deviation from the central axis of the lane then accumulate. Since length measurement is completely different from width and height measurement, the K-frame-based method is not employed for it. In Figure 17, it can be clearly seen that the max error of length also increases with measurement time, while those of width and height do not, which further proves that the K-frame-based method can eliminate the error caused by moving deviation from the central axis of the lane to a certain extent.
Although Figure 17 shows that the max error of length increases with measurement time, the max relative error of length does not change significantly with vehicle length. In fact, as shown in Figure 18, the max relative error of length fluctuates within a small range, just like the max errors of width and height, which shows that the proposed measurement method is highly applicable after adopting the optimal algorithm combination A16.
In terms of repeatability, it can be seen from Figure 19 that the repeatability error is confined to a small range and never exceeds 0.5%, which shows that the proposed measurement is of good repeatability.

6.3. Comparison Experiments

Several comparison experiments with Li et al. [31], Ratajczak et al. [32] and Ratajczak et al. [33] are conducted to assess the measurement accuracy of the proposed method. Li et al. [31] investigated a monocular-vision-based method of vehicle 3D size measurement, Ratajczak et al. [32] designed a scheme to measure the length of moving vehicles by employing stereoscopic video analysis and Ratajczak et al. [33] proposed a new method to estimate vehicle size based on the active appearance model (AAM) and stereoscopic video analysis. Although most of their measurement objects are passenger vehicles with only a few large vehicles, the three methods are still used in our comparison experiments since they perform well in the measurement of vehicle profile size. The experimental results are shown in Table 3. It can be seen that the proposed method is far more accurate than the three methods. Since the principle of the proposed method is completely different from theirs, the purpose of this comparison is to verify the high measurement accuracy in similar scenes.
To further demonstrate the benefit of the K-frame-based method with the algorithm combination A16, additional comparison experiments with Xu et al. [34] and Xu et al. [35] are implemented. Xu et al. [34] developed a vehicle size measurement method based on monocular vision, and Xu et al. [35] proposed a method of vehicle 3D dimension measurement based on laser ranging and developed a corresponding measurement system. Although only a dump truck experiment is carried out in [34], its application scene is similar to that of the system in this paper, and the hardware and installation scheme in [35] are very similar to ours; this is why they are chosen for the comparison. Table 4 shows the experimental results. From Table 4, it can be seen that, in similar application scenarios, the accuracy of the proposed method is higher than that of [34], which supports the correctness of the lidar-based measurement equipment. The comparison with [35] shows that the K-frame-based algorithm and the selected optimal algorithm combination A16 can greatly improve the measurement accuracy, which proves the significance of the K-frame-based algorithm and the optimal combination strategy.
Overall, according to the above experimental results, the proposed scheme and system have high measurement accuracy in the application scenarios. In particular, the K-frame-based algorithm and the selected optimal algorithm combination A16 play important roles in improving the measurement accuracy of the system.

7. Conclusions

In this paper, a complete lidar-based automated measurement system for vehicle profile size is developed and a method including lidar system calibration, vehicle status determination, original data processing and profile size calculation is proposed, which greatly reduces the influence of various interference factors on measurement accuracy. In particular, a K-frame-based algorithm is investigated that can eliminate the measurement error caused by the deviation angle between the vehicle and the central axis when the vehicle is moving forward; at the same time, it can relax the synchronization requirements on the left and right lidars and filter noise data. The experimental results show that the proposed method greatly improves the measurement accuracy compared with similar equipment under similar conditions. Moreover, the K-frame-based algorithm is not limited by the hardware devices and thus has a certain generality. How to apply the algorithm in light curtain and machine vision measurement systems is part of our ongoing work, which may reduce the influence of various interference factors on measurement accuracy and reduce measurement error.

Author Contributions

Conceptualization, Q.Z., Z.W., L.W., J.S. and F.G.; methodology, Q.Z. and Z.W.; software, Q.Z. and Z.W.; validation, Q.Z.; formal analysis, L.W., J.S. and F.G.; investigation, L.W.; resources, F.G.; data curation, L.W. and J.S.; writing—original draft preparation, Q.Z.; writing—review and editing, F.G.; visualization, Q.Z.; supervision, Z.W., L.W. and J.S.; project administration, Q.Z.; funding acquisition, F.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Project of China under Grant No. 2020AAA0104001, the Zhejiang Lab. under Grant No. 2019KD0AD011005 and the Zhejiang Provincial Science and Technology Planning Key Project of China under Grant No. 2021C03129.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Further information about the datasets used in the present work is available at https://www.dropbox.com/s/xnxf1nqe7psw672/ZPVehicle.rar/ (accessed on 19 December 2019).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, D.P. Application of coordinate measuring machine in reverse engineering. In Advanced Materials Research; Trans Tech Publications Ltd.: Baech, Switzerland, 2011; Volume 301, pp. 269–274. [Google Scholar]
  2. Sun, Y.; Yang, T.; Cheng, X.; Qin, Y. Volume Measurement of Moving Irregular Objects Using Linear Laser and Camera. In Proceedings of the 2018 IEEE 8th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Tianjin, China, 19–23 July 2018; pp. 1288–1293. [Google Scholar]
  3. Liu, W.; Lan, Z.; Zhang, Y.; Zhang, Z.; Zhao, H.; Ye, F.; Li, X. Global Data Registration Technology Based on Dynamic Coded Points. IEEE Trans. Instrum. Meas. 2017, 67, 394–405. [Google Scholar] [CrossRef]
  4. Shi, C.; Teng, G.; Li, Z. An approach of pig weight estimation using binocular stereo system based on LabVIEW. Comput. Electron. Agric. 2016, 129, 37–43. [Google Scholar] [CrossRef]
  5. Jiang, Q.; Hou, R.; Wang, S.; Wei, X.; Chang, B.; Shan, J.; Du, D. On-Line 3D reconstruction based on laser scanning for robot machining of large complex components. J. Phys. Conf. Ser. 2018, 1074, 12166. [Google Scholar] [CrossRef]
  6. Barnea, E.; Mairon, R.; Ben-Shahar, O. Colour-agnostic shape-based 3D fruit detection for crop harvesting robots. Biosyst. Eng. 2016, 146, 57–70. [Google Scholar] [CrossRef]
  7. Jia, Z.; Wang, L.; Liu, W.; Yang, J.; Liu, Y.; Fan, C.; Zhao, K. A field measurement method for large objects based on a multi-view stereo vision system. Sens. Actuators Phys. 2015, 234, 120–132. [Google Scholar] [CrossRef]
  8. Pu, L.; Tian, R.; Wu, H.-C.; Yan, K. Novel object-size measurement using the digital camera. In Proceedings of the 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 3–5 October 2016; pp. 543–548. [Google Scholar]
  9. Xiang, R.; He, W.; Zhang, X.; Wang, D.; Shan, Y. Size measurement based on a two-camera machine vision system for the bayonets of automobile brake pads. Measurement 2018, 122, 106–116. [Google Scholar] [CrossRef]
  10. Zhang, A.L.; Wu, B.P.; Wuyun, C.T.; Jiang, D.X.; Xuan, E.C.; Ma, F.Y. Algorithm of sheep body dimension measurement and its applications based on image analysis. Comput. Electron. Agric. 2018, 153, 33–45. [Google Scholar] [CrossRef]
  11. Wang, K.; Guo, H.; Ma, Q.; Su, W.; Chen, L.; Zhu, D. A portable and automatic Xtion-based measurement system for pig body size. Comput. Electron. Agric. 2018, 148, 291–298. [Google Scholar] [CrossRef]
  12. Li, X.; Zhao, H.; Liu, Y.; Jiang, H.; Bian, Y. Laser scanning based three dimensional measurement of vegetation canopy structure. Opt. Lasers Eng. 2014, 54, 152–158. [Google Scholar] [CrossRef]
  13. Huilin, Y.; Pengfei, X.; Qing, C. A method of objects classification for intelligent vehicles based on number of projected points. In Proceedings of the 2017 2nd IEEE International Conference on Intelligent Transportation Engineering (ICITE), Singapore, 1–3 September 2017; pp. 62–66. [Google Scholar]
  14. Xu, W.-H.; Feng, Z.-K.; Su, Z.-F.; Xu, H.; Jiao, Y.-Q.; Deng, O. An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data. Spectrosc. Spectr. Anal. 2014, 34, 465–471. [Google Scholar]
  15. Yang, S.-C.; Fan, Y.-C. 3D building scene reconstruction based on 3d lidar point cloud. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), Taipei, Taiwan, 12–14 June 2017; pp. 127–128. [Google Scholar]
  16. Wu, H.; Xu, X.; Zheng, F.; Qin, C.; He, X. Gully morphological characteristics in the loess hilly-gully region based on 3D laser scanning technique. Earth Surf. Process. Landf. 2018, 43, 1701–1710. [Google Scholar] [CrossRef]
  17. Gong, X.; Bansmer, S. Laser scanning applied for ice shape measurements. Cold Reg. Sci. Technol. 2015, 115, 64–76. [Google Scholar] [CrossRef]
  18. Yan, T.; Zhu, H.; Sun, L.; Wang, X.; Ling, P. Detection of 3-D objects with a 2-D laser scanning sensor for greenhouse spray applications. Comput. Electron. Agric. 2018, 152, 363–374. [Google Scholar] [CrossRef]
  19. Sanz-Cortiella, R.; Llorens-Calveras, J.; Escolà, A.; Arnó-Satorra, J.; Ribes-Dasi, M.; Masip-Vilalta, J.; Camp, F.; Gràcia-Aguilá, F.; Solanelles-Batlle, F.; Planas-DeMartí, S.; et al. Innovative LIDAR 3D Dynamic Measurement System to Estimate Fruit-Tree Leaf Area. Sensors 2011, 11, 5769–5791. [Google Scholar] [CrossRef]
  20. Rosell, J.R.; Llorens, J.; Sanz, R.; Arnó, J.; Ribes-Dasi, M.; Masip, J.; Escolà, A.; Camp, F.; Solanelles, F.; Gràcia, F.; et al. Obtaining the three-dimensional structure of tree orchards from remote 2D terrestrial LIDAR scanning. Agric. For. Meteorol. 2009, 149, 1505–1515. [Google Scholar] [CrossRef] [Green Version]
  21. Keightley, K.E.; Bawden, G.W. 3D volumetric modeling of grapevine biomass using Tripod LiDAR. Comput. Electron. Agric. 2010, 74, 305–312. [Google Scholar] [CrossRef]
  22. Dias, P.; Matos, M.; Santos, V. 3D Reconstruction of Real World Scenes Using a Low-Cost 3D Range Scanner. Comput. Civ. Infrastruct. Eng. 2006, 21, 486–497. [Google Scholar] [CrossRef]
  23. Xu, N.; Zhang, W.; Zhu, L.; Li, C.; Wang, S. Object 3D surface reconstruction approach using portable laser scanner. IOP Conf. Ser. Earth Environ. Sci. 2017, 69, 12119. [Google Scholar] [CrossRef] [Green Version]
  24. Li, D.; Xu, L.; Xie, X.; Li, X.; Chen, J.; Chen, J. Co-path full-waveform LiDAR for detection of multiple along-path objects. Opt. Lasers Eng. 2018, 111, 211–221. [Google Scholar] [CrossRef]
  25. Niola, V.; Rossi, C.; Savino, S. A new real-time shape acquisition with a laser scanner: First test results. Robot. Comput. Manuf. 2010, 26, 543–550. [Google Scholar] [CrossRef] [Green Version]
  26. Fang, Z.; Zhao, S.; Wen, S.; Zhang, Y. A Real-Time 3D Perception and Reconstruction System Based on a 2D Laser Scanner. J. Sens. 2018, 2018, 2937694. [Google Scholar] [CrossRef] [Green Version]
  27. Choi, S.W.; Kim, B.R.; Lee, H.M.; Kim, Y.; Park, H.S. A Deformed Shape Monitoring Model for Building Structures Based on a 2D Laser Scanner. Sensors 2013, 13, 6746–6758. [Google Scholar] [CrossRef]
  28. Ringdahl, O.; Hohnloser, P.; Hellström, T.; Holmgren, J.; Lindroos, O. Enhanced algorithms for estimating tree trunk diameter using 2D laser scanner. Remote Sens. 2013, 5, 4839–4856. [Google Scholar] [CrossRef] [Green Version]
  29. Bretschneider, T.; Koop, U.; Schreiner, V.; Wenck, H.; Jaspers, S. Validation of the body scanner as a measuring tool for a rapid quantification of body shape. Skin Res. Technol. 2009, 15, 364–369. [Google Scholar] [CrossRef] [PubMed]
  30. Yamada, T.; Ito, T.; Ohya, A. Detection of road surface damage using mobile robot equipped with 2D laser scanner. In Proceedings of the 2013 IEEE/SICE International Symposium on System Integration, Kobe, Japan, 15–17 December 2013; pp. 250–256. [Google Scholar]
  31. Li, S.; Jiang, X.; Qian, H.; Xu, Y. Vehicle 3-dimension measurement by monocular camera based on license plate. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016; pp. 800–806. [Google Scholar]
  32. Ratajczak, R.; Domański, M.; Wegner, K. Vehicle size estimation from stereoscopic video. In Proceedings of the 2012 19th International Conference on Systems, Signals and Image Processing (IWSSIP), Vienna, Austria, 11–13 April 2012; pp. 405–408. [Google Scholar]
  33. Ratajczak, R.; Grajek, T.; Wegner, K.; Klimaszewski, K.; Kurc, M.; Domański, M. Vehicle dimensions estimation scheme using AAM on stereoscopic video. In Proceedings of the 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance, Krakow, Poland, 27–30 August 2013; pp. 478–482. [Google Scholar]
  34. Xu, Z.; Yang, M. Truck Size Measurement System Based on Computer Vision. In Chinese Intelligent Systems Conference; Springer: Singapore, 2019; pp. 126–134. [Google Scholar]
  35. Xu, Z.; Peng, J.; Chen, X. A method for vehicle three-dimensional size measurement based on laser ranging. In Proceedings of the 2015 International Conference on Transportation Information and Safety (ICTIS), Wuhan, China, 25–28 June 2015; pp. 34–37. [Google Scholar]
Figure 1. Structure of the proposed system. 1—Left lidar; 2—Right lidar; 3—Front lidar; 4—Control box; 5—Industrial control computer.
Figure 2. The workflow chart of the measurement process.
Figure 3. Illustration of lidar installation.
Figure 4. The coordinate system of a lidar and coordinate transformation.
Figure 5. Calibration-related angles.
Figure 6. The calibration process.
Figure 7. State at t_start.
Figure 8. State at t_end.
Figure 9. Illustration of how to measure vehicle height.
Figure 10. Diagram of maximum calibration error.
Figure 11. The influence of calibration error on vehicle width measurement.
Figure 12. Influence of vehicle moving deviation error on vehicle width measurement.
Figure 13. Average error of vehicle width according to algorithm combination in Table 1.
Figure 14. Average error of vehicle height according to algorithm combination in Table 1.
Figure 15. Standard deviation of vehicle width according to algorithm combination in Table 1.
Figure 16. Standard deviation of vehicle height according to algorithm combination in Table 1.
Figure 17. Max error of profile size of the six vehicles.
Figure 18. Max relative error of profile size of the six vehicles.
Figure 19. Repeatability error of profile size of the six vehicles.
Table 1. Combinations of Θ, Λ and Γ.

No.   Θ      Λ      Γ               Remarks
A01   min    max    Equation (8)    All point cloud data in L and R
A02   mean   mean   Equation (8)
A03   min    max    Equation (9)
A04   mean   mean   Equation (9)
A05   min    max    Equation (10)
A06   mean   mean   Equation (10)
A07   min    max    Equation (11)
A08   mean   mean   Equation (11)
A09   min    max    Equation (8)    Part of the front and back point cloud frames in L and R removed
A10   mean   mean   Equation (8)
A11   min    max    Equation (9)
A12   mean   mean   Equation (9)
A13   min    max    Equation (10)
A14   mean   mean   Equation (10)
A15   min    max    Equation (11)
A16   mean   mean   Equation (11)
Table 2. Experimental results of generality and repeatability of A16.

Type                  Size  Ground Truth  1st     2nd     3rd     4th     5th     6th     7th     8th     9th     10th    Mean    ME    MRE (%)  RE (%)
Semi-trailer tractor  L     6540          6542    6535    6545    6540    6537    6534    6540    6541    6535    6530    6538    -10   -0.15    0.23
Semi-trailer tractor  W     2535          2533    2540    2535    2536    2537    2533    2543    2540    2535    2533    2537    8     0.32     0.39
Semi-trailer tractor  H     3710          3711    3707    3700    3713    3710    3702    3699    3709    3710    3711    3707    -11   -0.30    0.38
Van                   L     7399          7408    7395    7410    7390    7397    7399    7401    7405    7401    7400    7401    9     0.12     0.27
Van                   W     2308          2310    2307    2306    2311    2311    2305    2310    2308    2305    2313    2309    5     0.22     0.35
Van                   H     2413          2411    2412    2416    2407    2410    2411    2416    2413    2415    2409    2412    -6    -0.25    0.37
Barn truck            L     8990          8979    8992    8995    8988    8983    9000    8994    8985    8980    8991    8989    -11   -0.12    0.23
Barn truck            W     2465          2469    2470    2461    2465    2469    2463    2463    2470    2470    2464    2466    5     0.20     0.37
Barn truck            H     2610          2609    2612    2607    2606    2609    2606    2611    2610    2605    2610    2609    -5    -0.19    0.27
Fence truck           L     9890          9901    9887    9907    9883    9889    9903    9892    9890    9905    9880    9894    17    0.17     0.27
Fence truck           W     2485          2488    2487    2485    2480    2488    2486    2483    2486    2485    2489    2486    -5    -0.20    0.36
Fence truck           H     3830          3828    3834    3830    3828    3820    3834    3832    3829    3833    3830    3830    -10   -0.26    0.37
Crane truck           L     11,120        11,110  11,100  11,126  11,130  11,135  11,113  11,120  11,129  11,108  11,105  11,118  -20   -0.18    0.31
Crane truck           W     2548          2543    2547    2548    2550    2547    2553    2543    2548    2545    2550    2547    5     0.20     0.39
Crane truck           H     3850          3858    3851    3844    3853    3850    3849    3843    3850    3855    3843    3850    8     0.21     0.39
Bus                   L     11,950        11,963  11,960  11,931  11,940  11,928  11,950  11,943  11,935  11,959  11,928  11,944  -22   -0.18    0.29
Bus                   W     2540          2538    2537    2542    2540    2533    2540    2542    2541    2537    2538    2539    -7    -0.28    0.35
Bus                   H     3840          3839    3836    3843    3845    3835    3840    3842    3835    3840    3837    3839    5     0.13     0.26

ME = Max Error (mm); MRE = Max Relative Error (%); RE = Repeatability Error (%). All ground truths and measurements are in mm. The Mean is the average of all measurement results. The Max Error is the largest error among all measurement results. The Max Relative Error is (Max Error / Ground Truth) × 100. The Repeatability Error is ((maximum measurement − minimum measurement) / Mean measurement) × 100.
Table 3. The experimental results of our measurement method and the three reference comparison methods.

Method                 Max Relative Error (%)
                       Length   Width   Height
Li et al. [31]         8.86     6.59    3.90
Ratajczak et al. [32]  47       /       /
Ratajczak et al. [33]  10.69    19.28   18.42
Ours                   0.18     0.28    0.30
Table 4. The experimental results of our measurement method and the two main comparison methods.

Method          Max Relative Error (%)
                Length   Width   Height
Xu et al. [34]  1.70     5.30    3.60
Xu et al. [35]  9.3
Ours            0.18     0.28    0.30