Article

Study on Obstacle Detection Method Based on Point Cloud Registration

1 School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2 Chinese Scholar Tree Ridge State Key Laboratory, Beijing 100072, China
3 Jiangsu Province Engineering Research Center of Intelligent Chassis for Commercial Vehicles, Nanjing 210094, China
4 Higer Bus, Suzhou 215026, China
5 China North Vehicle Research Institute, Beijing 100072, China
6 School of Electrical and Control Engineering, North China University of Technology, Beijing 100144, China
* Author to whom correspondence should be addressed.
World Electr. Veh. J. 2024, 15(6), 241; https://doi.org/10.3390/wevj15060241
Submission received: 7 May 2024 / Revised: 28 May 2024 / Accepted: 28 May 2024 / Published: 30 May 2024

Abstract

An efficient obstacle detection system is one of the most important guarantees of the active safety performance of autonomous vehicles. This paper proposes an obstacle detection method based on high-precision positioning, applied to blocked zones, to address the high complexity of detection results, low computational efficiency, and high computational load of traditional obstacle detection methods. Firstly, an NDT registration method that optimizes a likelihood-function-based registration score to calculate the registration parameters is designed to match the scanning point cloud to the target point cloud. Secondly, a target reduction method combining threshold judgment with a binary tree search algorithm is designed to filter out the point cloud of non-road obstacles and improve the processing speed of the computing platform. Meanwhile, a KD-tree is used to speed up the clustering process. Finally, a vehicle remote control simulation platform combining a cloud platform and a mobile terminal is designed to verify the effectiveness of the strategy in practical application. The results prove that the proposed obstacle detection method can improve the efficiency and accuracy of detection.

1. Introduction

It is essential for intelligent driving technology to determine the position of autonomous vehicles through precise positioning. However, GPS alone cannot achieve the centimeter-level accuracy required by intelligent vehicles [1]. Although Simultaneous Localization and Mapping (SLAM) can achieve high-precision navigation and positioning, it is difficult to achieve online mapping and positioning with it [2,3]. Therefore, registration positioning based on high-precision point cloud maps and perception sensors is often applied to intelligent vehicles. LIDAR-based positioning basically meets the positioning needs of intelligent vehicles and has the advantages of high accuracy and strong reliability [4,5].
The Normal Distribution Transform (NDT) algorithm [6] and the Iterative Closest Point (ICP) algorithm are the most widely applied in registration positioning [7,8,9]. Although the ICP algorithm achieves high accuracy, it struggles to eliminate uncertainty when dealing with local environmental differences [10]. Additionally, registration methods like ICP take too long to return information to the vehicle, making it hard to meet the real-time requirements of autonomous driving [11,12]. NDT is a coarse registration algorithm that relies on the normal distribution of the point cloud to determine the transformation relationship between the registration points and the target points [13,14]. The algorithm first converts the target points into a normal distribution of multi-dimensional variables [6] and then applies optimization algorithms to find the transformation parameters that maximize the sum of the probability densities of the transformed points in the coordinate system [15,16]. Because NDT is a probabilistic registration method, it can eliminate the impact of local environmental differences on the results. Moreover, by selecting an appropriate grid size, it can achieve centimeter-level high-precision positioning in a short time [16].
It is inevitable to encounter various obstacles even when intelligent vehicles drive on a prescribed route. In order to improve the active safety of autonomous vehicles, it is necessary to detect and classify the perceived obstacles, then find a reasonable obstacle avoidance strategy according to the results. In this paper, depending on whether the obstacle may affect driving, the detected obstacles are roughly divided into non-road obstacles and road obstacles.
Non-road obstacles are static obstacles located on the side of the road that do not hinder basic driving, such as trees, buildings, and municipal constructions. Semantic segmentation or deep learning based on visual sensors is commonly used for non-road obstacle detection [17,18]. In ref. [19], the authors processed SAR satellite images with a deep fully convolutional neural network to distinguish the road from the non-road environment. In ref. [20], the authors proposed a detection method that significantly improves the accuracy of semantic segmentation and object detection by combining basic and advanced image features. Most of the above methods rely on image feature extraction and are not suitable for LIDAR. In recent years, LIDAR-based obstacle detection has made great progress. In ref. [21], the authors proved through qualitative and quantitative analysis that detecting buildings based on the concept of high entropy is reliable. To overcome the limitations of LIDAR in detecting vegetation, ref. [22] estimated tree size and frequency distributions by applying the ITC algorithm. Yi and his colleagues proposed a naive Bayes–RANSAC method to detect building planes, which reduced pseudo-planes and improved the efficiency of RANSAC [23]. Previous practice has shown that although non-road obstacles can be detected, the process requires considerable computing power for segmentation and clustering. These operations reduce real-time performance and seriously affect driving safety.
Road obstacles are vehicles, pedestrians, roadblocks, scattered objects, and other static or dynamic obstacles on or around the road that may affect driving safety. Vision-based or vehicle-mounted LIDAR-based methods are often used to detect such obstacles [24,25]. In ref. [26], the authors identified static and dynamic objects according to time in an event camera to reduce latency. Considering algorithm reliability, LIDAR is applied more extensively. In the detection process, methods such as point cloud filtering, ground segmentation, and clustering are usually used to obtain the location information of obstacles [27,28], and the detected obstacles are then identified and classified. Lee et al. proposed a Geometric Model-Free Approach with a Particle Filter (GMFA-PF), which achieves good recognition and tracking of moving objects in the driving environment [29]. Jin et al. designed a ground segmentation algorithm based on a Gaussian process, which effectively solves the over-segmentation and under-segmentation problems of traditional ground segmentation algorithms [30]. Lin et al. proposed an adaptive clustering method that further segments the preliminarily segmented point cloud according to the standard deviation of the cluster points, realizing the efficient segmentation of scene point cloud data [31]. Additionally, Miller and his colleagues took screenshots of the point cloud frame by frame and analyzed the screenshots with a visual method; however, this approach can only obtain obstacle information on a two-dimensional plane [32]. In ref. [33], the authors extracted and classified 3D point cloud features with machine learning algorithms to identify obstacles, but without a rich data set this approach is not suitable for changing environments.
In this paper, an obstacle detection process based on high-precision positioning is proposed, including an obstacle detection method based on high-precision positioning and an obstacle map generation method based on filtering non-road factors. Details are explained in Figure 1. The main contributions of this paper are as follows:
(1)
A point cloud data matching and clustering algorithm based on the combination of the KD-tree and NDT matching algorithm is designed, which accelerates the process of location recognition, reduces the deviation between the scanned point cloud and point cloud map, and realizes centimeter-level high-precision positioning.
(2)
A target reduction method based on threshold partitions is proposed. The point cloud is further divided on the basis of down-sampling, which reduces the amount of calculation of the platform, realizes the reduction in communication load, and speeds up the processing speed of external information in the vehicle perception system.
(3)
A test platform combining a cloud platform and a mobile terminal is designed, and the real data communication and processing of intelligent vehicle remote control is simulated with the MulRan data set and ROS, which proves the effectiveness of the obstacle detection algorithm based on a high-precision map in practical application.
The rest of the paper is structured as follows: Section 2 introduces the proposed method; Section 3 describes the experiment and analyzes the results; and Section 4 presents the conclusions and prospects in detail.

2. Proposed Obstacle Detection Methods

In order to improve the computational efficiency of obstacle detection and obtain better detection results, an obstacle detection method based on the high-precision location of the occlusion region is proposed. Firstly, taking the maximum registration score of the likelihood function as the optimization objective, high-precision point cloud matching based on the NDT method is realized; then, combining threshold judgment and the binary tree search algorithm, the non-road obstacles are filtered out; and finally, by using a KD-tree to accelerate the segmented clustering process, the identification of road obstacles that affect vehicle driving is realized. Figure 1 describes the structure of the proposed obstacle detection method.

2.1. Registration Positioning

The purpose of this section is to realize registration and location based on the point cloud map. A registration score function based on the likelihood function is designed. The system first grids the point cloud map and calculates the probability density function of each grid, and then takes the maximum registration score as the optimization goal to obtain the NDT registration parameters that match the scanning point cloud to the target point cloud. Finally, according to the optimal registration parameters, the registration is performed and the aligned point cloud is obtained [15]. After the above operations, high-precision map registration is completed. The specific positioning process is shown in Algorithm 1.
Algorithm 1 Register scan Pr to reference Pm using NDT
 1: Inputs: registration point cloud Pr; allocated cell structure C
 2: Output: aligned point cloud Pa
 3: for scan k = 1 to N_C do
 4:   p(x) ← density(μ, Σ, y_k)
 5: end for
 6: ψ ← ∏_{k=1}^{n} p(T(p, x_k))
 7: score(p̃(x_k), x_k) ← G(ψ)
 8: while not converged do
 9:   score_max ← Optimize(score(p̃(x_k), x_k))
 10: end while
 11: T(p, x_k) ← score_max(p̃(x_k), x_k)
 12: Pa ← Trans(T, Pr)

2.1.1. Grid Processing

The size of voxels is marked as ‘C’ meters in this paper, as shown in Figure 2. The normal distribution is adopted to normalize the voxels. Then, the information of the point cloud can be represented by the probability density function of the grid. In order to obtain the probability density function of each grid, it is necessary to calculate the mean and covariance matrix of the corresponding grid, and in the calculation, all scanning points in a grid are represented by yk (k = 1, 2, 3…).
Then, the mean of the normal distribution of a grid is as follows:
$$\mu = \frac{1}{m}\sum_{k=1}^{m} y_k$$
The expression of the covariance matrix is as follows:
$$\Sigma = \frac{1}{m}\sum_{k=1}^{m} (y_k - \mu)(y_k - \mu)^{T}$$
Finally, the probability density function of a grid is as follows:
$$p(x) = \frac{1}{(2\pi)^{3/2}\sqrt{|\Sigma|}}\exp\!\left(-\frac{(x-\mu)^{T}\Sigma^{-1}(x-\mu)}{2}\right)$$
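To make the three formulas above concrete, the following C++ sketch (our own illustrative helper names, built on Eigen; not code from the paper) computes the mean, covariance, and probability density of one grid cell:

```cpp
#include <cmath>
#include <vector>
#include <Eigen/Dense>

// Normal-distribution statistics of one grid cell: mean and covariance
// of all scanning points y_k that fall inside the cell.
struct CellDistribution {
    Eigen::Vector3d mu;     // mean of the scanning points in the cell
    Eigen::Matrix3d sigma;  // covariance matrix of the cell
};

CellDistribution fitCell(const std::vector<Eigen::Vector3d>& points) {
    const double m = static_cast<double>(points.size());
    Eigen::Vector3d mu = Eigen::Vector3d::Zero();
    for (const auto& y : points) mu += y;
    mu /= m;

    Eigen::Matrix3d sigma = Eigen::Matrix3d::Zero();
    for (const auto& y : points) sigma += (y - mu) * (y - mu).transpose();
    sigma /= m;
    return {mu, sigma};
}

// Probability density of a point x under the cell's normal distribution.
// Practical NDT implementations regularize near-singular covariances
// (e.g., for planar cells) before inverting; that step is omitted here.
double cellDensity(const CellDistribution& c, const Eigen::Vector3d& x) {
    const double norm = std::pow(2.0 * M_PI, 1.5) * std::sqrt(c.sigma.determinant());
    const Eigen::Vector3d d = x - c.mu;
    const double q = d.dot(c.sigma.inverse() * d);  // (x-mu)^T Sigma^{-1} (x-mu)
    return std::exp(-0.5 * q) / norm;
}
```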

2.1.2. NDT Registration

In order to obtain the best NDT matching results, we use the likelihood function to record the position conversion process of all the points in the space, and the expression is shown as follows:
$$\psi = \prod_{k=1}^{n} p(T(p, x_k))$$
where $T(p, x_k)$ is the spatial transformation function, $p(T(p, x_k))$ represents the conversion process, and $\psi$ represents the likelihood function.
And in order to speed up the process of solving the optimal registration parameters, this paper conducts a negative logarithmic transformation on the likelihood function and approximately simplifies it by using the Gaussian function, and the following formula can be obtained:
$$\mathrm{score}(\tilde{p}(x_k), x_k) = -\sum_{k=1}^{n} \tilde{p}(T(p, x_k))$$
Then, the problem of solving the maximum value of the likelihood function is transformed into solving the minimum value of the negative logarithmic function.
At the same time, the Newton optimization algorithm is used to optimize the objective function. When the optimization result meets the convergence condition, the optimization process stops, and the optimal registration score is obtained. After the optimization, the spatial transformation function is calculated according to the best registration score; taking the point cloud to be registered Pr and the spatial transformation function as input, the point cloud Pa aligned with the target point cloud Pm is obtained.
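Since the experiments in Section 3 run on the ROS PCL stack, the registration step can be expressed with PCL's pcl::NormalDistributionsTransform. The sketch below is a minimal equivalent call sequence; the file names, initial guess, and parameter values are illustrative assumptions rather than the paper's settings:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr scan(new pcl::PointCloud<pcl::PointXYZ>);  // Pr
    pcl::PointCloud<pcl::PointXYZ>::Ptr map(new pcl::PointCloud<pcl::PointXYZ>);   // Pm
    pcl::io::loadPCDFile("scan.pcd", *scan);  // hypothetical file names
    pcl::io::loadPCDFile("map.pcd", *map);

    pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
    ndt.setResolution(1.0f);             // grid cell size 'C' in meters (assumed value)
    ndt.setStepSize(0.1);                // Newton line-search step
    ndt.setTransformationEpsilon(0.01);  // convergence threshold
    ndt.setMaximumIterations(35);
    ndt.setInputSource(scan);            // point cloud to be registered, Pr
    ndt.setInputTarget(map);             // target point cloud, Pm

    // Initial guess, e.g. from GNSS/INS; identity here for brevity.
    Eigen::Matrix4f guess = Eigen::Matrix4f::Identity();

    pcl::PointCloud<pcl::PointXYZ> aligned;  // Pa
    ndt.align(aligned, guess);

    // PCL exposes quantities analogous to the score and iteration count
    // reported in Section 3.2.
    double score = ndt.getTransformationProbability();
    int iterations = ndt.getFinalNumIteration();
    (void)score; (void)iterations;
    return ndt.hasConverged() ? 0 : 1;
}
```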

2.2. Non-Road Factor Obstacle Detection

Vision-based image processing methods [17] or airborne LIDAR detection methods [22] are often used to detect buildings or trees. However, these methods require a large amount of calculation, and it is difficult for them to cope with randomness. The point cloud of non-road obstacles differs only slightly between situations; thus, after positioning, Pa and Pm have a high degree of registration in these regions, while road obstacles match poorly because of their randomness and uncertainty. Therefore, in order to reduce the computing platform's workload, improve its computing speed, and accelerate the recognition process, this paper designs a target reduction method to further process the point cloud after ground segmentation. The specific process is shown in Algorithm 2. Firstly, ground segmentation is performed on Pa to reduce the amount of data involved in the calculation. Secondly, according to the distribution of the point cloud map, the distance thresholds required for the target reduction method are calculated. Finally, Pa is classified according to the target reduction method.
Algorithm 2 Non-road factor obstacle detection
 1: Inputs: aligned point cloud Pa; map point cloud Pm; D, the dimension of the ray
 2: Output: non-road point cloud PNroad and road point cloud PYroad
 3: for scan j = 1 to N do
 4:   P_j ← Segment(D, Pa)
 5:   for scan l = 1 to M_{Pm} do
 6:     (PNroad, PYroad) ← DropTarget(P_j, Pm)
 7:   end for
 8: end for

2.2.1. Ground Segmentation

The basis for identifying ground points and non-ground points is whether the slope between two adjoining points on the same ray meets a set slope threshold [34]. However, the slope partition method requires a large number of multiplication and division operations, which places a high load on the computing platform. In order to reduce these operations during ground segmentation, improve the calculation speed, and ensure the real-time performance of the vehicle perception system, a ground segmentation method based on height division is designed in this paper. The system first determines the position of each point according to the ray sequence and the radius of the laser point. Then, the points on the same ray are sorted and marked as Pord. Ground recognition is realized by comparing the altitudes of the two adjoining points against the set altitude thresholds (hmin, hmax): if the altitude hc of the current point is not within this range and the previous point is a ground point, the current point is also a ground point; if hc is within this range and the distance si between the two points is greater than the distance threshold s between two adjacent points, the current point is a ground point; if neither condition is met, the point is not a ground point, as shown in Figure 3.
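The rule above can be transcribed almost directly into code. The sketch below (our own data layout; it assumes the points of one ray are already sorted by radius, i.e., Pord, and that the nearest return is ground) labels each point of a single ray:

```cpp
#include <cmath>
#include <vector>

struct RayPoint { double x, y, z; };  // points of one LIDAR ray, sorted by radius (Pord)

// Height-based ground segmentation for a single sorted ray, following the
// three conditions of Section 2.2.1. Returns true for ground points.
std::vector<bool> segmentGround(const std::vector<RayPoint>& ray,
                                double hmin, double hmax, double s) {
    std::vector<bool> isGround(ray.size(), false);
    if (!ray.empty()) isGround[0] = true;  // assumption: the nearest return is ground
    for (size_t i = 1; i < ray.size(); ++i) {
        const double hc = ray[i].z;                  // altitude of the current point
        const bool inRange = (hc > hmin && hc < hmax);
        const double si = std::hypot(ray[i].x - ray[i - 1].x,
                                     ray[i].y - ray[i - 1].y);
        if (!inRange && isGround[i - 1]) {
            isGround[i] = true;   // continues the ground surface of the previous point
        } else if (inRange && si > s) {
            isGround[i] = true;   // far enough from its neighbor to be ground
        }                         // otherwise: non-ground (obstacle) point
    }
    return isGround;
}
```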

2.2.2. Calculation of Distance Threshold

In order to obtain reliable obstacle classification results, a reasonable distance threshold is needed to divide the spatial range. Figure 4 shows that the distance between two measurement points along the X-axis reaches its maximum at 0°, so the distance at 0° is chosen as the X-axis distance threshold ∆x. The Y-axis behaves the same as the X-axis, so the Y-axis distance threshold ∆y equals ∆x. The Z-axis distance threshold ∆z is related to the density of the point cloud contours.
After considering the distribution characteristics of the point cloud, the distance thresholds required for the target reduction method are calculated as follows:
$$\Delta y = \Delta x = \sqrt{r^{2}\cos^{2}(\omega)}\cdot\frac{\pi}{2000}$$
$$\Delta z = \frac{K\,r\sin(\omega)}{r\sin(\omega) - 1}$$
where r is the measured distance, ω is the vertical angle of the laser, and K is the gain coefficient.
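For reference, a direct transcription of the two threshold formulas above into code is given below (r in meters, ω in radians; the denominator guard comment is our addition):

```cpp
#include <cmath>

// Distance thresholds for the target reduction method, transcribed from the
// formulas above. r: measured distance [m]; omega: vertical angle of the
// laser [rad]; K: gain coefficient.
struct Thresholds { double dx, dy, dz; };

Thresholds computeThresholds(double r, double omega, double K) {
    const double dx =
        std::sqrt(r * r * std::cos(omega) * std::cos(omega)) * M_PI / 2000.0;
    const double denom = r * std::sin(omega) - 1.0;  // guard denom != 0 in practice
    const double dz = K * r * std::sin(omega) / denom;
    return {dx, dx, dz};  // dy equals dx by construction
}
```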

2.2.3. Target Reduction Method

After determining the distance threshold, in order to obtain the target point cloud corresponding to the point cloud to be processed, it is necessary to determine the range of the target point cloud Pm involved in reduction processing. This paper divides the space from the direction of the three axes of the vehicle coordinate system. Taking the X-axis as an example, the range is (Pm.xmin, Pm.xmax). If the range calculated based on a point Pj in the non-ground point cloud PNground contains some points which are in Pm, then Pj will continue to be included in the Y-axis judgment process. The formulas of Pm.xmin and Pm.xmax are as follows:
$$P_m.x_{min} = P_j.x - \Delta x$$
$$P_m.x_{max} = P_j.x + \Delta x$$
If there are some points in the target point cloud that fall within the ranges of the Y-axis (Pm.ymin, Pm.ymax) and the Z-axis (Pm.zmin, Pm.zmax) calculated based on Pj, and the number of these points exceeds the target threshold p, then Pj is considered a non-road factor obstacle point; otherwise, it is considered a road factor obstacle point. When all points of PNground have been traversed, Pa is divided into non-road obstacle points PNroad and road obstacle points PYroad. Considering that the data volume of Pm is too large, in order to ensure real-time performance and accuracy, we only use the data of Pm within a 30 m radius of the current vehicle location.
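The paper implements this test with a binary tree search over Pm; as a stand-in that performs the same axis-range check, the sketch below uses PCL's octree box search to count the map points inside the (±∆x, ±∆y, ±∆z) box around each non-ground point:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/octree/octree_search.h>

// Split the aligned non-ground cloud into non-road (matches the static map)
// and road obstacle points, following the target reduction method.
void reduceTargets(const pcl::PointCloud<pcl::PointXYZ>::Ptr& nonGround,  // PNground
                   const pcl::PointCloud<pcl::PointXYZ>::Ptr& map,        // Pm (<=30 m radius)
                   float dx, float dy, float dz, int p,                   // thresholds
                   pcl::PointCloud<pcl::PointXYZ>& nRoad,                 // PNroad
                   pcl::PointCloud<pcl::PointXYZ>& yRoad) {               // PYroad
    pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(0.5f);  // 0.5 m leaf (assumed)
    octree.setInputCloud(map);
    octree.addPointsFromInputCloud();

    std::vector<int> hits;
    for (const auto& pj : nonGround->points) {
        hits.clear();
        Eigen::Vector3f lo(pj.x - dx, pj.y - dy, pj.z - dz);
        Eigen::Vector3f hi(pj.x + dx, pj.y + dy, pj.z + dz);
        octree.boxSearch(lo, hi, hits);
        // Enough map points inside the box => the point matches the static map.
        if (static_cast<int>(hits.size()) > p)
            nRoad.push_back(pj);
        else
            yRoad.push_back(pj);
    }
}
```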

2.3. Road Obstacle Detection

Due to the varying degrees of impact that different obstacles have on driving safety, further obstacle detection is required for the road factor obstacle point cloud PYroad after the target reduction processing. In this paper, road factor obstacle detection is realized by segmented clustering. The detection process is shown in Algorithm 3, which includes point cloud clipping and filtering, segmented clustering, and obstacle identification. Firstly, the road obstacle point cloud PYroad is clipped and filtered to reduce the amount of calculation; then the point cloud is divided into regions according to the radius, and a corresponding clustering radius is used for each region. Finally, the clustering results are analyzed to complete the obstacle detection.
Algorithm 3 Road factor obstacle detection
 1: Input: road point cloud PYroad
 2: Output: obstacles PO
 3: for scan a = 1 to N do
 4:   P_a ← Cut(h, PYroad)
 5:   P_a ← Filter(P_a, d)
 6:   P_C ← Cluster(tree(Down(P_a, leaf)), r_e, s_e, q)
 7:   P_O ← Frame(P_C)
 8: end for

2.3.1. Tailoring and Filtering

Since the detection range of the LIDAR is 1~120 m, the excessive detection range introduces a large amount of irrelevant point cloud information, and reflections from the vehicle body introduce spurious returns. To reduce unnecessary computation on the computing platform, it is therefore necessary to tailor and filter PYroad. Since obstacles with an excessively high elevation do not affect the safety of ground vehicles, this paper filters out the point cloud data above a certain elevation h. At the same time, to prevent errors caused by LIDAR rays reflected from the vehicle body, we filter the point cloud around the body according to the size of the information collection vehicle; here, we remove the points whose distance r is less than d. The point cloud after clipping and filtering is marked as Pf.
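With PCL, the two filtering steps can be sketched as follows; the pass-through cut implements the elevation limit h, and the crop box is our box-shaped approximation of removing returns with r < d around the vehicle body:

```cpp
#include <limits>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/crop_box.h>

// Clip PYroad to the region relevant for driving safety (Section 2.3.1):
// drop points above elevation h, then remove returns near the vehicle body.
pcl::PointCloud<pcl::PointXYZ>::Ptr tailorAndFilter(
        const pcl::PointCloud<pcl::PointXYZ>::Ptr& yRoad, float h, float d) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr clipped(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(yRoad);
    pass.setFilterFieldName("z");
    pass.setFilterLimits(-std::numeric_limits<float>::max(), h);  // keep z <= h
    pass.filter(*clipped);

    pcl::PointCloud<pcl::PointXYZ>::Ptr pf(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::CropBox<pcl::PointXYZ> body;                // box around the ego vehicle
    body.setInputCloud(clipped);
    body.setMin(Eigen::Vector4f(-d, -d, -d, 1.0f));
    body.setMax(Eigen::Vector4f( d,  d,  d, 1.0f));
    body.setNegative(true);                          // remove points near the body
    body.filter(*pf);                                // Pf
    return pf;
}
```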

2.3.2. Segmented Clustering

Cluster processing is suitable for environment perception in low-speed and simple scenarios [35]. This method mainly includes steps such as down-sampling processing, nearest point searching, and clustering.
The KD-tree algorithm retrieves the points closest to a query point from a KD tree to realize the search. Here, the KD-tree template in the PCL library is selected to accelerate processing. By setting the leaf size, we reduce the point cloud density and meet the real-time requirements of clustering. The Euclidean clustering algorithm is a mature point cloud segmentation algorithm [36,37]. The main parameters that affect the algorithm are the cluster radius threshold re, the minimum cluster size threshold qmin, and the maximum cluster size threshold qmax. This paper divides the detection range from near to far according to the distance radius Ri and selects the cluster radius threshold re from small to large. It should be noted that the smaller re is, the longer the clustering takes, and the larger the minimum cluster size threshold qmin is, the poorer the clustering effect. We record the clustered point cloud clusters as PC for obstacle modeling.
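A single clustering pass with PCL's primitives might look like the sketch below; the paper additionally runs this per radial band Ri with a band-specific cluster radius re, which is omitted here for brevity:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// Down-sample Pf, then run Euclidean clustering accelerated by a KD-tree.
std::vector<pcl::PointIndices> clusterObstacles(
        const pcl::PointCloud<pcl::PointXYZ>::Ptr& pf,
        float leaf, float re, int qmin, int qmax) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr down(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> vg;
    vg.setInputCloud(pf);
    vg.setLeafSize(leaf, leaf, leaf);  // leaf size trades density for real time
    vg.filter(*down);

    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
        new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(down);

    std::vector<pcl::PointIndices> clusters;  // PC
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(re);    // cluster radius threshold re
    ec.setMinClusterSize(qmin);    // qmin
    ec.setMaxClusterSize(qmax);    // qmax
    ec.setSearchMethod(tree);
    ec.setInputCloud(down);
    ec.extract(clusters);
    return clusters;
}
```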

2.3.3. Establishing Obstacle Models

In order to accurately estimate the motion state of the obstacle, a frame model is used to represent it. The frame model of the obstacle is calculated from the point cloud cluster PC obtained by the clustering processing. Firstly, the centroid Oi of the cluster point cloud is calculated and used as the center of the obstacle. Then, the length, width, and height parameters of the point cloud cluster are calculated according to the furthest distance from the center point in each direction of PC. Finally, a border is added to obtain the road factor obstacle. In order to obtain the obstacle model with the greatest impact on driving safety in the shortest modeling time, we set the sides of the obstacle frame parallel or perpendicular to the driving direction of the vehicle.
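Assuming the cluster is expressed in the vehicle coordinate system, an axis-aligned frame automatically satisfies the parallel/perpendicular side constraint; a sketch with standard PCL helpers:

```cpp
#include <pcl/point_types.h>
#include <pcl/common/centroid.h>
#include <pcl/common/common.h>

// Frame (bounding box) model of one obstacle cluster (Section 2.3.3):
// centroid Oi as the center, axis-aligned extents as length/width/height.
struct ObstacleFrame {
    Eigen::Vector4f center;
    float length, width, height;
};

ObstacleFrame buildFrame(const pcl::PointCloud<pcl::PointXYZ>& cluster) {
    ObstacleFrame f;
    pcl::compute3DCentroid(cluster, f.center);  // centroid Oi
    pcl::PointXYZ minPt, maxPt;
    pcl::getMinMax3D(cluster, minPt, maxPt);    // furthest extents per axis
    // Sides aligned with the vehicle coordinate axes, i.e., parallel or
    // perpendicular to the driving direction, as in the paper.
    f.length = maxPt.x - minPt.x;
    f.width  = maxPt.y - minPt.y;
    f.height = maxPt.z - minPt.z;
    return f;
}
```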

3. Experiment and Analysis

In the experiment, to simulate the real situation of the remote control of an intelligent vehicle, an experimental platform combining a cloud platform and a mobile terminal is designed. The KAIST sequences of the MulRan data set and the ROS PCL library are used to test the proposed method. The KAIST sequence data come from an OS1-64 3D LIDAR (Ouster, San Francisco, CA, USA) and a radar; the radar is mounted behind the LIDAR, occluding about 70° of the LIDAR's horizontal field of view [38]. In this paper, the functions of registration positioning and obstacle detection are evaluated both qualitatively and quantitatively. At the same time, the computational cost of obstacle detection is recorded to evaluate the efficiency of the strategy.

3.1. Experimental Platform Design and Experimental Parameters

The cloud platform used in the experiment is Huawei Cloud, which provides multiple computing nodes. For ease of comparative analysis, each computing node is equipped with an X86 architecture Intel Cascade Lake 2.6 GHz CPU and 4 GB of memory. The baseline bandwidth and maximum bandwidth are both 1 G/s. The image running on the computing nodes is Ubuntu 16.04 server 64-bit (40 GB). In order to ensure reliability, load balancing and elastic scaling services were also purchased for the experiment. The VPC configuration required for the elastic scaling service is consistent with the VPC of the aforementioned computing nodes. The experiment is also equipped with an industrial control computer based on the X86 architecture and the Ubuntu Linux operating system, which serves as the onboard computing center responsible for processing sensor data, communicating with the cloud, and sending instructions to the core control module. To reduce operational complexity, the same Ubuntu 16.04 operating system as the cloud end was chosen.
The parameters that affect the results of obstacle detection based on high-precision positioning are shown in Table 1. The integrated navigation system is a NAV712/SU7 GNSS/INS system (Beijing Nuogeng SCI-TECH Development Co., Ltd., Beijing, China), and the LIDAR is a C16 LIDAR (LSLIDAR, Shenzhen, China). The parameters of the integrated navigation system and the LIDAR are shown in Table 2 and Table 3.

3.2. Functional Test of Registration and Positioning

3.2.1. Quantitative Evaluation

The evaluation of the registration positioning results covers the target point error and quantitative indexes. The widely used visual analysis method is adopted; that is, the results of different detection sequences are analyzed to quantitatively evaluate the integrity of non-road factor obstacle identification and the accuracy of road factor obstacle identification. In this paper, the registration conversion rate P, the number of registration iterations NI, and the types of obstacles involved in matching are used to evaluate the results of high-precision positioning. The registration conversion rate P measures the registration effect of a single frame and is defined as follows:
$$P = \frac{\mathrm{score}}{\mathrm{size}}$$
where score refers to the best registration score scoremax, and size is the number of grids participating in registration.
Figure 5 and Table 4 show the matching between the scanned point cloud before registration and the aligned point cloud and map after registration. Statistics of the registration conversion rate and the number of registration iterations over the selected 500 point cloud frames show that the average registration conversion rate is 6.5526 and the average number of registration iterations is 4.98. Besides these two indexes, three further analysis indexes are added: vehicle driving state, registration time, and obstacle matching. In sequence 1, the vehicle is stationary: the scanned point cloud is close to the aligned point cloud, so iteration and registration take less time; at the same time, plenty of vegetation and buildings participate in the matching, which favors registration positioning. In sequences 2–4, the vehicle is moving: the scanning point cloud is far from the aligned point cloud, so iteration and registration take longer, which makes registration more difficult. Although there are more obstacles in sequence 2 than in sequence 3, sequence 3 contains more ground points, which results in more iterations. Sequence 4 performs well even with a large amount of obstacle data, because the point cloud adapts well to the outlines of the trees. Generally speaking, the point cloud matching of the different sequences shows that the matching method designed in this paper can achieve high-precision positioning.

3.2.2. Qualitative Evaluation

The average road offset ratio $\bar{D}$ is selected to express the overall offset of the measured point cloud, and the formula is as follows:
$$\bar{D} = \frac{\tan\alpha_1 + \tan\alpha_2}{2}$$
where α1 is the lane left boundary offset rate, and α2 is the lane right boundary offset rate.
In Figure 6, sequences 5 and 6 are the first section and the middle section of the road, respectively. The calculation results show that in sequence 5 the $\bar{D}$ of Pr is −2.619% and the $\bar{D}$ of Pa is −0.087%; in sequence 6, the $\bar{D}$ of Pr is 1.828% and the $\bar{D}$ of Pa is 0.262%. The road offset range is 0~66.7 cm before positioning and 0~10.5 cm after positioning; the positioning offset range is thus reduced by 84%. The positioning error of an automated driving system above L4 must be controlled within 20 cm. Since the lane positioning error in this paper is 0~10.5 cm, the registration and positioning method designed here meets the needs of high-precision positioning.

3.3. Function Test of Obstacle Detection

3.3.1. Quantitative Evaluation

We select multiple different scenarios in the data set to study the algorithm for distinguishing road/non-road obstacles and the detection algorithm for road obstacles. The results are shown in Figure 7. In the experiment, the obstacles detected on the road include vegetation, buildings, pedestrians, vehicles, road construction, and other objects.
1.
The distinction of road/non-road obstacles.
In the results, there are 325 road obstacles and 843 non-road obstacles, and 24 obstacles are missed. Therefore, 27.83% of the detected obstacles may affect driving safety. Table 5 illustrates the test results, listing the numbers of correct, missed, and false road/non-road obstacles for sequences 7, 9, 11, and 13 in Figure 7. The results show that most of the missed obstacles are vehicles parked close together on the roadside (sequence 7), and three types of obstacles are detected incorrectly: low trees with sparse point clouds (sequence 13), vehicles close to trees (sequences 9 and 11), and multiple vehicles classified as the same obstacle (sequence 7). Comparing the angle between the vehicle and the obstacle shows that recognition accuracy is related to this angle: if the angle is acute or right, the detection is highly accurate (sequence 11); if the angle is obtuse, the accuracy is low (sequence 7).
2.
The detection of road obstacles
In the test results, there are 479 obstacles in total, of which 67 are pedestrians, 290 are vehicles, and 16 are missed. Therefore, 74.53% of the detected obstacles may affect driving safety, which shows a significant improvement in the effectiveness of obstacle detection. Table 6 reports the numbers of correct, missed, and false obstacles for sequences 8, 10, 12, and 14 in Figure 7. The detection results show that most of the undetected obstacles are vehicles (sequences 8 and 10), and the false obstacles are low, luxuriant vegetation (sequences 10 and 14) and multiple obstacles treated as a single obstacle (sequences 8 and 12). After analysis, the missed vehicles are those parked on the side of the road; their point clouds overlap with those of nearby trees, so they are all clustered as vegetation (sequence 10), and this is also why a vehicle can be blocked by other obstacles during driving. In general, however, the factors that may affect driving safety are mainly located in front of the vehicle, and the overlap of point clouds differs between viewing angles. Therefore, even obstacles merged into one can be identified by the strategy designed in this paper, which means the strategy meets the requirements of obstacle identification and ensures safe driving.
The results of the road/non-road obstacle distinction and the road obstacle detection show that the efficiency of obstacle detection is significantly improved after using the target reduction method, and the effective rate of the detected obstacles increases by 46.7%. Road obstacles that affect driving safety are completely retained, while non-road obstacles that do not affect driving safety are successfully filtered out, except for those with a small structure or a special location. Important detection results such as pedestrians and vehicles are also ideal.

3.3.2. Qualitative Evaluation

We compared and analyzed the detection results of all sequences in Figure 7 to evaluate the proposed method qualitatively according to the attributes of the detection object. All sequences involved in the analysis have achieved high-precision positioning.
Sequences 7 and 8 show the detection results while the vehicle is turning. The objects in sequence 7 are all classified as obstacles, including pedestrians, trees, vehicles, and buildings. Because the steering vehicle forms a certain angle with the road, and the clustering performs poorly on the lush trees, some road obstacles are identified as non-road obstacles. After the target reduction processing, most of the non-road obstacles in the sequence no longer participate in the detection except for small trees, and the vehicles and pedestrians near the ego vehicle are successfully detected. Sequences 9 and 10 are also detection results while turning. The obstacles detected in sequence 9 again include buildings, pedestrians, vehicles, and trees. Since sideways-parked vehicles form a certain angle with the ego vehicle, obstacles 10–13 are identified as the same obstacle, but vehicles or pedestrians can be nested within the frames of buildings or trees. After the target reduction processing, sequence 10 filters out most of the trees and buildings, while the originally detected vehicles and the missed vehicles are retained. Sequences 11 and 12 are the results for a vehicle driving straight. The three pedestrians and the vehicles along the way in sequence 11 are completely detected, although the frame of the vegetation may cover a vehicle. After the target reduction processing, the pedestrians and vehicles close to the ego vehicle are completely detected in sequence 12, except for the distant vehicles surrounded by vegetation; however, small trees on the side of the road are also identified as road obstacles. Sequences 13 and 14 are also straight-driving results. The vehicles close to the ego vehicle in sequence 13 are all detected completely without overlap, while parallel pedestrians are clustered as the same obstacle. After the target reduction processing, sequence 14 completely retains all of the detected vehicles and pedestrians. Distant vehicles are not recognized, but they have little influence on safe driving. The test results for straight driving are slightly better than those for turning, but generally speaking, the road obstacles that affect driving safety can be effectively identified.

3.4. Load Assessment

The computational cost comes from the target point cloud gridding, the down-sampling processing, and the prediction of the current frame pose during NDT registration, as well as from the target reduction processing and the road obstacle detection. This section mainly analyzes the difference in obstacle detection efficiency between the different methods after positioning. Experimental sequences 7~14 are used to analyze the computational cost. Figure 8 reports the difference in calculation efficiency between the proposed obstacle processing method and the original processing method; the unit is ms/frame. The results show that the proposed method improves calculation efficiency. Figure 9 shows that the load required for obstacle detection in the different driving states is reduced to varying degrees by the target reduction processing. Although the load change in sequence 11 (12), which has a larger proportion of road factor obstacles, is small, only 1.4%, the other sequences achieve a significantly reduced load percentage.
In general, there is a trade-off between calculation efficiency and obstacle detection quality, and the method proposed in this paper effectively alleviates it. By reducing the calculation required for road obstacle detection, the strategy not only improves the positioning of the ego vehicle but also realizes accurate positioning of obstacles, improves the accuracy of the target reduction processing and the detection performance for non-road obstacles, and thus improves the overall efficiency of the algorithm.

4. Conclusions and Prospects

To improve the capability of obstacle detection, which can make intelligent driving significantly safer, this paper proposes an obstacle detection method based on high-precision positioning for blocked zones. To meet the positioning requirements, we use the NDT registration algorithm to match the registered point cloud with the point cloud map. In order to shorten the calculation time and improve efficiency in the obstacle detection process, we propose a target reduction method without reducing the precision of positioning. Finally, the following conclusions are obtained through a combination of theoretical analysis and experimental verification:
(1)
The obstacle detection technology based on registration and location designed in this paper not only shortens the obstacle detection time but also improves the detection accuracy, which provides conditions for the real-time control of the vehicle.
(2)
The positioning error of the registration and positioning technology in this paper is controlled within 10.5 cm. It can meet the needs of high-precision positioning in the process of the automatic driving of intelligent vehicles.
(3)
The target reduction method in this paper can realize the classification of road factor obstacles and non-road factor obstacles. This method reduces the number of irrelevant obstacles and the detection complexity.
(4)
Using a KD-tree to accelerate the clustering and the obstacle model establishment can meet the requirements of obstacle detection. The processing method in this paper can quickly and accurately obtain obstacle position and shape information.
Although the method proposed in this paper improves the performance of obstacle detection in blocked zones, further work remains. In this paper, the information involved in obstacle detection comes from a single vehicle, and the information obtainable by a single vehicle is limited. To address this deficiency, the environmental information of obstacles perceived by multiple vehicles will be combined to update the fixed obstacles of structured roads (such as roadblocks) in real time. With this approach, the vehicle control system can quickly obtain the static obstacles near the current driving position by subscribing to a global obstacle map service, shorten the obstacle detection time, improve the detection efficiency, and further improve the reliability of the system.

Author Contributions

Conceptualization, H.W. and D.P.; data curation, H.W. and Y.W.; formal analysis, H.W., J.W. and Y.W.; funding acquisition, H.W.; investigation, Y.W.; methodology, J.W. and Y.W.; project administration, H.W. and D.P.; resources, J.W. and Y.W.; software, J.W. and Y.W.; supervision, Y.C. and J.F.; validation, J.W. and Y.W.; visualization, Y.W.; writing—original draft, J.W. and Y.W.; writing—review and editing, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 52172369 and Jiangsu Province industrial and information industry transformation and upgrading special fund project (Development and industrialization of autonomous controllable full-scene power domain controller for intelligent commercial vehicles).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Yixin Wang is an employee of Higer Bus. Yijie Chen is an employee of China North Vehicle Research Institute. The paper reflects the views of the scientists and not the companies.

References

  1. Lu, Y.; Ma, H.; Smart, E.; Yu, H. Real-time performance-focused localization techniques for autonomous vehicle: A review. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6082–6100. [Google Scholar] [CrossRef]
  2. Luo, J.; Qin, S. A Fast Algorithm of Simultaneous Localization and Mapping for Mobile Robot Based on Ball Particle Filter. IEEE Access 2018, 6, 20412–20429. [Google Scholar] [CrossRef]
  3. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. Orb-slam: A versatile and accurate monocular slam system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  4. Han, Y.; Wang, B.; Guan, T.; Tian, D.; Yang, G.; Wei, W.; Tang, H.; Chuah, J.H. Research on road environmental sense method of intelligent vehicle based on tracking check. IEEE Trans. Intell. Transp. Syst. 2023, 24, 1261–1275. [Google Scholar] [CrossRef]
  5. Wolcott, R.W.; Eustice, R.M. Robust LIDAR localization using multiresolution gaussian mixture maps for autonomous driving. Int. J. Robot. 2017, 36, 292–319. [Google Scholar] [CrossRef]
  6. Chen, S.; Ma, H.; Jiang, C.; Zhou, B.; Xue, W.; Xiao, Z.; Li, Q. NDT-LOAM: A real-time LiDAR odometry and mapping with weighted NDT and LFA. IEEE Sens. J. 2022, 22, 3660–3671. [Google Scholar] [CrossRef]
  7. Santoso, F.; Garratt, M.; Pickering, M.; Asikuzzaman, M. 3D Mapping for Visualization of Rigid Structures: A Review and Comparative Study. IEEE Sens. J. 2016, 16, 1484–1507. [Google Scholar] [CrossRef]
  8. Ma, K.; Chen, Z.; Fu, L.; Tian, W.; Jiang, F.; Yi, J.; Du, Z.; Sun, H. Performance and sensitivity of individual tree segmentation methods for UAV-LiDAR in multiple forest types. Remote Sens. 2022, 14, 298. [Google Scholar] [CrossRef]
  9. Burt, A.; Disney, M.; Calders, K. Extracting individual trees from LIDAR point clouds using treeseg. Methods Ecol. Evol. 2019, 10, 438–445. [Google Scholar] [CrossRef]
  10. Pang, S.; Kent, D.; Cai, X.; Al-Qassab, H.; Morris, D.; Radha, H. 3D scan registration based localization for autonomous vehicles—A comparison of NDT and ICP under realistic conditions. In Proceedings of the IEEE 88th Vehicular Technology Conference (VTC-Fall), Chicago, IL, USA, 27–30 August 2018. [Google Scholar]
  11. Kwok, T.-H. DNSS: Dual-Normal-Space Sampling for 3-D ICP Registration. IEEE Trans. Autom. Sci. Eng. 2019, 16, 241–252. [Google Scholar] [CrossRef]
  12. Vizzo, I.; Guadagnino, T.; Mersch, B.; Wiesmann, L.; Behley, J.; Stachniss, C. KISS-ICP: In defense of point-to-point ICP simple, accurate, and robust registration if done the right way. IEEE Robot. Automat. Lett. 2023, 8, 1029–1036. [Google Scholar] [CrossRef]
  13. Kim, P.; Chen, J.; Cho, Y.K. SLAM-driven robotic mapping and registration of 3D point clouds. Automat. Construct. 2018, 89, 38–48. [Google Scholar] [CrossRef]
  14. Quan, S.; Ma, J.; Hu, F.; Fang, B.; Ma, T. Local voxelized structure for 3D binary feature representation and robust registration of point clouds from low-cost sensors. Inf. Sci. 2018, 444, 153–171. [Google Scholar] [CrossRef]
  15. Xia, X.; Bhatt, N.P.; Khajepour, A.; Hashemi, E. Integrated inertial LiDAR-based map matching localization for varying environments. IEEE Trans. Intell. Veh. 2023, 8, 4307–4318. [Google Scholar] [CrossRef]
  16. Stoyanov, T.; Magnusson, M.; Lilienthal, A.J. Point set registration through minimization of the L2 distance between 3D-NDT models. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), St. Paul, MN, USA, 14–18 May 2012. [Google Scholar]
  17. Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep learning for generic object detection: A survey. Int. J. Comput. Vis. 2020, 128, 261–318. [Google Scholar] [CrossRef]
  18. He, K.M.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
  19. Henry, C.; Azimi, S.M.; Merkle, N. Road segmentation in SAR satellite images with deep fully convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1867–1871. [Google Scholar] [CrossRef]
  20. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  21. dos Santos, R.C.; Galo, M.; Carrilho, A.C.; Pessoa, G.G.; de Oliveira, R.A.R. Automatic Building Change Detection Using Multi-Temporal Airborne LIDAR Data. In Proceedings of the 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 21–26 March 2020. [Google Scholar]
  22. Ferraz, A.; Saatchi, S.; Longo, M.; Clark, D.B. Tropical tree size–frequency distributions from airborne lidar. Ecol. Appl. 2020, 30, e02154. [Google Scholar] [CrossRef] [PubMed]
  23. Yi, Z.; Wang, H.; Duan, G.; Wang, Z. An airborne LiDAR building-extraction method based on the naive Bayes-RANSAC method for proportional segmentation of quantitative features. J. Indian Soc. Remote Sens. 2021, 49, 393–404. [Google Scholar] [CrossRef]
  24. Nurunnabi, A.; West, G.; Belton, D. Outlier detection and robust normal-curvature estimation in mobile laser scanning 3D point cloud data. Pattern Recognit. 2015, 48, 1404–1419. [Google Scholar] [CrossRef]
  25. Jin, X.; Yang, H.; He, X.; Liu, G.; Yan, Z.; Wang, Q. Robust LiDAR-Based Vehicle Detection for On-Road Autonomous Driving. Remote Sens. 2023, 15, 3160. [Google Scholar] [CrossRef]
  26. Falanga, D.; Kleber, K.; Scaramuzza, D. Dynamic obstacle avoidance for quadrotors with event cameras. Sci. Robot. 2020, 5, eaaz9712. [Google Scholar] [CrossRef] [PubMed]
  27. Zhang, Y.; Geng, G.; Wei, X.; Zhang, S.; Li, S. A statistical approach for extraction of feature lines from point clouds. Comput. Graph. 2016, 56, 31–45. [Google Scholar] [CrossRef]
  28. Chu, P.M.; Cho, S.; Park, J.; Fong, S.; Cho, K. Enhanced ground segmentation method for lidar point clouds in human-centric autonomous robot systems. Hum.-Centric Comput. Inf. Sci. 2019, 9, 17. [Google Scholar] [CrossRef]
  29. Lee, H.; Lee, H.; Shin, D.; Yi, K. Moving objects tracking based on geometric model-free approach with particle filter using automotive lidar. IEEE Trans. Intell. Transp. Syst. 2022, 23, 17863–17872. [Google Scholar] [CrossRef]
  30. Jin, X.; Yang, H.; Liao, X.; Yan, Z.; Wang, Q.; Li, Z.; Wang, Z. A Robust Gaussian Process-Based LiDAR Ground Segmentation Algorithm for Autonomous Driving. Machines 2022, 10, 507. [Google Scholar] [CrossRef]
  31. Lin, Z.; Kang, C.; Wu, S.; Li, X.; Cai, L.; Zhang, D.; Wang, S. Adaptive Clustering for Point Cloud. Sensors 2024, 24, 848. [Google Scholar] [CrossRef] [PubMed]
  32. Miller, C.I.; Thomas, J.J.; Kim, A.M.; Metcalf, J.P.; Olsen, R.C. Application of image classification techniques to multispectral lidar point cloud data. Proc. SPIE 2016, 9832, 98320X. [Google Scholar]
  33. García-Gutiérrez, J.; Martínez-álvarez, F.; Troncoso, A.; Riquelme, J.C. A comparison of machine learning regression techniques for lidar-derived estimation of forest variables. Neurocomputing 2015, 167, 24–31. [Google Scholar] [CrossRef]
  34. Himmelsbach, M.; Hundelshausen, F.V.; Wuensche, H.J. Fast segmentation of 3D point clouds for ground vehicles. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 560–565. [Google Scholar]
  35. Zermas, D.; Izzat, I.; Papanikolopoulos, N. Fast segmentation of 3D point clouds: A paradigm on lidar data for autonomous vehicle applications. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017. [Google Scholar]
  36. Peng, C.; Jin, L.; Yuan, X.; Chai, L. Vehicle Point Cloud Segmentation Method Based on Improved Euclidean Clustering. In Proceedings of the 35th Chinese Control and Decision Conference (CCDC), Yichang, China, 20–22 May 2023. [Google Scholar]
  37. Zhu, X.; Wu, X.; Wu, B.; Zhou, H. An improved fuzzy C-means clustering algorithm using Euclidean distance function. J. Intell. Fuzzy Syst. 2023, 44, 9847–9862. [Google Scholar] [CrossRef]
  38. Kim, G.; Park, Y.S.; Cho, Y.; Jeong, J.; Kim, A. MulRan: Multimodal Range Dataset for Urban Place Recognition. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
Figure 1. The frame of the proposed obstacle detection method.
Figure 2. The process of meshing the target point cloud. The picture on the left represents the reference point cloud to be registered. The figure on the right represents the meshing process of placing the target point cloud into a cube grid with side length ‘C’ meters. Multiple cube grids in the figure form the reference point cloud space.
Figure 3. The process of ground segmentation. The figure includes the LIDAR coordinate system ouster and the vehicle coordinate system base_link. Point1~point5 are different scanning points, and the distance increases in turn. Object1 and object2 are obstacles in front of the vehicle. If point3 is the current point, point2 is the previous point, and then point3 is the ground point. If point4 is the current point, point3 is the previous point, and then point4 is not a ground point. The same can be proven that if point1 and point2 are ground points, point5 is not a ground point.
Figure 4. The calculation of the distance range. The left figure shows the distribution of Pa, where P1 and P3 belong to the same ray scan point, and the difference between P2 and P3 is 0.26°. The picture on the right shows the range calculated for P3. If there are some points of Pm in the range and the number of these points is greater than p, P3 is a non-road obstacle point; otherwise, it is a road obstacle point.
Figure 5. The results of NDT registration and positioning. Inside the yellow ellipse are the buildings participating in the registration. The trees participating in the registration are within the green curve. The brown boxes represent ground points. The green point represents the point cloud after registration Pa. The white point represents the target point cloud Pm participating in the registration. Points in other colors all indicate point clouds participating in registration Pr.
Figure 6. The results of road matching. The white line represents the road boundary in Pm. The red line represents the road boundary in Pr. The green line represents the road boundary in Pa. The symbol in the deflection angle represents the direction compared to the white line. The positive sign represents the right side of the white line, and the negative sign represents the left side of the white line. The red number is the deflection angle of the red line. The green number is the deflection angle of the green line.
Figure 7. The results of obstacle detection. Sequences 7–10 are the detection results when turning. Sequences 11–14 are the test results when traveling straight. The red circular line represents the ground after division. For ease of reading, we mark the white point cloud as the aligned point cloud. The transparent box on the left represents the obstacle detection result without any processing. The yellow box on the right represents the obstacle detection result after reducing the target. The marked green numbers represent obstacles detected before and after the target reduction process. Inside the yellow circle are the obstacles that were missed.
Figure 8. The difference in calculation time. The green broken line indicates the calculation time before the target reduction method is applied. The blue broken line represents the calculation time after applying the target reduction method. The green and blue straight lines respectively represent the average calculation time of the corresponding processing method.
Figure 9. The difference in computational load. The green bar indicates the load before the target reduction method is applied. The blue bar indicates the load after applying the target reduction method.
Table 1. Parameters for obstacle detection.
Symbol/Unit | a/cm | s/cm | p/- | h/m | d/m | leaf/- | qmin/- | qmax/-
Value       | 2    | 0.25 | 1.9 | 8   | 2.4 | 0.1    | 5      | 50
Table 2. Parameters for LIDAR.
LIDAR Parameter      | Value/Unit | LIDAR Parameter        | Value/Unit
Laser band           | 905/nm     | Measuring range        | 70~200/m
Laser channel        | 16/-       | Ranging accuracy       | ±3/cm
Vertical field angle | ±15/°      | Horizontal field angle | 360/°
Table 3. Parameters for IMU.
IMU Parameter | Value/Unit  | IMU Parameter                             | Value/Unit
Gyro type     | MEMS/-      | The zero bias stability of the gyroscope  | 2/(°/h)
Gyro range    | ±300/(°/s)  | The range of the accelerometer            | ±6/g
Table 4. Results of NDT registration.
Seq. Name | Vehicle Situation | Time/(ms) | P     | NI | Involved Obstacles (Trees) | Involved Obstacles (Buildings)
1         | Stationary        | 152.616   | 6.73  | 3  | 21                         | 2
2         | Moving            | 419.857   | 6.7   | 1  | 39                         | 17
3         | Moving            | 771.78    | 6.93  | 7  | 16                         | 13
4         | Moving            | 538.09    | 6.758 | 8  | 211                        | /
Table 5. The results of road/non-road obstacle distinction.
Seq. Name | Vehicle Situation | Road Factor | Non-Road | Missed Obstacles | False Obstacles
7         | Turning           | 18          | 27       | 1                | 2
9         | Turning           | 10          | 17       | 0                | 1
11        | Straight          | 14          | 17       | 1                | 1
13        | Straight          | 22          | 21       | 0                | 3
Table 6. The result of road obstacle detection.
Seq. Name | Vehicle Situation | Pedestrians | Vehicles | Missed Obstacles | False Obstacles
8         | Turning           | 0           | 17       | 1                | 3
10        | Turning           | 2           | 8        | 2                | 3
12        | Straight          | 2           | 12       | 1                | 2
14        | Straight          | 4           | 18       | 1                | 2
