For the experimental environment, one NXP LS1028A board and four VL-AS16 LiDAR sensors were configured, as shown in Figure 9 [22]. The Ethernet ports of the LS1028A board were connected to the four LiDAR sensors and one external system to form the proposed IPU system. The client and server of the IPU system were distributed across the two ARM Cortex-A72 cores of the LS1028A board and executed in parallel. To express the data produced by the four LiDAR sensors in a 2D graph, the sensors were stacked vertically, as shown in Figure 10a. The experimental data captured the movement of a person within the field of view (FOV) of the sensors in free space, as shown in Figure 10b.
The raw data of LiDAR sensors are usually visualized as 3D points; however, in this study, each 3D point was expressed as a 2D point projected onto the x–z plane. In the 2D graph, the x-axis represents the horizontal FOV of the LiDAR and the z-axis represents the vertical FOV. The distance to the object was expressed as the value of each point. The laser beams emitted by the LiDAR sensor spread out in a fan shape from the sensor; therefore, a close object was projected with a shorter height and a distant object was projected with a longer height [23].
Figure 11 shows the results of presenting the data detected by the sensors in a 2D plane, as in the setup of Figure 10b. To distinguish the shape of the object in the 2D graph, points with distance values of less than four are colored black, and points with distance values greater than four are colored gray.
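A minimal sketch of this 2D projection and threshold coloring is given below, assuming a [packet, channel] distance array and the frame layout described later in Section 4.1 (the array shapes, the synthetic frame, and the exact threshold handling are illustrative, not the authors' implementation):

```python
import numpy as np

# Assumed frame layout (described below): 1160 packets per 2D frame,
# 16 vertical channels per packet; distances indexed as [packet, channel].
N_PACKETS, N_CHANNELS = 1160, 16

def project_to_2d(distances, threshold=4.0):
    """Project per-channel distances onto an x-z grid and classify points.

    The x-axis is the horizontal FOV (azimuth/packet index) and the
    z-axis is the vertical FOV (channel index). Returns an image with
    0 = no return, 1 = near point (< threshold, black in Figure 11),
    2 = far point (>= threshold, gray in Figure 11).
    """
    d = distances.T                      # rows = channels, cols = azimuth
    grid = np.zeros_like(d, dtype=np.uint8)
    grid[(d > 0) & (d < threshold)] = 1  # close points (black)
    grid[d >= threshold] = 2             # distant points (gray)
    return grid

# Example: a synthetic frame with a person-sized blob about 4 units away.
frame = np.full((N_PACKETS, N_CHANNELS), 10.0)  # background distance
frame[560:600, 4:14] = 3.9                      # object in front of the sensor
image = project_to_2d(frame)
```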
4.1. Reduction
In the experimental environment, the LiDAR sensors measured a person at a distance of approximately four from the sensor. Figure 12a shows the data detected by one LiDAR sensor in a 2D graph, and Figure 12b shows the data detected by all four LiDAR sensors in one graph. The raw data from the LiDAR sensor comprised points representing the object and points representing the background. Figure 12c,d show the data from one LiDAR sensor and four LiDAR sensors, respectively, after executing the reduction algorithm. The reduction algorithm decreased both the number of points representing the background and the number of points inside the object; compared to the original point cloud, the reduced point cloud retained mainly the edge data of the object [24]. The reduced data of one LiDAR sensor, shown in Figure 12c, were sufficient to distinguish the human shape; however, the reconstruction accuracy decreased because less information was retained than in the original data.
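The exact reduction procedure is not reproduced here; the following sketch illustrates one plausible edge-keeping reduction consistent with the description above, in which a point survives only where the range jumps sharply between horizontally adjacent points (the tolerance value is an assumption):

```python
import numpy as np

def reduce_to_edges(distances, tol=0.3):
    """Keep only points where the range changes sharply between
    horizontally adjacent points, approximating the edge-only output
    shown in Figure 12c,d. `tol` is an assumed jump threshold.

    `distances` is a [packet, channel] array; the returned boolean
    mask marks the points that survive the reduction.
    """
    jump = np.abs(np.diff(distances, axis=0))  # range change along azimuth
    mask = np.zeros(distances.shape, dtype=bool)
    mask[1:, :] |= jump > tol                  # point on one side of an edge
    mask[:-1, :] |= jump > tol                 # point on the other side
    return mask
```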
One point of the LiDAR sensor was composed of a two-byte distance value and a one-byte laser intensity value. Therefore, one data packet, composed of 16 points and a two-byte horizontal angle value, contained 50 bytes of data. Because one 2D frame consisted of 1160 data packets, the LiDAR sensor transmitted 58,000 bytes of data in one scan.
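Under this byte layout, a packet can be unpacked as follows (a sketch; the field order within a packet is an assumption, since only the field sizes are given):

```python
import struct

POINTS_PER_PACKET = 16
PACKETS_PER_FRAME = 1160

# Assumed field order: a 2-byte horizontal angle followed by 16 points,
# each a 2-byte distance and a 1-byte intensity (little-endian, unpadded).
PACKET_FORMAT = "<H" + "HB" * POINTS_PER_PACKET
PACKET_SIZE = struct.calcsize(PACKET_FORMAT)   # 2 + 16 * 3 = 50 bytes
FRAME_SIZE = PACKET_SIZE * PACKETS_PER_FRAME   # 50 * 1160 = 58,000 bytes

def parse_packet(payload: bytes):
    """Split one 50-byte packet into its angle and 16 (distance, intensity) points."""
    fields = struct.unpack(PACKET_FORMAT, payload)
    angle = fields[0]
    points = [(fields[1 + 2 * i], fields[2 + 2 * i])
              for i in range(POINTS_PER_PACKET)]
    return angle, points
```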
Figure 13 shows the results of reducing the data obtained from 300 scans using one LiDAR sensor. The datasets of the four LiDAR sensors were reduced by similar amounts for the same object; however, the amount of transmitted data differed owing to the difference in the position of each sensor. After the reduction algorithm was applied, the data were reduced to a maximum of 10,143 bytes, a minimum of 5310 bytes, and an average of 7295 bytes per scan. Compared with the raw data, the reduction algorithm reduced the amount of data by a maximum of 89.6%, a minimum of 85.8%, and an average of 87.4%.
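For example, the average figure follows directly from the fixed 58,000-byte frame size:

1 - \frac{7295}{58{,}000} \approx 0.874, i.e., an average reduction of 87.4\%.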
We used the Valgrind program to measure the memory usage of the IPU [25].
Figure 14 shows the memory usage of the client and server processes running on the IPU. Figure 14a shows the memory usage of the client process that transmitted the raw LiDAR data, and Figure 14c shows the memory usage of the client process that transmitted the LiDAR data after reduction. The client process handled the data produced by one LiDAR sensor and transmitted them to the server process according to the protocol. The client process without data reduction used an average of 1.48 of memory and took 739.15 to transmit the data, whereas the client process that executed the data reduction algorithm used an average of 1.41 of memory and took 441.0 to transmit the data. The reduction algorithm in the client process reduced the size of the LiDAR data, thereby reducing both the memory usage and the transmission time.
Figure 14b,d show the memory usage of the server process without and with the reduction algorithm, respectively. For the data produced by the four LiDAR sensors, the server process without and with the reduction algorithm used an average of 319 and 219 of memory, respectively. The server process with the reduction algorithm used less memory because the amount of transmitted data was smaller.
4.2. Reconstruction
The data produced by the VL-AS16 LiDAR sensors were reduced by the algorithm running on the LS1028A IPU board. The reduced data were transmitted through an Ethernet connection to an LX2160A board, which served as the external system. We used the LX2160A board as the external system because it is comparable to a vehicle's embedded processor.
Figure 15 shows the point cloud when the reduced data were restored using the reconstruction algorithm. The data reduced in the IPU were reconstructed using distance grouping and frame reconstruction before being used in the external system.
Equation (1) describes the reconstruction error in the edge-based distance grouping and convolution-based frame reconstruction steps. The regions of interest (ROIs) of the raw LiDAR data, the reconstruction data after distance grouping, and the frame reconstruction data were filtered using the characteristic that background data appeared as values ≤ 0:

E = \frac{1}{N} \sum_{i=1}^{N} \frac{\left| d_i - \hat{d}_i \right|}{d_i}    (1)

N was the number of points inside the ROI of the point cloud and was equal to 3200 points, composed of 200 packets with 16 channels; d_i was the distance value of the i-th point, and \hat{d}_i was the distance value of the reconstructed i-th point. The total error E was obtained by dividing the sum of the per-point errors by the number of points N, based on the mean absolute error. To account for points with different distance values, the absolute difference between d_i and \hat{d}_i divided by d_i was taken as the error of each point.
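A direct implementation of Equation (1), given as a minimal sketch with illustrative array names, is:

```python
import numpy as np

def reconstruction_error(raw, reconstructed):
    """Mean relative absolute error of Equation (1) over the ROI.

    `raw` and `reconstructed` are distance arrays over the same ROI
    (3200 points = 200 packets x 16 channels in the experiments).
    Background points, which appear as values <= 0, are filtered out.
    """
    raw = np.asarray(raw, dtype=float).ravel()
    rec = np.asarray(reconstructed, dtype=float).ravel()
    roi = raw > 0                                 # drop background (<= 0)
    return np.mean(np.abs(raw[roi] - rec[roi]) / raw[roi])
```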
Figure 15 shows the point cloud of the person–object ROI. Figure 15a shows the point cloud of the original full data from the LiDAR sensor. Figure 15b shows the point cloud of the data reduced in the IPU system using the reduction algorithm. Figure 15c shows the point cloud after the distance grouping algorithm was executed. In the distance grouping step, the distance data inside the edges of the object were filled in. However, depending on the number of edges preserved by the reduction algorithm and the amount of data lost owing to the shape of the object, the interior of an object was sometimes not filled with similar data, or the space between separate objects was incorrectly filled. Comparing the raw data with the distance-grouping-based reconstruction data from the four LiDAR sensors using Equation (1) resulted in errors of 8.60%, 31.06%, 31.70%, and 35.66%. Reconstruction based on distance grouping alone yielded low similarity between the data inside an object and those at its edges, and recognized nearby objects as a single object.
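One plausible reading of the distance grouping step, given as a sketch rather than the authors' exact procedure, scans each channel row between consecutive edge points and fills the interior when the two bounding edge distances are similar (the grouping tolerance is an assumption):

```python
import numpy as np

def distance_grouping(edges, group_tol=0.5):
    """Fill the interior between pairs of edge points on each channel row.

    `edges` is a [channel, azimuth] array holding edge distances and 0
    elsewhere. Two edges are grouped (assumed to bound the same object)
    when their distances differ by less than `group_tol`; the interior
    is filled by linear interpolation between the two edge distances.
    """
    filled = edges.copy()
    for row in filled:
        idx = np.flatnonzero(row > 0)              # edge positions in this row
        for a, b in zip(idx[:-1], idx[1:]):
            if abs(row[a] - row[b]) < group_tol:   # same object: fill interior
                row[a:b + 1] = np.linspace(row[a], row[b], b - a + 1)
    return filled
```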
Figure 15d shows the point cloud produced by the reconstruction algorithm using the convolution filter. To solve the problems of distance-grouping-based reconstruction, the convolution step strengthens the continuity of objects. Comparing the raw data with the convolution-based reconstruction data from the four LiDAR sensors using Equation (1) resulted in errors of 1.70%, 4.37%, 6.55%, and 4.93%. When the distance-grouping reconstruction data were refined using convolution, the similarity between the internal and edge data of an object increased, and adjacent objects were separated.
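The convolution step can be approximated by applying a small averaging kernel to the grouped range image and discarding filled points that deviate strongly from their neighborhood, which separates adjacent objects with dissimilar distances (the kernel size, weights, and deviation threshold are assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

def convolution_refine(grouped, max_dev=0.5):
    """Smooth the distance-grouped range image with a 3x3 mean filter,
    then discard filled points that deviate strongly from their
    neighborhood, so adjacent objects are no longer merged.
    """
    kernel = np.full((3, 3), 1.0 / 9.0)             # illustrative mean filter
    smoothed = convolve(grouped, kernel, mode="nearest")
    refined = grouped.copy()
    refined[np.abs(grouped - smoothed) > max_dev] = 0.0  # cut weak links
    return refined
```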
The external system reconstructed the reduced data and displayed them on screen. To compare the reduction algorithm with an existing platform, we changed the behavior of the IPU in the proposed platform. The IPU of the existing platform transmitted raw data, forwarding the LiDAR sensor data unchanged to the external system. The proposed platform, including the reduction and reconstruction algorithms, reduced the data of the LiDAR sensors in the IPU and reconstructed them in the external system.
Figure 16 shows the memory usage when the data transmitted from the IPU were restored and displayed on screen by the external system. In the experiment, 1244 frames of data generated by each LiDAR sensor were transmitted to the IPU. The IPU collected the data from the four LiDAR sensors and sent them to the external system. Figure 16a shows the memory usage of the external system when the IPU transmitted raw data from the LiDAR sensors. When the IPU transmitted uncompressed raw data, the external system received the data and output them using OpenCV; to process the received LiDAR data, the external system took 67.221 and used an average of 21.61 of memory. Figure 16b shows the memory usage of the external system when the IPU transmitted the reduced data in the proposed platform. The data reduced by the IPU were restored using the reconstruction algorithm in the external system and then displayed on screen; to receive and process the reduced data, the external system took 73.307 and used an average of 28.74 of memory.
Figure 17 shows the time taken to transmit one frame of data from the IPU to the external system. Figure 17a shows the time taken by the IPU to send raw and reduced data, and Figure 17b shows the time taken by the external system to receive raw and reduced data. Without the reduction algorithm, the IPU took an average of 0.20 ms to transmit one frame, and the external system took an average of 1.79 ms to receive one frame. In the proposed platform, the IPU took an average of 0.035 ms to transmit one frame, and the external system took an average of 0.28 ms to receive one frame.