Communication

Implementation of MIMO Radar-Based Point Cloud Images for Environmental Recognition of Unmanned Vehicles and Its Application

Samsung Advanced Institute of Technology, 130 Samsung-ro, Yeongtong-gu, Suwon-si 16678, Gyeonggi-do, Republic of Korea
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(10), 1733; https://doi.org/10.3390/rs16101733
Submission received: 18 March 2024 / Revised: 2 May 2024 / Accepted: 9 May 2024 / Published: 14 May 2024

Abstract

High-performance radar systems are becoming increasingly popular for accurately detecting obstacles in front of unmanned vehicles in fog, snow, rain, at night and in other adverse scenarios. The use of these systems is gradually expanding beyond detecting and tracking moving targets to tasks such as indicating empty space and recognizing the environment. In this paper, a three-dimensional point cloud imaging algorithm is developed and implemented based on our high-resolution radar system. An axis translation and compensation algorithm is applied to minimize the point spreading caused by the different mounting positions and the alignment error of the Global Navigation Satellite System (GNSS) receiver and the radar. After applying the algorithm, point cloud images of a corner reflector target and a parked vehicle are created to directly compare the improved results. A recently developed radar system is mounted on the vehicle and collects data during actual road driving. Based on these data, a three-dimensional point cloud image incorporating the axis translation and compensation algorithm is created. As a result, not only the curbstones of the road but also street trees and walls are well represented. In addition, this point cloud image is overlapped and aligned with an open-source web browser (QtWeb)-based navigation map image to implement the imaging algorithm and thus determine the location of the vehicle. This application algorithm can be very useful for positioning unmanned vehicles in urban areas where GNSS signals cannot be received due to the large number of buildings. Furthermore, sensor fusion, in which the three-dimensional radar point cloud appears on the camera image, is also implemented. The position alignment of the sensors is realized through intrinsic and extrinsic parameter optimization. This high-performance radar application algorithm is expected to work well for unmanned ground or aerial vehicle route planning and avoidance maneuvers in emergencies regardless of weather conditions, as it can obtain detailed information on space and obstacles not only in front of the vehicle but also around it.

1. Introduction

Radar is most commonly used in vehicles and aircraft to detect and track long-range targets. Unlike a camera-based system, radar can provide information in backlight or in dark conditions at night. Unlike Lidar, it can provide information more than 200 m ahead even in foggy, snowy or rainy weather. It can also accurately measure the relative radial velocity of a detected target via the Doppler effect [1,2,3]. Radar systems used in the military, aircraft, ships and automobiles play a key role in detecting and tracking moving objects and in emergency braking for ADAS [4]. As unmanned vehicle technology has developed, much research has been conducted on additional functions such as three-dimensional sensing of stationary objects, point cloud imaging, detailed classification, and parking lot and empty space detection [5]. In this work, a point cloud imaging algorithm is implemented based on our radar system. Normally, a point cloud image aligns the information of the detected targets to the current position value and draws them in three-dimensional coordinates [6,7]. The GNSS receiver is generally located at the center of the vehicle drive shaft, while the radar is mounted on the front bumper or roof. To improve the quality of the point cloud image, an algorithm for correcting the GNSS position coordinate value and the alignment error of the sensor that collects the actual data is also proposed. The actual improvement is quantified by comparing the error of the detected points for a single front target, and the three-dimensional point cloud image is realized and compared directly through the detection of an actually parked vehicle. By overlapping this improved image with a navigation map image, an algorithm that can accurately indicate the location of the vehicle is implemented. In addition, we fuse the camera image and the three-dimensional point cloud image to achieve redundancy of object detection. This paper is organized as follows. Section 2 details the principle of Frequency Modulated Continuous Wave (FMCW) radar, the fabricated radar system and algorithm, and a misalignment correction method. Section 3 details the implemented point cloud image and sensor fusion image. Finally, conclusions are given in Section 4.

2. Materials and Methods

2.1. Frequency Modulated Radar

FMCW radar is one of the representative systems suitable for applications such as altimeters in unmanned aircraft and collision avoidance for vehicles and ships. It transmits a continuously modulated signal at a pre-defined rate over a fixed time period and measures the distance and velocity of targets. The frequency of the transmitted signal f(t) can be calculated as:
$f(t) = f_c + \frac{B}{T}t, \qquad -\frac{T}{2} \le t \le \frac{T}{2}$  (1)
where f_c (Hz) is the center frequency, B (Hz) is the sweep bandwidth, and T (s) is the sweep time.
The angular argument of the transmitted signal, also known as the phase angle, is defined as the integral of f(t) over a given time interval and represents the total area under the curve of the waveform:
$\arg(t) = \int_{0}^{t} \left( f_c + \frac{B}{T}\tau \right) \mathrm{d}\tau = f_c t + \frac{B}{2T}t^2$  (2)
Then, the transmitted signal S_T(t) can be written with the above two equations:
$S_T(t) = A_T \cos\!\left( 2\pi \left( f_c + \frac{B}{2T}t \right) t + \varphi_T \right)$  (3)
where A_T is the amplitude and φ_T (rad) is the phase.
The transmitted signal is reflected from objects. The reflection coefficient is ρ, with ρ < 1 for real materials. The received signal S_R(t) is a time-delayed copy of the transmitted signal with shifted frequency and phase. FMCW radar extracts the beat signal f_beat(t) by mixing the transmitted and received signals as follows:
$f_{\mathrm{beat}}(t) = S_T(t) \cdot S_R(t) = S_T(t) \cdot \rho\, S_T(t - \tau_d)$  (4)
where τ_d = 2r/c (s) is the delay of the received signal, r (m) is the distance to the object, and c (m/s) is the speed of light.
Substituting the transmitted signal and the τ_d-delayed received signal of Equation (3) into the beat signal of Equation (4) gives:
$f_{\mathrm{beat}}(t) = A_T^2 \rho \cos\!\left( 2\pi \left( f_c + \frac{B}{2T}t \right) t + \varphi_T \right) \cos\!\left( 2\pi \left( f_c + \frac{B}{2T}(t - \tau_d) \right) (t - \tau_d) + \varphi_T \right)$  (5)
The product-to-sum formula cos(α)cos(β) = ½(cos(α + β) + cos(α − β)) is applied to Equation (5):
$f_{\mathrm{beat}}(t) = \frac{\rho A_T^2}{2} \left( \cos\!\left( 2\pi \left( 2 f_c t + \frac{B}{2T}\left( 2t^2 - 2\tau_d t + \tau_d^2 \right) \right) \right) + \cos\!\left( 2\pi \left( \frac{B}{T}\tau_d t + f_c \tau_d - \frac{B}{2T}\tau_d^2 \right) \right) \right)$  (6)
After multiplying the two signals, the high-frequency term of Equation (6) can be filtered out via a band-pass filter, and the second term remains:
$f_{\mathrm{beat}}(t) = \frac{\rho A_T^2}{2} \cos\!\left( 2\pi \left( \frac{B}{T}\tau_d t + f_c \tau_d - \frac{B}{2T}\tau_d^2 \right) \right)$  (7)
If the terms are substituted as follows:
$A = \frac{\rho A_T^2}{2}, \qquad K_r = \frac{B}{T}\tau_d = \frac{2B}{cT}r, \qquad \Phi_r(r) = 2\pi \left( f_c - \frac{B}{2T}\tau_d \right) \tau_d = \frac{4\pi}{c} \left( f_c - \frac{B}{Tc}r \right) r$  (8)
The beat signal can be represented in the form of a sinusoidal signal whose frequency changes with distance r:
$f_{\mathrm{beat}}(t) = A \cos\!\left( 2\pi K_r t + \Phi_r(r) \right)$  (9)
Through this method, the FMCW radar can accurately measure the distance between the radar and the target.
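To make the derivation concrete, the short sketch below simulates a single-target beat signal and recovers the range from its FFT peak, following Equations (1)-(9). The chirp duration, sampling rate and target range are illustrative assumptions, not the exact configuration of the radar described in this paper.

```python
import numpy as np

# Minimal single-target FMCW sketch following Equations (1)-(9).  The chirp
# duration, sampling rate and target range are illustrative assumptions.
c  = 3e8         # speed of light (m/s)
B  = 3.5e9       # sweep bandwidth (Hz)
T  = 40e-6       # sweep time / chirp duration (s), assumed
fs = 40e6        # ADC sampling rate (Hz), assumed
r  = 25.0        # true target range (m)

tau_d  = 2 * r / c            # round-trip delay of the received signal
f_beat = B / T * tau_d        # beat frequency K_r from Equation (8)

t = np.arange(0, T, 1 / fs)
beat = np.cos(2 * np.pi * f_beat * t)        # low-frequency term of Equation (7)

# Range FFT: the peak bin maps back to distance via r = c * T * f / (2 * B)
spectrum = np.abs(np.fft.rfft(beat))
f_est = np.argmax(spectrum) * fs / len(t)
r_est = c * T * f_est / (2 * B)
print(f"estimated range: {r_est:.2f} m (true range: {r:.2f} m)")
```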

2.2. Radar System

The radar system consists of four 79 GHz Silicon Complementary Metal Oxide Semiconductor (CMOS) Radio Frequency Integrated Circuits (RFICs). Within the system, the four RFICs are synchronized with each other using a 20 GHz local oscillator (LO) signal and operate like a single-RFIC system. Multi-Input Multi-Output (MIMO) is a technology that effectively increases the total number of antennas by introducing a virtual antenna concept [8,9]. The orthogonality of the transmitted signals can be obtained through specific methods such as time-division multiplexing, and MIMO improves angular resolution through the increased antenna aperture of the virtual array [10,11]. Combined with an appropriate antenna array arrangement, high-resolution point cloud information of the surroundings can be obtained from the radar. The MIMO matrix M, consisting of intermediate frequency (IF) signals, is defined as follows:
$M = \left[ S_1^{IF},\ S_2^{IF},\ \ldots,\ S_{N_v}^{IF} \right]$  (10)
where N_v = N_TX · N_RX, S_i^IF is the complex time-domain chirp signal of the i-th virtual element, and N represents the number of sampled points of each virtual element.
The MIMO transceiver antennas are configured to fill the front of the RF board in a complex repetitive pattern whose spacing is defined in terms of the wavelength. We therefore place the RF chips on the back of the RF board to allow a flexible wiring design and layout. Although there may be some loss in the via interconnections, the freedom of antenna placement is greatly increased and the line length minimized, which helps to improve RF performance. Even when the overall design changes, changes to the MIMO antenna arrangement and layout can be minimized. The 20 GHz LO signal is distributed to the master and slave chips using a Wilkinson power divider. In order to minimize the deviation among the LO synchronization lines and the line lengths from the antennas to the transceivers, they are arranged point-symmetrically. As derived in the previous equations, the radar radiates FMCW signals over time and the beat frequency is obtained through down-conversion with the received signal. A fast-chirp FMCW radar transmits a large number of chirps with a chirp duration (T_chirp) of tens of microseconds in a single frame, and has the advantage of avoiding the multiple-target-identification problems of classical modulation methods [12,13,14]. The main parameters of the radar are summarized in Table 1. The fabricated system is mounted on the bumper of the test vehicle with basic detection functions for on-road scenario verification. It is confirmed that a vehicle at a range of more than 100 m can be stably detected. The basic performance of the system is measured by placing 10 dBsm corner reflectors in front of the vehicle. The maximum field of view of the radar is 160 degrees with an angular resolution of 2.5 degrees. The system is configured with a 3.5 GHz bandwidth; the maximum detection distance is shortened because the total number of range bins stays the same, but the range resolution is improved to 4.3 cm.
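As a rough cross-check of Table 1, the sketch below evaluates the standard fast-chirp FMCW resolution relations. The chirp duration and number of chirps per frame are assumptions chosen only to roughly reproduce the tabulated values; the paper states a chirp duration of "tens of microseconds" but not the exact frame configuration.

```python
# Back-of-the-envelope check of the Table 1 parameters against standard
# fast-chirp FMCW relations.  T_chirp and n_chirps are assumed values.
c        = 3e8
fc       = 79e9       # center frequency (Hz)
B        = 3.5e9      # bandwidth (Hz)
T_chirp  = 75e-6      # assumed chirp duration (s)
n_chirps = 256        # assumed chirps per frame

range_res  = c / (2 * B)                              # range resolution (m)
wavelength = c / fc
vel_res    = wavelength / (2 * n_chirps * T_chirp)    # velocity resolution (m/s)

print(f"range resolution   : {range_res * 100:.1f} cm")   # ~4.3 cm
print(f"velocity resolution: {vel_res:.2f} m/s")          # ~0.1 m/s
```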

2.3. Radar Signal Processing

After the sampling process of the analog-to-digital converters in the RF chips, data are transferred to the processor via a gigabit Ethernet line. The received signals are organized and stored in memory in the form of data cubes. In order to extract range and velocity information from the data cubes, a 2D FFT is carried out. It is necessary to reduce the processing time by simplifying the dimensions of the Range-Doppler map or the computational complexity. Our experiments apply a matrix compression method that intentionally omits some antenna channels so that the inter-element spacing is larger than half a wavelength. A Doppler coherent focusing Direction of Arrival (DoA) method operates with a relatively small amount of computation by adjusting the distribution of the scanning area [15].
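The sketch below illustrates the 2D FFT step on a data cube of (samples × chirps × virtual channels); the cube dimensions are assumed for illustration, and windowing, channel compression and CFAR detection are omitted.

```python
import numpy as np

# Sketch of the 2D FFT step on a radar data cube described above.
# The cube is random placeholder data and its dimensions are assumptions.
n_samples, n_chirps, n_virtual = 512, 128, 192   # assumed cube dimensions

cube = (np.random.randn(n_samples, n_chirps, n_virtual)
        + 1j * np.random.randn(n_samples, n_chirps, n_virtual))

# Range FFT along fast time (samples), Doppler FFT along slow time (chirps).
range_fft = np.fft.fft(cube, axis=0)
rd_cube   = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)

# Non-coherent integration over the virtual channels yields a Range-Doppler map.
rd_map = np.abs(rd_cube).sum(axis=2)
print(rd_map.shape)   # (range bins, Doppler bins)
```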

2.4. Misalignment Correction

Figure 1 shows the positions of the GNSS receiver and the radar sensor. Because the radar is mounted on the front bumper of the vehicle while the GNSS receiver is mounted at the center of the rear axle, a position error of the points occurs when the point cloud image is generated during rotation and curve driving. In straight driving, data are collected with a certain delay between the radar and the GNSS and are thus relatively less distorted. However, when driving outside a straight line, such as when curving or rotating, the different positions of the radar and the GNSS in the vehicle cause the detected single-point targets to become blurred and appear as an extended object in space. This blurring can be minimized by applying the misalignment offset modification algorithm.
The location of the target viewed from the radar sensor is described as:
$x_{\mathrm{radar}} = r_{\mathrm{radar}} \sin\theta_{\mathrm{radar}}, \qquad y_{\mathrm{radar}} = r_{\mathrm{radar}} \cos\theta_{\mathrm{radar}}$  (11)
The location of the target viewed from the GNSS is described as:
$x_{\mathrm{radar}}^{\,ldc} = x_{\mathrm{radar}} + \alpha, \qquad y_{\mathrm{radar}}^{\,ldc} = y_{\mathrm{radar}} + \beta$  (12)
where x_radar^ldc and y_radar^ldc are the x and y coordinates with local displacement, and α and β are the x- and y-axis distances between the GNSS and the sensor.
The misalignment correction coefficient matrix based on the sensor is applied as:
$\left[ x_{\mathrm{radar}}^{\,ldc} \;\; y_{\mathrm{radar}}^{\,ldc} \right]^T = R_{MC} \times \left[ x_{\mathrm{radar}} \;\; y_{\mathrm{radar}} \right]^T + \left[ \alpha \;\; \beta \right]^T$  (13)
where R_MC is the misalignment correction matrix.
Finally, the movement of the platform observed in the global frame can be defined as follows:
$\left[ x_g \;\; y_g \right]^T = R_{HA} \times \left[ x_{\mathrm{radar}}^{\,ldc} \;\; y_{\mathrm{radar}}^{\,ldc} \right]^T + \left[ x_{\mathrm{gnss}} \;\; y_{\mathrm{gnss}} \right]^T$  (14)
where (x_g, y_g) are the global coordinates of (x, y) and R_HA is the heading alignment matrix between the GNSS and the radar sensor.
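A minimal sketch of Equations (11)-(14) is given below: a radar detection (range, azimuth) is shifted by the lever arm (α, β) and rotated into global coordinates using the GNSS position and heading. Treating R_MC and R_HA as 2 × 2 rotation matrices, and the specific angles and offsets used, are assumptions made for illustration only.

```python
import numpy as np

# Sketch of Equations (11)-(14): placing a radar detection into global
# coordinates.  R_MC / R_HA as 2x2 rotations and the numbers below are assumed.
def rot(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s], [s, c]])

def radar_to_global(r, theta, alpha, beta, mount_yaw, heading, gnss_xy):
    """r, theta: radar range (m) and azimuth (rad); alpha, beta: lever arm
    between the GNSS and the radar (m); mount_yaw: radar mounting misalignment
    (rad); heading: vehicle heading from GNSS (rad); gnss_xy: global position."""
    p_radar = np.array([r * np.sin(theta), r * np.cos(theta)])     # Eq. (11)
    p_local = rot(mount_yaw) @ p_radar + np.array([alpha, beta])   # Eqs. (12)-(13)
    return rot(heading) @ p_local + np.asarray(gnss_xy)            # Eq. (14)

# Example: a target 20 m ahead of a radar mounted 3.2 m in front of the GNSS
# antenna, with the vehicle heading 30 degrees.
print(radar_to_global(20.0, 0.0, alpha=0.0, beta=3.2, mount_yaw=0.0,
                      heading=np.deg2rad(30.0), gnss_xy=(100.0, 50.0)))
```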

2.5. Camera-Radar Calibration

To calibrate the distortion caused by the camera lens, an internal calibration is performed by obtaining the focal lengths, optical center and radial distortion coefficients of the camera. A point on the camera image, [u v 1]^T, is related to a point in camera homogeneous coordinates, [X Y Z 1]^T, as follows:
$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & s & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$  (15)
where f_u and f_v are the focal lengths, u_0 and v_0 are the optical center coordinates, and s is the skew coefficient.
The skew coefficient indicates the degree of tilt of the image sensor cell array in the Y-axis direction. Modern cameras have very little skew error, so this value is generally set to zero for the camera module. To convert a point in the radar coordinate system, [X Y Z 1]^T, into a point in the camera coordinate system, a rotation matrix R and a translation vector p must be introduced.
$\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}_{\mathrm{camera}} = \begin{bmatrix} R_{3\times3} & p_{3\times1} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}_{\mathrm{radar}}$  (16)
This matrix is referred to as the extrinsic parameter matrix, and its eight variable values are determined by performing more than eight tests using the evaluation board and the corner reflector. The complete projection is expressed as the following equation:
$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & s & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & p_1 \\ r_{21} & r_{22} & r_{23} & p_2 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$  (17)
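The sketch below applies Equations (15)-(17) to project a 3D radar point onto the image plane. The intrinsic and extrinsic values are placeholders, not the calibrated parameters of our setup, and the radar point is assumed to be expressed with its third axis along the camera's optical (depth) axis.

```python
import numpy as np

# Sketch of Equations (15)-(17): projecting a 3D radar point onto the camera
# image.  All numeric values are placeholders, not calibrated parameters.
K = np.array([[1000.0,    0.0, 640.0],    # f_u, s, u_0
              [   0.0, 1000.0, 360.0],    # f_v, v_0
              [   0.0,    0.0,   1.0]])

R = np.eye(3)                             # radar-to-camera rotation (assumed)
p = np.array([0.0, -0.5, 0.2])            # radar-to-camera translation (m), assumed

def project(point_radar):
    """Map a radar point [X, Y, Z] (m) to pixel coordinates (u, v)."""
    point_cam = R @ point_radar + p       # extrinsic transform, Eq. (16)
    uvw = K @ point_cam                   # intrinsic projection, Eq. (15)
    return uvw[:2] / uvw[2]               # normalize the homogeneous coordinate

print(project(np.array([2.0, 1.0, 20.0])))   # prints the pixel coordinates (u, v)
```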

3. Results

3.1. Point Cloud Image

A point cloud image is a group of point images in which signals reflected from objects are produced in the form of three-dimensional points; such images have been widely applied to implement digital twins of buildings or objects using laser scanners [16,17,18,19,20,21].
As radar performance has been enhanced, more and more functions, such as road course estimation, classification and localization, have been developed beyond the existing simple obstacle detection and tracking-oriented functions. Radar-based point cloud images are a set of data points in space that accumulate radar detection points with X, Y and Z coordinates from temporal and spatial perspectives [6,22,23,24,25,26]. The points of the image adopt the GNSS localization sensor to enhance accuracy in both the temporal and spatial domains. The implementation is relatively straightforward, but if the exact location value is not found, the image quality deteriorates greatly; only when the position alignment and compensation between the GNSS sensor and the radar sensor are exactly determined can the image be generated properly without blur. Figure 2 shows the images before and after misalignment correction for a single point target in front, collected while the vehicle was driving, and Figure 3 compares point cloud images of a parked vehicle before and after misalignment correction. Before the correction, the signal detected by the radar mounted on the front bumper of the vehicle is transformed based on the coordinates of the GNSS sensor at the center of the rotation axis of the rear wheels. The image generated after applying the coordinate correction algorithm shows that the detected points are clustered much more tightly around the vehicle. Since the radar can measure the Doppler velocity of a detected object, it can also output the relative speed of targets around the ego-vehicle. In our experiment, we use an ego-velocity estimation algorithm to estimate the speed of the ego-vehicle, so that moving objects such as vehicles and stationary objects such as curbs, walls and street trees are accurately distinguished and represented, as illustrated in the sketch below.
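The paper does not detail the ego-velocity estimator itself, so the following is only an illustrative sketch of a common approach: for a stationary point at azimuth θ, the measured radial velocity is approximately −v·cos(θ) for forward speed v, so a least-squares fit over all detections recovers v, and detections that deviate strongly from that profile are treated as moving objects. All numeric values are simulated.

```python
import numpy as np

# Illustrative sketch (not necessarily the exact algorithm of this paper):
# estimate ego-velocity from Doppler measurements and flag moving detections.
rng = np.random.default_rng(0)
v_ego = 12.0                                        # true ego speed (m/s)

theta = rng.uniform(-np.pi / 3, np.pi / 3, 200)     # detection azimuths (rad)
v_r = -v_ego * np.cos(theta) + rng.normal(0.0, 0.1, 200)
v_r[:10] += 8.0                                     # inject 10 moving targets

# Least-squares fit of the stationary model v_r ~ -v * cos(theta)
A = -np.cos(theta)[:, None]
v_est, *_ = np.linalg.lstsq(A, v_r, rcond=None)
residual = np.abs(v_r - A[:, 0] * v_est[0])
moving = residual > 1.0                             # outlier threshold (assumed)

print(f"estimated ego speed: {v_est[0]:.2f} m/s, moving detections: {moving.sum()}")
```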
Figure 4 shows the results of constructing a three-dimensional point cloud radar image around the road via our radar mounted on the vehicle. This figure also shows that the curbstone and the outer wall of the sidewalk appear clearly in the point cloud image.

3.2. A Sensor Fusion Image Based on Camera and Radar

A radar system with a bandwidth of 1 GHz can accurately measure the distance from the radar to a detected object to within about 20 cm based on the time of flight of the RF signal. By virtue of this accurate measurement, point cloud images that are well aligned and combined with camera images become increasingly useful as various usage scenarios are found for unmanned vehicles. In our experiment, the radar-based point cloud image overlapped with the navigation road map image can accurately represent the current position of the ego-vehicle. As shown in Figure 5, the QtWeb-based road map Application Programming Interface (API) is displayed in real time and overlapped with the two-dimensional radar point cloud map image so that the current position of the vehicle can be accurately indicated. The stationary curbstones around the vehicle are represented in blue, and the detected vehicles and moving pedestrians in front are represented in red.
Image fusion of a camera image and the three-dimensional radar point cloud is also implemented in our work, because improving target-detection performance by combining camera and radar images has attracted much attention. In order to fuse the images, the three-dimensional radar point coordinates need to be converted into two-dimensional camera image coordinates. After calibration, the Intersection over Union (IoU) is applied. IoU is an evaluation metric used to measure the accuracy of an object detector on a particular dataset; a minimal sketch of the metric is given after this paragraph. As shown in Table 2, through calibration, the alignment is improved to an error within 2.2 pixels and a Root Mean Square Error (RMSE) of less than 0.05 m. Figure 6 shows the fusion of the camera image and the three-dimensional radar point cloud image. The median barrier in the middle of the road, building walls, street trees and a metal box on the sidewalk can all be observed in this fused image.
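As a reference for the metric, the sketch below computes IoU for two axis-aligned bounding boxes in pixel coordinates; the box values are illustrative only and are not taken from our dataset.

```python
# Minimal sketch of the Intersection over Union (IoU) metric mentioned above,
# for two axis-aligned boxes given as (x1, y1, x2, y2) in pixel coordinates.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: camera-detected box vs. a box derived from the projected radar points
print(iou((100, 120, 220, 260), (110, 125, 230, 270)))
```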

4. Discussion

Previously, a 79 GHz high-resolution radar system for unmanned vehicles was developed, and its basic detection performance was verified through road driving tests. In order to expand the applications of this radar system beyond the existing detection and tracking functions, we propose implementing a point cloud image using GNSS sensors and our radar. Due to the different mounting positions of the radar and the GNSS (i.e., different coordinate reference points), alignment errors may occur when the vehicle rotates or turns, which distorts the point cloud image. To solve this problem, we developed a correction algorithm for these errors. It is confirmed that an error of 2.7 m when driving over 20 m was reduced to less than 0.2 m for vehicles applying the correction algorithm. This accurate measurement can meet the needs of various application scenarios for unmanned vehicles, and we present several typical scenarios in the experiments. An algorithm that overlaps a point cloud image detected by the radar in real time with a QtWeb-based navigation image is implemented. By aligning and matching the road information of the navigation map with the road edge information detected by the radar, the current location of an unmanned vehicle can be accurately estimated in shaded areas where the GNSS signal is cut off or when driving slowly in the city center. Another scenario is to overlap the radar's image information with the camera's image information under the same coordinate reference system but with different modality information for the same target. Extrinsic calibration, a correction for the different positions of the camera lens and the sensor, is further performed. This guarantees not only camera information but also the radar's location, angle and speed information for objects in front of the vehicle, and thus greatly improves detection accuracy compared to a single-sensor system. The application and sensor fusion functions of these radars are expected to make real contributions to unmanned vehicles such as Unmanned Aerial Vehicles (UAVs) and Urban Air Mobility (UAM), helping them successfully reach their destinations.

Author Contributions

Conceptualization, project administration and writing—original draft preparation, review and editing, J.K.; hardware, writing—review and editing, S.K.; radar algorithm, data curation, S.C.; algorithm, validation, M.E.; hardware, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Meinel, H. Evolving Automotive Radar—From the very beginnings into the future. In Proceedings of the 8th European Conference on Antennas and Propagation (EuCAP 2014), The Hague, The Netherlands, 6–11 April 2014; pp. 3107–3114. [Google Scholar] [CrossRef]
  2. Khalid, F.B.; Nugraha, D.T.; Roger, A.; Ygnace, R.; Bichl, M. Distributed Signal Processing of High-Resolution FMCW MIMO Radar for Automotive Applications. In Proceedings of the 2018 15th European Radar Conference (EuRAD), Madrid, Spain, 26–28 September 2018; pp. 513–516. [Google Scholar] [CrossRef]
  3. Dickmann, J.; Klappstein, J.; Hahn, M.; Appenrodt, N.; Bloecher, H.L.; Werber, K.; Sailer, A. Automotive radar the key technology for autonomous driving: From detection and ranging to environmental understanding. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2–6 May 2016; pp. 1–6. [Google Scholar] [CrossRef]
  4. Rao, R.; Cui, C.; Chen, L.; Gao, T.; Shi, Y. Quantitative Testing and Analysis of Non-Standard AEB Scenarios Extracted from Corner Cases. Appl. Sci. 2024, 14, 173. [Google Scholar] [CrossRef]
  5. Prophet, R.; Hoffmann, M.; Vossiek, M.; Li, G.; Sturm, C. Parking space detection from a radar based target list. In Proceedings of the 2017 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM), Nagoya, Japan, 19–21 March 2017; pp. 91–94. [Google Scholar] [CrossRef]
  6. Wang, M.; Yue, G.; Xiong, J.; Tian, S. Intelligent Point Cloud Processing, Sensing, and Understanding. Sensors 2024, 24, 283. [Google Scholar] [CrossRef] [PubMed]
  7. Huch, S.; Lienkamp, M. Towards Minimizing the LiDAR Sim-to-Real Domain Shift: Object-Level Local Domain Adaptation for 3D Point Clouds of Autonomous Vehicles. Sensors 2023, 23, 9913. [Google Scholar] [CrossRef]
  8. Donnet, B.J.; Longstaff, I.D. MIMO Radar, Techniques and Opportunities. In Proceedings of the 2006 European Radar Conference, Manchester, UK, 13–15 September 2006; pp. 112–115. [Google Scholar] [CrossRef]
  9. Sun, P.; Dai, H.; Wang, B. Integrated Sensing and Secure Communication with XL-MIMO. Sensors 2024, 24, 295. [Google Scholar] [CrossRef] [PubMed]
  10. Kim, J.; Kim, B.; Choi, S.; Cho, H.; Kim, W.; Eo, M.; Khang, S.; Lee, S.; Sugiura, T.; Nikishov, A.; et al. 79-GHz Four-RFIC Cascading Radar System for Autonomous Driving. In Proceedings of the 2020 IEEE International Symposium on Circuits and Systems (ISCAS), Seville, Spain, 12–14 October 2020; pp. 1–5. [Google Scholar] [CrossRef]
  11. Sit, Y.L.; Li, G.; Manchala, S.; Afrasiabi, H.; Sturm, C.; Lubbert, U. BPSK-based MIMO FMCW Automotive-Radar Concept for 3D Position Measurement. In Proceedings of the 2018 15th European Radar Conference (EuRAD), Madrid, Spain, 26–28 September 2018; pp. 289–292. [Google Scholar] [CrossRef]
  12. Cho, H.W.; Kim, W.S.; Choi, S.D.; Eo, M.S.; Khang, S.T.; Kim, J.S. Guided Generative Adversarial Network for Super Resolution of Imaging Radar. In Proceedings of the 2020 17th European Radar Conference (EuRAD), Utrecht, The Netherlands, 10–15 January 2021; pp. 144–147. [Google Scholar] [CrossRef]
  13. Kronauge, M.; Rohling, H. New chirp sequence radar waveform. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 2870–2877. [Google Scholar] [CrossRef]
  14. Rohling, H.; Kronauge, M. New radar waveform based on a chirp sequence. In Proceedings of the 2014 International Radar Conference, Lille, France, 13–17 October 2014; pp. 1–4. [Google Scholar] [CrossRef]
  15. Choi, S.D.; Kim, B.K.; Kim, J.S.; Cho, H. Doppler Coherent Focusing DOA Method for Efficient Radar Map Generation. In Proceedings of the 2019 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM), Detroit, MI, USA, 15–16 April 2019; pp. 1–4. [Google Scholar] [CrossRef]
  16. Zhang, G.; Geng, X.; Lin, Y.J. Comprehensive mPoint: A Method for 3D Point Cloud Generation of Human Bodies Utilizing FMCW MIMO mm-Wave Radar. Sensors 2021, 21, 6455. [Google Scholar] [CrossRef] [PubMed]
  17. Abdulla, A.R.; He, F.; Adel, M.; Naser, E.S.; Ayman, H. Using an Unmanned Aerial Vehicle-Based Digital Imaging System to Derive a 3D Point Cloud for Landslide Scarp Recognition. Remote Sens. 2016, 8, 95. [Google Scholar] [CrossRef]
  18. Tomi, R.; Eija, H. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera. Sensors 2012, 12, 453–480. [Google Scholar] [CrossRef] [PubMed]
  19. Wu, T.; Fu, H.; Liu, B.; Xue, H.Z.; Ren, R.K.; Tu, Z.M. Detailed Analysis on Generating the Range Image for LiDAR Point Cloud Processing. Electronics 2021, 10, 1224. [Google Scholar] [CrossRef]
  20. Roberto, P.; Marina, P.; Francesca, M.; Massimo, M.; Christian, M.; Eva, S.M.; Emanuele, F.; Andrea, M.L. Point Cloud Semantic Segmentation Using a Deep Learning Framework for Cultural Heritage. Remote Sens. 2020, 12, 1005. [Google Scholar] [CrossRef]
  21. Lukasz, S.; Katarzyna, F.; Adam, D.; Joanna, D. LiDAR Point Cloud Generation for SLAM Algorithm Evaluation. Sensors 2021, 21, 3313. [Google Scholar] [CrossRef] [PubMed]
  22. Werber, K.; Rapp, M.; Klappstein, J.; Hahn, M.; Dickmann, J.; Dietmayer, K.; Waldschmidt, C. Automotive radar gridmap representations. In Proceedings of the 2015 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM), Heidelberg, Germany, 27–29 April 2015; pp. 1–4. [Google Scholar] [CrossRef]
  23. Point Cloud. Available online: https://en.wikipedia.org/wiki/Point_cloud (accessed on 22 March 2024).
  24. Wang, J.; Zang, D.; Yu, J.; Xie, X. Extraction of Building Roof Contours from Airborne LiDAR Point Clouds Based on Multidirectional Bands. Remote Sens. 2024, 16, 190. [Google Scholar] [CrossRef]
  25. Wang, Y.; Han, X.; Wei, X.; Luo, J. Instance Segmentation Frustum–Point Pillars: A Lightweight Fusion Algorithm for Camera–LiDAR Perception in Autonomous Driving. Mathematics 2024, 12, 153. [Google Scholar] [CrossRef]
  26. Xie, J.; Hsu, Y.; Feris, R.S.; Sun, M. Fine registration of 3D point clouds with iterative closest point using an RGB-D camera. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 2904–2907. [Google Scholar] [CrossRef]
Figure 1. Different positions of the radar and GNSS.
Figure 2. Comparison of images before (a) and after (b) misalignment correction for the accumulation of point target data collected in driving.
Figure 3. The multi-frame point cloud image was realized by rotating around the parked vehicle. Before misalignment correction (a), the points blur the object shape, but after calibration (b), the shape and boundary of the object become clear.
Figure 4. A point cloud map image is generated in real time on an urban road using a fabricated radar system mounted on a vehicle. The figure also shows the 3D view image with elevation information added.
Figure 5. The QtWeb-based road map API is displayed in real time and overlaps with the two-dimensional radar point cloud map image. It is easy to capture the exact location and the surrounding information of the vehicle. (a) The top view of the real map image, and (b) the overlapping image.
Figure 6. Camera image and the radar point cloud map image are combined to implement a sensor fusion image. (a) The fused image and (b) a three-dimensional point cloud image of the radar.
Table 1. Main parameters of our tested radar system.

Center Frequency (GHz): 79
Bandwidth (GHz): 3.5
Velocity Resolution (m/s): 0.1
Range Resolution (cm): 4.3
Angle Resolution (deg.): 2.5
Field of View (deg.): 160
Table 2. Misalignment value of tested camera and radar.

Index: Misalignment
Pixel Value (pixels): 2.2
RMSE (m): 0.05
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
