Review

High-Definition Map Representation Techniques for Automated Vehicles

by Babak Ebrahimi Soorchaei 1,†, Mahdi Razzaghpour 2,*,†, Rodolfo Valiente 2,†, Arash Raftari 2,† and Yaser Pourmohammadi Fallah 2,†

1 Department of Computer Science, University of Central Florida, Orlando, FL 32816, USA
2 Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
* Author to whom correspondence should be addressed.
† Connected & Autonomous Vehicle Research Lab (CAVREL), University of Central Florida, Orlando, FL 32816, USA.
Electronics 2022, 11(20), 3374; https://doi.org/10.3390/electronics11203374
Submission received: 31 August 2022 / Revised: 7 October 2022 / Accepted: 8 October 2022 / Published: 19 October 2022
(This article belongs to the Collection Advance Technologies of Navigation for Intelligent Vehicles)

Abstract:

Many studies in the field of robot navigation have focused on environment representation and localization. The goal of map representation is to summarize spatial information in topological and geometrical abstracts. By providing strong priors, maps improve the performance and reliability of automated robots. With the transition to fully automated driving in recent years, there has been a constant effort to design methods and technologies that improve the precision of information about road participants and the environment. Among these efforts is the high-definition (HD) map concept. Making HD maps requires accuracy, completeness, verifiability, and extensibility. Because of the complexity of HD mapping, it is currently expensive and difficult to implement, particularly in an urban environment. In an urban traffic system, the road model is at least a map with sets of roads, lanes, and lane markers. While more research is being dedicated to mapping and localization, a comprehensive review of the various types of map representation is still required. This paper presents a brief overview of map representation, followed by a detailed literature review of HD maps for automated vehicles. The current state of autonomous vehicle (AV) mapping is encouraging: the field has matured to the point where detailed maps of complex environments are built in real time and have proved useful. Many existing techniques are robust to noise and can cope with a large range of environments. Nevertheless, there are still open problems for future research. AV mapping will continue to be a highly active research area essential to the goal of achieving full autonomy.

1. Introduction

The problem of mobile robot navigation has traditionally been approached by breaking it down into three parts: environment mapping, localization, and trajectory planning. For autonomous vehicles (AVs), accurate and reliable self-localization is critical [1]. In order to operate safely, AVs must precisely predict the future actions and/or trajectories of other road participants such as connected and nonconnected vehicles and pedestrians [2,3,4,5]. For instance, the ability to accurately predict pedestrian behavior is crucial to ensure safe autonomous driving solutions. However, this task is challenging because, in general, pedestrian trajectories can change rapidly and lack temporal smoothness [6]. Accessing the environment information in the form of a prebuilt map can help with such challenging tasks. Furthermore, when combined with a prebuilt map, a high-precision self-localization solution can transform the difficult problem of perception and scene interpretation into a less complex positioning problem [7,8]. The criteria for achieving accurate self-localization on the map have been discussed in [9].
AVs intend to offer a safe and comfortable ride using the output of sensory units, a map, and a high-level route [10,11,12,13]. Meanwhile, in safety-critical applications such as self-driving cars, creating interpretable intermediate representations that explain why the car performed a given maneuver is critical for decision-making [14,15]. If the map is updated regularly and reliably, an AV can partially handle the autonomy problem offline by using map information for decision-making during maneuvers [16]. Furthermore, map data can be shared and updated by multiple AVs, allowing for real-time map updates and improving confidence in the accuracy of the map. Higher levels of autonomy require maps with more refined detail and quality standards. In this context, the solution for high-precision localization is to provide a unified representation that combines the agent dynamics, collected by perception and tracking systems, with the scene context, commonly provided as prior knowledge in the form of high-definition (HD) maps [17,18,19,20].
In contrast to the unified representation, other solutions use an end-to-end approach that creates an internal, learned map representation of the world [21,22,23,24,25,26,27]. End-to-end approaches that learn such an internal mapping could be beneficial for scaling self-driving solutions that generalize and find optimal map representations for the driving task [28,29]. Towards this goal, the work in [30] is one of the earliest end-to-end systems and pioneered this field by using a neural network to directly control the AV. Current end-to-end solutions use simultaneous perception and prediction to provide outputs such as object tracking and predicted trajectories, or learn an intermediate semantic mapping that is used to control the AV, enabling end-to-end learning of the full autonomy system [25,28]. Other end-to-end solutions propose controlling the steering angle directly from raw camera images [27]. In [31], the authors proposed an end-to-end trainable long short-term memory (LSTM) [32] network for that purpose. The authors in [33] used a similar strategy and proposed a 3D convolutional neural network (CNN) model with LSTM layers and residual connections. Other studies proposed further combinations of LSTM, CNN, and 3D CNN models for end-to-end driving solutions [34,35]. These approaches focus on imitating human drivers and learning a hidden representation, but they are not interpretable.
An end-to-end learnable neural network can perform joint perception, prediction, and motion planning for AVs while producing interpretable intermediate representations. The interpretable representations are used by the planner and help to explain the AV decisions [21,23]. In [21], the authors presented an end-to-end approach for predicting intermediate representations in the form of an online map as well as agents’ dynamics and their current and future states. The solution produced probabilistic intermediate representations that were interpretable and ready to use for the motion planner. Although directly outputting driving commands is a general solution, it may have stability and robustness issues, and a combination of an HD map and internal latent representations (feature map) can be advantageous [22] and can also be learned end-to-end from human demonstrations [23]. This is accomplished through the use of a novel differentiable semantic occupancy representation, which is explicitly used as a cost in the motion-planning process.
It is also common to rasterize maps into a top-down view or bird’s-eye view (BEV) map, which can be thought of as a 3D top-view map that respects the nature of the data, making the learning process easier because it can leverage priors about objects’ geometry [23,24,36,37,38,39,40]. Because height information is less valuable for AVs, the relevant information that an AV requires for decision-making can be suitably encoded using a BEV map representation. Using a BEV as an output of the perception module results in an interpretable and easy-to-use representation for prediction and motion-planning modules [24,37,40].
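As a concrete illustration of this rasterization idea, the minimal sketch below projects 2D map points (for example, a lane-boundary polyline) into an ego-centered BEV occupancy image. The grid size, resolution, and coordinate conventions are illustrative assumptions, not those of any particular system.

```python
import numpy as np

def world_to_bev(points_xy, ego_xy, ego_yaw, resolution=0.1, size=(400, 400)):
    """Project 2D world points into an ego-centered BEV grid.

    points_xy: (N, 2) array of map or object points in world coordinates [m].
    ego_xy, ego_yaw: ego pose in the same frame.
    resolution: meters per cell; size: (rows, cols) of the BEV image.
    Returns a binary occupancy image with the ego at the center.
    """
    c, s = np.cos(-ego_yaw), np.sin(-ego_yaw)
    rot = np.array([[c, -s], [s, c]])
    local = (points_xy - ego_xy) @ rot.T                          # world -> ego frame
    rows = (size[0] / 2 - local[:, 0] / resolution).astype(int)   # x forward -> image up
    cols = (size[1] / 2 - local[:, 1] / resolution).astype(int)   # y left -> image left
    bev = np.zeros(size, dtype=np.uint8)
    valid = (rows >= 0) & (rows < size[0]) & (cols >= 0) & (cols < size[1])
    bev[rows[valid], cols[valid]] = 1
    return bev

# Example: rasterize a straight lane boundary 20 m ahead of the ego vehicle.
lane = np.stack([np.linspace(0, 20, 200), np.full(200, 1.75)], axis=1)
bev = world_to_bev(lane, ego_xy=np.array([0.0, 0.0]), ego_yaw=0.0)
print(bev.sum(), "occupied cells")
```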
Despite significant progress in this area, mapping still presents substantial challenges due to sensor noise and practical constraints during map creation. Existing mapping algorithms are often surprisingly complex, both from a mathematical and from an implementation point of view. As a result, novel map representations are required for the full adoption of AVs. This paper provides a general review of the most common map representation approaches, with a focus on AV mapping. The contributions of the paper are as follows:
  • This paper describes and compares different map representation approaches and their applications, such as highly/moderately simplified map representations, which are primarily used in the robotics domain.
  • We provide a detailed literature review of HD maps for automated vehicles, as well as the structure of their various layers and the information contained within them, based on different companies’ definitions of an HD map.
  • We discuss the current limitations and challenges of the HD map, such as data storage and map update routines, as well as future research directions.

2. Real-Time (Online) Mapping

Robotics applications frequently necessitate real-time processing. This means that the input data must be processed at a rate that is faster than or equal to the input data rate to avoid frame dropping [41]. Real-time mapping allows the robot to map out unknown environments and localize within that map at the same time. However, as the action of driving gradually transfers from humans to machines, the role and scope of maps extend beyond navigation. As a result, offline map-based approaches have received more attention in most AV applications during the last decade. The computational problem of constructing or updating a map of an unknown environment while simultaneously tracking an agent’s location within it is known as simultaneous localization and mapping (SLAM).

Simultaneous Localization and Mapping (SLAM)

In many applications, such as indoor robot navigation, offline maps are not available [42,43]. The agent can utilize SLAM to construct a map on the fly from raw sensory data (mapping), while simultaneously using that map to keep track of its location (localization) [44,45,46,47,48,49,50]. SLAM techniques perform well over short distances, but they suffer from accumulated error over longer distances because each estimate depends on previous ones; loop-closure modules in SLAM systems (together with pose-graph optimization) compensate for these errors and correct the accumulated drift. The map representations employed in SLAM techniques vary widely, but the most important distinction is whether they are 2D- or 3D-oriented, or a combination of both. It is reasonable to expect that SLAM performs better when paired with a combination of sensors (GPS, IMU, LiDAR, and radar). Figure 1 shows a result of SLAM using the Cartographer package [51,52].
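To make the loop-closure idea concrete, the toy sketch below optimizes a small 2D pose graph: noisy odometry constraints plus a single loop-closure constraint are fed to a nonlinear least-squares solver, which redistributes the accumulated drift. This is only an illustrative sketch of pose-graph optimization, not the formulation used by the cited SLAM systems (poses are positions only, with no headings or covariances).

```python
import numpy as np
from scipy.optimize import least_squares

# Toy pose graph: poses are (x, y) positions, odometry edges are drifting
# relative displacements, and one loop-closure edge ties the last pose back
# to the first. All values are made up for illustration.
odometry = [np.array([1.0, 0.0])] * 4 + [np.array([0.0, 1.02])] * 4 \
         + [np.array([-1.01, 0.0])] * 4 + [np.array([0.0, -1.0])] * 4   # drifting square
loop_closure = (16, 0, np.array([0.0, 0.0]))   # pose 16 should coincide with pose 0

def residuals(flat_poses):
    poses = flat_poses.reshape(-1, 2)
    res = [poses[0]]                              # anchor the first pose at the origin
    for i, d in enumerate(odometry):              # odometry constraints
        res.append(poses[i + 1] - poses[i] - d)
    i, j, d = loop_closure                        # loop-closure constraint
    res.append(poses[i] - poses[j] - d)
    return np.concatenate(res)

# Initialize by dead reckoning (integrating the noisy odometry).
init = np.vstack([np.zeros(2), np.cumsum(odometry, axis=0)])
sol = least_squares(residuals, init.ravel())
optimized = sol.x.reshape(-1, 2)
print("drift before:", np.linalg.norm(init[-1]), "after:", np.linalg.norm(optimized[-1]))
```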
ORB-SLAM is among the most well-known mapping and localization systems that operate in real time while keeping localization and tracking accuracy at a desirable level [53,54,55,56]. In [57], the authors showed that a multilayer perceptron (MLP) could be used as the only scene representation in a real-time SLAM system using a hand-held RGB-D camera.
Some map representation techniques, such as occupancy grid maps [58,59,60,61,62,63,64,65,66,67,68,69] and Octomap [70,71,72,73,74,75,76,77], are used in SLAM methods and are discussed in the following section. For implementations of SLAM algorithms and their comparison, one can refer to the following surveys: [50,78,79].

3. Highly/Moderately Simplified Map Representations

This category of maps is mainly utilized in the robotics domain and can be classified into three subcategories: topological maps, metric maps, and geometric maps.

3.1. Topological Maps

Topological maps are mainly graph-based representations that deal exclusively with places and their interactions [80,81,82], describing the environment as a collection of nodes (locations) connected by edges [83]. An edge between two nodes is labeled with a probability distribution over the relative locations of the two poses, conditioned on their mutual measurements [84]. A world representation based on this simplification makes map extension easier and provides the information required for path planning and motion prediction [85,86,87,88,89]. Despite the reduced world model, topological representations lose the sense of proximity and lack explicit information regarding the space’s occupancy. Many authors have approached this problem by storing additional data or by combining topological maps with metric maps [83,90].
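The minimal sketch below illustrates this representation: places are nodes of a weighted graph, and path planning reduces to graph search. The place names and edge costs are invented for illustration.

```python
import heapq

# A toy topological map: nodes are named places, edges carry traversal costs.
# The adjacency structure and costs are illustrative, not from any real map.
topo_map = {
    "parking_lot": {"lobby": 30.0},
    "lobby":       {"parking_lot": 30.0, "corridor_a": 12.0, "corridor_b": 18.0},
    "corridor_a":  {"lobby": 12.0, "lab": 9.0},
    "corridor_b":  {"lobby": 18.0, "lab": 25.0},
    "lab":         {"corridor_a": 9.0, "corridor_b": 25.0},
}

def shortest_path(graph, start, goal):
    """Dijkstra over the place graph: returns (cost, list of nodes)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node].items():
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_path(topo_map, "parking_lot", "lab"))
# (51.0, ['parking_lot', 'lobby', 'corridor_a', 'lab'])
```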

3.2. Metric Maps

Contrary to topological maps, in metric maps the objects are represented with precise coordinates. Such maps contain all the information required for a mapping or navigation algorithm to function [91]. In these methods, the map size is directly proportional to the area of the region of interest. Therefore, mapping vast areas, especially in a 3D representation, is computationally expensive. Landmark-based maps, occupancy grid maps, and geometric maps are the most popular metric mapping methods.

3.2.1. Landmark-Based Maps

Landmark-based representations, also known as feature-based representations, identify and maintain the positions of specific distinguishing landmarks [92,93,94,95,96]. A prerequisite of these representations is that the landmarks be unique and identifiable by the robot’s perception system. Landmarks can be defined as sophisticated descriptors rather than raw sensor data. Points, lines, and corners can be used to create a minimalist description of the landscape. Some methods, such as that in [97], applied topological mapping on top of landmark-based maps.

3.2.2. Occupancy Grid Maps

Occupancy grid maps [98] divide the environment into so-called grid cells. Each cell contains data about the area it covers [58]. Figure 2 shows an example of a simple grid map. It is typical to store a single value in each cell that represents the likelihood of an obstacle being there. Traditional probability-based techniques, such as particle or Kalman filters, are most typically used to combine input from several sensors and localize against a known prior map [59,60,64,69,99,100,101,102,103].
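The sketch below illustrates the standard log-odds update of such a grid from a single range-sensor ray; the hit/miss probabilities, grid size, and resolution are assumed values for illustration only.

```python
import numpy as np

class OccupancyGrid:
    """A minimal 2D log-odds occupancy grid (assumed parameters, for illustration)."""

    def __init__(self, size=(100, 100), resolution=0.1, p_hit=0.7, p_miss=0.4):
        self.log_odds = np.zeros(size)      # log-odds 0 == probability 0.5 (unknown)
        self.res = resolution
        self.l_hit = np.log(p_hit / (1 - p_hit))
        self.l_miss = np.log(p_miss / (1 - p_miss))

    def update_ray(self, origin, endpoint):
        """Mark cells along a range-sensor ray as free and the endpoint as occupied."""
        o = (np.asarray(origin) / self.res).astype(int)
        e = (np.asarray(endpoint) / self.res).astype(int)
        n = max(abs(e - o).max(), 1)
        for t in np.linspace(0.0, 1.0, n + 1):
            r, c = np.round(o + t * (e - o)).astype(int)
            self.log_odds[r, c] += self.l_hit if (r, c) == tuple(e) else self.l_miss

    def probabilities(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))

grid = OccupancyGrid()
grid.update_ray(origin=(5.0, 5.0), endpoint=(5.0, 8.0))   # a single 3 m ray
print(grid.probabilities()[50, 80])                       # endpoint cell -> above 0.5
```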
Occupancy grid maps can be either 2D or 3D [104]. A version known as 2.5D stores height information in an extended 2D grid cell map rather than being a pure 3D grid map [105]. Grid maps can be created from regular grids or sparse grids. Regular grids discretize continuous space into cells with the same dimensions for the entire region, whereas sparse grids extend the concept of the regular grid by grouping regions with the same values in a tree-like fashion. This map can be used to predict multi-pedestrian movements [106] as well as obstacle crossing. In general, occupancy grid maps can be categorized as follows:
  • Octree: The octree encoding [107] is a 3D hierarchical octal tree structure capable of representing objects with any morphology at any resolution. Because the memory required for representation and manipulation is on the order of the area of the object, it is commonly employed in systems that require 3D data storage due to its great efficiency [71,72,73,75,76,77,108,109,110].
  • Costmap: The costmap represents the difficulty of traversing different areas of the map. The cost is calculated by integrating the static map, local obstacle information, and the inflation layer, and it takes the shape of an occupancy grid with abstract values that do not represent any measurement of the environment. It is mostly utilized in path planning [111,112,113,114] (see the costmap sketch after this list).
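The following sketch illustrates the costmap idea mentioned in the last item: a static obstacle layer is inflated by the robot radius and a decaying caution band, producing abstract traversal costs rather than any measurement of the environment. All parameters (resolution, radius, decay constant, and cost values) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

resolution = 0.05                     # meters per cell (assumed)
robot_radius = 0.3                    # meters (assumed)
static_map = np.zeros((120, 120), dtype=bool)
static_map[60, 20:100] = True         # a wall across the middle of the map

distance = distance_transform_edt(~static_map) * resolution   # meters to nearest obstacle
costmap = np.zeros_like(distance)
costmap[static_map] = 254                                      # lethal cells
inflated = (~static_map) & (distance <= robot_radius)
costmap[inflated] = 253                                        # inscribed (guaranteed collision)
caution = (~static_map) & (distance > robot_radius)
costmap[caution] = 252 * np.exp(-5.0 * (distance[caution] - robot_radius))

print(costmap[60, 50], costmap[58, 50], costmap[40, 50])       # lethal, inflated, low cost
```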

3.3. Geometric Maps

Geometric maps attempt to represent the sensory data with discrete simplified geometric shapes such as circles or polygons [115]. Geometric maps represent the surroundings efficiently without sacrificing too much information; however, this representation impedes trajectory calculation and data management in general. As a result, as shown in Table 1, this method is rarely used in practice, and the occupancy grid map alternative is preferred [116,117].

4. High-Accuracy Map Representations

Currently, academics and manufacturers are working to develop advanced driver-assistance systems (ADAS) to attain high-level autonomy in vehicles. Maps can be used for a variety of purposes, including lowering computation cost by providing offline maps as a prior, implementing safety measures, avoiding sensor range constraints, and sharing map data among different AVs, all of which can improve ADAS accuracy and reliability. According to [118,119], high-accuracy map representations can be loosely categorized based on their level of information into one of three categories: digital maps, enhanced digital maps, and HD maps. Traditional street maps, such as Google Maps, are digital maps. Road geometry, signage, lane design, and speed limits are all included in enhanced digital maps. Finally, HD maps incorporate all of the features found in the preceding categories, as well as a semantically segmented 3D representation of the agent’s surroundings.
If the map is kept accurate and used intelligently, with an understanding of its own limitations, an HD map can be thought of as an extra sensor with a nearly perfect detection system that is unaffected by environmental occlusions.

4.1. Digital Maps

A conventional digital map is a traditional electronic street map and is provided by a variety of map providers, such as Google Maps. These are topometric (topological and metric) maps that encode street layout, names, and distances. It is worth noting that an automated car can still benefit from these prior maps, but they are unlikely to be a crucial facilitator of fully autonomous operation on their own (as opposed to HD maps). Even with an up-to-date digital map, the lack of positionally accurate and identifiable environment data (such as the location of a stop sign) limits the extent to which it can assist an automated vehicle. However, such a level of information is still sufficient for high-level navigation tasks, such as finding the shortest path from point A to point B. For more details, one can refer to [120].

4.2. Enhanced Digital Maps

An enhanced digital map is a conventional digital map augmented with additional data, making it useful for both ADAS and AVs. Road speed restrictions, road curvature, lane structure, and road signage are added to the basic digital map [121,122]. The list below goes through each of these additions based on TomTom’s ADAS map [123].
  • Road curvature;
  • Gradient (slope) of the roads;
  • Curvature (sharpness) at junctions;
  • Lane markings at junctions;
  • Traffic signs;
  • Speed restrictions (necessary for adaptive cruise control).
Due to the lack of a clear distinction between an enhanced digital map and an HD map, researchers classify any map that stores a 3D world representation as an HD map, while the rest are classified as enhanced digital maps.

4.3. High-Definition (HD) Maps

A high-definition (HD) map is a 3D representation of the world that supplements an enhanced digital map [124,125]. A combination of sensors, including LiDAR, radar, and cameras, can be used to create this representation [126]. A high positional accuracy, on the order of 10 cm, is a common feature of all HD maps [127]. Although technology constraints limit the highest possible accuracy of map features, a higher precision is always desirable.
An HD map can be as simple as a collection of accurate positions of road signs, lane markings, and guardrails in the surroundings, or as complex as a dense, semantically segmented LiDAR point cloud that stores the distance to every obstacle around the agent, as shown in Figure 3. For more information, one can refer to [128].
An HD map is usually divided into numerous layers, each of which contains different sorts of data. Figure 4 illustrates an HD map along with its layers, originally published in [130]. Furthermore, Figure 5 illustrates the layers of the HD map defined by HERE [131].
In Lyft’s HD map, the five core layers are described as follows [132,133] (a schematic data-structure sketch follows the list):
  • Base map layer: The entire HD map is layered on top of a standard street map.
  • Geometric map layer: The geometric layer in Lyft’s maps contains a 3D representation of the surrounding road network. This 3D representation is provided by a voxel map with voxels of 5 cm × 5 cm × 5 cm and is built using LiDAR and camera data. Voxels are a cheaper alternative to point clouds in terms of required storage.
  • Semantic map layer: The semantic map layer contains all semantic data, such as lane marker placements, travel directions, and traffic sign locations [23,134,135]. Within the semantic layer, there are three major sublayers:
    - Road-graph layer;
    - Lane-geometry layer;
    - Semantic-feature layer: all objects relevant to the driving task, such as traffic lights, pedestrian crossings, and road signs.
  • Map priors layer: This layer adds to the semantic layer by integrating data that have been learned via experience (crowd-sourced data). For example, the average time it takes for a traffic light to turn green or the likelihood of encountering parked vehicles on the side of a narrow route allows the AV to raise its “caution” while driving.
  • Real-time knowledge layer: This is the only layer designed to be updated in real time, to reflect changing conditions such as traffic congestion, accidents, and road work.
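The schematic sketch below mirrors the five core layers described above as a simple layered data structure; the field names and types are illustrative assumptions and do not reflect Lyft’s actual schema or file formats.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SemanticLayer:
    road_graph: Dict[str, List[str]] = field(default_factory=dict)       # lane connectivity
    lane_geometry: Dict[str, List[Tuple[float, float]]] = field(default_factory=dict)
    features: List[dict] = field(default_factory=list)                   # lights, crossings, signs

@dataclass
class HDMapTile:
    tile_id: str
    base_map: dict = field(default_factory=dict)               # standard street-map data
    geometric_voxels: set = field(default_factory=set)         # occupied 5 cm voxel indices
    semantic: SemanticLayer = field(default_factory=SemanticLayer)
    priors: Dict[str, float] = field(default_factory=dict)     # e.g., mean green-light wait times
    realtime: Dict[str, dict] = field(default_factory=dict)    # congestion, incidents, road work

# Hypothetical usage of the hypothetical schema above.
tile = HDMapTile(tile_id="orlando_0412")
tile.semantic.lane_geometry["lane_17"] = [(0.0, 1.75), (50.0, 1.75)]
tile.priors["traffic_light_3_mean_green_wait_s"] = 42.0
tile.realtime["incident_991"] = {"type": "road_work", "lane": "lane_17"}
```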
Based on a combination of the open-source Apollo software platform [136] and DeepMap’s U.S. patent [137], another description of the core layers of the HD map is offered below.
  • Lane positions and widths: The position of lane markings in 2D along with the type of lane (solid line, dashed line, etc.). Lane markings may also indicate intersections, road edges, and off-ramps.
  • Road sign positions: The 3D position of road signage includes stop signs, traffic lights, give-way signs, one-way road signs, and traffic signs. This task is especially challenging when signage conventions and road rules vary by country.
  • Special road features: Pedestrian crossings, school zones, speed bumps, bicycle lanes, and bus lanes.
  • Occupancy map: A spatial 3D representation of the road and all physical objects around the road. This representation can be stored as a mesh geometry, point cloud, or voxels (see the voxelization sketch after this list). The 3D model is essential for centimeter-level accuracy in the AV’s location on the map.
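As a small illustration of the voxel storage option mentioned in the last item, the sketch below quantizes a synthetic point cloud into 5 cm voxels, keeping only the set of occupied voxel indices; the function and parameters are assumptions for illustration.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Quantize a point cloud (N, 3) into a set of occupied voxel indices.

    Storing only occupied voxel indices (plus the resolution) is far cheaper
    than keeping every raw point, which is the trade-off mentioned above.
    """
    indices = np.floor(points / voxel_size).astype(np.int64)
    return set(map(tuple, indices))

# Example with synthetic points: a 1 m x 1 m patch of densely sampled ground.
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(0, 1, 100_000),
                         rng.uniform(0, 1, 100_000),
                         rng.normal(0.0, 0.005, 100_000)])
voxels = voxelize(cloud)
print(len(cloud), "points ->", len(voxels), "occupied 5 cm voxels")
```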

5. Localization in HD Maps

Road DNA, proposed by TomTom [138], is one of the possible solutions for the localization problem in HD maps. In this method, the detailed 3D representation of the road environment, with all features and depth information, is compressed into a collection of 2D raster images, where the image intensity corresponds to the depth of a certain area of the environment. A 2D depth image is also created from the agent’s sensor data, and pattern-matching algorithms are applied for accurate localization. The Road DNA solution allows for precise localization with substantially lower data storage requirements compared to using dense LiDAR point clouds. Since significant structural changes occur less frequently in a road environment than appearance changes, depth images can be more resistant to environmental changes than raw camera images.
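The sketch below conveys the general idea of collapsing roadside 3D structure into a 2D depth raster indexed by along-road distance and height; it is a rough illustration only and does not reproduce TomTom’s actual Road DNA format or matching algorithm.

```python
import numpy as np

def side_depth_image(points, s_res=0.1, z_res=0.1, s_max=50.0, z_max=5.0):
    """Compress roadside 3D structure into a 2D depth raster.

    points: (N, 3) array of (s, y, z), where s is distance along the road,
    y is lateral offset (depth), and z is height. Each pixel stores the
    nearest lateral depth observed in that (s, z) bin. All resolutions and
    extents are assumed values.
    """
    rows, cols = int(z_max / z_res), int(s_max / s_res)
    depth = np.full((rows, cols), np.inf)
    r = (points[:, 2] / z_res).astype(int)
    c = (points[:, 0] / s_res).astype(int)
    valid = (r >= 0) & (r < rows) & (c >= 0) & (c < cols)
    for ri, ci, yi in zip(r[valid], c[valid], points[valid, 1]):
        depth[ri, ci] = min(depth[ri, ci], yi)     # keep the closest structure
    return depth

# Synthetic example: a flat wall 4 m to the side of a 50 m road segment.
rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 50, 20_000),
                        np.full(20_000, 4.0),
                        rng.uniform(0, 5, 20_000)])
img = side_depth_image(wall)
print(np.isfinite(img).mean())     # fraction of pixels with an observed depth
```

Matching the agent’s live depth image against the stored raster could then be done with a simple correlation or sum-of-squared-differences search over the along-road offset.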
In [139], a robust ego-motion estimation technique using sensors and a map-matching technique with HD maps was presented. The authors proposed a new line-segment matching model and a geometric correction approach for road markings obtained by an inverse perspective mapping (IPM) methodology for the map-matching technique with the HD map. According to the authors’ experiments, combining these two techniques increased robustness and accuracy.
The authors in [20] compared sensory scans to an HD map using a particle filter. Their study integrated data from an IMU and a GPS receiver to determine location. The root-mean-squared error (RMSE) of the localization accuracy was 2.8 m without an HD map prior, 1.5 m with an HD map and odometry (IMU), and 1.2 m with an HD map, odometry, and GPS. While the obtained accuracy was not as good as that of commercial methods, the results confirmed the significant effect of prior HD maps on AV localization.
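To illustrate the particle-filter principle behind such map-based localization, the toy sketch below localizes a vehicle in one dimension against a small set of known landmark positions; the landmark layout and noise parameters are invented, and the cited study’s implementation is of course far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D particle filter: the "map" is a set of landmark positions along a
# road, the sensor reports the distance to the nearest landmark, and the
# odometry is noisy. All noise parameters are illustrative assumptions.
landmarks = np.array([10.0, 25.0, 60.0, 85.0])
n_particles = 500
particles = rng.uniform(0.0, 100.0, n_particles)    # unknown initial position
weights = np.full(n_particles, 1.0 / n_particles)

def nearest_landmark_distance(x):
    return np.min(np.abs(landmarks[None, :] - np.atleast_1d(x)[:, None]), axis=1)

true_pos = 5.0
for step in range(30):
    true_pos += 2.0                                               # vehicle moves 2 m
    particles += 2.0 + rng.normal(0.0, 0.3, n_particles)          # noisy odometry (predict)
    measurement = nearest_landmark_distance(true_pos)[0] + rng.normal(0.0, 0.5)
    weights *= np.exp(-0.5 * ((nearest_landmark_distance(particles) - measurement) / 0.5) ** 2)
    weights += 1e-300
    weights /= weights.sum()                                      # update
    idx = rng.choice(n_particles, n_particles, p=weights)         # resample
    particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print(f"true={true_pos:.1f} m, estimate={particles.mean():.1f} m")
```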
For LiDAR-enabled self-driving cars, the iterative closest point (ICP) algorithm is commonly used to match a 3D LiDAR point cloud to a previously collected set of points in the map. The ICP algorithm is a least-squares optimizer that iteratively determines the best rotation, scale, and translation to align a set of incoming LiDAR points with a reference set of points [140]. The strategies for aligning LiDAR points using ICP were discussed in [8]. The authors also utilized a Kalman filter to fuse sensor data.
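A minimal point-to-point ICP sketch is given below: nearest-neighbor correspondences are recomputed each iteration, and the best rigid transform is obtained in closed form via an SVD. It omits the scale estimation, outlier rejection, and convergence checks that practical LiDAR pipelines require.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Minimal point-to-point ICP: align `source` (N, 3) to `target` (M, 3).

    Returns (R, t) such that source @ R.T + t approximately matches target.
    """
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)                     # nearest-neighbor correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:                # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step       # accumulate the transform
    return R, t

# Example: recover a small known rotation/translation applied to a random cloud.
rng = np.random.default_rng(0)
map_points = rng.uniform(-10, 10, (5000, 3))
theta = 0.03
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
scan = map_points @ R_true.T + np.array([0.3, -0.1, 0.0])
R_est, t_est = icp(scan, map_points)
print("rotation error:", np.linalg.norm(R_est @ R_true - np.eye(3)))
```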
Finally, RTK (real-time kinematic) GPS can be used to obtain highly accurate localization. However, because RTK GPS relies on a network of ground stations to function properly, extra infrastructure is required for AVs to locate themselves accurately using this technique. In densely built urban environments, GPS is also vulnerable to dropouts, interference, and multipath reflection; although acceptable for long-range navigation planning, it is insufficient for second-by-second, local-positioning-based control of AVs.

6. Limitations and Challenges

The broad range of traffic laws between countries, such as restrictions on turning left and right, is one of the challenges in generating HD maps [141,142,143]. The data storage required for HD maps poses another challenge. Google’s Waymo AV, for example, collects about 1 GB of data every 20 s [144]. Since each AV has limited storage space, the vehicle must perform a dynamic map download and cache-refresh routine as it travels across the surroundings. DeepMap’s map-tiling technique [137] separates the whole HD map into map tiles and downloads the necessary tiles based on the vehicle’s position to decrease the memory requirement. The third issue is exact vehicle localization inside the HD map, which is accomplished by comparing incoming sensor data with the current map and updating the map. Processing incoming sensory data requires high-performing onboard processing resources, and real-time execution of the commands needs a latency of less than 10 ms [145].
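The sketch below illustrates the tile-download-and-cache idea with a simple grid tiling and an LRU cache keyed by tile index; the tile size, cache capacity, and loader interface are illustrative assumptions, not DeepMap’s actual design.

```python
import math
from collections import OrderedDict

TILE_SIZE_M = 200.0          # assumed tile edge length
CACHE_CAPACITY = 9           # keep roughly a 3x3 neighborhood of tiles in memory

def tile_id(x, y):
    """Map a world position (meters) to the integer index of its map tile."""
    return (math.floor(x / TILE_SIZE_M), math.floor(y / TILE_SIZE_M))

def needed_tiles(x, y, radius=1):
    """Tile indices around the vehicle that should be resident in the cache."""
    cx, cy = tile_id(x, y)
    return {(cx + dx, cy + dy) for dx in range(-radius, radius + 1)
                               for dy in range(-radius, radius + 1)}

class TileCache:
    """A tiny LRU cache standing in for on-vehicle HD-map tile storage."""
    def __init__(self, loader, capacity=CACHE_CAPACITY):
        self.loader = loader                     # downloads / reads a tile by id
        self.capacity = capacity
        self.tiles = OrderedDict()

    def refresh(self, x, y):
        for tid in needed_tiles(x, y):
            if tid in self.tiles:
                self.tiles.move_to_end(tid)      # mark as recently used
            else:
                self.tiles[tid] = self.loader(tid)
        while len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)       # evict the least recently used tile

cache = TileCache(loader=lambda tid: {"id": tid, "lanes": [], "signs": []})
for x in range(0, 2000, 100):                    # vehicle driving east
    cache.refresh(float(x), 0.0)
print(sorted(cache.tiles))                       # only tiles near the last position remain
```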
HD map update and maintenance is also a major challenge [141,146]. There are millions of kilometers of roads in the world, and many HD map modeling algorithms are proposed for highway scenarios and neglect input anomalies (such as faded lane-marking paint, flattened curbs, and tree occlusion) and uncertainties around nonroad objects (such as construction zones, nearby vehicles, and trees). In reality, however, such uncertainties and anomalies are present on many urban and rural roads. Therefore, more effort is needed to mitigate the effect of these problems.

7. Conclusions and Future Work

Both specialist mapping businesses and automated-vehicle companies have started to generate HD maps for AVs. There exists a wide range of HD map solutions available or in development, ranging from lightweight HD map solutions that primarily store lane markings and lane logic (Atlatec, Apollo) to maps that include full 3D point cloud representations (Waymo). While the most comprehensive maps with full 3D representations provide the best assurance of safety, they are costly to generate and maintain and necessitate massive quantities of data. A layered strategy, in which precise 3D data are updated less frequently and a lower-memory 2D representation of the road network is updated considerably more frequently, could be the optimal answer.
In order to implement real-time safety-critical HD maps for AVs, some fundamental challenges must be overcome. These include providing a consistent communication system between agents and HD map providers to transfer an agent’s location and corresponding semantic information in real time, a mechanism for informing HD map providers about changes to static road features (such as road signs) or anomalies and consequently rectifying such anomalies, and finally policy considerations on whether HD maps should be privately or publicly owned and operated.
In this paper, we reviewed the major map representations and important open problems in the field of HD map representation. The current state of AV mapping is encouraging; the field has matured to a point where detailed maps of complex environments are built in real time and have proved useful. Many existing techniques are robust to noise and can cope with a large range of environments. Nevertheless, there are still open problems for future research. It is promising to see new applications and innovations in map representation that can generalize to previously unseen scenarios, are scalable for real-time applications, and are applicable to unstructured, disaster, and extreme-weather environments where many of the techniques described are ineffective. AV mapping will remain a highly active research area critical to achieving full autonomy.

Author Contributions

Conceptualization and methodology, B.E.S.; writing—original draft preparation, M.R. and B.E.S.; writing—review and editing, R.V. and A.R. and Y.P.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Science Foundation under grant number CNS-1932037 and the article processing charges were provided in part by the UCF College of Graduate Studies Open Access Publishing Fund.

Institutional Review Board Statement

The study did not require ethical approval as it did not involve any humans or animals.

Informed Consent Statement

The study did not require ethical approval as it did not involve any humans or animals.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Badue, C.; Guidolini, R.; Carneiro, R.V.; Azevedo, P.; Cardoso, V.B.; Forechi, A.; Jesus, L.; Berriel, R.; Paixao, T.M.; Mutz, F.; et al. Self-driving cars: A survey. Expert Syst. Appl. 2021, 165, 113816. [Google Scholar] [CrossRef]
  2. Phan-Minh, T.; Grigore, E.C.; Boulton, F.A.; Beijbom, O.; Wolff, E.M. Covernet: Multimodal behavior prediction using trajectory sets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 14074–14083. [Google Scholar]
  3. Rhinehart, N.; McAllister, R.; Kitani, K.; Levine, S. Precog: Prediction conditioned on goals in visual multi-agent settings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 2821–2830. [Google Scholar]
  4. Toghi, B.; Grover, D.; Razzaghpour, M.; Jain, R.; Valiente, R.; Zaman, M.; Shah, G.; Fallah, Y.P. A Maneuver-based Urban Driving Dataset and Model for Cooperative Vehicle Applications. In Proceedings of the 2020 IEEE 3rd Connected and Automated Vehicles Symposium (CAVS), Victoria, BC, Canada, 18 November–16 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  5. Mahjoub, H.N.; Raftari, A.; Valiente, R.; Fallah, Y.P.; Mahmud, S.K. Representing Realistic Human Driver Behaviors using a Finite Size Gaussian Process Kernel Bank. In Proceedings of the 2019 IEEE Vehicular Networking Conference (VNC), Los Angeles, CA, USA, 4–6 December 2019; pp. 1–8. [Google Scholar]
  6. Ma, Y.; Zhu, X.; Zhang, S.; Yang, R.; Wang, W.; Manocha, D. TrafficPredict: Trajectory Prediction for Heterogeneous Traffic-Agents. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 6120–6127. [Google Scholar] [CrossRef] [Green Version]
  7. Laconte, J.; Kasmi, A.; Aufrère, R.; Vaidis, M.; Chapuis, R. A Survey of Localization Methods for Autonomous Vehicles in Highway Scenarios. Sensors 2022, 22, 247. [Google Scholar] [CrossRef] [PubMed]
  8. Wang, L.; Zhang, Y.; Wang, J. Map-based localization method for autonomous vehicles using 3D-LIDAR. IFAC-PapersOnLine 2017, 50, 276–281. [Google Scholar] [CrossRef]
  9. Javanmardi, E.; Javanmardi, M.; Gu, Y.; Kamijo, S. Factors to Evaluate Capability of Map for Vehicle Localization. IEEE Access 2018, 6, 49850–49867. [Google Scholar] [CrossRef]
  10. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A Multimodal Dataset for Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  11. Liu, L.; Wu, T.; Fang, Y.; Hu, T.; Song, J. A smart map representation for autonomous vehicle navigation. In Proceedings of the 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China, 15–17 August 2015; pp. 2308–2313. [Google Scholar] [CrossRef]
  12. Razzaghpour, M.; Mosharafian, S.; Raftari, A.; Velni, J.M.; Fallah, Y.P. Impact of Information Flow Topology on Safety of Tightly-coupled Connected and Automated Vehicle Platoons Utilizing Stochastic Control. In Proceedings of the 2022 European Control Conference (ECC), London, UK, 11–14 July 2022; pp. 27–33. [Google Scholar] [CrossRef]
  13. Valiente, R.; Raftari, A.; Zaman, M.; Fallah, Y.P.; Mahmud, S. Dynamic Object Map Based Architecture for Robust Cvs Systems. Technical Report, SAE Technical Paper. 2020. Available online: https://www.sae.org/publications/technical-papers/content/2020-01-0084/ (accessed on 10 August 2022). [CrossRef]
  14. Leurent, E. A Survey of State-Action Representations for Autonomous Driving. Working Paper or Preprint. 2018. Available online: https://hal.archives-ouvertes.fr/hal-01908175/document (accessed on 10 August 2022).
  15. Jami, A.; Razzaghpour, M.; Alnuweiri, H.; Fallah, Y.P. Augmented Driver Behavior Models for High-Fidelity Simulation Study of Crash Detection Algorithms. arXiv 2022, arXiv:2208.05540. [Google Scholar] [CrossRef]
  16. Pannen, D.; Liebner, M.; Hempel, W.; Burgard, W. How to Keep HD Maps for Automated Driving Up To Date. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2288–2294. [Google Scholar] [CrossRef]
  17. Petrovskaya, A.; Thrun, S. Model based vehicle detection and tracking for autonomous urban driving. Auton. Robot. 2009, 26, 123–139. [Google Scholar] [CrossRef]
  18. Ilci, V.; Toth, C. High Definition 3D Map Creation Using GNSS/IMU/LiDAR Sensor Integration to Support Autonomous Vehicle Navigation. Sensors 2020, 20, 899. [Google Scholar] [CrossRef] [Green Version]
  19. Zheng, L.; Li, B.; Zhang, H.; Shan, Y.; Zhou, J. A High-Definition Road-Network Model for Self-Driving Vehicles. ISPRS Int. J. Geo-Inf. 2018, 7, 417. [Google Scholar] [CrossRef] [Green Version]
  20. Bauer, S.; Alkhorshid, Y.; Wanielik, G. Using High-Definition maps for precise urban vehicle localization. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 492–497. [Google Scholar] [CrossRef]
  21. Casas, S.; Sadat, A.; Urtasun, R. MP3: A Unified Model To Map, Perceive, Predict and Plan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14403–14412. [Google Scholar]
  22. Phillips, J.; Martinez, J.; Barsan, I.A.; Casas, S.; Sadat, A.; Urtasun, R. Deep Multi-Task Learning for Joint Localization, Perception, and Prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 4679–4689. [Google Scholar]
  23. Sadat, A.; Casas, S.; Ren, M.; Wu, X.; Dhawan, P.; Urtasun, R. Perceive, Predict, and Plan: Safe Motion Planning Through Interpretable Semantic Representations. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 414–430. [Google Scholar]
  24. Luo, W.; Yang, B.; Urtasun, R. Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting With a Single Convolutional Net. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  25. Liang, M.; Yang, B.; Zeng, W.; Chen, Y.; Hu, R.; Casas, S.; Urtasun, R. PnPNet: End-to-End Perception and Prediction With Tracking in the Loop. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  26. Zhang, J.; Ohn-Bar, E. Learning by watching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 12711–12721. [Google Scholar]
  27. Valiente, R.; Zaman, M.; Ozer, S.; Fallah, Y.P. Controlling steering angle for cooperative self-driving vehicles utilizing cnn and lstm-based deep networks. In Proceedings of the 2019 IEEE Intelligent Vehicles symposium (IV), Paris, France, 9–12 June 2019; pp. 2423–2428. [Google Scholar]
  28. Natan, O.; Miura, J. Fully End-to-end Autonomous Driving with Semantic Depth Cloud Mapping and Multi-Agent. arXiv 2022, arXiv:2204.05513. [Google Scholar]
  29. Valiente, R.; Toghi, B.; Pedarsani, R.; Fallah, Y.P. Robustness and Adaptability of Reinforcement Learning-Based Cooperative Autonomous Driving in Mixed-Autonomy Traffic. IEEE Open J. Intell. Transp. Syst. 2022, 3, 397–410. [Google Scholar] [CrossRef]
  30. Pomerleau, D.A. Alvinn: An Autonomous Land Vehicle in a Neural Network. In Advances in Neural Information Processing Systems; Morgan Kaufmann: Denver, CO, USA, 1989. [Google Scholar]
  31. Eraqi, H.M.; Moustafa, M.N.; Honer, J. End-to-End Deep Learning for Steering Autonomous Vehicles Considering Temporal Dependencies. arXiv 2017, arXiv:1710.03804. [Google Scholar]
  32. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  33. Du, S.; Guo, H.; Simpson, A. Self-Driving Car Steering Angle Prediction Based on Image Recognition. arXiv 2019, arXiv:1912.05440. [Google Scholar]
  34. Dirdal, J. End-to-End Learning and Sensor Fusion with Deep Convolutional Networks for Steering an Off-Road Unmanned Ground Vehicle. Ph.D. Thesis, NTNU, Trondheim, Norway, 2018. [Google Scholar]
  35. Yu, H.; Yang, S.; Gu, W.; Zhang, S. Baidu driving dataset and end-To-end reactive control model. In Proceedings of the IEEE Intelligent Vehicles Symposium, Los Angeles, CA, USA, 11–14 June 2017. [Google Scholar] [CrossRef]
  36. Cui, A.; Casas, S.; Sadat, A.; Liao, R.; Urtasun, R. LookOut: Diverse Multi-Future Prediction and Planning for Self-Driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 16107–16116. [Google Scholar]
  37. Casas, S.; Luo, W.; Urtasun, R. Intentnet: Learning to predict intention from raw sensor data. In Proceedings of the Conference on Robot Learning, PMLR, Zürich, Switzerland, 29–31 October 2018; pp. 947–956. [Google Scholar]
  38. Toghi, B.; Valiente, R.; Pedarsani, R.; Fallah, Y.P. Towards Learning Generalizable Driving Policies from Restricted Latent Representations. arXiv 2021, arXiv:2111.03688. [Google Scholar]
  39. Yang, B.; Liang, M.; Urtasun, R. HDNET: Exploiting HD Maps for 3D Object Detection. In Proceedings of the Conference on Robot Learning, Zürich, Switzerland, 29–31 October 2018; Volume 87, pp. 146–155. [Google Scholar]
  40. Bansal, M.; Krizhevsky, A.; Ogale, A. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. arXiv 2018, arXiv:1812.03079. [Google Scholar]
  41. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [Green Version]
  42. Xu, Z.; Deng, D.; Shimada, K. Autonomous UAV exploration of dynamic environments via incremental sampling and probabilistic roadmap. IEEE Robot. Autom. Lett. 2021, 6, 2729–2736. [Google Scholar] [CrossRef]
  43. Biswas, R.; Limketkai, B.; Sanner, S.; Thrun, S. Towards object mapping in non-stationary environments with mobile robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; Volume 1, pp. 1014–1019. [Google Scholar]
  44. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [Google Scholar] [CrossRef] [Green Version]
  45. Bailey, T.; Durrant-Whyte, H. Simultaneous localization and mapping (SLAM): Part II. IEEE Robot. Autom. Mag. 2006, 13, 108–117. [Google Scholar] [CrossRef] [Green Version]
  46. Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous Localization and Mapping: A Survey of Current Trends in Autonomous Driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220. [Google Scholar] [CrossRef] [Green Version]
  47. Lamon, P.; Stachniss, C.; Triebel, R.; Pfaff, P.; Plagemann, C.; Grisetti, G.; Kolski, S.; Burgard, W.; Siegwart, R. Mapping with an autonomous car. In Proceedings of the Workshop on Safe Navigation in Open and Dynamic Environments (IROS), Zürich, Switzerland, 2006. [Google Scholar]
  48. Ort, T.; Paull, L.; Rus, D. Autonomous Vehicle Navigation in Rural Environments Without Detailed Prior Maps. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2040–2047. [Google Scholar] [CrossRef]
  49. Aldibaja, M.; Suganuma, N. Graph SLAM-Based 2.5 D LIDAR Mapping Module for Autonomous Vehicles. Remote Sens. 2021, 13, 5066. [Google Scholar] [CrossRef]
  50. Lluvia, I.; Lazkano, E.; Ansuategi, A. Active mapping and robot exploration: A survey. Sensors 2021, 21, 2445. [Google Scholar] [CrossRef] [PubMed]
  51. Cartographer. Available online: https://github.com/cartographer-project/cartographer (accessed on 10 August 2022).
  52. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-Time Loop Closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar]
  53. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef] [Green Version]
  54. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef] [Green Version]
  55. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  56. Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 15–22. [Google Scholar] [CrossRef]
  57. Sucar, E.; Liu, S.; Ortiz, J.; Davison, A.J. iMAP: Implicit Mapping and Positioning in Real-Time. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 6229–6238. [Google Scholar]
  58. Yamauchi, B. A frontier-based approach for autonomous exploration. In Proceedings of the 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA’97.’ Towards New Computational Principles for Robotics and Automation’, Monterey, CA, USA, 10–11 July 1997; pp. 146–151. [Google Scholar]
  59. Stachniss, C.; Hahnel, D.; Burgard, W. Exploration with active loop-closing for FastSLAM. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)(IEEE Cat. No. 04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 2, pp. 1505–1510. [Google Scholar] [CrossRef] [Green Version]
  60. Stachniss, C.; Grisetti, G.; Burgard, W. Information Gain-based Exploration Using Rao-Blackwellized Particle Filters. In Robotics: Science and Systems; MIT Press: Cambridge, MA, USA, 2005; Volume 2, pp. 65–72. [Google Scholar]
  61. Maurović, I.; ðakulović, M.; Petrović, I. Autonomous exploration of large unknown indoor environments for dense 3D model building. IFAC Proc. Vol. 2014, 47, 10188–10193. [Google Scholar] [CrossRef] [Green Version]
  62. Carlone, L.; Du, J.; Kaouk Ng, M.; Bona, B.; Indri, M. Active SLAM and exploration with particle filters using Kullback–Leibler divergence. J. Intell. Robot. Syst. 2014, 75, 291–311. [Google Scholar] [CrossRef]
  63. Trivun, D.; Šalaka, E.; Osmanković, D.; Velagić, J.; Osmić, N. Active SLAM-based algorithm for autonomous exploration with mobile robot. In Proceedings of the 2015 IEEE International Conference on Industrial Technology (ICIT), Seville, Spain, 17–19 March 2015; pp. 74–79. [Google Scholar]
  64. Umari, H.; Mukhopadhyay, S. Autonomous robotic exploration based on multiple rapidly-exploring randomized trees. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1396–1402. [Google Scholar] [CrossRef]
  65. Valencia, R.; Andrade-Cetto, J. Active pose SLAM. In Mapping, Planning and Exploration with Pose SLAM; Springer: Cham, Switzerland, 2018; pp. 89–108. [Google Scholar]
  66. Sodhi, P.; Ho, B.J.; Kaess, M. Online and consistent occupancy grid mapping for planning in unknown environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 7879–7886. [Google Scholar]
  67. Davison, A.J. Real-time simultaneous localisation and mapping with a single camera. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; Volume 3, p. 1403. [Google Scholar]
  68. Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 834–849. [Google Scholar]
  69. Bourgault, F.; Makarenko, A.A.; Williams, S.B.; Grocholsky, B.; Durrant-Whyte, H.F. Information based adaptive robotic exploration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; Volume 1, pp. 540–545. [Google Scholar] [CrossRef]
  70. Bircher, A.; Kamel, M.; Alexis, K.; Oleynikova, H.; Siegwart, R. Receding horizon “next-best-view” planner for 3d exploration. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1462–1468. [Google Scholar]
  71. Zhu, C.; Ding, R.; Lin, M.; Wu, Y. A 3d frontier-based exploration tool for mavs. In Proceedings of the 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), Vietri sul Mare, Italy, 9–11 November 2015; pp. 348–352. [Google Scholar] [CrossRef]
  72. Bircher, A.; Kamel, M.; Alexis, K.; Oleynikova, H.; Siegwart, R. Receding horizon path planning for 3D exploration and surface inspection. Auton. Robot. 2018, 42, 291–306. [Google Scholar] [CrossRef]
  73. Selin, M.; Tiger, M.; Duberg, D.; Heintz, F.; Jensfelt, P. Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments. IEEE Robot. Autom. Lett. 2019, 4, 1699–1706. [Google Scholar] [CrossRef] [Green Version]
  74. Senarathne, P.; Wang, D. Towards autonomous 3D exploration using surface frontiers. In Proceedings of the 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Lausanne, Switzerland, 23–27 October 2016; pp. 34–41. [Google Scholar]
  75. Papachristos, C.; Khattak, S.; Alexis, K. Uncertainty-aware receding horizon exploration and mapping using aerial robots. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 4568–4575. [Google Scholar] [CrossRef]
  76. Faria, M.; Maza, I.; Viguria, A. Applying frontier cells based exploration and Lazy Theta* path planning over single grid-based world representation for autonomous inspection of large 3D structures with an UAS. J. Intell. Robot. Syst. 2019, 93, 113–133. [Google Scholar] [CrossRef]
  77. Dai, A.; Papatheodorou, S.; Funk, N.; Tzoumanikas, D.; Leutenegger, S. Fast frontier-based information-driven autonomous exploration with an mav. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9570–9576. [Google Scholar] [CrossRef]
  78. The List of Vision-Based SLAM. Available online: https://github.com/tzutalin/awesome-visual-slam (accessed on 10 August 2022).
  79. Taketomi, T.; Uchiyama, H.; Ikeda, S. Visual SLAM algorithms: A survey from 2010 to 2016. IPSJ Trans. Comput. Vis. Appl. 2017, 9, 16. [Google Scholar] [CrossRef] [Green Version]
  80. Casas, S.; Gulino, C.; Liao, R.; Urtasun, R. Spagnn: Spatially-aware graph neural networks for relational behavior forecasting from sensor data. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9491–9497. [Google Scholar]
  81. Shi, W.; Rajkumar, R. Point-gnn: Graph neural network for 3d object detection in a point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 1711–1719. [Google Scholar]
  82. Ivanovic, B.; Pavone, M. The trajectron: Probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 2375–2384. [Google Scholar]
  83. Fraundorfer, F.; Engels, C.; Nistér, D. Topological mapping, localization and navigation using image collections. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 3872–3877. [Google Scholar]
  84. Grisetti, G.; Kümmerle, R.; Stachniss, C.; Burgard, W. A tutorial on graph-based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43. [Google Scholar] [CrossRef]
  85. Liang, M.; Yang, B.; Hu, R.; Chen, Y.; Liao, R.; Feng, S.; Urtasun, R. Learning Lane Graph Representations for Motion Forecasting. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 541–556. [Google Scholar]
  86. Gao, J.; Sun, C.; Zhao, H.; Shen, Y.; Anguelov, D.; Li, C.; Schmid, C. VectorNet: Encoding HD Maps and Agent Dynamics From Vectorized Representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  87. Bender, P.; Ziegler, J.; Stiller, C. Lanelets: Efficient map representation for autonomous driving. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 420–425. [Google Scholar] [CrossRef]
  88. Poggenhans, F.; Pauls, J.H.; Janosovits, J.; Orf, S.; Naumann, M.; Kuhnt, F.; Mayr, M. Lanelet2: A high-definition map framework for the future of automated driving. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1672–1679. [Google Scholar] [CrossRef]
  89. Yang, L.; Cui, M. Lane Network Construction Using High Definition Maps for Autonomous Vehicles. U.S. Patent 10,545,029, 28 January 2020. [Google Scholar]
  90. Chen, Y.; Huang, S.; Fitch, R.; Zhao, L.; Yu, H.; Yang, D. On-line 3D active pose-graph SLAM based on key poses using graph topology and sub-maps. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 169–175. [Google Scholar]
  91. Feder, H.J.S.; Leonard, J.J.; Smith, C.M. Adaptive mobile robot navigation and mapping. Int. J. Robot. Res. 1999, 18, 650–668. [Google Scholar] [CrossRef]
  92. Lazanas, A.; Latombe, J.C. Landmark-based robot navigation. Algorithmica 1995, 13, 472–501. [Google Scholar] [CrossRef]
  93. Dailey, M.N.; Parnichkun, M. Landmark-based simultaneous localization and mapping with stereo vision. In Proceedings of the Asian Conference on Industrial Automation and Robotics, Bangkok, Thailand, 11–13 May 2005; Volume 2. [Google Scholar]
  94. Schuster, F.; Keller, C.G.; Rapp, M.; Haueis, M.; Curio, C. Landmark based radar SLAM using graph optimization. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 2559–2564. [Google Scholar]
  95. Engel, N.; Hoermann, S.; Horn, M.; Belagiannis, V.; Dietmayer, K. Deeplocalization: Landmark-based self-localization with deep neural networks. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 926–933. [Google Scholar]
  96. Zhou, B.; Li, Q.; Mao, Q.; Tu, W.; Zhang, X.; Chen, L. ALIMC: Activity landmark-based indoor mapping via crowdsourcing. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2774–2785. [Google Scholar] [CrossRef]
  97. Ghrist, R.; Lipsky, D.; Derenick, J.; Speranzon, A. Topological Landmark-Based Navigation and Mapping; Tech. Rep. University of Pennsylvania, Department of Mathematics: Pennsylvania, PA, USA, 2012; Volume 8. [Google Scholar]
  98. Yoo, H.; Oh, S. Localizability-based Topological Local Object Occupancy Map for Homing Navigation. In Proceedings of the 2021 18th International Conference on Ubiquitous Robots (UR), Gangneung, Korea, 12–14 July 2021; pp. 22–25. [Google Scholar]
  99. Kim, B.; Kang, C.M.; Kim, J.; Lee, S.H.; Chung, C.C.; Choi, J.W. Probabilistic vehicle trajectory prediction over occupancy grid map via recurrent neural network. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 399–404. [Google Scholar]
  100. Li, H.; Tsukada, M.; Nashashibi, F.; Parent, M. Multivehicle cooperative local mapping: A methodology based on occupancy grid map merging. IEEE Trans. Intell. Transp. Syst. 2014, 15, 2089–2100. [Google Scholar] [CrossRef] [Green Version]
  101. Meyer-Delius, D.; Beinhofer, M.; Burgard, W. Occupancy grid models for robot mapping in changing environments. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, Canada, 22–26 July 2012. [Google Scholar]
  102. Tsardoulias, E.G.; Iliakopoulou, A.; Kargakos, A.; Petrou, L. A review of global path planning methods for occupancy grid maps regardless of obstacle density. J. Intell. Robot. Syst. 2016, 84, 829–858. [Google Scholar] [CrossRef]
  103. Wirges, S.; Stiller, C.; Hartenbach, F. Evidential occupancy grid map augmentation using deep learning. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 668–673. [Google Scholar]
  104. Xue, H.; Fu, H.; Ren, R.; Wu, T.; Dai, B. Real-time 3D Grid Map Building for Autonomous Driving in Dynamic Environment. In Proceedings of the 2019 IEEE International Conference on Unmanned Systems (ICUS), Beijing, China, 17–19 October 2019; pp. 40–45. [Google Scholar] [CrossRef]
  105. Han, S.J.; Kim, J.; Choi, J. Effective height-grid map building using inverse perspective image. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea, 28 June–1 July 2015; pp. 549–554. [Google Scholar]
  106. Luo, K.; Casas, S.; Liao, R.; Yan, X.; Xiong, Y.; Zeng, W.; Urtasun, R. Safety-Oriented Pedestrian Motion and Scene Occupancy Forecasting. arXiv 2021, arXiv:2101.02385. [Google Scholar] [CrossRef]
  107. Meagher, D. Geometric modeling using octree encoding. Comput. Graph. Image Process. 1982, 19, 129–147. [Google Scholar] [CrossRef]
  108. Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Auton. Robot. 2013, 34, 189–206. [Google Scholar] [CrossRef] [Green Version]
  109. Papachristos, C.; Kamel, M.; Popović, M.; Khattak, S.; Bircher, A.; Oleynikova, H.; Dang, T.; Mascarich, F.; Alexis, K.; Siegwart, R. Autonomous exploration and inspection path planning for aerial robots using the robot operating system. In Robot Operating System (ROS); Springer: Cham, Switzerland, 2019; pp. 67–111. [Google Scholar]
  110. Suresh, S.; Sodhi, P.; Mangelson, J.G.; Wettergreen, D.; Kaess, M. Active SLAM using 3D Submap Saliency for Underwater Volumetric Exploration. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3132–3138. [Google Scholar] [CrossRef]
  111. Moravec, H.P. Sensor fusion in certainty grids for mobile robots. In Sensor Devices and Systems for Robotics; Springer: Berlin/Heidelberg, Germany, 1989; pp. 253–276. [Google Scholar]
  112. Lu, D.V.; Hershberger, D.; Smart, W.D. Layered costmaps for context-sensitive navigation. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 709–715. [Google Scholar]
  113. Toghi, B.; Valiente, R.; Sadigh, D.; Pedarsani, R.; Fallah, Y.P. Cooperative autonomous vehicles that sympathize with human drivers. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021. [Google Scholar]
  114. Newcombe, R.A.; Lovegrove, S.J.; Davison, A.J. DTAM: Dense tracking and mapping in real-time. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2320–2327. [Google Scholar]
  115. González-Banos, H.H.; Latombe, J.C. Navigation strategies for exploring indoor environments. Int. J. Robot. Res. 2002, 21, 829–848. [Google Scholar] [CrossRef]
  116. Jiang, R.; Yang, S.; Ge, S.S.; Wang, H.; Lee, T.H. Geometric map-assisted localization for mobile robots based on uniform-Gaussian distribution. IEEE Robot. Autom. Lett. 2017, 2, 789–795. [Google Scholar] [CrossRef]
  117. Maturana, D.; Chou, P.W.; Uenoyama, M.; Scherer, S. Real-time semantic mapping for autonomous off-road navigation. In Field and Service Robotics; Springer: Cham, Switzerland, 2018; pp. 335–350. [Google Scholar]
  118. Liu, R.; Wang, J.; Zhang, B. High definition map for automated driving: Overview and analysis. J. Navig. 2020, 73, 324–341. [Google Scholar] [CrossRef]
  119. Kim, C.; Cho, S.; Sunwoo, M.; Jo, K. Crowd-Sourced Mapping of New Feature Layer for High-Definition Map. Sensors 2018, 18, 4172. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  120. Armand, A.; Ibanez-Guzman, J.; Zinoune, C. Digital maps for driving. In Automated Driving; Springer: Berlin/Heidelberg, Germany, 2017; pp. 201–244. [Google Scholar]
  121. Bétaille, D.; Toledo-Moreo, R. Creating enhanced maps for lane-level vehicle navigation. IEEE Trans. Intell. Transp. Syst. 2010, 11, 786–798. [Google Scholar] [CrossRef]
  122. Peyret, F.; Laneurit, J.; Betaille, D. A novel system using enhanced digital maps and WAAS for a lane level positioning. In Proceedings of the 15th World Congress on Intelligent Transport Systems and ITS America’s 2008 Annual Meeting (ITS America, ERTICO, ITS Japan, TransCore), New York, NY, USA, 16–20 November 2008. [Google Scholar]
  123. ADAS Map. Available online: https://www.tomtom.com/products/adas-map (accessed on 10 August 2022).
  124. Zhang, R.; Chen, C.; Di, Z.; Wheeler, M.D. Visual Odometry and Pairwise Alignment for High Definition Map Creation. U.S. Patent 10,598,489, 24 March 2020. [Google Scholar]
  125. Shimada, H.; Yamaguchi, A.; Takada, H.; Sato, K. Implementation and evaluation of local dynamic map in safety driving systems. J. Transp. Technol. 2015, 5, 102. [Google Scholar] [CrossRef] [Green Version]
  126. Lee, J.; Lee, K.; Yoo, A.; Moon, C. Design and Implementation of Edge-Fog-Cloud System through HD Map Generation from LiDAR Data of Autonomous Vehicles. Electronics 2020, 9, 2084. [Google Scholar] [CrossRef]
  127. Kent, L. HERE Introduces HD Maps for Highly Automated Vehicle Testing; HERE: Amsterdam, The Netherlands, 2015; Available online: http://360.here.com/2015/07/20/here-introduces-hd-maps-for-highlyautomated-vehicle-testing/ (accessed on 16 April 2018).
  128. Bao, Z.; Hossain, S.; Lang, H.; Lin, X. High-Definition Map Generation Technologies For Autonomous Driving. arXiv 2022, arXiv:2206.05400. [Google Scholar] [CrossRef]
  129. LiDAR Boosts Brain Power for Self-Driving Cars. Available online: https://eijournal.com/resources/lidar-solutions-showcase/lidar-boosts-brain-power-for-self-driving-cars (accessed on 10 August 2022).
  130. García, M.; Urbieta, I.; Nieto, M.; González de Mendibil, J.; Otaegui, O. iLDM: An Interoperable Graph-Based Local Dynamic Map. Vehicles 2022, 4, 42–59. [Google Scholar] [CrossRef]
  131. HERE HD Live Map Technical Paper: A Self-Healing Map for Reliable Autonomous Driving. Available online: https://engage.here.com/hubfs/Downloads/Tech%20Briefs/HERE%20Technologies%20Self-healing%20Map%20Tech%20Brief.pdf?t=1537438054632 (accessed on 10 August 2022).
  132. Semantic Maps for Autonomous Vehicles by Kris Efland and Holger Rapp, Engineering Managers, Lyft Level 5. Available online: https://medium.com/wovenplanetlevel5/semantic-maps-for-autonomous-vehicles-470830ee28b6 (accessed on 10 August 2022).
  133. Rethinking Maps for Self-Driving, by Kumar Chellapilla, Director of Engineering, Lyft Level 5. Available online: https://medium.com/wovenplanetlevel5/https-medium-com-lyftlevel5-rethinking-maps-for-self-driving-a147c24758d6 (accessed on 10 August 2022).
  134. Qin, T.; Zheng, Y.; Chen, T.; Chen, Y.; Su, Q. A Light-Weight Semantic Map for Visual Localization towards Autonomous Driving. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 11248–11254. [Google Scholar] [CrossRef]
  135. Guo, C.; Lin, M.; Guo, H.; Liang, P.; Cheng, E. Coarse-to-fine Semantic Localization with HD Map for Autonomous Driving in Structural Scenes. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 1146–1153. [Google Scholar] [CrossRef]
  136. HD Map and Localization. Available online: https://developer.apollo.auto/developer.html (accessed on 10 August 2022).
  137. Wheeler, M.D. High Definition Map and Route Storage Management System for Autonomous Vehicles. U.S. Patent 10,353,931, 16 July 2019. [Google Scholar]
  138. TomTom. Available online: https://www.tomtom.com/products/hd-map (accessed on 10 August 2022).
  139. Han, S.J.; Kang, J.; Jo, Y.; Lee, D.; Choi, J. Robust ego-motion estimation and map matching technique for autonomous vehicle localization with high definition digital map. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 17–19 October 2018; pp. 630–635. [Google Scholar]
  140. Sobreira, H.; Costa, C.M.; Sousa, I.; Rocha, L.; Lima, J.; Farias, P.; Costa, P.; Moreira, A.P. Map-matching algorithms for robot self-localization: A comparison between perfect match, iterative closest point and normal distributions transform. J. Intell. Robot. Syst. 2019, 93, 533–546. [Google Scholar] [CrossRef]
  141. Hausler, S.; Milford, M. Map Creation, Monitoring and Maintenance for Automated Driving—Literature Review. 2020. Available online: https://imoveaustralia.com/wp-content/uploads/2021/01/P1%E2%80%90021-Map-creation-monitoring-and-maintenance-for-automated-driving.pdf (accessed on 10 August 2022).
  142. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  143. Luo, Q.; Cao, Y.; Liu, J.; Benslimane, A. Localization and navigation in autonomous driving: Threats and countermeasures. IEEE Wirel. Commun. 2019, 26, 38–45. [Google Scholar] [CrossRef]
  144. Waymo. Available online: https://digital.hbs.edu/platform-digit/submission/way-mo-miles-way-mo-data/ (accessed on 10 August 2022).
  145. Seif, H.G.; Hu, X. Autonomous driving in the iCity—HD maps as a key challenge of the automotive industry. Engineering 2016, 2, 159–162. [Google Scholar] [CrossRef]
  146. Ahmad, F.; Qiu, H.; Eells, R.; Bai, F.; Govindan, R. CarMap: Fast 3D Feature Map Updates for Automobiles. In Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20), Santa Clara, CA, USA, 26–27 February 2020; USENIX Association: Santa Clara, CA, USA, 2020; pp. 1063–1081. [Google Scholar]
Figure 1. Map of the ECE department at the University of Central Florida, built using the Cartographer package [51] and a Hokuyo 2D LiDAR. The details of the SLAM method are discussed in [52].
Figure 2. Top: Real-world objects in the map. Bottom: Grid map representation with grid cells occupied by the real-world objects.
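To make the grid representation in Figure 2 concrete, the following minimal Python sketch marks the cells of a 2D occupancy grid that contain a handful of object points. The grid size, the 0.25 m resolution, and the example coordinates are illustrative assumptions only, not values taken from the figure.

```python
# Minimal occupancy-grid sketch (illustrative; resolution and points are assumed).
import numpy as np

RESOLUTION = 0.25          # metres per cell (assumed)
GRID_SIZE = (40, 40)       # a 10 m x 10 m patch of the world

grid = np.full(GRID_SIZE, 0.5)   # 0.5 = unknown prior occupancy

def world_to_cell(x, y, resolution=RESOLUTION):
    """Convert world coordinates (metres) to integer grid indices."""
    return int(y // resolution), int(x // resolution)

def mark_occupied(grid, x, y, p_hit=0.9):
    """Raise the occupancy belief of the cell containing the point (x, y)."""
    row, col = world_to_cell(x, y)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        grid[row, col] = max(grid[row, col], p_hit)

# Example: a few points on the footprint of a real-world object.
for x, y in [(2.0, 3.0), (2.25, 3.0), (2.5, 3.0), (2.5, 3.25)]:
    mark_occupied(grid, x, y)

print(f"{len(np.argwhere(grid > 0.5))} cells marked occupied")
```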
Figure 3. The complexity of data collected by a Velodyne LiDAR is demonstrated by a point cloud image of a vehicle approaching an intersection [129].
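A common first step when handling dense LiDAR data such as the point cloud in Figure 3 is to thin it with a voxel-grid filter before mapping. The sketch below is a hedged NumPy illustration of that idea; the 0.2 m voxel size and the synthetic input cloud are assumptions made purely for demonstration.

```python
# Hedged sketch of voxel-grid downsampling for a raw LiDAR point cloud.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float = 0.2) -> np.ndarray:
    """Keep one representative point (the centroid) per occupied voxel."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(voxel_ids, axis=0, return_inverse=True)
    inverse = inverse.ravel()                 # guard against NumPy version differences
    counts = np.bincount(inverse)
    centroids = np.zeros((counts.size, 3))
    for dim in range(3):                      # average the points that fall in each voxel
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

cloud = np.random.uniform(-50, 50, size=(100_000, 3))  # synthetic stand-in for a scan
print(voxel_downsample(cloud).shape)                    # far fewer points than the input
```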
Figure 4. The features and layers of an HD map [130].
Figure 5. HD map structure defined by HERE: the HD road layer (bottom) consists of the topology, direction of travel, intersections, slope, ramps, rules, boundaries, and tunnels. The HD lane layer (middle) consists of lane-level features such as boundaries, types, lines, and widths. The HD localization layer (top) consists of road furniture such as traffic lights and traffic signs [131].
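The layered structure in Figure 5 can be expressed as a simple container per layer. The Python dataclasses below are a hedged sketch of such a structure; all class and field names are illustrative assumptions and do not reproduce the HERE HD Live Map schema.

```python
# Illustrative three-layer HD map container (assumed names, local map frame in metres).
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]   # (x, y) in a local map frame (assumed)

@dataclass
class RoadSegment:            # HD road layer: topology and road attributes
    segment_id: str
    centerline: List[Point]
    direction_of_travel: str  # e.g., "forward" or "both"
    slope_percent: float = 0.0
    is_tunnel: bool = False

@dataclass
class Lane:                   # HD lane layer: lane-level geometry
    lane_id: str
    segment_id: str           # parent road segment
    left_boundary: List[Point]
    right_boundary: List[Point]
    width_m: float = 3.5
    lane_type: str = "driving"

@dataclass
class RoadFurniture:          # HD localization layer: signs, lights, poles
    object_id: str
    object_type: str          # e.g., "traffic_light", "speed_limit_sign"
    position: Point
    height_m: float = 0.0

@dataclass
class HDMapTile:
    tile_id: str
    roads: List[RoadSegment] = field(default_factory=list)
    lanes: List[Lane] = field(default_factory=list)
    furniture: List[RoadFurniture] = field(default_factory=list)

# Minimal usage example with made-up values.
tile = HDMapTile(tile_id="tile_001")
tile.furniture.append(RoadFurniture("tl_01", "traffic_light", (12.5, 40.0), 5.0))
print(len(tile.furniture), "localization features in", tile.tile_id)
```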
Table 1. Simplified map representations in robotic applications.

Different Categories of Robotic Maps

Category | Pros | Cons | Details | Related Papers
Topological | Easier map extension | Lack of sense of proximity; lack of explicit information | Graph-based; deals with places and their interactions | [80,81,82,83,84,85,86,87,88,89,90]
Metric | Precise coordinates of objects | Computationally expensive in vast areas | Contains all the information required by the mapping or navigation algorithm | [58,59,60,64,69,71,72,73,75,76,77,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,108,109,110,111,112,113,114,116,117]
Geometric | Efficient data storage with a low amount of information loss | Hard trajectory calculation and data management | Data are represented with discrete geometric shapes | [115,116,117]
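As a complement to the "Topological" row of Table 1, the short sketch below represents places as nodes in an adjacency list and plans a route with breadth-first search. The place names and their connectivity are illustrative assumptions only.

```python
# Hedged sketch of a graph-based (topological) map: places and their connections.
from collections import deque

topological_map = {
    "lobby":      ["corridor_a"],
    "corridor_a": ["lobby", "lab_1", "corridor_b"],
    "corridor_b": ["corridor_a", "lab_2"],
    "lab_1":      ["corridor_a"],
    "lab_2":      ["corridor_b"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search over places; returns the sequence of place names."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(shortest_route(topological_map, "lobby", "lab_2"))
# -> ['lobby', 'corridor_a', 'corridor_b', 'lab_2']
```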
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
