Sensors
  • Article
  • Open Access

12 September 2019

Indoor and Outdoor Backpack Mapping with Calibrated Pair of Velodyne LiDARs

1 Department of Computer Graphics and Multimedia, Faculty of Information Technology, Brno University of Technology, Bozetechova 1/2, 612 66 Brno, Czech Republic
2 Geodrom Company, Bohunicka 81, 619 00 Brno, Czech Republic
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Mobile Laser Scanning Systems

Abstract

This paper presents a human-carried mapping backpack based on a pair of Velodyne LiDAR scanners. Our system is a universal solution for both large scale outdoor and smaller indoor environments. It benefits from a combination of two LiDAR scanners, which makes the odometry estimation more precise. The scanners are mounted under different angles, thus a larger space around the backpack is scanned. By fusion with a GNSS/INS subsystem, the mapping of featureless environments and the georeferencing of the resulting point cloud are possible. By deploying state-of-the-art (SoA) methods for registration and the loop closure optimization, it provides sufficient precision for many applications in BIM (Building Information Modeling), inventory check, construction planning, etc. In our indoor experiments, we evaluated our proposed backpack against the ZEB-1 solution, using a FARO terrestrial scanner as the reference, yielding similar results in terms of precision, while our system provides higher data density, laser intensity readings, and scalability for large environments.

1. Introduction

In recent years, the LiDAR (Light Detection And Ranging) technology has become very popular in geodesy and related fields, where the availability of 3D models of outdoor or indoor environments can be beneficial: e.g., forestry, architecture, preserving cultural heritage, construction monitoring, etc. Examples of reconstructions from similar practical applications can be found in Figure 1. Using 3D mapping can also reduce time and cost: the same model can be shared among professionals in different fields of expertise without each of them needing to inspect and measure the given place in person.
Figure 1. The motivation and the results of our work. The reconstruction of indoor environments (a) is beneficial for inspection, inventory checking, and automatic floor plan generation. 3D maps of forest environments (b) are useful for quick and precise estimation of the biomass (timber) amount. Another example of 3D LiDAR mapping deployment is preserving cultural heritage or providing models of historical buildings, e.g., the roof in (c).
This demand causes a huge interest in developing solutions that would be able to capture the reality and provide reliable 3D reconstructions out of the box. However, there are also other requirements for such a system.
The data acquisition process has to be quick and the planning of fieldwork should be minimized. This requirement discriminates solutions based on static terrestrial lasers (e.g., from the Leica, Riegl, or FARO companies), which require detailed planning of the data acquisition and manual system set up on a tripod at multiple convenient viewpoints across the scene.
The solution has to be mobile and easy to handle. This naturally leads to the preference of human carried (backpack or handheld) solutions instead of terrestrial or vehicle based solutions, such as NavVis [1], which, for example, does not support traversing tilted surfaces such as ramps.
However, the necessity for reliability in terms of resulting model precision is in contradiction with these two requirements. Stationary terrestrial LiDAR solutions require a time-demanding scanning process while providing great accuracy (on the order of millimeters) because of fewer degrees of freedom. Although many of the applications listed above do not need such precision, our goal is a difference between reality and the resulting 3D model below 5 cm. This value was requested by the experts in the field of geodesy with whom we consulted.
In the practical applications, completeness of the final map should also be guaranteed because it might be difficult to repeat the scanning. The operator has to be aware of the fact that all necessary data of the whole environment were acquired. We fulfilled this requirement by providing a live preview of the collected data.
The resulting model has to be dense enough, so that all important objects such as furniture and other inventory can be recognized and distinguished. This is the typical issue of existing solutions such as ZEB-1, where no LiDAR intensity readings are available. Therefore, our solution relies on Velodyne LiDARs, which provide a huge amount of data and the resulting models are dense (see examples in Figure 2). It also provides the laser intensity readings, which do not depend on the lighting conditions, contrary to camera-aided solutions. Moreover, we propose laser intensity normalization, which increases the recognizability of the objects since the laser intensity readings cannot be considered as the “color” of the object as it depends on the range of measurement, the angle of incidence, and the emitted energy.
Figure 2. The example of resulting models of indoor mapping. The office environment (a) and the staircase (b) were captured by a human carrying our 4RECON backpack. The data acquisition process took 3 and 2 min, respectively.
Some of the existing solutions are not comfortable enough to use. According to practical experience of the operators, handheld solutions such as ZEB are physically difficult to operate for a longer period of time since the mapping head weighs approximately 0.4–1 kg, and it has to be carried or swept by hand.
The final requirement is an affordable price. We use Velodyne VLP-16 scanners, which are relatively cheap in comparison to the other LiDAR solutions, and a universal IMU (Inertial Measurement Unit) solution, which can be upgraded by a dual antenna and therefore reused in the outdoor environment where GNSS (Global Navigation Satellite System) is available.
The contributions of this paper can be summarized as the proposal of a LiDAR mapping solution with the following characteristics:
  • It is capable of mapping both small indoor and large open outdoor environments, with georeferencing and sufficient precision on the order of centimeters. These abilities are evaluated using multiple datasets.
  • It benefits from a synchronized and calibrated dual LiDAR scanner, which significantly increases the field of view. Both scanners are used for both odometry estimation and 3D model reconstruction, which enables scanning of small environments, narrow corridors, staircases, etc.
  • It provides the ability to recognize objects in the map due to sufficient point density and our novel intensity normalization for the measurements from an arbitrary range.
We also performed a precise evaluation and comparison of our previously proposed point cloud registration method CLS (Collar Line Segments) with the state-of-the-art approach LOAM (LiDAR Odometry and Mapping); this comparison has not been published before. Moreover, we upgraded our CLS method with automatic overlap estimation for better registration flexibility.

3. Design of the Laser Mapping Backpack

This section consists of two main parts: First, the hardware design concepts are introduced. Then, the software solutions dealing with calibration, precise odometry estimation, alignment and intensity normalization are presented.
The design of our solution follows the requirements elaborated in Section 1. They have been carefully formulated and discussed with experts in the field of geodesy and geospatial data processing. Besides the essential goal of reliable 3D reconstruction performed automatically, which is demonstrated in the following section, the proposed solution does the following:
  • It fulfils the requirements for precision of the model up to 5 cm. Thanks to the robust loop closure, ambiguities (e.g., “double wall” effects) are avoided.
  • The system is comfortable to use and it is as mobile as possible. The backpack weighs 9 kg (plus 1.4 kg for the optional dual antenna extension), and it is easy to carry around various environments including stairs, narrow corridors, rugged terrain, etc.
  • The pair of synchronized and calibrated Velodyne LiDARs increases the field of view (FOV) and enables mapping of small rooms, narrow corridors, staircases, etc. (see Figure 6) without the need for special guidelines for the scanning process.
    Figure 6. Various configurations of LiDAR scanners in the worst-case scenarios we have encountered in our experiments: narrow corridor (a,c) and staircase (b). The field of view (30° for Velodyne Puck) is displayed in color. When only a single LiDAR (a) was used, the scans did not contain 3D information of the floor or the ceiling (red cross). The situation did not improve when the scanner was tilted, since this configuration fails, e.g., in staircases (b). When we added a second LiDAR, our tilted asymmetrical configuration (d) provided better top–bottom and left–right observation than the symmetrical one (c). Moreover, when the LiDARs are aligned with the direction of movement (e), there is no overlap between the current (violet) and future (yellow) frame, leading to lower accuracy. In our solution (f), the LiDARs are aligned perpendicularly to the walking direction, solving all the mentioned issues.
  • The data acquisition process is fast, with verification of data completeness. There are no special guidelines for the scanning process (compared to the requirements of ZEB) and the operator is only required to visit all places to be captured at a normal pace. Moreover, the captured data are visualized online on a mobile device (smartphone or tablet) so the operator can see whether everything is captured correctly.
  • Since we are using long range Velodyne LiDARs (compared to simple 2D rangefinders such as Hokuyo or Sick) and optional GNSS support, we provide a universal, economically convenient solution for both indoor and outdoor use. In scenarios where GNSS is available, the final reconstruction is georeferenced: a 3D position in the global geographical frame is assigned to every 3D point in the model.
  • The final 3D model is dense and colored by the laser intensity, which is further normalized. This helps distinguishing important objects, inventory, larger texts, signs, and some surface texture properties.

3.1. Hardware Description

The core of our backpack, in Figure 7, is the pair of Velodyne LiDAR [40] scanners VLP-16 (Pucks). Each of them contains 16 laser transmitter–receiver pairs whose beams are reflected into the environment by a mirror rotating at 10 Hz. This frequency can be decreased, or increased up to 20 Hz; however, frequencies higher than 10 Hz cause serious undesirable vibration of the sensor, which makes precise odometry estimation impossible. The rotation gives the sensor a 360° horizontal FOV with 0.2° horizontal resolution. Vertically, the laser beams are evenly distributed with 2° resolution covering a 30° vertical FOV. Each of the scanners weighs 830 g and is considered a hybrid solid-state LiDAR, since there are no outer moving parts. This type of scanner is able to reach a 100 m range with precision around 2 cm. As mentioned above, Velodyne scanners also provide intensity readings, which correspond to the surface reflectivity.
Figure 7. The initial (a) and improved (b,c) prototype of our backpack mapping solution for both indoor (b) and outdoor (c) use. The removable dual GNSS antenna provides precise heading information, aiding outdoor odometry estimation and also the georeferencing of the resulting 3D point cloud model. It should be noted that the position of the LiDAR scanners is different in the initial and the later solution. This is elaborated on in the next section.
As the aiding sensor, the GNSS/INS (Inertial Navigation System) Advanced Navigation SpatialDual (https://www.advancednavigation.com/product/spatial-dual) is deployed. It integrates multiple sensors such as accelerometers, gyroscopes, magnetometer, pressure sensor, and most importantly—the dual-antenna GNSS subsystem providing reliable heading information. With RTK (Real Time Kinematics) or PPK (Post-Processed Kinematics) corrections, the system should provide 8 mm horizontal and 15 mm vertical positional accuracy, and 0.03° and 0.06° orientation precision in terms of roll/pitch and heading angle, respectively. Precise heading information is provided by a dual antenna solution and therefore it is only available outdoors. This limitation also holds for positional data. For indoor scenarios, only roll and pitch angles are reliable and they are relevant for horizontal alignment. The unit weighs 285 g and besides the 6 DoF (six Degrees of Freedom including 3D position and rotation) pose estimation it also provides 1PPS (Pulse Per Second) and NMEA messages for precise synchronization of both Velodyne LiDAR scanners. The details regarding wiring the components can be found in Figure 8.
Figure 8. Components of the system and the connections. Each Velodyne scanner is connected via a custom wiring “box” requiring power supply (red wires), 1PPS and NMEA synchronization (green) and Fast Ethernet (blue) connection with computer (PC NUC in our case).
The rest of the hardware is responsible for controlling the data acquisition and storing the data (Intel NUC Mini PC), and for powering all the components with a small Li-Ion battery (10,400 mAh capacity) lasting approximately 2 h.

3.2. Dual LiDAR System

During the experiments, we discovered that the limited (30°) vertical field of view is not an issue for large open spaces. However, when the environment shrinks (e.g., corridors narrower than 2 m), such a field of view causes serious problems, leading to poor accuracy or even total failures of the SLAM system. The worst cases and our solutions are displayed in Figure 6. We experimentally discovered that at least two synchronized Velodyne Puck scanners are needed to provide a robust solution that covers both the floor/ceiling and the walls, even in small or narrow rooms.
To achieve good accuracy and to cover the environment, the scanners are mounted perpendicular to the direction of the operator's movement: one in horizontal and the second in vertical orientation, as displayed in Figure 6f and Figure 7b,c. All other configurations (e.g., configuration (e)) tested in our initial prototype in Figure 7a were not able to capture both horizontal and vertical properties of the environment, or did not provide the large coverage necessary for precise pose estimation.

3.3. Calibration of the Sensors

To leverage the full potential of using two Velodyne LiDARs, these scanners have to be properly synchronized and calibrated. As mentioned above, the sensors are synchronized via NMEA messages (GPS communication protocol) and 1PPS (Pulse Per Second) signal provided by SpatialDual inertial navigation system. Sufficient intrinsic calibration parameters of LiDAR scanners themselves (corrections) are provided by Velodyne company and processed by the driver (in ROS Velodyne package).
Therefore, the task to solve is the estimation of extrinsic calibration parameters in terms of relative 6DoF poses of both laser scanners $C_{V1}$, $C_{V2}$ and the INS sensor $C_I$ in Figure 9. First, the transformation between the scanners is computed. To do so, two 3D maps of a large indoor space (a large lecture hall in our case) were built by the scanners separately using our previously published method [23]. These two 3D maps are ICP aligned. The resulting 3D geometrical transformation represents the mutual position of the sensors $C_{V1}^{-1} C_{V2}$ and also the alignment of the laser data they provide, as presented in Figure 10. Since we are interested only in relative transformations between the sensors, the origin can be arbitrarily defined, e.g., as the position of the first Velodyne with $C_{V1} = I$. A single frame point cloud consists of multiple (two in our case) synchronized LiDAR frames and therefore it will be denoted as a multiframe.
Figure 9. Extrinsic calibration required in our system. The mutual positions between the Velodyne scanners and the GNSS/INS unit are computed. The offsets $o_{A1}$, $o_{A2}$ of the antennas are tape measured.
Figure 10. Two Velodyne LiDAR frames aligned into the single multiframe. This data association requires time synchronization and precise extrinsic calibration of laser scanners.
To be able to use data provided by the INS system, an extrinsic calibration C I between the laser scanners and the INS sensor needs to be estimated. All sensors are fixed on the custom made aluminum mount and therefore the translation parameters can be found in the blueprints of the mount or can be measured with millimeter precision. However, mutual rotation has to be estimated more precisely, because just a fraction of degree misalignment would cause serious errors for long range laser measurements.
We estimate the rotation parameters from the transformation between the floor normal vector $n_i$ in the point cloud data and the gravity vector $g_i$ provided by the INS sensor, since these vectors should be aligned. Points of the floor are selected manually and the normal of the best fitting plane is computed. This can be performed in arbitrary software for visualization and processing of point clouds, CloudCompare (https://www.danielgm.net/cc/) in our case. We performed multiple measurements for different inclines of the backpack in an indoor corridor with a perfectly straight floor. The final rotation $R_{CI}$ between the Velodynes and the INS sensor was estimated by SVD (Singular Value Decomposition) [41] (Equation (2)) of the covariance matrix $A$ of these 3D vector pairs (Equation (1)) (floor normal and gravity). Multiplication with the matrix $E$ (Equation (4)) resolves the ambiguity between right- and left-handed rotations: we always compute the right-handed representation. Equations (1)–(5) are based on the work in [41].
$$A = \sum_i n_i \cdot g_i^T \qquad (1)$$
$$U \Sigma V^T = A \qquad (2)$$
$$e = \begin{cases} 1 & \text{if } |V U^T| \geq 0 \\ -1 & \text{otherwise} \end{cases} \qquad (3)$$
$$E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e \end{pmatrix} \qquad (4)$$
$$R_{CI} = V E U^T \qquad (5)$$
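As a concrete illustration, Equations (1)–(5) can be implemented in a few lines of NumPy. This is a minimal sketch; the function name and the synthetic input vectors are ours, not part of the described system:

```python
import numpy as np

def rotation_from_vector_pairs(normals, gravities):
    """Estimate the rotation aligning floor normals n_i (from the point
    cloud) with gravity vectors g_i (from the INS), following
    Equations (1)-(5). Inputs are (K, 3) arrays of unit vectors,
    one pair per backpack incline measurement."""
    # Covariance matrix A of the 3D vector pairs (Equation (1)).
    A = sum(np.outer(n, g) for n, g in zip(normals, gravities))
    # SVD of the covariance matrix (Equation (2)).
    U, _, Vt = np.linalg.svd(A)
    V = Vt.T
    # Resolve the right/left-handed ambiguity (Equations (3) and (4)).
    e = 1.0 if np.linalg.det(V @ U.T) >= 0 else -1.0
    E = np.diag([1.0, 1.0, e])
    # Final rotation R_CI (Equation (5)).
    return V @ E @ U.T
```

With noiseless, well-distributed vector pairs this recovers the exact rotation; with noisy measurements it returns the least-squares optimal one.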

3.4. Point Cloud Registration

The core element of the software part is the alignment of the point cloud data into a 3D map of the environment. There are multiple state-of-the-art approaches for point cloud registration and odometry estimation, including our previously published approach Collar Line Segments [23]. We compared our approach with the LOAM [19] algorithm, using its publicly available implementation. The results of this experiment are presented in Table 2, which shows the superior accuracy of our method; thus, CLS was a natural choice for our mapping backpack solution.
Table 2. Comparison of visual odometry error for the SoA method LOAM and our CLS method. The experiments were performed on the KITTI Odometry dataset [22]. For CLS, frame-to-frame (single) or frame to multiple (10) neighboring frames (multi-frame) registrations without any loop closures were performed. In the LOAM experiments, both the original online version (providing real-time performance) and the offline version (with the full procedure for each frame, omitting approximations) were used. In all data sequences, except the short sequence No. 4 where the car drives only forward without any turns, our multi-frame approach outperformed the LOAM solution.
The basic idea of the CLS method is to overcome the data sparsity of a 3D LiDAR scanner (e.g., Velodyne) by sampling the data with line segments. The points captured by individual laser beams form so-called "ring" structures displayed in Figure 11a. There is a large empty space between these rings, and, while moving, the same places of the scene are not scanned repeatedly, so valid matches are missing and closest-point approaches (e.g., ICP) are not applicable. By using CLS, the space between the rings is also covered and correct matching of structures in the LiDAR frames is enabled.
Figure 11. The sampling of a Velodyne point cloud by the Collar Line Segments (CLS) (a). The segments (purple) are randomly generated within the polar bin (blue polygon) of azimuthal resolution $\phi$. The registration process (b–e) transforms the line segments of the target point cloud (red lines) to fit the lines of the source cloud (blue). First, the lines are matched by the Euclidean distance of their midpoints (c); then, the segments are extended into infinite lines and the vectors between closest points are found (d); and, finally, these vectors are used to estimate the transformation that fits the matching lines into common planes (green in (e)).
The environment in the field of view is represented by the set of CLS line segments. They are randomly generated between the neighboring ring points within the azimuthal bin as described in Figure 11a. Since we are using two LiDAR scanners, collar line segments are generated for the scans of each sensor individually. Using the transformation established by extrinsic calibration described in Section 3.3, line segments are transformed and joined into the single set for each multiframe.
After the sampling is done, matching of the closest line segments is performed. The line segments are extended into the infinite lines, and the closest points between matching lines are used for direct estimation of translation. SVD [41] is used again for estimation of rotation parameters in the same manner, as described in Section 3.3. This description is only a brief introduction to the CLS method and more information can be found in our previous publication [23].
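The sampling step described above can be sketched as follows. This is only an illustrative approximation: the parameter names and the per-bin segment count are our assumptions, and the real implementation works on the polar-bin data layout described in [23]:

```python
import numpy as np

def sample_collar_line_segments(rings, bin_width_deg=1.0,
                                segments_per_bin=2, seed=0):
    """Illustrative sketch of CLS sampling: within each azimuthal bin,
    generate random line segments between points of vertically
    neighboring rings. `rings` is a list of (N_i, 3) point arrays, one
    per laser ring, ordered by elevation. Returns (M, 2, 3) endpoints."""
    rng = np.random.default_rng(seed)

    def azimuth_bins(pts):
        az = np.degrees(np.arctan2(pts[:, 1], pts[:, 0])) % 360.0
        return (az // bin_width_deg).astype(int)

    segments = []
    for lower, upper in zip(rings[:-1], rings[1:]):
        lb, ub = azimuth_bins(lower), azimuth_bins(upper)
        for b in np.intersect1d(lb, ub):   # bins seen by both rings
            lo, up = lower[lb == b], upper[ub == b]
            for _ in range(segments_per_bin):
                # Random endpoint from each neighboring ring in this bin.
                segments.append((lo[rng.integers(len(lo))],
                                 up[rng.integers(len(up))]))
    return np.asarray(segments)
```

The random pairing inside each bin is what fills the empty space between rings that defeats plain closest-point matching.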

3.5. Overlap Estimation

This work provides a novel solution for automatic estimation of the core parameter of the CLS approach. Before the transformation is estimated, invalid matches must be discarded. In our previous work, this was done by a simple distance thresholding, or by keeping a certain portion of matches (e.g., 50%). However, using a constant threshold or portion value is not flexible enough. It can cause significant registration misalignments, when invalid matches are used, or insufficient convergence when the valid matches are ignored.
Assuming that an initial coarse alignment is known, we are able to estimate the overlap between these frames and use this value as the portion of matches to keep (e.g., for 30% overlap, 30% of best matches are kept). This solution adapts to the specific situation of each pair of LiDAR frames to be registered and leads to a significantly better precision.
The overlap value (Figure 12a) is efficiently estimated by the spherical z-buffer structure [42] in Figure 12b. First, the target cloud is transformed into the source cloud coordinate frame and the $[x, y, z]$ coordinates of all the points are converted to spherical coordinates $(\phi, \theta, r)$ (polar angle, elevation angle, and range). Each spherical bin of the z-buffer is assigned the minimal range value from the source point cloud. The minimal value is chosen since unwanted reflections sometimes cause invalid long range measurements, and therefore the minimum range measurement has the best chance of being valid. Then, all the points of the target point cloud (also transformed to spherical coordinates) with range below the value in the z-buffer (including a certain tolerance) are considered to be overlapping points, and their ratio to all the points is taken as the overlap value. More formally, if the point $p$ with range $p_r$ within the spherical bin $i$ fulfills the requirement
$$p_r < r_{min}^i + (r_{min}^i \cdot t_r + t_a) \qquad (6)$$
it is considered to be a part of the overlap. The value $r_{min}^i$ denotes the minimal range value stored within the spherical bin. The absolute $t_a$ and relative $t_r$ tolerance values represent the acceptable translation and rotation error; especially the error of rotation causes larger displacements at larger ranges. Equation (6) follows our error model, where the error $e$ is the distance between the precise point coordinates $p$ (which are unknown) and the known erroneous coordinates $p_e$, which can be approximately estimated as:
$$e = |p - p_e| = p_r^e \cdot \tan(e_r) + e_t \qquad (7)$$
where $e_r$ represents the rotation error, $e_t$ is the translation error, and $p_r^e$ is the range of the erroneous point (see also Figure 13). In our experiments, we used the tolerance values $t_r = 0.1$ and $t_a = 0.3$ for the overlap estimation. This allows a rotation error $e_r$ of approximately 5° and a translation error of 30 cm for the initial coarse transformation between the scans.
Figure 12. The overlap (a) between the source (blue) and the target (purple) LiDAR frame. In this case, approximately 30% of source points are within the view volume of target frame. The view volume can be effectively represented by spherical z-buffer (b) where range information (minimum in this case) or the information regarding empty space within the spherical grid is stored.
Figure 13. The error of measurement (Euclidean distance between points $p$ and $p_e$) can be split into a rotation part $e_r$ and a translation part $e_t$. The impact of the rotation error, $2 \cdot \tan(e_r/2)$, can be simplified to $\tan(e_r)$ due to the near-linear behavior of the tangent function for small angles.
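The overlap estimation of Section 3.5 can be sketched as follows. This is a simplified illustration: the bin resolution and helper names are ours, and treating empty bins as non-overlapping is our interpretation of the "empty space" information mentioned in Figure 12:

```python
import numpy as np

def estimate_overlap(source, target_in_source, bin_deg=1.0,
                     t_r=0.1, t_a=0.3):
    """Sketch of the spherical z-buffer overlap estimation: keep the
    minimal range per bin of the source cloud, then count target points
    (already transformed into the source frame) within that range plus
    the range-dependent tolerance of the error model."""
    def spherical_bins(pts):
        r = np.linalg.norm(pts, axis=1)
        phi = np.degrees(np.arctan2(pts[:, 1], pts[:, 0])) % 360.0
        theta = np.degrees(np.arcsin(np.clip(pts[:, 2] / np.maximum(r, 1e-12),
                                             -1.0, 1.0)))
        pi = np.minimum((phi / bin_deg).astype(int), int(360 / bin_deg) - 1)
        ti = np.minimum(((theta + 90.0) / bin_deg).astype(int),
                        int(180 / bin_deg) - 1)
        return pi, ti, r

    zbuf = np.full((int(360 / bin_deg), int(180 / bin_deg)), np.inf)
    pi, ti, r = spherical_bins(source)
    np.minimum.at(zbuf, (pi, ti), r)   # minimal range per spherical bin

    pi, ti, r = spherical_bins(target_in_source)
    r_min = zbuf[pi, ti]
    # Tolerance grows with range (rotation error) plus an absolute term
    # (translation error); bins never observed by the source do not match.
    overlapping = np.isfinite(r_min) & (r < r_min + (r_min * t_r + t_a))
    return np.count_nonzero(overlapping) / len(target_in_source)
```

A target cloud identical to the source yields an overlap of 1.0, while one entirely behind the z-buffer surface yields 0.0.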

3.6. Rolling Shutter Corrections

As mentioned in the description of the Velodyne sensor, the spinning frequency is approximately 10 Hz, which leads to a 100 ms duration of a single LiDAR scan acquisition. This is a relatively long time when significant movement is assumed. Large translations in the case of fast vehicles, or fast rotations in the case of a human carrier, can cause distortions in the LiDAR frame as displayed in Figure 14. We denote this effect as rolling shutter because it resembles the rolling shutter distortion of optical sensors.
Figure 14. Example of a LiDAR frame distorted by the rolling shutter effect when the operator with mapping backpack was turning around (green) and the corrected frame (purple). This is the top view and the distortion is mostly visible on the “bent” green wall at the bottom of this picture.
This means that the LiDAR data cannot simply be rigidly transformed; a continuous transformation needs to be applied, or at least approximated. A single Velodyne Puck frame consists of approximately 75 packets, each carrying a slice of the frame. The slices are evenly distributed in both time and space. Thus, for each frame $i$, we compute the relative transformation $T_i$ that occurred during the acquisition of the current frame using the global position $P_i$ of the current frame and the pose $P_{i+1}$ of the next one as:

$$T_i = P_i^{-1} \cdot P_{i+1} \qquad (8)$$
The correction for each slice is estimated by interpolation of this transformation. The translation parts are interpolated linearly and, for the rotations, Spherical Linear Interpolation (SLERP) [43] over the quaternion representation is used. For the first slice, a zero transformation is used, and the last one is transformed by the full relative transformation.
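This interpolation scheme can be sketched as follows. It is a minimal illustration with our own function names, and the quaternion SLERP is implemented inline rather than taken from a library:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    omega = np.arccos(dot)
    return (np.sin((1 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

def correct_slices(slices, translation, q_rotation):
    """Undistort one LiDAR frame: slice k (of K) is transformed by the
    interpolated fraction k/(K-1) of the frame-to-frame motion (identity
    for the first slice, the full transformation for the last one).

    slices: list of (N, 3) point arrays; translation: (3,) vector;
    q_rotation: unit quaternion (w, x, y, z) of the relative motion."""
    identity = np.array([1.0, 0.0, 0.0, 0.0])
    corrected = []
    for k, pts in enumerate(slices):
        t = k / (len(slices) - 1)
        w, x, y, z = slerp(identity, q_rotation, t)
        R = np.array([  # rotation matrix from the interpolated quaternion
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
        corrected.append(pts @ R.T + t * translation)
    return corrected
```

Translations are interpolated linearly and rotations spherically, matching the SLERP-based correction described above.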

3.7. Pose Graph Construction and Optimization

The proposed CLS method for point cloud alignment can only provide consecutive frame-to-frame registration. However, since each registration is burdened by a small error, after some time, the accumulated error (drift) is no longer acceptable. To reduce this drift and also to close loops of revisited places, we propose an iterative process of progressive pose graph construction and optimization. The key idea of this algorithm is progressive refinement of odometry estimation from local precision within small time window to global precision across the whole model. This iterative method is described in Figure 15 and more formally in Algorithm 1.
Figure 15. Pose graph as the output of point cloud registration and the input of SLAM optimization. The goal is to estimate the 6DoF poses $P_1, P_2, \ldots, P_N$ of the graph nodes (vertices) $p_1, p_2, \ldots, p_{15}$ in the trajectory. The edges represent the transformations between LiDAR frames for given nodes estimated by point cloud registration. Black edges represent transformations between consequent frames, blue edges are for transformations within a certain neighborhood (maximum distance of three frames in this example) and the green edges (in (a)) represent visual loops of revisited places detected by a significant overlap between the given frames. When the GNSS subsystem is available (b), additional visual loops are introduced as transformations from the origin $O$ of some local geodetic (orthogonal NED) coordinate frame.
First, only consecutive frames (within a neighborhood of size 1) are registered, and then the neighborhood is gradually enlarged (size d in Algorithm 1, Step 1) until it covers all N frames. CLS registration is performed for each pair (the i-th and j-th frames) within the current neighborhood where a significant overlap is found, and then efficient pose graph optimization using the SLAM++ framework [44] is performed. The modulo operator in Step 3 reflects the fact that we assume a circular trajectory. This assumption of beginning and ending the data acquisition process at the same place is also common for other similar solutions (ZEB-1, ZEB-REVO, etc.) [7]. It helps the system to identify at least one visual loop that guarantees reasonable results from the global SLAM-based optimization.
Algorithm 1 Progressive refinement of 6DoF poses $\{P_i\}_{i=1}^N$ for a sequence of frames $\{f_i\}_{i=1}^N$ by optimizing pose graph $G$.
1:  for d = 2 to ⌊N/2⌋ do
2:    for i = 1 to N do
3:      j := (i + d) mod N
4:      T_i^j := P_j^{-1} · P_i
5:      o_i^j := OVERLAP(f_i, f_j, T_i^j)
6:      if o_i^j > t_o then
7:        T_i^j, e := CLSREGISTRATION(f_i, f_j, T_i^j, o_i^j)
8:        if e ≤ MEDIANRANGE(f_i) · t_r + t_a then
9:          G := G ∪ {EDGE(i, j, T_i^j)}
10:       end if
11:     end if
12:   end for
13:   P_1, P_2, …, P_N := OPTIMIZE(G)
14: end for
15: return P_1, P_2, …, P_N
Before a pair of frames is registered, the presence of an overlap larger than $t_o$ is verified (Line 5 in Algorithm 1) in order to preserve the registration stability. We used a minimal overlap of 0.5 in our experiments. This also plays the role of visual loop detection every time a place is revisited.
Moreover, after the CLS registration is performed, we verify the result of the registration (Line 8) using the error model described in Equation (7). As the reference range value, we take the median range of the source point cloud. In our experiments, we used the tolerance values $t_r = 0.01$ and $t_a = 0.05$, representing a tolerance of approximately 0.5° in rotation and 5 cm in positional error.
For outdoor mapping, the absolute position and orientation are provided by the GNSS/INS subsystem with PPK (Post-Processed Kinematics) corrections. While the global error of these poses is small, the relative frame-to-frame error is much larger compared to the accuracy of a pure SLAM solution. Therefore, we combine our SLAM (in the same way as described above) with additional edges in the pose graph representing the global position in some geodetic frame, as shown in Figure 15b.
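The control flow of Algorithm 1 can be sketched as follows. The OVERLAP, CLSREGISTRATION, and OPTIMIZE steps are injected as callables since their internals are covered elsewhere in this section; the helper names and the neighborhood bound are our assumptions:

```python
import numpy as np

def progressive_pose_graph(frames, poses, overlap, register, optimize,
                           t_o=0.5, t_r=0.01, t_a=0.05):
    """Control-flow sketch of Algorithm 1: the neighborhood size d grows
    until it spans the whole circular trajectory, and the pose graph is
    re-optimized after every pass. `poses` are 4x4 matrices; `frames`
    are (N_i, 3) point arrays."""
    N = len(frames)
    edges = []                          # edges of the pose graph G
    for d in range(2, N // 2 + 1):      # growing neighborhood size
        for i in range(N):
            j = (i + d) % N             # circular trajectory assumption
            T_ij = np.linalg.inv(poses[j]) @ poses[i]
            o_ij = overlap(frames[i], frames[j], T_ij)
            if o_ij > t_o:              # enough overlap to register
                T_ij, e = register(frames[i], frames[j], T_ij, o_ij)
                # Accept the edge only within the error model (Eq. (7)).
                if e <= np.median(np.linalg.norm(frames[i], axis=1)) * t_r + t_a:
                    edges.append((i, j, T_ij))
        poses = optimize(edges, poses)  # global pose graph optimization
    return poses, edges
```

Injecting the three heavy steps as callables keeps the sketch independent of any particular registration or graph-optimization backend.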

3.8. Pose Graph Verification

After the registration is performed, a new edge is added into the pose graph only if the registration error is below a certain threshold modeled by Equation (7) (Line 8 of Algorithm 1). However, this simple rejection is not robust enough—some registrations are falsely rejected or accepted. After all overlapping frames are registered, additional verification is performed for all edges.
The expected transformation $T_{ij}^e$ is computed (Equation (9)) using the alternative path $T_1, T_2, \ldots, T_{K-1}, T_K$, as described in Figure 16. The L2 norm of the positional difference between the expected transformation $T_{ij}^e$ and the transformation $T_{ij}$ found by registration (Equations (10)–(12)) is considered the error value related to this edge. Note that the positional difference is also affected by the difference in rotation, and therefore the rotation is included in this error.
$$T_{ij}^e = T_1 \cdot T_2 \cdot \ldots \cdot T_{K-1} \cdot T_K \qquad (9)$$
$$\Delta_{ij} = T_{ij}^{-1} \cdot T_{ij}^e \qquad (10)$$
$$\Delta_{ij} = \left[ R_{\Delta_{ij}} \mid t_{\Delta_{ij}} \right] \qquad (11)$$
$$e_{ij} = \lVert t_{\Delta_{ij}} \rVert_2 \qquad (12)$$
Figure 16. Verification of the edge $(p_i, p_j)$ representing transformation $T_{ij}$ is performed by comparison with the transformation $T_1 \cdot T_2 \cdots T_K$ of the alternative path (blue) between the i-th and j-th nodes.
For each edge, all alternative paths up to a certain length are found and their errors are estimated. We use paths of length up to 3 as a tradeoff between time complexity and robustness. An edge is rejected when the median of these error values exceeds the accepted threshold (10 cm in our experiments). This threshold cannot be considered the target error of our reconstruction, since the pose graph optimization process further decreases the cumulative error. The whole process is repeated until there is no edge left to reject.
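The edge check of Equations (9)–(12) can be sketched with 4×4 homogeneous transforms as follows. This is an illustrative fragment, not the authors' code: the enumeration of alternative paths in the graph is omitted, and all names are ours.

```python
import numpy as np

def translation(t) -> np.ndarray:
    """Homogeneous 4x4 transform with the given translation (identity rotation)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def edge_error(T_ij: np.ndarray, path_transforms: list) -> float:
    """Error of the edge transform T_ij against one alternative path
    T_1 * T_2 * ... * T_K: the L2 norm of the translational part of
    Delta_ij = T_ij^{-1} * T_ij^e (Equations (9)-(12))."""
    T_e = np.eye(4)
    for T in path_transforms:
        T_e = T_e @ T                       # compose the expected transform
    delta = np.linalg.inv(T_ij) @ T_e       # residual between edge and path
    return float(np.linalg.norm(delta[:3, 3]))

def edge_accepted(T_ij, alternative_paths, threshold=0.10):
    """Keep the edge if the median error over all alternative paths
    (lengths up to 3 in the paper) stays below the 10 cm threshold."""
    errors = [edge_error(T_ij, p) for p in alternative_paths]
    return float(np.median(errors)) < threshold

# A 1 m edge along x, verified by two alternative paths that roughly agree:
T_ij = translation([1.0, 0.0, 0.0])
paths = [[translation([0.5, 0, 0]), translation([0.52, 0, 0])],
         [translation([1.0, 0.03, 0.0])]]
print(edge_accepted(T_ij, paths))  # → True (median error 2.5 cm < 10 cm)
```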

3.9. Horizontal Alignment of the Indoor Map

While, for outdoor environments, the model is georeferenced and aligned with the NED geodetic coordinate frame (north, east, and down), there is no such possibility when mapping indoors, since the GNSS signal is not available. However, practical indoor applications of our 3D mapping solution require at least horizontal alignment: the alignment of the gravity vector with the Z-axis, and the alignment of straight floors/ceilings with the XY-plane in the resulting 3D model, as Figure 17 shows.
Figure 17. The reconstruction built by our SLAM solution before (a) and after (b) the alignment of horizontal planes (floor, ceiling, etc.) with XY plane (blue circle).
This alignment is possible since roll and pitch angles are provided by the IMU (using measurements by accelerometers and gyroscopes), together with the extrinsic calibration C_I of the Velodyne sensors to the IMU frame, estimated as described in Section 3.3. The simplest solution would be to use these roll and pitch angles directly to align the LiDAR scans individually and deploy the SLAM only to estimate the remaining parameters (heading and translation). Unfortunately, this is not possible because the accuracy of the roll and pitch angles is not sufficient: errors on the order of degrees occur during motion. Since our goal is to reduce the cost of our solution, we did not want to use additional expensive hardware. Instead, we propose an alternative approach that estimates the horizontal alignment from these noisy measurements.
We can leverage the fact that there are multiple (thousands of) roll/pitch measurements, while only a single transformation for the horizontal alignment needs to be computed. First, we split each transformation (for each LiDAR frame) estimated by SLAM into rotation and translation:
P_SLAM = [ R_SLAM | t_SLAM ]        (13)
Our partial goal is to estimate the horizontal alignment A_h fulfilling Equation (14): the transformation of the point cloud data X by the SLAM rotation R_SLAM and the horizontal alignment A_h is the same as the transformation of these data by the IMU-measured rotation R_IMU (including the calibration C_I). In addition, each rotation (SLAM- or IMU-provided) can be split into roll R^R, pitch R^P, and heading R^H (Equation (15)). Since the IMU sensor is not able to provide accurate heading information indoors, we substitute the heading R_SLAM^H estimated by SLAM.
R_IMU · C_I · X = A_h · R_SLAM · X        (14)
R_SLAM^H · R_IMU^P · R_IMU^R · C_I = A_h · R_SLAM        (15)
A_h = R_SLAM^H · R_IMU^P · R_IMU^R · C_I · R_SLAM^{-1}        (16)
Using Equation (16), we are able to estimate a (noisy and inaccurate) horizontal alignment A_h for each pair of SLAM- and IMU-provided rotations with the same timestamp. During the mapping, there are usually thousands of these synchronized pairs (10 pairs per second). The precise horizontal alignment is then computed by averaging the quaternions [45] representing the noisy partial alignments A_h.
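A common realization of quaternion averaging is the eigenvector method: accumulate M = Σ q qᵀ over all samples and take the principal eigenvector, which is naturally invariant to the sign ambiguity of quaternions. The sketch below illustrates this idea; it is a simplified stand-in for the method cited as [45], not the authors' exact implementation.

```python
import numpy as np

def average_quaternions(quats: np.ndarray) -> np.ndarray:
    """Average unit quaternions (rows, [w, x, y, z]) as the principal
    eigenvector of M = sum q q^T. Sign-invariant: q and -q represent
    the same rotation and contribute identically to M."""
    M = quats.T @ quats
    eigvals, eigvecs = np.linalg.eigh(M)   # ascending eigenvalues
    avg = eigvecs[:, -1]                   # eigenvector of largest eigenvalue
    return avg if avg[0] >= 0 else -avg    # fix the sign convention

# Noisy copies of the identity rotation (w = 1), one with flipped sign:
q = np.array([[1.0, 0.0, 0.0, 0.0],
              [-0.999, 0.02, 0.0, 0.01],
              [0.998, -0.01, 0.03, 0.0]])
q = q / np.linalg.norm(q, axis=1, keepdims=True)
print(average_quaternions(q))  # close to [1, 0, 0, 0]
```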

3.10. Intensities Normalization

Another quality we would like to introduce into the 3D model is approximate surface “color” information, to improve the visual recognition of various objects (inventory, signs, etc.). To avoid additional hardware and to preserve invariance to illumination conditions, we use the laser return intensity. However, these intensity values cannot be directly interpreted as surface reflectivity, since they are affected by additional factors such as the angle of incidence, the range of the measurement, or the gain of the particular laser beam. These factors were reported by previous works [37,38,39] and also confirmed by our experiments in Figure 18.
Figure 18. The dependency of laser return intensity on: the source beam (a); range of the measurement (b); and the angle of incidence (c). We are using 2 LiDAR scanners with 16 laser beams per each scanner, 32 beams in total.
Previously published works propose various closed-form solutions for intensity normalization of long-range measurements (over 10 m) [37,38,39]. However, these are not applicable to smaller indoor environments, and therefore we propose an alternative solution. If the normalized intensity represents only the surface reflectivity, there should be no dependency on the other factors, and the probability distribution of the intensities should be the same for different laser beams, angles of incidence, and ranges.
Therefore, we discretize the space of ranges and angles with some small resolution (e.g., 20 cm and 1°, respectively), and we distribute all the points of the point cloud model into a 3D grid based on the source beam ID (already discrete), the angle of incidence, and the range. Our goal is that the intensity probability distribution be the same in each bin of points. Assuming a normal distribution of surface reflectivities (“colors”), the same target distribution N(μ, σ²) is achieved within each bin by a simple transformation:
N(μ, σ²) = ( N(μ_i, σ_i²) - μ_i ) · σ/σ_i + μ,        (17)
where N(μ_i, σ_i²) is the original distribution of the laser intensities within the ith bin.
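The per-bin standardization above can be sketched as follows. The binning resolutions follow the text (20 cm in range, 1° in angle of incidence); the function signature and the grouping via `np.unique` are our assumptions about how such a normalization could be organized, not the authors' implementation.

```python
import numpy as np

def normalize_intensities(intensity, beam_id, rng, angle,
                          range_res=0.20, angle_res=1.0):
    """Per-bin intensity standardization (sketch of Section 3.10).
    Points are binned by beam ID, discretized range, and discretized angle
    of incidence; each bin is shifted and scaled so all bins share the same
    target distribution N(mu, sigma^2), here the global mean and std."""
    bins = np.stack([np.asarray(beam_id),
                     (np.asarray(rng) / range_res).astype(int),
                     (np.asarray(angle) / angle_res).astype(int)], axis=1)
    target_mu, target_sigma = intensity.mean(), intensity.std()
    out = np.empty_like(intensity, dtype=float)
    # Group the points by their (beam, range bin, angle bin) key.
    keys, inverse = np.unique(bins, axis=0, return_inverse=True)
    for k in range(len(keys)):
        mask = inverse == k
        mu_i, sigma_i = intensity[mask].mean(), intensity[mask].std()
        scale = target_sigma / sigma_i if sigma_i > 0 else 1.0
        out[mask] = (intensity[mask] - mu_i) * scale + target_mu
    return out

# Two beams whose raw intensities differ by a large offset:
intensity = np.array([10.0, 12.0, 11.0, 50.0, 52.0, 51.0])
beam = np.array([0, 0, 0, 1, 1, 1])
out = normalize_intensities(intensity, beam, np.ones(6), np.zeros(6))
```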
There are no ground truth data that would allow an objective evaluation of our proposed method for intensity normalization. We are only able to compare the results of the 3D reconstruction with and without the normalization. Examples of the results can be found in Figure 19.
Figure 19. Results of 3D reconstruction without (a,c,e,g,i) and with (b,d,f,h,j) the normalization of laser intensities. One can observe more consistent intensities for the solid-color ceiling (b), reducing the artifacts of the trajectory while preserving the contrast with the ceiling lights. Besides the consistency, the normalization of intensities reduces the noise (d). The most significant improvement is the visibility of important objects, e.g., markers on the electrical towers (f,h) or emergency exit doors (j) in the highway wall. None of these objects can be found in the original point clouds (e,g,i).

4. Experiments

This section presents the mapping results of our system in various scenes and scenarios: outdoor environments where GNSS is available, GNSS-denied indoor scenes, small rooms, staircases, and a narrow corridor. A usable and precise solution must avoid the so-called “double walls” described in Figure 3, which are a typical issue in 3D reconstructions causing ambiguity. Unfortunately, the evaluation of such duplicities cannot be performed automatically, thus an operator (a certified geodesist) verified the reconstructions for us by inspecting multiple slices across the model. Moreover, data density and point coloring by the intensity readings are required for better visual recognition of various objects in the environment. All the raw data collected by our backpack solution, as well as the 3D reconstructions used in this evaluation, are publicly available (http://www.fit.vutbr.cz/~ivelas/files/4RECON-dataset.zip).
Regarding the precision, our goal is to achieve 5 cm relative precision (e.g., the distance of a point from the ground truth), denoted as e_r. For outdoor environments, there are also constraints on the absolute error e_a in the global geodetic frame. The average of this absolute error is required to be below 14 cm for the position in the horizontal plane and 12 cm for the height estimation. The constraints for the maximal error are set to double these values: up to 28 cm for the horizontal and 24 cm for the vertical error. These values were obtained through consultation with experts in the field of geodesy and follow the requirements for creating building models, outdoor vector maps, inventory checks, etc. The global error constraints are applicable only outdoors, where a global positioning system is available. To sum up, in this section, we show that our solution provides:
  • sufficient relative precision e r under 5 cm;
  • global absolute error e a within the limits described above;
  • data density and coloring by normalized intensities for visual inspection; and
  • data consistency without ambiguity (no dual walls effects).

4.1. Comparison of Point Cloud Registration Methods

We compared our previously published CLS method [23] with different modes (online and offline) of the state-of-the-art method LOAM [19], using the data of the KITTI Odometry Suite [22], which provides both the Velodyne LiDAR data and the ground truth poses. The error metrics used in this evaluation are defined by the KITTI dataset itself. The data sequences are split into subsequences of 100, 200, …, 800 frames (of 10, 20, …, 80 s duration). The error e_s of each subsequence is computed as:
e_s = ||E_s - C_s||_2 / l_s,
(as provided by [22]), where E_s is the expected position (from the ground truth) and C_s is the estimated position of the LiDAR at the last frame of the subsequence, both with respect to the initial position within the given subsequence. The difference is divided by the length l_s of the traveled trajectory. The final error value is the average of the errors e_s across all the subsequences of all the lengths.
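A translation-only sketch of this metric follows (the full KITTI benchmark also evaluates rotational error and uses fixed subsequence lengths; the variable names and the simplification to positions are ours):

```python
import numpy as np

def subsequence_errors(gt_positions, est_positions, lengths=(100, 200)):
    """KITTI-style relative error (translation-only sketch). For each
    subsequence of n frames, compare the end position predicted by the
    estimate with the ground truth, both relative to the subsequence
    start, and divide by the ground-truth trajectory length l_s."""
    # Trajectory length elapsed up to each frame.
    step = np.linalg.norm(np.diff(gt_positions, axis=0), axis=1)
    cumdist = np.concatenate([[0.0], np.cumsum(step)])
    errors = []
    for n in lengths:
        for start in range(0, len(gt_positions) - n):
            end = start + n
            e = (gt_positions[end] - gt_positions[start]) \
                - (est_positions[end] - est_positions[start])
            l_s = cumdist[end] - cumdist[start]
            errors.append(np.linalg.norm(e) / l_s)
    return float(np.mean(errors))

# Straight 300 m track with a 1% scale drift → 0.01 m error per metre:
x = np.arange(301, dtype=float)
gt = np.stack([x, np.zeros(301), np.zeros(301)], axis=1)
err = subsequence_errors(gt, gt * 1.01)
print(err)  # ≈ 0.01 (1 cm drift per 1 m of trajectory)
```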
The experiment is summarized in Table 2, and it leads to the conclusion that our CLS approach outperforms LOAM, with approximately 1 cm lower drift per 1 m of trajectory traveled. For clarification, LOAM can run in two different modes. In the online mode (10 fps), mapping is skipped for a certain number of frames, which are only roughly aligned. In the offline mode, which is approximately 3× slower, every frame undergoes the full mapping procedure.
The precision of our method was estimated for the frame-to-frame approach, where only consecutive frames were registered, and also for the scenario where each frame is registered with all other frames within a small neighborhood (10 neighboring frames in this experiment). In this experimental multi-frame approach, the final pose is estimated by simple averaging.
In our previous publication [23], the superior performance of CLS over the GICP (Generalized ICP) method [46] was presented as well. All these evaluations led to the choice of CLS for the LiDAR frame registration in our 4RECON backpack solution.

4.2. Indoor Experiments

For the indoor evaluation of our system, we chose two different environments, the office and the staircase in Figure 20, where our partner company had already performed 3D mapping using different laser scanners and generously provided the accurate output models to us. The reconstructions from the static FARO scanner, achieving very high accuracy (on the order of millimeters), were used as the ground truth. The same strategy has already been used for the evaluation of other mapping systems [4,12,25]. For the office environment, a 3D reconstruction created by the ZEB-1 solution was also provided to us. This allowed us to compare our solution in terms of accuracy, data density, model usability, and completeness.
Figure 20. Experimental environments Office (a) and Staircase (b), and the highlighted slices that were used for precision evaluation.
To evaluate the relative error, all the models of the same environment provided by the different scanners (FARO, ZEB-1, and our 4RECON solution) were aligned using ICP. As displayed in Figure 20, several reference slices (8 slices per model, 16 slices in total) were created for the evaluation of precision. Within each slice, the average error (in Table 3) was estimated as the average distance of the 3D points to the ground truth model created by the FARO scanner. Our solution achieved approximately 1.5 cm relative error on average, which is only slightly worse than the 1.1 cm error of ZEB-1, a solution burdened by the multiple limitations listed below in this section. Moreover, we provide information about the distribution of the relative displacement error in Figure 21. The error was estimated for ZEB-1 and different modes of our system:
Table 3. Relative error e r of our method and ZEB-1 product within selected slices visualized in Figure 20. Presented values are average displacements (cm) of the points comparing with the ground truth point cloud obtained by FARO static scanner. The results are missing for ZEB-1 and Staircase dataset since there was no reconstruction using this scanner available.
Figure 21. Error e_r distribution (the amount of the points within a certain error) for our 4RECON system and the ZEB-1 product. The experiments were performed for all test slices in Figure 20 on the Office (a) and Staircase (b) datasets. Note that for the Staircase dataset the model built by ZEB-1 was not available and therefore its evaluation is missing.
  • in 4RECON-10, the registrations were performed only within a small neighborhood of the 10 nearest frames (1 s time window), which reflects the impact of the accumulation error;
  • for 4RECON-overlap, the registrations were performed for all overlapping frames as described in Section 3.7, reducing the accumulation error by loop closures at every possible location; and
  • pose graph verification (see Section 3.8) was deployed in 4RECON-verification, yielding the best results with good precision and no ambiguities.
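The per-slice relative error e_r used in these comparisons, the average distance of slice points to the reference cloud, can be sketched as a nearest-neighbor average (a brute-force illustration; the paper does not specify the exact evaluation tooling, and in practice a KD-tree would replace the full distance matrix):

```python
import numpy as np

def average_displacement(points: np.ndarray, ground_truth: np.ndarray) -> float:
    """Average distance of the evaluated slice points (N x 3) to their
    nearest ground-truth point (M x 3), mirroring how the per-slice
    error against the FARO reference cloud is computed."""
    # Full N x M distance matrix via broadcasting (brute force).
    dists = np.linalg.norm(points[:, None, :] - ground_truth[None, :, :],
                           axis=2)
    return float(dists.min(axis=1).mean())

# Slice points uniformly offset 1.5 cm from the reference:
gt = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
pts = gt + np.array([0.015, 0.0, 0.0])
print(average_displacement(pts, gt))  # ≈ 0.015
```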
Both ZEB-1 and our solution including pose graph verification achieved sufficient accuracy below 5 cm. Moreover, a precision of 2 cm was achieved for more than 70% of the data. The slightly better precision of the ZEB-1 solution is due to the Hokuyo sensor, with its 4× higher scanning frequency and much lower vibrations compared with the Velodyne LiDAR.
Figure 22 also shows the precision within representative slices—horizontal slice for Office dataset and vertical slice across model of Staircase. These slices demonstrate the noise within data coming from different sensors—Hokuyo LiDAR for ZEB-1 solution and Velodyne for our 4RECON system—and also the precision for different modes of operation. For Staircase dataset, the necessity of pose graph optimization is also demonstrated.
Figure 22. Color-coded errors within the horizontal reference slice of the Office dataset (a–d) and the vertical slice of the Staircase dataset (e–g). Blue represents zero error; red stands for 10 cm error and higher. The ground truth FARO data are displayed in green. The results are provided for 4RECON-10 (a,e), 4RECON-overlap (b,f), 4RECON-verification (c,g), and ZEB-1 (d). For the Office dataset, there are no ambiguities (double walls) even without visual loop detection, while both loop closure and pose graph verification are necessary to discard such errors in the more challenging Staircase dataset. Moreover, one can observe that the ZEB-1 solution yields a lower-noise reconstruction thanks to the less noisy Hokuyo LiDAR.
Our evaluations show that the precision of our 4RECON backpack is comparable to the ZEB-1 solution while fulfilling the basic requirement of a relative error below 5 cm. Note that the error values are also comparable (and in some cases superior) to the precisions of the other solutions in Table 1. In our solution, higher noise can be observed compared with ZEB-1. This corresponds with the higher error values and is the main reason for the slightly lower accuracy.
However, it is important to point out the two most significant advantages of our solution compared with the ZEB solutions. First, our solution is usable in vast open spaces with fewer and more distant feature objects, as is demonstrated in the next sections. In environments where the feature objects lie at distances significantly larger than 15–20 m [7], ZEB solutions based on the Hokuyo sensor fail.
Second, our Velodyne-based solution is able to provide much higher data density, map completeness, and visibility of objects in the scene. We chose two large surfaces (the ceiling and the side wall in Figure 23) with a total area of 230 m². Models of these surfaces created by the ZEB-1 solution achieve an average data density of 0.9 points per cm² (2.2 million points in total). Models created by our 4RECON backpack consist of more than 23 million points, achieving a much higher data density of 10.1 points per cm². The better visibility of objects in Figure 23 is achieved thanks to the laser intensity readings provided by the Velodyne sensor and our normalization process described in Section 3.10. This might appear to be only a “cosmetic” property, but the visibility of construction elements, equipment, furniture, etc. in the scene is important for usability in real applications; e.g., an operator needs to distinguish between a window and a blackboard.
Figure 23. The comparison of data density provided by ZEB-1 (a,c) and our (b,d) solution. Since the ZEB-1 solution is based on the Hokuyo scanner, the laser intensity readings are missing and data density is much lower compared with our solution. Multiple objects which can be distinguished in our reconstruction (lamps on the ceiling in the top, furniture and other equipment in the bottom image) are not visible in the ZEB-1 model.

4.3. Outdoor Experiments

Our system is a universal solution, both for indoor scenes, where the usability was proven in the previous section, and for outdoor scenes, including vast open ones. We tested and evaluated our system during a real task: high-voltage line mapping and measurement. The area of interest, including details of some important objects, is visualized in Figure 24. The main goal of this mission was the position estimation of the electric pylons (including the footprint of the base, the total height, and the positions of the wire grips) and the heights and hangings of the wires. Figure 24 shows that these details can be recognized in the 3D model. The usability of our 3D reconstructions was also confirmed by the geodetic company we asked for manual data inspection and evaluation.
Figure 24. The example of 3D reconstruction of open field with high voltage electrical lines (a). The model is height-colored for better visibility. The estimation of positions and height of the lines (b), towers (e), etc. was the main goal of this mapping task. The other elements (c,d) in the scene are shown for demonstration of the reconstruction quality.
As in the indoor mapping, ambiguities in the form of multiple instances of objects disqualify a reconstruction from being used in practical geodetic measurements. Such an error, in comparison with the desired result of the reconstruction, is shown in Figure 25. Multiple instances of the same object and blurred, noisy results were successfully avoided by our solution (see Figure 24 and Figure 25).
Figure 25. Example of ambiguities caused by reconstruction errors (a), which disqualifies the model to be used for practical measurements. We obtained such results when we used only poses provided by GNSS/INS subsystem without any refinements by SLAM or point cloud registration. Our solution (including SLAM) provides valid reconstructions (b), where both towers and wires (in this case) can be distinguished.
Since our solution integrates a precise GNSS/INS module for outdoor scenarios, the model is georeferenced: the coordinates of all the points are bound to a global geodetic frame.
To verify the absolute positional accuracy of our model, we performed precise measurements on so-called survey markers. This is a commonly used technique to verify the precision of resulting maps (including 3D maps). The precise positions of the survey markers are estimated using a specialized geodetic GNSS system, which is placed statically on the survey point for several seconds, until the position converges. A precision up to 2 cm is achieved using RTK (Real-Time Kinematic) corrections, which are received online via an internet connection.
The survey markers (Figure 26a) are highlighted using highly reflective spray paint. Thanks to the coloring of the point cloud by laser intensities, these markers are also visible in the reconstructions, as can be seen in Figure 26b.
Figure 26. Geodetic survey markers painted on the road (a) are also visible in the point cloud (b) thanks to the coloring by laser intensities.
The evaluation in Table 4 shows that our 3D mapping of the 0.5 km test track fulfills the requirements for the absolute error, as described at the beginning of this section: an average error below 14 cm for the position in the horizontal plane and 12 cm for the height estimation, and a maximal error up to 28 cm and 24 cm, respectively (double the values of the expected average error).
Table 4. Errors measured (cm) on geodetic survey marker points at the beginning and at the end of survey track. The distance between the control points is 523 m.
Thanks to the possibility of coloring the point cloud by laser intensities, such an evaluation can be run to validate every 3D model intended for use in a real application. This is an important quality, since geodetic practice requires double measurements to ensure that the accuracy is sufficient.

4.4. Comparison of Single and Dual Velodyne Solution

Finally, we compared the robustness of our dual-LiDAR solution against a system with a single LiDAR only. We computed reconstructions of the Office environment using our solution with two synchronized and calibrated LiDARs (one aligned vertically and the second horizontally) in Figure 27a,b, and also using only a single LiDAR, either horizontally (Figure 27c,d) or vertically (Figure 27e,f) aligned.
Figure 27. Comparison of the reconstructions provided by the dual-LiDAR system, the floor plan top view (a) and the side view of the corridor (b), with the reconstructions built using only a single horizontally (c,d) or vertically (e,f) positioned Velodyne LiDAR. The reconstructions are colored red, with the ground truth displayed in blue.
Our evaluation shows that only the dual-LiDAR solution provides a valid reconstruction. The solution with the horizontal LiDAR only is not able to provide correct vertical alignment (Figure 27d), and, vice versa, the solution with the vertical LiDAR is horizontally misaligned (Figure 27e).

5. Discussion

When we look at our 4RECON mapping backpack in the context of the other available solutions (see the overview in Table 1), we can summarize its advantages and disadvantages.
Compared with the ZEB products, our backpack achieves much higher data density, better visibility of the objects in the resulting model, higher comfort of data acquisition, and, most importantly, usability in outdoor featureless open spaces, including the option of georeferencing the reconstructed point map. However, we must admit that the ZEB scanners achieve better accuracy and lower noise in models of indoor environments.
In terms of universality of usage, our solution also outperforms the Robin and Akhka backpacks, which require GNSS readings and therefore cannot scan indoors. For outdoor tasks, Robin achieves better precision than our 4RECON backpack, but it is also important to point out the very high price of the Robin solution.
The laser mapping backpacks Pegasus, Viametris bMS3D, and LiBackpack can be considered the solutions most similar to our work. All these systems claim a precision up to 5 cm, which is also the accuracy of 4RECON (according to the evaluation in Figure 20). The advantages of these solutions are a more professional design and the presence of additional RGB cameras (for the Pegasus and Viametris backpacks). The integration of a panoramic RGB camera into our backpack is planned as future work. Our solution, on the other hand, provides an open SLAM method, in contrast with the proprietary solutions deployed in these backpacks, and also a potentially much lower price.

6. Conclusions

This paper presents a dual-LiDAR system for mobile mapping. Our solution can be easily carried as a backpack, together with a reliable dual-antenna GNSS/INS system. This makes its usage universal. In small or narrow indoor environments with many obstacles, two LiDAR sensors increase the field of view. On the other hand, in open outdoor spaces lacking features, the reliable positional subsystem keeps the result accurate.
Thanks to the type of LiDARs used, our solution also brings multiple other beneficial properties: data density, map completeness, and coloring by laser intensities normalized by our novel algorithm. The intensities enable better visual recognition of the elements in the scene, as well as the visibility of geodetic survey markers for checking the model's validity.
The proposed solution was evaluated in both indoor and outdoor scenarios. During the mapping of the office and staircase environments, our solution fulfilled the requirement of an error below 5 cm and achieved a precision similar to the ZEB-1 solution. The average error in terms of point displacements is approximately 1.5 cm. In the outdoor experiments, our reconstruction met the requirements for absolute precision, with an 11.8 cm average error in the global geodetic frame. This proves the higher universality of our mapping backpack compared to the previous ZEB-1 solution. In all our experiments, data consistency was preserved and unambiguous models were built.

Author Contributions

Conceptualization, M.V., M.S. and J.H.; methodology, M.V., M.S. and A.H.; software, M.V.; validation, M.V.; investigation, M.V. and T.S.; resources, J.H.; data curation, M.V. and T.S.; Writing—Original Draft preparation, M.V.; Writing—Review and Editing, M.S. and A.H.; visualization, M.V. and T.S.; and supervision, project administration and funding acquisition, M.S. and J.H.

Funding

This research was funded by the Technology Agency of the Czech Republic (TA CR) Competence Centres project V3C—Visual Computing Competence Center (No. TE01020415); the Ministry of Education, Youth and Sports of the Czech Republic from the National Programme of Sustainability (NPU-II) project IT4Innovations excellence in science (LQ1602); and the Brno University of Technology project Processing, recognition and imaging of multimedia and 3D data (FIT-S-17-3984).

Acknowledgments

We would like to thank our colleagues in Geodrom company for providing us valuable consultations, experiences with different mapping solutions, and necessary datasets used for evaluations published in this paper. Our research was also supported by the Ministry of Industry and Trade of the Czech Republic project 4RECON CZ.01.1.02/0.0/0.0/15 013/0004722.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. NavVis. Available online: https://www.navvis.com/ (accessed on 19 August 2019).
  2. Nava, Y. Visual-LiDAR SLAM with Loop Closure. Master’s Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2018. [Google Scholar]
  3. GeoSLAM. Available online: https://geoslam.com/ (accessed on 19 August 2019).
  4. Sirmacek, B.; Shen, Y.; Lindenbergh, R.; Zlatanova, S.; Diakite, A. Comparison of Zeb1 and Leica C10 indoor laser scanning point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 143. [Google Scholar] [CrossRef]
  5. Bosse, M.; Zlot, R.; Flick, P. Zebedee: Design of a Spring-Mounted 3-D Range Sensor with Application to Mobile Mapping. IEEE Trans. Robot. 2012, 28, 1104–1119. [Google Scholar] [CrossRef]
  6. Bosse, M.; Zlot, R. Continuous 3D scan-matching with a spinning 2D laser. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 4312–4319. [Google Scholar] [CrossRef]
  7. GeoSLAM Ltd. The ZEB-REVO Solution. 2018. Available online: https://geoslam.com/wp-content/uploads/2018/04/GeoSLAM-ZEB-REVO-Solution_v9.pdf?x97867 (accessed on 19 July 2019).
  8. Dewez, T.J.; Plat, E.; Degas, M.; Richard, T.; Pannet, P.; Thuon, Y.; Meire, B.; Watelet, J.M.; Cauvin, L.; Lucas, J.; et al. Handheld Mobile Laser Scanners Zeb-1 and Zeb-Revo to map an underground quarry and its above-ground surroundings. In Proceedings of the 2nd Virtual Geosciences Conference: VGC 2016, Bergen, Norway, 21–23 September 2016. [Google Scholar]
  9. GreenValley International. LiBackpack DG50, Mobile Handheld 3D Mapping System. 2019. Available online: https://greenvalleyintl.com/wp-content/uploads/2019/04/LiBackpack-DG50.pdf (accessed on 19 July 2019).
  10. GreenValley International. Available online: https://greenvalleyintl.com/ (accessed on 19 August 2019).
  11. Leica Geosystems AG. Leica Pegasus: Backpack, Mobile Reality Capture. 2017. Available online: https://www.gefos-leica.cz/data/original/skenery/mobilni-mapovani/backpack/leica_pegasusbackpack_ds.pdf (accessed on 19 July 2019).
  12. Masiero, A.; Fissore, F.; Guarnieri, A.; Piragnolo, M.; Vettore, A. Comparison of Low Cost Photogrammetric Survey with TLS and Leica Pegasus Backpack 3D Models. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 147. [Google Scholar] [CrossRef]
  13. Dubey, P. New bMS3D-360: The First Backpack Mobile Scanning System Including Panoramic Camera. 2018. Available online: https://www.geospatialworld.net/news/new-bms3d-360-first-backpack-mobile-scanning-system-panormic-camera/ (accessed on 30 August 2019).
  14. Viametris. Available online: https://www.viametris.com/ (accessed on 20 August 2019).
  15. 3D Laser Mapping. Robin Datasheet. 2017. Available online: https://www.3dlasermapping.com/wp-content/uploads/2017/09/ROBIN-Datasheet-front-and-reverse-WEB.pdf (accessed on 19 July 2019).
  16. 3D Laser Mapping. Available online: https://www.3dlasermapping.com/ (accessed on 19 August 2019).
  17. Rönnholm, P.; Liang, X.; Kukko, A.; Jaakkola, A.; Hyyppä, J. Quality analysis and correction of mobile backpack laser scanning data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 41. [Google Scholar] [CrossRef]
  18. Kukko, A.; Kaartinen, H.; Zanetti, M. Backpack personal laser scanning system for grain-scale topographic mapping. In Proceedings of the 46th Lunar and Planetary Science Conference, The Woodlands, TX, USA, 16–20 March 2015; Volume 2407. [Google Scholar]
  19. Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of the Robotics: Science and Systems Conference (RSS 2014), Berkeley, CA, USA, 12–16 July 2014. [Google Scholar] [CrossRef]
  20. Zhang, J.; Singh, S. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2174–2181. [Google Scholar] [CrossRef]
  21. Zhang, J.; Kaess, M.; Singh, S. Real-time depth enhanced monocular odometry. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 4973–4980. [Google Scholar] [CrossRef]
  22. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
  23. Velas, M.; Spanel, M.; Herout, A. Collar Line Segments for fast odometry estimation from Velodyne point clouds. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4486–4495. [Google Scholar] [CrossRef]
  24. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  25. Maboudi, M.; Bánhidi, D.; Gerke, M. Evaluation of indoor mobile mapping systems. In Proceedings of the GFaI Workshop 3D-NordOst 2017 (20th Application-Oriented Workshop on Measuring, Modeling, Processing and Analysis of 3D-Data), Berlin, Germany, 7–8 December 2017. [Google Scholar]
  26. Kukko, A.; Kaartinen, H.; Hyyppä, J.; Chen, Y. Multiplatform Mobile Laser Scanning: Usability and Performance. Sensors 2012, 12, 11712–11733. [Google Scholar] [CrossRef]
  27. Lauterbach, H.A.; Borrmann, D.; Heß, R.; Eck, D.; Schilling, K.; Nüchter, A. Evaluation of a Backpack-Mounted 3D Mobile Scanning System. Remote Sens. 2015, 7, 13753–13781. [Google Scholar] [CrossRef]
  28. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar] [CrossRef]
  29. Nüchter, A.; Bleier, M.; Schauer, J.; Janotta, P. Continuous-Time SLAM—Improving Google’s Cartographer 3D Mapping. In Latest Developments in Reality-Based 3D Surveying and Modelling; MDPI: Basel, Switzerland, 2018; pp. 53–73. [Google Scholar] [CrossRef]
  30. Newcombe, R.A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A.J.; Kohi, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A. KinectFusion: Real-time dense surface mapping and tracking. In Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality, Basel, Switzerland, 26–29 October 2011; pp. 127–136. [Google Scholar] [CrossRef]
  31. Deschaud, J. IMLS-SLAM: Scan-to-Model Matching Based on 3D Data. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2480–2485. [Google Scholar] [CrossRef]
  32. Kolluri, R. Provably Good Moving Least Squares. ACM Trans. Algorithms 2008, 4, 18:1–18:25. [Google Scholar] [CrossRef]
  33. Droeschel, D.; Behnke, S. Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1–9. [Google Scholar] [CrossRef]
  34. Droeschel, D.; Schwarz, M.; Behnke, S. Continuous Mapping and Localization for Autonomous Navigation in Rough Terrain Using a 3D Laser Scanner. Robot. Auton. Syst. 2017, 88, 104–115. [Google Scholar] [CrossRef]
  35. Mendes, E.; Koch, P.; Lacroix, S. ICP-based pose-graph SLAM. In Proceedings of the 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Lausanne, Switzerland, 23–27 October 2016; pp. 195–200. [Google Scholar] [CrossRef]
  36. Park, C.; Moghadam, P.; Kim, S.; Elfes, A.; Fookes, C.; Sridharan, S. Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1206–1213. [Google Scholar] [CrossRef]
  37. Kashani, A.G.; Olsen, M.J.; Parrish, C.E.; Wilson, N. A Review of LIDAR Radiometric Processing: From Ad Hoc Intensity Correction to Rigorous Radiometric Calibration. Sensors 2015, 15, 28099–28128. [Google Scholar] [CrossRef] [PubMed]
  38. Jutzi, B.; Gross, H. Normalization Of Lidar Intensity Data Based On Range And Surface Incidence Angle. ISPRS J. Photogramm. Remote Sens. 2009, 38, 213–218. [Google Scholar]
  39. Kaasalainen, S.; Jaakkola, A.; Kaasalainen, M.; Krooks, A.; Kukko, A. Analysis of Incidence Angle and Distance Effects on Terrestrial Laser Scanner Intensity: Search for Correction Methods. Remote Sens. 2011, 3, 2207–2221. [Google Scholar] [CrossRef]
  40. Velodyne LiDAR. Available online: https://velodynelidar.com/ (accessed on 19 August 2019).
  41. Nuchter, A.; Lingemann, K.; Hertzberg, J.; Surmann, H. 6D SLAM with approximate data association. In Proceedings of the ICAR ’05. Proceedings., 12th International Conference on Advanced Robotics, Seattle, WA, USA, 18–20 July 2005; pp. 242–249. [Google Scholar] [CrossRef]
  42. Velas, M.; Faulhammer, T.; Spanel, M.; Zillich, M.; Vincze, M. Improving multi-view object recognition by detecting changes in point clouds. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–7. [Google Scholar] [CrossRef]
  43. Shoemake, K. Animating Rotation with Quaternion Curves. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, San Francisco, CA, USA, 22–26 July 1985; ACM: New York, NY, USA, 1985; pp. 245–254. [Google Scholar] [CrossRef]
  44. Ila, V.; Polok, L.; Solony, M.; Svoboda, P. SLAM++-A highly efficient and temporally scalable incremental SLAM framework. Int. J. Robot. Res. 2017, 36, 210–230. [Google Scholar] [CrossRef]
  45. Markley, F.L.; Cheng, Y.; Crassidis, J.L.; Oshman, Y. Averaging Quaternions. J. Guid. Control. Dyn. 2007, 30, 1193–1197. [Google Scholar] [CrossRef]
  46. Segal, A.; Hähnel, D.; Thrun, S. Generalized-ICP. In Robotics: Science and Systems; Trinkle, J., Matsuoka, Y., Castellanos, J.A., Eds.; The MIT Press: Cambridge, MA, USA, 2009. [Google Scholar] [CrossRef]
