Article

A Framework of Wearable Sensor-System Development for Urban 3D Modeling

1 Department of Geoinformatic Engineering, Inha University, 100 Inha-ro, Michuhol-gu, Incheon 22212, Korea
2 Department of Civil and Environmental Engineering, Myongji University, 116 Myongji-ro, Cheoin-gu, Yongin 17058, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(18), 9061; https://doi.org/10.3390/app12189061
Submission received: 1 April 2022 / Revised: 4 June 2022 / Accepted: 12 June 2022 / Published: 9 September 2022
(This article belongs to the Section Civil Engineering)

Abstract

Recently, with the expansion of the smart city and autonomous driving-related technologies within complex urban structures, there has been an increase in the demand for precise 3D modeling technology. Wearable sensor systems can contribute to the construction of seamless 3D models for complex urban environments, as they can be utilized in various environments that are difficult to access using other sensor systems. Consequently, various studies have developed and utilized wearable sensor systems suitable for different target sites and purposes. However, studies have not yet suggested an overall framework for building a wearable system, including a system design method and an optimal calibration process. Therefore, this study aims to propose a framework for wearable system development, by presenting guidelines for wearable sensor system design and a calibration framework optimized for wearable sensor systems. Furthermore, calibration based on point–plane correspondences is proposed. A wearable sensor system was developed based on the proposed guidelines and it efficiently acquired data; the system calibration and data fusion results for the proposed framework showed improved performance in a comparative evaluation.

1. Introduction

The demand for detailed 3D modeling of urban landscapes has recently increased in tandem with the growing demand for smart cities and autonomous driving technologies [1]. Precise 3D modeling can be used to solve urban problems [2], as well as for autonomous driving and vehicle localization in GNSS-denied environments [3]. Furthermore, 3D modeling that reflects spatiotemporal changes facilitates the implementation of digital twins and smart cities [4]. Thus, it is considered an essential element of future technologies, with applications in cities and transportation.
Multi-sensor systems have been considered among the important research topics of the past few years, owing to their ability to increase the efficiency of 3D modeling [5,6]. In particular, the usability of wearable multi-sensor systems equipped with LiDAR and cameras is increasing with the demand for 3D modeling of complex urban structures, such as indoor and underground spaces, urban canyons, alleys, and mixed traffic streets [3]. Wearable systems have several advantages. They provide sufficient 3D information even in environments with insufficient texture or discontinuous depth [7,8,9], and, being mounted on the human body, they can be used in a wide range of environments. More specifically, wearable systems can be operated in complex environments where the accessibility of mobile platforms is severely restricted, such as indoor and underground spaces, narrow alleys, mixed traffic streets, and historical buildings [10,11,12].
Although wearable multi-sensor systems offer significant advantages for urban 3D modeling, the following must be carefully considered during system development to ensure accurate data acquisition and fusion. First, the sensor arrangement method for efficient data acquisition must be reviewed. Wearable sensor systems have a small maximum payload, thereby limiting the number and types of sensors used. Therefore, the configuration and arrangement of sensors must be reviewed, with the aim of acquiring data more efficiently using a relatively small number of sensors. Second, self-calibration of the on-board sensors is required. Self-calibration is the process of estimating the interior orientation parameters (IOPs) and the distortion parameters of the camera by determining the mathematical model related to the data acquisition of each sensor. Finally, for accurate data fusion, precise system calibration should be conducted to determine the geometric relationships between sensors. In particular, system calibration for wearable systems should be conducted especially carefully for the following reason. For efficient data acquisition, wearable systems are generally designed to minimize the overlap of the sensors' fields of view (FOV). However, such a sensor arrangement obstructs the acquisition of common control features between sensors and limits the accuracy of system calibration. Therefore, the design and characteristics of the wearable system should be considered during the system calibration procedure.

1.1. Previous Studies

Previous studies related to wearable systems have developed systems suitable for the purpose of each study; however, details have been lacking regarding the placement of the sensors in wearable systems for efficient data acquisition. Gong et al. [3] developed a backpack LiDAR system comprising two 3D LiDARs for 3D modeling of underground parking lots in a GNSS-denied environment. Di Filippo et al. [11] developed a wearable mapping system using a hand-held laser scanner for 3D modeling of a historical site. Chahine et al. [10] performed data fusion for data acquired in a forest area using a backpack-type wearable system equipped with LiDAR and multiple optical cameras. Liu et al. [13] developed a system to help visually impaired people move indoors using a wearable system composed of LiDAR and a camera. However, these studies did not provide specific criteria or reasons for the arrangement and design of their sensors and sensor systems. One previous study [14] related to this research indicated a method for LiDAR sensor arrangement within a backpack system; however, that study dealt only with LiDAR.
Self-calibration of optical cameras and LiDAR sensors has been sufficiently verified through various previous studies. The calibration methodology for conventional and fish-eye lens cameras has been established through various studies [15,16,17,18,19,20,21,22,23,24] from the 1960s to recent years. Furthermore, regarding LiDAR self-calibration, various methodologies have been proposed and verified through numerous studies [25,26,27,28,29,30]. Therefore, the self-calibration of optical cameras and LiDARs can be performed reliably and accurately via the approaches of previous studies.
Various studies have proposed multi-sensor system calibration approaches; however, most previous studies focused on the characteristics of the sensors without considering the platform used when developing the system calibration approach. In other words, previous approaches mainly considered the correspondences between control features (points, lines, and planes) of sensor data, not the characteristics of the sensor arrangement imposed by the platform.
The point–point correspondence-based approach has been the most commonly used methodology of system calibration. This approach allows stable calibration, because several matched points robustly limit the relative position of the sensors. Therefore, many previous studies used the point–point correspondence-based approach to conduct system calibration between an optical camera and a LiDAR, using the camera images and intensity images [31,32,33,34,35,36] or LiDAR point clouds [37,38,39,40,41,42,43,44]. However, this approach has the disadvantage that the accuracy is dependent on the resolution of the data and the FOV overlap between the sensors [45,46,47,48,49].
The conventional point–plane correspondence-based approach is a calibration method for an optical camera and a LiDAR. The constraint condition of this approach is that LiDAR point clouds for a plane should be on the corresponding plane derived from the optical images. This approach has the advantage that it requires nothing but checkerboard calibration targets in the testbed configuration [50]. Thus, several studies have employed the conventional point–plane correspondence-based approach [51,52,53,54,55]. However, the approach has a disadvantage in that (i) the application is limited to a system consisting of an optical camera and a LiDAR with sufficient FOV overlap, and (ii) it requires an additional process for extracting 3D plane information from optical images.
The line–plane and point–line correspondence-based approaches are generally used for the systems with 2D LiDAR and cameras. These approaches offer the advantage that the positional relationship between control features is strongly constrained, and they derive accurate results [56,57]. Therefore, they have been utilized in various studies dealing with 2D LiDAR [56,57,58,59,60,61,62,63,64]. However, the line–plane and point–line correspondence-based approaches are not suited for 3D LiDAR systems, and require an additional process to extract the edge points or lines of the calibration target.
The common disadvantages of previous system calibration approaches include their unsuitability for systems consisting of sensors with insufficient FOV overlap. Furthermore, previous system calibration methods for multi-sensor systems generally required pre-processing to measure GCP (Ground Control Point) coordinates. The procedure for GCP measurement increases the time required for system calibration and makes periodic system calibration difficult.

1.2. Purpose of Study

The purpose of this study is to propose a framework for building a wearable system that generates accurate RGB-D data. More specifically, the objectives of this study are: (1) to suggest guidelines for the design of a wearable multi-sensor system, (2) to propose a framework that can accurately conduct calibration of a wearable sensor system, and (3) to propose an efficient system calibration method.
In this study, criteria for sensor placement of wearable systems are suggested for efficient and accurate RGB-D data generation. For efficient data acquisition and accurate data fusion, sensor placement and system design should be determined carefully. However, most previous studies have not expanded upon the details of the sensor placement for wearable systems. Therefore, this study suggests the criteria to be considered in the sensor arrangement of a wearable sensor system.
This study proposes a calibration framework optimized for wearable sensor systems. Because the number of mounted sensors in a wearable system is limited, the FOV overlap between same-modal sensors is generally kept small for efficient data acquisition. Therefore, previous calibration methods for sensors with high FOV overlap are not suitable for wearable system calibration. The proposed calibration framework is optimized for wearable sensor systems and can be applied to various sensor systems. Sensor bundles are configured based on an interpretation of the FOV overlap between sensors, and system calibration is subsequently conducted for each bundle.
Finally, in this study, a calibration method based on point–plane correspondence is proposed for more efficient system calibration. Previous calibration methods for multi-sensor systems require (i) the coordinates of GCPs or the lengths of control lines, or (ii) processes to extract points or lines on the edges of the calibration targets in the test-bed. In contrast, the proposed method requires neither pre-measured GCP coordinates nor point or line extraction on the edges of the calibration target. Therefore, the proposed method contributes to efficient system calibration. Furthermore, the proposed method has high utility because it is applicable to sensors with low FOV overlap. It is based on bundle adjustment (BA) using the collinearity condition, which is commonly used in optical camera calibration.
This paper describes the research in the following order. Section 2 presents the sensor arrangement method for the backpack-based wearable system used in this study, and describes the actual fabricated sensor system. Section 3 provides a detailed description of the proposed calibration framework. The contents of self-calibration for the optical camera and LiDAR sensor (mounted in the wearable system) in the previous study are briefly introduced, and the details of the proposed system calibration framework are described. Section 4 evaluates the proposed framework through the calibration of a wearable sensor system and the analysis of data fusion results. Finally, Section 5 considers the significance of this study and presents suggestions for future research.

2. Design and Production of Wearable Sensor System

In this study, a wearable sensor system was designed with consideration of efficient data acquisition and the accuracy of system calibration and data fusion. The wearable sensor system was designed in two major steps: first, the platform and the types of sensors were determined; second, the number of sensors and the sensor arrangement were determined through simulation.

2.1. Platform and Sensors

The platform and sensors used for building the wearable sensor system were a backpack platform, optical cameras, and LiDAR. The reasons for selecting the backpack as the platform of the system were as follows. First, the backpack platform can accommodate more sensors than other wearable platforms. Second, the backpack-based wearable system is easier to move through various environments, leaving the wearer's hands free. Moreover, it can be used in a variety of ways owing to its capacity to be mounted, if necessary, on a wheel-based platform with a simple modification.
This study used optical cameras and 3D LiDAR for RGB-D data generation; the selected sensors are listed in Table 1. First, fisheye lens cameras were employed to acquire RGB data efficiently. The advantage of a fisheye camera is its ability to acquire data efficiently through a wide FOV compared with a conventional camera. Therefore, the system in this study was developed using fisheye cameras to minimize the number of mounted sensors. The LiDAR sensors were adopted in the system for accurate 3D depth data acquisition. In other words, because 3D modeling using only optical images could result in low accuracy in environments where feature points are insufficient or similar textures are repeated, a multi-modal sensor system composed of optical cameras and LiDAR was constructed in this study.

2.2. Design of Wearable Sensor System

Sensor placement criteria were prepared for efficient data acquisition and accurate data fusion, and the design of the wearable sensor system was determined accordingly. The wearable sensor system design criteria determined in this study were as follows:
  • Criterion 1. One LiDAR sensor is installed to acquire data about the surrounding environment without gaps, while the other LiDAR is installed horizontally with respect to the ground to enable data fusion using algorithms such as SLAM (Simultaneous Localization And Mapping) or ICP (Iterative Closest Point).
  • Criterion 2. For efficient data acquisition, the overlapping of FOV is minimized between cameras of the same type, thereby allowing data for a wider area to be acquired with one shot.
  • Criterion 3. The FOV of the optical camera and LiDAR allows sufficient overlap to enable efficient data fusion between different types of sensors.
  • Criterion 4. All sensors are positioned to exhibit a sufficient overlapping of FOV with at least one other sensor to ensure system calibration accuracy.
In this study, a wearable sensor system was designed based on the above criteria, followed by a review of the design through simulation. Figure 1 shows the final sensor arrangement and the data acquisition simulation results. As shown in Figure 1a, the sensor system comprised two fisheye lens cameras and two LiDARs. According to Criterion 1, LiDAR1 was installed horizontally and LiDAR2 was installed diagonally on the back of the backpack. The diagonal placement of LiDAR2 was determined through a previous study [14] on LiDAR placement for efficient data acquisition. According to Criterion 2, the fisheye cameras were installed to face the left and right of the backpack, respectively. Figure 1b,c shows the data acquisition simulation results. Here, the red and green crosses in the figure indicate the areas covered by fisheye cameras 1 and 2, respectively, and the cyan and purple dots indicate the areas covered by LiDARs 1 and 2, respectively. The figure shows that the design of the wearable system satisfies each criterion as follows. First, the fisheye cameras were placed such that their FOVs did not overlap, so that image data could be acquired for all areas around the system. Next, the LiDAR sensors seamlessly acquired 3D information of the surrounding area, while point cloud data (cyan dots) that can be used in algorithms such as SLAM was acquired through LiDAR1. Finally, with respect to Criteria 3 and 4, the FOV of camera 1 overlapped with that of LiDARs 1 and 2 to the left of the system's movement direction (yellow arrow), while the FOV of camera 2 overlapped with that of LiDARs 1 and 2 to the right.
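As a concrete illustration of this kind of design check, the short sketch below simulates the angular coverage of the two LiDARs and tests how much of it falls within each fisheye camera's FOV. The mounting angles follow the design above, but the sensor poses, the assumed 185° fisheye FOV, and the coverage measure are illustrative assumptions rather than the simulator actually used in this study.

```python
import numpy as np

def lidar_rays(h_res_deg=0.5, v_fov_deg=30.0, channels=16):
    """Unit ray directions for a 16-channel spinning LiDAR (360 deg x 30 deg FOV)."""
    az = np.deg2rad(np.arange(0.0, 360.0, h_res_deg))
    el = np.deg2rad(np.linspace(-v_fov_deg / 2, v_fov_deg / 2, channels))
    azg, elg = np.meshgrid(az, el)
    return np.stack([np.cos(elg) * np.cos(azg),
                     np.cos(elg) * np.sin(azg),
                     np.sin(elg)], axis=-1).reshape(-1, 3)

def rot_x(deg):
    """Rotation matrix about the x-axis (used for the tilted LiDAR2)."""
    a = np.deg2rad(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def fisheye_coverage(directions, cam_axis, fov_deg=185.0):
    """Fraction of ray directions inside a fisheye camera's circular FOV."""
    cosang = directions @ cam_axis / np.linalg.norm(cam_axis)
    return np.mean(np.arccos(np.clip(cosang, -1, 1)) < np.deg2rad(fov_deg / 2))

# Mounting per the design: LiDAR1 horizontal, LiDAR2 tilted 60 deg on the back,
# cameras facing left (+y) and right (-y).  All poses are illustrative.
rays_l1 = lidar_rays()
rays_l2 = lidar_rays() @ rot_x(60.0).T

for name, axis in [("camera1", np.array([0.0,  1.0, 0.0])),
                   ("camera2", np.array([0.0, -1.0, 0.0]))]:
    print(name,
          "covers %.0f%% of LiDAR1 rays" % (100 * fisheye_coverage(rays_l1, axis)),
          "and %.0f%% of LiDAR2 rays" % (100 * fisheye_coverage(rays_l2, axis)))
```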
Figure 2 shows the backpack-based wearable sensor system that was produced based on the design. First, a reference LiDAR (LiDAR1 in Figure 1) was installed at the top of the platform to obtain depth data horizontally. The second LiDAR (LiDAR2 in Figure 1) was installed on the back of the platform in a 60° diagonal orientation, with a low overlap ratio relative to LiDAR1. Two fisheye lens cameras were installed at the top-left (camera 1) and top-right (camera 2) of the platform to acquire images of the left and right sides of the platform, respectively.

3. Calibration Framework for Wearable Sensor Systems

The calibration framework for the wearable sensor system proceeded in two stages: sensor self-calibration and system calibration. In the sensor self-calibration phase, the IOPs and distortion parameters of the fisheye cameras and LiDARs embedded in the system were calculated. Next, in the system calibration step, the relative orientation parameters (ROPs) that describe the relative geometric positions between the sensors were derived.
Self-calibration of the onboard sensors was performed using the methodologies proposed in the authors' previous studies. The self-calibration of the fisheye cameras was performed using the approach proposed by Choi et al. [24], and the method proposed by Kim et al. [30] was applied for the self-calibration of the LiDARs. Further details regarding self-calibration of the fisheye camera and LiDAR can be found in the related previous studies [22,23,24,30].
In the proposed system calibration, the relative location between the sensors exhibiting a relatively high overlap ratio was first estimated through system calibration, based on which the relative locations for all sensors were derived. The method comprised three steps: (i) sensor-bundle configuration, (ii) system calibration for each sensor-bundle, and (iii) global optimization for the entire sensor system. First, in the sensor-bundle configuration stage, a camera and LiDAR with a relatively high FOV overlap ratio were checked and set as one sensor-bundle. Next, system calibration was performed for each sensor-bundle set employing the point–plane correspondence-based approach. Finally, in the global optimization step, the ROPs of all other sensors based on the reference sensor were calculated using ROPs between sensors derived through system calibration.

3.1. Sensor-Bundle Configuration

In this study, a multi-sensor bundle was constructed by selecting sensors with a high FOV overlap ratio among the sensors installed in the multi-sensor system. If the FOV overlap ratio between a sensor and the reference sensor is insufficient, the accuracy of the estimated ROPs can be reduced owing to insufficient control feature correspondences. Therefore, in this study, calibration of sensors with high FOV overlap was carried out first, and the final ROPs were then obtained more stably using those results.
Figure 3 describes the sensor-bundle configuration method for the multi-sensor system used in this study. Figure 3a shows the ROPs to be derived for the wearable system. LiDAR1 was set as the reference sensor, and the ROPs of the two cameras and LiDAR2 were the targets to be estimated from system calibration. However, for the manufactured wearable system, extracting common control objects and securing the accuracy of the ROPs was challenging owing to the very low FOV overlap ratio between LiDAR1 and LiDAR2. Therefore, in this study, four sensor bundles (LiDAR1-Camera1, LiDAR1-Camera2, LiDAR2-Camera1, and LiDAR2-Camera2), each composed of a LiDAR and an optical camera with a high FOV overlap ratio, were configured as shown in Figure 3b. Subsequently, in the system calibration step, the ROPs of each sensor bundle ($rop_{l_1}^{c_1}$, $rop_{l_1}^{c_2}$, $rop_{l_2}^{c_1}$, and $rop_{l_2}^{c_2}$) were estimated, whereas in the global optimization step, the final ROPs ($ROP_{l_1}^{c_1}$, $ROP_{l_1}^{c_2}$, and $ROP_{l_1}^{l_2}$) were derived using the ROPs of the sensor bundles.
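A minimal sketch of this bundle-selection rule is given below; the pairwise overlap ratios and the threshold are illustrative assumptions, not measured values for the manufactured system.

```python
# Bundle selection by FOV overlap ratio (illustrative numbers only).
overlap = {
    ("lidar1", "lidar2"):  0.05,   # same-modal pair: too little overlap
    ("lidar1", "camera1"): 0.45,
    ("lidar1", "camera2"): 0.45,
    ("lidar2", "camera1"): 0.40,
    ("lidar2", "camera2"): 0.40,
}
THRESHOLD = 0.2  # assumed minimum overlap ratio for a usable bundle

bundles = [pair for pair, ratio in overlap.items() if ratio >= THRESHOLD]
print("sensor bundles:", bundles)
# -> the four LiDAR-camera pairs; the LiDAR1-LiDAR2 relation is instead
#    recovered later in the global optimization step.
```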

3.2. Point-Plane Correspondences Based System Calibration for Sensor Bundles

3.2.1. Point-Plane Correspondence of the Proposed System Calibration Method

The system calibration approach proposed in this study is based on the point–plane correspondence condition that "all control points estimated using camera images should be on the plane extracted from the LiDAR point cloud." The manner in which the point–plane correspondences of the proposed method determine the geometric relationship between the sensor data is indicated in Figure 4, where the green, black, and blue dots represent the LiDAR point cloud corresponding to the yellow plane, the GCPs, and the image control points corresponding to the GCPs, respectively; the white dots are the initial GCPs to be input into the bundle adjustment, and the red ellipses are the error ellipses of the initial GCPs.
When a planar target with GCPs is captured by the LiDAR and a camera, the GCPs (black dots) positioned on the target plane lie on a planar patch (yellow rectangle) obtained from the LiDAR data. Therefore, any point on the planar patch (white point) can be set as the initial coordinates of a GCP, and each such initial GCP is assigned an error ellipse with a large error in the in-plane (U–V) directions and a very small error in the direction normal to the plane, as shown in Figure 4. Exterior orientation parameters (EOPs) of each image, expressed in the LiDAR coordinate system, can then be derived by performing BA using the collinearity condition between the image control points (blue dots) and the initial GCPs; the coordinates of the initial GCPs are selected arbitrarily on the plane and are adjusted during the BA process by reflecting the size of the error ellipse in the BA weight matrix.
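A minimal sketch of how such an anisotropic error ellipse could be encoded as a per-GCP weight matrix is shown below; the in-plane and out-of-plane standard deviations are illustrative assumptions, not the values used in this study.

```python
import numpy as np

def gcp_weight_matrix(plane_normal, sigma_in_plane=0.5, sigma_normal=0.002):
    """Weight (inverse covariance) matrix for an initial GCP placed on a planar
    patch: loose in the two in-plane directions, tight along the plane normal.
    The sigma values (metres) are illustrative assumptions."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    # Build an orthonormal basis (u, v, n) with u, v spanning the plane.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    R = np.column_stack([u, v, n])                 # plane frame -> LiDAR frame
    cov_plane = np.diag([sigma_in_plane**2, sigma_in_plane**2, sigma_normal**2])
    cov_lidar = R @ cov_plane @ R.T                # rotate covariance into LiDAR frame
    return np.linalg.inv(cov_lidar)                # weight = inverse covariance

# Example: a wall whose normal points along +x in the LiDAR frame.
W = gcp_weight_matrix([1.0, 0.0, 0.0])
print(np.round(W, 1))   # large weight along the normal, small weights in the plane
```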

3.2.2. Application of the Proposed System Calibration Method

In the proposed system calibration methodology, three or more planes installed perpendicular to each other are required to accurately adjust the coordinates of the GCPs while securing BA accuracy. Figure 5 shows three patches arranged perpendicular to each other, where the white and black dots show the positions of the initial GCPs and the actual GCPs, respectively. The in-plane position of each GCP is limited by the constraint that "GCPs must be located on the corresponding plane" applied to the GCPs on the other patches. For example, the GCPs in patches 2 and 3 cannot be adjusted in the directions perpendicular to their planes ($z_{p2}$ and $z_{p3}$, respectively), as each of them must be located on its plane, and the normal directions of patches 2 and 3 coincide with the in-plane directions ($x_{p1}$ and $y_{p1}$) of patch 1, respectively. Hence, the constraint that the GCPs of patches 2 and 3 must be on each patch serves as a condition for positioning the GCPs of patch 1 within its plane.
The proposed system calibration method was applied to each sensor bundle as shown in Figure 6, where the green dots are the LiDAR point cloud at epoch m and the black dots are the GCPs. When the proposed system calibration was performed using all n images and the point cloud obtained at epoch m, the EOPs of each image acquired over the n epochs were calculated with respect to the LiDAR coordinate system of epoch m. Moreover, as the relative position between the camera and the LiDAR is fixed, the EOPs of image m ($EOPs_{l}^{m}$) are the same as the ROPs ($ROPs_{l}^{c}$) between the LiDAR and the camera. Therefore, through the point–plane correspondence-based approach, system calibration between the LiDAR and the camera can be performed and the ROPs between the sensors can be estimated.

3.2.3. Advantages of the Proposed System Calibration Method

The proposed point–plane correspondence-based approach offers four advantages as follows:
  • First, the proposed methodology only requires information regarding the target plane from the point cloud. It does not require an additional complicated process to accurately extract the edge points or lines of the target plane and is less affected by the density of the point cloud.
  • Second, arbitrarily determined initial GCPs employing the BA process are used instead of the coordinates of previously measured GCPs. Therefore, the need for prior work measuring the coordinates of GCPs using a total station, etc., is eliminated and the time required to perform system calibration can be reduced.
  • Third, because the proposed method is based on BA using the collinearity condition, implementing it utilizing various existing open-source codes is easy.
  • Fourth, the proposed method offers the advantage that when sensor data for part of the target plane is acquired, information on the entire target can be used for calibration.
Figure 7 shows further specific details for the fourth advantage of the proposed system calibration method. In the proposed method, the coordinates of the initial GCPs corresponding to the control points (red and blue points in Figure 7a) were set to random points on the plane (white points in Figure 7a), which were then adjusted to their actual positions through BA. Therefore, control points outside the plane patch without corresponding point clouds (red dots in Figure 7a) can also be used for calibration. In particular, the above advantage becomes more prominent when the FOV overlap ratio between the LiDAR and the camera is low, as shown in Figure 7b. The proposed method utilizes all the control points shown in Figure 7b (red and blue dots), thereby allowing more stable calibration than when only GCPs inside the planar patch (blue dots) are used.

3.3. Global Optimization for Wearable System

In the global optimization step, the ROPs of all other sensors with respect to the reference sensor were derived and adjusted using the ROPs of each sensor bundle estimated in the system calibration step. In the multi-sensor system used in this study, the sensor bundles were set as shown in Figure 3b, and the ROPs of each sensor bundle ($rop_{l_1}^{c_1}$, $rop_{l_2}^{c_1}$, $rop_{l_1}^{c_2}$, and $rop_{l_2}^{c_2}$) were derived in the calibration stage, where $rop_{l_1}^{c_1}$-$rop_{l_2}^{c_1}$-$ROP_{l_1}^{l_2}$ and $rop_{l_1}^{c_2}$-$rop_{l_2}^{c_2}$-$ROP_{l_1}^{l_2}$ must satisfy the closed-loop condition. These two closure conditions can be expressed as Equations (1)–(4), where $T_{l_1}^{l_2}$ and $M_{l_1}^{l_2}$ are the translation vector and rotation matrix of $ROP_{l_1}^{l_2}$, and $T_{l_i}^{c_j}$ and $M_{l_i}^{c_j}$ are the translation vector and rotation matrix of the ROPs of the j-th camera with respect to the i-th LiDAR, respectively. In this study, while deriving $ROP_{l_1}^{l_2}$ based on the least squares method using Equations (1)–(4), adjustments were made to the ROPs estimated in the calibration step, and the variances of the orientation parameters calculated in the system calibration step were used as the weights for each ROP.
$$M_{l_1}^{l_2}\left(T_{l_1}^{c_1} - T_{l_1}^{l_2}\right) - T_{l_2}^{c_1} = 0 \quad (1)$$
$$M_{l_1}^{c_1} - M_{l_2}^{c_1} M_{l_1}^{l_2} = 0 \quad (2)$$
$$M_{l_1}^{l_2}\left(T_{l_1}^{c_2} - T_{l_1}^{l_2}\right) - T_{l_2}^{c_2} = 0 \quad (3)$$
$$M_{l_1}^{c_2} - M_{l_2}^{c_2} M_{l_1}^{l_2} = 0 \quad (4)$$
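A minimal sketch of this loop-closure step is given below. It follows Equations (1)–(4) as written above (solving each camera loop for a candidate $M_{l_1}^{l_2}$, $T_{l_1}^{l_2}$ and then combining the candidates), but uses a simple inverse-variance weighted average instead of the full least-squares adjustment, so it should be read as an illustration of the idea rather than the exact procedure used in this study.

```python
import numpy as np

def rop_l1_l2_from_loop(M_l1_c, T_l1_c, M_l2_c, T_l2_c):
    """One candidate ROP between LiDAR1 and LiDAR2 from the loop
    LiDAR1 -> camera j -> LiDAR2, following Equations (1) and (2)."""
    M = np.linalg.inv(M_l2_c) @ M_l1_c      # Eq. (2): M_l1^cj = M_l2^cj M_l1^l2
    T = T_l1_c - M.T @ T_l2_c               # Eq. (1); M.T = M^-1 for a rotation
    return M, T

def average_rotations(Ms):
    """Project the mean of rotation matrices back onto SO(3) via SVD."""
    U, _, Vt = np.linalg.svd(sum(Ms) / len(Ms))
    R = U @ Vt
    if np.linalg.det(R) < 0:                # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return R

def global_optimization(loops, weights):
    """loops: one (M_l1_cj, T_l1_cj, M_l2_cj, T_l2_cj) tuple per camera j;
    weights: per-loop weights, e.g. inverse variances from the bundle calibration."""
    cands = [rop_l1_l2_from_loop(*loop) for loop in loops]
    M = average_rotations([c[0] for c in cands])
    w = np.asarray(weights, dtype=float)
    T = sum(wi * c[1] for wi, c in zip(w, cands)) / w.sum()
    return M, T
```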

3.4. Test Bed Configuration and Data Acquisition for Calibration

In this study, using the proposed methodology, a test-bed was built to perform system calibration for the wearable sensor system, and data for calibration were acquired. Figure 8a shows the calibration target used for the test-bed construction. Each calibration target consisted of two planes forming an angle of 120°, with multiple grid patterns to be used as control points printed on each plane. Figure 8b shows the test-bed constructed using the calibration targets. A total of nine calibration targets (18 planes in total) were used in the test-bed configuration, and the target planes were installed at various angles to one another to ensure the accuracy of system calibration. Figure 9 shows the location and direction where the data for calibration were acquired. The red dots and blue arrows in the figure indicate the optical camera positions and shooting directions of each sensor bundle.
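Because the proposed calibration only needs the planar patches of these targets from the LiDAR point cloud (Section 3.2.3), a minimal sketch of extracting such a patch from an already-segmented point cluster is given below; the segmentation itself (e.g., by RANSAC or manual cropping) is assumed and not shown.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an Nx3 point cluster via SVD.
    Returns (unit normal, centroid); the plane is n . (x - c) = 0."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    # The smallest right singular vector of the centered points is the normal.
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    n = vt[-1]
    return n / np.linalg.norm(n), c

def point_to_plane_distance(points, normal, centroid):
    """Signed distances used both for the point-plane constraint and as the
    fusion error e_i in Section 4."""
    return (np.asarray(points) - centroid) @ normal

# Toy example: a slightly noisy vertical wall patch at x = 2 m.
rng = np.random.default_rng(0)
wall = np.column_stack([np.full(200, 2.0) + rng.normal(0, 0.005, 200),
                        rng.uniform(-1, 1, 200), rng.uniform(0, 2, 200)])
n, c = fit_plane(wall)
print("normal ~", np.round(n, 3), " rms dist:",
      round(float(np.sqrt(np.mean(point_to_plane_distance(wall, n, c) ** 2))), 4))
```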

4. Evaluation of System Calibration Results

In this section, the results of system calibration through the proposed framework are presented and evaluated. First, the performance of the proposed point–plane correspondence-based approach is presented through the precision and accuracy of the sensor-bundle ROPs. Next, a visual analysis of the accuracy of the estimated ROPs between the LiDARs and the cameras is performed. Finally, the accuracy of the ROPs between the LiDARs, estimated via global optimization, is evaluated in comparison with other approaches.
The results of system calibration for each sensor bundle using the proposed method are presented in Table 2. In this study, the proposed approach was judged to exhibit high accuracy based on the standard deviations of the estimated parameters, the residuals of the image control points, and the correlations between the orientation parameters. Table 3 presents the standard deviations of the estimated ROPs and the residuals of the image control points. The maximum standard deviations of the estimated ROPs were found for X0 and κ of $rop_{l_2}^{c_2}$, at 4.95 mm and 0.0119°, respectively. Further, the RMS residuals of the image control points ranged from 0.48 to 0.59 pixels. In addition, the correlations between the ROPs were generally low (Table 4). The highest correlation (0.95) was that between Y0 and ω of $rop_{l_1}^{c_1}$; however, it was not at a level that had a significant effect on the accuracy of the system calibration.
Global optimization was performed using the estimated ROPs of each sensor bundle (Table 2) and the standard deviations of the ROPs (Table 3). The results are presented in Table 5, where $ROP_{l_1}^{c_1}$ and $ROP_{l_1}^{c_2}$ show the corrected ROPs of cameras 1 and 2, respectively, and $ROP_{l_1}^{l_2}$ shows the ROPs between LiDAR 1 and LiDAR 2 derived through global optimization. In this study, the fusion data generated using the final ROPs were analyzed to verify the accuracy of the final system calibration results.
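Generating such fusion data amounts to projecting each LiDAR point into the fisheye images with the final ROPs and sampling the image colour at the projected pixel. The sketch below illustrates this under the equisolid-angle projection model listed in Table 1; lens distortion and the calibrated IOPs are omitted, and the focal length and principal point values are illustrative assumptions.

```python
import numpy as np

def project_equisolid(points_lidar, M_l_c, T_l_c, f_mm=3.15, pix_mm=0.00345,
                      cx=1224.0, cy=1224.0, width=2448, height=2448):
    """Project LiDAR points into a fisheye image using the camera-to-LiDAR ROPs
    (M_l_c, T_l_c).  Equisolid-angle model r = 2 f sin(theta/2); distortion is
    omitted and f/cx/cy are illustrative, not the calibrated IOPs."""
    # LiDAR frame -> camera frame (inverse of the camera-to-LiDAR transform;
    # for row vectors, (p - T) @ M applies M^T to each point).
    pts_cam = (np.asarray(points_lidar, dtype=float) - T_l_c) @ M_l_c
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)            # angle from the optical axis
    r = 2.0 * f_mm * np.sin(theta / 2.0) / pix_mm    # radial distance in pixels
    phi = np.arctan2(y, x)
    u = cx + r * np.cos(phi)
    v = cy + r * np.sin(phi)
    in_image = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return np.column_stack([u, v]), in_image

# Usage: colours = image[v.astype(int), u.astype(int)] for the in_image points.
```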
First, the accuracy of the final ROPs was analyzed through a visual analysis of the fusion accuracy between the camera images and the point clouds. To clearly determine the accuracy of the fusion data, an easily identifiable color was assigned to each main plane of the image (a relatively wide plane whose edges are easily identifiable in the image), and the fusion accuracy was checked at the corners where the planes meet. The visual analysis of fusion accuracy was performed using sensor data acquired at two test-sites. Figure 10 shows the optical images acquired at the test-sites; Figure 10a–d are the original and recolored images obtained at test-site 1, and Figure 10e–h are the original and recolored images obtained at test-site 2.
Figure 11a,b show the data fusion results for test-sites 1 and 2, respectively. At both test-sites, the colors were properly assigned to the point clouds corresponding to each plane, and overall accurate data fusion was achieved. At test-site 1, a data fusion error was initially identified at position 1. However, considering that the error appeared on both sides of the wall simultaneously, it is likely due to a LiDAR multi-path error at the edge rather than an ROP error. At test-site 2, an error appeared at position 2 in Figure 11b, and the level of error was approximately 1.5 cm at a distance of approximately 4 m. However, as the error appeared along the edge of the boundary between the white and yellow planes, it did not appear to significantly affect the overall accuracy of the data fusion.
To clarify the accuracy of the ROPs between the LiDARs ($ROP_{l_1}^{l_2}$), the fusion error between point clouds was evaluated based on the distances between the point clouds acquired from LiDARs 1 and 2 that correspond to the same plane. With more accurate ROPs between the LiDARs, the planes scanned simultaneously by both LiDARs are visualized as the same plane after data fusion. Conversely, with less accurate ROPs, the two point clouds exhibit different plane shapes and the distance between the planes becomes larger. Figure 12 shows the calculation methodology for the fusion error between point clouds. The gray and green points are the point clouds of LiDAR 1 and LiDAR 2, respectively, and the black lines are planar patches extracted from the point cloud of LiDAR 1. In addition, $P_i$ is the i-th point of LiDAR 2, $D_i$ is the distance (m) between LiDAR 1 and $P_i$, and $e_i$ is the distance from $P_i$ to the planar patch. The fusion accuracy between the LiDARs was evaluated based on the mean bias error (MBE), the mean absolute error (MAE), and the root mean square error (RMSE) of $e_i$ and $\tilde{e}_i$; $\tilde{e}_i$ was calculated using Equation (5) and represents $e_i$ normalized to a distance of 1 m between the reference sensor (LiDAR 1) and $P_i$.
$$\tilde{e}_i = \frac{e_i}{D_i} \quad (5)$$
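A minimal sketch of these fusion-error metrics is given below; the plane parameters and the synthetic points are illustrative, and in practice the plane would be the patch extracted from the LiDAR 1 point cloud as in Figure 12.

```python
import numpy as np

def fusion_error_metrics(points_l2, plane_normal, plane_point,
                         lidar1_origin=np.zeros(3)):
    """MBE/MAE/RMSE of the signed point-to-plane errors e_i of LiDAR2 points
    against a planar patch from LiDAR1, plus the same metrics of the
    distance-normalized errors e_i / D_i from Equation (5)."""
    pts = np.asarray(points_l2, dtype=float)
    n = np.asarray(plane_normal, dtype=float) / np.linalg.norm(plane_normal)
    e = (pts - plane_point) @ n                      # signed plane distance e_i
    D = np.linalg.norm(pts - lidar1_origin, axis=1)  # range D_i from LiDAR1
    e_norm = e / D                                   # error per metre of range
    stats = lambda x: {"MBE": x.mean(), "MAE": np.abs(x).mean(),
                       "RMSE": np.sqrt((x ** 2).mean())}
    return stats(e), stats(e_norm)

# Example with synthetic points roughly 4 m away and ~1.5 cm off the plane x = 4.
pts = np.array([[4.0, 0.1, 0.0], [4.0, -0.2, 0.5], [4.0, 0.3, 1.0]]) \
      + np.array([0.015, 0.0, 0.0])
abs_stats, norm_stats = fusion_error_metrics(pts, [1, 0, 0], [4.0, 0.0, 0.0])
print(abs_stats, norm_stats)
```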
In the accuracy evaluation of the ROPs between the LiDARs, the ROPs obtained through the proposed method ($ROP_{l_1}^{l_2}$) were compared with the nominal ROPs and with ROPs derived through ICP (iterative closest point). Table 6 presents the nominal ROPs ($ROP_{nom}$) used in the wearable system design and the ROPs obtained through ICP ($ROP_{icp}$). To evaluate the accuracy of the ROPs between the LiDARs, planes located in the region where the FOVs of LiDARs 1 and 2 overlap were used. Figure 13 shows the positions of the planes used (No. 1–4).
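For reference, the sketch below shows one way such an ICP-based estimate could be obtained with the Open3D library, initialized from the nominal ROPs; the parameter values and the point-to-point variant are assumptions for illustration and are not necessarily the procedure used in this study.

```python
import numpy as np
import open3d as o3d

def icp_rops(points_l1, points_l2, rop_nominal_4x4, max_dist=0.1):
    """Estimate the LiDAR2->LiDAR1 ROPs by point-to-point ICP, initialized with
    the nominal (design) ROPs.  points_* are Nx3 numpy arrays of a shared scene."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_l2))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_l1))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, rop_nominal_4x4,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation          # 4x4 homogeneous ROP estimate

# rop_nominal_4x4 would be built from the nominal values in Table 6
# (translations converted to metres, rotations applied about the LiDAR axes).
```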
Table 7 lists the fusion errors between point clouds by ROP type and test-site location, confirming that the most accurate ROPs were derived using the proposed methodology. The overall error based on $\widetilde{RMSE}$ was the lowest for $ROP_{l_1}^{l_2}$, followed by $ROP_{nom}$ and $ROP_{icp}$. In particular, the fusion data using $ROP_{l_1}^{l_2}$ derived with the proposed method showed the lowest fusion error at all locations. In contrast, the fusion errors for $ROP_{nom}$ and $ROP_{icp}$ showed very similar absolute values of MBE and MAE at each location, implying that with these two ROPs the point clouds were not fused onto the same plane but were arranged adjacent to each other as different planes.
Figure 14 shows the fusion data for each location of the test-site, visually confirming the results in Table 7. When data fusion was performed using $ROP_{l_1}^{l_2}$, the point clouds obtained from both LiDARs formed a single plane regardless of location. In contrast, when $ROP_{nom}$ or $ROP_{icp}$ was used, the point clouds obtained from the two LiDARs could not be fused into one plane and were instead expressed as two different planes.

5. Conclusions

This study aimed to propose a framework for building a wearable sensor system optimized for complex urban structures. Thus, this study presents a workflow for wearable sensor system design and a calibration framework optimized for wearable sensor systems. Furthermore, as part of the framework, a calibration method based on point–plane correspondences is proposed for efficient system calibration of wearable systems.
The wearable sensor system design workflow was based on criteria prepared in consideration of the efficiency of data acquisition and the accuracy of system calibration and data fusion. In this study, a wearable sensor system was manufactured through the proposed design workflow, and the manufactured sensor system efficiently acquired images and point cloud data about the surrounding environment.
The proposed framework and system calibration method contributed to the accurate calibration and data fusion of the wearable sensor system, and showed better performance than the other methods. The advantages of the proposed system calibration method stem from (i) a calibration process designed in two steps, and (ii) a calibration method based on point–plane correspondences. First, through the two-step system calibration, ROPs between sensors with low FOV overlap can be derived more stably: instead of directly calculating ROPs between same-modal sensors with a low FOV overlap ratio, which reduces accuracy, ROPs between a LiDAR and an optical camera with a relatively high FOV overlap ratio are used. Second, the system calibration method proposed in this study accurately and efficiently derived ROPs between the sensors of the wearable system. The proposed method does not require any prior work to acquire GCP coordinates, and it is easy to implement as code or a program based on the conventional collinearity equation by modifying the weight matrix setting method. In addition, it offers the advantage that it does not require estimation of 3D points or line segments located at the edges of the target object in the LiDAR point cloud. The data fusion results obtained through the proposed framework showed better performance in a comparative evaluation with the results of methods using ICP and nominal values.
In conclusion, this study offers a method of efficient and accurate data acquisition using a wearable system suitable for 3D modeling in a complex urban environment. The proposed framework contributes to accurate data fusion through calibration optimized for wearable multi-sensor systems. Furthermore, the proposed point–plane correspondences-based system calibration method supports efficient calibration because it does not require a GCP coordinate measurement process. This advantage of the point–plane correspondences-based method contributes to the implementation of in situ system calibration by using various planar objects of the site. On the other hand, the limitation of the proposed method is that it requires three or more planar targets installed perpendicular to each other. Therefore, when the directions of planar objects for calibration are similar, the performance of the proposed system calibration method may decline. However, for complex urban environments, it is expected that the limitations of the proposed method will have little effect on the accuracy of system calibration, because the urban environment generally includes planar objects in various directions.
This study will contribute to accurate 3D modeling of urban spaces, the generation of high-definition 3D road maps, and the development of smart cities, in that it supports efficient 3D data acquisition and fusion using wearable sensor systems. In the future, an in situ system calibration methodology and an accurate 3D modeling methodology for the wearable sensor system will be developed by modifying, evolving, and automating the proposed method, based on the advantage that the proposed method does not require prior work to acquire GCP coordinates.

Author Contributions

Conceptualization, K.C.; methodology, K.C.; software, K.C.; validation, K.C. and C.K.; formal analysis, K.C. and C.K.; investigation, K.C.; resources, C.K.; data curation, K.C.; writing—original draft preparation, K.C.; writing—review and editing, C.K.; visualization, K.C.; supervision, C.K.; project administration, C.K.; funding acquisition, C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a Korea Agency for Infrastructure Technology Advancement (KAIA) grant under the Digital Land Information Technology Development R&D program, funded by the Ministry of Land, Infrastructure and Transport (MOLIT) (Grant RS-2022-00142501).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported by INHA UNIVERSITY Research Grant.

Conflicts of Interest

The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Danilina, N.; Slepnev, M.; Chebotarev, S. Smart city: Automatic reconstruction of 3D building models to support urban development and planning. MATEC Web Conf. 2018, 251, 03047. [Google Scholar] [CrossRef]
  2. Anbari, S.; Majidi, B.; Movaghar, A. 3D modeling of urban environment for efficient renewable energy production in the smart city. In Proceedings of the 2019 7th Iranian Joint Congress on Fuzzy and Intelligent Systems, Bojnord, Iran, 29–31 January 2019. [Google Scholar]
  3. Gong, Z.; Li, J.; Luo, Z.; Wen, C.; Wang, C.; Zelek, J. Mapping and semantic modeling of underground parking lots using a backpack LiDAR system. IEEE Trans. Intell. Transp. Syst. 2019, 22, 734–746. [Google Scholar] [CrossRef]
  4. Hämäläinen, M. Smart city development with digital twin technology. In Proceedings of the 33rd Bled eConference-Enabling Technology for a Sustainable Society, Online Conference Proceedings, Bled, Slovenia, 28–29 June 2020. [Google Scholar]
  5. Huhle, B.; Jenke, P.; Straßer, W. On-the-fly scene acquisition with a handy multi-sensor system. Int. J. Intell. Syst. Technol. Appl. 2008, 5, 255–263. [Google Scholar] [CrossRef]
  6. Haala, N.; Jan, B. A multi-sensor system for positioning in urban environments. ISPRS J. Photogramm. Remote Sens. 2003, 58, 31–42. [Google Scholar] [CrossRef]
  7. Guidi, G.; Russo, M.; Ercoli, S.; Remondino, F.; Rizzi, A.; Menna, F. A multi-resolution methodology for the 3D modeling of large and complex archeological areas. Int. J. Archit. Comput. 2009, 7, 39–55. [Google Scholar] [CrossRef]
  8. Shim, H.; Adelsberger, R.; Kim, J.D.; Rhee, S.-M.; Rhee, T.; Sim, J.-Y.; Gross, M.; Kim, C. Time-of-flight sensor and color camera calibration for multi-view acquisition. Vis. Comput. 2012, 28, 1139–1151. [Google Scholar] [CrossRef]
  9. Nair, R.; Ruhl, K.; Lenzen, F.; Meister, S.; Schäfer, H.; Garbe, C.S.; Eisemann, M.; Magnor, M.; Kondermann, D. A survey on time-of-flight stereo fusion. In Time-of-Flight and Depth Imaging: Sensors, Algorithms, and Applications; Grzegorzek, M., Theobalt, C., Koch, R., Kolb, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8200, pp. 105–127. [Google Scholar]
  10. Chahine, G.; Vaidis, M.; Pomerleau, F.; Pradalier, C. Mapping in unstructured natural environment: A sensor fusion framework for wearable sensor suites. Appl. Sci. 2001, 3, 571. [Google Scholar] [CrossRef]
  11. Di Filippo, A.; Sánchez-Aparicio, L.J.; Barba, S.; Martín-Jiménez, J.A.; Mora, R.; González Aguilera, D. Use of a wearable mobile laser system in seamless indoor 3D mapping of a complex historical site. Remote Sens. 2018, 10, 1897. [Google Scholar] [CrossRef]
  12. Cabo, C.; Del Pozo, S.; Rodríguez-Gonzálvez, P.; Ordóñez, C.; Gonzalez-Aguilera, D. Comparing terrestrial laser scanning (TLS) and wearable laser scanning (WLS) for individual tree modeling at plot level. Remote Sens. 2018, 10, 540. [Google Scholar] [CrossRef]
  13. Liu, H.; Liu, R.; Yang, K.; Zhang, J.; Peng, K.; Stiefelhagen, R. Hida: Towards holistic indoor understanding for the visually impaired via semantic instance segmentation with a wearable solid-state lidar sensor. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 11–17 October 2021; pp. 1780–1790. [Google Scholar]
  14. Chung, M.; Kim, C.; Choi, K.; Chung, D.; Kim, Y. Development of LiDAR simulator for backpack-mounted mobile indoor mapping system. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2017, 35, 91–102. [Google Scholar]
  15. Brown, D.C. Decentering distortion of lenses. Photogramm. Eng. Remote Sens. 1966, 32, 444–462. [Google Scholar]
  16. Beyer, H.A. Accurate calibration of CCD-cameras. In Proceedings of the 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, IL, USA, 15–18 June 1992. [Google Scholar]
  17. Fraser, C.S. Digital camera self-calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159. [Google Scholar] [CrossRef]
  18. Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. Int. Arch. Photogramm. Remote Sen. Spat. Inf. Sci. 2006, 36, 266–272. [Google Scholar]
  19. Kannala, J.; Brandt, S.S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1335–1340. [Google Scholar] [CrossRef] [PubMed]
  20. Miyamoto, K. Fish Eye Lens. J. Opt. Soc. Am. 1964, 54, 1060–1061. [Google Scholar] [CrossRef]
  21. Abraham, S.; Förstner, W. Fish-eye-stereo calibration and epipolar rectification. ISPRS J. Photogramm. Remote Sens. 2005, 59, 278–288. [Google Scholar] [CrossRef]
  22. Choi, K.H.; Yongil, K.; Changjae, K. Analysis of Fish-Eye Lens Camera Self-Calibration. Sensors 2019, 19, 1218. [Google Scholar] [CrossRef]
  23. Choi, K.H.; Yongmin, K.; Changjae, K. Correlation Analysis of Fish-eye Lens Camera for Acquiring Reliable Orientation Parameters. Sens. Mater. 2019, 31, 3885–3897. [Google Scholar] [CrossRef]
  24. Choi, K.H.; Kim, C. Proposed New AV-Type Test-Bed for Accurate and Reliable Fish-Eye Lens Camera Self-Calibration. Sensors 2021, 21, 2776. [Google Scholar] [CrossRef]
  25. Lichti, D.D. Error modelling, calibration and analysis of an AM–CW terrestrial laser scanner system. ISPRS J. Photogramm. Remote Sens. 2007, 61, 307–324. [Google Scholar] [CrossRef]
  26. Atanacio-Jiménez, G.; Gonzalez-Barbosa, J.-J.; Hurtado-Ramos, J.B.; Ornelas-Rodríguez, F.J.; Jiménez-Hernández, H.; García-Ramirez, T.; González-Barbosa, R. Lidar velodyne hdl-64e calibration using pattern planes. Int. J. Adv. Robot. Syst. 2011, 8, 59. [Google Scholar] [CrossRef]
  27. Chan, T.O.; Lichti, D.D.; Belton, D. A rigorous cylinder-based self-calibration approach for terrestrial laser scanners. ISPRS J. Photogramm. Remote Sens. 2015, 99, 84–99. [Google Scholar] [CrossRef]
  28. Glennie, C.; Lichti, D.D. Static calibration and analysis of the Velodyne HDL-64E S2 for high accuracy mobile scanning. Remote Sens. 2010, 2, 1610–1624. [Google Scholar] [CrossRef] [Green Version]
  29. Glennie, C.L.; Kusari, A.; Facchin, A. Calibration and Stability Analysis of the Vlp-16 Laser Scanner. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 9, 55–60. [Google Scholar] [CrossRef]
  30. Kim, H.S.; Kim, Y.; Kim, C.; Choi, K.H. Kinematic In Situ Self-Calibration of a Backpack-Based Multi-Beam LiDAR System. Appl. Sci. 2021, 11, 945. [Google Scholar] [CrossRef]
  31. Choi, K.; Kim, C.; Kim, Y. Comprehensive Analysis of System Calibration between Optical Camera and Range Finder. ISPRS Int. J. Geo-Inf. 2018, 7, 188. [Google Scholar] [CrossRef]
  32. Chen, C.; Yang, B.; Song, S. Low Cost and Efficient 3d Indoor Mapping Using Multiple Consumer Rgb-D Cameras. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2016, 41, 169–174. [Google Scholar] [CrossRef]
  33. Horaud, R.; Hansard, M.; Evangelidis, G.; Ménier, C. An overview of depth cameras and range scanners based on time-of-flight technologies. Mach. Vis. Appl. 2016, 27, 1005–1020. [Google Scholar] [CrossRef]
  34. Pandey, G.; McBride, J.R.; Savarese, S.; Eustice, R.M. Automatic extrinsic calibration of vision and lidar by maximizing mutual information. J. Field Robot. 2015, 32, 696–722. [Google Scholar] [CrossRef]
  35. Taylor, Z.; Nieto, J.; Johnson, D. Multi-Modal Sensor Calibration Using a Gradient Orientation Measure. J. Field Robot. 2015, 32, 675–695. [Google Scholar] [CrossRef]
  36. Zhang, Y.; Luo, C.; Liu, J. Walk&sketch: Create floor plans with an RGB-D camera. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012. [Google Scholar]
  37. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [Google Scholar] [CrossRef] [PubMed]
  38. Wang, W.; Yamakawa, K.; Hiroi, K.; Kaji, K.; Kawaguchi, N. A Mobile System for 3D Indoor Mapping Using LiDAR and Panoramic Camera. Spec. Interest Group Tech. Rep. IPSJ 2015, 1, 337–340. [Google Scholar]
  39. Staranowicz, A.N.; Brown, G.R.; Morbidi, F.; Mariottini, G.-L. Practical and accurate calibration of RGB-D cameras using spheres. Comput. Vis. Image Underst. 2015, 137, 102–114. [Google Scholar] [CrossRef]
  40. Veľas, M.; Spanel, M.; Materna, Z.; Herout, A. Calibration of rgb camera with velodyne lidar. In WSCG 2014 Communication Papers Proceedings; Union Agency: Plzeň, Česko, 2014; pp. 135–144. [Google Scholar]
  41. Pereira, M.; Santos, V.; Dias, P. Automatic calibration of multiple LIDAR sensors using a moving sphere as target. In Robot 2015: Second Iberian Robotics Conference; Reis, L., Moreira, A., Lima, P., Montano, L., Muñoz-Martinez, V., Eds.; Springer: Cham, Switzerland, 2016; Volume 417, pp. 477–489. [Google Scholar]
  42. Kümmerle, J.; Kühner, T.; Lauer, M. Automatic calibration of multiple cameras and depth sensors with a spherical target. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar]
  43. Chao, G.; Spletzer, J.R. On-line calibration of multiple lidars on a mobile vehicle platform. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010. [Google Scholar]
  44. Pusztai, Z.; Hajder, L. Accurate calibration of LiDAR-camera systems using ordinary boxes. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017. [Google Scholar]
  45. Jiao, J.; Liao, Q.; Zhu, Y.; Liu, T.; Yu, Y.; Fan, R.; Wang, L.; Liu, M. A novel dual-lidar calibration algorithm using planar surfaces. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IVS), Paris, France, 9–12 June 2019. [Google Scholar]
  46. Choi, D.-G.; Bok, Y.; Kim, J.-S.; Kweon, I.S. Extrinsic calibration of 2-d lidars using two orthogonal planes. IEEE Trans. Robot. 2016, 32, 83–98. [Google Scholar] [CrossRef]
  47. Pusztai, Z.; Eichhardt, I.; Hajder, L. Accurate calibration of multi-lidar-multi-camera systems. Sensors 2018, 18, 2139. [Google Scholar] [CrossRef] [PubMed]
  48. Chai, Z.; Sun, Y.; Xiong, Z. A Novel Method for LiDAR Camera Calibration by Plane Fitting. In Proceedings of the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Auckland, New Zealand, 9–12 July 2018. [Google Scholar]
  49. Geiger, A.; Moosmann, F.; Car, O.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3936–3943. [Google Scholar]
  50. Lyu, Y.; Bai, L.; Elhousni, M.; Huang, X. An interactive lidar to camera calibration. In Proceedings of the 2019 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, 24–26 September 2019; pp. 1–6. [Google Scholar]
  51. Pless, R.; Zhang, Q. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004. [Google Scholar]
  52. Song, H.; Choi, W.; Kim, H. Robust vision-based relative-localization approach using an RGB-depth camera and LiDAR sensor fusion. IEEE Trans. Ind. Electron. 2016, 63, 3725–3736. [Google Scholar] [CrossRef]
  53. Unnikrishnan, R.; Hebert, M. Fast Extrinsic Calibration of a Laser Rangefinder to a Camera; Tech. Rep. CMU-RI-TR-05-09; Robotics Institute: Pittsburgh, PA, USA, 2005. [Google Scholar]
  54. Mirzaei, F.M.; Kottas, D.G.; Roumeliotis, S.I. 3D LIDAR camera intrinsic and extrinsic calibration: Identifiability and analytical least-squares-based initialization. Int. J. Robot. Res. 2012, 31, 452–467. [Google Scholar] [CrossRef]
  55. Pandey, G.; McBride, J.; Savarese, S.; Eustice, R. Extrinsic calibration of a 3D laser scanner and an omnidirectional camera. IFAC Proc. Vol. 2010, 43, 336–341. [Google Scholar] [CrossRef]
  56. Bok, Y.; Jeong, Y.; Choi, D.-G.; Kweon, I.S. Capturing village-level heritages with a hand-held camera-laser fusion sensor. Int. J. Comput. Vis. 2011, 94, 36–53. [Google Scholar] [CrossRef]
  57. Bok, Y.; Choi, D.-G.; Kweon, I.S. Sensor fusion of cameras and a laser for city-scale 3D reconstruction. Sensors 2014, 14, 20882–20909. [Google Scholar] [CrossRef]
  58. Zhou, L. A new minimal solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences. IEEE Sens. J. 2013, 14, 442–454. [Google Scholar] [CrossRef]
  59. Chen, H.H. Pose determination from line-to-plane correspondences: Existence condition and closed-form solutions. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 530–541. [Google Scholar] [CrossRef]
  60. Vasconcelos, F.; Barreto, J.P.; Nunes, U. A minimal solution for the extrinsic calibration of a camera and a laser-rangefinder. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2097–2107. [Google Scholar] [CrossRef] [PubMed]
  61. Cramer, J. Automatic generation of 3d thermal maps of building interiors. ASHRAE Trans. 2014, 120, C1. [Google Scholar]
  62. Gomez-Ojeda, R.; Briales, J.; Fernandez-Moral, E.; Gonzalez, J.-J. Extrinsic calibration of a 2D laser-rangefinder and a camera based on scene corners. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3611–3616. [Google Scholar]
  63. Perez-Yus, A.; Fernandez-Moral, E.; Lopez-Nicolas, G.; Guerrero, J.J.; Rives, P. Extrinsic calibration of multiple RGB-D cameras from line observations. IEEE Robot. Autom. Lett. 2018, 3, 273–280. [Google Scholar] [CrossRef] [Green Version]
  64. Dong, W.; Isler, V. A novel method for the extrinsic calibration of a 2-D laser-rangefinder and a camera. IEEE Sens. J. 2018, 18, 4200–4211. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Sensor arrangement and data acquisition simulation results of the wearable sensor system: (a) sensor arrangement, (b) acquired simulation data (bird view), (c) acquired simulation data (floor plan).
Figure 2. Backpack-based wearable sensor system.
Figure 3. Final target ROPs and the sensor-bundle configuration: (a) ROPs that need to be derived, (b) sensor-bundle configuration.
Figure 4. Point–plane correspondences in the proposed system calibration.
Figure 5. Constraints for positioning GCPs.
Figure 6. Application of the proposed system calibration methodology for sensor bundles.
Figure 7. Advantages of the proposed method: (a) initial GCP adjustment of the proposed method, (b) comparison of available GCPs between the proposed and the conventional method.
Figure 8. Test-bed for system calibration: (a) calibration target, (b) test-bed.
Figure 9. Configuration of calibration data acquisition.
Figure 10. Image data for visual analysis of system calibration results: (a) original image of camera 1 at test-site 1, (b) original image of camera 2 at test-site 1, (c) new colored image of camera 1 at test-site 1, (d) new colored image of camera 2 at test-site 1, (e) original image of camera 1 at test-site 2, (f) original image of camera 2 at test-site 2, (g) new colored image of camera 1 at test-site 2, (h) new colored image of camera 2 at test-site 2.
Figure 11. Optical camera–LiDAR data fusion results for the wearable system at (a) test-site 1, (b) test-site 2.
Figure 12. LiDAR data fusion error calculation method.
Figure 13. Fusion result between LiDAR data of the wearable system: (a) test-site 1, (b) test-site 2.
Figure 14. LiDAR data fusion results: (a) $ROP_{l_1}^{l_2}$ at location 1, (b) $ROP_{nom}$ at location 1, (c) $ROP_{icp}$ at location 1, (d) $ROP_{l_1}^{l_2}$ at location 2, (e) $ROP_{nom}$ at location 2, (f) $ROP_{icp}$ at location 2, (g) $ROP_{l_1}^{l_2}$ at location 3, (h) $ROP_{nom}$ at location 3, (i) $ROP_{icp}$ at location 3, (j) $ROP_{l_1}^{l_2}$ at location 4, (k) $ROP_{nom}$ at location 4, (l) $ROP_{icp}$ at location 4.
Table 1. Specifications of used sensors.

| Sensor | Model | Specification | Value |
|---|---|---|---|
| Camera (lens) | Sennex DSL315 | Projection model | Fish-eye lens (equisolid angle projection) |
| Camera (body) | Chameleon3 5.0 MP Color USB3 Vision | Image size (pixel size) | 2448 × 2448 (0.00345 mm) |
| LiDAR | Velodyne HDL-16E | Channel number | 16 |
| | | FOV (horizontal / vertical) | 360° / 30° (±15°) |
| | | Resolution (horizontal / vertical) | 0.5° (10 Hz) / 2° |
Table 2. System calibration results for sensor bundles.

| | X0 (mm) | Y0 (mm) | Z0 (mm) | ω (°) | φ (°) | κ (°) |
|---|---|---|---|---|---|---|
| $rop_{l_1}^{c_1}$ | −232.89 | −121.40 | −16.28 | 89.28 | 79.66 | −87.85 |
| $rop_{l_2}^{c_1}$ | −226.61 | −281.42 | −406.70 | −157.07 | 100.42 | 97.56 |
| $rop_{l_1}^{c_2}$ | 210.83 | −122.16 | −18.32 | 273.90 | −100.73 | 274.78 |
| $rop_{l_2}^{c_2}$ | 214.32 | −281.23 | −404.43 | 36.17 | −80.89 | 98.75 |
Table 3. Standard deviation of system calibration for sensor bundles (columns 2–7: standard deviations of the estimated ROPs; last column: RMS residuals of the image points).

| | X0 (mm) | Y0 (mm) | Z0 (mm) | ω (°) | φ (°) | κ (°) | RMS residuals (pixel) |
|---|---|---|---|---|---|---|---|
| $rop_{l_1}^{c_1}$ | 2.25 | 3.67 | 2.57 | 0.0033 | 0.0011 | 0.0033 | 0.56 |
| $rop_{l_2}^{c_1}$ | 0.77 | 1.50 | 2.28 | 0.0058 | 0.0008 | 0.0058 | 0.48 |
| $rop_{l_1}^{c_2}$ | 2.78 | 4.55 | 4.11 | 0.0089 | 0.0019 | 0.0081 | 0.59 |
| $rop_{l_2}^{c_2}$ | 4.95 | 4.86 | 4.89 | 0.0117 | 0.0008 | 0.0119 | 0.55 |
Table 4. Correlation between the ROPs.

$rop_{l_1}^{c_1}$ (left six columns) and $rop_{l_2}^{c_1}$ (right six columns):

| | X0 | Y0 | Z0 | ω | φ | κ | X0 | Y0 | Z0 | ω | φ | κ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| X0 | - | −0.27 | 0.08 | −0.28 | 0.79 | 0.21 | - | 0.19 | 0.17 | −0.26 | 0.02 | 0.28 |
| Y0 | −0.27 | - | −0.15 | 0.95 | 0.10 | −0.84 | 0.19 | - | −0.08 | −0.86 | −0.42 | 0.87 |
| Z0 | 0.08 | −0.15 | - | −0.20 | 0.36 | −0.02 | 0.17 | −0.08 | - | −0.37 | 0.92 | 0.37 |
| ω | −0.28 | 0.95 | −0.20 | - | 0.06 | −0.79 | −0.26 | −0.86 | −0.37 | - | −0.02 | −0.79 |
| φ | 0.79 | 0.10 | 0.36 | 0.06 | - | −0.20 | 0.02 | −0.42 | 0.92 | −0.02 | - | 0.01 |
| κ | 0.21 | −0.84 | −0.02 | −0.79 | −0.20 | - | 0.28 | 0.87 | 0.37 | −0.79 | 0.01 | - |

$rop_{l_1}^{c_2}$ (left six columns) and $rop_{l_2}^{c_2}$ (right six columns):

| | X0 | Y0 | Z0 | ω | φ | κ | X0 | Y0 | Z0 | ω | φ | κ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| X0 | - | 0.03 | −0.05 | 0.23 | −0.01 | 0.23 | - | −0.01 | 0.02 | 0.02 | 0.08 | 0.02 |
| Y0 | 0.03 | - | 0.21 | 0.01 | 0.90 | −0.08 | −0.01 | - | 0.00 | −0.55 | −0.10 | −0.54 |
| Z0 | −0.05 | 0.21 | - | −0.89 | 0.21 | −0.91 | 0.02 | 0.00 | - | −0.08 | 0.51 | −0.07 |
| ω | 0.23 | 0.01 | −0.89 | - | −0.01 | 0.78 | 0.02 | −0.55 | −0.08 | - | −0.05 | 0.78 |
| φ | −0.01 | 0.90 | 0.21 | −0.01 | - | −0.07 | 0.08 | −0.10 | 0.51 | −0.05 | - | −0.05 |
| κ | 0.23 | −0.08 | −0.91 | 0.78 | −0.07 | - | 0.02 | −0.54 | −0.07 | 0.78 | −0.05 | - |
Table 5. Estimated ROPs of the backpack-based wearable sensor system.

| | X0 (mm) | Y0 (mm) | Z0 (mm) | ω (°) | φ (°) | κ (°) |
|---|---|---|---|---|---|---|
| $ROP_{l_1}^{c_1}$ | −232.81 | −121.88 | −16.13 | 89.28 | 79.66 | −87.85 |
| $ROP_{l_1}^{c_2}$ | 210.51 | −121.09 | −18.66 | 273.9 | −100.73 | 274.78 |
| $ROP_{l_1}^{l_2}$ | 2.49 | −340.55 | 422.79 | 61.28 | 1.05 | −0.26 |
Table 6. ROPs used for comparative evaluation of LiDAR data fusion accuracy.

| | X0 (mm) | Y0 (mm) | Z0 (mm) | ω (°) | φ (°) | κ (°) |
|---|---|---|---|---|---|---|
| $ROP_{nom}$ | 0.00 | −350.00 | 400.00 | 60.00 | 0.00 | 0.00 |
| $ROP_{icp}$ | 10.50 | −341.00 | 415.00 | 59.10 | 1.49 | −1.15 |
Table 7. LiDAR data fusion accuracy by test-site (the first three error columns refer to $e_i$; the last three refer to the normalized error $\tilde{e}_i$).

| ROPs | Location | MBE (mm) | MAE (mm) | RMSE (mm) | $\widetilde{MBE}$ (mm) | $\widetilde{MAE}$ (mm) | $\widetilde{RMSE}$ (mm) |
|---|---|---|---|---|---|---|---|
| $ROP_{l_1}^{l_2}$ | 1 | −3.11 | 29.08 | 34.82 | −1.48 | 13.85 | 16.58 |
| | 2 | −1.23 | 12.17 | 15.32 | −0.82 | 8.11 | 10.21 |
| | 3 | 18.80 | 18.87 | 20.12 | 4.47 | 4.49 | 4.79 |
| | 4 | −1.70 | 16.40 | 20.19 | −0.55 | 5.29 | 6.51 |
| | Overall | 4.10 | 17.71 | 20.87 | 0.41 | 7.94 | 9.52 |
| $ROP_{nom}$ | 1 | 46.43 | 46.44 | 50.34 | 22.11 | 22.11 | 23.97 |
| | 2 | 20.24 | 23.00 | 28.61 | 13.49 | 15.33 | 19.07 |
| | 3 | 49.81 | 49.81 | 50.68 | 11.86 | 11.86 | 12.07 |
| | 4 | 26.78 | 28.86 | 34.04 | 8.64 | 9.31 | 10.98 |
| | Overall | 34.30 | 35.68 | 39.57 | 14.02 | 14.65 | 16.52 |
| $ROP_{icp}$ | 1 | −20.18 | 20.33 | 23.37 | −9.61 | 9.68 | 11.13 |
| | 2 | −34.43 | 34.43 | 35.94 | −22.95 | 22.95 | 23.96 |
| | 3 | 117.44 | 117.44 | 119.00 | 27.96 | 27.96 | 28.33 |
| | 4 | −30.87 | 30.90 | 32.80 | −9.95 | 9.97 | 10.58 |
| | Overall | 12.01 | 55.12 | 56.98 | −3.64 | 17.64 | 18.50 |