Article

Absolute Positioning and Orientation of MLSS in a Subway Tunnel Based on Sparse Point-Assisted DR

1. China University of Geosciences, No. 29 Xueyuan Road, Beijing 100083, China
2. Geophysical Exploration Academy of China Metallurgical Geology Bureau, No. 139 Sunshine North Street, Baoding 071051, China
3. Zhengyuan Geophysical Co., Ltd., No. 139 Sunshine North Street, Baoding 071051, China
4. Beijing Urban Construction Exploration & Surveying Design Research Institute Co., Ltd., Beijing 100101, China
5. Urban Intelligent Perception & Precision Measurement Engineering Technology Center, Wuhan University, No. 129 Luoyu Road, Wuhan 430079, China
6. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, No. 129 Luoyu Road, Wuhan 430079, China
7. Wuhan Metro Bridge and Tunnel Management Co., Ltd., No. 22 Qinyuan Road, Wuhan 430000, China
8. Wuhan Hirail Profiling Technology Co., Ltd., Wuhan 430060, China
* Authors to whom correspondence should be addressed.
Sensors 2020, 20(3), 645; https://doi.org/10.3390/s20030645
Submission received: 31 December 2019 / Revised: 20 January 2020 / Accepted: 21 January 2020 / Published: 23 January 2020
(This article belongs to the Collection Positioning and Navigation)

Abstract:
When performing the inspection of subway tunnels, there is an immense amount of data to be collected and the time available for inspection is short; however, the requirement for inspection accuracy is high. In this study, a mobile laser scanning system (MLSS) was used for the inspection of subway tunnels, and the key technology of the positioning and orientation system (POS) was investigated. We utilized the inertial measurement unit (IMU) and the odometer as the core sensors of the POS. The initial attitude of the MLSS was obtained by using a static initial alignment method. Considering that there is no global navigation satellite system (GNSS) signal in a subway, the forward and backward dead reckoning (DR) algorithm was used to calculate the positions and attitudes of the MLSS from any starting point in two directions. As the MLSS passed the control points distributed on both sides of the track, the local coordinates of the control points were transmitted to the center of the MLSS by using the ranging information of the laser scanner. Then, a four-parameter transformation method was used to correct the error of the POS and transform the 3-D state information of the MLSS from a navigation coordinate system (NCS) to a local coordinate system (LCS). This method can completely eliminate an MLSS's dependence on GNSS signals, and the obtained positioning and attitude information can be used for point cloud data fusion to directly obtain the coordinates in the LCS. In a tunnel of the Beijing–Zhangjiakou high-speed railway, when the distance interval of the control points used for correction was 120 m, the accuracy of the 3-D coordinates of the point clouds was 8 mm, and the experiment also showed that it takes less than 4 h to complete all the inspection work for a 5–6 km long tunnel.
Further, the results from the inspection work of Wuhan subway lines showed that when the distance intervals of the control points used for correction were 60 m, 120 m, 240 m, and 480 m, the accuracies of the 3-D coordinates of the point clouds in the local coordinate system were 4 mm, 6 mm, 7 mm, and 8 mm, respectively.

1. Introduction

The subway has gradually become the main tool for urban transportation because of its convenience and speed. With the continuous progress of urbanization, the operating mileage of subway systems has also increased. Periodic inspections of subway tracks, tunnels, and other related facilities are indispensable for ensuring operational safety. However, the structure of subway facilities is complex, the amount of inspection data is large, and the time available for inspection is short. At present, the number of inspection vehicles and the available inspection technology cannot meet the needs of daily operation and maintenance. Thus, new inspection vehicles and technologies need to be put into use as soon as possible [1,2,3].
A mobile laser scanning system (MLSS) utilizes a laser scanner as the measurement sensor and an inertial measurement unit (IMU) as the core sensor of the positioning and orientation system (POS) [4,5]. The 3-D point cloud of the scanned object can be quickly acquired by using a mobile measurement system (MMS). The utilization of an MLSS can help maintenance personnel to rapidly reconstruct the 3-D shape of subway tunnels and related facilities [6,7,8,9]. Based on this, minor deformations and defects of subway-related facilities can be identified accurately, including tunnel diameter convergence measurement, boundary detection, ellipticity analysis, analysis of the slab staggering of duct pieces, leakage detection, seepage detection, lining detection, and so forth [3,10,11,12,13]. Furthermore, by comparing the absolute and relative position changes of multi-period observation results, we can identify the characteristics and corresponding changes of convergence, gauge, the slab staggering of duct pieces, leakage, and other defects in the subway tunnel; based on this, the safety state of the subway tunnel can be assessed. Absolute accuracy is mainly needed to ensure the accuracy of defect localization and the feasibility of multi-period data comparison. Only when the absolute position of the tunnel point cloud is obtained can the subsequent defect inspection be carried out.
The accuracy and precision of point clouds obtained by using MLSS is affected by many factors, the most important of which is the accuracy of the POS [14]. A pure inertial navigation system (INS) has the disadvantage of error accumulation [15]. Generally, a combination of multiple navigation technologies is used to overcome the shortcomings of a single navigation technique. The best mode of an integrated navigation system is to select complementary navigation systems to assist each other, and it is better to have absolute and relative positioning techniques at the same time [16]. Absolute positioning techniques provide absolute position reference and correct the accumulated errors caused by relative positioning techniques. The accuracy of an absolute positioning technique determines the best positioning accuracy that can be obtained by an integrated navigation system, while a relative positioning technique provides high-frequency and rich state information regarding the carrier [17,18,19]. The classical techniques are the global navigation satellite system (GNSS) + INS, GNSS + dead reckoning (DR), GNSS + INS + DR, and so forth. However, there are no GNSS signals in a subway environment to provide external reference information to correct positioning errors and unify the coordinate systems [20,21,22]. Therefore, the main research problem relates to developing a method to obtain accurate positions and attitudes of MLSS by using IMU as the core sensor of POS in subway environments with no GNSS signals.
There are some studies on the inspection and maintenance of rail transit facilities using inertial integrated navigation technology, but most of them are under the conditions of GNSS signals [23,24,25,26,27,28,29]. These methods cannot use external information to correct inertial navigation errors in a subway environment with no GNSS signals.
Relative positioning techniques available in rail transit environments include INS, odometers, axle counters, and so forth. With the assistance of laser/vision data, the simultaneous localization and mapping (SLAM) method can be utilized to complete positioning and attitude determination without GNSS signals [30,31]. However, the accuracy cannot meet the requirements of the inspection of subway tunnels; thus, other absolute positioning information is still necessary for the high-precision inspection of subway tunnels. The absolute positioning information available in a subway environment includes map matching (MM), intermittent responders, coordinates of control points, and so forth [21]. However, only the accuracy of coordinates of control points can meet the requirements of the inspection of subway tunnels.
Many scholars and research institutions have carried out studies on this basis. For example, one absolute measurement technique uses a total station set up freely at certain distance intervals and calculates the absolute coordinates of the total station by measuring multiple third-class control points (CPIII); high-precision coordinates are then transferred to the MMS to correct the accumulated errors. The representative static track inspection trolley, the GRP 1000 from the Swiss company Amberg Technologies, is mainly used to measure the static geometry of a track; its absolute positioning accuracy can reach up to 3 mm. In addition, the GEDO CE track inspection trolley from the Trimble Navigation Company is used for the high-precision control of double-block ballastless tracks. These two products are currently the most widely used in track detection work, but their measurement efficiency is relatively low: the distance covered in each maintenance ("skylight") window is about 300 m. Further, operators are required to have specialized knowledge and skills.
Li Q. used stereo cameras to connect a track control network with MLSS [32], and Mao Q., Zhang L., and Li Q. et al. utilized a laser scanner to transfer the coordinates of railway track control points to MLSS [33]. They used the coordinates of railway track control points to correct the accumulated errors in order to improve efficiency and accuracy [34]. However, the geodetic coordinates of the initial point of MLSS have to be known, and the coordinates of point clouds in the LCS cannot be directly obtained by data fusion.
To address the existing problems in the inspection of subway tunnels, in this work, we studied and designed a set of navigation solutions and point cloud data fusion methods, and the 3-D state information of MLSS was calculated by the forward/reverse DR algorithm. By scanning railway control points with a laser scanner, the coordinates of the control points were transferred to the center of the MLSS. For data postprocessing, a four-parameter transformation method was utilized to convert the position and attitude of the MLSS from a navigation coordinate system (NCS) to an LCS, which corrected the error of POS information and reduced the time cost of point cloud data fusion.
The rest of the paper is organized as follows: Section 2 presents all the steps and related principles and formulas involved in this method. Section 3 reports the specific experimental information and results. Section 4 concludes this paper.

2. The Principles of the Method

2.1. The Whole Procedure of the Method

This paper presents a method for accurately obtaining the absolute coordinates of laser points by using a MLSS. The implementation process of this algorithm is shown in Figure 1.
For a MLSS that uses an IMU as its core navigation component, the main challenge is to obtain the positions and attitudes of the MLSS in a targeted coordinate system, so that these positions and attitudes can be used to transform the coordinates of laser points directly from the laser scanner coordinate system (LSCS) to the LCS.
The first step is to carry out the initial alignment process to obtain the 3-D state information at the starting time, including the position, velocity, and attitude. Based on this, the forward and backward DR algorithm can be utilized to calculate the 3-D positions, velocities, and attitudes of the MLSS at designated time intervals during the whole inspection process. At this point, the positions and attitudes of the MLSS are still referenced to an NCS (usually, the World Geodetic System 1984, WGS-84) and need to be transformed into an LCS. The error correction and localization of the MLSS's positions and attitudes are merged into one process by using the sectional four-parameter transformation method, for which the coordinates of the control points are needed to calculate the corresponding parameters. The last step is to use the MLSS's corrected and localized positions and attitudes to transform the coordinates of the point clouds into the LCS; such coordinates carry both absolute and relative position information, so the follow-up inspection work can be conducted. The objective of this study was to demonstrate a method that can acquire the 3-D state information of an MLSS in a GNSS-denied environment, so that the high-precision absolute coordinates of massive point clouds can be obtained; since this is only one part of the point cloud application, the point cloud data fusion method itself is not discussed.

2.2. The Initial Alignment Method

The initial alignment is performed to determine the attitude of the vehicle relative to the NCS [35]; only after completing this step can the DR process be performed. The initial alignment requires knowing only the latitude of the starting point, not the longitude [36]. Therefore, the coordinates of the control points are used to roughly calculate a latitude value, and the longitude value can be set arbitrarily.
The static initial alignment method is adequate for the acquisition of the initial attitude of a MLSS in the inspection of subway tunnels. Static initial alignment is mainly divided into two steps: coarse and fine alignments. Coarse alignment is used to obtain the rough value of the initial attitude of the body coordinate system relative to the NCS. Fine alignment is used to further estimate the misalignment angle through the error propagation theory of INS to further improve the accuracy of the initial attitude [37].
Current theoretical approaches of coarse alignment include vector attitude determination, analytical coarse alignment, and indirect coarse alignment. Coarse alignment algorithms are adequate for use, while research on fine alignment algorithms is still ongoing [38].
In this study, we utilized the Kalman filter to estimate the misalignment angle. The position remains unchanged and the velocity is zero in the static alignment process [39]. Based on this, the initial alignment equation is obtained by simplifying the error propagation equation of INS, as shown in formulas (1) and (2):
\[
\dot{\phi} = \phi \times \omega_{ie}^{n} - \varepsilon^{n}, \tag{1}
\]
\[
\delta\dot{v}^{n} = f_{sf}^{n} \times \phi + \nabla^{n}, \tag{2}
\]
\[
\varepsilon^{n} = \begin{bmatrix} \varepsilon_E \\ \varepsilon_N \\ \varepsilon_U \end{bmatrix}
= \begin{bmatrix}
C_{11}\varepsilon_x^b + C_{12}\varepsilon_y^b + C_{13}\varepsilon_z^b \\
C_{21}\varepsilon_x^b + C_{22}\varepsilon_y^b + C_{23}\varepsilon_z^b \\
C_{31}\varepsilon_x^b + C_{32}\varepsilon_y^b + C_{33}\varepsilon_z^b
\end{bmatrix}, \qquad
\nabla^{n} = \begin{bmatrix} \nabla_E \\ \nabla_N \\ \nabla_U \end{bmatrix}
= \begin{bmatrix}
C_{11}\nabla_x^b + C_{12}\nabla_y^b + C_{13}\nabla_z^b \\
C_{21}\nabla_x^b + C_{22}\nabla_y^b + C_{23}\nabla_z^b \\
C_{31}\nabla_x^b + C_{32}\nabla_y^b + C_{33}\nabla_z^b
\end{bmatrix},
\]
where $\phi$ is the misalignment angle, $\dot{\phi}$ is its rate of change, $f_{sf}^{n}$ is the specific force, and $\delta\dot{v}^{n}$ is the rate of change of the velocity error; they are all referenced to the NCS. $\varepsilon^{n}$ is the drift of the gyroscope and $\nabla^{n}$ is the bias of the accelerometer, also referenced to the NCS; they can be treated as random constants provided that $\varepsilon^{b} = [\varepsilon_x^b\ \varepsilon_y^b\ \varepsilon_z^b]^T$ and $\nabla^{b} = [\nabla_x^b\ \nabla_y^b\ \nabla_z^b]^T$ are also constants. $C_b^n$ (with elements $C_{ij}$) is the matrix that transforms coordinates from the body frame (here, the coordinate system of the IMU) to the NCS; it is also constant in the static initial alignment mode [40].
In static mode, the approximation $f_{sf}^{n} \approx -g^{n} = [0\ 0\ g]^T$ holds ($g$ is the magnitude of gravity). Based on this, the formula of fine alignment is
\[
\begin{cases}
\dot{\phi}_E = \omega_U \phi_N - \omega_N \phi_U - \varepsilon_E, \\
\dot{\phi}_N = -\omega_U \phi_E - \varepsilon_N, \\
\dot{\phi}_U = \omega_N \phi_E - \varepsilon_U, \\
\delta\dot{v}_E = -g\,\phi_N + \nabla_E, \\
\delta\dot{v}_N = g\,\phi_E + \nabla_N, \\
\delta\dot{v}_U = \nabla_U,
\end{cases} \tag{3}
\]
where $\omega$ is the angular rate. The last equation ($\delta\dot{v}_U = \nabla_U$) is decoupled from the others, which means that the error in the upward direction can be neglected without degrading the accuracy of the estimated misalignment angle. The state-space model is shown in formula (4):
\[
\begin{cases} \dot{X} = FX, \\ Z = HX. \end{cases} \tag{4}
\]
The random constant drift of the gyroscope and accelerometer is added to the state variables:
\[
X = \begin{bmatrix} \phi_E & \phi_N & \phi_U & \delta v_E & \delta v_N & \varepsilon_E & \varepsilon_N & \varepsilon_U & \nabla_E & \nabla_N \end{bmatrix}^T.
\]
The coefficient matrix of the state variables is as follows:
\[
F = \begin{bmatrix} F_1 \\ 0_{5\times 10} \end{bmatrix},\qquad
F_1 = \begin{bmatrix}
0 & \omega_U & -\omega_N & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
-\omega_U & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\
\omega_N & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & -g & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
g & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}.
\]
The coefficient matrix of the measurement equation is as follows:
\[
H = \begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}.
\]
The velocity outputs are used as the innovation. The recursive solution is then carried out until the result converges, and the misalignment angle can be obtained accurately.
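As an illustration of this recursion, the following minimal sketch propagates the 10-state fine-alignment filter of formula (4) on simulated static data. The latitude, noise levels, and initial covariances are illustrative assumptions, not the paper's calibration values.

```python
import numpy as np

# Minimal fine-alignment Kalman filter sketch (static base, zero-velocity
# measurements). All numeric values are illustrative assumptions.
g = 9.80665                     # gravity (m/s^2)
lat = np.deg2rad(40.0)          # rough latitude from the control points
w_ie = 7.2921150e-5             # earth rotation rate (rad/s)
w_N, w_U = w_ie * np.cos(lat), w_ie * np.sin(lat)

# State: [phi_E, phi_N, phi_U, dv_E, dv_N, eps_E, eps_N, eps_U, nab_E, nab_N]
F = np.zeros((10, 10))
F[0, 1], F[0, 2], F[0, 5] = w_U, -w_N, -1.0
F[1, 0], F[1, 6] = -w_U, -1.0
F[2, 0], F[2, 7] = w_N, -1.0
F[3, 1], F[3, 8] = -g, 1.0
F[4, 0], F[4, 9] = g, 1.0
H = np.zeros((2, 10)); H[0, 3] = H[1, 4] = 1.0    # velocity errors measured

dt = 0.01
Phi = np.eye(10) + F * dt                          # first-order discretization
Q = np.diag([1e-12] * 3 + [1e-8] * 2 + [0.0] * 5)  # process noise (illustrative)
R = np.diag([1e-4, 1e-4])                          # velocity measurement noise

x = np.zeros(10)
P = np.diag([1e-4] * 3 + [1e-2] * 2 + [1e-10] * 3 + [1e-6] * 2)
rng = np.random.default_rng(0)
for _ in range(6000):                              # 60 s of static data
    x, P = Phi @ x, Phi @ P @ Phi.T + Q            # predict
    z = rng.normal(0.0, 1e-2, 2)                   # simulated velocity output
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(10) - K @ H) @ P
print(np.sqrt(P[0, 0]), np.sqrt(P[3, 3]))          # posterior std of phi_E, dv_E
```

With real data, `z` would be the INS velocity output during the static period; the recursion runs until the misalignment estimate converges.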

2.3. The INS/Odometer Integrated Navigation Algorithm

In Figure 2, $s$ is the distance increment measured by the odometer; $\theta$, $P$, and $V$ are the attitude, position, and velocity, respectively, calculated by the pure INS; and $P'$ and $V'$ are the position and velocity, respectively, calculated by DR. The differences between the positions calculated by INS and DR are used as the innovation of the Kalman filter [41,42,43,44].
The propagation equation of the position error $\delta p$ calculated by INS is
\[
\delta\dot{p} = M_{pv}\,\delta v + M_{pp}\,\delta p,
\]
\[
M_{pv} = \begin{bmatrix} 0 & 1/R_M & 0 \\ 1/(R_N\cos L) & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix},\qquad
M_{pp} = \begin{bmatrix} 0 & 0 & -v_N/R_M^2 \\ v_E\tan L/(R_N\cos L) & 0 & -v_E/(R_N^2\cos L) \\ 0 & 0 & 0 \end{bmatrix},
\]
where $R_M$ and $R_N$ are the radii of curvature of the meridian and the prime vertical, respectively, and $L$ is the latitude.
The propagation equation of the position error calculated by DR is
\[
\delta\dot{p}_D = M_a^D\,\phi + M_{pp}^D\,\delta p_D + M_T^D\,T_D,
\]
\[
M_a^D = M_{pv}^D\,(v_D^n\times),\qquad M_T^D = M_{pv}^D\,M_{vT}^D,
\]
where $(v_D^n\times)$ is the skew-symmetric matrix of $v_D^n$; $M_{pv}^D$ has the same form as $M_{pv}$ and $M_{pp}^D$ has the same form as $M_{pp}$, except that their elements are calculated from the DR solution.
\[
M_{vT}^D = v_D \begin{bmatrix} C_{13} & -C_{11} & C_{12} \\ C_{23} & -C_{21} & C_{22} \\ C_{33} & -C_{31} & C_{32} \end{bmatrix},\qquad
T_D = \begin{bmatrix} \alpha_\theta & \alpha_\psi & \delta K_D \end{bmatrix}^T,
\]
where $\alpha_\theta$ and $\alpha_\psi$ are the installation errors of the pitch and heading angles of the odometer coordinate system relative to the INS coordinate system, respectively, and $\delta K_D$ is the scale factor error of the odometer's output. $\alpha_\theta$, $\alpha_\psi$, and $\delta K_D$ are usually added to the state variables of the GNSS/INS-integrated navigation process so that they can be estimated together; however, there are no GNSS signals in a subway environment. In [32] and [33], the coordinates of control points in a subway environment are used as external reference information to estimate $\alpha_\theta$, $\alpha_\psi$, and $\delta K_D$. However, the number of control points is small and the distance interval is 60 m, which provides too little information to estimate that many unknown parameters with good accuracy. Further, the solution changes considerably at the locations of the control points [45,46,47,48].
There are other parameters, such as $\delta l$ (the lever arm: the vector of the center of the odometer coordinate system relative to the center of the INS coordinate system) and $\delta t$ (the time synchronization error of the odometer relative to the INS). Since the structure of the instrument is very rigid, $\alpha_\theta$, $\alpha_\psi$, $\delta K_D$, and $\delta l$ are quite stable; therefore, they should be calibrated before the inspection work instead of being added to the state variables and estimated during the filtering process, which is more beneficial to the results. In this study, the calibration of $\alpha_\theta$, $\alpha_\psi$, and $\delta K_D$ was carried out on a straight track with a good GNSS signal by comparing the positions and velocities calculated by the INS/odometer and by GNSS. The exact values of $\alpha_\theta$, $\alpha_\psi$, and $\delta K_D$ could thus be obtained, and $\delta l$ was obtained from the design parameters of the instrument.
δ t is difficult to calibrate because it needs a much more accurate time reference system. Thus, it was added to the state variables in this study:
\[
\begin{cases} \dot{X} = FX + GW^b, \\ Z = HX + V_p, \end{cases}
\]
where $V_p$ is the measurement noise. The vector of the state variables is
\[
X = \begin{bmatrix} \phi^T & (\delta v^n)^T & (\delta p)^T & (\nabla^b)^T & (\varepsilon^b)^T & (\delta p_D)^T & \delta t \end{bmatrix}^T_{19\times 1}.
\]
\[
F = \begin{bmatrix}
M_{aa} & M_{av} & M_{ap} & 0_{3\times3} & -C_b^n & 0_{3\times3} & 0_{3\times1} \\
M_{va} & M_{vv} & M_{vp} & C_b^n & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times3} & M_{pv} & M_{pp} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times1} \\
M_a^D & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & M_{pp}^D & 0_{3\times1} \\
0_{1\times3} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & 0_{1\times3} & 0
\end{bmatrix},\qquad
G = \begin{bmatrix} -C_b^n & 0_{3\times3} \\ 0_{3\times3} & C_b^n \\ 0_{13\times3} & 0_{13\times3} \end{bmatrix},
\]
\[
W^b = \begin{bmatrix} w_{gx}^b & w_{gy}^b & w_{gz}^b & w_{ax}^b & w_{ay}^b & w_{az}^b \end{bmatrix}^T.
\]
$p_{INS}$ and $p_{DR}$ are the theoretical values of the positions calculated by the pure INS and by DR, respectively:
\[
p_{INS} + M_{pv} C_b^n \delta l + M_{pv} v^n \delta t = p_{DR}, \tag{8}
\]
and $\bar{p}_{INS}$ and $\bar{p}_{DR}$ are the corresponding measured values of the positions calculated by the pure INS and by DR, with errors $\delta p_{INS}$ and $\delta p_{DR}$:
\[
\begin{cases} \bar{p}_{INS} = p_{INS} + \delta p_{INS}, \\ \bar{p}_{DR} = p_{DR} + \delta p_{DR}. \end{cases} \tag{9}
\]
Considering formulas (8) and (9), and taking the difference between the positions calculated by INS and DR as the measurement, we obtain
\[
\bar{p}_{INS} - \bar{p}_{DR} = -M_{pv} C_b^n \delta l - M_{pv} v^n \delta t + \delta p_{INS} - \delta p_{DR}.
\]
Since $\delta l$ is known from calibration, its term can be moved to the left-hand side, giving the observation equation shown in formula (10):
\[
\bar{p}_{INS} + M_{pv} C_b^n \delta l - \bar{p}_{DR} = \delta p_{INS} - \delta p_{DR} - M_{pv} v^n \delta t. \tag{10}
\]
The coefficient matrix of the observation equation is
\[
H = \begin{bmatrix} 0_{3\times6} & I_{3\times3} & 0_{3\times6} & -I_{3\times3} & -M_{pv} v^n \end{bmatrix}.
\]
This is the model of the INS/odometer integrated navigation system. With this model, the correct value can be obtained by using the Kalman filter to estimate state errors and make feedback corrections.
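To make the bookkeeping concrete, the block layout of the 19-dimensional model can be sketched as below. The `M_*` blocks are placeholders to be filled from the current navigation solution, the sign convention for the IMU-error couplings is our assumption, and the lever-arm/velocity values are illustrative.

```python
import numpy as np

# Block layout of the 19-state INS/odometer filter (illustrative sketch).
def build_F(M_aa, M_av, M_ap, M_va, M_vv, M_vp, M_pv, M_pp, M_aD, M_ppD, C_bn):
    F = np.zeros((19, 19))
    F[0:3, 0:3], F[0:3, 3:6], F[0:3, 6:9] = M_aa, M_av, M_ap
    F[0:3, 12:15] = -C_bn                  # gyro drift -> attitude error
    F[3:6, 0:3], F[3:6, 3:6], F[3:6, 6:9] = M_va, M_vv, M_vp
    F[3:6, 9:12] = C_bn                    # accel bias -> velocity error
    F[6:9, 3:6], F[6:9, 6:9] = M_pv, M_pp  # INS position error propagation
    F[15:18, 0:3], F[15:18, 15:18] = M_aD, M_ppD   # DR position error
    return F                               # delta t row/column stays zero

def build_H(M_pv, v_n):
    # Z = p_INS_bar + M_pv C_b^n delta_l - p_DR_bar
    H = np.zeros((3, 19))
    H[:, 6:9] = np.eye(3)                  # + delta p_INS
    H[:, 15:18] = -np.eye(3)               # - delta p_DR
    H[:, 18] = -(M_pv @ v_n)               # - M_pv v^n delta t
    return H

Z3 = np.zeros((3, 3))
F = build_F(Z3, Z3, Z3, Z3, Z3, Z3, np.eye(3), Z3, Z3, Z3, np.eye(3))
H = build_H(np.eye(3), np.array([0.0, 1.0, 0.0]))
print(F.shape, H.shape)
```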

2.4. Forward and Backward Dead Reckoning

2.4.1. Unification of the Odometer’s Outputs

As described in Section 2.3, $\alpha_\theta$, $\alpha_\psi$, $\delta K_D$, and $\delta l$ are calibrated in advance, so the transformation matrix between the body frame (INS frame) and the odometer coordinate system is
\[
C_o^b = I + (\alpha\times) = \begin{bmatrix} 1 & -\alpha_\psi & \alpha_\gamma \\ \alpha_\psi & 1 & -\alpha_\theta \\ -\alpha_\gamma & \alpha_\theta & 1 \end{bmatrix}.
\]
The relationship between $v_D^o$ (the theoretical velocity in the odometer coordinate system) and $\hat{v}_D^n$ (the velocity measured by the odometer, expressed in the NCS) is
\[
\hat{v}_D^n = C_b^n C_o^b \left(1 + \delta K_D\right) v_D^o,\qquad
v_D^o = \begin{bmatrix} 0 & v_D & 0 \end{bmatrix}^T,
\]
where C o b is the transformation matrix that converts the coordinate from the odometer coordinate system to the body frame, and C b n is the transformation matrix that converts the coordinate from the body frame to the NCS [49,50].
The relationship between $\Delta S_D^o$ (the theoretical distance increment in the odometer coordinate system) and $\Delta\hat{S}_D^n$ (the distance increment measured by the odometer, expressed in the NCS) is
\[
\Delta\hat{S}_D^n = C_b^n C_o^b \left(1 + \delta K_D\right) \Delta S_D^o,
\]
\[
\Delta S_D^o = T v_D^o = T\begin{bmatrix} 0 & v_D & 0 \end{bmatrix}^T,\qquad
\Delta\hat{S}_D^n = T\hat{v}_D^n = \begin{bmatrix} \Delta S_E^n & \Delta S_N^n & \Delta S_U^n \end{bmatrix}^T,
\]
where T is the sampling time interval.
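The projection of one odometer increment into the NCS can be sketched as follows; the installation angles, scale error, and attitude below are illustrative values, not calibration results.

```python
import numpy as np

# Project one odometer distance increment into the NCS.
def skew(a):
    """Skew-symmetric matrix (a x) of a 3-vector a = [a_theta, a_gamma, a_psi]."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

alpha = np.array([0.001, 0.0, 0.002])   # [alpha_theta, alpha_gamma, alpha_psi], rad
C_ob = np.eye(3) + skew(alpha)          # small-angle installation matrix
dK = 0.0005                             # odometer scale factor error (illustrative)
C_bn = np.eye(3)                        # body aligned with NCS here, for clarity
T = 0.01                                # sampling interval (s)
v_D = 2.0                               # odometer speed (m/s)

dS_o = T * np.array([0.0, v_D, 0.0])    # increment in the odometer frame
dS_n = (1.0 + dK) * (C_bn @ C_ob @ dS_o)  # increment expressed in the NCS
print(dS_n)
```

The small installation angles leak a fraction of the forward increment into the lateral and vertical components, which is exactly the effect that the calibration of the installation angles removes.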

2.4.2. Forward DR

Formulas (14)–(16) show the position, velocity, and attitude update equations of forward DR:
\[
\begin{cases}
L_k = L_{k-1} + \dfrac{\Delta S_{N_{k-1}}^n}{R_M + h_{k-1}}, \\[1ex]
\lambda_k = \lambda_{k-1} + \dfrac{\Delta S_{E_{k-1}}^n \sec L_{k-1}}{R_N + h_{k-1}}, \\[1ex]
h_k = h_{k-1} + \Delta S_{U_{k-1}}^n,
\end{cases} \tag{14}
\]
\[
v_k^n = v_{k-1}^n + T_s\left[ C_{b_{k-1}}^n f_{sf_k}^b - \left( 2\omega_{ie_{k-1}}^n + \omega_{en_{k-1}}^n \right)\times v_{k-1}^n + g^n \right], \tag{15}
\]
\[
C_{b_k}^n = C_{b_{k-1}}^n \left( I + T_s\,\Omega_{nb_k}^b \right), \tag{16}
\]
where subscripts $k-1$ and $k$ denote the order of calculation; $L_k$, $\lambda_k$, and $h_k$ are the latitude, longitude, and geodetic height, respectively; $\omega_{ie}^n$ is the earth rotation vector; $\omega_{en}^n$ is the rotation rate of the NCS relative to the earth-centered frame ($\omega_{ie}^n$ and $\omega_{en}^n$ are both expressed in the NCS); $f_{sf}^b$ is the specific force measured by the accelerometer in the body frame; $g^n$ is the gravity vector in the NCS; and $\Omega_{nb_k}^b$ is the skew-symmetric matrix of the angular rate of the body frame relative to the NCS, measured in the body frame [51].
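The position update of formula (14) amounts to only a few lines of code; the earth radii and the starting state below are illustrative values.

```python
import math

# One forward-DR position update (formula (14)); approximate WGS-84 radii.
R_M, R_N = 6335439.3, 6378137.0          # meridian / prime-vertical radii (m)

def dr_forward(L, lam, h, dS_N, dS_E, dS_U):
    """Advance (lat, lon, height) by one NCS distance increment."""
    L_new = L + dS_N / (R_M + h)
    lam_new = lam + dS_E / ((R_N + h) * math.cos(L))
    h_new = h + dS_U
    return L_new, lam_new, h_new

# Illustrative state: somewhere near Wuhan, 25 m height.
L0, lam0, h0 = math.radians(30.5), math.radians(114.3), 25.0
L1, lam1, h1 = dr_forward(L0, lam0, h0, 1.0, 0.5, 0.0)  # 1 m north, 0.5 m east
print(L1 - L0, lam1 - lam0)
```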

2.4.3. Backward DR

Formulas (17)–(19) show the position, velocity, and attitude update equations of backward DR:
\[
\begin{cases}
L_{k-1} = L_k - \dfrac{\Delta S_{N_{k-1}}^n}{R_M + h_{k-1}} \approx L_k - \dfrac{\Delta S_{N_{k-1}}^n}{R_M + h_k}, \\[1ex]
\lambda_{k-1} = \lambda_k - \dfrac{\Delta S_{E_{k-1}}^n \sec L_{k-1}}{R_N + h_{k-1}} \approx \lambda_k - \dfrac{\Delta S_{E_{k-1}}^n \sec L_k}{R_N + h_k}, \\[1ex]
h_{k-1} = h_k - T_s v_{U_{k-1}}^n \approx h_k - \Delta S_{U_{k-1}}^n,
\end{cases} \tag{17}
\]
\[
v_{k-1}^n = v_k^n - T_s\left[ C_{b_{k-1}}^n f_{sf_k}^b - \left( 2\omega_{ie_{k-1}}^n + \omega_{en_{k-1}}^n \right)\times v_{k-1}^n + g^n \right]
\approx v_k^n - T_s\left[ C_{b_k}^n f_{sf_k}^b - \left( 2\omega_{ie_k}^n + \omega_{en_k}^n \right)\times v_k^n + g^n \right], \tag{18}
\]
\[
C_{b_{k-1}}^n = C_{b_k}^n \left( I + T_s\,\Omega_{nb_k}^b \right)^{-1} \approx C_{b_k}^n \left( I - T_s\,\Omega_{nb_k}^b \right). \tag{19}
\]
The meaning of each parameter was discussed in Section 2.4.2. With the backward DR algorithm, the requirements for the known coordinates of the initial point are no longer necessary. Theoretically, any point in the trajectory can be used as the starting point for forward and backward DR to obtain the 3-D state information of the MLSS [51].
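A quick sanity check of the forward/backward pair (formulas (14) and (17)): applying one forward step and then one backward step should return almost exactly to the starting point, with a residual caused only by evaluating $\sec L$ and $h$ at the end point. All values below are illustrative.

```python
import math

# Forward/backward position round trip; approximate WGS-84 radii (m).
R_M, R_N = 6335439.3, 6378137.0

def forward(L, lam, h, dS):
    """Formula (14) with dS = (E, N, U) increment in metres."""
    return (L + dS[1] / (R_M + h),
            lam + dS[0] / ((R_N + h) * math.cos(L)),
            h + dS[2])

def backward(L, lam, h, dS):
    """Formula (17), using the end-point latitude and height."""
    return (L - dS[1] / (R_M + h),
            lam - dS[0] / ((R_N + h) * math.cos(L)),
            h - dS[2])

dS = (0.5, 1.0, 0.02)                       # (E, N, U) increment in metres
p0 = (math.radians(30.5), math.radians(114.3), 25.0)
p1 = forward(*p0, dS)
p2 = backward(*p1, dS)
err = max(abs(a - b) for a, b in zip(p0, p2))
print(err)   # residual from evaluating sec L and h at the end point
```

The residual is many orders of magnitude below the sensor noise, which is why any point of the trajectory can serve as the starting point for the two-direction reckoning.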

2.5. Conversion Method for Obtaining Absolute Coordinates

2.5.1. Traditional Coordinate Transformation Method

Usually, point clouds are given in geodetic coordinates because the results of the INS/odometer-integrated navigation are referenced to an earth-centered coordinate system. Thus, the coordinates of the point clouds need to be transformed from the geodetic coordinate system (NCS) to an LCS (the LCS is usually a Gauss projection coordinate system, sometimes rotated by a specific angle) [14]. Figure 3 shows the coordinate transformation process before the improvement.
Formula (20) shows the principle of the transformation process from the LSCS to the geodetic coordinate system and then to the LCS:
\[
X_L = T_L + m\,C_n^L \left( T_n + C_b^n \left( R_l^b X_l + T_l^b \right) \right), \tag{20}
\]
where $X_L$ is the coordinate in the LCS, $X_l$ is the coordinate in the LSCS, $T_l^b$ is the translation from the LSCS to the body frame, $R_l^b$ is the rotation matrix from the LSCS to the body frame, $T_n$ is the translation from the body frame to the NCS, $C_b^n$ is the rotation matrix from the body frame to the NCS, $T_L$ is the translation from the NCS to the LCS, $m$ is the scale parameter, and $C_n^L$ is the rotation matrix from the NCS to the LCS.

2.5.2. Improved Coordinate Transformation Method

The traditional coordinate transformation method needs three steps of point cloud transformation to obtain the coordinates of the point cloud in LCS; every step of point cloud transformation requires many calculations because of the huge amount of point cloud data, which greatly prolongs the data processing time. Based on the principles described in Section 2.5.1, we changed the MLSS’s positions and attitudes obtained through the INS/odometer-integrated navigation system by eliminating the step of transformation from the geodetic coordinate system to the LCS. Then, the three steps of transformation can be simplified into two steps when compared with the traditional method described in Section 2.5.1. Figure 4 shows the coordinate transformation process after the improvement.
\[
X_L = \left( T_L + m\,C_n^L T_n \right) + m\,C_n^L C_b^n \left( R_l^b X_l + T_l^b \right),
\]
where $\left( T_L + m\,C_n^L T_n \right)$ can be considered as the translation value $T_b^L$ from the body frame to the LCS ($T_n$ is simply the coordinate of the MLSS in the geodetic coordinate system), and $C_n^L C_b^n$ can be considered as the rotation matrix $C_b^L$ from the body frame to the LCS. The key is to obtain the values of $T_L$ and $C_n^L$.
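The regrouping can be verified numerically: the two-step form must reproduce the nested three-step form of formula (20) exactly. All matrices and vectors below are synthetic (yaw-only rotations, chosen for simplicity).

```python
import numpy as np

# Check that the regrouped two-step transformation equals the nested
# three-step version of formula (20). All values are synthetic.
rng = np.random.default_rng(1)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

R_lb = rot_z(0.3)                     # LSCS -> body rotation
T_lb = np.array([0.1, 0.0, 0.5])      # LSCS -> body translation
C_bn = rot_z(-0.7)                    # body -> NCS rotation
T_n = np.array([1000.0, 2000.0, 30.0])
C_nL = rot_z(0.05)                    # NCS -> LCS rotation
T_L = np.array([-500.0, 800.0, 0.0])
m = 1.0001                            # scale parameter

X_l = rng.normal(size=3)              # a laser point in the LSCS

# Formula (20): three sequential steps.
X_L_nested = T_L + m * C_nL @ (T_n + C_bn @ (R_lb @ X_l + T_lb))
# Regrouped form: localized position/attitude, two steps.
T_bL = T_L + m * C_nL @ T_n
C_bL = m * C_nL @ C_bn
X_L_direct = T_bL + C_bL @ (R_lb @ X_l + T_lb)
print(np.allclose(X_L_nested, X_L_direct))
```

Because `T_bL` and `C_bL` depend only on the trajectory (not on the individual laser points), they are computed once per epoch, and the per-point work shrinks from three transformation steps to two.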

2.5.3. Transformation from NCS to LCS

Since the change is mainly in the horizontal direction, a four-parameter transformation method was used to calculate the transformation parameters between the NCS and the LCS.
The formula of the four-parameter transformation method is as follows:
\[
\begin{bmatrix} x_i' \\ y_i' \end{bmatrix} =
\begin{bmatrix} T_x \\ T_y \end{bmatrix} +
\begin{bmatrix} m\cos\Delta\psi & m\sin\Delta\psi \\ -m\sin\Delta\psi & m\cos\Delta\psi \end{bmatrix}
\begin{bmatrix} x_i \\ y_i \end{bmatrix},
\]
where the x-axis points east and the y-axis points north; $T_x$ and $T_y$ are the translation parameters, $m$ is the scale factor, $\Delta\psi$ is the rotation parameter, $(x_i, y_i)$ are the original coordinates, and $(x_i', y_i')$ are the coordinates in the targeted coordinate system [52,53].
In effect, the four-parameter transformation pulls the DR trajectory and the trajectory constrained by the control points toward each other.
The solution process of these four parameters is as follows:
For convenience of calculation, let $a = T_x$, $b = T_y$, $c = m\cos\Delta\psi$, and $d = m\sin\Delta\psi$. Then, the coordinate transformation formula suitable for solving the four parameters is
\[
\begin{cases} x_i' = a + c\,x_i + d\,y_i, \\ y_i' = b + c\,y_i - d\,x_i. \end{cases}
\]
By using the adjustment method based on the four-parameter transformation with at least two control points, the error equation can be obtained as follows:
\[
\begin{bmatrix} v_{x_1} \\ v_{y_1} \\ v_{x_2} \\ v_{y_2} \\ \vdots \\ v_{x_n} \\ v_{y_n} \end{bmatrix} =
\begin{bmatrix}
1 & 0 & x_1 & y_1 \\
0 & 1 & y_1 & -x_1 \\
1 & 0 & x_2 & y_2 \\
0 & 1 & y_2 & -x_2 \\
\vdots & \vdots & \vdots & \vdots \\
1 & 0 & x_n & y_n \\
0 & 1 & y_n & -x_n
\end{bmatrix}
\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} -
\begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ \vdots \\ x_n' \\ y_n' \end{bmatrix}.
\]
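The least-squares solution of the error equation above can be sketched as follows; the control-point coordinates and the "true" parameters are synthetic values used only to exercise the solver.

```python
import numpy as np

# Solve the four parameters (a, b, c, d) from control-point pairs by
# least squares. The control-point data below are synthetic.
rng = np.random.default_rng(2)
m_true, dpsi_true = 1.0002, np.deg2rad(0.8)
a_true, b_true = 120.0, -45.0
c_true, d_true = m_true * np.cos(dpsi_true), m_true * np.sin(dpsi_true)

xy = rng.uniform(0.0, 500.0, size=(4, 2))   # 4 control points (>= 2 needed)
x, y = xy[:, 0], xy[:, 1]
xp = a_true + c_true * x + d_true * y       # x' = a + c x + d y
yp = b_true + c_true * y - d_true * x       # y' = b + c y - d x

A = np.zeros((8, 4)); L = np.zeros(8)
A[0::2] = np.column_stack([np.ones(4), np.zeros(4), x, y]);  L[0::2] = xp
A[1::2] = np.column_stack([np.zeros(4), np.ones(4), y, -x]); L[1::2] = yp
a, b, c, d = np.linalg.lstsq(A, L, rcond=None)[0]

m_est = np.hypot(c, d)                      # m = sqrt(c^2 + d^2)
dpsi_est = np.arctan2(d, c)                 # delta psi = atan2(d, c)
print(m_est, np.rad2deg(dpsi_est))
```

With noise-free synthetic coordinates, the scale and rotation are recovered exactly; with real control points, the residual vector reflects the remaining DR error between corrections.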
After the four parameters are calculated, the rotation matrix from the NCS to the LCS can be obtained as follows:
\[
C_n^L = \begin{bmatrix} m\cos\Delta\psi & m\sin\Delta\psi & 0 \\ -m\sin\Delta\psi & m\cos\Delta\psi & 0 \\ 0 & 0 & m \end{bmatrix}.
\]
The rotation matrix $C_b^L$, which denotes the transformation from the body frame to the LCS, is
\[
C_b^L = C_n^L C_b^n = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix},
\]
\[
\begin{aligned}
T_{11} &= m\cos\Delta\psi\,(\cos\gamma\cos\psi + \sin\gamma\sin\psi\sin\theta) + m\sin\Delta\psi\,(\sin\gamma\cos\psi\sin\theta - \cos\gamma\sin\psi), \\
T_{21} &= -m\sin\Delta\psi\,(\cos\gamma\cos\psi + \sin\gamma\sin\psi\sin\theta) + m\cos\Delta\psi\,(\sin\gamma\cos\psi\sin\theta - \cos\gamma\sin\psi), \\
T_{31} &= -m\sin\gamma\cos\theta, \\
T_{12} &= m\cos\Delta\psi\sin\psi\cos\theta + m\sin\Delta\psi\cos\psi\cos\theta, \\
T_{22} &= m\cos\theta\,(\cos\Delta\psi\cos\psi - \sin\Delta\psi\sin\psi), \\
T_{32} &= m\sin\theta, \\
T_{13} &= m\cos\Delta\psi\,(\sin\gamma\cos\psi - \cos\gamma\sin\psi\sin\theta) - m\sin\Delta\psi\,(\sin\gamma\sin\psi + \cos\gamma\cos\psi\sin\theta), \\
T_{23} &= -m\sin\Delta\psi\,(\sin\gamma\cos\psi - \cos\gamma\sin\psi\sin\theta) - m\cos\Delta\psi\,(\sin\gamma\sin\psi + \cos\gamma\cos\psi\sin\theta), \\
T_{33} &= m\cos\gamma\cos\theta,
\end{aligned}
\]
where θ , γ , and ψ are the pitch, roll, and yaw angles of the body frame relative to the NCS, respectively. The attitudes of the body frame relative to the LCS can be obtained by using the components of C b L , which are as follows:
\[
\begin{cases}
\theta' = \arcsin(T_{32}) = \theta, \\
\gamma' = -\arctan\left( T_{31} / T_{33} \right) = \gamma, \\
\psi' = \arctan\left( T_{12} / T_{22} \right) = \Delta\psi + \psi,
\end{cases}
\]
where $\theta'$, $\gamma'$, and $\psi'$ are the pitch, roll, and yaw angles of the body frame relative to the LCS, respectively. The pitch and roll angles are the same as in the previous steps, and the yaw angle is the sum of the previous yaw angle and the rotation angle $\Delta\psi$.
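The extraction of the localized attitudes can be checked numerically. Here $C_b^n$ is built from an assumed standard ENU yaw-pitch-roll parameterization (our assumption, chosen to be consistent with the $T_{ij}$ expressions above), and $m$ is taken as 1 for the check.

```python
import numpy as np

# Compose C_n^L (m = 1) with a standard ENU direction cosine matrix C_b^n,
# then read back the localized attitude. Angle values are synthetic.
theta, gamma, psi = np.deg2rad([1.5, -0.8, 35.0])   # pitch, roll, yaw
dpsi = np.deg2rad(2.0)                              # NCS -> LCS rotation

st, ct = np.sin(theta), np.cos(theta)
sg, cg = np.sin(gamma), np.cos(gamma)
sp, cp = np.sin(psi), np.cos(psi)
C_bn = np.array([                                   # body -> ENU (assumed form)
    [cg * cp + sg * sp * st, sp * ct, sg * cp - cg * sp * st],
    [-cg * sp + sg * cp * st, cp * ct, -sg * sp - cg * cp * st],
    [-sg * ct, st, cg * ct]])
C_nL = np.array([[np.cos(dpsi), np.sin(dpsi), 0.0],
                 [-np.sin(dpsi), np.cos(dpsi), 0.0],
                 [0.0, 0.0, 1.0]])                  # m = 1 for the check
T = C_nL @ C_bn                                     # C_b^L

theta_L = np.arcsin(T[2, 1])
gamma_L = -np.arctan2(T[2, 0], T[2, 2])
psi_L = np.arctan2(T[0, 1], T[1, 1])
print(np.rad2deg([theta_L, gamma_L, psi_L]))  # pitch/roll unchanged, yaw + dpsi
```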
By using the improved positions and attitudes of the MLSS, the coordinates of point clouds can be directly converted from the body frame to the LCS.

3. Experiments and Discussions

This section shows the experimental results of the inspection work performed in a tunnel of the Beijing–Zhangjiakou high-speed railway and the Wuhan subway line. The former is intended to show the detailed operation process and principle involved, and the latter is intended to verify the reliability and accuracy of the instrument and the proposed algorithm in the actual working environment.

3.1. Information about MLSS

Information about the MLSS used in the experiments is shown in Figure 5.
The inspection trolley utilizes a three-wheel structure to ensure that it is always close to the rail surface during the working process. In this experiment, GNSS equipment was only used for timing. The performance of the laser scanner of the MLSS is shown in Table 1.
The performance of the IMU of the MLSS is shown in Table 2.

3.2. Experiment in a Tunnel of the Beijing–Zhangjiakou High-Speed Railway

3.2.1. Experimental Conditions

The internal environment of a high-speed railway tunnel is the same as that of a subway in that there is also no GNSS signal. Control points are distributed on both sides of the track at a fixed distance interval, i.e., 60 m. The total length of the tunnel studied was about 5–6 km, and the inspection speed was about 3–5 km/h, so about 1.5 h was required to complete the data acquisition. As the MLSS moves forward, the laser scanner on the MLSS keeps scanning; the reconstructed 3-D point cloud includes the tunnel and its related facilities, such as the control points. This experiment was carried out according to the method and procedure of the inspection of subway tunnels.

3.2.2. Experimental Results

The images generated by the data processing are discussed below.
Figure 6 shows the trajectory calculated by the INS and odometer before any correction by control points. The green and red lines in Figure 6 represent the trajectory of the trolley calculated by the INS/odometer; the green portions indicate relatively higher 3-D coordinate accuracy than the red portions. The red crosses represent the control points, which are distributed on both sides along the track, and the green flags mark the starting and end points. The starting and end points of the MLSS did not coincide with control points, but when the inspection trolley passed a control point, the coordinates of that control point could be transferred to the center of the body frame; Figure 8 illustrates how the control points are extracted and their coordinates transferred to the center of the body frame. The coordinates of the starting and end points could then be obtained with the forward and backward DR algorithm.
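The forward and backward DR idea can be illustrated with a minimal 2-D sketch. The function names and the simple heading/odometer step model are assumptions for the example; the actual system uses the full INS/odometer mechanization described in Section 2.

```python
import numpy as np

def dr_forward(start, headings, dists):
    """Forward 2-D dead reckoning: advance from the anchor point by each
    odometer increment along the corresponding yaw angle (ENU: x = East,
    y = North; heading measured clockwise from North)."""
    pos = [np.asarray(start, float)]
    for psi, d in zip(headings, dists):
        pos.append(pos[-1] + d * np.array([np.sin(psi), np.cos(psi)]))
    return np.array(pos)

def dr_backward(end, headings, dists):
    """Backward pass: start from the end anchor and undo the same
    increments in reverse order, as in reverse-navigation DR, so a
    trajectory can be anchored from either direction."""
    pos = [np.asarray(end, float)]
    for psi, d in zip(reversed(list(headings)), reversed(list(dists))):
        pos.append(pos[-1] - d * np.array([np.sin(psi), np.cos(psi)]))
    return np.array(pos[::-1])
```

Running both passes between two control points and comparing (or fusing) the solutions is what allows a starting point that does not coincide with any control point.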
Figure 7 shows the strategy of using control points to correct the errors of position and orientation.
It can be seen from Figure 7 that the control points appear in pairs at equal spacing (60 m). In this experiment, the control points used for correction were selected at equal distance intervals, with the interval chosen as required by the situation; in Figure 7, the interval of the control points selected for correction is 120 m.
Figure 8 shows how to extract control points from point clouds.
The approximate distance between each control point and the starting point of the MLSS was recorded during the fieldwork. With this distance information and the point cloud processing software, we fused the point clouds in the area of each control point, producing a result such as that shown in Figure 8. Enlarging the view reveals the point cloud of a control point, and selecting a control point takes only a few seconds with a mouse click. The coordinates of the control points in the different coordinate systems (LSCS and LCS) can then be calculated using the uncorrected position and attitude information. The number of control points can be chosen freely; the distance interval was 120 m in this experiment.
After a control point was extracted, its coordinates in the LSCS gave the offset between the control point and the center of the MLSS, and its computed coordinates in the point cloud (relative to the LCS) were compared with its surveyed coordinates in the LCS to estimate the four parameters (mentioned in Section 2.5) used to correct the positions and attitudes of the MLSS. The point clouds then no longer need to be transformed from the body frame to a geodetic coordinate system/NCS and then to the LCS; instead, the corrected positions and attitudes can be used to transform all the other point clouds directly from the body frame to the LCS.
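The four-parameter estimation step can be sketched as a linear least-squares fit of a 2-D similarity transform (two translations, one rotation, one scale) to the control-point pairs. The helper names are hypothetical, and the paper's exact formulation in Section 2.5 may differ in detail.

```python
import numpy as np

def fit_four_parameter(src, dst):
    """Least-squares estimate of the 2-D similarity (four-parameter)
    transform dst ~ s*R(alpha)*src + t, solved linearly in
    a = s*cos(alpha), b = s*sin(alpha), tx, ty."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 4))
    # x-equation of each point: a*x - b*y + tx
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    # y-equation of each point: b*x + a*y + ty
    A[1::2] = np.column_stack([src[:, 1], src[:, 0], np.zeros(n), np.ones(n)])
    a, b, tx, ty = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]
    return a, b, tx, ty

def apply_four_parameter(params, pts):
    """Apply the fitted transform to an array of 2-D points."""
    a, b, tx, ty = params
    pts = np.asarray(pts, float)
    return np.column_stack([a * pts[:, 0] - b * pts[:, 1] + tx,
                            b * pts[:, 0] + a * pts[:, 1] + ty])
```

Fitting against the control-point pairs and then applying the result to the trajectory is what replaces the body-frame-to-NCS-to-LCS chain with a single direct correction.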
The point clouds in the LCS are shown in Figure 9.
Since the control points were distributed at 60 m intervals and only half of them were used (a pair of control points extracted every 120 m) to calculate the four parameters, the other half were left for validation. When the whole solution process was completed, the coordinates of the laser point clouds were all relative to the LCS. The local coordinates of the control points not used for correction were compared with the coordinates of their corresponding laser points obtained through point cloud data fusion; the absolute values of the deviations in the horizontal coordinates, elevations, and 3-D positions are shown in Figure 10.
The abscissa is the index of the verification point, and the ordinate shows the deviation in each direction: the red line shows the absolute deviations in the horizontal direction, the green line the absolute deviations in elevation, and the blue line the absolute deviations of the 3-D coordinates. The specific statistics of the deviations of these verification points are shown in Table 3.
With no GNSS signal available, the error correction and coordinate system transformation were carried out using control points at a distance interval of 120 m. The maximum deviation and the root mean square error (RMS) of the 3-D coordinates of the laser point cloud in the LCS were 14 mm and 8 mm, respectively, which meets the inspection requirements.
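The validation statistics above (maximum, minimum, and RMS of the horizontal, elevation, and 3-D deviations, as reported in Table 3) can be computed with a short sketch like the following; the function name and dictionary layout are illustrative assumptions.

```python
import numpy as np

def deviation_stats(laser_pts, control_pts):
    """Max / min / RMS of the horizontal, elevation, and 3-D deviations
    between fused laser points and their reference control points
    (columns assumed as East, North, Up)."""
    d = np.asarray(laser_pts, float) - np.asarray(control_pts, float)
    horiz = np.hypot(d[:, 0], d[:, 1])      # plane deviation
    elev = np.abs(d[:, 2])                  # elevation deviation
    full = np.linalg.norm(d, axis=1)        # 3-D deviation
    rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
    return {name: {"max": float(v.max()), "min": float(v.min()), "rms": rms(v)}
            for name, v in (("horizontal", horiz), ("elevation", elev), ("3d", full))}
```

Applying this to the held-out verification points yields the per-direction maxima and RMS values of the kind tabulated in Table 3.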

3.3. Experiments in the Inspection of Subway Tunnels

The inspection work was carried out on Wuhan subway lines 2 and 6 for about one year, following the method and procedure presented in this paper. Here, we introduce the experimental conditions and the statistical results of the inspection.

3.3.1. Experimental Conditions

The working conditions of the subway detection are shown in Figure 11.
All of the inspection work on the subway line was carried out at night, so the working environment was dark, but this did not affect the performance of the laser scanner.
Figure 12 shows an image of the point cloud of the subway tunnel, which displays its approximate geometry.
As shown in Figure 13, a paper target was placed next to each control point to make it easy to locate the control point during post-processing.
In order to save time, only a small number of point clouds around the control points were fused at first, and the laser points of the corresponding control points could be easily determined with the assistance of the paper target.

3.3.2. Experimental Results and Discussions

The results were obtained by a statistical analysis of the data collected from the inspection of Wuhan subway lines 2 and 6 over about one year. Each inspection covered about 1–2 km, the distance between two subway stations.
Table 4 shows the position accuracy of the point clouds obtained using positions and attitudes of the MLSS that had been corrected only by the coordinates of the control points near the start and end points (two non-paired control points are needed to correct the estimation and provide absolute reference information), with no other correction. The absolute accuracy of the 3-D coordinates was 2.3 cm, which represents the point cloud accuracy achievable with INS/odometer-integrated navigation and no intermediate control point corrections over a distance of 1–2 km. Clearly, this accuracy is not high enough; the distance interval at which the point clouds should be corrected by the coordinates of the control points therefore still needed to be determined.
Table 5 shows the position accuracy of the point clouds obtained using MLSS positions and attitudes corrected by the coordinates of the control points at different distance intervals. A comparison of Table 4 and Table 5 shows that the accuracy was greatly improved in both the horizontal and elevation directions after correction by the control points.
The accuracy of the 3-D coordinates of the point clouds reaches 4 mm when the distance interval of the control points used for correction is 60 m, and it still reaches 8 mm even when the interval is 480 m. Thus, one control point every 480 m is sufficient to achieve absolute detection accuracy at the millimeter level, which greatly reduces the workload.
The approximate time taken by each step of the inspection work is as follows:
(1) The data for a 5–6 km long tunnel can be collected in 1 h.
(2) It takes about 5–10 minutes to fuse the point clouds of the areas around the control points, and only a few seconds to select a control point in the point cloud. In this way, we do not need to transform all the point clouds from the body frame to a geodetic coordinate system/NCS, as described in Section 2.5.1, which greatly reduces the time required.
(3) After that, the positions and attitudes of the MLSS are corrected almost instantly with the algorithm given in Section 2.5.3.
(4) The coordinates of the point clouds in the LCS can then be obtained directly using the corrected positions and attitudes; this step takes about 1 h.
(5) Finally, the inspection of the defects of a 5–6 km long tunnel based on the point clouds takes about 1 h.
Thus, 4 h is enough for the whole inspection process of a 5–6 km long tunnel; in such a short time, the traditional inspection method cannot cover such a long distance, and some tunnel defects cannot be detected at all.
However, as the number of control points used for correction increases, the improvement in horizontal position accuracy is less pronounced than that in elevation. We speculate that the INS error in the elevation direction is divergent, which is determined by the internal principle of the INS itself. The algorithm for the elevation direction could be further optimized so that high accuracy is maintained even with few control points. In addition, point cloud filtering and feature recognition algorithms could be further studied to assist the extraction of tunnel defects during data post-processing, which would reduce the workload.

4. Conclusions

The accuracy of the 3-D coordinates of the point clouds was 8 mm in a tunnel of the Beijing–Zhangjiakou high-speed railway when the distance interval of the control points used for correction was 120 m. The results of the inspection of the Wuhan subway showed that when the distance intervals of the control points used for correction were 60 m, 120 m, 240 m, and 480 m, the accuracies of the 3-D coordinates of the point clouds in the LCS were 4 mm, 6 mm, 7 mm, and 8 mm, respectively. The accuracy of the point cloud coordinates obtained by this method fully meets the needs of subway inspection. The results of the experiment in the Beijing–Zhangjiakou tunnel were essentially the same as those in the Wuhan subway, and both verified the accuracy and reliability of the proposed method.
Assisted by a small number of control points, this method eliminates an MLSS's dependence on GNSS signals. For inspection work that requires coordinates in the LCS, the positioning and orientation information can be used in point cloud data fusion to obtain the LCS coordinates directly, which greatly reduces the data processing time and also improves the accuracy of the point clouds. Additionally, it takes less than 4 h to complete all the inspection work for a 5–6 km long tunnel, which is much more efficient than the traditional method. The experimental results prove the feasibility and high efficiency of the proposed method.

Author Contributions

The framework was proposed by Q.W. and C.T., and further development and implementation was realized by C.D. and Q.M. F.T. and J.C. mainly performed the experimental data collection and processing. H.H. and Y.X. mainly read the paper and made some corrections. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China, grant number 41971413.

Acknowledgments

We are grateful to the Wuhan Metro Bridge and Tunnel Management Co., Ltd. for the technical support and materials used in the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The overall flowchart of the method. INS: inertial navigation system.
Figure 2. Structure charts of INS/odometer-integrated navigation. IMU: inertial measurement unit; DR: dead reckoning; KF: Kalman filter.
Figure 3. Procedure of previous coordinate transformation.
Figure 4. Procedure of the improved coordinate transformation.
Figure 5. The structure of the self-developed mobile laser scanning system (MLSS). (a) The overall structure of the MLSS. (b) The internal structure. The components are: 1. push rod; 2. GPS receiver antenna; 3. laser scanner; 4. inertial measurement unit (IMU) with three-axis gyroscopes and three-axis accelerometers (installed inside); 5. battery; 6. odometer; 7. 2-D laser scanner; 8. time synchronization control board; 9. industrial personal computer.
Figure 6. Trajectory and control points.
Figure 7. The strategy of using control points for correction. The labels are: 1. the railway or subway track; 2. control points used for correction; 3. control points used for verification.
Figure 8. Extraction of control points from point clouds.
Figure 9. Point clouds of the railway tunnel and control points.
Figure 10. Absolute values of the deviations between the coordinates of the control points and those of the corresponding laser points.
Figure 11. Practical working environment of the subway tunnel.
Figure 12. Point cloud image of the subway tunnel.
Figure 13. Point clouds of the paper target.
Table 1. Performance of the laser scanner of the MLSS.

Emission Frequency  Scanning Frequency  Scanning Range  Measuring Distance  Distance Error (Reflectivity = 90%)  Efficiency
10^8 p/s            200 r/s             360°            0.5–119 m           2 mm (distance = 80 m)               3–5 km/h
Table 2. Performance of the IMU of the MLSS.

Gyro Bias  Gyro Bias Stability  Gyro Bias Repeatability  Gyro Random Walk  Accelerometer Bias  Accelerometer Bias Repeatability
≤±0.1°/h   ≤0.01°/h             ≤0.01°/h                 ≤0.003°/√h        ≤0.00005 g          ≤0.00005 g
Table 3. Absolute values of the deviations between point clouds and the corresponding control points. RMS: root mean square error.

             Horizontal  Elevation  3-D
Maximum (m)  0.011       0.013      0.014
Minimum (m)  0.0007      0.000      0.002
RMS (m)      0.006       0.005      0.008
Table 4. Position accuracy of point clouds corrected only near the start and end points.

Horizontal Error (m)     Elevation Error (m)      3-D RMS (m)
Maximum  Average  RMS    Maximum  Average  RMS
0.016    0.008    0.005  0.043    0.026    0.022    0.023
Table 5. Accuracy with control point correction at different densities.

Distance      Horizontal Error (m)     Elevation Error (m)      3-D RMS (m)
Interval (m)  Maximum  Average  RMS    Maximum  Average  RMS
60            0.010    0.005    0.004  0.013    0.007    0.002    0.004
120           0.010    0.004    0.004  0.018    0.009    0.004    0.006
240           0.010    0.004    0.004  0.012    0.002    0.006    0.007
480           0.010    0.004    0.004  0.012    0.002    0.007    0.008
