Article

Exploration of Indoor Barrier-Free Plane Intelligent Lofting System Combining BIM and Multi-Sensors

1 College of Surveying and Geo-Informatics, Tongji University, Shanghai 200092, China
2 Institute of Geotechnical Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China
3 Shanghai Foundation Engineering Group Co., Ltd., Shanghai 200433, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(20), 3306; https://doi.org/10.3390/rs12203306
Submission received: 20 August 2020 / Revised: 28 September 2020 / Accepted: 10 October 2020 / Published: 12 October 2020

Abstract

Lofting is an essential part of construction projects, and high-quality lofting is the basis of efficient construction. However, the most common current lofting method, in which a total station is operated by several people working cooperatively, consumes considerable manpower and time. With the rapid development of remote sensing and robot technology, using robots instead of manpower can effectively solve this problem, but few studies have addressed it. How to effectively combine remote sensing and robots with lofting is a challenging problem. In this paper, we propose an intelligent lofting system for indoor barrier-free plane environments and design a high-flexibility, low-cost autonomous mobile robot platform based on a single-chip microcomputer, a Micro Electro Mechanical Systems-Inertial Measurement Unit (MEMS-IMU), wheel encoders, and a magnetometer. The robot also obtains its own position by combining a Building Information Modeling (BIM) laser lofting instrument with WIFI communication technology. To ensure localization accuracy, the kinematics model of the Mecanum wheel robot is built, and an Extended Kalman Filter (EKF) is used to fuse the multi-sensor data. The final experimental results show that this system can significantly improve lofting efficiency and reduce manpower.

1. Introduction

The construction process mainly includes construction preparation, project construction, project acceptance, etc. Lofting is part of both project construction and project acceptance. It is the process of interpreting construction plans and marking the locations of points or axes. Lofting is performed to ensure that a project is built according to the engineering design plans [1,2]; it is an essential, basic link in building construction that runs through the whole project.
The quality of a project’s lofting depends on its efficiency and cost. In the early stages, limited by the available tools and methods, several operators had to invest a great deal of energy and time in lofting and calibration at the construction site. In the past decade, with the rapid development of the surveying and mapping industry, the advent of the total station has greatly improved lofting efficiency. As an instrument integrating the measurement of horizontal angle, vertical angle, distance, and elevation difference, it can be applied in almost all surveying fields and is easy to operate [3,4]. However, both the harsh environment of most construction sites, such as tunnels, bridges, and high-rise buildings, and atrocious weather have a non-negligible influence on the operators and the construction period. In recent years, robot technology has developed continuously and is widely used in agriculture and forestry monitoring [5,6,7], industrial measurement and control [8,9], indoor monitoring [10,11], and other fields. Many researchers apply unmanned aerial vehicles, unmanned ground robots, underwater robots, etc. to the measurement and control field.
However, the use of ground mobile robots in lofting is still rare, and it is not easy to realize “unmanned lofting”. First, lofting requires high precision, and many rubber-wheel robots have low flexibility, which degrades localization accuracy. Second, although the Global Positioning System-Real Time Kinematic (GPS-RTK) lofting method is convenient [12], the strong attenuation of GPS signals indoors means it cannot be used in indoor environments. Third, most indoor lofting environments have few or repetitive features, which makes feature-matching methods unsuitable for the special environment of lofting [13,14].
In the field of surveying and mapping, lofting instruments and methods are continuously evolving. At present, the most convenient instruments are the GPS-RTK lofting instrument and the Building Information Modeling (BIM) laser lofting instrument, of which the GPS-RTK lofting instrument is not suitable for indoor environments. The BIM laser lofting instrument combines a BIM model with a laser localization system. Unlike the total station, this instrument can be operated by a single operator, greatly reducing manpower. It locks onto a prism through its laser localization system to obtain the three-dimensional coordinates of the prism on the construction site. BIM provides the instrument with an independent coordinate system and the geometric information necessary for lofting [15]. Workers can record lofting errors against the model and only need to establish a coordinate system according to the actual conditions of the project during the model-building stage, without complicated coordinate conversion. During lofting, only one worker is needed, who holds a prism and walks around the construction site to find the points to be lofted according to the instructions of a Personal Digital Assistant (PDA). Currently, many companies mass-produce this kind of instrument, such as Topcon (Japan), Leica (Switzerland), and Trimble (USA). In this paper, an intelligent lofting system is designed on the basis of this kind of instrument, and an autonomous mobile ground robot is constructed to replace humans. The instrument used is the Topcon LN-100 BIM laser lofting instrument.
Therefore, this paper builds a robot platform using Mecanum wheels instead of traditional rubber wheels. We also combine a Micro Electro Mechanical Systems-Inertial Measurement Unit (MEMS-IMU) (including accelerometer and gyroscope), a magnetometer, wheel encoders, the BIM laser lofting instrument, and other sensors to estimate the state of the ground robot. The wheel encoders are used to regulate the velocity of the robot. The BIM laser lofting instrument localizes the robot and sends the information to the embedded STM32F103 series processor mounted on the robot through WIFI and serial communication. The MEMS-IMU and magnetometer together provide the attitude of the robot. The Extended Kalman Filter (EKF) algorithm is then used to fuse these data into a high-precision attitude estimate. The STM32F103 series processor processes the data in real-time and controls the robot to move autonomously to the points to be lofted.
We mainly study plane intelligent lofting in the indoor barrier-free environment. How to obtain the state of the ground robot and how to reach the points to be lofted in a time-saving and labor-saving manner is the focus of this paper. The ‘state’ here refers to the two-dimensional coordinates $(x, y)$ of the ground robot in the lofting environment and its Yaw angle $\psi$. The main innovations and contributions of this paper are:
(1) Mecanum wheels are deployed in the system, which improves localization accuracy at the hardware level;
(2) In data acquisition, we treat localization and attitude separately, which effectively reduces the amount of estimation and calculation. Figure 1 shows the integration framework;
(3) We adopt a simple and efficient ground robot motion control theory suitable for Mecanum wheel robots in indoor barrier-free environments;
(4) We propose a system combining multi-sensors and a BIM laser lofting instrument. It expands the relevant market and provides a reference for future three-dimensional intelligent lofting in multiple scenarios.
The paper is organized as follows. Section 2 summarizes related work on robot localization and the environmental adaptability of lofting. The design of the robot platform is described in Section 3. Section 4 briefly introduces the BIM laser lofting instrument and the acquisition of position information. Section 5 presents multi-sensor data fusion for attitude estimation. The theory of robot motion control is described in Section 6. Section 7 shows the experimental results, and Section 8 discusses them. Finally, the conclusion is given in Section 9.

2. Related Work

Construction lofting involves many operating steps, such as setting up instruments, aligning, leveling, locating, and marking, among which accurate locating is critical. Therefore, for the robot to reach the control points accurately in an unfamiliar environment, it is essential to obtain the robot’s state.
Many scholars have carried out research on the localization of robots in unfamiliar environments. The related work mainly concerns precision requirements, application environments, the selection and combination of sensors, and so on. Multiple sensors are usually used in this research to estimate the state of ground or flying robots. The GPS-RTK positioning method is mature and has high positioning accuracy. Jiang [16] successfully used GPS technology for the layout of a bridge construction horizontal control network and bridge axis lofting, and achieved sufficient lofting accuracy. However, this method is commonly used for outdoor navigation or positioning and is not suitable for indoor environments. Simultaneous localization and mapping (SLAM) technology is currently the main method for robot localization in indoor environments. It can map the environment in real-time so that robots can work in it for a long time, and many indoor robots have adopted it [17,18]. However, this method usually needs to compare against a dynamic map to reduce errors, and since construction sites differ completely from one another, SLAM technology cannot be applied well to lofting. Ultra-wideband (UWB) is also a common method for indoor localization, but its accuracy is insufficient [19,20]. The IMU, as one of the most commonly used sensors for robot state estimation, appears in almost all current robot localization research. However, due to the large accumulative errors caused by wheel slip, sensor drift, and other factors [21] (pp. 483–484), it can hardly be used alone for high-precision state estimation; additional sensors must be used for position or attitude correction, such as Lidar, cameras, or magnetometers. Lidar/IMU [22,23,24], camera/IMU [25,26,27], Lidar/camera/IMU [28,29,30], and other multi-sensor combinations can estimate the robot’s state through feature-matching algorithms. However, these methods still have problems such as insufficient real-time performance and limited environmental adaptability, and the featureless or repetitive-feature nature of lofting environments also makes them inapplicable. Nevertheless, the IMU remains very efficient for estimating robot state.
In [31], the authors proposed a method of fusing two GPS-RTK units with IMU data to estimate a vehicle’s three-dimensional attitude, position, and velocity. However, this method cannot solve the robot localization problem indoors or in signal-occluded environments. Liu [32] describes a robot localization method based on laser SLAM, but high-precision laser radar is often costly. To address this, visual SLAM was proposed; it can match depth data using feature-matching algorithms [33], but its accuracy must be corrected by loop detection, which wastes a lot of time, and a dim environment also degrades localization. In [34], the authors used the EKF algorithm to fuse multi-sensor data including laser SLAM, visual SLAM, IMU, magnetometer, and other sensors to locate the robot, and verified through many experiments that the method can be applied both indoors and outdoors. These multi-sensor fusion methods have achieved good results starting from an unknown location in an unknown environment. For lofting, however, the robot usually starts from a known point: the client will provide at least two known points (or a known direction and a known point) together with the points to be lofted. Therefore, the points to be lofted can be reached simply by solving the relative localization relationship between points, and the above-mentioned methods would increase the calculation amount and cost. In [35], the authors achieved state estimation by analyzing the kinematic model of the robot and corrected wheel-slippage error to a certain extent. However, this approach, which used an IMU and wheel encoders, requires very accurate motion estimation and cannot fundamentally solve the problem of accumulative error; the lack of external sensors for position correction produces large accumulative errors after long-time or long-distance operation. Ferreira [36] described a method of integrating in-building sensors and BIM to determine the location of people in buildings. Although this concerns the localization of people, it shows that BIM has guiding significance for indoor localization to some degree. Considering the existing technologies, the application scenarios, and the goal of improving lofting efficiency, this paper focuses on the localization and navigation of ground robots in the indoor barrier-free plane environment by combining a BIM laser lofting instrument, a BIM model, wheel encoders, a MEMS-IMU, and a magnetometer. The final experiment shows that the results meet the lofting accuracy requirements of most construction projects.

3. General Design

The intelligent lofting system mainly includes three parts: a ground robot, a BIM laser lofting instrument, and a user terminal. The ground robot moves autonomously on the construction site, communicates with the BIM laser lofting instrument through a WIFI module, and processes the data sent by the lofting instrument and the multi-sensors. The BIM laser lofting instrument obtains the robot’s plane position in real-time and sends it to the robot and the user terminal through the WIFI module. The user terminal carries out monitoring, manual analysis, and data recording on the construction site according to the feedback information. Figure 2 shows the general structure of the intelligent lofting system. This section mainly introduces the hardware design of the robot.

3.1. Robot Platform Design

The flexibility and robustness of the ground robot are the basic guarantees of high-precision localization, so the size of the robot should not be too large. In this paper, the Mecanum Wheel Chassis of the Home of Balancing Trolley was selected as the chassis of the ground robot, and secondary development was carried out on it. The robot used an aluminum alloy chassis with a total weight of about 3 kg and overall dimensions of 200 × 250 × 70 mm. We deployed 7 W DC motors to realize four-wheel drive, and used a 12 V lithium battery with a capacity of 3500 mAh as the power supply. We also used the STM32F103 series processor as the control system of the robot and connected the motors and drives through the CAN bus. The maximum speed was 1.2 m/s, the maximum payload was 6 kg, and the maximum runtime reached 5 h. In addition, each wheel was matched with an MG513 encoder deceleration motor, which can resolve rotations of 0.23°.
The BIM laser lofting instrument obtains the position of the prism on the construction site by locking onto the prism with a laser beam. Therefore, the coordinates of the robot on the construction site can be obtained by combining the prism with the robot. To do so effectively, the ground robot adopted a double-layer structure. The STM32F103 series processor, MEMS-IMU, and magnetometer were installed on the first layer, and a 360° prism and laser orientation device were fixed on the second layer by a customized mount. This design ensures that the prism can receive the laser beam from all angles without beam occlusion. Figure 3 shows the overall design of the robot.

3.2. Mecanum Wheel

Traditional ground robots usually use rubber wheels as driving wheels, which are low-cost, high load-bearing, and durable. However, they have difficulty rotating while moving, which is not conducive to reaching points with high precision. To solve this problem, the Mecanum wheel, with its 360° self-rotation characteristic, was selected. The Mecanum wheel was invented by the Swedish engineer Bengt Ilon [37]; through the interaction of several wheels, it can translate and rotate in any direction in the plane. This section introduces the mechanical analysis of a single Mecanum wheel and the feasibility of self-rotation.
Figure 4a is the structural diagram of the Mecanum wheel, which mainly comprises a hub A and wheel rollers B. A is the main support of the whole wheel, and each roller B is a drum mounted on the hub. The angle between A and B is usually ±45°.
Figure 4b shows the force model of the bottom wheel. When the motor drives the wheel to rotate clockwise, friction force F perpendicular to the centerline of the roller is generated. F can be decomposed into a lateral direction force Fx and a longitudinal direction force Fy as shown by the dotted line. It can be seen that unlike a conventional rubber wheel, the Mecanum wheel generates an extra component force Fx perpendicular to the forward direction.
Given the above characteristics of the Mecanum wheel, the robot can self-rotate through 360° via the cooperation of multiple wheels. In our system, there were four Mecanum wheels, and each wheel was equipped with a separate motor drive. As shown in Figure 5a, to verify the feasibility of self-rotation, take the clockwise rotation of the robot as an example. When wheels 1, 2, 3, and 4 rotate clockwise at the same velocity, they generate friction forces F1, F2, F3, and F4 perpendicular to the centerlines of the wheel rollers, and the forces are decomposed:
$$F_i = F_{xi} + F_{yi} \quad (i = 1, 2, 3, 4) \tag{1}$$
where $F_{xi}$ is the lateral component of force $F_i$ and $F_{yi}$ is the longitudinal component of force $F_i$.
Due to the interaction between the forces, the force model can be simplified to Figure 5b, in which the angle between the hub axis and the roller axis is 45°, so $F_{xi} = F_{yi}$ $(i = 1, 2, 3, 4)$. Taking $F_{x1} + F_{x2}$ and $F_{x3} + F_{x4}$ as an example, the two parallel forces are equal in magnitude and opposite in direction, so the resulting force couple produces a pure rotation effect and makes the robot rotate. The cooperation of the four Mecanum wheels provides the robot with the 3 degrees of freedom necessary for omnidirectional motion in the horizontal plane.

3.3. Inverse Kinematic Analysis

Velocity control ensures that the robot operates stably and efficiently. Figure 6 shows the inverse kinematics analysis of the Mecanum wheel robot. The robot was equipped with four wheels symmetrically, with the angle $\alpha = 45°$ in our system. When the robot is placed horizontally on the ground, the position and heading of its center point $o_r$ are taken as the state variables. The vehicle coordinate system $\Sigma o_r$ is established at point $o_r$, which is also the mounting point of the prism.
At this time, the state of the robot in the vehicle coordinate system $\Sigma o_r$ is $P_r = [x_r \; y_r \; \psi_r]^T$, and the kinematics of the wheels are expressed as:
$$\begin{cases} V_x^{(r)} = (V_1 - V_2 - V_3 + V_4)/4 \\ V_y^{(r)} = (V_1 + V_2 + V_3 + V_4)/4 \\ \omega_z^{(r)} = (-V_1 + V_2 - V_3 + V_4)/\big(4(a+b)\big) \end{cases} \tag{2}$$
where $V_x^{(r)}$ is the lateral velocity component in the vehicle coordinate system (positive to the right), $V_y^{(r)}$ is the longitudinal velocity component (positive upward), $\omega_z^{(r)}$ is the angular velocity around the center of the car body (positive counterclockwise), $V_i$ $(i = 1, 2, 3, 4)$ is the linear velocity of wheel $i$, $a$ is half the length of the car, and $b$ is half the width of the car.
And the velocity of the robot in the navigation coordinate system ∑on is:
$$V^{(n)} = \begin{bmatrix} V_x^{(n)} \\ V_y^{(n)} \\ \omega_z^{(n)} \end{bmatrix} = R(\psi_r)\, V^{(r)} = \begin{bmatrix} \cos\psi_r & -\sin\psi_r & 0 \\ \sin\psi_r & \cos\psi_r & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} V_x^{(r)} \\ V_y^{(r)} \\ \omega_z^{(r)} \end{bmatrix} \tag{3}$$
where $V^{(n)}$ collects the horizontal velocity component, vertical velocity component, and angular velocity of the robot in $\Sigma o_n$, and $R(\psi_r)$ is the transformation matrix from $\Sigma o_r$ to $\Sigma o_n$.
The velocity of each wheel in the navigation coordinate system can be obtained from Equations (2) and (3):
$$\begin{bmatrix} V_1 \\ V_2 \\ V_3 \\ V_4 \end{bmatrix} = \begin{bmatrix} \cos\psi_r - \sin\psi_r & \cos\psi_r + \sin\psi_r & -(a+b) \\ -(\cos\psi_r + \sin\psi_r) & \cos\psi_r - \sin\psi_r & a+b \\ -(\cos\psi_r + \sin\psi_r) & \cos\psi_r - \sin\psi_r & -(a+b) \\ \cos\psi_r - \sin\psi_r & \cos\psi_r + \sin\psi_r & a+b \end{bmatrix} \begin{bmatrix} V_x^{(n)} \\ V_y^{(n)} \\ \omega_z^{(n)} \end{bmatrix} \tag{4}$$
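To make Equations (2)–(4) concrete, the sketch below computes the four wheel speeds for a commanded navigation-frame velocity. The half-dimensions a and b follow from the stated 200 × 250 mm chassis, but all numeric values here are illustrative rather than the platform's tuned parameters.

```python
import numpy as np

def wheel_velocities(vx_n, vy_n, wz, psi, a=0.125, b=0.1):
    """Inverse kinematics of Equation (4): navigation-frame velocity
    (vx_n, vy_n, wz) and heading psi -> wheel linear velocities V1..V4."""
    c, s = np.cos(psi), np.sin(psi)
    M = np.array([
        [c - s,    c + s, -(a + b)],
        [-(c + s), c - s,   a + b ],
        [-(c + s), c - s, -(a + b)],
        [c - s,    c + s,   a + b ],
    ])
    return M @ np.array([vx_n, vy_n, wz])

# Example: move at (0.3, 0.2) m/s while turning 0.1 rad/s, heading 30 deg.
V = wheel_velocities(0.3, 0.2, 0.1, np.radians(30))
print(V)  # individual wheel speeds in m/s
```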
Although the wheel velocities of the ideal motion model can be calculated in this way, the actual wheel velocities lag behind due to friction and other effects during running. Direct velocity commands therefore struggle to keep the car stable, so this paper uses wheel encoders and PID control to regulate the velocities:
$$V = k_p e_v + k_i \int e_v \,\mathrm{d}t + k_d \frac{\mathrm{d}e_v}{\mathrm{d}t} \tag{5}$$
where $k_p$ is the proportional coefficient, $k_i$ is the integral coefficient, $k_d$ is the differential coefficient, $e_v = V_i - V_i^{(real)}$ $(i = 1, 2, 3, 4)$, and $V_i^{(real)}$ is the real velocity of wheel $i$ obtained from wheel encoder feedback.
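A minimal discrete form of the Equation (5) regulator is sketched below, assuming encoder feedback arrives at a fixed period; the gains and loop rate are placeholders that would need tuning on the real platform.

```python
class WheelPID:
    """Discrete PID regulator for one wheel, following Equation (5)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, v_set, v_real):
        e = v_set - v_real                       # e_v = V_i - V_i(real)
        self.integral += e * self.dt             # integral term accumulator
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative

# Placeholder gains; dt matches an assumed 100 Hz control loop.
pid = WheelPID(kp=1.2, ki=0.5, kd=0.05, dt=0.01)
command = pid.update(v_set=0.5, v_real=0.46)
```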
To verify the stability of the velocity control, a set of experiments was carried out. We commanded the robot to travel in a straight line from the origin (0, 0) to the designated position (10, 0), a total length of 10 m, and repeated the run 10 times. The experimental results are shown in Table 1.
It can be seen from Table 1 that the maximum error is 10 mm, the mean error is 7.3 mm, and the standard deviation of the errors is 2.1932 mm.
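As a quick cross-check of these statistics, a few lines reproduce them from the Table 1 error column:

```python
import numpy as np

errors = np.array([10, 4, 9, 7, 7, 8, 7, 3, 10, 8])  # mm, from Table 1
print(errors.max())   # 10
print(errors.mean())  # 7.3
print(errors.std())   # ~2.1932 (population standard deviation)
```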

4. BIM Laser Lofting Instrument

In this paper, we use a BIM laser lofting instrument and a 360° prism to obtain the high-precision position of the robot. The BIM laser lofting instrument is a new generation of lofting instrument that combines BIM with a laser dynamic beam orientation system. The instrument is easy to operate and can be leveled automatically or manually with one key after startup. “What you see is what you get” on the construction site is realized through a highly visible guiding light, and each point is lofted independently without cumulative error. Users can monitor the position of the prism in real-time with a Personal Digital Assistant (PDA) and move it to the points to be lofted according to instructions, with a maximum lofting distance of up to 100 m and a measuring accuracy of ±3 mm (distance)/5″ (angle). We selected the Topcon LN-100 BIM laser lofting instrument for our experiment. Figure 7a shows the instrument, and Figure 7b the matching 360° prism. Table 2 lists the main parameters of the Topcon LN-100.
The BIM mentioned above refers to a digital model carrying the geometric information, state information, and professional attributes of a building construction. In the lofting process, the model mainly provides an independent coordinate system, the known points, and the positions of the points to be lofted. Users can intuitively observe the positional relationship between the prism and the points to be lofted through the model. Figure 8 shows the BIM of a teaching building at Tongji University built with Autodesk Revit: Figure 8a is the overall appearance of the building model, and Figure 8b is the first floor of the building.
This instrument provides four station layout methods, which are applicable to almost all lofting construction environments: the Resection method, the Reference Axis (Base Point and Reference Axis) Measurement method, the Backsight Point (Known Point) Measurement method, and the Backsight Point (Reference Axis on the Base Point) Measurement method. Figure 9 is a schematic diagram of the four station setting methods, in which red points represent known positions and blue triangles represent arbitrary positions:
  • Resection: the instrument is set up arbitrarily and measures two or more known points to establish a coordinate system;
  • Reference Axis (Base Point and Reference Axis) Measurement: the instrument is set up arbitrarily and measures the base point (0,0) and a point on the reference axis (x axis or y axis) to establish a coordinate system;
  • Backsight Point (Known Point) Measurement: the instrument is positioned on a known point and measures another known point to establish a coordinate system;
  • Backsight Point (Reference Axis on the Base Point) Measurement: the instrument is positioned on the base point (0,0) and measures a point on the reference axis (x axis or y axis) to establish a coordinate system.
In our experiment, the 360° prism was fixed at the geometric center of the robot, and the robot and the Topcon LN-100 were connected via the WIFI communication module using the Transmission Control Protocol (TCP). The position (x, y) of the prism received by the WIFI module was sent to the processor in real-time, with a data transmission frequency of 20 Hz and a maximum communication distance of 100 m.
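The paper does not publish the LN-100 message format, but consuming a 20 Hz position stream over TCP could look roughly like the sketch below; the host address, port, and line-oriented "x,y" payload are purely illustrative assumptions, not the instrument's actual protocol.

```python
import socket

def read_positions(host="192.168.0.1", port=8080):
    """Yield (x, y) fixes from an assumed line-oriented TCP stream at ~20 Hz."""
    with socket.create_connection((host, port)) as sock:
        buffer = b""
        while True:
            buffer += sock.recv(1024)
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                x, y = map(float, line.decode().split(","))  # assumed "x,y" payload
                yield x, y

for x, y in read_positions():
    print(f"prism at ({x:.3f}, {y:.3f}) m")
    break
```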

5. Multi-Sensors Fusion Algorithm

The attitude of the ground robot is usually estimated from multiple sensors. We use a MEMS-IMU and a magnetometer to obtain the attitude: the magnetometer and the accelerometer in the MEMS-IMU output one group of attitude data, and the gyroscope in the MEMS-IMU outputs another. However, due to the defects and accumulative errors of each single sensor, neither group of estimates can serve as the robot’s attitude alone for long. Therefore, the EKF algorithm is adopted to fuse the two groups of data into an optimal estimate of the robot attitude. This part introduces the EKF algorithm that fuses the multi-sensor data.

5.1. Sensors Angle Output

Common attitude measurement sensors cannot directly output angle data. For example, the gyroscope outputs the angular velocity of the robot, which must be converted into an angle through integration. This part briefly analyzes the conversion process for the accelerometer, magnetometer, and gyroscope.

5.1.1. Accelerometer

The accelerometer obtains the attitude of the object from the components of gravitational acceleration measured on each of its axes. The navigation coordinate system in this paper is set as the east (x), north (y), up (z) coordinate system. As shown in Figure 10, a calibrated accelerometer $T_a$ is placed obliquely in space, its rotation process is $R(\cdot) = X(\cdot)Y(\cdot)Z(\cdot)$, and all rotations are assumed to be in the positive direction of the attitude angles. It outputs the projection components $A^{(c)} = [A_x^{(c)} \; A_y^{(c)} \; A_z^{(c)}]^T$ of the gravitational acceleration $g$ on the three axes $x_c, y_c, z_c$ of the carrier coordinate system $\Sigma o_c$.
The rotation matrix is calculated as follows:
$$\begin{bmatrix} A_x^{(c)} \\ A_y^{(c)} \\ A_z^{(c)} \end{bmatrix} = R(\cdot)\begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix} = X(\varphi)Y(\theta)Z(\psi)\begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & \sin\varphi \\ 0 & -\sin\varphi & \cos\varphi \end{bmatrix}\begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}\begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix} = \begin{bmatrix} -g\sin\theta \\ g\sin\varphi\cos\theta \\ g\cos\varphi\cos\theta \end{bmatrix} \tag{6}$$
where $g$ is the gravitational acceleration, taken as $g = 9.8 \,\mathrm{m/s^2}$; $R(\cdot)$ is the rotation matrix; $A^{(c)} = [A_x^{(c)} \; A_y^{(c)} \; A_z^{(c)}]^T$ holds the components of the gravitational acceleration on the three axes of the carrier coordinate system $\Sigma o_c$; and $\varphi$, $\theta$, and $\psi$ are the Roll, Pitch, and Yaw angles of the accelerometer $T_a$ in $\Sigma o_c$.
According to Equation (6), the angles converted from the accelerometer data are:
$$\begin{cases} \theta = \arcsin\big({-A_x^{(c)}}/{g}\big) \\ \varphi = \arctan\big({A_y^{(c)}}/{A_z^{(c)}}\big) \end{cases} \tag{7}$$
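In code, Equation (7) reduces to a few lines. The sketch below uses atan2 for the roll so the quadrant is handled when $A_z^{(c)}$ goes negative, and clamps the arcsine argument against sensor noise; both are common hardenings, not part of the original formula.

```python
import math

def pitch_roll_from_accel(ax, ay, az, g=9.8):
    """Tilt from gravity components in the carrier frame, Equation (7)."""
    theta = math.asin(max(-1.0, min(1.0, -ax / g)))  # pitch, clamped for noise
    phi = math.atan2(ay, az)                         # roll, quadrant-safe
    return theta, phi

# Level sensor: gravity entirely on the z axis -> zero pitch and roll.
print(pitch_roll_from_accel(0.0, 0.0, 9.8))  # (0.0, 0.0)
```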

5.1.2. Magnetometer

The magnetometer measures the magnetic field intensity components on each axis of the object, from which the Yaw angle $\psi$ can be calculated. Its conversion principle is similar to that of the accelerometer, so it is not illustrated with a separate figure. First, without considering the magnetic declination, assume that a calibrated magnetometer $T_m$ is placed level on the ground. $T_m$ outputs the magnetic field intensity components $M^{(c)} = [M_x^{(c)} \; M_y^{(c)} \; M_z^{(c)}]^T$ of the earth’s magnetic field $H$ on the three axes $x_c, y_c, z_c$ of the carrier coordinate system $\Sigma o_c$. The Yaw angle $\psi$ is then calculated as follows:
$$\psi = \arctan\big({M_y^{(c)}}/{M_x^{(c)}}\big) \tag{8}$$
However, during actual running the robot cannot maintain perfectly planar motion: unevenness of the road surface and gaps in the ground cause instantaneous Pitch or Roll angles. Under the influence of Pitch and Roll, Equation (8) is no longer valid. Therefore, again without considering the magnetic declination, assume that the magnetometer $T_m$ is placed obliquely in space with rotation process $R(\cdot) = X(\cdot)Y(\cdot)Z(\cdot)$, all rotations in the positive direction of the attitude angles. The rotation is then calculated as follows:
$$\begin{bmatrix} M_x^{(c)} \\ M_y^{(c)} \\ M_z^{(c)} \end{bmatrix} = R(\cdot)\begin{bmatrix} 0 \\ H_y \\ H_z \end{bmatrix} = X(\varphi)Y(\theta)Z(\psi)\begin{bmatrix} 0 \\ H_y \\ H_z \end{bmatrix} = \begin{bmatrix} \cos\theta\sin\psi\, H_y - \sin\theta\, H_z \\ (\sin\varphi\sin\theta\sin\psi + \cos\varphi\cos\psi) H_y + \sin\varphi\cos\theta\, H_z \\ (\cos\varphi\sin\theta\sin\psi - \sin\varphi\cos\psi) H_y + \cos\varphi\cos\theta\, H_z \end{bmatrix} \tag{9}$$
where the Pitch angle $\theta$ and Roll angle $\varphi$ are known from Equation (7); $H = [0 \; H_y \; H_z]^T$ is the magnetic field intensity at point $o_c$ in the earth’s magnetic field coordinate system, with $H_x = 0$ because the horizontal geomagnetic component points from the geomagnetic south pole to the geomagnetic north pole; and $M^{(c)} = [M_x^{(c)} \; M_y^{(c)} \; M_z^{(c)}]^T$ holds the components of the earth’s magnetic field on the three axes of the carrier coordinate system $\Sigma o_c$.
Combining Equations (7) and (9) yields the Yaw angle $\psi$:
$$\psi = \arctan\left(\frac{M_x^{(c)}\cos\theta + M_y^{(c)}\sin\theta\sin\varphi + M_z^{(c)}\sin\theta\cos\varphi}{M_y^{(c)}\cos\varphi - M_z^{(c)}\sin\varphi}\right) \tag{10}$$
The Yaw angle $\psi$ obtained from the magnetometer and the Pitch angle $\theta$ and Roll angle $\varphi$ obtained from the accelerometer form one group of attitude data to be fused with the gyroscope data.
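Equation (10) can be sketched in the same way, reusing the pitch and roll from the accelerometer step; atan2 replaces the plain arctangent so the heading quadrant is resolved, and magnetometer units cancel in the ratio, so raw readings can be used directly:

```python
import math

def yaw_from_mag(mx, my, mz, theta, phi):
    """Tilt-compensated heading, Equation (10)."""
    num = mx * math.cos(theta) + my * math.sin(theta) * math.sin(phi) \
          + mz * math.sin(theta) * math.cos(phi)
    den = my * math.cos(phi) - mz * math.sin(phi)
    return math.atan2(num, den)  # atan2 resolves the arctan quadrant

# Level sensor (theta = phi = 0): the heading depends only on mx and my.
print(math.degrees(yaw_from_mag(0.0, 20.0, -40.0, 0.0, 0.0)))  # 0.0
```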

5.1.3. Gyroscope

The gyroscope collects the angular velocity $\omega_i$ $(i = x, y, z)$ of each axis as the object rotates, and the attitude is obtained by integrating over time.
As shown in Figure 11, take the Pitch angle as an example. Assuming that the gyroscope’s Pitch angle at time $t_0$ is $\theta_0$ and at time $t_1$ is $\theta_1$, and that its angular velocity during this period is a function $\omega_y(t)$, then:
$$\theta_1 = \theta_0 + \int_{t_0}^{t_1} \omega_y(t)\,\mathrm{d}t \tag{11}$$
where $\omega_y(t)$ is the angular velocity about the $y$ axis as a function of time $t$.
The Roll angle $\varphi$ and the Yaw angle $\psi$ are calculated in the same way, namely:
$$\begin{cases} \varphi_1 = \varphi_0 + \int_{t_0}^{t_1} \omega_x(t)\,\mathrm{d}t \\ \psi_1 = \psi_0 + \int_{t_0}^{t_1} \omega_z(t)\,\mathrm{d}t \end{cases} \tag{12}$$
The Pitch angle $\theta$, Roll angle $\varphi$, and Yaw angle $\psi$ obtained by the gyroscope form the other group of attitude data to be fused with the accelerometer and magnetometer data.
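Equations (11) and (12) amount to accumulating angular rate over each sample period. The sketch below uses simple Euler integration at an assumed 200 Hz rate and illustrates why an uncorrected bias makes pure integration drift:

```python
def integrate_gyro(angles, rates, dt):
    """One Euler-integration step of Equations (11) and (12).
    angles = (phi, theta, psi), rates = (wx, wy, wz) in rad/s."""
    return tuple(a + w * dt for a, w in zip(angles, rates))

# A constant 0.01 rad/s bias drifts ~0.57 deg for every second of integration.
angles = (0.0, 0.0, 0.0)
for _ in range(200):                 # 1 s of data at an assumed 200 Hz
    angles = integrate_gyro(angles, (0.0, 0.0, 0.01), dt=1 / 200)
print(angles[2])                     # ~0.01 rad of spurious yaw
```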

5.2. Data Fusion Algorithm

The Kalman filter is currently one of the most efficient algorithms for fusing multi-sensor data when estimating the state of a robot. It continuously refines the system state from incoming observations and outputs the optimal state estimate. However, the standard algorithm applies to linear systems and is not directly applicable to the robot system in this paper. Considering the nonlinear characteristics of the system and the required efficiency of the fusion algorithm, we adopted the EKF algorithm for multi-sensor data fusion in this nonlinear system.

Extended Kalman Filter

In this study, the sensors were fixed directly to the robot and had been pre-calibrated. The state equation of the system is expressed as:
$$x_k = f(x_{k-1}, u_{k-1}, w_{k-1}) = \begin{bmatrix} \big(I + \frac{T}{2}\gamma(w_{ik})\big)_{4\times4} & 0_{4\times3} \\ 0_{3\times4} & I_{3\times3} \end{bmatrix} x_{k-1} + w_{k-1} \tag{13}$$
where the input variables of the system are disregarded ($u_{k-1} = 0$); $x_k$ is the state variable; $T$ is the update period; $\frac{1}{2}\gamma(w_{ik})$ comes from the quaternion differential equation; and $w_{k-1}$ is the process noise, which follows a zero-mean Gaussian distribution, $w_k \sim N(0, Q)$.
The state variable is selected as:
$$x_k = \begin{bmatrix} q_\omega & q_x & q_y & q_z & w_{xk}^{(g)} & w_{yk}^{(g)} & w_{zk}^{(g)} \end{bmatrix}^T \tag{14}$$
where $q = [q_\omega \; q_x \; q_y \; q_z]^T$ is the quaternion representing the robot’s attitude in the world coordinate system, and $w_{ik}^{(g)}$ $(i = x, y, z)$ is the gyroscope drift bias on axis $i$ at time $k$.
$\gamma(w_{ik})$ is expressed as:
$$\gamma(w_{ik}) = \begin{bmatrix} 0 & -(\omega_{xk} - w_{xk}^{(g)}) & -(\omega_{yk} - w_{yk}^{(g)}) & -(\omega_{zk} - w_{zk}^{(g)}) \\ \omega_{xk} - w_{xk}^{(g)} & 0 & \omega_{zk} - w_{zk}^{(g)} & -(\omega_{yk} - w_{yk}^{(g)}) \\ \omega_{yk} - w_{yk}^{(g)} & -(\omega_{zk} - w_{zk}^{(g)}) & 0 & \omega_{xk} - w_{xk}^{(g)} \\ \omega_{zk} - w_{zk}^{(g)} & \omega_{yk} - w_{yk}^{(g)} & -(\omega_{xk} - w_{xk}^{(g)}) & 0 \end{bmatrix} \tag{15}$$
where $\omega_{ik}$ $(i = x, y, z)$ is the angular velocity on axis $i$ at time $k$, obtained from the gyroscope.
The observation equation of the system is expressed as:
$$z_k = h(x_k, v_k) = \begin{bmatrix} R^{(n)}_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & R^{(n)}_{3\times3} \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ g \\ 0 \\ H_y \\ H_z \end{bmatrix} + v_k = \begin{bmatrix} h_{1a} \\ h_{2m} \end{bmatrix} + v_k \tag{16}$$
where $z_k$ is the observation variable, with the accelerometer and magnetometer data used as the observations; $R^{(n)}_{3\times3}$ is the rotation matrix represented by quaternions; $g$ is the gravitational acceleration; $H_y$ and $H_z$ are the north and vertical geomagnetic components in the navigation coordinate system $\Sigma o_n$; $h_{1a}$ is the observation equation of the accelerometer; $h_{2m}$ is the observation equation of the magnetometer; and $v_k$ is the observation noise, which follows a zero-mean Gaussian distribution, $v_k \sim N(0, R)$.
From Equations (13) and (16), the functions $f(\cdot)$ and $h(\cdot)$ are nonlinear and need to be linearized. The EKF obtains the state transition matrix $F_k$ and observation matrix $H_k$ by taking partial derivatives of the nonlinear functions, that is, the Jacobian matrices:
$$F_k = \left.\frac{\partial f}{\partial x_k}\right|_{x = \hat{x}_{k-1}} = \begin{bmatrix} \big(I + \frac{T}{2}\gamma(w_k)\big)_{4\times4} & \delta^1_{4\times3} \\ 0_{3\times4} & I_{3\times3} \end{bmatrix} \tag{17}$$
$$H_k = \left.\frac{\partial h}{\partial x_k}\right|_{x = \hat{x}_k^-} = 2\begin{bmatrix} \delta^2_{3\times4} & 0_{3\times3} \\ \delta^3_{3\times4} & 0_{3\times3} \end{bmatrix} \tag{18}$$
where $\hat{x}_k$ is the estimate of $x_k$, and $\delta^1$ is the partial derivative of $f(\cdot)$ with respect to $w_{ik}$:
$$\delta^1 = \frac{\partial f}{\partial w_{ik}} = \frac{T}{2}\begin{bmatrix} q_x & q_y & q_z \\ -q_\omega & q_z & -q_y \\ -q_z & -q_\omega & q_x \\ q_y & -q_x & -q_\omega \end{bmatrix} \tag{19}$$
$\delta^2$ is the partial derivative of $h_{1a}$ with respect to $q$:
$$\delta^2 = \frac{\partial h_{1a}}{\partial q} = |g|\begin{bmatrix} -q_y & q_z & -q_\omega & q_x \\ q_x & q_\omega & q_z & q_y \\ q_\omega & -q_x & -q_y & q_z \end{bmatrix} \tag{20}$$
$\delta^3$ is the partial derivative of $h_{2m}$ with respect to $q$:
$$\delta^3 = \frac{\partial h_{2m}}{\partial q} = \begin{bmatrix} H_y q_z - H_z q_y & H_y q_y + H_z q_z & H_y q_x - H_z q_\omega & H_y q_\omega + H_z q_x \\ H_y q_\omega + H_z q_x & -H_y q_x + H_z q_\omega & H_y q_y + H_z q_z & -H_y q_z + H_z q_y \\ -H_y q_x + H_z q_\omega & -(H_y q_\omega + H_z q_x) & H_y q_z - H_z q_y & H_y q_y + H_z q_z \end{bmatrix} \tag{21}$$
Finally, the state update equation is obtained:
$$\hat{x}_k = \hat{x}_k^- + K_k\big(z_k - h(\hat{x}_k^-, 0)\big) \tag{22}$$
where $\hat{x}_k^-$ is the prior estimate of $x_k$ and $K_k$ is the Kalman gain; their expressions are as follows:
$$\hat{x}_k^- = F_k\, \hat{x}_{k-1} \tag{23}$$
$$K_k = P_k^- H_k^T \big(H_k P_k^- H_k^T + R\big)^{-1} \tag{24}$$
In Equation (24), $P_k^-$ is the error covariance matrix of the prior estimate at time $k$:
$$P_k^- = F_k P_{k-1} F_k^T + Q \tag{25}$$
where $P_{k-1}$ is the error covariance matrix of the posterior estimate at time $k-1$; its update equation is:
$$P_k = (I - K_k H_k)\, P_k^- \tag{26}$$
The state variables are updated through Equation (22), and the final attitude is output via the conversion between quaternions and Euler angles. The Yaw angle $\psi$ is taken as the main attitude reference during lofting. The attitude solution diagram is shown in Figure 12.
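To make the filter concrete, the sketch below runs one predict-update cycle of Equations (13)–(26). It is a minimal illustration rather than the onboard implementation: the closed-form Jacobians $\delta^1$–$\delta^3$ are replaced by finite differences, the quaternion convention in quat_to_dcm is one common choice, and the gravity/geomagnetic reference values and noise covariances are placeholder assumptions.

```python
import numpy as np

def quat_to_dcm(q):
    """Rotation matrix for q = [qw, qx, qy, qz] (body-from-navigation,
    one common convention)."""
    w, x, y, z = q
    return np.array([
        [w*w + x*x - y*y - z*z, 2*(x*y + w*z),         2*(x*z - w*y)],
        [2*(x*y - w*z),         w*w - x*x + y*y - z*z, 2*(y*z + w*x)],
        [2*(x*z + w*y),         2*(y*z - w*x),         w*w - x*x - y*y + z*z]])

def f(x, gyro, T):
    """State propagation of Equation (13): quaternion plus constant gyro bias."""
    q, bias = x[:4], x[4:]
    wx, wy, wz = gyro - bias
    gamma = np.array([[0, -wx, -wy, -wz],
                      [wx,  0,  wz, -wy],
                      [wy, -wz,  0,  wx],
                      [wz,  wy, -wx,  0]])       # Equation (15)
    q_new = (np.eye(4) + T / 2 * gamma) @ q
    q_new /= np.linalg.norm(q_new)               # keep the quaternion unit-length
    return np.concatenate([q_new, bias])

def h(x, g=9.8, H=(0.0, 20.0, -40.0)):           # reference vectors are illustrative
    """Observation of Equation (16): predicted gravity and field in the body frame."""
    C = quat_to_dcm(x[:4])
    return np.concatenate([C @ np.array([0.0, 0.0, g]), C @ np.asarray(H)])

def jacobian(func, x, eps=1e-6):
    """Finite-difference stand-in for the analytic Jacobians (17)-(21)."""
    y0 = func(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - y0) / eps
    return J

def ekf_step(x, P, gyro, z, Q, R, T):
    """One predict-update cycle, Equations (22)-(26)."""
    F = jacobian(lambda s: f(s, gyro, T), x)
    x_prior = f(x, gyro, T)                                       # Eq. (23)
    P_prior = F @ P @ F.T + Q                                     # Eq. (25)
    Hk = jacobian(h, x_prior)
    K = P_prior @ Hk.T @ np.linalg.inv(Hk @ P_prior @ Hk.T + R)   # Eq. (24)
    x_post = x_prior + K @ (z - h(x_prior))                       # Eq. (22)
    x_post[:4] /= np.linalg.norm(x_post[:4])
    P_post = (np.eye(x.size) - K @ Hk) @ P_prior                  # Eq. (26)
    return x_post, P_post

# Stationary, level robot: the filter should keep the identity attitude.
x = np.array([1.0, 0, 0, 0, 0, 0, 0])   # quaternion + zero gyro bias
P = np.eye(7) * 0.01
Q, R = np.eye(7) * 1e-5, np.eye(6) * 1e-2
x, P = ekf_step(x, P, gyro=np.zeros(3), z=h(x), Q=Q, R=R, T=1 / 200)
```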

6. Motion Control

Through the BIM laser lofting instrument and the fusion algorithm, the robot obtains its actual position and attitude $(x, y, \psi)$ in an independent coordinate system. However, how to reach the points to be lofted efficiently is still a key problem. Considering the time-saving purpose of lofting, we adopt a simple robot motion control theory based on the omnidirectional self-rotation characteristic of the Mecanum wheel.
As shown in Figure 13, the robot is at point 1 and needs to reach point 2, the point to be lofted. The position and attitude of the robot at point 1 is $(x_1, y_1, \psi_1)$, and the position of point 2 is $(x_2, y_2)$, provided by the client. To complete the lofting of point 2, the robot needs to rotate by the angle $\beta$ around its geometric center and move forward a distance $L$. Both $\beta$ and $L$ can be solved from the known data.
The solution formulas are as follows:
$$\begin{cases} \beta = \eta - \psi_1 = \operatorname{atan2}(y_2 - y_1,\, x_2 - x_1) - \psi_1 \\ L = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \end{cases} \tag{27}$$
where $(x_1, y_1)$ is the position of the robot at point 1, $\psi_1$ is its attitude at point 1, $(x_2, y_2)$ is the position of point 2, $\eta$ is the angle between the line from point 1 to point 2 and the positive direction of the $x$-axis, $\beta$ is the required rotation angle of the robot, and $L$ is the distance between point 1 and point 2.
In practice, however, the robot can hardly rotate exactly by $\beta$ or move forward exactly $L$. When the robot deviates from the expected trajectory, it corrects itself in real-time according to its current position and attitude $(x_{now}, y_{now}, \psi_{now})$. The calculation of $\beta_{now}$ and $L_{now}$ is analogous to (27), namely:
$$\begin{cases} \beta_{now} = \eta_{now} - \psi_{now} = \operatorname{atan2}(y_2 - y_{now},\, x_2 - x_{now}) - \psi_{now} \\ L_{now} = \sqrt{(x_2 - x_{now})^2 + (y_2 - y_{now})^2} \end{cases} \tag{28}$$
where $(x_{now}, y_{now})$ is the real-time position of the robot, $\psi_{now}$ is its real-time attitude, $\eta_{now}$ is the angle between the line from the robot’s real-time position to point 2 and the positive direction of the $x$-axis, $\beta_{now}$ is the required real-time rotation angle, and $L_{now}$ is the distance between the robot’s real-time position and point 2.
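One correction step of Equation (28) translates directly into code; the example target and pose below are illustrative. Wrapping the rotation angle into [-pi, pi] keeps the robot from taking the long way around:

```python
import math

def correction_step(x_now, y_now, psi_now, x2, y2):
    """Rotation and remaining distance to the lofting point, Equation (28)."""
    eta = math.atan2(y2 - y_now, x2 - x_now)
    beta = math.atan2(math.sin(eta - psi_now), math.cos(eta - psi_now))  # wrapped
    L = math.hypot(x2 - x_now, y2 - y_now)
    return beta, L

# Robot at (1, 0) heading 0 rad, target (3, 4): turn ~63.4 deg, drive ~4.47 m.
beta, L = correction_step(1.0, 0.0, 0.0, 3.0, 4.0)
print(math.degrees(beta), L)
```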

7. Experiment and Results

This part introduces the distribution of sensing and processing tasks on the robot, verifies the feasibility of the system, and validates the localization accuracy of the robot through practical experiments.

7.1. Introduction of Task Distribution

The robot platform introduced in this paper was mainly equipped with wheel encoders, a MEMS-IMU, and a magnetometer. It works together with the BIM laser lofting instrument, an external sensor, to determine point positions during lofting. The wheel encoders are used to regulate the velocity of the robot. The BIM laser lofting instrument locks onto the prism fixed on top of the robot and outputs the robot’s position at 20 Hz; this position is sent to the robot’s embedded STM32F103 series processor through WIFI communication for data processing. The MEMS-IMU and magnetometer data are sent to the STM32F103 series processor through serial communication, and the processor executes the EKF fusion algorithm to estimate the robot attitude. The processor has a 72 MHz CPU (1.25 DMIPS/MHz) and 256 KB of flash memory and can output the attitude estimate at 200 Hz.
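Because the position arrives at 20 Hz and the attitude at 200 Hz, the control loop must combine samples of different ages. A simple latest-value scheme is sketched below, with queues standing in for the real WIFI and serial interfaces; this structure is an assumption for illustration, not the firmware's actual scheduling.

```python
import queue

position_q = queue.Queue()   # (x, y) fixes from the LN-100 via WIFI, ~20 Hz
attitude_q = queue.Queue()   # fused yaw estimates from the EKF, ~200 Hz

def control_loop(steps):
    """Drive each cycle with the freshest available position and attitude."""
    x = y = psi = None
    for _ in range(steps):
        while not position_q.empty():
            x, y = position_q.get_nowait()   # keep only the newest fix
        while not attitude_q.empty():
            psi = attitude_q.get_nowait()
        if x is not None and psi is not None:
            pass  # compute (beta_now, L_now) via Equation (28) and command wheels

position_q.put((1.0, 0.0))
attitude_q.put(0.05)
control_loop(steps=1)
```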

7.2. Experimental Results

To verify the localization accuracy, this experiment was carried out in an indoor barrier-free environment. The sensors were pre-calibrated, and minor errors in the initial orientation and alignment were ignored. We adopted the custom coordinate system method and set up the instrument arbitrarily. Figure 14 shows the point layout and results of this experiment. Twelve points were selected: the 2 red points were the initial points ((0, 0) was the installation position of the BIM laser lofting instrument, and (1, 0) was the initial position of the robot), the remaining 10 black points were the points to be lofted, and the ‘+’ marks are the lofting results for the 10 black points. The experimental distances ranged from 0.72 m to 29 m. The experimental results are shown in Table 3.
The localization accuracy of the robot is crucial for lofting. Table 3 shows the positions of the ten points to be lofted and the actual localization accuracy. As Table 3 shows, the average localization error in the experiment is 7.84 mm, the standard deviation of the errors is 1.427 mm, and the maximum error is only 10.6 mm, which meets most lofting requirements. To compare the lofting time of our method with the traditional method, we used a total station to stake out the same ten points. For the 10 points, the robot took 7.43 min and the total station took 19.6 min. The robot lofting method proposed in this paper thus improves efficiency by about 163.8% compared with the traditional lofting method while reducing manpower.

8. Discussion

The intelligent lofting system proposed in this article has practical application value for many projects, such as indoor decoration and indoor lofting, and can improve lofting efficiency while maintaining sufficient accuracy. Table 4 lists the accuracy control standards for indoor lofting in China’s GB 50210 and GB 50242.
According to Tables 3 and 4, our method meets the standards of most indoor lofting projects. We attribute the maximum localization error to the error of the BIM laser lofting instrument itself and to robot manufacturing errors. The sensors’ resolution and the environment of the experiment site also affect accuracy; among these factors, the lofting environment is one of the main ones. Rugged ground seriously disturbs the balance of the Mecanum wheels, which will also affect our subsequent outdoor lofting research. Improving the localization algorithm and the robot motion control theory to further increase lofting accuracy is one of the future research directions of this paper.
As for lofting time, for both the method proposed in this article and the traditional method, we recorded the ground marking time as 10 s. This value is realistic, although the marking time of the robot may become shorter in the future. The leveling time of the instrument cannot be ignored either, because fast leveling is one of the advantages of the BIM laser lofting instrument.

9. Conclusions

We propose an intelligent lofting system for the indoor barrier-free plane environment built around a BIM laser lofting instrument. The method has application value for indoor decoration, indoor point layout, indoor equipment pre-placement, etc. This paper describes the design of the robot platform, velocity control, robot localization, and motion control theory. The final experimental results show that the method achieves the expected goal. For the central localization problem, we propose a method that combines internal and external multi-sensor data to estimate the state of the robot. Compared with the traditional method, its accuracy meets the requirements of most indoor lofting, its efficiency is higher, and the robot platform runs stably.
The proposed method is significant for large-area, tedious, and repetitive indoor lofting work, such as tile position lofting in large office buildings, indoor decoration, and factory equipment pre-placement. Compared with the traditional total station lofting method, our method further reduces manpower through the robot, improves lofting speed, and expands the sales path of the BIM laser lofting instrument while ensuring lofting accuracy. However, its main limitation at present is that it is only suitable for two-dimensional, indoor, barrier-free lofting environments, which makes it difficult to handle most outdoor lofting projects or indoor environments with obstacles. Maintaining the robot also incurs costs.
Further work will add more sensors to the system, focus on intelligent lofting in indoor and outdoor environments with obstacles, and improve the localization algorithm and motion control theory to increase localization accuracy. We believe that this method can effectively solve the existing problems in lofting and greatly improve lofting efficiency.

Author Contributions

Z.Z. was responsible for building the hardware, writing the manuscript, and designing the experiment. B.Y. was responsible for the translation and revision of this article. Z.Z. and B.Y. conducted all the experiments together. D.Y. provided equipment and the test site. X.C. provided the overall conception of this research and revised the manuscript. Z.Z. and X.C. analyzed and discussed the experimental results together. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China project “Research on Estimation of Stand-Level Forest Parameters from Aerial-Terrestrial LiDAR Point Clouds Using Deep Transfer Learning” (project number: 41974213).

Acknowledgments

We are very grateful to the Shanghai Topcon-Sokkia Technology and Trading Co., Ltd. and Shanghai Tongsuo Surveying and Mapping Technology Co., Ltd. for their cooperation and effort in providing the BIM laser lofting instrument of Topcon LN-100.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gu, X.; Bao, F.; Cheng, X. Construction Engineering Survey. In Elementary Surveying, 4th ed.; Yang, N., Ed.; Tongji University Press: Shanghai, China, 2011; p. 288.
2. Paul, S. Using a Transit and Tape. In Construction Surveying & Layout, 1st ed.; William, D.M., Ed.; Building News: Anaheim, CA, USA; Washington, DC, USA, 2002; pp. 43–61.
3. Luo, Y.; Chen, J.; Xi, W.; Zhao, P.; Qiao, X.; Deng, X.; Liu, Q. Analysis of tunnel displacement accuracy with total station. Measurement 2016, 83, 29–37.
4. Qingjuana, S.; Hui, C. Surveying and Plotting Method for the River Channel Section Based on the Total Station and CASS. Procedia Eng. 2012, 28, 424–428.
5. Bechar, A.; Vigneault, C. Agricultural robots for field operations: Concepts and components. Biosyst. Eng. 2016, 149, 94–111.
6. Hansen, K.D.; Garcia-Ruiz, F.; Kazmi, W.; Bisgaard, M.; la Cour-Harbo, A.; Rasmussen, J.; Andersen, H.J. An Autonomous Robotic System for Mapping Weeds in Fields. In Proceedings of the IFAC Proceedings Volumes, Gold Coast, Australia, 26–28 June 2013; pp. 217–224.
7. Unal, I.; Kabas, O.; Sozer, S. Real-Time Electrical Resistivity Measurement and Mapping Platform of the Soils with an Autonomous Robot for Precision Farming Applications. Sensors 2020, 20, 251.
8. Vithanage, R.K.W.; Harrison, C.S.; De Silva, A.K.M. Autonomous rolling-stock coupler inspection using industrial robots. Robot. Comput. Integr. Manuf. 2019, 59, 82–91.
9. Guo, S.; Diao, Q.; Xi, F. Vision Based Navigation for Omni-directional Mobile Industrial Robot. Procedia Comput. Sci. 2017, 105, 20–26.
10. Neumann, P.P.; Hüllmann, D.; Bartholmai, M. Concept of a gas-sensitive nano aerial robot swarm for indoor air quality monitoring. Mater. Today Proc. 2019, 12, 470–473.
11. AlHaza, T.; Alsadoon, A.; Alhusinan, Z.; Jarwali, M.; Alsaif, K. New Concept for Indoor Fire Fighting Robot. Procedia-Soc. Behav. Sci. 2015, 195, 2343–2352.
12. Xu, J.; Zhang, L.; Zhu, Y.; Gou, H. The Application of GPS-RTK in Engineering Measurement and Position. In Proceedings of the 2009 Second International Symposium on Knowledge Acquisition and Modeling, Wuhan, China, 30 November–1 December 2009; pp. 186–189.
13. Mueller, A.; Himmelsbach, M.; Luettel, T.; Hundelshausen, F.V.; Wuensche, H.-J. GIS-based topological robot localization through LIDAR crossroad detection. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 2001–2008.
14. Patruno, C.; Colella, R.; Nitti, M.; Reno, V.; Mosca, N.; Stella, E. A Vision-Based Odometer for Localization of Omnidirectional Indoor Robots. Sensors 2020, 20, 875.
15. Frías, E.; Díaz-Vilariño, L.; Balado, J.; Lorenzo, H. From BIM to Scan Planning and Optimization for Construction Control. Remote Sens. 2019, 11, 1963.
16. Jiang, S.Y. Discussion on the Application of GPS Technology in Bridge Construction Plane Control Net and in Survey of Bridge Axis Lofting. Appl. Mech. Mater. 2014, 580–583, 2842–2847.
17. Lee, S.; Lee, S.; Baek, S. Vision-Based Kidnap Recovery with SLAM for Home Cleaning Robots. J. Intell. Robot. Syst. 2011, 67, 7–24.
18. Breuer, T.; Giorgana Macedo, G.R.; Hartanto, R.; Hochgeschwender, N.; Holz, D.; Hegger, F.; Jin, Z.; Müller, C.; Paulus, J.; Reckhaus, M.; et al. Johnny: An Autonomous Service Robot for Domestic Environment. J. Intell. Robot. Syst. 2011, 66, 245–272.
19. Liu, F.; Li, X.; Wang, J.; Zhang, J. An Adaptive UWB/MEMS-IMU Complementary Kalman Filter for Indoor Location in NLOS Environment. Remote Sens. 2019, 11, 2628.
20. Benini, A.; Mancini, A.; Longhi, S. An IMU/UWB/Vision-based Extended Kalman Filter for Mini-UAV Localization in Indoor Environment using 802.15.4a Wireless Sensor Network. J. Intell. Robot. Syst. 2012, 70, 461–476.
21. Gregory, D.; Michael, J. Inertial sensors, GPS, and odometry. In Springer Handbook of Robotics, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 20, pp. 477–479.
22. Kumar, G.A.; Patil, A.K.; Patil, R.; Park, S.S.; Chai, Y.H. A LiDAR and IMU Integrated Indoor Navigation System for UAVs and Its Application in Real-Time Pipeline Classification. Sensors 2017, 17, 1268.
23. Zhang, X.; Shi, H.; Pan, J.; Zhang, C. Integrated navigation method based on inertial navigation system and Lidar. Opt. Eng. 2016, 55, 044102.
24. Liu, S.; Atia, M.M.; Karamat, T.B.; Noureldin, A. A LiDAR-Aided Indoor Navigation System for UGVs. J. Navig. 2014, 68, 253–273.
25. Suhr, J.K.; Jang, J.; Min, D.; Jung, H.G. Sensor Fusion-Based Low-Cost Vehicle Localization System for Complex Urban Environment. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1078–1086.
26. Faralli, A.; Giovannini, N.; Nardi, S.; Pallottino, L. Indoor real-time localisation for multiple autonomous vehicles fusing vision, odometry and IMU data. In Proceedings of the Third International Workshop on Modelling and Simulation for Autonomous Systems, Rome, Italy, 15–16 June 2016; pp. 288–297.
27. Jiang, X.; Chen, G.; Capi, G.; Ishll, C.; Mai, Y.; Lai, Y. Development of an indoor positioning and navigation system using monocular SLAM and IMU. In Proceedings of the First International Workshop on Pattern Recognition, Tokyo, Japan, 11 July 2016; p. 100111L.
28. Chen, M.; Yang, S.; Yi, X.; Wu, D. Real-time 3D mapping using a 2D laser scanner and IMU-aided visual SLAM. In Proceedings of the 2017 IEEE International Conference on Real-time Computing and Robotics (RCAR), Okinawa, Japan, 14–18 July 2017; pp. 297–302.
29. Deilamsalehy, H.; Timothy, C. Sensor fused three-dimensional localization using IMU, camera and LiDAR. In Proceedings of the 2016 IEEE SENSORS, Orlando, FL, USA, 30 October–2 November 2016; pp. 376–378.
30. Nieuwenhuisen, M.; Droeschel, D.; Beul, M.; Behnke, S. Autonomous Navigation for Micro Aerial Vehicles in Complex GNSS-denied Environment. J. Intell. Robot. Syst. 2016, 84, 199–216.
31. Aghili, F.; Salerno, A. Driftless 3D Attitude Determination and Positioning of Mobile Robots By Integration of IMU With Two RTK GPSs. IEEE/ASME Trans. Mechatron. 2013, 18, 21–31.
32. Liu, F.; Chen, Y.; Li, Y. Research on Indoor Robot SLAM of RBPF Improved with Geometrical Characteristic Localization. In Proceedings of the IEEE 2017 29th Chinese Control And Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; pp. 3325–3330.
33. Lee, T.J.; Kim, C.H.; Cho, D. A Monocular Vision Sensor-Based Efficient SLAM Method for Indoor Service Robots. IEEE Trans. Ind. Electron. 2019, 66, 318–328.
34. Du, H.; Wang, W.; Xu, C.; Xiao, R.; Sun, C. Real-Time Onboard 3D State Estimation of an Unmanned Aerial Vehicle in Multi-Environment Using Multi-Sensor Data Fusion. Sensors 2020, 20, 919.
35. Yi, J.; Wang, H.; Zhang, J.; Song, D.; Jayasuriya, S.; Liu, J. Kinematic Modeling and Analysis of Skid-Steered Mobile Robots With Applications to Low-Cost Inertial-Measurement-Unit-Based Motion Estimation. IEEE Trans. Robot. 2009, 25, 1087–1097.
36. Ferreira, J.C.; Resende, R.; Martinho, S. Beacons and BIM Models for Indoor Guidance and Location. Sensors 2018, 18, 4374.
37. Gfrerrer, A. Geometry and kinematics of the Mecanum wheel. Comput. Aided Geom. Des. 2008, 25, 784–791.
Figure 1. The framework of data integration.
Figure 2. General design scheme of the intelligent lofting system.
Figure 3. Overall design of the robot platform.
Figure 4. Mecanum wheel: (a) The structure of the Mecanum wheel; (b) The force model of the bottom wheel.
Figure 5. The force model of the robot: (a) Force model of clockwise rotation; (b) Simplified force model.
Figure 6. Kinematic model of the robot.
Figure 7. Building Information Modeling (BIM) laser lofting instrument: (a) The Topcon LN-100; (b) Sokkia 360° prism matched with the Topcon LN-100.
Figure 8. BIM: (a) A teaching building of Tongji University; (b) The first floor of the building.
Figure 9. The methods of instrument layout: (a) Resection; (b) Reference Axis (Base Point and Reference Axis) Measurement; (c) Backsight Point (Known Point) Measurement; (d) Backsight Point (Reference Axis on the Base Point) Measurement.
Figure 10. The relationship between acceleration and attitude.
Figure 11. Pitch angular velocity and time function graph of the gyroscope.
Figure 12. The diagram of the attitude solution.
Figure 13. Motion control of the robot.
Figure 14. Points layout and result diagram of the experiment.
Table 1. The results of the speed control experiment.

Experiment | Starting Position/m | Expected Position/m | Actual Position/m | Error/mm
1 | (0, 0) | (10, 0) | 9.990 | 10
2 | (0, 0) | (10, 0) | 9.996 | 4
3 | (0, 0) | (10, 0) | 9.991 | 9
4 | (0, 0) | (10, 0) | 9.993 | 7
5 | (0, 0) | (10, 0) | 9.993 | 7
6 | (0, 0) | (10, 0) | 9.992 | 8
7 | (0, 0) | (10, 0) | 9.993 | 7
8 | (0, 0) | (10, 0) | 9.997 | 3
9 | (0, 0) | (10, 0) | 9.990 | 10
10 | (0, 0) | (10, 0) | 9.992 | 8
Table 2. The main parameters of the Topcon LN-100.

LN-100 | Parameters
Measuring accuracy | ±3 mm (Distance)/5″ (Angle)
Laser tracking range | 0.9–100 m
Working range | 360° (Horizontal), ±25° (Vertical)
Leveling range | ±3°
Wireless LAN | 802.11 n/b/g
Communication range | 100 m
Table 3. The results of the localization experiment.

Points | Points Position/m | Actual Position/m | Error/mm
Instrument location | (0, 0) | — | —
Start point | (1, 0) | — | —
1 | (0.6, 0.6) | (0.603, 0.593) | 7.6
2 | (1, 2) | (0.992, 1.993) | 10.6
3 | (3, 4) | (3.000, 3.991) | 9.0
4 | (7, 8) | (6.994, 7.993) | 9.2
5 | (8, 8.5) | (8.004, 8.495) | 6.4
6 | (15, 10) | (14.993, 10.004) | 8.0
7 | (11, 12) | (10.994, 11.998) | 6.3
8 | (14, 14) | (14.005, 13.996) | 6.4
9 | (17, 16) | (16.995, 16.007) | 8.6
10 | (22, 20) | (21.994, 19.998) | 6.3
Table 4. The accuracy control standards for indoor lofting in GB 50210 and GB 50242 of China.

Item | Full Floor Length | Allowable Deviation
Single door/window | <30 m | 15 mm
Single door/window | ≥30 m | 20 mm
Adjacent doors and windows | — | 10 mm
Fire hydrant | — | 20 mm
Plumbing equipment | — | 15 mm
Central position of floral decoration | — | 10 mm
