Article

An Improved Trilateral Localization Technique Fusing Extended Kalman Filter for Mobile Construction Robot

Lingdong Zeng, Shuai Guo, Mengmeng Zhu, Hao Duan and Jie Bai

1 School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
2 National Demonstration Center for Experimental Engineering Training Education, Shanghai University, Shanghai 200444, China
3 China Construction Eighth Engineering Division Decoration Engineering Corp., Ltd., Shanghai 200122, China
* Author to whom correspondence should be addressed.
Buildings 2024, 14(4), 1026; https://doi.org/10.3390/buildings14041026
Submission received: 1 March 2024 / Revised: 28 March 2024 / Accepted: 3 April 2024 / Published: 7 April 2024
(This article belongs to the Special Issue Intelligence and Automation in Construction Industry)

Abstract

Semi-open and chaotic building-site environments are a primary challenge for the localization of mobile construction robots. To mitigate these environmental limitations, an improved trilateral localization technique based on artificial landmarks and fused with an extended Kalman filter (EKF) is proposed in this paper. The reflective intensity of the onboard laser is employed to identify artificial landmarks arranged in the ongoing construction environment. A trilateral positioning algorithm is then adopted and evaluated based on these artificial landmarks. Multi-sensor fusion based on the EKF is incorporated to improve the positioning accuracy and reliability of the robot in complex conditions. We constructed validation scenarios in the Gazebo simulation environment to verify the required localization functionality. Concurrently, we established simulated testing environments in real-world settings, where the practicality of the proposed technique was validated by fitting the ideal and actual localization trajectories. Comparative experimental results corroborate the effectiveness of the proposed technique.

1. Introduction

Construction robots, an emerging technology with immense developmental potential, are anticipated to revolutionize information-based projects by providing safer, faster, greener, and smarter solutions [1]. This transformative impact is expected to catalyze a leapfrog development across the entire construction industry [2,3]. In this process, the integration of autonomous mobile manipulators in the architectural field is generating growing interest [4,5,6], offering new possibilities for on-site construction [7,8]. As shown in Figure 1, the In situ Fabricator developed by researchers from ETH Zurich has been used in several construction scenarios [9,10,11]. Mobile construction robots such as the In situ Fabricator overcome the workspace limitations of a fixed-base manipulator: mounting the arm on a mobile platform significantly expands its working space, enabling a far wider scope of applications than the robotic arm alone.
Undoubtedly, to ensure safe human operation or self-navigation, nearly all mobile construction robots, whether teleoperated or autonomous, rely on estimated poses provided by the localization module [12]. However, construction sites are spatially complex, and ongoing construction tasks leave them cluttered. These semi-open and chaotic properties mean that the robot must localize itself amid the very structure it is building while maintaining a globally consistent reference frame. Additionally, as the construction task progresses, the environment faced by the robot gradually changes [13]; this dynamic evolution makes it difficult to establish the consistent coordinate system required for accurate localization. For these reasons, the positioning of autonomous mobile construction robots in chaotic environments remains a significant open issue.
Existing work has demonstrated the feasibility of using mobile construction robots for on-site building construction [5,14,15]. The In situ Fabricator, assembled from a tracked mobile platform and an ABB robot arm, has showcased this potential [9,16]. Notably, it has provided various positioning strategies for different construction scenarios [9,10,15,16,17], and these custom-developed localization systems have enhanced the unobstructed mobility of mobile construction robots.
Other strategies are also employed on construction sites. For instance, methods based on SLAM (Simultaneous Localization and Mapping) [18] are used by some researchers for the localization of mobile construction robots [19,20,21]. QuicaBot, which is used to inspect the quality of buildings, employed SLAM algorithms to achieve simultaneous positioning of the robot and generation of environmental maps [22]. SLAM-based methods have also been demonstrated in specific construction tasks. Researchers employed multiple mobile construction robots to print simultaneously [23]. They utilized an adaptive Monte Carlo localization algorithm for coarse positioning [24], and as the mobile construction robot approached the target printing site, the positioning system switched to vision control to achieve higher accuracy [25]. In a chaotic and dynamic environment, however, SLAM-based positioning incurs additional computational costs for map construction and dynamic map maintenance [26].
To enable integration with digital construction, BIM (Building Information Modeling) is also frequently adopted for the positioning of mobile construction robots [27,28]. Positioning methods of this kind typically rely on BIM-generated maps and use matching approaches to achieve positioning [29]. For example, Yin et al. achieved semantic localization using BIM-generated point cloud maps and iterative closest point (ICP) matching [12]. However, BIM-generated point cloud maps are static in nature and cannot be dynamically adjusted to accommodate changes in the environment or the temporary placement of obstacles.
In summary, SLAM-based localization methods struggle to cope with the dynamic and chaotic properties of construction sites, and matching-based positioning using BIM-generated maps struggles to adapt to scene changes. We were inspired by the In situ Fabricator's customized positioning with the help of external sensors [17]. Because construction sites are continuously changing, chaotic environments, we believe that reducing reliance on the environment is the key to guaranteeing accurate positioning.
In this paper, a mobile construction robot prototype for the installation of construction panels, equipped with Mecanum wheels and an ABB IRB2600 industrial robot, is introduced. As shown in Figure 2, our design uses Mecanum wheels to facilitate omnidirectional movement, while the robot arm completes the final construction tasks. In previous work, we addressed the issue of base planning for mobile construction robots in large-scale construction environments [30]. This paper focuses on a newly developed localization technique for on-site building construction, which employs onboard laser sensors combined with trilateral positioning and an EKF to facilitate the on-site positioning of mobile construction robots. Our primary goal is independent and reliable localization under semi-open and chaotic environmental conditions. In summary, the main contributions of this paper are as follows:
  • An artificial landmark detection approach based on laser reflection intensity is proposed, and a trilateral localization algorithm is developed using the detection and identification results.
  • The EKF-based multi-sensor fusion technique is adopted to achieve the integration of trilateral localization results and inertial sensor positioning results.
  • The accuracy and practicality of the algorithm are verified in simulation and real environments, respectively. The experimental results demonstrate the usability of the algorithm.
The remainder of this paper is organized as follows. We first introduce the mobile construction robot, its functional structure, and its localization system in Section 2. Theoretical definitions, derivations, and the fusion algorithm used for location accuracy enhancement are discussed in Section 3. A validation experiment is then conducted, and the results are analyzed in Section 4. Section 5 concludes the paper.

2. System Overview

In practical construction, architectural tasks often result in numerous miscellaneous temporary placements within the construction site. Therefore, minimizing the dependence of mobile construction robots on environmental structures helps ensure stable positioning in such a chaotic and changeable environment. Conventional mobile robot positioning algorithms can be classified as relative or absolute positioning [31]. Absolute positioning, which uses GPS, landmarks, and other technologies for localization, is better able to handle complex conditions and unpredictable environments. Thus, our mobile construction robot is developed as a fully self-contained unit with accurate absolute positioning capability. This approach eliminates the tedious calibration process during initial setup and ensures the uniformity of the coordinate system in the chaotic environment.
As shown in Figure 3, artificial landmark features are extracted in the robot coordinate system. The coordinates of the landmarks are then converted to the global coordinate system, and a trilateral positioning algorithm is used to calculate the robot's pose in the global coordinate system. The robot's positioning accuracy is improved by fusing odometer data through an extended Kalman filter. With the mobile unit simplified to a point in the global coordinate system, the positioning system is divided into three components: artificial landmark recognition, pose calculation using the optimal trilateral algorithm, and accuracy improvement based on the EKF.

3. Methodology

Our mobile construction robot prototype uses 2D laser sensors and artificial landmarks for feature extraction and matching verification. A trilateral positioning algorithm is used to solve for the global pose. The pose derived from the odometer motion model is taken as the predicted value, the absolute pose from trilateral positioning is regarded as the observed value, and the two are fused with an EKF to improve positioning accuracy in complex construction environments and determine the final pose output.

3.1. Identification and Extraction of Artificial Landmarks

The prerequisite for positioning is the identification of artificial landmarks. The landmarks are reflective columns with highly reflective surfaces, identified through the reflection intensity of the 2D laser. The 2D laser captures the point cloud as single-layer discrete data. Before feature recognition, the set of discrete points must be divided into different objects for storage; however, the density of the data collected by the laser sensor varies with measurement distance. Thus, an adaptive clustering method is adopted in this paper to improve the accuracy of feature extraction. The method requires no a priori knowledge and allows for the rapid processing of unlabeled data [32]. The corresponding adaptive threshold $\delta$ can be defined as:
$$\delta = \|Q_i P_i\| + 3\sigma_r = \rho_i \cdot \frac{\sin(\Delta\varphi)}{\sin(\beta - \Delta\varphi)} + 3\sigma_r \qquad (1)$$
Therein, $P_{i-1}$, $P_i$, and $P_{i+1}$ are adjacent laser points; $\|Q_i P_i\|$ is the distance between $Q_i$ and $P_i$, where $Q_i$ is the intersection of the line $O_L P_{i-1}$ with the circle centered at $P_i$ (supplementary details are shown in Figure 4); $\beta$ is the angle between $O_L P_i$ and $P_i Q_i$; and $\sigma_r$ is the laser measurement error.
Clustering is used to aggregate discrete laser points into different storage sets, which are divided into two categories: reflective and non-reflective column sets. The intensity of laser light reflected from the surface of a reflective column is greater than that from ordinary objects, so reflective column data can be identified by their higher intensity. The surface reflection intensity threshold $\lambda_\delta$ and the reflector diameter interval $[D_{min}, D_{max}]$ are set as limiting conditions for identifying the reflector dataset. These criteria can be expressed as:
$$\lambda_{\mathrm{set}} \geq \lambda_\delta, \qquad D_{min} \leq d \leq D_{max} \qquad (2)$$
where $\lambda_{\mathrm{set}}$ is the set of reflections in the dataset whose intensity exceeds the threshold, and $d$ is the diameter of the observed reflector, given as 110 mm. Reflective column data $\{((\rho_i, \varphi_i), \lambda_i) \mid i = m, \ldots, n\}$ are acquired from the environment through the clustering and recognition of artificial landmarks, where $\rho_i$, $\varphi_i$, and $\lambda_i$ represent the distance, angle, and energy values from the laser to point $i$ on the reflective column in the laser coordinate system, respectively. The purpose of artificial landmark feature extraction is to identify the center position of the reflective column in the robot coordinate system. The radius of the reflector is defined as the distance from the center of the reflector to the point where the laser is incident on the surface.
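To make the segmentation and identification steps concrete, the following minimal Python sketch implements the adaptive threshold of Equation (1) and the intensity/diameter criteria of Equation (2). The constants (`beta`, `sigma_r`, `lambda_min`, the diameter interval) and array layouts are illustrative assumptions, not values from the authors' implementation.

```python
import numpy as np

def adaptive_cluster(ranges, angles, beta=np.deg2rad(10.0), sigma_r=0.01):
    """Split an ordered 2D scan into clusters using the adaptive threshold
    delta = rho_i * sin(d_phi) / sin(beta - d_phi) + 3 * sigma_r (Equation (1))."""
    clusters, current = [], [0]
    for i in range(1, len(ranges)):
        d_phi = angles[i] - angles[i - 1]
        delta = ranges[i] * np.sin(d_phi) / np.sin(beta - d_phi) + 3.0 * sigma_r
        # Euclidean gap between consecutive points (law of cosines)
        gap = np.sqrt(ranges[i] ** 2 + ranges[i - 1] ** 2
                      - 2.0 * ranges[i] * ranges[i - 1] * np.cos(d_phi))
        if gap > delta:                 # gap too wide: start a new object
            clusters.append(current)
            current = []
        current.append(i)
    clusters.append(current)
    return clusters

def select_reflectors(clusters, ranges, angles, intensities,
                      lambda_min=2000.0, d_min=0.08, d_max=0.14):
    """Keep clusters whose mean intensity exceeds the threshold and whose
    apparent chord length falls in the diameter interval (Equation (2))."""
    reflectors = []
    for idx in clusters:
        if np.mean(intensities[idx]) < lambda_min:
            continue
        # Chord between the first and last hits approximates the diameter
        p0 = ranges[idx[0]] * np.array([np.cos(angles[idx[0]]), np.sin(angles[idx[0]])])
        p1 = ranges[idx[-1]] * np.array([np.cos(angles[idx[-1]]), np.sin(angles[idx[-1]])])
        if d_min <= np.linalg.norm(p1 - p0) <= d_max:
            reflectors.append(idx)
    return reflectors
```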
In Equation (3), $L$ denotes the distance between the laser and the surface of the reflective column, $R$ is the radius of the reflective column, $\rho_c$ is the distance from the laser to the reflective column center, $\varphi_c$ is the bearing angle of the reflective column center in the laser coordinate system, and $(\rho_c, \varphi_c)$ is the reflective column center, given by:
$$\varphi_c = \frac{1}{n-m}\sum_{i=m}^{n}\varphi_i, \qquad \rho_c = \frac{1}{n-m}\sum_{i=m}^{n}\left(L\cos(\varphi_c - \varphi_i) + R\cos\!\left(\arcsin\frac{L\sin(\varphi_c - \varphi_i)}{R}\right)\right) \qquad (3)$$
The above expression provides the polar coordinates of the reflector center in the robot coordinate system. The coordinates of all local reflectors in the environment are matched with the preset reflectors in the global environment to establish a one-to-one correspondence. Trilateral positioning is employed to calculate the position of the mobile construction robot following successful landmark matching.
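As a concrete illustration of Equation (3), the following hedged sketch fits the center of one reflective column from its cluster of surface hits; the function name and the default radius (half of the 110 mm diameter given above) are our assumptions.

```python
import numpy as np

def reflector_center(rho, phi, R=0.055):
    """Estimate the reflector center (rho_c, phi_c) from the polar
    coordinates (rho, phi) of the m..n laser hits on one column (Equation (3))."""
    phi_c = np.mean(phi)                 # mean bearing of the cluster
    L = rho                              # laser-to-surface distance per hit
    # Project each surface hit through the cylinder to its axis, then average
    s = np.clip(L * np.sin(phi_c - phi) / R, -1.0, 1.0)   # arcsin argument
    rho_c = np.mean(L * np.cos(phi_c - phi) + R * np.cos(np.arcsin(s)))
    return rho_c, phi_c
```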

3.2. Trilateral Positioning for Mobile Construction Robot

Trilateral positioning can only be used when three or more artificial landmarks are detected; this condition is key to the localization of the mobile construction robot and can be described as follows. Assume $h$ reflectors $C_i\ (i = 1, \ldots, h)$ are arranged in the global environment ($h \geq 3$), as shown in Figure 5a. The position of reflector $C_i$ in the global coordinate system is represented as $(\rho_c^g, \varphi_c^g)$, and any three reflectors can be selected to form a triangle $\triangle C_1 C_2 C_3$, with $C_2$ denoting the middle vertex. The segments $C_1^g C_2^g$ and $C_2^g C_3^g$ and the angle $\angle C_1^g C_2^g C_3^g$ shown in Figure 5b can be calculated as:
$$\begin{aligned} \|C_1^g C_2^g\| &= \sqrt{(x_1^g - x_2^g)^2 + (y_1^g - y_2^g)^2} \\ \|C_2^g C_3^g\| &= \sqrt{(x_3^g - x_2^g)^2 + (y_3^g - y_2^g)^2} \\ \|C_1^g C_3^g\| &= \sqrt{(x_1^g - x_3^g)^2 + (y_1^g - y_3^g)^2} \\ \angle C_1^g C_2^g C_3^g &= \arccos\left(\frac{\|C_1^g C_2^g\|^2 + \|C_2^g C_3^g\|^2 - \|C_1^g C_3^g\|^2}{2\,\|C_1^g C_2^g\|\,\|C_2^g C_3^g\|}\right) \end{aligned}$$
It is assumed that the robot can observe $k$ reflectors $C_i\ (i = 1, \ldots, k)$ at any given time ($k \geq 3$). The position of reflector $C_i$ in the (local) robot coordinate system is given by $(\rho_c^l, \varphi_c^l)$. Any three observed reflective columns can then be used to form a triangle $\triangle C_1 C_2 C_3$, with $C_2$ denoting the middle vertex. The segments $C_1^l C_2^l$ and $C_2^l C_3^l$ and the angle $\angle C_1^l C_2^l C_3^l$ can be calculated as:
$$\begin{aligned} \|C_1^l C_2^l\| &= \sqrt{(x_1^l - x_2^l)^2 + (y_1^l - y_2^l)^2} \\ \|C_2^l C_3^l\| &= \sqrt{(x_3^l - x_2^l)^2 + (y_3^l - y_2^l)^2} \\ \|C_1^l C_3^l\| &= \sqrt{(x_1^l - x_3^l)^2 + (y_1^l - y_3^l)^2} \\ \angle C_1^l C_2^l C_3^l &= \arccos\left(\frac{\|C_1^l C_2^l\|^2 + \|C_2^l C_3^l\|^2 - \|C_1^l C_3^l\|^2}{2\,\|C_1^l C_2^l\|\,\|C_2^l C_3^l\|}\right) \end{aligned}$$
A match is considered successful if the triangle parameters computed in the robot coordinate system can be identified in the global environment. In practice, however, the relationship between the two does not follow a strict one-to-one correspondence. As such, differences are computed between the local triangle parameters and those of all global triangles, and matching succeeds if the minimum of these differences satisfies:
$$\left|\,\|C_1^g C_2^g\| - \|C_1^l C_2^l\|\,\right|_{\min} \leq \xi, \qquad \left|\,\|C_2^g C_3^g\| - \|C_2^l C_3^l\|\,\right|_{\min} \leq \xi, \qquad \left|\,\angle C_1^g C_2^g C_3^g - \angle C_1^l C_2^l C_3^l\,\right|_{\min} \leq \psi$$
The linear ($\xi$) and angular ($\psi$) error thresholds are determined by the measured distances to the landmarks and the moving speed of the robot. In this paper, these thresholds are chosen empirically as $\xi = 300$ mm and $\psi = 10^\circ$. If the above relationships are not satisfied, the recognition of reflective columns in the robot coordinate system is deemed incorrect. After this matching step, a set of valid reflective columns with effective global coordinates is obtained for calculating the pose of the mobile construction robot.
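The matching test above can be sketched in a few lines of Python; the exhaustive search over global triples and the container layouts are our assumptions, while the thresholds follow the values just given.

```python
import numpy as np
from itertools import combinations

def triangle_params(p1, p2, p3):
    """Side lengths |C1C2|, |C2C3| and the vertex angle at C2."""
    a = np.linalg.norm(p1 - p2)
    b = np.linalg.norm(p2 - p3)
    c = np.linalg.norm(p1 - p3)
    angle = np.arccos(np.clip((a * a + b * b - c * c) / (2 * a * b), -1.0, 1.0))
    return a, b, angle

def match_triangle(local_pts, global_pts, xi=0.3, psi=np.deg2rad(10.0)):
    """Return indices of the global reflector triple whose parameters best
    match a locally observed triple, or None if no triple passes the test."""
    a_l, b_l, ang_l = triangle_params(*local_pts)
    best, best_err = None, np.inf
    for triple in combinations(range(len(global_pts)), 3):
        a_g, b_g, ang_g = triangle_params(*(global_pts[i] for i in triple))
        da, db, dang = abs(a_g - a_l), abs(b_g - b_l), abs(ang_g - ang_l)
        if da <= xi and db <= xi and dang <= psi:
            err = da + db + dang          # combined residual for ranking
            if err < best_err:
                best, best_err = triple, err
    return best
```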
It is assumed that the robot can observe three reflective columns with coordinates $C_1(x_1^g, y_1^g)$, $C_2(x_2^g, y_2^g)$, and $C_3(x_3^g, y_3^g)$ at any given time, as shown in Figure 6a. The distance from the laser sensor to the center of each reflector $C_i$ can be used to define the radius of a circle. These circles should intersect at a single point, which defines the global position of the robot. However, due to measurement errors in the laser sensor, the intersection spans a finite region, as shown in Figure 6b. The geometric relationship shown in Figure 6a produces the following set of equations:
$$(x_1^g - x^g)^2 + (y_1^g - y^g)^2 = \rho_{C_1}^2, \qquad (x_2^g - x^g)^2 + (y_2^g - y^g)^2 = \rho_{C_2}^2, \qquad (x_3^g - x^g)^2 + (y_3^g - y^g)^2 = \rho_{C_3}^2$$
The theoretical position of the robot, represented as $(x^g, y^g)$, can be determined by solving these equations. In actual scenes, however, the circles intersect over a region (rather than at a single point) due to deviations between the theoretical position of the preset reflector and the actual position of the laser sensor. As such, a least-squares approach is included to minimize measurement errors between the theoretical and actual robot position coordinates [33]. Assuming the measured global coordinates of the reflector centers are $(x_i^g, y_i^g)\ (i = 1, 2, \ldots, n)$, with corresponding distances $\rho_{C_i}\ (i = 1, 2, \ldots, n)$ measured by the local laser, the position $(x^g, y^g)$ of the robot is given by:
$$(x^g, y^g)^T = (A^T A)^{-1} A^T b$$
$$A = \begin{bmatrix} 2(x_1^g - x_n^g) & 2(y_1^g - y_n^g) \\ \vdots & \vdots \\ 2(x_{n-1}^g - x_n^g) & 2(y_{n-1}^g - y_n^g) \end{bmatrix}, \qquad b = \begin{bmatrix} (x_1^g)^2 - (x_n^g)^2 + (y_1^g)^2 - (y_n^g)^2 + (\rho_{C_n}^l)^2 - (\rho_{C_1}^l)^2 \\ \vdots \\ (x_{n-1}^g)^2 - (x_n^g)^2 + (y_{n-1}^g)^2 - (y_n^g)^2 + (\rho_{C_n}^l)^2 - (\rho_{C_{n-1}}^l)^2 \end{bmatrix}$$
The heading angle of the robot in the global environment can be expressed using the robot position $(x^g, y^g)$ and the reflective column coordinates $C_i(x_i^g, y_i^g)$ as follows:
$$\theta^g = \frac{1}{n}\sum_{i=1}^{n}\left(\operatorname{arctan2}(y_i^g - y^g,\ x_i^g - x^g) - \varphi_{C_i}^l\right)$$
The mobile construction robot's pose in the global environment, acquired using Algorithm 1 shown below, is then given by $(x^g, y^g, \theta^g)$.
Algorithm 1: Trilateral localization using reflectors (presented as a figure in the original article).
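Since Algorithm 1 survives here only as a figure, the sketch below is our hedged Python reconstruction of its core steps: least-squares trilateration of the position followed by the averaged heading angle. The linearization against the last landmark and the circular-mean angle wrapping are implementation choices, not necessarily the authors'.

```python
import numpy as np

def trilaterate(landmarks_g, rho_l, phi_l):
    """landmarks_g: (n, 2) matched global reflector centers (n >= 3);
    rho_l, phi_l: their measured local polar coordinates."""
    x, y = landmarks_g[:, 0], landmarks_g[:, 1]
    xn, yn, rn = x[-1], y[-1], rho_l[-1]
    # Subtract the last circle equation from the others to linearize the system
    A = np.column_stack([2.0 * (x[:-1] - xn), 2.0 * (y[:-1] - yn)])
    b = x[:-1] ** 2 - xn ** 2 + y[:-1] ** 2 - yn ** 2 + rn ** 2 - rho_l[:-1] ** 2
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)          # (x_g, y_g)
    # Heading: average difference between map bearings and scan bearings,
    # taken as a circular mean so the angles wrap correctly
    diffs = np.arctan2(y - pos[1], x - pos[0]) - phi_l
    theta = np.arctan2(np.mean(np.sin(diffs)), np.mean(np.cos(diffs)))
    return pos[0], pos[1], theta
```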
Systematically, artificial landmarks in the environment are extracted by combining adaptive clustering with laser reflection intensity values. After the positions of the artificial landmarks in the global coordinate system are identified, the trilateral positioning algorithm is employed to obtain the absolute position of the mobile construction robot. Indeed, distance-based trilateral localization algorithms are widely used due to their robustness and ease of implementation [34,35].

3.3. Positioning Accuracy Improvements Based on EKF

Relying solely on absolute positioning with artificial landmarks is not sufficient in unstructured environments, particularly in the case of single-sensor positioning. Therefore, multi-sensor data fusion is used to improve the positioning accuracy for complex operations [36,37]. The extended Kalman filter, one of the most common data fusion algorithms, is employed to compensate for the limitations of individual sensors and improve the overall performance [38].
Inertial sensor data (typically from the inertial measurement unit (IMU) and the odometer) and the position information obtained by the trilateral algorithm are fused to improve positioning accuracy. In this process, the inertial sensors output poses at a higher frequency than the trilateral positioning algorithm; the inertial poses are therefore cropped and resampled to the output frequency of the trilateral positioning algorithm, as shown in Figure 7. Poses derived from the motion model based on inertial sensor data are used as predicted values, while poses calculated using artificial landmark positioning are used as observed values.
The fusion of multi-sensor information using the EKF algorithm requires prior knowledge of the system model and noise statistics. The motion model predicts the robot's pose at the current moment from the estimate $\hat{X}_{k-1}$ of the robot's pose at the previous moment and the motion induced by the control input $U_{k-1}$. This is captured by a kinematics model represented as:
$$\hat{X}_{k|k-1} = F(\hat{X}_{k-1}, U_{k-1}) = \begin{bmatrix} \hat{x}_{k-1} \\ \hat{y}_{k-1} \\ \hat{\theta}_{k-1} \end{bmatrix} + \begin{bmatrix} \nu_x \Delta t \cos\theta_{k-1} - \nu_y \Delta t \sin\theta_{k-1} \\ \nu_x \Delta t \sin\theta_{k-1} + \nu_y \Delta t \cos\theta_{k-1} \\ \omega_{k-1} \Delta t \end{bmatrix}$$
Herein, the term $F_{k-1}$ denotes the Jacobian matrix of the function $F$ at instant $k-1$, i.e., the derivative of $F$ with respect to $X_{k-1}$ evaluated at $(\hat{X}_{k-1}, U_{k-1})$:
$$F_{k-1} = \left.\frac{\partial F}{\partial X_{k-1}}\right|_{\hat{X}_{k-1}} = \begin{bmatrix} 1 & 0 & -\nu_x \Delta t \sin\theta_{k-1} - \nu_y \Delta t \cos\theta_{k-1} \\ 0 & 1 & \nu_x \Delta t \cos\theta_{k-1} - \nu_y \Delta t \sin\theta_{k-1} \\ 0 & 0 & 1 \end{bmatrix}$$
Noise error in the motion can be expressed as $Q_k = \begin{bmatrix} \sigma_x^2 & 0 & 0 \\ 0 & \sigma_y^2 & 0 \\ 0 & 0 & \sigma_\theta^2 \end{bmatrix}$. A covariance matrix $P_{k|k-1}$, used to represent the uncertainty of the predicted pose values, can be described as:
$$P_{k|k-1} = F_{k-1} P_{k-1} F_{k-1}^T + Q_{k-1}$$
where $P_{k-1}$ is the error covariance matrix of the robot pose estimate at instant $k-1$.
The observation model based on artificial landmark positioning requires the relationship between pose changes at adjacent instants. As shown in Figure 8, $(x_k, y_k, \theta_k)$ represents the robot pose measured by the laser and calculated using the trilateral positioning algorithm, while $(x_{k-1}, y_{k-1}, \theta_{k-1})$ represents the pose predicted by the motion model, with included errors. The offset angle of the predicted pose relative to the real pose is denoted $\theta_R$. After a rotation transformation $R$, the predicted pose in the global coordinate system can be expressed as:
$$\begin{bmatrix} x_k' \\ y_k' \end{bmatrix} = \begin{bmatrix} \cos\theta_R & -\sin\theta_R \\ \sin\theta_R & \cos\theta_R \end{bmatrix} \begin{bmatrix} x_k \\ y_k \end{bmatrix} = R \begin{bmatrix} x_k \\ y_k \end{bmatrix}$$
and the translation can be represented by:
$$T_x = \Delta x = x_{k-1} - \sqrt{x_k^2 + y_k^2}\,\cos(\theta_R - \alpha), \qquad T_y = \Delta y = y_{k-1} - \sqrt{x_k^2 + y_k^2}\,\sin(\theta_R - \alpha)$$
these observations can be summarized as:
$$Z_k = H_k(X_k) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta\theta \end{bmatrix} = \begin{bmatrix} T_x + \sqrt{x_k^2 + y_k^2}\,\cos(\theta_R + \alpha) \\ T_y + \sqrt{x_k^2 + y_k^2}\,\sin(\theta_R + \alpha) \\ \theta_R + \theta_o \end{bmatrix}$$
the covariance matrix of the observation noise is given by:
$$R_k = \begin{bmatrix} \sigma_x^2 & 0 & 0 \\ 0 & \sigma_y^2 & 0 \\ 0 & 0 & \sigma_\theta^2 \end{bmatrix}$$
where $\sigma_x^2$, $\sigma_y^2$, and $\sigma_\theta^2$ represent the measurement noise in the $x$, $y$, and $\theta$ directions, respectively, with magnitudes determined by the convergence errors of the trilateral positioning algorithm. The EKF positioning algorithm can then be developed by combining inertial sensor motion predictions with measurement updates from trilateral positioning, as follows.
The proposed approach utilizes the EKF to fuse odometer and laser sensor data, improving the positioning accuracy and reliability of the mobile construction robot in the built environment [39,40]. With the state transition equation determined, the EKF serves as the standard tool for state estimation in this nonlinear system. As shown in Algorithm 2, the prediction process is represented by steps 1–3, while the update process is illustrated by steps 4–6. Multi-sensor data fusion using the EKF enhances the positioning accuracy of the mobile construction robot.
Algorithm 2: Positioning accuracy improvements based on EKF.
   Input: Estimated robot pose at the initial time: $\hat{X}(0)$; noise variance of the motion model: $Q_0$; noise variance of the measurement model: $R_0$
   Output: Optimal pose estimate: $\hat{X}_k = (\hat{x}, \hat{y}, \hat{\theta})$
1. Calculate the forward state variable at time $k$: $\hat{X}_{k|k-1} = F(\hat{X}_{k-1}, U_{k-1})$
2. Calculate the prediction error covariance matrix: $P_{k|k-1} = F_{k-1} P_{k-1} F_{k-1}^T + Q_{k-1}$
3. Obtain the current pose measurement from trilateral positioning: $Z_k$
4. Calculate the Kalman gain matrix: $K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}$
5. Update the state variable at time $k$: $\hat{X}_k = \hat{X}_{k|k-1} + K_k (Z_k - H_k(\hat{X}_{k|k-1}))$
6. Calculate the estimated error covariance: $P_k = (I - K_k H_k) P_{k|k-1}$
7. Return $\hat{X}_k$
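A compact Python sketch of Algorithm 2, under the kinematic and observation models derived above, is given below. The control vector, time step, and noise magnitudes are placeholders to be supplied by the caller.

```python
import numpy as np

def ekf_step(x_est, P, u, z, Q, R, dt):
    """One predict-update cycle. x_est: previous pose (x, y, theta);
    u: control (vx, vy, omega); z: trilateration pose measurement;
    Q, R: 3x3 motion and measurement noise covariances."""
    vx, vy, w = u
    th = x_est[2]
    # Steps 1-2: propagate pose and covariance through the kinematic model
    x_pred = x_est + np.array([
        vx * dt * np.cos(th) - vy * dt * np.sin(th),
        vx * dt * np.sin(th) + vy * dt * np.cos(th),
        w * dt])
    F = np.array([
        [1.0, 0.0, -vx * dt * np.sin(th) - vy * dt * np.cos(th)],
        [0.0, 1.0,  vx * dt * np.cos(th) - vy * dt * np.sin(th)],
        [0.0, 0.0, 1.0]])
    P_pred = F @ P @ F.T + Q
    # Steps 3-4: the observation is the full pose, so H is identity
    H = np.eye(3)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Steps 5-6: correct the prediction with the trilateration measurement
    innov = z - x_pred
    innov[2] = np.arctan2(np.sin(innov[2]), np.cos(innov[2]))  # wrap angle
    x_new = x_pred + K @ innov
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```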

4. Verification Experiments

The proposed algorithm is evaluated in a series of experiments. Assessment metrics for global positioning errors are first introduced, and the algorithm is then tested using a simulated model in Gazebo 9. The simulation tests primarily cover the extraction of artificial landmarks detected via laser reflection intensity and the comparison of true positional values with the measured values produced by three different approaches. A prototype autonomous mobile construction robot for digital construction is then assembled to verify the robustness and stability of the proposed method in practical construction scenarios.

4.1. Initialization and Experimental Settings

Utilizing mobile construction robots on building sites is still a novel venture. To illustrate this usage in detail, a digital construction scenario is drawn up. As shown in Figure 9, the mobile construction robot, in combination with manual labor, is responsible for installing the curtain wall [30], while the AGV handles the transport of construction materials. Here, we focus on the fused positioning of the mobile construction robot based on artificial landmarks at this site, improving the positioning accuracy of the robot to enable precise work.
In the initialization, it is essential to verify the accuracy of the artificial landmark extraction. We build the scenario in Gazebo 9 as shown in Figure 10. We give the sensors and the actual physical environment properties in this simulation scenario and set the scene centroid as the global coordinate system for the positioning system.
The initialization is performed by rotating the mobile platform 360° at the origin of the global coordinate system to construct an artificial landmark map. The platform is designed with four Mecanum wheels, so the turning radius during rotation is zero. This ensures that the global and local coordinate systems coincide at the start-up of the mobile construction robot, facilitating positioning during movement. The accuracy of the artificial landmark extraction can be evaluated by comparing the coordinates of the artificial landmarks obtained from the laser reflection intensity with the true values from Gazebo 9. In our tests, the laser sensor is a SICK LMS111, which offers a scanning range of 270° and an angular resolution of 0.25°, and the artificial landmarks are reflective columns with a radius of 10 cm. The adaptive clustering threshold is set to $1.5\rho_i$ according to Equation (1). An idealized scenario is constructed to evaluate the recognition accuracy of artificial landmarks because the true values from Gazebo 9 provide a benchmark for accuracy estimation.
As shown in Table 1, the measured values are obtained from the artificial landmark detection algorithm based on laser reflection intensity. Euclidean distances are employed to evaluate the error of each artificial landmark circle fitting.
To further illustrate the accuracy of landmark extraction, the fluctuations of the error are depicted in Figure 11. The Euclidean distance portrays the offset between the true and measured circle center coordinates. As Figure 10 shows, reflectors 1# to 4# are already within the scanning range of the laser at start-up, so the offsets of their fitted circle centers are relatively small, while the coordinates of reflectors 5# to 7# are obtained only after the robot rotates, so their offsets fluctuate more.

4.2. Validation with Gazebo

In our digital construction paradigm project [30], the external walls of the curbside toilets need to be tiled at night due to transportation constraints. The positioning system of the mobile construction robot must therefore be immune to lighting interference and quick to deploy while guaranteeing positioning accuracy. As shown in Figure 12, this scenario includes the toilets to be constructed, a mobile construction robot, and artificial landmarks for position tracking.
To ensure the consistency of the test data, we first collect data in the constructed simulation scenario and generate a dataset made up of sensor data recorded while the robot runs in the simulation environment. This dataset is employed to benchmark three different localization methods: the trilateral localization approach, trilateral localization fused with the EKF (the approach of this paper), and EKF Localization [41].
The root-mean-square error (RMSE) is a common evaluation criterion for the localization of mobile robots [42,43]. It is used as the primary evaluation metric for comparisons; the localization results of trilateral localization, the fusion algorithm of this paper, and EKF Localization are each evaluated against the true positional trajectory.
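For reference, the RMSE over a planar trajectory reduces to the following one-liner; the (N, 2) array layout is an assumption.

```python
import numpy as np

def rmse(est, truth):
    """Root-mean-square error between estimated and ground-truth positions."""
    return np.sqrt(np.mean(np.sum((est - truth) ** 2, axis=1)))
```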
Figure 13 portrays the translational error of the three algorithms based on the RMSE. The dataset generated from our simulation scenario provides 2930 poses. As described previously, the global and robot coordinate systems coincide at the initial moment; hence, the translation error is zero when the pose number is zero. As the robot moves, the global coordinate system remains fixed at the initial position, while the robot coordinate system follows the trajectory of the movement.
The trilateral localization algorithm without EKF accuracy enhancement suffers from large and fluctuating translation errors. EKF positioning, which relies entirely on the IMU and odometer, performs poorly due to cumulative errors. As can be seen from Figure 13, the translation error of our approach is no more than 10 cm, an accuracy sufficient to support the use of the mobile construction robot on the construction site.
The posture of the mobile robot is jointly described by the position and the yaw angle. As shown in Figure 14, the yaw angle is highly sensitive to vibration caused by jitter during robot movement. Overall, the rotational errors are higher for trilateral positioning, while EKF Localization and the fusion approach of this paper perform similarly.
The differences among these positioning errors are also visible in the pose trajectories shown in Figure 15. Here, we plot the pose trajectories produced by our approach and by EKF Localization, while the ground-truth trajectory is obtained from the Gazebo setting. At the initial moment, the three trajectories are nearly coincident, indicating that the differences between the positioning results are extremely small over short periods. Over time, however, the localization results of EKF Localization gradually deviate from the ground truth, while the fusion approach of this paper maintains higher overlap with it. The trajectory of our approach is thus closer to the true pose, demonstrating the robustness of the fusion algorithm.

4.3. Testing in Physical Environments

The performance of the proposed algorithm is evaluated using our prototype mobile construction robot, subjected to a series of tests conducted in a physical environment. As shown in Figure 16, the underlying motion control of the mobile construction robot employs a Beckhoff system, in which IMU and odometer data are fused after interpolation and time synchronization before transmission to the algorithm processing layer.
The detection and recognition of artificial landmarks are the cornerstones of positioning. Therefore, artificial landmark recognition and detection based on laser reflection intensity are first validated in on-site environments. As Figure 17 shows, four artificial landmarks are arranged on site, and the laser intensity profile exhibits four abrupt jumps. The positions of the artificial landmarks are identified from these jumps, thus enabling the positioning of the mobile construction robot.
However, collecting ground-truth poses in the real test environment is challenging compared to the simulation environment, primarily because motion capture systems such as OptiTrack are difficult to deploy in the actual scene. To address this issue, we establish a predetermined trajectory to evaluate the prototype's positional accuracy, as shown in Figure 18: traveling from the starting point to a workstation position, completing the plate assembly, and then traveling to the next workstation. With the initial point at the global coordinate origin, the distance from the origin to the first workstation is recorded as the coordinates of the first workstation. Subsequently, the distances between adjacent workstations are measured to derive the theoretical trajectory that the mobile platform should follow. Consequently, by continuously monitoring the real-time position of the mobile platform and comparing it with the ideal path, we can estimate the positioning error of the mobile construction robot during actual task execution.
The desired coordinates for the six site locations as indicated in Figure 18 are presented in Table 2. The ideal coordinates are determined with the starting point as the origin, the positive direction of the mobile platform representing the x-axis, and the right side of the positive direction signifying the y-axis. The distances from the initial point to both the first and third station points are measured using a laser range finder. Subsequent station points, such as from the third station point to the fifth station point, are evenly spaced. This arrangement allows us to calculate the coordinates of all station points based on the measurements of the second and third station points.
The output values in Table 2 are calculated from the average of the successive output values of the proposed algorithm at the station location. During the test, the robot follows a predetermined path, cruising along it and providing localization results when it reaches the designated station points. Furthermore, for enhanced visualization of the positioning error, the output values are compared to the ideal values. As depicted in Figure 19, discrepancies between the actual output position values and the ideal values at various station points are quantified.
The horizontal axis represents the counted positions, while the vertical axis represents the deviation along each coordinate axis. The deviation indicates the extent to which the actual position trajectory differs from the ideal one; a smaller deviation corresponds to a closer match between the actual and ideal trajectories and hence a reduced positioning error. In the x-axis direction, the maximum deviation from the ideal position is no greater than 0.25 m, while the maximum deviation in the y-axis direction is no more than 0.45 m. The average deviations are 0.085 m and 0.138 m in the x- and y-axes, respectively, roughly on par with the 10 cm positioning error observed in the simulation environment. Overall, the deviation in the y-axis is more pronounced than that in the x-axis. The larger deviations are concentrated at the fourth station, which is closer to the construction wall and lacks artificial landmarks in its vicinity; this deficiency in landmarks observable by the robot at the fourth station likely leads to the greater positioning error at that location.
Based on the aforementioned theoretical analysis, this section tested the positioning capability of the mobile construction robot in a simulation environment and an on-site scenario, respectively. In the simulation results, the translational error of our approach is no more than 10 cm, while the usability of the algorithm is verified in practical scenarios. Overall, the algorithm presented in this paper is sufficient for application in practical scenarios.

5. Conclusions

This study introduces a novel fusion technique, integrating the trilateral positioning algorithm and the EKF, aimed at mitigating the positioning challenges encountered by mobile construction robots operating within semi-open and chaotic environments. The proposed approach entails the identification of artificial landmarks through laser reflection intensity analysis, coupled with a least-squares computation to minimize matching discrepancies. Leveraging an inertial sensor motion model, the predicted position of the mobile construction robot is established. Subsequently, the position determined through artificial landmark localization serves as the observed value, with accuracy enhanced by the EKF. Through a series of simulated experiments employing Gazebo 9 and physical environment assessments, it is demonstrated that positioning errors remain within acceptable thresholds. Notably, the fusion of trilateral positioning and the EKF exhibits remarkable practicality in construction settings. The experimental results corroborate the efficacy of the proposed fusion algorithm in localizing mobile construction robots.
In practical applications and testing scenarios, our system encounters limitations when relying solely on the positioning of the mobile platform to address location-related challenges at the task level. Specifically, the absence of a closed-loop system for the robotic end-effector's positioning jeopardizes the precise placement of assembled workpieces. Consequently, our future work will prioritize the development of a two-tier positioning system encompassing both the mobile platform and the end-effector.

Author Contributions

Conceptualization, S.G. and L.Z.; methodology, L.Z.; validation, L.Z. and M.Z.; writing—original draft preparation, L.Z.; writing—review and editing, L.Z.; visualization, H.D. and J.B.; supervision, S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. U1913603.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Jie Bai was employed by the company China Construction Eighth Engineering Division Decoration Engineering Corp., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Debrah, C.; Chan, A.P.; Darko, A. Artificial intelligence in green building. Autom. Constr. 2022, 137, 104192. [Google Scholar] [CrossRef]
  2. de Soto, B.G.; Agustí-Juan, I.; Hunhevicz, J.; Joss, S.; Graser, K.; Habert, G.; Adey, B.T. Productivity of digital fabrication in construction: Cost and time analysis of a robotically built wall. Autom. Constr. 2018, 92, 297–311. [Google Scholar] [CrossRef]
  3. Petersen, K.H.; Napp, N.; Stuart-Smith, R.; Rus, D.; Kovac, M. A review of collective robotic construction. Sci. Robot. 2019, 4, eaau8479. [Google Scholar] [CrossRef] [PubMed]
  4. Štibinger, P.; Broughton, G.; Majer, F.; Rozsypálek, Z.; Wang, A.; Jindal, K.; Zhou, A.; Thakur, D.; Loianno, G.; Krajník, T.; et al. Mobile manipulator for autonomous localization, grasping and precise placement of construction material in a semi-structured environment. IEEE Robot. Autom. Lett. 2021, 6, 2595–2602. [Google Scholar] [CrossRef]
  5. Dörfler, K.; Dielemans, G.; Lachmayer, L.; Recker, T.; Raatz, A.; Lowke, D.; Gerke, M. Additive Manufacturing using mobile robots: Opportunities and challenges for building construction. Cem. Concr. Res. 2022, 158, 106772. [Google Scholar] [CrossRef]
  6. Gawel, A.; Blum, H.; Pankert, J.; Krämer, K.; Bartolomei, L.; Ercan, S.; Farshidian, F.; Chli, M.; Gramazio, F.; Siegwart, R.; et al. A fully-integrated sensing and control system for high-accuracy mobile robotic building construction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2300–2307. [Google Scholar] [CrossRef]
  7. Melenbrink, N.; Werfel, J.; Menges, A. On-site autonomous construction robots: Towards unsupervised building. Autom. Constr. 2020, 119, 103312. [Google Scholar] [CrossRef]
  8. Gharbia, M.; Chang-Richards, A.; Lu, Y.; Zhong, R.Y.; Li, H. Robotic technologies for on-site building construction: A systematic review. J. Build. Eng. 2020, 32, 101584. [Google Scholar] [CrossRef]
  9. Sandy, T.; Giftthaler, M.; Dörfler, K.; Kohler, M.; Buchli, J. Autonomous repositioning and localization of an in situ fabricator. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2852–2858. [Google Scholar] [CrossRef]
  10. Lussi, M.; Sandy, T.; Dörfler, K.; Hack, N.; Gramazio, F.; Kohler, M.; Buchli, J. Accurate and adaptive in situ fabrication of an undulated wall using an on-board visual sensing system. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 3532–3539. [Google Scholar] [CrossRef]
  11. Hack, N.; Lauer, W.V. Mesh-mould: Robotically fabricated spatial meshes as reinforced concrete formwork. Archit. Des. 2014, 84, 44–53. [Google Scholar] [CrossRef]
  12. Yin, H.; Lin, Z.; Yeoh, J.K. Semantic localization on BIM-generated maps using a 3D LiDAR sensor. Autom. Constr. 2023, 146, 104641. [Google Scholar] [CrossRef]
  13. Xu, Z.; Guo, S.; Song, T.; Zeng, L. Robust localization of the mobile robot driven by LiDAR measurement and matching for ongoing scene. Appl. Sci. 2020, 10, 6152. [Google Scholar] [CrossRef]
  14. Ardiny, H.; Witwicki, S.; Mondada, F. Construction automation with autonomous mobile robots: A review. In Proceedings of the 3rd RSI International Conference on Robotics and Mechatronics (ICROM), Tehran, Iran, 7–9 October 2015; pp. 418–424. [Google Scholar] [CrossRef]
  15. Dörfler, K.; Hack, N.; Sandy, T.; Giftthaler, M.; Lussi, M.; Walzer, A.N.; Buchli, J.; Gramazio, F.; Kohler, M. Mobile robotic fabrication beyond factory conditions: Case study Mesh Mould wall of the DFAB HOUSE. Constr. Robot. 2019, 3, 53–67. [Google Scholar] [CrossRef]
  16. Giftthaler, M.; Sandy, T.; Dörfler, K.; Brooks, I.; Buckingham, M.; Rey, G.; Kohler, M.; Gramazio, F.; Buchli, J. Mobile robotic fabrication at 1:1 scale: The In situ Fabricator: System, experiences and current developments. Constr. Robot. 2017, 1, 3–14. [Google Scholar] [CrossRef]
  17. Ercan, S.; Meier, S.; Gramazio, F.; Kohler, M. Automated localization of a mobile construction robot with an external measurement device. In Proceedings of the 36th International Symposium on Automation and Robotics in Construction (ISARC 2019), Banff, AB, Canada, 21–24 May 2019; pp. 929–936. [Google Scholar] [CrossRef]
  18. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  19. Kim, P.; Chen, J.; Cho, Y.K. SLAM-driven robotic mapping and registration of 3D point clouds. Autom. Constr. 2018, 89, 38–48. [Google Scholar] [CrossRef]
  20. Basiri, M.; Gonçalves, J.; Rosa, J.; Vale, A.; Lima, P. An autonomous mobile manipulator to build outdoor structures consisting of heterogeneous brick patterns. SN Appl. Sci. 2021, 3, 558. [Google Scholar] [CrossRef]
  21. Lakhal, O.; Chettibi, T.; Belarouci, A.; Dherbomez, G.; Merzouki, R. Robotized additive manufacturing of funicular architectural geometries based on building materials. IEEE/ASME Trans. Mechatron. 2020, 25, 2387–2397. [Google Scholar] [CrossRef]
  22. Yan, R.J.; Kayacan, E.; Chen, I.M.; Tiong, L.K.; Wu, J. QuicaBot: Quality inspection and assessment robot. IEEE Trans. Autom. Sci. Eng. 2018, 16, 506–517. [Google Scholar] [CrossRef]
  23. Zhang, X.; Li, M.; Lim, J.H.; Weng, Y.; Tay, Y.W.D.; Pham, H.; Pham, Q.C. Large-scale 3D printing by a team of mobile robots. Autom. Constr. 2018, 95, 98–106. [Google Scholar] [CrossRef]
  24. Zhang, L.; Zapata, R.; Lepinay, P. Self-adaptive Monte Carlo localization for mobile robots using range finders. Robotica 2012, 30, 229–244. [Google Scholar] [CrossRef]
  25. Tiryaki, M.E.; Zhang, X.; Pham, Q.C. Printing-while-moving: A new paradigm for large-scale robotic 3D Printing. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2286–2291. [Google Scholar] [CrossRef]
  26. Lázaro, M.T.; Capobianco, R.; Grisetti, G. Efficient long-term mapping in dynamic environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 153–160. [Google Scholar] [CrossRef]
  27. Moura, M.S.; Rizzo, C.; Serrano, D. Bim-based localization and mapping for mobile robots in construction. In Proceedings of the IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Santa Maria da Feira, Portugal, 28–29 April 2021; pp. 12–18. [Google Scholar] [CrossRef]
  28. Kim, K.; Peavy, M. BIM-based semantic building world modeling for robot task planning and execution in built environments. Autom. Constr. 2022, 138, 104247. [Google Scholar] [CrossRef]
  29. Zhao, X.; Cheah, C.C. BIM-based indoor mobile robot initialization for construction automation using object detection. Autom. Constr. 2023, 146, 104647. [Google Scholar] [CrossRef]
  30. Xie, D.J.; Zeng, L.D.; Xu, Z.; Guo, S.; Cui, G.H.; Song, T. Base position planning of mobile manipulators for assembly tasks in construction environments. Adv. Manuf. 2023, 11, 93–110. [Google Scholar] [CrossRef]
  31. Campbell, S.; O’Mahony, N.; Carvalho, A.; Krpalkova, L.; Riordan, D.; Walsh, J. Where am I? Localization techniques for mobile robots a review. In Proceedings of the 6th International Conference on Mechatronics and Robotics Engineering (ICMRE), Barcelona, Spain, 12–15 February 2020; pp. 43–47. [Google Scholar] [CrossRef]
  32. Feng, X.; Guo, S.; Li, X.; He, Y. Robust mobile robot localization by tracking natural landmarks. In Proceedings of the Artificial Intelligence and Computational Intelligence: International Conference, AICI 2009, Shanghai, China, 7–8 November 2009; Proceedings 1. Springer: Berlin/Heidelberg, Germany, 2009; pp. 278–287. [Google Scholar] [CrossRef]
  33. Zhou, Y. An efficient least-squares trilateration algorithm for mobile robot localization. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 3474–3479. [Google Scholar] [CrossRef]
  34. Xu, H.; Ding, Y.; Wang, R.; Shen, W.; Li, P. A novel radio frequency identification three-dimensional indoor positioning system based on trilateral positioning algorithm. J. Algorithms Comput. Technol. 2016, 10, 158–168. [Google Scholar] [CrossRef]
  35. Zheng, S.; Li, Z.; Liu, Y.; Zhang, H.; Zou, X. An optimization-based UWB-IMU fusion framework for UGV. IEEE Sens. J. 2022, 22, 4369–4377. [Google Scholar] [CrossRef]
  36. Censi, A.; Franchi, A.; Marchionni, L.; Oriolo, G. Simultaneous calibration of odometry and sensor parameters for mobile robots. IEEE Trans. Robot. 2013, 29, 475–492. [Google Scholar] [CrossRef]
  37. Li, C.; Wang, S.; Zhuang, Y.; Yan, F. Deep sensor fusion between 2D laser scanner and IMU for mobile robot localization. IEEE Sens. J. 2019, 21, 8501–8509. [Google Scholar] [CrossRef]
  38. Erdem, A.T.; Ercan, A.Ö. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking. IEEE Trans. Image Process. 2014, 24, 538–548. [Google Scholar] [CrossRef] [PubMed]
  39. Cui, Y.; Liu, S.; Yao, J.; Gu, C. Integrated positioning system of unmanned automatic vehicle in coal mines. IEEE Trans. Instrum. Meas. 2021, 70, 8503013. [Google Scholar] [CrossRef]
  40. Wang, J.; Alipouri, Y.; Huang, B. Dual neural extended Kalman filtering approach for multirate sensor data fusion. IEEE Trans. Instrum. Meas. 2020, 70, 6502109. [Google Scholar] [CrossRef]
  41. Teslić, L.; Škrjanc, I.; Klančar, G. EKF-based localization of a wheeled mobile robot in structured environments. J. Intell. Robot. Syst. 2011, 62, 187–203. [Google Scholar] [CrossRef]
  42. Zhu, J.; Kia, S.S. Cooperative localization under limited connectivity. IEEE Trans. Robot. 2019, 35, 1523–1530. [Google Scholar] [CrossRef]
  43. Yu, C.; Liu, Z.; Liu, X.J.; Xie, F.; Yang, Y.; Wei, Q.; Fei, Q. DS-SLAM: A semantic visual SLAM towards dynamic environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1168–1174. [Google Scholar] [CrossRef]
Figure 1. Mobile construction robots for architectural applications.
Figure 2. The proposed prototype mobile construction robot system. The platform is equipped with an IRB2600 industrial robot (ABB Robotics), which can achieve on-site localization in unstructured environments.
Figure 3. A flowchart for the proposed positioning system.
Figure 4. The geometric interpretation of the adaptive threshold in the laser coordinate system.
Figure 5. The geometric configuration of the reflector in the (a) global and (b) robot coordinate systems.
Figure 6. Ideally, the three circles should intersect at a single point, namely the location of the mobile construction robot (a); however, due to matching errors, they intersect over a finite region (b).
Figure 7. A diagram of the proposed fusion framework. Predicted values are derived from the motion model, while observed values are acquired using trilateral positioning, followed by down-sampling and frequency alignment to produce the optimal EKF pose estimate.
Figure 8. The posture measurement update process based on trilateral positioning.
Figure 9. A demonstration of the application of mobile construction robots in digital construction.
Figure 10. The idealized scenario simulated in Gazebo to estimate the extraction accuracy of the artificial landmarks.
Figure 11. Artificial landmark map and error fluctuations based on Euclidean distances.
Figure 12. Simulated scenario for testing positioning accuracy, using the mobile construction robot to tile the facade of the toilet.
Figure 13. Translational posture errors for the three approaches.
Figure 14. Rotational posture errors for the three approaches.
Figure 15. Posture trajectory tracking in the Gazebo simulation environment.
Figure 16. Mobile construction robot prototype for testing and the relationships between coordinate systems.
Figure 17. Simulation of plate assembly test scenarios and recognition of artificial landmarks based on laser reflection intensity.
Figure 18. Schematic illustrating the station path for simulating the assembly process.
Figure 19. Deviation of the actual position output value from the ideal value.
Table 1. Comparison of the artificial landmark circle center coordinates extracted via laser reflection intensity with the true coordinates in Gazebo (m).

| No. | Axis | 1# | 2# | 3# | 4# | 5# | 6# | 7# |
|---|---|---|---|---|---|---|---|---|
| True value | X-axis | −0.89 | 3.58 | 3.87 | −0.41 | −4.43 | −5.52 | −5.24 |
| | Y-axis | −6.28 | −3.77 | 3.68 | 5.61 | 3.99 | −0.46 | −4.59 |
| Measured values | X-axis | −0.87 | 3.86 | 3.85 | −0.38 | −4.37 | −5.48 | −5.21 |
| | Y-axis | −6.28 | −3.77 | 3.68 | 5.61 | 3.98 | −0.46 | −4.55 |
| Euclidean distance | — | 0.02 | 0.02 | 0.02 | 0.03 | 0.06 | 0.04 | 0.05 |
Table 2. The ideal site location coordinates and the average coordinates output by the positioning algorithm.

| Site | Coordinate Axis | Site 1 | Site 2 | Site 3 | Site 4 | Site 5 | Site 6 |
|---|---|---|---|---|---|---|---|
| Ideal value (m) | X-axis | 0 | 3 | 0 | 3 | 0 | 3 |
| | Y-axis | 0 | 0 | 1.2 | 1.2 | 2.4 | 2.4 |
| Output value (m) | X-axis | 0.018 | 3.071 | −0.075 | 3.064 | 0.077 | 2.911 |
| | Y-axis | 0.014 | 0.031 | 1.221 | 1.315 | 2.461 | 2.382 |