The experiments and simulations presented in this section cover different scenarios used to determine the best configuration for localizing a mobile robot in a controlled environment. The experiments used the cameras of the OptiTrack system to provide a ground-truth reference, enabling the analysis of the effectiveness of the LBBA and supplying reliable information about the orientation of the robot (a LIMO), thus emulating a digital compass.
4.1. Performance Associated with the Real Tests
This section describes the experiments conducted with the LIMO robot in a controlled environment, utilizing the proposed LBBA. The main objectives were to validate the accuracy of the global localization algorithm and evaluate the effectiveness of trajectory-tracking control in autonomous navigation tasks. The robot was equipped with a LiDAR sensor capable of performing 180 measurements per scan. In addition, the OptiTrack motion-capture installation was used as a ground-truth reference to validate the localization and the trajectory-tracking control. Additionally, a digital compass, emulated by the OptiTrack system, was incorporated, helping to reduce the randomness in the algorithm’s estimates and guiding the particles toward a more accurate orientation, as mentioned before.
The experiments adopted a previously known map containing obstacles arranged to avoid excessive ambiguity. Initially, the robot had no information about its exact location in the environment. The LBBA was used to perform global localization, and the tests demonstrated its effectiveness in quickly determining the robot’s initial position. This step is critical, as the mission only begins when the global localization returns a reliable position estimate.
Figure 7 describes the test arena, consisting of a Linux PC running ROS Melodic, which establishes the Wi-Fi communication between the control computer and the robot, with the control signals sent to the vehicle at a rate of 30 Hz. The controller adopted was the kinematic one discussed in [18].
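To make the command pipeline concrete, the sketch below shows a minimal ROS (rospy) loop that publishes velocity commands at 30 Hz. The topic name /cmd_vel, the gains, and the unicycle-style point-tracking law are illustrative assumptions, not the actual controller of [18].

```python
#!/usr/bin/env python
# Minimal sketch of a 30 Hz velocity-command loop (assumed topic and gains,
# not the exact kinematic controller adopted in the experiments).
import math
import rospy
from geometry_msgs.msg import Twist

def track_point(x, y, yaw, x_ref, y_ref, k_v=0.5, k_w=1.5):
    """Simple kinematic law: drive from pose (x, y, yaw) toward (x_ref, y_ref)."""
    dx, dy = x_ref - x, y_ref - y
    rho = math.hypot(dx, dy)                                   # distance to the reference point
    alpha = math.atan2(dy, dx) - yaw                           # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))       # wrap to [-pi, pi]
    return k_v * rho, k_w * alpha                              # (linear, angular) velocities

if __name__ == "__main__":
    rospy.init_node("limo_tracking_controller")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(30)                                      # control signals at 30 Hz
    while not rospy.is_shutdown():
        # In the real system, the pose would come from the LBBA localization node
        # and the reference point from the trajectory generator; constants are used here.
        v, w = track_point(x=0.0, y=0.0, yaw=0.0, x_ref=1.0, y_ref=0.5)
        msg = Twist()
        msg.linear.x, msg.angular.z = v, w
        cmd_pub.publish(msg)
        rate.sleep()
```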
The mission began after the global localization phase, and during the mission, we applied the local localization algorithm. This algorithm concentrates the particles in a probability region along the robot’s movement trajectory, optimizing their use in the area where the robot actually is. This procedure allowed the robot to maintain a suitable frequency of position updates for trajectory control during the missions.
The experiments used two trajectories: a lemniscate of Bernoulli and an ellipse. The lemniscate, shaped like a figure eight, is widely used in experiments on robot motion control to test the ability to control the robot’s movements smoothly and precisely, because its continuous changes in direction and curvature make it challenging enough to validate control algorithms.
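As an illustration, the snippet below generates both reference paths from their standard parametric forms; the scale parameters and sampling are arbitrary choices, not the values used in the experiments.

```python
import numpy as np

def lemniscate_of_bernoulli(a=1.0, n=600):
    """Figure-eight path: x = a*cos(t)/(1+sin^2 t), y = a*sin(t)*cos(t)/(1+sin^2 t)."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    denom = 1.0 + np.sin(t) ** 2
    return a * np.cos(t) / denom, a * np.sin(t) * np.cos(t) / denom

def ellipse(a=1.2, b=0.8, n=600):
    """Elliptical path with semi-axes a and b."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return a * np.cos(t), b * np.sin(t)

x_lem, y_lem = lemniscate_of_bernoulli()
x_ell, y_ell = ellipse()
```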
Figure 8 and Figure 9 illustrate the two trajectory-tracking experiments, respectively. In the second case, the elliptical trajectory, human intervention through a joystick took the robot to a point entirely off the trajectory at two different moments during the navigation, marked as blue squares inside black boxes. As one can see from Figure 9, the localization system provided the information necessary for the controller to guide the robot back onto the trajectory.
In addition to the two trajectory-tracking tasks, the robot also performed a positioning mission, programmed to reach five waypoints distributed throughout the map, as shown in Figure 10.
Using the LBBA for position estimation was crucial for trajectory control and positioning success. Instead of spreading particles across the entire map, the local localization strategy allowed the use of a smaller number of particles without compromising the accuracy of the estimates.
Figure 11 shows the flowchart of the localization algorithm, detailing the global and local localization steps.
The flowchart describes a robotic navigation system built on the Robot Operating System (ROS), comprising several nodes responsible for the localization and control of the robot. The process begins with the activation of all nodes: Global Localization (Node 1), Local Localization (Node 3), Robot Control (Node 2), and Ground Truth (Node 4). Initially, the Global Localization node estimates the robot’s position over the entire map. If the estimated position is acceptable, the robot starts moving toward the desired trajectory/position, with the Robot Control node managing this movement. From this moment on, the Local Localization node becomes responsible for continuously estimating the robot’s position during navigation. The Ground Truth node (based on the OptiTrack motion capture system) provides the robot’s orientation to the localization system and the real robot position for comparison with the one estimated by our algorithm. It also assists in correcting the robot’s orientation within the working environment. This feedback loop ensures accurate navigation and continuous adjustments as the robot moves across the map.
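A minimal sketch of how one of these nodes could be wired in rospy is shown below; the topic names (/scan, /compass_yaw, /estimated_pose) and the update rate are assumptions for illustration, not necessarily those used in the actual system.

```python
#!/usr/bin/env python
# Sketch of the Local Localization node's ROS wiring (topic names are assumed).
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import PoseStamped
from std_msgs.msg import Float64

class LocalLocalizationNode(object):
    def __init__(self):
        self.last_scan = None
        self.compass_yaw = None
        rospy.Subscriber("/scan", LaserScan, self.on_scan)          # LiDAR (180 beams/scan)
        rospy.Subscriber("/compass_yaw", Float64, self.on_compass)  # emulated digital compass
        self.pose_pub = rospy.Publisher("/estimated_pose", PoseStamped, queue_size=1)

    def on_scan(self, msg):
        self.last_scan = msg

    def on_compass(self, msg):
        self.compass_yaw = msg.data

    def spin(self):
        rate = rospy.Rate(10)   # assumed localization update rate
        while not rospy.is_shutdown():
            if self.last_scan is not None and self.compass_yaw is not None:
                pose = PoseStamped()                 # the LBBA estimate would be filled in here
                pose.header.stamp = rospy.Time.now()
                self.pose_pub.publish(pose)
            rate.sleep()

if __name__ == "__main__":
    rospy.init_node("local_localization")
    LocalLocalizationNode().spin()
```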
The close correspondence between the estimated and real trajectories demonstrated the algorithm’s accuracy, as shown in Figure 8 and Figure 9, which present an overlay of the trajectories estimated by the LBBA proposed here and the real trajectories provided by the OptiTrack system, considering the trajectory tracking of a lemniscate of Bernoulli and an ellipse, respectively. There, one can see that the trajectory generated using the positions estimated by our algorithm is very close to the ground truth, confirming the accuracy of the estimates.
Tools such as error graphs and boxplots illustrate the distribution of errors and help evaluate the performance of the localization system. The estimates tend to converge to the reference values over the algorithm’s execution, as evidenced by Figure 8 and Figure 9. Indeed, a visual analysis of the overlapping line graphs shows the progressive approximation of the estimates to the ground truth. In the experiment related to the lemniscate trajectory, the localization algorithm demonstrated the ability to estimate the position accurately and efficiently over 15 min.
On the other hand, in the experiment in which the robot followed an elliptical trajectory, besides proving the ability of the proposed algorithm to follow the desired path, we also evaluated its robustness. A human operator using a joystick intervened, forcing the robot away from the desired trajectory. After control was returned to the system, however, it was able to bring the robot back onto the desired trajectory, validating the robustness of the proposed algorithm.
The accuracy of mobile robot position estimation is a crucial factor directly impacting performance in tasks related to online trajectory planning, motion control, and environmental interaction. This study performs a quantitative analysis of the position estimation error of the LIMO robot in three different types of trajectories: elliptical, waypoints, and lemniscate. The results are presented in the form of error graphs over time, providing a critical assessment of the robustness of the navigation system and the challenges encountered in each type of trajectory (Figure 12).
The analysis was performed by comparing the position estimated by the robot’s navigation system with the real position obtained from high-precision sensors. The errors were recorded over time, and the results were visualized in graphs with a trend line representing the average error. This approach allowed a detailed analysis of the system’s accuracy in different navigation contexts.
The elliptical trajectory, characterized by continuous and smooth curves, presented a moderate variation in error over time. The maximum error recorded was approximately 0.2 m, with more pronounced peaks observed around 50 s (top graph). These peaks can be attributed to the system’s difficulty in maintaining high accuracy on curves, where slight deviations in the estimation of the angular orientation propagate to the linear position error. The average error on this trajectory was 0.03 m, showcasing the good performance of the estimation algorithm.
The waypoint navigation presented a distinct error behavior, with more pronounced fluctuations in the first 15 s. The maximum error recorded was approximately 0.1 m, with higher peaks between 30 and 35 s, which coincide with transition moments between different waypoints (center graph). This suggests the navigation system faces challenges during the robot’s reorientation at these transition points. The average error along this navigation channel was 0.02 m, the lowest among the three experiments analyzed, indicating that the robot could perform accurate position corrections throughout most of the route.
With its figure-of-eight pattern, the lemniscate trajectory introduced greater complexity due to the continuous variation in curvature and direction. The maximum error recorded was similar to that of the elliptical trajectory, around 0.2 m, but presented more frequent fluctuations over time (lower graph). The average error was 0.03 m, which reflects the ability of the navigation system to maintain an acceptable overall accuracy, even on a trajectory with continuous orientation changes.
The results show that the performance of the system in estimating the robot’s position varies according to the trajectory’s complexity. Trajectories involving continuous changes in curvature, such as the elliptical and the lemniscate, presented higher error peaks, suggesting that the estimation and control algorithms are more sensitive to abrupt variations in angular orientation and linear velocity. This behavior emphasizes the importance of information fusion with the compass sensor, especially when dealing with more complex trajectories, such as those analyzed here.
As Table 1 and Table 2 show, the filtered algorithm presents a lower error than the version without filtering. Therefore, using filtered data provided better conditions for controlling the robot during the missions. This behavior can be justified by the ability of the filters to smooth the sensory data, removing rapid oscillations and noise that could induce undesired behaviors in the controller. The conclusion is that data smoothing allows the controller to generate more consistent and stable movement commands, resulting in more efficient robot behavior. Filtering also reduces the controller’s sensitivity to small fluctuations, which, in the case of raw data, could result in abrupt and ineffective corrections, highlighting the advantage of using filtered data to improve control performance.
The Root Mean Square Error (RMSE) index is the basis of such an analysis. It measures the difference between the estimated and real values (ground truth) over time and is a metric widely used to evaluate the accuracy of localization algorithms. Such an index is calculated as

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{x}_i - x_i\right)^2},$$

where $n$ is the number of samples, $\hat{x}_i$ are the estimated values, and $x_i$ are the true values. Besides presenting lower RMSE values, the filtered data were smoother and more predictable, which resulted in improved robot control.
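This computation, applied to each coordinate series or to the Euclidean position error, can be sketched as follows; the array names and values are illustrative, not experimental data.

```python
import numpy as np

def rmse(estimated, ground_truth):
    """Root Mean Square Error between estimated and ground-truth samples."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.sqrt(np.mean((estimated - ground_truth) ** 2))

# Example with synthetic values (not experimental data):
x_est = np.array([0.00, 0.11, 0.21, 0.32])
x_gt  = np.array([0.00, 0.10, 0.20, 0.30])
print(rmse(x_est, x_gt))  # ~0.012 m
```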
Additionally, filters help mitigate temporary error spikes that can severely impact robot performance if not properly corrected. By providing a more stable and predictable data set, the controller can predict the robot’s movement more effectively, facilitating smoother tracking of complex trajectories, such as lemniscate and elliptical. In this way, the filtered data provide greater overall stability to the control system, allowing the robot to maintain robust and more efficient performance throughout the mission.
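The text does not specify which filter was used; as a hedged illustration, a simple first-order exponential (low-pass) smoother applied to the estimated positions behaves as described, attenuating rapid oscillations while preserving the trend (the smoothing factor is an arbitrary choice).

```python
import numpy as np

def exponential_smoothing(samples, alpha=0.2):
    """First-order low-pass filter: y[k] = alpha*x[k] + (1 - alpha)*y[k-1]."""
    samples = np.asarray(samples, dtype=float)
    filtered = np.empty_like(samples)
    filtered[0] = samples[0]
    for k in range(1, len(samples)):
        filtered[k] = alpha * samples[k] + (1.0 - alpha) * filtered[k - 1]
    return filtered

# Noisy 1D position estimates (synthetic) and their smoothed version:
raw = 0.5 * np.sin(np.linspace(0, 6, 200)) + 0.05 * np.random.randn(200)
smooth = exponential_smoothing(raw)
```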
Figure 13 shows the analysis of the errors of the localization algorithm in a lemniscate trajectory experiment, comparing the estimates with the real position of the robot. The distribution of errors, illustrated by the boxplot, demonstrates a significant concentration of values close to zero, as the interquartile range indicates. The median of the error values, represented by the red line inside the box, is slightly below the center, suggesting a slight asymmetry in the data. The narrow width of the interquartile range indicates that most of the errors remain within a narrow range, reflecting the algorithm’s consistency most of the time. However, many outliers are observed above the box, revealing more significant errors at specific moments. These atypical values indicate that, although the algorithm’s overall performance is accurate, there are situations in which the robot’s localization error increases considerably. Furthermore, the whiskers, which show the variability of errors without considering outliers, are relatively short. This reinforces that most errors remain within a small range, with occasional exceptions that indicate deviations in the system’s behavior at certain moments of the experiment.
The histogram of Figure 14 shows the distribution of the Euclidean error along the lemniscate trajectory. Analysis of this graph reveals that most of the errors are concentrated at low values, with the highest frequency of occurrences around errors close to zero, which indicates the excellent overall performance of the localization algorithm.
A sharp drop in frequency is observed as the error increases, with few values above 0.05 m, suggesting that most of the algorithm’s estimates are quite accurate. However, a small number of larger errors, reaching values of around 0.15 m or more, do occur, although they are rare.
The greater concentration around small errors, combined with the low frequency of more significant errors, reinforces the idea that the algorithm performs well in most cases, with a few exceptions where the error is more pronounced. This asymmetric distribution, with a long tail on the right, suggests that the most significant deviations are sporadic and may be related to the specific conditions of the experiment, such as noise or temporary limitations in the localization process. Therefore, the histogram confirms that the localization algorithm has a high accuracy most of the time, with few moments of significant error.
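For reference, the Euclidean error series underlying these plots can be obtained and visualized as sketched below; the variable names and data are illustrative, and the figures in the paper were not necessarily produced this way.

```python
import numpy as np
import matplotlib.pyplot as plt

def euclidean_error(est_xy, gt_xy):
    """Per-sample Euclidean distance between estimated and ground-truth positions."""
    return np.linalg.norm(np.asarray(est_xy) - np.asarray(gt_xy), axis=1)

# est_xy and gt_xy: (n, 2) arrays of estimated and OptiTrack positions (synthetic here).
gt_xy = np.column_stack([np.cos(np.linspace(0, 6, 500)), np.sin(np.linspace(0, 6, 500))])
est_xy = gt_xy + 0.02 * np.random.randn(*gt_xy.shape)
err = euclidean_error(est_xy, gt_xy)

fig, (ax_box, ax_hist) = plt.subplots(1, 2, figsize=(8, 3))
ax_box.boxplot(err)                  # distribution summary (median, IQR, outliers)
ax_hist.hist(err, bins=30)           # frequency of error magnitudes
ax_hist.set_xlabel("Euclidean error [m]")
plt.tight_layout()
plt.show()
```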
Therefore, one can conclude that all the experiments were successfully conducted, demonstrating that the LBBA proposed here is effective for localization and trajectory control in known environments, establishing itself as a promising tool for applications in autonomous robotics. Finally, Figure 15 shows the LIMO robot in action during the experimental tests to give an idea of the experimental setup.
4.2. Performance Evaluation Through Simulated Tests
In this section, we compare the performance of the leader-based bat algorithm (LBBA) proposed here against that of two other bio-inspired optimization algorithms, the Manta Ray Foraging Optimization (MRFO) algorithm and the Black Widow Optimization (BWO) algorithm, both of which are also applicable to mobile robot localization. These three algorithms are based on natural behaviors and are designed to handle complex global optimization problems in search spaces with multiple local optima, which makes them particularly suitable for the robotic localization problem. Although MRFO and BWO yielded strong results, our LBBA proposal outperformed them in accuracy and robustness, indicating that it may offer distinct advantages in this context.
Essentially, the LBBA is an extension of the bat algorithm (BA). The difference, as detailed in this text, is the inclusion of multiple leaders that direct the swarm to specific regions of the search space. This feature provides a unique distributed exploration framework, efficiently covering larger areas and avoiding getting trapped in local optima. Considering robotic localization, the algorithm can obtain more accurate estimates of the robot’s initial position by exploring different regions simultaneously and adjusting itself to avoid suboptimal solutions that could result in significant errors. In turn, MRFO, inspired by the foraging behavior of manta rays, incorporates three distinct strategies, namely chain, cyclone, and somersault, which seek a balance between global exploration and intensive exploitation of promising regions. Each strategy uses structured moves to ensure that the algorithm can switch between a global search and localized exploration, adapting to the configuration of the search space. This makes MRFO suitable for situations where adaptation to new regions is critical. However, this approach may require more particles to maintain accuracy in restricted areas, such as a region of interest (ROI) [19]. The BWO, in turn, is based on the reproductive and cannibalistic behavior of the black widow spider, where low-fitness individuals are eliminated to optimize exploration. This process of discarding unpromising hypotheses improves the algorithm’s convergence rate, which helps reduce the initial computation time in complex environments [20]. However, the dynamics of cannibalism, by rapidly reducing particle diversity, can limit its adaptability in restricted areas, such as an ROI, making it difficult to obtain an accurate robot position.
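To make the multi-leader mechanism tangible, the following is a minimal sketch of a multi-leader bat update consistent with the description above; the parameter values, the leader-assignment rule, the sign convention of the velocity update, and the (vectorized) fitness function are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def lbba_step(positions, velocities, leaders, assignment, fitness,
              f_min=0.0, f_max=2.0, loudness=0.5, pulse_rate=0.5, rng=None):
    """One simplified multi-leader bat update: each bat is pulled toward its own
    assigned leader instead of a single global best (illustrative sketch only)."""
    rng = rng or np.random.default_rng()
    n, dim = positions.shape
    freqs = f_min + (f_max - f_min) * rng.random(n)          # per-bat frequency
    # Velocity update toward the leader assigned to each bat
    # (sign conventions vary across bat-algorithm variants).
    velocities += (leaders[assignment] - positions) * freqs[:, None]
    candidates = positions + velocities
    # Occasional local random walk around the assigned leader, scaled by loudness.
    walk = rng.random(n) > pulse_rate
    candidates[walk] = leaders[assignment[walk]] + loudness * rng.normal(size=(int(walk.sum()), dim))
    # Greedy acceptance: keep a candidate only if it improves (lowers) the fitness.
    improved = fitness(candidates) < fitness(positions)
    positions[improved] = candidates[improved]
    return positions, velocities
```

Here, `fitness` stands for any batch evaluation of pose hypotheses, for instance, the scan-matching discrepancy discussed below.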
Accurate and rapid robot localization in a mapped environment initially requires a global estimate of its position. For this task, particles are randomly distributed throughout the map, allowing the algorithm to compare the readings of the real sensor with simulated readings to identify the robot’s position. However, this initial phase requires a significant number of particles, which increases the computational cost. At this point, the robustness of the LBBA in exploring multiple regions under independent leaders proves advantageous, ensuring broad coverage and rapid convergence to an accurate initial estimate.
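The fitness of a particle (a pose hypothesis) can be sketched as the discrepancy between the real LiDAR scan and a scan simulated from that pose on the known map. The simple fixed-step ray-casting below, the 180-degree field of view, and the grid resolution are placeholder assumptions for illustration, not the paper’s implementation.

```python
import numpy as np

def simulate_scan(pose, grid, resolution=0.05, n_beams=180, max_range=8.0):
    """Ray-cast n_beams ranges from pose = (x, y, yaw) on a boolean occupancy grid
    (True = occupied) whose origin is at (0, 0); simple marching, purely illustrative."""
    x, y, yaw = pose
    angles = yaw + np.linspace(-np.pi / 2, np.pi / 2, n_beams)   # assumed 180-degree FOV
    ranges = np.full(n_beams, max_range)
    step = resolution / 2.0
    for i, a in enumerate(angles):
        r = 0.0
        while r < max_range:
            cx = int((x + r * np.cos(a)) / resolution)
            cy = int((y + r * np.sin(a)) / resolution)
            if not (0 <= cy < grid.shape[0] and 0 <= cx < grid.shape[1]) or grid[cy, cx]:
                ranges[i] = r
                break
            r += step
    return ranges

def particle_fitness(pose, real_scan, grid, resolution=0.05):
    """Sum of squared range differences between the real and the simulated scan (lower is better)."""
    expected = simulate_scan(pose, grid, resolution, n_beams=len(real_scan))
    return float(np.sum((np.asarray(real_scan) - expected) ** 2))
```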
After obtaining a reliable initial position, the algorithm switches to a local localization procedure based on the definition of an ROI, a circle centered on the last estimated robot position, with a radius of 20 cm, inside which the robot is assumed to be. This refinement significantly reduces the number of particles required, since the algorithm focuses the search on the area with the highest probability of containing the robot. This improves computational efficiency, allowing for precise real-time localization without requiring the entire map to be explored again.
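A sketch of the ROI-constrained sampling is given below, assuming particles drawn uniformly inside the 20 cm circle; the exact sampling distribution and particle count are not specified in the text and are illustrative choices here.

```python
import numpy as np

def sample_roi_particles(center_xy, n_particles=50, radius=0.20, rng=None):
    """Draw particle positions uniformly inside a circle of `radius` (m) around
    the last estimated position; the distribution and count are illustrative."""
    rng = rng or np.random.default_rng()
    r = radius * np.sqrt(rng.random(n_particles))      # sqrt gives uniform area density
    theta = 2.0 * np.pi * rng.random(n_particles)
    return np.column_stack([center_xy[0] + r * np.cos(theta),
                            center_xy[1] + r * np.sin(theta)])

particles = sample_roi_particles(center_xy=(1.2, 0.8))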
A relevant aspect contributing to the accuracy and robustness of localization is the integration of a compass, which assists in orientation and helps discard unfavorable positioning hypotheses. Data fusion between the compass and the other sensors (LiDAR and odometry) reduces variability and the need for more particles, improving the algorithm’s reliability and enabling localization at a lower computational cost. This feature is especially useful within the ROI, where the orientation is more sensitive to deviations.
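One simple way to exploit the compass reading is to penalize pose hypotheses whose orientation deviates from it, as in the sketch below; the Gaussian weighting and its standard deviation are assumptions for illustration, not the fusion scheme actually implemented.

```python
import numpy as np

def compass_weight(particle_yaws, compass_yaw, sigma=np.deg2rad(10.0)):
    """Gaussian weight on the angular difference between each hypothesis and the compass."""
    diff = np.arctan2(np.sin(particle_yaws - compass_yaw),
                      np.cos(particle_yaws - compass_yaw))   # wrap to [-pi, pi]
    return np.exp(-0.5 * (diff / sigma) ** 2)

# Hypotheses far from the compass reading receive weights near zero:
weights = compass_weight(np.array([0.0, 0.2, 1.5]), compass_yaw=0.1)
```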