In this section, the performance of the designed controllers is first evaluated based on the results obtained during experimental validation. Following this, the two navigation approaches are presented in detail: one relying solely on encoder feedback and the other incorporating both encoder and gyroscope data. A comparative analysis is then conducted between the two methods, highlighting their respective strengths, limitations, and suitability for the intended application.
3.1. Velocity and Position Control
For the control stage, the final simulation results, obtained in the Simulink environment, are illustrated in Figure 10. In Figure 10a, a reference of 100° was applied, representing the desired angular displacement of the robot. As can be observed, the system reaches steady state in approximately 8 s with zero final position error, confirming that the implemented control strategy ensures accurate tracking of the reference input.
As shown in Figure 10b, the control signal generated for both motors remains well behaved, with PWM values staying below the maximum of 255. This indicates that actuator saturation does not occur for reference angles below 100°. If a larger rotation is required, the same rotation command can simply be executed multiple times; for instance, a 180° rotation can be achieved by applying the 90° rotation command twice in succession. This approach merely doubles the response time (which remains small) while preserving the system’s stability and control accuracy.
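As a minimal illustration of this command-repetition strategy, the Python sketch below decomposes a larger rotation request into repeated 90° turns; the rotate_90() callback is a hypothetical stand-in for the robot’s actual position-control routine, not the real firmware interface.

```python
# Sketch: split a large rotation into repeated 90-degree turns so that
# each individual command stays within the PWM saturation limit (255).
# rotate_90() is a placeholder for the robot's actual turn routine.

def rotate_by(angle_deg, rotate_90):
    turns, remainder = divmod(abs(angle_deg), 90)
    direction = "left" if angle_deg > 0 else "right"
    for _ in range(int(turns)):
        rotate_90(direction)   # e.g. 180 deg -> two consecutive 90 deg turns
    return remainder           # residual angle below 90 deg, if any

rotate_by(180, lambda d: print(f"executing 90-degree {d} turn"))
```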
Robustness analysis was carried out for the inner control loop by varying the parameters of the fixed transfer function used for velocity control by ±30%. The results confirmed that the system’s performance remained stable, with no significant degradation, indicating good robustness with respect to parameter variations. These performance values are summarized in Table 1 and illustrated in Figure 11a, confirming the reliability of the velocity controller. Subsequently, the system’s ability to reject output disturbances in the inner loop was evaluated. As shown in Figure 11b, two step disturbances were applied at different time instants, one negative and one positive. In both cases, the controller promptly and effectively rejected the perturbations, further demonstrating the robustness of the inner loop design.
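Such a robustness check can also be reproduced offline. The Python sketch below, given as an alternative to the Simulink experiment, perturbs a hypothetical first-order velocity model by ±30% and simulates the closed-loop step response; the plant parameters and PI gains are illustrative assumptions, not the identified values used in the paper.

```python
import numpy as np
from scipy import signal

# Assumed first-order velocity model G(s) = K / (T s + 1) with a PI
# controller C(s) = (Kp s + Ki) / s; all numbers are illustrative.
K_NOM, T_NOM = 1.0, 0.5
KP, KI = 2.0, 4.0

def closed_loop_step(K, T):
    num_l = np.polymul([KP, KI], [K])     # open loop L(s) = C(s) G(s)
    den_l = np.polymul([1, 0], [T, 1])
    den_cl = np.polyadd(den_l, num_l)     # closed loop L / (1 + L)
    return signal.step(signal.TransferFunction(num_l, den_cl))

# Perturb the fixed part by +/-30% and compare the step responses
for scale in (0.7, 1.0, 1.3):
    t, y = closed_loop_step(K_NOM * scale, T_NOM * scale)
    print(f"scale {scale:.1f}: final sample of step response = {y[-1]:.3f}")
```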
For the outer loop, robustness was evaluated by varying the time constant of the fixed part by ±30%. The results showed that the system behavior remained consistent, with no significant variations, confirming the robustness of the outer loop controller to parameter changes. These findings are summarized in Table 2 and illustrated in Figure 11c. Regarding the disturbance response of the outer loop, a step disturbance was applied at the system output. As depicted in Figure 11d, the system did not reject the disturbance but instead accumulated it over time. This behavior is expected given the integral nature of the outer loop, which inherently integrates persistent disturbances rather than rejecting them immediately. It should be noted that applying a disturbance of the same amplitude but opposite sign would counterbalance this effect, eliminating the disturbance.
3.2. Maze Solving
To evaluate the performance of the two control strategies for maze navigation, two experiments were conducted, in which the same mazes were solved using both approaches. The goal was to compare the two approaches in terms of execution time, system complexity, cost, and power consumption. These results are summarized in Table 3, Table 4, and Table 5.
As outlined in Table 3, the encoder-only approach exhibits distinct advantages, particularly reduced cost and energy consumption, owing to the simplified physical structure of the robot. The objective of the experiments is to demonstrate that a robot with a highly simplified architecture can still perform complex tasks such as maze solving. With regard to path deviation, both approaches accumulate error over time. To address this issue, the robot should periodically reset its control parameters and validate sensor accuracy, which could be achieved through continuous trajectory monitoring with a vision system; this would also facilitate real-time tracking of the robot’s state. Unlike the simpler approach of merely reading values from a sensor, the encoder-only configuration requires a deeper understanding of control theory. However, at a slightly higher computational cost, it can achieve maze-solving performance comparable to the gyroscope-based configuration (see Table 4) while benefiting from a simpler hardware design. Given the robot’s limited motor capabilities, it was not possible to achieve a significantly better response time than with the gyroscope-based approach in this experiment; reducing the response time further would have required a more aggressive control signal, pushing the motors into saturation.
Figure 12 and Table 5 present an analysis of positioning errors in the two maze scenarios for both the encoder-based and gyroscope-based methods. The error values are estimates rather than exact measurements, as they are calculated as the Euclidean distance between the center of the start line and the center of the finish line. Discrepancies may arise between the actual distances and those computed from the images due to the camera’s angle of inclination. Additionally, since the start line is not located precisely at the edge of the robot, the values may appear slightly larger than the actual distance between the robot and the line. Error accumulation is observable in both cases, especially in the second test, where the path is longer. Nevertheless, the robot reaches the target relatively closely in both situations. These experiments show that the two navigation strategies have comparable path deviations, with similar error magnitudes.
Figure 13 and Figure 14 illustrate the two mazes, over which a grid was superimposed to generate a matrix representation. From this matrix, the shortest path between the green line and the orange line was extracted; this path is highlighted in yellow. The corresponding movement commands were then transmitted to the robot via Bluetooth; the conversion from grid path to commands is sketched below.
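The conversion from the extracted grid path to movement commands can be illustrated with the short Python sketch below; the heading encoding and command names are assumptions for illustration and correspond to the front/left/right primitives described in the following paragraphs.

```python
# Sketch: turn a grid path (list of (row, col) cells, e.g. the yellow
# cells in Figure 13) into front/left/right commands for the robot.

HEADINGS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # north, east, south, west

def path_to_commands(path, heading=1):          # assume the robot faces east
    cmds = []
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        target = HEADINGS.index((r1 - r0, c1 - c0))
        diff = (target - heading) % 4
        if diff == 1:
            cmds.append("right")
        elif diff == 3:
            cmds.append("left")
        elif diff == 2:
            cmds += ["right", "right"]          # 180 deg as two 90 deg turns
        cmds.append("front")                    # one cell = one 8.5 cm step
        heading = target
    return cmds

print(path_to_commands([(0, 0), (0, 1), (0, 2), (1, 2)]))
# -> ['front', 'front', 'right', 'front']
```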
For the front command, the robot moves forward a fixed distance of 8.5 cm. During this forward motion, a velocity controller maintains a constant speed, ensuring both consistency of movement and repeatability across different commands. A back-of-the-envelope conversion of this step into encoder counts is sketched below.
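The wheel radius and counts-per-revolution used here are assumed values, since the robot’s actual geometry is not restated in this section.

```python
import math

# Assumed geometry: 3 cm wheel radius, 360 encoder counts per wheel
# revolution; both are placeholders, not the robot's real parameters.
WHEEL_RADIUS_CM = 3.0
COUNTS_PER_REV = 360

def step_to_counts(distance_cm=8.5):
    circumference_cm = 2 * math.pi * WHEEL_RADIUS_CM
    return round(distance_cm / circumference_cm * COUNTS_PER_REV)

print(step_to_counts())   # encoder target for one "front" command (~162)
```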
For rotational movements, namely the left and right commands, two distinct control strategies were implemented to achieve the 90° turns. The first strategy relies solely on encoder feedback and uses a position controller that integrates motor speed over time to compute angular displacement. The second strategy introduces additional feedback from a gyroscope (MPU6050) to monitor the robot’s rotation around the Z-axis. To reduce the influence of gyroscope noise during rotation, the system does not rely on matching an exact angular value. Instead, a threshold-based approach is applied: the robot first records its initial orientation and then starts rotating; the rotation is stopped once the absolute difference between the current gyroscope reading and the initial value exceeds 80°. This method provides a practical, noise-tolerant turning mechanism, reducing the risk of false detections or premature stops due to sensor fluctuations.
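The threshold-based turning logic can be summarized with the Python-style sketch below; since the actual logic runs as microcontroller firmware, read_yaw() and set_motors() are hypothetical stand-ins for the MPU6050 yaw readout and the motor driver interface.

```python
import time

TURN_THRESHOLD_DEG = 80.0    # stop once |yaw - yaw0| exceeds this value

def rotate(direction, read_yaw, set_motors):
    yaw0 = read_yaw()                     # record the initial orientation
    sign = 1 if direction == "right" else -1
    set_motors(sign * 0.5, -sign * 0.5)   # spin in place at moderate speed
    while abs(read_yaw() - yaw0) < TURN_THRESHOLD_DEG:
        time.sleep(0.005)                 # poll the gyroscope yaw estimate
    set_motors(0.0, 0.0)                  # stop at the threshold crossing
```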
A YOLOv8 network was trained to detect the start and finish lines, and its performance is illustrated in Figure 15. The model’s performance was evaluated based on the progression of training losses and detection metrics. Training losses decreased gradually and remained stable throughout the epochs, indicating effective learning without overfitting. In terms of evaluation metrics, the model achieved high precision, recall, and mean average precision (mAP) scores on the validation set, demonstrating its ability to detect and correctly classify the target objects. Overall, the model shows good generalization and is well suited to the task.
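For context, training a detector of this kind with the Ultralytics YOLOv8 API requires only a few lines; the dataset file name and hyperparameters below are placeholders, not the exact configuration used in this work.

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 nano model on the start/finish-line
# dataset; "maze_lines.yaml" and the hyperparameters are placeholders.
model = YOLO("yolov8n.pt")
model.train(data="maze_lines.yaml", epochs=100, imgsz=640)

# Report validation metrics (precision, recall, mAP)
metrics = model.val()
print(metrics.box.map)   # mean average precision over IoU 0.5-0.95
```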
Additionally, Figure 16 illustrates one of the validation batches, highlighting that the two lines were successfully detected despite being captured under varying lighting conditions and with differences in structural appearance. This demonstrates the model’s robustness and its ability to accurately identify the lines even when environmental factors and visual characteristics change.
A key advantage of the proposed vision-based approach is that the entire maze structure is captured in a single frame at the start of the process. Consequently, challenges such as dead ends do not impact the navigation strategy, as the system possesses complete knowledge of the environment and can compute the optimal path from the outset. In configurations with multiple possible paths, the Breadth-First Search (BFS) algorithm reliably identifies the shortest route between the start and finish points; a minimal sketch of this search follows. However, the current implementation does not account for dynamic elements, such as moving obstacles. We acknowledge this as a limitation and, as future work, propose integrating real-time visual feedback from the camera to enable dynamic path correction and obstacle avoidance during navigation.
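The sketch below assumes a binary occupancy grid (0 = free cell, 1 = wall); the exact matrix encoding used in the paper may differ.

```python
from collections import deque

# Minimal BFS over the maze's occupancy matrix, returning the shortest
# start-to-goal path as a list of (row, col) cells, or None if no
# route exists.

def bfs_shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

grid = [[0, 0, 1],
        [1, 0, 0],
        [1, 1, 0]]
print(bfs_shortest_path(grid, (0, 0), (2, 2)))
# -> [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
```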
As demonstrated, navigation using only encoders, although it requires more complex analysis and advanced control strategies, can achieve results that are comparable to those obtained with a gyroscope-based approach. In some situations, it may even provide better performance.
This article places a stronger emphasis on the navigation aspect of the robot, in contrast to [12], which focuses more on solving mazes of any shape or structure. Moreover, the proposed approach can also support diagonal movement by rotating 45°, even though the robot is not equipped with omnidirectional wheels.
The robot’s hardware architecture was deliberately designed to be as minimal as possible in order to demonstrate that a complex task such as maze solving can be successfully addressed without relying on expensive components or intricate sensor fusion. Unlike the system described in [16], which employs a variety of sensors (ultrasonic, inertial, and color sensors) and multiple algorithmic strategies to navigate a maze, the proposed solution uses only a single sensor type alongside a Bluetooth communication module. This streamlined design not only reduces cost and energy consumption but also simplifies implementation, making it especially well suited for educational use cases or scenarios where hardware resources are constrained. Despite this simplicity, the robot achieves competitive performance. For example, during Test 1, the robot reached the target using 13 movement commands (the number of yellow cells in Figure 13b), each corresponding to a step of 8.5 cm, for a total path length of 110.5 cm completed in 13.7 s (Table 4). Extrapolating from this result (1000 cm / 110.5 cm × 13.7 s ≈ 124 s), the robot would be capable of traversing a 1000 cm path in approximately 2 min. This indicates a higher operational efficiency than that achieved by the more sensor-rich platform described in [16], which takes longer to cover the same distance. These results support the idea that effective and responsive navigation in structured environments can be achieved even with minimal hardware, provided that the control strategies and perception methods are well optimized.
As highlighted in [17], the gyroscope can introduce errors, particularly due to its sensitivity during forward motion. This issue is also noted in [18], where an attempt is made to calibrate the sensor while the robot remains stationary; however, this method does not always guarantee accurate, long-term calibration. In the present paper, the issue was addressed through the use of control algorithms, which provide precise movement responses. The control approach offers an additional advantage: it allows a predictable execution time for each movement command. This time-based control provides greater consistency and reliability, especially when executing sequential navigation commands where timing and orientation are critical. Specifically for the gyroscope, the system waits until the robot completes a rotation by the specified angle before proceeding.
However, the system has some limitations. Both the encoder-only approach and the gyroscope-based approach exhibit error accumulation over time. In addition, the rotational phase poses a critical challenge for the encoder-only approach, as any unexpected interaction during this stage, such as a minor collision or a surface irregularity, can cause significant drift. Because the encoders provide no direct feedback on the robot’s actual orientation, these deviations remain uncorrected and ultimately affect the robot’s ability to follow precise trajectories. In future work, this issue could be addressed through a hybrid strategy combining offline processing and real-time analysis. This would enable the robot to navigate from one point to another within a building even when electrical power is unavailable, provided it is equipped with additional ultrasonic or infrared sensors to avoid collisions with surrounding objects, particularly where unforeseen obstacles are present and not previously mapped. The real-time component would support continuous trajectory monitoring, enabling the robot to correct errors introduced by the encoder or gyroscope.
In scenarios where the maze cannot be fully captured within a single camera’s field of view, a distributed vision system is a promising solution. Multiple cameras would be deployed at different vantage points, each capturing a portion of the environment, and their local observations would be combined to reconstruct the complete maze layout. The cameras can then operate cooperatively, sharing observations to build a global map used for path planning and navigation. This approach enables robots to handle larger and more complex mazes that exceed the spatial limitations of a single-camera setup. While it introduces new challenges, such as synchronization, calibration, and data fusion across cameras, it also opens valuable avenues for future development and real-world applicability.