1. Introduction
Researchers and many companies have introduced different types of cleaning robot designs over the last two decades [1,2,3,4,5]. A study of the cleaning efficiency of some vacuum robots available on the market showed that a manual vacuum cleaner is still more efficient than vacuum robots [6]. The coverage area issue of vacuum robots was also mentioned in the same article. Areas above the floor, such as skirting boards, are unreachable for vacuum robots with a fixed inlet on the bottom side (iRobot, Xiaomi Mi Robot Vacuum, etc.). A vacuum robot with a flexible vacuum inlet may alleviate both the cleaning efficiency and coverage area problems. In 1997, an autonomous vacuum cleaner, “Koala”, was introduced [7], which has a two-degrees-of-freedom robotic arm with a cleaning head on the second link of the arm.
This paper presents experimental verification of the method developed by the second author and presented in [8], including a stability analysis. The results described here are a continuation of those studies. The authors' goal was to fill the gap between theory and practical implementation based on available low-cost components. Additive manufacturing was used to cut down the cost of manufacturing the robot. Practical verification of the effectiveness of the algorithm is an undeniably important phase of the research process. Proper operation, even though the method was tested on a relatively simple, inexpensive mobile platform equipped with units of low computing power, is an additional advantage of the method presented here. Although the algorithm was not tested on expensive laboratory hardware equipped with high-precision sensors and actuators, it proved to be effective.
Figure 1 shows the cleaning robot, where the inlet pipe of the vacuum system is attached to the end effector of the robotic arm. The robot is equipped with a LIDAR sensor and a web camera, giving the opportunity to develop more complex methods. The touch screen on the robot displays information from the robot. A low-level proportional–integral–derivative (PID) speed controller is used to achieve the desired speeds. A high-level controller [8] enables the robot to follow a virtual robot path while avoiding obstacles.
Creating a virtual model of the robot opens another field in which the algorithm and potential problems in the design of the robot can be verified [9]. FPGA-based controllers also show high potential for robotic arms [10], as do power-saving architectures for the motions [11].
Artificial potential fields were introduced by Khatib [12] in 1986. In this control algorithm, attraction (to the goal) and repulsion (from the obstacle) are negated gradients of the artificial potential functions (APF). Many articles have been published using the navigation function (NF) [12,13,14,15,16,17]. The control of multiple mobile robots based on the kinematic model [18,19] and the dynamic model [20,21] has also been published.
A sliding mode control strategy with an artificial potential field was shown in [22]. An obstacle potential field function that considers the robot's size and the obstacle's size, and accordingly changes the weight of the obstacle potential field function, adaptively makes the robot escape from local minima [23].
The authors of [24] addressed the problem of tracking control of multiple differentially driven mobile platforms (a leader–follower approach is used). The algorithm is distinguished by its simplicity and ease of implementation. The components of the control have an obvious physical interpretation, which facilitates tuning based on observations of the robot's behavior. The linear velocity control depends on the linear velocity of the reference robot (virtual leader) and the position error along the axis perpendicular to the axle of the robot wheels. The angular velocity is a function of the angular velocity of the virtual leader, the orientation error, and the position error along the axle of the robot wheels, transformed by a non-linear function and activated periodically by a square persistently exciting function. The stability proof is based on the decomposition of the error dynamics into position and orientation subsystems that are ISS (input-to-state stable). Simulations confirm effective error reduction for stadium-circuit-shaped trajectories composed of two straight lines and two half circumferences. Collision avoidance was not taken into account.
Collision avoidance with other robots and static obstacles was added in [8]. The position errors were replaced by correction variables, which combine the position errors and the gradients of APFs surrounding obstacles. When robots are outside of the collision avoidance regions of other robots/obstacles (the APF takes the value zero), the analysis given in [24] remains valid. For the other cases, a stability analysis based on a Lyapunov-like function was presented. Numerical verification was performed on a formation of fifteen robots in an environment with five obstacles. The desired trajectory was composed of straight lines and arcs.
The trajectory tracking algorithm proposed in [8] was based on [24]. In its original form, collision avoidance was not included; it was assumed that the initial positions of the robots guarantee that no collisions will occur. The extension proposed in [8] removes this limitation. The method guarantees the stability of the closed-loop system and collision avoidance. A tracking method with similar computational complexity was proposed in [25]. It does not involve a persistent excitation block, and the stability of the system was also proven. Another method that was considered is the VFO (vector field orientation) algorithm [26]. This approach uses a special vector field and an auxiliary orientation variable. Stability was also proven for it, but its implementation on a real robot is more complex. In [27], this approach was extended with a collision avoidance capability. There is also a tracking and collision avoidance method for multiple robots that takes into account the robot's dynamics (mass, inertia), but it is very complex to implement [20].
Section 2 shows the mechanical design and electronics parts of the robot. The control algorithm is presented in Section 3. Section 4 describes the methodology, and the experimental results are presented in Section 5.
2. Mechanical Design and Electronics Parts
Solid Edge 2020 software was used as the modeling tool to design the robot. The software's synchronous technology was one of the main features that allowed the authors to modify the parts and assemblies at any time. The 3D-printing support of this software makes it more user-friendly, and its cloud feature helped the authors work as a team and share the design. The following points were considered while designing the parts:
The designs were saved as STL files, which were later used by the 3D printer. The “Creality Ender 3” 3D printer was used to print the parts. PLA material was used, which shows only minor printing failures even without a closed enclosure of the printer. The printer was set to 200 degrees Celsius for the nozzle temperature and 60 degrees Celsius for the heated bed. The robot consists mainly of three assemblies, as follows:
Robotic arm.
Vacuum system.
Mobile platform.
2.1. Robotic Arm
The robot is equipped with a four-degrees-of-freedom robotic arm. All of its joints are revolute. The total length of the fully extended robotic arm is 44.8 cm, and its width is 14.8 cm.
Table 1 shows the details of the servo motors used in the robotic arm. Ball bearings were used to support and transfer the weight of one part to another: ABEC 3 skateboard bearings with an outer diameter of 22 mm, a bore diameter of 8 mm, and a width of 7 mm.
2.2. Vacuum System
The vacuum system of the robot consists of 3D-printed parts, such as the fan, collector box, cover of the collector box, and fan chamber. Other parts include the filter, flexible pipe, and brushless motor. One end of the flexible tube is connected to the vacuum system and the other end to the end effector of the robotic arm. The collector box has a volume of 378 cubic cm (10 cm × 6 cm × 6.3 cm).
Figure 2 shows a cross-sectional view of the vacuum system. When the vacuum system starts vacuuming the floor, air with dust enters the collector box, where a layer of vertical filter stops the dust while allowing the air to pass through.
Table 2 shows the electronic components used in the vacuum system.
2.3. Mobile Platform
The robot has a four-wheel-drive platform, which gives it enough power to carry the weight of its robotic arm and vacuum system. DC geared motors with encoders are used in the mobile platform. A low-level PID speed controller is implemented on the Arduino Mega. Each wheel has a diameter of 10 cm. The total length of the mobile platform is 23.9 cm, and its width is 24.9 cm including the wheels. A lead-acid battery is used to power the electronics of the mobile platform.
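As an illustration of the low-level speed control loop, a minimal discrete PID controller is sketched below in Python. The gains, sample time, and output limit are illustrative assumptions; the actual controller runs on the Arduino Mega with its own tuning.

```python
class SpeedPID:
    """Discrete PID speed controller (illustrative sketch; gains and
    sample time are assumptions, not the values used on the robot)."""

    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit      # actuator saturation, e.g. max PWM
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired_speed, measured_speed):
        error = desired_speed - measured_speed
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = (self.kp * error
               + self.ki * self.integral
               + self.kd * derivative)
        # Clamp the output to the actuator range.
        return max(-self.out_limit, min(self.out_limit, out))
```

On the robot, `update` would be called once per control period with the encoder-derived wheel speed, and the output would drive the PWM duty cycle of the L298n motor driver.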
The upper body of the mobile platform is made up of three parts, which play a vital role in connecting the lower body of the mobile platform with the vacuum system and the robotic arm. The LIDAR holder provides a platform for the LIDAR sensor, and the touch screen holder holds a 7-inch touch screen connected to the top of the vacuum system. The complete robot is 54.1 cm high from the ground, 54.1 cm long when the robotic arm is fully extended, and 24.9 cm wide across the platform including the tires.
Table 3 shows the components of the electronics part of the mobile platform. The robot has two batteries in total. A 6 V DC battery is used to power the motor drivers and low-level controllers, such as the Arduino Uno and Arduino Mega, while a LiPo battery is used to power the servo motors of the robotic arm, the vacuum system, and the Raspberry Pi computer. The robot has three controllers. The Raspberry Pi acts as the master controller, connected through a serial link to the Arduino Mega, which serves as the first slave controller. The Arduino Mega is also connected to an Arduino Uno using serial communication. As a result, the Arduino Uno serves as the second slave controller, controlling the direction and speed of one motor. The other three motors are controlled by the Arduino Mega, due to the limitations of its external interrupt pins. The L298n motor drivers control the motors' direction and speed; each motor driver is capable of controlling two motors. Two wires from the microcontroller to the motor driver control the direction, and one wire controls the speed of the motor.
“Camera Tracer HD WEB008” is the digital camera used for the vision system. The device captures HD-quality images with a resolution of 1280 × 720 pixels. The lens has a wide horizontal angle of view of 100 degrees and can capture images at 30 frames per second. A “Hokuyo URG-04LX-UG01” is used as the LIDAR sensor.
3. Control Algorithm
The control algorithm for the mobile platform was taken from [8], where it was tested in numerical simulation. To move one step further, the authors decided to test the algorithm on the proposed cleaning robot. With this control algorithm, the cleaning robot follows a virtual robot path while avoiding obstacles using artificial potential functions (APF). A repulsive field is generated to repel the robot from the obstacles.
The kinematics of the robot is given by the following formula:

ẋ = v cos θ,  ẏ = v sin θ,  θ̇ = ω,     (1)

where vector u = [v, ω]ᵀ is the control vector, with v denoting the linear velocity control and ω denoting the angular velocity control of the follower robot. Vector q = [x, y, θ]ᵀ denotes the pose, where x, y are the position coordinates and θ is the orientation of the robot with respect to a global, fixed coordinate frame.
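The kinematics above can be integrated numerically, for instance with a simple Euler step. The sketch below is an illustration only; the step size and velocities are arbitrary values, not parameters of the robot.

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler integration step of the unicycle kinematics:
    x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

For example, driving straight along the x-axis (omega = 0) for 100 steps of 0.01 s at 1 m/s advances the pose by 1 m in x.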
The robot has to follow the virtual robot, which moves with the desired linear velocity v_d and angular velocity ω_d along the planned trajectory. The robot should achieve the same velocities as the virtual robot; x_d, y_d are the position coordinates of the virtual leader, which act as the reference position for the real robot, and the robot's orientation θ converges to the orientation of the virtual leader θ_d.
The APF repels the robot from the obstacles: it rises to infinity near the j-th obstacle's border (j is the index of the obstacle) and decreases to zero at some distance R_j from its center. The APF V_j given by Equation (2) is a function of l_j, the distance between the robot and the j-th obstacle, defined as the Euclidean length l_j = √((x − x_oj)² + (y − y_oj)²), where (x_oj, y_oj) is the center of the j-th obstacle. Scaling the function given by Equation (2) to the range ⟨0, 1⟩ yields the function B_j of Equation (3), which is used later to avoid collisions.
In the further description, the term 'collision area' is used for locations fulfilling the condition l_j < r_j, where r_j is the radius of the j-th obstacle's border. The range r_j ≤ l_j < R_j is called the 'collision avoidance area'.
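Since the exact APF formula used in [8] is not repeated here, the sketch below uses an assumed function with the same qualitative properties stated in the text (zero outside the collision avoidance area and growing without bound at the border of the collision area), together with a scaling of its value into ⟨0, 1⟩. Both formulas are illustrative assumptions, not the ones from the cited paper.

```python
import math

def apf(l, r, R):
    """Repulsive potential over the distance l to the obstacle:
    0 for l >= R, unbounded as l approaches r from above.
    The specific formula is an illustrative assumption; only its
    limit behaviour matches the description in the text."""
    if l >= R:
        return 0.0
    if l <= r:
        return math.inf
    return ((R - l) / (l - r)) ** 2

def scaled_apf(l, r, R):
    """Scales the potential into the range <0, 1): B = V / (1 + V)."""
    V = apf(l, r, R)
    return 1.0 if math.isinf(V) else V / (1.0 + V)
```

The scaled value is 0 outside the collision avoidance area and approaches 1 at the collision area border, which is the behaviour used later in the correction variables.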
The goal of the controller is to drive the robot along the desired trajectory while avoiding obstacles, which brings the following quantities to zero: the position errors x_d − x and y_d − y, and the orientation error θ_d − θ.
Assumption 1. In the steady state, the desired trajectory satisfies √((x_d − x_oj)² + (y_d − y_oj)²) > R_j, where (x_oj, y_oj) is the location of the center of the j-th obstacle.
Assumption 2. If the robot gets into the avoidance region, its desired trajectory is temporarily frozen (v_d = 0, ω_d = 0). If the robot leaves the collision avoidance area, its desired coordinates are immediately updated. As long as the robot remains in the avoidance region, its desired coordinates are periodically updated at certain discrete instants of time. The period of this update process is large in comparison to the main control loop sample time.
Assumption 1 states that the desired path of the virtual robot should be planned in such a way that, in the steady state, the virtual robot remains outside of the collision avoidance regions.
Assumption 2 means that the tracking process is temporarily suspended when the follower robot gets into a collision avoidance region, because collision avoidance has a higher priority. Once the robot is outside of the collision avoidance region, it updates the reference to the new values. In addition, when the robot is in the collision avoidance region, its reference trajectory is periodically updated. This supports leaving unstable equilibrium points (which occur, for example, when one robot is located precisely between the other robots and its goal) if the reference trajectory is exciting enough. In rare cases, the robot may get stuck at a saddle point, but the set of such points is of measure zero and is not considered further.
The error with respect to the coordinate frame fixed to the robot can be calculated as follows:

e_x = cos θ (x_d − x) + sin θ (y_d − y),  e_y = −sin θ (x_d − x) + cos θ (y_d − y),  e_θ = θ_d − θ.     (5)

The error dynamics can be written using the above equations and the non-holonomic constraint ẋ sin θ − ẏ cos θ = 0 as follows:

ė_x = ω e_y − v + v_d cos e_θ,  ė_y = −ω e_x + v_d sin e_θ,  ė_θ = ω_d − ω,     (6)

where v_d and ω_d are the controls of the reference virtual robot.
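The transformation of the global errors into the robot-fixed frame (a rotation of the position error by −θ plus the orientation error) can be sketched as follows; the function and variable names are ours.

```python
import math

def tracking_errors(pose, pose_d):
    """Express the tracking error in the frame fixed to the robot.
    pose = (x, y, theta) of the robot, pose_d = reference pose."""
    x, y, theta = pose
    xd, yd, thetad = pose_d
    ex = math.cos(theta) * (xd - x) + math.sin(theta) * (yd - y)
    ey = -math.sin(theta) * (xd - x) + math.cos(theta) * (yd - y)
    etheta = thetad - theta
    return ex, ey, etheta
```

For example, a leader located 1 m directly ahead of the robot (same heading) gives a purely longitudinal error e_x = 1, e_y = 0, e_θ = 0.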
The position error and collision avoidance terms are combined to calculate the position correction variables given by Equation (7), in which each B_j depends on x and y according to Equation (3), and M is the number of static obstacles in the task space with which the robot avoids collisions. The correction variables can be transformed to the local coordinate frame fixed to the geometric center of the mobile platform (Equation (8)). Equation (8) can be transformed to the form of Equation (9) [8], in which each derivative of the APF is transformed from the global coordinate frame to the local coordinate frame fixed to the robot. Finally, the correction variables expressed with respect to the local coordinate frame are given by Equation (10).
The trajectory tracking algorithm combined with collision avoidance is given by Equation (11), where the constant gains are positive design parameters, while a continuously differentiable function of time plays a vital role in the persistent excitation of the virtual robot's angular velocity.
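The exact form of the control law (11), with its correction variables and persistent excitation block, is given in [8]. Purely as a rough illustration of a tracking law with the same inputs (reference velocities and robot-frame errors), a classic Kanayama-style controller is sketched below. It omits the collision avoidance terms and the persistent excitation, so it is a stand-in, not the authors' controller, and the gains are arbitrary positive constants.

```python
import math

def tracking_control(vd, wd, ex, ey, etheta, k1=1.0, k2=4.0, k3=2.0):
    """Kanayama-style trajectory tracking law (illustrative stand-in
    for Equation (11), without collision avoidance or persistent
    excitation). vd, wd: reference velocities; ex, ey, etheta:
    tracking errors expressed in the robot frame; k1..k3: gains."""
    v = vd * math.cos(etheta) + k1 * ex
    w = wd + k2 * vd * ey + k3 * math.sin(etheta)
    return v, w
```

With zero tracking error the controller simply reproduces the reference velocities, and a positive longitudinal error e_x increases the commanded linear velocity.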
Assumption 3. If the absolute value of the linear control signal is smaller than a threshold value (a positive constant), it is replaced by a new scalar function, for which the condition given by Equation (12) holds.

Substituting (11) into (6), the error dynamics are given by Equation (13). Transforming (13) using (11) and taking into account Assumption 2 (when the robot gets into the collision avoidance area, velocities v_d and ω_d are set to 0), the error dynamics can be expressed in the form of Equation (14).
A stability analysis based on a Lyapunov-like function was given in [8].
The control algorithm generates the linear and angular velocities for the real robot. The equations that convert these control velocities into wheel velocities are as follows:

v_R = v + ωL/2,  v_L = v − ωL/2,     (15)

where v_R is the velocity of both right-side wheels, v_L is the velocity of both left-side wheels, v and ω are the linear and angular velocities, respectively, and L is the distance between the left- and right-side wheels.
Now, the velocities of the individual wheels can be written using (15) as follows:

v_RF = v_RB = v_R,  v_LF = v_LB = v_L,     (16)

where v_RF is the linear speed of the right-hand-side front wheel, v_RB that of the right-hand-side back wheel, v_LF that of the left-hand-side front wheel, and v_LB that of the left-hand-side back wheel.
The wheel velocities are the desired signals for the motors' PID controllers.
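The conversion from the platform velocities to the four wheel speeds can be sketched in a few lines; the track width value below is an illustrative assumption, not the measured dimension of the robot.

```python
def wheel_speeds(v, omega, L=0.2):
    """Convert the platform's linear velocity v and angular velocity
    omega into per-wheel linear speeds. L is the distance between the
    left- and right-side wheels (0.2 m is an illustrative value).
    Front and back wheels on each side share the same speed."""
    v_r = v + omega * L / 2.0   # right side
    v_l = v - omega * L / 2.0   # left side
    return {"right_front": v_r, "right_back": v_r,
            "left_front": v_l, "left_back": v_l}
```

For straight motion (omega = 0) all four wheels receive the same speed; turning in place (v = 0) commands opposite speeds on the two sides.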