Article

Iterative Learning Control Design for a Class of Mobile Robots

1 Doctoral School of Exact and Technical Sciences, University of Zielona Góra, 65-516 Zielona Góra, Poland
2 Faculty of Engineering and Technical Sciences, University of Zielona Góra, 65-516 Zielona Góra, Poland
3 Institute of Control and Computation Engineering, University of Zielona Góra, 65-516 Zielona Góra, Poland
* Author to whom correspondence should be addressed.
Electronics 2025, 14(3), 531; https://doi.org/10.3390/electronics14030531
Submission received: 21 December 2024 / Revised: 23 January 2025 / Accepted: 25 January 2025 / Published: 28 January 2025
(This article belongs to the Section Systems & Control Engineering)

Abstract: The paper presents the design of iterative learning control for a class of mobile robots. This control strategy drives the considered system, which executes the same control task in trials, to the predefined reference over consecutive iterations by gradually improving the control signal. The stated control problem concerns a mobile robot, and hence, its kinematic model is presented. The considered model is nonlinear, as it depends on the robot orientation angle. Thus, a linearization strategy is introduced by dividing the range of possible orientation angles into four quarters and then deriving a linear parameter-varying (LPV) system. As a distinct research topic, the selection of a feasible/optimal number of polytope vertices for each LPV submodel is considered. Next, for the resulting bank of models, the switched iterative control scheme is transformed into closed-loop differential linear repetitive processes. Subsequently, based on the fact that ensuring the so-called stability along the trial is equivalent to the convergence of the original model output to the predefined reference, an appropriate stabilization condition is applied in order to compute the feedback controller gains. The overall effectiveness and performance of the proposed methodology are evaluated through comprehensive simulation examples.

1. Introduction

In recent years, machine learning and artificial intelligence have become highly discussed topics, with significant advancements across computer science, automatic control, and robotics, and their applications are impressive [1]. However, their deployment, particularly online, can still present challenges due to computational complexity. For example, training neural networks requires considerable time and resources, which may not always be feasible for real-time execution. In addition, many edge or executive devices, such as low-cost mobile robots, lack the necessary computational power. This is one of the reasons why these devices have become the focus of our research.
This paper addresses the problem of controlling a mobile robot with two degrees of freedom. The robot is a four-wheeled platform (two parallel axles) with a drive on the rear axle and steerable front wheels. Robot platforms of this type are very popular, and their models can be found in many references in the literature [2,3,4]. Moreover, these models can be successfully applied to other vehicles with similar characteristics, e.g., bicycles. The control of mobile robots has gained considerable attention in recent years due to their wide range of applications in industries like manufacturing, logistics, autonomous vehicles, and service robots. One of the primary challenges in mobile robot control is ensuring precise trajectory tracking, particularly when robots are tasked with performing repetitive operations in uncertain environments where the design of feedforward signals is imperfect. To address these challenges, several different strategies have been proposed. Undoubtedly, model predictive control (MPC) constitutes an attractive solution due to its natural ability to incorporate various constraints, and it can also be applied in changing environments. One possible strategy is developed in [5]. It is based on a non-calibrated switching algorithm that determines the drive or brake adaptive regulation of MPC, and it avoids high-frequency oscillations by taking the braking torque into account. An alternative MPC solution is proposed in [6]: an MPC torque controller for velocity tracking, which compensates for the nonlinearities caused by the forces of gravity, friction, and adhesion. Subsequently, the work [7] tackles an optimal-velocity MPC under a steering constraint, combined with so-called pose correction. Finally, the paper [8] deals with a compensatory lateral MPC technique for path tracking, combining MPC with active disturbance rejection.
In spite of the incontestable appeal of MPC, it involves continuously solving a related quadratic programming problem. Owing to the limited computational resources embedded within a mobile robot, this may be a limiting factor for the application of MPC. Thus, an alternative solution is proposed, based on iterative learning control (ILC) [9,10,11]. In particular, ILC can be applied to mobile robots that perform repetitive tasks (or trials) in such a way as to improve their trial-to-trial performance. Indeed, this technique is particularly useful for systems that perform the same task repeatedly, as it allows for progressive control refinement to minimize tracking errors. ILC was initially developed for robotic manipulator applications [12] (an iterative pick-and-place task), but over time, the field has expanded significantly. Similarly to MPC, it can also deal with possibly time-varying constraints [11]. In contrast to MPC, it does not involve solving a quadratic programming problem. This feature recommends its application to mobile robots.
During each trial, information from previous trials is used to update the control signal, reducing the so-called tracking error (defined as the difference between the current trial output and the reference signal). Eventually, this process leads to the tracking error converging to an equilibrium point (zero in the case of linear deterministic systems). Two main approaches can be applied to ILC design: the two-stage approach [10,13] and the one-stage approach. The two-stage approach assumes that the parameters of the ILC scheme are designed independently, while the one-stage design redefines the ILC scheme within the structure of linear repetitive processes (LRPs) [14], applying stabilization results for that class of systems. In the one-stage ILC design, the system is modeled as a closed-loop differential linear repetitive process (DLRP), where one of the state vectors represents the tracking error of the original system. It has been proven that using controllers that ensure stability along the trial drives the tracking error to zero. The final challenge is how to compute the stabilizing controllers for the ILC scheme. To solve this problem, the linear matrix inequality (LMI) machinery [15] can be utilized. As mentioned above, ILC is particularly appealing for mobile robots performing repetitive tasks, such as following the same path multiple times. It uses information from previous iterations (trials) to sequentially improve performance, making it ideal for applications such as automated delivery robots or warehouse navigation. However, vehicle control presents specific challenges due to the inherent nonlinearities in the dynamics. Traditional ILC designs often assume linear system dynamics or rely on simple linearization techniques. More advanced methods, known as robust approaches, address uncertainties and model disturbances [16,17,18,19].
However, these approaches may not fully capture the complexities of mobile robot behavior, especially when considering orientation-dependent kinematics. Recent advances have tackled this issue by developing linear parameter varying (LPV) models to better approximate nonlinear systems [20].
The scheme of the proposed ILC strategy is depicted in Figure 1. The focus is on developing an ILC scheme that incorporates a state-space model of the robot. Since the vehicle kinematics are nonlinear, particularly due to the orientation angle in the global reference frame, a tailored linearization strategy is applied: the range of orientation angles is divided into regions, and for each region an LPV submodel is developed, allowing for more accurate approximations and a feasible controller design. Having such a model, it is possible to determine the ILC gain matrices $K_1$ and $K_2$ (see Figure 1), which make it possible to obtain a reference-tracking ILC strategy. The paper is organized as follows. Section 2 provides the necessary background on ILC. The mobile robot and its model are described in Section 3. Section 4 provides the LPV-based ILC design. Section 5 presents a set of case studies, which is followed by experimental verification on a test platform (Section 6). Subsequently, Section 7 provides a discussion of the results presented in the paper. Finally, the last section concludes the paper.

2. Iterative Learning Control

As mentioned above, ILC exploits the dependence on the finite-time iteration (or trial) for control purposes. Let the subscript $k$ denote the trial number. This leads to the following state-space model of the considered system:
$$\frac{d}{dt}x_k(t) = \tilde{A}\,x_k(t) + \tilde{B}\,u_k(t), \qquad y_k(t) = \tilde{C}\,x_k(t), \tag{1}$$
where $k > 0$, $0 \le t \le T$ is the time with the finite horizon $T$, and $x_k(t) \in \mathbb{R}^n$, $u_k(t) \in \mathbb{R}^m$, $y_k(t) \in \mathbb{R}^r$ are the state, input, and output vectors, respectively; $\dot{x}_k(t)$ denotes the derivative with respect to the time variable $t$. Define also the reference signal $y_{\mathrm{ref}}(t)$, $0 \le t \le T$, as the goal to be reached at the output as a consequence of the applied control strategy after a finite number of trials. To complete the model, it is necessary to provide the set of boundary conditions, i.e.,
$$x_k(0) = f_k(0), \qquad k > 0, \tag{2}$$
where $f_k(0)$ are bounded functions. The control task can be alternatively defined as one allowing minimization of the tracking error, which is defined as
$$e_k(t) = y_{\mathrm{ref}}(t) - y_k(t), \tag{3}$$
and we define the control signal in an iterative manner, i.e.,
$$u_{k+1}(t) = u_k(t) + \Delta u_{k+1}(t), \tag{4}$$
where $\Delta u_{k+1}(t)$ is the input correction. Note that in (4), it is assumed that the control signal used in the next trial is constructed as its counterpart in the current iteration, improved by a signal generated based on the control quality in the current trial.
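The effect of the trial-to-trial update (4) can be sketched numerically. The following Python snippet is an illustrative stand-in only: a scalar discrete plant and a simple proportional correction replace the paper's robot model and $(K_1, K_2)$ law, but the structure of reusing the previous trial's input and error is the same.

```python
import numpy as np

# Illustrative sketch of the ILC update u_{k+1}(t) = u_k(t) + Δu_{k+1}(t).
# Stand-ins (not from the paper): a scalar discrete plant
# y[i+1] = a*y[i] + u[i] and a proportional correction Δu_{k+1}[i] = γ e_k[i+1].

def run_trial(u, a=0.2):
    """Simulate one trial; y[0] is fixed by the boundary condition."""
    y = np.zeros(len(u) + 1)
    for i, ui in enumerate(u):
        y[i + 1] = a * y[i] + ui
    return y

n_samples, gamma = 50, 0.8
y_ref = np.ones(n_samples + 1)        # reference to be tracked
u = np.zeros(n_samples)               # first-trial input

error_norms = []
for k in range(15):
    y = run_trial(u)
    e = y_ref - y                     # e_k(t) = y_ref(t) - y_k(t)
    error_norms.append(np.linalg.norm(e[1:]))
    u = u + gamma * e[1:]             # u_{k+1} = u_k + Δu_{k+1}

print(error_norms[0], error_norms[-1])
```

Running the loop shows the error norm shrinking trial after trial, which is precisely the learning behavior the update (4) is designed to produce.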
Next, introduce the following signal
$$\eta_{k+1}(t) = \int_0^t \left( x_{k+1}(\tau) - x_k(\tau) \right) d\tau, \tag{5}$$
which defines the trial-to-trial state increment integral and is equivalent to
$$\frac{d}{dt}\eta_{k+1}(t) = x_{k+1}(t) - x_k(t). \tag{6}$$
Combining (1), (4), (5) and (6) yields the following
$$\frac{d}{dt}\eta_{k+1}(t) = \tilde{A}\,\eta_{k+1}(t) + \tilde{B}\int_0^t \Delta u_{k+1}(\tau)\, d\tau. \tag{7}$$
Note, moreover, that
$$e_{k+1}(t) - e_k(t) = -\tilde{C}\left( x_{k+1}(t) - x_k(t) \right) = -\tilde{C}\,\frac{d}{dt}\eta_{k+1}(t), \tag{8}$$
or
$$e_{k+1}(t) = e_k(t) - \tilde{C}\tilde{A}\,\eta_{k+1}(t) - \tilde{C}\tilde{B}\int_0^t \Delta u_{k+1}(\tau)\, d\tau. \tag{9}$$
Hence, (7) and (9) can be gathered to provide the following state-space model
$$\begin{bmatrix} \frac{d}{dt}\eta_{k+1}(t) \\ e_{k+1}(t) \end{bmatrix} = \begin{bmatrix} \tilde{A} & 0 \\ -\tilde{C}\tilde{A} & I \end{bmatrix} \begin{bmatrix} \eta_{k+1}(t) \\ e_k(t) \end{bmatrix} + \begin{bmatrix} \tilde{B} \\ -\tilde{C}\tilde{B} \end{bmatrix} \int_0^t \Delta u_{k+1}(\tau)\, d\tau, \tag{10}$$
with the boundary conditions defined as follows
$$\eta_{k+1}(0) = 0, \quad k > 0, \qquad e_0(t) = y_{\mathrm{ref}}(t). \tag{11}$$
It is straightforward to see that the model (10) takes the form of the specific subclass of 2D systems, i.e., a DLRP [11,14]. There are two independent variables in the model: the time (which in this case is continuous) and the trial number (obviously discrete). The extended state vector contains two factors, $\eta_{k+1}(t)$ and $e_k(t)$, and the input signal is $\int_0^t \Delta u_{k+1}(\tau)\, d\tau$. Let us assume that the control law obeys
$$\Delta u_{k+1}(t) = K_1\,\frac{d}{dt}\eta_{k+1}(t) + K_2\,\frac{d}{dt}e_k(t). \tag{12}$$
Hence, (10) can be rewritten in the following form
$$\begin{bmatrix} \frac{d}{dt}\eta_{k+1}(t) \\ e_{k+1}(t) \end{bmatrix} = \begin{bmatrix} \tilde{A} + \tilde{B}K_1 & \tilde{B}K_2 \\ -\tilde{C}\tilde{A} - \tilde{C}\tilde{B}K_1 & I - \tilde{C}\tilde{B}K_2 \end{bmatrix} \begin{bmatrix} \eta_{k+1}(t) \\ e_k(t) \end{bmatrix}, \tag{13}$$
which is the closed-loop model of the DLRP under consideration.

ILC Gains Design

As aforementioned, the ILC scheme (13) takes the form of a DLRP, and it is straightforward to see that the results developed originally for this subclass of dynamical systems can be applied to the analysis and/or synthesis of the controller. Thus, since the tracking error $e_k(t)$ is part of the extended state vector, ensuring stability along the trial (DLRP stability along the pass) guarantees that the error tends to zero. This, in turn, is equivalent to the output $y_k(t)$ of the original system (1) tending to the prescribed reference. In what follows, the problem of designing the ILC gain matrices $K_1$ and $K_2$ is tackled. Their structure must ensure stability along the trial of the underlying ILC model. Hence, the following lemma can be recalled [14].
Lemma 1.
Stability along the trial holds for (13) if, and only if,
$$\mathcal{C}(s, z_2) = \det\left( sI - \hat{A}_1 - z_2 \hat{A}_2 \right) \neq 0 \quad \text{in } \mathbb{I} \times \bar{\mathbb{U}}, \tag{14}$$
where $\bar{\mathbb{U}} = \{ z_2 \in \mathbb{C} : |z_2| \ge 1 \}$, $\mathbb{I} = \{ s \in \mathbb{C} : \mathrm{Re}(s) \ge 0 \}$, and
$$\hat{A}_1 = \begin{bmatrix} \tilde{A} + \tilde{B}K_1 & \tilde{B}K_2 \\ 0 & 0 \end{bmatrix}, \qquad \hat{A}_2 = \begin{bmatrix} 0 & 0 \\ -\tilde{C}\tilde{A} - \tilde{C}\tilde{B}K_1 & I - \tilde{C}\tilde{B}K_2 \end{bmatrix}. \tag{15}$$
Note that Lemma 1 provides a necessary and sufficient stability condition. However, its practical applicability for stability investigation and controller design is seriously limited, because the lemma is based, in fact, on performing an infinite number of numerical tests. Hence, Lyapunov stability theory can be used to provide a more applicable condition, which leads to the task of solving a set of LMIs [15]. However, it has to be underlined that stating the controller design problem in terms of LMIs may introduce some conservatism; moreover, the resulting conditions are only sufficient. Also, if the second block row of (13) is considered, it can be concluded that it represents a special case of a 1D discrete model. As a consequence, $e_k(t)$, $0 \le t \le T$, tends to zero as $k \to \infty$ when this dynamical model is stable, that is, when all eigenvalues of the block term $I - \tilde{C}\tilde{B}K_2$ lie inside the unit circle of the complex plane. Note that the eigenvalues of $I$ are obviously equal to one; hence, the role of $K_2$ is to move the closed-loop matrix eigenvalues into the interior of the unit circle. Moreover, the distance of the largest-modulus eigenvalue from the unit circle clearly plays an important role in the tracking error decay rate over the subsequent trials. It is straightforward to see that $K_2$-based stabilization is possible if the pair $(I, \tilde{C}\tilde{B})$ is controllable, i.e.,
$$\mathrm{rank}(W) = \dim(y) \quad \forall z \in \mathbb{C}, \qquad \text{where } W = \begin{bmatrix} zI - I & \tilde{C}\tilde{B} \end{bmatrix}, \tag{16}$$
and $\dim(\cdot)$ denotes the length of the underlying vector. Note that this condition means that the full history of the current trial is available for control purposes in the subsequent trial. However, when the above rank condition is not satisfied, the ILC scheme should be redefined in such a way as to apply feedback information from more distant trials or more distant time moments (see, e.g., [21]).
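Since the only eigenvalue of $I$ is $z = 1$, the rank condition (16) reduces in practice to checking that $\tilde{C}\tilde{B}$ has full row rank. A small numerical check might look as follows; the matrices are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of the rank test (16) for the pair (I, C~B~). Because the only
# eigenvalue of I is z = 1, it suffices to evaluate W = [zI - I, C~B~] there,
# which amounts to C~B~ having full row rank. Matrices below are illustrative.

def k2_stabilizable(C_tilde, B_tilde):
    CB = C_tilde @ B_tilde
    r = CB.shape[0]                                   # dim(y)
    W = np.hstack([1.0 * np.eye(r) - np.eye(r), CB])  # W evaluated at z = 1
    return bool(np.linalg.matrix_rank(W) == r)

C = np.eye(2)
B_full = np.array([[1.0, 0.0],
                   [0.5, 1.0]])        # C @ B_full has rank 2 = dim(y)
B_deficient = np.array([[1.0, 2.0],
                        [0.5, 1.0]])   # second column = 2 x first column

print(k2_stabilizable(C, B_full), k2_stabilizable(C, B_deficient))
```

In the rank-deficient case, the test correctly reports that $K_2$ alone cannot move all eigenvalues of $I - \tilde{C}\tilde{B}K_2$ inside the unit circle.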
Let us recall the well-known results (see, e.g., [14]) regarding the control synthesis for DLRPs.
Lemma 2.
Consider the ILC scheme of (13) given as a controlled DLRP. Then, there exist controller matrices $K_1$, $K_2$ such that the considered scheme converges to zero if there exist $Y \succ 0$, $Z \succ 0$, and $N$, $M$ of appropriate dimensions such that the following LMI holds:
$$\begin{bmatrix} Y\tilde{A}^T + \tilde{A}Y + N^T\tilde{B}^T + \tilde{B}N & (\star) & (\star) \\ M^T\tilde{B}^T & -Z & (\star) \\ -\tilde{C}\tilde{A}Y - \tilde{C}\tilde{B}N & Z - \tilde{C}\tilde{B}M & -Z \end{bmatrix} \prec 0. \tag{17}$$
If this condition holds, then the control law matrices K 1 and K 2 are given by
$$K_1 = N Y^{-1}, \qquad K_2 = M Z^{-1}. \tag{18}$$
In (17), the Lyapunov decision matrices $Y$ and $Z$ are the same as those used for the controller computation (18), which can increase the conservatism of the LMI. To prevent this, so-called slack matrices can be introduced, as shown in the following lemma [22].
Lemma 3.
Consider the ILC scheme of (13) given in terms of a DLRP. Then, there exist controller matrices $K_1$, $K_2$ such that the underlying scheme converges to zero if there exist $Y \succ 0$, $Z \succ 0$, slack matrices $G_Y$, $G_Z$, and $N$, $M$ of appropriate dimensions such that the following LMI holds:
$$\begin{bmatrix} G_Y^T\tilde{A}^T + \tilde{A}G_Y + N^T\tilde{B}^T + \tilde{B}N & (\star) & (\star) & (\star) \\ -\tilde{C}\tilde{A}G_Y & -Z & (\star) & (\star) \\ G_Y^T\tilde{A}^T - G_Y + Y & -G_Y^T\tilde{A}^T\tilde{C}^T - N^T\tilde{B}^T\tilde{C}^T & -G_Y - G_Y^T & (\star) \\ M^T\tilde{B}^T & G_Z^T - M^T\tilde{B}^T\tilde{C}^T & 0 & Z - G_Z - G_Z^T \end{bmatrix} \prec 0. \tag{19}$$
If this condition holds, then the control law matrices K 1 and K 2 are given by
$$K_1 = N G_Y^{-1}, \qquad K_2 = M G_Z^{-1}. \tag{20}$$
Finally, note that (according to [22]) the values of the gain matrices $K_1$ and $K_2$ obtained according to (20) guarantee the asymptotic stability of the system (13) fed by (12). This guarantees that the tracking error $e_k(t)$ converges toward zero, which is possible due to the deterministic setting of the system (1). Also, when (19) is considered, it can be noted that $K_1$ is responsible for the original system stability, while $K_2$ governs the gradual decrease of the tracking error. Thus, to overcome disturbances and other undesirable effects, the proposed approach should be suitably extended; this issue will constitute a future research direction.

3. Mobile Robot Platform and Its Model

This study focuses on car-like mobile robots with Ackermann steering geometry, moving at relatively low speed (less than 0.5 m/s). In this scenario, the forces acting on the wheels, and therefore the slip angles, are expected to be negligible, which justifies the choice of a kinematic model. As in [2,3], the four-wheel model was replaced with a bicycle model due to its simplicity (without loss of generality). The considered plant is presented in Figure 2. As can be observed, Ackermann steering geometry forces all wheels to travel in circular paths around a common Instantaneous Center of Rotation (ICR), effectively preventing slippage. This configuration implies that the steering angles of the right front wheel $\delta_r$ and the left front wheel $\delta_l$ are interdependent. Considering this relationship, they can be effectively represented by a single wheel in the bicycle model, which also travels around the ICR and is controlled by the variable $\delta$. In order to calculate the steering angles $\delta_r$ and $\delta_l$ back from $\delta$, the following relationship can be used [23]:
$$\delta_{r,l} = \arctan\left( \frac{L}{\pm D/2 + L/\tan(\delta)} \right), \tag{21}$$
where $D$ [m] is the distance between the two front wheels. In practice, during the experiment described in Section 6, a single steering value $\delta$ is sent to the robot to control a servomotor, and the mechanical linkage then adjusts the angles $\delta_r$ and $\delta_l$ accordingly. This further justifies the use of the bicycle model.
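The relation (21) is easy to implement. In the sketch below, the wheelbase matches the $L = 0.15$ m quoted later in the paper, while the track width $D$ and the assignment of the $\pm$ signs to the right/left wheel are assumptions made for illustration.

```python
import math

# Recovering the individual front-wheel angles from the bicycle-model
# steering angle delta via (21). L matches the 0.15 m wheelbase used in the
# paper; the track width D and the +/- sign convention are assumptions.

def ackermann_angles(delta, L=0.15, D=0.10):
    """Return (delta_r, delta_l) [rad] for a bicycle steering angle delta."""
    if delta == 0.0:
        return 0.0, 0.0                   # straight ahead: both wheels at zero
    delta_r = math.atan(L / (+D / 2 + L / math.tan(delta)))
    delta_l = math.atan(L / (-D / 2 + L / math.tan(delta)))
    return delta_r, delta_l

d_r, d_l = ackermann_angles(0.3)
print(d_r, d_l)
```

For a positive $\delta$, the two returned angles bracket the bicycle angle, reflecting that the inner wheel must turn more sharply than the outer one along the common ICR circle.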
Let us proceed to the kinematic model of the considered mobile robot. For that purpose, let us recall the lateral vehicle dynamics of the form [24]:
$$\frac{d}{dt}\begin{bmatrix} x \\ y \\ \theta \end{bmatrix} = \begin{bmatrix} v\cos(\theta + \beta) \\ v\sin(\theta + \beta) \\ \frac{v\cos\beta}{L}\left( \tan(\delta_F) - \tan(\delta_R) \right) \end{bmatrix}, \tag{22}$$
where $x$, $y$ are the coordinates of the center of gravity (COG) of the vehicle, $\theta$ [rad] is the vehicle heading angle (orientation), $v$ [m/s] is the linear velocity of the rear axle, $\omega$ [rad/s] is the angular velocity with respect to the point $(x, y)$, $\delta_F$, $\delta_R$ are the front and rear wheel steering angles, and $\beta$ stands for the direction of the velocity at the COG relative to the longitudinal axis, simply called the vehicle slip angle. For the purpose of further deliberations, the following assumptions are imposed:
Assumption 1:
The slip angle is equal to zero, i.e., $\beta = 0$.
Assumption 2:
The rear steering angle is equal to zero, i.e., $\delta_R = 0$.
The first assumption stems from the fact that estimating the slip angle $\beta$ would require an additional estimator, which is not present in the current mobile robot setup. Note also that the robot drives on a level surface with a constant rolling-resistance parameter; hence, this assumption is fully acceptable. On the other hand, such a limitation can impair trajectory tracking, stability, and robustness to disturbances: any surface slipperiness or disturbance may degrade the robot performance. This topic is beyond the scope of the current research and will constitute a subject of future work. Indeed, one way to settle this issue is to treat disturbances and other undesirable effects like faults, so that fault-tolerant control approaches can be applied [25,26]. The second assumption is due to the inability of the current setup to perform rear-wheel steering. As a consequence, $\delta_F = \delta$ and $\delta_R = 0$. Under the above assumptions, the nonlinear kinematic model [4] is given as
$$\frac{d}{dt}\begin{bmatrix} x \\ y \\ \theta \end{bmatrix} = \begin{bmatrix} v\cos(\theta) \\ v\sin(\theta) \\ \omega \end{bmatrix}, \tag{23}$$
while $\omega$ is related to the steering angle $\delta$ [rad] and the distance $L = 0.15$ [m] between the rear and front wheel axles:
$$\omega = \frac{v}{L}\tan(\delta). \tag{24}$$
Note that the velocity $v$ and the steering angle $\delta$ are the input signals of the bicycle model, and they enter $\omega$ through a nonlinear relationship. Therefore, for the purpose of designing the controller, $\omega$ is taken as the second input of the system instead of $\delta$. In order to provide the control signal to the actual plant, the value of $\delta$ is recalculated from the transformed Equation (24):
$$\delta = \arctan\left( \frac{\omega L}{v} \right). \tag{25}$$
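The conversion pair (24)-(25) is straightforward to implement; a minimal sketch (with the wheelbase value quoted above) is:

```python
import math

L_WHEELBASE = 0.15  # distance between rear and front wheel axles [m]

def omega_from_delta(v, delta, L=L_WHEELBASE):
    """Eq. (24): angular velocity produced by steering angle delta at speed v."""
    return (v / L) * math.tan(delta)

def delta_from_omega(v, omega, L=L_WHEELBASE):
    """Eq. (25): steering angle reproducing a commanded omega (requires v != 0)."""
    return math.atan(omega * L / v)

# round trip: converting delta -> omega -> delta recovers the original angle
delta = 0.2
omega = omega_from_delta(0.4, delta)
print(delta_from_omega(0.4, omega))
```

Note that (25) is only well defined for $v \neq 0$, which is consistent with the low-speed but forward-moving scenarios considered in the paper.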
Finally, the position coordinates $(x, y)$ of the robot and its orientation $\theta$ are defined as the outputs.

Derivation of LPV Model

Note that since (23) is clearly nonlinear and does not comply with (1), it is necessary to redefine the considered model in order to design the ILC scheme described in Section 2. It is clear that the input matrix can be treated as dependent on the parameter $\theta$; hence, it is purposeful to present (23) in terms of an LPV model, i.e.,
$$\frac{d}{dt}x = A\,x + B(\theta)\,u, \qquad y = C\,x, \tag{26}$$
where the robot orientation is affected by the second input. Let us define the parameter vector $\vartheta = [\vartheta_1\ \ \vartheta_2]^T = [\cos(\theta)\ \ \sin(\theta)]^T$, arriving at the system matrices
$$A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad B(\theta) = \begin{bmatrix} \vartheta_1 & 0 \\ \vartheta_2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{27}$$
Since all states are measured directly, the matrix $C$ takes the identity form.
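Constructing the LPV matrices (27) in code is a one-liner per matrix; the following sketch also sanity-checks the scheduling: for $\theta = 0$, a pure velocity input moves the robot along the x axis only.

```python
import numpy as np

# Building the LPV matrices (27): A contains three pure integrators (all
# zeros), B is scheduled by the parameter vector [cos(theta), sin(theta)],
# and C is the identity since all three states (x, y, theta) are measured.

def lpv_matrices(theta):
    A = np.zeros((3, 3))
    B = np.array([[np.cos(theta), 0.0],
                  [np.sin(theta), 0.0],
                  [0.0,           1.0]])
    C = np.eye(3)
    return A, B, C

# sanity check: for theta = 0, the input u = [v, omega] = [1, 0] produces a
# state derivative along the x axis only
A, B, C = lpv_matrices(0.0)
state_dot = A @ np.zeros(3) + B @ np.array([1.0, 0.0])
print(state_dot)
```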

4. LPV Controller Design

Given the highly nonlinear, state-dependent nature of the state-space matrices, carefully selected controllers have to be found. A popular practice in robust control, where the parameters of interest vary, is to find control gains that cover the entire assumed parameter range. Such designs are necessarily conservative; in other words, they must include the extreme values the plant may reach. An attempt was made to envelope (27) with a simple polytope: a square. This is equivalent to finding a single controller for the set of fixed input matrices $B = \{ B(\theta)\ |\ \theta \in \{\pi/4, 3\pi/4, 5\pi/4, 7\pi/4\} \}$, each adhering to the linear process (1), where all inequalities in (17) must be jointly satisfied. Implementing this feasibility problem in a semidefinite programming solver, e.g., SeDuMi (https://yalmip.github.io/solver/sedumi/, accessed on 20 December 2024), yielded mostly zero-filled (inapplicable) gains $K_1$, $K_2$, i.e.,
$$K_1 = \begin{bmatrix} 0.0000 & 0.0000 & 0.0000 \\ 0.0000 & 0.0000 & 0.0817 \end{bmatrix}, \qquad K_2 = \begin{bmatrix} 0.0000 & 0.0000 & 0.0000 \\ 0.0000 & 0.0000 & 0.4525 \end{bmatrix}.$$
This indicates conflicting relations between the system response and the input: the trigonometric functions in question change sign every time the robot orientation $\theta$ enters a different quarter circle, so a positive velocity increment contributes either positively or negatively toward the position errors along the X-Y axes. For this reason, four separate controllers operating in disjoint circle sectors are proposed, with the angular width of each sector set to $\pi/2$. The selected nodes are shown in Figure 3. The resulting gains for the first quadrant $\theta \in (0, \pi/2)$ were as follows:
$$K_1 = \begin{bmatrix} 0.0809 & 0.0809 & 0.0000 \\ 0.0000 & 0.0000 & 0.1618 \end{bmatrix}, \qquad K_2 = \begin{bmatrix} 0.2133 & 0.2132 & 0.0000 \\ 0.0000 & 0.0000 & 0.4266 \end{bmatrix},$$
while for a different sector of the circle, $\theta \in (\pi/2, \pi)$,
$$K_1 = \begin{bmatrix} 0.0809 & 0.0809 & 0.0000 \\ 0.0000 & 0.0000 & 0.1617 \end{bmatrix}, \qquad K_2 = \begin{bmatrix} 0.2132 & 0.2133 & 0.0000 \\ 0.0000 & 0.0000 & 0.4265 \end{bmatrix}.$$
Despite the matrices being computed independently, their magnitudes are almost identical, and the sign changes agree with the prior assumptions. A closer inspection of their values suggests that the feedback from the signal $\frac{d}{dt}\eta_{k+1}(t)$ defined by (6) plays the role of an overshoot damper, counteracting input increments computed from the error. Eventually, the derived controller took the generalized form of the DLRP control law (12):
$$\Delta u_{k+1}(t) = K_1(\vartheta(t))\,\frac{d}{dt}\eta_{k+1}(t) + K_2(\vartheta(t))\,\frac{d}{dt}e_k(t), \tag{28}$$
with the matrices selected at every time instant $t$ according to the division shown in Figure 3.
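The switching rule in (28) amounts to a simple lookup: map the current orientation to one of the four quarter circles and pick the corresponding gain pair. A sketch follows; the gain values are placeholders, since the actual values come from the LMI design above.

```python
import numpy as np

# Gain scheduling for (28): select the (K1, K2) pair designed for the
# quarter circle that currently contains the orientation theta. The gain
# values here are placeholder zeros; in the paper they come from the LMI design.

def quadrant(theta):
    """Index 0..3 of the quarter circle containing theta (wrapped to [0, 2*pi))."""
    return int(np.mod(theta, 2.0 * np.pi) // (np.pi / 2.0)) % 4

# one (K1, K2) pair per quadrant, each of shape (2, 3) for u = [v, omega]
gains = [(np.zeros((2, 3)), np.zeros((2, 3))) for _ in range(4)]

def select_gains(theta):
    K1, K2 = gains[quadrant(theta)]
    return K1, K2

print(quadrant(0.1), quadrant(np.pi / 2 + 0.1), quadrant(-0.1))
```

Wrapping $\theta$ into $[0, 2\pi)$ before the lookup makes the scheduling robust to orientations accumulated over multiple turns.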

5. Case Studies

Let us start with the implementation details. In particular, the control signal was calculated in an iterative manner with a MATLAB script, which can be summarized as follows:
  1. Based on Equations (19) and (20), calculate $K_1$ and $K_2$.
  2. Gather the reference points.
  3. Set the initial settings ($x_k(0)$ and $u_{k=1}(t)$).
  4. Within one trial, integrate the trajectory traveled by the robot with the ode45 solver. At each step of this function, the control signal is calculated with (4), (28) and fed to the robot model (23).
  5. Calculate the new tracking error $e_{k+1}$.
  6. Repeat steps 4-5 until a satisfactory value of $e_{k+1}$ is obtained.
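The steps above can be sketched in Python as follows. This is a structural stand-in, not the paper's setup: forward-Euler integration replaces ode45, a simple derivative-type correction on the position and heading errors replaces the full $(K_1, K_2)$ law (which requires the LMI-designed gains), and a straight-line reference is used for illustration.

```python
import numpy as np

# Structural sketch of the simulation loop above. Stand-ins (not the paper's
# setup): forward-Euler instead of ode45, a derivative-type correction
# instead of the (K1, K2) law, and a straight-line reference at unit speed.

def robot_step(state, u, dt):
    """One Euler step of the kinematic model (23) with input u = [v, omega]."""
    x, y, th = state
    v, w = u
    return np.array([x + dt * v * np.cos(th),
                     y + dt * v * np.sin(th),
                     th + dt * w])

def run_trial(u_seq, x0, dt):
    traj = [np.array(x0, dtype=float)]
    for u in u_seq:
        traj.append(robot_step(traj[-1], u, dt))
    return np.array(traj)

dt, n_steps, n_trials, gamma = 0.01, 400, 30, 0.5
t = np.arange(n_steps + 1) * dt
ref = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
u = np.zeros((n_steps, 2))            # first-trial input: stand still

err_norms = []
for k in range(n_trials):
    traj = run_trial(u, ref[0], dt)   # steps 4: simulate one trial
    e = ref - traj                    # step 5: per-state tracking error
    err_norms.append(np.linalg.norm(e))
    # step 6: derivative-type correction (v from the x error, omega from theta)
    u[:, 0] += (gamma / dt) * np.diff(e[:, 0])
    u[:, 1] += (gamma / dt) * np.diff(e[:, 2])
```

Even with this simplified correction, the trial-to-trial error norm decays geometrically, mirroring the convergence behavior reported in the case studies.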

5.1. Ellipse Drive

In order to produce non-trivial trajectories $s_{\mathrm{ref}}(t) = (x(t), y(t), \theta(t))$ for the robot to traverse, two distinct shapes were chosen: an ellipse and a Lissajous curve. The parametrized equations of the ellipse are given by
$$x(t) = a\cos(t)\cos(\alpha) - b\sin(t)\sin(\alpha),$$
$$y(t) = a\cos(t)\sin(\alpha) + b\sin(t)\cos(\alpha),$$
for $a = 2$, $b = 1$, with rotation angle $\alpha = 25^{\circ}$. The tangent direction of the curve at a given time instant was taken as the heading reference, $\theta(t) = \mathrm{atan2}(\dot{y}(t), \dot{x}(t))$. Moreover, the simulation time range was $t \in [0, 2\pi]$. During all $N_k$ ILC trials, the initial conditions of the vehicle state were assumed to be aligned with the curves, $x_k(t=0) = s(0)$ for $k \in [1, N_k]$. The control signals in the first iteration were assumed to be constant: $v_{k=1}(t) = \omega_{k=1}(t) = 1$. The results of applying the obtained control laws (12) are shown in Figure 4; note that the initial location of the robot is shown as a square. Figure 5 shows which controllers were used in the final iteration. The two-dimensional evolution of all states of the system along the time-trial space is shown in Figure 6. Finally, in Figure 7, one can see the exponential convergence trend of the error norm $e_k = ||s_{\mathrm{ref}} - s_k||$ over the entire trial set.
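Generating such a reference in code is straightforward; the sketch below uses the analytic derivatives of the ellipse as the atan2 arguments, which is one way to realize the tangent-direction heading described above.

```python
import numpy as np

# Rotated-ellipse reference with the parameters used above (a = 2, b = 1,
# alpha = 25 degrees); the heading reference theta(t) is the tangent
# direction of the curve, computed from the analytic derivatives.

a, b = 2.0, 1.0
alpha = np.deg2rad(25.0)
t = np.linspace(0.0, 2.0 * np.pi, 500)

x = a * np.cos(t) * np.cos(alpha) - b * np.sin(t) * np.sin(alpha)
y = a * np.cos(t) * np.sin(alpha) + b * np.sin(t) * np.cos(alpha)

xd = -a * np.sin(t) * np.cos(alpha) - b * np.cos(t) * np.sin(alpha)
yd = -a * np.sin(t) * np.sin(alpha) + b * np.cos(t) * np.cos(alpha)
theta = np.arctan2(yd, xd)

s_ref = np.stack([x, y, theta], axis=1)   # (x, y, theta) reference samples
print(s_ref.shape)
```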

5.2. Lissajous Curve Drive

Parametrized Lissajous curve equations are as follows:
$$x(t) = a\sin(at + \delta), \qquad y(t) = b\sin(bt),$$
with $a = 1$, $b = 2$, $\delta = \pi/2$. The other assumptions remained the same as in the ellipse case, except for the angular velocity, set to $\omega_{k=1}(t) = 1$ for $t \in [0, \pi)$ and $\omega_{k=1}(t) = -1$ for $t \in [\pi, 2\pi]$ for clarity of presentation. The resulting trajectories are shown in Figure 8. Compared to the ellipse case, more complex switching behavior can be observed in Figure 9. The respective 2D evolution is shown in Figure 10. From Figure 11, one can deduce a more challenging characteristic of the curve, as the error oscillates around relatively smaller values.

6. Experimental Evaluation

6.1. The Experimental Platform

As the experimental platform, the Waveshare 17607 JetRacer (https://www.waveshare.com/jetracer-ai-kit.htm, accessed on 20 December 2024) was selected. It is a model of a rear-wheel-drive car: the front steering axle (Ackermann steering) controls the direction of travel, and the drive is implemented by two servos placed on the rear axle. A Raspberry Pi 4 minicomputer was used as the platform control unit. In order to determine the location and orientation of the robot, an ArUco marker was placed on it; see Figure 12. A camera with a wide-angle fisheye lens was placed above the workspace. In order to minimize errors resulting from the lens, the camera was calibrated before starting the experiments with the use of calibration patterns (https://ch.mathworks.com/help/vision/ug/calibration-patterns.html, accessed on 20 December 2024). Thanks to the software for handling ArUco markers available in the OpenCV library, it is possible to undistort the camera image and then localize the robot relatively accurately in real time. Note also that the robot control is handled through a Python 3.13.0 script: every 60 milliseconds, the current control signal (values of $\delta(t)$ and $v(t)$) is read, and then the servo position and the speed of the rear wheels are set using built-in functions.

6.2. Experimental Results

Due to the limited space and the robot's relatively large turning radius, the physical experiment was performed with the ellipse and a U-shaped reference path. The parametric equations of the U-shaped path are given by
$$x(t) = 0.5\left( 4(t - 0.5) \right)^2 + 2,$$
$$y(t) = \frac{2}{1 + \exp\left( -10(t - 0.5) \right)}.$$
To carry out the experiment, the following steps were taken:
  • The experimental setup was prepared (camera placement and calibration), and the robot was calibrated (characteristics of the steering angle and wheels speed were examined).
  • A reference trajectory was planned.
  • The proposed ILC algorithm was used offline (based on simulation, as in Section 5) to calculate the robot control signals.
  • The control signals were uploaded to the robot, and the experiment was conducted.
The outcome of the final pass (repetition) can be seen in the video at the links:
A projection of the recorded positions from the image coordinate frame to the global one, together with a comparison with the reference path, can be seen in Figure 13 and Figure 14 for the U-shaped and ellipse paths, respectively. It can be seen that the robot follows the set trajectory quite well.

7. Discussion

In (27), one can immediately notice a characteristic typical of mobile robotics: the global position is described by three pure integrator poles ($\lambda(A) = [0, 0, 0]$). Such a low-dimensional description is still useful to showcase the potential of the method, as the moderate velocity range does not excite significant dynamic nonlinearities. Judging by the convergence shapes given in Figure 4 and Figure 8, the last iteration is within a close distance of the reference values. For such trajectories, the angular and linear velocities are in a complementary relation to each other. Finally, note that the jumps visible in the error convergence plots (Figure 7 and Figure 11) could be caused by the switching of the operating-point region.

8. Conclusions

This paper presents the design and implementation of a one-stage ILC scheme for a class of mobile robots. Since the original model of the robot is nonlinear, it is transformed into an equivalent LPV model containing linear submodels. Then, for each submodel, the ILC design is defined, and an appropriate LMI is solved in order to provide the controller gain matrices, which are then applied in the control procedure. The proposed approach is attractive thanks to its operation in the global reference frame, using measured states directly, without the need for local-frame kinematics-based control. Moreover, the experiments show the effectiveness of the proposed algorithm for low-speed applications. It should also be noted that these preliminary results open interesting research horizons, for instance, a finer division of the input nonlinearity or an extension to a velocity-dependent steering-angle relation. A natural development would also be a comparison with the behavior of the full four-wheel model. A major milestone in future research will involve studying how skidding affects the learning process at higher driving speeds, as well as strategies for preventing this phenomenon, which has implications for performance, wear, and safety. In addition, unexpected changes in the friction coefficient of the surface (e.g., a wet surface) will be very important. Since these changes can occur in an unpredictable pattern between iterations (for instance, each time the mobile robot can skid on a different section of the path), future work should focus on extending the system model to include forces and on incorporating robust control strategies with feedback from the current trial. As the study has shown, the linearization of highly nonlinear systems has to be performed carefully: results from numerical procedures may not meet the requirements, and refinements, such as the one developed and benchmarked here, might be necessary.
Finally, it should be noted that the current setup of the mobile robot is very simple and does not include features such as obstacle detection. This limits the ability to incorporate such knowledge into the trajectory-planning procedure. Thus, enhancing the mobile robot with this capability is one of the future research and development directions.

Author Contributions

Conceptualization, M.W.; Formal analysis, B.S., D.Z. and P.B.; Experimental results, K.W., D.Z. and P.B.; Writing—original draft, P.B., M.W., D.Z. and B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Centre in Poland, grant number 2020/37/B/ST7/03280.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ILC	iterative learning control
DLRP	differential linear repetitive process
LMI	linear matrix inequality
LPV	linear parameter-varying

Figure 1. A scheme of the proposed ILC for mobile robots.
Figure 2. Kinematic scheme of the system.
Figure 3. Regional divisions of system nonlinearity.
Figure 4. Ellipse trajectory following results.
Figure 5. Controller switching (for ellipse trajectory).
Figure 6. Error convergence surfaces (for ellipse trajectory). For readability, the color temperature corresponds to z-axis value in 3D plots.
Figure 7. Algorithm convergence (for ellipse trajectory).
Figure 8. Lissajous curve trajectory following results.
Figure 9. Controller switching (for Lissajous curve trajectory).
Figure 10. Error convergence surfaces (for Lissajous curve trajectory). For readability, the color temperature corresponds to z-axis value in 3D plots.
Figure 11. Algorithm convergence (for Lissajous curve trajectory).
Figure 12. The Jetracer robot with AruCo marker used in experiments.
Figure 13. Experimental evaluation—path traveled by the robot (blue line) with a U-shaped reference path.
Figure 14. Experimental evaluation—path traveled by the robot (blue line) with ellipse-shaped reference path.

Share and Cite

MDPI and ACS Style

Zaborniak, D.; Balik, P.; Woźniak, K.; Sulikowski, B.; Witczak, M. Iterative Learning Control Design for a Class of Mobile Robots. Electronics 2025, 14, 531. https://doi.org/10.3390/electronics14030531

